Character Encoding: Unicode, ASCII, UTF-8, UTF-16

June 15, 2016 by Linux Guru

Filed under Localization

Last modified June 15, 2016


Unicode vs ASCII: ASCII and Unicode are two character encoding standards. Basically, they are standards for representing different characters in binary so that text can be written, stored, transmitted, and read in digital media. The main difference between the two is in the way they encode characters and in the number of bits they use for each. ASCII originally used seven bits to encode each character. This was later increased to eight with Extended ASCII to address the apparent inadequacy of the original. In contrast, Unicode separates the character set from its encoding: the same code points can be stored in 32-bit, 16-bit, or 8-bit encoding forms (UTF-32, UTF-16, and UTF-8). Using more bits lets you represent more characters at the expense of larger files, while fewer bits give you a smaller repertoire but save a lot of space.

1. ASCII uses a fixed 7-bit (later 8-bit) encoding, while Unicode offers variable-width encodings.
2. Unicode's extensions are standardized, while the many 8-bit "Extended ASCII" variants are not.
3. Unicode covers most of the world's written languages, while ASCII covers only basic Latin.
4. ASCII has an exact equivalent within Unicode: the first 128 Unicode code points match ASCII one to one (see the sketch after this list).
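
To make the relationship concrete, here is a minimal Python 3 sketch (standard library only, sample strings are arbitrary) contrasting the two: ASCII can only encode code points 0-127, while Unicode's first 128 code points match ASCII exactly.

    # ASCII covers only code points 0-127; Unicode assigns code
    # points to characters from most of the world's scripts.
    text_ascii = "Hello"    # every character fits in 7-bit ASCII
    text_unicode = "Héllo"  # 'é' (U+00E9) is outside ASCII's range

    print(text_ascii.encode("ascii"))  # b'Hello'

    # Encoding the second string as ASCII fails, because 'é' has
    # no ASCII code.
    try:
        text_unicode.encode("ascii")
    except UnicodeEncodeError as err:
        print(err)  # 'ascii' codec can't encode character '\xe9' ...

    # Point 4 above: the first 128 Unicode code points are ASCII.
    print(ord("A"))  # 65 in both ASCII and Unicode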

Another major advantage of Unicode is its sheer capacity: the code space holds 1,114,112 code points (U+0000 through U+10FFFF). Because of this, Unicode currently covers most written languages and still has room for even more. This includes typical left-to-right scripts like English and right-to-left scripts like Arabic. Chinese, Japanese, and many other scripts are also represented within Unicode. So Unicode won't be replaced anytime soon.
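
As a quick illustration (again a Python 3 sketch, standard library only, with arbitrarily chosen sample characters), here are the code points for characters drawn from several of the scripts just mentioned:

    # Characters from left-to-right, right-to-left, and CJK scripts
    # all have their own Unicode code points.
    samples = [
        ("Latin (English)", "A"),
        ("Arabic", "ا"),    # ARABIC LETTER ALEF, U+0627
        ("Chinese", "中"),  # CJK ideograph 'middle', U+4E2D
        ("Japanese", "あ"), # HIRAGANA LETTER A, U+3042
    ]

    for script, ch in samples:
        print(f"{script:16} {ch}  U+{ord(ch):04X}")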

UTF-8 vs UTF-16: UTF stands for Unicode Transformation Format. It is a family of standards for encoding the Unicode character set into its equivalent binary values. UTF was developed so that users would have a standardized means of encoding characters in the minimum amount of space. UTF-8 and UTF-16 are only two of the established encoding standards. They differ in how many bytes they use to encode each character. Since both are variable-width encodings, they can use up to four bytes to encode a character, but at the minimum, UTF-8 uses only 1 byte (8 bits) while UTF-16 uses 2 bytes (16 bits). This has a huge impact on the size of the encoded files: when only ASCII characters are used, a UTF-16 encoded file is roughly twice as big as the same file encoded with UTF-8. The sketch below demonstrates this.
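
A minimal Python 3 sketch of the size difference (the sample text and repetition factor are arbitrary; note that Python's plain 'utf-16' codec prepends a 2-byte byte-order mark):

    text = "The quick brown fox" * 100  # ASCII-only sample, 1900 chars

    utf8_size = len(text.encode("utf-8"))
    utf16_size = len(text.encode("utf-16"))  # includes a 2-byte BOM

    print(utf8_size)               # 1900 bytes: one byte per character
    print(utf16_size)              # 3802 bytes: two per character + BOM
    print(utf16_size / utf8_size)  # roughly 2.0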

1. UTF-8 and UTF-16 are both used for encoding Unicode characters.
2. UTF-8 uses a minimum of one byte per character, while UTF-16 uses a minimum of two.
3. A UTF-8 encoded file tends to be smaller than a UTF-16 encoded file, at least for ASCII-heavy text.
4. UTF-8 is backward-compatible with ASCII, while UTF-16 is not (see the sketch after this list).
5. UTF-8 is byte-oriented, while UTF-16 depends on byte order and usually carries a byte-order mark.
6. UTF-8 recovers from errors better than UTF-16, because its byte sequences are self-synchronizing: a decoder can find the next character boundary after a corrupted byte.
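
Here is a short Python 3 sketch of points 4 and 5 above (standard library only): UTF-8 leaves ASCII bytes unchanged, while UTF-16 depends on byte order and typically carries a byte-order mark (BOM).

    ascii_bytes = "hello".encode("ascii")

    # Point 4: the ASCII bytes are already valid UTF-8, unchanged.
    assert ascii_bytes == "hello".encode("utf-8")

    # Point 5: UTF-16 has two byte orders; the same text produces
    # different byte sequences in little- and big-endian encodings.
    print("hi".encode("utf-16-le"))  # b'h\x00i\x00'
    print("hi".encode("utf-16-be"))  # b'\x00h\x00i'

    # The plain 'utf-16' codec prepends a BOM so decoders can tell
    # which byte order was used (little-endian on most platforms).
    print("hi".encode("utf-16"))     # b'\xff\xfeh\x00i\x00'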
