


About Unicode, ASCII, and ANSI encoding


What is the difference between the UTF, ASCII, and ANSI encoding formats?

Answer 1)

The Basics
Letters are represented in a computer by numeric codes. Pretty much everybody agrees that, when the computer sees a code of 100 (decimal), it represents a lowercase "d". We don't all agree on what 250 represents, and therein lies the rub.
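You can see this mapping directly in most languages; here is a quick sketch in Python, using the built-in ord() and chr() functions:

    print(ord("d"))   # 100 -- the numeric code for lowercase "d"
    print(chr(100))   # d   -- the character assigned to code 100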

ASCII vs ANSI
We commonly refer to character encoding as a letter's "ASCII value," when we really mean "ANSI value." A lot of the time that's sufficient, but in fact the ASCII standard is pretty much obsolete.

ASCII (American Standard Code for Information Interchange) is a 7-bit standard that has been around since the early 1960s (its current incarnation dates from 1968). It defines 128 different characters, which is more than enough for English: upper- and lowercase letters, numerals, punctuation, and control codes (remember control-C?) such as tab, carriage return, and backspace.

ASCII and ANSI are pretty good as long as you write in a Western European language. "ANSI" here refers to the 8-bit extensions of ASCII, such as the Windows-1252 code page, that use the eighth bit to add another 128 characters. Even so, these mappings are extremely limited: they can code (i.e. assign a number to) at most 256 characters, so there is no room for the glyphs of most other languages.
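A short Python sketch of that limit, using Python's cp1252 codec (Windows-1252, the usual Western "ANSI" code page):

    text = "café"
    print(text.encode("cp1252"))   # b'caf\xe9' -- one byte per character
    try:
        text.encode("ascii")       # fails: 'é' is outside ASCII's 128 codes
    except UnicodeEncodeError as err:
        print(err)
    # Cyrillic has no slot among Windows-1252's 256 codes:
    print("Я".encode("cp1252", errors="replace"))   # b'?'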

Unicode 
Unicode fixes the limitations of ASCII and ANSI by providing enough space for over a million different symbols. As in the above two systems, each character is given a number: the Russian letter Я is 042F, and the Korean won symbol ₩ is 20A9. (Note that Unicode numbers are written in hexadecimal, counting by 16s rather than 10s; this is not a problem, as users rarely need to know the numbers anyway.) So, although not yet totally comprehensive, Unicode covers most of the world's writing systems. Most importantly, the mapping is consistent: any user, anywhere, on any computer has the same encoding as everyone else, no matter what font is being used.
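Those code points are easy to check in Python, where ord() returns a character's Unicode number:

    print(hex(ord("Я")))       # 0x42f  -- Cyrillic capital letter Ya, U+042F
    print(hex(ord("₩")))       # 0x20a9 -- won sign, U+20A9
    print("\u042F", "\u20A9")  # Я ₩    -- the same characters, by code point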

So Unicode is a map, a chart of (what will one day be) all of the characters, letters, symbols, punctuation marks, etc. necessary for writing all of the world’s languages past and present.

What is the difference between UTF-8 and UTF-16?
UTF-8 stores each Unicode code point in a variable number of bytes: depending on the code range, a character takes from 1 to 4 bytes (the original design allowed up to 6). Because its code unit is 8 bits (1 byte), it is called "UTF-8". UTF-8 is well suited to the Internet, networks, and applications that must work over slow connections.
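A sketch of that variable length in Python (the musical symbol 𝄞 is just an example of a code point beyond U+FFFF):

    for ch in ("d", "é", "Я", "₩", "𝄞"):
        data = ch.encode("utf-8")
        print(f"U+{ord(ch):04X} -> {len(data)} byte(s): {data}")
    # U+0064 -> 1 byte(s): b'd'
    # U+00E9 -> 2 byte(s): b'\xc3\xa9'
    # U+042F -> 2 byte(s): b'\xd0\xaf'
    # U+20A9 -> 3 byte(s): b'\xe2\x82\xa9'
    # U+1D11E -> 4 byte(s): b'\xf0\x9d\x84\x9e'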

UTF-16 is the Unicode (or UCS) Transformation Format, 16-bit encoding form. It serializes a Unicode scalar value (code point) as a sequence of one or two 16-bit code units (2 or 4 bytes), in either big-endian or little-endian byte order; code points beyond U+FFFF are written as a pair of units called a surrogate pair. Because its code unit is 16 bits (2 bytes), it is called "UTF-16". It is widely used as an in-memory string format, for example by Windows and Java.
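Both byte orders, and a surrogate pair, can be seen in Python:

    print("Я".encode("utf-16-be").hex())   # 042f     -- big-endian byte order
    print("Я".encode("utf-16-le").hex())   # 2f04     -- little-endian, bytes swapped
    print("𝄞".encode("utf-16-be").hex())   # d834dd1e -- surrogate pair for U+1D11E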
