Color space encoding

If you're doing graphics work on a computer, then once you've chosen a color space you're faced with the problem of encoding your colour information in a format the machine can actually use.

The average human is said to be able to see and appreciate the difference between millions of colours. This varies, of course, by individual, by viewing conditions, and by the kinds of colours being viewed, but for the sake of convenience, let's say millions. When computer programmers had to confront the problem, they said (for the sake of an even greater convenience), "How about sixteen million?" That happens to be a very easy number to deal with on a computer.

Most current computers use 32-bit processors. This means they can easily deal with numbers up to about four billion, representing each individual number as a pattern of zeros and ones.
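If you'd like to see what those patterns look like, here's a quick sketch in Python (purely illustrative, not part of any particular graphics system):

 # Print a few values alongside their 32-bit binary representations.
 for n in (0, 5, 255, 4_294_967_295):
     print(f"{n:>10}  ->  {n:032b}")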

Computers usually use television-style monitors as displays. Because these monitors are self-illuminated, they use _additive_ colour to generate the spectrum. The basic additive colours are red, green and blue. Combinations of these can form almost any colour that the human eye can see. Here's a simple mix...

 red   green   blue
   \    /   \   /
   yellow   cyan
       \    /
       white

Mixing red and green gives yellow. Green and blue makes cyan. If you were to add cyan and yellow, they'd come out white. This is the opposite of what happens when you mix paints, or printer's ink. That's a _subtractive_ colour system and you start with cyan, yellow and magenta instead.
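If you want to see the arithmetic behind that diagram, here's a minimal sketch in Python (the `mix` helper is just for illustration):

 # Additively mix RGB colours (each channel 0-255), clamping at 255.
 def mix(*colours):
     return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

 red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)

 yellow = mix(red, green)     # (255, 255, 0)
 cyan   = mix(green, blue)    # (0, 255, 255)
 white  = mix(yellow, cyan)   # (255, 255, 255) -- channels clamp at full brightness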

So, by combining red, green and blue, you can make just about any colour. Any colour displayed by your computer screen is therefore broken down into its red-green-blue, or RGB, components. The computer has 32 bits to work with. If we assign eight (another power of 2!) bits to each colour, we can fit them comfortably into this space. Like so:

 1       8       16      24      32  bits
 | red   | green | blue  | ?     |

That gives us 24 bits of colour resolution. 2 to the 24th power is... 16,777,216, or roughly 16.7 million. "Wait a minute!", you say, "what about those leftover eight bits? Why not use them?" In the beginning it was done as a tradeoff between accuracy and convenience. Digital-to-analog converters got very expensive the more bits of accuracy you added to them. (They still do, but it's less pronounced.) This meant it was important to choose an accuracy that was good enough, but not so good that it was very expensive. Fortunately, eight bits fell right in the sweet spot. It also made the design of hardware to display these coloured pixels relatively easy and straightforward.
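In code, packing three eight-bit channels into one of those 32-bit words is just shifting and masking. Here's a sketch in Python (byte order varies between real systems -- ARGB, RGBA and BGRA all exist -- so this layout, with red in the high byte of the low 24 bits, is just one choice):

 def pack_rgb(r, g, b):
     # Pack three 8-bit channels into a single integer.
     return (r << 16) | (g << 8) | b

 def unpack_rgb(pixel):
     # Recover the three 8-bit channels from a packed pixel.
     return (pixel >> 16) & 0xFF, (pixel >> 8) & 0xFF, pixel & 0xFF

 orange = pack_rgb(255, 165, 0)
 print(hex(orange))            # 0xffa500
 print(unpack_rgb(orange))     # (255, 165, 0)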

Some people did try other schemes, such as sixteen-bit colour, where one channel (usually green, since the eye is most sensitive to it) got an extra bit -- 3 doesn't divide evenly into sixteen. Early PC graphics adapters called this "HiColour" mode. Other formats used ten or even sixteen bits per channel. These tended to be large and clumsy to work with, so they're not usually seen.
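The most common sixteen-bit layout, often called RGB565, gives green that extra bit. A quick sketch in Python:

 def pack_565(r, g, b):
     # Squeeze 8-bit channels into 16 bits: 5 bits red, 6 green, 5 blue.
     return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

 print(hex(pack_565(255, 255, 255)))   # 0xffff -- white fills all 16 bits
 print(hex(pack_565(255, 0, 0)))       # 0xf800 -- red sits in the top 5 bits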

So, for a while, those eight extra bits were wasted, casualties of convenience. However, later on, a couple of smart guys were looking things over and realized that the last eight bits could be used to represent "coverage", also called "alpha". This controls the opacity of the pixel. Typically, alpha ranges from zero to 255, with zero meaning fully transparent and 255 meaning fully opaque. It doesn't mean much when you're just displaying the picture, but when you composite it with another image, it makes all the difference -- see the sketch below. Rest assured, those eight bits didn't go to waste in the end.
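Here's a minimal sketch, in Python, of the standard "over" operation (after Porter and Duff) that compositing uses: blend the source pixel with the destination in proportion to alpha.

 def over(src, dst):
     # Composite an RGBA source pixel over an opaque RGB destination.
     sr, sg, sb, sa = src
     dr, dg, db = dst
     a = sa / 255.0
     return (round(sr * a + dr * (1 - a)),
             round(sg * a + dg * (1 - a)),
             round(sb * a + db * (1 - a)))

 # Half-transparent red over a white background comes out pink.
 print(over((255, 0, 0, 128), (255, 255, 255)))   # (255, 127, 127)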

