Video Color Resolution
This page contains a variety of topics, mostly related to analog video. In the U.S., digital video took over TV and most other video applications before this web page was completed, so some of the subtopics below are just outlines of the intended material.
Color (chrominance) resolution represents the ability to reproduce fine multicolored picture detail, or the ability to reproduce a sharp transition from one color to another.
To preserve all of the color detail in a picture, say 640 pixels wide by 480 pixels high, we would need 640 x 480 pixels for red content, 640 x 480 pixels for green content, and 640 x 480 pixels for blue content. This is three times the amount of data that a comparable monochrome (black and white) picture would require.
In practice, television and entertainment video provide much lower color resolution. The most common standard for digital video provides color resolution of half the luminance resolution both horizontally and vertically, namely one set of color information for each block of four pixels. The purpose is to reduce the amount of data that needs to be transmitted and/or stored.
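To put numbers on the savings, here is a minimal Python sketch; the 640 x 480 frame size comes from the paragraph above, and one byte per sample is my simplifying assumption:

    # Rough storage comparison for one 640 x 480 frame, one byte per sample.
    WIDTH, HEIGHT = 640, 480

    # Full color resolution (4:4:4 or RGB): three full-size planes.
    full_color = 3 * WIDTH * HEIGHT                                 # 921,600 bytes

    # Half the color resolution both ways: one full luminance plane plus
    # two color planes at half size horizontally and vertically.
    subsampled = WIDTH * HEIGHT + 2 * (WIDTH // 2) * (HEIGHT // 2)  # 460,800 bytes

    print(subsampled / full_color)   # 0.5 -- half the data of full color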
Using just the data (pixel content) for luminance, we get a black and white picture. The human eye is much more sensitive to fine light/dark details compared with fine color changes. Thus video systems get away with not preserving as much color resolution.
(We will use the terms color, chrominance, and chroma interchangeably although this is not absolutely correct usage.)
The human eye is more sensitive to green than to red or blue. This does not mean that the green data carries more picture detail. Instead, a video signal with full detail, called the luminance signal, is created, and separate video signals with lesser detail carry the color information. The luminance data by itself creates a black and white picture.
Several standards have been set for color resolution in digital video of which just a few are commonly used. The notation goes as follows: X:Y:Z. X is the number of luminance pixels in a row within the reference block in question. Y is the number of color pixels taken from an odd row of pixels. Z is the number of color pixels taken from an even row of pixels.
For DVD's and U.S. HDTV the standard is 4:2:0. For every four pixels in a row, there are two pixels of color information taken from odd scan lines and no color information taken from even scan lines. The result is that every 2x2 block of pixels on the screen shares the same coloration or hue. (In a rough sense, red, pink, and blood red are the same hue while red and yellow are different hues.) If full color resolution were to be maintained, the notation would be 4:4:4. Other standards in use are 4:1:1 and 4:2:2. During video processing, additional color pixels need to be interpolated or synthesized to yield a 4:4:4 video signal going into the LCD or other display device.
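As a rough illustration of that last step, this Python sketch (using numpy) fills out a 4:2:0 color plane so that every pixel has a color value; simple pixel replication is my simplification here, as real players interpolate more smoothly:

    import numpy as np

    def chroma_to_444(chroma_plane):
        # Replicate each color sample over its 2x2 block of pixels; a real
        # player would interpolate rather than simply repeat the values.
        return np.repeat(np.repeat(chroma_plane, 2, axis=0), 2, axis=1)

    cb = np.arange(6).reshape(2, 3)    # a tiny 2 x 3 color plane
    print(chroma_to_444(cb))           # 4 x 6: one value per 2x2 pixel block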
Frequently Asked Questions
What is the color resolution of NTSC, HDTV, S-video, etc?
How did the color resolution standards evolve?
What are some of the weak links in color resolution?
How has color resolution degraded further due to skimping?
What are some of the easily seen results of limited color resolution?
How can I find out or judge the color resolution?
How does the human eye resolve color detail?
Color Resolution Outline and Links
How do they get away with this?
Other reasons why the failure to resolve fine horizontal detail in color is not very obtrusive:
Fine repetitive color changes are interpreted by the human eye as a solid patch of a third color.
In the vertical direction in analog video, the color resolution is much greater, sometimes equal to the luminance resolution and usually to within two pixels or scan lines, which is less than one half of one percent of the screen height.
Lesser Color Resolution Means
o Closely spaced details are discolored
o Color transitions are less crisp
How do you find out?
o Experts don't know
o Difficult to judge by sight
o Difficult to measure or quantify
Why resolution so low?
o Not enough bandwidth
o Skimping
Why so hard to see?
o Eye not so sensitive
o Few good test patterns
What is the color resolution? (typical horizontal resolutions)
Where does loss occur?
Color video quickly evolved into a luminance signal with full resolution that produced a black and white picture, together with chrominance signals that added color but with much less resolution.
o To maintain compatibility with black and white video
o To fit within the allotted channel space for broadcasting or to fit within the capacity of the storage media provided.
o The human eye is much less sensitive to incorrect coloration or lack of color of fine picture details compared with loss of the light/dark (luminance) fine details themselves.
It is common for TV sets, VCR's, and other video equipment to have even less color resolution than the standards, both analog and digital, provide for.
o general
o Simplified color circuitry
o Delay elements
o Japanese laser disk manufacturing
o Cable TV compression
o Downrezzing
What Is Color Resolution All About?
Color resolution refers to the number of color changes that can be reproduced in a given distance span.
Color resolution affects how crisp a boundary between areas of different colors can be.
Color resolution is measured in "lines of resolution" over a given distance span, just as regular (luminance) resolution is, where each "line of resolution" equals one color change.
Changes between a light and a dark shade of the same color are luminance changes and are not affected by limitations in the color resolution.
As objects become narrower and narrower so as to push the limit of color resolution, the loss of color as seen on the screen is gradual and often subtle.
One of the reasons HDTV looks much better than standard TV is because HDTV can have much greater color resolution.
Horizontal Color Resolution for Typical Video Applications
Note: Correctly, "lines of resolution" requires a distance to be stated. For most video topics, the distance is equal to the screen height. Also, a black line and the white space next to it, such as in a test pattern, count as two lines.
Video Sources
NTSC Broadcasts (composite) -- 120 lines best and 40 - 48 lines typical for reddish orange and greenish blue; 40 - 48 lines for most other color transitions.
VHS, S-VHS, 8mm, Hi-8, all Beta (composite or S-video) -- The standard is 400 kHz of bandwidth, yielding 32 lines best, 25 lines typical.
12" Laser Disk (composite) -- 120 lines best, but could be as low as 40 lines.
DVD (S-video) -- The standard is 120 lines.
DVD (component video) -- 270 lines best, 240 lines acceptable. Color resolution is half of the luminance resolution.
Computer Video -- Color resolution is equal to the luminance resolution, which can exceed 1000 lines. The most common limiting factor is the monitor's picture tube.
U.S. HDTV -- The horizontal color resolution is half of the luminance resolution most of the time, and 1/4 of the luminance resolution in a few cases. Both 1080i and 720p normally have a 16:9 aspect ratio, and the color resolution (lines per picture height) is 540 (occasionally 270) for 1080i and 360 (occasionally 180) for 720p. When the horizontal color resolution is half of the luminance resolution, the vertical color resolution is also half the luminance resolution. When the horizontal color resolution is one fourth of the luminance resolution, the vertical color resolution equals the luminance resolution.
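The digital figures above follow from simple arithmetic: divide the number of samples across the screen by the aspect ratio to get lines per picture height. A quick Python check, using the standard format pixel counts:

    # Lines of resolution per picture height = samples across / aspect ratio.
    def tv_lines(samples_across, aspect):
        return samples_across / aspect

    print(tv_lines(720, 4/3), tv_lines(360, 4/3))     # DVD at 4:3: 540 luminance, 270 color
    print(tv_lines(1920, 16/9), tv_lines(960, 16/9))  # 1080i: 1080 luminance, 540 color
    print(tv_lines(1280, 16/9), tv_lines(640, 16/9))  # 720p: 720 luminance, 360 color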
Video Formats
Composite video and S-video have a practical limit of 120 lines of color resolution and a theoretical limit of 140 lines. While S-video connections were first made available for VCR's, the color resolution for recording remained around 32 lines.
There is no theoretical limit on color resolution for either Y/Pb/Pr (YUV) component video or RGB. Witness that the color resolution standard for 1080i HDTV is 540 TV lines (per distance equal to the picture height, at 16:9) while the luminance resolution for DVD is less, 405 TVL, if the program has the same 16:9 aspect ratio.
The most common digital video transmission formats have color resolution half of the luminance resolution both horizontally and vertically.
Vertical Color Resolution
For all analog video (NTSC, PAL, S-video, composite video, etc.) the vertical color resolution equals the vertical luminance resolution. Each scan line can have color independent of its neighbors.
The same is true of digital video, although most available source material has vertical color resolution half of the luminance resolution, that is, every two scan lines share the same coloration.
Mediocre comb filters will detract from vertical color resolution of analog sources. The "two line" comb filter actually loses more vertical color resolution than "no" comb filter. It is common to observe a discolored stripe two scan lines thick where two color patches that are one atop the other meet. In that case it took three scan lines to transition from one color to the other. Most "three line" and better (3D) comb filters do not produce this artifact.
For most digital video material, some added loss of vertical color resolution occurs with interlaced (live video) material because each pair of scan lines that share color consists of two odd scan lines or two even scan lines. Some added loss of vertical color resolution occurs with film source video also, if the processing does not match up the color with the proper scan lines.
Many HDTV sets use a scaler. Rather than draw 480 scan lines on the screen for DVD (and NTSC broadcasts) and 1080 scan lines for HDTV, they always draw 1080 scan lines for a complete video frame. Since the 1080 scan lines are always in the form of two staggered alternately drawn 540 scan line fields, there is reasonable compatibility with DVD. During the conversion (scaling) from 480 scan lines to 540 scan lines, discoloration can occur where two color patches that are one atop the other meet. This discoloration should not be more than a single scan line. Properly done, the scaling actually does not reduce the vertical color resolution because all of the original independently colored scan lines should still be there.
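A toy experiment suggests why. Below, a hard color boundary in a 480-line column is rescaled to 540 lines with plain linear interpolation; this is only a sketch, as real scalers use better filters:

    import numpy as np

    # One color channel down a column of the screen: one color for the
    # top 240 lines (0.0), another for the bottom 240 lines (1.0).
    src = np.concatenate([np.zeros(240), np.ones(240)])

    # Rescale 480 lines to 540 (a 9:8 ratio) by linear interpolation.
    dst = np.interp(np.linspace(0, 479, 540), np.arange(480), src)

    # Count output scan lines that are a mixture of the two colors.
    print(np.count_nonzero((dst > 0) & (dst < 1)))
    # 2 with this crude filter; a well-designed scaler keeps the visible
    # blend to about one scan line, and no original lines are lost.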
Finding out the color resolution
The high end home theater viewer will want to make sure the TV set has good color horizontal resolution after the comb filter. For an ordinary (non-HDTV) TV, eighty lines would probably be an absolute minimum; 120 lines is the specification for S-video, but the more the better. However one of the big improvements of HDTV over ordinary TV is much improved color resolution. 120 lines is marginal color resolution for HDTV.
Technical information about color resolution can be found in textbooks available in most public libraries, but almost nothing is said about this in advertising.
Asking the Manufacturer
This is not easy. Horizontal color resolution of the TV internal circuits is almost never published. Even knowledgeable salesmen and manufacturers' customer service rep's usually don't know. You might have to call the manufacturer's headquarters. If the resolution is above 80 lines (one megahertz) and the design engineer has some pride in the product, he might give you the answer.
Magazine Articles, Peer Reviews
o Almost nonexistent.
There is no easy way of knowing what the preserved color resolution of any given program material is, considering that there are numerous places in the production process where color resolution could be lost. Without source material that has color resolution known to be good, it is more difficult to verify the color resolution of a TV set.
Obvious Things You Can See
For an ordinary TV broadcast viewed up close, you will notice that, going from left to right, any change of color (other than to or from black or white) is blended, smeared if you insist (or has a black or gray gap in between). This is a direct consequence of less than 50 lines of horizontal color resolution for most if not all colors. If you are familiar with a color wheel representation of colors (red, orange, yellow, green, blue, purple, magenta, back to red, with pastels near the center), the transition or smearing may look like the intermediate colors that a line drawn on the wheel between the two colors passes through.
Lesser color resolution means that as the electron beam draws a scan line, it may be unable to get all the way from one desired color to the next before it has to start changing to a third color for a spot yet further along the scan line.
Provided one of the colors is dark and the other light, fine horizontal detail is not lost completely although the colors actually seen will not be the right colors.
The lesser sensitivity of the human eye to color resolution shortcomings which made NTSC composite video possible also makes judging color resolution more difficult and time consuming.
Judging Color Resolution
It is at best moderately difficult to measure color resolution using the best test patterns available today, and very difficult to measure color resolution viewing normal program material.
It is much better to compare the same picture on two TV sets as opposed to trying to evaluate color resolution on just one set. "Good" color resolution as seen proves good color resolution in the TV set, but "not so good" color resolution does not prove deficient resolution in the TV set. It is entirely possible that the source material, or some other component such as the DVD player, could be deficient.
Available Color Resolution Tests
First I will say a few words about really bad color resolution. You see obtrusive dark gaps or discoloration where two patches of contrasting color meet, even if you are not sitting really close to (within eight times the screen height of) the TV.
o Barney the dinosaur. Observe Barney himself, notably where purple meets green. The lesser the dark gap and the more freedom from graininess, the better.
o A standard color bar test pattern with the large upright bars at the top of the screen. Observe the dark boundary between the red and blue bars, and between the green and magenta bars. The thinner and the more freedom from crawling zipper edges, the better.
o On a newscast or sportscast, observe the colored bar that appears from time to time near the bottom of the screen (usually with red and blue stripes) where players' names, sports statistics, stock ticker numbers, etc. are displayed. Look for a horizontal line of discoloration, two scan lines thick, where the color stripes meet. Note, sometimes this line of discoloration is part of the decorative trim around the colored bars and not a resolution deficiency. Compare several TV sets including some really cheap ones.
o The Snell & Wilcox zone plate test on the Video Essentials test disk.
The only formal color resolution test pattern I know of is the sets of medium to thin colored vertical stripes near the bottom of the Snell and Wilcox zone plate chart, one of which is recorded on the Video Essentials DVD. There are three tests, for 0.5 MHz or 40 lines of resolution, for 1.0 MHz (80 lines), and for 1.5 MHz (120 lines).
In the example pictured, I judge the color resolution to be around 80 lines, or 1.0 MHz, maybe a bit more.
If you see dark gaps in the 0.5 MHz (yellow and blue) test almost as wide as or wider than the colored stripes, then the color resolution is quite poor.
To keep this discussion simple, I will not explain the consequences of sine waves versus square waves here, except to say that an absolutely sharp transition from one color to another requires a square wave, while a sine wave represents a blended (or smeared) color transition or a transition with a dark gap. Meanwhile, the upper limit of resolution or frequency response is where the electronic circuit will reproduce a sine wave, but not a square wave, of a given frequency.
If there is no red in the 1.5 MHz bars the color resolution is well below 120 lines. If there is no red in the 1.0 MHz bars, the color resolution is well below 80 lines. Be sure to stand far enough back so you do not see the color dots or stripes on a picture tube, and also note that poor convergence will greatly confuse the test results.
When comparing TV sets in a store, be aware that the source material fed to all of the sets may be of inferior quality.
Conducting Tests Yourself
If you want to be thorough about examining the TV, you would need to feed test video into all three kinds of inputs, composite (yellow RCA jack), S-video, and component video or digital video input. A DVD player with all three of these kinds of video as outputs can be used. Dark gaps between color patches will never be completely absent if the source is composite or S-video.
Most of the superiority of DVD over other formats is improved color resolution, and it can be seen only with S-video or component video connections.
Some viewers have reported that they don't see much difference between S-video and composite video. Some viewers have reported they can't tell the difference between component video and S-video. The reason is likely to be poor color resolution in at least one place in the signal path, anywhere from the recording of source material on the disk to the color circuits in the TV.
When you view DVD and HDTV programs today (1998) on a non-computer TV screen, most of the improvement is color resolution. Display limitations such as misconvergence and large dot pitch hide much of the improvement afforded by the increased number of scan lines of HDTV.
Vertical color resolution is equal to the vertical luminance resolution for older analog video (except SECAM). Each scan line can be any color independent of any other scan line. Interestingly, for DVD and most U.S. HDTV and DTV, and also SECAM, vertical color resolution is half the luminance resolution; every two scan lines share the same color. Since SECAM is an interlaced format, this causes additional anomalies.
The NTSC standard has a 1.5 MHz color signal lower sideband width, which permits 120 lines of resolution but only for some colors. Earlier (1950's) TV sets probably had circuits that had to start compromising (rolling off) the signal earlier, starting at say 1.2 MHz (100 lines), to prevent exceeding the bandwidth. Today's circuitry could handle color signals close to 1.5 MHz, but it, too, might have been dumbed down to handle less.
S-video has a 1.5 MHz standard for 120 lines of color resolution, although we believe the maximum frequency that can be modulated onto the 3.58 MHz subcarrier and then reliably demodulated is half of the subcarrier, about 1.79 MHz, allowing about 140 lines of resolution.
Note: Specifying "lines of resolution" requires a distance span. Video professionals use for this distance the height of the picture as displayed on the screen. A more general distance reference is across the largest circle that fits in the space we are talking about.
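The bandwidth figures convert to lines of resolution as follows: each cycle of the color signal paints two lines, the cycles fit into the active (visible) portion of a scan line, about 52.6 microseconds for NTSC, and the count is normalized to the picture height by dividing by the 4:3 aspect ratio. A Python sketch, with the active line time being an approximate figure:

    ACTIVE_LINE_US = 52.6   # approximate visible NTSC line time, microseconds
    ASPECT = 4 / 3

    def lines_from_mhz(bandwidth_mhz):
        cycles_per_line = bandwidth_mhz * ACTIVE_LINE_US   # MHz x us = cycles
        return 2 * cycles_per_line / ASPECT                # lines per picture height

    for mhz in (0.5, 1.0, 1.5, 1.79):
        print(mhz, round(lines_from_mhz(mhz)))
    # prints roughly 40, 80, 120, and 140 lines respectively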
Skimping: sources don't always use the full resolution
Fortunately, digital video and component video connections today allow for much improved color resolution.
The next thread is "downrezzing"
Need To Fit Space Provided
Some History
In all current forms of full color consumer video, color resolution is deliberately cut so that the color signal will fit in the channel bandwidth or on the disk or tape without interfering excessively with the luminance signal (the latter forms the picture in black and white). Within these limits the NTSC composite video signal was originally optimized for the human eye's sensitivity to certain colors. Custom and habit, and manufacturing cost cutting, often result in even less color resolution in the finished picture than NTSC is capable of delivering.
The original black and white NTSC picture signal spanned most of the bandwidth for a broadcast TV channel. It so happens that for most real life pictures, the amount of fine detail information is quite small and also occupies a contiguous portion of the bandwidth (towards the higher frequency side).
Color video is initially recorded or captured as red content, green content, and blue content (RGB) signal components. At the other end of the video signal path, at the picture tube(s) or LCD panel(s) or other display element(s) the video information must be back in RGB form.
For RGB, the three signal components must each carry the full resolution of the picture. As such, they would occupy three times the storage or transmission channel space of a black and white video signal. The goal in developing the complete system of color TV was to in some fashion compress the RGB signals for most of the video signal path, in particular to use less spectrum space for over the air broadcasting.
By formatting video as a luminance signal and two color component signals, the latter two can be carried at much less resolution, saving on storage and transmission space requirements.
It so happens that any three colors, not just red, green, and blue, could be the basis of subsignals used to reconstruct a color picture. With compatibility with black and white TV in mind, the designers chose to transmit luminance, or white/black, with full resolution and two other colors (approximately orange and green respectively) at a much reduced horizontal resolution. Actually the color signals represent their primary color and its inverse, here, (approx.) orange/blue and (approx.) green/purple, for a total of four primary colors not including black and white. Different combinations of these color signals and luminance produce mixtures that represent all colors.
Although NTSC broadcast TV provided 330 lines of resolution, most TV sets even back in the 1950's had no more than about 240 lines as a result of manufacturing cost cutting. To define composite video, two color subsignals were modulated onto a subcarrier and then superimposed on the fine detail portion of the luminance signal. On the average (cheaper) black and white TV set, interference was hardly noticed. Those few TV sets (those with full bandwidth and resolution) that experienced problems (a silk screened picture) were often hand modified by a serviceman who added a capacitor amidst the large easily soldered discrete components. That same capacitor, tinier nowadays, is still found in today's TV sets without comb filters.
While the three colors red, green, and blue are needed to make a video picture, for details in the 40 to 120 "lines of resolution" range (medium details) only two primary colors, reddish orange and greenish blue, will suffice. Still smaller picture details can be left uncolored. Viewed by itself such a picture (ultimate NTSC approximates this) looks very natural although of course it looks inferior compared to today's DVD picture.
To make the luminance/color interference tolerable and keep the total signal within the allotted channel space, the color signal bandwidth and therefore the color resolution had to be reduced. Since the human eye is more sensitive to reddish orange and greenish blue, one color component signal (called I for in-phase) is based on these colors. A second component (called Q for quadrature modulation) is based on greens and purples such that combinations of I and Q could produce any color. The I signal giving 120 lines of resolution (wideband relatively speaking) and the Q signal giving 40 lines of resolution (narrow band) gave the best resolution compromise. In order to finalize the NTSC standard, viewers' opinions of picture quality were gathered using laboratory tests involving different combinations of luminance and color content. Reception of the same signal on black and white TV sets was also taken into account as the final NTSC standard had to be compatible. Click here for more on color decoders.
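The encoding itself is a simple matrix operation. Here is a sketch using the textbook NTSC coefficients (rounded); the bandwidth limiting of I and Q happens after this step and is not shown:

    import numpy as np

    # Textbook RGB-to-YIQ matrix (coefficients rounded). Y is the full
    # detail black and white picture; I is the orange/blue axis (about
    # 1.5 MHz, 120 lines); Q is the green/purple axis (0.5 MHz, 40 lines).
    RGB_TO_YIQ = np.array([
        [0.299,  0.587,  0.114],   # Y
        [0.596, -0.274, -0.322],   # I
        [0.211, -0.523,  0.312],   # Q
    ])

    def rgb_to_yiq(rgb):
        return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

    print(rgb_to_yiq([1.0, 0.0, 0.0]))   # pure red: Y=0.299, I=0.596, Q=0.211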
Common Causes For Color Resolution Loss
While standards exist for broadcasting, further loss of color resolution can occur just about anywhere in the video signal path.
o At the uplink end of digital satellite and digital cable TV systems, the bandwidth of analog circuits prior to digital conversion may be inferior or the digitizing process may be inferior.
o On analog cable systems the bandwidth of the production and transmission electronics in general may be insufficient, or some of the video signal may be removed to reduce inter-channel interference. For over the air TV broadcasting, any two adjacent channels (other than the 4-5, 6-7, and 13-14 pairs which are really not contiguous in terms of frequency band) are never both in use in the same city or region. Whereas for cable TV, almost all of the channels are in use.
o The comb filter (used by broadcasts and composite video sources) may introduce artifacts. The most common artifact regarding color resolution is a horizontal line of discoloration, usually two scan lines thick, where color patches one atop another meet.
o The color decoder (used by broadcasts, composite video, and S-video sources) may have bandwidth limitations.
o The video stages just prior to the final conversion to RGB (used by all video sources) may have bandwidth limitations.
o Cables used to connect components may be inferior. The most likely time this happens is when an audio-video cable set (with red, white, and yellow cables) is used instead of the proper red, green, and blue cables for component video connections.
(Figures, not shown: a letter "T" test image in which the stem of the T has a discoloration tending toward the background color, contrasted with the top of the T; and a wedge diagram in which the transition from one color to the other remains a constant width.)
Digital TV Color Resolution
The necessity of reducing color resolution to fit the allowable transmission or storage space has continued into the digital age. The most common standard for U.S. video, both for HDTV and digital SDTV broadcasting and DVD, is half the luminance resolution, both horizontally and vertically. In other words, there is one color pixel for every 2x2 block of luminance pixels. There is one other common standard where every four luminance pixels in a row share the same color.
If each pixel could be individually colored, each pixel would need three values, for Y, Pb, and Pr respectively, or Y, I, and Q respectively*, or R, G, and B respectively. Twelve values would be needed for each 2x2 block of pixels. Instead the 2x2 block of pixels customarily has one luminance value for each pixel and two values for the shared color, Pb and Pr respectively, for a total of six values, a 50% savings of space. In the current DVD and HDTV spec's each value is represented by one eight bit byte giving a numeric range of approximately 0 to 255 (typically 16 to 235 for luminance, and 16 to 240 for Pb and Pr, more correctly called Cb and Cr in a digital setting).
* In practice there have been no instances of digital video other than experiments where I and Q were used as the color components.
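The byte accounting for one 2x2 block, per the description above:

    # Samples per 2x2 block of pixels, one byte each per the DVD/HDTV spec's.
    full_color = 4 * 3      # 12 bytes: Y, Pb, Pr (or R, G, B) for every pixel
    shared_420 = 4 + 1 + 1  #  6 bytes: four Y values, one shared Pb, one shared Pr
    print(1 - shared_420 / full_color)   # 0.5 -- the 50% savings

    # Legal code ranges within the 8-bit (0-255) values:
    Y_RANGE    = (16, 235)   # luminance
    CBCR_RANGE = (16, 240)   # Cb and Cr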
Transitioning To A Different Color, Oversimplified
Let's say that the source material has a narrow upright red bar, about 8 pixels thick, on a blue background. When the electron beam draws a scan line containing a cross section of that bar and gets to the place where the first red pixel would be, let's say it takes eight pixels to change from blue to red.
(Analog video resolution is not expressed in terms of pixels; what I mean is that I arbitrarily divided the horizontal span of the blue-to-red transition into eight slices, and that for this discussion the red bar is exactly that width.)
The first pixel in the transition would be about 4% of the way to red. The next several pixels are about 15%, 32%, 50%, 68%, 85%, 96% and 100% of the way, respectively, suggesting part of a sine wave. At this point in the source material the red bar has ended and the blue background begins again. The transition back to blue also takes eight pixels. After the 100% red pixel, the next pixel is 4% of the way back to blue, or 96% red. The next two pixels are 15% and 32% of the way back, or 85% and 68% red respectively.
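Those percentages trace half a cosine cycle, which is what a band-limited (sine wave) transition looks like. A short Python check, using the same arbitrary division into eight slices:

    import math

    steps = [(1 - math.cos(math.pi * k / 8)) / 2 for k in range(1, 9)]
    print([round(100 * s) for s in steps])
    # [4, 15, 31, 50, 69, 85, 96, 100] -- essentially the figures quoted above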
Some folks will say that the 4% pixel passes for full blue; that the 96% and 100% pixels, plus the 96% pixel on the other side, pass for full red; and that the five pixels in between (15%, 32%, 50%, 68%, 85% red) make up the intervening blend or smear region. In this case the amount of red is 3/8 of the width of the original red bar.
Some folks will say that red consists of the 85%, 96% and 100% pixels and the 96% and 85% pixels on the other side for a total of five pixels. They would say that the amount of red is 5/8 of the width of the original red bar.
Let's just take the average and say that color resolution is achieved if half of the width of the bar is approximately the color expected.
If the red bar in the source material were narrower, there would not be a 100% red pixel but instead the color would have to start tending back to blue sooner. If the red bar in the source material were wider, there would be several 100% red pixels in a row before the transition back to blue begins.
Depending on the exact nature of the electronics, the blend region (at least the 32%, 50% and 68% pixels) may be a smooth blend from blue to red, or may be gray. Depending on the source material and the recording means, the transition may be represented by a high frequency component that cannot be accommodated by the video color circuits. The result is a momentary loss of color signal that shows up on the screen as a gray gap.
There are other idiosyncrasies that can only be explained with knowledge of waveforms and filtering:
1. If color resolution is stated in terms of bandwidth, it is only necessary for the transition to go 50% of the way to the desired color, or if there were several narrow red bars with equally narrow blue spaces, the transitions need only go from 25% to 75%. The equipment may meet spec's but the colors may still not look right.
2. Two or more of the narrow red bars closely spaced (with blue gaps in between) may show a different amount of red for each bar than just a single red bar, and the first and last bars may look different from those in the middle.
Every color signal represents the transitions from one color to another. For example the I signal's actual waveform represents color content changes from reddish orange to greenish blue with the middle standing for no reddish orange and also no greenish blue. The R signal's waveform represents color content changes from deep red through moderately intense red to no red. With three signals, such as YIQ or RGB, we do color mixing to represent all of the colors needed for the picture.
How is the eye less sensitive to color detail?
The retina of the eye has many more luminance sensors, called rods, than color sensors, called cones.
Why Red, Green, and Blue?
It so happens that the normal human eye has three kinds of color receptors that are sensitive primarily to red light, green light, and blue light respectively. While, for example, yellow light stimulates both the red and green receptors (and minimally if at all the blue receptors), the same stimuli are accomplished by representing yellow portions of a video picture using red and green. Thus video pictures could be made up using picture tube phosphor dots or stripes that glow in just three colors, red, green, and blue. Depending on the exact shades of red, green, and blue used, the range of colors that the eye can be fooled into presenting to the brain will differ. Unfortunately the red phosphor in common use today is a wee bit too orange. (Paintings can also be made using just red, green, and blue colors as tiny juxtaposed stripes or dots that do not overlap.)
Why Orange and Blue?
Viewing tests showed that the human eye is more sensitive to fine color detail involving reddish orange and greenish blue compared with fine color changes involving other colors. We believe that highway signs pointing out temporary or unusual conditions such as construction are orange because of this visual acuity characteristic. Also, if you are in a dark room and the light level is gradually increasing such as at dawn, you will see objects appearing first as orange or blue (or gray).
It was RCA that suggested giving reddish orange and greenish blue more resolution than other colors. Earlier RCA had proposed a color TV system with only reddish orange and greenish blue shades in addition to black, grays, and white. With CBS' red, green, and blue color wheel system in competition, the orange and blue system was passed over quickly.
What is 4:4:4? 4:2:0?
Digital video is often described using three numbers X:Y:Z as follows: "For every X luminance pixels there are Y color pixels on odd scan lines and Z color pixels on even scan lines".
So 4:4:4 means every luminance pixel has its own color information. The 4:2:0 format is the most common; for every four luminance pixels there are two color pixels for odd scan lines and no color pixels recorded or saved for even scan lines. This produces a matrix of color pixels half the dimensions of the matrix of luminance pixels, for example 320 x 240 for a 640 x 480 pixel picture. In actuality each color pixel should be the average of the colors of the four nearby luminance pixels, as sketched below. Format 4:1:1 is also used; every scan line has its own color, but one color pixel is shared by four luminance pixels in a row.
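The averaging step might look like this in Python with numpy; this is only a sketch, as real encoders use better filters and particular chroma siting rules:

    import numpy as np

    def chroma_420(plane):
        # Average each 2x2 block of a full-resolution color plane,
        # producing the half-size color plane used in 4:2:0.
        h, w = plane.shape
        return plane.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    full = np.arange(16.0).reshape(4, 4)   # a tiny full-resolution color plane
    print(chroma_420(full))                # 2 x 2 plane of block averages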
We might think of broadcast NTSC as 4:(0.5):(0.5) since it takes the equivalent of eight luminance pixels (broadcast standard and limit) to transition from one color to another. The pixel analogy is not quite correct. The color transition in analog video could span "pixels 4 through 11" as opposed to "pixels 1 through 8" or "pixels 9 through 16" while in digital video (4:1:1) color is shared by pixels 1 through 4 and pixels 5 through 8, and never pixels 3 through 6. We would expect that digitizing NTSC analog video as 4:(0.5):(0.5) would produce noticeable degradation, using 4:1:1 or 4:2:0 would produce some degradation, and using 4:2:2 would produce negligible degradation.
Although we are not sure exactly when, the de-facto NTSC standard of 0.5 MHz or 40 lines of resolution for all colors became established very quickly. Most TV theory textbooks even as far back as the 1960's mention a 0.5 MHz bandwidth for color circuits. As early as 1954 RCA advertised "simplified color TV circuitry". Again we are not sure, but this could have included the change from YIQ decoding, with 100+ lines of resolution for some colors, to Y,Pb,Pr decoding with 40 lines of resolution for all colors.
Meanwhile the broadcast standard still called for the full YIQ encoding.
In the process of handling the luminance and color, the color signal became delayed, requiring a matching delay circuit in the luminance signal path. The different bandwidths of the I and Q signals resulted in different delays for each, so additional delay circuits were needed. TV set makers started to cut the I signal back to 0.5 to 0.6 MHz to match the Q signal so only one delay interval would have to be dealt with.
By the mid 1970's the B-Y and R-Y signals we use today in "component video" started to get widespread use in TV color decoders. The I and Q signals, being more complicated and expensive to work with, started to give way to B-Y and R-Y (also called U and V respectively). Neither of the latter is optimized for the red-orange and green-blue medium detail primaries, and when they were used, they would provide just 40 to 48 lines of resolution for all colors. In fact, the early usage of U and V was referred to as narrow band color (3). We believe that there were a few non-standardized instances where the V signal was given a 1.5 MHz bandwidth. One recent reader writes that some laser disks are made with 1.5 MHz or even 2 MHz bandwidth for both color components. We have not verified this, nor have we heard of problems with common comb filters being unable to filter this much chrominance information.
Although S-video standards provide 120 lines of resolution, a manufacturer of high grade de-interlacing and scaling units (which we won't name) admitted that the color decoder they used for S-video input as well as composite video had only 60 lines of resolution.
A few years ago, the more upscale TV sets would have brochures that advertised "high video bandwidth" to emphasize the higher luminance horizontal resolution. Manufacturers never did and still don't say anything about the color resolution. With S-video and component video program sources, consumers will have to be more careful in selecting a TV set that does have better color resolution.
If the TV set has an excellent comb filter that removes most of the dot crawl but has only 48 lines of color resolution in the circuits that follow, picture quality of an S-video feed will be degraded almost to that of a composite feed. A component video feed might then produce a picture almost indistinguishable from an S-video feed. (This difference or lack of difference can also be attributed to misconvergence on the screen.)
We can just hope that an HDTV compatible TV has better color resolution but it is still a good idea to verify that using the viewing tests suggested earlier.
Some TV sets decode the C color signal into I and Q; most decode it into U and V. For run of the mill TV sets and others with just 0.5 MHz chroma bandwidth, video constructed from I and Q and video constructed from U and V will both be decoded reasonably well regardless of whether the decoding circuits use I and Q or use U and V. Either way, red, green, and blue components are eventually constructed for feeding to the picture tube. Some complications arise when decoding wide band color such as correct NTSC color with the 1.5 MHz lower sideband, although for the most part any errors are hardly noticed by the viewer. Some color error occurs because U and V are attenuated by different factors prior to being encoded as the C signal, and a YIQ decoder (as well as numerous YUV decoders) doesn't have the complementary boost for the decoded color component signals.
Extended Color Resolution for Laser Disks
Laser disk programs, which are based on composite video, can have the same color horizontal resolution as programs transmitted as S-video.
For non-aerial transmission of video, the 4.2 MHz upper limit does not apply and luminance information goes up to 5.3 MHz (425 lines) for laser disks and 7.0 MHz (540 lines) for DVD. This permits even composite video to carry 120 lines of color resolution. This writer is not sure which laser disks actually take advantage of this. Some laser disks were indeed produced using equipment with just a 4.2 MHz video bandwidth. This occurred because many Japanese laser disk producers used equipment built for the broadcast industry as opposed to custom wider band equipment (1). (Japan uses NTSC.)
Where Might Excessive Loss Occur?
1. If the camera has separate pickup elements (CCD or comparable) for luminance and color and the one for color has far fewer pixels of resolution,
2. As the color components (either the I and Q pair or the U and V pair) are generated from the camera RGB signals and the bandwidth either by design or improper adjustment is too small,
3. As the color components (either the I and Q pair or the U and V pair) are generated by a DVD player as the disk is played and their bandwidth either by design or improper adjustment is too small
4. (mentioned above) As the C signal is band limited just prior to combining with the luminance signal to become composite video for broadcast or recording on laser disks.
5. During comb filtering: all comb filters split the incoming signal into different frequency bands, which some comb filters might make too narrow,
6. (mentioned above) In the TV set color circuits that follow the comb filter, if they are skimpy enough to barely pass the color from composite video,
7. When recording using any consumer grade analog VCR or camcorder (including Super VHS and Hi-8); the bandwidth of the recorded color signal is enough for at most 32 lines of horizontal resolution.
8. If the cables connecting the various devices are too long or of poor quality. Occasionally someone uses an audio/video cable set with red, white, and yellow color coding for Y/Pb/Pr (YUV) component video and the red and white cables may or may not pass the Pr and Pb signals adequately.
9. If the TV converted incoming Y/Pb/Pr component video into S-video (believe it, some do) and then decoded the S-video using its usual broadcast video circuits.
Some experts(2) have questioned the legality of NTSC broadcasts lacking medium detail color information, such as from source material recorded on consumer grade VCR's including S-VHS and Hi-8. This is a content limitation, not a technical shortcoming. Ordinary VHS recordings also have luminance resolution limited to about 240 lines, or 3 MHz. This would be no less legal than broadcasts transmitted over coaxial cables back in the 1950's, when the bandwidth was about 3 MHz and the finer luminance details (and all ability to recover color information, if it was a color broadcast) were lost. It does not make sense to forbid the transmission of inferior source material, since for some applications, such as news, that may be the only video material available.
Viewed alone, pictures with incorrect medium color detail rendering would not stand out as being incorrect due to the already limited human visual acuity of color detail. But they would look inferior in a side by side comparison with correctly constructed (YIQ) and correctly demodulated broadcast presented on a high quality TV set.
Edge Shimmering
Even if there is only one color transition in a relatively wide horizontal span (two solid color patches side by side), if this transition is abrupt, there will be a high frequency requirement that the color circuits must handle for proper reproduction. The "unpredictable" color that results when the upper sideband that would contain needed signal information is missing could show up as a minute amount of a clashing color at the color boundary. Furthermore the unpredictable color could vary from one video frame to the next, resulting in a shimmering appearance. Interestingly enough, this kind of edge shimmering is less obtrusive with color decoders with less resolution, since fine detail is then reproduced in gray shades as opposed to unpredictable colors.
A controversy involving high definition TV has to do with picture quality versus copying and piracy of video material. One proposal called for the deliberate cutting of resolution for all HDTV material except when delivered via certain encrypted and sealed video signal paths.
But cutting resolution digitally often adds problems of its own. Suppose HDTV with 1280 luminance pixels and 640 color pixels across is "downrezzed" to half its resolution. There are now only 320 color pixels across. But the color pixels can then be centered on only 320 possible positions across the screen, not 640; analog video, by comparison, places no constraint on the exact horizontal position of a color transition.
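In other words, the grid of possible color pixel centers gets coarser. A sketch of the positions, in original pixel units and assuming color samples centered on their pixel groups (actual chroma siting conventions vary):

    # 1280 luminance pixels across, color sample centers assumed mid-group.
    before = [2 * k + 0.5 for k in range(640)]   # one color sample per 2 pixels
    after  = [4 * k + 1.5 for k in range(320)]   # one per 4 pixels after downrezzing

    print(before[:4])   # [0.5, 2.5, 4.5, 6.5]
    print(after[:4])    # [1.5, 5.5, 9.5, 13.5] -- half as many possible centers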
As 0.5 MHz (equal to 40 lines of color resolution, or 2% of the screen width) became the de-facto standard for NTSC broadcasts, the term "coarse detail" has become equated with picture details larger than 2% of the screen width. "Medium detail" has become equated with picture details between one and two percent of the screen width.
To Recap, What The Textbooks Say
1. The goal was to reproduce coarse details in full color, medium details in shades of orange and blue, and fine details in black and white.
The YIQ system of encoding and decoding color in NTSC accomplished this. There was the limitation of being unable to reproduce medium details in full color. The system could provide about 1.5 MHz of bandwidth for only one of the two color components, and only 0.5 MHz of bandwidth for the other. Based on preliminary viewing tests, it was found that having the wider band component represent oranges and greenish blues gave a better picture compared with having it represent reds and cyans, or yellows and blues. With no color information above 1.5 MHz retained, picture details that narrow remain uncolored.
2. While black and white telecasts had 330 lines of resolution, color telecasts offered only 240 lines of luminance resolution.
It is true that, in its early days, color TV in practice was limited to 240 lines of luminance resolution while black and white telecasts continued to have 330 lines of resolution. As we go up the frequency range in the video signal past about 3 Mhz (corresponding to 240 lines), the preponderance of luminance information in normal subject matter is getting less and less and the preponderance of chrominance information is getting greater and greater. The fine luminance detail was still supposed to be there per FCC standards. TV sets of the time could not use that portion of the composite video signal for luminance without displaying a lot of noise as a result of chrominance contamination. Comb filters are needed to extract the finest luminance details and comb filters back then were prohibitively expensive. It might be noted that most (black and white) TV sets then had no more than 240 lines of resolution so hardly anyone noticed the deficiency.
3. One digital video standard has color resolution equal to half the luminance resolution both horizontally and vertically.
In that standard, there is one color pixel for every 2x2 block of luminance pixels, or in other words every 2x2 block of luminance pixels share the same color. For a 720 x 480 pixel format there is a 360 x 240 block of Pr pixels and a 360 x 240 block of Pb pixels.
Notes
(1), (2) Richard Emery, see below.
(3) Charles Poynton, published volume whose title we do not recall.
Richard Emery, various private communications, 2001-2003.
Bernard Hartman, Fundamentals of Television, 1975 (Charles E. Merrill Publishing Co., Columbus, OH).
Clyde Herrick, Color Television Theory and Servicing, 2nd edition, 1977 (Reston Publishing Co., VA).
Milton Kaufman and Milton Kiver, Television Simplified, 7th edition, 1973 (Van Nostrand Reinhold, NY), pp. 499-503.
Alvin Liff, Color and Black and White TV Theory and Service, 2nd edition, 1979, 1988 (Prentice Hall, NJ).
Charles Poynton, an article posted ca. 1993 on a corporate electronic bulletin board at Digital Equipment Corp. (now part of Hewlett-Packard).
Greg Rogers, Video Signal Formats (web page: http://www.cybertheater.com/Tech_Archive/YC_Comp_Format/yc_comp_format.html).
SMPTE: Society of Motion Picture and Television Engineers
All parts (c) Copyright 1997-2004, Allan W. Jayne, Jr. unless otherwise noted or other origin stated.
P.O. Box 762, Nashua, NH 03061
603-889-1111 -- Click here to e-mail us.
If you would like to contribute an idea for our web page, please send us an e-mail. Sorry, but due to the volume of e-mail we cannot reply personally to all inquiries.