An image sensor sits against a background enlargement of its square pixels, each capable of capturing one pixel in the final image. Courtesy of IBM.
RGB uses just three basic colors to create full color images.
Digital photographs are actually mosaics of millions of tiny squares called picture elements—or just pixels. Like the impressionists who painted wonderful scenes with small dabs of paint, your computer and printer can use these tiny pixels to display or print photographs. To do so, the computer divides the screen or printed page into a grid of pixels. It then uses the values stored in the digital photograph to specify the brightness and color of each pixel in this grid—a form of painting by number.
A digital image that looks sharp and has smooth transitions in tones (top) is actually made up of millions of individual square pixels (bottom). Each pixel is a solid, uniform color.
A few camera companies, even some that are otherwise respectable, try to deceive you into thinking their cameras have higher resolution than they really do. They use software to inflate the size of a captured image and then use this inflated size in advertising claims about the camera. This way, each captured pixel can suddenly become four, and voilà, a 2 megapixel image suddenly and magically becomes 8.
Number of Pixels
The quality of a digital image depends in part on the number of pixels used to create the image (sometimes referred to as resolution). At a given size, more pixels add detail and sharpen edges. However, there are always size limits. When you enlarge any digital image enough, the pixels begin to show—an effect called pixelization. This is not unlike traditional silver-based prints where grain begins to show when prints are enlarged past a certain point.
The term "resolution" has two meanings in photography. Originally it referred to the ability of a camera system to resolve pairs of fine lines such as those found on a test chart. In this usage it's an indicator of sharpness, not image size. With the introduction of digital cameras the term began being used to indicate the number of pixels a camera could capture.
When a digital image is displayed or printed at the correct size for the number of pixels it contains, it looks like a normal photograph. When enlarged too much (as is the eye here), its square pixels begin to show.
The pixel size of a digital photograph is specified in one of two ways—by its dimensions in pixels or by the total number of pixels it contains. For example, the same image can be said to have 4368 × 2912 pixels (where "×" is pronounced "by" as in "4368 by 2912"), or to contain 12.7 million pixels or megapixels (4368 multiplied by 2912).
Image sizes are expressed as dimensions in pixels (4368 × 2912) or by the total number of pixels (12.7 megapixels).
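The arithmetic behind these two ways of stating image size can be sketched in a few lines of Python (the dimensions are the same example used above):

```python
# Converting pixel dimensions into a megapixel count.
width, height = 4368, 2912
total_pixels = width * height          # 12,719,616 pixels
megapixels = total_pixels / 1_000_000  # about 12.7 megapixels
print(f"{width} x {height} = {megapixels:.1f} megapixels")
```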
Good prints can be made using 200 pixels per inch. Using this as a guide you can calculate that a 2000 × 1600 pixel image (just over 3 megapixels) will make a good 10 × 8 inch print.
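The same pixels-per-inch calculation can be written as a small Python sketch; the 200 ppi figure is the rule of thumb given above, and the function name is ours:

```python
def print_size(width_px, height_px, ppi=200):
    """Largest 'good' print, in inches, at the given pixels per inch."""
    return width_px / ppi, height_px / ppi

# A 2000 x 1600 pixel image at 200 ppi:
print(print_size(2000, 1600))  # (10.0, 8.0) -> a good 10 x 8 inch print
```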
How an Image is Captured
Digital cameras are very much like earlier cameras. Beginning with the very first camera, all have been basically black boxes with a lens, an aperture, and a shutter. The big difference between traditional film cameras and digital cameras is how they capture the image. Instead of film, digital cameras use a solid-state device called an image sensor. In some digital cameras the image sensor is a charge-coupled device (CCD), while in others it's a CMOS sensor. Both types can give very good results. On the surface of these fingernail-sized silicon chips are millions of photosensitive diodes, each of which captures a single pixel in the photograph-to-be.
When you take a picture the shutter opens briefly and each pixel on the image sensor records the brightness of the light that falls on it by accumulating an electrical charge. The more light that hits a pixel, the higher the charge it records. Pixels capturing light from highlights in the scene will have high charges. Those capturing light from shadows will have low charges.
After the shutter closes to end the exposure, the charge from each pixel is measured and converted into a digital number. This series of numbers is then used to reconstruct the image by setting the color and brightness of matching pixels on the screen or printed page.
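The readout step described above—measuring each pixel's charge and converting it to a number—can be illustrated with a toy Python sketch. The charge values and the 8-bit scale here are illustrative assumptions, not the workings of any particular camera:

```python
# Each pixel's accumulated charge (normalized here to 0.0-1.0) is
# converted into a digital brightness number, 0 (black) to 255 (white).
charges = [0.02, 0.35, 0.71, 0.98]   # shadows -> highlights (made-up values)

def to_digital(charge, bits=8):
    levels = 2 ** bits - 1           # 255 levels for an 8-bit conversion
    charge = min(max(charge, 0.0), 1.0)
    return round(charge * levels)

print([to_digital(c) for c in charges])  # [5, 89, 181, 250]
```

Low charges (shadows) become small numbers and high charges (highlights) become large ones, which is exactly the series of numbers used to rebuild the image on screen or paper.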
It's All Black and White After All
It may be surprising, but pixels on an image sensor only capture brightness, not color. They record the gray scale—a series of tones ranging from pure white to pure black. How the camera creates a color image from the brightness recorded by each pixel is an interesting story with its roots in the distant past.
The gray scale, seen best in black and white photos, contains a range of tones from pure black to pure white.
When photography was first invented in the 1840s, it could only record black and white images. The search for color was a long and arduous process, and a lot of hand coloring went on in the interim (causing one photographer to comment "so you have to know how to paint after all!"). One major breakthrough was James Clerk Maxwell's 1860 discovery that color photographs could be created using black and white film and red, blue, and green filters. He had the photographer Thomas Sutton photograph a tartan ribbon three times, each time with a different color filter over the lens. The three black and white images were then projected onto a screen with three different projectors, each equipped with the same color filter used to take the image being projected. When brought into alignment, the three images formed a full-color photograph. Over a century later, image sensors work much the same way.
Colors in a photographic image are usually based on the three primary colors red, green, and blue (RGB). This is called the additive color system because when the three colors are combined in equal amounts, they form white. The RGB system is used whenever light is projected to form colors, as it is on a display monitor (or in your eye). Another color system uses cyan, magenta, yellow, and black (CMYK) to create colors. This subtractive system, in which the inks absorb rather than emit light, is used in a few sensors and almost all printers since it's the color system used with reflected light.
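The relationship between the two systems is simple: cyan, magenta, and yellow are the complements of red, green, and blue. A minimal Python sketch (on a 0–255 scale, with black-ink generation omitted) makes this concrete:

```python
# Cyan, magenta, and yellow are the "opposites" of red, green, and blue:
# each ink absorbs one of the three colors of light.
def rgb_to_cmy(r, g, b):
    return 255 - r, 255 - g, 255 - b

print(rgb_to_cmy(255, 255, 255))  # white light -> (0, 0, 0): no ink needed
print(rgb_to_cmy(255, 0, 0))      # pure red -> (0, 255, 255)
```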
Since daylight is made up of red, green, and blue light, placing red, green, and blue filters over individual pixels on the image sensor can create color images just as they did for Maxwell in 1860. Using a process called interpolation, the camera computes the actual color of each pixel by combining the color it captured directly through its own filter with the other two colors captured by the pixels around it. How well it does this is affected in part by the image format, size, and compression you select.
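A toy Python sketch shows the idea behind interpolation: each sensor pixel records only one color, and the missing colors are estimated from neighbors. Real cameras use far more sophisticated algorithms, and the numbers here are made up for illustration:

```python
# Estimating the green value at a red-filtered pixel by averaging
# its four green-filtered neighbors (above, below, left, right).
green_neighbors = [120, 130, 126, 124]   # made-up neighbor brightnesses
green_at_red = sum(green_neighbors) / len(green_neighbors)
print(green_at_red)  # 125.0
```

The same averaging is repeated for the blue value, so that every pixel ends up with a full red, green, and blue triplet even though it only measured one of the three.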
One of the most important choices you'll make when shooting photos is what format to use—JPEG or RAW.
- JPEG is the default format used by almost every digital camera ever made. Named after its developer, the Joint Photographic Experts Group (and pronounced "jay-peg"), this format lets you specify both image size and compression. The smallest size is best for the Web and e-mail (although it will usually have to be reduced) and the largest for prints.
The JPEG format compresses images to make their files smaller, but many cameras let you specify how much they are compressed. This is a useful feature because there is a trade-off between compression and image quality. Less compression gives you better images so you can make larger prints, but you can't store as many images. Because smaller or more heavily compressed images take up less space, you can squeeze more of them onto a storage device, so there may be times when you'll want to switch settings and sacrifice quality for quantity.
- RAW images are often better than JPEG images because they are not processed in the camera, but on your more powerful desktop computer. These RAW files contain every bit of the captured data, unlike JPEGs which are always processed in the camera with some data being discarded. RAW files can be viewed, edited, and converted to other formats using most photo-editing software or programs included on a CD that comes with the camera. Some cameras let you capture RAW images by themselves or with a companion JPEG image that gives you an identical high quality RAW file and a smaller, more easily distributable JPEG image. When you use this feature, both the RAW and JPEG files have the same names but different extensions. The RAW format is discussed in more detail on page using3-8.html.
When you select an image format, size, and compression, you're not only affecting image quality but also how many images can be stored on your memory card. Sometimes when there is no storage space left, you can switch to a smaller size and higher compression to squeeze a few more images onto the card.
The number of new images you can store at the current settings is usually displayed on the camera's monitor or control panel.
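A back-of-the-envelope Python sketch shows how the camera arrives at that count. The card capacity and per-image file sizes below are illustrative assumptions, not the specifications of any real camera; the point is only that less compression means bigger files and fewer images:

```python
# Rough estimate of how many images fit on a card at each quality setting.
card_mb = 512                                        # assumed card capacity
approx_file_mb = {"fine": 6.0, "normal": 3.0, "basic": 1.5}  # assumed sizes

for quality, size_mb in approx_file_mb.items():
    print(quality, int(card_mb // size_mb), "images")
```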