Digital cameras are very much like all earlier cameras. Beginning with the very first camera, all have been basically black boxes with a lens to gather the light, a wheel you turn to focus the image, an aperture that determines how much light enters, and a shutter that determines how long the light enters.
Both the earliest cameras and the most state-of-the-art models available today are really just black boxes.
The big difference between traditional film cameras and digital cameras is how they capture the image. Instead of film, digital cameras use a solid-state device called an image sensor. In some digital cameras the image sensor is a charge-coupled device (CCD), while in others it's a CMOS sensor. Both types can give very good results. On the surface of these fingernail-sized silicon chips are millions of photosensitive diodes, each of which captures a single pixel in the photograph-to-be.
An image sensor sits against a background enlargement of its square pixels, each capable of capturing one pixel in the final image. Courtesy of IBM.
When you take a picture, the shutter opens briefly and each pixel on the image sensor records the brightness of the light that falls on it by accumulating an electrical charge. The more light that hits a pixel, the higher the charge it records. Pixels capturing light from highlights in the scene will have high charges. Those capturing light from shadows will have low charges.
After the shutter closes to end the exposure, the charge from each pixel is measured and converted into a digital number. This series of numbers is then used to reconstruct the image by setting the color and brightness of matching pixels on the screen or printed page.
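The charge-to-number step described above can be sketched as simple quantization. This is a hypothetical illustration, not any camera's actual circuitry: it assumes charges normalized to a 0.0–1.0 range and an 8-bit converter, though real cameras typically digitize at 10, 12, or 14 bits.

```python
def quantize(charge, bits=8):
    """Map a normalized charge (0.0 = no light, 1.0 = a full pixel)
    to a digital number, as an analog-to-digital converter does."""
    levels = 2 ** bits  # an 8-bit converter yields 256 levels (0-255)
    return min(int(charge * levels), levels - 1)

# A bright highlight, a midtone, and a deep shadow:
print([quantize(c) for c in (0.98, 0.5, 0.02)])  # → [250, 128, 5]
```

The series of numbers the camera stores is just this kind of value, one per pixel, later mapped back to brightness on a screen or printed page.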
It's All Black and White After All
It may be surprising, but pixels on an image sensor only capture brightness, not color. They record the gray scale—a series of tones ranging from pure white to pure black. How the camera creates a color image from the brightness recorded by each pixel is an interesting story with its roots in the distant past.
The gray scale, seen best in black and white photos, contains a range of tones from pure black to pure white.
When photography was first invented in the 1840s, it could only record black and white images. The search for color was a long and arduous process, and a lot of hand coloring went on in the interim (causing one photographer to comment "so you have to know how to paint after all!"). One major breakthrough was James Clerk Maxwell's 1861 demonstration that color photographs could be created using black and white film and red, blue, and green filters. He had the photographer Thomas Sutton photograph a tartan ribbon three times, each time with a different color filter over the lens. The three black and white images were then projected onto a screen with three different projectors, each
equipped with the same color filter used to take the image being projected. When brought into alignment, the three images formed a full-color photograph. Over a century later, image sensors work much the same way.
Colors in a photographic image are usually based on the three primary colors red, green, and blue (RGB). This is called the additive color system because when the three colors are combined in equal amounts, they form white. The RGB system is used whenever light is projected to form colors, as it is on a display monitor (or in your eye). Another color system uses cyan, magenta, yellow, and black (CMYK) to create colors. This subtractive system is used in a few sensors and almost all printers, since it's the color system used with reflected light.
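The additive behavior of RGB can be shown in a few lines. This is a minimal sketch assuming 8-bit channel values (0–255), where mixing light means adding the channels and clipping at the display's maximum:

```python
def add_light(*colors):
    """Mix light sources additively: channel values add together,
    clipped to the 0-255 range of an 8-bit display."""
    return tuple(min(sum(channel), 255) for channel in zip(*colors))

red, green, blue = (255, 0, 0), (0, 255, 0), (0, 0, 255)
print(add_light(red, green))         # red + green light → yellow (255, 255, 0)
print(add_light(red, green, blue))   # equal amounts of all three → white (255, 255, 255)
```

Combining any two primaries in equal amounts yields a secondary color; combining all three yields white, which is exactly why the system is called additive.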
Since daylight is made up of red, green, and blue light, placing red, green, and blue filters over individual pixels on the image sensor can create color images just as they did for Maxwell in 1861. Using a process called interpolation, the camera computes the actual color of each pixel by combining the color it captured directly through its own filter with the other two colors captured by the pixels around it. How well it does this is affected in part by the image format, size, and compression you select.
Because each pixel on the sensor has a color filter that only lets in one color, a captured image records the brightness of the red, green, and blue pixels separately. (There are usually twice as many photosites with green filters because the human eye is more sensitive to that color.) Illustration courtesy of Foveon at www.foveon.com.
To create a full-color image, the camera's image processor calculates, or interpolates, the actual color of each pixel by looking at the brightness of the colors recorded by it and the pixels around it. Here the full color of some green pixels is about to be interpolated from the colors of the eight pixels surrounding them.
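The interpolation step can be sketched in miniature. This is a simplified illustration, not any camera's actual algorithm: it assumes a common Bayer filter layout and estimates a pixel's two missing channels by averaging the neighbors in its 3×3 neighborhood that did capture them.

```python
# Bayer filter layout: rows alternate G,R / B,G, so there are twice as
# many green photosites, matching the eye's greater sensitivity to green.
pattern = [["G", "R"], ["B", "G"]]

def filter_color(y, x):
    """Which color filter covers the photosite at row y, column x."""
    return pattern[y % 2][x % 2]

def interpolate(mosaic, y, x):
    """Estimate the full (R, G, B) color at an interior pixel by
    averaging each channel's samples in the 3x3 neighborhood."""
    samples = {"R": [], "G": [], "B": []}
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            samples[filter_color(y + dy, x + dx)].append(mosaic[y + dy][x + dx])
    return tuple(sum(s) / len(s) for s in (samples["R"], samples["G"], samples["B"]))

# A flat test patch: every red site reads 200, every green 120, every blue 60.
mosaic = [[{"R": 200, "G": 120, "B": 60}[filter_color(y, x)] for x in range(4)]
          for y in range(4)]
print(interpolate(mosaic, 1, 1))  # → (200.0, 120.0, 60.0)
```

Averaging neighbors is the crudest form of demosaicing; real image processors use far more sophisticated methods to avoid color fringes at edges, but the principle of borrowing missing colors from nearby pixels is the same.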
There's a Computer in Your Camera
Each time you take a picture, millions of calculations have to be made in just a few seconds. It's these calculations that make it possible for the camera to interpolate, preview, capture, compress, filter, store, transfer, and display the image. All of these calculations are performed in the camera by an image processor that's similar to the one in your desktop computer, but dedicated to this single task. How well your processor performs its functions is critical to the quality of your images, but it's hard to evaluate advertising claims about these devices. To most of us these processors are mysterious black boxes about which advertisers can say anything they want. The proof is in the pictures.
Cameras with the latest programmable digital media processors can perform whatever functions camera companies program into them. Currently these functions include in-camera photo editing and special effects such as red-eye removal, image enhancement, picture borders, stitching together panoramas, removing blur caused by camera shake, and much more.
Where We Seem to Be Headed
As camera resolutions have improved, most people are satisfied with the quality and sharpness of their prints. For this reason, the marketing battle, especially in the point-and-shoot or pocket camera categories, is now all about features. Since digital cameras are basically computers, companies can program them to do all sorts of things that older, mechanical cameras could never do. They can identify faces in a scene to focus on, detect and eliminate red-eye, and let you adjust colors and tones in your images. There is a tipping point somewhere in this endless checklist of possible features where complexity begins to increase rather than decrease and the usefulness of features begins to decline. We are probably already at that tipping point, and perhaps beyond it. When you read about features, ask yourself how often you would really use them and how much control you want to turn over to your camera.
When considering features, keep in mind that most of the great images in the history of photography were taken using cameras that only let you control focus, the aperture, and the shutter speed.