Understanding & Managing Image Quality - Part I
By Marc F Alter
What Is Image Quality?
When we take a picture, we are creating a digital image. Besides the equipment (and its capabilities) and our own picture-taking abilities, the quality of our images depends on several factors, including Bit Depth, Color Space, and File Type. These factors start with image capture and carry through into post-processing. Other factors, such as Pixelation and Noise, also need to be considered during image editing and output.
The better we understand the factors that go into creating an image, the better we can make decisions on how to manage and control them with the goal of creating better quality images.
Understanding Digital Images:
Like a mosaic of tiles, a digital image is made up of millions of Pixels (image sizes are commonly measured in Megapixels, or millions of Pixels). When we zoom out, the Pixels all seem to merge together, creating the colors and shapes of our image.
When we zoom in and look at the Pixels (typically shown in Photoshop as square dots), we can see the individual colors and their shades of brightness (luminosity).
A Pixel is the smallest color unit in a digital image. Each pixel contains information (data) made up of three colors: Red (R), Green (G) and Blue (B). When a picture is taken, light travels into the camera through the lens and stimulates the sensor, which then records millions of pixels onto the memory card. Each Pixel is recorded with an RGB color value as well as a (greyscale) brightness value. These values are set at certain intensities which, when viewed, merge to make up individual pixel colors (each combination of color and brightness is represented by a unique number). The total number of colors and brightness values you have available to work with is based on the total range of numbers recorded in the digital image file. This range is called Bit Depth (more about this later).
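As a simple sketch of how a pixel's data reduces to a single number, the three 8-bit channel values can be packed into one 24-bit value (the "sky blue" color values here are just an illustrative example):

```python
# A single pixel as three 8-bit channel values (0-255 each).
r, g, b = 135, 206, 235

# Pack the three channels into one 24-bit number -- one unique
# number per possible color, as described above.
packed = (r << 16) | (g << 8) | b
print(packed)       # 8900331
print(hex(packed))  # 0x87ceeb (the familiar web-color notation)

# Unpacking recovers the original channel values.
r2, g2, b2 = (packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF
assert (r2, g2, b2) == (r, g, b)
```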
Pixel sizes and shapes can vary based on the device (camera) creating the image as well as the device displaying the image (monitor, printer, projector, etc). The number of Pixels used or displayed is known as Pixel Density and is usually expressed in Pixels Per Inch (PPI) or Pixels Per Centimeter (PPCM). This is not to be confused with Dots Per Inch (DPI) which is the number of ink spots printed on paper to create a printed image.
Images with more Pixels are considered higher resolution images because they can provide finer details and allow for larger displays of smooth, continuous tones and colors. There is a common misbelief that the more megapixels captured, the greater that image's resolution will be. This, however, is not always the case. Both the size of the Pixels and the number of Pixels are important factors when reviewing an image's resolution and working to improve image quality. Typically, the larger the camera's sensor, the larger the Pixels that can be captured. Larger camera sensors also allow for a greater number of Pixels to be recorded. The greater the number of larger Pixels, the cleaner the image will be, with less noise and finer delineations between colors, highlights and shadows, shapes and patterns.
The output media is the final determining factor for image resolution and image quality. The goal for obtaining good image quality is making sure you have enough Pixels in your image to allow for the greatest image quality to be displayed. Not having enough Pixel colors for the size and type of output will hurt your image. If you have a low Pixel Density for a given output size, the viewer will start to see the individual Pixels instead of the mosaic of Pixels working together. This is known as Pixelation and can greatly hurt your image quality. Likewise, not having enough Pixel brightness values for the size and type of output will hurt your image by creating an effect called Banding.
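The Pixel Density arithmetic can be sketched in a few lines; the 300 PPI figure below is a common print target, used here only as an assumption:

```python
def pixels_needed(width_inches, height_inches, ppi=300):
    """Pixel dimensions required for a given print size and Pixel Density."""
    return int(width_inches * ppi), int(height_inches * ppi)

# An 8" x 10" print at 300 PPI needs a 2400 x 3000 pixel image
# (about 7.2 megapixels):
print(pixels_needed(8, 10))  # (2400, 3000)

# Printing that same 2400-pixel-wide image at 16" wide drops the
# density to 150 PPI, where pixelation starts to become visible:
print(2400 / 16)  # 150.0
```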
Having too many Pixels can also hurt your images. If you upload very large image files to web sites or social media networks, your images may be slow to load. Also, very large image files are subject to automatic downsizing to fit the site or application. This automatic process may discard pixels without any intelligence about what may or may not be critical in your image.
When our camera records the image with its sensor, it does so within its designed capabilities and limitations. Sometimes, however, the technology is not developed enough to properly record the scene given the desired camera settings. This also occurs when the camera sensor is subject to extreme dynamic ranges and/or changes in temperature. During certain camera techniques (such as extremely slow shutter speeds, low light conditions, or high sensitivity settings) the sensor may heat up. When this occurs, random pixels may be incorrectly set, resulting in what is known as Digital Noise (similar in appearance to grain in film photography).
To some degree, there are steps you can take to avoid or reduce the effects of Pixelation and Noise. When taking pictures using extreme settings, look for camera features designed to reduce possible problems (e.g., Noise Reduction). When editing, try to crop as little as possible, as cropping permanently removes Pixels and lowers the resolution of what remains. Also, try to limit or reduce the number of adjustments you make. Each adjustment changes the characteristics of the image's Pixels and thus has the potential to add aberrations and/or noise. Many times, there are photo editing Tools and Filters (such as Blur and DeNoise) that can be used to change a Pixel's settings so they are similar to those of nearby Pixels. During output, know your image size and resolution compared to the intended output media and size. Finally, when enlarging image size, take special notice of the algorithm used and do so in small increments.
We use computers to process our digital images. Computers, in their most basic form, only understand instructions built from two unique digits: zero and one (Off or On). As such, computers work using a binary system known as Base 2. In such a system, to represent more numbers (more information), additional zeros and ones are used:
Example: In Base 2, the number “Two Hundred Fifteen” is translated into 11010111.
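This conversion is easy to verify in a few lines of Python:

```python
# Converting between decimal (Base 10) and binary (Base 2):
print(bin(215))            # 0b11010111
print(int('11010111', 2))  # 215

# Each additional binary digit doubles the range of values available.
print(2 ** 8)              # 256 values fit in 8 binary digits
```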
Each pixel in a digital photograph contains both color and luminosity information, with each combination being represented by a number. The range of numbers we have to work with controls the number of colors and hues available. If we used 1-Bit, our number range would be limited to one digit (0 or 1), thus only allowing for pure black and pure white. With 8-Bits, our number range can go from 0 to 11111111 (8 positions). Using 16-Bits, our number range can go from 0 to 1111111111111111 (16 positions). Therefore, larger bit depths allow us a greater range of numbers and thus a greater range of colors (color values, hues, and luminosity).
Looking at the above Black & White examples, there are more “Black, White, Shades of Grey” available with each larger Bit Depth.
Although 8-Bit and 16-Bit files both allow for 16.7 million pure colors, it is the luminosity (greyscale brightness) depth that allows for greater image quality. 8-Bit files allow for 256 shades of grey (luminosity) per channel. That's 2^8 for each (R, G, B) channel, equating to 2^24 = 16,777,216 colors. A 16-Bit file manages the same 16.7 million colors but allows for 65,536 shades of grey (luminosity) per channel (2^16). This equates to 2^48 across the (R, G, B) channels, resulting in 281,474,976,710,656 colors. Thus an 8-Bit file only allows for 16.7 million possible color values while a 16-Bit file allows for 281 trillion possible color values.
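These figures can be reproduced directly:

```python
# Shades (luminosity levels) per channel:
print(2 ** 8)    # 256    levels per channel in an 8-Bit file
print(2 ** 16)   # 65536  levels per channel in a 16-Bit file

# Total colors = (levels per channel) raised to the 3 (R, G, B) channels:
print((2 ** 8) ** 3)    # 16777216         (2**24, ~16.7 million)
print((2 ** 16) ** 3)   # 281474976710656  (2**48, ~281 trillion)
```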
Looking at the above Color examples, there are more "Color Gradients" available with each larger Bit Depth.
As the human eye can only perceive about 7-10 million colors, why use the larger Bit files? The answer lies in our photo editing process. The difference between 8-Bit (16.7 million) colors and 16-Bit (281 trillion) colors is that the larger number of colors and luminosity values allows for smoother gradations between colors. Thus an 8-Bit image containing sky may show color banding instead of a smooth color transition. Using 16-Bit files allows programs such as Photoshop to improve the color gradations where they exist. In fact, the more editing processes we perform, the greater the chance of color degradation. When working with 16-Bit files, we can reduce or eliminate these color breakdowns in our images.
Note: Given the above, why not use 32-Bits instead of 16-Bits? Several reasons: (1) Most "non-professional / commercial" software programs and features are not available for 32-Bit images. (2) 32-Bit image files are much larger than 8- or 16-Bit files (thus they are harder to use and take up more file space). (3) It is hard for most people to "see" the differences between 16- and 32-Bit images.
Color Space (Color Profile):
When working in the digital world, different devices handle colors with different capabilities. Some devices, like low-end monitors, cell phones, tablets, etc., can only display a limited amount of the color range (their color gamut is small). Other devices, like high-end monitors, can display a greater amount of the color range (their color gamut is large). Still other devices, such as ink jet printers, have different color reproduction capabilities. Typically, when a device cannot handle a given color, it automatically downgrades the color to the next closest one it can handle.
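A toy sketch of that "downgrading" idea: real color management uses more sophisticated rendering intents, but the simplest form is clipping each channel into the range the device can reproduce (the sample values below are made up for illustration):

```python
def clamp_to_gamut(rgb, lo=0.0, hi=1.0):
    """Naive gamut mapping: clip each channel into the device's range.

    Real color management uses smarter rendering intents than a plain
    clip, but the principle is the same: an out-of-range color is
    replaced by the nearest color the device can actually show.
    """
    return tuple(min(max(c, lo), hi) for c in rgb)

# A saturated color from a wide-gamut file, expressed in a narrower
# device's coordinates, can fall outside the 0.0-1.0 range:
wide_gamut_red = (1.18, -0.05, 0.02)   # made-up example values
print(clamp_to_gamut(wide_gamut_red))  # (1.0, 0.0, 0.02)
```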
To make matters even more difficult to manage, each person’s brain and eyes have different capabilities for detecting light and color and our minds interpret these impulses somewhat differently based on our own capabilities, experiences, and training. (See Appendix – How We See Light – for more information about this)
Color Space (also known as Color Profile) represents the range of colors and tones that are possible on a particular device. In digital photography, RGB (Red, Green & Blue) are used to create the colors that we see in an image. All other possible colors are created using these 3 primary colors. In Photoshop, there are several different Color Spaces to choose from, including Adobe RGB (1998), Apple RGB, ColorMatch RGB, Display P3, ProPhoto RGB, and sRGB IEC61966-2.1. For photography, the three most common are sRGB, Adobe RGB, and ProPhoto RGB. Each has its own capabilities, limitations and uses.
sRGB (also known as sRGB IEC61966-2.1). This color space was originally created in 1996 by HP and Microsoft to use on monitors, printers, and the Internet. Although our technology and its associated capabilities have dramatically improved over the years, this is still considered the "default" color space for most images that are displayed (especially on “the Web”).
Adobe RGB (1998). This color space was created by Adobe in 1998. It was designed to include most of the colors available to CMYK (Cyan, Magenta, Yellow and Black) professional printers, but using the RGB primary colors. As such, this color space includes the same number of colors as sRGB but with a greater range of intensity and tones. Adobe RGB (1998) will typically produce richer highlights, mid-tones and shadows. This is a wider color space than sRGB and encompasses approximately 50% of all visible colors. It is a good choice for editing in 8-Bit or 16-Bit modes and carries more information for printing.
ProPhoto RGB (also known as ROMM RGB - Reference Output Medium Metric). Developed by Kodak around 2000, this color space was created especially for advanced printed image reproduction using an extremely large color gamut. This color space covers the largest range of colors, going even beyond what our eyes can see. It is believed to encompass over 90% of all possible current surface colors and 100% of likely occurring real-world surface colors. One of the downsides to this color space is that approximately 13% of its representable colors are imaginary colors that do not exist as visible colors.
So which Color Space should I use? It is always best to use the Color Space that will give the best viewing experience for the intended use and output. This is especially true when editing photos.
sRGB: If you are only displaying your images on the web or on a low-end monitor or printing small (4”x6” or 5”x7”) images, using sRGB as your Color Space would work most of the time. sRGB has the smallest range of tones and colors out of the 3 most popular color spaces, but it is the most versatile and widely used. It is supported by almost all cameras, screens and image viewing software. If you want to keep things simple and avoid color shift problems during editing or sharing, your best bet would be to shoot and edit files in this color space. This approach however would limit how you display and/or print your images in the future (unless you have saved your original raw image and decide to start your editing all over again).
If you belong to a Camera Club, are entering your Images in Competitions, are giving Presentations with updated technologies and/or are learning and growing in your use of digital photography, you will eventually find this Color Space limiting.
Adobe RGB: If you want control over color and tones for editing, in most cases you will want to edit your images using Adobe RGB. By using Adobe RGB, you will obtain much richer color when displaying on high-end monitors or projectors, or when printing on coated paper. sRGB might be fine for images with skin tones or a softer mood, but Adobe RGB will give much better results for landscape, food, architecture and many other natural settings.
If you are the kind of photographer who likes to control every aspect of your workflow and print your images at home on advanced inkjet printers, then you should use Adobe RGB. If submitting work for publication, many will explicitly ask you to provide images in Adobe RGB because theoretically it has a wider color range.
If you do decide to use Adobe RGB, you need to be aware of some limitations. Adobe RGB is not supported by all browsers. If you are displaying your images on the web, people viewing your images will most likely see them in slightly different colors. Also, Adobe RGB compresses colors, and only special image viewing software can expand them back to reproduce the full gamut; other programs do not support this color space and may make the image look dull. So when you share your images (especially on the web), you may want to convert them to sRGB. This creates an additional step in your workflow, but it will be worth it. Finally, if you send your images to a print lab, most of them work in the sRGB color space (unless they specifically mention a different one), which means your prints would have incorrect (dull) colors if printed with an Adobe RGB profile.
If you are a perfectionist who prints on high-end inkjet printers and wants to make use of the entire color range visible to the human eye and even some imaginary colors (this color space does use colors that do not exist in the real world), you should use ProPhoto RGB. You will, however, be forced to use very specific steps in your workflow. This Color Space requires shooting in RAW format and opening images in the ProPhoto RGB color space in 16-Bit (minimum) mode. If using this Color Space, you must save your files using a format that supports 16 Bits (e.g., PSD or TIFF). Your printer will also have to support this format.
Because of these complexities, this Color Space is only recommended for photographers who have very specific workflows and who print on specific high-end inkjet printers which can take advantage of such a wide range of colors.
Making Your Final Decision: sRGB vs Adobe RGB
When selecting what Color Space to use, it is helpful to understand what type of image you have and where, and on what device, your image will be viewed. If you are not sure, it would be best to use a Color Space with a large gamut, as you can always "downgrade" the Color Space if and when you need to. No matter what Color Space you use, if displaying images on the Internet, assume they will be treated as sRGB (colors and tones outside that gamut are automatically converted to the nearest colors within it).
As you can see from the example below (with 3 shades of Blue), Adobe RGB will give deeper colors and more variations in tone. It is for this reason I highly recommend editing your images using Adobe RGB (1998).
File Types (JPG and RAW):
The 2 main file types most digital cameras record are known as JPG (also known as JPEG) and RAW.
JPG (a file format standardized in 1992 by the Joint Photographic Experts Group) is a small, 8-Bit file type based on each color channel (Red, Green, Blue) having 8 bits/channel, resulting in a 24-bit color palette of approximately 16.7 million possible colors. JPG files usually have a .jpg or .jpeg file extension.
There are several advantages to using JPG files. As JPG files are small, they can be recorded to your camera's memory card very fast. They can be accessed immediately with little or no processing needed. When they are edited, the process is usually quick, using very limited computer resources (memory, CPU, etc). Also because of their small size, JPGs are the most commonly used image file type, especially on the internet and in social media networks. In fact, most programs and devices that display images can handle JPGs fairly well.
There are several disadvantages to using JPG files for photography. One of the main disadvantages is that the camera pre-processes the image based on what has been programmed into it by the camera manufacturer. This includes dropping many of the colors picked up by the camera's sensor. You are thus starting out with a color-limited image created based on what the camera manufacturer thinks your image should look like. Another main disadvantage of JPG files is found during post-processing. As JPGs are 8-Bit files, they can only contain 16.7 million colors. If using a RAW file format, the number of possible colors is based on what the sensor is capable of. Current entry-level DSLR sensor bit depths are usually around 12 (with 14 and 16 also possible). A 12-Bit file allows for 68,719,476,736 possible colors for any given pixel. So that is 16.7 million versus 68.7 billion colors.
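The color counts above follow directly from the per-channel bit depths:

```python
def total_colors(bits_per_channel, channels=3):
    """Distinct colors representable at a given per-channel bit depth."""
    return (2 ** bits_per_channel) ** channels

print(total_colors(8))   # 16777216    -- 8-Bit JPG (~16.7 million)
print(total_colors(12))  # 68719476736 -- 12-Bit RAW (~68.7 billion)
print(total_colors(14))  # ~4.4 trillion for a 14-Bit sensor
```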
A third disadvantage of JPG files is that when saved and closed, the file is compressed and loses information (pixel data). In fact, the higher the rate of JPG compression, the more the image quality will be reduced. JPG files are considered "lossy" as data (information) in the file is permanently lost. Every time a JPG file is opened, edited and re-saved, the file is compressed again, and more data is lost (forever). Thus, if you are editing JPG files, it is always best to edit a copy and leave the original file intact.
A RAW file is typically a large file that contains minimally processed data from your camera's sensor. Currently there are over 500 different types of RAW files, as each camera manufacturer has created file types that are proprietary to their own camera lines. Although each type of RAW file is different, for the most part they share many of the same features and capabilities.
A RAW file is the digital equivalent of a negative. It contains the "raw" data taken when the light passes through the lens, hits the sensor and is recorded on the memory card. This is mostly unprocessed data. In most cases a RAW file contains the ingredients for a wide dynamic range of colors, shades and luminosity. Unlike JPG pictures that are pre-processed by the camera, RAW files need to be processed using photo editing software. In most instances RAW files need several different edits performed (e.g., contrast enhancements, white balancing, sharpening) to output the best possible image. RAW files contain not only pixel information but also metadata, which identifies additional information such as Camera Make & Model, Lens Make & Model, Exposure Settings, and even Copyright Information. This data is often used for filtering, sorting and cataloguing images. Many RAW files also contain an embedded JPG which is used by the camera's LCD Display and can also later be extracted into a separate file.
Some of the advantages of using RAW files are the number of pixels available, a wider dynamic range and color gamut, more shades of available color, and the ability to adjust the Color Space and White Balance after the image has been taken. RAW image files can be merged (blended) to create high dynamic range images, focus-stacked images, or stitched together to create panoramic images. RAW files give you greater latitude to recover images that are over-exposed (too light) or under-exposed (too dark).
RAW image files are also "lossless", meaning they are not compressed in a lossy way and thus do not suffer from image-compression artifacts. When editing a RAW file, you must select another file type to save your changes. As a result, the original RAW file is left intact (considered non-destructive).
Another important advantage of using a RAW file is the availability of approximately 281 trillion possible colors when processed in 16-Bit mode (versus the 16.7 million colors available in 8-Bit JPG files). But the biggest advantage of RAW files is choice. Using RAW, after taking your picture, you have many different options for how you may wish to edit, output and/or use your image. With JPG, not only is the image pre-processed, but your choices are also limited.
The main disadvantage of using RAW image files is that they must be post-processed. Other than photo editing software, most programs and devices cannot "read" and display a RAW file. Processing digital images takes time and experience. Many programs do not allow you to print RAW files, so they must be saved in another format. As a result, if you are taking pictures for someone else (a friend or a client), you probably cannot give them the RAW file.
RAW files are large, so they take more time for the camera to record onto the memory card. When shooting fast sequences, you may need to purchase faster (and more expensive) memory cards and/or cameras with larger buffer space. Large RAW files also require more computer resources to process and more storage space. Finally, as each RAW file format is proprietary to the camera manufacturer, you cannot guarantee future photo editing/viewing programs will have the necessary software to decode (open) these files.
Basic Color Terms to Understand:
Color – A visual perception of light that allows us to differentiate between otherwise identical objects. This visual perception is derived from a combination of Hue, Saturation and Light (Luminosity). Color is usually used as a general term to describe every combination of Hue, Tint, Tone and Shade.
Hue – The attribute of color that allows us to differentiate between continuous colors. Reaching the cones of our eyes, Hue is made up of Chromatic Signals derived from the light's wavelength. Hues typically refer to the Color Family from which they derive, consisting of Primary Colors [Red (R), Green (G) and Blue (B)] as well as combinations of these colors into Secondary Colors [Yellow (Y), Orange (O), and Violet (V)]. Hues are usually perceived as bold and exciting.
Saturation – The intensity of a color relative to a neutral grey of the same lightness. Saturation defines a range of colors (from 0% to 100%) given a constant level of light. A pure color is considered 100% while pure grey (absence of that color) is considered 0%.
Luminosity (also known as Lightness Value) – The relative brightness of light. Reaching the cones of our eyes, Luminosity is made up of Achromatic Signals (light / greyscale) derived from the light's energy at a specific wavelength. Luminosity defines a range from pure dark (0%) to pure light (100%). By changing a color's lightness value you can make a color lighter or darker.
Tone – The result of mixing a pure color with greyscale colors, excluding extreme White and extreme Black (applying Luminosity values). By adding grey to a pure color, you are changing that color's Tonal Value. As such, a Tone is "softer" than the original color. Hue and Tone do not represent the same colors: Hues are created by mixing pure colors, while Tones are created by mixing a color with grey. Tones are usually perceived as subtle.
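These terms map closely onto the HLS (Hue, Lightness, Saturation) model implemented in Python's standard colorsys module, which can be used to see the relationships (pure red is used here as an example):

```python
import colorsys

# colorsys expects RGB channel values scaled to 0.0-1.0.
# Pure red as a starting Hue:
h, l, s = colorsys.rgb_to_hls(1.0, 0.0, 0.0)
print(h, l, s)  # 0.0 0.5 1.0 -> red hue, mid lightness, full saturation

# Lowering Saturation mixes in grey, producing a *Tone* of red:
tone = colorsys.hls_to_rgb(h, l, 0.5)
print(tone)  # (0.75, 0.25, 0.25) -- a softer, muted red

# Raising the Lightness value makes the color lighter (a tint):
tint = colorsys.hls_to_rgb(h, 0.75, s)
print(tint)  # (1.0, 0.5, 0.5) -- pink
```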
How We See Light:
Color Theory is based on our ability to "see light". This encompasses our eyes' capability to absorb light wavelengths, our minds' ability to process this light, and our brains' ability to interpret gradations in light.
Our eyes have photoreceptors called Cones. There are 3 types of Cones: S-Cones, which are sensitive to short-wavelength light (Blues); M-Cones, which are sensitive to medium-wavelength light (Greens); and L-Cones, which are sensitive to long-wavelength light (Reds). The combined signals from these Cones make up our perception of color.
As humans, we have some commonality in how our eyes pick up and interpret light, but there are also differences between individuals. Some individuals are naturally more sensitive to interpreting light while others are less so. As a result, most individuals interpret medium light wavelengths (between Blueish and Yellowish light) as Green, but not all. Some individuals have color deficiencies due to Cones that are defective, damaged or non-existent. Also, based on experience and training, individuals can learn how to use their light sensitivity to a greater extent and become better at interpreting light's nuances.
Other File Types:
Besides JPG and RAW, there are many other file types you can use (although most digital cameras only allow you to capture in JPG, RAW or both). In Photoshop, there are over 25 different file types to choose from, but most photographers only use a few. Some of these are:
PSD (PhotoShop Document) – This is Photoshop's proprietary file type and the default (unless you change it). PSDs are specifically designed to support all of Photoshop's capabilities including layers, layer masks, adjustment layers, channels, paths, etc. This file type's extension is *.psd.
TIFF (Tagged Image File Format) – Originally developed in 1986 as an industry standard to overcome proprietary scanned-file formats, TIFF files have been enhanced to include greyscale and then color graphics. Today TIFF files are among the most commonly used by digital artists, printers, publishers and photographers. TIFF files can be either uncompressed or compressed using lossless compression, and can grow to become large files. TIFF files support both 8-Bit and 16-Bit depths. TIFFs support CMYK (common in professional printing and publishing) as well as many other color profiles. This file type's extension is *.tif.
GIF (Graphics Interchange Format) – This is one of the oldest graphic file types, having been developed before JPG. All major web browsers support GIF files, which are usually used for displaying web graphics. GIF files are usually small and can even support simple animations and transparency. GIFs can only display 256 colors and therefore are not good for most photographic images. This file type's extension is *.gif.
PNG (Portable Network Graphics) – Originally designed to replace GIF files, PNGs can handle up to 48-Bit color, resulting in many more possible colors than JPG's 16.7 million. PNG files are "lossless", so they can be opened and re-saved over and over without losing pixel information in the process. PNG files, however, are not as widely supported as JPGs or TIFFs for photographic work and thus their use is limited.
DNG (Digital Negative) – Developed by Adobe as a universal format to replace the camera manufacturers' 500-plus proprietary RAW formats. As DNG files were originally developed for Photoshop/Lightroom image files, like PSDs they are specifically designed to take full advantage of Adobe's product line features. DNG files use an open standard, meaning the specification is freely available to the industry. Unfortunately, industry acceptance of DNG files is still in question, with many manufacturers unwilling to give up their RAW formats and users not knowing if DNGs will last far into the future.