The Exposure Triangle
by Marc F Alter (Mon, 16 Mar 2020)


Many times, the essence of capturing a good photograph is getting the exposure right. The required exposure varies from image to image depending on the amount of light available when you take your picture. Exposure consists of three main elements: F/stop, Shutter Speed and ISO. Together, these three elements are known as The Exposure Triangle.


F/stop (also known as F-stop or Aperture) describes the size of the opening in the lens’ diaphragm that lets in light. Technically speaking, the F/stop is a ratio: the lens’ focal length divided by the diameter of the aperture opening. Because it is written like a fraction (ie; f/2.8), the lower the number, the larger the aperture opening and the more light that is let into the lens. One characteristic of the standard full-stop scale (f/1.4, f/2, f/2.8, f/4, f/5.6, f/8, f/11…) is that each step down the scale doubles the amount of light coming into the camera, while each step up halves it. Another characteristic is that the lower the F/stop, the shallower the Depth of Field (DOF), while the higher the F/stop, the greater the DOF.
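For those who like to see the arithmetic, here is a short Python sketch (my own illustration, not part of any camera software) showing how the full-stop scale is generated and why each stop halves the light:

```python
import math

def full_stop_scale(steps=9):
    """The standard full-stop scale: each stop multiplies the f-number
    by sqrt(2), which halves the aperture area and so halves the light.
    (Marked values such as f/5.6 and f/11 are conventional roundings.)"""
    return [round(math.sqrt(2) ** i, 1) for i in range(steps)]

def relative_light(f_number, reference):
    """Light admitted relative to a reference aperture; light is
    proportional to the aperture's area, i.e. 1 / f_number**2."""
    return (reference / f_number) ** 2

print(full_stop_scale())         # starts 1.0, 1.4, 2.0, 2.8, 4.0, ...
print(relative_light(2.0, 1.4))  # one stop smaller: about half the light
```

The square root of 2 is the key: halving a circle’s area means dividing its diameter by √2, which is why the familiar f-numbers look so odd.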


Shutter Speed is how long the camera’s shutter stays open. The slower the shutter speed, the greater the amount of light let into the camera. Conversely, the faster the shutter speed, the less light is let into the camera. Shutter Speed is typically expressed in fractions of a second: a setting of 200 means your shutter will be open for 1/200th of a second, while 500 means it will be open for 1/500th of a second. A fast Shutter Speed freezes the action before you, while a slow Shutter Speed captures the image over time, allowing “Time” to affect the scene before you.


ISO is the sensitivity to light of a digital camera’s sensor (or film). The name comes from the International Organization for Standardization (ISO), which is in turn derived from the Greek word isos, meaning equal. In photography, the lower the ISO, the less sensitive the sensor (or film) is to the light, while the higher the ISO, the more sensitive it is. A higher ISO has a tendency to add electronic noise (in digital photography) or film grain (in film photography), while a lower ISO tends to improve the quality of an image.


The Exposure Triangle therefore represents the relationship between F/stop, Shutter Speed and ISO. These three elements work together to control the amount of light you let into your camera and therefore how your image is exposed when taking a picture. If you wish to let in more or less light, you can adjust any of these three elements, or any combination of them. Which element you adjust, and to what degree, will depend on your goal and preference for the picture.


Perfect Exposure is the right amount of light brought into your camera with a given F/stop, Shutter Speed and ISO, capturing the brightest “lights”, the darkest “darks” and all the possible colors and tones in-between. So, what is the best combination of F/stop, Shutter Speed and ISO? It depends on the amount of light available and what you are looking to achieve. Say, for example, you are taking a picture of a waterfall with your ISO set at 400 and your light meter indicates f/8.0 with a Shutter Speed of 1/200. This may be a “good exposure” but may not show the waterfall as best it could (or as you would like it).


If you wish to blur the water and make it look silky smooth, you could lower the Shutter Speed, but this would let in too much light. You can regain the correct exposure by lowering the amount of light elsewhere: increase your F/stop and/or lower your ISO.


On the other hand, if you wish to “freeze” the water and catch the splashes as they bounce off the rocks, you could increase the Shutter Speed, but this would let in too little light. You can regain the correct exposure by increasing the amount of light elsewhere: decrease your F/stop and/or raise your ISO.


F/stop Standard Scale (Aperture Opening)

…f/1.4    f/2    f/2.8    f/4    f/5.6    f/8    f/11    f/16    f/22    f/32…


Shutter Speed (in Fractions of a Second)

…8      15      30      60      125      250      500      1000      2000…


ISO (Light Sensitivity)

…3200      1600      800      400      200      100…


More Light                                         Less Light

(In each scale above, moving toward the left lets in, or uses, more light; moving toward the right, less.)


It is important to note that a single full-stop change in any one of these values represents a doubling or halving of the amount of light coming into the camera. Thus, if your camera suggests an exposure (see below), any of these sample combinations would let in an equivalent amount of light:

ISO     F-Stop     Shutter Speed (SS)

400     f/8.0      1/200     Camera recommended Exposure
200     f/5.6      1/200     Lower ISO by 1 stop, open the F-Stop by 1 stop for equal light
200     f/8.0      1/125     Lower ISO by 1 stop, slow the SS by 1 stop for equal light
100     f/5.6      1/125     Lower ISO by 2 stops, open the F-Stop by 1 stop and slow the SS by 1 stop for equal light
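If you like to verify the math, here is a small Python sketch (my own illustration; the function `exposure_stops` is just a name I made up) that measures each combination in stops and confirms the combinations above land on, or very near, the camera’s recommended exposure. Note that I use 1/100 as the exact one-stop change from 1/200; the 1/125 marked on your camera is simply the nearest standard shutter setting:

```python
import math

def exposure_stops(iso, f_number, shutter_s):
    """Total light, in stops, relative to an arbitrary baseline.
    Doubling the ISO, doubling the shutter time, or opening the
    aperture one full stop each adds exactly one stop."""
    return (math.log2(iso)              # sensitivity
            + math.log2(shutter_s)      # time the shutter stays open
            - 2 * math.log2(f_number))  # aperture area ~ 1 / f_number^2

base = exposure_stops(400, 8.0, 1 / 200)

# The "equal light" combinations from the table above:
for iso, f_number, shutter in [(200, 5.6, 1 / 200),
                               (200, 8.0, 1 / 100),
                               (100, 5.6, 1 / 100)]:
    diff = exposure_stops(iso, f_number, shutter) - base
    print(f"ISO {iso}, f/{f_number}, 1/{round(1 / shutter)}: "
          f"{diff:+.2f} stops vs base")
```

The tiny residual differences (a few hundredths of a stop) come from the fact that marked f-numbers like f/5.6 are roundings of the exact √2 scale.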


Marc’s Tips On How To Get The Perfect Exposure

  1. Before going out, make sure your camera is set to your “Default Values” (this is to make sure you always know the settings of your camera when you start)
    1. Shutter Speed = 100
    2. F/stop = F/8.0
    3. ISO = 100 to 400
    4. Exposure Compensation = Off
    5. Bracketing = Off
    6. Exposure Mode = set accordingly (I mostly use Aperture Mode)
      1. Auto or Program (Professional) – If you want your Camera to do most of the thinking for you (and if you do not care about the results)
      2. Aperture – If your priority is to control your DOF
      3. Shutter – If your priority is to control your Action (Time)
      4. Manual – If you think you can control the exposure better than your Camera’s technology which cost you hundreds/thousands of dollars.
    7. Other (Your battery is charged, you have a spare battery, you have a memory card and a spare, you have lens wipes, etc).


  2. Take a test shot and evaluate the results in your viewfinder
    1. Histogram / Blinkies (Highlight Alert). If any Blinkies are blinking, some areas of your picture are receiving too much light. You will need to compensate for this (lower your ISO, raise your F/stop, raise your Shutter Speed, add a Filter (ie; ND or VND), set your Exposure Compensation to under-expose, take Bracketed Shots, recompose, etc).


  3. After making your adjustments, take another shot; then reevaluate, readjust and retake as necessary.


  4. Unless an image is very obviously bad, do not delete any images from your camera until after you have had a chance to evaluate them on your computer (the camera’s viewfinder is too small, the camera’s Histogram / Blinkies are based on a compressed JPG preview, etc).


Digital Image Sharpening
by Marc F Alter (Sun, 15 Mar 2020)

What is Sharpening
As we have previously discussed, when we take a digital picture we are capturing light photons and recording them on an image sensor in the form of pixels. Millions of pixels (similar to the tiles of a mosaic) combine in our mind into the patterns and shapes that create our image. Sharpening is the process of enhancing the contrast between these pixels so as to create the visual perception of a sharper, more realistic image.


Sharpening does not actually make an image sharper; rather, it increases the contrast between light and dark pixels along edges. It does this by comparing adjacent pixels and determining their differences in brightness or contrast. Where it finds a difference, it treats that as an edge. Sharpening makes the darker pixels slightly darker and the lighter pixels slightly lighter, so the edges between them become slightly more pronounced. The more Sharpening that is applied, the more pronounced these edges become.
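To make this concrete, here is a tiny Python sketch of the idea (a minimal 1-D “unsharp mask” of my own, not Photoshop’s actual algorithm): blur a row of pixels, then add the difference back, which darkens the dark side of an edge and lightens the light side:

```python
def sharpen_1d(pixels, amount=1.0):
    """Minimal 1-D 'unsharp mask': blur each pixel with its neighbours,
    then add back (pixel - blur) scaled by `amount`. In flat areas
    pixel - blur is 0, so only edges change: the dark side of an edge
    gets darker and the light side gets lighter."""
    out = []
    for i, p in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        blur = (left + p + right) / 3
        out.append(p + amount * (p - blur))
    return out

row = [50, 50, 50, 200, 200, 200]  # one soft edge between 50 and 200
print(sharpen_1d(row))  # [50.0, 50.0, 0.0, 250.0, 200.0, 200.0]
```

Notice the flat areas are untouched, while the edge pixels overshoot to 0 and 250; that overshoot is exactly the “halo” discussed later, and too much of it becomes visible.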



Why Sharpen
When viewing an image, our eyes are usually drawn to the brightest and sharpest areas. Thus, when we sharpen a subject in our image, our eyes are drawn to that subject. A sharp subject that is perceived as clear, with texture and detail, is also perceived as more lifelike. Sharpening can help make our images more interesting and compelling to look at.



When is Sharpening Needed
If you shoot in the JPEG file format, you almost never need to sharpen your image, as the process of creating the JPEG picture includes some degree of automatic Sharpening. If you shoot in a RAW file format, you should almost always perform some degree of Sharpening, as this format typically contains lots of image data but leaves the pixels in an unprocessed, flat, un-sharpened state.


Exceptions to these guidelines may be when you wish to further sharpen your JPG image and/or when you may wish to leave your RAW image (or parts of it) with a “soft” effect. 


Where & What To Sharpen:
When viewing the world around us, our minds typically perceive most objects and scenes as being sharp. Objects that are closer will seem sharper with more details and texture defining the subject. Objects that are farther away or of similar light and color will (to some degree) blend and thus be perceived as “softer”. 


When Sharpening an image in post-processing, most people will Sharpen the entire image at once (in essence doing Global Sharpening). This may not, however, create a realistic scene as our eyes and mind perceive it.

A more realistic way to sharpen an image is to do so selectively. By sharpening only your subject(s) and its supporting elements, you can direct the viewer’s eyes to the area of the image you have determined is most important.



When & How to Sharpen

Using Photoshop, there are many different methods by which an image can be sharpened. Ultimately, the choice of what type and how much to sharpen is a matter of individual taste, and thus is more of an Art than a Science. That being said, too little sharpening may leave your image dull and unrealistic, while too much sharpening might cause “pixelization”, where the image breaks down into a puzzle of dots.

When I first started learning how to edit my digital images, I was told to only sharpen at the very end of my workflow, just before printing. The reasoning behind this is that sharpening modifies the pixel information and if I was going to do additional adjustments (to these pixels) I should do so with the pixels in an un-altered state. This is no longer the case as the techniques and tools available for sharpening nowadays have been greatly enhanced. 

To get the most out of your images, Sharpening can and should be done at several different stages (as long as it is done carefully and with knowledge of what effect Sharpening is having in the image). As such, sharpening is now best done during Capture, Pre-Image Processing (of Raw Files), Image Processing (known as Creative) and Output.


1)    Focus, DOF And Its Effect on Creating Sharp Images During Image Capture
When we take an image with our camera, we choose the ISO, Shutter Speed and Aperture (F-stop), as well as the Focus Point(s). These decisions determine which subjects in our image are sharp and to what degree DOF (Depth Of Field) and Motion Blur may or may not affect our composition.


Using a low F-stop (such as f/2.8) allows us to bring our subject into sharp focus while blurring the background. Using a higher F-stop (such as f/16) increases our DOF and allows us to retain sharpness throughout most of our image. Likewise, using a low/slow shutter speed (lasting several seconds or longer) will blur moving objects and light, while using a high/fast shutter speed (lasting fractions of a second) will “freeze” moving objects so they become sharp, clear and detailed. Neither method or setting is right or wrong; it’s more a matter of what your subject is and what you are trying to achieve.
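As a rough illustration of how the F-stop drives DOF, here is a Python sketch using the standard approximate depth-of-field formulas (the function and the 0.03 mm circle-of-confusion value are my own assumptions for illustration, not something your camera reports):

```python
def dof_near_far(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Approximate near/far limits of acceptable sharpness (thin-lens
    DOF approximation). coc_mm is the 'circle of confusion'; 0.03 mm
    is a common assumption for a full-frame sensor."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * distance_mm / (hyperfocal + distance_mm - focal_mm)
    if hyperfocal - (distance_mm - focal_mm) <= 0:
        far = float("inf")  # focused past the hyperfocal distance: sharp to infinity
    else:
        far = hyperfocal * distance_mm / (hyperfocal - (distance_mm - focal_mm))
    return near, far

# A 50 mm lens focused at 3 m (3000 mm): wide open vs stopped down
for f_number in (2.8, 16):
    near, far = dof_near_far(50, f_number, 3000)
    print(f"f/{f_number}: DOF is roughly {(far - near) / 1000:.2f} m")
```

Stopping down from f/2.8 to f/16 stretches the zone of acceptable sharpness from well under a meter to several meters, which is exactly the trade-off described above.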



2)    Sharpening During Pre-Processing (also known as Deconvolution Sharpening or Capture Sharpening) 


Deconvolution Sharpening is designed to compensate for and correct the inherent “softness” created by our camera and lens during “Image Capture”. This “softness” is typically created by the limitations of our camera’s technology, such as poor-quality cameras and lenses or, in higher-quality cameras, the “low-pass” (anti-alias) filter. Image “softness” is also part of the unprocessed nature of image capture using the RAW file format.


Guidelines For How To Do Deconvolution Sharpening:
i)    When using ACR (Adobe Camera Raw), access the Details tab.

ii)    Zoom in and expand your view (ie; 100%)

iii)    Set the Radius Slider all the way to the left (to 0.5) and the Detail Slider all the way to the right (to 100).

iv)    You can now move the Amount Slider to the left or right to get the amount of sharpening you desire (typically somewhere between 30 and 50).

v)    To use the Masking Slider, decrease your Zoom (ie; Fit in Window), then press the ALT key (or OPT key on a Mac) while moving the slider left or right so you can see which areas are being sharpened (the image will temporarily turn black & white, with the white areas being affected by the sharpening and the dark areas not).

vi)    Below the Sharpening Sliders are adjustments for reducing Noise. Noise Reduction can be a complex subject and, although it affects an image's sharpness, it is an entirely different topic. As such, it needs to be addressed at a future time.

3)    Sharpening During Image Processing (Creative Sharpening)
Photoshop has several different tools for sharpening. Some represent technologies developed over different periods of time, while others may be more useful in particular situations. Presented below are some of these tools and techniques (in no particular order of priority):


a)    The Sharpen Tool
The Sharpen Tool is a brush that can be used to selectively sharpen small areas or objects within your image (ie; a person’s eyes or the stigma of a flower, etc). As a brush, you can select its shape, strength and Layer Opacity, as well as add a Layer Mask.


When you use the Sharpen Tool to paint over an object in your image, the brush increases the pixel contrast only in the specific area you brush. The more you paint over an area, the more pixel contrast is created.

Great care should be taken when using this tool, as it has a tendency to increase Noise. To help overcome this problem, later versions of Photoshop include a “Protect Detail” feature that allows you to set the brush’s sensitivity. Also, when used directly on a Pixel Layer, it works in a destructive manner, so you will want to Sharpen on a separate layer.

Guidelines For How To Use The Sharpening Tool:
(1)    Create a new layer above your Pixel layer. Rename this layer “Sharpen Tool”. This will allow you to Sharpen in a non-destructive manner.

(2)    Zoom in and expand your view (ie; 100%)

(3)    Select the Sharpen Tool and set as follows:
(i)    Select your desired Brush

(ii)    Set the Amount to a low value (maybe around 15 to 25 or so)

(iii)    Turn on the “Sample All Layers” and “Protect Details”

(iv)    Begin to “paint” the area you want sharpened

b)    Filter --> Sharpen -->  Sharpen 

The Sharpen Filter can be used for quick and easy sharpening but has very limited capabilities. It applies a small, fixed amount of Sharpening to a Pixel Layer (exactly how much is not documented, and finding any information about this Filter is extremely difficult). As there are much better Photoshop tools available for Sharpening, and no Adjustment Sliders for you to control its effect, I usually do not recommend using it. That being said, this Filter works fairly well and can be used when you need a fast and simple method to sharpen an image. You can also use a Selection with this Filter to selectively Sharpen an area or object within your image.

Guidelines For Using The Sharpen Filter:
(1)    Create a new layer above your Pixel layer. Rename this layer “Sharpen Filter”. This will allow you to Sharpen in a non-destructive manner.

(2)    Zoom in and expand your view (ie; 100%)

(3)    Select the Filter Menu

(4)    Select the Sharpen Menu 

(5)    Select Sharpen (there are no Adjustment Sliders to use with this tool). Using the Selection Tool, you can select an area or object within your image


c)    Filter --> Sharpen --> Sharpen Edges 

Sharpen Edges is a Filter similar to the Sharpen Filter in that there are no Adjustment Sliders with which to control its effect. It concentrates its sharpening on detected edges, adjusting the contrast along those edges. As such, it works on the detailed areas of your image and leaves the flatter areas alone. This Filter may be good for landscapes and other types of images with lots of detail, but as there are no adjustments, I usually do not recommend using it.

Guidelines For Using The Sharpen Edges Filter:
(1)    Create a new layer above your Pixel layer. Rename this layer “Sharpen Edges Filter”. This will allow you to Sharpen in a non-destructive manner.

(2)    Zoom in and expand your view (ie; 100%)

(3)    Select the Filter Menu

(4)    Select the Sharpen Menu 

(5)    Select Sharpen Edges (there are no Adjustment Sliders to use with this tool). 

d)    Filter --> Sharpen --> Sharpen More 

Similar to the Sharpen Filter, the Sharpen More Filter applies a greater, fixed amount of Sharpening to a Pixel Layer (once again, exactly how much is not documented). As there are much better Photoshop tools available, and no Adjustment Sliders for you to control its effect, I usually do not recommend using it. This Filter still works and can be used when you need a fast and simple method to sharpen an image, but care must be taken, as the amount of sharpening applied is often too much for many images.


e)    Filter --> Sharpen --> Smart Sharpen

The Smart Sharpen Filter is one of the more advanced Sharpening tools available in Photoshop. It has the most Adjustment Sliders and produces excellent results when properly used. Its advantages include the ability to automatically detect and sharpen edges without creating noise (via a Reduce Noise Adjustment Slider), the ability to control the edge fade for Shadows and Highlights so image detail can be retained, and smaller halos when settings are pushed to the extremes. This Filter also contains a Preview option as well as a thumbnail image that can be zoomed separately from the main image.

Guidelines For Using The Smart Sharpen Filter:
(1)    Merge all your Layers together into a single layer. Rename this layer “Smart Sharpen Filter”. Optionally convert this layer into a Smart Object so that you can later adjust these settings. 

(2)    Zoom in and expand your view (ie; 100%)

(3)    Select the Filter Menu

(4)    Select the Sharpen Menu 

(5)    Select Smart Sharpen

(6)    Set each of the Adjustment Sliders as needed for the image:

(a)    Preview – Turn this on so that, as you make adjustments, both the main image and the thumbnail image display the results.

(b)    Preset – Initially set this to Default. Once you make adjustments you have the option to then Save these and load them for future images you may work on.

(c)    Amount – This Adjustment Slider represents the amount of Sharpening that will be applied and works in conjunction with the Radius Adjustment Slider. Typically, the more to the right you move the Amount Slider, the more Sharpening (contrast between lighter and darker pixels) will be applied. The more to the left you move the Amount Slider the less Sharpening will be applied. The Amount may be expressed as a percentage from 0% (no sharpening) to 500%. 

(d)    Radius – This Adjustment Slider determines how many pixels around each detected edge are affected. Typically expressed in pixels, this setting controls the halo created when an edge pixel is made darker or lighter. Radius has a big impact on sharpening because thicker edges make the increased contrast from the Amount setting more obvious.

Setting this adjustment to 1 means the contrast adjustment will be 1 pixel in size. Less than 1 will decrease the contrast adjustment to less than a pixel while increasing this to a value greater than 1 will increase the contrast adjustment to include more than a single pixel (from the edge). Depending on the image, this value is usually set to 1 or less so that the effect of the Sharpen enhances the detail edges but does not become too obvious. That being said, each image is different and should be treated as such. 

(e)    Reduce Noise – You can use this Adjustment Slider to somewhat reduce noise you may have in your image. Noise Reduction has a tendency of reducing an image’s sharpness so care must be used when performing this adjustment. Personally, I have found this Slider to be of little value and I prefer to perform my Noise Reduction as a separate task before beginning sharpening. Noise Reduction can be a complex subject and as such, it needs to be addressed during a future discussion.

(f)    Remove – You have several different options for how you want the Smart Sharpen Filter to work. These include Gaussian Blur, Lens Blur, and Motion Blur. 

Gaussian Blur typically causes Smart Sharpen to behave similarly to the Unsharp Mask but with no edge-detection ability (more about this later). Lens Blur allows the Smart Sharpen Filter to automatically detect edges, and Motion Blur is typically used to remove blur caused by moving subjects or camera shake (Motion Blur opens up the Directional option so you can help Photoshop determine which motion needs to be adjusted).

Do not let these Names dictate which to use. Review how the pixels actually behave for the image in front of you and then make your adjustments accordingly.

(7)    If you select the > option, adjustments for Shadows and Highlights will appear. Under this option are 3 additional Adjustment Sliders:

(a)    Shadows

(i)    Fade Amount – The Fade Amount Slider acts as a reverse Amount setting for the shadow areas of your image: it “fades” (removes) sharpening there. The higher the value (the further right you move this Slider), the less sharpening is applied in the Shadows. With the Fade Amount set to 0%, no fading occurs and sharpening is applied in the Shadows at full strength.

(ii)    Tonal Width – The Tonal Width Slider controls the range of shadow (darker) pixels affected by the Fade Amount. A lower Tonal Width limits the “fade” to the darkest edge pixels, while a higher Tonal Width extends the effect into the mid-tones.

(iii)    Radius – The Radius Slider controls the size of the area around each pixel that is examined to determine whether the pixel falls within the specified tonal range. If it does, the pixel is adjusted by the Fade Amount; otherwise, it is not. A Radius of 1 pixel is typical, but this setting often needs to be increased.

(b)    Highlights

(i)    Fade Amount – The Fade Amount Slider for Highlights acts the same as for Shadows (it is used to “fade”, or remove, sharpening). The higher the value (the further right you move this Slider), the less sharpening is applied in the Highlights. With the Fade Amount set to 0%, no fading occurs and sharpening is applied in the Highlights at full strength.

(ii)    Tonal Width – The Tonal Width Slider controls the range of highlight (brighter) pixels affected by the Fade Amount. A lower Tonal Width limits the “fade” to the brightest edge pixels, while a higher Tonal Width extends the effect into the mid-tones.

(iii)    Radius – The Radius Slider controls the size of the area around each pixel that is examined to determine whether the pixel falls within the specified tonal range. If it does, the pixel is adjusted by the Fade Amount; otherwise, it is not. A Radius of 1 pixel is typical, but this setting often needs to be increased.


f)    Filter --> Sharpen --> Unsharp Mask

The UnSharp Mask is not actually a Mask, as the name implies, but a Filter. Although it is one of Photoshop’s earliest tools, it is still in wide use, as it is quick and easy to learn and use. Although it has fewer Adjustment Sliders than the Smart Sharpen Filter, it is still highly effective in helping to sharpen an image. Options for this Filter include a Preview, a thumbnail image that can be zoomed separately from the main image, and Amount, Radius and Threshold Adjustment Sliders.

Guidelines For Using The UnSharp Mask Filter:
(1)    Merge all your Layers together into a single layer. Rename this layer “UnSharp Mask Filter”. Optionally convert this layer into a Smart Object so that you can later adjust these settings. 

(2)    Zoom in and expand your view (ie; 100%)

(3)    Select the Filter Menu

(4)    Select the Sharpen Menu 

(5)    Select UnSharp Mask

(a)    Preview – Turn this on so that, as you make adjustments, both the main image and the thumbnail image display the results.

(b)    Amount – This Adjustment Slider represents the amount of Sharpening that will be applied and works in conjunction with the Radius Adjustment Slider. Typically, the more to the right you move the Amount Slider, the more Sharpening (contrast between lighter and darker pixels) will be applied. The more to the left you move the Amount Slider the less Sharpening will be applied. The Amount may be expressed as a percentage from 0% (no sharpening) to 500%. 

(c)    Radius – This Adjustment Slider determines how many pixels around each detected edge are affected. Expressed in pixels, this setting controls the halo created when an edge pixel is made darker or lighter. Radius has a big impact on sharpening because thicker edges make the increased contrast from the Amount setting more obvious.

Setting this adjustment to 1 means the contrast adjustment will be 1 pixel in size. Less than 1 will decrease the contrast adjustment to less than a pixel while increasing this to a value greater than 1 will increase the contrast adjustment to include more than a single pixel (from the edge). Depending on the image, this value is usually set to 1 or less so that the effect of the Sharpen enhances the detail edges but does not become too obvious. That being said, each image is different and should be treated as such. 

(d)    Threshold – This Adjustment Slider controls how sensitive Photoshop is when determining where edges exist. When the threshold is low there doesn’t need to be a big difference between two adjacent pixels for Photoshop to detect an edge. But as you increase the threshold, Photoshop needs to detect a larger difference between pixels to determine an edge exists.
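Here is a small Python sketch of the Threshold idea (my own simplified 1-D version, not Photoshop’s actual implementation): sharpening is applied only where the local difference exceeds the threshold, so low-contrast noise in flat areas is left alone:

```python
def unsharp_threshold_1d(pixels, amount=1.0, threshold=0):
    """Unsharp mask with a Threshold: a pixel is adjusted only when its
    difference from the local blur exceeds `threshold`, so low-contrast
    noise in flat areas is left alone and only real edges are boosted."""
    out = []
    for i, p in enumerate(pixels):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        detail = p - (left + p + right) / 3
        out.append(p + amount * detail if abs(detail) > threshold else p)
    return out

noisy_flat = [100, 102, 99, 101, 100]  # low-contrast noise only
real_edge = [100, 100, 100, 180, 180]
print(unsharp_threshold_1d(noisy_flat, threshold=10))  # unchanged
print(unsharp_threshold_1d(real_edge, threshold=10))   # edge is boosted
```

With the threshold at 0, every tiny difference (including noise) would be amplified; raising it protects the flat areas while the real edge still gets its contrast boost.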


g)    Filter --> Other --> High Pass

The High Pass Filter (HPF) is one of my favorite tools for sharpening, as it is quick, easy and accurate to use. Photoshop sharpening is all about locating edges and then increasing the contrast along them, and the HPF is basically an edge-detection tool: it finds and highlights edges and ignores non-edges. Once the edges are detected, Layer Blending Modes are used to boost the contrast along those edges, all without affecting color or non-edge areas.
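For the curious, here is a Python sketch of the basic idea (my own simplified 1-D version, not Photoshop’s actual code): subtract a blur to get the high-pass layer, then Overlay-blend it back. Notice that flat areas come out exactly mid grey, and Overlay with mid grey leaves a pixel unchanged:

```python
def high_pass_sharpen(pixels):
    """1) High pass = image minus a blur, offset to mid grey (0.5);
       flat areas come out exactly 0.5.
    2) Overlay-blend the high-pass layer back onto the image;
       Overlay with 0.5 leaves a pixel untouched, so only edges change.
    Pixels are on a 0..1 greyscale."""
    n = len(pixels)
    blur = [(pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) / 3
            for i in range(n)]
    high_pass = [p - b + 0.5 for p, b in zip(pixels, blur)]

    def overlay(base, blend):
        # Overlay-style blend for a single channel
        return 2 * base * blend if base < 0.5 else 1 - 2 * (1 - base) * (1 - blend)

    return [min(1.0, max(0.0, overlay(p, h)))
            for p, h in zip(pixels, high_pass)]

row = [0.2, 0.2, 0.2, 0.8, 0.8, 0.8]  # one soft edge
print(high_pass_sharpen(row))  # edge pixels pushed apart; flat ends unchanged
```

This is why the grey areas of the High Pass layer are harmless: only the lighter-than-grey and darker-than-grey outlines along the edges change the image.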


Guidelines For Using The High Pass Filter For Sharpening:
(1)    Merge all your Layers together into a single layer. Rename this layer “HPF Sharpening”. Optionally convert this layer into a Smart Object so that you can later adjust these settings. 

(2)    Zoom in and expand your view (ie; 100%)

(3)    Select the Filter Menu

(4)    Select the Other Menu 

(5)    Select High Pass (the Layer image will turn grey and white)

(6)    Adjust as follows:

(a)    Preview – Turn this on so that, as you make adjustments, both the main image and the thumbnail image display the results.

(b)    Radius – Set the Radius all the way to the left (to 0.1), then slowly increase it by moving it to the right until edges begin to be detected. When this happens, you will see outlines begin to form as the light side of each edge becomes lighter and the dark side becomes darker. Areas with no edges will stay grey.

As the Radius affects the width of the detected edge, the more pixels you include, the larger the contrast along the detected edges. You will want to keep the highlighted edges to a minimum, so 1 – 2 pixels is usually enough (although some images may require more). For the best HPF sharpening results, try to find a Radius value that is large enough to highlight the edges while keeping the highlights small and as close to the edge as possible. Select the OK button when done.

(c)    You now need to change the Layer’s Blending Mode. The most common Modes used with the HPF are Overlay, Soft Light, Hard Light and Linear Light. In most cases, I use Overlay.

a.    Overlay produces a higher contrast effect, resulting in a stronger amount of sharpening

b.    Soft Light gives you lower contrast and more subtle sharpening

c.    Hard Light and Linear Light will give you higher contrast than Overlay

4)    Sharpening During Output Processing
An often-overlooked aspect of digital image processing is sharpening your image for its intended output. If you display your images on the internet, whether on a web site or in social media, the viewing device’s resolution is usually low, and you do not want your image to appear overly sharpened. When displaying images on high-resolution devices (such as 4K monitors and/or HDTVs), you will typically need a greater amount of detailed, selective sharpening. Printing also usually requires extremely sharp images, with very thin, undetectable halos around the edges.

Sharpening Tips:

1.    Before doing any type of enhancement, first review your image and think carefully about what it represents, the message you are trying to communicate and what changes (if any) may be needed to better communicate your intentions.

2.    It is almost always best to do selective sharpening. This will usually produce a more compelling image. 

3.    Sharpening and Noise often have an inverse relationship (Sharpening can introduce Noise, and reducing Noise can reduce an image’s sharpness). Avoid trying to sharpen images that have a lot of Noise. It is best to first reduce or eliminate the Noise prior to sharpening, as sharpening may intensify the Noise, making it harder to then get rid of.

4.    When doing any type of sharpening to an image, it is usually best to do so when viewing your image at or near 100% so that each image pixel uses one screen pixel; giving you the most accurate sharpening view.

5.    One method to find the best sharpening settings is to use a low Radius (ie; around 1 or 1.5) and then set the Amount Adjustment Slider all the way to the right. This takes the sharpening effect to the extreme and will produce large halos. Then start to move the Amount Adjustment Slider to the left until the halos disappear. 

6.    Always make sharpening adjustments in small increments so that you can clearly see the effect your sharpening is having on your image.

7.    If you Sharpen your image on a separate layer, you can change that Layer’s Blending Mode or Opacity, and/or use Masks. This will help you avoid color shifts along the pixel edges as well as exclude those areas you do not want sharpened.

8.    When working with high resolution images, increasing the amount of sharpening contrast is usually less noticeable than when working with lower resolution images. This is because higher resolution images have a greater number of pixels to work with.

9.    Photoshop can only sharpen one layer at a time so if you have multiple layers, you may need to combine them into a single pixel layer before sharpening. 

10.    Convert your Sharpen Layer into a Smart Object. This will allow you to later go back and re-adjust your Sharpening Adjustment Settings. In this manner, you can use the same image for different types of output.


Sharpening Myths & Limitations

1.    Some people believe that Sharpening can help bring an out-of-focus image into focus (or at least improve it). Unfortunately, this is not the case. Sharpening can only use the pixel information that is available and cannot, by itself, create sharpness where it does not already exist. If the subject or object in an image is blurry, Sharpening will only make it seem more so. 

Photoshop does include a Shake Reduction Filter (Filter > Sharpen > Shake Reduction) which is designed to automatically detect motion blur in an image (ie; caused by camera movement when taking a picture) and counteract it with sharpening and ghost-removal logic. Although not always useful, it can at times reduce or eliminate some amount of motion blur.

2.    At times, there is also a tendency to “Over Sharpen”. An image becomes over-sharpened when the edges between the pixels become so pronounced that the viewer can see the actual differences between the dark and light pixels. When this occurs, a Halo will start to appear along the edges. The greater the Amount of sharpening and/or the greater the number of pixels used to determine the edge, the larger the Halo will be. If, when sharpening an image, you start to see these halos forming, try reducing the Amount or the number of pixels (Radius) to eliminate them.

3.    In many pictures, over-sharpening is an undesired effect as it makes the subject seem unnatural. In other cases, however, it can be intentional, a unique style of a photographer whose work may be perceived as more artistic. 

As we have seen, there are many decisions to make when sharpening a digital image. When to sharpen, what tool to use, and how much all become important factors in trying to get the most out of your image. As such, sharpening may take time and practice to master, but ultimately it becomes an extremely useful tool and an expression of your creative image-making process. 


For more information about the concepts, techniques and tools of digital photography, see my blog at








(Marc F Alter) Capture Sharpening Deconvolution Sharpening High Pass Filter image sharpening marc alter photography marc f alter mfa images Photoshop sharpen Sharpen Edges Sharpen More sharpening tips Sharpening Tool Smart Sharpen Wed, 29 Jan 2020 02:26:46 GMT
Photo Post Processing In The Digital Age - Should I or Shouldn't I

What Is Post Processing?

In the olden days (for me, that would be in the 1960s), I would load my Kodak or Fuji film, take pictures with my Argus C3 camera, send it to a photo developer and then days or weeks later, receive 3” x 5” or 4” x 6” pictures in the mail. Basically, what I shot was what I got.

Then (in the 1970s) I started getting into photography and began to develop my own film and prints. Sometimes I experimented, learning to push film (ASA 400 to ASA 800) or by using different temperatures of chemicals or different photo papers to achieve different results but mostly I was happy to see the magic of a blank paper turning into an 8” x 10” or 11” x 14” photograph. I started to learn techniques for cropping, dodging and burning but still, for the most part, what I shot was what I got.

Nowadays we can still do this. We can go out shooting pictures with our highly advanced DSLRs, download our pictures and print them as is. Some will come out good (a few may even be great) but many will not. Why is this?

If we are shooting and capturing our images as JPG files, then other than composition, the camera is deciding how to process the image. With lots of intelligence written into the JPG logic, the camera still does not really “know” what the scene is or why we captured it. The camera is only a data collection tool; it can only make assumptions about your intent and then try to render what it thinks you want. It will process our images just like the hundreds, thousands and/or millions of other images taken with the same camera and settings. Finally, if we are using JPG files, our output is limited to the inherent benefits and weaknesses of this file format (See my Blog “Understanding & Managing Image Quality - Part I and Part II” at for more details).

If we are shooting and capturing our images with RAW files, however, we have at our disposal all the colors (hues & tones) and luminosity values picked up by the camera’s sensor and recorded on the digital card. Nonetheless, if we printed these files as-is, we would find them flat in color and somewhat dull in sharpness. This is because RAW files are designed to be processed after they have been captured (hence the name “Post Processing”). In 1992, the verb “Photoshop” was added to the English dictionary as a result of how frequently it was used to represent photography post processing.

Post processing is not a byproduct of digital photography. From the earliest days of photography, a battle existed between painters and the art world on one side and, on the other, photographers who saw their work and vision as something more than just capturing the scene in front of them. 


When To Post Process?

Even if you get it “right” in camera (proper shutter speed and exposure, sharp focus, etc), all digital images captured in a RAW format need to be “post processed”. This process enhances the final output so the image itself is visually clear, concise and sharp. Depending on the reason for the image and how it will be used, certain post processing is considered acceptable and necessary while other techniques may not be.

Most acceptable post processing tasks are techniques that do not materially change the image. This may include cropping, minor white balance & exposure adjustments, minor image clean-up (ie; removing dust spots) and sharpening. Some photography genres that usually keep a tight rein on the amount and extent of acceptable post processing include photojournalism (news reporting), street photography, travel, and wildlife photography. Some photo contests and/or contest categories even limit the amount of post processing that can be done to an image, as the judges may wish to see an accurate reflection of the natural scene captured by the photographer.

 Many other uses of an image, however, recognize the image as “digital art” where the process to achieve the final product is as much a part of the image as when the photographer took it. In these cases, the image presented to the viewer is the final determining factor with the process a product of the photographer’s knowledge, processing skills, and experimentation.  Thus, image capture is only the first step in photography and post processing techniques becomes the mechanism for the photographer to achieve their own unique vision.

Many photographers work extremely hard on composition, exposure, and focus to get it as perfect as possible when they capture the image. They know, post processing has its limitations and not all issues can be corrected “after the fact”. This being said, there is a lot post processing can do to enhance a good image maybe even making it a great one. Some of this may include adjusting colors and light while other techniques may involve removing distracting subjects or adding new ones that add drama and story. Some of these techniques go way beyond the four techniques mentioned above and become a function of what you know, what tools you have available, what you are able to do and where your process leads you.


>;- )





(Marc F Alter) marc alter photography marc f alter MFA mfa images Photography Sun, 03 Nov 2019 22:16:19 GMT
Monitor Calibration For Photo Editing


Why Calibrate Your Display?

Every display device, whether it be a desktop monitor, laptop screen, smart phone, tablet, projector, TV, etc, displays images differently. The same image displayed on your monitor can look vastly different when displayed on your friend’s monitor (or maybe even a contest judge’s monitor). This is because each type of device typically has different capabilities. Even devices with the same or similar capabilities may be adjusted differently, resulting in differences in how images are displayed (ie; Normal versus Vivid, etc). These adjustments may not only affect how colors are displayed but also how contrast and sharpness are viewed.


So how do you get “true color” so that your images will display consistently across different devices? This is a challenge, as you do not have control over other people’s devices. Nonetheless, by calibrating your display (monitor) you can establish a commonality with other devices that are also calibrated. This is a highly recommended step, especially before doing any type of photo editing. Calibration will also help when preparing an image for printing, as you will want the colors sent to your printer to match that printer’s color settings as closely as possible.


What is Monitor Calibration And What Tools Do I Need?

Monitor calibration is the process of measuring and adjusting the colors on your monitor so that it will match a common standard. To do this you will need some type of sensor device that measures the values put out by your monitor. Such devices are called spectrophotometers or colorimeters and there are many on the market to choose from. You will also need monitor calibration software that works with your device.


Note: Some devices (usually very specialized commercial display devices) are calibrated at the factory and/or have self-calibration capabilities. These are typically extremely expensive and are not normally available to those of us doing photography for fun and profit.


Some computer systems, like Windows 10 and macOS, offer calibration tools built right into the computer’s operating system or have software programs that do not require sensor devices. The problem with these tools and programs is that they rely on the user’s visual perception to make intricate monitor setting adjustments. This is problematic as these adjustments can only be as good as your own visual capabilities and usually will not match the settings of other calibrated devices (it is believed that as many as 1 out of 12 men and 1 out of 255 women have some form of color vision deficiency). Also, our perception of colors and hues may vary based on the lighting around us, our gender, our age and our level of tiredness.


For the most accurate colors you will want to invest in a monitor calibration kit.  This type of kit typically includes a spectrometer (or colorimeter) sensor device, software and instructions. These types of tools do not rely on a user’s visual perception to make adjustments.  A monitor calibration kit compares the monitor’s color output with known colors through a combination of hardware and software that are specifically designed to work together.


How Do I Calibrate My Monitor?

There are various steps you need to take to successfully and accurately calibrate your monitor. These can seem very technical and complex but once you understand the steps and have practiced them a few times, it will become more familiar and easier.


Each computer and display (monitor) combination may produce somewhat different results and may require somewhat different steps, but there is commonality between all of these. To help guide you through the process, I will describe the steps I take with my Windows PC and my NEC PA272W monitor.



Pre-Calibration Steps

Prior to calibrating your computer’s monitor, there are several steps you should take to ensure your computer’s components are at the latest updates and will therefore take full advantage of your monitor calibration settings. Below is a short list of these pre-calibration steps. More detailed step-by-step instructions can be reviewed in the Appendix section of this White Paper.

  1. Make sure your Computer has the most recent Microsoft Updates.
  2. Make sure your computer’s graphics card has the most recent, updated driver.
  3. Make sure your computer’s Monitor has the most recent, updated driver.
  4. Make sure the Monitor Controls on the actual Monitor (NEC PA272W) are properly set as follows:
    1. Mode = AdobeRGB
    2. White = 6500K
    3. All Other Settings = Monitor Default
  5. Purchase a Monitor Calibration Kit (Sensor, Software & Instructions) and install the software.

Note: There are several excellent Monitor Calibration Kits currently available. Each has its own capabilities, weaknesses and advocates, but basically they can all do a good job provided you have followed the Pre-Calibration Steps and correctly follow the steps provided with the calibration software.

Some of the more common Monitor Calibration Kits available are:

  • Datacolor Spyder5PRO (S5P100)
  • Datacolor DC S3P100 Spyder 3 Pro
  • X-Rite i1Display Pro – Display Calibration (EODIS3)
  • X-Rite ColorMunki Smile (CMUNSML)
  • X-Rite ColorMunki (CMUNDISCCPP)
  • Wacom EODIS3-DCWA Color Manager
  • NEC SpectraSensor Pro Color Calibration Sensor
  • ViewSonic CS-xRi1 Color Calibration Kit

Some of the newer calibration software kits now have the ability to evaluate the measured capabilities of your monitor as well as report on its display consistency. After you have performed your monitor calibration, you should review these reports so you are more aware of your monitor’s strengths and weaknesses.

Datacolor’s SpyderX Elite – Additional Capabilities, Tests & Reports

When looking to upgrade my monitor calibration device and software (the last time I had purchased one was in 2011), one of the reasons I chose Datacolor’s SpyderX Elite for monitor calibration is its many advanced capabilities. One of these is the ability of the software to analyze my monitor and report what percentage of various color gamuts my monitor is capable of reproducing and, if it falls short, where in the gamut’s spectrum it does so. The software also has the ability to compare different displays and determine which has the best color reproduction.

As I am mostly concerned about my photography work using both the sRGB and AdobeRGB color gamuts, I see my monitor has the following color capabilities:

  • sRGB – According to Datacolor’s SpyderX Elite software, my monitor is capable of producing 95% of sRGB’s color values.
  • Adobe RGB - According to Datacolor’s SpyderX Elite software, my monitor is capable of producing 89% of AdobeRGB’s color values.

Although I originally purchased my NEC PA272W monitor in 2015, at matching 89% of AdobeRGB’s 16.7 million colors, it still seems capable of meeting many of my current photography requirements (although lower than the marketing hype of 99.3% of the AdobeRGB color gamut that it was supposed to meet).

The Datacolor’s SpyderX Elite software is also capable of producing many other tests. 

Each of these tests would be useful in helping to evaluate my current monitor, its capabilities and shortcomings. Given today’s technologies, I wonder how much more of the color gamut I could utilize with a newer 4K or 8K monitor (and whether making such an investment would be of any use in helping to make my images better). This may be a question I answer given anticipated upcoming sales promotions and the holidays…




(Marc F Alter) marc alter photography marc f alter MFA mfa images Photo Photography Mon, 28 Oct 2019 04:40:11 GMT
Understanding & Managing Image Quality – Part II

By Marc F Alter


In the last session we discussed some of the critical elements in creating quality images in digital photography. Amongst these were Bit Depth, Color Space and File Types. Increased Bit Depth allows us to create images with a larger number of colors (16.7 million for 8-Bit versus 281 trillion for 16-Bit), Color Space (sRGB vs Adobe RGB vs ProPhoto RGB) defines the colors that are available, and File Types (mainly JPG or RAW) each have their own advantages and disadvantages, but basically JPGs rely on the camera to preprocess our images while RAW demands we do it ourselves.
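The color counts quoted above follow directly from bit-depth arithmetic: each of the three RGB channels has 2^bits levels, so the total palette is (2^bits)³. A quick sketch:

```python
def total_colors(bits_per_channel):
    # Three channels (R, G, B), each with 2**bits levels.
    levels = 2 ** bits_per_channel
    return levels ** 3

total_colors(8)    # 16,777,216 (~16.7 million colors for 8-Bit)
total_colors(16)   # 281,474,976,710,656 (~281 trillion colors for 16-Bit)
```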


We also discussed that when we take a picture using a digital camera, the camera’s sensor captures the scene and records it in the form of Pixels. These Pixels are essentially data representing tiny dabs of color (Hue, Saturation and Light). If we look at our image normally (Zoom Out) the formation of Pixels causes our minds to recognize shapes, patterns and light. If we look at our image close-up (Zoom In), we can actually “see” the individual Pixels.


Using these concepts, we can now discuss the things you can do to improve the quality of your images.


Pre-Image Capture

Whether film or digital, knowing the basics of photography is important if your goal is to take your camera off “Automatic” and create better and unique images. This includes knowing the Exposure Triangle (and the trade-offs between ISO, F-Stops and Shutter Speed), becoming familiar with Compositional Rules (really more like Guidelines), and learning about Light and its characteristics (Intensity, Quality, Color, & Direction). These are all important concepts to learn and become knowledgeable about (before going out and shooting).


Knowing and becoming intimately familiar with your camera, its functions, capabilities and limitations is also critical to creating a successful, quality image. When a scene unfolds before you, you need to be ready to “Capture” it, for in a second it will be gone forever (photography is about capturing split seconds of time). Chief amongst these functions is knowing how your camera needs to be set and used to get good exposure and focus: setting ISO (Film Speed / Sensor Light Sensitivity), F-Stop (More Light / Less Light and DOF-Depth of Field) and Shutter Speed (Short or Long Slices of Time). Also needed is knowledge of how to set different Exposure Modes, Metering Modes and Exposure Compensation. Focus is equally important; knowing the different Focus Methods, Focus Modes and Focus Areas will help your image tell the intended story. Finally, knowing more advanced camera functions, like how to manage White Balance, take single or burst shots, display a Grid for composition, use custom programmable buttons, switch from Photo to Video, view Exposure Highlights and Histograms, use “Live View”, etc., might not always seem important until the time when you need them.


Once you have learned these concepts and are familiar with your equipment, you are ready to go out and shoot. As in most tasks in life, planning is an essential element of success, and the same is true in digital photography. Whether you have a pre-conceived concept of what you want to achieve or are just going out to “wing it” and capture the scene as it unfolds before you, knowing where you are going and preparing for it will greatly enhance your chances of success. Knowing the weather, wearing the right clothes, and having food & water available can increase the time you are out shooting and also make it more enjoyable. Gathering your equipment ahead of time and making sure it is clean and working will help prevent possible problems in the field. Finally, check your camera’s settings and make sure they have been reset since your last excursion (nothing is worse than getting to a spot where a unique scene suddenly unfolds, taking the shot, and then seeing your exposure or focus was set for a completely different event).



Capture represents the act of actually taking the picture and recording the image on the camera’s Memory Card. As discussed above, there are several tasks you should complete before going out. That being said, it is always good to review these settings (again) when out in the field.


After taking your shot it is best to review your image on the camera’s LCD (Display). Most of today’s digital cameras can display a host of useful information including the image itself (so you can check the composition and focus), Blinkees and/or a Histogram (to see if any areas have clipped highlights or dark areas) as well as the ISO, F-Stop and Shutter Settings. After reviewing, you may wish to make some setting changes and shoot a similar image again. Unlike film, which is limited to a certain number of shots, most Memory Cards have far greater capacity as well as the ability to create more room by deleting pictures you know are not good.




Should you delete pictures when out in the field? That all depends on the image in question, your intention and/or whether you have similar images. Care must be taken when reviewing your image on the camera’s LCD, for although it is useful, it can also be misleading. The LCD itself is small, and even if you zoom in (ie; to check focus sharpness), the display may deceive you. Also, most LCDs display only JPG previews, which are pre-processed. The camera’s displayed Blinkees and Histogram may be reading the image’s JPG values and not the actual values picked up by the sensor. This is especially important if you are shooting in a RAW format, as the JPG data may not accurately reflect what the camera’s sensor is actually recording (it does, however, give you a good approximation).


Post Processing

Everyone processes images differently. This is why today’s photography is an art form as much as it is a science. From the photo editing tools we use (and there are many to choose from) to the workflow we develop, there can be vast differences in not only how we captured the image before us but also in how we process it. That being said, to create high quality images, there are a few “Best Practices” that should be followed:




  1.    Photo File Organization – When downloading images from your camera’s Memory Card onto your computer, it is best to organize your images in a manner that allows you to quickly and easily locate a desired image. The method you use can vary greatly, depending on how many images you expect to have (many more than you could ever imagine) and the capabilities of the software you will be using to initially view and select your images. Nonetheless, there are three common methods for organizing your images: (1) by “Date & Place”, (2) by “Similar Subject” and (3) None-At-All.


  1. Date & Place – With this method you basically create top level folders (usually by Year in which the image was taken), followed by sub-folders (representing Months) and then additional folders (representing the actual Place or Event of your Shots).


    1. 2019
      1. September
        1. Sunken Meadow State Park – Ospreys
          1. NEF5625
          2. NEF5626
          3. NEF5627
        2. West Neck Beach – Seagulls, Sunset,
          1. NEF5628
          2. NEF5629
          3. NEF5630
        3. Maine Vacation
          1. NEF5631
          2. NEF5632
          3. NEF5633


  2.    Similar Subject – With this method you review your images and then place them in specific folders you have arranged by Subject. This may allow you to more easily and quickly find specific images.


    1. Animals
      1. Birds
        1. Ospreys
          1. NEF5625
          2. NEF5626
          3. NEF5627
        2. Seagulls
          1. NEF5628
          2. NEF5629
          3. NEF5630
      2. Mammals
        1. Squirrels
          1. NEF5634
          2. NEF5635
          3. NEF5636
        2. Elephants
          1. NEF5637
          2. NEF5638
          3. NEF5639



  3.    None-At-All – If you are not well organized but all your images have unique File Names, you may decide not to utilize any type of file organization. In this case, you can place all your images into one massive folder and rely on “technology” to find the specific image you may be looking for:


    1. Pictures
      1. NEF5625
      2. NEF5626
      3. NEF5627
      4. NEF5628
      5. NEF5629
      6. NEF5630


No matter what technique you use (any of the above, others or combinations), eventually you will find it inadequate. This does not mean you should not use one, but rather that you should combine it with your software’s capabilities. Much “Photo Browsing & Editing” software can use an image’s Metadata to help you locate specific images (ie; images taken at a specific time and place). Other software capabilities allow you to further categorize your images using Keywords, Colors, Star Ratings, Labels and/or combinations of these. These can be extremely useful when trying to locate a specific image, especially as your Image Library grows.
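As an illustration of the “Date & Place” method, here is a minimal sketch in Python. It is an assumption-laden toy, not a real cataloging tool: it files images by each file's modification time (a real tool would read the EXIF capture date), it numbers the month folders rather than naming them, and you would still create the innermost Place/Event folders yourself.

```python
import datetime
import os
import shutil

def organize_by_date(src_dir, dest_dir):
    """File each image into dest_dir/YYYY/MM/ by date."""
    for name in os.listdir(src_dir):
        path = os.path.join(src_dir, name)
        if not os.path.isfile(path):
            continue
        # Using modification time for simplicity; a real tool would read EXIF.
        taken = datetime.datetime.fromtimestamp(os.path.getmtime(path))
        target = os.path.join(dest_dir, str(taken.year), f"{taken.month:02d}")
        os.makedirs(target, exist_ok=True)  # create Year/Month folders as needed
        shutil.move(path, os.path.join(target, name))
```

For example, a file last modified in September 2019 would land in `2019/09/`, mirroring the Year → Month hierarchy shown above.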


  2.    Transfer, Review, Delete, Delete, Delete – Given today’s digital technology, we now initially capture our images on a Memory Card instead of on film. As a result, we have a tendency to capture many more pictures than we ever did in the past. This, coupled with our ability to take multiple images (bracketed exposures, high shooting burst rates, alternative cameras such as Smart Phones), means over time our images will take up an ever greater amount of disk (storage) space. Although the cost of this storage has dropped tremendously (disk is cheap…), do we really want or need to save images that have poor quality and which we will never use?


Best Practices means shortly after we have transferred our images from our camera to our computer’s storage, we should review them and delete the ones we know we will never need. Chief amongst these would be those out of focus and those with poor exposures where better-exposed images exist. Beware, though: as digital technology changes and our own expertise grows, our ability to improve images, even those of questionable quality, increases over time. Also, as we become more experienced photographers (and artists), we may delve into other areas of photography (like Impressionism) where poorly exposed, out of focus and blurred images may be exactly what we are looking for.


  3.    Backup your image files – After transferring your images onto your computer and going through a process of Review and Delete, it is Best Practice to then back up these files to another media and another location. The best time for a photographic disaster (fire, flood, computer failure, virus attack, etc) is the day after you have successfully implemented a good, well thought-out backup strategy. That being said, most computer disasters occur without warning and usually at the worst possible time (ie; immediately before you start your backup strategy). With this in mind, the best time to complete your backups was yesterday. 


Most good backup plans contain two main elements: Redundancy & Consistency. Redundancy means you have several copies of your programs and/or data in different places. This is because the media on which you back up your data can also fail (usually at the worst possible time; like when you most need it). Consistency means you perform your backups on a regular basis (either manually or automatically).


Backups should take into account, several different disaster scenarios including simple hardware failure, loss of power, water damage, etc to name just a few. Some of the most overlooked and misunderstood but common computer disaster scenarios involve being infected by a Computer Virus (like a Crypto-Locker virus which locks up all files you have access to), Technology Changes (where older media is no longer accessible) and Policy Changes (Cloud and other backup services that change their data retention services). Not only must special precautions be taken to avoid these, but most backup plans do not properly take these into account. When thinking about developing a backup plan and how you are going to implement it, keep thinking about what kind of failures or disasters can happen and how your plan will overcome these.
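A minimal sketch of the Consistency half in Python, assuming a simple mirror-style backup: copy only files that are new or have changed since the last run (Redundancy then comes from running it against more than one destination). This is illustrative only; real backup software also handles versioning, verification, virus-resistant retention and off-site copies, as discussed above.

```python
import os
import shutil

def incremental_backup(src, dest):
    # Copy files that are new or changed since the last run (the Consistency half);
    # running this against several destinations gives you the Redundancy half.
    copied = []
    for root, _, files in os.walk(src):
        out_dir = os.path.normpath(os.path.join(dest, os.path.relpath(root, src)))
        os.makedirs(out_dir, exist_ok=True)
        for name in files:
            s, d = os.path.join(root, name), os.path.join(out_dir, name)
            if (not os.path.exists(d)
                    or os.path.getsize(s) != os.path.getsize(d)
                    or os.path.getmtime(s) > os.path.getmtime(d) + 1):  # 1s tolerance
                shutil.copy2(s, d)  # copy2 preserves timestamps
                copied.append(d)
    return copied
```

Running it a second time with nothing changed copies nothing, which is what makes regular (scheduled) runs cheap.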


  4.    Monitor Calibration – In most cases, nobody sees image colors the same. This is partially due to how we as individuals view colors but also due to what devices we are using to display an image. Most computer screens are set too bright (especially LEDs) and as monitors get older, their colors shift over time. When viewing or editing an image, you will want to establish a standard that will best represent how others “may” view your image as well as create consistency when printing. To accomplish this, it is best to calibrate your monitor.


Monitor Calibration typically involves a sensor that is placed on the front of your screen, along with software that uses this device to measure your monitor’s output and create a profile that sets (controls) your monitor’s brightness and displayed colors. There are several different types of Monitor Calibration available, but most work the same way. Some monitors are self-calibrating while others have Monitor Calibration built into them, but it is questionable how well most of these work as they usually rely on “visual perception” rather than measured sensor values.



  5.    Post Processing – Workflow and techniques can vary greatly from one individual to the next, but Best Practices demands we leave our original Photo Image Files intact. Whether we use JPGs (which permanently lose quality every time they are saved) or RAW files (which cannot be changed and therefore need to be copied), both demand we somehow “Save” our Original Files. Some software performs this function automatically (keeping editing changes in a separate catalog or side-car file) while other software requires we perform a “Save As”. In these cases, Best Practice suggests we include the Original File Name in the saved file, so finding the Original File is never too burdensome.


Updating Metadata is also an important task to consider. Nowadays, most digital cameras embed common Metadata in each image file, including useful information such as the Date the Image Was Taken, Camera Make & Model, Lens Make & Model and Exposure Settings. You should strongly consider adding Copyright and Contact Information, so your ownership is clear.


Learning and using functions for managing Color Profiles, Bit Depths and Cropping (Delete Pixels or Not?) is important when editing a digital image. So are advanced capabilities such as Smart Objects or the use of 3-D Imaging. One of the first tasks you should take when evaluating an image is to review its Colors, Hues and Luminosity to make sure the colors you are using are “in Gamut” for the device(s) that will be used to view or print your image. In Lightroom, Photoshop and/or many other Photo Editing programs, this is typically found as “Clipping Warnings” (both for Highlights as well as Darks). If your software does not have this function, you should be able to view the image’s Histogram.


With digital editing tools becoming more and more advanced, and with additional third-party applications offering exciting new capabilities and ease of use, the possibilities are virtually endless. A good Rule of Thumb is to make as few changes as possible while managing White Balance, Exposure, Subject and Focus. Among the last functions to master are proper Sizing for Output, Sharpening and Color Profile Conversion.


There are two basic types of output, Printing and Digital Display. Each has its own characteristics and limitations, but also some similarities. When developing an image for output and considering Image Quality, you must consider the media on which your image will be viewed.


Note: Do not confuse DPI, PPI and MP. Many people believe these are the same, but they are very different. DPI (Dots Per Inch) describes Dot Density (how many dots of ink a printer can print in an inch of a page). As such, DPI has no bearing on a Digitally Displayed Image. PPI (Pixels Per Inch) describes the Resolution of a device displaying a Digital Image. MP (MegaPixels, or Millions of Pixels) refers to Image Resolution and represents the number of Pixels in the image. Note carefully that MP (i.e., Image Size) alone does not determine Image Quality, as Bit Depth (which controls the number of Colors, Hues and Luminosity values) is also a determining factor.
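To make the PPI idea concrete, here is a quick Python sketch that derives a display’s PPI from its pixel dimensions and diagonal size; the 27-inch 4K monitor is a hypothetical example:

```python
import math

def display_ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels Per Inch = diagonal pixel count / diagonal size in inches."""
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_in

# Example: a 27-inch 3840 x 2160 ("4K") monitor
print(round(display_ppi(3840, 2160, 27)))  # about 163 PPI
```

The same formula shows why a phone screen, being physically small, can have a much higher PPI than a large monitor with the same pixel count.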


Digital Display: When exporting an image for Digital Display, consider where and how your image will be viewed. If your image will be viewed on the Internet, many different platforms and devices (Monitors, Cell Phone Screens, etc) will be used to display it. Many of these devices have very limited Color Profile capabilities (they can only display a limited number of colors) as well as low PPIs. To prevent these devices from automatically recalibrating the colors in your image, it is best you do this yourself. In many cases, a Color Profile of sRGB and a PPI as low as 72 is good for most Internet viewing platforms.


Various Social Media environments have different recommended sizes for optimal viewing. As society’s devices and technologies change, so can the recommended Image Resolutions. Some of the most common Social Media Recommended Sizes (in Pixels) for 2019 are:

  • Facebook       
    • Profile picture size: 180 x 180
    • Cover photo size: 851 x 315
    • Link image size: 1200 x 628
    • Image post size: 1200 x 900
    • Highlighted image size: 1200 x 717
    • Event image size: 1920 x 1080
    • Video size: 1280 x 720
    • Maximum video length: 240 minutes
    • Ad size: 1280 x 628
    • Video ad size: 1280 x 720
    • Story ad size: 1080 x 1920
  • Instagram
    • Profile picture size: 180 x 180
    • Photo sizes: 1080 x 1080 (square), 1080 x 566 (landscape), 1080 x 1350 (portrait)
    • Stories size: 1080 x 1920
    • Minimum video sizes: 600 x 600 (square), 600 x 315 (landscape), 600 x 750 (portrait)
    • Maximum video length: 60 seconds
    • Minimum image ad size: 500 pixels wide
  • Twitter           
    • Profile picture size: 150 x 150
    • Header size: 1500 x 500
    • Post image size: 1024 x 512
    • Card image size: 1200 x 628
    • Video size: 720 x 720 (square), 1280 x 720 (landscape), 720 x 1280 (portrait)
    • Maximum video length: 140 seconds
    • Ad size (image): 1200 x 675
    • Ad size (video): 720 x 720 (square), 1280 x 720 (landscape), 720 x 1280 (portrait)
  • LinkedIn
    • Company logo size: 300 x 300
    • Cover photo size: 1536 x 768
    • Dynamic Ads size: 100 x 100 (company logo)
    • Sponsored Content image size: 1200 x 628
    • Personal pages:
      • Profile picture size: 400 x 400
      • Background photo size: 1584 x 396
      • Post image size: 1200 x 1200 (desktop) 1200 x 628 (mobile)
      • Link post size: 1200 x 628
      • Video size: 256 x 144 (minimum) to 4096 x 2304 (maximum)
      • Maximum video length: 10 minutes

Other than Social Media, if you know your image will be displayed on a high-Resolution platform (e.g., the Huntington Library’s 4K HDTV used in HCC Image Competitions), you will want to find out and conform to the optimum Resolution and Color Profile of the device being used (in the case of the Library’s 4K HDTV, this means creating an Image Size of 3240 x 2160 Pixels with a Color Profile of Adobe RGB). Displays with higher resolutions render Pixels smaller but at greater density, so they will look sharper and display better Color than lower-resolution devices.


Printing Your Image:

As with Digital Displays, when printing an Image, Color Profiles matter. If your printer is not capable of producing the colors in your image, it will automatically disregard those colors and replace them with what “it” thinks are the next closest colors. This could result in banding and/or color shifts. Make sure the Printer you are using can handle the Color Profile set in your image. You can check your Printer’s Manual or Specification Sheet for this information or if necessary, contact the Printer’s Manufacturer. If necessary, learn how to Manage and/or change the Image’s Color Profile during output.


Image Resolution / Image Size also matters. Image Resolution is the number of Pixels along the width and height of your image. The more Pixels Per Inch, the sharper your image can appear when viewed. Also, the more Pixels in your image, the larger that image’s File Size will be. An image that is 3744 x 5616 Pixels is considered to be 21 MegaPixels (3744 x 5616 = 21,026,304 Pixels, or roughly 21 Million Pixels, rounded to 21 MP).
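That megapixel arithmetic can be expressed as a one-line helper; a minimal Python sketch:

```python
def megapixels(width_px: int, height_px: int) -> float:
    """Image resolution in megapixels (millions of pixels)."""
    return width_px * height_px / 1_000_000

# The 3744 x 5616 example from the text:
total = 3744 * 5616                          # 21,026,304 pixels
print(total, round(megapixels(3744, 5616)))  # 21026304 21
```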



Note: The number of Megapixels should not be confused with the number of colors in your image as many of the Pixels may contain the same color.


In most cases, if your image has too many Pixels, it will be ok; in some cases, however, the printer may need to determine how to handle the extra Pixels. If your image is too small and does not have enough Pixel information, your printer will also automatically try to compensate. This is similar in concept to stretching a rubber band: by increasing the output size without increasing the number of Pixels, you are telling the printer to do the best it can by spreading out the existing Pixels. These printer calculations can result in Pixelation, artificial artifacts and/or unsharp images. To ensure your image is sized properly, do some simple math:

  1. For normal printing, images usually print best when their PPI (Pixels Per Inch) is set in the 150-300 range (although some high-end printers may allow for higher PPIs), with 300 usually being optimal (check your Printer Manual for what PPI works best). Thus:

  2. If you wish to print a 4” x 6”, you will need an image that is (4 x 300 = 1200 Pixels on the Short Side) X (6 x 300 = 1800 Pixels on the Long Side) = 1200 x 1800 Pixels.

  3. The same mathematical formula can be used for basically any size image:
    1. 8” x 10” = (8 x 300 = 2400 Pixels) X (10 x 300 = 3000 Pixels) = 2400 x 3000 Pixels = 7.2 MP
    2. 13” x 19” = (13 x 300 = 3900 Pixels) X (19 x 300 = 5700 Pixels) = 3900 x 5700 Pixels = 22.2 MP
    3. 20” x 30” = (20 x 300 = 6000 Pixels) X (30 x 300 = 9000 Pixels) = 6000 x 9000 Pixels = 54 MP
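The print-size math can be sketched as a small Python helper; 300 PPI is used as the default target, per the rule of thumb in the text:

```python
def pixels_for_print(width_in: float, height_in: float, ppi: int = 300) -> tuple[int, int]:
    """Pixels needed on each side to print a given size at a given PPI."""
    return round(width_in * ppi), round(height_in * ppi)

print(pixels_for_print(4, 6))   # (1200, 1800)
print(pixels_for_print(8, 10))  # (2400, 3000) -> 7.2 MP
```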

During Photo Editing you can resize your image for the optimal output. When doing so, your editing software will use various algorithms to try to obtain the desired size and resolution. If resizing an image to be larger, be careful to do so in small increments (no more than 10% each time). This allows the Photo Editing software to more accurately create additional Pixels by sampling the surrounding Pixels. Resizing in larger jumps can cause your image to lose sharpness and could introduce color shifts. Some Photo Editing products currently available will do much of this resizing for you, but it may be best to first learn the technique yourself.
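The 10%-at-a-time guideline can be sketched as a simple step plan in Python. The function only computes the intermediate pixel widths the editing software would be asked to produce; the 10% cap is the rule of thumb from the text, not a fixed standard:

```python
def upsize_steps(current_px: int, target_px: int, max_step: float = 0.10) -> list[int]:
    """Plan incremental enlargements of at most max_step (10%) each."""
    steps = []
    size = current_px
    while size < target_px:
        # Grow by at most 10%, but never overshoot the target
        size = min(round(size * (1 + max_step)), target_px)
        steps.append(size)
    return steps

print(upsize_steps(1000, 1300))  # [1100, 1210, 1300]
```

Each intermediate size would be fed to the resize command in turn, giving the interpolation algorithm nearby pixels to sample at every stage.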


Managing Color – You or Your Printer?

In Photoshop, when viewing the Print Dialog box, you are presented with several confusing but important decisions. Chief among these are selecting an “ICC Profile”, “Color Handling” and “Rendering Intent”. Without getting into a long discussion of Printer Setups and Printer Setting Options (which would probably be handled in another session): if you have Color Calibrated your Monitor and correctly set your image’s Color Profile and Image Resolution for the size you are printing, you will want to (1) select the ICC Profile that best matches your Printer and Paper, (2) select “Photoshop Manages Colors” for Color Handling, and (3) select “Relative Colorimetric” for Rendering Intent.










Sources & Additional Information


Understanding & Managing Image Quality - Part I  

Understanding & Managing Image Quality

By Marc F Alter

What Is Image Quality?


When we take a picture, we are creating a digital image. Besides the equipment (and its capabilities) and our own picture-taking abilities, the quality of our images depends on several factors including Bit Depth, Color Space, and File Types. These factors start with image capture and lead into post-processing. Other factors such as Pixelation and Noise also need to be considered during image editing and output.


The better we understand the factors that go into creating an image, the better we can make decisions on how to manage and control them with the goal of creating better quality images.


Understanding Digital Images:

Like a mosaic tile, a digital image is made up of millions of Pixels (commonly known as Megapixels). When we zoom out, the pixels all seem to merge together, thus creating the colors and shapes of our image.

When we zoom in and look at the pixels (typically shown in Photoshop as square dots), we can see the individual colors and their shades of brightness (luminosity).



A Pixel is the smallest color unit in a digital image. Each pixel contains information (data) made up of three colors: Red (R), Green (G) and Blue (B). When a picture is taken, light travels into the camera through the lens and stimulates the sensor, which then records millions of pixels onto the memory card. Each Pixel is recorded with an RGB color value as well as a (greyscale) brightness value. These values are set at certain intensities which, when viewed, merge to make up individual pixel colors (each combination of color and brightness is represented by a unique number). The total number of colors and brightness values you have available to work with is based on the total range of numbers recorded in the digital image file. This range is called Bit Depth (more about this later).


Pixel sizes and shapes can vary based on the device (camera) creating the image as well as the device displaying the image (monitor, printer, projector, etc). The number of Pixels used or displayed is known as Pixel Density and is usually expressed in Pixels Per Inch (PPI) or Pixels Per Centimeter (PPCM). This is not to be confused with Dots Per Inch (DPI) which is the number of ink spots printed on paper to create a printed image.


Images with more Pixels are considered higher-resolution images because they can provide finer details and allow for larger displays of smooth, continuous tones and colors. There is a common misbelief that the more megapixels captured, the greater that image’s resolution will be. This, however, is not always the case. Both the size of the Pixels and the number of Pixels are important factors when reviewing an image’s resolution and working to improve image quality. Typically, the larger the camera’s sensor, the larger the Pixels that can be captured. Larger camera sensors also allow for a greater number of Pixels to be recorded. The greater the number of larger Pixels, the cleaner the image will be, with less noise and finer delineations between colors, highlights and shadows, shapes and patterns.


The output media is the final determining factor for image resolution / image quality. The goal for obtaining good image quality is making sure you have just enough Pixels in your image to allow the greatest image quality to be displayed. Not having enough Pixel colors for the size and type of output will hurt your image. If you have a low Pixel Density for a given output size, the viewer will start to see the individual Pixels instead of the mosaic of Pixels working together. This is known as Pixelation and can greatly hurt your image quality. Likewise, not having enough Pixel brightness values for the size and type of output will hurt your image by creating an effect called Banding.


Having too many Pixels can also hurt your images. If you upload very large image files to web sites or social media networks, your image may be too slow to load. Also, very large image files are subject to automatic downsizing to fit the site or application. This automatic process may delete pixels indiscriminately, with no intelligence about what may or may not be critical in your image.


When our camera records the image onto its sensor, it does so within its designed capabilities and limitations. Sometimes, however, the technology is not developed enough to properly record the scene given the desired camera settings. This also occurs when the camera sensor is subject to extreme dynamic ranges and/or changes in temperature. During certain camera techniques (such as extremely slow shutter speeds, low-light conditions, or high-sensitivity settings) the sensor may heat up. When this occurs, random pixels may be incorrectly set, resulting in what is known as Digital Noise (similar to the appearance of grain in film photography).


To some degree, there are steps you can take to avoid or reduce the effects of Pixelation and Noise. When taking pictures using extreme settings, look for camera features designed to reduce possible problems (e.g., Noise Reduction). When editing, try to crop as little as possible, as cropping permanently deletes Pixels. Also, try to limit the number of adjustments you make. Each adjustment changes the characteristics of the image’s Pixels and thus has the potential to add aberrations and/or noise. Many times, there are Photo Editing Tools and Filters (such as Blur and DeNoise) that can be used to change a Pixel’s settings so they are similar to other nearby Pixels. During output, know your image size and resolution compared to the intended output media and size. Finally, when enlarging image size, take special notice of the algorithm used and do so in small increments.


Bit Depth:

We use computers to process our digital images. Computers, in their most basic form, only understand instructions built from two unique digits: zeros and ones (Off or On). As such, computers work using a binary system known as Base 2. In such a system, to represent more numbers (more information), additional zeros and ones are used:


Example: In Base 2, the number “Two Hundred Fifteen” is translated into 11010111.


Each pixel in a digital photograph contains both color and luminosity information, with each combination being represented by a number. The range of numbers we have to work with controls the number of colors and hues we have to work with. If we used 1 Bit, our range would be limited to one digit (0 or 1), thus only allowing for pure black and pure white. With 8 Bits, our range can go from 0 to 11111111 (8 binary positions, or 256 values). Using 16 Bits, our range can go from 0 to 1111111111111111 (16 positions, or 65,536 values). Therefore, larger bit depths allow us a greater range of numbers and thus a greater range of colors (color values, hues, and luminosity).




Looking at the above Black & White examples, there are more “Black, White, Shades of Grey” available with each larger Bit Depth.


Although both 8-Bit and 16-Bit files allow for 16.7 million pure colors, it is the luminosity (greyscale brightness) depth that allows for greater image quality. 8-Bit files allow for 256 shades of grey (luminosity) per channel. That is 2^8 for each (R, G, B) channel, equating to 2^24 = 16,777,216 colors. A 16-Bit file manages the same 16.7 million colors but allows for 65,536 shades of grey (luminosity) per channel (2^16). That equates to 2^48 across the three (R, G, B) channels, resulting in 281,474,976,710,656 color values. Thus an 8-Bit file only allows for 16.7 million possible color values while a 16-Bit file allows for 281 trillion possible color values.


Looking at the above Color examples, there are more “Color Gradients” available with each larger Bit Depth.


As the human eye can only perceive about 7-10 million colors, why use larger bit depths? The answer lies in our photo editing process. The difference between 8 Bits (16.7 million color values) and 16 Bits (281 trillion color values) is that the larger number of colors and luminosity values allows for smoother gradations between colors. Thus, an 8-Bit image containing sky may show color banding instead of a smooth color transition, while a 16-Bit file allows programs such as Photoshop to preserve smooth color gradations. In fact, the more editing steps we perform, the greater the chance of color degradation; working with 16-Bit files helps us reduce or eliminate color breakdowns in our images.


Note: Given the above, why not use 32 Bits instead of 16? Several reasons: (1) most “non-professional / commercial” software programs and features are not available for 32-Bit images; (2) 32-Bit image files are much larger than 8 or 16-Bit files (thus they are harder to use and take up more file space); (3) it is hard for most people to “see” the differences between 16 and 32-Bit images.
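The bit-depth arithmetic above is easy to verify in Python:

```python
# Base-2 representation of 215, as in the earlier example
assert format(215, "b") == "11010111"

# Values per channel at each bit depth
print(2 ** 8)         # 256 shades per channel (8-bit)
print(2 ** 16)        # 65,536 shades per channel (16-bit)

# Total color values across the three RGB channels
print(2 ** (8 * 3))   # 16,777,216 (~16.7 million, 8-bit)
print(2 ** (16 * 3))  # 281,474,976,710,656 (~281 trillion, 16-bit)
```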


Color Space (Color Profile):

When working in the digital world, different devices handle colors with different capabilities. Some devices, like low-end monitors, cell phones, tablets, etc, can only display a limited color range (their color gamut is small). Other devices, like high-end monitors, can display a greater color range (their color gamut is large). Still other devices, such as inkjet printers, have different color-reproduction capabilities. Typically, when a device cannot handle a given color, it automatically downgrades the color to the next closest one it can handle.


To make matters even more difficult to manage, each person’s brain and eyes have different capabilities for detecting light and color and our minds interpret these impulses somewhat differently based on our own capabilities, experiences, and training. (See Appendix – How We See Light – for more information about this)


Color Space (also known as Color Profile) represents the range of colors and tones that are possible on a particular device. In digital photography, RGB (Red, Green & Blue) primaries are used to create the colors that we see in an image; all other possible colors are created from these 3 primary colors. Photoshop offers several Color Spaces to choose from, including Adobe RGB (1998), Apple RGB, ColorMatch RGB, Display P3, ProPhoto RGB, and sRGB IEC61966-2.1. For photography, the three most common are sRGB, Adobe RGB, and ProPhoto RGB. Each has its own capabilities, limitations and uses.



sRGB (also known as sRGB IEC61966-2.1). This color space was originally created in 1996 by HP and Microsoft to use on monitors, printers, and the Internet. Although our technology and its associated capabilities have dramatically improved over the years, this is still considered the "default" color space for most images that are displayed (especially on “the Web”).


Adobe RGB (1998): This color space was created by Adobe in 1998. It was designed to include most of the colors available on CMYK (Cyan, Magenta, Yellow and Black) professional printers, but using the RGB primary colors. As such, this color space includes the same number of colors as sRGB but with a greater range of intensity and tones. Adobe RGB (1998) typically produces richer highlights, mid-tones and shadows. It is a wider color space than sRGB, encompassing approximately 50% of all visible colors, and it is a good choice for editing in 8-Bit or 16-Bit modes, carrying more information for printing.


ProPhoto RGB (also known as ROMM RGB – Reference Output Medium Metric): Developed by Kodak in the late 1990s, this color space was created especially for advanced printed image reproduction using an extremely large color gamut. It covers the largest range of colors, going even beyond what our eyes can see. It is believed to encompass over 90% of all possible current surface colors and 100% of likely occurring real-world surface colors. One downside to this color space is that approximately 13% of its representable colors are imaginary colors that do not exist and are not visible.


So which Color Space should you use? It is always best to use the Color Space that will give the best viewing experience given the use and output. This is especially true when editing photos.


sRGB: If you are only displaying your images on the web or a low-end monitor, or printing small (4”x6” or 5”x7”) images, using sRGB as your Color Space will work most of the time. sRGB has the smallest range of tones and colors of the 3 most popular color spaces, but it is the most versatile and widely used. It is supported by almost all cameras, screens and image-viewing software. If you want to keep things simple and avoid color-shift problems during editing or sharing, your best bet is to shoot and edit files in this color space. This approach, however, will limit how you display and/or print your images in the future (unless you have saved your original RAW image and decide to start your editing all over again).


If you belong to a Camera Club, are entering your Images in Competitions, are giving Presentations with updated technologies and/or are learning and growing in your use of digital photography, you will eventually find this Color Space limiting.


Adobe RGB: If you want control over color and tones for editing, in most cases you will want to edit your images using Adobe RGB. By using Adobe RGB, you will obtain much richer color when displaying on high-end monitors or projectors, or when printing on coated paper. sRGB might be ok for images with skin tones or a softer mood, but Adobe RGB will give much better results for landscape, food, architecture and many other natural settings.


If you are the kind of photographer who likes to control every aspect of your workflow and print your images at home on advanced inkjet printers, then you should use Adobe RGB. If submitting work for publication, many will explicitly ask you to provide images in Adobe RGB because theoretically it has a wider color range.


If you do decide to use Adobe RGB, you need to be aware of some limitations. Adobe RGB is not supported by all browsers; if you are displaying your images on the web, people viewing them will most likely see slightly different colors. Also, Adobe RGB compresses colors, and only specialized image-viewing software can expand them back to reproduce the full gamut; other programs do not support this color space and may make the image look dull. So when you share your images (especially on the web), you may want to convert them to sRGB. This creates an additional step in your workflow, but it will be worth it. Finally, if you send your images to a print lab, most labs work in the sRGB color space (unless they specifically mention a different color space), which means your prints would have incorrect (dull) colors if printed with an Adobe RGB profile.


ProPhoto RGB

If you are a perfectionist who prints on high-end inkjet printers and wants to make use of the entire color range visible to the human eye, and even some imaginary colors (this color space does use colors that do not exist in the real world), you should use ProPhoto RGB. You will, however, be forced to use very specific steps in your workflow. This Color Space requires shooting in RAW format and opening images in the ProPhoto RGB color space in 16-Bit (minimum) mode. If using this Color Space, you must save your files in a format that supports 16 Bits (e.g., PSD or TIFF). Your printer will also have to support this format.


Because of these complexities, this Color Space is only recommended for photographers who have very specific workflows and who print on specific high-end inkjet printers that can take advantage of such a wide range of colors.


Making Your Final Decision: sRGB vs Adobe RGB

When selecting a Color Space, it is helpful to understand what type of image you have and where, and on what device, your image will be viewed. If you are not sure, it is best to use a Color Space with a large gamut, as you can always “downgrade” the Color Space if and when you need to. No matter what Color Space you use, if displaying images on the Internet, most browsers and devices assume sRGB, so colors and tones outside that gamut are automatically converted.


As you can see from the below example (with 3 shades of Blue), Adobe RGB will give deeper colors and more variation in tones. It is for this reason I highly recommend editing your images using Adobe RGB (1998).




File Types (JPG and RAW):

The two main types of files most digital cameras record are known as JPG (also known as JPEG) and RAW.


JPG (a file format standardized in 1992 by the Joint Photographic Experts Group) is a small, 8-Bit file type, with each color channel (Red, Green, Blue) having 8 bits/channel, resulting in a 24-bit color palette of approximately 16.7 million possible colors. JPG files usually have a .jpg or .jpeg file extension.


There are several advantages to using JPG files. As JPG files are small, they can be recorded on your camera’s memory card very quickly. They can be accessed immediately with little or no processing needed. When they are edited, the process is usually quick, using very limited computer resources (memory, CPU, etc). Also, because of their small size, JPGs are the most commonly used image file type, especially on the Internet and in social media networks. In fact, most programs and devices that display images handle JPGs fairly well.


There are several disadvantages to using JPG files for photography. One of the main disadvantages is that the camera pre-processes the image based on what has been programmed into it by the camera manufacturer. This includes dropping many of the colors picked up by the camera’s sensor. You are thus starting out with a color-limited image, created based on what the camera manufacturer thinks your image should look like. Another main disadvantage of JPG files appears during post-processing. As JPGs are 8-Bit files, they can only contain 16.7 million colors. With a RAW file format, the number of possible colors is based on what the sensor is capable of. Current entry-level DSLR sensor bit depths are usually around 12 (with 14 and 16 also possible). A 12-Bit file allows for 68,719,476,736 possible color values for any given pixel. So that is 16.7 million versus 68.7 billion colors.


A third disadvantage of JPG files is that when saved and closed, the file is compressed and loses information (pixel data). The higher the rate of JPG compression, the more the image quality is reduced. JPG files are considered “lossy”, as data (information) in the file is permanently lost. In fact, every time a JPG file is opened and saved, the file is further compressed and more pixel data is lost (forever). Thus, if you are editing JPG files, it is always best to edit a copy and leave the original file intact.


A RAW file is typically a large file that contains minimally processed data from your camera’s sensor. Currently there are over 500 different types of RAW files, as each camera manufacturer has created file types that are proprietary to its own camera lines. Although each type of RAW file is different, for the most part they share many of the same features and capabilities.


A RAW file is the digital equivalent of a negative. It contains the “raw” data taken when the light passes through the lens, hits the sensor and is recorded onto the memory card. This is mostly unprocessed data. In most cases a RAW file contains the ingredients for a wide dynamic range of colors, shades and luminosity. Unlike JPG pictures that are pre-processed by the camera, RAW files need to be processed using photo editing software. In most instances RAW files need several different edits (e.g., contrast enhancement, white balancing, sharpening) to output the best possible image. RAW files contain not only pixel information but also Metadata, which records additional information such as Camera Make & Model, Lens Make & Model, Exposure Settings, and even Copyright Information. This data is often used for filtering, sorting and cataloguing images. Many RAW files also contain an embedded JPG which is used by the camera’s LCD Display and can later be extracted into a separate file.


Some of the advantages of using RAW files are the number of pixels available, a wider dynamic range and color gamut, more available shades of color, and the ability to adjust the Color Space and White Balance after the image has been taken. RAW image files can be merged (blended) to create high-dynamic-range images or focus-stacked images, or stitched together to create panoramic images. RAW files give you greater latitude to recover images that are over-exposed (too light) or under-exposed (too dark).


RAW image files are also “lossless”, meaning they are not lossily compressed and thus do not suffer from image-compression artifacts. When editing a RAW file, you must select another file type to save your changes. As a result, the original RAW file is left intact (considered non-destructive).


Another important advantage of using a RAW file is the availability of up to approximately 281 trillion possible color values (versus the 16.7 million colors available in 8-Bit JPG files). But the biggest advantage of RAW files is choice. Using RAW, after taking your picture you have many different options for how you may wish to edit, output and/or use your image. With JPG, not only is the image pre-processed, but your choices are also limited.


The main disadvantage of RAW image files is that they must be post-processed. Other than photo editing software, most programs and devices cannot “read” and display a RAW file. Processing digital images takes time and experience. Many programs do not allow you to print RAW files, so they must be saved in another format. As a result, if you are taking pictures for someone else (a friend or a client), you probably cannot give them the RAW file.


RAW files are large, so they take more time for the camera to record onto the memory card. When shooting fast sequences, you may need to purchase faster (and more expensive) memory cards and/or cameras with larger buffers. Large RAW files also require more computer resources to process and more storage space. Finally, as each RAW file type is proprietary to the camera manufacturer, you cannot guarantee future photo editing/viewing programs will have the necessary software to decode (open) these files.






Appendix Glossary:

  • Basic Color Terms to Understand
  • How We See Light
  • Other File Types


Basic Color Terms to Understand:


Color – A visual perception of light that allows us to differentiate between otherwise like objects. This perception is derived from a combination of Hue, Saturation and Lightness (Luminosity). Color is usually used as a general term to describe every combination of Hue, Tint, Tone and Shade.


Hue – The attribute of color that allows us to differentiate between continuous colors. Reaching the cones of our eyes, Hue is made up of chromatic signals derived from the light’s wavelength. Hues typically refer to the Color Family from which they derive, consisting of Primary Colors [Red (R), Green (G) and Blue (B)] as well as combinations of these colors into Secondary Colors [Yellow (Y), Orange (O), and Violet (V)]. Hues are usually perceived as bold and exciting.


Saturation – The degree of difference of a color from a grey of the same lightness. Saturation defines a range (from 0% to 100%) given a constant level of light. A pure color is considered 100% saturated, while pure grey (absence of that color) is considered 0%.


Luminosity (also known as Lightness Value) – The relative brightness of light. Reaching the cones of our eyes, Luminosity is made up of achromatic signals (light / greyscale) derived from the light’s energy at a specific wavelength. Luminosity defines a range from pure dark (0%) to pure light (100%). By changing a color’s lightness value you can make a color lighter or darker.


Tone – The result of mixing pure colors with greyscale colors, excluding extreme White and extreme Black (applying Luminosity values). By adding grey to a pure color, you are changing that color’s Tonal Value. As such, a Tone is “softer” than the original color. Hue and Tone do not represent the same colors: Hues are created by mixing pure colors while Tones are created by mixing color with grey. Tones are usually perceived as subtle.


How We See Light:

Color Theory is based on our ability to “see light”. This encompasses our eyes’ capability to absorb light wavelengths, our mind’s ability to process this light and our brain’s ability to interpret gradations in light.


Our eyes have photoreceptors called Cones. There are 3 types of Cones; S-Cones which are sensitive to short-wavelength light (Blues), M-Cones which are sensitive to medium-wavelength light (Greens) and L-Cones which are sensitive to long-wavelength light (Reds). The signals from the combination of these Cones make up our perception of color.


As humans, we have some commonality in how our eyes pick up and interpret light, but there are also differences between individuals. Some individuals are naturally more sensitive to light while others are less so. As a result, most individuals interpret medium light wavelengths (Blueish / Yellowish light) as Green, but not all. Some individuals have color deficiencies due to Cones that are defective, damaged or missing. Also, through experience and training, individuals can learn to use their light sensitivity to a greater extent and become better at interpreting light’s nuances.


Other File Types:

Besides JPG and RAW, there are many other file types you can use (although most digital cameras only allow you to capture in JPG, RAW or both). In Photoshop, there are over 25 different file types to choose from but most photographers only use a few. Some of these are:


PSD (Photoshop Document) – This is Photoshop’s proprietary file type and the default (unless you change it). PSDs are specifically designed to support all of Photoshop’s capabilities including layers, layer masks, adjustment layers, channels, paths, etc. This file type’s extension is *.psd.


TIFF (Tagged Image File Format) – Originally developed in 1986 as an industry standard to overcome proprietary scanned-file formats, TIFF files have been enhanced to include greyscale and then color graphics. Today TIFF is one of the most commonly used formats amongst digital artists, printers, publishers and photographers. TIFF files can be either uncompressed or compressed using lossless compression, and can grow to become large files. TIFF files support both 8-Bit and 16-Bit Depths. TIFFs support CMYK (common in print workflows) as well as many other color profiles. This file type’s extension is *.tif


GIF (Graphics Interchange Format) – This is one of the oldest graphic file types, having been developed before JPGs. All major web browsers support GIF files, which are usually used for displaying web graphics. GIF files are usually small and can even support simple animations and transparencies. GIFs can only display 256 colors and therefore are not good for most photographic images. This file type’s extension is *.gif


PNG (Portable Network Graphics) – Originally designed to replace GIF files, PNGs can handle up to 48-Bit color, resulting in trillions of possible colors (compared to JPG’s 16.7 million possible colors). PNG files are “lossless”, so they can be saved over and over without losing pixel information in the process. PNG files however are not as widely supported as JPGs or TIFFs and thus their use is limited.
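The color counts quoted for JPG and PNG come directly from bit depth; a quick check (a Python sketch, counting bits across all three channels):

```python
# Number of representable colors for a given total bit depth:
# each of the 2**bits combinations is a distinct color value.
def color_count(total_bits: int) -> int:
    return 2 ** total_bits

# JPG: 24-bit color (8 bits x 3 channels) -> ~16.7 million colors
print(color_count(24))   # 16777216
# PNG: up to 48-bit color (16 bits x 3 channels) -> ~281 trillion colors
print(color_count(48))   # 281474976710656
```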


DNG (Digital Negative) – Developed by Adobe as a universal format to replace the hundreds of proprietary camera-manufacturer RAW formats. As DNG files were originally developed for Photoshop/Lightroom image files, like PSDs they are specifically designed to take full advantage of Adobe’s product line features. DNG is an open standard, meaning the specification is freely available to the industry. Unfortunately, industry acceptance of DNG files is still in question, with many manufacturers unwilling to give up their RAW formats and users not knowing if DNGs will last far into the future.


Sources & Additional Information



(Marc F Alter) Mon, 28 Oct 2019 04:39:40 GMT
The Rules of Composition (Well, they are really more like Guidelines than Rules)

What Are The Rules Of Composition (and why do we care)?

As we have previously discussed, Photography is not only about light. There are many different factors and techniques that help make good and great images. Many of these focus on composition and, having been developed over time, are well known and collectively coined “The Rules of Composition”. That being said, different people will recognize different sets of these rules, perhaps thinking that some are more important than others.


Some Common Rules & Guidelines Of Composition

  • Rule of Thirds – Probably the most well-known of all the Rules, this tells us to draw 2 evenly-spaced vertical and 2 evenly-spaced horizontal lines across our frame; the cross points (Power Points) where they meet are the best locations for our Subject.


This rule works because when we look at a Subject centered in the frame, our eyes have nowhere else to go (other than looking at our Subject). The image is static and our mind gets bored. When we place our Subject at one of the cross points, our mind searches other areas of the frame to bring it into context. These images are considered dynamic as they entice our mind to move around the frame. This movement makes looking at our image more exciting.


  • Rule of Odds – This is the next most common Composition rule. This rule tells us that our images are more appealing when we have an odd number of Subjects (1, 3, 5, etc).


This is because odd numbers create “image tension”, which in turn creates a “visual dynamic”. Care must be taken when using this rule, for if there is more than one Subject, they usually must relate to each other or to the story our image is telling.


  • Lines (Leading Lines, Diagonal Lines, Curved Lines, etc) – Another common and well-used compositional technique is the use of lines to “lead” our Viewer into our image and to our Subject. Lines can be made up of many different elements including man-made lines (ie; roads, paths, bridges), natural lines (streams, rivers, edges of objects) and perceived lines (lines we imagine in-between or leading to different objects, sometimes created by differences in lights and darks or patterns).


When using lines, it’s important to understand what they do within our image. Do they lead our Viewer to or away from our Subject? Are they straight lines creating a contrast between different elements or diagonal lines creating visual movement? Do the lines form “shapes” like rectangles or triangles that create visual tension and help to give our objects form and structure?


  • Simplification – This rule tells us that it’s better to keep the image simple and clear of distractions so that our Subject is the primary focus. In a sense, this rule is about “elimination & reduction”, removing anything in the image that does not serve the purpose of supporting our Subject. When using this rule, remember, less is more!!!


Although many of us may want to include environmental elements in our images (to give our images a sense of time and place), doing so, even a tiny bit, may distract the Viewer from the Subject and should therefore be avoided.


  • The Golden Rule (also known as the Golden Ratio Rule, Fibonacci’s Ratio, Phi Grid, etc), Golden Triangles & Spirals – These rules help to define the placement of various elements in our image and, when followed, they create a visual flow between the objects. These rules are typically derived from mathematical formulas (ie; GR = a/b = (a+b)/a ≈ 1.618, created by dividing a line into two parts so that the longer part divided by the smaller part is equal to the whole length divided by the longer part. Did you get that?).


Appearing in many forms of nature and science (including flower petals, spiral galaxies, sea shells, etc), in famous architecture (such as the Parthenon and Egyptian pyramids) and in famous works of art (such as the Mona Lisa, the Last Supper, and The Birth of Venus), studies have shown that when the Golden Ratio is used to define the placement of our Subjects, Viewers find these images more attractive than when it is not used (ie; when observers view random faces in an image, the faces they feel are most attractive are those where the Golden Ratio proportions define the width of the face and the width of the eyes, nose, and eyebrows).


Not a mathematician? The Rule of Thirds combined with Leading Lines may approximate a simplified version of these formulas.
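For the numerically inclined, the defining property can be checked in a few lines (a Python sketch; the frame is treated as a line of width 1.0 for illustration):

```python
# Golden Ratio: divide a line so that (a + b) / a == a / b, where a is
# the longer part. Solving the equation gives phi = (1 + sqrt(5)) / 2.
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2              # 1.6180339887...
a, b = phi, 1.0                      # longer part a, shorter part b
assert isclose(a / b, (a + b) / a)   # the defining property holds

# Placement guides across a frame of width 1.0:
golden_point = 1 / phi               # ~0.618 from one edge (~0.382 from the other)
thirds_point = 1 / 3                 # ~0.333, the Rule of Thirds approximation
print(round(golden_point, 3), round(thirds_point, 3))
```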


  • Rule of Space – This rule is about motion, or at least the perception that our Subject has the ability to move. As such, leaving “just enough” negative space for our Subject to move into creates a natural flow in our image.


When creating an image with a moving object, you want to place the Subject with some space behind (to give the impression of where your Subject has been) and a greater amount of space in front (to create the impression of where your Subject is going).


  • Fill The Frame – This rule tells us to get close and fill a significant amount of our picture with our Subject. Following this rule accomplishes several goals. Mostly, it allows our Subject to be obvious while eliminating any possible distracting objects.


How do you follow this rule? Identify your Subject and fill as much of the frame with it as possible. Then crop in tight, look at it, and crop again. Don’t be afraid of cropping too much as long as the Subject is in focus. Don’t be afraid of cropping off some of your Subject (the head and body) to emphasize other parts of your Subject (the eyes). The larger the Subject in our image, the more detail and interest our Subject may have.


  • Balance & Symmetry – This rule is one of the least understood in Photography. Balance and Symmetry imply that the left and right (or top and bottom) of our image draw the Viewer’s eye equally. Balance helps to add a calmness to an image but can also lead to an image being static and uninteresting. Images that are out of balance can create visual tension and make our Viewer uncomfortable. Out-of-balance images are not necessarily bad, as they can define a sense of flow and movement for our eyes by leading our Viewer into the image and supporting the Subject.


When deciding between Balance and Out-Of-Balance, first decide what type of story and emotions you are trying to achieve and then determine how the placement of elements in your image supports this story.


  • Patterns, Textures & Shapes (Oh My) – This rule tells us that using such objects in our image can help to give depth, structure and a sense of rhythm and balance. Patterns, textures and shapes can be found almost everywhere; we only need to look for them. One of the main ways of using patterns is to keep them isolated from their surroundings.


  • Color – This is one of the first things Viewers notice about your image (even if they don’t realize it). Intense, well-saturated color with sharp contrasts makes people notice your images, but you must be careful. Color can grab the Viewer’s attention, but it also sets the mood. Different colors set off different moods; warm colors (yellow, orange and red) usually create a warm and comfortable feeling while cool colors (blue and black) elicit feelings of coldness and isolation. When using color, don’t overdo it or your image will seem fake and un-natural. De-saturating your images, or converting them to Black & White, allows your Viewers to focus on other aspects of your image, such as shapes and textures.


  • Depth and Depth of Field (DOF) – Photography is a 2-dimensional medium expressing a world that is in 3 dimensions. One of the ways of simulating this depth is by creating a foreground, middle ground and background. This can be done using objects and/or layers of light and dark. Another method is restricting what is in focus and what is not. By keeping our Subject in focus and gradually reducing the focus of objects farther away, we create the illusion of depth (this can be done even with objects that are close, as in flower and macro photography).


  • Framing (also known as Aspect Ratio or Viewpoint) – This rule represents the size relationship between the short and long sides of your image (usually width first, then height). Some of this may be predetermined by your camera and sensor (35mm film and most DSLR cameras have a 3:2 aspect ratio; most micro four-thirds cameras use a 4:3 aspect ratio) while cropping can allow you to change this (4”x5”, 8”x10”, and 16”x20” share the 5:4 aspect ratio derived from 4”x5” film cameras; 11”x14” comes close).


How does aspect ratio change how you compose your image? If you take a picture using a 3:2 ratio with your Subject in a rule of third position and then print it on an 8”x10” photo, your Subject will no longer be in the rule of third position.
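A quick sketch of the arithmetic (the pixel dimensions below are hypothetical, chosen only to illustrate the shift):

```python
# Cropping a 3:2 frame to a 5:4 print (e.g. 8"x10") trims the long side.
# A Subject placed on the original 1/3 line no longer sits at 1/3 of the
# cropped frame.
w, h = 6000, 4000                 # a 3:2 sensor, in pixels (illustrative)
target = 5 / 4                    # 8"x10" print ratio (long side / short side)

new_w = round(h * target)         # width of a centered 5:4 crop: 5000 px
left = (w - new_w) // 2           # pixels trimmed from the left edge: 500 px

subject_x = w / 3                 # Subject on the original 1/3 line: 2000 px
new_pos = (subject_x - left) / new_w
print(round(new_pos, 3))          # 0.3 -- no longer at 0.333
```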


What to do? There are several tips you might consider. Before taking a picture, see if you can adjust your camera’s aspect ratio using the camera settings. When taking a picture, leave room along the edges for final cropping during post processing or printing by reducing your lens’ focal length (maybe even breaking the “Fill The Frame” rule). If you do crop during post processing, make sure you turn the “delete pixels” setting off so you can later go back and re-crop if necessary. As a final tip, feel free to “add pixels” using your photo editing software (ie; content aware fill).


  • Orientation – This rule describes how you take and display your images. Typically, landscapes are horizontal (longer) and portraits are vertical (taller). This rule (guideline) does not always have to be followed. Some landscapes can be tall, and portraits can be long. It all depends on your Subject and the story you are trying to tell. While you should always consider your orientation during capture (when taking your picture), you can reconsider this orientation during your post processing (provided you have captured enough pixels around the edges to change the orientation).


  • Watch Your Background – When we first start taking pictures, our backgrounds are often cluttered with too much “stuff” that distracts from our Subject. Backgrounds are notorious for containing distracting elements or bright spots that lead our eyes away from our Subject. This is probably the most common mistake photographers make.


When viewing an image, does every element support the image’s Subject or distract from it? Are you telling a story that MUST contain environmental elements or do they take away from the main story? These are questions you must ask yourself, but to answer, you must understand what elements attract the Viewer’s eyes. Some of the more common background distractions may be:

  • Bright spots – Our eyes are naturally drawn to bright spots and objects (like a fly to a light bulb).
  • Areas of contrast – Likewise, our eyes are drawn to intense contrasts between light and dark.
  • People (and Animals) – Always wary of danger, seeking safety in numbers or looking for food, the natural animal that we are is constantly evaluating our surroundings.
  • Saturated colors – We are drawn to intense bright colors, but do they always tell the story we wish to tell?
  • Items in focus – Always trying to make sense of the world around us, we look at objects and then try to organize them into something recognizable.


  • Photoshopped – Digital photography allows us to edit and create images like never before. From simple adjustments of cropping and sharpening to creating composite images from our ever-growing library of photographic files, we are only limited by our skills and imagination. That being said, just because we can do a thing does not always mean we should.


Great care must be taken so that our image tells the story we are intending. If the story is one of nature, moving our saturation slider too far to the right will make the scene look un-real as our image takes on colors that are un-natural. If the story is one of graphics and patterns, bright, intense, over saturated colors might be just what our image is calling for. After making your post processing adjustments, take a step back and try to fully evaluate what you have done. As in the Rule of Simplification, sometimes less is more.

There are many more Rules of Composition that may or may not help tell the story of your image. Although useful to follow, they do not work for every image, and in fact, many great images can be (and have been) created by “Breaking the Rules of Composition”. Thus, it might be best to think of these “Rules” more as “Guidelines” than as actual rules that must be followed. The Rules (Guidelines) of Composition can be extremely useful when first learning how to make good images and/or in learning why an image is great. It’s also interesting to note that when following some “Rules”, others may be broken.


As the photographer and artist, it is up to you to decide what rules to follow and what rules to break.


(Marc F Alter) Wed, 06 Mar 2019 19:32:46 GMT
I See The Light

By Marc F Alter


Photography: It’s All About Light

Ever since I started with photography, I have been told “It’s All About The Light”. I have often thought, what does this mean? In its simplest form, if I take a picture in a lightless room will I end up with just a black image? Likewise, if I take a picture of a polar bear in a snowstorm, will I end up with just a white image (provided he does not eat me first)? Most likely, yes in both instances.


In fact, the very definition of Photography (“the art or process of producing images by the action of radiant energy and especially light on a sensitive surface…[1]”) involves light. You cannot take a picture without some amount of light. When you click your camera’s shutter, light enters the lens and creates the image onto the film or sensor media. Without this light, there is no image.


In the past, we have discussed the Exposure Triangle and how to get good exposures. This, however, is not enough. Your understanding and use of light can be a deciding factor in determining if your image is spectacular (scoring a 9 of 9) or terrible (scoring a 6 of 9 or worse).


What Is Light?

Light is actually a very complex subject. It’s more than just shining a light on something. Light is a form of electromagnetic energy that travels like a wave. It is part of an energy spectrum, from long waves to short waves, consisting of radio waves, microwaves, infrared waves, visible light, ultraviolet light, x-rays and gamma rays. Scientifically, light has three main properties; Wavelength, Amplitude and Speed.


Wavelength represents what light we can see and what color it “sheds” on our subjects. As humans, we can only “see” light that falls within a limited range of wavelengths (roughly 380 to 750 nanometers) although various instruments and sensors (including our cameras) can detect, measure and capture light we cannot see. Amplitude represents the intensity and brightness of light. Although the speed of light is constant in a vacuum (where it travels at 186,282 miles per second), different wavelengths can travel at different speeds as they pass through different mediums (like particles in the atmosphere).


The Four Main Factors Of Light:

Photographically, light involves 4 main factors; (1) Intensity (2) Quality (3) Color and (4) Direction.


Light Intensity indicates how much light is present and its strength. Too much light and you need to shield your eyes. Not enough light and you may need to introduce another light source to see. Several factors affect light intensity: the amount of energy the light source radiates affects its strength, as does the distance from the light source to your subject. This light energy or output is typically measured in lumens. How does light intensity affect your image capture workflow? Too little light and you will need to raise your exposure. Too much light and you will need to reduce your exposure.
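The distance factor follows a standard relationship (not spelled out above, but worth knowing): the inverse-square law, under which light from a point source falls off with the square of the distance. A small sketch:

```python
# Inverse-square law: illuminance falls with the square of the distance
# from a point light source. Doubling the distance quarters the light.
def relative_intensity(distance: float, reference: float = 1.0) -> float:
    return (reference / distance) ** 2

print(relative_intensity(1.0))   # 1.0    (the reference distance)
print(relative_intensity(2.0))   # 0.25   -> two stops less light
print(relative_intensity(4.0))   # 0.0625 -> four stops less light
```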


While light intensity is something that can be easily measured, Light Quality is more of a visual perception. At the extremes, light can be Soft or Hard. Soft Light usually comes from a light source that is bigger, diffused and/or farther away than the Subject, typically with multiple points of light coming from different directions. Soft light gives evenly spread illumination with low contrast and smooth transitions from lights to darks, highlights to shadows. Shadows are shallow, and colors are more visible (saturated) and pronounced. Hard Light, on the other hand, typically comes from a single, direct light source that is close to the Subject. As such it is harsh and directional. Hard light tends to increase contrast and produces long, sharp shadows. Shadows are deep and color perception is reduced.


Similar to Light Quality, Light Color (also known as Light Temperature or Color Temperature) is both measurable and perceptual. It is typically measured on the Kelvin Scale, but instead of measuring heat, Kelvins (expressed in Ks) serve as a Color Index, representing how we “experience” different light wavelengths. As light sources emit light at different wavelengths, we “see” or “perceive” these light waves as different colors. Examples include Reds and Oranges at around 1000-2000K, Yellows at around 2500-3500K, Daylight (Pure White) at around 5000K and Blues at around 6500K+.
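These example temperatures can be summarized as a rough lookup (a sketch only; the boundaries are approximations interpolated from the ranges given, and real light sources vary):

```python
# Approximate color cast by Kelvin temperature. Boundaries are rough
# interpolations of the example ranges above, not exact standards.
def color_cast(kelvin: int) -> str:
    if kelvin < 2500:
        return "Red/Orange"
    elif kelvin < 4500:
        return "Yellow"
    elif kelvin < 6500:
        return "Daylight (White)"
    return "Blue"

print(color_cast(1500))   # Red/Orange (candlelight range)
print(color_cast(3000))   # Yellow (tungsten range)
print(color_cast(5200))   # Daylight (White)
print(color_cast(7000))   # Blue
```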


Photographically, these colors affect our mood and the mood perceived from our images. Red, Orange and Yellow hues give us a sense of warmth and calm while Blue to Black hues give us a sense of cold excitement. These color temperatures can have a great impact on our images and how our images are perceived. Depending on our lighting source and the color temperature it emits, we may experience a color cast that is unnatural and disturbing or use a color cast to create a desired feeling.


The Direction of Light is another important factor in Photography, especially when the light is not diffused. Lighting Direction represents the relationship between you (and your camera), your subject and the light source. There are three different types of Directional Lighting; Front Light, Side Light and Back Light.


Front Light is the most common as it illuminates subjects evenly. The downside of Front Lighting is that it is sometimes considered flat, as it reduces shadows (which typically add interest and depth to our images).


Side Light is the next most common as it adds drama and texture to an image’s subject. Side Lighting also produces more depth in an image. With Side Lighting, the greater the angle, the greater the shadows and the greater the contrast. Light set at a 30-45 degree angle will produce more even results than light set at a 90 degree angle. Side Lighting may come from the left, right, top or bottom. Great care must be used with Side Lighting as it can easily add to an image (highlighting desired features) or detract from an image (highlighting undesired features).


Back Light is when the light source is behind your subject and the light is shining directly into your camera’s lens. This often makes the background overexposed and your subject underexposed (silhouetted), and sometimes will also produce lens flare (which can vary depending on the angle of your light source as well as the size of your f-stop). Although difficult to use and control, Back Lighting can often add drama and great interest to a photo.


Types of Light

There are two main types of light, Natural Light and Artificial Light. Natural Light comes from two sources; the Sun (known as Sunlight) or the Moon (known as Moonlight). Natural Light (many times also referred to as Ambient Light or Available Light) does not require man-made equipment to be produced (it’s “free” to one and all). Great care however must be taken when using Natural Light as your light source, as its intensity, quality, color & direction are constantly changing.


Natural Light is greatly affected by the weather. At the same time of day, clear sunny weather can produce great amounts of harsh light while cloudy, rainy weather can give softer, more diffused light. You can use Natural Light to your advantage both outside as well as inside (as long as you have an opening or window to allow the light to enter).


There are several different definitions that help to describe the types of Natural Light that occur at different times of the day, each with its own characteristics. Some of these include:

  • Daylight (Also known as Daytime) is when the Sun has risen and is at least 6 degrees above the horizon. During the day the Sun is high overhead. It emits strong and powerful hard light producing harsh illumination with high contrast and sharp transitions from lights to darks. Atmospheric filtering is minimized, with the result that color casts are reduced. As the Sun is high in the sky, light comes in from straight overhead, producing low and short shadows. Color Temperature is approximately 6500K.


  • Dappled Light is daylight that is filtered through tree leaves. The Sun is typically high in the sky and produces uneven diffusion and shadows as the light passes through the leaves onto the Subject.


  • Perpetual Daylight is when the Sun is above the horizon throughout the day. In these cases sunrise and dawn or sunset and dusk are very brief. This typically occurs when you are very close to the earth’s poles and the summer solstice is approaching.


  • Twilight is the light present before the Sun rises and after it has set. The Sun is between the horizon and 18 degrees below it. Morning Twilight is when the Sun is rising while Evening Twilight is when the Sun is setting. There are several different types of Twilight including:


  • Civil Twilight occurs right before the Sun rises or immediately after it has set. The Sun is between the horizon and 6 degrees below it. There is still enough light to be able to see objects using Natural Light. The sky is still bright. Clouds in the western sky are lit with reddish-orange sunlight while the eastern clouds are given a blueish-indigo cast. Depending on the season, in North America, Civil Twilight can last 20-30 minutes.


  • Nautical Twilight (taking its name from when stars would first appear in the sky and Sailors would look at these stars to calculate their bearings) occurs immediately after Civil Twilight when the sky has darkened to a deep blue tone, the horizon line can still be seen, and stars become visible. The Sun is 6 to 12 degrees below the horizon.


  • Astronomical Twilight occurs when the Sun is far below the horizon and the sky is darker. The Sun is 12 to 18 degrees below the horizon.


  • Perpetual Twilight occurs when the sun is below the horizon throughout the day, but never dips lower than 6 degrees below the horizon. Both sunrise and dawn or sunset and dusk are very brief. This can happen near the poles in the spring and fall.


  • Nighttime is when the Sun is at least 18 degrees below the horizon (you cannot see the Sun as the light source as it is below the horizon, so you will just have to take my word for it).


  • Perpetual Nighttime is when the Sun is more than 6 degrees below the horizon throughout the day. Like Perpetual Daylight, sunrise and dawn or sunset and dusk are very brief. This typically occurs when you are very close to the poles or when the winter solstice is approaching.


  • Magical Hours


  • Golden Hour occurs when the Sun is rising, and Civil Twilight is approaching. The Sun is 4 to 6 degrees below the horizon. The landscape starts to become illuminated and the sky starts to turn from reddish-orange to yellow (hence the name “Golden Hour”). The light will be soft, diffused and with low contrast as the Sun is still low in the sky.


  • Blue Hour occurs when the Sun is setting and the end of Civil Twilight is approaching. The Sun is 4 to 6 degrees below the horizon. The horizon line is visible, the landscape is still lit and some of the brightest stars and closest planets are just becoming visible. Light becomes diffused and the sky turns an intense, dark blue. Clouds can be illuminated with bright reddish-orange colors.
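The solar-elevation bands described above can be sketched as a simple classifier (angles in degrees, negative meaning below the horizon; this simplification treats any Sun above the horizon as Daylight and collapses the finer Golden/Blue Hour distinctions):

```python
# Classify natural light by the Sun's elevation angle, following the
# twilight bands described above (degrees; negative = below the horizon).
def sky_phase(elevation: float) -> str:
    if elevation >= 0:
        return "Daylight"
    elif elevation >= -6:
        return "Civil Twilight"
    elif elevation >= -12:
        return "Nautical Twilight"
    elif elevation >= -18:
        return "Astronomical Twilight"
    return "Nighttime"

print(sky_phase(30))    # Daylight
print(sky_phase(-3))    # Civil Twilight
print(sky_phase(-15))   # Astronomical Twilight
print(sky_phase(-20))   # Nighttime
```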

Artificial Light is light that comes from sources other than the Sun or Moon. With Artificial Light, you typically have more control over the 4 factors of light (Intensity, Quality, Color & Direction) than you would with Natural Light.


There are many different types of Artificial Light Sources (with more possibly being invented in the future). Each may have its own characteristics. Of these, the most common in use today are:

  • Incandescent Light (Tungsten) – Usually comes from common house light bulbs. These lights tend to have a warm color cast compared to natural daylight. Typically, this produces harsh light (which is why we commonly use lamp shades to soften or diffuse this light). Color Temperature approximately 2500 – 3500k (Warm White)


  • Fluorescent Light – Usually found in many commercial buildings and offices. In prior years these lights would give off a greenish hue but now there are several different types of Fluorescents, each giving off different color casts (cool or warm white, daylight, etc). With these different types, it would not be so surprising to find different bulbs in the same fixture. Color Temperature approximately 2700k to 6500k (typically Cool White)


  • CFL (Compact Fluorescent Light) – CFLs (the “curly bulbs”) are typically found in warehouses or office buildings where lights are meant to be left on for long periods of time. Recently they have also been appearing in households as replacements for Incandescent (Tungsten) bulbs. CFLs are being phased out as they are hazardous to handle and difficult to dispose of (they contain mercury). Color Temperature approximately 3500k to 4500k (White to Cool White)


  • LED (Light Emitting Diode) – These lights are becoming extremely popular as replacements for CFLs as they are less expensive to buy and operate. They are being used in everything from flashlights to airplane lights. Like Fluorescent Lights, LED lights can come with different color casts: Soft White (2500K – 3000K), Bright White/Cool White (3500K – 4100K), and Daylight (5000K – 6500K).


  • Flash and Studio Strobe – Most on camera and off-camera Flash units and Studio Lights are designed to approximate Daylight (White Light) but some do give off a slightly cool Color Cast. As such, these typically have a Color Temperature of approximately 5000k.


  • Other lights – There can be other light sources that emit light with different Color Temperatures. Some of these are:


  • Fire (Candle Light, Oil Lamps, Kerosene/Paraffin Lamps, Lanterns, Torches, etc) – Used for thousands of years. Color Temperature approximately 1000k to 2000k (Warm Yellow Cast)


  • Metal Halide – This is a type of electrical lamp that produces a high-intensity discharge of light by passing an electric arc through a gaseous mixture of vaporized mercury and metal halides (metals combined with bromine or iodine). Since the 1960s, these lights have been used in automobile headlights and for illuminating sports fields, parking lots and street lights. Color Temperature approximately 3000K to 20,000K
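For quick reference, the approximate ranges above can be collected into a small lookup. This is an illustrative sketch in Python (the warm/neutral/cool cutoffs at 3500K and 5000K are my own rough assumptions, not a standard):

```python
# Approximate Color Temperature ranges (in kelvin) from the list above.
LIGHT_SOURCES = {
    "Fire": (1000, 2000),
    "Incandescent (Tungsten)": (2500, 3500),
    "Fluorescent": (2700, 6500),
    "CFL": (3500, 4500),
    "Flash / Studio Strobe": (5000, 5000),
    "Metal Halide": (3000, 20000),
}

def color_cast(kelvin):
    """Rough label for a Color Temperature: warm below ~3500K,
    cool above ~5000K, neutral in between (illustrative cutoffs)."""
    if kelvin < 3500:
        return "warm"
    if kelvin <= 5000:
        return "neutral"
    return "cool"
```

So a 2700K household bulb reads as warm, while a 6500K daylight fluorescent reads as cool.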

What Is White Balance?

No photographic discussion of Color Temperature would be complete without an understanding of White Balance. Basically, White Balance (WB) is a setting in your DSLR camera that controls how the camera reads the scene’s Color Temperature (the Color Cast, Hue and Intensity of the Light Source) and how that reading is applied to the image.


As we have discussed, with Natural Light, different times of the day can produce light with different Color Temperatures. With Artificial Light, different Light Sources also emit different Color Temperatures. These different Color Temperatures range from very Warm to very Cool, often giving our images a Color Cast that is not true to the actual scene. Typically, we do not “see” these color casts as our minds automatically adjust what we see and “rationalize” these colors as “normal” for the scenes before us. Our DSLR cameras however are not as smart. The sensor captures only what it “sees”.


Most DSLR cameras have White Balance settings with the default set to “Auto”. This allows the camera to try to automatically adjust the image’s recorded colors. Most times the camera gets it right but depending on the WB compensation logic built into the camera and your intent, the colors captured may not be correct for your purpose.


We can compensate for these false color casts by adjusting the “White Balance”. This can be done both “In-Camera” when we are taking the picture and during “Post Processing” (as long as we are saving our images in Raw format).


In-Camera White Balance settings typically include the following:

  • Auto – Use this setting when you want your camera to try and make its best guess on the Color Cast for your images. You can also use this if you don’t understand the concept of White Balance and/or are too lazy to make the WB adjustments yourself.


  • Tungsten – This setting is used for shooting indoors under tungsten (incandescent) lighting. As Tungsten bulbs typically have a warm color temperature, this setting cools down the colors in your photos.


  • Fluorescent – This setting is also used for shooting indoors but when under fluorescent lights. As fluorescent lights typically have a cool Color Temperature, this setting warms up the colors in your photos.


  • Daylight/Sunny – This setting is used when outside on clear, sunny days, when the Sun’s light is close to neutral white. Usually with this setting the camera will make little or no adjustment to the Color Temperature.


  • Cloudy – As the light under cloud cover is often cooler in Color Temperature, this setting will typically warm up the image to better approach the white of “daylight”.


  • Flash – Like clouds, camera flashes can be slightly cool so this setting will slightly warm up the Color Temperature of your shots.


  • Shade – As the light under shade is usually cooler (bluer) than shooting in direct sunlight, this setting will generally warm up the Color Temperature of your shots.


  • Manual WB – In most cases, the above WB Presets can help to adjust the Color Cast of your scene during capture but even with these settings, the colors you capture may not be true or accurate. To compensate for this, you can use a White Card to measure the scene and then manually adjust the camera’s White Balance setting to capture accurate colors.

Some scenes can be more challenging than others when trying to correctly capture colors. If we are shooting and saving our images in Raw format, we can adjust any false Color Casts during our “Post Processing” workflow. Most good digital editing tools now contain White Balance or Color Temperature settings and sliders that are simple and easy to use. Depending on your expertise in these tools, you can even introduce a Color Cast that did not exist but that enhances the mood of your image.
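The white-card technique described above boils down to simple per-channel arithmetic: measure the RGB values the camera recorded for a known-neutral card, then scale each channel so the card comes out gray. A minimal sketch, assuming 8-bit RGB values and using green as the reference channel (a common convention; real editing tools are far more sophisticated):

```python
def white_balance_gains(card_rgb):
    """Per-channel gains that make a measured white-card reading neutral.
    Green is left untouched; red and blue are scaled to match it."""
    r, g, b = card_rgb
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Apply white-balance gains to one RGB pixel, clipping to 0-255."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))
```

A card shot under warm tungsten might read (200, 180, 140); applying the resulting gains returns it to a neutral (180, 180, 180), and those same gains cool down every other pixel in the image.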


How Do You Use Light?

Understanding the different aspects of light is the first step in determining how to use it. In both Image Capture and Post Processing, think about your subject and the mood you are trying to achieve. Use the 4 different factors of light (Intensity, Quality, Color & Direction) and determine how each of these will affect your image; both individually and combined. Some of the more common questions you may ask yourself are:

  • Is the light you are using going to enhance your image or detract from it?
  • Is the light illuminating your subject or competing with it?
  • Are you properly lighting your Subject or is the Subject your Light Source?
  • Are you trying to display a calm, serene scene or one of high drama and excitement and what Color Casts are you using to represent this?
  • Is your Light creating an even tone or harsh deep shadows?

Time of day and the light that is produced is critically important when taking pictures with Natural Light.

  • Daylight (High Noon) – the Sun is high in the sky
    • Intensity: When the Sun is high overhead, it emits strong and powerful illumination.
    • Quality: The Sun’s light is hard, producing harsh illumination with high contrast and sharp transitions from lights and darks.
    • Color: At midday, sunlight travels through less atmosphere, so the scattering effect of dust, moisture and other particles on the shorter (bluer) wavelengths is minimized. The result is a neutral-to-cool light with reduced color casts.
    • Direction: As the Sun is high in the sky, it comes in straight down, producing low and short shadows.

Many photographers believe this is the worst time of day to be taking pictures (especially for landscape and outdoor portrait images) as the Sun is typically high in the sky and thus produces harsh, intensive, top-down directional light.

Personally, I sometimes disagree with this concept as it depends on the type of picture you are looking to create. Sometimes harsh light might be better depending on your subject, especially if you’re trying to achieve a gritty or intense, exciting image. Harsh light can be great for Black & White photography as well as for capturing shapes and textures. You may need to work a little at it, but you can make great, unusual images at this time of day by learning to understand and use the characteristics of harsh light.

  • Twilight (Early in the morning or late in the afternoon) - the Sun is low in the sky:
    • Intensity: With each second that the Sun rises or sets, its strength at the scene changes, giving less and less light as it nears the horizon.
    • Quality: The Sun’s light is soft, producing evenly spread illumination with low contrast and smooth transitions.
    • Color: When the Sun is low, its light travels through more atmosphere, bouncing off dust, moisture and other particles. The shorter (bluer) wavelengths are scattered away, leaving the longer, warmer hues and giving off lots of color.
    • Direction: As the Sun is low in the sky, it comes in at an angle, producing long shadows.


Civil Twilight is an ideal time for urban, city and landscape photography. The artificial lights from various cityscapes start to appear and these can help create amazing images. The dim, low natural light allows shooting long exposures without the use of neutral density (ND) filters. If the sky is clear and the moon is rising or setting, you might even be able to get it into your images.


The Golden Hour (shortly after sunrise and shortly before sunset) and the Blue Hour (just before sunrise and just after sunset, when the Sun is below the horizon) are ideal times for landscape and portrait photography. Light is soft, and the intensity is low with warm colors that are more pronounced. As the light is low, it may be a good time to include the Sun (during Golden Hour) or Moon (during Blue Hour) in your images.


As Astronomical Twilight approaches, the sky becomes darker. Star constellations become more visible and the time is right for night photography.  During night photography, you must consider the time and phase of the moon as it replaces the Sun in terms of emitting natural light (Moonlight instead of Sunlight). At the end of the Astronomical Twilight and during a New Moon, the completely darkened sky can be filled with Stars as well as other types of astronomical objects such as planets, galaxies, and nebulas.


A Final Word:

Although lighting is one of the most important factors in producing an amazing image, it is not the only factor. Other aspects such as composition, subject, exposure, focus, timing, tones of light and dark, color, shapes, intent and inspiration all come into play. More about these at future HCC PMES sessions.



(Marc F Alter) Ambient Light Artificial Light Astronomical Twilight Back Light Blue Hour CFL CFL Curly Bulb Civil Twilight Color Index Color Temperature Curly Bulb Dappled Light Direction of Light Evening Twilight Fire Flash Fluorescent Light Front Light Golden Hour Hard Light Incandescent Light It's All About The Light Kelvin Kelvin Scale LED Light Light Amplitude Light Color Light Direction Light Emitting Diode Light Intensity Light Quality Light Speed Light Temperature Light Wavelength Magic Hour marc alter photography marc f alter Metal Halide mfa images Moonlight Morning Twilight Natural Light Nautical Twilight Nighttime Perpetual Daylight Perpetual Nighttime Perpetual Twilight Side Light Soft Light Strobe Studio Light Studio Strobe Sunlight Tungsten Light Twilight Types of Light WB What is light White Balance Mon, 21 Jan 2019 14:07:09 GMT
Focus Methods, Modes & Areas

By Marc F Alter



Automatic Focus Modes & Focus Areas:

Today’s digital cameras have several different options for how and when your camera will automatically focus. Obtaining a sharp image involves the capabilities of your camera (ie; can the camera automatically focus in low light) as well as making several different decisions. Some of these decisions are based on your Subject (is your subject stationary or moving), some based on your Exposure Settings (affected by Shutter Speed & F-Stop’s Depth of Field), and some based on your camera’s Focus Settings at the time when you take the picture.


For Focus Settings, the first decision you need to make is how you want your camera to focus; Manually or Automatically. Depending on the camera, the setting for this is usually found either on the camera, on the lens, in the camera’s Settings Menu and/or any combination of these.

  1. (M) Manual Focus – This setting tells the camera not to auto-focus. When set, you will need to manually focus, usually by turning the Focus Ring on the lens until your image is sharp in the viewfinder. Many lenses also have distance markers just above the Focus Ring which allow you to manually set the focus based on the distance from your camera to your subject. Manual Focus is usually most beneficial when your subject is stationary and you want to ensure a really sharp image (and you have good eyesight). This method is also used when you have a fast-moving object and you don’t want to lose precious seconds waiting for your camera to Focus Lock on your subject. Extremely sharp focus can be obtained using the camera’s Live View and then Zooming in on the area where you want to focus.


  2. (A or AF) Automatic Focus - This setting tells the camera to initiate its auto-focus functions. With this setting turned on, you would typically (most cameras’ default setting) point your camera to your subject, press your Shutter Release Button ½ way down to focus and then all the way down to take the picture.


If you are using Auto-Focus there are several additional settings you must make decisions on so your camera knows how you want it to function. These settings are known as Focus Modes and Focus Areas.


  1. Focus Modes tell the camera how often you want it to automatically focus on your subject. There are usually three types of Focus Modes**: (S) for Single Shot Focus, (C) for Continuous Shot Focus and (A) for Auto Select.


  • (AF-S or S) Auto-Focus Single Shot Focus (Also known as Single-Area Focus Mode or “One Shot AF”) - This setting tells the camera to Focus once after it Locks onto the Subject. This setting is best used when you are taking a picture of a stationary object.


  • (AF-C or C) Auto-Continuous Shot Focus (Also known as AI Servo Focus Mode) - This setting tells the camera to constantly Focus and is best used when you are taking a picture of a moving object. As your object moves, the camera will cause the lens to maintain its focus. In many of today’s cameras, the focus calculations can include a prediction of where the subject will be when the picture is taken and will automatically move the lens focus area to capture this.


  • Auto (also known as Single/Continuous Hybrid Mode) – Some cameras have an Automatic Focus Selection mode that acts as a hybrid between Single Shot and Continuous Shot Focusing. With this mode, the camera detects if the subject is stationary and if so, uses Single Shot Focus. If the Subject moves, the camera will automatically switch to Continuous Shot Focus.


** Some newer more advanced DSLRs have similar functions using different names as well as some additional advanced and/or hybrid auto-focus features. Some examples are:

  • Canon - One-Shot AF, Predictive AI Servo AF (AI Servo AF III), AI Focus AF, Manual focus
  • Nikon - Single-servo AF (AF-S), Full-time-servo AF (AF-F), Continuous-servo AF (AF-C): predictive focus tracking automatically activated according to subject status, Manual focus (M)
  • Sony - AF-A (Automatic AF), AF-S (Single-shot AF), AF-C (Continuous AF), DMF (Direct Manual Focus), Manual Focus
  2. Focus Areas (Also known as Focus Points) tell the camera how much of the image displayed in the viewfinder you want it to automatically focus on. Focus points are laid out in certain parts of the frame. The number of Focus Points available for use will vary from camera to camera (less expensive DSLRs typically have fewer Focus Points and simple Auto-Focus Systems while more expensive DSLRs typically have more Focus Points with more complex configuration options).


It is important to note the number of Focus Points is not the only factor in the Auto-Focus calculations but also the Type of Focus Sensors used (See the section below titled “Types of Auto-Focus Points”).


Depending on your camera and model, there are usually several different options for choosing the Focus Area setting. The three most common types of Focus Areas are Single Point, Multi-Point and Automatic:


  • Single Point Focus Area – the camera uses only a single stationary focus point to determine accurate focus. This mode is typically used when the scene has many different objects that might confuse the camera’s automatic Focus and when you want to manually identify the Subject to make sure it is as sharp as possible. Typically used for stationary objects such as landscapes, architecture and macro photography.


  • Multi-Point Focus Area (Also known as Group Area) – the camera uses a set number of focus points that are grouped together. When using this option, you typically have the ability of selecting the number of focus points you wish to use. This number will vary based on the camera model but may allow for groups of 9, 21 or 51. This mode is typically used for small Subjects that move fast and erratically.


  • Auto Area (also known as Automatic AF Point) – the camera detects the scene and determines the number of Focus Points used. This is typically used in many DSLRs, Point-And-Shoot and/or Cell Phone cameras. In many of these cases, there may be a “Scene Setting” or “Face Detection” that will help determine how the camera focuses. The biggest issue with this is the camera, not the photographer, is determining where the Subject is and what should (and should not) be in Focus.


Some newer more advanced DSLRs have similar functions using different names as well as some additional advanced and/or hybrid Focus Point features:


  • Dynamic Single Point Area (Also known as AF Point Expansion) – typically the camera uses only a single focus point to determine accurate focus but if the subject moves, the camera will attempt to maintain focus on that subject. This is typically used for very fast-moving and/or unpredictably moving objects.


  • 3D Tracking – the photographer manually selects the focus point(s). When the shutter-release button is kept pressed halfway after the camera has focused, the photographer can change the composition and the camera will automatically choose a focus point(s) to keep the Subject in focus. Somewhat counter-intuitively, this mode is typically used not to capture moving objects, but rather to capture stationary objects while the photographer moves and/or changes his/her composition.


Note: When 3D Tracking is used in Nikon DSLR cameras, 11 Focus Points are used and the camera uses the Subject’s color to track the Subject. This mode may be problematic when the Subject’s color is the same as other objects and/or the background color.

Separating Exposure & Focus (Back Button Focus)

The typical default setting on most cameras is combining Exposure and Focus. For most cameras, you point your camera to your subject and press your Shutter Release Button ½ way down. The camera then determines the Exposure and Focus. You then press the Shutter Release all the way down to take the picture. Using this method, your Subject’s Exposure and Focus are established with the simple press of the Shutter Release. This method is quick, simple and intuitive. The downside to this approach is that your Exposure and Focus are taken at the same time, forcing you to lock your Focus and then recompose for Exposure.


Although this works for many images, there may be times when you wish to separate Exposure and Focus. This is especially important when taking pictures of fast-moving objects (such as birds in flight) or when your Subject and scene Exposure are vastly different. A method to address this situation is called “Back Button Focus” (BBF). This method allows you to use one button (usually a button located on the back of the camera) to Focus and then another button (Shutter Release) to create the Exposure and take your picture.


The disadvantage of using “Back Button Focus” is that it requires using two separate buttons to take a picture, which is not simple or intuitive. You will need to “relearn” taking your pictures from using just the Shutter Release to using both the Shutter Release and the BBF Button. The advantages of using “Back Button Focus” however are worth the learning curve. Separating Exposure and Focus Controls allows you to have much better control over both your Focus and your Exposure.


Enabling BBF is accomplished in the camera’s Settings Menu. Not all cameras have this function and depending on your camera’s capabilities, there may be several different options to choose from. Some of these may include using both the Shutter Release and BBF and/or selecting the camera’s AF-On/AF-Off Button for BBF or selecting another button for BBF. Before deciding to try using Back Button Focus, you should read your camera’s manual and/or watch Internet videos about your camera and its BBF capabilities.

More Information Than You Really Want Or Need To Know


Auto-Focus: How Does The Camera Do This

Today’s DSLR (Digital Single Lens Reflex) cameras are really highly advanced mini-computers, performing vast calculations at incredible speeds. Most DSLRs will automatically change how they perform auto-focus functions based on using either Active or Passive Methods:

  1. Active Auto-Focus (Also known as Active AF) – The camera “shoots” a red beam to your subject. This beam bounces back to the camera’s sensors, the camera calculates the distance to the subject and then adjusts the lens to focus at this distance. This function typically works for stationary subjects that are within about 15-20 feet, in either bright or dim light conditions. 


  2. Passive Auto-Focus (Also known as Passive AF or Phase Detection) – The camera uses its sensors to detect contrasts within the image and then moves the lens back and forth until it detects sharpness. There are two types of methods used; Phase Detection and Contrast Detection:


  • Phase Detection - The camera uses special sensors that split the incoming light into pairs of images and compare them to determine focus. Phase Detection is typically very fast as it uses one or more “Focus Points” and not the entire image. As a result, this method is usually used for moving subjects.


  • Contrast Detection - The camera uses the image sensor itself to detect contrast in the image. This method can only be done in Live View mode as it requires light to directly reach the sensor, and therefore DSLRs must have their mirrors raised for this to work. As a result of these mechanics, this method is usually used for stationary subjects. This method can use different parts of the image to focus and is often more accurate than Phase Detection, especially when used in low-light conditions.

Auto-Focus Assist (AF Assist) – A feature on some cameras that helps the camera to autofocus by emitting a light onto the subject so the camera can better detect the contrasts. Usually used during Passive Auto-Focus.
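Contrast Detection can be pictured as a hill-climb: score the image at several lens positions with a sharpness metric and keep the position with the most local contrast. The sketch below is purely conceptual, using a toy 1-D “image” and a sum-of-differences metric, not any camera’s actual firmware:

```python
def sharpness(pixels):
    """Toy contrast metric: sum of differences between neighboring pixels.
    A sharp (in-focus) image has more local contrast, so it scores higher."""
    return sum(abs(a - b) for a, b in zip(pixels, pixels[1:]))

def contrast_detect_af(image_at, positions):
    """Pick the lens position whose image shows the most contrast.
    image_at(pos) returns the pixel values the sensor sees at that focus."""
    return max(positions, key=lambda pos: sharpness(image_at(pos)))
```

This also shows why Contrast Detection struggles in low-contrast scenes: if every position scores near zero, there is no clear peak to climb toward.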


Types of Auto-Focus Points

There are two main types of Auto-Focus Point sensors used in most DSLRs; Vertical and Cross-Type. Vertical sensors detect contrast but only on vertical lines. Cross-Type sensors detect contrast on both vertical and horizontal lines. As a result, Cross-Type Sensors are considered more accurate and thus the more Cross-Type Sensors your camera has, the more accurate your auto-focus will be. When marketed, a camera’s manufacturer will usually list both the total number of auto-focus point sensors as well as the number of Cross-Type Sensors (ie; Nikon’s D5 has 153 Auto-Focus Points of which 99 are Cross-Type Sensors. The Canon 1DX has 61 Auto-Focus Points of which 41 are Cross-Type Sensors). Some of the newer Mirrorless cameras (ie; Sony a6500 or Sony A7RIII) do not employ Cross-Type Sensors.


(Marc F Alter) 3D Tracking Active Auto-Focus AF Point Expansion AI Servo Focus Mode Auto-Continuous Shot Focus Auto-Focus Single Shot Automatic AF Point Automatic Focus Back Button Focus Contrast Detection Dynamic Single Point Area Focus Areas Focus Methods Focus Modes Focus Points Group Area Hybrid Mode Manual Focus marc alter photography marc f alter Matrix Metering Modes mfa images Multi-Point Focus Area One Shot AF Passive Auto-Focus Phase Detection Predictive AI Servo AF Separating Exposure & Focus Single Point Focus Area Single-Area Focus Mode Types of Auto-Focus Points Thu, 03 Jan 2019 14:45:10 GMT
Exposure Modes, Metering Modes & Exposure Compensation

By Marc F Alter

Exposure Modes:

In many ways our digital cameras are like mini-computers; helping us achieve balanced compositions, focus and exposures. For Exposures, there are several different types of settings we can use that tell our camera how we want it to act. One of these is known as Exposure Modes. Typical DSLR (Digital Single Lens Reflex) Exposure Modes are Auto, P, A, S and M:


  • Auto (Automatic Exposure) – Automatic Exposure is a feature on many DSLRs that allows the photographer to simply point and shoot to get a fairly well-exposed image (in some cases) based on the amount of light in the scene. With this mode, the camera automatically adjusts the ISO, Aperture and Shutter Speed to what it believes are the correct settings. This setting is usually in Green and is not available on some of the more advanced DSLRs. This is because setting your camera on Auto often also locks you out of other DSLR settings like Metering Modes, White Balance, Focus Modes, Exposure Compensation, etc.


  • P (Program) – Also jokingly known as “Professional”, this Exposure Mode is similar to Automatic in that the DSLR attempts to automatically adjust the Shutter Speed and Aperture for optimal exposure. Although some “P” mode adjustments may vary from one camera manufacturer or model to another, when you point your DSLR to a bright scene, the Aperture automatically adjusts to a higher F-Stop while setting the Shutter Speed reasonably high (to let in less light). When you point your DSLR to a dark scene, the Aperture automatically adjusts to a lower F-Stop while setting the Shutter Speed low (to let in more light).


The difference between Auto and Program Mode is with “P”, the photographer has the ability of adjusting the camera-selected settings. You have the option of setting the ISO, Aperture, Shutter Speed and many of the other camera settings (ie; Metering Modes, White Balance, Focus Modes, Exposure Compensation, etc). Typically, if you change the Aperture, the camera will automatically change the Shutter Speed. If you change the Shutter Speed, the camera will automatically change the Aperture. If you change both the Shutter Speed and the Aperture, the camera will change the ISO (light sensitivity setting).


  • A or Av (Aperture Priority) – With this Exposure Mode, the photographer determines and sets the Aperture size. Once this is done, the camera measures the amount of light at the scene and then automatically adjusts the Shutter Speed. This has the advantage of the photographer selecting the DOF (Depth of Field) needed for the composition. At a given F-Stop, if the scene is bright, the camera will automatically adjust the Shutter Speed to be faster (to reduce the amount of light coming into the camera). If the scene is dark, the camera will automatically adjust the Shutter Speed to be slower (to increase the amount of light coming into the camera). With this mode, the camera does not adjust the ISO.


  • S or Tv (Shutter Priority) – With this Exposure Mode, the photographer determines and sets the Shutter Speed. Once this is done, the camera measures the amount of light at the scene and then automatically adjusts the Aperture. This has the advantage of the photographer selecting the amount of time the shutter is open for the composition. At a given Shutter Speed, if the scene is bright, the camera will automatically adjust the Aperture to be higher (to reduce the amount of light coming into the camera). If the scene is dark, the camera will automatically adjust the Aperture to be lower (to increase the amount of light coming into the camera). With this mode, the camera does not adjust the ISO.


  • M (Manual) – This setting basically tells the camera to not make any automatic adjustments. It is up to the photographer to manually set the ISO, Aperture and Shutter Speed.
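The trade-offs these modes automate can be written down. At a fixed ISO, the exposure value of a setting is EV = log2(N²/t), where N is the F-Stop and t the shutter time in seconds; settings with the same EV let in the same amount of light. Aperture Priority, in effect, solves for t at the metered EV. A simplified sketch, ignoring ISO and metering details:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """EV of an aperture/shutter pair (at base ISO): log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_seconds)

def shutter_for(ev, f_number):
    """What Aperture Priority effectively does: given the metered EV and
    the chosen F-Stop, return the shutter time that matches the exposure."""
    return f_number ** 2 / 2 ** ev
```

For example, holding the EV of f/8 at 1/125s while opening up to f/5.6 roughly halves the required shutter time, exactly the one-stop trade described above.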


Metering Modes – In addition to Exposure Modes, many DSLRs also offer a setting that allows you to control how much of the scene the camera considers when calculating a “correct exposure”. Different Metering Modes typically include Matrix, Center and Spot (although some more advanced DSLRs may have additional settings as well).


  • Matrix/Evaluative/Full – This setting typically sets the camera to use the entire frame when calculating a correct exposure. Depending on the camera, “Matrix Metering” may use the center areas more than the corners when determining correct exposures. Some cameras also offer Group Matrix Metering, allowing the photographer to select how large an area to use and where that area is located within the frame. This type of metering works well in scenes that are evenly lit.


  • Center Priority / Center Weight – This setting sets the camera to use only the middle of the frame when calculating a correct exposure. This type of metering works well in scenes where the subject is in the center of the frame and you care less about how the outside areas are exposed.


  • Spot - This setting sets the camera to use a single area in the frame when calculating a correct exposure. This point is typically only about 5% of the frame and its location can be chosen by the photographer using the Focus Point(s). This type of metering works well in scenes that are unevenly lit and where the subject may be off center.
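One way to see how these three modes differ is that they are the same computation with different weights: a weighted average of the scene’s luminance, where Matrix weights the whole frame, Center weights the middle heavily, and Spot weights one small area. A toy illustration on a 1-D strip of luminance values (the weight patterns here are illustrative, not any manufacturer’s actual algorithm):

```python
def metered_luminance(luma, weights):
    """Weighted average of luminance values; a higher result means the
    camera reads the scene as brighter and will expose accordingly."""
    return sum(l * w for l, w in zip(luma, weights)) / sum(weights)

luma   = [10, 10, 200, 10, 10]   # a small bright subject on a dark background
matrix = [1, 1, 1, 1, 1]         # whole frame, evenly weighted
center = [0, 1, 2, 1, 0]         # middle of the frame dominates
spot   = [0, 0, 1, 0, 0]         # one small area only
```

Spot metering reads this scene as far brighter (200) than Matrix does (48), which is why Spot is the mode to reach for when a small subject against a very different background must be exposed correctly.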


Exposure Compensation – Things don’t always go as planned and sometimes this is true for exposures. After taking care to select the right Exposure Mode and Metering Mode for your subject, you take a test shot and check the Image’s Blinkees and/or Histogram; you spot a problem. Either the Blinkees are flashing (indicating you are blowing out some white areas) and/or the Histogram shows data right up against the extreme right (too bright) or extreme left (too dark). What is a person (photographer) to do? This is where Exposure Compensation comes into use.

-3 ….. -2 ….. -1 ….. 0 ….. +1 ….. +2 ….. +3

Under-Expose Image (negative side)              Over-Expose Image (positive side)


  • Exposure Compensation allows the photographer to intentionally adjust the automatically calculated exposure values when using P (Program), A (Aperture Priority), or S (Shutter Priority). Typically, when using the Exposure Compensation function on your camera, you view a scale with a Zero in the middle. As you adjust the Compensation to the right, you are telling the camera to Over-Expose from its automatically calculated exposure values. As you adjust the Compensation to the left, you are telling the camera to Under-Expose from its automatically calculated exposure values. You can usually move this setting in either 1/2 or 1/3 stop increments.


When else might you use Exposure Compensation? Maybe when you are shooting in conditions (ie; snow, fog, etc) where the camera is attempting to turn the scene grey instead of recording white. Or when your Subject is surrounded by lots of bright backlight, resulting in your subject becoming underexposed. Each picture you take may be different and you need to learn how to adapt to capture the best image you can.
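In terms of light, each full stop of compensation doubles or halves the exposure. At a fixed aperture and ISO, +1 EC is equivalent to doubling the shutter time and -1 to halving it. A sketch of that arithmetic (simplified; a real camera may instead adjust aperture or ISO depending on the Exposure Mode):

```python
def compensated_shutter(base_seconds, ec_stops):
    """Equivalent shutter time after applying Exposure Compensation in stops.
    +1 doubles the light let in, -1 halves it; 1/3-stop steps work the same way."""
    return base_seconds * 2 ** ec_stops
```

So dialing +1 EC at 1/200s behaves like shooting at 1/100s, while -1/3 EC behaves like roughly 1/250s.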


(Marc F Alter) Aperture Priority Automatic Exposure Center Priority Center Priority Metering Center Weight Center Weight Metering Evaluative Exposure Compensation Exposure Modes marc alter photography marc f alter Matrix Metering Modes mfa images Professional Exposure Program Exposure Shutter Priority Spot Spot Metering Thu, 29 Nov 2018 14:34:32 GMT
Welcome To The Ravings Of Marc F Alter

Welcome to MFA Images. 
This site is designed to show my photography based art as it develops. 

Who Am I? 
I am a husband, father, IT guy, bicycle rider, hiker, and much more. I am also a photo enthusiast and aspiring photo-based artist. I love to take pictures and spend hours in front of a computer working on them.  I am not a purist, rather I love to manipulate and create. I love to try this and that and see what happens. I love to learn and I love to share what I have learned.


Why This Blog? 
I created this blog as a method to share what I have learned. If you occasionally visit my blog you will learn all sorts of tips and tricks, new features and functions (hardware, software, techniques, etc). Right now there is nothing here as this is my first post.


If you like my pictures I hope you will share my site and blog with your friends. If you really LOVE my images, maybe you will even buy one (or a few hundred). In either case, thank you for visiting and come back soon.




(Marc F Alter) marc alter photography marc f alter mfa images Fri, 13 Apr 2018 23:40:28 GMT