“In order to appreciate the very different way in which a sensor responds to light, as opposed to film, we need to go behind the scenes in the processing software”.

The Camera Sensor

The sensor embedded in our camera captures light in a fundamentally linear way: its response rises in direct proportion to the light it receives.  ‘Linear’ here means “…arranged in or extending along a straight or nearly straight line” (The New Oxford American Dictionary).

When recording an image, a digital camera uses millions of ‘photosites’, or light cavities, each of which is uncovered when the exposure begins.  The more light that reaches a cavity, the stronger the sensor’s response, and this response rises at a constant rate across the whole range from very dark to very bright.  Only once the shutter has closed can the camera assess the number of photons that fell into each individual cavity.  Each count is then quantised into intensity levels; the number of available levels is set by the bit depth (for example, 0 – 255 for an 8-bit image), and these levels form the basis of our histogram.  This process on its own, however, will only create a grey-scale image, as the cavities cannot distinguish between colours, only the total amount of light they receive.  So, in order to capture coloured images, coloured filters are placed over the cavities.
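The steps above can be sketched in a few lines of Python. The photon counts and the ‘full well’ capacity here are purely hypothetical numbers chosen for illustration; the point is that the mapping from light to intensity level is linear, and that the histogram is simply a tally of how many photosites landed on each level.

```python
# A minimal sketch of quantising linear photon counts into 8-bit
# intensity levels and tallying them into a histogram.
# (full_well and the sample counts are hypothetical values.)

def quantize(photon_count, full_well=10000, bit_depth=8):
    """Map a linear photon count to an integer intensity level.

    full_well is the maximum count a cavity can hold; anything
    above it clips to the brightest level.
    """
    levels = 2 ** bit_depth - 1          # 255 for an 8-bit image
    fraction = min(photon_count / full_well, 1.0)
    return round(fraction * levels)

# Hypothetical photon counts from a handful of cavities:
counts = [0, 2500, 5000, 7500, 10000, 12000]
levels = [quantize(c) for c in counts]
print(levels)  # linear response, clipped at 255 for overexposed cavities

# The histogram simply counts how many cavities fell on each level.
histogram = {}
for level in levels:
    histogram[level] = histogram.get(level, 0) + 1
print(histogram)
```

Note how the two brightest cavities both clip to 255: once a cavity is full, extra photons are simply lost, which is why blown highlights cannot be recovered.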

“Virtually all current digital cameras can only capture one of the three primary colours in each cavity, and so they discard roughly 2/3 of the incoming light.  As a result, the camera has to approximate the other two primary colours in order to have full colour at every pixel” (Cambridge in Colour).

A ‘Bayer Array’ is the most commonly used filter array in digital photography, a diagram of which can be seen below:

Colour Filter Array


Photosites with Colour Filters


This array consists of rows of alternating red-green and green-blue filters, so there are twice as many green filters as red or blue.  The colours deliberately do not receive an equal fraction of the total area, because the human eye is more sensitive to green light than to either blue or red.  The excess of green pixels produces images that appear less noisy and contain finer detail than would be possible if all three colour channels were treated equally.  This also explains why there is less noise in the green channel than in the channels of the other two primary colours.  We will cover noise in more detail during a later project.
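The layout described above is easy to generate: in an RGGB Bayer pattern the filter colour at any photosite depends only on whether its row and column are odd or even. This small sketch builds a 4×4 patch of the mosaic and confirms the 1:2:1 ratio of red, green and blue sites.

```python
# A sketch of the Bayer pattern: rows alternate red-green and
# green-blue, so green occupies half of all photosites.

def bayer_colour(row, col):
    """Return the filter colour at a photosite in an RGGB Bayer array."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    else:
        return "G" if col % 2 == 0 else "B"

# Build a 4x4 patch of the mosaic and print it.
patch = [[bayer_colour(r, c) for c in range(4)] for r in range(4)]
for row in patch:
    print(" ".join(row))

# Count each filter colour across the patch.
counts = {"R": 0, "G": 0, "B": 0}
for row in patch:
    for colour in row:
        counts[colour] += 1
print(counts)  # twice as many green sites as red or blue
```

The missing two colours at each photosite are then estimated from neighbouring sites, a process known as demosaicing, which is how the camera arrives at full colour at every pixel.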

The Human Eye

Without going into too much detail: much the same as a camera, the human eye follows a very regimented set of procedures to capture images on the retina (the eye’s equivalent of the camera’s sensor).  The information is then interpreted, sent on to the brain, and there translated into the images we see before us every day.

The Human Eye


Digital cameras respond to light in a linear fashion, while the eye’s response is non-linear.  The widely diverging rays of light that enter the eye are first refracted by the cornea, pass through the pupil, and are then bent again by the lens, which focuses the light onto the back of the eye.

The difference between linear and non-linear light capture is explained below:

You enter a room that is pitch-black and turn on a 100-watt light source; both the human eye and a digital camera register the same illumination, a fixed amount of light intensity.  If you then turn on a second 100-watt light source, doubling the intensity of the light, the human eye will barely register the change, owing to its built-in, non-linear visual response; to the camera, on the other hand, the room will appear twice as bright (Rodney 2007).

Although the human eye and a camera follow very similar processes, there are subtle differences (aside from the one mentioned above) between these two ‘systems for capturing light’.  For example, when the eye receives an image, it lands upside down on the retina, and it is the human brain that inverts the image and orientates it the right way up.  A camera only sees an image in 2-D, and the subject has to be placed directly within the field of vision of the lens to be captured, whereas the eye sees images in 3-D and has a much wider field of vision; the eye can also interpret motion and cope with a wider range of brightness (as seen in the example above).

The traits of the human eye are very similar to those of film, although the human eye will outperform film.


We have seen that the human eye is well equipped to cope with a wide range of brightness, but our camera’s sensor is not.  Once we have taken our photographs, we see our images exactly as we expect to on the LCD screen, but to achieve this the camera undertakes a mammoth amount of processing before it shows us these images for the first time; in fact, without this processing, our images would appear much darker than we expect them to.

It should be noted that when shooting in RAW, the camera takes the information received in the RAW file, such as tonal and colour data, and creates a rendered JPEG version of the data, produced specifically for viewing on the LCD; this rendition is vastly different from what we would see in the raw data itself.

The process that changes our images from dark to light, in both a camera (or imaging device) and a computer screen (or medium), is called Gamma Correction.

Gamma Correction

Understanding how gamma works, especially within photography, will not only assist in improving exposure technique, but will also help in post-processing image editing.  Freeman (2011, p.120) states that ‘Gamma is a measure of the slope or gradient of the response of an imaging device or medium to exposure’.  This is a little confusing (and probably deeper than we need to dig at the moment), but my understanding of gamma correction is that it changes the contrast of mid-level tones without affecting the black and white points within our image.  Therefore, a low gamma value will give darker images with higher contrast in the detailed areas, whereas higher gamma values lighten the shadows but take detail (or contrast) away from the image.

Gamma is also used in computing circles, and when applied to monitor screens it refers to a measure of the relationship between voltage input and brightness intensity (an easier explanation than the one above).  Because of the way a computer works, a raw, uncorrected digital image would look darker and more contrasty than our eyes would find normal, much the same as a RAW file in our camera before it renders the information to create a more pleasing JPEG image for us to view.



The New Oxford American Dictionary

Cambridge in Colour (n.d.) Colour Filter Array [Online Image].  [Accessed 23 April 2013].

Cambridge in Colour (n.d.) Photosites with Colour Filters [Online Image].  [Accessed 23 April 2013].

Freeman, M.  (2011) The Digital SLR Handbook.  Revised 3rd Edition.  East Sussex: The Ilex Press Limited. 

Pasadena Eye (n.d.) The Human Eye [Online Image].  [Accessed 23 April 2013].

Rodney, A. (2007) Exposing for RAW [Online Article].  [Accessed 22 April 2013].


Cambridge in Colour (n.d.) Digital Camera Sensors [Online Article].  [Accessed 23 April 2013].

Cambridge in Colour (n.d.) Understanding Gamma Correction [Online Article].  [Accessed 24 April 2013].

Ferland, M. (2012) Comparison of the Human Eye to a Camera [Online Article].  [Accessed 23 April 2013].

Fraser, B. (2004) Linear Gamma [Online Article].  Available at: <photoshop/pdfs/linear_gamma.pdf> [Accessed 22 April 2013].

Hunter, C. (2012) How is the Human Eye Different than a Camera? [Online Article].  [Accessed 23 April 2013].

Gardner, R. (2007) Gamma Demystified; Linear Gamma [Online Article].  [Accessed 22 April 2013].

Kodak Education (n.d.) Understanding Film … The Basics [Online Article].  [Accessed 23 April 2013].

Pasadena Eye (n.d.) How does the human eye work? [Online Article].  [Accessed 23 April 2013].

Rodney, A. (2007) Exposing for RAW [Online Article].  [Accessed 22 April 2013].

SuChin (n.d.) Newton’s Apple: How does the human eye see? [Online Article].  [Accessed 23 April 2013].
