Category: photographic concepts

Focus

CRW_01136.CRW: Digital Rebel, EF 50mm 1:1.8 II @ 1/50, f/1.8, 100 ISO

Don bought himself the AF‑S NIKKOR 85mm f/1.4G, and from everything I’ve read, it’s a kick-ass lens. He mentioned wanting to take some very shallow depth-of-field photos, for which an f/1.4 lens is certainly ideal.

This reminded me of when I bought my first digital SLR prime lens, nearly six years ago now. It was the EF 50mm f/1.8 II. It’s cheaply made, but it’s also ridiculously inexpensive and an ideal way for a new photographer to get into large-aperture photography without paying a lot. The photo you see here is one I took on my first day with the 50mm f/1.8.

Looking closely, you can see some of the lens’s optical shortcomings. Most obviously, the foreground text is tinted magenta and the background text is tinted green, which is longitudinal chromatic aberration. You can also see some spherical aberration, especially in the blurry text to the right, just below the focus point.

It’s far from a perfect lens, but I managed to get it on sale for less than $100. Even now, it’s available in stores for less than $150. If you don’t have a prime and have never used one, it’s not much to pay. As a point of comparison, the EF 50mm f/1.4 USM will set you back $460, and the EF 50mm f/1.2L USM is $1700. That’s $400 more for 2/3 of a stop and $1550 more for one stop … but if you need a huge aperture, there’s no other way.
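If you want to check that stop arithmetic yourself, here’s a minimal Python sketch. The factor of two is there because the light a lens admits scales with the square of the f‑number ratio; the f‑numbers are the nominal ones quoted above.

```python
import math

def stops_between(f_slow, f_fast):
    """Exposure difference in stops between two f-numbers.
    Light admitted scales with the square of the f-number ratio."""
    return 2 * math.log2(f_slow / f_fast)

print(stops_between(1.8, 1.4))  # ~0.72, i.e. about 2/3 stop
print(stops_between(1.8, 1.2))  # ~1.17, about one stop on the nominal scale
```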

All the king’s cinematographers…

I watched All the King’s Men this afternoon. It was okay, but I wouldn’t go so far as to recommend you watch it.

The most noteworthy aspect of the film is that it has a scene with the absolute worst bokeh I’ve ever seen in a major motion picture. Check it out:

What a freaking disaster! It looks like bacteria through a microscope.

The Wikipedia defines bokeh as:

In photography, bokeh is the blur, or the aesthetic quality of the blur, in out-of-focus areas of an image, or “the way the lens renders out-of-focus points of light.” Differences in lens aberrations and aperture shape cause some lens designs to blur the image in a way that is pleasing to the eye, while others produce blurring that is unpleasant or distracting — “good” or “bad” bokeh, respectively. Bokeh occurs for parts of the scene that lie outside the depth of field. Photographers sometimes deliberately use a shallow focus technique to create images with prominent out-of-focus regions.

Good bokeh has a creamy smoothness to it that doesn’t distract. The trees in this background are not smoothly out of focus; there’s a linear pattern to the blur that traces the tree branches. It’s ridiculously distracting once you know what good bokeh looks like.

With a $50 million budget they couldn’t rent a single good quality fast lens?


Screen capture ©2006 Film & Entertainment VIP Medienfonds 3A GmbH & Co.

My issues with 3D

I went to see Avatar last week. In my recap, I wrote,

Another issue with the 3D is that it’s shot like a conventional 2D film. I believe 3D needs a different shooting style.

The largest issue to me is depth of field.

With conventional two-dimensional motion pictures, the viewer focuses on the image and all is well. The cinematographer decides what parts of the image are in focus and what parts are not. It may seem like a no-brainer to always have all of the image in focus, but selective focus is often used to isolate an object or character. In those cases, a perfectly focused background is a distraction. So the cinematographer uses a fast lens, opens it up, and blurs the background into a mish-mash of softness and colour, leaving the subject in sharp focus. It’s a very effective technique … and purely a product of 2D imagery.

Our eyes do work in the same way, but we almost never realize it. If you’re paying attention to someone talking to you, and they’re quite close, whatever is behind them is blurry. You don’t notice this, however. If you were to look, your eyes would automatically re-focus on the background in a split second. Blur? What blur?

So a blurry background is a photographic technique to draw the viewer’s attention to the important part of the image. How does this translate to 3D motion pictures? Poorly. Many people report eye-strain and headaches when viewing 3D films. I’m sure there are many causes, but I can’t help but believe that using selective focus to generate blurry backgrounds is a large one.

Let’s go back to conventional 2D films for a moment. If the entire image is in focus, your eye can sweep across it and take it all in without changing focus. From the person in the foreground to the horizon in the background, your eye sees it all clearly by focusing on the screen. If the background is blurry, your eye isn’t fooled because it focuses on the screen and knows the image is blurry.

This all goes out the window with 3D films. Objects appear to be varying distances away and they all require your eyes to go through most of the same motions as when viewing real objects. Technically, your eyes don’t need to refocus, but they do need to vary their viewing angle in relation to each other to work their depth-perception magic and allow your brain to build a three-dimensional image of the film. With 3D, your brain believes the image is not flat, so when you look at an area that’s obviously blurry, your eyes try to re-focus to correct the problem. Even when blurry, the parallax between your eyes indicates to the brain the relative distance of the blurry object. Your brain knows it’s not a flat image, so the blur must be a focus issue. Eyes? Get focusing! But your eyes fail because they can’t fix the problem.

The only way I can see to correct this is to maximize the depth of field and make everything sharp and clear. The viewer can then look at whatever he or she pleases and everything will appear as it should.

Maybe. In the real 3D world around us, our eyes have to do two things. They each focus to generate a nice clear image of our surroundings (one image from each eye), and they have to vary their viewing angle so both of our eyes are looking directly at whatever it is that has our attention. Our brain then uses this angle to combine the two images into a single view with depth perception. When viewing a 3D film, our eyes need only focus on the screen. No further focusing is necessary, but our eyes still vary their viewing angles, allowing our brains to calculate depth. Perhaps having to maintain the same focus while varying our eyes’ viewing angle causes eye-strain for some.
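To put a rough number on how much that viewing angle changes, here’s a small Python sketch. The 63 mm interpupillary distance is an assumption on my part (a commonly cited average), not anything measured from a film.

```python
import math

IPD_MM = 63.0  # assumed average interpupillary distance, in millimetres

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight when both are
    aimed at a point distance_m metres away."""
    return math.degrees(2 * math.atan((IPD_MM / 1000) / (2 * distance_m)))

for d in (0.5, 2.0, 10.0, 100.0):
    print(f"{d:6.1f} m -> {vergence_deg(d):5.2f} degrees")
```

The angle falls off quickly with distance, which fits the intuition that apparently close objects demand the most work from our eyes.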

Another issue is shot-to-shot transitions.

Imagine a scene in which two characters, standing face to face, are conversing. Typical cinematography has the viewer jump back and forth, from behind one character to behind the other, so we can always see who is talking. We certainly don’t do this in real life, but this shooting style ensures we don’t miss anything. If the imaginary conversation goes on for any length of time, some of the shots may be from farther away, showing the whole room, and then closer, giving us an over-the-shoulder perspective. We’ve all seen this before and it’s easy for us to assemble the very different and suddenly changing viewpoints in our heads.

Now imagine the same scene in 3D. We don’t know when the shot will change perspective, nor do we know what the new perspective will be, but the depth in the image requires our eyes to rebuild a stereo image every time. I find no real issue with this when the ‘distance’ to the subject remains the same or varies only slightly when the shot changes, but if an over-the-shoulder shot changes to a room-wide view, there’s a moment of discontinuity as my eyes and brain work together to recalculate the image and my position relative to it.

In thinking about it, I realize that there seems to be a middle range where a perspective change is most disorienting. Cutting from behind one character to behind the other, with little change in the distance to the speaker, is a smooth transition. Similarly, cutting from a character to a landscape is also a smooth transition to me. But cut from an over-the-shoulder shot to one from half-way across the room, and my enjoyment of the story is briefly interrupted as my eyes and brain play catch-up.

Unlike the selective focus issue, this sudden distance change never occurs in real life. Even if you move with your eyes closed, your senses give you a good idea where you are. When you open your eyes, you know what to expect. We don’t, however, teleport to various places within a room every second or five.

Certainly there are ways around this issue as well, but they’re unusual in current cinematography. Panning back and forth between two people talking is rarely done because it’s so uninteresting, and individual shots of extremely long duration are difficult to sustain.

I wouldn’t be the least bit surprised if we discover that most of the causes of 3D fatigue are technical. Those can be fixed. At the same time, however, I’ll be very surprised if the problem is entirely technical. Filmmakers are going to have to learn some new techniques, and unlearn some old ones, if 3D is to become a serious film-making medium.

Crop factor

I was thinking that it’s just a matter of time before I move to a full-frame camera. Full frame? Yes, full frame.

Back when everyone used film, cameras recorded an image 36 mm wide and 24 mm high on the film you loaded into the camera. Your prints were certainly larger, but that was the size of the image on the negative or slide. It was simple. Whether you used a cheap point-and-shoot or a professional SLR, the image your camera recorded was the same size if you used 35 mm film. 36 mm by 24 mm.

Note that the simulated film frame above is not to scale. It’s quite a bit larger than an actual film frame.

In the early 1990s, things started to change. Kodak introduced a digital SLR. They took a Nikon film camera and replaced the film mechanism with electronics and a digital sensor similar to those in video cameras. Back then, digital sensors were ruinously expensive. There was no way they could produce a sensor anywhere near the size of a 35 mm film frame, so what did they do? They used a smaller sensor. Even so, Kodak’s DCS 100 cost $25,000, and for that kind of money you got a digital SLR that captured a 1.3 megapixel (1320 × 1035) image.

These days, sensors are far more capable and much less expensive, but they’re still not cheap. For this reason, most digital SLRs continue to use sensors smaller than the 36 mm by 24 mm film frame. One common size is the APS‑C sensor. Even within this format there is slight size variation between manufacturers, but I’m going to use the Nikon size in this example. Their APS‑C sensor measures 23.7 mm × 15.7 mm, which is just about 2/3 the height and width of the full 35 mm frame. The smaller sensor has an important effect, one so large that my current selection of lenses will have to change when I switch to a full frame camera.

Imagine you take a photo with your film SLR. You send the film off and when the print arrives, this is what you get:

Lovely!

But you want to try out your fancy new APS‑C digital SLR as well. After you took the photo with your film camera, you removed the lens, put it on your digital SLR, and took the same photo. You sent that image away to get printed as well. This is the print you get:

Now wait a minute! The photos are different even though you took them with the same lens! If you take some measurements, you’d realize that 1/6 of the image is missing along each edge, meaning fully 1/3 of both the width and the height are gone! Wait a minute. This is familiar. Earlier I said that the APS‑C sensor used in most digital SLRs is 2/3 the height and width of a full film frame! Is this why 1/3 is missing? If you think so, you understand what’s going on.

Looking at the prints, the effect appears to be a free zoom, but it’s not. It appears that way because both images are printed at the same size. In reality, the captured images are not the same size at all. The sensor is smaller, so your photo shows you less of the image the lens is delivering. For this reason, we describe the size reduction as a ‘crop factor.’ In this case the crop factor is 1.5 because the APS‑C sensor would need to be 1.5 times as large in height and width to match a 35 mm film frame. That’s where the 1.5 comes from, but we can say it in a different way to describe its effect: a lens on an APS‑C camera provides the same field of view as a lens with 1.5 times the focal length on a full frame camera. So, for example, a 50 mm lens on an APS‑C camera gives the same field of view as a 75 mm lens on a full frame camera.

I’m not saying that the crop factor makes the 50 mm lens into a 75 mm lens. It doesn’t. It’s still a 50 mm lens. Only the field of view changes.
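The arithmetic is simple enough to put in a few lines of Python. This is just a sketch using the Nikon sensor width quoted above, and ‘equivalent focal length’ here means field-of-view equivalence only, exactly as described.

```python
FULL_FRAME_WIDTH = 36.0   # mm
APS_C_WIDTH = 23.7        # mm, Nikon's APS-C size from above

crop_factor = FULL_FRAME_WIDTH / APS_C_WIDTH   # ~1.52, usually quoted as 1.5

def equivalent_focal_length(focal_mm):
    """Focal length giving the same field of view on a full frame camera."""
    return focal_mm * crop_factor

print(round(crop_factor, 2))               # 1.52
print(round(equivalent_focal_length(50)))  # 76, the familiar '75 mm' example
```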

Let’s step back a moment, okay? Showing what happens inside the camera may make it clearer still.

Lenses project circular images. Your prints are rectangular because the film/sensor doesn’t capture all of the light the lens projects. In fact, most lenses are designed to project an image circle as small as possible while still covering a film frame or a full frame sensor. Taking the film option, this is a simplified representation of what’s going on inside the camera:

The round image is what the lens is projecting into the camera. The clear rectangle in the centre represents the portion of the image captured by the film. In the image below, the clear portion represents the portion of the image captured by the smaller APS‑C sensor:

The size of the image projected by the lens has not changed because the lens has not changed. The only difference is the portion of the projected image that’s used. The APS‑C sensor is smaller than a 35 mm film frame so it captures less of the image the lens projects.
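Since the image circle only has to cover the frame’s diagonal, comparing diagonals is another way to arrive at the same crop factor. A quick sketch, again using the dimensions from above:

```python
import math

def diagonal(width_mm, height_mm):
    """Diagonal of a film frame or sensor: the minimum image circle
    diameter a lens must project to cover it."""
    return math.hypot(width_mm, height_mm)

full_frame = diagonal(36.0, 24.0)    # ~43.3 mm
aps_c = diagonal(23.7, 15.7)         # ~28.4 mm

print(round(full_frame / aps_c, 2))  # ~1.52, the same crop factor
```

That the diagonal ratio lands on the same 1.5 is no accident: the crop applies equally to the height, the width, and the diagonal.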

When I say I’d need to change my lens selection if I went to a full frame camera, you can see that the cause isn’t the lenses themselves. The lenses I have would all suddenly offer a wider view. My wide-angle lens would be wider, which I like! My telephoto lens wouldn’t be so telephoto, which I do not like as much. This wouldn’t be an issue except that one builds a lens collection based on what one likes to shoot. I’d need a new lens to cover the wide end of the normal to short telephoto range I enjoy. The change from a crop factor of 1.5 to no crop factor (that is, a crop factor of 1) would leave me with no lenses in this range.

So why would I bother switching when it will wreak havoc with my lens coverage? That’s a story for another time.

Exposure: I. introduction

I’ve heard a number of people who have recently purchased digital SLRs say that they have no idea what the manual and semi-automatic settings do. They leave the camera in automatic mode and take their photos. There’s nothing wrong with this, but understanding what the settings do and how they affect the photo being taken can allow you to understand the camera’s limitations and work around them to get better photographs.

A lot of people think this is too much work and would prefer to avoid it. That’s fine. My concern is that the nuts and bolts of photography can appear so complicated that those who wouldn’t mind learning some fairly simple concepts are being scared off. This is a shame because these concepts can really increase your creative freedom as well as reduce the chance that the limitations of the camera’s automatic settings will leave you with a poorly exposed photograph. It’s for these people that I’ve conceived this series of posts. I don’t know if the posts will successfully impart the concepts I hope to describe, but all we can do is try.

Light

Photographs are made with light, and photography is the act of capturing light. What you photograph is an entirely separate issue, but the mechanics behind getting a properly exposed image involve how much light you allow to enter the camera.

There are only three variables you can adjust to vary the amount of light entering the camera. The specific settings used are called exposure settings, probably because together they determine how the film/sensor is exposed to light.

Proper exposure makes the photograph look just like the original scene. Under-exposure leaves the photograph looking too dark. Over-exposure makes the photograph look too light.

Reciprocity

How these three exposure variables relate to each other is a key concept known as reciprocity. The Wikipedia defines the word in its photographic context as:

the inverse relationship between the intensity and duration of light that determines exposure of light-sensitive material.

Let’s leave photography behind for a moment. Imagine getting $1 from 100 people and then getting $2 from 50 people. Comparing these two situations, you’re getting a different amount of money from a different number of people, but you’re left with the same result in both cases: $100. Getting the same result requires the doubling of one variable if the other is halved. This is the essence of reciprocity.

See? I told you it wasn’t rocket science.

While the Wikipedia definition of reciprocity gets the basic idea across, it ignores how the ‘light-sensitive material’ is also part of the relationship. I’m guessing this definition came from the pre-digital days when film was the light-sensitive material we used, and you were stuck with the same ‘film speed’ until you finished the roll. With digital cameras, you can change the sensitivity of the sensor between photographs, bringing it fully into consideration.

Exposure triangle

So we’ve got three variables: the intensity of light, the duration of light, and the sensitivity of the film/sensor to the light. Together, these are sometimes referred to as the exposure triangle.

Photographic reciprocity means that there is no single proper exposure. All three variables can be changed to create an entirely different exposure that’s also correct, as long as reciprocity is used to calculate how the three exposure variables are changed in relation to each other.

If you allow twice the light for half the time, the exposure is unchanged despite two of the variables being entirely different. This may sound vague, but it’s only because we haven’t yet talked about the details of each exposure variable and the units used to measure them.
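To make ‘twice the light for half the time’ concrete, here’s a minimal Python sketch. It isn’t any standard API, just a toy function: total exposure scales with aperture area (which goes as one over the f‑number squared), shutter time, and sensitivity, so equal log values mean equally exposed photographs.

```python
import math

def exposure_stops(f_number, shutter_s, iso):
    """Relative exposure in stops: light gathered (aperture area x time),
    scaled by sensor sensitivity. Equal values -> equivalent exposures."""
    return math.log2(shutter_s / f_number ** 2) + math.log2(iso)

a = exposure_stops(4.0, 1/100, iso=100)                 # a baseline exposure
b = exposure_stops(4.0 / math.sqrt(2), 1/200, iso=100)  # twice the light, half the time
c = exposure_stops(4.0, 1/200, iso=200)                 # half the time, twice the sensitivity
print(round(a, 2), round(b, 2), round(c, 2))            # all three are equal
```

Any one of the three variables can be traded against the others, which is exactly the reciprocity the next posts will put to work.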

In the next three Exposure posts I’ll describe the exposure variables in detail.


Image courtesy of PostSecret
