Photography 101 Learning Tutorial

This is for anyone just getting into photography. The biggest hurdle is simply knowing what there is to learn, and that you should learn it. I've tried to put most of that here. The first important thing is to get out of the camera's more automated modes and learn what each setting does. It's not rocket science. It may seem like it must be, but really there are only so many things to control.

If your camera has options like the above, just stay out of the "landscape", "portrait", "sports", etc. modes. There's nothing those modes do that you can't easily control yourself in the more manual modes. Stick with full manual (M), aperture priority (Av), or shutter priority (Tv). Av is the mode you'll likely spend most of your time in. In that mode you pick the aperture yourself and the camera decides what shutter speed is needed. The aperture is simply the opening size of a round diaphragm inside the lens; wide open, it obviously lets in the most light. In Tv mode, or shutter priority, you pick the shutter speed and the camera decides the aperture size needed. In M, or manual mode, you pick both.

There are 3 ways to control the exposure of a photo: shutter speed, aperture size, and ISO, the camera's sensitivity. Say you have a pipe that needs a set amount of water flowing through it to be just right, but you have 3 valves letting water in. Turn one way up and you can have the other 2 a lot lower. It's just a 3-way control, and that's all exposure is on a camera: shutter speed, aperture opening, and ISO sensitivity.

Here is an example of lens aperture openings inside the lens and their corresponding F-stop numbers. Just know that the smaller the number, the larger the aperture opening and the more light it lets in. An F-stop is simply a unit used to describe differences in exposure. Take a wide open aperture, or any size for that matter, and close that opening down to let in half the light as before: you have reduced the exposure by 1 stop. Double the size of the opening to let in twice the light and you have increased it by 1 stop.
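For the mathematically inclined, that stop relation for apertures can be sketched in a few lines of Python (the function name here is just for illustration). Light through the lens scales with the area of the opening, which goes as 1 over the f-number squared:

```python
import math

def stops_between_f(n1, n2):
    """Stops of light lost going from f-number n1 to n2.
    Light scales with aperture area, i.e. 1/N^2, so each
    stop multiplies the f-number by sqrt(2)."""
    return 2 * math.log2(n2 / n1)

print(round(stops_between_f(1.4, 2.0), 1))  # 1.0: F1.4 -> F2 is one stop down
print(round(stops_between_f(1.4, 2.8), 1))  # 2.0: F1.4 -> F2.8 is two stops
```

This is also why the familiar full-stop sequence (1.4, 2, 2.8, 4, 5.6, 8...) multiplies by roughly 1.4 each step: each step halves the area of the opening.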

Here is the same stop relation with shutter speed and ISO speed (the camera sensitivity setting). Take a 1/15th second shutter speed and change it to 1/30th and you've reduced the exposure by 1 stop, since you're giving the exposure half the time. Go from, say, 1/250th to 1/125th and you're giving the exposure twice as long, increasing it a stop. Go from 1/1000th to 1/250th and you've increased the exposure time 2 stops.
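The same halving/doubling math for shutter times, sketched in Python (the function name is mine, just for illustration):

```python
import math

def shutter_stops(t_from, t_to):
    """Stops of exposure change going from one shutter time (in seconds)
    to another: doubling the time adds a stop, halving removes one."""
    return math.log2(t_to / t_from)

print(shutter_stops(1/15, 1/30))     # -1.0: half the time, one stop less
print(shutter_stops(1/1000, 1/250))  # 2.0: four times the time, two stops more
```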

The same applies to ISO speeds. 100 ISO is essentially the base ISO of a camera, often the least sensitive it can go. You lose a couple things as you increase the ISO/sensitivity: you increase the amount of noise or grain in the image, and you lose dynamic range, the range of light the camera can capture. If you have a scene going from very bright to dark that is pushing the limits of the dynamic range your camera can capture, cranking the ISO way up will not help things. It's not a huge effect, but it's there, especially from about 800 ISO and higher.

Doubling the light any of these three settings lets in increases the exposure 1 stop; halving it decreases the exposure 1 stop. The most important thing is to realize you have 3 ways to adjust the exposure up or down.

The thing is, a lot of cameras, especially DSLRs, let you choose between 1/3 stop and 1/2 stop adjustment increments. If you have it set to 1/3 increments, then as you spin the wheel to adjust shutter or aperture, each click is 1/3rd of a stop. There's no reason to memorize the F-stop values; you can keep track simply by knowing what increment your camera is set to. Mine uses 1/3 increments, so if I'm at F1.4 and spin once I hit F1.6, spin again and it's F1.8, then again and it's F2, which is a full stop down from F1.4. Set for 1/3rd adjustments, 3 clicks is a stop.
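That click math is easy to sketch in Python. One full stop multiplies the f-number by the square root of 2, so each 1/3-stop click multiplies it by 2^(1/6) (the function name is mine, for illustration only):

```python
def f_after_clicks(f_start, clicks, clicks_per_stop=3):
    """F-number after `clicks` wheel clicks toward a smaller aperture.
    One stop halves the light, which multiplies the f-number by
    sqrt(2); a 1/3-stop click multiplies it by 2^(1/6)."""
    return f_start * 2 ** (clicks / (2 * clicks_per_stop))

# starting at F1.4 with 1/3-stop increments
for clicks in (1, 2, 3):
    print(round(f_after_clicks(1.4, clicks), 1))  # 1.6, 1.8, 2.0
```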

Often you'll just use a lens wide open at its biggest aperture size, because that lets in the most light, so you can get the fastest shutter speed and hand hold the camera. Sometimes though you'll want to stop the aperture down. If I'm in manual mode with the shot exposing right, I can do that, knowing this relation of things. Say I want to stop the aperture down: I can count the clicks of the wheel and know that if I want the same exposure, I have to slow the shutter by that same number of clicks. I don't have to know exactly how many stops I changed the aperture by. It is still helpful to know the full-stop values, like the image higher up going from F1.4 through F8. Continuing that sequence, the next full stop down would be F11, then F16, then F22, with F18 and F20 as options between F16 and F22 if you're set to change in 1/3rd increments.

I'll touch on this more in a bit, but most lenses perform their best stopped down a bit from wide open.

More on exposure. If you can understand the above table of 4 examples, you'll be good to go I think. Pretend each setting on the left is a perfect exposure for some scene. The setting on the right gives the same exposure with different camera settings, which is why there's an = sign between them.

Take the first one. I changed F1.4 to F2.0, letting in 1 stop less light, so I had to increase the shutter time from 1/100th to 1/50th, increasing the exposure time 1 stop. That gives you the exact same exposure.

On the 2nd set I changed F4 to F5.6. That's 1 stop less light getting in, so to get the same exposure I changed the ISO instead. Both are at a 1/100th shutter, but the F5.6 one on the right uses 200 ISO, twice as sensitive, and that ISO increase offsets the 1 stop smaller aperture opening.

The 3rd one down is a 2 stop change. F4 on the right lets in 4 times the light of F8 (2 stops), so to get the same exposure the shutter needs to be 4 times faster, going from 1/100th to 1/400th.

The last example is the same 2 stop change, but on the second set of settings I opted to change 2 controls instead of one. F8 to F4 is again 2 stops, or 4 times the light. But the F8 settings were also using 400 ISO. So think about it: if I use F8, 1/100th shutter at 400 ISO and I change to F4, what could the other two settings be to get the same exposure? Going to F4 is a 2 stop increase, so I need to take 2 stops back out any way I want. I used 200 ISO instead of 400, which is 1 stop less exposure. I then took the other stop off the shutter speed, going from 1/100th to 1/200th.
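Those four example pairs can be checked with a little Python. This sketch (the function is mine, not anything built into a camera) scores any settings combination in stops relative to an arbitrary baseline, so equivalent exposures score the same:

```python
import math

def ev_score(f_number, shutter_s, iso):
    """Relative exposure in stops: a bigger number means more light
    recorded. Aperture counts double-log because light goes as
    1 / f_number^2."""
    return -2 * math.log2(f_number) + math.log2(shutter_s) + math.log2(iso)

pairs = [
    ((1.4, 1/100, 100), (2.0, 1/50,  100)),  # aperture offset by shutter
    ((4.0, 1/100, 100), (5.6, 1/100, 200)),  # aperture offset by ISO
    ((8.0, 1/100, 100), (4.0, 1/400, 100)),  # 2 stops, shutter only
    ((8.0, 1/100, 400), (4.0, 1/200, 200)),  # 2 stops split across both
]
for left, right in pairs:
    diff = ev_score(*left) - ev_score(*right)
    print(abs(diff) < 0.05)  # True: same exposure (F5.6 is nominally F4*sqrt2)
```

The small tolerance is there because marked f-numbers like F5.6 are rounded; the exact value a stop from F4 is 4 times the square root of 2.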

That might seem confusing at first, but really it's not. If you can understand that relation, you've made it most of the way in photography. Well, sorta. And again, you don't really need to memorize anything. Once you know what increment your camera changes by, 1/3 or 1/2 stop, you know how much you're changing things and what the settings are just by how many times you spin the wheel.

All lenses have a max aperture. The more light they let in, the "faster" they are considered. Telephotos let in less and less light unless they're made with a ton of glass. For example, my Canon 100-400L is considered an F4.5 to F5.6 lens (you can see that on the end of the lens above). So at 100mm I can only open it to F4.5, and at 400mm it will only go to F5.6; I can't pick anything faster. That's an important number to consider when buying lenses. Most 50mm lenses will open to F1.8 or wider. Most ultrawides only reach F2.8, if that; F2.8 is considered fast for an ultrawide, and F4 would be considered slow. You're somewhat limited if a lens is only so fast, only able to let in so much light. Thankfully camera noise at higher ISO settings has improved so much that if you do have a slower lens, like a wide angle with a max aperture of F4 or worse, at least you can bump the ISO up now without as much noise as in the past.

Basically you want to use as low an ISO setting as possible: cleaner, with more dynamic range available. You need to bump the ISO up if you need a faster shutter speed. You might need a faster shutter speed if it's getting darker out and you're trying to hand hold a shot without a tripod. The other reason a faster shutter speed might be needed is a quickly moving subject.

As far as hand holding goes in relation to shutter speed, the basic rule is this: the slowest shutter speed you should hand hold at is the reciprocal of the focal length of the lens. A telephoto 400mm lens would be 1/400th of a second, a 50mm lens down to 1/50th of a second, a 14mm lens down to 1/14th. Many lenses now have image stabilization that will change all that; the rule is for no stabilization. It's also highly dependent on the person. But you get the idea.
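The reciprocal rule is a one-liner. As an aside, a common refinement (my addition, not part of the rule as stated above) is to multiply by the crop factor on a crop-sensor body, since the tighter framing magnifies shake like a longer lens would:

```python
def slowest_handheld_s(focal_mm, crop_factor=1.0):
    """Rule-of-thumb slowest handheld shutter time in seconds,
    assuming no image stabilization."""
    return 1.0 / (focal_mm * crop_factor)

print(slowest_handheld_s(400))  # 0.0025 -> 1/400th
print(slowest_handheld_s(50))   # 0.02   -> 1/50th
print(slowest_handheld_s(50, crop_factor=1.6))  # roughly 1/80th on a 1.6x body
```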

Rounding out the exposure control picture....

Say you have a 17mm lens with a max aperture of F4. It's dusk, or getting dim. At 100 ISO and F4 your camera is giving you a 1/5th shutter speed for the scene in Av mode. You have to hand hold it but think that's too slow a shutter. You can't speed up the shutter with a more open aperture because it's already at F4, so you have to bump up the ISO. At 200 ISO you'd get a 1/10th shutter. You want faster, so you bump it up again to 400 ISO and get 1/20th of a second. Then say you know your lens also performs better stopped down to F5.6 and decide to do that. You just lost a stop and are back to 1/10th. So you bump it up another ISO level to 800 and are back to 1/20th at F5.6. You may decide you don't like the noise of 800 ISO on your camera and will live with your lens's performance at F4. If that all makes sense, you probably have a grasp on the camera's exposure controls.
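That dusk walkthrough can be replayed in code. This sketch (the names are mine) solves for the shutter time that keeps the metered exposure as ISO and aperture change:

```python
def shutter_for(base_shutter_s, base_iso, base_f, iso, f_number):
    """Shutter time giving the same exposure as the base settings
    after changing ISO and/or aperture."""
    return base_shutter_s * (base_iso / iso) * (f_number / base_f) ** 2

base = (1/5, 100, 4.0)  # F4 at ISO 100 meters 1/5th at dusk
print(round(1 / shutter_for(*base, 200, 4.0)))  # 10 -> 1/10th at ISO 200
print(round(1 / shutter_for(*base, 400, 4.0)))  # 20 -> 1/20th at ISO 400
print(round(1 / shutter_for(*base, 800, 5.6)))  # 20 -> back to 1/20th at F5.6
```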


So one might wonder, why not just always use the most wide open aperture and get the fastest shutter speed possible? There are a few things to know about what happens as the aperture changes. First off, many lenses, especially ultrawides, love to produce vignetting, a darkening in the corners, when used at their most open aperture setting. Above is an expensive Zeiss 21mm F2.8 on a full frame 5D II. Understand that full frame cameras image further out towards the edges of a lens' image circle, so they show lens faults more than crop cameras do, as it's harder to control lens flaws away from the center.

This animation starts at F2.8 and steps through 1/3 stop decreases in aperture. You can see what this does to that vignetting. The first, dark frame is F2.8; the 4th frame is F4, one stop down. You can see how much just that has helped get rid of the vignetting. And this is a great lens; on lesser lenses the vignetting is worse and harder to get rid of, especially on a full frame camera. If you put this lens on a crop sensor camera, like a digital Rebel, it would not produce as wide an image, as the sensor only images more towards the center, so the picture would happen inside most of that vignetted area. A 21mm lens on a 1.6x crop sensor camera like a digital Rebel produces a 34mm-equivalent view. You can take that backwards too. Say you have a 35mm lens on a crop sensor camera (full frame cameras are more expensive, with a larger sensor). Put that lens on a full frame camera and its view is like a 22mm lens would give on the crop body. 10mm on a crop sensor camera is super duper wide; 16mm on a full frame is super duper wide, producing basically the same view. Anyway, I really don't want to delve into full frame vs crop frame.
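The crop-factor arithmetic in that paragraph is just multiplication (a minimal sketch; 1.6x is Canon's APS-C crop factor, and the function name is mine):

```python
def full_frame_equivalent(focal_mm, crop_factor=1.6):
    """Full-frame focal length giving the same field of view as this
    lens mounted on a crop-sensor body."""
    return focal_mm * crop_factor

print(round(full_frame_equivalent(21)))  # 34: the 21mm acts like 34mm on a Rebel
print(round(full_frame_equivalent(10)))  # 16: why 10mm on crop ~ 16mm on FF
print(round(35 / 1.6))                   # 22: same math run the other direction
```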


(If you have GIF animation problems in Chrome, google it; there's a fix. Mine was two versions of Flash running, one of which needed to be disabled, and that fixed the GIF problem in Chrome.)

Again, corner sharpness is more of a problem on full frame, as full frame images further out on the lens than a crop sensor would. Above is a cheap but great Samyang 14mm on a full frame Canon 5D II. It's a full sized crop from the upper right corner. You can see what stopping down the lens aperture does for corner sharpness. F2.8 and F4 are fairly soft in the corners. Really though, for a full frame 14mm F2.8 lens that's not bad for the extreme corners at a wider aperture setting, especially when you factor in price. Canon's version of a 14mm F2.8 is $2200 and is going to be similar, while this lens is $400 or less...though it is manual focus and the aperture control is on the lens itself...not a big deal. By F5.6, and especially by F8, the corners are plenty sharp. This lens amazes me. So stopping down the aperture helps bring down vignetting and helps with sharpness...TO A POINT......

Camera sensors have diffraction limitations, these days typically kicking in around F8 or F11. Basically you gain sharpness stopping down, up to the diffraction limiting aperture; stopping down beyond that, you start to lose sharpness. This brings up depth of field, another thing the aperture controls.
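A common rule of thumb (my sketch, not an exact optics model) says per-pixel softening starts roughly when the Airy disk, about 2.44 x wavelength x f-number across, grows to around two pixel widths:

```python
def diffraction_limited_f(pixel_pitch_um, wavelength_nm=550):
    """Rough f-number where the Airy disk spans about 2 pixels,
    using green light (550nm) by default."""
    airy_um_per_fstop = 2.44 * wavelength_nm / 1000  # disk diameter per unit f-number
    return 2 * pixel_pitch_um / airy_um_per_fstop

# ~6.4 micron pixels (a 5D II-class sensor) put the limit in the F8-F11 zone
print(round(diffraction_limited_f(6.4), 1))
```

Smaller pixels hit the limit sooner, which is why dense crop sensors tend to show diffraction at wider apertures than full frame does.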

DOF Animation(Depth of Field)


I'm using a Sigma 50mm F1.4 lens on a Canon 5D II, taking shots of my T2i with the Zeiss 21 on it, with focus set on the middle of the Zeiss' focus ring. You can see that as I stop down, the depth of field increases. So aperture is pretty important; it controls a lot of things. This is a pretty extreme example of trying to bring everything in the shot into focus: I'm so close to the camera and lens that I'm at the edge of the 50mm's minimum focusing distance. Anyway, it's a pretty clear illustration of what stopping down does to depth of field, the depth of objects in focus. You'll also gain some contrast stopping down from wide open; you can see that happening in this animation too.

There will be some scenes requiring F11 and beyond. The problem then comes back to diffraction effects. As you gain even more depth of field to bring things in focus, you begin to soften things as a whole because of diffraction limits. So it's a balance.

With this you'll probably want to learn about hyperfocal distance. I don't get that scientific about it, as there are calculators. Basically, if you want something nearby in focus and everything out to infinity, you don't simply stop down and focus on the close object, or at infinity either. The hyperfocal distance is the point between near and far where you'd want to focus, and it depends on your aperture, focal length and distance; a calculator will tell you the aperture needed and where to set the focus for a given focal length. 99% of the time I'm fine focusing at infinity and stopping down a bit, if I have to stop down at all. I almost never have anything close I need in focus, and when shooting ultrawide, "infinity" doesn't start that far away anyway. But just look how the depth of field expands both ways in the above animation as you stop down. If I wanted the whole range from the monitor to the front of the lens in focus, I wouldn't set the focus at the front of the lens and stop down, or at the monitor and stop down; it's between there, at the hyperfocal distance, which again there are calculators for if you ever need to be specific. Just know not to plop focus at infinity and then stop down if you need all the range you can get. Below is an example of me not doing what I'm saying here lol.
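For reference, the standard formula behind those hyperfocal calculators is short. This sketch assumes a 0.03mm circle of confusion, a commonly used value for full frame sensors:

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in mm. Focus here and everything from
    half this distance out to infinity is acceptably sharp."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

h = hyperfocal_mm(21, 8)    # a 21mm lens at F8 on full frame
print(round(h / 1000, 2))   # 1.86: focus ~1.86 m out...
print(round(h / 2000, 2))   # 0.93: ...and sharpness runs from ~0.93 m to infinity
```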

Above is a real world example of stopping down for depth of field, even if I didn't do it completely properly. You can see how F14 improved the focus on that next rock area compared to F5. What I did wrong was never bothering to think about hyperfocal distance; I was focused on the young bighorns. As soon as I went for depth of field I should have focused somewhere beyond them, at the hyperfocal distance. Somewhat of an excuse: I was also trying to capture lightning bolts happening very infrequently in this scene. Now that I think about it, I do seem to remember at least trying to guess a hyperfocal spot at some point for this. This scene is a good example of needing to know what each change does to your camera. I'd stop down for depth of field to get the lightning bolt and the bighorns in focus, but that would make my shutter slower, which was needed to catch a lightning bolt anyway. That slower shutter couldn't end up too slow, though, or the sheep would blur too badly. It really was about as tricky as a scene gets. Big ol' dynamic range too, obviously.

For what it's worth, I actually got the shot lol. It didn't end up as cool as it was hard, though. I used F13, 400 ISO, 1/4th of a second. Probably the only time it will ever make sense to me to bump the ISO up while stopped down to F13. Stopped down for the depth, but if I got much slower than 1/4th the sheep would blur, and much faster and getting lucky on the damn infrequent bolts would be impossible. I mean, think of catching infrequent bolts at 1/4th of a second. I put the camera in continuous shooting mode and would hold the shutter (cable release) down, firing a bunch off till the buffer was full and I had to wait. The lightning was silly infrequent though, and what was happening was too far to the right, behind rocks and out of the shot. Plus the dang young bighorns I'm shooting here kept flat out running across these rocks to new areas. I kept thinking, stop doing that! I may have done a more hyperfocal focus area by this point too, plus backed out to 100mm.

Say I set the aperture to F8 on the camera, with the F2.8 Zeiss 21mm lens above mounted. Again, a lens's F number rating is as far open as you can use it, and it relates to the amount of light getting through the lens. Just like fast telephotos require a lot more heavy glass to get that way, so do faster wide angle lenses; this one is fairly heavy. So I set the aperture to F8. When I look through the viewfinder, or even on the live view screen, I'm looking at F2.8. The view through the viewfinder is always with the lens aperture wide open, as it's brighter and clearer that way for seeing what you're shooting. When you push the shutter button, the camera instantly closes the aperture to F8 right before it opens and closes the shutter. There's a button on the front of your camera called the depth of field preview button. Pushing it closes the aperture to whatever you have the camera set at. So in this case, if I push it I will see through the viewfinder or on the LCD what F8 has done for things like depth of field. It's a darker view, but you can get an idea of how much is then in focus. I don't use it much, but for some things it can obviously be a nice tool.

So beyond the 3 controls for exposure, you'll want to learn a bit about white balance. Essentially, the lower the "temp" the cooler/bluer the image will be; the warmer the temp, the warmer/redder it will be. Auto white balance on a camera does a pretty good job nowadays. In software you have two sliders, one for that temp and the other for a green-to-magenta tint.

The reason for white balance is to offset unwanted colors. Say you have a bunch of people in a white room wearing white shirts, but the room is lit by orange tungsten lights. If you shot that with a neutral day time white balance everyone would look quite orange. If you wanted their clothes and the walls to be white, the white balance would need to be offset cooler.

A quick example is above. On the right is how the image looks if I pick "daylight" white balance in Photoshop. On the left is if I fix it by moving the sliders manually. Reality was probably somewhere between those two actually as it was quite orange. Anyway, not that much to white balance.

Time to venture into image bit depth for a moment. Digital images are RGB, having red, green and blue color channels. If the image is 8 bit, each channel has 256 levels or increments available, from black (0) to white (255). That's quite a lot when you think of red, green and blue each having 256 shades that mix with one another to create the colors in your picture. Your monitor is likely 8 bit, as is your TV and printer. The problem comes in if you have to adjust a photo much, or have smooth gradients in the photo. Say you apply contrast to an image: you're re-assigning values and only have 256 to start with. You can quickly end up with banding, or posterization, in your image.

16 bit images on the other hand have 65,536 levels/increments to use from black to white for each RGB color channel. Computers use 1's and 0's, just 2 numbers; 8 bit uses 8 of them, so 2 to the 8th power gives you that 256, while 2 to the 16th power gives you 65,536. Anyway, the math isn't important. 16 bit just has a whole lot more to work with than 8 bit. You should really work in 16 bit, not 8 bit.
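The powers of two for each bit depth mentioned here, in one loop:

```python
# levels per channel at each bit depth: 8 (JPG), 12/14 (RAW capture), 16 (editing)
for bits in (8, 12, 14, 16):
    print(bits, "bit:", 2 ** bits, "levels per channel")
```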

If you ever wonder whether you should shoot in JPG or RAW, it really is this simple: a JPG is 8 bit, while a RAW is captured at 12 or 14 bit. With RAW you can convert to 16 bit or 8 bit as you want. Shoot just JPG and you're stuck starting off with an 8 bit file. It seems really silly, when your camera is capturing the file at 12 or 14 bit, to let it convert to a JPG for you and start off at 8 bit with 256 levels.

If you are shooting in RAW you can pick the white balance in conversion later, it doesn't matter what you set the camera to.

If you are shooting in RAW it doesn't matter if you set noise reduction on the camera. It's not going to do noise reduction to the RAW data. You can do it in the RAW converter.

If you are shooting in RAW it also doesn't matter what color space you pick on the camera. RAW is RAW data.

The next thing to know about is the histogram. It shows you exactly how the image is distributed by the numbers, so you don't have to rely on your LCD's brightness for judging exposure. The left edge of the histogram is 0/black and the right side 255/white; the jagged peaks between are your image and where it falls on that 0-255 scale of lightness. First off, nothing but black or white should be black or white in an image. Say you have a dark tree line somewhere in the shot. Are the trees pure black? I hope not; those would be some odd trees. If they are in your photo, you have clipped the shadows to black. If a big area of your trees is at 0 on the 0-255 scale, there's no info there, no differences, so you can't save it later; it's gone if it's clipped. If that same area were nearly black but more like 5 to 10 on the scale, at least then you'd have some info, some changes in that area, to try and save later in post-processing. 5 to 10 may look black on your monitor, but if those are the values, it's not completely black and that area will have changes in lightness. If the whole tree area reads 0, there's no info there, just evenly black. The highlights/whites work the same way over at the 255 number. If an area of a photo ends up at 245 to 250 and looks blown out, well, it's not, and at least you have something to work with. If the whole area is at 255/white, there's nothing there to save; the highlights have been blown. The histogram shows you exactly where you're at with all that.
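The clipping idea can be made concrete in code: with pixel values in hand, checking for clipped areas is just counting (a sketch with made-up sample values):

```python
def clipped_counts(values, black=0, white=255):
    """Count 8-bit values clipped to pure black or pure white.
    Clipped pixels hold no detail to recover; near-clipped ones do."""
    blacks = sum(v <= black for v in values)
    whites = sum(v >= white for v in values)
    return blacks, whites

tree_shadow = [0, 0, 5, 7, 10, 3]        # two pixels truly gone to black
bright_sky  = [247, 250, 255, 255, 248]  # two pixels blown to white
print(clipped_counts(tree_shadow))  # (2, 0)
print(clipped_counts(bright_sky))   # (0, 2)
```

The 5-10 values survive: dark, but still carrying differences that post-processing can lift.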

If you put the cursor over an image in Photoshop and look at the info palette you can see the RGB values. I did that in two places in the above photo as an example. See the bottom right one: red 64, green 75, blue 32. It's heavier/brighter on the green channel because it's grass. By the way, if R, G and B are the same number, you have a shade of grey. The other example is towards the top: red 42, green 45, blue 52. They're almost all the same number, so that area is close to grey, with just a slight hedge towards blue.
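That grey rule is easy to check in code: a pixel reads as grey when its three channels are (nearly) equal. The tolerance here is my arbitrary choice, just for illustration:

```python
def is_near_grey(r, g, b, tol=12):
    """True when the channels are within `tol` of each other,
    so the pixel reads as roughly neutral grey."""
    return max(r, g, b) - min(r, g, b) <= tol

print(is_near_grey(42, 45, 52))  # True: the near-grey sample, slight blue hedge
print(is_near_grey(64, 75, 32))  # False: the green-heavy grass sample
```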

Now note the big area of bright white. Cursoring over that area I get values between 247 and 250. The areas now at 250 were actually blown out to 255 in the original image. Areas of the original image near 255 can be brought back down and still have some detail/info, but areas at 255 can only be brought down as a block. That's not saving blown highlights, that's just dimming the blown highlights to a single dimmer shade. If you blow or clip things, they're blown and clipped; you can't save them. Obviously in the above image I have brought them down to 250, but that whole area that was originally pure white at 255 is all 250: there are no gradients or changes in there, because when clipped to 255 the whole area became the same lightness. Photoshop's RAW converter and others are great at bringing down very bright highlights in RAW files, but blown is still very much blown.

Notice on the histograms on the right side that most of the image info is towards the left. I cursored over the histogram to see the number values. Basically most of the image is between 40 and 110, all darker than middle grey. So you can see on the histogram how the image is spread out.

Here is an example histogram of a blown out image. See that white jammed up against the right side? A lot of the pixels in the image have a value of 255. You want to keep the shot from having anything running up the right or left edge of the histogram. This one needed less exposure.

Here is an example of a shot with clipped black shadows. Needed more exposure.

Now, if you have room to the right on the histogram, it's best to keep your shadows not just off the black edge but up away from it a bit. If you have to open up/brighten shadows in post-processing, having them close to black will result in more noise when they're pushed. Also, currently with Canon sensors, if you have to push shadows far you'll likely not just run into some noise but stand a chance of seeing pattern noise, bands of vertical lines in the noise. The new Sony sensors don't have this problem nearly as bad, and Nikon is using several of Sony's sensors in their cameras now. It's really only a problem when you have a scene with a big dynamic range, where keeping the highlights from blowing out leaves your shadows very close to clipped black and needing to be pushed a long way after the fact.

Here is the back of my T2i. I'm using live view just to show this. I'm in Av mode, where I pick the aperture and the camera picks the shutter speed. There's a - / + scale at the bottom and in the viewfinder; you can offset the exposure with it. Most of the time mine is at +2/3 or +1.0 stops. If you look at the histogram you can see the shot is under-exposed: the right side of the image info is nowhere near clipping white, and the majority of the image in the histogram is over near black/dark.

Push and hold the exposure compensation/offset button, then spin the wheel and watch the bar move on that - / + scale, or the same scale in the viewfinder. In this example I moved it up 1 stop, so I'm basically telling the camera to add a stop of exposure to whatever it thinks the exposure should be. How much you need depends on your scene, your camera and your metering mode.
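In Av mode the aperture is fixed, so exposure compensation shows up as a shutter change; it's just the stop math again (a sketch, function name mine):

```python
def shutter_after_compensation(metered_shutter_s, comp_stops):
    """Shutter time after dialing in exposure compensation in Av mode:
    +1 stop doubles the exposure time, -1 stop halves it."""
    return metered_shutter_s * 2 ** comp_stops

print(shutter_after_compensation(1/100, 1.0))  # 0.02 -> the camera picks 1/50th
print(shutter_after_compensation(1/100, 2/3))  # a bit over 1/63rd at +2/3
```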

There are metering mode settings to pick from. Basically they range from using most of the image for the camera to judge exposure down to using less and less of it in a couple of other metering options. I just use the big-area evaluative one and then offset what I need. This offset works in Av or Tv mode. It's not there in manual mode for obvious reasons: you're controlling the whole exposure already.

I hate autofocus and rarely need it. If you don't need it, use manual focus. Live view on cameras now can be zoomed in 10x on the LCD; if you know you want focus at infinity and nothing else, use that to get there, then just leave the lens in manual focus mode. If you do need autofocus, like on birds or whatever, there are 3 modes to be aware of, and I hate that there are 3, as I can never remember which AI mode is which. AI Focus is stupid. You really only want to decide between One Shot, where you get one focus when you push the button half way, and AI Servo, where with the button half pushed the camera keeps trying to focus so you can track something. With AI Focus the camera is evidently supposed to guess which one you want at that moment. I'm great at not remembering which I want the rare times I want tracking autofocus. Live view zoomed in 10x on the LCD can't be beat for precise manual focusing; I'm always utterly shocked when I hear that some people who shoot landscapes frequently have never used it.

On this focusing note, it should be mentioned that some lenses are what is called parfocal, but most are not. My 100-400L isn't, and the 10-22 EF-S I had wasn't either. If a lens is parfocal, then as you zoom it, your focus doesn't change. Say I set my 10-22 to focus at infinity at 10mm, using live view to get it exactly right. If I zoomed to 22mm I would need to adjust the focus to get it just right at infinity again; if the lens were parfocal, it wouldn't change. I have some blown shots with the 10-22 because of that. I figured it was close enough, but no, it's far enough off that the images I didn't refocus after switching from 10mm to 22mm aren't sharp enough. Glad I have primes now, other than the 100-400, and won't have to deal with that. The Zeiss 21mm I have now is even better yet. Every lens I've had has some play around infinity focus, where the ring turns past the marker (which is often wrong anyway). The Zeiss has a hard stop at infinity: I turn it to the very end, it stops, and it's right at infinity. Makes life pretty darn simple lol. Great for night.

Also on focusing: many higher end cameras now come with what is called autofocus microadjustment. A certain lens may back focus on your camera; that's where you place the autofocus point on a spot and the camera and lens actually focus somewhere beyond it. Front focus is the opposite, focusing in front of the spot you want. If that's happening you can pay Canon to tweak your gear, but you'll have to send the camera and lens in together and then wait a while to get them back. Well, some cameras now let you program an offset in the camera, and can save settings for several lenses: just a - / + scale to offset the camera and lens combination to where it works right. I recently did it with my 100-400 and found it was pretty far off on my 5D II. The thing is, it sounds like it can be more tricky with zooms, where you'd need different offsets for different focal lengths; adjusting for one may make another part of the zoom range worse. I haven't done much with it, as again I rarely use autofocus for anything. It did help my 100-400L quite a bit on my 5D II, but only at the 400mm end where I did the adjustment. I'm not sure what happened across the rest of the range.

If you don't have a cable release you should get one; they're cheap for a basic one that gets the job done. When your shutter gets slow you will of course be using a tripod, and rather than touching the camera to take the photo and shaking it, you just plug in a cable release. It's just a shutter button, with the ability to lock it down too. Many cameras allow up to 30 second exposures, and if you want to go beyond that you use BULB mode, where you can lock the shutter down and let it expose as long as you want. Good for motion shots or stars or whatever requires longer exposures, or shots where you want to control exactly when they start and end. Shooting lightning is a good example.

Color spaces and profiles really aren't much fun to go over lol. It can get rather involved. sRGB color space is smaller than Adobe RGB and Adobe RGB is smaller than ProPhoto. The bigger ones can just contain further out colors. If you are shooting in RAW it doesn't matter what you set your camera to. It's capturing RAW data. In RAW conversion you can pick the color space you want to work in.

If you convert to a 16 bit TIFF and work with that, there's not much reason not to work in the large ProPhoto RGB, or at least Adobe RGB.

If you are working with an 8 bit JPG or 8 bit TIFF, the smaller color space can be the better one to work in.

Here is the thing to think about with these color spaces. Go back to the 8 bit deal having 256 levels from black to white. 8 bit in any of the color spaces will have the same number of available levels: 256. Think of it like having a box of 256 crayons. Adobe RGB might have some brighter greens, blues, reds, etc than sRGB, but it has the exact same number of shades to describe them. You could run into banding/posterization issues in smooth gradient areas where a smaller color space would have worked better, because it would have had more shades left to describe that area. It wouldn't be wise to use 8 bit in ProPhoto RGB for this reason. If you are in 16 bit it's a non-issue, because instead of 256 crayons to describe some gradient you have 65,536 of them...plenty.
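The crayon-count idea above is easy to demonstrate with a little arithmetic. This sketch (plain Python; the gradient endpoints are made-up values for illustration) counts how many distinct integer levels survive when a subtle gradient is quantized at each bit depth:

```python
# Why 8 bit gradients band: a subtle tonal ramp only has so many integer
# levels available to draw it, and 16 bit has 256x more of them.

def distinct_levels(start, end, samples, bit_depth):
    """Quantize a smooth 0.0-1.0 gradient into integer levels and count
    how many unique shades are actually available to draw it."""
    max_level = 2 ** bit_depth - 1
    levels = {round((start + (end - start) * i / (samples - 1)) * max_level)
              for i in range(samples)}
    return len(levels)

# A subtle sky gradient covering only 5% of the tonal range:
eight_bit = distinct_levels(0.60, 0.65, 1000, 8)     # only ~14 shades: banding risk
sixteen_bit = distinct_levels(0.60, 0.65, 1000, 16)  # every sample gets its own shade
print(eight_bit, sixteen_bit)
```

With only around a dozen shades to cover a whole region of sky, the steps between them can become visible bands; in 16 bit the same region has thousands of levels to draw from.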

It's most wise to shoot in RAW, convert to 16 bit and work with that. After you are done working on an image you'll likely convert to 8 bit at the end, as most everything wants and uses 8 bit. But while you are adjusting an image, which is moving shade values around and re-assigning them, you want more bits. Chances are even working in 8 bit you won't create banding, but it does happen. It's a whole lot less likely to be a problem in 16 bit.

Here is a quick example of why you should plan on learning post-processing. There are some scenes a camera can process fine (though you'll be getting an 8 bit JPG file if you let the camera process the image). There are many others that need help, that need to be developed in Photoshop or some other application. The above scene had a very high dynamic range: bright sun under and around a dark storm. I blew the shot a little anyway; as you can see on the histogram in the upper right, the highlights are blown. There was some room left in the shadows to give it less exposure, and even a half stop less would have helped things. But still, the resulting camera image would be 75% very dark and 25% very bright. Both of those need help after the fact in post-processing. While standing there at the time, the ground sure wasn't that hard to see and the sky wasn't that bright white.

The RAW converters now, in Photoshop or Lightroom, are so powerful. Such a nice change since the day I took the above photo back in 2004 (it's 2013 now). Glad I was shooting in RAW; that was also my first year using a DSLR, and I've been using RAW from the get-go. Anyway, there will be a main window of tools to get a lot of the work done, like exposure, shadow and highlight recovery sliders, contrast, saturation, etc. There are also now gradient and brush tools with most of the same adjustment options, so you can apply adjustments to specific areas. That always used to have to be done on a TIFF file in Photoshop with a new adjustment layer and a mask; now you can do it in the RAW converter. Nothing alters the actual RAW file on the computer. Instead the program makes a "sidecar" file, which is simply a file that contains the changes you have made. So when you later open a RAW file you had worked on, it opens in the RAW converter with those settings all set again; the sidecar file just tells the software what you had set. You then "open" the file as a JPG, or preferably a 16 bit TIFF, and go from there. Resize it and output it however you wish. So you always have a RAW file that never gets physically changed itself. It's so nice to just open them up later, have them ready to go like this, and then output for whatever need you have.
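The sidecar idea can be sketched in a few lines. This is a toy illustration of the concept, not Lightroom's actual format (real converters use XMP sidecars with far more fields); the file names and settings keys here are made up:

```python
# Non-destructive editing sketch: the RAW file is never touched, and the
# adjustments live in a tiny settings file saved next to it.
import json
import os
import tempfile

def save_sidecar(raw_path, settings):
    # Write the adjustments next to the RAW, e.g. storm.cr2 -> storm.cr2.sidecar
    with open(raw_path + ".sidecar", "w") as f:
        json.dump(settings, f)

def load_sidecar(raw_path):
    # Reopening the RAW later: reapply whatever was saved, else start fresh
    path = raw_path + ".sidecar"
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

tmp = tempfile.mkdtemp()
raw = os.path.join(tmp, "storm.cr2")
open(raw, "wb").close()  # stand-in for the untouched RAW data
save_sidecar(raw, {"exposure": 1.0, "saturation": 25})
print(load_sidecar(raw))  # the RAW file itself was never modified
```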

If you shoot in JPG the camera captures its RAW data, and the camera software adds the saturation, contrast, etc and converts it to a JPG for you. It is post-processed by the camera to give you a JPG. Specifically, it is post-processed by the engineers that wrote the software in the camera. A RAW file is simply that same RAW data; you then use software with options to process it yourself. The RAW will look very flat and lacking in everything at first. But importantly, a JPG is post-processed too, just not by you. There's no magic camera that gives you "pure" images. Granted, the camera software does fine for most scenes, but you'd still be handed an 8 bit JPG. JPG is compressed; TIFF is uncompressed. 8 bit is 256 levels; 16 bit is 65,536. Shoot RAW and deal with learning how to process the RAW data yourself. You can go from the RAW data to whatever you want, over and over, later. You can't go from an 8 bit JPG to whatever you want later; it's been stripped of bit depth, and probably of available dynamic range, as well as being compressed to JPG.

Anyway, what I will do for a storm image like this with a huge dynamic range is start in the main adjustment window. I will up the exposure to brighten the foreground to where it should be and add the saturation it also needed, completely ignoring what is happening above the foreground. I plan to bring the sky back down with the gradient adjustment tools seen above. Most often just one gradient adjustment will do the trick; some images are helped by using a few of them to balance the light around. You can see the one on the horizon is highlighted and selected. Over on the right you can see it has a half stop exposure reduction and a -30 saturation. So everywhere on the green side of that gradient...the sky...gets that adjustment applied to it. You can't go too far with lightness changes or the transition stands out too much along the horizon. Sometimes it's best to use another, longer gradient to get the rest of the exposure reduction. The vertical one to the left of it is bringing the exposure down some more in the sky. The one going from lower left to upper right is lightening the shadow some more, as that corner was still too dark. The top gradient is a tiny adjustment decreasing the saturation a bit more up there and the exposure a hair more. Most cloud or storm scenes benefit from not applying the same saturation to the sky as you do to the foreground. I always bring the saturation up to where it should be in the main adjustment window, knowing I'll decrease it in the sky with a gradient adjustment. Otherwise you end up with oddly blue or purple clouds; you see that quite often online.
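Under the hood, a gradient exposure adjustment like the one described above boils down to multiplying pixels by an exposure factor that ramps from full strength at the top of the frame to nothing at the horizon. Here's a rough sketch of the idea, using made-up luminance values rather than a real RAW pipeline:

```python
# Graduated exposure adjustment sketch: darken the sky by a given number of
# stops, ramping the effect from the top of the frame down to the horizon.

def apply_gradient(image, horizon_row, stops):
    """image: list of rows of 0.0-1.0 luminances, top row first. Rows above
    horizon_row get darkened, ramping from full effect at the top of the
    frame to no effect at the horizon line."""
    factor_full = 2.0 ** stops          # -0.5 stop -> multiply by ~0.707
    out = []
    for r, row in enumerate(image):
        if r >= horizon_row:
            out.append(row[:])          # foreground left untouched
        else:
            t = 1.0 - r / horizon_row   # 1.0 at top, approaching 0 at horizon
            factor = 1.0 + (factor_full - 1.0) * t
            out.append([p * factor for p in row])
    return out

# Tiny 4-row "frame": bright sky in the top two rows, dark ground below.
sky_and_ground = [[0.9, 0.9], [0.8, 0.8], [0.3, 0.3], [0.2, 0.2]]
adjusted = apply_gradient(sky_and_ground, horizon_row=2, stops=-0.5)
```

Stacking several of these, as in the storm example, is just applying more than one ramp over the same image.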

The main point here is this: for some scenes like that, a camera is never going to replicate something closer to reality than you can get adjusting the RAW file afterwards. There are no magic settings on the camera to get what you can get in post-processing. The processed image may still not be perfectly like it was in reality, but it is far, far closer than the original...what a camera JPG would give you. There are graduated neutral density filters for this, but those often don't look as nice, especially if much of anything sticks above the horizon. You see it on TV all the time, where a hill or rock is a lot darker than the rest because it sticks up into the dark part of the neutral density filter. The filter is just darker on the top, so it dims down the bright sky to fit more kindly into the dynamic range of the camera. It doesn't do much good when you have a dark foreground, a bright bright sky, then dark cloud above that. Kind of a hassle too if you don't have much time. You can also take multiple exposures and blend them in Photoshop. I'm too stubborn and lazy though, and always just take one and try to make it all fit the dynamic range. The shadows open up so well now on the new Sony sensors in some of the Nikons that it's almost a non-issue and you can fit it all in one exposure. I think many of the Canons have around 11 stops of dynamic range and some of those Sonys are in the 13 stop range.
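Since stops are just doublings (the same idea as the f-stop explanation at the start), those dynamic range figures are easy to put in concrete terms:

```python
# A sensor with N stops of dynamic range can hold a brightest-to-darkest
# ratio of 2**N. Using the rough 11 vs 13 stop figures from the text:
canon_ratio = 2 ** 11   # 11 stops -> 2048:1 contrast ratio
sony_ratio = 2 ** 13    # 13 stops -> 8192:1 contrast ratio

# Two extra stops means 4x the usable brightness range in one exposure:
print(canon_ratio, sony_ratio, sony_ratio / canon_ratio)  # 2048 8192 4.0
```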

Two points here. One is another thing to be aware of in photography: chromatic aberrations. Some cheap lenses, and even some really expensive ones, will produce chromatic aberrations or color fringing on sharp contrast edges, usually worse in the corners. The second note is that in most cases it is super easy to remove from a RAW file in RAW conversion.

The left is the before, obviously, with the magenta and green fringing. The right is after one click in the RAW converter. That's from an $800 Canon EF-S 10-22. My 100-400L, even more expensive, does it too, but not that badly. The Zeiss and Samyang prime lenses (one focal length, no zoom) I just got don't seem to do it much at all.

Since I'm on the RAW converter subject and thinking about it, here is one I bet a lot of people don't realize. I'm in Photoshop's RAW converter in the sharpening section. When you sharpen, you just want to sharpen edges...detail. You don't want to sharpen a smooth blue sky, because you will simply sharpen any noise and make it worse. Well, that's why they have the Masking slider. The program just never tells you to hold down the ALT key while dragging that slider; I only recently stumbled onto that one. You get a black and white mask image so you can see what you are masking out from the sharpening. Areas in white get the sharpening applied to them; black doesn't.
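The white-gets-sharpened, black-doesn't behavior can be sketched with a toy 1-D signal. This is just an illustration of the idea, not Adobe's actual algorithm; the threshold value and the 3-sample blur are made-up parameters:

```python
# Edge-masked sharpening sketch: build a mask from local contrast, then apply
# unsharp sharpening only where the mask is "white", so flat areas (a smooth
# sky, where only noise lives) are left completely alone.

def sharpen_masked(signal, amount=1.0, threshold=0.05):
    # Simple 3-sample blur (edges clamped) to measure local detail
    blurred = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, len(signal) - 1)]) / 3
               for i in range(len(signal))]
    out = []
    for s, b in zip(signal, blurred):
        detail = s - b
        mask = 1.0 if abs(detail) > threshold else 0.0  # white = sharpen, black = skip
        out.append(s + amount * detail * mask)
    return out

flat_sky = [0.50, 0.51, 0.50, 0.50]   # low-contrast noise: mask is black, untouched
edge = [0.20, 0.20, 0.80, 0.80]       # a real edge: mask is white, contrast boosted
```

Without the mask term, the noise in `flat_sky` would get amplified right along with the edge.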

My thought on filters is pretty simple: unless it serves an image purpose, it's not needed. Using them for "protection" seems silly to me, for a few reasons. I've had a DSLR and used lenses for 8 years now and am anything but overly cautious, and I don't know that I've scratched the front glass on any of them. It's really not that easy to do when you think about it. Maybe I have one hairline nick, hard to even find, on one of them. It takes some pretty serious gashes in the glass to even affect the image quality, and any hit hard enough to gash it like that would have busted right through a filter anyway. But mostly, it's just not that easy to nick the front glass. Maybe if you are rock climbing with the camera around your neck.

What an added flat piece of glass will do is, for starters, let a bit less light in. Decrease the contrast perhaps. Be another element to possibly introduce lens flares. And finally, perhaps affect sharpness. Only if you spend some money on a high quality filter does it sound like you can avoid a lot of those potential issues, but really, why bother? It IS hard to scratch the front glass, and anything that would be devastating to the front glass would have blown through your little filter anyway. But mostly, to me it's not worth adding potential image issues over something that is just a non-issue. I'm not worried about getting scratches on the front of my lenses because I know how hard and unlikely it is. And lastly, if I do demolish the front glass, I'll just have to send the lens to the manufacturer for repair, which would simply consist of them replacing that part. Oh well. A random nick or scratch here or there after many years of use isn't going to affect the image anyway. Why have a little flat filter on there that whole time over a pretty overblown concern?

I would for sure slap one on there if you are using the lens in, say, sand or a bunch of water. I learned the hard way on my 10-22 how much a filter would have helped keep sand out of the end; the area where the lens moves isn't sealed without a filter on there. I shot in a sand storm several times and the lens got all bound up. So if I were to shoot in blowing sand again, I'd have one on. That is it.

Only recently did I try a polarizer and wind up buying one. I should have done that long ago. I was surprised at just how much it could open shadows and dim highlights in a dynamic scene. Just look at the blue sky difference and especially the shadow difference.

If you want protection that makes a ton of sense, get and use a lens hood. It gets damn hard to scratch a lens with a hood on, for obvious reasons. A hood also serves a couple of great purposes. It shields stray light from entering the lens at a sharp angle; light coming in from the edges loves to cause lens flares, which you really don't want, and a hood helps a lot with that. The other thing is you usually gain contrast. It can be a huge gain in contrast with the hood on compared to off, depending on the scene and lighting. This was another thing I never really utilized until recently. My Samyang 14's hood is permanently built in, so I can't take it off even if I wanted to. The front glass really protrudes on that lens though, bulging outward, so it's much easier to damage; it's just a lot easier to actually scratch a lens when the glass bulges out rather than sitting sunk down in the end some. Anyway, if you want protection for your lens, don't waste it on a filter; snatch up a lens hood. A hood has no chance of hurting image quality and every chance of helping it, while a "protective" filter whose only purpose is protection has no chance of helping image quality. All these years I rarely used a hood, never a filter, and I've still found it super hard to scratch a lens. And again, even if I did get a scratch here or there, it's just not going to affect quality. It's been shown it takes some serious lens destruction to affect the resulting images, destruction your filter would not have stood in the way of. I wouldn't get too worried over a random possible scratch happening.

Once you are on your computer working with images you'll want to calibrate your monitor. To a big degree you can do this yourself using various online websites about it, but a calibration hardware device like a Spyder or X-Rite i1, etc will do it best and get the color right. I use a Spyder3 Elite at the moment and it works well enough. A big part of the reason to calibrate is to get your monitor brightness and contrast correct. If your monitor is set too bright, your images will end up too dark for everyone else, and the opposite if your monitor is set too dim. The other thing is the lighting in the room the computer is in; it is best to have a pretty dark working environment. Windows open with bright sun outside isn't likely to cut it. You basically want your blacks black and your whites white, but with even increments between the shades in between. Just Google "monitor calibration" and you'll find some good sites on that whole subject.
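The "even increments" idea is basically a step wedge: evenly spaced patches from black to white, where on a well-calibrated monitor every neighboring pair should look distinctly different, including the darkest and brightest pairs. A little sketch of the kind of pattern those calibration sites display (the 21-step count is just a common convention, not a standard):

```python
# Generate the 8-bit levels for an evenly spaced black-to-white step wedge.
# On a properly adjusted monitor, all neighboring patches remain visibly
# distinct; crushed blacks or clipped whites make the end patches merge.

def step_wedge(steps=21):
    # Evenly spaced 8-bit levels from 0 (black) to 255 (white)
    return [round(i * 255 / (steps - 1)) for i in range(steps)]

wedge = step_wedge()
print(wedge)  # starts at 0, ends at 255, with roughly equal steps between
```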