Designer working with a stylus on a graphics tablet

Online Photoshop Tutorials

Understand what Photoshop is actually doing

Blending modes. Smart objects. Color grading. The logic behind the tools.

01

Blending Modes, Explained: What Photoshop Is Actually Doing to Your Pixels

Most designers use blending modes by feel, cycling through the list until something looks right. That works up to a point. Knowing the math behind them is what lets you predict the result before you try it.

Photoshop layers panel showing multiple blending modes on overlapping color layers

Why Most Explanations Don’t Stick

The standard tutorial approach to blending modes is a grid of before/after images: here’s what Multiply does to a photo, here’s Overlay, here’s Soft Light. It’s useful as a reference. It’s not useful for understanding what’s actually happening, because it skips the part that makes the results predictable.

Blending modes are math. Each one is a formula that takes the pixel value of the layer you’re blending (the “blend layer”) and the pixel value of what’s beneath it (the “base layer”), runs a calculation, and outputs a result. Once you know the formula, you can predict what any combination of colors will produce. You stop guessing and start choosing.

The pixel values Photoshop uses run from 0 to 1 in the formulas, even though the interface shows 0 to 255. A pure white pixel is 1. A pure black pixel is 0. A 50% gray is 0.5. Keeping that in mind makes the math easier to follow.

The Multiply Group

Multiply is the one most designers encounter first, usually when they’re trying to remove a white background from a scan or a texture. The formula is simple: base times blend equals result.

If you multiply any color by white (1), you get the original color back unchanged. If you multiply anything by black (0), you get black. If you multiply two mid-tones, you get something darker than either. That’s the entire behavior of Multiply: it can only make things the same or darker, never lighter. White on a Multiply layer is invisible. Black is completely opaque.
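Because the formula is just a product, it can be sketched in a few lines. A minimal sketch in Python, with pixel values normalized to 0–1 as described above:

```python
def multiply(base: float, blend: float) -> float:
    """Multiply blend mode: result = base * blend (values in 0..1)."""
    return base * blend

# White (1.0) on the blend layer leaves the base unchanged:
assert multiply(0.6, 1.0) == 0.6
# Black (0.0) always wins:
assert multiply(0.6, 0.0) == 0.0
# Two mid-tones darken each other:
assert multiply(0.5, 0.5) == 0.25
```

The assertions are the whole behavior of the mode: the result can never be lighter than either input.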

This is why it’s the right choice for ink textures, watercolor washes, and hand-drawn linework scanned on white paper. The white paper disappears; the marks stay.

Screen is the inverse. Its formula is: 1 minus the product of (1 minus base) and (1 minus blend). In practice, this means Screen can only make things lighter. Black on a Screen layer is invisible. White is completely opaque. It’s Multiply’s mirror image, which is why it’s what you reach for to blend light sources, lens flares, and fire captured on black backgrounds.
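The Screen formula translates just as directly. A sketch in the same normalized 0–1 terms:

```python
def screen(base: float, blend: float) -> float:
    """Screen blend mode: 1 - (1 - base) * (1 - blend)."""
    return 1 - (1 - base) * (1 - blend)

# Black (0.0) on the blend layer is invisible: the base passes through.
assert abs(screen(0.6, 0.0) - 0.6) < 1e-12
# White (1.0) is fully opaque:
assert screen(0.6, 1.0) == 1.0
# Two mid-tones lighten each other:
assert screen(0.5, 0.5) == 0.75
```

Where Multiply can only darken, Screen can only lighten: the result is never darker than either input.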

Multiply and Screen Are Complementary

That relationship is worth pausing on. Screen is Multiply performed on inverted values: invert both layers, multiply them, invert the result, and you get exactly what Screen produces. This isn't a trick you'll use often, but it tells you something important: these two modes are mathematical complements, not just visually opposite.

The same complementary relationship holds for most of the major blending mode pairs. Lighten and Darken are complements: one keeps the maximum of the two values, the other the minimum. Color Burn and Color Dodge are complements, as are Linear Burn and Linear Dodge. If you understand one side of each pair, you understand both.
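The relationship between these pairs can be checked numerically: Screen is Multiply performed on inverted inputs, with the result inverted back. A quick verification sketch:

```python
def multiply(base: float, blend: float) -> float:
    return base * blend

def screen(base: float, blend: float) -> float:
    return 1 - (1 - base) * (1 - blend)

# Screen(a, b) == 1 - Multiply(1 - a, 1 - b) for any pair of values:
for base, blend in [(0.2, 0.7), (0.5, 0.5), (0.9, 0.1)]:
    assert abs(screen(base, blend) - (1 - multiply(1 - base, 1 - blend))) < 1e-12

# Lighten (max) and Darken (min) obey the same complement relation:
assert abs(max(0.3, 0.8) - (1 - min(1 - 0.3, 1 - 0.8))) < 1e-12
```

The loop is the proof the text describes: the two modes are mirror images under inversion, which is why one brightens exactly where the other darkens.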

The Overlay Group: Where It Gets Interesting

Overlay is where many designers get confused, because its behavior changes depending on whether the base layer is light or dark.

The formula switches at the midpoint. If the base pixel is darker than 50% gray (value below 0.5), Overlay applies a Multiply-like calculation. If it's lighter than 50% gray (value above 0.5), it applies a Screen-like calculation. The two branches meet seamlessly at 0.5, where both produce the blend value.

What this means in practice: Overlay increases contrast. Dark areas get darker, light areas get lighter, and the midpoint stays put. A 50% gray layer on Overlay is completely invisible, which is why texture designers frequently use it: paint at 50% gray on an Overlay layer, and you can add highlights by going lighter than gray and shadows by going darker, without touching the underlying image at all.
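Both behaviors fall out of the piecewise formula. A sketch, again with 0–1 values:

```python
def overlay(base: float, blend: float) -> float:
    """Overlay: Multiply-like below the midpoint, Screen-like above it."""
    if base < 0.5:
        return 2 * base * blend
    return 1 - 2 * (1 - base) * (1 - blend)

# A 50% gray blend layer is neutral: the base passes through untouched.
assert abs(overlay(0.3, 0.5) - 0.3) < 1e-12
assert abs(overlay(0.8, 0.5) - 0.8) < 1e-12

# Dark bases get darker, light bases get lighter: that's the contrast boost.
assert overlay(0.25, 0.4) < 0.25
assert overlay(0.75, 0.6) > 0.75
```

The factor of 2 in each branch is what rescales the half-range formulas so they join at the midpoint.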

Soft Light uses the same principle but with a gentler formula. The transition at the midpoint is less abrupt, so the contrast increase is more subtle. Hard Light is Overlay with the layer roles swapped: instead of the base layer controlling which formula applies, it's the blend layer.

This is worth running as an experiment if you haven’t. Take an image, add a Hard Light layer with a color painted on it, then swap the two layers. Set the painted layer to Overlay instead. With Hard Light, the blend layer color drives the result. With Overlay, the base layer does. Same image, same colors, different layer order, different outcome.

Luminosity, Color, and the Separation Group

The separation modes sit apart from the others because they don’t combine pixel brightness through multiplication or addition. They separate the attributes of color (hue, saturation, luminosity) and apply only one of them.

Color mode takes the hue and saturation of the blend layer and applies them to the luminosity of the base layer. This is the correct mode for colorizing black-and-white photos, for applying a color grade without affecting tonal contrast, and for fixing selective color problems without touching the exposure. It leaves the light/dark relationships of the original completely intact.

Luminosity does the opposite: it takes the brightness values of the blend layer and applies them to the color of the base. This is less commonly used but genuinely useful when you’ve done a tone correction (curves adjustment, for instance) that shifted the colors in ways you didn’t want. Set the corrected layer to Luminosity and only the tonal changes carry through.
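Photoshop computes these modes with its own luminance and saturation definitions, but the idea can be approximated with the standard library's HLS color model. A sketch, assuming HLS lightness as a stand-in for Photoshop's luminosity (not the exact formula):

```python
import colorsys

def color_mode(base_rgb, blend_rgb):
    """Approximate Color mode: blend's hue and saturation, base's lightness."""
    _, base_l, _ = colorsys.rgb_to_hls(*base_rgb)
    blend_h, _, blend_s = colorsys.rgb_to_hls(*blend_rgb)
    return colorsys.hls_to_rgb(blend_h, base_l, blend_s)

def luminosity_mode(base_rgb, blend_rgb):
    """Approximate Luminosity mode: base's hue and saturation, blend's lightness."""
    base_h, _, base_s = colorsys.rgb_to_hls(*base_rgb)
    _, blend_l, _ = colorsys.rgb_to_hls(*blend_rgb)
    return colorsys.hls_to_rgb(base_h, blend_l, base_s)

# Colorize a mid-gray pixel with a red blend layer: the hue comes from the
# blend, the tonal value stays where the gray was.
print(color_mode((0.5, 0.5, 0.5), (1.0, 0.0, 0.0)))
```

The point of the sketch is the structure, not the exact numbers: each mode copies one attribute across and leaves the rest alone.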

Hue and Saturation exist as their own modes too, though they see less use. Hue alone changes the color without touching saturation or brightness. Saturation alone changes the intensity of the existing color. Knowing they exist matters less than knowing Color and Luminosity, but they’re there when you need to make a single-attribute change.

Making Decisions Instead of Guessing

The practical payoff of understanding blending mode math is that you can think forward instead of backward. Before you try Overlay, you know it will increase contrast and that a 50% gray layer will disappear entirely. Before you use Multiply, you know white areas on that layer will be invisible and that you’re going to darken whatever is beneath. Before you try Color, you know you’re keeping the underlying tonal structure intact.

Cycling through the list to find something that looks right still has its place, especially with texture layers where you genuinely don’t know which will fit. But for intentional work, knowing the formula gets you to the right mode on the first or second try instead of the eighth.

The deeper benefit is that you can combine modes in predictable ways. A Multiply layer for the shadow pass, a Screen layer for the light pass, a Soft Light layer for a midtone contrast boost: each one is doing a defined thing, and the stack as a whole is legible. You can revisit it six months later and know exactly why each layer is there.

That’s the difference between understanding the tool and being along for the ride.

Read full article →
02

Smart Objects and Why You Should Never Rasterize Anything You Might Change

Rasterizing a layer collapses its edit history and makes the change permanent. Smart objects keep that history open. The difference matters more than most designers realize until they've had to rebuild something from scratch.

Photoshop smart object badge on a layer thumbnail with transform handles visible

What a Rasterized Layer Actually Is

When you rasterize a layer in Photoshop, you're converting whatever was on that layer (a vector shape, a type layer, a placed file) into a flat grid of pixels at the current document resolution. The original information is gone. If you had a vector shape at 300 pixels wide and rasterized it, you now have a bitmap that can't be made larger without losing quality. If you had a type layer set in a specific font at a specific size and rasterized it, you now have a bitmap of letters and the text is gone.

Photoshop will ask you to rasterize in a few situations: when you try to run a pixel-based filter on a non-pixel layer, when you try to paint on a type layer, when you try to use certain transform functions. Each prompt is an invitation to flatten something irreversibly. The question to ask each time is whether you'll ever need to go back to the original.

Most of the time, the answer is yes, and Smart Objects are how you keep that option open.

What a Smart Object Is

A Smart Object is a container. It wraps a layer or a set of layers inside a protected package that Photoshop treats as a single unit. When you scale, rotate, warp, or apply filters to a Smart Object, Photoshop doesn’t modify the actual pixels inside. It records the transformation and applies it on the fly when rendering. The original content stays intact.

This matters most for three things: scaling, Smart Filters, and linked files.

When you scale a regular pixel layer down to 10% of its original size and then scale it back up, it looks terrible. Photoshop threw away 90% of the pixel information when you scaled it down, and scaling back up just interpolates the remaining data. When you do the same thing with a Smart Object, Photoshop scales back up from the full original. The transformation is reversible because the original was never touched.
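The information loss is easy to see in a toy model. This is a hypothetical nearest-neighbor resampler over a one-dimensional row of pixels, not Photoshop's actual interpolation, but the effect is the same in kind:

```python
def resample(pixels, new_size):
    """Nearest-neighbor resample of a 1-D row of pixel values."""
    old_size = len(pixels)
    return [pixels[int(i * old_size / new_size)] for i in range(new_size)]

original = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

small = resample(original, 2)       # rasterized layer scaled down to 20%
restored = resample(small, 10)      # scaled back up: the detail is gone

print(small)     # only two samples survived the downscale
print(restored)  # blocky reconstruction built from those two samples
# A Smart Object would instead re-render from `original` at full quality.
```

Once the downscale has discarded eight of the ten values, no upscale can recover them; the Smart Object avoids the problem by never discarding them in the first place.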

Smart Filters are filters applied to Smart Objects. A Gaussian Blur applied to a regular layer destroys the original pixel data. The same filter applied to a Smart Object is stored as an instruction attached to the container. You can double-click the filter entry in the Layers panel and adjust it any time. You can disable it. You can delete it. You can change the blend mode or opacity of the filter itself using the icon next to it. None of this is possible once you’ve rasterized and applied a filter directly to the pixels.

Linked vs. Embedded Smart Objects

Photoshop has two kinds of Smart Objects. Embedded Smart Objects store a copy of the original file inside the document. Linked Smart Objects store a reference to an external file on disk. The practical difference matters depending on what you’re building.

An embedded Smart Object is self-contained. Move the PSD to another machine and the Smart Object comes with it. The tradeoff is file size: the original content lives inside the document, so large embedded objects make large files.

A linked Smart Object points to an external file. You can edit the external file in its native application (Illustrator for vectors, Camera Raw for raw files, another Photoshop document for PSDs), save it, and every document that links to it updates automatically. This is the right choice for assets shared across multiple documents, for logos that might change, and for any element you want to edit in a better tool than Photoshop’s layer editor.

Linked Smart Objects also make it possible to build layered mockup systems where swapping one source file updates the mockup everywhere it appears. Change the product image once; every marketing comp with that product updates. That kind of system requires linked Smart Objects and couldn’t work with embedded or rasterized content.

The Camera Raw Workflow

One of the most powerful uses of Smart Objects is with Camera Raw files. When you open a raw file in Photoshop through Camera Raw, you can choose to open it as a Smart Object rather than a flattened layer. This keeps the connection between the Photoshop document and the raw file live.

Double-click the Smart Object thumbnail and Camera Raw reopens, with all your previous adjustments intact and editable. You’re not working on a flattened JPEG interpretation of the raw file. You’re re-rendering it from the original sensor data with new parameters. The difference in what’s recoverable in shadows, highlights, and color is substantial.

The same principle applies to vector files placed from Illustrator. Place an Illustrator file as a linked Smart Object, and you can edit it in Illustrator any time by double-clicking. Scaling it in Photoshop doesn’t degrade the output because Photoshop renders the vector at whatever size you need.

When You Actually Do Need to Rasterize

The case for rasterizing is real, even if it’s narrower than most default workflows make it. Some filters don’t run on Smart Objects at all. Some brushwork genuinely needs to apply to a pixel layer. Some performance-heavy documents with many Smart Objects benefit from flattening layers that are truly finished.

The discipline is to rasterize with intention rather than by default. Before clicking Rasterize or flattening a layer, ask whether the transformation is truly final. If there’s any chance you’ll need to revisit the original, either work on a duplicate or keep the Smart Object and work destructively on a stamped copy. The extra step takes five seconds. Rebuilding an element from scratch because the original was rasterized takes much longer.

Building the Habit

The biggest shift in working non-destructively isn’t technical. It’s the habit of asking “will I need to change this?” at each step rather than “does this look right?” The second question produces faster early decisions. The first produces less rebuilding later.

Smart Objects, adjustment layers, layer masks, and blend mode stacks are all tools for keeping decisions revisable. A document built entirely with these approaches is one where you can go back to any point and make a different call. A document built with frequent rasterization and direct pixel edits is one where you’re committed to the choices you made in order, and going back means losing everything that came after.

Most experienced Photoshop users arrive at the non-destructive workflow the same way: by having to rebuild something they wish they hadn’t flattened. You can either learn it the expensive way or front-load the discipline. Either way, Smart Objects are the foundation.

Read full article →
03

Color Grading a Flat Photo: What Works and What Doesn't

A flat, well-exposed photo is easier to grade than one that's already been processed. The work is in understanding what you're changing and in what order, not in finding the right preset.

Photoshop Curves adjustment panel showing an S-curve on a portrait photograph

What “Flat” Actually Means

A flat photo, in the context most digital photographers mean, is one that’s been deliberately underprocessed. Raw files shot with a neutral picture profile, log-gamma footage, or scans from a well-exposed negative all look flat: low contrast, muted colors, a histogram that sits in the middle rather than spreading to the edges.

Flat is good. It means you have information across the full tonal range. A photo that came out of the camera looking already punchy and saturated has likely had contrast added and highlights clipped in-camera, which means some of that information is gone before you even open it. A flat file gives you maximum latitude to decide where the contrast goes and how the colors behave.

The goal of color grading isn’t to add saturation until the photo looks exciting. It’s to place tones and colors in the right relationships to each other so the image reads the way you intend it to. That’s a more controlled operation than it sounds.

Start With Exposure and White Balance

Before any creative grading, the technical foundation needs to be right. Exposure first: if the overall brightness is off, every subsequent adjustment is trying to correct for it. A Levels or Curves adjustment to set the white point and black point correctly takes thirty seconds and saves you from chasing color casts that were actually just overexposure.

White balance second. A color cast in the raw file contaminates every color decision you make on top of it. A portrait graded with the wrong white balance will look stylized in the wrong way: not the cool-shadow warm-highlight look you were going for, but just wrong skin tones with a layer of treatment on top.

If you’re working on a raw file opened as a Smart Object, fix both in Camera Raw before you start adding adjustment layers. If you’re working on a JPEG or a merged pixel layer, a Curves adjustment targeting the individual R, G, and B channels will let you neutralize a cast without affecting overall brightness.

Curves Is the Primary Tool

Most color grading in Photoshop runs through Curves. It’s worth spending time here rather than reaching for Hue/Saturation, Color Balance, or Vibrance first, because Curves can do everything those tools do and it gives you more precise control.

The master Curves channel controls overall brightness and contrast. An S-curve is the classic contrast boost: pull the shadows slightly down and the highlights slightly up, and you increase separation across the tonal range. The steeper you make the S, the more contrast you add. A very steep S-curve produces the crushed-blacks, blown-highlights look common in commercial and fashion work. A gentle S is more natural.
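The shape of a gentle S-curve can be sketched as a blend between the identity line and a smoothstep curve. This is a stand-in shape for illustration; Photoshop's Curves fits a spline through whatever control points you place:

```python
def s_curve(v: float, strength: float = 0.5) -> float:
    """Blend the identity line toward a smoothstep S-shape.

    strength=0 leaves tones untouched; strength=1 is the full S.
    Values are normalized to 0..1.
    """
    smooth = v * v * (3 - 2 * v)   # smoothstep: S-shaped, fixed at 0, 0.5, 1
    return (1 - strength) * v + strength * smooth

# Shadows pulled down, highlights pushed up, midpoint untouched:
assert s_curve(0.25) < 0.25
assert s_curve(0.50) == 0.5
assert s_curve(0.75) > 0.75
```

Raising `strength` steepens the S, which is exactly the "more contrast" move the text describes; the endpoints and midpoint stay pinned while everything between them separates.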

The individual R, G, and B channels let you shift the color temperature and add tints to specific tonal ranges. Lifting the blue channel in the shadows while pulling it down slightly in the highlights gives you the cool-shadows warm-highlights split that reads as “cinematic.” Pulling the red channel down in the highlights desaturates skin tones in a way that looks less processed than reducing overall saturation. Adding a small amount of red to the shadows warms them without touching the rest of the image.
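The per-channel moves above can be sketched as tone-weighted channel offsets. This is an illustrative approximation (in Photoshop you would draw these shapes on the Blue curve directly); the Rec. 709 luma weights are an assumption:

```python
def split_tone(r, g, b, cool_shadows=0.08, warm_highlights=0.08):
    """Cool shadows / warm highlights via opposing shifts on the blue channel."""
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luminance
    shadow_weight = 1 - luma        # strongest in dark pixels
    highlight_weight = luma         # strongest in bright pixels
    b_out = b + cool_shadows * shadow_weight - warm_highlights * highlight_weight
    return r, g, min(max(b_out, 0.0), 1.0)

print(split_tone(0.1, 0.1, 0.1))  # dark pixel: blue lifted, reads cooler
print(split_tone(0.9, 0.9, 0.9))  # bright pixel: blue pulled down, reads warmer
```

The two weights cross at the midtones, which is why the split reads as a smooth temperature transition across the tonal range rather than a hard boundary.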

This split-toning through individual Curves channels is more precise than the dedicated Split Toning tools in Lightroom or the Color Balance adjustment in Photoshop, because Curves lets you draw the exact shape of how the effect transitions across tones.

The Order of Operations

Color grading adjustments interact with each other, so order matters.

Exposure and white balance first, before anything creative. Contrast second: establish the overall tonal structure before you start adjusting color, because adding contrast shifts color saturation (higher contrast makes colors appear more saturated, lower contrast makes them look flatter). Color work third: once the tones are right, you can adjust hues and saturation against a stable baseline. Localized adjustments (dodging, burning, targeted color corrections) come last.

If you add a saturation boost before you’ve set the contrast, and then add contrast afterward, you’ll often end up over-saturated because the contrast increase added apparent saturation on top of your deliberate increase. Doing contrast first means your saturation adjustment is working on the final tonal structure.
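The contrast-adds-saturation interaction is easy to demonstrate numerically: apply the same contrast curve to each channel of a muted color and the spread between channels, which is what saturation measures, widens on its own. A toy linear contrast function for illustration:

```python
def contrast(v: float, amount: float = 1.5) -> float:
    """Linear contrast around the midpoint, clipped to 0..1."""
    return min(max(0.5 + (v - 0.5) * amount, 0.0), 1.0)

def sat(rgb):
    """HSV-style saturation: channel spread relative to the brightest channel."""
    return (max(rgb) - min(rgb)) / max(rgb)

muted = (0.6, 0.45, 0.35)                    # a muted warm tone
punchy = tuple(contrast(c) for c in muted)   # same color, contrast applied

print(sat(muted), sat(punchy))  # saturation rises from contrast alone
```

No saturation adjustment was applied, yet the second value is higher. That is the apparent saturation the text warns about, and why stacking a deliberate saturation boost on top of it overshoots.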

What Hue/Saturation Is Actually Good For

Hue/Saturation is a blunt instrument for overall saturation but a precise one for selective color work. The dropdown menu at the top lets you target a specific color range: Reds, Yellows, Greens, Cyans, Blues, Magentas. Used this way, you can shift the hue of a blue sky without touching anything else, saturate only the skin tones, or desaturate the greens in a background that’s competing with the subject.

The Targeted Adjustment tool (the hand icon) inside Hue/Saturation lets you click directly on a color in the image to select it. Drag left to desaturate, right to saturate. This is faster than guessing which color range your target falls into, since many real-world colors sit between the named categories.

The Lightness slider in Hue/Saturation is worth avoiding. It affects the entire selected color range uniformly, which tends to look flat. If you want to darken or lighten a specific color, a Curves adjustment or a luminosity mask is more precise.

Luminosity Masks and the Limits of Global Adjustments

Global adjustments treat every pixel the same regardless of its position in the tonal range. A Curves adjustment you intend to affect only the highlights will also shift the midtones and shadows unless you constrain it.

Luminosity masks solve this by selecting pixels based on how light or dark they are. A highlights luminosity mask selects bright pixels at full strength and falls off smoothly into the midtones, with shadows nearly unselected. Run your Curves adjustment through that mask and it applies most strongly to the highlights and fades out toward the shadows, which is exactly what “highlight adjustment” should mean.
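At its simplest, a highlights luminosity mask is the pixel's own luminance used as mask opacity. A sketch of the mechanism, assuming Rec. 709 luma weights:

```python
def highlights_mask(r: float, g: float, b: float) -> float:
    """Mask weight for a 'Lights' mask: the brighter the pixel, the more selected."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def masked_adjust(value: float, adjusted: float, weight: float) -> float:
    """Blend an adjustment in, proportionally to the mask weight."""
    return (1 - weight) * value + weight * adjusted

# Brighten gray pixels by 0.2 through the mask: highlights move a lot,
# shadows barely move at all.
for v in (0.1, 0.5, 0.9):
    weight = highlights_mask(v, v, v)   # for a gray pixel, luma equals v
    result = masked_adjust(v, min(v + 0.2, 1.0), weight)
    print(v, "->", round(result, 3))
```

The adjustment fades out smoothly toward the shadows because the mask weight does, which is the "integrated rather than applied" quality the text describes.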

Building luminosity masks manually from Lab channels or using a panel that generates them automatically both work. The important thing is knowing they exist. They’re the mechanism behind the kind of tonal control that makes a grade look integrated rather than applied.

Presets Work Until They Don’t

Film emulation presets, VSCO packs, Lightroom presets imported into Camera Raw: all of these are legitimate starting points. They encode color decisions that real photographers and colorists made for specific film stocks, light conditions, and aesthetic targets. Using one as a starting point isn’t lazy. Starting and stopping there is.

The problem with applying a preset and calling it done is that it was designed for some other image. Its contrast assumptions fit a different exposure, its color shifts work best with a different white balance, its shadow treatment was designed for a different subject. Applying it wholesale produces results that vary from “looks pretty good” to “completely wrong for this image” depending on how closely your photo matches the preset’s target conditions.

Use presets to find a direction, then adjust from there. If the preset adds too much contrast, back off the Curves. If the color shift is right but too strong, reduce the opacity of the adjustment layer. If the shadows are too warm, adjust the blue channel. The preset got you to a starting point in ten seconds. The adjustment gets you to the result you actually wanted.

That’s the arc of most color grading work: establish the technical baseline, apply the creative intention, then refine until what you see matches what you were going for. The tools are precise enough to get there. The question is just whether you understand what each one does.

Read full article →