From the ACR Team: Merge to Panorama

Your next installment of the “From the ACR Team” series is all about the Merge to Panorama feature in Adobe Camera Raw (ACR) and Adobe Photoshop Lightroom, both Lightroom (for Windows and Mac) and Lightroom Classic. In addition to helping you get the most from Merge to Panorama, we are excited to share more details about the new Fill Edges feature released in November 2019, which helps you maximize the view in your merged panoramas.

I’m Josh Bury, a Senior Computer Scientist on the ACR team and lead engineer for the Merge to HDR and Panorama features. I’ve been with the ACR team and Adobe for 7 years now. Some of my past projects included brush support for Graduated and Radial Filters, and Color and Luminance Range masking.

I will be referencing Merge in ACR, but the merge features in Lightroom Desktop and Classic are built on top of the ACR merge “engine”, so everything here applies to all three products. There is a lot to cover with Merge to Panorama, so let’s get started!

Going beyond the limitations of our cameras

Modern digital cameras are a technological marvel. They have significantly pushed the boundary of what is possible to capture in a photograph and have, for the most part, made photographic experimentation effectively free. As photographers, we are now most often limited by our own creativity, imagination, and willingness to “get out there and shoot”. That said, there are still some cases where we can claim that our gear is keeping us from achieving our creative vision.

Combining multiple captures from our digital cameras to form new images is a powerful way to overcome several limitations that still exist in typical imaging setups. That’s what the Merge to Panorama tool is all about: overcoming the limitations in optical field of view and digital sensor resolution so that you can make photos that are otherwise not possible with your camera alone. (Merge to HDR is about overcoming a different set of limitations, but that discussion will have to wait for another time.)

Earlier this fall I took a trip to Alaska with my son. Part of this trip was spent on the road between Anchorage and Fairbanks. On both days we were on the road, we were blessed with great weather and enjoyed incredible views. This was my first time in Alaska, and I just couldn’t get enough of the towering snowy peaks that were seemingly in all directions. I wanted to capture them in a way that showed how numerous they were, but also made them the main subject and conveyed their impressive size. I had a 200-500mm lens that I rented for wildlife, but it also worked perfectly for capturing close-up panoramas of the Alaska Range.

For this shot I merged 8 images, each at 300mm focal length, and ended up with a 250 megapixel panorama that has an incredible amount of detail. After merging, I decided to edit this as black and white and switched to the Adobe Monochrome raw profile. Wanting a little more contrast between the mountains and the sky, I changed to the B&W Red Filter profile (like many of our profiles, this one lets you tweak the strength of the profile effect with the Amount slider; the default value worked great for me here). After that I tuned the result with an increase in Shadows and Whites, a touch of Texture, and a dash of Dehaze.

You may have noticed something a little different about my panorama workflow in ACR compared to what you may have done with other stitching tools. I did all my editing, including choosing a raw camera profile, after stitching the panorama. The Merge features in ACR work their magic at a very early stage in our raw processing pipeline. This means two things: 1. The image created by the merge should be treated just like any other raw file as far as editing is concerned, and 2. You can (and should) save your editing for the merged result. As a photographer, I love this order of operations because I much prefer making my edit decisions while viewing the final panorama, and giving up the flexibility of a raw file is not something I want to do until I have to. Some of the edits you may have made on the original images are copied to the result, but only a couple are actually “baked in” and not editable after the merge.

Merge to Panorama

Have you ever arrived at a scene, aimed your camera, and realized that it’s just too big to capture? Or, how about capturing a scene that you want to make a large print out of, but the resolution of your camera is not high enough? Merge to Panorama gives you a way to overcome both of these limitations, while still giving you a raw photo for maximum editing flexibility. (Technically, it’s not a raw photo since it didn’t come out of the camera this way, plus there are some edits applied during the merge process, but for most practical purposes, it is a raw photo.)

Merge to Panorama takes a set of images with overlapping views of a scene and stitches them together to create a single image with the full view. The most common use of this is to capture a scene that is wider than the field of view of the lens you have on hand. Merge to Panorama is also a great way to create a photo of a scene that is much higher resolution than what your camera is capable of in a single image. This gives you a way to participate in the megapixel wars without upgrading to the latest and greatest cameras. Kidding aside, using a longer lens to capture a scene in pieces can easily give you the resolution you need, even if your camera’s resolution is relatively low.
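
If you’re curious how much resolution a stitch will buy you, a quick back-of-envelope estimate is easy to do before you shoot. The sketch below assumes a single-row pano; the sensor resolution and overlap figures are hypothetical, and the real number depends on the projection and final crop:

```python
def pano_megapixels(frames: int, sensor_mp: float, overlap: float) -> float:
    """Rough single-row estimate: the first frame counts fully, and each
    additional frame adds only its non-overlapping portion."""
    return sensor_mp * (1 + (frames - 1) * (1 - overlap))

# Hypothetical numbers: 8 portrait frames from a ~46 MP body, ~35% overlap.
print(f"{pano_megapixels(8, 45.7, 0.35):.0f} MP")  # ~254 MP before cropping
```

That lands in the same ballpark as the roughly 250 megapixel Alaska pano from the introduction.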

Panorama shooting best practices

Here’s what you need to know in the field and when merging and editing to get the most out of Merge to Panorama.

Use a (level) tripod

One of the most fundamental steps of a panorama stitcher is figuring out how to arrange multiple images of a scene to reconstruct a wider view. This is done by so-called computer vision routines that analyze the images to identify salient features, then compare those features between images to find corresponding pairs that can be used to construct a model of how the camera saw the scene. This includes things like which direction, in 3D space, the camera was facing and the precise focal length for each image.
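
To make that a bit more concrete, here is a minimal sketch of feature-based alignment between two overlapping frames using OpenCV in Python. This is only an illustration of the general technique, not the ACR merge engine, and the file names are placeholders:

```python
import cv2
import numpy as np

# Illustration only: load two neighboring pano frames (placeholder file names).
img1 = cv2.imread("pano_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("pano_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect salient features and describe them so they can be compared.
orb = cv2.ORB_create(nfeatures=4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Find corresponding pairs of features between the two images.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC keeps only the geometrically consistent pairs; for images taken from
# a single position, the fitted transform encodes the relative camera rotation
# and focal length between the two views.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inlier_mask.sum())} consistent pairs out of {len(matches)} matches")
```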

One important detail of this alignment model that will affect how you capture the images in the field is that it assumes the camera was in the same real-world position for each of the images in the set. This means that, as much as possible, all of the images that will be merged to a panorama should be taken from the same fixed position. This matters the most when parts of the scene are relatively close to the camera (inside a building, for example, especially if you can see a more distant scene out a window). Depending on the lens you are using and the scene you are shooting, it can be helpful to use a special panorama bracket on your tripod, usually referred to as a nodal slide rail. These allow you to offset your camera so that the center of rotation when aiming is at the optical center of your camera-lens system, known as the lens’s nodal point. This prevents foreground elements from shifting relative to background elements when panning across the scene (the technical term here is parallax, and it’s something you want to avoid when capturing images for pano stitching).

In the illustration above, you can see that the camera’s axes are aligned around the lens’s theoretical nodal point (each camera-lens system’s nodal point is different, and while it’s great to look up and adjust for the nodal point of your camera-lens system, it’s really only super important when your subject is close to your camera). The sphere shows how the camera would rotate, vertically and/or horizontally, while capturing the frames for the panorama. That is, the goal is to keep the center of the camera-lens system steady and rotate around the nodal point, instead of creating a sweeping motion with the camera. By following this approach, you’ll improve the quality of the result by minimizing parallax and the resulting alignment mismatches.
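
To see why parallax matters most for close subjects, here is a back-of-envelope sketch. The offset and distances are made up for illustration:

```python
import math

offset_m = 0.10        # hypothetical gap between rotation axis and no-parallax point
rotation_deg = 30.0    # pan between two adjacent frames
near_m, far_m = 2.0, 50.0

# Rotating by theta about an offset axis moves the lens sideways by about
# 2 * offset * sin(theta / 2).
translation = 2 * offset_m * math.sin(math.radians(rotation_deg) / 2)

# A sideways move of d makes a point at distance D appear to shift by roughly
# d / D radians, so the near object shifts against the far background by:
shift_rad = translation * (1 / near_m - 1 / far_m)
print(f"apparent shift: {math.degrees(shift_rad):.2f} degrees")
# Roughly 1.4 degrees here, easily enough to cause visible alignment mismatches.
# With everything far away (near_m large), the same setup is harmless.
```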

Shooting from a firm tripod is great, but it also pays to take a few moments to level it, especially if the panorama you are creating has a wide horizontal sweep. You want to avoid scenarios where the resulting panorama is somewhat diagonal rather than horizontal (for example), as this can make it difficult to crop out the unwanted edges without removing important parts of the scene. If I’m shooting a horizontal panorama handheld, I try to take note of where the horizon intersects the view in my camera and keep it there as I pan across the scene.

Here’s an example of me not following my own advice, showing what happens when your sweep is not level. This is the result of stitching the Alaskan peaks panorama I shared above. Without taking advantage of Boundary Warp and Fill Edges, I would not have been able to get the final image that I wanted from this set (more on these tools below).

Speaking of shooting handheld, all of this is not to say that you shouldn’t even attempt panoramas if you don’t have a tripod. For most scenes, you can still get great results shooting handheld. Just try to stay level and capture all of the images for your pano from the same spot, rotating your camera as described above.

Framing the images

When capturing panoramas, it’s usually worth it to do a “dry run”: go through the motions of moving across the scene with your camera to make sure you can get it all in. Check for trees or other objects that may be in the way; if there are any obstructions that you don’t want in the final panorama, it’s best to change your position before starting the capture (it goes without saying, though: always make sure you are in a position that is safe for you and your camera). Performing a dry run of the pano capture can also identify positions that may be mechanically difficult or impossible to achieve. Maybe part of your tripod prevents the ball head from tilting how it needs to for part of the pano, or if you’re using a cable release, the cable might get in the way after you rotate past a certain position. If that’s the case, see if your tripod can be adjusted or repositioned to work around the problem.

I find it helpful to orient the camera opposite of what the primary orientation of the panorama will be. For example, when shooting for a horizontal panorama I will rotate the camera to portrait (vertical) orientation. This gives me more vertical field of view in the result without needing to shoot multiple rows of images.
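
Here’s the rough angle-of-view math behind that choice, using a hypothetical full-frame (36 x 24 mm) sensor and a 35mm lens:

```python
import math

def fov_deg(sensor_mm: float, focal_mm: float) -> float:
    """Angle of view for one sensor dimension (thin-lens approximation)."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

# Hypothetical setup: full-frame sensor (36 x 24 mm) with a 35mm lens.
long_fov, short_fov = fov_deg(36, 35), fov_deg(24, 35)
print(f"landscape: {long_fov:.1f} x {short_fov:.1f} degrees (wide x tall)")
print(f"portrait:  {short_fov:.1f} x {long_fov:.1f} degrees (wide x tall)")
# Turning the camera to portrait gives roughly 54 degrees of vertical coverage
# instead of 38, so a single row of frames captures a taller slice of the scene.
```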

Shoot raw

The big advantage of shooting raw for merging to panorama (and HDR) is that it allows ACR and Lightroom to work with what is known as scene-referred image data. This means that the pixel values can still be mapped to the amount of light coming from each part of the scene. Put another way, the pixel values are said to be linear, which means that pixels with twice the value of other pixels captured twice as much light from the scene (pixel values are proportional to the number of photons they collected). (Important technical detail: this relationship only holds for pixels that are not too close to the maximum value that they can hold.)
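
If you like seeing this in code, here is a small sketch contrasting a scene-referred (linear) value with its sRGB-encoded counterpart. The numbers are illustrative, and a real camera JPEG also has its own proprietary tone curve on top of the basic encoding:

```python
def srgb_to_linear(v: float) -> float:
    """Invert the standard sRGB transfer curve (v in 0..1)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v: float) -> float:
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Scene-referred (linear) data: adding one stop of light is a simple multiply.
linear = 0.18                      # an illustrative mid-grey-ish scene value
print(linear * 2)                  # 0.36, still proportional to captured light

# Output-referred data is gamma encoded, so the same one-stop change has to
# round-trip through the transfer curve; doubling the encoded value directly
# would give the wrong answer.
encoded = linear_to_srgb(linear)
brightened = linear_to_srgb(min(srgb_to_linear(encoded) * 2, 1.0))
print(round(encoded, 3), round(brightened, 3))   # 0.461 -> 0.634, not 0.922
```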

When you capture images straight to JPEG, your camera renders the raw pixel data into an image that is ready to be displayed. This is a nice shortcut when you don’t need maximum flexibility (or you are limited in storage space or time), but it means that the image you are working with has already been edited (by your camera) and is no longer scene-referred (it can no longer be mapped directly to the amount of light in each part of the scene). Images like this are what we call output-referred because they are ready for display or print. ACR can partially undo the tonal changes applied by the camera and get the image close to a scene-referred state, but it’s never as good as working with a scene-referred image from the start.

Camera settings

When capturing images for merging to a panorama, it’s best to approach camera settings for the individual images as though they are part of one big photo (because they are). You probably wouldn’t want the focus of an image to be different on one side of a photo vs another, and the same is true for panoramas. I typically aim the camera at the main subject of the scene to focus, then switch to manual focus so that focus stays the same for all of the images I’m capturing for that pano.

I usually do the same for exposure settings. If the scene does not require merging to HDR, I switch to manual mode and set the exposure according to the important highlights in the scene; as usual, expose for the highlights, process for the shadows. If you forget to set the exposure manually, varying exposure from image to image is usually not a problem when merging a panorama: prior to stitching, ACR will apply some exposure compensation to your images if their exposures differ (this works best if the images are raw).
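
The exact compensation ACR applies is beyond the scope of this article, but the basic idea is easy to sketch once the data is linear: estimate a gain from the region where two frames overlap and scale one frame to match the other. Everything below (the synthetic data and the median-based estimate) is just an illustrative assumption, not ACR’s actual method:

```python
import numpy as np

def exposure_gain(ref_overlap: np.ndarray, other_overlap: np.ndarray) -> float:
    """Gain that scales `other` to match the reference, estimated from the
    shared overlap region of two linear (scene-referred) frames."""
    return float(np.median(ref_overlap) / np.median(other_overlap))

# Synthetic overlap region; the second frame was shot one stop darker.
rng = np.random.default_rng(0)
overlap = rng.uniform(0.05, 0.8, size=(200, 200))
print(exposure_gain(overlap, overlap * 0.5))   # 2.0, i.e. one stop of compensation
```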

HDR panoramas

Capturing a full-moon-lit panorama of Kilimanjaro at sunrise proved impossible without capturing a series of bracketed exposures and combining them into an HDR panorama.

For very high contrast scenes, you can combine the benefits of merging to HDR and panorama by merging your images to an HDR panorama. There are two ways to go about this, and they both require the same order of operations when capturing the source images in the field. Think “exposure bracket, then pan”. In other words, you want to capture all of the exposures needed for HDR merge before panning your camera to the next position for the panorama rather than the other way around.

In ACR or Lightroom, you can manually merge each exposure bracket to an HDR, then select those HDR images and merge them to a panorama to get an HDR panorama. Or, if each exposure bracket has the same number of images and the same relative exposure offsets (usually the case if you are using your camera’s auto bracketing feature), you can select them all and merge to an HDR panorama in a single step by choosing “Merge to HDR Panorama”. If ACR is unable to detect the bracket size, you’ll have to go the manual route and merge the exposure brackets to HDR individually first. Either way, you will end up with an HDR panorama ready for further editing. When you use the single-step “Merge to HDR Panorama” path, the HDR images are merged with alignment enabled and deghost turned off.
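
To make the capture order concrete, here is a tiny sketch of how a “bracket, then pan” sequence groups into per-position brackets before the two-step merge. The file names and the three-shot bracket size are assumptions for illustration:

```python
bracket_size = 3                                        # e.g. a -2 / 0 / +2 EV bracket
captures = [f"IMG_{i:04d}.dng" for i in range(1, 10)]   # 3 positions x 3 exposures (placeholders)

# Consecutive captures belong to the same camera position ("bracket, then pan").
brackets = [captures[i:i + bracket_size]
            for i in range(0, len(captures), bracket_size)]

# Manual two-step route: merge each bracket to HDR first, then merge the HDR
# results to a panorama. "Merge to HDR Panorama" does both in one pass.
for position, bracket in enumerate(brackets, start=1):
    print(f"position {position}: merge to HDR -> {bracket}")
```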

Merge to Panorama Options

In the Merge to Panorama preview dialog there are a number of options to choose from, but you will quickly get the hang of what they do. Feel free to experiment; that’s why we have the merge preview window, after all.

Projection

I like to start at the top by choosing a projection for my panorama. Recall that the alignment model for stitching the images has the camera shooting the images from a single location. You can think of the aligned images as being mapped to the surface of a sphere that is centered on the camera. The projection step, then, is the process of mapping the stitched pano from the surface of that sphere to a flat plane. This is the same problem that mapmakers face when representing the spherical surface of the earth on a flat map, and just as with maps, each projection makes different trade-offs.

Spherical is a great first stop and it is our default option. You can think of this projection as mapping the pano to a sphere centered on the camera, cutting it from pole-to-pole along a longitude line, and laying it out on a flat surface. This option can handle panoramas that are fairly wide in both horizontal and vertical directions better than Cylindrical and Perspective, but it can result in distortion that adds curvature to parts of the scene that are normally straight (like horizon lines and tall buildings).

If preserving straight lines is important to your composition, you will want to try out Perspective (architectural photography is a good example of this). You can visualize this projection by imagining the panorama projecting outward from the camera onto a virtual plane (the same as a movie projector projecting onto a screen). While this is great for preserving straight lines, it is limited to panoramas that don’t have too wide a field of view. Imagine “zooming out” the lens on a movie projector to increase its field of view. As you zoom out, the image on the screen quickly becomes very large. In fact, if you could increase the projector’s field of view to 180 degrees, you would need an infinitely large screen to receive the entire image. So, while Spherical and Cylindrical have no problem with wide panoramas (even those that have a field of view greater than 180 degrees), not all panoramas will work with Perspective.

Cylindrical projection is a hybrid of Spherical and Perspective. Horizontally it’s the same as Spherical, but in the vertical direction, it projects like Perspective. This makes sense if you think about the shape of a cylinder: it’s curved in one direction and straight in the other. Cylindrical is great for wide panoramas with straight vertical structures. Like with Perspective though, you can’t go too wide in the vertical direction.
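
For the mathematically curious, here is a small sketch of the geometry behind these three options. It maps a viewing direction, expressed as longitude and latitude angles from the center of the pano, onto flat image coordinates. This is a simplified textbook model, not Adobe’s implementation:

```python
import math

def project(lon: float, lat: float, f: float, mode: str):
    """Map a viewing direction (radians from the pano center) to plane coordinates."""
    if mode == "spherical":      # equirectangular: both angles map directly
        return f * lon, f * lat
    if mode == "cylindrical":    # curved horizontally, perspective vertically
        return f * lon, f * math.tan(lat)
    if mode == "perspective":    # rectilinear: straight lines stay straight
        return f * math.tan(lon), f * math.tan(lat) / math.cos(lon)
    raise ValueError(mode)

# tan() blows up as the angle approaches 90 degrees, which is why Perspective
# cannot handle very wide panoramas while Spherical and Cylindrical can.
for deg in (30, 60, 85, 89):
    x, _ = project(math.radians(deg), 0.0, f=1.0, mode="perspective")
    print(f"{deg} degrees off-center lands {x:.1f} focal lengths from the middle")
```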

Most of the panoramas I shoot are of landscapes. Depending on the lens I have mounted, I will either use Spherical or Perspective projection when stitching the panorama. Generally speaking, I use Spherical for panos captured with a wide lens and Perspective for panos that I shoot at longer focal lengths (where I’m usually going for a higher-resolution view of the scene).

Note that the above projection descriptions are geared towards projecting panoramas that are primarily horizontal. If you are shooting a panorama that is primarily vertical, the Spherical and Cylindrical projections will change their orientation to match.

Boundary Warp and Fill Edges

Photographs are almost always rectangles, but the edges of panoramas are hardly ever that simple and cropping them sometimes results in important parts of the scene getting removed. Just below the Projection options, you will find Boundary Warp and Fill Edges. These controls reduce the amount of cropping necessary and make it possible to keep more of the scene in the final result.

This hand-held pano wasn’t kept level and needs to either be cropped, warped, or filled.

Boundary Warp works by stretching the jagged edges of your panorama to fill the rectangular area that bounds it. This is presented as a slider because sometimes you only need a little bit to keep important parts of the scene inside the final crop. Note that if you have Auto Crop turned on, you will not see the uneven edges, only the rectangular area that they bound; as you increase Boundary Warp, you can watch more of the scene being included in that crop.

The same pano, with Boundary Warp applied. Note how the left peak is nearly cut off.

Because Boundary Warp stretches the image to do its job, it can sometimes introduce unwanted distortion. That’s where the Fill Edges option comes in. Fill Edges uses content from the image to fill in the transparent edges; it shares its underlying technology with Content-Aware Fill in Photoshop. Fill Edges works best on skies and natural textures. Even when it’s not perfect, you will almost always be able to use a more generous crop on your final pano.

By mixing Boundary Warp and Fill Edges, a balance can be struck that keeps the details and objects throughout the scene.

Boundary Warp and Fill Edges can be used in isolation, but they work great together. I typically start with Boundary Warp, gradually increasing it as much as I can for the scene. If I don’t want to go all the way to 100, I check Fill Edges to take care of the rest. Using at least a little Boundary Warp, especially when there are large areas outside of the pano, makes it easier for Fill Edges to produce a natural result (the larger the area to fill, the harder it is for Fill Edges to reconstruct a natural looking scene).

Auto Crop and Auto Settings

These are fairly self-explanatory and are both “non-destructive” in the sense that they can be changed after leaving the merge preview window while editing. Auto Crop applies the largest crop that will fit within the edges of the panorama. Auto Settings computes a good starting point for exposure, contrast, saturation, and other basic edits (the same thing as the Auto button in the Edit panel).
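
For the curious, “the largest crop that fits” is a classic find-a-rectangle-in-a-mask problem. Here is one common way to compute it (a maximal-rectangle sweep over a validity mask); I’m not claiming this is ACR’s implementation, only illustrating the idea:

```python
import numpy as np

def auto_crop(valid: np.ndarray):
    """Largest axis-aligned rectangle of True cells in a validity mask.
    Returns (top, left, height, width). Standard maximal-rectangle sweep:
    treat each row as the base of a histogram of consecutive valid pixels
    and find the biggest rectangle under that histogram with a stack.
    Not ACR's code, just an illustration of the idea."""
    rows, cols = valid.shape
    heights = np.zeros(cols, dtype=int)
    best, best_area = (0, 0, 0, 0), 0
    for r in range(rows):
        heights = np.where(valid[r], heights + 1, 0)   # extend runs of valid rows
        stack = []                                     # (start_col, height)
        for c, h in enumerate(np.append(heights, 0)):  # trailing 0 flushes the stack
            start = c
            while stack and stack[-1][1] >= h:
                start, sh = stack.pop()
                area = sh * (c - start)
                if area > best_area:
                    best_area = area
                    best = (r - int(sh) + 1, start, int(sh), c - start)
            stack.append((start, h))
    return best

# Example: feed it the stitched pano's coverage mask (True where real pixels exist).
mask = np.ones((400, 900), dtype=bool)
mask[:60, :300] = False                # a jagged, unfilled corner
print(auto_crop(mask))                 # (60, 0, 340, 900)
```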

Examples

I want to share a few examples of how I use these tools in my own photography. Hopefully this will help make the descriptions above a little less abstract.

Example 1: Alaskan Peaks

You’ve already seen the problem I created for myself on this one. To fix it, I used a combination of Boundary Warp and Fill Edges. I started by adjusting Boundary Warp and ended up setting it to 28; beyond that, the snow line across the bottom was too far off level for my taste. Here’s what the merge looks like with just Boundary Warp applied:

I then checked Fill Edges to do the rest so I would have the maximum amount of flexibility when cropping:

After that, I cropped it in a little tighter and applied the basic adjustments described previously.

Example 2: Golden Gate Sunrise

This is a shot of the Golden Gate Bridge just after sunrise. For this pano, I wanted all of the bridge cables to be straight, parallel, and vertical, so I chose Perspective projection. Here’s what the initial preview looked like:

The horizontal field of view for this one was almost too wide for Perspective projection, which is why the pano is narrower in the center. Cylindrical worked fine on this one too, but I preferred how the bridge structures looked with Perspective. This image is a good example of when you may not want to reach for Boundary Warp to fill in the edges. As soon as you start to apply Boundary Warp to this image, the bridge cables start to curve and are no longer parallel. Here’s what that looks like (Boundary Warp is set to 100 here to make the effect more obvious):

Fill Edges came to the rescue on this one. Here’s what it looks like with Boundary Warp set to 0 and Fill Edges turned on:

After merging, I used Guided Upright to correct for perspective keystoning in the bridge cables and make sure they were vertical. Then I tightened up the crop (removing the unintended silhouette on the left) and applied some Exposure, Highlights, and Shadows tweaks. To bring out the vibrant color I remember from that morning, I switched to the Adobe Landscape raw camera profile and bumped up global Saturation a little. The blues weren’t quite keeping up with the oranges at that point, so I used the HSL tool to increase saturation in the blues a little. Here’s the final result:

Example 3: Moody St. Helens

Here’s another example where Fill Edges gave me more to work with after merging. Here’s what the pano looked like before activating Fill Edges:

And here’s my final image:

Conclusion

We believe our job on the photography team at Adobe is to provide the tools photographers need to capture and craft images exactly as they’re envisioned. We added the ability to create panoramas in Camera Raw and Lightroom to expand the range of photographic capabilities in your toolset, enabling you to capture scenes that are larger than your lens or camera can handle on their own. I hope this article was enlightening and either gave you some newfound inspiration to go out and try capturing some panoramas, or at least some additional insights to help power up your panorama skills. In a future article, I’ll go into more depth on how the HDR merge tools work. Until then, happy shooting!