Behind All the Buzz: Deblur Sneak Peek
Are you one of the nearly 1 million people who have watched the “Adobe MAX 2011 Photoshop Image Deblurring Sneak” on YouTube? We’re blown away by the response to the recent on-stage demo by Jue Wang, senior research scientist at Adobe. He’s part of the Advanced Technology Labs (ATL) group, a brain trust of brilliant inventors who dream up new ways to make software smarter, faster and uber-futuristic.
http://tv.adobe.com/embed/816/11422/
Rainn Wilson (aka Dwight from The Office) was the emcee of the “sneaks” portion of the Adobe MAX conference in Los Angeles, CA – a place where developers, designers and digital artists meet to get inspired.
After watching blurry text restored to a clear, legible state, Rainn made a suggestion to the executive team sitting in the front row.
Rainn: “You should do this in the next Photoshop…People will really…seriously, I’m just a chump. People will really like this. People will love it.”
Adobe Technology Previews
Not many software companies can say their feature set was inspired – or rather, demanded – by an actor from The Office! We even heard the folks from NPR discuss it yesterday, relating the capability to what we often see in movies and TV shows where experts race to look at a blurry photograph and magically make the image crystal clear. Everyone is talking about this sneak that unblurs blurry photos and wondering when they’ll see it in the next version of Photoshop.
Adobe previews technology because we want to give you a peek at what we’re exploring for the future. We’re not just focused on the features we can deliver to you today – but we’re looking far beyond that – even taking the cool stuff you see in the movies and trying to make it real, useful and practical in your everyday life.
How the Deblur Effort Began
Deblur isn’t anything new for the Photoshop team. Jeff Chien, Principal Scientist, has been at Adobe for over 20 years; together with Todor Georgiev, Sr. Research Scientist II, he took on the challenge of Unsharp Mask in Photoshop and worked to improve on it with Smart Sharpen. He has been responsible for bringing marquee features like the Healing Brush, Match Color, Content-Aware Scaling and Content-Aware Fill to life, and now works hand in hand with Jue Wang and others from the ATL group to bring futuristic innovation into the product. He has always been fascinated by the idea of taking a blurry image and exposing more detail, as it is a very common and important problem to solve.
Jeff commented, “We added Smart Sharpen in CS2, but deblur technology wasn’t mature enough yet for Photoshop and it’s been nagging me ever since. Given the nature of the heavy computation needed, the technology is really dependent on the evolution of the hardware, which provides a more powerful CPU and GPU for us to leverage.”
Challenges with Deblur
However, there is still quite a bit of development left to do before this feature is ready for prime time. Although these early demos wow audiences, there is a lot more to blur than meets the eye. Deblurring is really a two-stage problem: some algorithms estimate where blur occurs in an image and what type it is – in effect, recovering the blur kernel, or point spread function (PSF) – and others then use that estimate to reconstruct the sharp image.
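To make the second stage concrete, here is a minimal sketch of non-blind deconvolution using the classic Richardson-Lucy algorithm. This is an illustration only – not the algorithm in Jue’s prototype – and it assumes a grayscale image stored as a NumPy float array in [0, 1] and a PSF that is already known:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iter=30):
    """Non-blind deconvolution: recover a sharper image from a blurry one,
    given an estimate of the blur kernel (point spread function).

    Illustrative sketch only; assumes `blurred` is a grayscale float image
    in [0, 1] and `psf` is a small 2-D kernel that sums to 1.
    """
    estimate = np.full_like(blurred, 0.5)   # flat initial guess
    psf_mirror = psf[::-1, ::-1]            # adjoint of convolution with psf
    for _ in range(num_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)  # guard divide-by-zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return np.clip(estimate, 0.0, 1.0)
```

The genuinely hard part – and the focus of the research – is the blind half of the problem: estimating the PSF from the blurry photo alone.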
The before image below has a blur caused by camera shake. The after image shows the type of magic that can occur when the right algorithm is applied using Jue’s new prototype.
http://blogs.adobe.com/photoshopdotcom/files/2011/10/Plaza.png
The tricky part is when an image has more than one kind of blur, which occurs in most images. Current deblur technology can’t solve for different blur types occurring in different parts of a single image, or on top of one another. For example, if you photograph a person running and also shake the camera when you press the shutter, the runner will be blurry because he is moving and the whole image might have some blur due to the camera shake. If an image has other issues like the noise you often get from camera phones, or if it was taken in low light, the algorithms might identify the wrong parts of an image as blurry, and thus add artifacts in the deblur process that actually make it look worse.
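As a toy illustration of why mixed blurs are so hard, the hypothetical sketch below builds exactly that runner-plus-camera-shake scenario: the whole frame is shaken, and a masked subject gets an extra motion blur on top. No single deconvolution kernel can undo both at once. (The hard-edged mask is a simplification; in a real photo the two blurs blend at the subject’s boundary.)

```python
import numpy as np
from scipy.signal import fftconvolve

def mixed_blur(sharp, subject_mask, shake_psf, motion_psf):
    """Toy model of an image containing two blur types at once: camera shake
    over the whole frame, plus extra motion blur on a moving subject.

    `subject_mask` is a boolean array marking the moving subject; the hard
    compositing edge is a simplification of real image formation.
    """
    shaken = fftconvolve(sharp, shake_psf, mode="same")     # whole frame
    subject = fftconvolve(shaken, motion_psf, mode="same")  # subject region
    return np.where(subject_mask, subject, shaken)
```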
Strong edges in an image help the technology estimate the type of blur. The image below shows the same algorithm run on an image that lacks strong edges; you can see that it fails in this case.
http://blogs.adobe.com/photoshopdotcom/files/2011/10/sarah-dog.png
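To illustrate the kind of cue involved, here is a crude, purely hypothetical check of whether an image even contains enough strong gradients to constrain a kernel estimate – real blur-estimation algorithms are far more sophisticated than this:

```python
import numpy as np
from scipy import ndimage

def has_strong_edges(image, threshold=0.1, min_fraction=0.01):
    """Crude, hypothetical heuristic: does this image have enough strong
    edges to give a blur-kernel estimator something to work with?

    `image` is a grayscale float array in [0, 1]; the threshold values are
    arbitrary and not drawn from any shipping algorithm.
    """
    gx = ndimage.sobel(image, axis=1)   # horizontal gradients
    gy = ndimage.sobel(image, axis=0)   # vertical gradients
    magnitude = np.hypot(gx, gy)
    return np.mean(magnitude > threshold) >= min_fraction
```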
Next Steps for Deblur Technology
This issue isn’t solved yet, which is why the project is still a prototype and not yet in a product. Special thanks to the really smart people who are dedicating a lot of time to this problem so they can eventually ship deblur to customers: Professor Seungyong Lee and his students Jin Cho and Sunghyun Cho, Jue Wang, Jeff Chien, Sarah Kong, Simon Chen, Steve Schiller, Gregg Wilensky, Scott Cohen and Sylvain Paris.
Currently, the most practical use case we have seen for this prototype is image forensics – when an investigator needs to deblur an image just enough to read some text, like a phone number or license plate, without trying to perfect the whole image. For example, you can see how well the prototype deblurs the text in the image below.
http://blogs.adobe.com/photoshopdotcom/files/2011/10/deblur-poster.png
Eventually, we will move beyond image forensics to solve more common issues – like a low-resolution image from your camera phone – and make it as crisp as the real-world moment you saw.
When our research team and scientists chase after solutions to complex problems, they’re always placing their bets on what they think will really amaze, surprise and impact our customers. For the team, your enthusiasm around this topic is validation that their hard work is paying off. The first step is to finish working out the complex math and technology behind the feature. After that, we will build a simple user experience around it so it fits seamlessly into a real-world workflow.
Put simply – our work has just begun…
UPDATE: For those who are curious, here is some additional background on the images used during the recent MAX demo of our “deblur” technology. The first two images we showed – the crowd scene and the image of the poster – were examples of motion blur from camera shake. The image of Kevin Lynch was synthetically blurred from a sharp image taken from the web. What do we mean by synthetic blur? We created it by extracting the camera-shake information from another real blurry image and applying it to the Kevin Lynch image, producing a realistic simulation. This kind of blur is created with our research tool, and because the camera-shake data is real, it is much more complicated than anything we can simulate using Photoshop’s blur capabilities. When this new image was loaded as a JPEG into the deblur plug-in, the software had no idea it was synthetically generated. This is common practice in research, and we used the Kevin example because we wanted it to be entertaining and relevant to the audience – Kevin being the star of the Adobe MAX conference!
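In code terms, making such a synthetic blur amounts to convolving a sharp image with a camera-shake kernel extracted from a real blurry photo, plus a little sensor noise. A minimal sketch, assuming the kernel has already been estimated (the actual extraction and blurring tool used for the demo is internal to Adobe):

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_camera_shake(sharp, shake_psf, noise_sigma=0.005):
    """Synthetically blur a sharp image with a camera-shake kernel (PSF)
    extracted from a real blurry photo, plus a little sensor noise.

    Sketch of the research practice described above, not Adobe's tool.
    Assumes `sharp` is a grayscale float image in [0, 1].
    """
    blurred = fftconvolve(sharp, shake_psf, mode="same")
    blurred += np.random.normal(0.0, noise_sigma, size=blurred.shape)
    return np.clip(blurred, 0.0, 1.0)
```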
For more information and examples on the common practice of synthetic blurring being used as part of research in this area, check out:
http://grail.cs.washington.edu/projects/mdf_deblurring/synth_results/index.html
http://www.cse.cuhk.edu.hk/~leojia/projects/robust_deblur/
http://www.wisdom.weizmann.ac.il/~levina/papers/deconvLevinEtalCVPR09.pdf