The typical approach to increasing an image's resolution is to start with the low-res version and use algorithms to predict and add pixels, artificially generating a high-res version. But because the low-res version can lack significant detail, fine features are often lost in the process; with faces in particular, the results tend to have an overly soft, smoothed-out appearance. A team of researchers from Duke University has developed an approach called PULSE (Photo Upsampling via Latent Space Exploration) that tackles the problem in an entirely different way, taking advantage of the startling progress made with machine learning in recent years.
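To see why conventional upsampling smooths away detail, here is a minimal sketch of bilinear interpolation, one of the standard techniques the paragraph above alludes to. Every new pixel is a weighted average of existing neighbors, so no detail that was absent from the low-res image can ever appear (the function name and test image are illustrative, not from any particular library):

```python
import numpy as np

def bilinear_upscale(img, factor):
    """Naive upscaling: each new pixel is interpolated from existing
    neighbors, so the result can only be a smoothed version of the input."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

# A 2x2 "image" with sharp contrast, upscaled 4x: the 8x8 result contains
# only smooth gradients between the original values, never a crisp edge.
low = np.array([[0.0, 1.0], [1.0, 0.0]])
high = bilinear_upscale(low, 4)
```

PULSE sidesteps this limitation entirely by never interpolating the low-res pixels at all, as the next paragraph describes.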
PULSE starts with a low-res image, but it doesn't process that image directly. Instead, it uses it as a target reference for an AI-based face generator that relies on a generative adversarial network to create realistic headshots. We've seen these tools before in videos where thousands of non-existent but lifelike headshots are generated; in this case, after the faces are created, they're downscaled to the resolution of the original low-res reference and compared against it, looking for a match. It sounds like an entirely random process that could take decades to find a high-res face that matches the original sample when shrunk, but PULSE is able to quickly find a close candidate and then gradually tweak and adjust it until its downscaled version matches the original low-res sample.
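The generate-downscale-compare-tweak loop described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: PULSE uses a StyleGAN generator and gradient-based optimization over its latent space, whereas here a fixed linear map stands in for the generator and simple hill climbing stands in for the "gradually tweak and adjust" step. All names and sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a GAN generator: maps a 16-d latent vector
# to an 8x8 "image". PULSE itself uses StyleGAN here.
W = rng.normal(size=(64, 16))

def generate(z):
    return np.tanh(W @ z).reshape(8, 8)

def downscale(img, factor=4):
    """Average-pool the high-res candidate down to the reference resolution."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# The low-res reference: here we manufacture one from a hidden latent,
# so we know a matching high-res face exists in the generator's range.
target_lr = downscale(generate(rng.normal(size=16)))

z = rng.normal(size=16)  # start from a random point in latent space
loss = np.mean((downscale(generate(z)) - target_lr) ** 2)
init_loss = loss
for _ in range(5000):
    # Tweak the latent slightly; keep the tweak only if the downscaled
    # candidate moves closer to the low-res reference.
    candidate = z + 0.1 * rng.normal(size=16)
    c_loss = np.mean((downscale(generate(candidate)) - target_lr) ** 2)
    if c_loss < loss:
        z, loss = candidate, c_loss

# A plausible high-res image whose downscaled version matches the reference.
high_res = generate(z)
```

The key design point the sketch preserves is that every candidate is a fully realistic output of the generator; the low-res input only steers the search, so the result is sharp by construction rather than an interpolation of blurry pixels.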