Image recompression is a concept as old as image encoders themselves. Put simply, an input image that was once processed by a lossy compression algorithm is processed again by a lossy algorithm. Image recompression is a useful tool in web performance because it reduces file size. That said, it can only be truly useful if we can also limit its detrimental effect on image quality.
Understandably, image recompression is a topic that has attracted its fair share of attention, but this time we're going to address it from a different angle and explain how you can recompress JPEGs to lossy WebP with a median savings of 17%, while also limiting quality loss.
Limiting quality loss in lossy recompression
Generation loss is what we want to limit when we recompress images. If you've ever copied a copy of a document, you've seen generation loss in action. You also know that it worsens the more the cycle is repeated. Similarly, every time you re-encode a lossy image using a lossy encoding algorithm, you lose a bit more quality. Repeat this process enough, and you'll cross a line where the image's quality becomes unacceptable.
Given the risk of generation loss, it's tempting to ask why we'd bother at all with image recompression, but a persuasive argument is that, as developers and designers, we don't always get to work with lossless sources. Sometimes you have to make do with what you've got. While you could eyeball every image you export from your favorite imaging software, that approach doesn't scale. It's feasible for a handful of images, sure, but effectively impossible when it comes to large image libraries. The solution here is automation, and such automation requires an image similarity scoring algorithm to ensure quality degradation is kept in check without having to personally examine every single image.
For those who have never heard of image similarity scoring, it's when two images are compared to each other, and a score is given based on how much they differ. Methods range from the relatively simplistic PSNR metric to more advanced algorithms such as SSIM and DSSIM.
When we pair an image similarity scoring algorithm with a program that can analyze that algorithm's output, we have a solution for automating lossy image recompression while limiting quality loss. We already have solutions that do this for JPEGs. For example, jpeg-recompress uses SSIM (and other methods) to figure out which quality setting is best for recompressing an input JPEG.
For WebP images, though, there's nothing so straightforward. Kraken.io's API offers a way to recompress JPEGs to WebP, and Cloudinary's q_auto setting does this as well. Yet, maybe you can't afford one of these solutions at a level which meets your business needs. Or maybe you don't want to be dependent on outside vendors. That's why I've been kicking around a Node.js program that does what jpeg-recompress does, but for WebP.
Say hello to webp-recompress
webp-recompress—not so subtly named after its inspiration—is a tool I've been tinkering with that takes an input JPEG and uses SSIMULACRA, an image similarity scoring algorithm created by Jon Sneyers at Cloudinary. I won't get into why I chose SSIMULACRA over another solution, especially since Jon himself explains how it works so well.
If you're comfortable in the terminal, using SSIMULACRA should be similar to a lot of other command-line utilities you've used. The program takes a source image as its first argument and a derivative image as its second, like so:
ssimulacra original.jpg original-q75.jpg
Given this command, SSIMULACRA will provide a score between 0 and 1, where 0 means the image is identical to its source and 1 means the image is completely different. According to SSIMULACRA's help text, images that score above 0.1 are likely to contain distortions which are “perceptible/annoying”. Using SSIMULACRA, webp-recompress adopts the following strategy:
- It verifies that the input is a JPEG.
- If the input is a JPEG, webp-recompress estimates its quality setting.
- webp-recompress then iterates, encoding lossy WebP images at various quality settings. It stops when it finds a WebP candidate that is both smaller than the source and within a certain SSIMULACRA scoring threshold.
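The search in that last step can be sketched as a binary search over the WebP quality setting. This is an illustration of the general technique, not webp-recompress's actual code: encode and score are stand-ins for a WebP encoder and a SSIMULACRA comparison, and all the names and parameters here are my own.

```javascript
// Hypothetical quality search: find a WebP candidate that is smaller
// than the source JPEG and within the SSIMULACRA score threshold.
function findCandidate({ jpegSize, encode, score, threshold, start = 75 }) {
  let floor = 0;
  let ceil = 100;
  let quality = start;
  while (ceil - floor > 1) {
    const candidate = encode(quality); // stand-in: returns { size } of a WebP
    const s = score(candidate);        // stand-in: SSIMULACRA score vs. source
    if (s <= threshold && candidate.size < jpegSize) {
      return { quality, candidate };   // smaller and visually close enough
    }
    if (s > threshold) {
      floor = quality;                 // too lossy: raise quality
    } else {
      ceil = quality;                  // too big: lower quality
    }
    quality = Math.round((floor + ceil) / 2);
  }
  return null;                         // no acceptable candidate found
}
```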
This is just an overview of what the program does, so if you're interested in knowing what's going on under the hood in detail, I recommend reading the webp-recompress documentation and maybe even the project's source code.
So, now that we basically know how webp-recompress works, how can we measure its effectiveness?
Putting WebP recompression to the test
In order to accurately measure how well webp-recompress works, we need an image corpus: a large set of images we can process so that the output can be compared against the originals. A large dataset means we can account for the effectiveness of our recompression strategy across a variety of image subjects.
To examine how well webp-recompress performs, I gathered around 20,000 JPEGs from various websites. Most of these images are product and food photography, with a healthy chunk of stock photography, logos/line art, and miscellaneous subjects. If nothing else, this corpus would give us an idea of how well recompressing JPEGs to lossy WebP works for typical web imagery, regardless of whether that imagery might be better suited to other formats. So, let's see how webp-recompress did.
File size reduction
Across the entire image corpus, webp-recompress was able to reduce file sizes by about 25.5%—or 41.6 KB—on average. While averages are easier to calculate, they don't convey the entire story. Across a large dataset such as our image corpus, percentiles provide a more nuanced picture of how well recompression works at various points. Below is a table that shows the effectiveness of webp-recompress at reducing file sizes, with percentiles calculated based on the output size of WebP images:
| JPEG (KiB) | WebP (KiB) | Difference |
| ---------- | ---------- | ---------- |
When I look at percentiles for any dataset, I tend to focus on five in particular: the 10th, 25th, 50th, 75th, and 95th. In this situation, these data points don't just provide a broad sense of how well JPEG-to-lossy WebP recompression works in typical scenarios, but also how well it works in scenarios that are atypical, yet aren't exceedingly rare. Across all of the percentiles mentioned, we see anywhere from a 15% to 30% decrease in file size. The median reduction in file size is just shy of 17%.
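If you want to produce figures like these from your own corpus, a nearest-rank percentile takes only a few lines. This is my own illustration; the table above may have been computed with a different interpolation method.

```javascript
// Nearest-rank percentile: sort a copy of the data, then index into it.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```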
File size reduction is just one half of effective lossy image recompression. Because webp-recompress uses SSIMULACRA to try to limit quality loss, we might expect that it would do a better job than if we merely converted the entire image corpus to WebP using a single quality setting. That's because with SSIMULACRA, or any image similarity scoring algorithm, we can limit quality loss in a way that's more informed than mere guesswork. Below is a table of SSIMULACRA scores which measure quality loss across the entire image corpus after it was converted to WebP using webp-recompress:
When you remember that a score below 0.01 means that artifacts are likely to be imperceptible, this table should be very encouraging. Even so, I've personally observed that quality scores below 0.05 can be acceptable, though that depends heavily on the image's content. Images with many intricate details, such as a forest with a dense canopy, may mask compression artifacts, making them less perceptible.
Given that every percentile except for the 100th is well beneath that threshold, webp-recompress seems to do a good job of recompressing images without obnoxious quality loss. Below are a few select comparisons of input JPEGs next to output WebP images recompressed by webp-recompress, starting with an image that ranked at the 25th percentile of SSIMULACRA scores:
Now for the image that ranked at the 50th percentile of SSIMULACRA scores:
And finally, the image that ranked at the 75th percentile:
These are reasonably acceptable outcomes. Most of the time, it seems like webp-recompress does anywhere from a decent to a good job of mitigating quality loss while delivering smaller file sizes. Yet, as always, there are exceptions.
Outliers and caveats
Every dataset has outliers. A handful of images in the corpus didn't respond well to WebP recompression. This may simply be a limitation of the lossy WebP format: some images can't be re-encoded from lossy sources in a way that both reduces file size and effectively mitigates quality loss. In particular, I've found that images with well-defined geometric patterns tend to suffer significant quality loss when processed by webp-recompress:
It could be argued that SVG or a lossless format would be a much better fit for this type of imagery—and that argument would be correct—but it's also possible that some content authors might not know this. Ideally, content management systems should guide content authors toward making these kinds of decisions, but that's a topic wholly outside the scope of this article.
Additionally, I make no claim that webp-recompress is ready for production. The SSIMULACRA wrapper it depends on (ssimulacra-bin), which I put together, only works on macOS and will need additional work to support other operating systems. SSIMULACRA also requires OpenCV to be installed.
Furthermore, webp-recompress runs into trouble in some edge cases: source images in CMYK or greyscale color spaces can't be compared to the lossy WebP output after conversion, because WebP supports only a limited set of color spaces.
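One way to catch these cases up front is to inspect the JPEG's Start of Frame marker, whose component count distinguishes greyscale (1 component) and CMYK/YCCK (usually 4) sources from ordinary YCbCr (3). This is a hypothetical pre-flight check of my own, not something webp-recompress currently does.

```javascript
// Hypothetical pre-flight check: read the component count from a JPEG's
// SOF (Start of Frame) marker. Simplified: assumes no fill bytes or
// standalone markers appear before the SOF, which holds for typical files.
function jpegComponentCount(buffer) {
  let offset = 2; // skip the 0xFFD8 SOI marker
  while (offset < buffer.length - 4) {
    if (buffer[offset] !== 0xff) return null; // malformed stream
    const marker = buffer[offset + 1];
    if (marker === 0xda) return null; // hit scan data without finding a SOF
    const isSOF =
      marker >= 0xc0 && marker <= 0xcf &&
      marker !== 0xc4 && marker !== 0xc8 && marker !== 0xcc;
    if (isSOF) {
      // Segment layout after the marker: length (2), precision (1),
      // height (2), width (2), then the component count (1).
      return buffer[offset + 9];
    }
    offset += 2 + buffer.readUInt16BE(offset + 2); // skip this segment
  }
  return null;
}
```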
Still, I honestly believe that this test demonstrates the viability of WebP recompression when guided by structural similarity analysis. With some additional work and community help, I could see it being used in build pipelines to deliver images that are smaller and of sufficient quality.
The future of image formats is bright
The landscape of alternative image formats is evolving. It's equally true that WebP, while young as far as image formats go, has been around for a while. Given these realities, it's natural to wonder why we're talking about WebP in 2020 rather than newer or upcoming image formats such as AVIF or JPEG XL. The reasoning is threefold:
- AVIF is only supported in Firefox, and only behind a flag.
- JPEG XL is still in development, and its bitstream has not been frozen—though it will be soon. Even so, it will take time for operating systems and browsers to adopt it.
- WebP currently has the best browser support of any alternative image format.
At the risk of pigeon-holing myself as the WebP Guy (which might be a foregone conclusion at this point), it's a matter of simple pragmatism. While I acknowledge that WebP has limitations, such as a lack of OS-level support that makes its use outside of web browsers difficult, it's a versatile format that combines the best aspects of established formats while delivering smaller file sizes. With intelligent recompression, we can nick just a little bit more off of those image sizes without sacrificing too much in quality. Most of the time, anyway.
While we're at it, who's to say that the techniques I've applied in webp-recompress couldn't be applied to another lossy format? JPEG XL will have the ability to losslessly transcode JPEGs at a smaller file size. That's a big deal, but we might also be able to apply this same recompression strategy to JPEG XL with more pronounced benefits for loading performance. I'm excited to see how far that could go.
In the meantime, if you're new to WebP as a format, feel free to check out The WebP Manual, a book that I'm quite proud to have written for Smashing Magazine. It covers a wide array of aspects regarding the format, including a fair amount of performance statistics that may dovetail well with this article.
Thank you to Rachel Andrew and Eric Portis, whose valuable input helped whip this article into fighting shape.