Everything You Need To Know About Google’s JPEG Encoder Guetzli

GlobalDots
6 Min read

A website is the digital face of a company, the first thing a visitor sees. And it’s a well-known fact that people form their opinions largely based on first impressions. To make a great first impression, online businesses focus on compelling, crisp visual elements – they want the company’s face to be as attractive as possible. The fact that images make up roughly 65% of an average page’s payload only confirms it. The problem, however, is that images are heavy in terms of web performance: too many of them can significantly slow page load times, which ultimately hurts user experience. No business owner wants their company’s face to be the “beautiful but slow” one.



As software engineer Ronan Cremin noted, today’s web pages are now as large as the iconic 90s computer game Doom. Doom’s installer weighs in at 2.39MB, while the average web page requires about 2.6MB of downloaded data, according to the HTTP Archive. In other words, web pages keep getting heavier, which drags down overall page speed.

With internet users growing increasingly impatient, page load times are more critical than ever. Speed is crucial on the internet, which fuels the need for smaller, high-quality compressed files – the smaller the file, the faster it loads. To help reduce image sizes, Google launched its own open-source JPEG encoder, which it claims can shrink JPEG files by up to 35% more than currently available methods, without any visible loss of quality.

Introducing Guetzli

Curiously named Guetzli (Swiss German for “cookie”), the encoder is described by Google as:

(…) a JPEG encoder for digital images and web graphics that can enable faster online experiences by producing smaller JPEG files while still maintaining compatibility with existing browsers, image processing applications and the JPEG standard.



In what it achieves, it is similar to Google’s Zopfli algorithm, which generates smaller PNG and gzip files without requiring a new format. This stands in contrast to techniques such as RNN-based image compression, RAISR, and WebP, which all require changes on the client side to realize their compression gains.

How it Works

The perceived quality of a JPEG is directly linked to the compression stages it undergoes:

  • Color space transformation
  • Discrete cosine transformation
  • Quantization

Guetzli is designed to target the last stage – quantization – where the more quality reduction is applied, the smaller the output file becomes. Guetzli’s algorithm balances minimal quality loss against file-size reduction, employing a search algorithm that closes the gap between the JPEG format’s psychovisual model and Guetzli’s own psychovisual model. The downside, however, is that this search takes considerably longer to complete than other encoding methods.
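To make the quantization stage concrete, here is a minimal Python sketch (assuming numpy and scipy are available; the pixel block is synthetic and the table scaling is simplified) showing how dividing DCT coefficients by a coarser quantization table zeroes out more of them – which is exactly the file-size lever that Guetzli’s search tunes:

    # Illustrative sketch of JPEG's quantization stage (not Guetzli's actual search).
    # Assumes numpy and scipy are installed.
    import numpy as np
    from scipy.fft import dctn

    # Hypothetical 8x8 block of luma samples, level-shifted to be centred on 0.
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, size=(8, 8)).astype(float) - 128

    # Standard JPEG luminance quantization table (Annex K of the JPEG spec).
    Q50 = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ], dtype=float)

    coeffs = dctn(block, norm="ortho")      # discrete cosine transform of the block

    for scale in (0.5, 2.0):                # finer vs. coarser quantization
        quantized = np.round(coeffs / (Q50 * scale))
        zeros = np.count_nonzero(quantized == 0)
        print(f"scale={scale}: {zeros}/64 coefficients quantized to zero")

The coarser setting produces more zeroed coefficients, which compress to fewer bits in the final entropy-coding step – at the cost of visible quality if pushed too far.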

Butteraugli – The Quality Metric

Encoders are made to trim unnecessary file data away; the best ones manage to do so without any visible loss of quality. The trouble is that post-encoding visual quality is hard to measure objectively. Research and development around images still relies largely on real-world testing, where people are asked to report what they do or don’t notice. Running such tests consistently is often impossible, which created the need for an objective, mathematical metric that can determine image quality.

For years, the main quality metric was peak signal-to-noise ratio (PSNR), which measures how much an encoder’s lossy output deviates from the original image. PSNR eventually had to be replaced with newer metrics that could keep pace with rapid technological advances. That’s why, in 2004, the Structural SIMilarity (SSIM) index was introduced as a better model. It was built on insights about human perception – for example, the human visual system focuses more on structures and patterns, and more easily dismisses bright portions of an image.
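For illustration, PSNR boils down to a few lines of arithmetic: the ratio of the maximum possible pixel value to the mean squared error between the original and the lossy image, expressed in decibels. A hedged numpy sketch, with synthetic data standing in for real image files:

    # Hedged sketch: computing PSNR between an original and a lossy 8-bit image.
    # Assumes both images are numpy arrays of identical shape.
    import numpy as np

    def psnr(original: np.ndarray, lossy: np.ndarray, max_value: float = 255.0) -> float:
        mse = np.mean((original.astype(float) - lossy.astype(float)) ** 2)
        if mse == 0:
            return float("inf")             # identical images
        return 10.0 * np.log10(max_value ** 2 / mse)

    # Toy usage with synthetic data (real use would load two image files).
    rng = np.random.default_rng(1)
    original = rng.integers(0, 256, size=(64, 64))
    lossy = np.clip(original + rng.normal(0, 5, size=original.shape), 0, 255)
    print(f"PSNR: {psnr(original, lossy):.2f} dB")   # higher = closer to the original

    # SSIM, by contrast, is available off the shelf, e.g. via
    # skimage.metrics.structural_similarity in scikit-image.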



In 2016, researchers at Google developed a new image quality metric – Butteraugli. It takes image modeling to a whole new level by following the complicated biology of human vision (mainly the activity of the cells in the retina).

Butteraugli is still new and relatively untested, and it is the brain that drives Guetzli’s algorithm. The team behind Guetzli explains that Butteraugli performs best at high quality levels but is CPU- and RAM-heavy once encoding starts. They also note the tool still has room for improvement.
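For the curious, Google also publishes Butteraugli as a standalone command-line comparator. Below is a hedged Python sketch that shells out to the butteraugli binary – assumed to be built from github.com/google/butteraugli and available on your PATH – and reads the perceptual distance it reports (lower means the compressed image is visually closer to the original):

    # Hedged sketch: scoring a compressed image against its original with the
    # butteraugli CLI. Assumes the binary from github.com/google/butteraugli
    # has been built and is on PATH; file names are placeholders.
    import subprocess

    def butteraugli_distance(original: str, candidate: str) -> float:
        result = subprocess.run(
            ["butteraugli", original, candidate],
            capture_output=True, text=True, check=True,
        )
        # The tool prints a perceptual distance; lower = less visible difference.
        return float(result.stdout.split()[0])

    print(butteraugli_distance("photo.png", "photo_guetzli.jpg"))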

Comparison

Eric Portis from Cloudinary ran a few interesting comparison tests to check Guetzli’s capabilities against other encoders. Alongside Guetzli, he encoded JPEGs with mozjpeg (Mozilla’s encoder), libjpeg (Independent JPEG Group) and q_auto (Cloudinary). Since Guetzli doesn’t allow encoding at quality levels below 84, all the images were rendered at quality 84. The results:

  1. libjpeg generated heavy high-quality images
  2. On average, Guetzli output was smaller than libjpeg’s
  3. mozjpeg was faster and produced even smaller files
  4. q_auto produced the slimmest images

Curiously, Guetzli-encoded JPEGs were slightly bigger than mozjpeg and q_auto JPEGs, yet when measured with DSSIM they appeared to be of lower quality. When the quality metric was switched to Butteraugli, however, Guetzli showed by far the highest encoded image quality. Tests performed by Google’s own researchers showed similar results. These results back up Google’s claim that “Guetzli allows for 35% better compression” – but only when the metric of Google’s own design is applied.
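A rough way to reproduce the file-size side of such a comparison is sketched below, assuming Pillow is installed as a stand-in libjpeg-based encoder and the guetzli binary (from github.com/google/guetzli) is on your PATH; both encode a placeholder photo.png at quality 84, Guetzli’s minimum:

    # Hedged sketch: comparing output sizes of a libjpeg-based encode (via Pillow)
    # and a Guetzli encode of the same source image at quality 84.
    # Assumes Pillow is installed and the guetzli binary is on PATH;
    # "photo.png" is a placeholder input file.
    import os
    import subprocess
    from PIL import Image

    SOURCE = "photo.png"

    # libjpeg path: Pillow's JPEG writer.
    Image.open(SOURCE).convert("RGB").save("out_libjpeg.jpg", "JPEG", quality=84)

    # Guetzli path: note that quality may not go below 84.
    subprocess.run(["guetzli", "--quality", "84", SOURCE, "out_guetzli.jpg"], check=True)

    for path in ("out_libjpeg.jpg", "out_guetzli.jpg"):
        print(f"{path}: {os.path.getsize(path) / 1024:.1f} KiB")

Expect the Guetzli run to take far longer – its search-based approach is the price paid for the perceptual gains described above.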



Here’s what Portis also found about Guetzli:

  • Guetzli images are blockier than other encoders’ output, but that’s usually hard to notice without a loupe
  • They are a bit crisper, preserving more fine detail and sharper textures
  • They are much better at keeping ringing artifacts in check, a difference visible even to the unassisted eye
  • It’s great for JPEGs that typically suffer from ringing artifacts
  • It’s not the best choice when hunting for the smallest possible JPEGs

Check out the full details of Eric Portis’ comparison test in his original article. Brian Rinaldi has also written an interesting article comparing Guetzli- and Photoshop-compressed images.

Conclusion

Google has done a great job providing yet another powerful open-source tool. Guetzli is a capable encoder built for the future that has yet to show its full applicability. However, its smart, search-based approach results in extremely long computation times and is quite heavy on CPU and RAM (Google says it requires about one minute of CPU time per megapixel). The Butteraugli metric has yet to prove itself fully fit for its purpose, but for now it seems to do the job. In its current version, Guetzli is excellent for preserving quality in compressed images, but if the only parameter you care about is file size, there are still better (and faster) alternatives.

The key takeaway: the truth is far more nuanced than the simple “35% better” headline. Guetzli is an ingenious tool that offers great possibilities yet to be fully realized and, more importantly, offers new ideas about what can be done with image compression. If you feel your site is too heavy and bloated and could run faster, don’t hesitate to ask for help. Our experts here at GlobalDots can help you with any web performance and security related issues.

