Real User Monitoring (RUM) vs Synthetic Tests

GlobalDots
4 Min read

Real User Monitoring (RUM) is a major step forward in a business's understanding of application performance. The data gathered shows full page timings, based on real pages being loaded, from real browsers, in real locations around the world. The technology applies equally well to desktop, mobile, and tablet browsers. Unfortunately, while RUM tells you which pages are slow, it doesn't necessarily point to the precise asset responsible. Additionally, the growing category of "single-page apps" (applications like Gmail or Facebook that fetch new data without full page loads) does not yield very good RUM data.

[Image: real user data]

Image source

The biggest advantage of measuring actual data is that there’s no need to pre-define the important use cases. As each user goes through the application, RUM captures everything, so no matter what pages they see, there will be performance data available. This is particularly important for large sites or complex apps, where the functionality or interesting content is constantly changing. In short, RUM completes the monitoring spectrum. It provides a clearer understanding of web performance, enabling you to take targeted action and remove performance inhibitors. This puts website operators in a better position to commit to customer demands, marketing strategies and business revenue objectives.
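To make the idea concrete, here is a minimal sketch of the server-side half of RUM: aggregating load-time beacons that browsers report, grouped by page, into the percentiles a dashboard would show. The beacon format and function name are hypothetical, not from any specific RUM product.

```python
from statistics import quantiles

def summarize_rum(beacons):
    """Group RUM beacons by page and compute p50/p95 load times.

    beacons: list of dicts like {"page": "/home", "load_ms": 412.0}
    (a hypothetical payload; real RUM beacons carry far more fields).
    Assumes at least two samples per page.
    """
    by_page = {}
    for b in beacons:
        by_page.setdefault(b["page"], []).append(b["load_ms"])

    summary = {}
    for page, samples in by_page.items():
        qs = quantiles(samples, n=100)  # 99 percentile cut points
        summary[page] = {
            "p50": qs[49],   # median load time
            "p95": qs[94],   # slow-tail load time
            "count": len(samples),
        }
    return summary
```

Because RUM captures every page a user happens to visit, the grouping key falls out of the traffic itself; nothing has to be pre-defined.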

RUM's greatest asset is also its greatest weakness: it only works if people are visiting and using your site. While RUM offers deep insight by monitoring, capturing, and analyzing every user interaction on your website, you still need real traffic for it to work.

Synthetic performance monitoring, sometimes called proactive monitoring, involves having external agents run scripted transactions against a web application. These scripts follow the steps a typical user might take (search, view product, log in, check out) in order to assess the experience of a user. The basic idea behind synthetic monitoring is to ensure that your web properties and key user transactions are always performing properly, even when there is no real user traffic coming through the given site or application. Synthetic monitoring also provides the ability to test and monitor from different rendering agents or browsers. Depending on the provider, these may be actual versions of today's most popular browsers, or pseudo-browsers developed by the provider to closely emulate Chrome, Firefox, and IE.
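A scripted transaction of the kind described above can be sketched as a runner that executes named steps in order and times each one against a budget. This is an illustrative skeleton (the step names and budget are made up); a real agent would drive an actual browser rather than plain callables.

```python
import time

def run_synthetic_check(steps, budget_ms):
    """Run scripted transaction steps in order, timing each one.

    steps: list of (name, callable) pairs, e.g. the typical
    search -> view product -> log in -> check out flow.
    budget_ms: per-step latency threshold for flagging.
    """
    results = []
    for name, action in steps:
        start = time.perf_counter()
        ok = True
        try:
            action()
        except Exception:
            ok = False  # a failed step means the transaction is broken
        elapsed_ms = (time.perf_counter() - start) * 1000
        results.append({
            "step": name,
            "ok": ok,
            "ms": round(elapsed_ms, 1),
            "over_budget": elapsed_ms > budget_ms,
        })
    return results
```

Scheduling this runner every few minutes from several locations is what turns a one-off test into synthetic monitoring.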

[Image]

Image source

Most companies use synthetic (sometimes called external) monitoring to track website performance. That is, they synthetically generate traffic from beyond the firewall to see how sites perform for the outside world. Although this provides an important, structured perspective of the website experience, it isn’t based on measurements of real user activity. An overreliance on this approach leads to inaccurate views. What’s needed is a perspective delivered by users themselves.

Results are typically displayed as a waterfall chart: a visual representation of every request the page or transaction makes, in order, over the total execution time, which makes it easy to spot performance flaws. The monitoring aspect of synthetic testing begins when these tests are run at regular intervals, letting you baseline site performance, identify issues, and target areas for optimization.
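As a rough illustration of what a waterfall chart encodes, the sketch below renders request timings as text: each row is one request, indented by its start time, with a bar proportional to its duration. The request names and timings are invented for the example.

```python
def ascii_waterfall(requests, scale_ms=100):
    """Render (name, start_ms, duration_ms) tuples as a text waterfall.

    One character per `scale_ms` milliseconds: leading spaces show when
    the request started, '#' characters show how long it took.
    """
    lines = []
    for name, start, dur in sorted(requests, key=lambda r: r[1]):
        offset = " " * round(start / scale_ms)
        bar = "#" * max(1, round(dur / scale_ms))
        lines.append(f"{name:<12}{offset}{bar}  {start}ms +{dur}ms")
    return "\n".join(lines)
```

A long bar, or a bar that starts late because it waited on earlier requests, is exactly the kind of flaw the chart makes obvious.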

Unlike RUM, synthetics don't track real user sessions. This has a couple of important implications. First, because the script executes a known set of steps at regular intervals from a known location, its performance is predictable. That makes it more useful for alerting than often-noisy RUM data. Second, because it runs predictably and externally, it is better than RUM for assessing site availability and network problems, particularly if your synthetic monitoring has integrated network insight.

[Image: meeting room]

Image source

RUM data, by definition, comes from real users. It is the ground truth for what users are experiencing. Synthetic data, even when generated using real browsers over a real network, can never match the diversity of performance variables that exist in the real world: browsers, mobile devices, geographic locations, network conditions, user accounts, page-view flows. Synthetic data lets us create a consistent testing environment by eliminating those variables. The variables we choose for testing match a certain segment of users, but fail to capture the diversity of users who actually visit a page. That is what RUM is for.
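A tiny numeric illustration of that last point, with entirely made-up load times: a synthetic probe pinned to one fast segment reports a healthy p95, while the p95 over all real-user segments tells a very different story.

```python
import math

# Illustrative, hard-coded load times (ms) by user segment;
# the segment names and numbers are hypothetical.
segments = {
    "fiber-desktop": [320, 350, 400, 380, 360],
    "4g-mobile":     [900, 1100, 1400, 1250, 980],
    "3g-mobile":     [2100, 2600, 3200, 2800, 2400],
}

def p95(samples):
    """Nearest-rank 95th percentile."""
    s = sorted(samples)
    return s[math.ceil(0.95 * len(s)) - 1]

# What a synthetic probe configured like a fiber desktop user sees:
synthetic_view = p95(segments["fiber-desktop"])

# What RUM sees across every real segment that visits the page:
real_view = p95([t for seg in segments.values() for t in seg])
```

Here the single-segment view (400 ms) understates the all-users p95 (3200 ms) by roughly 8x, which is why synthetic data alone can paint too rosy a picture.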

The summary – pros and cons

Real User Monitoring:

  • great for monitoring real users’ page interaction
  • no need to pre-define any data
  • provides a clearer understanding of web performance
  • enables targeted action and removes performance inhibitors
  • it only works when a page has traffic; no traffic means no data
  • isn’t recommended for pages that are still trying to generate traffic

Synthetic Performance Monitoring:

  • simple to use and handy for startups
  • provides important, structured perspective of the website experience
  • easy to identify any performance flaws
  • fails to deliver perspective from actual users
  • isn’t as representative as RUM

RUM vs SPM

Why not both? Both types of monitoring provide data on your site's performance, but the right balance depends on the size of the site being monitored and the traffic it generates. The best practice is to combine the core of RUM with the data collected from synthetic monitoring to get the most complete performance feedback on the page at hand.
