20th November, 2014 · 4 min read
Real User Monitoring (RUM) is a major step forward in a business's understanding of application performance. The data it gathers reflects full page-load timing from real pages, loaded in real browsers, in real locations around the world, and the technology applies equally well to desktop, mobile, and tablet browsers. Unfortunately, while RUM provides this starting point, it doesn't necessarily point to the precise asset responsible for a slowdown. Additionally, the growing class of "single-page apps" — applications such as Gmail or Facebook that fetch new data without performing full page loads — does not yield very good RUM data.
The biggest advantage of measuring actual data is that there’s no need to pre-define the important use cases. As each user goes through the application, RUM captures everything, so no matter what pages they see, there will be performance data available. This is particularly important for large sites or complex apps, where the functionality or interesting content is constantly changing. In short, RUM completes the monitoring spectrum. It provides a clearer understanding of web performance, enabling you to take targeted action and remove performance inhibitors. This puts website operators in a better position to commit to customer demands, marketing strategies and business revenue objectives.
RUM's greatest asset is also its greatest weakness: it only works if people are visiting and using your site. While RUM offers incredible insight by monitoring, capturing, and analyzing every user interaction on your website, you still need real traffic for it to work.
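In practice, "analyzing every user interaction" means reducing the timings beaconed from real browsers to headline percentiles on the backend. As a minimal sketch — the nearest-rank method and the field names here are illustrative choices, not any particular vendor's API:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile: the smallest value with at least
    pct% of the samples at or below it."""
    ordered = sorted(values)
    k = max(0, math.ceil(pct / 100 * len(ordered)) - 1)
    return ordered[k]

def summarize(load_times_ms):
    """Collapse real-user page load times (milliseconds) into headline numbers."""
    return {
        "samples": len(load_times_ms),
        "p50_ms": percentile(load_times_ms, 50),
        "p95_ms": percentile(load_times_ms, 95),
    }
```

For 100 samples of 1–100 ms this reports a median of 50 ms and a p95 of 95 ms; a real deployment would also slice these numbers by page, browser, and geography.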
Synthetic performance monitoring, sometimes called proactive monitoring, involves having external agents run scripted transactions against a web application. These scripts follow the steps a typical user might take — search, view product, log in, check out — in order to assess the experience of a user. The basic idea behind synthetic monitoring is to ensure that your web properties and key user transactions are always performing properly, even when there is no real user traffic coming through the given site or application. Synthetic monitoring also provides the ability to test and monitor from different rendering agents or browsers; depending on the provider, these may be actual versions of today's most popular browsers, or pseudo-browsers developed by the provider to closely emulate Chrome, Firefox, and Internet Explorer.
Most companies use synthetic (sometimes called external) monitoring to track website performance. That is, they synthetically generate traffic from beyond the firewall to see how sites perform for the outside world. Although this provides an important, structured perspective of the website experience, it isn’t based on measurements of real user activity. An overreliance on this approach leads to inaccurate views. What’s needed is a perspective delivered by users themselves.
Results are typically displayed as a waterfall chart: a visual representation of every request the page or transaction makes, in order, over the total execution time, which makes it easy to identify performance flaws. The monitoring aspect of synthetic testing begins when these tests are run at regular intervals, so that you can baseline site performance, identify issues, and target areas for optimization.
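The per-step timing that feeds such a chart can be sketched in a few lines. This is an illustrative outline, not a vendor's agent, and the step names below are hypothetical stand-ins for real browser or HTTP actions:

```python
import time

def run_transaction(steps):
    """Execute a scripted transaction step by step, timing each one.
    `steps` is a list of (name, callable) pairs run in order, mirroring
    the per-request breakdown a waterfall chart visualizes."""
    timings = []
    for name, action in steps:
        start = time.monotonic()
        action()  # e.g. issue the HTTP request(s) for this step
        timings.append((name, (time.monotonic() - start) * 1000.0))
    return timings

# A hypothetical checkout flow; a real agent would drive a browser or HTTP client.
flow = [
    ("search", lambda: time.sleep(0.01)),
    ("view product", lambda: time.sleep(0.01)),
    ("check out", lambda: time.sleep(0.01)),
]
```

Running the same flow at a fixed interval, from a fixed location, is what produces the comparable data points needed to baseline performance over time.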
Unlike RUM, synthetics don't track real user sessions. This has a couple of important implications. First, because the script executes a known set of steps at regular intervals from a known location, its performance is predictable. That makes it more useful for alerting than often-noisy RUM data. Second, because it occurs predictably and externally, it's better for assessing site availability and network problems than RUM is, particularly if your synthetic monitoring has integrated network insight.
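Because each synthetic run is directly comparable to the last, alerting can be as simple as flagging a run that strays too far from the recent baseline. A minimal sketch — the 1.5x tolerance is an arbitrary illustrative choice:

```python
def should_alert(baseline_ms, latest_ms, tolerance=1.5):
    """Return True when the latest synthetic run exceeds the mean of the
    recent baseline runs by more than the allowed factor. Noisy RUM data
    would trip a naive threshold like this far more often."""
    mean = sum(baseline_ms) / len(baseline_ms)
    return latest_ms > mean * tolerance
```

With a baseline averaging 100 ms, a 300 ms run alerts and a 120 ms run does not.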
RUM data, by definition, comes from real users. It is the ground truth for what users are experiencing. Synthetic data, even when generated using real browsers over a real network, can never match the diversity of performance variables that exist in the real world: browsers, mobile devices, geographic locations, network conditions, user accounts, page view flow… Synthetic data lets us create a consistent testing environment by eliminating those variables. The variables we choose for testing match a certain segment of users, but fail to capture the diversity of users who actually visit a page. That's what RUM is for.
Real User Monitoring:
- Captures timing from real browsers, devices, and locations, with no need to pre-define use cases
- Serves as the ground truth for what users actually experience
- Requires real traffic to produce data, and struggles with single-page apps
Synthetic Performance Monitoring:
- Runs scripted transactions at regular intervals, even when there is no real traffic
- Predictable and external, making it well suited to alerting and to assessing availability and network problems
- Eliminates real-world variables, so it only matches the user segments you choose to script
Why not both? Both types of monitoring provide data on your site's performance; the right mix depends on the size of the site being monitored and the traffic it generates. The best practice is to combine RUM's real-world data with the controlled measurements of synthetic monitoring to get the most complete performance feedback on the page at hand.