

Passing two important WebXPRT milestones

Over the past few months, we’ve been excited to see a substantial increase in the total number of completed WebXPRT runs. To put the increase in perspective, we had more total WebXPRT runs last month alone (40,453) than we had in the first two years WebXPRT was available (36,674)! This boost has helped us to reach two important milestones as we close in on the end of 2023.

The first milestone is that the number of WebXPRT 4 runs per month now exceeds the number of WebXPRT 3 runs per month. When we release a new version of an XPRT benchmark, it can take a while for users to transition from using the older version. For OEM labs and tech journalists, adding a new benchmark to their testing suite often involves a significant investment in back testing and gathering enough test data for meaningful comparisons. When the older version of the benchmark has been very successful, adoption of the new version can take longer. WebXPRT 3 has been remarkably popular around the world, so we’re excited to see WebXPRT 4 gain traction and take the lead even as the total number of WebXPRT runs increases each month. The chart below shows the number of WebXPRT runs per month for each version of WebXPRT over the past ten years. WebXPRT 4 usage first surpassed WebXPRT 3 in August of this year, and after looking at data for the last three months, we think its lead is here to stay.

The second important milestone is the cumulative number of WebXPRT runs, which recently passed 1.25 million, as the chart below shows. For us, this moment represents more than a numerical milestone. For a benchmark to succeed, developers need the trust and support of the benchmarking community. WebXPRT’s consistent year-over-year growth tells us that the benchmark continues to hold value for manufacturers, OEM labs, the tech press, and end users. We see it as a sign of trust that folks repeatedly return to the benchmark for reliable performance metrics. We’re grateful for that trust, and for everyone who has contributed to the WebXPRT development process over the years.

We look forward to seeing how far WebXPRT’s reach can extend in 2024! If you have any questions or comments about using WebXPRT, let us know!

Justin

Making progress with WebXPRT 4 in iOS 17

In recent blog posts, we discussed an issue that we encountered when attempting to run WebXPRT 4 on iOS 17 devices. If you missed those posts, you can find more details about the nature of the problem here. In short, the issue is that the Encrypt Notes and OCR scan subtest in WebXPRT 4 gets stuck when the Tesseract.js Optical Character Recognition (OCR) engine attempts to scan a shopping receipt. We’ve verified that the issue occurs on devices running iOS 17, iPadOS 17, and macOS Sonoma with Safari 17.

After a good bit of troubleshooting and research to identify the cause of the problem, we decided to build an updated version of WebXPRT 4 that uses a newer version of Tesseract for the OCR task. Aside from updating Tesseract in the new build, we aimed to change as little as possible. To maximize continuity, we’re still using the original input image for the receipt scanning task, and we decided to stick with the WASM library instead of a WASM-SIMD library. Other than the new version of tesseract.js, WebXPRT 4 version number updates, and updated documentation where necessary, all aspects of WebXPRT 4 will remain the same.
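
For readers curious about what the OCR step involves, here is a minimal, hypothetical sketch of a Tesseract.js receipt scan in the browser. It is not the actual WebXPRT 4 workload code; the image path is a placeholder, and the exact worker-creation API differs between tesseract.js releases.

```ts
// Minimal sketch of a Tesseract.js OCR step, similar in spirit to the Encrypt
// Notes and OCR scan subtest. Not the actual WebXPRT 4 code; "receipt.png"
// and the timing are illustrative placeholders.
import Tesseract from 'tesseract.js';

async function scanReceipt(imageUrl: string): Promise<string> {
  const start = performance.now();

  // recognize() loads the language data and runs the WASM engine; newer
  // tesseract.js releases changed how workers are created and initialized,
  // which is the kind of difference an engine update can introduce.
  const { data } = await Tesseract.recognize(imageUrl, 'eng');

  console.log(`OCR finished in ${(performance.now() - start).toFixed(0)} ms`);
  return data.text;
}

scanReceipt('receipt.png').then((text) => console.log(text));
```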

We’re currently testing a candidate build of this new version on a wide array of devices. The results so far seem promising, but we want to complete our due diligence and make sure this is the best approach to solving the problem. We know that OEM labs and tech reviewers put a lot of time and effort into compiling databases of results, so we hope to provide a solution that minimizes results disruption and inconvenience for WebXPRT 4 users. Ideally, folks would be able to integrate scores from the new build without any questions or confusion about comparability.

We don’t yet have an exact release date for a new WebXPRT 4 build, but we can say that we’re shooting for the end of October. We appreciate everyone’s patience as we work towards the best possible solution. If you have any questions or concerns about an updated version of WebXPRT 4, please let us know.

Justin

A note about CrXPRT 2

Recent visitors to CrXPRT.com may have seen a notice encouraging them to use WebXPRT 4 instead of CrXPRT 2 for performance testing on high-end Chromebooks. The notice reads as follows:

NOTE: Chromebook technology has progressed rapidly since we released CrXPRT 2, and we’ve received reports that some CrXPRT 2 workloads may not stress top-bin Chromebook processors enough to give the necessary accuracy for users to compare their performance. So, for the latest test to compare the performance of high-end Chromebooks, we recommend using WebXPRT 4.

We made this recommendation because of the evident limitations of the CrXPRT 2 performance workloads when testing newer high-end hardware. CrXPRT 2 itself is not that old (2020), but when we created the CrXPRT 2 performance workloads, we started with a core framework of CrXPRT 2015 performance workloads. In a similar way, we built the CrXPRT 2015 workloads on a foundation of WebXPRT 2015 workloads. At the time, the harness and workload structures we used to ensure WebXPRT 2015’s cross-browser capabilities provided an excellent foundation that we could adapt for our new ChromeOS benchmark. Consequently, CrXPRT 2 is a close developmental descendant of WebXPRT 2015. Some of the legacy WebXPRT 2015/CrXPRT 2 workloads do not stress current high-end processors enough to differentiate their performance effectively, and they do not engage the latest web technologies.

In the past, the Chromebook market skewed heavily toward low-cost devices with down-bin, inexpensive processors, making this limitation less of an issue. Now, however, more Chromebooks offer top-bin processors on par with traditional laptops and workstations. Because of the limitations of the CrXPRT 2 workloads, we now recommend WebXPRT 4 for both cross-browser and ChromeOS performance testing on the latest high-end Chromebooks. WebXPRT 4 includes updated test content, newer JavaScript tools and libraries, modern WebAssembly workloads, and additional Web Workers tasks that cover a wide range of performance requirements.
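
As a rough illustration of the Web Workers pattern such tasks rely on, here is a minimal, self-contained sketch that offloads a computation to a background thread via a Blob URL. It is not taken from WebXPRT 4; the loop is just a stand-in for real work.

```ts
// Minimal Web Worker sketch: run a CPU-bound loop off the main thread so the
// page stays responsive. Illustrative only; not WebXPRT 4 source code.
const workerSource = `
  self.onmessage = (e) => {
    let sum = 0;
    for (let i = 0; i < e.data; i++) sum += Math.sqrt(i);
    self.postMessage(sum);
  };
`;

const worker = new Worker(
  URL.createObjectURL(new Blob([workerSource], { type: 'application/javascript' }))
);

worker.onmessage = (e) => {
  console.log('Result from worker:', e.data);
  worker.terminate();
};

worker.postMessage(10_000_000); // size of the stand-in workload
```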

While CrXPRT 2 continues to function as a capable performance and battery life comparison test for many ChromeOS devices, WebXPRT 4 is a more appropriate tool to use with new high-end devices. If you haven’t yet used WebXPRT 4 for Chromebook comparison testing, we encourage you to give it a try!

If you have any questions or concerns about CrXPRT 2 or WebXPRT 4, please don’t hesitate to ask!

Justin

Check out the WebXPRT 4 results viewer

New visitors to our site may not be aware of the WebXPRT 4 results viewer and how to use it. The viewer provides WebXPRT 4 users with an interactive, information-packed way to browse test results that is not available for earlier versions of the benchmark. With the viewer, users can explore all of the PT-curated results that we’ve published on WebXPRT.com, find more detailed information about those results, and compare results from different devices. The viewer currently displays over 460 results, and we add new entries each week.

The screenshot below shows the tool’s default display. Each vertical bar in the graph represents the overall score of a single test result, with bars arranged from lowest to highest. To view a single result in detail, the user hovers over a bar until it turns white and a small popup window displays the basic details of the result. If the user clicks to select the highlighted bar, the bar turns dark blue, and the dark blue banner at the bottom of the viewer displays additional details about that result.

In the example above, the banner shows the overall score (227), the score’s percentile rank (66th) among the scores in the current display, the name of the test device, and basic hardware disclosure information. If the source of the result is PT, users can click the Run info button to see the run’s individual workload scores. If the source is an external publisher, users can click the Source link to navigate to the original site.

The viewer includes a drop-down menu that lets users quickly filter results by major device type categories, and a tab with additional filtering options, such as browser type, processor vendor, and result source. The screenshot below shows the viewer after I used the device type drop-down filter to select only desktops.

The screenshot below shows the viewer as I use the filter tab to explore additional filter options, such as processor vendor.

The viewer also lets users pin multiple specific runs, which is helpful for making side-by-side comparisons. The screenshot below shows the viewer after I pinned four runs and viewed them on the Pinned runs screen.

The screenshot below shows the viewer after I clicked the Compare runs button. The overall and individual workload scores of the pinned runs appear in a table.

We’re excited about the WebXPRT 4 results viewer, and we want to hear your feedback. Are there features you’d really like to see, or ways we can improve the viewer? Please let us know, and send us your latest test results!

Justin

How we evaluate new WebXPRT workload proposals

A key value of the BenchmarkXPRT Development Community is our openness to user feedback. Whether it’s positive feedback about our benchmarks, constructive criticism, ideas for completely new benchmarks, or proposed workload scenarios for existing benchmarks, we appreciate your input and give it serious consideration.

We’re currently accepting ideas and suggestions for ways we can improve WebXPRT 4. We are open to adding both non-workload features and new auxiliary tests, which can be experimental or targeted workloads that run separately from the main test and produce their own scores. You can read more about experimental WebXPRT 4 workloads here. A recent user question about possible WebGPU workloads also prompted us to explain the types of parameters we consider when we evaluate a new WebXPRT workload proposal.

Community interest and real-life relevance

The first two parameters we use when evaluating a WebXPRT workload proposal are straightforward: are people interested in the workload and is it relevant to real life? We originally developed WebXPRT to evaluate device performance using the types of web-based tasks that people are likely to encounter daily, and real-life relevancy continues to be an important criterion for us during development. There are many technologies, functions, and use cases that we could test in a web environment, but only some of them are both relevant to common applications or usage patterns and likely to be interesting to lab testers and tech reviewers.

Maximum cross-platform support

Currently, WebXPRT runs in almost any web browser, on almost any device that has a web browser, and we would ideally maintain that broad level of cross-platform support when introducing new workloads. However, technical differences in the ways that different browsers execute tasks mean that some types of scenarios would be impossible to include without breaking our cross-platform commitment.
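
To make that concrete with the WebGPU example mentioned earlier, a hypothetical WebGPU workload would need feature detection along these lines, and it simply could not run in browsers that don’t expose the API. This is only an illustrative sketch, not a planned WebXPRT workload.

```ts
// Sketch of the feature detection a hypothetical WebGPU workload would need.
// In browsers without WebGPU support, navigator.gpu is undefined, so the
// workload could not run at all on those platforms.
async function webGpuAvailable(): Promise<boolean> {
  const gpu = (navigator as any).gpu; // cast avoids requiring WebGPU type definitions
  if (!gpu) return false;

  const adapter = await gpu.requestAdapter();
  return adapter !== null;
}

webGpuAvailable().then((ok) =>
  console.log(ok ? 'A WebGPU workload could run here' : 'WebGPU is not supported here')
);
```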

One reason we’re considering auxiliary workloads for WebXPRT, such as a battery life rundown, is that they would allow WebXPRT to offer additional value to users while maintaining the cross-platform nature of the main test. Even if a battery life test ran on only one major browser, it could still be very useful to many people.

Performance differentiation

Computer benchmarks such as the XPRTs exist to provide users with reliable metrics that they can use to gauge how well target platforms or technologies perform certain tasks. With a broadly targeted benchmark such as WebXPRT, if the workloads are so heavy that most devices can’t handle them, or so light that most devices complete them without being taxed, the results will have little to no use for OEM labs, the tech press, or independent users when evaluating devices or making purchasing decisions.

Consequently, with any new WebXPRT workload, we try to find a sweet spot in terms of how demanding it is. We want it to run on a wide range of devices—from low-end devices that are several years old to brand-new high-end devices and everything in between. We also want users to see a wide range of workload scores and resulting overall scores, so they can easily grasp the different performance capabilities of the devices under test.

Consistency and replicability

Finally, workloads should produce scores that consistently fall within an acceptable margin of error and are easy to replicate with additional testing or comparable gear. Some web technologies are very sensitive to uncontrollable or unpredictable variables, such as internet speed. A workload that measures one of those technologies would be unlikely to produce results that are consistent and easily replicated.
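
As a rough illustration of what we mean by an acceptable margin of error, the sketch below computes the coefficient of variation across repeated runs. The scores and the five-percent threshold are made-up example values, not an official XPRT acceptance criterion.

```ts
// Illustrative consistency check: compute the coefficient of variation
// (standard deviation divided by the mean) for a set of repeated scores.
// The scores and the 5% threshold are example values only.
function coefficientOfVariation(scores: number[]): number {
  const mean = scores.reduce((a, b) => a + b, 0) / scores.length;
  const variance = scores.reduce((a, b) => a + (b - mean) ** 2, 0) / scores.length;
  return Math.sqrt(variance) / mean;
}

const runs = [231, 227, 229, 233, 228]; // hypothetical repeated scores
const cv = coefficientOfVariation(runs);
console.log(`CV: ${(cv * 100).toFixed(1)}%`, cv < 0.05 ? 'consistent' : 'noisy');
```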

We hope this post will be useful for folks who are contemplating potential new WebXPRT workloads. If you have any general thoughts about browser performance testing, or specific workload ideas that you’d like us to consider, please let us know.

Justin

Best practices in benchmarking

From time to time, a tester writes to ask for help determining why they see different WebXPRT scores on two systems that have the same hardware configuration. The scores sometimes differ by a significant percentage. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behavior and environments. While a small amount of variability is normal, these types of questions provide an opportunity to talk about the basic benchmarking practices we follow in the XPRT lab to produce the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. Most of them relate to evaluating browser performance with WebXPRT, but several of these practices apply to other benchmarks as well.

  • Test with clean images: We typically use an out-of-box (OOB) method for testing new devices in the XPRT lab. OOB testing means that other than running the initial OS and browser version updates that users are likely to run after first turning on the device, we change as little as possible before testing. We want to assess the performance that buyers are likely to see when they first purchase the device, before they install additional apps and utilities. While OOB testing is not appropriate for every type of testing, the key is to avoid testing a device that’s bogged down with programs that will influence results.
  • Turn off automatic updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always double-check update settings before testing.
  • Get a baseline for system processes: Depending on the system and the OS, a significant amount of system-level activity can be going on in the background after you turn it on. As much as possible, we like to wait for a stable (idle) baseline of system activity before kicking off a test. If we start testing immediately after booting the system, we often see higher variance in the first run before the scores start to tighten up.
  • Hardware is not the only important factor: Most people know that different browsers produce different performance scores on the same system. However, testers aren’t always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always worthwhile to record and disclose the extended browser version number for each test run. The same principle applies to any other relevant software.
  • Use more than one data point: Because of natural variance, our standard practice in the XPRT lab is to publish a score that represents the median from three to five runs, if not more (see the sketch after this list). If you run a benchmark only once, and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions.
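
To make that last point concrete, here is a small sketch of one way to reduce several runs to a single reported score. The run values are made up, and the median rule shown here illustrates the general idea rather than an official XPRT procedure.

```ts
// Sketch: reduce repeated benchmark runs to one reported score by taking the
// median. The scores below are made-up example values.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const runScores = [224, 230, 227, 229, 226]; // five hypothetical WebXPRT runs
console.log(`Reported score: ${median(runScores)}`); // prints 227, the middle value
```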

We hope these tips will help make your testing more accurate. If you have any questions about the XPRTs, or about benchmarking in general, feel free to ask!

Justin
