
Category: Future of performance evaluation

You asked, and we heard you: WebXPRT 5 is on the way!

We’re excited to announce that WebXPRT 5 is officially on the way! Since we launched WebXPRT 4 in February 2022, it’s proven to be an exceptionally successful and reliable go-to benchmark for OEM labs, the tech press, and individual users alike—to the tune of over 644,000 runs to date. In past blog posts, we’ve discussed new features and possible auxiliary workloads that we contemplated adding to WebXPRT 4. As we considered user comments and suggestions, changes in web technology, and how best to position WebXPRT as a relevant browser benchmark going forward, however, it became clear that it was time for an all-new WebXPRT.

Now that we’ve announced WebXPRT 5, the first question for many existing WebXPRT users may be, “When will WebXPRT 5 be available?” We’re not yet ready to share an anticipated WebXPRT 5 release date, but we can share that a lot of groundwork is already complete, and the remaining work is moving along rapidly. We’ll continue to issue updates here in the blog as we reach important milestones.

The second question for many existing WebXPRT users may be, “How will WebXPRT change?” We’re not yet ready to share extensive details about WebXPRT 5’s workloads—rest assured that we will as soon as we firm everything up—but we can share a few key guidelines we tried to follow in our WebXPRT 5 design. Each of these points of emphasis reflects feedback we’ve received from labs and features that users have asked for.

  • Provide more AI-related workloads. In past blog posts, we’ve discussed the growing importance of local, browser-side AI. WebXPRT 4 already includes timed AI tasks in two of its workloads: the Organize Album using AI workload and the Encrypt Notes and OCR Scan workload. We’re working on ways to expand WebXPRT’s AI portfolio in the next version.
  • Add WebGPU workloads. As a web API, WebGPU enables web-based applications—such as image-based GenAI and inference workloads—to directly access the graphics rendering and computational capabilities of a system’s GPU. We hope to incorporate WebGPU measures in WebXPRT 5 (see the sketch after this list).
  • Improve WebXPRT’s utility as a tool for test labs, publications, and engineering analysis.
    • Update the workloads with longer operations. Many of WebXPRT’s existing workloads no longer challenge cutting-edge consumer hardware as much as many of us would like. Testing labs have asked for longer and more demanding workloads. We’re working on incorporating workloads that are accessible enough to be run by a broad range of devices yet challenging enough to allow performance differentiation among high-end systems.
    • Enable more precise performance measures. Labs and testers have also asked for more granular insight into the workloads to help with engineering-level performance analysis. Currently, some WebXPRT 4 workload scores include multiple timed tasks. If we separate those compound scores so that each workload reports results from only one timed task, users will be able to more precisely assess how well a device performs while handling specific operations. We’re looking into this approach.
  • Modernize the harness to make it more flexible and to speed future work. WebXPRT 4’s current harness works with server-side sessions on a LAMP (Linux, Apache, MySQL and PHP) stack. If we implement the harness via JavaScript on the client side, it will pave the way for faster development and testing cycles in the future.
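As a rough illustration of what a WebGPU workload depends on, here is a minimal sketch (not WebXPRT code) of how a browser-based test might check for WebGPU support and request a GPU device before dispatching any work. The function name is ours, purely for illustration.

  // Minimal sketch (not WebXPRT code): check for WebGPU support and request a
  // logical GPU device before running any compute or rendering workload.
  async function getWebGpuDevice() {
    if (!("gpu" in navigator)) {
      console.log("WebGPU is not available in this browser.");
      return null;
    }
    // Ask the browser for a physical adapter, then a logical device.
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) {
      console.log("No suitable GPU adapter found.");
      return null;
    }
    const device = await adapter.requestDevice();
    console.log("WebGPU device acquired.");
    return device;
  }

A real workload would go on to create buffers and compute or render pipelines on the returned device; the support check matters because WebGPU availability still varies across browsers and platforms.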

We expect WebXPRT 5 to carry on the WebXPRT legacy of reliability and real-world relevance, while providing users with compelling new workloads and features. As has been our habit with new benchmark releases, however, we won’t force anyone to change versions anytime soon. Instead, we will continue to make WebXPRT 4 available for quite some time after WebXPRT 5 goes live.

If you have any questions or comments about WebXPRT, please let us know!

Justin

Multi-tab testing in a future version of WebXPRT?

In previous posts about our recommended best practices for producing consistent and reliable WebXPRT scores, we’ve emphasized the importance of “clean” testing. Clean testing involves minimizing the amount of background activity on a system during test runs to ensure stable test conditions. With stable test conditions, we can avoid common scenarios in which startup tasks, automatic updates, and other unpredictable processes contribute to high score variances and potentially unfair comparisons.

Clean testing is a vital part of accurate performance benchmarking, but it doesn’t always show us what kind of performance we can expect in typical everyday conditions. For example, while a browser performance test like WebXPRT can provide clean testing scores that serve as a valuable proxy for overall system performance, an entire WebXPRT test run involves only two open browser tabs. Most of us will have many more tabs open at any given time during the day. Those tabs—and any associated background services, extensions, plug-ins, or renderers—can consume CPU cycles and memory. Depending on the number of tabs you leave open, the performance impact on your system can be noticeable. Even with modern browser tab management and resource-saving features, a proliferation of tabs can still have a significant impact on your computing experience.

To better reflect this kind of everyday, many-tab computing, we’ve been considering the possibility of adding one or more multi-tab testing features to a future version of WebXPRT. There are several ways we could do this, including the following options:

  • We could open each full workload cycle in a new tab, resulting in seven total tabs.
  • We could open each individual workload iteration in a new tab, resulting in 42 total tabs.
  • We could allow users to run multiple full tests back-to-back while keeping the tabs from the previous test(s) open.

If we do decide to add multi-tab features to a future version of WebXPRT, we could integrate them into the main score, or we could make them optional so that they don’t affect traditional WebXPRT testing. We’re looking at all of these options.
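To make the tab-per-workload option more concrete, here is a rough, hypothetical sketch of how a controller page could spawn workload tabs; the page names and workload count are placeholders, not actual WebXPRT URLs.

  // Hypothetical sketch: open each full workload cycle in its own tab from a
  // controller page. The page names below are placeholders, not real WebXPRT URLs.
  const WORKLOAD_PAGES = [
    "workload1.html", "workload2.html", "workload3.html", "workload4.html",
    "workload5.html", "workload6.html", "workload7.html",
  ];

  function launchWorkloadTabs() {
    const tabs = [];
    for (const page of WORKLOAD_PAGES) {
      // Browsers may block window.open() unless it runs in response to a user
      // gesture, so a real harness would launch tabs from a "Start test" click.
      const tab = window.open(page, "_blank");
      if (tab) {
        tabs.push(tab);
      }
    }
    return tabs; // The controller could later message these tabs to collect results.
  }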

Whenever we’re weighing options like these, we seek your input. We want to know whether a feature like this is something you’d like to see, so below you’ll find two quick survey questions that will help us gauge your interest in this topic. We would appreciate your feedback!

Would you be interested in using future WebXPRT multi-tab testing features?

How many browser tabs do you typically leave open at one time?

If you’d like to share additional thoughts or ideas related to possible multi-tab features, please let us know!

Justin

CrXPRT 2 functionality is ending with ChromeOS 139

Back in January, we discussed the ChromeOS team’s decision to eventually end support for all user-installed Chrome Apps—including CrXPRT 2—upon the release of Chrome 138 in July of this year. As best we can tell, the move is part of their overall strategy of transitioning all support to Chrome extensions and Progressive Web Apps. We knew that after the support cutoff date, we would not be able to publish any fixes or updates for CrXPRT 2, but we weren’t exactly sure how the transition would affect the app’s overall functionality.

We’ve now confirmed that while CrXPRT 2 still functions normally through Chrome 138.0.7204.255 (beta), the app does not launch at all on Chrome Canary 139. Consequently, we expect that stable channel system updates will disable CrXPRT 2 on most systems after Chrome 139 goes live on August 5th. We will initially leave CrXPRT 2 on our site for those who want to use it on older versions of Chrome, but over time we will archive it as an inactive benchmark.

We want to extend our heartfelt thanks to the many people around the world who used CrXPRT 2 for lab evaluations, product reviews, and individual testing over the past several years. We’re grateful for your support! We will update readers here in the blog if we decide to pursue new ChromeOS benchmark development work in the future.

Justin

The XPRTs: What would you like to see in 2025?

If you’re a new follower of the XPRT family of benchmarks, you may not be aware of one of the characteristics that sets the XPRTs apart from many benchmarking efforts—our openness and commitment to valuing the feedback of tech journalists, lab engineers, and anyone else who uses the XPRTs on a regular basis. That feedback helps us ensure that as the XPRTs grow and evolve, the resources we offer will continue to meet the needs of those who use them.

In the past, user feedback has influenced specific aspects of our benchmarks, such as the length of test runs, UI features, results presentation, and the addition or subtraction of specific workloads. More broadly, we have also received suggestions for entirely new XPRTs and ways we might target emerging technologies or industry use cases.

As we look forward to what’s in store for the XPRTs in 2025, we’d love to hear your ideas about new XPRTs—or new features for existing XPRTs. Are you aware of hardware form factors, software platforms, new technologies, or prominent applications that are difficult or impossible to evaluate using existing performance benchmarks? Should we incorporate additional or different technologies into existing XPRTs through new workloads? Do you have suggestions for ways to improve any of the XPRTs or XPRT-related tools, such as results viewers?

We’re especially interested in your thoughts about the next steps for WebXPRT. If our recent blog posts about a potential AI-focused auxiliary workload, a possible WebXPRT battery life test, or WebAssembly-based test scenarios have piqued your interest, we’d love to hear from you!

We’re genuinely interested in your answers to these questions and any other ideas you have, so please feel free to contact us. We look forward to hearing your thoughts and working together to figure out how they could help shape the XPRTs in 2025!

Justin

Speaking of potential future WebXPRT workloads

In recent blog posts, we’ve discussed several types of potential future WebXPRT workloads—from an auxiliary AI-focused workload to a WebXPRT battery life test—and many of the factors that we would need to consider when developing those workloads. In today’s post, we’re discussing other types of workloads that we may consider for future WebXPRT versions. We’re also inviting you to send us your WebXPRT workload ideas!

Currently, the most promising web technology for future WebXPRT workloads is WebAssembly (Wasm). Wasm is a binary instruction format that works across all modern browsers, provides a sandboxed execution environment that runs at near-native speed, and takes advantage of common hardware capabilities across platforms. Wasm’s capabilities offer web developers significant flexibility in running complex client applications within the browser.
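For readers who haven’t worked with Wasm directly, the sketch below shows the general shape of loading and timing a compiled module in the browser; the module name and exported function are hypothetical and are not part of WebXPRT.

  // Minimal sketch: stream, compile, and instantiate a Wasm module, then time a
  // call to one of its exports. "photo_filter.wasm" and apply_filter() are
  // hypothetical names, not WebXPRT components.
  async function runWasmTask() {
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("photo_filter.wasm"),
      { env: {} } // imports the module expects, if any
    );
    const start = performance.now();
    instance.exports.apply_filter(); // the exported function does the heavy lifting
    const elapsed = performance.now() - start;
    console.log(`Wasm task completed in ${elapsed.toFixed(1)} ms`);
  }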

We first made use of Wasm in WebXPRT 4’s Organize Album and Encrypt Notes workloads, but Wasm has the potential to support many more types of test scenarios. Here are just a few of the use-case categories that Wasm supports:

  • Gaming
  • Image and video editing
  • Video augmentation
  • CAD applications
  • Interactive learning portals
  • Language translation

Those categories and the possibilities they open for additional workloads are exciting! When thinking through possible new workload scenarios, it’s important to remember that workload proposals need to fit within a set of basic guidelines that uphold WebXPRT’s strengths as a benchmark. You can read about those guidelines in more detail in this blog post, but in short, new workloads ideally should

  • be relevant to real-life scenarios
  • have cross-platform support
  • clearly differentiate performance between different types of devices
  • produce consistent and easily replicated results

After testing with WebXPRT or reviewing the list of use cases that Wasm supports, have you considered a new workload or test scenario that you would like to see? If so, please let us know! Your ideas could end up playing a role in shaping the next version of WebXPRT!

Justin

Thinking through a potential WebXPRT 4 battery life test

In recent blog posts, we’ve discussed some of the technical considerations we’re working through on our path toward a future AI-focused WebXPRT 4 auxiliary workload. While we’re especially excited about adding to WebXPRT 4’s AI performance evaluation capabilities, AI is not the only area of potential WebXPRT 4 expansion that we’ve thought about. We’re always open to hearing suggestions for ways we can improve WebXPRT 4, including any workload proposals you may have. Several users have asked about the possibility of a WebXPRT 4 battery life test, so today we’ll discuss what one might look like and some of the challenges we’d have to overcome to make it a reality.

Battery life tests fall into two primary categories: simple rundown tests and performance-weighted tests. Simple rundown tests measure battery life during extreme idle periods or loops of video playback, but they do not reflect the wide-ranging mix of activities that characterize a typical day for most users. While they can be useful for performing very specific apples-to-apples comparisons, these tests don’t always give consumers an accurate estimate of the battery life they would experience in daily use.

In contrast, performance-weighted battery life tests, such as the one in CrXPRT 2, attempt to reflect real-world usage. The CrXPRT battery life test simulates common daily usage patterns for Chromebooks by including all the productivity workloads from the performance test, plus video playback, audio playback, and gaming scenarios. It also includes periods of wait/idle time. We believe this mixture of diverse activity and idle time better represents typical real-life behavior patterns. This makes the resulting estimated battery life much more helpful for consumers who are trying to match a device’s capabilities with their real-world needs.

From a technical standpoint, WebXPRT’s cross-platform nature presents us with several challenges that we did not face while developing the CrXPRT battery life test for ChromeOS. While the WebXPRT performance tests run in almost any browser, cross-browser differences and limitations in battery life reporting may restrict any future battery life test to a single browser or browser family. For instance, with the W3C Battery Status API, we can currently query battery status data from non-mobile Chromium-based browsers (e.g., Chrome, Edge, and Opera), but not from Firefox or Safari. If a WebXPRT 4 battery life test supported only a single browser family, such as Chromium-based browsers, would you still be interested in using it? Please let us know.
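For context, here is a minimal sketch of the kind of query the Battery Status API allows; it simply reads the current charge level and listens for changes, and it is not code from any existing or planned WebXPRT test.

  // Minimal sketch: read battery state via the Battery Status API, which currently
  // works in non-mobile Chromium-based browsers but not in Firefox or Safari.
  async function logBatteryStatus() {
    if (!("getBattery" in navigator)) {
      console.log("Battery Status API is not supported in this browser.");
      return;
    }
    const battery = await navigator.getBattery();
    console.log(`Charge level: ${(battery.level * 100).toFixed(0)}%`);
    console.log(`Charging: ${battery.charging}`);
    // A battery life test would sample these values over the course of a long run.
    battery.addEventListener("levelchange", () => {
      console.log(`Level changed: ${(battery.level * 100).toFixed(0)}%`);
    });
  }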

A browser-based battery life workflow also presents other challenges that we do not face in native client applications, such as CrXPRT:

  • A browser-based battery life test may require the user to check the starting and ending battery capacities, with no way for the app to independently verify data accuracy.
  • The battery life test could require more babysitting in the event of network issues. We can catch network failures and try to handle them by reporting periods of network disconnection (see the sketch after this list), but those interruptions could still influence the measured battery life.
  • The factors above could make it difficult to achieve repeatability. One way to address that problem would be to run the test in a standardized lab environment with a steady internet connection, but a long list of standardized environmental requirements would make the battery life test less attractive and less accessible to many testers.
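As a rough illustration of the network-handling point above, a browser-based test could record disconnection periods with the standard online and offline events, as in this hypothetical sketch.

  // Hypothetical sketch: record periods of network disconnection during a long
  // battery life run so they can be reported alongside the final result.
  const disconnections = [];
  let offlineSince = null;

  window.addEventListener("offline", () => {
    offlineSince = performance.now();
  });

  window.addEventListener("online", () => {
    if (offlineSince !== null) {
      disconnections.push({
        start: offlineSince,
        durationMs: performance.now() - offlineSince,
      });
      offlineSince = null;
    }
  });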

We’re not sharing these thoughts to make a WebXPRT 4 battery life test seem like an impossibility. Rather, we want to offer our perspective on what the test might look like and describe some of the challenges and considerations in play. If you have thoughts about battery life testing, or experience with battery life APIs in one or more of the major browsers, we’d love to hear from you!

Justin
