
Yearly Archives: 2025

You asked, and we heard you: WebXPRT 5 is on the way!

We’re excited to announce that WebXPRT 5 is officially on the way! Since we launched WebXPRT 4 in February 2022, it’s proven to be an exceptionally successful and reliable go-to benchmark for OEM labs, the tech press, and individual users alike—to the tune of over 644,000 runs to date. In past blog posts, we’ve discussed new features and possible auxiliary workloads that we contemplated adding to WebXPRT 4. However, as we’ve considered user comments and suggestions, changes in web technology, and how best to position WebXPRT as a relevant browser benchmark in the future, it became clear that it was time for an all-new WebXPRT.

Now that we’ve announced WebXPRT 5, the first question for many existing WebXPRT users may be, “When will WebXPRT 5 be available?” We’re not yet ready to share an anticipated WebXPRT 5 release date, but we can share that a lot of groundwork is already complete, and the remaining work is moving along rapidly. We’ll continue to issue updates here in the blog as we reach important milestones.

The second question for many existing WebXPRT users may be, “How will WebXPRT change?” We’re not yet ready to share extensive details about WebXPRT 5’s workloads—rest assured that we will as soon as we can firm up everything—but we can share a few key guidelines we tried to follow in our WebXPRT 5 design. Each of these points of emphasis is a result of feedback we’ve received from labs, as well as features that users have asked for.

  • Provide more AI-related workloads. In past blog posts, we’ve discussed the growing importance of local, browser-side AI. WebXPRT 4 already includes timed AI tasks in two of its workloads: the Organize Album using AI workload and the Encrypt Notes and OCR Scan workload. We’re working on ways to expand WebXPRT’s AI portfolio in the next version.
  • Add WebGPU workloads. As a web API, WebGPU enables web-based applications—such as image-based GenAI and inference workloads—to directly access the graphics rendering and computational capabilities of a system’s GPU. We hope to incorporate WebGPU measures in WebXPRT 5 (see the brief feature-detection sketch after this list).
  • Improve WebXPRT’s utility as a tool for test labs, publications, and engineering analysis.
    • Update the workloads with longer operations. Many of WebXPRT’s existing workloads no longer challenge cutting-edge consumer hardware as much as many of us would like. Testing labs have asked for longer and more demanding workloads. We’re working on incorporating workloads that are accessible enough to be run by a broad range of devices yet challenging enough to allow performance differentiation among high-end systems.
    • Enable more precise performance measures. Labs and testers have also asked for more granular insight into the workloads to help with engineering-level performance analysis. Currently, some WebXPRT 4 workload scores include multiple timed tasks. If we separate those compound scores so that each workload reports results from only one timed task, users will be able to more precisely assess how well a device performs while handling specific operations. We’re looking into this approach.
  • Modernize the harness to make it more flexible and to speed future work. WebXPRT 4’s current harness works with server-side sessions on a LAMP (Linux, Apache, MySQL and PHP) stack. If we implement the harness via JavaScript on the client side, it will pave the way for faster development and testing cycles in the future.
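
To make the WebGPU item above more concrete, here is a minimal feature-detection sketch showing how a browser workload might confirm WebGPU support before attempting a GPU-accelerated task. It uses only the standard navigator.gpu.requestAdapter() and adapter.requestDevice() calls; it is purely illustrative and is not WebXPRT code.

```js
// Illustrative only -- not WebXPRT source code.
// Checks whether the browser exposes WebGPU and, if so, acquires a device
// that a GPU-accelerated workload could use.
async function getWebGPUDevice() {
  if (!('gpu' in navigator)) {
    return null; // No WebGPU support; a workload could fall back to WebGL or Wasm.
  }
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return null; // WebGPU is present, but no suitable adapter is available.
  }
  // requestDevice() resolves to a GPUDevice used to create buffers,
  // pipelines, and command encoders for compute or rendering work.
  return adapter.requestDevice();
}

// Example usage: skip a hypothetical GPU workload when WebGPU is unavailable.
getWebGPUDevice().then((device) => {
  console.log(device ? 'WebGPU available; GPU workload can run.'
                     : 'WebGPU unavailable; skipping GPU workload.');
});
```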

We expect WebXPRT 5 to carry on the WebXPRT legacy of reliability and real-world relevance, while providing users with compelling new workloads and features. As has been our habit with new benchmark releases, however, we won’t force anyone to change versions anytime soon. Instead, we will continue to make WebXPRT 4 available for quite some time after WebXPRT 5 goes live.

If you have any questions or comments about WebXPRT, please let us know!

Justin

Multi-tab testing in a future version of WebXPRT?

In previous posts about our recommended best practices for producing consistent and reliable WebXPRT scores, we’ve emphasized the importance of “clean” testing. Clean testing involves minimizing the amount of background activity on a system during test runs to ensure stable test conditions. With stable test conditions, we can avoid common scenarios in which startup tasks, automatic updates, and other unpredictable processes contribute to high score variances and potentially unfair comparisons.

Clean testing is a vital part of accurate performance benchmarking, but it doesn’t always show us what kind of performance we can expect in typical everyday conditions. For example, while a browser performance test like WebXPRT can provide clean testing scores that serve as a valuable proxy for overall system performance, an entire WebXPRT test run involves only two open browser tabs. Most of us keep many more tabs open at any given time during the day, and those tabs—and any associated background services, extensions, plug-ins, or renderer processes—can consume CPU cycles and memory. Even with modern browser tab management and resource-saving features, a large number of open tabs can still have a noticeable impact on your computing experience.

To address this type of computing, we’ve been considering the possibility of adding one or more multi-tab testing features to a future version of WebXPRT. There are several ways we could do this, including the following options:

  • We could open each full workload cycle in a new tab, resulting in seven total tabs.
  • We could open each individual workload iteration in a new tab, resulting in 42 total tabs.
  • We could allow users to run multiple full tests back-to-back while keeping the tabs from the previous test(s) open.
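
If we pursued either of the first two options, one straightforward way to spin up the tabs would be for the harness page to open them programmatically. The sketch below is purely illustrative and is not WebXPRT harness code: the workload page name is a hypothetical placeholder, and browsers may block window.open calls that a user action did not trigger.

```js
// Illustrative sketch only -- not WebXPRT harness code.
// Opens one tab per workload iteration so the opener page can coordinate
// (or later close) the runs. With 7 iterations of 6 workloads, this yields 42 tabs.
const WORKLOADS = 6;   // workloads per full cycle
const ITERATIONS = 7;  // full cycles per test run

function openIterationTabs() {
  const tabs = [];
  for (let i = 0; i < ITERATIONS; i++) {
    for (let w = 0; w < WORKLOADS; w++) {
      // 'workload.html' is a hypothetical page name used for illustration.
      const tab = window.open(`workload.html?iteration=${i}&workload=${w}`, '_blank');
      if (tab) {
        tabs.push(tab);
      }
    }
  }
  return tabs;
}
```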

If we do decide to add multi-tab features to a future version of WebXPRT, we could integrate them into the main score, or we could make them optional so they don’t affect traditional WebXPRT testing. We’re looking at all of these options.

Whenever we’re weighing options like these, we seek your input: we want to know whether a feature like this is something you’d like to see. Below, you’ll find two quick survey questions that will help us gauge your interest in this topic. We would appreciate your feedback!

Would you be interested in using future WebXPRT multi-tab testing features?

How many browser tabs do you typically leave open at one time?

If you’d like to share additional thoughts or ideas related to possible multi-tab features, please let us know!

Justin

WebXPRT can help you choose the right back-to-school tech

For many students, the excitement and anticipation of a new school year are right around the corner! In addition to being an opportunity to dive into new subjects, meet new people, and make progress toward learning goals, the back-to-school season often provides students and teachers with a chance to shop for new technology to meet their needs in the coming year. The tech marketplace can be confusing, however, with a slew of brands, options, and claims competing for back-to-school dollars.

Never fear: WebXPRT can help!

Whether you’re shopping for a new phone, tablet, Chromebook, laptop, or desktop, WebXPRT can provide industry-trusted performance scores that can help give you confidence that you’re making a smart purchasing decision.

And in this age of AI, WebXPRT performance scores do account for specific AI tasks. The benchmark includes timed AI tasks in two workloads, which reflect the types of light browser-side inference tasks that are now quite common in consumer-oriented web applications and extensions. You can read more about that in previous blog entries on the “Organize Album using AI” and “Encrypt Notes and OCR Scan” workloads.

To see how devices stack up, the WebXPRT 4 results viewer is a good place to start. The viewer displays the WebXPRT 4 scores of over 975 devices—including many of the hottest new releases—and we’re adding more scores all the time. To learn more about the viewer’s capabilities and how you can use it to compare devices, check out this blog post.

Another way to find WebXPRT scores is to go directly to the tech press. If you’re considering a popular device, there’s a good chance that a recent tech review includes a WebXPRT score for that device. There are two quick ways to find these reviews: You can either (1) search for “WebXPRT” on a tech review site or (2) use a search engine and enter the device name and WebXPRT as search terms, such as “Lenovo ThinkPad X1 Carbon” and “WebXPRT.”

Here are a few recent articles and tech reviews that used WebXPRT:


If you’re excited about the opportunity to buy new tech for school, WebXPRT can provide you with the information you need to make more confident tech purchases. As this new school year begins, we hope you’ll find the data you need on our site or in an XPRT-related tech review. Feel free to contact us if you have any questions about WebXPRT or WebXPRT scores!

Justin

CrXPRT 2 functionality is ending with ChromeOS 139

Back in January, we discussed the ChromeOS team’s decision to eventually end support for all user-installed Chrome Apps—including CrXPRT 2—upon the release of Chrome 138 in July of this year. As best we can tell, the move is part of their overall strategy of transitioning all support to Chrome extensions and Progressive Web Apps. We knew that after the support cutoff date, we would not be able to publish any fixes or updates for CrXPRT 2, but we weren’t exactly sure how the transition would affect the app’s overall functionality.

We’ve now confirmed that while CrXPRT 2 still functions normally through Chrome 138.0.7204.255 (beta), the app does not launch at all on Chrome Canary 139. Consequently, we expect that stable channel system updates will disable CrXPRT 2 on most systems after Chrome 139 goes live on August 5th. We will initially leave CrXPRT 2 on our site for those who want to use it on older versions of Chrome, but over time we will archive it as an inactive benchmark.

We want to extend our heartfelt thanks to the many people around the world who used CrXPRT 2 for lab evaluations, product reviews, and individual testing over the past several years. We’re grateful for your support! We will update readers here in the blog if we decide to pursue new ChromeOS benchmark development work in the future.

Justin

Browser-based AI tests in WebXPRT 4: optical character recognition

In our previous blog post, we discussed the rapidly expanding influence of AI-enhanced technologies in areas like everyday browser activity—and the growing need for objective performance data that can help us understand how well new consumer devices will handle AI tasks. We noted that WebXPRT 4 already includes timed AI tasks in two of its workloads—the “Organize Album using AI” and “Encrypt Notes and OCR Scan”—and we provided some technical details for the Organize Album workload. In today’s post, we’ll focus on the Encrypt Notes workload.

The Encrypt Notes workload includes two separate scenarios that reflect common web-based productivity app tasks. The first scenario syncs a set of encrypted notes, and the second scenario uses AI-based optical character recognition (OCR) to scan a receipt, extract data, and then add that data to an expense report.

Here are the details for each scenario:

  • The encrypt notes scenario downloads a set of notes, encrypts that data, temporarily stores it in the browser’s localStorage object (the localStorageDB.js database layer), and then decrypts and renders it for display. This scenario measures HTML5 Local Storage, JavaScript, AES encryption, and WebAssembly (Wasm) performance. 
  • The OCR scan scenario uses a Wasm-based version of Tesseract.js (tesseract-core.wasm.js v2.20) to scan an expense receipt. Tesseract.js is a JavaScript port of the Tesseract OCR engine—a popular open-source C/C++ library that extracts text from images and PDFs. The scenario then adds the receipt to an expense report. This scenario measures HTML5 Local Storage, JavaScript, and Wasm performance. 
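
To give a feel for the kinds of operations these two scenarios time, here is a simplified sketch. It substitutes the standard Web Crypto API and plain localStorage for WebXPRT’s actual Wasm encryption code and localStorageDB.js layer, uses the public Tesseract.js recognize() call, and times both steps with performance.now(). The receipt image path is a placeholder, so treat the whole thing as an illustration of the scenario shapes rather than WebXPRT source.

```js
// Illustrative sketch only -- not WebXPRT source code.
// Scenario 1: encrypt a set of notes, stash the ciphertext in localStorage,
// then read it back and decrypt it (here with the standard Web Crypto API).
async function syncNotes(notesText) {
  const key = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv }, key, new TextEncoder().encode(notesText));

  // Store the encrypted notes as base64 text, then read them back and decrypt.
  localStorage.setItem('notes',
    btoa(String.fromCharCode(...new Uint8Array(ciphertext))));
  const stored = Uint8Array.from(atob(localStorage.getItem('notes')),
    (c) => c.charCodeAt(0));
  const plaintext = await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, stored);
  return new TextDecoder().decode(plaintext);
}

// Scenario 2: OCR-scan a receipt image with Tesseract.js.
// 'receipt.png' is a placeholder image path.
async function scanReceipt() {
  const { data: { text } } = await Tesseract.recognize('receipt.png', 'eng');
  return text; // extracted receipt text to add to an expense report
}

// Time both scenarios end to end, roughly mirroring how the workload
// measures the total time for the sync, scan, and report steps.
async function runEncryptNotesAndOcr() {
  const start = performance.now();
  await syncNotes('sample note text');
  await scanReceipt();
  return performance.now() - start; // elapsed milliseconds
}
```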

We group this test under the AI umbrella in part because people sometimes use the term “OCR” to refer to a spectrum of AI and non-AI technologies. In this case, though, the workload clearly has an AI component. The Wasm-based Tesseract build that WebXPRT 4 uses derives from version 4.x of the C/C++ Tesseract engine, which performs text recognition with Long Short-Term Memory (LSTM) networks. LSTM is a type of recurrent neural network (RNN) that is particularly well suited for processing and predicting sequential data, and that LSTM-based recognition is the AI portion of the Encrypt Notes and OCR Scan workload.

To produce a score for each iteration of the workload, WebXPRT calculates the total time that it takes for a system to sync (encrypt, decrypt, and render) the notes, use OCR to scan the receipt, and add the scanned data to an expense report. In a standard test, WebXPRT runs seven iterations of the entire six-workload performance suite before calculating an overall test score. You can find out more about the WebXPRT results calculation process here.

Along with our post on the Organize Album workload, we hope this information provides a deeper understanding of WebXPRT 4’s AI-equipped workloads. As we mentioned last time, if you want to explore the structure of these workloads in more detail, you can check out previous blog posts for information about how to access and use the WebXPRT 4 source code for free. You can also read more about WebXPRT’s overall structure and other workloads in the Exploring WebXPRT 4 white paper.

If you have any questions about WebXPRT 4, please let us know!

Justin

Browser-based AI tests in WebXPRT 4: face detection and image classification

I recently revisited an XPRT blog entry that we posted from CES Las Vegas back in 2020. In that post, I reflected on the show’s expanded AI emphasis, and I wondered if we were reaching a tipping point where AI-enhanced and AI-driven tools and applications would become a significant presence in people’s daily lives. It felt like we were approaching that point back then with the prevalence of AI-powered features such as image enhancement and text recommendation, among many others. Now, seamless AI integration with common online tasks has become so widespread that many people unknowingly benefit from AI interactions several times a day.

As AI’s role in areas like everyday browser activity continues to grow—along with our expectations for what our consumer devices should be able to handle—reliable AI-oriented benchmarking is more vital than ever. We need objective performance data that can help us understand how well a new desktop, laptop, tablet, or phone will handle AI tasks.

WebXPRT 4 already includes timed AI tasks in two of its workloads: the “Organize Album using AI” workload and the “Encrypt Notes and OCR Scan” workload. These two workloads reflect the types of light browser-side inference tasks that are now fairly common in consumer-oriented web apps and extensions. In today’s post, we’ll provide some technical information about the Organize Album workload. In a future post, we’ll do the same for the Encrypt Notes workload.

The Organize Album workload includes two different timed tasks that reflect a common scenario of organizing online photo albums. The workload utilizes the AI inference and JavaScript capabilities of the WebAssembly (Wasm) version of OpenCV.js—an open-source computer vision and machine learning library. In WebXPRT 4, we used OpenCV.js version 4.5.2.

Here are the details for each task:

  • The first task measures the time it takes to complete a face detection job with a set of five 720 x 480 photos that we sourced from commercial photo sites. The workload loads a Caffe deep learning framework model (res10_300x300_ssd_iter_140000_fp16.caffemodel) using the commands found here.
  • The second task measures the time it takes to complete an image classification job (labeling based on object detection) with a different set of five 718 x 480 photos that we sourced from the ImageNet computer vision dataset. The workload loads an ONNX-based SqueezeNet machine learning model (squeezenet.onnx v1.0) using the commands found here.
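
As a rough sketch of what the face-detection task looks like at the API level, the snippet below uses OpenCV.js’s dnn module to load a Caffe model and run a forward pass on a photo. It assumes an opencv.js build that includes the dnn module and that the model files have already been written into the library’s virtual filesystem; the prototxt file name, input size, and mean values are typical for the res10 SSD face detector rather than taken from WebXPRT’s source.

```js
// Illustrative sketch only -- not WebXPRT source code.
// Assumes opencv.js (built with the dnn module) is loaded and the model files
// have been written into its virtual filesystem (for example, with cv.FS_createDataFile).
function detectFaces(imageElementId) {
  // Load the Caffe face-detection model (the prototxt name is a placeholder).
  const net = cv.readNetFromCaffe('deploy.prototxt',
                                  'res10_300x300_ssd_iter_140000_fp16.caffemodel');

  // Read the photo into a matrix and build a 300x300 input blob.
  const img = cv.imread(imageElementId);
  const blob = cv.blobFromImage(img, 1.0, new cv.Size(300, 300),
                                new cv.Scalar(104, 177, 123), false, false);

  // Run inference; the output matrix holds candidate detections and confidences,
  // which a real workload would parse and count.
  net.setInput(blob);
  const detections = net.forward();

  // Free the Wasm-allocated objects.
  img.delete(); blob.delete(); detections.delete(); net.delete();
}
```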

To produce a score for each iteration of the workload, WebXPRT calculates the total time that it takes for a system to organize both albums. In a standard test, WebXPRT runs seven iterations of the entire six-workload performance suite before calculating an overall test score. You can find out more about the WebXPRT results calculation process here.

We hope this post will give you a better sense of how WebXPRT 4 measures one kind of AI performance. As a reminder, if you want to dig into the details at a more granular level, you can access the WebXPRT 4 source code for free. In previous blog posts, you can find information about how to access and use the code. You can also read more about WebXPRT’s overall structure and other workloads in the Exploring WebXPRT 4 white paper.

If you have any questions about this workload or any other aspect of WebXPRT 4, please let us know!

Justin
