
Get your WebXPRT 5 Preview results on WebXPRT.com: how to submit them

The WebXPRT 5 Preview has been available for only a few weeks, but users have already started submitting test results for us to review for publication in the WebXPRT 5 Preview results viewer. We’re excited to receive those submissions, but we know that some of our readers are either new to WebXPRT or may never have submitted a test result. In today’s post, we’ll cover the straightforward process of submitting your WebXPRT 5 Preview test results for publication in the viewer.

Unlike sites that automatically publish all results submissions, we publish only results that meet a set of evaluation criteria. Those results can come from OEM labs, third-party labs, tech media sources, or independent user submissions. Scores must be consistent with general expectations, and submissions from sources outside of our labs and data centers must include enough detailed system information for us to determine whether the score makes sense. That said, if your scores differ from what you see in our database, please don’t hesitate to send them to us; we may be able to work out the discrepancy together.

The actual result submission process is simple. On the end-of-test results page that displays after a test run, click the Submit your results button below the overall score. Then, complete the short submission form that pops up, and click Submit.

When filling in the system information fields in the submission form, please be as specific as possible. Detailed device information helps us assess whether individual scores represent valid test runs.

That’s all there is to it!

Figure 1 below shows the end-of-test results screen and the Submit your results button below the overall score.

Figure 1: A screenshot of the WebXPRT 5 Preview end-of-test results screen, showing the Submit your results button below the overall score.

Figure 2 below shows the results submission form as it looks with the necessary information filled in, using a score from a recent WebXPRT 5 Preview run on one of the systems here in our lab.

Figure 2: A screenshot of the WebXPRT 5 Preview results submission pop-up window after filling in the email address and system information fields.

After you submit your test result, we’ll review the information. If the test result meets the evaluation criteria, we’ll contact you to confirm how we should display its source in our database. For that purpose, you can choose one of the following:

  • Your first and last name
  • “Independent tester” (if you wish to remain anonymous)
  • Your company’s name, if you have permission to submit the result under that name. If you want to use a company name, please provide a valid corresponding company email address.

As always, we will not publish any additional information about you or your company without your permission.

We look forward to seeing your scores! If you have questions about WebXPRT 5 Preview testing or results submission—or you’d like to share feedback on WebXPRT 5—please let us know!

Justin

The WebXPRT 5 Preview is here!

We’re excited to announce that the WebXPRT 5 Preview is now available!

The Preview is available to everyone. You can access it at www.WebXPRT5.com or through a link on WebXPRT.com.

You are free to publish scores from testing with this Preview build; in fact, we encourage it. We want to know how it performs for you, so we’d love to see both your test scores and any feedback you’d like to share.

We may still tweak a few things in the benchmark between this Preview and the final release. However, we plan to limit any changes to areas such as the UI and other features that we do not expect to affect test scores.

Longtime WebXPRT users will notice that while the WebXPRT 5 Preview UI has a new look and feel, the basic layout has not changed. The general process for kicking off both manual and automated tests is the same as with WebXPRT 4, so the transition from WebXPRT 4 to WebXPRT 5 testing should be straightforward.
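
For testers who script their runs, a browser-automation tool can kick off the WebXPRT 5 Preview much as it could WebXPRT 4. Below is a minimal sketch using Playwright; it is our own illustration, not an official WebXPRT harness, and the start-button selector and score-element ID are assumptions you would need to verify against the live page.

    // Hypothetical sketch: start a WebXPRT 5 Preview run with Playwright.
    // The 'text=Start' and '#overall-score' selectors are assumptions,
    // not the page's actual markup -- inspect the page to confirm them.
    const { chromium } = require('playwright');

    (async () => {
      const browser = await chromium.launch({ headless: false });
      const page = await browser.newPage();
      await page.goto('https://www.webxprt5.com');

      // Kick off the run (assumed selector).
      await page.click('text=Start');

      // A full run takes several minutes, so wait generously for the
      // assumed results element, then read the overall score.
      const score = await page.textContent('#overall-score', {
        timeout: 30 * 60 * 1000,
      });
      console.log('Overall score:', score.trim());

      await browser.close();
    })();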

We also encourage you to check out our recent XPRT blog post on the WebXPRT 5 workload lineup for more details about what’s new in the Preview release—including more AI-oriented scenarios than ever before!

After you try the WebXPRT 5 Preview, please send us your comments. Thanks, and happy testing!

Justin

WebXPRT 5: The workload lineup

The WebXPRT 5 development process is heading into the final stretch, so we’d like to share more information about the workloads you’re likely to see in the WebXPRT 5 Preview release—and when that release may be available. We’re still actively testing candidate builds and studying results from multiple systems, so some details could change. That said, we’re now close enough to provide a clearer picture of the workload lineup.

Core workloads

WebXPRT 5 will likely include the following seven workloads:  

  • Video Background Blur with AI. Blurs the background of a video call using an AI-powered segmentation model.
  • Photo Effects. Applies a filter to six photos using the Canvas API.
  • Detect Faces with AI. Detects faces and organizes photos in an album using computer vision (OpenCV.js with a Caffe model).
  • Image Classification with AI. Labels images in an album using machine learning (OpenCV.js and ML Classify with the SqueezeNet model).
  • Document Scan with AI. Scans a document image and converts it to text using ML-based OCR (Wasm with LSTM).
  • School Science Project. Processes a DNA sequencing task using regex and string manipulation (see the toy sketch below).
  • Homework Spellcheck. Spellchecks a document using Typo.js and Web Workers.

The sub-scores for each of these tests will contribute to WebXPRT 5’s main overall score. (We’ll discuss scoring in future blogs.)
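
To give a flavor of the string-processing work in the School Science Project workload, here is a toy regex-based DNA task. It is our illustration only, not WebXPRT's actual workload code.

    // Toy illustration only -- not WebXPRT's actual workload code.
    // Counts occurrences of a DNA motif, including overlapping matches.
    function countMotif(sequence, motif) {
      // A lookahead match is zero-width, so overlapping motifs all count.
      const pattern = new RegExp(`(?=${motif})`, 'g');
      return (sequence.match(pattern) || []).length;
    }

    const dna = 'ATGCGATATATACGT';
    console.log(countMotif(dna, 'ATA')); // 3 overlapping matches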

Experimental workloads

We’re currently planning to include an experimental workload section, something we’ve long discussed, in WebXPRT 5. Workloads in this section will use cutting-edge browser technologies that may not be compatible with the same broad range of platforms and devices as the technologies in WebXPRT 5’s core workloads. For that reason, in both the Preview build and future releases, we will not include scores from the experimental section in WebXPRT 5’s main overall score.

In addition, WebXPRT 5’s experimental workloads will be completely optional.

Moving forward, WebXPRT’s experimental workload section will provide users with a straightforward way to learn how well certain browsers or systems handle new browser-based technologies (e.g., new web apps or AI capabilities). We’ll benefit from the ability to offer workloads for large-scale testing and user feedback before committing to including them as core WebXPRT workloads. Because future experimental workloads will run independently of the main test, we can add them without affecting the main WebXPRT score or requiring users to repeat testing to obtain comparable scores. We think it will be a win-win scenario in many respects.  
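
Because experimental workloads will depend on features that not every browser or platform exposes, a quick capability check can be useful before comparing results across systems. The probe below uses standard web APIs; the snippet itself is our illustration, not part of WebXPRT.

    // A quick capability probe you can paste into the browser console.
    const support = {
      webgpu: 'gpu' in navigator,                    // WebGPU
      wasm: typeof WebAssembly === 'object',         // WebAssembly
      workers: typeof Worker === 'function',         // Web Workers
      offscreenCanvas: typeof OffscreenCanvas === 'function',
    };
    console.table(support);

    // WebGPU can be present yet still fail to provide an adapter on
    // some hardware, so a runtime check beats the property check alone.
    if (support.webgpu) {
      navigator.gpu.requestAdapter().then((adapter) => {
        console.log('WebGPU adapter available:', adapter !== null);
      });
    }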

We’re still evaluating whether we can finish the first experimental workload in time to include it in the WebXPRT 5 Preview release. At a minimum, though, the Preview will include the experimental section itself and the framework for adding such workloads. When an experimental workload is ready to go, we’ll share more information here in the blog.

Timeline

If all goes well, we hope to publish the WebXPRT 5 Preview very soon, followed by a general release in early 2026. If that timeline changes significantly, we’ll provide an update here in the blog as soon as possible.

What about an “AI score”?

We’re still discussing the concept of a stand-alone WebXPRT 5 “AI score,” and we continue to go back and forth on it. That score would combine WebXPRT’s AI-related subscores into a single score for use in AI capability comparisons. Because we’re just now expanding WebXPRT’s AI capabilities, we’ve decided not to include an AI score in this release. We would love your feedback on the concept as we plan WebXPRT’s future, so if an AI score is something you’d be interested in, please let us know!
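
For context, one common way benchmarks fold several subscores into a single figure is a geometric mean, which weights proportional changes in each subscore equally. To be clear, WebXPRT's actual scoring math is a topic for future posts; the sketch below is a generic illustration, not WebXPRT's published method.

    // Generic illustration of combining subscores with a geometric mean.
    // This is NOT WebXPRT's published scoring method.
    function geometricMean(subscores) {
      const logSum = subscores.reduce((sum, x) => sum + Math.log(x), 0);
      return Math.exp(logSum / subscores.length);
    }

    console.log(geometricMean([150, 200, 180]).toFixed(1)); // about 175.4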

If you have any questions about the WebXPRT 5 details we’ve shared above, please feel free to ask!

Justin

WebXPRT 5: Starting to assemble the pieces

In our last blog post, we shared the exciting news that we’re currently working on WebXPRT 5. In that post, we described some of the ways that WebXPRT may evolve with the release of WebXPRT 5. In today’s post, we’ll revisit some of the points of emphasis from the last post and focus on potential workload changes in a bit more detail.

With any benchmark development project, there are always technical challenges you need to iron out. That is especially true with a cross-platform, browser-based benchmark like WebXPRT. Because we’re in the middle of exploring the technical feasibility of a few of the options we’ll mention, we’re not yet ready to say for certain that all these features will be available in the initial WebXPRT 5 release. We can, however, now paint a clearer picture of the overall direction we’re headed.

In the section below, you’ll find updated info on where we stand with respect to some of the key development focal points we discussed in our last post. If there’s an item from that post or previous posts that we didn’t mention below—such as updating the test harness—it doesn’t mean that we’re dropping that goal. We’re just focusing on workloads today.

One of our key goals with WebXPRT 5 is providing more AI-related workloads. In past blog posts, we’ve discussed the growing importance of local, browser-side AI. With WebXPRT 5, we’re investigating two ways that we can expand WebXPRT’s AI portfolio: 1) updating existing WebXPRT 4 AI-oriented workloads, and 2) adding all-new AI workloads.

Here are some possible ways those AI-related changes may play out in both categories:

Updating existing WebXPRT 4 AI-oriented workloads

  • Splitting the existing Organize Album using AI workload’s timed tasks—face detection and image classification—into two independent workloads.
  • Updating the face detection and image classification tasks with the latest versions of the OpenCV.js computer vision and machine learning libraries.
  • Updating the Caffe deep learning framework for the face detection task.
  • Updating the ONNX-based SqueezeNet machine learning model for the image classification tasks.
  • Updating the version of the Tesseract.js OCR engine that WebXPRT uses in the Encrypt Notes and OCR Scan workload (see the usage sketch after this list).
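
To show what that engine looks like in use, here is a minimal call using Tesseract.js's current worker-based (v5-style) API. This is our illustration rather than WebXPRT's internal code, and 'sample-note.png' is a placeholder image path.

    // Minimal Tesseract.js usage -- our illustration, not WebXPRT's code.
    // 'sample-note.png' is a placeholder image path.
    const { createWorker } = require('tesseract.js');

    (async () => {
      const worker = await createWorker('eng');  // load the English model
      const { data } = await worker.recognize('sample-note.png');
      console.log(data.text);                    // recognized text
      await worker.terminate();                  // free the Wasm worker
    })();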

Potentially adding all-new AI workloads (either core or experimental workloads)

  • We’re exploring the idea of including a workload that uses an AI-powered segmentation model to blur the background of a video call.
  • We’re exploring the feasibility of including a local LLM chat workload.
  • We would eventually like to include a WebGPU-based web AI framework for a computer vision workload.

In addition to the goal of adding more AI, we previously discussed the possibility of adding non-AI WebGPU workloads. As a web API, WebGPU enables web-based applications—such as image-based GenAI and inference workloads—to directly access the graphics rendering and computational capabilities of a system’s GPU. In the future, WebXPRT 5 could use that technology to execute complex 3D rendering workloads.
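
To make that concrete, here is a minimal WebGPU compute pass that doubles an array of floats on the GPU and reads the result back. It uses the standard WebGPU API, but it is our own sketch of browser-side GPU compute, not a WebXPRT workload, and it requires a browser and device with WebGPU enabled.

    // Minimal WebGPU compute sketch -- our illustration, not WebXPRT code.
    // Doubles an array of floats on the GPU and reads the result back.
    async function doubleOnGpu(values) {
      const adapter = await navigator.gpu.requestAdapter();
      const device = await adapter.requestDevice();

      // WGSL compute shader: each invocation doubles one element.
      const shader = device.createShaderModule({
        code: `
          @group(0) @binding(0) var<storage, read_write> data: array<f32>;
          @compute @workgroup_size(64)
          fn main(@builtin(global_invocation_id) id: vec3u) {
            if (id.x < arrayLength(&data)) { data[id.x] = data[id.x] * 2.0; }
          }`,
      });

      const input = new Float32Array(values);
      const buffer = device.createBuffer({
        size: input.byteLength,
        usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
      });
      device.queue.writeBuffer(buffer, 0, input);

      // Staging buffer for reading results back to the CPU.
      const readback = device.createBuffer({
        size: input.byteLength,
        usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
      });

      const pipeline = device.createComputePipeline({
        layout: 'auto',
        compute: { module: shader, entryPoint: 'main' },
      });
      const bindGroup = device.createBindGroup({
        layout: pipeline.getBindGroupLayout(0),
        entries: [{ binding: 0, resource: { buffer } }],
      });

      const encoder = device.createCommandEncoder();
      const pass = encoder.beginComputePass();
      pass.setPipeline(pipeline);
      pass.setBindGroup(0, bindGroup);
      pass.dispatchWorkgroups(Math.ceil(input.length / 64));
      pass.end();
      encoder.copyBufferToBuffer(buffer, 0, readback, 0, input.byteLength);
      device.queue.submit([encoder.finish()]);

      // Map the staging buffer, copy the results out, then unmap.
      await readback.mapAsync(GPUMapMode.READ);
      const result = Array.from(new Float32Array(readback.getMappedRange()));
      readback.unmap();
      return result;
    }

    doubleOnGpu([1, 2, 3, 4]).then(console.log); // [2, 4, 6, 8]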

We hope today’s post gives you a better sense of where WebXPRT 5 may be headed. We want to reemphasize that while we are actively investigating the possible changes mentioned above, nothing is set in stone. As the pieces start to fall into place, we’ll provide more information here in the blog.

If you have any questions or comments about WebXPRT 5, please feel free to contact us!

Justin

CrXPRT 2 functionality is ending with ChromeOS 139

Back in January, we discussed the ChromeOS team’s decision to eventually end support for all user-installed Chrome Apps—including CrXPRT 2—upon the release of Chrome 138 in July of this year. As best we can tell, the move is part of their overall strategy of transitioning all support to Chrome extensions and Progressive Web Apps. We knew that after the support cutoff date, we would not be able to publish any fixes or updates for CrXPRT 2, but we weren’t exactly sure how the transition would affect the app’s overall functionality.

We’ve now confirmed that while CrXPRT 2 still functions normally through Chrome 138.0.7204.255 (beta), the app does not launch at all on Chrome Canary 139. Consequently, we expect that stable channel system updates will disable CrXPRT 2 on most systems after Chrome 139 goes live on August 5th. We will initially leave CrXPRT 2 on our site for those who want to use it on older versions of Chrome, but over time we will archive it as an inactive benchmark.

We want to extend our heartfelt thanks to the many people around the world who used CrXPRT 2 for lab evaluations, product reviews, and individual testing over the past several years. We’re grateful for your support! We will update readers here in the blog if we decide to pursue new ChromeOS benchmark development work in the future.

Justin

Best practices for WebXPRT testing

One of the strengths of WebXPRT is that it’s a remarkably easy benchmark to run. Its upfront simplicity attracts users with a wide range of technical skills—everyone from engineers in cutting-edge OEM labs to veteran tech journalists to everyday folks who simply want to test their gear’s browser performance. With so many different kinds of people running the test each day, it’s certain that at least some of them use very different approaches to testing. In today’s blog, we’re going to share some of the key benchmarking practices we follow in the XPRT lab—and encourage you to consider—in order to produce the most consistent and reliable WebXPRT scores.

We offer these best practices as tips you might find useful in your testing. Each step relates to evaluating browser performance with WebXPRT, but several of these practices will apply to other benchmarks as well.

  • Test with clean images: In the XPRT lab, we typically use an out-of-box (OOB) method for testing new devices. OOB testing means that other than running the initial OS and browser updates that users are likely to run after first turning on the device, we change as little as possible before testing. This approach gives the most accurate picture of the performance retail buyers will experience when they first purchase the device, before they install additional software. That said, the OOB method is not appropriate for certain types of testing, such as when you want to compare largely identical systems or to remove as much pre-loaded software as possible, and it is less relevant if you want to see how a device performs as currently configured.
  • Browser updates can have a significant impact: Most people know that different browsers often produce different performance scores on the same system. Fewer people know that performance can also shift between versions of the same browser. While most browser updates don’t have a large impact on performance, a few have increased (or even decreased) browser performance by a significant amount. For this reason, it’s important to record and disclose the extended browser version number for each test run; the same principle applies to any other relevant software. (For a quick way to capture that version number, see the sketch after this list.)
  • Turn off automatic updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, so always double-check update settings before testing. On Windows systems, we also turn off User Account Control notifications before testing for similar reasons.
  • Let the system settle: Depending on the system and the OS, a significant amount of system-level activity can go on in the background after you first boot the system. As much as possible, we like to wait for that activity to settle to a stable, idle baseline before kicking off a test. If we start testing immediately after booting, we often see higher variance in the first run before the scores start to tighten up.
  • Run the test more than once: Because of natural variance, our standard practice in the XPRT lab is to publish a score that represents the median of three to five runs, if not more. If you run a benchmark only once and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions or over the course of multiple runs.
  • Clear the cache: Browser caching can improve web page performance, including the loading of the types of JavaScript and HTML5 assets that WebXPRT uses in its workloads. Depending on the platform under test, browser caching may or may not significantly change WebXPRT scores, but clearing the cache before testing and between each run can help improve the accuracy and consistency of scores.
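
As a companion to the version-logging tip above, the console snippet below captures the extended browser version. The User-Agent Client Hints API (navigator.userAgentData) is currently Chromium-only, so the sketch falls back to the user-agent string in other browsers.

    // Log the full browser version for your test records.
    // navigator.userAgentData is currently Chromium-only, so fall back
    // to the user-agent string elsewhere.
    if (navigator.userAgentData) {
      navigator.userAgentData
        .getHighEntropyValues(['fullVersionList'])
        .then(({ fullVersionList }) => {
          fullVersionList.forEach(({ brand, version }) =>
            console.log(`${brand}: ${version}`));
        });
    } else {
      console.log(navigator.userAgent); // includes, e.g., "Firefox/128.0"
    }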

We hope these tips will serve as a good baseline methodology for your WebXPRT testing. If you have any questions about WebXPRT, the other XPRTs, or benchmarking in general, please let us know!

Justin
