
The WebXPRT 5 source code is now available!

We’re happy to announce that the WebXPRT 5 source code is now available! We’re offering the source code in the form of a build package that contains all the necessary files and step-by-step instructions for setting up a locally hosted version of WebXPRT 5. While you’re free to use the code for purposes of review, internal testing, or experimentation, we do ask that you publish only test results from the official version of WebXPRT 5 that we host at WebXPRT.com.

We’re offering the build package upon request, rather than posting a permanent download link, to prevent bots or other malicious actors from downloading it. This method also lets us engage with folks who are interested in the source code and answer any questions they may have.

To request the code, simply click the “Request WebXPRT 5 source code” link in the gray Helpful Info box on the WebXPRT 5 home page (see Figure 1 below). Clicking the link will allow you to email the BenchmarkXPRT Support team directly and request the code.

Figure 1: A screenshot showing the location of the link to request WebXPRT 5 source code on WebXPRT.com

After we receive your request, we’ll send you a secure link to the current WebXPRT 5 build package.

If you have any questions about accessing the WebXPRT 5 source code, let us know!

Justin

WebXPRT 5 source code access is on the way

Recently, a member of the tech press asked us if we were planning to offer a way for users to set up an offline version of WebXPRT 5 for locally hosted tests. The short answer is “yes.”

The long answer is that the question provides us with a good opportunity to talk about XPRT source code access and let new users know how it works.

Since the early days of the BenchmarkXPRT Development Community, we’ve provided free access to the benchmark source code. We believe that by publishing XPRT code and allowing interested parties to access and review that code, we’re doing our part to encourage transparency and honesty in the benchmarking industry.

While we offer free access to the XPRT source code, our approach to derivative work differs from some traditional open-source models that encourage developers to alter products and even take them in substantially different directions. Because benchmarking requires a product that remains static to enable valid comparisons, we prioritize maintaining the integrity and consistency of the benchmark over time. So, we allow people to download the source, but we also reserve the right to control derivative works. This approach discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

For WebXPRT 5, we’ll offer the code in the form of a build package—containing all the necessary files and instructions—that will be available upon request. By offering the code upon request, as opposed to posting a permanent download link, we can prevent bots or other malicious actors from downloading it. This method also lets us engage with users interested in the source code and answer their questions.

With the WebXPRT 5 build package, you’ll be able to set up your own WebXPRT 5 instance for purposes of review, internal testing, or experimentation. We do ask that you publish only test results from the official version of WebXPRT 5 that we host at WebXPRT.com.

We expect to have the build package ready within the next few weeks. When it’s available, we’ll let readers know here in the blog, and we’ll provide more details about the access and setup process.

If you have any questions about accessing the WebXPRT 5 source code, please let us know!

Justin

WebXPRT 5 is live!

The big day has finally arrived—WebXPRT 5 is now available!

You can access the benchmark at WebXPRT.com or WebXPRT5.com. For longtime WebXPRT users, the WebXPRT 5 UI will have an all-new look but a very familiar feel. The general process for kicking off both manual and automated tests is the same as with WebXPRT 4, so the transition to WebXPRT 5 testing will be straightforward. For legacy testing purposes, we will continue to make WebXPRT 4 available on our site.

Here is a quick overview of the differences between WebXPRT 4 and WebXPRT 5:

General changes

  • We’ve updated the aesthetics of the WebXPRT UI to make WebXPRT 5 visually distinct from older versions. We did not significantly change the flow of the UI.
  • We’ve updated content in some of the workloads to reflect changes in everyday technology, such as upgrading most of the photos in the photo processing workloads to higher resolutions.
  • We’ve updated the base calibration system for score calculations and adjusted the scoring scale. WebXPRT 5 scores will be in a lower numerical range than WebXPRT 4 scores. You should not compare these results to scores from previous versions of WebXPRT.
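The post doesn’t publish WebXPRT 5’s actual scoring formula, but calibration-based benchmark scoring commonly follows a standard pattern: normalize each workload time against a reference (calibration) system’s time, combine the normalized ratios with a geometric mean, and scale the result. The sketch below illustrates only that general pattern; the function name, inputs, and scale factor are hypothetical, not WebXPRT’s.

```javascript
// Hypothetical sketch of calibration-based scoring (not WebXPRT's actual
// formula): normalize each workload time against the calibration system's
// time, combine the ratios with a geometric mean, then scale.
function overallScore(workloadTimes, calibrationTimes, scale = 100) {
  // A ratio above 1 means the tested system beat the calibration system.
  const ratios = workloadTimes.map((t, i) => calibrationTimes[i] / t);
  const meanLog =
    ratios.reduce((sum, r) => sum + Math.log(r), 0) / ratios.length;
  return Math.round(Math.exp(meanLog) * scale);
}

// A system that exactly matches the calibration machine scores `scale`.
console.log(overallScore([2, 4, 8], [2, 4, 8])); // 100
```

Under a scheme like this, changing the calibration system or the scale factor shifts every score’s numerical range, which is why scores from different WebXPRT versions aren’t comparable.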

The workloads

WebXPRT 5 includes the following seven workloads:

  • Video Background Blur with AI. Blurs the background of a video call using an AI-powered segmentation model.
  • Photo Effects. Applies a filter to six photos using the Canvas API.
  • Detect Faces with AI. Detects faces and organizes photos in an album using computer vision (OpenCV.js with Caffe Model).
  • Image Classification with AI. Labels images in an album using machine learning (OpenCV.js and ML Classify with the SqueezeNet model).
  • Document Scan with AI. Scans a document image and converts it to text using ML-based OCR (Wasm with LSTM).
  • School Science Project. Processes a DNA sequencing task using Regex and String manipulation.
  • Homework Spellcheck. Spellchecks a document using Typo.js and Web Workers.
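To give a feel for the kind of string work the School Science Project workload exercises, here’s a minimal regex-based sketch. The sequence, motif, and function are illustrative only; WebXPRT’s actual DNA-sequencing task and data differ.

```javascript
// Minimal sketch of regex-based DNA string processing, in the spirit of the
// School Science Project workload. Illustrative data, not WebXPRT's.
function countMotif(sequence, motif) {
  // Count non-overlapping occurrences of the motif in the sequence.
  const matches = sequence.match(new RegExp(motif, "g"));
  return matches ? matches.length : 0;
}

console.log(countMotif("ATGCGATACGATGGATGATG", "ATG")); // 4
```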

We’re thankful for all of the feedback we received during the WebXPRT 5 development process and Preview period, and we look forward to seeing your WebXPRT 5 results. If you have any questions about WebXPRT, please feel free to contact us!

Justin

Get your WebXPRT 5 Preview results on WebXPRT.com: how to submit them

The WebXPRT 5 Preview has been available for only a few weeks, but users have already started submitting test results for us to review for publication in the WebXPRT 5 Preview results viewer. We’re excited to receive those submissions, but we know that some of our readers are either new to WebXPRT or may never have submitted a test result. In today’s post, we’ll cover the straightforward process of submitting your WebXPRT 5 Preview test results for publication in the viewer.

Unlike sites that automatically publish all results submissions, we publish only results that meet a set of evaluation criteria. Those results can come from OEM labs, third-party labs, tech media sources, or independent user submissions. What’s important to us is that the scores must be consistent with general expectations, and for sources outside of our labs and data centers, each must include enough detailed system information that we can determine whether the score makes sense. That said, if your scores are different from what you see in our database, please don’t hesitate to send them to us; we may be able to work it out together.

The actual result submission process is simple. On the end-of-test results page that displays after a test run, click the Submit your results button below the overall score. Then, complete the short submission form that pops up, and click Submit.

When filling in the system information fields in the submission form, please be as specific as possible. Detailed device information helps us assess whether individual scores represent valid test runs.

That’s all there is to it!

Figure 1 below shows the end-of-test results screen and the Submit your results button below the overall score.

Figure 1: A screenshot of the WebXPRT 5 Preview end-of-test results screen, showing the Submit your results button below the overall score.

Figure 2 below shows the results submission form, filled in with the necessary information for a score from a recent WebXPRT 5 Preview run on one of the systems here in our lab.

Figure 2: A screenshot of the WebXPRT 5 Preview results submission pop-up window after filling in the email address and system information fields.

After you submit your test result, we’ll review the information. If the test result meets the evaluation criteria, we’ll contact you to confirm how we should display its source in our database. For that purpose, you can choose one of the following:

  • Your first and last name
  • “Independent tester” (if you wish to remain anonymous)
  • Your company’s name, if you have permission to submit the result under that name. If you want to use a company name, please provide a valid corresponding company email address.

As always, we will not publish any additional information about you or your company without your permission.

We look forward to seeing your scores! If you have questions about WebXPRT 5 Preview testing or results submission—or you’d like to share feedback on WebXPRT 5—please let us know!

Justin

WebXPRT 5: The workload lineup

The WebXPRT 5 development process is heading into the final stretch, so we’d like to share more information about the workloads you’re likely to see in the WebXPRT 5 Preview release—and when that release may be available. We’re still actively testing candidate builds, studying results from multiple system tests, and so on, so some details could change. That said, we’re now close enough to provide a clearer picture of the workload lineup.

Core workloads

WebXPRT 5 will likely include the following seven workloads:  

  • Video Background Blur with AI. Blurs the background of a video call using an AI-powered segmentation model.
  • Photo Effects. Applies a filter to six photos using the Canvas API.
  • Detect Faces with AI. Detects faces and organizes photos in an album using computer vision (OpenCV.js with Caffe Model).
  • Image Classification with AI. Labels images in an album using machine learning (OpenCV.js and ML Classify with the SqueezeNet model).
  • Document Scan with AI. Scans a document image and converts it to text using ML-based OCR (Wasm with LSTM).
  • School Science Project. Processes a DNA sequencing task using Regex and String manipulation.
  • Homework Spellcheck. Spellchecks a document using Typo.js and Web Workers.

The sub-scores for each of these tests will contribute to WebXPRT 5’s main overall score. (We’ll discuss scoring in future blogs.)

Experimental workloads

We’re currently planning to include an experimental workload section, something we’ve long discussed, in WebXPRT 5. Workloads in this section will use cutting-edge browser technologies that may not be compatible with the same broad range of platforms and devices as the technologies in WebXPRT 5’s core workloads. For that reason, in both the Preview build and future releases, we will not include scores from the experimental section in WebXPRT 5’s main overall score.

In addition, WebXPRT 5’s experimental workloads will be completely optional.

Moving forward, WebXPRT’s experimental workload section will provide users with a straightforward way to learn how well certain browsers or systems handle new browser-based technologies (e.g., new web apps or AI capabilities). We’ll benefit from the ability to offer workloads for large-scale testing and user feedback before committing to including them as core WebXPRT workloads. Because future experimental workloads will run independently of the main test, we can add them without affecting the main WebXPRT score or requiring users to repeat testing to obtain comparable scores. We think it will be a win-win scenario in many respects.  

We’re still evaluating whether we can finish the first experimental workload in time to include it in the WebXPRT 5 Preview release, but the Preview will at least include the experimental section and the framework for adding workloads to it. When we’re confident that an experimental workload is ready to go, we’ll share more information here in the blog.

Timeline

If all goes well, we hope to publish the WebXPRT 5 Preview very soon, followed by a general release in early 2026. If that timeline changes significantly, we’ll provide an update here in the blog as soon as possible.

What about an “AI score”?

We’re still discussing the concept of a stand-alone WebXPRT 5 “AI score,” and we continue to go back and forth on it. That score would combine WebXPRT’s AI-related subscores into a single score for use in AI capability comparisons. Because we’re just now expanding WebXPRT’s AI capabilities, we’ve decided not to include an AI score right now, but we would love your feedback on the concept as we plan WebXPRT’s future. If an AI score is something you’d be interested in, please let us know!

If you have any questions about the WebXPRT 5 details we’ve shared above, please feel free to ask!

Justin

Multi-tab testing in a future version of WebXPRT?

In previous posts about our recommended best practices for producing consistent and reliable WebXPRT scores, we’ve emphasized the importance of “clean” testing. Clean testing involves minimizing the amount of background activity on a system during test runs to ensure stable test conditions. With stable test conditions, we can avoid common scenarios in which startup tasks, automatic updates, and other unpredictable processes contribute to high score variances and potentially unfair comparisons.

Clean testing is a vital part of accurate performance benchmarking, but it doesn’t always show us what kind of performance we can expect in typical everyday conditions. For example, while a browser performance test like WebXPRT can provide clean testing scores that serve as a valuable proxy for overall system performance, an entire WebXPRT test run involves only two open browser tabs. Most of us will have many more tabs open at any given time during the day. Those tabs—and any associated background services, extensions, plug-ins, or renderers—have the potential to require CPU cycles and frequently consume memory resources. Depending on the number of tabs you leave open, the performance impact on your system can be noticeable. Even with modern browser tab management and resource-saving features, a proliferation of tabs can still have a significant impact on your computing experience.

To address this type of computing, we’ve been considering the possibility of adding one or more multi-tab testing features to a future version of WebXPRT. There are several ways we could do this, including the following options:

  • We could open each full workload cycle in a new tab, resulting in seven total tabs.
  • We could open each individual workload iteration in a new tab, resulting in 42 total tabs.
  • We could allow users to run multiple full tests back-to-back while keeping the tabs from the previous test(s) open.
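The first two options above differ only in how finely the test is split across tabs. The sketch below (not WebXPRT code; workload names, paths, and the iterations-per-workload count are hypothetical) builds the per-tab URLs for both options; in a browser, each URL would be passed to `window.open()` to spawn its tab.

```javascript
// Illustrative sketch of the first two multi-tab options: build one URL per
// tab. Workload names and paths are hypothetical, not WebXPRT's.
const WORKLOADS = [
  "video-background-blur", "photo-effects", "detect-faces",
  "image-classification", "document-scan", "science-project", "spellcheck",
];

function tabUrls(iterationsPerWorkload = 1) {
  const urls = [];
  for (const w of WORKLOADS) {
    for (let i = 1; i <= iterationsPerWorkload; i++) {
      urls.push(`/workloads/${w}.html?iteration=${i}`);
    }
  }
  return urls;
}

// Option one: one tab per workload cycle -> 7 tabs.
// Option two: one tab per iteration (6 each) -> 42 tabs.
console.log(tabUrls(1).length, tabUrls(6).length); // 7 42
```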

If we do decide to add multi-tab features to a future version of WebXPRT, we could integrate them into the main score, or we could make them optional so that they don’t affect traditional WebXPRT testing. We’re looking at all of these options.

Whenever we have multiple choices, we seek your input. We want to know if a feature like this is something you’d like to see. Below, you’ll find two quick survey questions that will help us gauge your interest in this topic. We would appreciate your input!

  • Would you be interested in using future WebXPRT multi-tab testing features?
  • How many browser tabs do you typically leave open at one time?

If you’d like to share additional thoughts or ideas related to possible multi-tab features, please let us know!

Justin
