

Get your WebXPRT 5 Preview results on WebXPRT.com: how to submit them

The WebXPRT 5 Preview has been available for only a few weeks, but users have already started submitting test results for us to review for publication in the WebXPRT 5 Preview results viewer. We’re excited to receive those submissions, but we know that some of our readers are either new to WebXPRT or may never have submitted a test result. In today’s post, we’ll cover the straightforward process of submitting your WebXPRT 5 Preview test results for publication in the viewer.

Unlike sites that automatically publish all results submissions, we publish only results that meet a set of evaluation criteria. Those results can come from OEM labs, third-party labs, tech media sources, or independent user submissions. What’s important to us is that the scores must be consistent with general expectations, and for sources outside of our labs and data centers, each must include enough detailed system information that we can determine whether the score makes sense. That said, if your scores are different from what you see in our database, please don’t hesitate to send them to us; we may be able to work it out together.

The actual result submission process is simple. On the end-of-test results page that displays after a test run, click the Submit your results button below the overall score. Then, complete the short submission form that pops up, and click Submit.

When filling in the system information fields in the submission form, please be as specific as possible. Detailed device information helps us assess whether individual scores represent valid test runs.

That’s all there is to it!

Figure 1 below shows the end-of-test results screen and the Submit your results button below the overall score.

Figure 1: A screenshot of the WebXPRT 5 Preview end-of-test results screen, showing the Submit your results button below the overall score.

Figure 2 below shows how the results submission form would look if I filled in the necessary information and submitted a score at the end of a recent WebXPRT 5 Preview run on one of the systems here in our lab.

Figure 2: A screenshot of the WebXPRT 5 Preview results submission pop-up window after filling in the email address and system information fields.

After you submit your test result, we’ll review the information. If the test result meets the evaluation criteria, we’ll contact you to confirm how we should display its source in our database. For that purpose, you can choose one of the following:

  • Your first and last name
  • “Independent tester” (if you wish to remain anonymous)
  • Your company’s name, if you have permission to submit the result under that name. If you want to use a company name, please provide a valid corresponding company email address.

As always, we will not publish any additional information about you or your company without your permission.

We look forward to seeing your scores! If you have questions about WebXPRT 5 Preview testing or results submission—or you’d like to share feedback on WebXPRT 5—please let us know!

Justin

WebXPRT 5: The workload lineup

The WebXPRT 5 development process is heading into the final stretch, so we’d like to share more information about the workloads you’re likely to see in the WebXPRT 5 Preview release—and when that release may be available. We’re still actively testing candidate builds, studying results from multiple system tests, and so on, so some details could change. That said, we’re now close enough to provide a clearer picture of the workload lineup.

Core workloads

WebXPRT 5 will likely include the following seven workloads:  

  • Video Background Blur with AI. Blurs the background of a video call using an AI-powered segmentation model.
  • Photo Effects. Applies a filter to six photos using the Canvas API.
  • Detect Faces with AI. Detects faces and organizes photos in an album using computer vision (OpenCV.js with a Caffe model).
  • Image Classification with AI. Labels images in an album using machine learning (OpenCV.js and ML Classify with the SqueezeNet model).
  • Document Scan with AI. Scans a document image and converts it to text using ML-based OCR (Wasm with LSTM).
  • School Science Project. Processes a DNA sequencing task using regex and string manipulation. (A brief sketch of this kind of task appears after this list.)
  • Homework Spellcheck. Spellchecks a document using Typo.js and Web Workers.
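
To give a sense of what a regex-and-string task like the School Science Project involves, here’s a minimal JavaScript sketch. To be clear, this is not the actual WebXPRT 5 workload code; the sequence generator, motif, and metrics are illustrative placeholders.

```js
// A minimal, hypothetical sketch of a regex-and-string DNA task.
// NOT the actual WebXPRT 5 workload code; the sequence generator,
// motif, and metrics are illustrative assumptions.

// Build a random DNA string to stand in for real input data.
function makeSequence(length) {
  const bases = 'ACGT';
  let seq = '';
  for (let i = 0; i < length; i++) {
    seq += bases[Math.floor(Math.random() * bases.length)];
  }
  return seq;
}

// Count occurrences of a motif and compute GC content with regexes.
function analyzeSequence(seq, motif) {
  const motifCount = (seq.match(new RegExp(motif, 'g')) || []).length;
  const gcCount = (seq.match(/[GC]/g) || []).length;
  return { motifCount, gcContent: gcCount / seq.length };
}

const start = performance.now();
const stats = analyzeSequence(makeSequence(1000000), 'GATTACA');
console.log(stats, `${(performance.now() - start).toFixed(1)} ms`);
```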

The sub-scores for each of these tests will contribute to WebXPRT 5’s main overall score. (We’ll discuss scoring in future blogs.)

Experimental workloads

We’re currently planning to include an experimental workload section, something we’ve long discussed, in WebXPRT 5. Workloads in this section will use cutting-edge browser technologies that may not be compatible with the same broad range of platforms and devices as the technologies in WebXPRT 5’s core workloads. For that reason, we will not include the scores from the experimental section—in the Preview build or in any future release—in WebXPRT 5’s main overall score.

In addition, WebXPRT 5’s experimental workloads will be completely optional.
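
To illustrate how an optional workload like this might check for support before running, here’s a hypothetical JavaScript sketch that gates on WebGPU availability. WebGPU is just one example of a cutting-edge browser capability; it isn’t necessarily what WebXPRT 5’s experimental section will test.

```js
// Hypothetical sketch of gating an experimental workload on feature
// support. WebGPU is only an example of a cutting-edge capability;
// WebXPRT 5's actual checks may differ.
async function canRunExperimentalWorkload() {
  if (!('gpu' in navigator)) {
    return false; // browser does not expose WebGPU at all
  }
  const adapter = await navigator.gpu.requestAdapter();
  return adapter !== null; // null means no usable GPU adapter
}

canRunExperimentalWorkload().then((ok) => {
  console.log(ok ? 'Experimental workload available on this system.'
                 : 'Skipping experimental workload on this system.');
});
```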

Moving forward, WebXPRT’s experimental workload section will provide users with a straightforward way to learn how well certain browsers or systems handle new browser-based technologies (e.g., new web apps or AI capabilities). We’ll benefit from the ability to offer workloads for large-scale testing and user feedback before committing to including them as core WebXPRT workloads. Because future experimental workloads will run independently of the main test, we can add them without affecting the main WebXPRT score or requiring users to repeat testing to obtain comparable scores. We think it will be a win-win scenario in many respects.  

We’re still evaluating whether we can finish the first experimental workload in time to include it in the WebXPRT 5 Preview release, but the Preview will include at least the experimental section and the framework for adding such workloads. When we’re confident that an experimental workload is ready to go, we’ll share more information here in the blog.

Timeline

If all goes well, we hope to publish the WebXPRT 5 Preview very soon, followed by a general release in early 2026. If that timeline changes significantly, we’ll provide an update here in the blog as soon as possible.

What about an “AI score”?

We’re still discussing the concept of a stand-alone WebXPRT 5 “AI score,” and we go back and forth on it. That score would combine WebXPRT’s AI-related subscores into a single score for use in AI capability comparisons. Because we’re just now expanding WebXPRT’s AI capabilities, we’ve decided not to include an AI score for now. We would love your feedback on the concept as we plan WebXPRT’s future. If that’s something you’d be interested in, please let us know!

If you have any questions about the WebXPRT 5 details we’ve shared above, please feel free to ask!

Justin

Multi-tab testing in a future version of WebXPRT?

In previous posts about our recommended best practices for producing consistent and reliable WebXPRT scores, we’ve emphasized the importance of “clean” testing. Clean testing involves minimizing the amount of background activity on a system during test runs to ensure stable test conditions. With stable test conditions, we can avoid common scenarios in which startup tasks, automatic updates, and other unpredictable processes contribute to high score variances and potentially unfair comparisons.

Clean testing is a vital part of accurate performance benchmarking, but it doesn’t always show us what kind of performance we can expect in typical everyday conditions. For example, while a browser performance test like WebXPRT can provide clean testing scores that serve as a valuable proxy for overall system performance, an entire WebXPRT test run involves only two open browser tabs. Most of us have many more tabs open at any given time during the day. Those tabs—and any associated background services, extensions, plug-ins, or renderers—can consume CPU cycles and memory. Depending on the number of tabs you leave open, the performance impact on your system can be noticeable. Even with modern browser tab management and resource-saving features, a proliferation of tabs can still have a significant impact on your computing experience.

To reflect this kind of everyday usage, we’ve been considering the possibility of adding one or more multi-tab testing features to a future version of WebXPRT. There are several ways we could do this, including the following options:

  • We could open each full workload cycle in a new tab, resulting in seven total tabs. (A sketch of this approach appears after this list.)
  • We could open each individual workload iteration in a new tab, resulting in 42 total tabs.
  • We could allow users to run multiple full tests back-to-back while keeping the tabs from the previous test(s) open.
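
To illustrate the mechanics of the first option, here’s a hypothetical JavaScript sketch that opens one tab per workload and listens for results. The URL pattern, workload names, and BroadcastChannel reporting scheme are placeholders we chose for the example, not WebXPRT internals.

```js
// Hypothetical sketch of the first option: one tab per workload.
// The URL pattern, workload names, and BroadcastChannel reporting
// scheme are placeholders, not WebXPRT internals.
const workloads = [
  'video-blur', 'photo-effects', 'detect-faces', 'classify-images',
  'scan-document', 'science-project', 'spellcheck',
];

// Listen for results that each workload tab posts when it finishes.
const channel = new BroadcastChannel('multi-tab-results');
channel.onmessage = (event) => console.log('Finished:', event.data);

// Open one tab per workload. Browsers generally allow multiple
// window.open calls only in response to a user gesture, and popup
// blockers may still intervene.
for (const name of workloads) {
  window.open(`https://example.com/workloads/${name}.html`, `xprt-${name}`);
}
```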

If we do decide to add multi-tab features to a future version of WebXPRT, we could integrate them into the main score, or we could make them optional so that they don’t affect traditional WebXPRT testing. We’re looking at all these options.

Whenever we have multiple choices, we seek your input. We want to know if a feature like this is something you’d like to see. Below, you’ll find two quick survey questions that will help us gauge your interest in this topic. We would appreciate your input!

Would you be interested in using future WebXPRT multi-tab testing features?

How many browser tabs do you typically leave open at one time?

If you’d like to share additional thoughts or ideas related to possible multi-tab features, please let us know!

Justin

Browser-based AI tests in WebXPRT 4: face detection and image classification

I recently revisited an XPRT blog entry that we posted from CES 2020 in Las Vegas. In that post, I reflected on the show’s expanded AI emphasis, and I wondered if we were reaching a tipping point where AI-enhanced and AI-driven tools and applications would become a significant presence in people’s daily lives. It felt like we were approaching that point back then with the prevalence of AI-powered features such as image enhancement and text recommendation, among many others. Now, seamless AI integration with common online tasks has become so widespread that many people unknowingly benefit from AI interactions several times a day.

As AI’s role in areas like everyday browser activity continues to grow—along with our expectations for what our consumer devices should be able to handle—reliable AI-oriented benchmarking is more vital than ever. We need objective performance data that can help us understand how well a new desktop, laptop, tablet, or phone will handle AI tasks.

WebXPRT 4 already includes timed AI tasks in two of its workloads: the “Organize Album using AI” workload and the “Encrypt Notes and OCR Scan” workload. These two workloads reflect the types of light browser-side inference tasks that are now fairly common in consumer-oriented web apps and extensions. In today’s post, we’ll provide some technical information about the Organize Album workload. In a future post, we’ll do the same for the Encrypt Notes workload.

The Organize Album workload includes two different timed tasks that reflect a common scenario of organizing online photo albums. The workload utilizes the AI inference and JavaScript capabilities of the WebAssembly (Wasm) version of OpenCV.js—an open-source computer vision and machine learning library. In WebXPRT 4, we used OpenCV.js version 4.5.2.

Here are the details for each task:

  • The first task measures the time it takes to complete a face detection job with a set of five 720 x 480 photos that we sourced from commercial photo sites. The workload loads a Caffe deep learning framework model (res10_300x300_ssd_iter_140000_fp16.caffemodel) using the commands found here. (A minimal code sketch of this task appears below the list.)
  • The second task measures the time it takes to complete an image classification job (labeling based on object detection) with a different set of five 718 x 480 photos that we sourced from the ImageNet computer vision dataset. The workload loads an ONNX-based SqueezeNet machine learning model (squeezenet.onnx v1.0) using the commands found here.
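
For readers who want a feel for what the face detection task looks like in code, here’s a minimal sketch built on the standard OpenCV.js dnn API. The fetch helper, file paths, canvas id, and preprocessing values are illustrative assumptions; this is not the WebXPRT 4 harness code itself.

```js
// Hedged sketch of timing one face-detection pass with the OpenCV.js
// (Wasm) dnn API. Assumes opencv.js has already loaded and the Wasm
// runtime is initialized (cv['onRuntimeInitialized']). File names,
// canvas id, and preprocessing values are illustrative only.

// Download a model file and register it in Emscripten's virtual file
// system so cv.readNetFromCaffe can open it by name.
async function fetchIntoFS(url, fsName) {
  const data = new Uint8Array(await (await fetch(url)).arrayBuffer());
  cv.FS_createDataFile('/', fsName, data, true, false, false);
}

async function timeFaceDetection() {
  await fetchIntoFS('deploy.prototxt', 'deploy.prototxt');
  await fetchIntoFS('res10_300x300_ssd_iter_140000_fp16.caffemodel',
                    'model.caffemodel');
  const net = cv.readNetFromCaffe('deploy.prototxt', 'model.caffemodel');

  const src = cv.imread('photoCanvas');        // pixels from a <canvas>
  cv.cvtColor(src, src, cv.COLOR_RGBA2BGR);    // dnn expects 3 channels

  const start = performance.now();
  // Preprocess to the 300x300 input the SSD face detector expects.
  const blob = cv.blobFromImage(src, 1.0, new cv.Size(300, 300),
                                new cv.Scalar(104, 177, 123), false);
  net.setInput(blob);
  const detections = net.forward();            // run inference
  console.log(`Face detection took ${(performance.now() - start).toFixed(1)} ms`);

  src.delete(); blob.delete(); detections.delete(); net.delete();
}
```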

To produce a score for each iteration of the workload, WebXPRT calculates the total time that it takes for a system to organize both albums. In a standard test, WebXPRT runs seven iterations of the entire six-workload performance suite before calculating an overall test score. You can find out more about the WebXPRT results calculation process here.
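
Structurally, that per-iteration timing works something like the following JavaScript sketch. The task bodies are empty placeholders, and the formula that converts raw times into WebXPRT points lives in the documentation linked above; we don’t reproduce it here.

```js
// Hedged sketch of the per-iteration timing described above. The two
// task functions are empty placeholders for the OpenCV.js jobs; the
// formula that turns raw times into WebXPRT points is documented
// separately and is not reproduced here.
async function detectFacesInAlbum() { /* placeholder for task 1 */ }
async function classifyImagesInAlbum() { /* placeholder for task 2 */ }

// One iteration's result is the total time to organize both albums.
async function timeOrganizeAlbumIteration() {
  const start = performance.now();
  await detectFacesInAlbum();
  await classifyImagesInAlbum();
  return performance.now() - start;
}

// A standard test repeats the whole suite seven times; here we collect
// just this workload's per-iteration times.
async function runIterations(count = 7) {
  const times = [];
  for (let i = 0; i < count; i++) {
    times.push(await timeOrganizeAlbumIteration());
  }
  return times;
}

runIterations().then((times) => console.log('Iteration times (ms):', times));
```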

We hope this post will give you a better sense of how WebXPRT 4 measures one kind of AI performance. As a reminder, if you want to dig into the details at a more granular level, you can access the WebXPRT 4 source code for free. In previous blog posts, you can find information about how to access and use the code. You can also read more about WebXPRT’s overall structure and other workloads in the Exploring WebXPRT 4 white paper.

If you have any questions about this workload or any other aspect of WebXPRT 4, please let us know!

Justin

Recent XPRT mentions in articles, reviews, and more!

Here at the XPRTs, our primary goal is to provide free, easy-to-use benchmark tools that can help everyone—from OEM labs to tech press journalists to individual consumers—understand how well devices will perform while completing everyday computing tasks. We track progress toward that goal in several ways, but one of the most important is how much people use and discuss the XPRTs. When the name of one of our apps appears in an ad, article, or tech review, we call it a “mention.” Tracking mentions helps us gauge our reach.

We occasionally like to share a sample of recent XPRT mentions here in the blog. If you just started following the XPRTs, it may be surprising to see our program’s global reach. If you’re a longtime reader and you’re used to seeing WebXPRT or CrXPRT in major tech press articles, it may be surprising to learn more about overseas tech press publications or see how some government agencies use the XPRTs to make decisions. In any case, we hope you’ll enjoy exploring the links below!

Recent mentions include:

If you’d like to receive monthly updates on XPRT-related news and activity, we encourage you to sign up for the BenchmarkXPRT Development Community newsletter. It’s completely free, and all you need to do to join the newsletter mailing list is let us know! We won’t publish, share, or sell any of the contact information you provide, and we’ll only send you the monthly newsletter and occasional benchmark-related announcements, such as important news about patches or releases.

If you have any questions about the XPRTs, suggestions, or requests for future blog topics, please feel free to contact us.

Justin

Archiving AIXPRT and CloudXPRT

Some of our readers have been following the XPRTs since the early days, and they may remember using legacy versions of benchmarks such as HDXPRT 2014 or WebXPRT 2013. For many years, whenever we released a new version of a benchmark, we would maintain a link to the previous version on the benchmark’s main page. However, as interest in the older versions understandably waned and we stopped formally supporting them, many of those legacy XPRTs stopped working on the latest versions of the operating systems or browsers that we designed them to test. While we wanted to continue to provide a way for users to access those legacy XPRTs, we also wanted to avoid potential confusion for new users who might see links to old versions on our site. We decided that the best solution was to archive older tests in a separate section of the site—the XPRT archive.

Recently, as we discussed XPRT plans for 2025, it became clear that we needed to add AIXPRT and CloudXPRT to the archive. Both benchmarks represent landmark efforts toward our ongoing goal of providing cutting-edge performance assessment tools, but even though a few tech press publications and OEM labs experimented with them, neither benchmark gained widespread enough adoption to justify its continued support. As a result, we decided to focus our resources elsewhere and halt development on both benchmarks. Since then, ongoing updates to their respective software components and target platforms have rendered them largely unusable. By archiving both benchmarks, we hope to avoid any future confusion for visitors who may otherwise try to use them.

Over the coming weeks, we’ll be moving the AIXPRT and CloudXPRT installation packages to the XPRT archive page. We’re grateful to everyone who has used AIXPRT and CloudXPRT in the past, and we apologize for any inconvenience this change may cause.

If you have any questions or concerns about access to either of these benchmarks—or about anything else related to the XPRTs—please let us know!

Justin
