
Browser-based AI tests in WebXPRT 4: face detection and image classification

I recently revisited an XPRT blog entry that we posted from CES Las Vegas back in 2020. In that post, I reflected on the show’s expanded AI emphasis, and I wondered if we were reaching a tipping point where AI-enhanced and AI-driven tools and applications would become a significant presence in people’s daily lives. It felt like we were approaching that point back then with the prevalence of AI-powered features such as image enhancement and text recommendation, among many others. Now, seamless AI integration with common online tasks has become so widespread that many people unknowingly benefit from AI interactions several times a day.

As AI’s role in areas like everyday browser activity continues to grow—along with our expectations for what our consumer devices should be able to handle—reliable AI-oriented benchmarking is more vital than ever. We need objective performance data that can help us understand how well a new desktop, laptop, tablet, or phone will handle AI tasks.

WebXPRT 4 already includes timed AI tasks in two of its workloads: the “Organize Album using AI” workload and the “Encrypt Notes and OCR Scan” workload. These two workloads reflect the types of light browser-side inference tasks that are now fairly common in consumer-oriented web apps and extensions. In today’s post, we’ll provide some technical information about the Organize Album workload. In a future post, we’ll do the same for the Encrypt Notes workload.

The Organize Album workload includes two timed tasks that reflect a common scenario: organizing online photo albums. The workload uses the AI inference and JavaScript capabilities of the WebAssembly (Wasm) version of OpenCV.js—an open-source computer vision and machine learning library. In WebXPRT 4, we use OpenCV.js version 4.5.2.

Here are the details for each task:

  • The first task measures the time it takes to complete a face detection job on a set of five 720 x 480 photos that we sourced from commercial photo sites. The workload loads a Caffe deep learning framework model (res10_300x300_ssd_iter_140000_fp16.caffemodel) using the commands found here.
  • The second task measures the time it takes to complete an image classification job (labeling based on object detection) on a different set of five 718 x 480 photos that we sourced from the ImageNet computer vision dataset. The workload loads an ONNX-based SqueezeNet machine learning model (squeezenet.onnx v1.0) using the commands found here.
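
To give a concrete sense of the mechanics, here is a minimal sketch of the OpenCV.js pattern these tasks rely on: register the model files in the library's virtual filesystem, load the network, and run inference on an image. The file and variable names below are illustrative placeholders, not the exact ones WebXPRT uses; please consult the WebXPRT 4 source code for the real implementation.

    // Minimal sketch, assuming a dnn-enabled OpenCV.js 4.5.x build (the cv global).
    // File names and paths are illustrative, not the ones WebXPRT ships.
    async function loadIntoCvFs(url, fsName) {
      // OpenCV.js reads model files from its Emscripten virtual filesystem,
      // so we fetch each file and register it there first.
      const data = new Uint8Array(await (await fetch(url)).arrayBuffer());
      cv.FS_createDataFile('/', fsName, data, true, false, false);
    }

    async function detectFaces(imgElement) {
      await loadIntoCvFs('deploy.prototxt', 'face.prototxt');
      await loadIntoCvFs('res10_300x300_ssd_iter_140000_fp16.caffemodel',
                         'face.caffemodel');
      const net = cv.readNetFromCaffe('face.prototxt', 'face.caffemodel');
      // The classification task follows the same pattern with
      // cv.readNetFromONNX('squeezenet.onnx').

      // The res10 SSD face detector expects a 300x300 BGR input blob.
      const src = cv.imread(imgElement);
      cv.cvtColor(src, src, cv.COLOR_RGBA2BGR);
      const blob = cv.blobFromImage(src, 1.0, new cv.Size(300, 300),
                                    new cv.Scalar(104, 177, 123), false, false);
      net.setInput(blob);
      const detections = net.forward(); // rows: [imgId, classId, conf, x1, y1, x2, y2]
      // ...filter rows by confidence and map boxes back to image coordinates...
      src.delete(); blob.delete(); detections.delete(); net.delete();
    }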

To produce a score for each iteration of the workload, WebXPRT calculates the total time that it takes for a system to organize both albums. In a standard test, WebXPRT runs seven iterations of the entire six-workload performance suite before calculating an overall test score. You can find out more about the WebXPRT results calculation process here.

We hope this post will give you a better sense of how WebXPRT 4 measures one kind of AI performance. As a reminder, if you want to dig into the details at a more granular level, you can access the WebXPRT 4 source code for free. In previous blog posts, you can find information about how to access and use the code. You can also read more about WebXPRT’s overall structure and other workloads in the Exploring WebXPRT 4 white paper.

If you have any questions about this workload or any other aspect of WebXPRT 4, please let us know!

Justin

The XPRTs: What would you like to see in 2025?

If you’re a new follower of the XPRT family of benchmarks, you may not be aware of a characteristic that sets the XPRTs apart from many benchmarking efforts: our openness and commitment to valuing the feedback of tech journalists, lab engineers, and anyone else who uses the XPRTs on a regular basis. That feedback helps us ensure that as the XPRTs grow and evolve, the resources we offer continue to meet the needs of those who use them.

In the past, user feedback has influenced specific aspects of our benchmarks, such as the length of test runs, UI features, results presentation, and the addition or subtraction of specific workloads. More broadly, we have also received suggestions for entirely new XPRTs and ways we might target emerging technologies or industry use cases.

As we look forward to what’s in store for the XPRTs in 2025, we’d love to hear your ideas about new XPRTs—or new features for existing XPRTs. Are you aware of hardware form factors, software platforms, new technologies, or prominent applications that are difficult or impossible to evaluate using existing performance benchmarks? Should we incorporate additional or different technologies into existing XPRTs through new workloads? Do you have suggestions for ways to improve any of the XPRTs or XPRT-related tools, such as results viewers?

We’re especially interested in your thoughts about the next steps for WebXPRT. If our recent blog posts about the potential addition of an AI-focused auxiliary workload, what a WebXPRT battery life test would entail, or possible WebAssembly-based test scenarios have piqued your interest, we’d love to hear your thoughts!

We’re genuinely interested in your answers to these questions and any other ideas you have, so please feel free to contact us. We look forward to hearing your thoughts and working together to figure out how they could help shape the XPRTs in 2025!

Justin

More than two million XPRT benchmark runs and downloads!

As we near the end of 2024, we’re excited to share that the XPRTs have passed another notable milestone—over 2,000,000 combined runs and downloads! The rate of growth in the total number of XPRT runs and downloads is exciting. It took about seven and a half years for the XPRTs to pass one million total runs and downloads—but it’s taken less than half that, three and a half years, to add another million. Figure 1 shows the climb to the two-million-run mark.

Figure 1: The cumulative number of total yearly XPRT runs and downloads over time.

As you would expect, most of the runs contributing to that total come from WebXPRT tests. If you’ve run WebXPRT in any of the 983 cities and 84 countries from which we’ve received completed test data—including newcomers El Salvador, Malaysia, Morocco, and Saudi Arabia—we’re grateful for your help in reaching this milestone! As Figure 2 illustrates, WebXPRT use has grown steadily since the debut of WebXPRT 2013. On average, we now record more than twice as many WebXPRT runs each month as we recorded in WebXPRT’s entire first year. With over 340,000 runs so far in 2024—an increase of more than 16 percent over last year’s total—that growth shows no signs of slowing down.

Figure 2: The cumulative number of total yearly WebXPRT runs over time.

This milestone isn’t just about numbers. Establishing and maintaining a presence in the industry and experiencing year-over-year growth requires more than technical know-how and marketing efforts. It requires the ongoing trust and support of the benchmarking community—including OEM labs, the tech press, and independent computer enthusiasts—and those who simply want to know how good their devices are at web browsing.

Once again, we’re thankful for the support of everyone who’s used the XPRTs over the years, and we look forward to another million!

If you have any questions or comments about any of the XPRTs, we’d love to hear from you!

Justin

Speaking of potential future WebXPRT workloads

In recent blog posts, we’ve discussed several types of potential future WebXPRT workloads—from an auxiliary AI-focused workload to a WebXPRT battery life test—and many of the factors that we would need to consider when developing those workloads. In today’s post, we’re discussing other types of workloads that we may consider for future WebXPRT versions. We’re also inviting you to send us your WebXPRT workload ideas!

Currently, the most promising web technology for future WebXPRT workloads is WebAssembly (Wasm). Wasm is a binary instruction format that works across all modern browsers, provides a sandboxed environment that runs at near-native speed, and takes advantage of hardware capabilities that are common across platforms. These capabilities give web developers significant flexibility to run complex client applications in the browser.
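
For readers who haven’t worked with Wasm directly, the browser-side mechanics are simple: JavaScript fetches a compiled module, instantiates it, and calls its exports. Here is a generic sketch; the file name module.wasm and the export compute are placeholders of ours, not anything WebXPRT ships.

    // Generic sketch of loading and calling a Wasm module in the browser.
    // 'module.wasm' and 'compute' are placeholder names.
    async function runWasm() {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch('module.wasm'),
        {} // imports the module expects, if any
      );
      // Exports run inside the Wasm sandbox at near-native speed.
      return instance.exports.compute(42);
    }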

We first made use of Wasm in WebXPRT 4’s Organize Album and Encrypt Notes workloads, but Wasm has the potential to support many more types of test scenarios. Here are just a few of the use-case categories that Wasm supports:

  • Gaming
  • Image and video editing
  • Video augmentation
  • CAD applications
  • Interactive learning portals
  • Language translation

Those categories and the possibilities they open for additional workloads are exciting! When thinking through possible new workload scenarios, it’s important to remember that workload proposals need to fit within a set of basic guidelines that uphold WebXPRT’s strengths as a benchmark. You can read about those guidelines in more detail in this blog post, but in short, new workloads ideally should

  • be relevant to real-life scenarios
  • have cross-platform support
  • clearly differentiate performance between different types of devices
  • produce consistent and easily replicated results

After testing with WebXPRT or reviewing the list of use cases that Wasm supports, have you considered a new workload or test scenario that you would like to see? If so, please let us know! Your ideas could end up playing a role in shaping the next version of WebXPRT!

Justin

Thinking through a potential WebXPRT 4 battery life test

In recent blog posts, we’ve discussed some of the technical considerations we’re working through on our path toward a future AI-focused WebXPRT 4 auxiliary workload. While we’re especially excited about adding to WebXPRT 4’s AI performance evaluation capabilities, AI is not the only area of potential WebXPRT 4 expansion that we’ve thought about. We’re always open to hearing suggestions for ways we can improve WebXPRT 4, including any workload proposals you may have. Several users have asked about the possibility of a WebXPRT 4 battery life test, so today we’ll discuss what one might look like and some of the challenges we’d have to overcome to make it a reality.

Battery life tests fall into two primary categories: simple rundown tests and performance-weighted tests. Simple rundown tests measure battery life during extended idle periods, looped video playback, and other repetitive activities, but they do not reflect the wide-ranging mix of tasks that characterizes a typical day for most users. While rundown tests can be useful for very specific apples-to-apples comparisons, they don’t always give consumers an accurate estimate of the battery life they would experience in daily use.

In contrast, performance-weighted battery life tests, such as the one in CrXPRT 2, attempt to reflect real-world usage. The CrXPRT battery life test simulates common daily usage patterns for Chromebooks by including all the productivity workloads from the performance test, plus video playback, audio playback, and gaming scenarios. It also includes periods of wait/idle time. We believe this mixture of diverse activity and idle time better represents typical real-life behavior patterns. This makes the resulting estimated battery life much more helpful for consumers who are trying to match a device’s capabilities with their real-world needs.

From a technical standpoint, WebXPRT’s cross-platform nature presents several challenges that we did not face while developing the CrXPRT battery life test for ChromeOS. While the WebXPRT performance tests run in almost any browser, cross-browser differences and limitations in battery life reporting may restrict any future battery life test to a single browser or browser family. For instance, with the W3C Battery Status API, we can currently query battery status data from non-mobile Chromium-based browsers (e.g., Chrome, Edge, and Opera), but not from Firefox or Safari. If a WebXPRT 4 battery life test supported only a single browser family, such as Chromium-based browsers, would you still be interested in using it? Please let us know.
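
As an illustration, here is a minimal sketch of how a test harness might query that API where it exists. The function name is a placeholder of ours, and the fallback branch reflects the manual-verification problem described in the list below.

    // Sketch: reading the W3C Battery Status API, which currently responds
    // in non-mobile Chromium-based browsers but not in Firefox or Safari.
    async function sampleBattery() {
      if (!('getBattery' in navigator)) {
        // Firefox/Safari path: the tester would have to record battery
        // capacity manually.
        return null;
      }
      const battery = await navigator.getBattery();
      console.log(`charging: ${battery.charging}, ` +
                  `level: ${Math.round(battery.level * 100)}%`);
      // A rundown harness could log every change in charge level.
      battery.addEventListener('levelchange', () => {
        console.log(`battery level now ${Math.round(battery.level * 100)}%`);
      });
      return battery;
    }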

A browser-based battery life workflow also presents other challenges that we do not face in native client applications, such as CrXPRT:

  • A browser-based battery life test may require the user to check the starting and ending battery capacities, with no way for the app to independently verify data accuracy.
  • The battery life test could require more babysitting in the event of network issues. We can catch network failures and try to handle them by reporting periods of network disconnection, but those interruptions could influence the battery life duration.
  • The factors above could make it difficult to achieve repeatability. One way to address that problem would be to run the test in a standardized lab environment with a steady internet connection, but a long list of standardized environmental requirements would make the battery life test less attractive and less accessible to many testers.

We’re not sharing these thoughts to make a WebXPRT 4 battery life test seem like an impossibility. Rather, we want to offer our perspective on what the test might look like and describe some of the challenges and considerations in play. If you have thoughts about battery life testing, or experience with battery life APIs in one or more of the major browsers, we’d love to hear from you!

Justin

Gain a deeper understanding of WebXPRT 4 with our results calculation white paper

More people around the world are using WebXPRT 4 now than ever before. It’s exciting to see that growth, which also means that many people are visiting our site and learning about the XPRTs for the first time. Because new visitors may not know how the XPRT family of benchmarks differs from other benchmarking efforts, we occasionally like to revisit the core values of our open development community here in the blog—and show how those values translate into more free resources for you.

One of our primary values is transparency in all our benchmark development and testing processes. We share information about our progress with XPRT users throughout the development process, and we invite people to contribute ideas and feedback along the way. We also publish both the source code of our benchmarks and detailed information about how they work, unlike benchmarks that use a “black box” model.

For WebXPRT 4 users who are interested in knowing more about the nuts and bolts of the benchmark, we offer several information-packed resources, including our focus for today, the WebXPRT 4 results calculation and confidence interval white paper. The white paper explains the WebXPRT 4 confidence interval, how it differs from typical benchmark variability, and the formulas the benchmark uses to calculate the individual workload scenario scores and overall score on the end-of-test results screen. The paper also provides an overview of the statistical methodology that WebXPRT uses to translate raw timings into scores.
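
As a rough illustration of the general shape such calculations often take (not the white paper’s exact formulas, which you should consult directly), a per-workload score can be derived from the ratio of a calibration baseline time to the measured time, and the per-workload scores can then be combined with a geometric mean so that no single workload dominates the overall result:

    // Rough illustration only: ratio-based workload scores combined with a
    // geometric mean. The actual WebXPRT 4 constants and formulas are
    // documented in the white paper and results calculation spreadsheet.
    function workloadScore(baselineMs, measuredMs, scale = 100) {
      // Faster (smaller) measured times yield higher scores.
      return (baselineMs / measuredMs) * scale;
    }

    function overallScore(workloadScores) {
      // The geometric mean keeps one outlier workload from dominating.
      const logSum = workloadScores.reduce((sum, s) => sum + Math.log(s), 0);
      return Math.exp(logSum / workloadScores.length);
    }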

In addition to the white paper’s discussion of the results calculation process, we’ve also provided a results calculation spreadsheet that shows the raw data from a sample test run and reproduces the calculations WebXPRT uses to generate both the workload scores and an overall score.

In potential future versions of WebXPRT, it’s likely that we’ll continue to use the same—or very similar—statistical methodologies and results calculation formulas that we’ve documented in the results calculation white paper and spreadsheet. That said, if you have suggestions for how we could improve those methods or formulas—either in part or in whole—please don’t hesitate to contact us. We’re interested in hearing your ideas!

The white paper is available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the paper or spreadsheet, WebXPRT, or the XPRTs in general, please let us know.

Justin
