
Category: Cross-platform benchmarks

WebXPRT benchmarking tips from the XPRT lab

Occasionally, we receive inquiries from XPRT users asking for help determining why two systems with the same hardware configuration are producing significantly different WebXPRT scores. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behaviors and environments. While some degree of variability is normal, these types of questions provide us with an opportunity to talk about some of the basic benchmarking practices we follow in the XPRT lab to produce the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. Most of them relate to evaluating browser performance with WebXPRT, but several of these practices apply to other benchmarks as well.

  • Hardware is not the only important factor: Most people know that different browsers produce different performance scores on the same system. Testers are not, however, always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always important to record and disclose the extended browser version number for each test run. The same principle applies to any other relevant software.
  • Keep a thorough record of system information: We record detailed information about a test system’s key hardware and software components, including full model and version numbers. This information is not only important for later disclosure if we choose to publish a result; it can also help to pinpoint the system differences that explain why two seemingly identical devices produce very different scores. We also want people to be able to reproduce our results as closely as possible, so we record and disclose more detail than you’ll find in some tech articles and product reviews.
  • Test with clean images: We typically use an out-of-box (OOB) method for testing new devices in the XPRT lab. OOB testing means that, other than running the initial OS and browser updates that users are likely to run after first turning on the device, we change as little as possible before testing. The goal is to assess the performance that retail buyers are likely to see when they first purchase the device, before they install additional software. That said, the OOB method is not appropriate for certain types of testing, such as when you want to compare system images that are as close to identical as possible, or when you want to remove as much pre-loaded software as possible.
  • Turn off automatic updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always double-check update settings before testing.
  • Get a baseline for system processes: Depending on the system and the OS, a significant amount of system-level activity can go on in the background after you first turn on a device. As much as possible, we wait for system activity to settle into a stable idle baseline before kicking off a test. If we start testing immediately after booting the system, we often see higher variance in the first run before the scores start to tighten up.
  • Use more than one data point: Because of natural variance, our standard practice in the XPRT lab is to publish a score that represents the median of three to five runs, if not more. If you run a benchmark only once and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions or over the course of multiple runs. (A simple sketch of this kind of multi-run summary follows this list.)
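
To make the last tip concrete, here is a minimal sketch, in TypeScript, of how a lab script might summarize several runs from one system: it reports the median as the score and uses the run-to-run spread as a rough check on testing stability. The scores and the spread calculation are illustrative placeholders, not part of WebXPRT or an official XPRT procedure.

```typescript
// Minimal sketch: summarizing several benchmark runs from one system.
// The scores below are made-up placeholders, not real WebXPRT results.

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Coefficient of variation: a rough measure of run-to-run spread.
// A large value suggests the runs were not taken under stable conditions.
function coefficientOfVariation(values: number[]): number {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance) / mean;
}

// In a browser context, navigator.userAgent is one way to capture the full
// browser version string that the first tip recommends recording.
const runs = [212, 208, 215, 210, 209]; // hypothetical scores from five runs
console.log(`Median score: ${median(runs)}`);
console.log(`Run-to-run spread (CV): ${(coefficientOfVariation(runs) * 100).toFixed(1)}%`);
```

If the spread across runs is larger than you expect, revisiting the update, baseline, and clean-image tips above is usually the first step.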


We hope these tips will help make your testing more accurate. If you have any questions about WebXPRT, the other XPRTs, or benchmarking in general, feel free to ask!

Justin

Local AI and new frontiers for performance evaluation

Recently, we discussed some ways the PC market may evolve in 2024, and how new Windows on Arm PCs could present the XPRTs with many opportunities for benchmarking. In addition to a potential market shakeup from Arm-based PCs in the coming years, there’s a much broader emerging trend that could eventually revolutionize almost everything about the way we interact with our personal devices—the development of local, dedicated AI processing units for consumer-oriented tech.

AI already impacts daily life for many consumers through technologies such as predictive text, computer vision, adaptive workflow apps, voice recognition, smart assistants, and much more. Generative AI-based technologies are rapidly establishing a permanent, society-altering presence across a wide range of industries. Aside from some localized inference tasks that the CPU and/or GPU typically handle, the bulk of the heavy compute power that fuels those technologies has resided in the cloud or in on-prem servers. Now, several major chipmakers are working to roll out their own versions of AI-optimized neural processing units (NPUs) that will enable local devices to take on a larger share of the AI load.

Examples of dedicated AI hardware in recently released or upcoming consumer devices include Intel’s new Meteor Lake NPU, Apple’s Neural Engine for M-series SoCs, Qualcomm’s Hexagon NPU, and AMD’s XDNA 2 architecture. The potential benefits of localized, NPU-facilitated AI are straightforward. On-device AI could reduce power consumption and extend battery life by offloading AI tasks from the CPU. It could alleviate certain cloud-related privacy and security concerns. Without the delays inherent in cloud queries, localized AI could execute inference tasks much closer to real time. NPU-powered devices could fine-tune applications around your habits and preferences, even while offline. You could pull and utilize relevant data from cloud-based datasets without pushing private data in return. Theoretically, your device could know a great deal about you and enhance many areas of your daily life without passing all that data to another party.

Will localized AI play out that way? Some tech companies envision a role for on-device AI that enhances the abilities of existing cloud-based subscription services without decoupling personal data. We’ll likely see a wide variety of capabilities and services on offer, with application-specific and SaaS-determined privacy options.

Regardless of the way on-device AI technology evolves in the coming years, it presents an exciting new frontier for benchmarking. Not all NPUs will be created equal, and that’s something buyers will need to understand. Some vendors will optimize their hardware more for computer vision, others for large language models or AI-based graphics rendering, and so on. It won’t be enough for businesses and consumers to simply know that a new system has dedicated AI processing abilities. They’ll need to know whether that system performs well while handling the types of AI-related tasks they do every day.

Here at the XPRTs, we specialize in creating benchmarks built around real-world scenarios that mirror the tasks people do in their daily lives. That approach means that when people use XPRT scores to compare device performance, they’re using a metric that can help them make a buying decision that will benefit them every day. We look forward to exploring ways that we can bring XPRT benchmarking expertise to the world of on-device AI.

Do you have ideas for future localized AI workloads? Let us know!

Justin

WebXPRT 4 is good to go with the latest Apple software release

Last month, we reported the good news that our WebXPRT 4 tests successfully ran to completion on the beta releases of iOS 17.2, iPadOS 17.2, and macOS Sonoma 14.2 with Safari 17.2. When we tested with those beta builds, WebXPRT 4 did not encounter the iOS 17.1 issue of test runs getting stuck while attempting to complete the receipt scanning task in the Encrypt Notes and OCR Scan subtest. Unfortunately, for the past several weeks, that fix was available only to Apple users running beta software through the Apple Developer Program.

We’re happy to report that Apple has now finalized and published the general releases of iOS 17.2, iPadOS 17.2, and Safari 17.2. WebXPRT 4 tests running on those platforms should now complete without any problems.

We do appreciate everyone’s patience as we worked to find a solution to this problem, and we look forward to seeing your WebXPRT 4 scores from all the latest Apple devices! If you have any questions or concerns about WebXPRT 4, or you encounter any additional issues when running the test on any platform, please let us know.

Justin

The evolving PC market brings new opportunities for WebXPRT

Here at the XPRTs, we spend time examining what’s next in the tech industry because the XPRTs have to keep up with the pace of innovation. In our recent discussions about 2024, a major recurring topic has been the potential impact of Qualcomm’s upcoming line of SoCs designed for Windows on Arm PCs.

Now, Windows on Arm PCs are certainly not new. Since Windows RT launched on the Arm-based Microsoft Surface RT in 2012, various Windows on Arm devices have come and gone, but none of them—except for some Microsoft SQ-based Surface devices—have made much of a name for themselves in the consumer market.

The reasons for these struggles are straightforward. While Arm-based PCs have the potential to offer consumers the benefits of excellent battery life and “always-on” mobile communications, the platform has historically lagged Intel- and AMD-based PCs in performance. Windows on Arm devices have also faced the challenge of a lack of large-scale buy-in from app developers. So, despite the past involvement of device makers like ASUS, HP, Lenovo, and Microsoft, the major theme of the Windows on Arm story has been one of very limited market acceptance.

Next year, though, the theme of that story may change. If it does, WebXPRT 4 is well-positioned to play an important part.

At the recent Qualcomm Technology Summit, the company unveiled the new 4nm Snapdragon X Elite SoC, which includes an all-new 12-core Oryon CPU, an integrated Adreno GPU, and an integrated Hexagon NPU (neural processing unit) designed for AI-powered applications. Company officials presented performance numbers that showed the X Elite surpassing the performance of late-gen AMD, Apple, and Intel competitor platforms, all while using less power.

Those are massive claims, and of course the proof will come—or not—only when systems are available for test. (In the past, companies have made similar claims about Windows on Arm advantages, only to see those claims evaporate by the time production devices show up on store shelves.)

Will Snapdragon X Elite systems demonstrate unprecedented performance and battery life when they hit the market? How will the performance of those devices stack up to Intel’s Meteor Lake systems and Apple’s M3 offerings? We don’t yet know how these new devices may shake up the PC market, but we do know that it looks like 2024 will present us with many golden opportunities for benchmarking. Amid all the marketing buzz, buyers everywhere will want to know about potential trade-offs between price, power, and battery life. Tech reviewers will want to dive into the details and provide useful data points, but many traditional PC benchmarks simply won’t work with Windows on Arm systems. As a go-to, cross-platform favorite of many OEMs—that runs on just about anything with a browser—WebXPRT 4 is in a perfect position to provide reviewers and consumers with relevant performance comparison data.

It’s quite possible that 2024 will be the biggest year yet for WebXPRT!

Justin

Passing two important WebXPRT milestones

Over the past few months, we’ve been excited to see a substantial increase in the total number of completed WebXPRT runs. To put the increase in perspective, we had more total WebXPRT runs last month alone (40,453) than we had in the first two years WebXPRT was available (36,674)! This boost has helped us to reach two important milestones as we close in on the end of 2023.

The first milestone is that the number of WebXPRT 4 runs per month now exceeds the number of WebXPRT 3 runs per month. When we release a new version of an XPRT benchmark, it can take a while for users to transition from using the older version. For OEM labs and tech journalists, adding a new benchmark to their testing suite often involves a significant investment in back testing and gathering enough test data for meaningful comparisons. When the older version of the benchmark has been very successful, adoption of the new version can take longer. WebXPRT 3 has been remarkably popular around the world, so we’re excited to see WebXPRT 4 gain traction and take the lead even as the total number of WebXPRT runs increases each month. The chart below shows the number of WebXPRT runs per month for each version of WebXPRT over the past ten years. WebXPRT 4 usage first surpassed WebXPRT 3 in August of this year, and after looking at data for the last three months, we think its lead is here to stay.

The second important milestone is the cumulative number of WebXPRT runs, which recently passed 1.25 million, as the chart below shows. For us, this moment represents more than a numerical milestone. For a benchmark to succeed, developers need the trust and support of the benchmarking community. WebXPRT’s consistent year-over-year growth tells us that the benchmark continues to hold value for manufacturers, OEM labs, the tech press, and end users. We see it as a sign of trust that folks repeatedly return to the benchmark for reliable performance metrics. We’re grateful for that trust, and for everyone who has contributed to the WebXPRT development process over the years.

We look forward to seeing how far WebXPRT’s reach can extend in 2024! If you have any questions or comments about using WebXPRT, let us know!

Justin

Good news for WebXPRT 4 testing!

Over the past several weeks, we’ve been working to find a solution to a problem with WebXPRT 4 test failures on Apple devices running iOS 17/17.1, iPadOS 17/17.1, and macOS Sonoma with Safari 17/17.1. We put significant effort into an updated WebXPRT build that would mitigate the issue, but we’re happy to report that it now looks like we’ll be able to stick with the current version!

Last Thursday, Apple released the iOS 17.2 beta for participants in the Apple Developer Program. When we tested the current version of WebXPRT 4 on iOS 17.2, the tests completed without any issues. We then successfully completed tests on iPadOS 17.2 and macOS Sonoma 14.2 with Safari 17.2. Now that we have good reason to believe that the iOS 17.2 release will solve the problem, sticking with the current WebXPRT 4 build will maximize continuity and minimize disruption for WebXPRT users.

Apple has not yet published a public release date for iOS/iPadOS/Safari 17.2. Based on past development schedules, it seems likely that they will release it between mid-November and early December, but that’s simply our best guess. Until then, users who want to test WebXPRT 4 on devices running iOS 17/17.1, iPadOS 17/17.1, or macOS Sonoma with Safari 17/17.1 will need to update those devices to the iOS/iPadOS/Safari 17.2 betas via the Apple Developer Program.

To help Apple users better navigate testing until the public 17.2 release, we’ve added a function to the current WebXPRT 4 start page that will notify users if they need to update their operating system before testing. (A simple sketch of how such a check might work appears below.)
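
We haven’t published the start-page code itself, but here is a minimal sketch, in TypeScript, of what a check like this could look like, assuming it parses the Safari version from the browser’s user-agent string. The function name, the parsing approach, and the example user-agent string are illustrative assumptions, not WebXPRT 4’s actual implementation.

```typescript
// Hypothetical sketch of a start-page version check like the one described
// above. This is NOT WebXPRT 4's actual code; the user-agent parsing here
// is an assumption for illustration only.

function needsUpdateForWebXPRT4(userAgent: string): boolean {
  // Safari reports its version as "Version/<major>.<minor>" in the UA string.
  const match = userAgent.match(/Version\/(\d+)\.(\d+)/);
  if (!match) {
    return false; // not Safari, or no version reported; show no warning
  }
  const major = Number(match[1]);
  const minor = Number(match[2]);
  // Safari/iOS/iPadOS 17.0 and 17.1 hit the known issue; 17.2 does not.
  return major === 17 && minor < 2;
}

// Example: a device reporting Safari 17.1 would be told to update.
const exampleUA =
  "Mozilla/5.0 (iPhone; CPU iPhone OS 17_1 like Mac OS X) " +
  "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Mobile/15E148 Safari/604.1";
console.log(needsUpdateForWebXPRT4(exampleUA)); // prints: true
```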

We appreciate everyone’s patience as we worked to find a solution to this problem! If you have any questions or concerns about WebXPRT 4, please let us know.

Justin
