How to submit WebXPRT results for publication

It’s been a while since we last discussed the process for submitting WebXPRT results to be considered for publication in the WebXPRT results browser and the WebXPRT Processor Comparison Chart, so we thought we’d offer a refresher.

Unlike sites that publish all results they receive, we hand-select results from internal lab testing, user submissions, and reliable tech media sources. In each case, we evaluate whether the score is consistent with general expectations. For sources outside of our lab, that evaluation includes confirming that there is enough detailed system information to help us determine whether the score makes sense. We do this for every score on the WebXPRT results page and the general XPRT results page. All WebXPRT results we publish automatically appear in the processor comparison chart as well.

Submitting your score is quick and easy. At the end of the WebXPRT test run, click the Submit your results button below the overall score, complete the short submission form, and click Submit again. The screenshot below shows how the form would look if I submitted a score at the end of a WebXPRT 3 run on my personal system.

After you submit your score, we’ll contact you to confirm how we should display the source. You can choose one of the following:

  • Your first and last name
  • “Independent tester” (for those who wish to remain anonymous)
  • Your company’s name, provided that you have permission to submit the result in their name. To use a company name, we ask that you provide a valid company email address.


We will not publish any additional information about you or your company without your permission.

We look forward to seeing your score submissions, and if you have suggestions for the processor chart or any other aspect of the XPRTs, let us know!

Justin

Publishing CloudXPRT results from testing on pre-production gear

We recently received questions about whether we accept CloudXPRT results submissions from testing on pre-production gear, and how we would handle any differences between results from pre-production and production-level tests.  

To answer the first question: we are not opposed to pre-production results submissions. We realize that vendors often want to include benchmark results in launch-oriented marketing materials they release before their hardware or software is publicly available. To help them do so, we’re happy to consider pre-production submissions on a case-by-case basis. All such submissions must follow the normal CloudXPRT results submission process and undergo vetting by the CloudXPRT Results Review Group according to the standard review and publication schedule. If we decide to publish pre-production results on our site, we will clearly note their pre-production status.

In response to the second question, the CloudXPRT Results Review Group will handle any challenges to published results, or perceived discrepancies between pre-production and production-level results, on a case-by-case basis. We do not currently have a formal challenge process; anyone who would like to initiate a challenge or share comments or concerns about a result should contact the review group at benchmarkxprtsupport@principledtechnologies.com. Our primary concern is always to ensure that published results accurately reflect the performance characteristics of production-level hardware and software. If we need to develop more formal policies in the future, we’ll do so, but we want to keep things as simple as possible.

If you have any questions about the CloudXPRT results submission process, please let us know!

Justin

WebXPRT passes the 750,000-run milestone!

We’re excited to see that users have successfully completed over 750,000 WebXPRT runs! If you’ve run WebXPRT in any of the 654 cities and 68 countries from which we’ve received complete test data—including newcomers Belize, Cambodia, Croatia, and Pakistan—we’re grateful for your help. We could not have reached this milestone without you!

As the chart below illustrates, WebXPRT use has grown steadily over the years. We now record, on average, almost twice as many WebXPRT runs in one month as we recorded in the entirety of our first year. In addition, with over 82,000 runs to date in 2021, there are no signs that growth is slowing.

Developing a new benchmark is never easy, and the obstacles multiply when you attempt to create a cross-platform benchmark, such as WebXPRT, that will run on a wide variety of devices. Establishing trust with the benchmarking community is another challenge. Transparency, consistency, and technical competency on our part are critical factors in building that trust, but the people who take time out of their busy schedules to run the benchmark for the first time also play a role. We thank all of the manufacturers, OEM labs, and members of the tech press who decided to give WebXPRT a try, and we look forward to your input as we continue to improve WebXPRT in the years to come. 

If you have any questions or comments about WebXPRT, we’d love to hear from you!

Justin

Considering a battery life test for WebXPRT 4

A few weeks ago, we discussed the beginnings of a WebXPRT 4 development plan, and asked for reader feedback about potential workload changes. So far, the two most common feedback topics have been the possible addition of a WebAssembly workload, and the feasibility of including a browser-based battery life test. Today, we discuss what a WebXPRT 4 battery life test would look like, and some of the challenges we’d have to overcome to make it a reality.

Battery life tests fall into two primary categories: simple rundown tests and performance-weighted tests. Simple rundown tests measure battery life during extended idle periods, looped video playback, and other repetitive activities, but they do not reflect the wide-ranging mix of tasks that characterizes a typical day for most users. While they can be useful for very specific apples-to-apples comparisons, these tests have limited value when it comes to giving consumers a realistic estimate of the battery life they would experience in everyday use.

In contrast, performance-weighted battery life tests, such as the one in CrXPRT 2, attempt to reflect real-world usage. The CrXPRT battery life test simulates common daily usage patterns for Chromebooks by including all the productivity workloads from the performance test, plus video playback, audio playback, and gaming scenarios. It also includes periods of wait/idle time. We believe this mixture of diverse activity and idle time better represents typical real-life behavior patterns. This makes the resulting estimated battery life much more helpful for consumers who are trying to match a device’s capabilities with their real-world needs.
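
To make the distinction concrete, here is a minimal TypeScript sketch of a performance-weighted rundown loop. It is purely illustrative (the workload functions are hypothetical placeholders, not CrXPRT code), but it shows the basic shape: alternate realistic workloads with idle time until the battery runs out, and report the elapsed time.

```typescript
// Hypothetical sketch of a performance-weighted battery rundown loop.
// All workload functions are placeholders, not actual CrXPRT APIs.

const IDLE_MS = 60_000; // simulated wait/idle period between activity blocks

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function runProductivityWorkloads(): Promise<void> {
  // Placeholder: the productivity workloads from the performance test.
}

async function playMediaAndGames(): Promise<void> {
  // Placeholder: video playback, audio playback, and gaming scenarios.
}

function batteryExhausted(): boolean {
  // Placeholder: in a browser, the tester typically verifies this manually.
  return false;
}

async function performanceWeightedRundown(): Promise<number> {
  const start = Date.now();
  while (!batteryExhausted()) {
    await runProductivityWorkloads(); // realistic work...
    await playMediaAndGames();        // ...plus media and gaming...
    await sleep(IDLE_MS);             // ...plus idle time, like a real day
  }
  return Date.now() - start; // elapsed time becomes the battery life estimate
}
```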

From a technical standpoint, WebXPRT’s cross-platform nature presents us with several challenges that we did not face while developing the CrXPRT battery life test for Chrome OS. While the WebXPRT performance tests run in almost any browser, cross-browser differences in battery life reporting may restrict the battery life test to a single browser. For instance, Mozilla has deprecated the Battery Status API in Firefox, and we’re not yet sure whether another approach would work there. If a WebXPRT 4 battery life test supported only a single browser, such as Chrome or Safari, would you still be interested in using it? Please let us know.
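
For those curious about the plumbing, the check we would make in a Chromium-based browser looks roughly like the sketch below. The Battery Status API exposes navigator.getBattery(), which is not in TypeScript’s standard DOM typings, so this sketch declares a minimal interface for it.

```typescript
// Minimal sketch of reading battery state via the Battery Status API.
// Available in Chromium-based browsers; Firefox has deprecated it.

interface BatteryManager extends EventTarget {
  level: number; // 0.0 (empty) to 1.0 (full)
  charging: boolean;
}

declare global {
  interface Navigator {
    getBattery?: () => Promise<BatteryManager>;
  }
}

async function readBattery(): Promise<void> {
  if (!navigator.getBattery) {
    console.log("Battery Status API is unavailable in this browser.");
    return;
  }
  const battery = await navigator.getBattery();
  console.log(`Charging: ${battery.charging}, level: ${battery.level * 100}%`);
  battery.addEventListener("levelchange", () => {
    console.log(`Battery level is now ${battery.level * 100}%`);
  });
}

readBattery();

export {}; // make this file a module so the global augmentation compiles
```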

A browser-based battery life workflow also presents other challenges that we do not face in native client applications such as CrXPRT:

  • A browser-based battery life test would require the user to check the starting and ending battery capacities, with no way for the app to independently verify data accuracy.
  • The battery life test could require more babysitting in the event of network issues. We can catch network failures and try to handle them by reporting periods of network disconnection (see the sketch after this list), but those interruptions could still influence the measured battery life.
  • The factors above could make it difficult to achieve repeatability. One way to address that problem would be to run the test in a standardized lab environment with a steady internet connection, but a long list of standardized environmental requirements would make the battery life test less attractive and less accessible to many testers.
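
On the network point, the standard online and offline events give a browser test a simple way to log disconnection periods. Here is a rough, hypothetical sketch (not WebXPRT code) of what that bookkeeping might look like:

```typescript
// Hypothetical sketch: logging network disconnection periods during a
// long-running browser test via the standard online/offline events.

interface Outage {
  start: number; // timestamp (ms) when the browser went offline
  end?: number;  // timestamp (ms) when connectivity returned
}

const outages: Outage[] = [];

window.addEventListener("offline", () => {
  outages.push({ start: Date.now() });
});

window.addEventListener("online", () => {
  const current = outages[outages.length - 1];
  if (current && current.end === undefined) {
    current.end = Date.now();
  }
});

// At the end of the run, report total disconnected time so that anyone
// reviewing the result can judge whether outages skewed the measurement.
function totalOutageMs(now: number = Date.now()): number {
  return outages.reduce((sum, o) => sum + ((o.end ?? now) - o.start), 0);
}
```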

Our intention with today’s blog is not to make a WebXPRT 4 battery life test seem like an impossibility. Rather, we want to share our perspective on what the test might look like, and describe some of the challenges and considerations in play. If you have thoughts about battery life testing, or experience with battery life APIs in one or more of the major browsers, we’d love to hear from you!

Justin

The CloudXPRT v1.1 beta is on the way

As we’ve been working on improvements and updates for CloudXPRT, we’ve been using feedback from community members to determine which changes will help testers most in the short term. To make some of those changes available to the community as soon as possible, we plan to release a beta version of CloudXPRT v1.1 in the coming weeks.

During the v1.1 beta period, the CloudXPRT v1.01 installation packages on CloudXPRT.com and in our GitHub repository will remain the officially supported version of CloudXPRT. However, interested testers can experiment with the v1.1 beta version in new environments while we finalize the build for official release.

The CloudXPRT v1.1 beta includes the following primary changes:

  • We’re adding support for Ubuntu 20.04.2 or later, the number one request we’ve received.
  • We’re consolidating and standardizing the installation packages for both workloads. Instead of one package for the data analytics workload and four separate packages for the web microservices workload, each workload will have two installation packages: one for all on-premises testing and one for testing with all three supported CSPs.
  • We’re incorporating Terraform to help create and configure VMs, which will help prevent situations in which testers do not allocate enough storage per VM prior to testing.
  • We use Kubespray to manage Kubernetes clusters, and Kubespray uses Calico as its default network plug-in. Calico has not always worked well for CloudXPRT in CSP environments, so we’re replacing it with Weave.


At the start of the beta period, we will share a link to the v1.1 beta download page here in the blog. You’ll be free to share this link. To avoid confusion, we will not add the beta download to the v1.01 downloads available on CloudXPRT.com.

As the beta release date approaches, we’ll share more details about timelines, access, and any additional changes to the benchmark. If you have any questions about the upcoming CloudXPRT v1.1 beta, please let us know!

Justin

We’ve fixed an installation bug in the CloudXPRT Data Analytics Workload package

Yesterday, we published an updated CloudXPRT Data Analytics workload package that fixes a problem during the package installation process. CloudXPRT uses the Helm utility, which serves as a package manager for the Kubernetes container orchestration system. Helm accesses files in a default repository, and the version of Helm that we originally used with CloudXPRT tries to access files that are no longer available. We fixed the problem by updating the code to use the latest version of Helm.

This update does not change how the benchmark workload runs, and has no impact on benchmark results. We apologize if this bug caused headaches for any testers during installation, and we appreciate your patience as we worked on a fix.

As a reminder for testers interested in experimenting with the CloudXPRT Data Analytics workload, the Overview of the CloudXPRT Data Analytics Workload paper is now available. You can find links to the paper and other resources in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.

If you have any questions, or have encountered any obstacles during testing, please let us know!

Justin
