
Tag Archives: white paper

Exploring the XPRT white paper library

As part of our commitment to publishing reliable, unbiased benchmarks, we strive to make the XPRT development process as transparent as possible. In the technology assessment industry, it’s not unusual for people to claim that any given benchmark contains hidden biases, so we take preemptive steps to address this issue by publishing XPRT benchmark source code, detailed system disclosures and test methodologies, and in-depth white papers. Today, we’re focusing on the XPRT white paper library.

The XPRT white paper library currently contains 21 white papers that we’ve published over the last 12 years. We started publishing white papers to provide XPRT users with more information about how we design our benchmarks, why we make certain development decisions, and how the benchmarks work. If you have questions about any aspect of one of the XPRT benchmarks, the white paper library is a great place to find some answers.

For example, the Exploring WebXPRT 4 white paper describes the design and structure of WebXPRT 4, including detailed information about the benchmark’s harness, HTML5 and WebAssembly (WASM) capability checks, and the structure of the performance test workloads. It also includes explanations of the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication.
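To give a rough idea of what scripted testing can look like, here is a minimal Selenium (Python) sketch that opens WebXPRT and starts a run. The element ID it clicks is a hypothetical placeholder rather than the benchmark's actual markup; the white paper documents the automation options WebXPRT 4 actually supports.

```python
# Minimal sketch of driving a WebXPRT run with Selenium (Python).
# The start-button ID below is an assumption, not WebXPRT's real markup;
# consult the Exploring WebXPRT 4 white paper for the documented automation options.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

WEBXPRT_URL = "https://www.webxprt.com"  # landing page mentioned in this post

driver = webdriver.Chrome()              # any Selenium-supported browser works
driver.get(WEBXPRT_URL)

# Wait for the page to load, then start the benchmark run.
wait = WebDriverWait(driver, 60)
start_button = wait.until(EC.element_to_be_clickable((By.ID, "start-test")))
start_button.click()

# A full run takes several minutes; wait for the results view before reading
# the overall score, then call driver.quit() to close the browser.
```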

The companion WebXPRT 4 results calculation white paper explains the formulas that WebXPRT 4 uses to calculate the individual workload scenario scores and overall score, provides an overview of the statistical techniques WebXPRT uses to translate raw timings into scores, and explains the benchmark’s confidence interval and how it differs from typical benchmark variability. To supplement the white paper’s discussion of the results calculation process, we published a results calculation spreadsheet that shows the raw data from a sample test run and reproduces the exact calculations WebXPRT uses to produce test scores.
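To make those ideas concrete, here is a rough Python sketch of a calibration-normalized, geometric-mean scoring scheme with a simple 95 percent confidence interval. The workload names, timings, baselines, and scaling are invented for illustration; the white paper and the results calculation spreadsheet give the exact values and formulas WebXPRT 4 uses.

```python
# Illustrative sketch of calibration-normalized scoring with a geometric mean
# and a 95% confidence interval. All names and numbers are placeholders; see
# the WebXPRT 4 results calculation white paper for the real formulas.

import math
import statistics

# Hypothetical per-iteration raw timings in milliseconds, keyed by workload.
raw_timings_ms = {
    "Workload A": [1200.0, 1185.0, 1210.0],
    "Workload B": [ 950.0,  940.0,  965.0],
    "Workload C": [1800.0, 1790.0, 1815.0],
}

# Hypothetical timings from a fixed calibration (reference) system.
calibration_ms = {
    "Workload A": 1300.0,
    "Workload B": 1000.0,
    "Workload C": 1900.0,
}

def workload_score(workload: str, iteration: int) -> float:
    """Faster-than-calibration timings yield scores above 100."""
    return 100.0 * calibration_ms[workload] / raw_timings_ms[workload][iteration]

def overall_score(iteration: int) -> float:
    """Geometric mean of the per-workload scores for one iteration."""
    scores = [workload_score(w, iteration) for w in raw_timings_ms]
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

iteration_count = len(next(iter(raw_timings_ms.values())))
overall = [overall_score(i) for i in range(iteration_count)]

mean_score = statistics.mean(overall)
# Simple 95% confidence interval of the mean (normal approximation, 1.96
# standard errors); the white paper spells out the benchmark's own method.
half_width = 1.96 * statistics.stdev(overall) / math.sqrt(len(overall))
print(f"Overall score: {mean_score:.1f} +/- {half_width:.1f}")
```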

We hope that the XPRT white paper library will prove to be a useful resource for you. If you have questions about any of our white papers, or suggestions for topics that you’d like us to cover in possible future white papers, please let us know!

Justin

Celebrating 10 years of WebXPRT!

We’re excited to announce that it’s been 10 years since the initial launch of WebXPRT! In early 2013, we introduced WebXPRT as a unique browser performance benchmark in a market space that was already crowded with a variety of specialized measurement tools. Our goal was to offer a benchmark that could compare the performance of almost any web-enabled device, using scenarios created to mirror real-world tasks. We wanted it to be a free, easily accessible, easy-to-run, useful, and appealing testing option for OEM labs, vendors, and the tech press.

When we look back on the last 10 years of WebXPRT, we can’t help but conclude that our efforts have been successful. Since those early days, WebXPRT has grown from humble beginnings into a worldwide industry standard. Hundreds of tech press publications have used WebXPRT in thousands of articles and reviews, and testers have now run the benchmark well over 1.1 million times.

Below, I’ve listed some of the WebXPRT team’s accomplishments over the last decade. If you’ve been following WebXPRT from the beginning, this may all be familiar, but if you’re new to the community, it may be interesting to see some of the steps that contributed to making WebXPRT what it is today.

In future blog posts, we’ll look at how the number of WebXPRT runs has grown over time, and how WebXPRT use has grown among OEMs, vendors, and the tech press worldwide. Do you have any thoughts that you’d like to share from your WebXPRT testing experience? If so, let us know!

Justin

Looking back on 2022 with the XPRTs

Around the beginning of each new year, we like to take the opportunity to look back and summarize the XPRT highlights from the previous year. Readers of our newsletter are familiar with the stats and updates we include each month, but for our blog readers who don’t receive the newsletter, we’ve compiled some highlights from 2022 below.

Benchmarks
In the past year, we released WebXPRT 4 and the CloudXPRT v1.2 update package.

XPRTs in the media
Journalists, advertisers, and analysts referenced the XPRTs thousands of times in 2022. It’s always rewarding to know that the XPRTs have proven to be useful and reliable assessment tools for technology publications around the world. Media sites that used the XPRTs in 2022 include AnandTech, Android Authority, Benchlife.info (China), BodNara (South Korea), ComputerBase (Germany), DISKIDEE (Belgium), eTeknix, Expert Reviews, Gadgets 360, Hardware.info (The Netherlands), Hardware Zone (Singapore), ITC.ua (Ukraine), ITmedia (Japan), Itndaily.ru (Russia), Notebookcheck, PCMag, PC-Welt (Germany), PCWorld, TechPowerUp, Tom’s Guide, TweakTown, and ZOL.com (China).

Downloads and confirmed runs
In 2022, we had more than 10,800 benchmark downloads and 183,300 confirmed runs. Users have run our most popular benchmark, WebXPRT, more than 1,135,500 times since its debut in 2013! WebXPRT continues to be a go-to, industry-standard performance benchmark for OEM labs, vendors, and leading tech press outlets around the globe.

XPRT media, tools, and publications
Part of our mission with the XPRTs is to produce tools and materials that help testers better understand the ins and outs of benchmarking in general and the XPRTs in particular. To help achieve this goal, we published the following in 2022:

We’re thankful for everyone who used the XPRTs, joined the community, and sent questions and suggestions throughout 2022. We’re excited to see what’s in store for the XPRTs in 2023!

Justin

The Exploring WebXPRT 4 white paper is now available

This week, we published the Exploring WebXPRT 4 white paper. It describes the design and structure of WebXPRT 4, including detailed information about the benchmark’s harness, HTML5 and WebAssembly (WASM) capability checks, and changes we’ve made to the structure of the performance test workloads. We explain the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication. The white paper also includes information about the third-party functions and libraries that WebXPRT 4 uses during the HTML5 and WASM capability checks and performance workloads.

The Exploring WebXPRT 4 white paper promotes the high level of transparency and disclosure that is a core value of the BenchmarkXPRT Development Community. We’ve always believed that transparency builds trust, and trust is essential for a healthy benchmarking community. That’s why we involve community members in the benchmark development process and disclose how we build our benchmarks and how they work.

You can find the paper on WebXPRT.com and our XPRT white papers page. If you have any questions about WebXPRT 4, please let us know, and be sure to check out our other XPRT white papers.

Justin

Default requirements for CloudXPRT results submissions

Over the past few weeks, we’ve received questions about whether we require specific test configuration settings for official CloudXPRT results submissions. Currently, testers can adjust up to 12 configuration options for the web microservices workload and three for the data analytics workload. Not all of these options affect testing and results, but a few can drastically change key results metrics and how long it takes to complete a test. Because new CloudXPRT testers may not anticipate those outcomes, and because so many configuration permutations are possible, we’ve established a set of requirements for all future results submissions to our site. Please note that testers are still free to adjust all available configuration options and define service level agreement (SLA) settings as they see fit for their own purposes. The requirements below apply only to results testers want to submit for publication consideration on our site, and to any resulting comparisons.


Web microservices results submission requirement

Starting with the May results submission cycle, all web microservices results submissions must have the workload.cpurequests value, which designates the number of CPU cores the workload assigns to each pod, set to 4. Currently, the benchmark supports values of 1, 2, and 4, with a default of 4. While 1 or 2 CPU cores per pod may be more appropriate for relatively low-end systems or configurations with few vCPUs, a value of 4 suits most datacenter processors, and it often enables cloud service provider (CSP) instances to operate within the benchmark’s default maximum 95th percentile latency SLA of 3,000 milliseconds.

In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
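For testers preparing a submission, a short script like the sketch below can confirm, or set, the required value before a run. It assumes workload.cpurequests appears as a flat, dotted key in config.json; if your copy of the file nests the setting differently, adjust the lookup accordingly.

```python
# Hedged sketch: set workload.cpurequests to 4 in a CloudXPRT web microservices
# config.json before an official results-submission run. The flat, dotted key
# layout is an assumption; adjust if your config file nests the setting.

import json
from pathlib import Path

config_path = Path("config.json")          # path within the benchmark package
config = json.loads(config_path.read_text())

if config.get("workload.cpurequests") != 4:
    config["workload.cpurequests"] = 4     # required value for submissions
    config_path.write_text(json.dumps(config, indent=2))
    print("Updated workload.cpurequests to 4")
else:
    print("workload.cpurequests is already set to 4")
```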


Data analytics results submission requirement

Starting with the May results submission cycle, all data analytics results submissions must have the best reported performance (throughput_jobs/min) correspond to a 95th percentile SLA latency of 90 seconds or less. We have received submissions in which the throughput was extremely high, but the 95th percentile SLA latency was up to 10 times the 90 seconds we recommend in the CloudXPRT documentation. High latency values may be acceptable for the unique purposes of individual testers, but they do not provide a good basis for comparison between clusters under test. For more information about configuration options for the data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
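As a rough illustration of that check, the Python sketch below keeps only the runs whose 95th percentile latency is 90 seconds or less before reporting the best throughput. The data structure and values are hypothetical placeholders rather than CloudXPRT’s actual output format.

```python
# Hedged sketch: pick the best data analytics throughput that still meets the
# 90-second 95th-percentile latency requirement. The run data is a hypothetical
# placeholder, not CloudXPRT's actual output format.

MAX_P95_LATENCY_S = 90.0

# (throughput in jobs/min, 95th percentile latency in seconds) for each run
runs = [
    (42.0, 71.5),
    (55.0, 88.9),
    (61.0, 240.3),   # highest throughput, but violates the latency requirement
]

compliant = [r for r in runs if r[1] <= MAX_P95_LATENCY_S]
if compliant:
    best_throughput, p95 = max(compliant, key=lambda r: r[0])
    print(f"Reportable result: {best_throughput} jobs/min at {p95} s p95 latency")
else:
    print("No run meets the 90-second 95th-percentile latency requirement")
```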

We will update CloudXPRT documentation to make sure that testers know to use the default configuration settings if they plan to submit results for publication. If you have any questions about CloudXPRT or the CloudXPRT results submission process, please let us know.

Justin

Coming soon: a white paper about the CloudXPRT web microservices workload

Soon, we’ll be expanding our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the workload in much greater detail.

In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.

As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.

We hope that the upcoming Overview of the CloudXPRT Web Microservices Workload paper will serve as a go-to resource for CloudXPRT testers, and will answer any questions you have about the workload. Once it goes live, we’ll provide links in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.

If you have any questions, please let us know!

Justin
