Tag Archives: white paper

The Introduction to CloudXPRT white paper is now available!

Today, we published the Introduction to CloudXPRT white paper. The paper provides an overview of our latest benchmark and consolidates the CloudXPRT-related information we’ve published in the XPRT blog over the past several months. It describes the CloudXPRT workloads, explains how to choose and download installation packages and how to submit CloudXPRT results for publication, and discusses possibilities for additional development in the coming months.

CloudXPRT is one of the most complex tools in the XPRT family, and there are more CloudXPRT-related topics to discuss than we could fit in this first paper. In future white papers, we will discuss in greater detail each of the benchmark workloads, the range of test configuration options, results reporting, and methods for analysis.

We hope that Introduction to CloudXPRT will give interested testers a solid foundation on which they can build. Moving forward, we will provide links to the paper in the Helpful Info box on CloudXPRT.com and in the CloudXPRT section of our XPRT white papers page.

If you have any questions about CloudXPRT, please let us know!

Justin

The Introduction to AIXPRT white paper is now available!

Today, we published the Introduction to AIXPRT white paper. The paper serves as an overview of the benchmark and a consolidation of AIXPRT-related information that we’ve published in the XPRT blog over the past several months. For folks who are completely new to AIXPRT and veteran testers who need to brush up on pre-test configuration procedures, we hope this paper will be a quick, one-stop reference that helps reduce the learning curve.

The paper describes the AIXPRT toolkits and workloads, and it explains how to adjust key test parameters (batch size, level of precision, number of concurrent instances, and default number of requests), use alternate test configuration files, understand and submit results, and access the source code.
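To give a concrete feel for what adjusting those parameters can look like, here’s a minimal sketch in Python that writes a test configuration file. The field names below (batch_sizes, precision, concurrent_instances, and total_requests) are illustrative assumptions, not the actual AIXPRT schema; the paper and the sample configuration files that ship with each toolkit show the real format.

```python
# Hypothetical sketch only: these field names are assumptions for illustration,
# not the real AIXPRT configuration schema.
import json

config = {
    "batch_sizes": [1, 2, 4, 8],   # inputs processed per inference call
    "precision": "fp32",           # e.g., "fp16" or "int8" where supported
    "concurrent_instances": 2,     # parallel inference streams
    "total_requests": 100,         # inference requests per workload
}

# Save the file wherever your AIXPRT installation expects configurations.
with open("my_test_config.json", "w") as f:
    json.dump(config, f, indent=2)
```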

We hope that Introduction to AIXPRT will prove to be a valuable resource. Moving forward, readers will be able to access the paper from the Helpful Info box on AIXPRT.com and the AIXPRT section of our XPRT white papers page. If you have any questions about AIXPRT, please let us know!

Justin

Odds and ends

Today, we want to share quick updates on a few XPRT topics.

In case you missed yesterday’s announcement, the CrXPRT 2 Community Preview (CP) is now available. BenchmarkXPRT Development Community members can access the preview using a direct link we’ve posted on the CrXPRT tab in the XPRT Members’ Area (login required). This tab also provides a link to the CrXPRT 2 CP user manual. You can find a summary of what’s new with CrXPRT 2 in last week’s blog. During the preview period, we allow testers to publish CP test scores. Note that CrXPRT 2 overall performance test scores and battery life measurements are not comparable to those from CrXPRT 2015.

We’ll soon be publishing our first AIXPRT white paper, Introduction to AIXPRT. It will summarize the AIXPRT toolkits and workloads, and it will explain how to adjust test parameters such as batch size, level of precision, and number of concurrent instances; how to use alternate test configuration files; and how to understand test results. When the paper is available, we’ll post it on the XPRT white papers page and make an announcement here in the blog.

Finally, in response to decreased downloads and usage of BatteryXPRT, we have ended support for the benchmark. We’re always monitoring usage of the XPRTs so that we can better direct our resources to the current needs of users. We’ve removed BatteryXPRT from the Google Play Store, but it is still available for download on BatteryXPRT.com.

If you have any questions about CrXPRT 2, AIXPRT, or BatteryXPRT, please let us know!

Justin

Transparent goals

Recently, Forbes published an article discussing a new report on phone battery life from Which?, a UK consumer advocacy group. In the report, Which? states that it tested the talk-time battery life of 50 phones from five brands. During the tests, phones from three of the brands lasted longer than their manufacturers claimed, while phones from a fourth brand fell short of their manufacturer’s claims by about five percent. The fifth brand’s published battery life numbers were 18 to 51 percent higher than the times Which? recorded in its tests.

Folks can read the article for more details about the tests and the brands. While the report raises some interesting questions, and the article provides readers with brief test methodology descriptions from Which? and one manufacturer, we don’t know enough about the tests to say which set of claims is correct. Any number of variables related to test workloads or device configuration settings could significantly affect the results. Both parties may be using sound benchmarking principles in good faith, but their test methodologies may not be comparable. As it is, we simply don’t have enough information to evaluate the study.

Whether the issue is battery life or any other important device spec, information conflicts, such as the one that the Forbes article highlights, can leave consumers scratching their heads, trying to decide which sources are worth listening to. At the XPRTs, we believe that the best remedy for this type of problem is to provide complete transparency into our testing methodologies and development process. That’s why our lab techs verify all the hardware specs for each XPRT Weekly Tech Spotlight entry. It’s why we publish white papers explaining the structure of our benchmarks in detail, as well as how the XPRTs calculate performance results. It’s also why we employ an open development community model and make each XPRT’s source code available to community members. When we’re open about how we do things, it encourages the kind of honest dialogue between vendors, journalists, consumers, and community members that serves everyone’s best interests.

If you love tech and share that same commitment to transparency, we’d love for you to join our community, where you can access XPRT source code and previews of upcoming benchmarks. Membership is free for anyone with a verifiable corporate affiliation. If you have any questions about membership or the registration process, please feel free to ask.

Justin

The Exploring WebXPRT 3 white paper is now available

Today, we published the Exploring WebXPRT 3 white paper. The paper describes the differences between WebXPRT 3 and WebXPRT 2015, including the changes we made to the harness and to the structure of the six performance test workloads. We also explain the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication. In addition, readers will find detail about the third-party functions and libraries that WebXPRT uses during the HTML5 capability checks and performance workloads.
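As a taste of what automated testing can look like, here’s a rough Python-and-Selenium sketch that starts a run and waits for a score. To be clear, this is not the automation interface the paper documents; the element IDs (startBtn, result) and the page behavior are assumptions for illustration, so consult the paper for the supported method.

```python
# Rough sketch: the element IDs below are assumptions, not WebXPRT's real markup.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://www.principledtechnologies.com/benchmarkxprt/webxprt/")
    # Click the control that starts a benchmark run (ID is an assumption).
    WebDriverWait(driver, 30).until(
        EC.element_to_be_clickable((By.ID, "startBtn"))
    ).click()
    # A full run takes several minutes; wait for the score element to appear.
    score = WebDriverWait(driver, 1800).until(
        EC.visibility_of_element_located((By.ID, "result"))
    ).text
    print("Overall score:", score)
finally:
    driver.quit()
```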

Because data collection and privacy concerns are more relevant than ever, we also discuss the WebXPRT data collection mechanisms and our commitment to respecting testers’ privacy. Finally, for readers who may be unfamiliar with the XPRTs, we describe the other benchmark tools in the XPRT family, the role of the BenchmarkXPRT Development Community, and how you can contribute to the XPRTs.

Along with the WebXPRT 3 results calculation white paper and spreadsheet, the Exploring WebXPRT 3 white paper is designed to promote the high level of transparency and disclosure that is a core value of the BenchmarkXPRT Development Community. Both WebXPRT white papers and the results calculation spreadsheet are available on WebXPRT.com and on our XPRT white papers page. If you have any questions about WebXPRT, please let us know, and be sure to check out our other XPRT white papers.

Justin

The WebXPRT 3 results calculation white paper is now available

As we’ve discussed in prior blog posts, transparency is a core value of our open development community. A key part of being transparent is explaining how we design our benchmarks, why we make certain development decisions, and how the benchmarks actually work. This week, to help WebXPRT 3 testers understand how the benchmark calculates results, we published the WebXPRT 3 results calculation and confidence interval white paper.

The white paper explains what the WebXPRT 3 confidence interval is, how it differs from typical benchmark variability, and how the benchmark calculates the individual workload scenario and overall scores. The paper also provides an overview of the statistical techniques WebXPRT uses to translate raw times into scores.
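For readers who like to see the arithmetic, the sketch below illustrates the general shape of that process: each workload score compares the test system’s raw time against a calibration system’s time, and the overall score is the geometric mean of the workload scores. Every number and constant here is made up, and the simple normal-approximation interval at the end stands in for WebXPRT’s actual confidence-interval method; the white paper and its companion spreadsheet contain the real values and calculations.

```python
# Illustrative numbers only; the real constants and calculations are in the
# WebXPRT 3 white paper and its companion spreadsheet.
import math
import statistics

# Made-up raw completion times (ms) on the system under test, plus hypothetical
# calibration-system times that anchor the score scale.
raw_times = {"Photo Enhancement": 820.0, "Organize Album": 640.0,
             "Stock Option Pricing": 310.0, "Encrypt Notes": 540.0,
             "Sales Graphs": 470.0, "Online Homework": 1150.0}
calibration_times = {"Photo Enhancement": 1000.0, "Organize Album": 800.0,
                     "Stock Option Pricing": 400.0, "Encrypt Notes": 600.0,
                     "Sales Graphs": 500.0, "Online Homework": 1200.0}
SCALE = 100.0  # assumed scaling constant

# Lower raw time is better, so each workload score is the calibration time
# divided by the measured time, scaled.
scores = {w: SCALE * calibration_times[w] / raw_times[w] for w in raw_times}

# The overall score is the geometric mean of the six workload scores.
overall = math.exp(statistics.fmean(math.log(s) for s in scores.values()))
print(f"Overall score: {overall:.1f}")

# A simple 95% interval across repeated runs (normal approximation); WebXPRT's
# own confidence-interval method is defined in the paper.
runs = [312.0, 308.5, 315.2, 310.7, 309.9]  # made-up overall scores
mean = statistics.fmean(runs)
sem = statistics.stdev(runs) / math.sqrt(len(runs))
print(f"Across runs: {mean:.1f} +/- {1.96 * sem:.1f}")
```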

To supplement the white paper’s overview of the results calculation process, we’ve also published a spreadsheet that shows the raw data from a sample test run and reproduces the calculations WebXPRT uses.

The paper and spreadsheet are both available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know, and be sure to check out our other XPRT white papers.

Justin
