We’re excited to have recently passed an important milestone: one million XPRT runs and downloads! Most importantly, that huge number does not just reflect past successes. As the chart below illustrates, XPRT use has grown steadily over the years. In 2021, we recorded, on average, more XPRT runs and downloads in a single month (23,395) than we recorded during the entire first year we tracked these stats (17,051).
We reached one million runs and downloads in about seven and a half years. At the current rate, we’ll reach two million in roughly three and a half more years. With WebXPRT 4 on the way, there’s a good chance we can reach that mark even sooner!
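For anyone curious about the math behind that projection, here’s a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted above and assumes, hypothetically, that the 2021 monthly average holds steady:

```python
# Back-of-the-envelope projection using the figures quoted in this post.
# Assumption: the 2021 monthly average of runs and downloads holds steady.
MONTHLY_AVERAGE_2021 = 23_395   # average runs + downloads per month in 2021
FIRST_YEAR_TOTAL = 17_051       # total from the first year of tracking
NEXT_MILESTONE = 1_000_000      # the second million, counted from today

yearly_rate = MONTHLY_AVERAGE_2021 * 12
years_to_next_milestone = NEXT_MILESTONE / yearly_rate

print(f"One 2021 month ({MONTHLY_AVERAGE_2021:,}) exceeds the entire "
      f"first year ({FIRST_YEAR_TOTAL:,})")
print(f"Years to the next million at the current rate: "
      f"{years_to_next_milestone:.1f}")  # about 3.6 years
```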
As always, we’re grateful to all the testers who have helped us reach this milestone. If you have any questions or comments about using any of the XPRTs to test your gear, let us know!
As we’ve been working on improvements and updates for CloudXPRT, we’ve been using feedback from community members to determine which changes will help testers most in the short term. To make some of those changes available to the community as soon as possible, we plan to release a beta version of CloudXPRT v1.1 in the coming weeks.
During the v1.1 beta period, the CloudXPRT v1.01 installation packages on CloudXPRT.com and our GitHub repository will continue to include the officially supported version of CloudXPRT. However, interested testers can experiment with the v1.1 beta version in new environments while we finalize the build for official release.
The CloudXPRT v1.1 beta includes the following primary changes:
We’re adding support for Ubuntu 20.04.2 or later, the number one request we’ve received.
We’re consolidating and standardizing the installation packages for both workloads. Instead of one package for the data analytics workload and four separate packages for the web microservices workload, each workload will have two installation packages: one for all on-premises testing and one for testing with all three supported CSPs.
We’re incorporating Terraform to help create and configure VMs, which will help to prevent situations in which testers do not allocate enough storage per VM prior to testing.
We use Kubespray to manage Kubernetes clusters, and Kubespray uses Calico as the default network plug-in. Calico has not always worked well for CloudXPRT in CSP environments, so we’re replacing Calico with Weave (see the sketch after this list).
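As a concrete illustration of that last item, here’s a minimal sketch (not taken from the CloudXPRT source, and with a hypothetical inventory path) of how a Kubespray-based deployment can select Weave instead of Calico. Kubespray reads its CNI choice from the kube_network_plugin setting in the cluster’s group_vars:

```python
# Minimal sketch: point a Kubespray cluster config at Weave instead of Calico.
# The inventory path below is hypothetical; adjust it to your Kubespray layout.
# Requires PyYAML (pip install pyyaml).
import yaml

CONFIG = "inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml"

with open(CONFIG) as f:
    settings = yaml.safe_load(f)

# Kubespray selects its network plug-in via this key; "calico" is the
# default, and "weave" is one of the supported alternatives.
settings["kube_network_plugin"] = "weave"

with open(CONFIG, "w") as f:
    yaml.safe_dump(settings, f, default_flow_style=False)
```

In practice, CloudXPRT’s setup scripts drive Kubespray for you; this sketch only shows where the plug-in choice lives in a Kubespray deployment.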
At the start of the beta period, we will share a link to the v1.1 beta download page here in the blog. You’ll be free to share this link. To avoid confusion, we will not add the beta download to the v1.01 downloads available on CloudXPRT.com.
As the beta release date approaches, we’ll share more details about timelines, access, and any additional changes to the benchmark. If you have any questions about the upcoming CloudXPRT v1.1 beta, please let us know!
CloudXPRT is undoubtedly the most complex tool in the XPRT family of benchmarks. To run the cloud-native benchmark’s multiple workloads across different hardware and software platforms, testers need two things: (1) at least a passing familiarity with a wide range of cloud-related toolkits, and (2) an understanding that changing even one test configuration variable can affect test results. While the complexity of CloudXPRT makes it a powerful and flexible tool for measuring application performance on real-world IaaS stacks, it also creates a steep learning curve for new users.
Benchmark setup and configuration can involve a number of complex steps, and the corresponding instructions should be thorough, unambiguous, and intuitive to follow. For all of the XPRT tools, we strive to publish documentation that provides quick, easy-to-find answers to the questions users might have. Community members have asked us to improve the clarity and readability of the CloudXPRT setup, configuration, and individual workload documentation. In response, we are working to create more—and better—CloudXPRT documentation.
If the benchmark’s complexity seems intimidating, know that helping you overcome that learning curve is one of our highest priorities. In the coming weeks and months, we’ll evaluate all of our CloudXPRT documentation, particularly from the perspective of new users, and we’ll release more information about the new documentation as it becomes available.
We also want to remind you of some of the existing CloudXPRT resources. We encourage everyone to check out the Introduction to CloudXPRT and Overview of the CloudXPRT Web Microservices Workload white papers. (Note that we’ll soon be publishing a paper on the benchmark’s data analytics workload.) Also, a couple of weeks ago, we published the CloudXPRT learning tool, which we designed to serve as an information hub for common CloudXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in CloudXPRT find the answers they need as quickly as possible.
Thanks to all who let us know that there was room for improvement in the CloudXPRT documentation. We rely on that kind of feedback and always welcome it. If you have any questions or suggestions regarding CloudXPRT or any of the other XPRTs, please let us know!
This week, we’re sharing news on two topics that we’ve discussed here in the blog over the past several months: CloudXPRT v1.01 and a potential AIXPRT OpenVINO update.
Last week, we announced that we were very close to releasing an updated CloudXPRT build (v1.01) with two minor bug fixes, an improved post-test results processing script, and an adjustment to one of our test configuration recommendations. Our testing and prep are complete, and the new version is live in the CloudXPRT GitHub repository and on our site!
None of the v1.01 changes affect performance or test results, so scores from the new build are comparable to those from previous CloudXPRT builds. If you’d like to know more about the changes, take a look at last week’s blog post.
The AIXPRT OpenVINO update
In late July, we discussed our plans to update the AIXPRT OpenVINO packages with OpenVINO 2020.3 Long-Term Support (LTS). While there are no known problems with the existing AIXPRT OpenVINO package, the LTS version targets environments that benefit from maximum stability and don’t require a constant stream of new tools and feature changes, so we thought it would be well suited for a benchmark like AIXPRT.
We initially believed that the update process would be relatively simple, and that we’d be able to release a new AIXPRT OpenVINO package in September. However, we’ve discovered that the process is involved enough to require substantial low-level recoding. At this time, it’s difficult to estimate when the updated build will be ready for release. For any testers looking forward to the update, we apologize for the delay.
If you have any questions or comments about these or any other XPRT-related topics, please let us know!
WebXPRT continues to be the most widely used XPRT benchmark, with just over 625,000 runs to date. Since the first WebXPRT release in 2013, WebXPRT has been popular with device manufacturers, developers, tech journalists, and consumers because it’s easy to run, it runs on almost anything with a web browser, and its workloads reflect the types of web-based tasks that people are likely to encounter on a daily basis.
We realize that many folks who follow the XPRTs may be unaware of the wide variety of WebXPRT uses that we frequently read about in the tech press. Today, we thought it would be interesting to bring the numbers to life. In addition to dozens of device reviews, here’s a sample of WebXPRT 3 mentions over the past few weeks.
AnandTech used WebXPRT to compare Firefox, Edge Chromium, Edge Classic, Opera, Chrome, and Internet Explorer browser performance.
Intel used WebXPRT test data in promotional material for their line of 11th Gen (Tiger Lake) Core processors.
PCMag used WebXPRT (and CrXPRT) to measure the performance of the Acer Chromebook Spin 713.
PCTempo (Italy) used WebXPRT to compare performance across nine popular browsers.
As we plan for the next version of WebXPRT, we want to be sure we build a benchmark that continues WebXPRT’s legacy of relevant workloads, ease of use, and broad compatibility. We know what works well in our lab, but to build a benchmark that meets the needs of a diverse group of users all around the world, it’s important that we hear from all types of testers. We recently discussed some of the new technologies that we’re considering for WebXPRT 4, so please don’t hesitate to let us know what you think about those proposals, or send any additional ideas you may have!
Soon, we’ll be expanding our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the workload in much greater detail.
In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.
As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.
We hope that the upcoming Overview of the CloudXPRT Web Microservices Workload paper will serve as a go-to resource for CloudXPRT testers, and will answer any questions you have about the workload. Once it goes live, we’ll provide links in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.