Soon, we’ll be publishing
a CloudXPRT white paper that focuses on the benchmark’s data analytics
workload. We summarized the workload in the Introduction to CloudXPRT white paper, but the new paper, like the earlier Overview of the CloudXPRT Web Microservices Workload paper, will discuss the data analytics workload in much greater detail.
In addition to
providing practical information about the installation package and minimum
system requirements for the data analytics workload, the paper will describe
test configuration variables, structural components, task workflows, and test
metrics. It will also include guidance on interpreting test results and submitting
them for publication.
As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family,
with no shortage of topics to explore. Possible future topics include the
impact of adjusting specific test configuration options, recommendations for
results reporting, and methods for results analysis. If there are specific
topics that you’d like us to address in future white papers, please feel free
to send us your ideas!
We hope that the
upcoming Overview of the CloudXPRT Data Analytics Workload paper
will serve as a go-to resource for CloudXPRT testers, and will answer any
questions you have about the workload. Once it goes live, we’ll provide links
in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.
If you have any questions, please let us know!
This week, we’re sharing news on two topics that we’ve discussed
here in the blog over the past several months: CloudXPRT v1.01 and a potential
AIXPRT OpenVINO update.
Last week, we announced that we were very close to releasing an
updated CloudXPRT build (v1.01) with two minor bug fixes, an improved post-test
results processing script, and an adjustment to one of our test configuration
recommendations. Our testing and prep are complete, and the new version is live
in the CloudXPRT GitHub repository and on our site!
None of the v1.01
changes affect performance or test results, so scores from the new build are
comparable to those from previous CloudXPRT builds. If you’d like to know more
about the changes, take a look at last week’s blog post.
The AIXPRT OpenVINO update
In late July, we discussed our plans to update the AIXPRT OpenVINO packages
with OpenVINO 2020.3 Long-Term Support (LTS). While there are no
known problems with the existing AIXPRT OpenVINO package, the LTS version
targets environments that benefit from maximum stability and don’t require a
constant stream of new tools and feature changes, so we thought it would be
well suited for a benchmark like AIXPRT.
We initially believed that
the update process would be relatively simple, and we’d be able to release a
new AIXPRT OpenVINO package in September. However, we’ve discovered that the
process is involved enough to require substantial low-level recoding. At this
time, it’s difficult to estimate when the updated build will be ready for
release. For any testers looking forward to the update, we apologize for the delay.
If you have any questions or comments about
these or any other XPRT-related topics, please let us know!
We want to let CloudXPRT testers know that we’re close to
releasing an updated version (build 1.01) with two minor bug fixes, an improved
post-test results processing script, and an adjustment to one of our test
configuration recommendations. None of these changes will affect performance or
test results, so scores from previous CloudXPRT builds will be comparable to
those from the new build.
The most significant changes in CloudXPRT build 1.01 are as follows:
- In previous builds, some testers encountered warnings during setup to update the version of Kubernetes Operations (kops) when testing on public-cloud platforms (the CloudXPRT 1.00 recommendation is kops version 1.16.0). We are adjusting the kops installation steps in the setup instructions for the web microservices and data analytics workloads to prevent these warnings.
- In previous builds, post-test cleanup instructions for public-cloud testing environments do not always delete all of the resources that CloudXPRT creates during setup. We are updating instructions to ensure a more thorough cleanup process. This change applies to test instructions for the web microservices and data analytics workloads.
- We are reformatting the optional results graphs the web microservices postprocess program creates to make them easier to interpret.
- In previous builds, the recommended time interval for the web microservices workload is 120 seconds if the hpamode option is enabled and 60 seconds if it is disabled. Because we’ve found that the 60-second difference has no significant impact on test results, we are changing the recommendation to 60 seconds for both hpamode settings.
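To illustrate the idea behind the first fix, here is a minimal shell sketch of checking an installed kops version against the recommended one before setup. The `check_kops_version` helper and its messages are our own illustration, not part of the CloudXPRT scripts:

```shell
#!/bin/sh
# Recommended kops version from the CloudXPRT 1.00 documentation.
RECOMMENDED_KOPS_VERSION="1.16.0"

# check_kops_version VERSION
# Prints a confirmation when the installed kops version matches the
# recommendation; otherwise prints a warning to stderr and returns 1.
check_kops_version() {
    installed="$1"
    if [ "$installed" = "$RECOMMENDED_KOPS_VERSION" ]; then
        echo "kops $installed matches the recommended version."
        return 0
    else
        echo "Warning: kops $installed found; CloudXPRT recommends $RECOMMENDED_KOPS_VERSION." >&2
        return 1
    fi
}

# In practice, the version string would come from parsing `kops version`
# output; here we pass version strings directly for illustration.
check_kops_version "1.16.0"
check_kops_version "1.18.2" || true
```

Pinning the recommended version during installation (rather than installing the latest release) is what prevents the mismatch warnings the original setup instructions could trigger.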
We hope these changes
will improve the CloudXPRT setup and testing experience. We haven’t set the
release date for the updated build yet, but when we do, we’ll announce it here
in the blog. If you have any questions about CloudXPRT, or would like to report
bugs or other issues, please feel free to contact us!
WebXPRT continues to be the most widely used XPRT benchmark, with just over 625,000 runs to date. Since its first release in 2013, WebXPRT has been popular with device manufacturers, developers, tech journalists, and consumers because it’s easy to run, it runs on almost anything with a web browser, and its workloads reflect the types of web-based tasks that people are likely to encounter on a daily basis.
We realize that many folks who follow the XPRTs may be unaware of the wide variety of WebXPRT uses that we frequently read about in the tech press. Today, we thought it would be interesting to bring the numbers to life. In addition to dozens of device reviews, here’s a sample of WebXPRT 3 mentions over the past few weeks.
As we plan for the next version of WebXPRT, we want to be sure we build a benchmark that continues WebXPRT’s legacy of relevant workloads, ease of use, and broad compatibility. We know what works well in our lab, but to build a benchmark that meets the needs of a diverse group of users all around the world, it’s important that we hear from all types of testers. We recently discussed some of the new technologies that we’re considering for WebXPRT 4, so please don’t hesitate to let us know what you think about those proposals, or send any additional ideas you may have!
Soon, we’ll be expanding
our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s
web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the
workload in much greater detail.
In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.
As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.
We hope that the
upcoming Overview of the CloudXPRT Web Microservices Workload paper will
serve as a go-to resource for CloudXPRT testers, and will answer any questions
you have about the workload. Once it goes live, we’ll provide links in the
Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.
If you have any questions,
please let us know!
The CloudXPRT Preview period has ended, and CloudXPRT version 1.0 installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! Like the Preview build, CloudXPRT version 1.0 includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. On CloudXPRT.com, the Helpful Info box contains resources such as links to the Introduction to CloudXPRT white paper, the CloudXPRT master readme, and the CloudXPRT GitHub repository.
The GitHub repository also contains the CloudXPRT source code, which is freely available for testers to download and review.
Performance results from this release are comparable to those from the CloudXPRT Preview build. Testers who wish to
publish results on CloudXPRT.com can find more information about the results
submission and review process in the blog. We post the monthly results cycle schedule on the results page.
We’re thankful for all the input we received during the CloudXPRT development process and Preview period. If you have any questions about CloudXPRT, please let us know.