We want to let CloudXPRT testers know that updated installer packages are on the way. The packages will include several fixes for bugs that we discovered in the initial CloudXPRT Preview release (build 0.95). The fixes do not affect CloudXPRT test results, but they do streamline installation and remove potential sources of confusion during the setup and testing process.
Along with a few text edits and other minor fixes, we made the following changes in the upcoming build:
- We updated the data analytics setup code to prevent error messages that occurred when the benchmark treated one-node configurations as a special case.
- We configured the data analytics workload to use a go.mod file for all of the required Go modules. With this change, we can explicitly state the release version of each necessary Go module, so updates to the latest Go release won't break the benchmark. This change also removes the need to include large gosrc.tar.gz files in the source code.
- We added a cleanup utility script for the web microservices workload. If something goes wrong during configuration or a test run, testers can use this script to clean everything up and start over.
- We fixed an error that prevented the benchmark from successfully retrieving the cluster_config.json file in certain multi-node setups.
- In the web microservices workload, we changed the output format of the request-rate metric from integer to float. This change allows us to report workload data with a higher degree of precision.
- In the web microservices workload, we added an overall summary line to the results log file that reports the best throughput numbers from the test run.
- In the web microservices code, we
modified a Kubernetes option that the benchmark used to create the Cassandra
schema. Prior to this change, the option generated an inconsequential but
distracting error message about TTY input.
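To illustrate the go.mod change above, a minimal go.mod file might look like the sketch below. The module path and dependency versions here are hypothetical examples, not the actual CloudXPRT dependencies:

```go
// Hypothetical go.mod sketch; the module path and pinned versions below
// are illustrative only, not CloudXPRT's real dependency list.
module github.com/example/cloudxprt-data-analytics

go 1.13

require (
	github.com/gorilla/mux v1.7.4
	k8s.io/client-go v0.17.3
)
```

Pinning exact versions this way means every build resolves the same module releases, rather than fetching whatever happens to be the latest source at build time.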
We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, please let us know!
The CloudXPRT Preview installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! The CloudXPRT Preview includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
Several different test packages are available for
download from the CloudXPRT download
page. For detailed installation instructions and
hardware and software requirements for each, click the package’s readme link. The
Helpful Info box on CloudXPRT.com also contains resources such as links to the
CloudXPRT master readme and the CloudXPRT GitHub repository. Soon, we will add
a link to the CloudXPRT Preview source code, which will be freely available for
testers to download and review.
All interested parties may now publish CloudXPRT
results. However, until we begin the formal results submission and review process in July, we will publish only results that we produce in our own lab. We anticipate adding the first set of those within the coming weeks.
We’re thankful for all the input we received during the initial CloudXPRT development process, and we welcome feedback on the CloudXPRT Preview. If you have any questions about CloudXPRT, or would like to share your comments and suggestions, please let us know.
A few months
ago, we wrote about the possibility of creating a datacenter XPRT. In the
intervening time, we’ve discussed the idea with folks both in and outside of the
XPRT Community. We’ve heard from vendors of datacenter products, hosting/cloud
providers, and IT professionals who use those products and services.
A common thread that emerged was the need for a cloud benchmark that can accurately
measure the performance of modern, cloud-first applications deployed on modern infrastructure
as a service (IaaS) platforms, whether those platforms are on-premises, hosted
elsewhere, or some combination of the two (hybrid clouds). Regardless of where
clouds reside, applications are increasingly using them in latency-critical,
highly available, and high-compute scenarios.
Existing datacenter benchmarks do not give a clear indication of how applications will
perform on a given IaaS infrastructure, so the benchmark should use cloud-native
components on the actual stacks used for on-prem and public cloud management.
We are planning to call the benchmark CloudXPRT. Our goal is for CloudXPRT to address the needs described above while also including the elements that have made the other XPRTs successful. We plan for CloudXPRT to
- Be relevant to on-prem (datacenter), private, and public cloud
- Run on top of cloud platform software such as Kubernetes
- Include multiple workloads that address common scenarios like web
applications, AI, and media analytics
- Support multi-tier workloads
- Report relevant metrics, including both critical latency for
responsiveness-driven applications and maximum throughput for applications
dependent on batch processing
The CloudXPRT workloads will use cloud-native components on an actual stack to provide
end-to-end performance metrics that allow users to choose the best IaaS
configuration for their business.
We have been building and testing preliminary versions of CloudXPRT for the last few months.
Based on the progress so far, we are aiming to have a Community Preview of
CloudXPRT ready in mid- to late March, with a version for general availability
ready about two months later.
In the coming weeks, we'll be working on getting out more information about CloudXPRT
and continuing to talk with interested parties about how they can help. We’d
love to hear what workflows would be of most interest to you and what you would
most like to see in a datacenter/cloud benchmark. Please feel free to contact us!