We're happy to announce that the CloudXPRT v1.2 update package is now available! The
update prevents potential installation failures on Google Cloud Platform and
Microsoft Azure, and ensures that the web microservices workload works on
Ubuntu 22.04. The update uses updated software components such as Kubernetes
v1.23.7, Kubespray v2.18.1, and Kubernetes Metrics Server v1, and incorporates
some additional minor script changes.
The CloudXPRT v1.2 web microservices workload installation package is available on the CloudXPRT.com download
page and in the BenchmarkXPRT GitHub repository.
Before you get started with v1.2, please note the following updated system requirements:
Ubuntu 20.04.2 or 22.04 for on-premises testing
Ubuntu 18.04, 20.04.2, or 22.04 for CSP (AWS/Azure/GCP) testing
CloudXPRT is designed to run on high-end servers. Physical nodes or VMs under
test must meet the following minimum specifications:
16 logical or virtual CPUs
8 GB of RAM
10 GB of available disk space (50 GB for the data analytics workload)
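As a quick preflight check before installation, a tester could compare a host against these minimums with a short script. The sketch below is purely illustrative and not part of the CloudXPRT package; the threshold values come from the list above, and the runtime detection assumes a Linux host.

```python
import os
import shutil

# Minimum specifications from the CloudXPRT system requirements above.
MIN_VCPUS = 16
MIN_RAM_GB = 8
MIN_DISK_GB = 10  # the data analytics workload needs 50 GB instead

def check_minimums(vcpus, ram_gb, disk_gb, data_analytics=False):
    """Return a list of unmet requirements (empty list means the host passes)."""
    required_disk = 50 if data_analytics else MIN_DISK_GB
    failures = []
    if vcpus < MIN_VCPUS:
        failures.append(f"vCPUs: {vcpus} < {MIN_VCPUS}")
    if ram_gb < MIN_RAM_GB:
        failures.append(f"RAM: {ram_gb:.1f} GB < {MIN_RAM_GB} GB")
    if disk_gb < required_disk:
        failures.append(f"disk: {disk_gb:.1f} GB < {required_disk} GB")
    return failures

if __name__ == "__main__":
    vcpus = os.cpu_count() or 0
    disk_gb = shutil.disk_usage("/").free / 1e9
    # RAM detection is platform-specific; this uses Linux sysconf values.
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
    problems = check_minimums(vcpus, ram_gb, disk_gb)
    print("OK" if not problems else "; ".join(problems))
```

Passing `data_analytics=True` raises the disk requirement to the 50 GB noted above.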
The update package includes only the updated v1.2 test harness and the updated web
microservices workload. It does not include the data analytics workload. As we
stated in the blog,
now that we’ve published the web microservices package, we will assess the
level of interest users express about a possible refresh of the v1.1 data
analytics workload. For now, the v1.1 data analytics workload will continue to
be available via CloudXPRT.com
for some time to serve as a reference resource for users who have worked with
the package in the past.
Please let us know if you have any questions about the CloudXPRT v1.2 test package. Happy testing!
We developed our first cloud benchmark, CloudXPRT,
to measure the performance of cloud applications deployed on modern infrastructure
as a service (IaaS) platforms. When we first released CloudXPRT in
February of 2021, the benchmark included two test packages: a web microservices
workload and a data analytics workload. Both supported on-premises and cloud
service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud
Platform (GCP), and Microsoft Azure.
CloudXPRT is our most complex benchmark, requiring sustained compatibility between many
software components across multiple independent test environments. As vendors
roll out updates for some components and stop supporting others, it’s
inevitable that something will break. Since CloudXPRT's launch, we've become
aware of installation failures when setting up CloudXPRT on Ubuntu
virtual machines in GCP and Microsoft Azure. Additionally, while the web
microservices workload continues to run in most instances with a few
configuration tweaks and workarounds, the data analytics workload fails
consistently due to compatibility issues with Minio, Prometheus, and Kafka
within the Kubernetes environment.
In response, we're working to fix problems with the web microservices workload and
bring all necessary components up to date. We’re developing an updated test
package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray
v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will
incorporate some minor script changes. Our goal is to ensure successful
installation and testing with the on-premises and CSP platforms that we
supported when we first launched CloudXPRT.
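Because the refresh pins specific component versions (Kubernetes v1.23.7, Kubespray v2.18.1), a tester could sanity-check an installed version string against the pin before running. The helper below is a hypothetical illustration, not part of CloudXPRT's own scripts.

```python
def parse_version(v):
    """Turn a version string such as 'v1.23.7' into a comparable tuple (1, 23, 7)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def matches_pin(installed, pinned):
    """True only if the installed version exactly matches the pinned release."""
    return parse_version(installed) == parse_version(pinned)

# e.g. compare the version reported by the cluster against the v1.2 pin:
print(matches_pin("v1.23.7", "1.23.7"))   # True
print(matches_pin("v1.24.0", "v1.23.7"))  # False
```

An exact-match check (rather than "at least this version") reflects the fact that a benchmark harness is validated against specific component releases.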
We are currently focusing on the web microservices workload for two reasons.
First, more users have downloaded it than the data analytics workload. Second, we
think we have a clear path to success. Our plan is to publish the updated web
microservices test package, and see what feedback and interest we receive from
users about a possible data analytics refresh. The existing data analytics workload
will remain available via CloudXPRT.com for the time being to serve as a reference resource.
We apologize for the inconvenience that these issues have caused. We'll provide
more information about a release timeline and final test package details here
in the blog as we get closer to publication. If you have any questions about
the future of CloudXPRT, please feel free to contact us!
Many businesses want
to move critical applications to the cloud, but choosing the right cloud-based
infrastructure as a service (IaaS) platform can be a complex and costly project.
We developed CloudXPRT to help speed up and simplify the process by providing a
powerful benchmarking tool that allows users to run multiple workloads on cloud
platform software in on-premises and popular public cloud environments.
This week, we made
some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and
allow visitors to more quickly and easily find important data.
The first set of
changes involves how we present test system information in the main results
table and on the individual results details pages. We realized that there was
potential for confusion around the “CPU” and “Number of nodes” categories. We
removed those and created the following new fields: “Cluster components,”
"Nodes (worker + control plane)," and
"vCPUs (worker + control plane)." These new categories better describe test
configurations and clarify how many CPUs engage with the workload.
The second set of
changes involves the number of data points that we list in the table for each web
microservices test run. For example, previously, we published a unique entry
for each level of concurrency a test run records. If a run scaled to 32
concurrent instances, we presented the data for each instance in its own row. This
helped to show the performance curve during a single test as the workload
scaled up, but it made it more difficult for visitors to identify the best
throughput results from an individual run. We decided to consolidate the
results from a complete test run on a single row, highlighting only the maximum
number of successful requests (throughput). All the raw data from each run remains
available for download on the details page for each result, but visitors don’t
have to wade through all that data to find the configuration’s main “score.”
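Conceptually, that consolidation is a group-by with a max: all per-concurrency rows from one run collapse into a single row that keeps the peak throughput. The sketch below uses invented run IDs and numbers purely to illustrate the transformation; it is not the results viewer's actual code.

```python
# Each raw row: (run_id, concurrent_instances, successful_requests_per_sec)
raw_rows = [
    ("run-A", 8, 410.0),
    ("run-A", 16, 760.0),
    ("run-A", 32, 705.0),   # throughput dipped past the peak
    ("run-B", 8, 390.0),
    ("run-B", 16, 745.0),
]

def consolidate(rows):
    """Collapse per-concurrency rows into one row per run, keeping max throughput."""
    best = {}
    for run_id, concurrency, throughput in rows:
        if run_id not in best or throughput > best[run_id][1]:
            best[run_id] = (concurrency, throughput)
    # One summary row per run: (run_id, concurrency_at_peak, peak_throughput)
    return [(run_id, c, t) for run_id, (c, t) in sorted(best.items())]

print(consolidate(raw_rows))
# [('run-A', 16, 760.0), ('run-B', 16, 745.0)]
```

The full per-concurrency curve still exists in the raw data; only the summary table is reduced to one "score" row per run.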
We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!
Today, we published an
updated CloudXPRT Preview build (v0.97), along with the build’s source code.
The new build fixes a few minor bugs, and makes several improvements to help
facilitate installation, setup, and testing. The fixes do not affect CloudXPRT
test results, so results from the new build are comparable to results from the
original build (v0.95). You can find more detailed information about the
changes in last week’s blog.
The CloudXPRT Preview
v0.97 source code is available to the public via the CloudXPRT GitHub
repository. As we’ve discussed in the past, publishing XPRT source code is
part of our commitment to making the XPRT development process as transparent as
possible. By allowing all interested parties to download and review our source
code, we’re encouraging openness and honesty in the benchmarking industry and
are inviting the kind of constructive feedback that helps to ensure that the
XPRTs continue to contribute to a level playing field.
While the CloudXPRT
source code is available to the public, our approach to derivative works differs
from some open-source models. Traditional open-source models encourage
developers to change products and even take them in different directions.
Because benchmarking requires a product that remains static to enable valid
comparisons over time, we allow people to download the source, but we reserve
the right to control derivative works. This discourages a situation where
someone publishes an unauthorized version of the benchmark and calls it an XPRT.
We encourage you to
download and review the source and send us any feedback you have. Your
questions and suggestions may influence future versions of CloudXPRT.
If you have any questions about CloudXPRT or the source code, please let us know!
The CloudXPRT Preview installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! The CloudXPRT Preview includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
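To illustrate the SLA idea in that paragraph: a stack "meets" an SLA when it can sustain throughput while response times stay under a chosen threshold. The sketch below uses hypothetical field names and made-up numbers; it only demonstrates the evaluation concept, not CloudXPRT's actual scoring code.

```python
# Hypothetical per-concurrency samples: (concurrency, throughput, p95_latency_ms)
samples = [
    (8, 410.0, 120.0),
    (16, 760.0, 850.0),
    (32, 705.0, 2400.0),  # latency here blows well past a typical SLA level
]

def max_throughput_within_sla(samples, latency_sla_ms):
    """Best throughput among samples whose p95 latency meets the SLA, or None."""
    ok = [tput for _, tput, lat in samples if lat <= latency_sla_ms]
    return max(ok) if ok else None

print(max_throughput_within_sla(samples, 1000.0))  # 760.0
print(max_throughput_within_sla(samples, 100.0))   # None (no sample meets the SLA)
```

Tightening the latency threshold lowers (or eliminates) the achievable throughput, which is exactly the trade-off an SLA-based comparison surfaces.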
Several different test packages are available for
download from the CloudXPRT download
page. For detailed installation instructions and
hardware and software requirements for each, click the package’s readme link. The
Helpful Info box on CloudXPRT.com also contains resources such as links to the
CloudXPRT master readme and the CloudXPRT GitHub repository. Soon, we will add
a link to the CloudXPRT Preview source code, which will be freely available for
testers to download and review.
All interested parties may now publish CloudXPRT
results. However, until we begin the formal results submission and review process in July, we will publish only results we
produce in our own lab. We anticipate adding the first set of those within the coming weeks.
We’re thankful for all the input we received during the initial CloudXPRT development process, and we welcome feedback on the CloudXPRT Preview. If you have any questions about CloudXPRT, or would like to share your comments and suggestions, please let us know.