Last month, we announced
that we’re working on an updated CloudXPRT web microservices test package. The purpose
of the update is to fix installation failures on Google Cloud Platform and
Microsoft Azure, and ensure that the web microservices workload works on Ubuntu
22.04, using updated software components such as Kubernetes v1.23.7, Kubespray
v2.18.1, and Kubernetes Metrics Server v1. The update also incorporates some
additional minor script changes.
We are still testing the updated test package with on-premises hardware and Amazon
Web Services, Google Cloud Platform, and Microsoft Azure configurations. So
far, testing is progressing well, and we feel increasingly confident that we
will be able to release the updated test package soon. We would like to share a
more concrete release schedule, but because of the complexity of the workload
and the CSP platforms involved, we are waiting until we are certain that
everything is ready to go.
The name of the updated package will be CloudXPRT v1.2, and it will include only the
updated v1.2 test harness and the updated web microservices workload. It will
not include the data analytics workload. As we stated in last month’s blog, we plan
to publish the updated web microservices package, and see what kind of interest
we receive from users about a possible refresh of the v1.1 data analytics workload.
For now, the v1.1 data analytics workload will remain available via CloudXPRT.com
to serve as a reference resource for users who have worked with the package in
the past.
As soon as possible, we’ll provide more information about the CloudXPRT v1.2 release
date here in the blog. If you have any questions about the update or CloudXPRT
in general, please feel free to contact us!
We developed our first cloud benchmark, CloudXPRT,
to measure the performance of cloud applications deployed on modern infrastructure
as a service (IaaS) platforms. When we first released CloudXPRT in
February of 2021, the benchmark included two test packages: a web microservices
workload and a data analytics workload. Both supported on-premises and cloud
service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud
Platform (GCP), and Microsoft Azure.
CloudXPRT is our most complex benchmark, requiring sustained compatibility between many
software components across multiple independent test environments. As vendors
roll out updates for some components and stop supporting others, it’s
inevitable that something will break. Since CloudXPRT’s launch, we’ve become
aware of installation failures while attempting to set up CloudXPRT on Ubuntu
virtual machines with GCP and Microsoft Azure. Additionally, while the web
microservices workload continues to run in most instances with a few
configuration tweaks and workarounds, the data analytics workload fails
consistently due to compatibility issues with Minio, Prometheus, and Kafka
within the Kubernetes environment.
In response, we’re working to fix problems with the web microservices workload and
bring all necessary components up to date. We’re developing an updated test
package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray
v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will
incorporate some minor script changes. Our goal is to ensure successful
installation and testing with the on-premises and CSP platforms that we
supported when we first launched CloudXPRT.
We are currently focusing on the web microservices workload for two reasons.
First, more users have downloaded it than the data analytics workload. Second, we
think we have a clear path to success. Our plan is to publish the updated web
microservices test package, and see what feedback and interest we receive from
users about a possible data analytics refresh. The existing data analytics workload
will remain available via CloudXPRT.com for the time being to serve as a
reference resource for users who have worked with the package in the past.
We apologize for the inconvenience that these issues have caused. We’ll provide
more information about a release timeline and final test package details here
in the blog as we get closer to publication. If you have any questions about
the future of CloudXPRT, please feel free to contact us!
Over the past few
weeks, we’ve received questions about whether we require specific test
configuration settings for official CloudXPRT results submissions. Currently, testers have the option to edit up to 12 configuration
options for the web microservices workload and three configuration options for the
data analytics workload. Not all configuration options have an impact on
testing and results, but a few of them can drastically affect key results
metrics and how long it takes to complete a test. Because new CloudXPRT testers
may not anticipate those outcomes, and so many configuration permutations are
possible, we’ve come up with a set of requirements for all future results
submissions to our site. Please note that testers are still free to adjust all
available configuration options—and define service level agreement (SLA)
settings—as they see fit for their own purposes. The requirements below apply only
to results testers want to submit for publication consideration on our site,
and to any resulting comparisons.
Web microservices results submission requirement
Starting with the May results
submission cycle, all web microservices results submissions must have the workload.cpurequests value, which lets the user designate the number of CPU cores the workload
assigns to each pod, set to 4. Currently, the benchmark supports values of 1,
2, and 4, with a default value of 4. While 1 and 2 CPU cores per pod may be
more appropriate for relatively low-end systems or configurations with few
vCPUs, a value of 4 is appropriate for most datacenter processors, and it often
enables CSP instances to operate within the benchmark’s default maximum 95th-percentile
latency SLA of 3,000 milliseconds.
In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
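By way of illustration only, the relevant portion of a config.json might look like the sketch below. The key structure shown is an assumption on our part; only the workload.cpurequests setting, its allowed values (1, 2, and 4), and the required value of 4 come from the description above.

```json
{
  "workload.cpurequests": 4
}
```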
Data analytics results submission requirement
Starting with the May
results submission cycle, all data analytics results submissions must have the best
reported performance (throughput_jobs/min) correspond to a 95th
percentile SLA latency of 90 seconds or less. We have received submissions where
the throughput was extremely high, but the 95th percentile SLA
latency was up to 10 times the 90 seconds that we recommend in CloudXPRT
documentation. High latency values may be acceptable for the unique purposes of
individual testers, but they do not provide a good basis for comparison between
clusters under test. For more information about configuration options with the
data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
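To make the rule concrete, here is a minimal sketch of the check described above: the best reported throughput (jobs/min) must come from a run whose 95th-percentile latency is 90 seconds or less. The record format and function name are our own for illustration; CloudXPRT’s actual result files differ.

```python
# Sketch of the data analytics submission rule: only runs that meet the
# 95th-percentile latency SLA of 90 seconds may supply the reported
# best throughput. Record fields here are hypothetical.

MAX_P95_LATENCY_SECONDS = 90

def best_valid_throughput(runs):
    """Return the highest throughput among SLA-compliant runs, or None."""
    valid = [r for r in runs if r["p95_latency_s"] <= MAX_P95_LATENCY_SECONDS]
    if not valid:
        return None
    return max(r["throughput_jobs_per_min"] for r in valid)

runs = [
    {"throughput_jobs_per_min": 120.0, "p95_latency_s": 75.0},   # meets SLA
    {"throughput_jobs_per_min": 180.0, "p95_latency_s": 900.0},  # 10x over; excluded
]
print(best_valid_throughput(runs))  # -> 120.0
```

Note that the faster-looking 180 jobs/min run is excluded: its latency is ten times the recommended limit, which is exactly the kind of submission the requirement rules out.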
We will update
CloudXPRT documentation to make sure that testers know to use the default
configuration settings if they plan to submit results for publication. If you
have any questions about CloudXPRT or the CloudXPRT results submission process,
please let us know.
We’re happy to announce
that CloudXPRT v1.1 will move from beta to general release status tomorrow! The
installation packages will be available at the CloudXPRT.com download page and the BenchmarkXPRT GitHub repository. You will find more details about the v1.1
updates in a previous blog post, but the most
prominent changes are the consolidation of the five previous installation
packages into two (one per workload) and the addition of support for Ubuntu
20.04.2 for on-premises testing.
Before you get started
with v1.1, please note the following updated system requirements:
- Ubuntu 20.04.2 or later for on-premises testing
- Ubuntu 18.04 and 20.04.2 or later for CSP testing (AWS/Azure/GCP)
CloudXPRT is designed
to run on high-end servers. Physical nodes or VMs under test must meet the
following minimum specifications:
- 16 logical or virtual CPUs
- 8 GB of RAM
- 10 GB of available disk space (50 GB for the data analytics workload)
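As an illustrative sketch (not part of CloudXPRT itself), a quick preflight check against these minimums could look like the following. The thresholds come from the list above; the function name and the Linux-specific probes are our own assumptions.

```python
# Hypothetical preflight check against the CloudXPRT v1.1 minimums listed
# above: 16 logical CPUs, 8 GB of RAM, and 10 GB of free disk space
# (50 GB for the data analytics workload). Probing code is Linux-only.
import os
import shutil

GB = 1024 ** 3

def meets_minimums(cpus, ram_bytes, free_disk_bytes, data_analytics=False):
    """Return True if the given figures satisfy the minimum specs."""
    disk_needed = (50 if data_analytics else 10) * GB
    return cpus >= 16 and ram_bytes >= 8 * GB and free_disk_bytes >= disk_needed

# Gather the local machine's figures (Linux):
cpus = os.cpu_count() or 0
ram = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
free = shutil.disk_usage("/").free
print(meets_minimums(cpus, ram, free))
```

Running the check with `data_analytics=True` raises the disk requirement from 10 GB to 50 GB, per the note in the list above.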
We have also made
significant adjustments to the installation and test configuration instructions
in the readmes for both workloads, so please revisit these documents even if
you’re familiar with previous test processes.
As we noted during the
beta period, we have not observed any significant differences in performance
between v1.01 and v1.1, but we haven’t tested every possible test configuration
across every platform. If you observe different results when testing the same
configuration with v1.01 and v1.1, please send us the details so we can investigate.
If you have any questions about CloudXPRT v1.1, please let us know!
Last week, we announced that a CloudXPRT v1.1
beta was on the way. We’re happy to say that the v1.1 beta is now available to
the public on a dedicated CloudXPRT v1.1 beta download page. While CloudXPRT v1.01
remains the officially supported version on CloudXPRT.com and in our GitHub
repository, interested testers can use the v1.1
beta version in new environments as we finalize the v1.1 build for official
release. You are welcome to publish results as we do not expect results to
change in the final, official release.
As we mentioned in
last week’s post, the CloudXPRT v1.1 beta includes the following changes:
- We’ve added support for Ubuntu 20.04.2 or later for on-premises testing.
- We’ve consolidated and standardized the installation packages
for both workloads. Instead of one package for the data analytics workload and
four separate packages for the web microservices workload, each workload has a
single installation package that supports on-premises testing and testing with
all three supported CSPs.
- We’ve incorporated Terraform to help create and
configure VMs, which helps to prevent problems when testers do not allocate
enough storage per VM prior to testing.
- We’ve replaced the Calico network plugin in Kubespray with Weave, which helps to avoid some
of the network issues testers have occasionally encountered in the CSP environment.
Please feel free to
share the link to the beta download page. (To avoid confusion, the beta will
not appear in the main CloudXPRT download table.) We can’t yet state
definitively whether results from the new version will be comparable to those
from v1.01. We have not observed any significant differences in performance,
but we haven’t tested every possible test configuration across every platform.
If you observe different results when testing the same configuration with v1.01
and v1.1 beta, please send us the details so we can investigate.
If you have any questions about CloudXPRT or the CloudXPRT v1.1 beta, please let us know!
As we’ve been working
on improvements and updates for CloudXPRT, we’ve been using feedback from
community members to determine which changes will help testers most in the
short term. To make some of those changes available to the community as soon as
possible, we plan to release a beta version of CloudXPRT v1.1 in the coming weeks.
During the v1.1 beta
period, the CloudXPRT v1.01 installation packages on CloudXPRT.com and our GitHub repository will continue to include the officially supported
version of CloudXPRT. However, interested testers can experiment with the v1.1
beta version in new environments while we finalize the build for official release.
The CloudXPRT v1.1
beta includes the following primary changes:
- We’re adding support for Ubuntu 20.04.2 or later, the number one
request we’ve received.
- We’re consolidating and standardizing the installation packages
for both workloads. Instead of one package for the data analytics workload and
four separate packages for the web microservices workload, each workload will
have two installation packages: one for all on-premises testing and one for
testing with all three supported CSPs.
- We’re incorporating Terraform to help create and
configure VMs, which will help to prevent situations when testers do not
allocate enough storage per VM prior to testing.
- We use Kubespray to manage Kubernetes
clusters, and Kubespray uses Calico as the default network plug-in. Calico has not always worked
well for CloudXPRT in the CSP environment, so we’re replacing Calico with Weave.
At the start of the
beta period, we will share a link to the v1.1 beta download page here in the
blog. You’ll be free to share this link. To avoid confusion, we will not add the
beta download to the v1.01 downloads available on CloudXPRT.com.
As the beta release
date approaches, we’ll share more details about timelines, access, and any additional
changes to the benchmark. If you have any questions about the upcoming
CloudXPRT v1.1 beta, please let us know!