We’re happy to announce that the CloudXPRT v1.2 update package is now available! The
update prevents potential installation failures on Google Cloud Platform and
Microsoft Azure, and ensures that the web microservices workload works on
Ubuntu 22.04. The update also moves to newer software components, including Kubernetes
v1.23.7, Kubespray v2.18.1, and Kubernetes Metrics Server v1, and incorporates
some additional minor script changes.
The CloudXPRT v1.2 web microservices workload installation package is available on the CloudXPRT.com download
page and in the BenchmarkXPRT GitHub repository.
Before you get started with v1.2, please note the following updated system requirements:
- Ubuntu 20.04.2 or 22.04 for on-premises testing
- Ubuntu 18.04, 20.04.2, or 22.04 for CSP (AWS/Azure/GCP) testing
Because CloudXPRT is designed to run on high-end servers, physical nodes or VMs under
test must meet the following minimum specifications:
- 16 logical or virtual CPUs
- 8 GB of RAM
- 10 GB of available disk space (50 GB for the data analytics workload)
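If you’d like to sanity-check a Linux node against these minimums before installing, a short script along the lines of the sketch below can help. It is not part of the CloudXPRT package; the thresholds mirror the list above, and the choice of "/" as the disk to check is an assumption you should adjust for your setup.

```python
#!/usr/bin/env python3
# Sanity-check a Linux node against the CloudXPRT minimums listed above.
# The "/" mount point is an assumption; point it at your test volume.
import os
import shutil

MIN_CPUS = 16      # logical or virtual CPUs
MIN_RAM_GB = 8     # GB of RAM
MIN_DISK_GB = 10   # use 50 if you plan to run the data analytics workload

cpus = os.cpu_count() or 0
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
disk_gb = shutil.disk_usage("/").free / 1e9

for name, have, need in [("logical CPUs", cpus, MIN_CPUS),
                         ("RAM (GB)", ram_gb, MIN_RAM_GB),
                         ("free disk (GB)", disk_gb, MIN_DISK_GB)]:
    status = "OK " if have >= need else "LOW"
    print(f"[{status}] {name}: {have:.0f} (minimum {need})")
```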
The update package includes only the v1.2 test harness and the updated web
microservices workload. It does not include the data analytics workload. As we
stated in a previous blog post,
now that we’ve published the web microservices package, we will assess user
interest in a possible refresh of the v1.1 data analytics workload. For now,
the v1.1 data analytics workload will continue to
be available via CloudXPRT.com
for some time to serve as a reference resource for users who have worked with
the package in the past.
Please let us know if you have any questions about the CloudXPRT v1.2 test package. Happy testing!
We developed our first cloud benchmark, CloudXPRT,
to measure the performance of cloud applications deployed on modern infrastructure
as a service (IaaS) platforms. When we first released CloudXPRT in
February of 2021, the benchmark included two test packages: a web microservices
workload and a data analytics workload. Both supported on-premises and cloud
service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud
Platform (GCP), and Microsoft Azure.
CloudXPRT is our most complex benchmark, requiring sustained compatibility between many
software components across multiple independent test environments. As vendors
roll out updates for some components and stop supporting others, it’s
inevitable that something will break. Since CloudXPRT’s launch, we’ve become
aware of installation failures when setting up CloudXPRT on Ubuntu
virtual machines with GCP and Microsoft Azure. Additionally, while the web
microservices workload continues to run in most instances with a few
configuration tweaks and workarounds, the data analytics workload fails
consistently due to compatibility issues with Minio, Prometheus, and Kafka
within the Kubernetes environment.
In response, we’re working to fix problems with the web microservices workload and
bring all necessary components up to date. We’re developing an updated test
package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray
v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will
incorporate some minor script changes. Our goal is to ensure successful
installation and testing with the on-premises and CSP platforms that we
supported when we first launched CloudXPRT.
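For readers who want to confirm what their cluster is actually running once the update lands, here is a minimal sketch using the official kubernetes Python client. The client library, a working kubeconfig, and the exact expected version string are assumptions on our part, not part of CloudXPRT itself.

```python
# Minimal sketch: confirm the cluster reports the Kubernetes version that
# the updated package targets. Assumes `pip install kubernetes` and a
# kubeconfig at the default location.
from kubernetes import client, config

EXPECTED = "v1.23.7"  # version named in this post

config.load_kube_config()  # reads ~/.kube/config by default
reported = client.VersionApi().get_code().git_version
print(f"cluster reports {reported}; expected {EXPECTED}")
if reported != EXPECTED:
    print("note: version mismatch; your install may predate the update")
```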
We are currently focusing on the web microservices workload for two reasons.
First, more users have downloaded it than the data analytics workload. Second, we
think we have a clear path to success. Our plan is to publish the updated web
microservices test package, and see what feedback and interest we receive from
users about a possible data analytics refresh. The existing data analytics workload
will remain available via CloudXPRT.com for the time being to serve as a
reference resource. We apologize for the inconvenience that these issues have caused. We’ll provide
more information about a release timeline and final test package details here
in the blog as we get closer to publication. If you have any questions about
the future of CloudXPRT, please feel free to contact us!
We recently published a set of CloudXPRT Data Analytics and Web Microservices
workload test results
submitted by Quanta Computer, Inc.
The Quanta submission is the first set of CloudXPRT results that we’ve
published using the formal results submission and approval process.
We’re grateful to the Quanta team for carefully following the submission
guidelines, enabling us to complete the review process without a hitch.
If you are unfamiliar
with the process, you can find general information about how we review
submissions in a previous blog post.
Detailed, step-by-step instructions are available on the results submission page.
As a reminder for testers who are considering submitting results for July, the
submission deadline is tomorrow, Friday July 16, and the publication date is
Friday July 30. We list the submission and publication dates for the rest of
2021 below. Please note that we do not plan to review submissions in December,
so if we receive results submissions after November 30, we may not publish them
until the end of January 2022.
- August: submission deadline Tuesday 8/17/21; publication date Tuesday 8/31/21
- September: submission deadline Thursday 9/16/21; publication date Thursday 9/30/21
- October: submission deadline Friday 10/15/21; publication date Friday 10/29/21
- November: submission deadline Tuesday 11/16/21; publication date Tuesday 11/30/21
- December: no submission or publication dates (N/A)
If you have any questions about the CloudXPRT results submission, review, or publication process, please let us know!
Over the past few
weeks, we’ve received questions about whether we require specific test
configuration settings for official CloudXPRT results submissions. Currently, testers have the option to edit up to 12 configuration
options for the web microservices workload and three configuration options for the
data analytics workload. Not all configuration options have an impact on
testing and results, but a few of them can drastically affect key results
metrics and how long it takes to complete a test. Because new CloudXPRT testers
may not anticipate those outcomes, and so many configuration permutations are
possible, we’ve come up with a set of requirements for all future results
submissions to our site. Please note that testers are still free to adjust all
available configuration options—and define service level agreement (SLA)
settings—as they see fit for their own purposes. The requirements below apply only
to results testers want to submit for publication consideration on our site,
and to any resulting comparisons.
Web microservices results submission requirement
Starting with the May results
submission cycle, all web microservices results submissions must have the workload.cpurequests value, which lets the user designate the number of CPU cores the workload
assigns to each pod, set to 4. Currently, the benchmark supports values of 1,
2, and 4, with a default of 4. While 1 and 2 CPU cores per pod may be
more appropriate for relatively low-end systems or configurations with few
vCPUs, a value of 4 is appropriate for most datacenter processors, and it often
enables CSP instances to operate within the benchmark’s default maximum 95th-percentile latency SLA of 3,000 milliseconds.
In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
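To illustrate the requirement, here’s a minimal Python sketch that pins the value to 4 before a submission run. The flat "workload.cpurequests" key is an assumption about config.json’s layout; check your copy of the file for the actual structure.

```python
# Hedged sketch: pin workload.cpurequests to 4 before a submission run.
# The flat "workload.cpurequests" key is an assumption about config.json's
# layout; inspect your copy of the file for the real structure.
import json

ALLOWED = {1, 2, 4}    # values the benchmark currently supports
SUBMISSION_VALUE = 4   # required for results submitted to our site

with open("config.json") as f:
    cfg = json.load(f)

current = cfg.get("workload.cpurequests")
if current not in ALLOWED:
    raise ValueError(f"unsupported or missing cpurequests value: {current}")
cfg["workload.cpurequests"] = SUBMISSION_VALUE

with open("config.json", "w") as f:
    json.dump(cfg, f, indent=2)
print(f"workload.cpurequests set to {SUBMISSION_VALUE} (was {current})")
```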
Data analytics results submission requirement
Starting with the May
results submission cycle, all data analytics results submissions must have the best
reported performance (throughput_jobs/min) correspond to a 95th
percentile SLA latency of 90 seconds or less. We have received submissions where
the throughput was extremely high, but the 95th percentile SLA
latency was up to 10 times the 90 seconds that we recommend in CloudXPRT
documentation. High latency values may be acceptable for the unique purposes of
individual testers, but they do not provide a good basis for comparison between
clusters under test. For more information about configuration options with the
data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
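As an illustration of this rule, the sketch below selects the best reportable throughput from a set of runs by discarding any run whose 95th-percentile latency exceeds 90 seconds. The (throughput, latency) numbers are hypothetical, not real results.

```python
# Minimal sketch of the submission rule: the best reported
# throughput_jobs/min must come from a run whose 95th-percentile latency
# is 90 seconds or less. The run data below is hypothetical.
MAX_P95_SECONDS = 90

runs = [
    # (throughput in jobs/min, 95th-percentile latency in seconds)
    (52.0, 61.0),
    (68.0, 88.5),
    (91.0, 412.0),  # fast, but far outside the 90-second SLA; not reportable
]

eligible = [r for r in runs if r[1] <= MAX_P95_SECONDS]
if eligible:
    best = max(eligible, key=lambda r: r[0])
    print(f"best reportable throughput: {best[0]} jobs/min (p95 {best[1]} s)")
else:
    print("no runs meet the 90-second p95 latency requirement")
```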
We will update
CloudXPRT documentation to make sure that testers know to use the default
configuration settings if they plan to submit results for publication. If you
have any questions about CloudXPRT or the CloudXPRT results submission process,
please let us know.
Last week, we announced that a CloudXPRT v1.1
beta was on the way. We’re happy to say that the v1.1 beta is now available to
the public on a dedicated CloudXPRT v1.1 beta download page. While CloudXPRT v1.01
remains the officially supported version on CloudXPRT.com and in our GitHub
repository, interested testers can use the v1.1
beta version in new environments as we finalize the v1.1 build for official
release. You are welcome to publish results, as we do not expect them to
change in the final, official release.
As we mentioned in
last week’s post, the CloudXPRT v1.1 beta includes the following changes:
- We’ve added support for Ubuntu 20.04.2 or later for on-premises testing.
- We’ve consolidated and standardized the installation packages
for both workloads. Instead of one package for the data analytics workload and
four separate packages for the web microservices workload, each workload has a
single installation package that supports on-premises testing and testing with
all three supported CSPs.
- We’ve incorporated Terraform to help create and
configure VMs, which helps to prevent problems when testers do not allocate
enough storage per VM prior to testing.
- We’ve replaced the Calico network plugin in Kubespray with Weave, which helps to avoid some
of the network issues testers have occasionally encountered in CSP environments.
Please feel free to
share the link to the beta download page. (To avoid confusion, the beta will
not appear in the main CloudXPRT download table.) We can’t yet state
definitively whether results from the new version will be comparable to those
from v1.01. We have not observed any significant differences in performance,
but we haven’t tested every possible test configuration across every platform.
If you observe different results when testing the same configuration with v1.01
and v1.1 beta, please send us the details so we can investigate.
If you have any questions about CloudXPRT or the CloudXPRT v1.1 beta, please let us know!
CloudXPRT is undoubtedly
the most complex tool in the XPRT family of benchmarks. To run the cloud-native
benchmark’s multiple workloads across different hardware and software platforms,
testers need two things: (1) at least a passing familiarity with a wide range
of cloud-related toolkits, and (2) an understanding that changing even one test
configuration variable can affect test results. While the complexity of CloudXPRT
makes it a powerful and flexible tool for measuring application performance on
real-world IaaS stacks, it also creates a steep learning curve for new users.
Benchmark setup and
configuration can involve a number of complex steps, and the corresponding
instructions should be thorough, unambiguous, and intuitive to follow. For all
of the XPRT tools, we strive to publish documentation that provides quick,
easy-to-find answers to the questions users might have. Community members have asked
us to improve the clarity and readability of the CloudXPRT setup,
configuration, and individual workload documentation. In response, we are
working to create more—and better—CloudXPRT documentation.
If you’re intimidated
by the benchmark’s complexity, know that helping you is one of our highest priorities. In
the coming weeks and months, we’ll be evaluating all of our CloudXPRT
documentation, particularly from the perspective of new users, and will release
more information about the new documentation as it becomes available.
We also want to remind
you of some of the existing CloudXPRT resources. We encourage everyone to check
out the Introduction to CloudXPRT and Overview of the CloudXPRT Web Microservices Workload white papers. (Note
that we’ll soon be publishing a paper on the benchmark’s data analytics
workload.) Also, a couple of weeks ago, we published the CloudXPRT learning tool, which we designed to serve as an information
hub for common CloudXPRT topics and questions, and to help tech journalists,
OEM lab engineers, and everyone who is interested in CloudXPRT find the answers
they need as quickly as possible.
Thanks to all who let us know that there was room for improvement in the CloudXPRT documentation. We rely on that kind of feedback and always welcome it. If you have any questions or suggestions regarding CloudXPRT or any of the other XPRTs, please let us know!