We recently published a set of CloudXPRT Data Analytics and Web Microservices
workload test results
submitted by Quanta Computer, Inc.
The Quanta submission is the first set of CloudXPRT results that we’ve
published using the formal results submission and approval process.
We’re grateful to the Quanta team for carefully following the submission
guidelines, enabling us to complete the review process without a hitch.
If you are unfamiliar
with the process, you can find general information about how we review
submissions in a previous blog post.
Detailed, step-by-step instructions are available on the results submission page.
As a reminder for testers who are considering submitting results for July, the
submission deadline is tomorrow, Friday July 16, and the publication date is
Friday July 30. We list the submission and publication dates for the rest of
2021 below. Please note that we do not plan to review submissions in December,
so if we receive results submissions after November 30, we may not publish them
until the end of January 2022.
Submission deadline          Publication date
Tuesday 8/17/21              Tuesday 8/31/21
Thursday 9/16/21             Thursday 9/30/21
Friday 10/15/21              Friday 10/29/21
Tuesday 11/16/21             Tuesday 11/30/21
N/A (no December reviews)    N/A
If you have any questions about the CloudXPRT results submission, review, or publication process, please let us know!
We recently received questions about whether we accept CloudXPRT
results submissions from testing on pre-production gear, and how we would handle
any differences between results from pre-production and production-level tests.
To answer the first question, we are not opposed to pre-production
results submissions. We realize that vendors often want to include benchmark
results in launch-oriented marketing materials they release before their
hardware or software is publicly available. To help them do so, we’re happy to
consider pre-production submissions on a case-by-case basis. All such submissions
must follow the normal CloudXPRT results submission process and undergo
vetting by the CloudXPRT Results Review Group according to the standard review
and publication schedule. If we decide to publish pre-production results on our site, we
will clearly note their pre-production status.
In response to the second question, the CloudXPRT Results Review Group will handle any challenges to published results or perceived discrepancies between pre-production and production-level results on a case-by-case basis. We do not currently have a formal process for challenges; anyone who would like to initiate a challenge or express comments or concerns about a result should address the review group via firstname.lastname@example.org. Our primary concern is always to ensure that published results accurately reflect the performance characteristics of production-level hardware and software. If it becomes necessary to develop more policies in the future, we’ll do so, but we want to keep things as simple as possible.
If you have any questions about the CloudXPRT results submission process, please let us know!
We’re happy to announce
that CloudXPRT v1.1 will move from beta to general release status tomorrow! The
installation packages will be available at the CloudXPRT.com download page and the BenchmarkXPRT GitHub repository. You will find more details about the v1.1
updates in a previous blog post, but the most
prominent changes are the consolidation of the five previous installation
packages into two packages (one per workload) and added support for
on-premises testing with Ubuntu 20.04.2.
Before you get started
with v1.1, please note the following updated system requirements:
Ubuntu 20.04.2 or later for on-premises testing
Ubuntu 18.04 and 20.04.2 or later for CSP (AWS/Azure/GCP)
CloudXPRT is designed
to run on high-end servers. Physical nodes or VMs under test must meet the
following minimum specifications:
16 logical or virtual CPUs
8 GB of RAM
10 GB of available disk space (50 GB for the data analytics workload)
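For testers who want to verify a node before installing, here is a minimal pre-flight sketch based on the minimums above. This is not part of CloudXPRT; the check_node helper and the workload labels are hypothetical illustrations, the thresholds come from the list above, and the memory probe relies on Linux-specific sysconf values.

#!/usr/bin/env python3
"""Hypothetical pre-flight check against the CloudXPRT v1.1 minimums.

The thresholds (16 logical CPUs, 8 GB of RAM, 10 GB of disk, or 50 GB for
the data analytics workload) come from the requirements list above;
everything else here is illustrative. Memory detection uses Linux-specific
sysconf values.
"""
import os
import shutil

MIN_CPUS = 16
MIN_RAM_GB = 8
MIN_DISK_GB = {"web-microservices": 10, "data-analytics": 50}

def check_node(workload="web-microservices", path="/"):
    """Return a list of human-readable shortfalls; an empty list means the node passes."""
    problems = []

    # Logical CPU count (includes hyperthreads and virtual CPUs).
    cpus = os.cpu_count() or 0
    if cpus < MIN_CPUS:
        problems.append(f"{cpus} logical CPUs (need {MIN_CPUS})")

    # Total physical memory, computed from page size times page count (Linux).
    ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
    if ram_gb < MIN_RAM_GB:
        problems.append(f"{ram_gb:.1f} GB RAM (need {MIN_RAM_GB})")

    # Free disk space on the target filesystem; the data analytics
    # workload needs 50 GB rather than 10 GB.
    free_gb = shutil.disk_usage(path).free / 2**30
    if free_gb < MIN_DISK_GB[workload]:
        problems.append(f"{free_gb:.1f} GB free disk (need {MIN_DISK_GB[workload]})")

    return problems

if __name__ == "__main__":
    issues = check_node("data-analytics")
    print("Node meets CloudXPRT minimums" if not issues
          else "Shortfalls: " + "; ".join(issues))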
We have also made
significant adjustments to the installation and test configuration instructions
in the readmes for both workloads, so please revisit these documents even if
you’re familiar with previous test processes.
As we noted during the
beta period, we have not observed any significant differences in performance
between v1.01 and v1.1, but we haven’t tested every possible test configuration
across every platform. If you observe different results when testing the same
configuration with v1.01 and v1.1, please send us the details so we can
investigate.
If you have any questions about CloudXPRT v1.1, please let us know!
Today, we expand our portfolio
of CloudXPRT resources with a paper on the benchmark’s data analytics workload.
While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper goes into much
greater detail.
In addition to providing practical information about the data analytics installation package and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.
CloudXPRT is the most
complex tool in the XPRT family, and the new paper is part of our effort to create more—and better—CloudXPRT documentation. We plan to
publish additional CloudXPRT white papers in the coming months, with possible
future topics including the impact of adjusting specific test configuration
options, recommendations for results reporting, and methods for analysis.
Soon, we’ll be expanding
our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s
web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the
workload in much greater detail.
In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.
As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.
We hope that the
upcoming Overview of the CloudXPRT Web Microservices Workload paper will
serve as a go-to resource for CloudXPRT testers, and will answer any questions
you have about the workload. Once it goes live, we’ll provide links in the
Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.
Many businesses want
to move critical applications to the cloud, but choosing the right cloud-based
infrastructure as a service (IaaS) platform can be a complex and costly project.
We developed CloudXPRT to help speed up and simplify the process by providing a
powerful benchmarking tool that allows users to run multiple workloads on cloud
platform software in on-premises and popular public cloud environments.