We’re happy to announce
that the CloudXPRT learning tool is now live! We
designed the tool to serve as an information hub for common CloudXPRT topics
and questions, and to help tech journalists, OEM lab engineers, and everyone
who is interested in CloudXPRT find the answers they need as quickly as possible.
The tool features four
primary areas of content:
- The Q&A section provides quick answers to the questions we receive most often from testers and the tech press.
- The CloudXPRT: the basics section describes specific topics such as the benchmark’s target platforms, workloads, companion cloud software, and hardware and software requirements.
- The Testing and results section covers the testing process, metrics, and how to publish results.
- The Cloud primer section provides brief, easy-to-understand definitions of key cloud computing terms and concepts.
The first screenshot below shows the home screen. To illustrate how some of the pop-up information sections appear, the second screenshot shows part of the Key terms and concepts module in the Cloud primer section.
We’re excited about the new CloudXPRT learning tool! If you have any questions about the tool, or suggestions for additional content to include in it, please let us know!
The CloudXPRT Preview period has ended, and CloudXPRT version 1.0 installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! Like the Preview build, CloudXPRT version 1.0 includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
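To give a rough sense of that kind of SLA evaluation, here is a minimal Go sketch. Everything in it, from the stack names to the 3,000 ms latency threshold, is a hypothetical assumption for illustration only, not CloudXPRT’s actual output format or defaults.

package main

import "fmt"

// runResult holds hypothetical figures from one benchmark run.
type runResult struct {
	Throughput   float64 // successful requests per second
	P95LatencyMS float64 // 95th-percentile response time in milliseconds
}

// meetsSLA reports whether a run stayed at or under a latency threshold.
func meetsSLA(r runResult, maxLatencyMS float64) bool {
	return r.P95LatencyMS <= maxLatencyMS
}

func main() {
	// Hypothetical results for two IaaS stacks under comparison.
	stacks := map[string]runResult{
		"stack-a": {Throughput: 2980.2, P95LatencyMS: 2410},
		"stack-b": {Throughput: 3120.7, P95LatencyMS: 3675},
	}
	for name, r := range stacks {
		fmt.Printf("%s: %.1f req/s, meets 3,000 ms SLA: %v\n", name, r.Throughput, meetsSLA(r, 3000))
	}
}

In real testing, the throughput and latency figures would come from CloudXPRT’s results logs rather than hard-coded values like the ones above.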
Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. On CloudXPRT.com, the Helpful Info box contains resources such as links to the Introduction to CloudXPRT white paper, the CloudXPRT master readme, and the CloudXPRT GitHub repository.
The GitHub repository also contains the CloudXPRT
source code. The source code is freely available for testers to download and review.
Performance results from this release are comparable
to performance results from the CloudXPRT Preview build. Testers who wish to
publish results on CloudXPRT.com can find more information about the results
submission and review process in the blog. We post the monthly results cycle schedule on the results page.
We’re thankful for all the input we received during the CloudXPRT development process and Preview period. If you have any questions about CloudXPRT, please let us know.
Many businesses want
to move critical applications to the cloud, but choosing the right cloud-based
infrastructure as a service (IaaS) platform can be a complex and costly project.
We developed CloudXPRT to help speed up and simplify the process by providing a
powerful benchmarking tool that allows users to run multiple workloads on cloud
platform software in on-premises and popular public cloud environments.
This week, we made
some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and help visitors find important data more quickly and easily.
The first set of
changes involves how we present test system information in the main results
table and on the individual results details pages. We realized that there was
potential for confusion around the “CPU” and “Number of nodes” categories. We
removed those and created the following new fields: “Cluster components,”
“Nodes (work + control plane),” and
“vCPUs (work + control plane).” These new categories better describe test configurations and clarify how many vCPUs engage with the workload.
The second set of
changes involves the number of data points that we list in the table for each web
microservices test run. Previously, we published a unique entry for each level of concurrency that a test run records. For example, if a run scaled to 32 concurrent instances, we presented the data for each concurrency level in its own row. This
helped to show the performance curve during a single test as the workload
scaled up, but it made it more difficult for visitors to identify the best
throughput results from an individual run. We decided to consolidate the
results from a complete test run on a single row, highlighting only the maximum
number of successful requests (throughput). All the raw data from each run remains
available for download on the details page for each result, but visitors don’t
have to wade through all that data to find the configuration’s main “score.”
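To illustrate the consolidation, here is a minimal Go sketch. The dataPoint type, field names, and values are hypothetical, not CloudXPRT code; the sketch just shows the reduction from per-concurrency rows to a single best-throughput row.

package main

import "fmt"

// dataPoint is a hypothetical record for one concurrency level
// within a single web microservices test run.
type dataPoint struct {
	Concurrency int     // number of concurrent instances
	Throughput  float64 // successful requests per second
}

// bestThroughput reduces a run's data points to the single
// highest-throughput entry; it assumes at least one data point.
func bestThroughput(run []dataPoint) dataPoint {
	best := run[0]
	for _, p := range run[1:] {
		if p.Throughput > best.Throughput {
			best = p
		}
	}
	return best
}

func main() {
	run := []dataPoint{{1, 410.5}, {16, 2980.2}, {32, 2875.9}}
	fmt.Printf("best result: %+v\n", bestThroughput(run))
}

The results viewer now performs the equivalent step when it builds the table, while the downloadable raw data keeps every point on the curve.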
We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!
Today, we published an
updated CloudXPRT Preview build (v0.97), along with the build’s source code.
The new build fixes a few minor bugs and makes several improvements that facilitate installation, setup, and testing. The fixes do not affect CloudXPRT
test results, so results from the new build are comparable to results from the
original build (v0.95). You can find more detailed information about the
changes in last week’s blog.
The CloudXPRT Preview
v0.97 source code is available to the public via the CloudXPRT GitHub
repository. As we’ve discussed in the past, publishing XPRT source code is
part of our commitment to making the XPRT development process as transparent as
possible. By allowing all interested parties to download and review our source
code, we’re encouraging openness and honesty in the benchmarking industry and
are inviting the kind of constructive feedback that helps to ensure that the
XPRTs continue to contribute to a level playing field.
While the CloudXPRT
source code is available to the public, our approach to derivative works differs
from some open-source models. Traditional open-source models encourage
developers to change products and even take them in different directions.
Because benchmarking requires a product that remains static to enable valid
comparisons over time, we allow people to download the source, but we reserve
the right to control derivative works. This discourages a situation where
someone publishes an unauthorized version of the benchmark and calls it an XPRT.
We encourage you to
download and review the source and send us any feedback you have. Your
questions and suggestions may influence future versions of CloudXPRT.
If you have any questions about CloudXPRT or the source code, please let us know!
We want to let CloudXPRT testers know that updated installer packages are on the way. The packages will include several fixes for bugs that we discovered in the initial CloudXPRT Preview release (build 0.95). The fixes do not affect CloudXPRT test results, but do help to facilitate installation and remove potential sources of confusion during the setup and testing process.
Along with a few text edits
and other minor fixes, we made the following changes in the upcoming build:
- We updated the data analytics setup code to prevent error messages that occurred when the benchmark treated one-node configurations as a special case.
- We configured the data analytics workload to use a go.mod file for all the required Go modules. With this change, we can explicitly state the release version of each necessary module, so updates to the latest Go release won’t break the benchmark. This change also removes the need to include large gosrc.tar.gz files in the source code. (A sample go.mod appears after this list.)
- We added a cleanup utility script for the web microservices workload. If something goes wrong during configuration or a test run, testers can use this script to clean everything up and start over.
- We fixed an error that prevented the benchmark from successfully retrieving the cluster_config.json file in certain multi-node setups.
- In the web microservices workload, we changed the output format of the request rate metric from integer to float. This change allows us to report workload data with a higher degree of precision. (A brief illustration follows this list.)
- In the web microservices workload, we added an overall summary line to the results log file that reports the best throughput numbers from the test run.
- In the web microservices code, we modified a Kubernetes option that the benchmark used to create the Cassandra schema. Prior to this change, the option generated an inconsequential but distracting error message about TTY input.
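For readers unfamiliar with Go modules, the sample below shows the shape of a minimal go.mod file; the module path and dependency versions are invented for illustration and are not CloudXPRT’s actual dependencies. Pinning exact versions this way is what lets the benchmark keep building predictably when new Go releases appear.

module github.com/example/cloudxprt-data-analytics

go 1.14

require (
	// Pinning exact versions keeps builds reproducible even when
	// new Go releases or upstream module updates appear.
	github.com/example/analyticslib v1.2.3
	github.com/example/metricsutil v0.4.1
)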
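The integer-to-float change in the request rate metric is easy to picture with a short Go sketch; the value and format strings here are hypothetical, not the benchmark’s actual logging code.

package main

import "fmt"

func main() {
	rate := 2980.247 // successful requests per second (hypothetical value)

	// Before: truncating to an integer discards fractional precision.
	fmt.Printf("request rate: %d\n", int(rate)) // request rate: 2980

	// After: reporting a float preserves the full measurement.
	fmt.Printf("request rate: %.2f\n", rate) // request rate: 2980.25
}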
We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, please let us know!