We want to let CloudXPRT testers know that we’re close to
releasing an updated version (build 1.01) with two minor bug fixes, an improved
post-test results processing script, and an adjustment to one of our test
configuration recommendations. None of these changes will affect performance or
test results, so scores from previous CloudXPRT builds will be comparable to
those from the new build.
The most significant changes in CloudXPRT build 1.01 are as follows:
In previous builds, some testers encountered warnings during setup prompting them to update their version of Kubernetes Operations (kops) when testing on public-cloud platforms (the CloudXPRT 1.00 recommendation is kops version 1.16.0). We are adjusting the kops installation steps in the setup instructions for the web microservices and data analytics workloads to prevent these warnings (see the illustrative version-check sketch after this list of changes).
In previous builds, the post-test cleanup instructions for public-cloud testing environments did not always delete all of the resources that CloudXPRT creates during setup. We are updating the instructions to ensure a more thorough cleanup process. This change applies to the test instructions for the web microservices and data analytics workloads.
We are reformatting the optional results graphs that the web microservices postprocess program creates to make them easier to interpret.
In previous builds, the recommended time interval for the web microservices workload was 120 seconds if the hpamode option was enabled and 60 seconds if it was disabled. Because we’ve found that the 60-second difference has no significant impact on test results, we are changing the recommendation to 60 seconds for both hpamode settings.
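To illustrate the kops fix mentioned above, here is a minimal, hypothetical Python sketch of the kind of version check that can prevent such warnings. It is not taken from the CloudXPRT setup scripts; the only facts it relies on are the kops binary name and the 1.16.0 recommendation from the item above.

```python
# Illustrative sketch only (not part of CloudXPRT): one way a setup
# script might verify that the installed kops release matches the
# recommended version before proceeding.
import re
import subprocess

RECOMMENDED_KOPS_VERSION = "1.16.0"  # CloudXPRT 1.00 recommendation

def installed_kops_version() -> str:
    # `kops version` prints a string such as "Version 1.16.0 (git-...)"
    output = subprocess.run(
        ["kops", "version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"\d+\.\d+\.\d+", output)
    if not match:
        raise RuntimeError(f"Could not parse kops version from: {output!r}")
    return match.group(0)

if __name__ == "__main__":
    version = installed_kops_version()
    if version != RECOMMENDED_KOPS_VERSION:
        print(f"Warning: kops {version} found; "
              f"CloudXPRT recommends {RECOMMENDED_KOPS_VERSION}.")
    else:
        print(f"kops {version} matches the recommendation.")
```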
We hope these changes
will improve the CloudXPRT setup and testing experience. We haven’t set the
release date for the updated build yet, but when we do, we’ll announce it here
in the blog. If you have any questions about CloudXPRT, or would like to report
bugs or other issues, please feel free to contact us!
Soon, we’ll be expanding
our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s
web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the
workload in much greater detail.
In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper will describe the workload’s test configuration variables, structural components, task workflows, and test metrics. It will also discuss how to interpret test results and the process for submitting results for publication.
As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.
We hope that the
upcoming Overview of the CloudXPRT Web Microservices Workload paper will
serve as a go-to resource for CloudXPRT testers, and will answer any questions
you have about the workload. Once it goes live, we’ll provide links in the
Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.
The CloudXPRT Preview period has ended, and CloudXPRT version 1.0 installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! Like the Preview build, CloudXPRT version 1.0 includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. On CloudXPRT.com, the Helpful Info box contains resources such as links to the Introduction to CloudXPRT white paper, the CloudXPRT master readme, and the CloudXPRT GitHub repository.
The GitHub repository also contains the CloudXPRT source code, which is freely available for testers to download and review.
Performance results from this release are comparable
to performance results from the CloudXPRT Preview build. Testers who wish to
publish results on CloudXPRT.com can find more information about the results
submission and review process in the blog. We post the monthly results cycle schedule on the CloudXPRT results page.
We’re thankful for all the input we received during the CloudXPRT development process and Preview period. If you have any questions about CloudXPRT, please let us know.
Many businesses want to move critical applications to the cloud, but choosing the right cloud-based infrastructure as a service (IaaS) platform can be a complex and costly undertaking.
We developed CloudXPRT to help speed up and simplify the process by providing a
powerful benchmarking tool that allows users to run multiple workloads on cloud
platform software in on-premises and popular public cloud environments.
This week, we made
some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and help visitors find important data more quickly and easily.
The first set of
changes involves how we present test system information in the main results
table and on the individual results details pages. We realized that there was
potential for confusion around the “CPU” and “Number of nodes” categories. We
removed those and created the following new fields: “Cluster components,”
“Nodes (work + control plane),” and
“vCPUs (work + control plane).” These new categories better describe test
configurations and clarify how many CPUs engage with the workload.
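To show what the new fields convey, here is a minimal, hypothetical Python sketch; it is not drawn from the results viewer code, and the per-node data shape and the sample node counts are assumptions made purely for illustration.

```python
# Hypothetical illustration (not CloudXPRT source): deriving the new
# "Nodes (work + control plane)" and "vCPUs (work + control plane)"
# fields from per-node cluster data.
work_nodes = [{"vcpus": 16}, {"vcpus": 16}]   # assumed example cluster
control_plane_nodes = [{"vcpus": 4}]

nodes_field = f"{len(work_nodes)} + {len(control_plane_nodes)}"
vcpus_field = (
    f"{sum(n['vcpus'] for n in work_nodes)} + "
    f"{sum(n['vcpus'] for n in control_plane_nodes)}"
)
print("Nodes (work + control plane):", nodes_field)   # "2 + 1"
print("vCPUs (work + control plane):", vcpus_field)   # "32 + 4"
```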
The second set of
changes involves the number of data points that we list in the table for each web
microservices test run. Previously, we published a unique entry for each level of concurrency that a test run recorded. If a run scaled to 32 concurrent instances, we presented the data for each level in its own row. This
helped to show the performance curve during a single test as the workload
scaled up, but it made it more difficult for visitors to identify the best
throughput results from an individual run. We decided to consolidate the results from a complete test run into a single row, highlighting only the maximum number of successful requests (throughput). All the raw data from each run remains
available for download on the details page for each result, but visitors don’t
have to wade through all that data to find the configuration’s main “score.”
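As a rough illustration of this consolidation, the logic boils down to taking the per-concurrency rows from one run and keeping the row with the most successful requests. The sketch below is not the viewer’s actual code, and the sample numbers are made up for demonstration.

```python
# Illustrative sketch only (not the results viewer's actual code).
from dataclasses import dataclass
from typing import List

@dataclass
class ConcurrencyResult:
    concurrent_instances: int
    successful_requests: int

def consolidate(run: List[ConcurrencyResult]) -> ConcurrencyResult:
    # Keep only the row with the highest throughput for the main table;
    # the full per-concurrency data stays available for download.
    return max(run, key=lambda row: row.successful_requests)

run = [
    ConcurrencyResult(8, 4200),    # made-up sample values
    ConcurrencyResult(16, 7900),
    ConcurrencyResult(32, 7400),   # throughput can dip at higher scale
]
print(consolidate(run))  # -> the 16-instance row, the run's best result
```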
We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!
We’re happy to announce that the CloudXPRT results viewer is now live with results from the first few rounds of CloudXPRT
Preview testing we conducted in our lab. Here are some tips to help you
navigate the viewer more efficiently:
Click the tabs at the top of the table to switch from Data analytics
workload results to Web microservices workload results.
Click the header of any column to sort the data on that
variable. Click once to sort A to Z, and double-click to sort Z to A.
Click the link in the Source/details column to visit a detailed
page for that result, where you’ll find additional test configuration and
system hardware information and the option to download results files.
By default, the viewer displays eight results per page, which
you can change to 16, 48, or Show all.
The free-form search field above the table lets you filter for
variables such as cloud service or processor.
We’ll be adding more features, including expanded filtering and
sorting mechanisms, to the results viewer in the near future. We’re also
investigating ways to present multiple data points in a graph format, which
will allow visitors to examine performance behavior curves in conjunction with
factors such as concurrency and resource utilization.
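For readers who want a feel for what such a performance behavior curve could look like, here is a minimal matplotlib sketch; the data shape and values are assumptions for illustration, not CloudXPRT output or the graphing approach we will ultimately use.

```python
# Illustrative sketch only: plotting throughput against concurrency,
# the kind of performance behavior curve described above.
import matplotlib.pyplot as plt

concurrency = [1, 2, 4, 8, 16, 32]             # hypothetical sample values
throughput = [600, 1150, 2300, 4200, 7900, 7400]

plt.plot(concurrency, throughput, marker="o")
plt.xlabel("Concurrent instances")
plt.ylabel("Successful requests")
plt.title("Throughput vs. concurrency (illustrative data)")
plt.show()
```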
We welcome your CloudXPRT results submissions! To learn about
the new submission and review process we’ll be using, take a look at last week’s blog.
If you have any questions or suggestions for ways that we can
improve the results viewer, please let us know!