

We welcome your CloudXPRT results!

We recently published a set of CloudXPRT Data Analytics and Web Microservices workload test results submitted by Quanta Computer, Inc. The Quanta submission is the first set of CloudXPRT results that we’ve published using the formal results submission and approval process. We’re grateful to the Quanta team for carefully following the submission guidelines, enabling us to complete the review process without a hitch.

If you are unfamiliar with the process, you can find general information about how we review submissions in a previous blog post. Detailed, step-by-step instructions are available on the results submission page. As a reminder for testers who are considering submitting results for July, the submission deadline is tomorrow, Friday, July 16, and the publication date is Friday, July 30. We list the submission and publication dates for the rest of 2021 below. Please note that we do not plan to review submissions in December, so if we receive results submissions after November 30, we may not publish them until the end of January 2022.


  • August: submission deadline Tuesday 8/17/21; publication date Tuesday 8/31/21
  • September: submission deadline Thursday 9/16/21; publication date Thursday 9/30/21
  • October: submission deadline Friday 10/15/21; publication date Friday 10/29/21
  • November: submission deadline Tuesday 11/16/21; publication date Tuesday 11/30/21
  • December: submission deadline and publication date N/A (we do not review submissions in December)

If you have any questions about the CloudXPRT results submission, review, or publication process, please let us know!


Default requirements for CloudXPRT results submissions

Over the past few weeks, we’ve received questions about whether we require specific test configuration settings for official CloudXPRT results submissions. Currently, testers have the option to edit up to 12 configuration options for the web microservices workload and three configuration options for the data analytics workload. Not all of these options have an impact on testing and results, but a few of them can drastically affect key results metrics and how long it takes to complete a test. Because new CloudXPRT testers may not anticipate those outcomes, and because so many configuration permutations are possible, we’ve established a set of requirements for all future results submissions to our site. Please note that testers are still free to adjust all available configuration options, and to define service level agreement (SLA) settings, as they see fit for their own purposes. The requirements below apply only to results that testers want to submit for publication consideration on our site, and to any resulting comparisons.

Web microservices results submission requirement

Starting with the May results submission cycle, all web microservices results submissions must have the workload.cpurequests value, which lets the user designate the number of CPU cores the workload assigns to each pod, set to 4. Currently, the benchmark supports values of 1, 2, and 4, with 4 as the default. While 1 or 2 CPU cores per pod may be more appropriate for relatively low-end systems or configurations with few vCPUs, a value of 4 is appropriate for most datacenter processors, and it often enables CSP instances to operate within the benchmark’s default maximum 95th percentile latency SLA of 3,000 milliseconds.

In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
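
To make the requirement concrete, here is a minimal pre-submission check, written in Python, that a tester could run against config.json before submitting web microservices results. The exact layout of the file is an assumption on our part (the sketch accepts either a flat "workload.cpurequests" key or a nested "workload" object), so adjust the lookup to match your copy of the file.

    import json

    # Hypothetical pre-submission check: web microservices submissions
    # must set workload.cpurequests to 4 (the benchmark supports 1, 2,
    # and 4, with 4 as the default).
    with open("config.json") as f:
        config = json.load(f)

    # The key layout is assumed; config.json may store it flat or nested.
    value = config.get("workload.cpurequests")
    if value is None:
        value = config.get("workload", {}).get("cpurequests")

    if value != 4:
        raise SystemExit(f"workload.cpurequests is {value!r}; submissions must use 4")
    print("workload.cpurequests = 4; the configuration meets the submission requirement")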

Data analytics results submission requirement

Starting with the May results submission cycle, all data analytics results submissions must have the best reported performance (throughput_jobs/min) correspond to a 95th percentile SLA latency of 90 seconds or less. We have received submissions in which the throughput was extremely high, but the 95th percentile SLA latency was up to 10 times the 90 seconds we recommend in CloudXPRT documentation. High latency values may be acceptable for the unique purposes of individual testers, but they do not provide a good basis for comparison between clusters under test. For more information about configuration options for the data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
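
As an illustration, the sketch below shows one way a tester might screen their own runs for compliance before submitting. The results file name and column names ("throughput_jobs/min" and "p95_latency_s") are hypothetical stand-ins rather than CloudXPRT’s actual output format, so map them to the fields in your own results files.

    import csv

    MAX_P95_LATENCY_S = 90  # submission requirement: p95 SLA latency of 90 seconds or less

    # Hypothetical results table with one row per run; the column names
    # are assumptions and may not match CloudXPRT's real output.
    best = None
    with open("results.csv") as f:
        for row in csv.DictReader(f):
            throughput = float(row["throughput_jobs/min"])
            p95 = float(row["p95_latency_s"])
            if p95 <= MAX_P95_LATENCY_S and (best is None or throughput > best[0]):
                best = (throughput, p95)

    if best:
        print(f"Best compliant throughput: {best[0]} jobs/min (p95 = {best[1]} s)")
    else:
        print("No run met the 90-second 95th percentile latency requirement")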

We will update CloudXPRT documentation to make sure that testers know to use the default configuration settings if they plan to submit results for publication. If you have any questions about CloudXPRT or the CloudXPRT results submission process, please let us know.


The CloudXPRT v1.1 beta is available!

Last week, we announced that a CloudXPRT v1.1 beta was on the way. We’re happy to say that the v1.1 beta is now available to the public on a dedicated CloudXPRT v1.1 beta download page. While CloudXPRT v1.01 remains the officially supported version on our site and in our GitHub repository, interested testers can use the v1.1 beta in new environments as we finalize the v1.1 build for official release. You are welcome to publish results from the beta, as we do not expect results to change in the final, official release.

As we mentioned in last week’s post, the CloudXPRT v1.1 beta includes the following changes:

  • We’ve added support for Ubuntu 20.04.2 or later for on-premises testing.
  • We’ve consolidated and standardized the installation packages for both workloads. Instead of one package for the data analytics workload and four separate packages for the web microservices workload, each workload has a single installation package that supports on-premises testing and testing with all three supported CSPs.
  • We’ve incorporated Terraform to help create and configure VMs, which helps to prevent problems when testers do not allocate enough storage per VM prior to testing.
  • We’ve replaced the Calico network plugin in Kubespray with Weave, which helps to avoid some of the network issues testers have occasionally encountered in the CPS environment.

Please feel free to share the link to the beta download page. (To avoid confusion, the beta will not appear in the main CloudXPRT download table.) We can’t yet state definitively whether results from the new version will be comparable to those from v1.01. We have not observed any significant differences in performance, but we haven’t tested every possible test configuration across every platform. If you observe different results when testing the same configuration with v1.01 and v1.1 beta, please send us the details so we can investigate.

If you have any questions about CloudXPRT or the CloudXPRT v1.1 beta, please let us know!


Improved CloudXPRT documentation is coming soon

CloudXPRT is undoubtedly the most complex tool in the XPRT family of benchmarks. To run the cloud-native benchmark’s multiple workloads across different hardware and software platforms, testers need two things: (1) at least a passing familiarity with a wide range of cloud-related toolkits, and (2) an understanding that changing even one test configuration variable can affect test results. While the complexity of CloudXPRT makes it a powerful and flexible tool for measuring application performance on real-world IaaS stacks, it also creates a steep learning curve for new users.

Benchmark setup and configuration can involve a number of complex steps, and the corresponding instructions should be thorough, unambiguous, and intuitive to follow. For all of the XPRT tools, we strive to publish documentation that provides quick, easy-to-find answers to the questions users might have. Community members have asked us to improve the clarity and readability of the CloudXPRT setup, configuration, and individual workload documentation. In response, we are working to create more—and better—CloudXPRT documentation.

If you’re intimidated by the benchmark’s complexity, helping you is one of our highest priorities. In the coming weeks and months, we’ll be evaluating all of our CloudXPRT documentation, particularly from the perspective of new users, and will release more information about the new documentation as it becomes available.

We also want to remind you of some of the existing CloudXPRT resources. We encourage everyone to check out the Introduction to CloudXPRT and Overview of the CloudXPRT Web Microservices Workload white papers. (Note that we’ll soon be publishing a paper on the benchmark’s data analytics workload.) Also, a couple of weeks ago, we published the CloudXPRT learning tool, which we designed to serve as an information hub for common CloudXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in CloudXPRT find the answers they need as quickly as possible.

Thanks to all who let us know that there was room for improvement in the CloudXPRT documentation. We rely on that kind of feedback and always welcome it. If you have any questions or suggestions regarding CloudXPRT or any of the other XPRTs, please let us know!


The CloudXPRT learning tool is now live!

We’re happy to announce that the CloudXPRT learning tool is now live! We designed the tool to serve as an information hub for common CloudXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in CloudXPRT find the answers they need as quickly as possible.

The tool features four primary areas of content:

  • The Q&A section provides quick answers to the questions we receive most from testers and the tech press.
  • The CloudXPRT: the basics section describes specific topics such as the benchmark’s target platforms, workloads, companion cloud software, and hardware and software requirements.
  • The Testing and results section covers the testing process, metrics, and how to publish results.
  • The cloud primer provides brief, easy-to-understand definitions of key cloud computing terms and concepts.

The first screenshot below shows the home screen. To illustrate how some of the pop-up information sections appear, the second screenshot shows part of the Key terms and concepts module in the Cloud primer section. 

We’re excited about the new CloudXPRT learning tool! If you have any questions about the tool, or suggestions for additional content to include in it, please let us know!


Next up: a white paper about the CloudXPRT data analytics workload

Soon, we’ll be publishing a CloudXPRT white paper that focuses on the benchmark’s data analytics workload. We summarized the workload in the Introduction to CloudXPRT white paper, but in the same way that the Overview of the CloudXPRT Web Microservices Workload paper did, the new paper will discuss the workload in much greater detail.

In addition to providing practical information about the installation package and minimum system requirements for the data analytics workload, the paper will describe test configuration variables, structural components, task workflows, and test metrics. It will also include guidance on interpreting test results and submitting them for publication.

As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore. Possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for results analysis. If there are specific topics that you’d like us to address in future white papers, please feel free to send us your ideas!

We hope that the upcoming Overview of the CloudXPRT Data Analytics Workload paper will serve as a go-to resource for CloudXPRT testers and will answer any questions you have about the workload. Once it goes live, we’ll provide links in the Helpful Info box on our site and in the CloudXPRT section of our XPRT white papers page.

If you have any questions, please let us know!

