
Improved CloudXPRT documentation is coming soon

CloudXPRT is undoubtedly the most complex tool in the XPRT family of benchmarks. To run the cloud-native benchmark’s multiple workloads across different hardware and software platforms, testers need two things: (1) at least a passing familiarity with a wide range of cloud-related toolkits, and (2) an understanding that changing even one test configuration variable can affect test results. While the complexity of CloudXPRT makes it a powerful and flexible tool for measuring application performance on real-world IaaS stacks, it also creates a steep learning curve for new users.

Benchmark setup and configuration can involve a number of complex steps, and the corresponding instructions should be thorough, unambiguous, and intuitive to follow. For all of the XPRT tools, we strive to publish documentation that provides quick, easy-to-find answers to the questions users might have. Community members have asked us to improve the clarity and readability of the CloudXPRT setup, configuration, and individual workload documentation. In response, we are working to create more—and better—CloudXPRT documentation.

If the benchmark’s complexity seems daunting, know that helping you get started is one of our highest priorities. In the coming weeks and months, we’ll evaluate all of our CloudXPRT documentation, particularly from the perspective of new users, and we’ll release more information about the new documentation as it becomes available.

We also want to remind you of some of the existing CloudXPRT resources. We encourage everyone to check out the Introduction to CloudXPRT and Overview of the CloudXPRT Web Microservices Workload white papers. (Note that we’ll soon be publishing a paper on the benchmark’s data analytics workload.) Also, a couple of weeks ago, we published the CloudXPRT learning tool, which we designed to serve as an information hub for common CloudXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in CloudXPRT find the answers they need as quickly as possible.

Thanks to all who let us know that there was room for improvement in the CloudXPRT documentation. We rely on that kind of feedback and always welcome it. If you have any questions or suggestions regarding CloudXPRT or any of the other XPRTs, please let us know!

Justin

The CloudXPRT learning tool is now live!

We’re happy to announce that the CloudXPRT learning tool is now live! We designed the tool to serve as an information hub for common CloudXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in CloudXPRT find the answers they need as quickly as possible.

The tool features four primary areas of content:

  • The Q&A section provides quick answers to the questions we receive most from testers and the tech press.
  • The CloudXPRT: the basics section covers topics such as the benchmark’s target platforms, workloads, companion cloud software, and hardware and software requirements.
  • The Testing and results section covers the testing process, metrics, and how to publish results.
  • The Cloud primer section provides brief, easy-to-understand definitions of key cloud computing terms and concepts.

The first screenshot below shows the home screen. To illustrate how some of the pop-up information sections appear, the second screenshot shows part of the Key terms and concepts module in the Cloud primer section. 

We’re excited about the new CloudXPRT learning tool! If you have any questions about the tool, or suggestions for additional content to include in it, please let us know!

Justin

Next up: a white paper about the CloudXPRT data analytics workload

Soon, we’ll be publishing a CloudXPRT white paper that focuses on the benchmark’s data analytics workload. We summarized the workload in the Introduction to CloudXPRT white paper; the new paper will discuss it in much greater detail, just as the Overview of the CloudXPRT Web Microservices Workload paper did for the web microservices workload.

In addition to providing practical information about the installation package and minimum system requirements for the data analytics workload, the paper will describe test configuration variables, structural components, task workflows, and test metrics. It will also include guidance on interpreting test results and submitting them for publication.

As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore. Possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for results analysis. If there are specific topics that you’d like us to address in future white papers, please feel free to send us your ideas!

We hope that the upcoming Overview of the CloudXPRT Data Analytics Workload paper will serve as a go-to resource for CloudXPRT testers, and will answer any questions you have about the workload. Once it goes live, we’ll provide links in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.

If you have any questions, please let us know!

Justin

CloudXPRT version 1.0 is here!

The CloudXPRT Preview period has ended, and CloudXPRT version 1.0 installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! Like the Preview build, CloudXPRT version 1.0 includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
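
To make the SLA-threshold idea concrete, here’s a minimal Python sketch of the kind of check a tester might script against CloudXPRT throughput numbers. The field names, stack labels, and threshold value are illustrative assumptions, not CloudXPRT’s actual output format.

```python
# Hypothetical post-processing sketch: screen throughput results from two
# IaaS stacks against an SLA threshold. The data layout and numbers are
# assumptions for illustration, not CloudXPRT's actual output format.

SLA_MIN_TPS = 400.0  # assumed minimum acceptable transactions per second

results = [
    {"stack": "local-datacenter-cluster", "throughput_tps": 512.0},
    {"stack": "cloud-vm-cluster", "throughput_tps": 387.5},
]

for run in results:
    verdict = "meets SLA" if run["throughput_tps"] >= SLA_MIN_TPS else "misses SLA"
    print(f"{run['stack']}: {run['throughput_tps']:.1f} tps -> {verdict}")
```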

Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. On CloudXPRT.com, the Helpful Info box contains resources such as links to the Introduction to CloudXPRT white paper, the CloudXPRT master readme, and the CloudXPRT GitHub repository.

The GitHub repository also contains the CloudXPRT source code, which is freely available for testers to download and review.

Performance results from this release are comparable to performance results from the CloudXPRT Preview build. Testers who wish to publish results on CloudXPRT.com can find more information about the results submission and review process in the blog. We post the monthly results cycle schedule on the results submission page.

We’re thankful for all the input we received during the CloudXPRT development process and Preview period. If you have any questions about CloudXPRT, please let us know.

Justin

Improving the CloudXPRT results viewer

This week, we made some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and allow visitors to more quickly and easily find important data.

The first set of changes involves how we present test system information in the main results table and on the individual results details pages. We realized that there was potential for confusion around the “CPU” and “Number of nodes” categories. We removed those and created the following new fields: “Cluster components,” “Nodes (work + control plane),” and “vCPUs (work + control plane).” These new categories better describe test configurations and clarify how many CPUs engage with the workload.

The second set of changes involves the number of data points we list in the table for each web microservices test run. Previously, we published a unique entry for each level of concurrency a test run recorded; if a run scaled to 32 concurrent instances, we presented the data for each concurrency level in its own row. This helped to show the performance curve as the workload scaled up during a single test, but it made it harder for visitors to identify the best throughput results from an individual run. We decided to consolidate the results from a complete test run into a single row that highlights only the maximum number of successful requests (the throughput). All the raw data from each run remains available for download on the details page for each result, but visitors no longer have to wade through it to find the configuration’s main “score.”
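
As a rough illustration of that consolidation step, the Python sketch below collapses the per-concurrency rows from a single run into one summary row built around the run’s best throughput. The row layout is an assumption for illustration; the real viewer works from CloudXPRT’s actual result files.

```python
# Hypothetical illustration of the results-viewer change: collapse the
# per-concurrency rows from one test run into a single summary row that
# keeps only the best (maximum) throughput. Row layout is assumed.

run_rows = [
    {"concurrency": 1,  "successful_requests": 48},
    {"concurrency": 8,  "successful_requests": 310},
    {"concurrency": 16, "successful_requests": 590},
    {"concurrency": 32, "successful_requests": 575},  # past the peak
]

best = max(run_rows, key=lambda row: row["successful_requests"])
summary_row = {
    "max_successful_requests": best["successful_requests"],
    "at_concurrency": best["concurrency"],
}
print(summary_row)  # {'max_successful_requests': 590, 'at_concurrency': 16}
```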

We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!

Justin

The CloudXPRT Preview is almost here

We’re happy to announce that we’re planning to release the CloudXPRT Preview next week! After we take the CloudXPRT Preview installation and source code packages live, they will be freely available to the public via CloudXPRT.com and the BenchmarkXPRT GitHub repository. All interested parties will be able to publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We’ll share more information about that process and the corresponding dates here in the blog in the coming weeks.

We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We hope to add that workload to the CloudXPRT suite in the near future, and are still conducting testing to make sure we get it right.

If you missed the earlier workload-related post, here are the details about the two workloads that will be in the Preview build:

  • In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest (see the first sketch after this list). The workload reports performance in transactions per second, which testers can use to compare IaaS stacks directly and to evaluate whether any given stack can meet service-level agreement (SLA) thresholds.
  • The data analytics workload calculates XGBoost model training time (see the second sketch after this list). XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. In the context of CloudXPRT, the workload evaluates how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use these metrics to compare IaaS stack performance and to evaluate whether any given stack can meet SLA thresholds.
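
To give a feel for the kind of computation behind the web microservices workload’s Monte Carlo stage, here’s a minimal, self-contained Python sketch that prices a European call option by simulation. It’s a generic textbook illustration of the technique, with assumed parameter values, and not CloudXPRT’s actual implementation.

```python
import math
import random

def monte_carlo_call_price(s0, strike, rate, vol, years, n_paths):
    """Estimate a European call option's price by Monte Carlo simulation.

    Simulates terminal stock prices under geometric Brownian motion and
    averages the discounted payoff. A generic textbook approach, not
    CloudXPRT's actual code.
    """
    payoff_sum = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)  # standard normal draw
        s_t = s0 * math.exp((rate - 0.5 * vol * vol) * years
                            + vol * math.sqrt(years) * z)
        payoff_sum += max(s_t - strike, 0.0)  # call payoff at expiry
    return math.exp(-rate * years) * payoff_sum / n_paths

# Illustrative (assumed) parameters: $100 stock, $105 strike, 2% risk-free
# rate, 25% volatility, one year to expiry, 100,000 simulated price paths.
print(round(monte_carlo_call_price(100.0, 105.0, 0.02, 0.25, 1.0, 100_000), 2))
```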
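
Similarly, here’s a short sketch of the general pattern behind a model-training-time metric: time an XGBoost training run in Python. It assumes the xgboost and scikit-learn packages are installed, uses a synthetic dataset, and illustrates only the metric concept rather than CloudXPRT’s workload code.

```python
import time

import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic stand-in data; CloudXPRT's workload uses its own dataset.
X, y = make_classification(n_samples=50_000, n_features=20, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

# Generic training parameters, assumed for illustration only.
params = {"objective": "binary:logistic", "max_depth": 6, "eta": 0.3}

start = time.perf_counter()
booster = xgb.train(params, dtrain, num_boost_round=100)
elapsed = time.perf_counter() - start

print(f"XGBoost training time: {elapsed:.2f} s")
```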

The CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers with an opportunity to work with CloudXPRT directly and shape the future of the benchmark with their feedback. We hope that testers will take this opportunity to explore the tool and send us their thoughts on its structure, workload concepts and execution, ease of use, and documentation. That feedback will help us improve the relevance and accessibility of CloudXPRT testing and results for years to come.

If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.

Justin
