Tag Archives: microservices

Coming soon: a white paper about the CloudXPRT web microservices workload

Soon, we’ll be expanding our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the workload in much greater detail.

In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper will describe the workload’s test configuration variables, structural components, task workflows, and test metrics. It will also discuss how to interpret test results and how to submit results for publication.

As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.

We hope that the upcoming Overview of the CloudXPRT Web Microservices Workload paper will serve as a go-to resource for CloudXPRT testers, and will answer any questions you have about the workload. Once it goes live, we’ll provide links in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.

If you have any questions, please let us know!

Justin

A CloudXPRT build with bug fixes is on the way

We want to let CloudXPRT testers know that updated installer packages are on the way. The packages will include several fixes for bugs that we discovered in the initial CloudXPRT Preview release (build 0.95). The fixes do not affect CloudXPRT test results, but they do streamline installation and remove potential sources of confusion during setup and testing.

Along with a few text edits and other minor fixes, we made the following changes in the upcoming build:

  • We updated the data analytics setup code to prevent error messages that occurred when the benchmark treated one-node configurations as a special case.
  • We configured the data analytics workload to use a go.mod file for all the required Go modules (see the sketch after this list). With this change, we can explicitly state the release version of each required module, so updates to the latest Go release won’t break the benchmark. This change also removes the need to include large gosrc.tar.gz files in the source code.
  • We added a cleanup utility script for the web microservices workload. If something goes wrong during configuration or a test run, testers can use this script to clean everything and start over.
  • We fixed an error that prevented the benchmark from successfully retrieving the cluster_config.json file in certain multi-node setups.
  • In the web microservices workload, we changed the output format of the request rate metric from integer to float. This change allows us to report workload data with a higher degree of precision.
  • In the web microservices workload, we added an overall summary line to the results log file that reports the best throughput numbers from the test run.
  • In the web microservices code, we modified a Kubernetes option that the benchmark used to create the Cassandra schema. Prior to this change, the option generated an inconsequential but distracting error message about TTY input.
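
For readers less familiar with Go’s module system, a go.mod file lists a module’s dependencies and pins each one to an explicit release, which is what keeps a build stable when new Go versions ship. The sketch below is purely illustrative; the module path and dependency are hypothetical placeholders, not the actual CloudXPRT modules:

    // go.mod (illustrative sketch; module path and versions are hypothetical)
    module github.com/example/cloudxprt-data-analytics

    // The Go release the module targets.
    go 1.14

    // Pinning each dependency to an explicit version means a new
    // upstream release cannot silently break the benchmark.
    require (
        github.com/example/somedep v1.2.3
    )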

We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, please let us know!

Justin

The CloudXPRT Preview is almost here

We’re happy to announce that we’re planning to release the CloudXPRT Preview next week! After we take the CloudXPRT Preview installation and source code packages live, they will be freely available to the public via CloudXPRT.com and the BenchmarkXPRT GitHub repository. All interested parties will be able to publish CloudXPRT results; however, until we begin the formal results submission and review process in July, we will publish only the results we produce in our own lab. We’ll share more information about that process and the corresponding dates here in the blog in the coming weeks.

We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We hope to add that workload to the CloudXPRT suite in the near future, and are still conducting testing to make sure we get it right.

If you missed the earlier workload-related post, here are the details about the two workloads that will be in the preview build:

  • In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest (see the sketch after this list). The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
  • The data analytics workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
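
To make the web microservices task concrete, here is a minimal sketch, in Go, of the kind of computation a Monte Carlo stock simulation performs: estimating a European call option’s price over many random price paths. The function names and parameters are our own illustrations, not CloudXPRT code:

    // A minimal sketch (ours, not CloudXPRT code) of the kind of work the
    // workload simulates: a Monte Carlo estimate of a European call
    // option's price over many random stock-price paths.
    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // monteCarloCallPrice simulates n terminal stock prices under
    // geometric Brownian motion and averages the discounted payoffs.
    func monteCarloCallPrice(spot, strike, rate, vol, years float64, n int) float64 {
        var sum float64
        for i := 0; i < n; i++ {
            z := rand.NormFloat64() // standard normal draw
            terminal := spot * math.Exp((rate-0.5*vol*vol)*years+vol*math.Sqrt(years)*z)
            sum += math.Max(terminal-strike, 0) // call payoff at expiry
        }
        return math.Exp(-rate*years) * sum / float64(n) // discount to present value
    }

    func main() {
        // Hypothetical parameters chosen only for illustration.
        price := monteCarloCallPrice(100, 105, 0.02, 0.25, 1.0, 100000)
        fmt.Printf("estimated option price: %.4f\n", price)
    }

In CloudXPRT, many such transactions run concurrently against the web application, and the transactions-per-second metric captures how many the IaaS stack can sustain, which testers can then weigh against SLA thresholds.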

The CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers with an opportunity to work with CloudXPRT directly and shape the future of the benchmark with their feedback. We hope that testers will take this opportunity to explore the tool and send us their thoughts on its structure, workload concepts and execution, ease of use, and documentation. That feedback will help us improve the relevance and accessibility of CloudXPRT testing and results for years to come.

If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.

Justin
