
CloudXPRT status and next steps

We developed our first cloud benchmark, CloudXPRT, to measure the performance of cloud applications deployed on modern infrastructure as a service (IaaS) platforms. When we first released CloudXPRT in February 2021, the benchmark included two test packages: a web microservices workload and a data analytics workload. Both supported on-premises and cloud service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.

CloudXPRT is our most complex benchmark, requiring sustained compatibility among many software components across multiple independent test environments. As vendors roll out updates for some components and stop supporting others, it’s inevitable that something will break. Since CloudXPRT’s launch, we’ve become aware of installation failures when testers attempt to set up CloudXPRT on Ubuntu virtual machines with GCP and Microsoft Azure. Additionally, while the web microservices workload continues to run in most instances with a few configuration tweaks and workarounds, the data analytics workload fails consistently due to compatibility issues with MinIO, Prometheus, and Kafka within the Kubernetes environment.

In response, we’re working to fix problems with the web microservices workload and bring all necessary components up to date. We’re developing an updated test package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will incorporate some minor script changes. Our goal is to ensure successful installation and testing with the on-premises and CSP platforms that we supported when we first launched CloudXPRT.
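
For testers who want to confirm that an existing cluster matches the versions the updated package targets before kicking off a run, a quick preflight check can help. The following is a minimal sketch using the Kubernetes client-go library; it is not part of CloudXPRT, and the kubeconfig path and expected version string are illustrative assumptions.

    // preflight.go: a minimal, illustrative version check. This is not
    // part of CloudXPRT; the kubeconfig path and expected version are
    // assumptions for the sake of the example.
    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig is in the default location.
        kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            log.Fatalf("loading kubeconfig: %v", err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatalf("creating client: %v", err)
        }
        // Ask the API server what version it is running.
        info, err := clientset.Discovery().ServerVersion()
        if err != nil {
            log.Fatalf("querying server version: %v", err)
        }
        const expected = "v1.23.7" // the version the updated package targets
        fmt.Printf("cluster reports %s; test package targets %s\n",
            info.GitVersion, expected)
        if info.GitVersion != expected {
            fmt.Println("warning: version mismatch; installation may fail")
        }
    }

Running a check like this before installation can surface a version mismatch early instead of partway through setup.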

We are currently focusing on the web microservices workload for two reasons. First, more users have downloaded it than the data analytics workload. Second, we think we have a clear path to success. Our plan is to publish the updated web microservices test package and see what feedback and interest we receive from users about a possible data analytics refresh. The existing data analytics workload will remain available via CloudXPRT.com for the time being to serve as a reference resource.

We apologize for the inconvenience that these issues have caused. We’ll provide more information about a release timeline and final test package details here in the blog as we get closer to publication. If you have any questions about the future of CloudXPRT, please feel free to contact us!

Justin

Check out our new CloudXPRT video!

Many businesses want to move critical applications to the cloud, but choosing the right cloud-based infrastructure as a service (IaaS) platform can be a complex and costly project. We developed CloudXPRT to help speed up and simplify the process by providing a powerful benchmarking tool that allows users to run multiple workloads on cloud platform software in on-premises and popular public cloud environments.

To help spread the word about what CloudXPRT can do and why it matters to businesses, we’ve published a new video, Choose the best IaaS configuration for your business with CloudXPRT, on YouTube and CloudXPRT.com. If you know anyone who is evaluating cloud options, or who would be interested in CloudXPRT testing or results, we encourage you to share the video with them. As always, if you have any questions about CloudXPRT, please let us know!

Justin

Video: Choose the best IaaS configuration for your business with CloudXPRT.

Improving the CloudXPRT results viewer

This week, we made some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and allow visitors to more quickly and easily find important data.

The first set of changes involves how we present test system information in the main results table and on the individual results details pages. We realized that the “CPU” and “Number of nodes” categories had the potential to confuse readers, so we removed them and created the following new fields: “Cluster components,” “Nodes (work + control plane),” and “vCPUs (work + control plane).” These new categories better describe test configurations and clarify how many CPUs engage with the workload.

The second set of changes involves the number of data points that we list in the table for each web microservices test run. Previously, we published a unique entry for each level of concurrency that a test run recorded. If a run scaled to 32 concurrent instances, for example, we presented the data for each instance in its own row. This approach showed the performance curve as the workload scaled up during a single test, but it made it harder for visitors to identify the best throughput results from an individual run. We decided to consolidate the results from a complete test run into a single row that highlights only the maximum number of successful requests (throughput). All the raw data from each run remains available for download on the details page for each result, but visitors no longer have to wade through it to find a configuration’s main “score.”
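
To make the consolidation concrete, here is a small, hypothetical Go sketch of the reduction: it collapses the per-concurrency records from one run into the single best-throughput row. The RunRecord fields and values are assumptions for illustration, not the results viewer’s actual schema.

    package main

    import "fmt"

    // RunRecord holds one concurrency level's results from a single
    // test run. The field names are illustrative assumptions.
    type RunRecord struct {
        Concurrency        int
        SuccessfulRequests float64
    }

    // bestThroughput returns the concurrency level that achieved the
    // most successful requests, i.e., the single row the viewer shows.
    func bestThroughput(records []RunRecord) RunRecord {
        best := records[0]
        for _, r := range records[1:] {
            if r.SuccessfulRequests > best.SuccessfulRequests {
                best = r
            }
        }
        return best
    }

    func main() {
        // One record per concurrency level, as a scaling run produces.
        run := []RunRecord{
            {Concurrency: 8, SuccessfulRequests: 410.5},
            {Concurrency: 16, SuccessfulRequests: 782.0},
            {Concurrency: 32, SuccessfulRequests: 760.25},
        }
        best := bestThroughput(run)
        fmt.Printf("best: %.2f successful requests at concurrency %d\n",
            best.SuccessfulRequests, best.Concurrency)
    }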

We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!

Justin

Now available: An updated CloudXPRT Preview build and source code

Today, we published an updated CloudXPRT Preview build (v0.97), along with the build’s source code. The new build fixes a few minor bugs and makes several improvements to help facilitate installation, setup, and testing. The fixes do not affect CloudXPRT test results, so results from the new build are comparable to results from the original build (v0.95). You can find more detailed information about the changes in last week’s blog.

The CloudXPRT Preview v0.97 source code is available to the public via the CloudXPRT GitHub repository. As we’ve discussed in the past, publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. By allowing all interested parties to download and review our source code, we’re encouraging openness and honesty in the benchmarking industry and are inviting the kind of constructive feedback that helps to ensure that the XPRTs continue to contribute to a level playing field.

While the CloudXPRT source code is available to the public, our approach to derivative works differs from some open-source models. Traditional open-source models encourage developers to change products and even take them in different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

We encourage you to download and review the source and send us any feedback you have. Your questions and suggestions may influence future versions of CloudXPRT.

If you have any questions about CloudXPRT or the source code, please let us know!

Justin

A CloudXPRT build with bug fixes is on the way

We want to let CloudXPRT testers know that updated installer packages are on the way. The packages will include several fixes for bugs that we discovered in the initial CloudXPRT Preview release (build 0.95). The fixes do not affect CloudXPRT test results, but do help to facilitate installation and remove potential sources of confusion during the setup and testing process.

Along with a few text edits and other minor fixes, we made the following changes in the upcoming build:

  • We updated the data analytics setup code to prevent error messages that occurred when the benchmark treated one-node configurations as a special case.
  • We configured the data analytics workload to use a go.mod file for all the required Go modules. With this change, we can explicitly state the release versions of the necessary Go modules, so updates to the latest Go release won’t break the benchmark. This change also removes the need to include large gosrc.tar.gz files in the source code. (See the sketch after this list.)
  • We added a cleanup utility script for the web microservices workload. If something goes wrong during configuration or a test run, testers can use this script to clean everything and start over.
  • We fixed an error that prevented the benchmark from successfully retrieving the cluster_config.json file in certain multi-node setups.
  • In the web microservices workload, we changed the output format of the request rate metric from integer to float. This change allows us to report workload data with a higher degree of precision.
  • In the web microservices workload, we added an overall summary line to the results log file that reports the best throughput numbers from the test run.
  • In the web microservices code, we modified a Kubernetes option that the benchmark used to create the Cassandra schema. Prior to this change, the option generated an inconsequential but distracting error message about TTY input.
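
To give a rough picture of the go.mod change described in the second item above, a pinned module file looks something like the sketch below. The module path and dependency versions are placeholders, not CloudXPRT’s actual requirements.

    // go.mod sketch: the module path and dependency versions below are
    // placeholders, not CloudXPRT's actual dependency list.
    module example.com/cloudxprt/data-analytics

    go 1.14

    require (
        github.com/prometheus/client_golang v1.7.1
        github.com/segmentio/kafka-go v0.3.7
    )

Because every version is stated explicitly, go build resolves the same dependencies on every test system, so a new upstream release can’t silently change the benchmark’s behavior.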

We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, please let us know!

Justin
