
Tag Archives: cloud applications

CloudXPRT status and next steps

We developed our first cloud benchmark, CloudXPRT, to measure the performance of cloud applications deployed on modern infrastructure as a service (IaaS) platforms. When we first released CloudXPRT in February of 2021, the benchmark included two test packages: a web microservices workload and a data analytics workload. Both supported on-premises and cloud service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. 

CloudXPRT is our most complex benchmark, requiring sustained compatibility between many software components across multiple independent test environments. As vendors roll out updates for some components and stop supporting others, it’s inevitable that something will break. Since CloudXPRT’s launch, we’ve become aware of installation failures when setting up CloudXPRT on Ubuntu virtual machines on GCP and Microsoft Azure. Additionally, while the web microservices workload continues to run in most instances with a few configuration tweaks and workarounds, the data analytics workload fails consistently because of compatibility issues with Minio, Prometheus, and Kafka within the Kubernetes environment.

In response, we’re working to fix problems with the web microservices workload and bring all necessary components up to date. We’re developing an updated test package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will incorporate some minor script changes. Our goal is to ensure successful installation and testing with the on-premises and CSP platforms that we supported when we first launched CloudXPRT.
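If you’d like to check which metrics.k8s.io API versions your cluster serves before the updated package arrives, a quick kubectl query will tell you. This is a general-purpose sketch rather than a CloudXPRT-specific step:

    # List the metrics.k8s.io API versions the cluster currently serves
    kubectl api-versions | grep metrics.k8s.io

    # Fetch raw node metrics from the version reported above
    # (v1beta1 shown here; substitute v1 once your cluster serves it)
    kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes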

We are currently focusing on the web microservices workload for two reasons. First, more users have downloaded it than the data analytics workload. Second, we think we have a clear path to success. Our plan is to publish the updated web microservices test package, and see what feedback and interest we receive from users about a possible data analytics refresh. The existing data analytics workload will remain available via CloudXPRT.com for the time being to serve as a reference resource.

We apologize for the inconvenience that these issues have caused. We’ll provide more information about a release timeline and final test package details here in the blog as we get closer to publication. If you have any questions about the future of CloudXPRT, please feel free to contact us!

Justin

Default requirements for CloudXPRT results submissions

Over the past few weeks, we’ve received questions about whether we require specific test configuration settings for official CloudXPRT results submissions. Currently, testers have the option to edit up to 12 configuration options for the web microservices workload and three configuration options for the data analytics workload. Not all configuration options have an impact on testing and results, but a few of them can drastically affect key results metrics and how long it takes to complete a test. Because new CloudXPRT testers may not anticipate those outcomes, and so many configuration permutations are possible, we’ve come up with a set of requirements for all future results submissions to our site. Please note that testers are still free to adjust all available configuration options—and define service level agreement (SLA) settings—as they see fit for their own purposes. The requirements below apply only to results testers want to submit for publication consideration on our site, and to any resulting comparisons.


Web microservices results submission requirement

Starting with the May results submission cycle, all web microservices results submissions must have the workload.cpurequests value, which lets the user designate the number of CPU cores the workload assigns to each pod, set to 4. Currently, the benchmark supports values of 1, 2, and 4, with 4 as the default. While 1 or 2 CPU cores per pod may be more appropriate for relatively low-end systems or configurations with few vCPUs, a value of 4 is appropriate for most datacenter processors, and it often enables CSP instances to operate within the benchmark’s default maximum 95th percentile latency SLA of 3,000 milliseconds.

In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
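As an illustration, a tester could pin the value from the command line with jq before a run. This sketch assumes the option appears as a nested workload.cpurequests field in config.json; check the white paper for the file’s actual layout:

    # Set the number of CPU cores the workload assigns to each pod to 4
    # (assumes a nested "workload" object; adjust the path to match the real file)
    jq '.workload.cpurequests = 4' config.json > config.tmp && mv config.tmp config.json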


Data analytics results submission requirement

Starting with the May results submission cycle, all data analytics results submissions must have the best reported performance (throughput_jobs/min) correspond to a 95th percentile SLA latency of 90 seconds or less. We have received submissions where the throughput was extremely high, but the 95th percentile SLA latency was up to 10 times the 90 seconds that we recommend in CloudXPRT documentation. High latency values may be acceptable for the unique purposes of individual testers, but they do not provide a good basis for comparison between clusters under test. For more information about configuration options with the data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
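To illustrate how the rule works, the hypothetical one-liner below selects the best compliant result from a simple CSV export. The column layout (throughput_jobs/min in column 1, 95th percentile latency in seconds in column 2) is our assumption for the example, not CloudXPRT’s actual output format:

    # Print the highest throughput among rows whose 95th percentile latency is <= 90 s
    # (hypothetical columns: 1 = throughput_jobs/min, 2 = latency in seconds)
    awk -F, '$2 <= 90 && $1 > best { best = $1 } END { print best }' results.csv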

We will update CloudXPRT documentation to make sure that testers know to use the default configuration settings if they plan to submit results for publication. If you have any questions about CloudXPRT or the CloudXPRT results submission process, please let us know.

Justin

The CloudXPRT v1.1 general release is tomorrow!

We’re happy to announce that CloudXPRT v1.1 will move from beta to general release status tomorrow! The installation packages will be available at the CloudXPRT.com download page and the BenchmarkXPRT GitHub repository. You will find more details about the v1.1 updates in a previous blog post, but the most prominent changes are the consolidation of the five previous installation packages into two (one per workload) and added support for Ubuntu 20.04.2 for on-premises testing.

Before you get started with v1.1, please note the following updated system requirements:

  • Ubuntu 20.04.2 or later for on-premises testing
  • Ubuntu 18.04, or 20.04.2 or later, for CSP (AWS/Azure/GCP) testing

CloudXPRT is designed to run on high-end servers. Physical nodes or VMs under test must meet the following minimum specifications:

  • 16 logical or virtual CPUs
  • 8 GB of RAM
  • 10 GB of available disk space (50 GB for the data analytics workload)
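A quick way to confirm that a node or VM meets these minimums is to check from a shell before you install. A minimal sketch:

    # Logical or virtual CPUs (needs at least 16)
    nproc
    # Total RAM (needs at least 8 GB)
    free -g
    # Available disk space (needs 10 GB, or 50 GB for the data analytics workload)
    df -h /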

We have also made significant adjustments to the installation and test configuration instructions in the readmes for both workloads, so please revisit these documents even if you’re familiar with previous test processes.

As we noted during the beta period, we have not observed any significant differences in performance between v1.01 and v1.1, but we haven’t tested every possible test configuration across every platform. If you observe different results when testing the same configuration with v1.01 and v1.1, please send us the details so we can investigate.

If you have any questions about CloudXPRT v1.1, please let us know!

Justin

Fixes for minor CloudXPRT bugs are on the way

We want to let CloudXPRT testers know that we’re close to releasing an updated version (build 1.01) with two minor bug fixes, an improved post-test results processing script, and an adjustment to one of our test configuration recommendations. None of these changes will affect performance or test results, so scores from previous CloudXPRT builds will be comparable to those from the new build.

The most significant changes in CloudXPRT build 1.01 are as follows:

  • In previous builds, some testers encountered warnings during setup to update the version of Kubernetes Operations (kops) when testing on public-cloud platforms (the CloudXPRT 1.00 recommendation is kops version 1.16.0). We are adjusting the kops installation steps in the setup instructions for the web microservices and data analytics workloads to prevent these warnings (an illustrative install sequence appears after this list).
  • In previous builds, post-test cleanup instructions for public-cloud testing environments do not always delete all of the resources that CloudXPRT creates during setup. We are updating instructions to ensure a more thorough cleanup process. This change applies to test instructions for the web microservices and data analytics workloads.
  • We are reformatting the optional results graphs the web microservices postprocess program creates to make them easier to interpret.
  • In previous builds, the recommended time interval for the web microservices workload is 120 seconds if the hpamode option is enabled and 60 seconds if it is disabled. Because we’ve found that the 60-second difference has no significant impact on test results, we are changing the recommendation to 60 seconds for both hpamode settings.
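For reference, installing a specific kops release generally looks like the sequence below, and kops can remove a test cluster’s resources in one step. Treat this as an illustrative sketch (the download URL pattern and cluster name are our assumptions), and follow the CloudXPRT setup instructions for the authoritative steps:

    # Download the kops 1.16.0 binary from the project's GitHub releases
    # (URL pattern is an assumption; confirm against the setup instructions)
    curl -Lo kops https://github.com/kubernetes/kops/releases/download/1.16.0/kops-linux-amd64
    chmod +x kops
    sudo mv kops /usr/local/bin/kops
    kops version

    # Example of a thorough post-test cleanup of public-cloud resources
    # (cluster name is a placeholder)
    kops delete cluster --name my-cluster.k8s.local --yes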


We hope these changes will improve the CloudXPRT setup and testing experience. We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, or would like to report bugs or other issues, please feel free to contact us!

Justin

CloudXPRT version 1.0 is here!

The CloudXPRT Preview period has ended, and CloudXPRT version 1.0 installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! Like the Preview build, CloudXPRT version 1.0 includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.

Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. On CloudXPRT.com, the Helpful Info box contains resources such as links to the Introduction to CloudXPRT white paper, the CloudXPRT master readme, and the CloudXPRT GitHub repository.

The GitHub repository also contains the CloudXPRT source code, which is freely available for testers to download and review.

Performance results from this release are comparable to performance results from the CloudXPRT Preview build. Testers who wish to publish results on CloudXPRT.com can find more information about the results submission and review process in the blog. We post the monthly results cycle schedule on the results submission page.

We’re thankful for all the input we received during the CloudXPRT development process and Preview period. If you have any questions about CloudXPRT, please let us know.

Justin

Check out our new CloudXPRT video!

Many businesses want to move critical applications to the cloud, but choosing the right cloud-based infrastructure as a service (IaaS) platform can be a complex and costly project. We developed CloudXPRT to help speed up and simplify the process by providing a powerful benchmarking tool that allows users to run multiple workloads on cloud platform software in on-premises and popular public cloud environments.

To help spread the word about what CloudXPRT can do and why it matters to businesses, we’ve published a new video, Choose the best IaaS configuration for your business with CloudXPRT, on YouTube and CloudXPRT.com. If you know anyone who is evaluating cloud options, or who would be interested in CloudXPRT testing or results, we encourage you to share the video with them. As always, if you have any questions about CloudXPRT, please let us know!

Justin

Video: Choose the best IaaS configuration for your business with CloudXPRT.
