Tag Archives: Ubuntu

The CloudXPRT v1.2 update package is now available!

We’re happy to announce that the CloudXPRT v1.2 update package is now available! The update prevents potential installation failures on Google Cloud Platform and Microsoft Azure, and ensures that the web microservices workload works on Ubuntu 22.04. The update moves to newer software components, including Kubernetes v1.23.7, Kubespray v2.18.1, and Kubernetes Metrics Server v1, and incorporates some additional minor script changes.

The CloudXPRT v1.2 web microservices workload installation package is available at the CloudXPRT.com download page and the BenchmarkXPRT GitHub repository.

Before you get started with v1.2, please note the following updated system requirements:

  • Ubuntu 20.04.2 or 22.04 for on-premises testing
  • Ubuntu 18.04, 20.04.2, or 22.04 for CSP (AWS/Azure/GCP) testing

Because CloudXPRT is designed to run on high-end servers, physical nodes or VMs under test must meet the following minimum specifications:

  • 16 logical or virtual CPUs
  • 8 GB of RAM
  • 10 GB of available disk space (50 GB for the data analytics workload)
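If you want to confirm that a node meets these minimums before you install, the short Python sketch below (our own illustration, not part of the CloudXPRT package) checks a Linux system using only the standard library. The thresholds come from the list above; change the disk threshold to 50 GB if you plan to run the data analytics workload.

  import os
  import shutil

  # CloudXPRT minimums listed above.
  MIN_CPUS = 16        # logical or virtual CPUs
  MIN_RAM_GB = 8
  MIN_DISK_GB = 10     # use 50 for the data analytics workload

  cpus = os.cpu_count()
  # SC_PHYS_PAGES/SC_PAGE_SIZE are available on Linux (e.g., Ubuntu).
  # Reported RAM may be slightly below the nominal figure because of kernel reservations.
  ram_gb = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE") / 1024**3
  disk_gb = shutil.disk_usage("/").free / 1024**3

  print(f"CPUs: {cpus} (need {MIN_CPUS}) -> {'OK' if cpus >= MIN_CPUS else 'FAIL'}")
  print(f"RAM: {ram_gb:.1f} GB (need {MIN_RAM_GB}) -> {'OK' if ram_gb >= MIN_RAM_GB else 'FAIL'}")
  print(f"Free disk: {disk_gb:.1f} GB (need {MIN_DISK_GB}) -> {'OK' if disk_gb >= MIN_DISK_GB else 'FAIL'}")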

The update package includes only the updated v1.2 test harness and the updated web microservices workload. It does not include the data analytics workload. As we stated in the blog, now that we’ve published the web microservices package, we will assess the level of interest users express about a possible refresh of the v1.1 data analytics workload. For now, the v1.1 data analytics workload will continue to be available via CloudXPRT.com for some time to serve as a reference resource for users who have worked with the package in the past.

Please let us know if you have any questions about the CloudXPRT v1.2 test package. Happy testing!

Justin

CloudXPRT status and next steps

We developed our first cloud benchmark, CloudXPRT, to measure the performance of cloud applications deployed on modern infrastructure as a service (IaaS) platforms. When we first released CloudXPRT in February of 2021, the benchmark included two test packages: a web microservices workload and a data analytics workload. Both supported on-premises and cloud service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. 

CloudXPRT is our most complex benchmark, requiring sustained compatibility between many software components across multiple independent test environments. As vendors roll out updates for some components and stop supporting others, it’s inevitable that something will break. Since CloudXPRT’s launch, we’ve become aware of installation failures when testers attempt to set up CloudXPRT on Ubuntu virtual machines with GCP and Microsoft Azure. Additionally, while the web microservices workload continues to run in most instances with a few configuration tweaks and workarounds, the data analytics workload fails consistently due to compatibility issues with Minio, Prometheus, and Kafka within the Kubernetes environment.

In response, we’re working to fix problems with the web microservices workload and bring all necessary components up to date. We’re developing an updated test package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will incorporate some minor script changes. Our goal is to ensure successful installation and testing with the on-premises and CSP platforms that we supported when we first launched CloudXPRT.

We are currently focusing on the web microservices workload for two reasons. First, more users have downloaded it than the data analytics workload. Second, we think we have a clear path to success. Our plan is to publish the updated web microservices test package, and see what feedback and interest we receive from users about a possible data analytics refresh. The existing data analytics workload will remain available via CloudXPRT.com for the time being to serve as a reference resource.

We apologize for the inconvenience that these issues have caused. We’ll provide more information about a release timeline and final test package details here in the blog as we get closer to publication. If you have any questions about the future of CloudXPRT, please feel free to contact us!

Justin

Reports of CloudXPRT installation failures

Recently, CloudXPRT testers have reported installation failures while attempting to set up CloudXPRT on Ubuntu virtual machines with Google Cloud Platform (GCP) and Microsoft Azure. We have not yet determined whether the installation process fails consistently on these VMs or whether the problem occurs only under specific conditions. We believe these failures occur only with GCP and Azure, and you should still be able to successfully install and run CloudXPRT on both Amazon Web Services virtual machines and on-premises gear.

We apologize for the inconvenience that this issue causes for CloudXPRT testers and will let the community know as soon as we identify a reliable solution. If you have encountered any other issues during CloudXPRT testing, please feel free to contact us!

Justin

The CloudXPRT v1.1 beta is available!

Last week, we announced that a CloudXPRT v1.1 beta was on the way. We’re happy to say that the v1.1 beta is now available to the public on a dedicated CloudXPRT v1.1 beta download page. While CloudXPRT v1.01 remains the officially supported version on CloudXPRT.com and in our GitHub repository, interested testers can use the v1.1 beta in new environments as we finalize the v1.1 build for official release. You are welcome to publish results, as we do not expect them to change in the final, official release.

As we mentioned in last week’s post, the CloudXPRT v1.1 beta includes the following changes:

  • We’ve added support for Ubuntu 20.04.2 or later for on-premises testing.
  • We’ve consolidated and standardized the installation packages for both workloads. Instead of one package for the data analytics workload and four separate packages for the web microservices workload, each workload has a single installation package that supports on-premises testing and testing with all three supported CSPs.
  • We’ve incorporated Terraform to create and configure VMs, which helps prevent problems when testers do not allocate enough storage per VM prior to testing.
  • We’ve replaced the Calico network plugin in Kubespray with Weave, which helps to avoid some of the network issues testers have occasionally encountered in the CSP environment.

Please feel free to share the link to the beta download page. (To avoid confusion, the beta will not appear in the main CloudXPRT download table.) We can’t yet state definitively whether results from the new version will be comparable to those from v1.01. We have not observed any significant differences in performance, but we haven’t tested every possible test configuration across every platform. If you observe different results when testing the same configuration with v1.01 and v1.1 beta, please send us the details so we can investigate.

If you have any questions about CloudXPRT or the CloudXPRT v1.1 beta, please let us know!

Justin

Understanding AIXPRT’s default number of requests

A few weeks ago, we discussed how AIXPRT testers can adjust the key variables of batch size, levels of precision, and number of concurrent instances by editing the JSON test configuration file in the AIXPRT/Config directory. In addition to those key variables, there is another variable in the config file called “total_requests” that has a different default setting depending on the AIXPRT test package you choose. This setting can significantly affect a test run, so it’s important for testers to know how it works.

The total_requests variable specifies how many inference requests AIXPRT will send to a network (e.g., ResNet-50) during one test iteration at a given batch size (e.g., batch 1, 2, or 4). This simulates the inference demand that end users place on the system. Because we designed AIXPRT to run on different types of hardware, it makes sense to set the default number of requests for each test package to suit the most likely hardware environment for that package.

For example, testing with OpenVINO on Windows aligns more closely with a consumer (i.e., desktop or laptop) scenario than testing with OpenVINO on Ubuntu, which is more typical of server/datacenter testing. Desktop testers require a much lower inference demand than server testers, so the default total_requests settings for the two packages reflect that. The default for the OpenVINO/Windows package is 500, while the default for the OpenVINO/Ubuntu package is 5,000.

Also, setting the number of requests so low that a system finishes each workload in less than 1 second can produce high run-to-run variation, so our default settings represent a lower boundary that will work well for common test scenarios.

Below, we provide the current default total_requests setting for each AIXPRT test package:

  • MXNet: 1,000
  • OpenVINO Ubuntu: 5,000
  • OpenVINO Windows: 500
  • TensorFlow Ubuntu: 100
  • TensorFlow Windows: 10
  • TensorRT Ubuntu: 5,000
  • TensorRT Windows: 500


Testers can adjust these variables in the config file according to their own needs. Finding the optimal combination of machine learning variables for each scenario is often a matter of trial and error, and the default settings represent what we think is a reasonable starting point for each test package.

To adjust the total_requests setting, start by locating and opening the JSON test configuration file in the AIXPRT/Config directory. Below, we show a section of the default config file (CPU_INT8.json) for the OpenVINO-Windows test package (AIXPRT_1.0_OpenVINO_Windows.zip). For each batch size, the total_requests setting appears at the bottom of the list of configurable variables. In this case, the default setting is 500. Change the total_requests numerical value for each batch size in the config file, save your changes, and close the file.

[Screenshot: the total_requests setting in the OpenVINO-Windows config file]
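If you would rather script the change than edit the file by hand, the following Python sketch shows one way to do it. We assume only that the key is named total_requests; the sketch walks the config’s JSON structure and updates every occurrence, whatever the surrounding layout looks like. The file path and the new value (1,000) are examples, so substitute your own.

  import json
  from pathlib import Path

  # Example path; point this at the config file for your test package.
  config_path = Path("AIXPRT/Config/CPU_INT8.json")

  def set_total_requests(node, value):
      """Recursively set every 'total_requests' key to the given value,
      regardless of how deeply it is nested in the config structure."""
      if isinstance(node, dict):
          for key, child in node.items():
              if key == "total_requests":
                  node[key] = value
              else:
                  set_total_requests(child, value)
      elif isinstance(node, list):
          for child in node:
              set_total_requests(child, value)

  config = json.loads(config_path.read_text())
  set_total_requests(config, 1000)  # example: change the default from 500 to 1,000
  config_path.write_text(json.dumps(config, indent=4))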

Note that if you are running multiple concurrent instances, OpenVINO and TensorRT automatically distribute the number of requests among the instances. MXNet and TensorFlow users must manually allocate the instances in the config file. You can find an example of how to structure manual allocation here. We hope to make this process automatic for all toolkits in a future update.
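The linked example covers the exact config syntax for manual allocation, so we won’t reproduce it here, but the arithmetic is simple: divide total_requests as evenly as possible across the concurrent instances. A small sketch (the function is ours, for illustration only):

  def allocate_requests(total_requests, instances):
      """Split total_requests as evenly as possible across concurrent
      instances; the first few instances absorb any remainder."""
      base, remainder = divmod(total_requests, instances)
      return [base + 1 if i < remainder else base for i in range(instances)]

  print(allocate_requests(500, 4))  # [125, 125, 125, 125]
  print(allocate_requests(500, 3))  # [167, 167, 166]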

We hope this information helps you understand the total_requests setting, and why the default values differ from one test package to another. If you have any questions or comments about this or other aspects of AIXPRT, please let us know.

Justin

Coming soon: An interactive AIXPRT selector tool

AI workloads are now relevant to all types of hardware, from servers to laptops to IoT devices, so we intentionally designed AIXPRT to support a wide range of potential hardware, toolkit, and workload configurations. This approach provides AIXPRT testers with a tool that is flexible enough to adapt to a variety of environments. The downside is that the number of options makes it fairly complicated to figure out which AIXPRT download package suits your needs.

To help testers navigate this complexity, we’ve been working on a new interactive selector tool. The tool is not yet live, but the screenshots and descriptions below provide a preview of what’s to come.

The tool will include drop-down menus for the key factors that go into determining the correct AIXPRT download package, along with a description of the options. Users can proceed in any order but will need to make a selection for each category. Since not all combinations work together, each selection the user makes will eliminate some of the options in the remaining categories.

[Screenshot 1: AIXPRT selector tool preview]
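To give a sense of the kind of filtering the tool performs behind the scenes, here is a small Python sketch. The categories and compatibility table are toy stand-ins, not the tool’s actual data; the point is simply that each selection narrows the options that remain valid in the other categories.

  # Toy compatibility data for illustration only; this is NOT AIXPRT's
  # actual package matrix.
  PACKAGES = [
      {"toolkit": "OpenVINO",   "os": "Windows"},
      {"toolkit": "OpenVINO",   "os": "Ubuntu"},
      {"toolkit": "TensorFlow", "os": "Windows"},
      {"toolkit": "TensorFlow", "os": "Ubuntu"},
      {"toolkit": "TensorRT",   "os": "Windows"},
      {"toolkit": "TensorRT",   "os": "Ubuntu"},
      {"toolkit": "MXNet",      "os": "Ubuntu"},
  ]

  def remaining_options(selections):
      """Return the options still available in each category, given the
      selections the user has already made."""
      matches = [p for p in PACKAGES
                 if all(p[cat] == val for cat, val in selections.items())]
      options = {}
      for pkg in matches:
          for cat, val in pkg.items():
              options.setdefault(cat, set()).add(val)
      return options

  # Example: with this toy data, selecting Windows removes MXNet from
  # the toolkit options.
  print(remaining_options({"os": "Windows"}))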

After a user selects an option, a check mark appears on the category icon, and the selection for that category appears in the category box (e.g., TensorFlow in the Toolkit category). This shows users which categories they’ve completed and the selections they’ve made. After a user selects options in more than one category, a Start over button appears in the lower-left corner. Clicking this button clears all existing selections and provides users with a clean slate.

Once every category is complete, a Download button appears in the lower-right corner. When the user clicks this button, a popup appears that provides a link to the correct download package and its associated readme file.

[Screenshot 2: AIXPRT selector tool preview]

We hope the selector tool will help make the AIXPRT download and installation process easier for those who are unfamiliar with the benchmark. Testers who already know exactly which package they need will be able to bypass the tool and go directly to a download table.

The tool will debut with the AIXPRT 1.0 GA in the next few days, and we’ll let everyone know when that happens! If you have any questions or comments about AIXPRT, please let us know.

Justin
