BenchmarkXPRT Blog

Month: July 2020

Improving the CloudXPRT results viewer

This week, we made some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and allow visitors to more quickly and easily find important data.

The first set of changes involves how we present test system information in the main results table and on the individual results details pages. We realized that the “CPU” and “Number of nodes” categories had the potential to cause confusion, so we removed them and created the following new fields: “Cluster components,” “Nodes (work + control plane),” and “vCPUs (work + control plane).” These new categories better describe test configurations and clarify how many CPUs engage with the workload.
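To make the new fields concrete, here is a minimal sketch of how a single configuration might read under them. The structure, the component list, and all of the values below are hypothetical examples for illustration only, not data from the actual results viewer:

    package main

    import "fmt"

    // resultEntry mirrors the new summary fields; this is an illustrative
    // structure, not the schema the results viewer actually uses.
    type resultEntry struct {
        ClusterComponents string
        WorkNodes         int // nodes that run the workload
        ControlPlaneNodes int
        WorkVCPUs         int
        ControlPlaneVCPUs int
    }

    func main() {
        // A made-up three-node configuration: two work nodes plus one control-plane node.
        e := resultEntry{
            ClusterComponents: "Kubernetes, Docker, Cassandra",
            WorkNodes:         2,
            ControlPlaneNodes: 1,
            WorkVCPUs:         32,
            ControlPlaneVCPUs: 16,
        }
        fmt.Printf("Cluster components: %s\n", e.ClusterComponents)
        fmt.Printf("Nodes (work + control plane): %d + %d\n", e.WorkNodes, e.ControlPlaneNodes)
        fmt.Printf("vCPUs (work + control plane): %d + %d\n", e.WorkVCPUs, e.ControlPlaneVCPUs)
    }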

The second set of changes involves the number of data points that we list in the table for each web microservices test run. Previously, we published a unique entry for each level of concurrency that a test run recorded. If a run scaled to 32 concurrent instances, we presented the data for each concurrency level in its own row. This helped to show the performance curve as the workload scaled up during a single test, but it made it more difficult for visitors to identify the best throughput results from an individual run. We decided to consolidate the results from a complete test run into a single row, highlighting only the maximum number of successful requests (throughput). All the raw data from each run remains available for download on the details page for each result, but visitors no longer have to wade through all that data to find the configuration’s main “score.”
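To illustrate the consolidation, here is a minimal sketch in Go that picks the best throughput from a set of per-concurrency results. The data structure, field names, and numbers are hypothetical stand-ins; the actual CloudXPRT results files use their own format:

    package main

    import "fmt"

    // concurrencyResult is a hypothetical record for one concurrency level
    // within a single web microservices test run.
    type concurrencyResult struct {
        Concurrency        int
        SuccessfulRequests float64 // successful requests at this concurrency level
    }

    func main() {
        // Made-up data for one run that scaled to 32 concurrent instances.
        run := []concurrencyResult{
            {1, 210.4}, {2, 415.9}, {4, 803.2}, {8, 1490.7},
            {16, 2601.3}, {32, 2544.8},
        }

        // Consolidate the run to a single row: keep only the best throughput.
        best := run[0]
        for _, r := range run[1:] {
            if r.SuccessfulRequests > best.SuccessfulRequests {
                best = r
            }
        }
        fmt.Printf("Best throughput: %.1f successful requests at %d concurrent instances\n",
            best.SuccessfulRequests, best.Concurrency)
    }

In the new table layout, only a line like that final one appears for each run; the full per-concurrency data stays in the downloadable raw results.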

We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!

Justin

We’re working on an update for the AIXPRT OpenVINO workload

Shortly after the initial AIXPRT release, we noted that each of the toolkits AIXPRT uses (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and new versions will sometimes appear with little warning. When this happens, we’ll have to respond by updating specific AIXPRT installation packages, giving AIXPRT testers relatively short notice.

This is one of those times! Intel recently released OpenVINO 2020.3 Long-Term Support (LTS), and we’re planning to update the AIXPRT OpenVINO packages with the LTS version. The LTS version targets environments that benefit from maximum stability and don’t require a constant stream of new tools and feature changes. In other words, it’s well suited for a benchmark, and we think it’s a good fit for AIXPRT moving forward.

We don’t yet know what impact the new version will have on AIXPRT OpenVINO test results. A substantial part of the development process will involve testing the new packages on a variety of platforms to see how performance changes. We’ll communicate our findings here in the blog, so AIXPRT testers will know what to expect.

Thankfully, the modular nature of the AIXPRT installation packages ensures that we don’t need to revise the entire AIXPRT suite every time a toolkit update goes live. If you test with only TensorFlow, TensorRT, or MXNet, or a combination of those toolkits, this update won’t affect your testing.

We’re not ready to commit to a release date for the new build, but anticipate it will be in September.

If you have any questions about AIXPRT or OpenVINO, please let us know!

Justin

Now available: An updated CloudXPRT Preview build and source code

Today, we published an updated CloudXPRT Preview build (v0.97), along with the build’s source code. The new build fixes a few minor bugs and makes several improvements that facilitate installation, setup, and testing. The fixes do not affect CloudXPRT test results, so results from the new build are comparable to results from the original build (v0.95). You can find more detailed information about the changes in last week’s blog.

The CloudXPRT Preview v0.97 source code is available to the public via the CloudXPRT GitHub repository. As we’ve discussed in the past, publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. By allowing all interested parties to download and review our source code, we’re encouraging openness and honesty in the benchmarking industry and are inviting the kind of constructive feedback that helps to ensure that the XPRTs continue to contribute to a level playing field.

While the CloudXPRT source code is available to the public, our approach to derivative works differs from some open-source models. Traditional open-source models encourage developers to change products and even take them in different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

We encourage you to download and review the source and send us any feedback you have. Your questions and suggestions may influence future versions of CloudXPRT.

If you have any questions about CloudXPRT or the source code, please let us know!

Justin

A CloudXPRT build with bug fixes is on the way

We want to let CloudXPRT testers know that updated installer packages are on the way. The packages will include several fixes for bugs that we discovered in the initial CloudXPRT Preview release (build 0.95). The fixes do not affect CloudXPRT test results, but do help to facilitate installation and remove potential sources of confusion during the setup and testing process.

Along with a few text edits and other minor fixes, we made the following changes in the upcoming build:

  • We updated the data analytics setup code to prevent error messages that occurred when the benchmark treated one-node configurations as a special case.
  • We configured the data analytics workload to use a go.mod file for all the required Go modules. With this change, we can explicitly state the release versions of the necessary Go modules, so updates to the latest Go release won’t break the benchmark. This change also removes the need to include large gosrc.tar.gz files in the source code. (See the illustrative go.mod sketch after this list.)
  • We added a cleanup utility script for the web microservices workload. If something goes wrong during configuration or a test run, testers can use this script to clean everything and start over.
  • We fixed an error that prevented the benchmark from successfully retrieving the cluster_config.json file in certain multi-node setups.
  • In the web microservices workload, we changed the output format of the request rate metric from integer to float. This change allows us to report workload data with a higher degree of precision.
  • In the web microservices workload, we added an overall summary line to the results log file that reports the best throughput numbers from the test run.
  • In the web microservices code, we modified a Kubernetes option that the benchmark used to create the Cassandra schema. Prior to this change, the option generated an inconsequential but distracting error message about TTY input.
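For readers who haven’t worked with Go modules, the go.mod change above means the workload’s dependencies are pinned to explicit versions rather than whatever happens to be current. The sketch below shows the general shape of such a file; the module path and dependency versions are hypothetical and are not copied from the CloudXPRT source:

    // go.mod (illustrative only; paths and versions are hypothetical)
    module github.com/example/data-analytics-workload

    go 1.13

    require (
        github.com/segmentio/kafka-go v0.3.7
        gopkg.in/yaml.v2 v2.3.0
    )

With versions stated explicitly, a new Go release or a new upstream module release doesn’t silently change what the benchmark builds against.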

We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, please let us know!

Justin

The Introduction to CloudXPRT white paper is now available!

Today, we published the Introduction to CloudXPRT white paper. The paper provides an overview of our latest benchmark and consolidates CloudXPRT-related information that we’ve published in the XPRT blog over the past several months. It describes the CloudXPRT workloads, explains how to choose and download installation packages and how to submit CloudXPRT results for publication, and discusses possibilities for additional development in the coming months.

CloudXPRT is one of the most complex tools in the XPRT family, and there are more CloudXPRT-related topics to discuss than we could fit in this first paper. In future white papers, we will discuss in greater detail each of the benchmark workloads, the range of test configuration options, results reporting, and methods for analysis.

We hope that Introduction to CloudXPRT will provide testers who are interested in CloudXPRT with a solid foundation of understanding on which they can build. Moving forward, we will provide links to the paper in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.

If you have any questions about CloudXPRT, please let us know!

Justin
