Improving the CloudXPRT results viewer

This week, we made some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and allow visitors to more quickly and easily find important data.

The first set of changes involves how we present test system information in the main results table and on the individual results details pages. We realized that the “CPU” and “Number of nodes” categories could cause confusion, so we removed them and created three new fields: “Cluster components,” “Nodes (work + control plane),” and “vCPUs (work + control plane).” These new categories better describe test configurations and clarify how many CPUs engage with the workload.
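
To make the new breakdown concrete, here is a minimal sketch of how a results entry might carry these fields. The type, field names, and sample values are our own illustration, not CloudXPRT’s actual schema.

    package main

    import "fmt"

    // ResultEntry is a hypothetical record mirroring the new viewer fields;
    // the names and sample values below are illustrative only.
    type ResultEntry struct {
        ClusterComponents string // software components in the cluster under test
        WorkNodes         int    // nodes that run the workload
        ControlPlaneNodes int    // nodes that manage the cluster
        WorkVCPUs         int    // vCPUs engaged with the workload
        ControlPlaneVCPUs int    // vCPUs reserved for the control plane
    }

    func main() {
        r := ResultEntry{"Kubernetes + Docker", 3, 1, 48, 4}
        fmt.Printf("%s | Nodes (work + control plane): %d + %d | vCPUs (work + control plane): %d + %d\n",
            r.ClusterComponents, r.WorkNodes, r.ControlPlaneNodes, r.WorkVCPUs, r.ControlPlaneVCPUs)
    }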

The second set of changes involves the number of data points we list in the table for each web microservices test run. Previously, we published a unique entry for each level of concurrency that a test run recorded. If a run scaled to 32 concurrent instances, we presented the data for each concurrency level in its own row. This approach showed the performance curve as the workload scaled up during a single test, but it made it harder for visitors to identify the best throughput results from an individual run. We now consolidate the results from a complete test run into a single row that highlights only the maximum number of successful requests (throughput). All the raw data from each run remains available for download on the details page for each result, but visitors no longer have to wade through that data to find the configuration’s main “score.”
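
The consolidation logic itself is simple. The sketch below, with illustrative types and field names of our own (not CloudXPRT’s actual data model), shows the idea: scan the per-concurrency entries from one run and keep the entry with the most successful requests.

    package main

    import "fmt"

    // concurrencyResult is a hypothetical per-concurrency-level entry from one run.
    type concurrencyResult struct {
        Instances          int     // concurrent instances at this step of the run
        SuccessfulRequests float64 // successful requests per second at this step
    }

    // bestResult picks the entry with the highest throughput; that single
    // entry becomes the one row shown in the results table.
    func bestResult(run []concurrencyResult) concurrencyResult {
        best := run[0]
        for _, r := range run[1:] {
            if r.SuccessfulRequests > best.SuccessfulRequests {
                best = r
            }
        }
        return best
    }

    func main() {
        run := []concurrencyResult{{1, 210.4}, {16, 2480.2}, {32, 2310.7}}
        b := bestResult(run)
        fmt.Printf("best: %d instances, %.1f successful requests/sec\n", b.Instances, b.SuccessfulRequests)
    }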

We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!

Justin

Now available: An updated CloudXPRT Preview build and source code

Today, we published an updated CloudXPRT Preview build (v0.97), along with the build’s source code. The new build fixes a few minor bugs and makes several improvements that help facilitate installation, setup, and testing. The fixes do not affect CloudXPRT test results, so results from the new build are comparable to results from the original build (v0.95). You can find more detailed information about the changes in last week’s blog.

The CloudXPRT Preview v0.97 source code is available to the public via the CloudXPRT GitHub repository. As we’ve discussed in the past, publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. By allowing all interested parties to download and review our source code, we’re encouraging openness and honesty in the benchmarking industry and are inviting the kind of constructive feedback that helps to ensure that the XPRTs continue to contribute to a level playing field.

While the CloudXPRT source code is available to the public, our approach to derivative works differs from some open-source models. Traditional open-source models encourage developers to change products and even take them in different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

We encourage you to download and review the source and send us any feedback you have. Your questions and suggestions may influence future versions of CloudXPRT.

If you have any questions about CloudXPRT or the source code, please let us know!

Justin

A CloudXPRT build with bug fixes is on the way

We want to let CloudXPRT testers know that updated installer packages are on the way. The packages will include several fixes for bugs that we discovered in the initial CloudXPRT Preview release (build 0.95). The fixes do not affect CloudXPRT test results, but do help to facilitate installation and remove potential sources of confusion during the setup and testing process.

Along with a few text edits and other minor fixes, we made the following changes in the upcoming build:

  • We updated the data analytics setup code to prevent error messages that occurred when the benchmark treated one-node configurations as a special case.
  • We configured the data analytics workload to use a go.mod file for all the required Go modules. With this change, we can explicitly state the release version of each necessary Go module, so updates to the latest Go release won’t break the benchmark. This change also removes the need to include large gosrc.tar.gz files in the source code. (See the go.mod sketch following this list.)
  • We added a cleanup utility script for the web microservices workload. If something goes wrong during configuration or a test run, testers can use this script to clean everything and start over.
  • We fixed an error that prevented the benchmark from successfully retrieving the cluster_config.json file in certain multi-node setups.
  • In the web microservices workload, we changed the output format of the request rate metric from integer to float. This change allows us to report workload data with a higher degree of precision. (See the request-rate sketch following this list.)
  • In the web microservices workload, we added an overall summary line to the results log file that reports the best throughput numbers from the test run.
  • In the web microservices code, we modified a Kubernetes option that the benchmark used to create the Cassandra schema. Prior to this change, the option generated an inconsequential but distracting error message about TTY input.
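
To illustrate the go.mod change mentioned above: a module manifest pins each dependency to a specific release, so a new Go toolchain or upstream module release can’t silently change what the workload builds against. The module path and versions below are hypothetical, not CloudXPRT’s actual manifest.

    // Hypothetical go.mod for the data analytics workload; the module path
    // and pinned versions are illustrative, not CloudXPRT's actual manifest.
    module example.com/cloudxprt/data-analytics

    go 1.14

    require (
        github.com/Shopify/sarama v1.26.4
        k8s.io/client-go v0.18.2
    )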

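The request rate change is similarly small in code terms. Here is a minimal sketch of the idea, with illustrative variable names and values of our own: truncating to an integer discards fractional requests per second, while a float preserves that precision.

    package main

    import "fmt"

    func main() {
        requestRate := 1234.5678 // successful requests per second (illustrative value)

        // Old behavior: truncating to an integer discards the fractional part.
        fmt.Printf("rate as integer: %d\n", int(requestRate)) // 1234

        // New behavior: reporting a float preserves that precision.
        fmt.Printf("rate as float:   %.2f\n", requestRate) // 1234.57
    }
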
We haven’t set the release date for the updated build yet, but when we do, we’ll announce it here in the blog. If you have any questions about CloudXPRT, please let us know!

Justin
