This week, we made
some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and
allow visitors to more quickly and easily find important data.
The first set of
changes involves how we present test system information in the main results
table and on the individual results details pages. We realized that there was
potential for confusion around the “CPU” and “Number of nodes” categories. We
removed those and created the following new fields: “Cluster components,”
“Nodes (work + control plane),” and
“vCPUs (work + control plane).” These new categories better describe test
configurations and clarify how many CPUs engage with the workload.
The second set of
changes involves the number of data points that we list in the table for each web
microservices test run. Previously, we published a separate entry
for each level of concurrency a test run recorded. If a run scaled to 32
concurrent instances, we presented the data for each concurrency level in its own row. This
helped to show the performance curve during a single test as the workload
scaled up, but it made it more difficult for visitors to identify the best
throughput results from an individual run. We decided to consolidate the
results from a complete test run into a single row, highlighting only the maximum
number of successful requests (throughput). All the raw data from each run remains
available for download on the details page for each result, but visitors don’t
have to wade through all that data to find the configuration’s main “score.”
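As a rough illustration of the consolidation described above, the sketch below groups hypothetical per-concurrency records by run and keeps only the row with the maximum number of successful requests. The field names (`run_id`, `concurrency`, `successful_requests`) are illustrative assumptions, not CloudXPRT's actual schema.

```python
# Sketch: collapse per-concurrency rows into one row per test run,
# keeping only the peak throughput (maximum successful requests).
# Data shape and field names are hypothetical.
from collections import defaultdict

raw_rows = [
    # (run_id, concurrency, successful_requests)
    ("run-1", 8, 1200),
    ("run-1", 16, 2100),
    ("run-1", 32, 1900),  # throughput dipped past the peak
    ("run-2", 8, 1500),
    ("run-2", 16, 1700),
]

# run_id -> (concurrency at peak, max successful requests)
best = defaultdict(lambda: (0, 0))
for run_id, concurrency, requests in raw_rows:
    if requests > best[run_id][1]:
        best[run_id] = (concurrency, requests)

for run_id, (concurrency, requests) in sorted(best.items()):
    print(f"{run_id}: {requests} successful requests at concurrency {concurrency}")
```

The viewer's single-row summary works the same way in spirit: the full per-concurrency curve stays in the downloadable raw data, while the table shows only each run's peak.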
We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!
Justin