

News about the CloudXPRT source code

For much of the BenchmarkXPRT Development Community’s history, we offered community members exclusive access to XPRT benchmark source code. Back in February, we started to experiment with a different approach when we made the AIXPRT source code publicly available on GitHub. By allowing anyone who is interested in AIXPRT to download and review the source code, we reinforced our commitment to making the XPRT development process as transparent as possible. We also want the XPRTs to continue to contribute to fair practices in the benchmarking world, and we believe that expanded access to the source code encourages the constructive feedback that helps us meet that goal.

The feedback we received after publishing the AIXPRT source code was very positive; thank you to all who reached out. Because of that feedback and our desire to increase openness, we’ve decided to use standard open-source licenses to make the CloudXPRT source code available to the public when we release the first build, or shortly thereafter. As with AIXPRT, folks will be able to download the CloudXPRT source code and submit potential workloads for future consideration, but we reserve the right to control derivative works.

We’ll share more information about the first CloudXPRT release and its source code in the coming weeks. If you have any questions about XPRT source code, please ask. We also welcome any thoughts about using this approach to release the source code of other XPRT benchmarks. As always, feel free to comment below or reach out by email.

Justin

More details about CloudXPRT’s workloads

About a month ago, we posted an update on the CloudXPRT development process. Today, we want to provide more details about the three workloads we plan to offer in the initial preview build:

  • In the web-tier microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest (see the first sketch after this list). The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack can meet service-level agreement (SLA) thresholds.
  • The machine learning (ML) training workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. In the context of CloudXPRT, the workload evaluates how well an IaaS stack enables XGBoost to accelerate and optimize model training (see the second sketch after this list). The workload reports latency and throughput rates. As with the web-tier microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack can meet SLA thresholds.
  • The AI-themed container scaling workload starts up a container and uses a version of the AIXPRT harness to launch Wide and Deep recommender system inference tasks in the container. Each container represents a fixed amount of work, and as the number of Wide and Deep jobs increases, CloudXPRT launches more containers in parallel to handle the load. The workload reports both the startup time for the containers and the Wide and Deep throughput results (see the third sketch after this list). Testers can use this workload to compare container startup time between IaaS stacks; optimize the balance between resource allocation, capacity, and throughput on a given stack; and confirm whether a given stack is suitable for specific SLAs.
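
To make the web-tier workload’s simulation step more concrete, here is a minimal sketch of the kind of Monte Carlo stock simulation the workload describes. It is illustrative only: the function name, parameters, and model (geometric Brownian motion pricing a European call option) are our assumptions, not CloudXPRT’s published internals.

  import numpy as np

  def monte_carlo_call(s0, strike, rate, vol, years, n_paths=100_000, seed=0):
      """Estimate a European call option value via Monte Carlo simulation."""
      rng = np.random.default_rng(seed)
      z = rng.standard_normal(n_paths)
      # Simulate terminal stock prices under geometric Brownian motion.
      st = s0 * np.exp((rate - 0.5 * vol**2) * years + vol * np.sqrt(years) * z)
      payoff = np.maximum(st - strike, 0.0)
      # Discount the average payoff back to present value.
      return np.exp(-rate * years) * payoff.mean()

  print(f"Estimated option value: {monte_carlo_call(100, 105, 0.03, 0.2, 1.0):.2f}")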
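
Similarly, the heart of the ML training workload’s metric, model training time, can be sketched in a few lines. The synthetic data and training parameters below are placeholders, not the benchmark’s actual dataset or settings.

  import time
  import numpy as np
  import xgboost as xgb

  rng = np.random.default_rng(0)
  X = rng.standard_normal((100_000, 20))
  y = (X[:, 0] + rng.standard_normal(100_000) > 0).astype(int)  # synthetic labels

  dtrain = xgb.DMatrix(X, label=y)
  params = {"objective": "binary:logistic", "tree_method": "hist"}

  start = time.perf_counter()
  xgb.train(params, dtrain, num_boost_round=100)  # train a gradient-boosted model
  print(f"Training time: {time.perf_counter() - start:.2f} s")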
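
Finally, the container scaling workload’s startup-time metric boils down to timing how long a container takes to launch. The sketch below uses the Docker CLI and a placeholder image to illustrate the idea; CloudXPRT’s own harness, orchestration, and container images are more involved.

  import subprocess
  import time

  IMAGE = "alpine:3"  # placeholder image, not one of CloudXPRT's containers

  start = time.perf_counter()
  # 'docker run --rm' starts a fresh container; the 'true' command exits
  # immediately, so the elapsed time approximates startup overhead.
  subprocess.run(["docker", "run", "--rm", IMAGE, "true"], check=True)
  print(f"Container start-to-exit time: {time.perf_counter() - start:.3f} s")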

We’re continuing to move forward with CloudXPRT development and testing and hope to add more workloads in subsequent builds. Like most organizations, we’ve adjusted our work patterns to adapt to the COVID-19 situation. While this has slowed our progress a bit, we still hope to release the CloudXPRT preview build in April. If anything changes, we’ll let folks know as soon as possible here in the blog.

If you have any thoughts or comments about CloudXPRT workloads, please feel free to contact us.

Justin

The Introduction to AIXPRT white paper is now available!

Today, we published the Introduction to AIXPRT white paper. The paper serves as an overview of the benchmark and a consolidation of AIXPRT-related information that we’ve published in the XPRT blog over the past several months. For folks who are completely new to AIXPRT and veteran testers who need to brush up on pre-test configuration procedures, we hope this paper will be a quick, one-stop reference that helps reduce the learning curve.

The paper describes the AIXPRT toolkits and workloads and explains how to adjust key test parameters (batch size, level of precision, number of concurrent instances, and default number of requests), use alternate test configuration files, understand and submit results, and access the source code.
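
As a rough illustration of what such a test configuration might cover, the sketch below writes an example JSON file. The key names here are hypothetical, chosen only to mirror the parameters listed above; the white paper documents AIXPRT’s actual configuration file format and field names.

  import json

  # Hypothetical key names for illustration only; see Introduction to AIXPRT
  # for the real configuration schema.
  config = {
      "batch_sizes": [1, 2, 4, 8],   # inputs per inference request
      "precision": "int8",           # fp32, fp16, or int8
      "concurrent_instances": 2,     # parallel inference instances
      "total_requests": 1000,        # default number of requests
  }

  with open("example_config.json", "w") as f:
      json.dump(config, f, indent=2)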

We hope that Introduction to AIXPRT will prove to be a valuable resource. Moving forward, readers will be able to access the paper from the Helpful Info box on AIXPRT.com and the AIXPRT section of our XPRT white papers page. If you have any questions about AIXPRT, please let us know!

Justin

Odds and ends

Today, we want to share quick updates on a few XPRT topics.

In case you missed yesterday’s announcement, the CrXPRT 2 Community Preview (CP) is now available. BenchmarkXPRT Development Community members can access the preview using a direct link we’ve posted on the CrXPRT tab in the XPRT Members’ Area (login required). This tab also provides a link to the CrXPRT 2 CP user manual. You can find a summary of what’s new with CrXPRT 2 in last week’s blog. During the preview period, we allow testers to publish CP test scores. Note that CrXPRT 2 overall performance test scores and battery life measurements are not comparable to those from CrXPRT 2015.

We’ll soon be publishing our first AIXPRT white paper, Introduction to AIXPRT. It will summarize the AIXPRT toolkits and workloads and explain how to adjust test parameters such as batch size, level of precision, and number of concurrent instances; how to use alternate test configuration files; and how to understand test results. When the paper is available, we’ll post it on the XPRT white papers page and make an announcement here in the blog.

Finally, in response to decreased downloads and usage of BatteryXPRT, we have ended support for the benchmark. We’re always monitoring usage of the XPRTs so that we can better direct our resources to the current needs of users. We’ve removed BatteryXPRT from the Google Play Store, but it is still available for download on BatteryXPRT.com.

If you have any questions about CrXPRT 2, AIXPRT, or BatteryXPRT, please let us know!

Justin

Principled Technologies and the BenchmarkXPRT Development Community make the AIXPRT source code available to the public

Durham, NC, February 18 — Principled Technologies and the BenchmarkXPRT Development Community have released the source code for the AIXPRT benchmark to the public. AIXPRT is a free tool that allows users to evaluate a system’s machine learning inference performance by running common image-classification, object-detection, and recommender system workloads.

“Publishing the AIXPRT source code is part of our commitment to making the XPRT development process as transparent as possible,” said Bill Catchings, co-founder of Principled Technologies, which administers the BenchmarkXPRT Development Community. “By allowing all interested parties to download and review our source code, we’re taking tangible steps to improve openness in the benchmarking industry.”

To access the AIXPRT source code, visit the AIXPRT GitHub repository at https://github.com/BenchmarkXPRT/AIXPRT.

AIXPRT includes support for the Intel® OpenVINO™, TensorFlow™, and NVIDIA® TensorRT™ toolkits to run image-classification and object-detection workloads with the ResNet-50 and SSD-MobileNet v1 networks, as well as the MXNet™ toolkit with a Wide and Deep recommender system workload. AIXPRT reports results at FP32, FP16, and INT8 levels of precision.

To access AIXPRT, visit www.AIXPRT.com.

AIXPRT is part of the BenchmarkXPRT suite of performance evaluation tools, which includes WebXPRT, CrXPRT, MobileXPRT, TouchXPRT, and HDXPRT. The XPRTs help users get the facts before they buy, use, or evaluate tech products such as computers, tablets, and phones.

To learn more about the BenchmarkXPRT Development Community, go to www.BenchmarkXPRT.com or contact a BenchmarkXPRT Development Community representative directly by sending a message to BenchmarkXPRTsupport@PrincipledTechnologies.com.

About Principled Technologies, Inc.
Principled Technologies, Inc. is a leading provider of technology marketing, as well as learning and development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.PrincipledTechnologies.com.

Company Contact
Justin Greene
BenchmarkXPRT Development Community
Principled Technologies, Inc.
1007 Slater Road, Ste. 300
Durham, NC 27704
BenchmarkXPRTsupport@PrincipledTechnologies.com

The AIXPRT source code is now public

This week, we have good news for AIXPRT testers: the AIXPRT source code is now available to the public via GitHub. As we’ve discussed in the past, publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. With previous XPRT benchmarks, we made the source code available only to community members. With AIXPRT, we have released the source code more widely. By allowing all interested parties, not just community members, to download and review our source code, we’re taking tangible steps to improve openness and honesty in the benchmarking industry, and we’re encouraging the kind of constructive feedback that helps ensure that the XPRTs continue to contribute to a level playing field.

Traditional open-source models encourage developers to change products and even take them in new and different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source code and submit potential workloads for future consideration, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

We encourage you to download and review the source and send us any feedback you may have. Your questions and suggestions may influence future versions of AIXPRT. If you have any questions about AIXPRT or accessing the source code, please feel free to ask! Please also let us know if you think we should take this approach to releasing the source code with other XPRT benchmarks.

Justin
