
Following up

This week, we’re sharing news on two topics that we’ve discussed here in the blog over the past several months: CloudXPRT v1.01 and a potential AIXPRT OpenVINO update.

CloudXPRT v1.01

Last week, we announced that we were very close to releasing an updated CloudXPRT build (v1.01) with two minor bug fixes, an improved post-test results processing script, and an adjustment to one of our test configuration recommendations. Our testing and prep are complete, and the new version is live in the CloudXPRT GitHub repository and on our site!

None of the v1.01 changes affect performance or test results, so scores from the new build are comparable to those from previous CloudXPRT builds. If you’d like to know more about the changes, take a look at last week’s blog post.

The AIXPRT OpenVINO update

In late July, we discussed our plans to update the AIXPRT OpenVINO packages with OpenVINO 2020.3 Long-Term Support (LTS). While there are no known problems with the existing AIXPRT OpenVINO package, the LTS version targets environments that benefit from maximum stability and don’t require a constant stream of new tools and feature changes, so we thought it would be well suited for a benchmark like AIXPRT.

We initially believed that the update process would be relatively simple, and we’d be able to release a new AIXPRT OpenVINO package in September. However, we’ve discovered that the process is involved enough to require substantial low-level recoding. At this time, it’s difficult to estimate when the updated build will be ready for release. For any testers looking forward to the update, we apologize for the delay.

If you have any questions or comments about these or any other XPRT-related topics, please let us know!

Justin

We’re working on an AIXPRT learning tool

For anyone interested in learning more about AIXPRT, the Introduction to AIXPRT white paper provides detailed information about its toolkits, workloads, system requirements, installation, test parameters, and results. However, for AIXPRT.com visitors who want to find the answers to specific AIXPRT-related questions quickly, a white paper can be daunting.

Because we want tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be able to find the answers they need in as little time as possible, we’ve decided to develop a new learning tool that will serve as an information hub for common AIXPRT topics and questions.

The new learning tool will be available online through our site. It will offer quick bites of information about the fundamentals of AIXPRT, why the benchmark matters, the benefits of AIXPRT testing and results, machine learning concepts, key terms, and practical testing concerns.

We’re still working on the tool’s content and design. Because we’re designing this tool for you, we’d love to hear the topics and questions you think we should include. If you have any suggestions, please let us know!

Justin

Potential web technology additions for WebXPRT 4

A few months ago, we invited readers to send in their thoughts and ideas about web technologies and workload scenarios that may be a good fit for the next WebXPRT. We’d like to share a few of those ideas today, and we invite you to continue to send your feedback. We’re approaching the time when we need to begin firming up plans for a WebXPRT 4 development cycle in 2021, but there’s still plenty of time for you to help shape the future of the benchmark.

One of the most promising ideas for WebXPRT 4 is the potential addition of one or more WebAssembly (WASM) workloads. WASM is a low-level, binary instruction format that works across all modern browsers. It offers web developers a great deal of flexibility and provides the speed and efficiency necessary for running complex client applications in the browser. WASM enables a variety of workload scenario options, including gaming, video editing, VR, virtual machines, image recognition, and interactive educational content.
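For readers who haven't worked with WASM, here's a minimal sketch of how a browser loads a module and calls one of its exports. The module URL and the exported sum function are hypothetical placeholders, not anything from WebXPRT:

```typescript
// Minimal, illustrative sketch: fetch, compile, and run a WASM module.
// WebAssembly.instantiateStreaming compiles the module while it downloads.
async function runWasmWorkload(): Promise<number> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/workloads/example.wasm') // hypothetical module URL
  );
  // Call an exported function; a real workload would time a heavier kernel,
  // such as a video-editing or image-recognition routine compiled to WASM.
  const sum = instance.exports.sum as (a: number, b: number) => number;
  return sum(2, 3);
}

runWasmWorkload().then((result) => console.log('WASM result:', result));
```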

In addition, the Chrome team is dropping Portable Native Client (PNaCl) support in favor of WASM, which is why we had to remove a PNaCl workload when updating CrXPRT 2015 to CrXPRT 2. We generally model CrXPRT workloads on existing WebXPRT workloads, so familiarizing ourselves with WASM could ultimately benefit more than one XPRT benchmark.

We are also considering adding a web-based machine learning workload with TensorFlow for JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of tasks, including image classification, object detection, sentence encoding, natural language processing, and more. We could also use this technology to enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album using AI or Encrypt Notes and OCR Scan.
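To give a feel for what that might look like, here's a small, illustrative sketch that loads one of the library's pre-trained models (MobileNet) and classifies an image. It uses TensorFlow.js's published packages, but it's our own example, not a WebXPRT workload:

```typescript
// Illustrative sketch: image classification with a pre-trained
// TensorFlow.js model. Assumes the @tensorflow/tfjs and
// @tensorflow-models/mobilenet npm packages.
import '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyImage(img: HTMLImageElement): Promise<void> {
  const model = await mobilenet.load();          // downloads pre-trained weights
  const predictions = await model.classify(img); // top classes + probabilities
  for (const p of predictions) {
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`);
  }
}

// 'photo' is a placeholder element ID for this example.
classifyImage(document.getElementById('photo') as HTMLImageElement);
```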

Other ideas include using a WebGL-based workload to target GPUs and investigating ways to incorporate a battery life test. What do you think? Let us know!

Justin

The CloudXPRT Preview is almost here

We’re happy to announce that we’re planning to release the CloudXPRT Preview next week! After we take the CloudXPRT Preview installation and source code packages live, they will be freely available to the public via CloudXPRT.com and the BenchmarkXPRT GitHub repository. All interested parties will be able to publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We’ll share more information about that process and the corresponding dates here in the blog in the coming weeks.

We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We’re still testing that workload to make sure we get it right, and we hope to add it to the CloudXPRT suite in the near future.

If you missed the earlier workload-related post, here are the details about the two workloads that will be in the preview build:

  • In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. (A minimal sketch of this kind of simulation follows this list.) The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
  • The data analytics workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. In the context of CloudXPRT, the workload evaluates how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
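To make the Monte Carlo step above more concrete, here's a small, illustrative sketch of the kind of simulation such a workload might run. The geometric Brownian motion model and its parameters are our example, not CloudXPRT source code:

```typescript
// Illustrative sketch, not CloudXPRT code: estimate a stock's mean final
// price by simulating many geometric-Brownian-motion paths, then report
// simple throughput (paths per second).
function monteCarloMeanFinalPrice(
  s0: number,     // starting price
  mu: number,     // annual drift
  sigma: number,  // annual volatility
  years: number,
  paths: number
): number {
  let total = 0;
  for (let i = 0; i < paths; i++) {
    // Box-Muller transform: two uniform draws -> one standard normal draw
    const z = Math.sqrt(-2 * Math.log(1 - Math.random())) *
              Math.cos(2 * Math.PI * Math.random());
    total += s0 * Math.exp((mu - 0.5 * sigma * sigma) * years +
                           sigma * Math.sqrt(years) * z);
  }
  return total / paths;
}

const paths = 1_000_000;
const start = Date.now();
const mean = monteCarloMeanFinalPrice(100, 0.05, 0.2, 1, paths);
const seconds = (Date.now() - start) / 1000;
console.log(`mean final price: ${mean.toFixed(2)}; ` +
            `throughput: ${Math.round(paths / seconds)} paths/sec`);
```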

The CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers with an opportunity to work with CloudXPRT directly and shape the future of the benchmark with their feedback. We hope that testers will take this opportunity to explore the tool and send us their thoughts on its structure, workload concepts and execution, ease of use, and documentation. That feedback will help us improve the relevance and accessibility of CloudXPRT testing and results for years to come.

If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.

Justin

More details about CloudXPRT’s workloads

About a month ago, we posted an update on the CloudXPRT development process. Today, we want to provide more details about the three workloads we plan to offer in the initial preview build:

  • In the web-tier microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
  • The machine learning (ML) training workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. In the context of CloudXPRT, the workload evaluates how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web-tier microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
  • The AI-themed container scaling workload starts up a container and uses a version of the AIXPRT harness to launch Wide and Deep recommender system inference tasks in the container. Each container represents a fixed amount of work, and as the number of Wide and Deep jobs increases, CloudXPRT launches more containers in parallel to handle the load. The workload reports both the startup time for the containers and the Wide and Deep throughput results. Testers can use this workload to compare container startup time between IaaS stacks; optimize the balance between resource allocation, capacity, and throughput on a given stack; and confirm whether a given stack is suitable for specific SLAs.
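As a rough illustration of the startup-time metric (our own sketch, not how the CloudXPRT harness actually drives containers), the snippet below times how long a throwaway container takes to launch and exit. It assumes Node.js and Docker are installed and the image has already been pulled:

```typescript
// Illustrative sketch, not CloudXPRT code: time container startup by
// launching a container that runs a no-op command and exits, so the
// measurement is dominated by startup cost rather than real work.
import { execFileSync } from 'child_process';

function timeContainerStartup(image: string): number {
  const start = Date.now();
  execFileSync('docker', ['run', '--rm', image, 'true']);
  return (Date.now() - start) / 1000; // seconds
}

console.log(`startup time: ${timeContainerStartup('alpine:latest').toFixed(2)} s`);
```

A real harness would, of course, launch many containers in parallel and track throughput alongside startup time.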

We’re continuing to move forward with CloudXPRT development and testing and hope to add more workloads in subsequent builds. Like most organizations, we’ve adjusted our work patterns to adapt to the COVID-19 situation. While this has slowed our progress a bit, we still hope to release the CloudXPRT preview build in April. If anything changes, we’ll let folks know as soon as possible here in the blog.

If you have any thoughts or comments about CloudXPRT workloads, please feel free to contact us.

Justin

The Introduction to AIXPRT white paper is now available!

Today, we published the Introduction to AIXPRT white paper. The paper serves as an overview of the benchmark and a consolidation of AIXPRT-related information that we’ve published in the XPRT blog over the past several months. For folks who are completely new to AIXPRT and veteran testers who need to brush up on pre-test configuration procedures, we hope this paper will be a quick, one-stop reference that helps reduce the learning curve.

The paper describes the AIXPRT toolkits and workloads and explains how to adjust key test parameters (batch size, level of precision, number of concurrent instances, and default number of requests), use alternate test configuration files, understand and submit results, and access the source code.

We hope that Introduction to AIXPRT will prove to be a valuable resource. Moving forward, readers will be able to access the paper from the Helpful Info box on AIXPRT.com and the AIXPRT section of our XPRT white papers page. If you have any questions about AIXPRT, please let us know!

Justin
