We’re happy to announce that the AIXPRT learning tool is now live! We designed the tool to serve as an information hub for common AIXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT find the answers they need in as little time as possible.
The tool features four primary areas of content:
The Q&A section provides quick answers to the questions we receive most from testers and the tech press.
The AIXPRT: the basics section describes specific topics such as the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
The testing and results section covers the testing process, metrics, and how to publish results.
The AI/ML primer provides brief, easy-to-understand definitions of key AI and ML terms and concepts for those who want to learn more about the field.
The first screenshot below shows the home screen. To show how some of the popup information sections appear, the second screenshot shows the Inference tasks (workloads) entry in the AI/ML Primer section.
We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.
If you have any questions about the tool, please let us know!
For anyone interested in learning more about AIXPRT, the Introduction to AIXPRT white paper provides detailed information about its toolkits, workloads, system requirements, installation, test parameters, and results. However, for AIXPRT.com visitors who want to find the answers to specific AIXPRT-related questions quickly, a white paper can be daunting.
Because we want tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be able to find the answers they need in as little time as possible, we’ve decided to develop a new learning tool that will serve as an information hub for common AIXPRT topics and questions.
The new learning tool will be available online through our site. It will offer quick bites of information about the fundamentals of AIXPRT, why the benchmark matters, the benefits of AIXPRT testing and results, machine learning concepts, key terms, and practical testing concerns.
We’re still working on the tool’s content and design. Because we’re designing this tool for you, we’d love to hear the topics and questions you think we should include. If you have any suggestions, please let us know!
A few months ago, we invited readers to send in their thoughts and ideas about web technologies and workload scenarios that may be a good fit for the next WebXPRT. We’d like to share a few of those ideas today, and we invite you to continue to send your feedback. We’re approaching the time when we need to begin firming up plans for a WebXPRT 4 development cycle in 2021, but there’s still plenty of time for you to help shape the future of the benchmark.
One of the most promising ideas for WebXPRT 4 is the potential addition of one or more WebAssembly (WASM) workloads. WASM is a low-level, binary instruction format that works across all modern browsers. It offers web developers a great deal of flexibility and provides the speed and efficiency necessary for running complex client applications in the browser. WASM enables a variety of workload scenario options, including gaming, video editing, VR, virtual machines, image recognition, and interactive educational content.
In addition, the Chrome team is dropping Portable Native Client (PNaCl) support in favor of WASM, which is why we had to remove a PNaCl workload when updating CrXPRT 2015 to CrXPRT 2. We generally model CrXPRT workloads on existing WebXPRT workloads, so familiarizing ourselves with WASM could ultimately benefit more than one XPRT.
We are also considering adding a web-based machine learning workload with TensorFlow for tasks including image classification, object detection, sentence encoding, natural language processing, and more. We could also use this technology to enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album using AI or Encrypt Notes and OCR Scan.
Other ideas include using a WebGL-based workload to target GPUs and investigating ways to incorporate a battery life test. What do you think? Let us know!
The BenchmarkXPRT Development Community started almost 10 years ago with the development of the High Definition Experience & Performance Ratings Test, also known as HDXPRT. Back then, we distributed the benchmark to interested parties by mailing out physical DVDs. We’ve come a long way since then, as testers now freely and easily access six XPRT benchmarks from our site and major app stores. Hardware manufacturers and tech journalists—the core group of XPRT testers—work within a constantly changing tech landscape. Because of our commitment to providing those testers with what they need, the XPRTs grew as we developed additional benchmarks to expand the reach of our tools from PCs to servers and all types of notebooks, Chromebooks, and mobile devices.
As today’s tech landscape continues to evolve at a rapid pace, our desire to play an active role in emerging markets drives us to expand our testing capabilities into areas like machine learning (AIXPRT) and cloud-first applications (CloudXPRT).
While these new technologies carry the potential to increase efficiency, improve quality, and boost the bottom line for companies around the world, it’s often difficult to decide where and how to invest in new hardware or services. The ever-present need for relevant and reliable data is the reason many organizations use the XPRTs to help make confident choices about their company’s future tech.
We just released a new video that helps to explain what the XPRTs provide and how they can play an important role in a company’s tech purchasing decisions. We hope you’ll check it out!
We’re excited about the continued growth of the XPRTs, and we’re eager to meet the challenges of adapting to the changing tech landscape. If you have any questions about the XPRTs or suggestions for future benchmarks, please let us know!
About a month ago, we posted an update on the CloudXPRT development process. Today, we want to provide more details about the three workloads we plan to offer in the initial preview build:
In the web-tier microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte-Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
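To make the Monte-Carlo step concrete, here is a minimal Python sketch of the kind of simulation such a workload might perform for each transaction. The pricing model, parameters, and the monte_carlo_option_price function are our own illustrative assumptions, not CloudXPRT code.

```python
# Hypothetical sketch of a Monte Carlo stock-option simulation, similar in
# spirit to the per-request computation the web-tier workload describes.
# All parameters below are illustrative assumptions, not CloudXPRT's values.
import numpy as np

def monte_carlo_option_price(spot, strike, rate, volatility, years, paths=100_000):
    """Estimate a European call option price with geometric Brownian motion."""
    rng = np.random.default_rng(seed=42)
    z = rng.standard_normal(paths)
    # Simulate terminal stock prices under risk-neutral drift.
    terminal = spot * np.exp((rate - 0.5 * volatility**2) * years
                             + volatility * np.sqrt(years) * z)
    # Discount the average payoff back to today.
    payoff = np.maximum(terminal - strike, 0.0)
    return np.exp(-rate * years) * payoff.mean()

print(monte_carlo_option_price(spot=100, strike=105, rate=0.02,
                               volatility=0.25, years=1.0))
```

In a transactions-per-second metric like the one the workload reports, the time each request spends in a computation of this sort is exactly what determines whether a stack can stay within an SLA threshold under load.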
The machine learning (ML) training workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web-tier microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
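For readers unfamiliar with XGBoost, the hedged sketch below shows what timing a training run looks like at small scale. The synthetic dataset, hyperparameters, and timing approach are illustrative assumptions rather than the workload’s actual configuration.

```python
# Hypothetical sketch: time an XGBoost training run, loosely analogous to
# what the CloudXPRT ML training workload measures at much larger scale.
import time

import xgboost as xgb
from sklearn.datasets import make_classification

# Synthetic binary-classification data; stands in for a real training set.
X, y = make_classification(n_samples=50_000, n_features=40, random_state=0)
dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "tree_method": "hist", "max_depth": 6}

start = time.perf_counter()
booster = xgb.train(params, dtrain, num_boost_round=100)
elapsed = time.perf_counter() - start
print(f"Training time: {elapsed:.2f} s")
```

How quickly an IaaS stack completes a run like this, and how many such runs it can sustain, maps onto the latency and throughput rates the workload reports.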
The AI-themed container scaling workload starts up a container and uses a version of the AIXPRT harness to launch Wide and Deep recommender system inference tasks in the container. Each container represents a fixed amount of work, and as the number of Wide and Deep jobs increases, CloudXPRT launches more containers in parallel to handle the load. The workload reports both the startup time for the containers and the Wide and Deep throughput results. Testers can use this workload to compare container startup time between IaaS stacks; optimize the balance between resource allocation, capacity, and throughput on a given stack; and confirm whether a given stack is suitable for specific SLAs.
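As a rough illustration of the startup-time metric, here is a minimal Python sketch that times a container launch with the Docker CLI. The nginx:alpine image and the polling-based readiness check are our own assumptions; CloudXPRT’s actual harness is considerably more involved.

```python
# Hypothetical sketch: measure how long a container takes to start and
# report as running, a simplified analogue of the container scaling
# workload's startup-time metric. Requires a local Docker installation.
import subprocess
import time

IMAGE = "nginx:alpine"  # placeholder image, not the workload's container

start = time.perf_counter()
container_id = subprocess.run(
    ["docker", "run", "-d", "--rm", IMAGE],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Poll until Docker reports the container as running.
while True:
    state = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Running}}", container_id],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    if state == "true":
        break
    time.sleep(0.05)

print(f"Startup time: {time.perf_counter() - start:.2f} s")
subprocess.run(["docker", "stop", container_id], check=True)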
We’re continuing to move forward with CloudXPRT development and testing and hope to add more workloads in subsequent builds. Like most organizations, we’ve adjusted our work patterns to adapt to the COVID-19 situation. While this has slowed our progress a bit, we still hope to release the CloudXPRT preview build in April. If anything changes, we’ll let folks know as soon as possible here in the blog.
If you have any thoughts or comments about CloudXPRT workloads, please feel free to contact us.
With four separate machine learning toolkits on their own development schedules, three workloads, and a wide range of possible configurations and use cases, AIXPRT has more moving parts than any of the XPRT benchmark tools to date. Because there are so many different components, and because we want AIXPRT to provide consistently relevant evaluation data in the rapidly evolving AI and machine learning spaces, we anticipate a cadence of AIXPRT updates in the future that will be more frequent than the schedules we’ve used for other XPRTs in the past. With that expectation in mind, we want to let AIXPRT testers know that when we release an AIXPRT update, they can expect minimized disruption, consideration for their testing needs, and clear communication.
Minimized disruption
Each AIXPRT toolkit (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and we won’t always have a lot of advance notice when new versions are on the way. Hypothetically, a new version of OpenVINO could release one month, and a new version of TensorRT just two months later. Thankfully, the modular nature of AIXPRT’s installation packages ensures that we won’t need to revise the entire AIXPRT suite every time a toolkit update goes live. Instead, we’ll update each package individually when necessary. This means that if you only test with a single AIXPRT package, updates to the other packages won’t affect your testing. For us to maintain AIXPRT’s relevance, there’s unfortunately no way to avoid all disruption, but we’ll work to keep it to a minimum.
Consideration for testers
As we move forward, when software compatibility issues force us to update an AIXPRT package, we may discover that the update has a significant effect on results. If we find that results from the new package are no longer comparable to those from previous tests, we’ll share the differences that we’re seeing in our lab. As always, we will use documentation and versioning to make sure that testers know what to expect and that there’s no confusion about which package to use.
Clear communication
When we update any package, we’ll make sure to communicate any updates in the new build as clearly as possible. We’ll document all changes thoroughly in the package readmes, and we’ll talk through significant updates here in the blog. We’re also available to answer questions about AIXPRT and any other XPRT-related topic, so feel free to ask!