Recently, a member of the tech
press asked us about the status of AIXPRT,
our benchmark that measures machine learning inference performance. We want to
share our answer here in the blog for the benefit of other readers. The writer said
it seemed like we had not updated AIXPRT in a long time, and wondered whether we
had any immediate plans to do so.
It's true that we haven’t updated AIXPRT in quite some time. Unfortunately, while a
few tech press publications and OEM labs began experimenting with AIXPRT
testing, the benchmark never got the traction we hoped for, and we’ve decided
to invest our resources elsewhere for the time being. The AIXPRT installation
packages are still available for people to use or reference as they wish, but
we have not updated the benchmark to work with the latest platform versions
(OpenVINO, TensorFlow, etc.). It’s likely that several components in each
package are out of date.
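For anyone who does experiment with the existing packages, one quick sanity check is to compare the toolkit versions on the test system against the versions AIXPRT last targeted. Below is a minimal sketch, not part of AIXPRT itself; the pip package names are assumptions, and TensorRT and OpenVINO installations vary by platform:

```python
# Illustrative only: report which toolkit versions are installed locally, to
# gauge how far they lag behind current releases. Package names are assumptions.
from importlib import metadata

for pkg in ("tensorflow", "openvino", "mxnet"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```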
If you are interested in AIXPRT and would like us to bring it up to date, please let us know.
We can’t promise that we’ll revive the benchmark, but your feedback could be a
valuable contribution as we try to gauge the benchmarking community’s interest.
We’re happy to
announce that the AIXPRT learning tool is now live! We
designed the tool to serve as an information hub for common AIXPRT topics and
questions, and to help tech journalists, OEM lab engineers, and everyone who is
interested in AIXPRT find the answers they need in as little time as possible.
The tool features four
primary areas of content:
The Q&A section provides quick answers to the questions we
receive most often from testers and the tech press.
The AIXPRT: the basics section describes specific topics such as
the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
The testing and results section covers the testing process,
metrics, and how to publish results.
The AI/ML primer provides brief, easy-to-understand definitions of
key AI and ML terms and concepts for those who want to learn more about the field.
The first screenshot below shows the home screen. The second shows the Inference tasks (workloads) entry in the AI/ML primer section, as an example of how some of the popup information sections appear.
We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.
If you have any questions about the tool, please let us know!
Last month, we announced that we’re working on
a new AIXPRT learning tool. Because we want tech journalists, OEM lab
engineers, and everyone who is interested in AIXPRT to be able to find the
answers they need in as little time as possible, we’re designing this tool to serve
as an information hub for common AIXPRT topics and questions.
We’re still finalizing
aspects of the tool’s content and design, so some details may change, but we
can now share a sneak peek of the main landing page. In the screenshot below,
you can see that the tool will feature four primary areas of content:
The FAQ section will provide quick answers to the questions we
receive most often from testers and the tech press.
The AIXPRT basics section will describe specific topics such as the
benchmark’s toolkits, networks, workloads, and hardware and software requirements.
The testing and results section will cover the testing process,
the metrics the benchmark produces, and how to publish results.
The AI/ML primer will provide brief, easy-to-understand definitions
of key AI and ML terms and concepts for those who want to learn more about the field.
We’re excited about the new AIXPRT learning tool, and will share more information here in the blog as we get closer to a release date. If you have any questions about the tool, please let us know!
This week, we’re sharing news on two topics that we’ve discussed
here in the blog over the past several months: CloudXPRT v1.01 and a potential
AIXPRT OpenVINO update.
CloudXPRT v1.01

Last week, we announced that we were very close to releasing an
updated CloudXPRT build (v1.01) with two minor bug fixes, an improved post-test
results processing script, and an adjustment to one of our test configuration
recommendations. Our testing and prep are complete, and the new version is live
in the CloudXPRT GitHub repository and on our site!
None of the v1.01
changes affect performance or test results, so scores from the new build are
comparable to those from previous CloudXPRT builds. If you’d like to know more
about the changes, take a look at last week’s blog post.
The AIXPRT OpenVINO update
In late July, we discussed our plans to update the AIXPRT OpenVINO packages
with OpenVINO 2020.3 Long-Term Support (LTS). While there are no
known problems with the existing AIXPRT OpenVINO package, the LTS version
targets environments that benefit from maximum stability and don’t require a
constant stream of new tools and feature changes, so we thought it would be
well suited for a benchmark like AIXPRT.
We initially believed that
the update process would be relatively simple, and we’d be able to release a
new AIXPRT OpenVINO package in September. However, we’ve discovered that the
process is involved enough to require substantial low-level recoding. At this
time, it’s difficult to estimate when the updated build will be ready for
release. For any testers looking forward to the update, we apologize for the delay.
If you have any questions or comments about
these or any other XPRT-related topics, please let us know!
Shortly after the initial
AIXPRT release, we noted that each of the toolkits AIXPRT uses (Intel OpenVINO,
TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development
schedule, and new versions will sometimes appear with little warning. When this
happens, we’ll have to respond by updating specific AIXPRT installation
packages, giving AIXPRT testers relatively short notice.
This is one of those
times! Intel recently released OpenVINO 2020.3 Long-Term Support (LTS), and we’re planning to update the AIXPRT
OpenVINO packages with the LTS version. The LTS version targets environments
that benefit from maximum stability, and don’t require a constant stream of new
tools and feature changes. In other words, it’s well suited for a benchmark,
and we think it’s a good fit for AIXPRT moving forward.
We don’t yet know what
impact the new version will have on AIXPRT OpenVINO test results. A substantial
part of the development process will involve testing the new packages on a
variety of platforms to see how performance changes. We’ll communicate our
findings here in the blog, so AIXPRT testers will know what to expect.
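As a rough sketch of the kind of before-and-after comparison involved (the workload names and figures below are placeholders, not AIXPRT results), checking comparability mostly comes down to per-workload throughput deltas between the old and new packages:

```python
# Placeholder comparison of per-workload throughput before and after a toolkit
# update. All names and figures are illustrative, not measured results.
old = {"ResNet-50": 410.2, "SSD-MobileNet": 263.5}   # images/second, old package
new = {"ResNet-50": 402.8, "SSD-MobileNet": 270.1}   # images/second, new package

for workload, old_tp in old.items():
    new_tp = new[workload]
    delta = (new_tp - old_tp) / old_tp * 100
    print(f"{workload}: {old_tp:.1f} -> {new_tp:.1f} images/sec ({delta:+.1f}%)")
```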
The modular nature of the AIXPRT installation packages ensures that we don’t need
to revise the entire AIXPRT suite every time a toolkit update goes live. If you
test with only TensorFlow, TensorRT, or MXNet, or a combination of those
toolkits, this update won’t affect your testing.
We’re not ready to commit
to a release date for the new build, but we anticipate it will be in September.
If you have any questions about AIXPRT or OpenVINO, please let us know!
With four separate machine learning toolkits on their own development schedules, three workloads, and a wide range of possible configurations and use cases, AIXPRT has more moving parts than any of the XPRT benchmark tools to date. Because there are so many different components, and because we want AIXPRT to provide consistently relevant evaluation data in the rapidly evolving AI and machine learning spaces, we anticipate a cadence of AIXPRT updates in the future that will be more frequent than the schedules we’ve used for other XPRTs in the past. With that expectation in mind, we want to let AIXPRT testers know that when we release an AIXPRT update, they can expect minimized disruption, consideration for their testing needs, and clear communication.
Minimized disruption

Each AIXPRT toolkit (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and we won’t always have a lot of advance notice when new versions are on the way. Hypothetically, a new version of OpenVINO could release one month, and a new version of TensorRT just two months later. Thankfully, the modular nature of AIXPRT’s installation packages ensures that we won’t need to revise the entire AIXPRT suite every time a toolkit update goes live. Instead, we’ll update each package individually when necessary. This means that if you only test with a single AIXPRT package, updates to the other packages won’t affect your testing. For us to maintain AIXPRT’s relevance, there’s unfortunately no way to avoid all disruption, but we’ll work to keep it to a minimum.
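To make the modularity point concrete, here's a hypothetical sketch (the package names and pinned versions are illustrative, not AIXPRT's actual metadata) of why a release of one toolkit only touches the package built on it:

```python
# Illustrative only: each AIXPRT installation package wraps one toolkit, so a
# new toolkit release affects only the package pinned to that toolkit.
# Package names and pinned versions are hypothetical.
PACKAGES = {
    "AIXPRT-OpenVINO":   {"toolkit": "OpenVINO",   "pinned": "2019 R3"},
    "AIXPRT-TensorFlow": {"toolkit": "TensorFlow", "pinned": "1.15"},
    "AIXPRT-TensorRT":   {"toolkit": "TensorRT",   "pinned": "7.0"},
    "AIXPRT-MXNet":      {"toolkit": "MXNet",      "pinned": "1.6"},
}

def affected_packages(toolkit, new_version):
    """Return the packages that would need revalidation for a toolkit release."""
    return [name for name, info in PACKAGES.items()
            if info["toolkit"] == toolkit and info["pinned"] != new_version]

# A hypothetical OpenVINO release touches only the OpenVINO package:
print(affected_packages("OpenVINO", "2020.3 LTS"))  # ['AIXPRT-OpenVINO']
```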
Consideration for testers
As we move forward, when software compatibility issues force us to update an AIXPRT package, we may discover that the update has a significant effect on results. If we find that results from the new package are no longer comparable to those from previous tests, we’ll share the differences that we’re seeing in our lab. As always, we will use documentation and versioning to make sure that testers know what to expect and that there’s no confusion about which package to use.
Clear communication

When we update any package, we’ll communicate the changes in the new build as clearly as possible. We’ll document all changes thoroughly in the package readmes, and we’ll talk through significant updates here in the blog. We’re also available to answer questions about AIXPRT and any other XPRT-related topic, so feel free to ask!