Recently, a member of the tech
press asked us about the status of AIXPRT,
our benchmark that measures machine learning inference performance. We want to
share our answer here in the blog for the benefit of other readers. The writer said
it seemed like we had not updated AIXPRT in a long time, and wondered whether we
had any immediate plans to do so.
It’s true that we haven’t updated AIXPRT in quite some time. Unfortunately, while a
few tech press publications and OEM labs began experimenting with AIXPRT
testing, the benchmark never got the traction we hoped for, and we’ve decided
to invest our resources elsewhere for the time being. The AIXPRT installation
packages are still available for people to use or reference as they wish, but
we have not updated the benchmark to work with the latest platform versions
(OpenVINO, TensorFlow, etc.). It’s likely that several components in each
package are out of date.
If you are interested in AIXPRT and would like us to bring it up to date, please let us know.
We can’t promise that we’ll revive the benchmark, but your feedback could be a
valuable contribution as we try to gauge the benchmarking community’s interest.
A few months ago, we shared detailed information about the changes we expected
to make in WebXPRT 4. We are currently doing internal testing of the WebXPRT 4 Preview
build in preparation for releasing it to the public. We want to let our readers
know what to expect.
We’ve made some changes since our
last update, and some of the details we present below could still change before
the preview release. However, we are much closer to the final product. Once we
release the WebXPRT 4 Preview, testers will be able to publish scores from Preview
build testing. We will limit any changes that we make between the Preview and
the final release to the UI or to features that are not expected to affect test scores.
Some of the non-workload changes we’ve
made in WebXPRT 4 relate to our typical benchmark update process.
We have updated the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We did not significantly change the flow of the UI.
We have updated content in some of the workloads to reflect changes in everyday technology, such as upgrading most of the photos in the photo processing workloads to higher resolutions.
We have not yet added a looping function to the automation scripts, but are still considering it for the future.
We investigated the possibility of shortening the benchmark by reducing the default number of iterations from seven to five, but we have decided to stick with seven iterations to ensure that score variability remains acceptable across all platforms (see the sketch below).
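To give a feel for why iteration count matters, here is a minimal sketch of one common way to quantify run-to-run variability, the coefficient of variation. The function and the sample scores are our own illustrative assumptions; this is not WebXPRT’s actual scoring code.

```js
// Illustrative only: estimate run-to-run variability for a set of
// per-iteration scores. A lower coefficient of variation = more stable.
function coefficientOfVariation(scores) {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const variance =
    scores.reduce((sum, s) => sum + (s - mean) ** 2, 0) / (scores.length - 1);
  return Math.sqrt(variance) / mean;
}

// Hypothetical per-iteration scores: seven runs vs. the first five.
const sevenRuns = [212, 208, 215, 210, 209, 214, 211];
const fiveRuns = sevenRuns.slice(0, 5);

console.log('CV over 7 iterations:', coefficientOfVariation(sevenRuns));
console.log('CV over 5 iterations:', coefficientOfVariation(fiveRuns));
```

With more iterations, any single outlier run has less influence on the aggregate score, which is why cutting from seven to five iterations can increase variability on noisier platforms.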
We also made the following changes to the benchmark’s workloads:
Photo Enhancement. We increased the efficiency of the
workload’s Canvas object creation function, and replaced the existing photos
with new, higher-resolution photos.
Organize Album Using AI. We replaced ConvNetJS with WebAssembly (WASM)-based OpenCV.js for both the face detection and image classification tasks. We changed the images for the image classification tasks to images from the ImageNet dataset.
Stock Option Pricing. We updated the dygraph.js library.
Sales Graphs. We made no changes to this workload.
Encrypt Notes and OCR Scan. We replaced ASM.js with WASM for the Notes task and updated the WASM-based Tesseract version for the OCR task.
Online Homework. In addition to the existing scenario, which uses four Web Workers, we have added a scenario with two Web Workers. The workload now covers a wider range of Web Worker performance, and we calculate the score by using the combined run time of both scenarios (see the sketch below). We also updated the typo.js library.
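To illustrate the general shape of a multi-worker scenario like this, here is a minimal sketch that fans work out to a configurable number of Web Workers and sums the run times of the two scenario sizes. The worker script name, message payload, and timing approach are our own assumptions for illustration; this is not WebXPRT’s actual workload code.

```js
// Illustrative only: time a scenario that spreads work across N Web Workers.
// Assumes a script at 'worker.js' that posts one message back when its
// share of the work is done.
function runScenario(workerCount) {
  return new Promise((resolve) => {
    const start = performance.now();
    let finished = 0;
    for (let i = 0; i < workerCount; i++) {
      const worker = new Worker('worker.js');
      worker.onmessage = () => {
        worker.terminate();
        if (++finished === workerCount) {
          resolve(performance.now() - start);
        }
      };
      worker.postMessage({ chunk: i, totalChunks: workerCount });
    }
  });
}

// Run both scenario sizes and combine their run times, mirroring the idea
// of scoring on the combined total.
async function runWorkload() {
  const fourWorkerTime = await runScenario(4);
  const twoWorkerTime = await runScenario(2);
  console.log('Combined run time (ms):', fourWorkerTime + twoWorkerTime);
}

runWorkload();
```

Running the same task set at two different worker counts exercises both the browser’s worker-spawning overhead and its ability to keep multiple workers busy, which is why a combined run time covers a wider slice of Web Worker performance than either scenario alone.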
As part of the WebXPRT 4 development
process, we researched the possibility of including two new workloads: a
natural language processing (NLP) workload, and an Angular-based message
scrolling workload. After much testing and discussion, we have decided not to
include these two workloads in WebXPRT 4. They will be good candidates for us
to add as experimental WebXPRT 4 workloads in 2022.
The release timeline
Our goal is to publish the WebXPRT 4
Preview build by December 15, which will allow testers to publish
scores in the weeks leading up to the Consumer Electronics Show in Las Vegas in
January 2022. We will provide more detailed information about the general availability (GA) timeline
here in the blog as soon as possible.
If you have any questions about the details we’ve shared above, please feel free to ask!
As the WebXPRT 4 development process has progressed, we’ve started to discuss the possibility of offering experimental WebXPRT 4 workloads in 2022. These would be optional workloads that test cutting-edge browser technologies or new use cases. The individual scores for the experimental workloads would stand alone, and would not factor in the WebXPRT 4 overall score.
WebXPRT testers would be able to run the experimental workloads one of two ways: by manually selecting them on the benchmark’s home screen, or by adjusting a value in the WebXPRT 4 automation scripts.
Testers would benefit from experimental workloads by being able to compare how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.
Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.
We’re happy to
announce that the AIXPRT learning tool is now live! We
designed the tool to serve as an information hub for common AIXPRT topics and
questions, and to help tech journalists, OEM lab engineers, and everyone who is
interested in AIXPRT find the answers they need in as little time as possible.
The tool features four
primary areas of content:
The Q&A section provides quick answers to the questions we
receive most from testers and the tech press.
The AIXPRT: the basics section describes specific topics such as
the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
The testing and results section covers the testing process,
metrics, and how to publish results.
The AI/ML primer provides brief, easy-to-understand definitions of
key AI and ML terms and concepts for those who want to learn more about the field.
The first screenshot below shows the home screen. To show how some of the popup information sections appear, the second screenshot shows the Inference tasks (workloads) entry in the AI/ML Primer section.
We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.
If you have any questions about the tool, please let us know!
Last month, we announced that we’re working on
a new AIXPRT learning tool. Because we want tech journalists, OEM lab
engineers, and everyone who is interested in AIXPRT to be able to find the
answers they need in as little time as possible, we’re designing this tool to serve
as an information hub for common AIXPRT topics and questions.
We’re still finalizing
aspects of the tool’s content and design, so some details may change, but we
can now share a sneak peek of the main landing page. In the screenshot below,
you can see that the tool will feature four primary areas of content:
The FAQ section will provide quick answers to the questions we
receive most from testers and the tech press.
The AIXPRT basics section will describe specific topics such as the
benchmark’s toolkits, networks, workloads, and hardware and software requirements.
The testing and results section will cover the testing process,
the metrics the benchmark produces, and how to publish results.
The AI/ML primer will provide brief, easy-to-understand definitions
of key AI and ML terms and concepts for those who want to learn more about the field.
We’re excited about the new AIXPRT learning tool, and will share more information here in the blog as we get closer to a release date. If you have any questions about the tool, please let us know!
This week, we’re sharing news on two topics that we’ve discussed
here in the blog over the past several months: CloudXPRT v1.01 and a potential
AIXPRT OpenVINO update.
Last week, we announced that we were very close to releasing an
updated CloudXPRT build (v1.01) with two minor bug fixes, an improved post-test
results processing script, and an adjustment to one of our test configuration
recommendations. Our testing and prep are complete, and the new version is live
in the CloudXPRT GitHub repository and on our site!
None of the v1.01
changes affect performance or test results, so scores from the new build are
comparable to those from previous CloudXPRT builds. If you’d like to know more
about the changes, take a look at last week’s blog post.
The AIXPRT OpenVINO update
In late July, we discussed our plans to update the AIXPRT OpenVINO packages
with OpenVINO 2020.3 Long-Term Support (LTS). While there are no
known problems with the existing AIXPRT OpenVINO package, the LTS version
targets environments that benefit from maximum stability and don’t require a
constant stream of new tools and feature changes, so we thought it would be
well suited for a benchmark like AIXPRT.
We initially believed that
the update process would be relatively simple, and we’d be able to release a
new AIXPRT OpenVINO package in September. However, we’ve discovered that the
process is involved enough to require substantial low-level recoding. At this
time, it’s difficult to estimate when the updated build will be ready for
release. For any testers looking forward to the update, we apologize for the delay.
If you have any questions or comments about
these or any other XPRT-related topics, please let us know!