Downloading AIXPRT

AIXPRT includes support for the Intel OpenVINO, TensorFlow, and NVIDIA TensorRT toolkits to run image-classification and object-detection workloads with the ResNet-50 and SSD-MobileNet v1 networks, as well as a Wide and Deep recommender system workload with the Apache MXNet toolkit. The test supports FP32, FP16, and INT8 levels of precision. Test systems must be running Ubuntu 18.04 LTS or Windows 10, and the minimum CPU and GPU requirements vary by toolkit. You can find more detail on hardware and software requirements in the installation package's readme files.

The table below lists the currently available AIXPRT test packages. To download the correct package, click the Download link in the row corresponding to the system configuration you wish to test. The Readme links in the table's far-right column will take you to a public GitHub repository containing the readmes for each installation package.

Operating system | Toolkit    | Target hardware                      | Install package | Documentation
-----------------|------------|--------------------------------------|-----------------|--------------
Windows 10       | OpenVINO   | CPUs, Intel processor graphics, VPUs | Download        | Readme
Windows 10       | TensorFlow | CPUs, NVIDIA GPUs                    | Download        | Readme
Windows 10       | TensorRT   | NVIDIA GPUs                          | Download        | Readme
Ubuntu           | OpenVINO   | CPUs, Intel processor graphics, VPUs | Download        | Readme
Ubuntu           | TensorFlow | CPUs, NVIDIA GPUs                    | Download        | Readme
Ubuntu           | TensorRT   | NVIDIA GPUs                          | Download        | Readme
Ubuntu           | MXNet      | CPUs, GPUs                           | Download        | Readme

FAQ

AIXPRT is a tool that makes it easier to evaluate a system’s machine learning inference performance by running common image-classification, object-detection, and recommender system workloads.
The skills needed to install and run AIXPRT successfully vary depending on the installation package. All the AIXPRT test packages require basic terminal skills. The TensorRT packages require familiarity with setting up environment variables, working within Visual Studio (in Windows), and building software from source.
AIXPRT use cases cut across a wide range of hardware segments, including desktops, laptops, edge devices, and servers. Not all AIXPRT workloads and test configurations will be applicable to each segment. In many cases, the ideal combination of test configuration variables remains an open question for ongoing research.
AIXPRT runs on Ubuntu 18.04 LTS or Windows 10 systems.
Depending on the operating system, toolkit, and workload, AIXPRT can target x86 CPUs, AMD discrete GPUs, Intel processor graphics, Intel Neural Compute Sticks, or NVIDIA GPUs.
AIXPRT includes support for the Intel OpenVINO, TensorFlow, and NVIDIA TensorRT toolkits to run image-classification and object-detection workloads with the ResNet-50 and SSD-MobileNet v1 networks, as well as a Wide and Deep recommender system workload with the Apache MXNet toolkit.
AIXPRT testers can adjust the following test configuration variables:

  • Precision (FP32, FP16, or INT8)
  • Batch size (1, 2, 4, 8, 16, 32, etc.)
  • Number of concurrent instances (1 or more depending on hardware support)
AIXPRT testers can adjust batch size, levels of precision, and number of concurrent instances by editing the JSON file in the AIXPRT/Config directory. While the process is straightforward, editing each of the variables in a config file can take some time, and testers don’t always know the appropriate values for their system. To address both issues, we are offering a selection of alternative config files that testers can download and drop into the AIXPRT/Config directory. To access the alternative config files, visit the AIXPRT public resources repository.
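As a rough illustration, the sketch below loads a config file and adjusts these variables programmatically rather than by hand. The file name and key names (workloads, precision, batch_sizes, concurrent_instances) are assumptions for the sketch, not the confirmed schema; check your installation package's readme for the exact structure of its config files.

```python
import json
from pathlib import Path

# Hypothetical config file name; substitute the actual file
# from your AIXPRT/Config directory.
config_path = Path("AIXPRT/Config/default_config.json")

config = json.loads(config_path.read_text())

# Assumed structure: one entry per workload, each carrying its own
# precision, batch sizes, and concurrent-instance count.
for workload in config.get("workloads", []):
    workload["precision"] = "int8"            # FP32, FP16, or INT8
    workload["batch_sizes"] = [1, 2, 4, 8]    # batch sizes to sweep
    workload["concurrent_instances"] = 2      # if hardware supports it

config_path.write_text(json.dumps(config, indent=4))
print(f"Updated {config_path}")
```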
To understand AIXPRT results at a high level, it's important to revisit the core purpose of the benchmark. AIXPRT's bundled toolkits measure inference latency (the time it takes to process a single input) and throughput (the number of images processed in a given time period) for image recognition (ResNet-50) and object detection (SSD-MobileNet v1) tasks. Testers have the option of adjusting variables such as batch size (the number of input samples to process simultaneously) to try to achieve higher levels of throughput, but higher throughput can come at the expense of increased latency per task. In real-time or near real-time use cases such as performing image recognition on individual photos captured by a camera, lower latency is important because it improves the user experience. In other cases, such as performing image recognition on a large library of photos, achieving higher throughput might be preferable; designating larger batch sizes or running concurrent instances might allow the overall workload to complete more quickly.

The dynamics of these performance tradeoffs ensure that there is no single good score for all machine learning scenarios. Some testers might prefer lower latency, while others would sacrifice latency to achieve the higher level of throughput that their use case demands.
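To make the tradeoff concrete, here is a purely illustrative calculation; the timings are invented for the example, not measured AIXPRT results. A larger batch can raise throughput while worsening the latency of any single input:

```python
# Illustrative numbers only; real latencies depend on the model,
# toolkit, precision, and hardware under test.
scenarios = {
    "batch of 1":  {"batch_size": 1,  "batch_latency_ms": 10.0},
    "batch of 32": {"batch_size": 32, "batch_latency_ms": 120.0},
}

for name, s in scenarios.items():
    # Throughput is images per batch divided by seconds per batch.
    throughput = s["batch_size"] / (s["batch_latency_ms"] / 1000.0)
    print(f"{name}: {s['batch_latency_ms']:.0f} ms per batch, "
          f"{throughput:.0f} images/sec")

# batch of 1:  10 ms per batch,  100 images/sec
# batch of 32: 120 ms per batch, 267 images/sec
# The larger batch processes more images per second overall, but any
# single image now waits up to 120 ms for its result instead of 10 ms.
```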
We invite and encourage everyone to submit benchmark results from their testing for inclusion in the public AIXPRT results table.

Please follow the process below to prepare results for submission:

  • After completion of a benchmark run, please copy the following items into a folder for submission:
    • The JSON results file generated at AIXPRT/Results/{configFileName}/{results_filename}.json.
    • The log files generated at AIXPRT/Modules/Deep-learning/workloads/{workloadsFolder}/results/output/.
    • The input run configuration file located at AIXPRT/Config/{filename}.json.
  • Fill out the required fields in the SystemInfo.csv sheet found in the AIXPRT root directory. We collect this information to make it possible for others to reproduce the test and confirm that they get similar numbers.
  • If any scripts, models, or files are customized for result generation, please describe these changes in the SystemInfo.csv file.
  • Please zip the system information CSV, results JSON, log files, and run configuration file, and email the zip file as an attachment to the BenchmarkXPRT Community Administrator at benchmarkxprtsupport@principledtechnologies.com. (The sketch after this list shows one way to script the copy-and-zip steps.)
  • Use "AIXPRT Results Submission" as the subject for your email.
  • In the body of the email, include your company name and the name of the person responsible for the test.
  • Please make sure the reply-to address you specify is a valid address within your organization.
  • Due to the complexity of AIXPRT tests, and to be as transparent and accurate as possible with our published results, we may ask follow-up questions about the tests and/or system configuration.
  • We will verify the tester’s identity and validate the results before publishing them to the public database.
  • We will notify you if we publish your results.
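For testers who want to automate the packaging steps above, the following sketch gathers the results JSON, output logs, run configuration, and SystemInfo.csv into a zip archive. The placeholder config name (my_config) and the flat folder layout are assumptions for illustration; substitute the actual file names from your run.

```python
import shutil
from pathlib import Path

root = Path("AIXPRT")
config_name = "my_config"  # placeholder; use your actual config name
submission = Path("aixprt_submission")
submission.mkdir(exist_ok=True)

# 1. Results JSON generated at AIXPRT/Results/{configFileName}/.
for f in (root / "Results" / config_name).glob("*.json"):
    shutil.copy(f, submission)

# 2. Log files from each workload's results/output/ directory.
workloads = root / "Modules" / "Deep-learning" / "workloads"
for f in workloads.glob("*/results/output/*"):
    if f.is_file():
        shutil.copy(f, submission)

# 3. The input run configuration and the completed system info sheet.
shutil.copy(root / "Config" / f"{config_name}.json", submission)
shutil.copy(root / "SystemInfo.csv", submission)

# Zip everything into aixprt_submission.zip for emailing.
shutil.make_archive("aixprt_submission", "zip", submission)
print("Created aixprt_submission.zip")
```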

Check out the other XPRTs:

Press releases

The XPRT benchmarks are constantly evolving. Get the latest announcements about the XPRT family right here.

White papers

Principled Technologies brings its technical prowess to the XPRT family with these informative, fact-based white papers. Read them here.

Webinars

We periodically hold webinars to inform members about what's going on in the BenchmarkXPRT community. Visit our Webinars archive.