For anyone interested
in learning more about AIXPRT, the Introduction to AIXPRT white paper provides detailed information
about its toolkits, workloads, system requirements, installation, test
parameters, and results. However, for AIXPRT.com visitors who want to find the answers to specific
AIXPRT-related questions quickly, a white paper can be daunting.
Because we want tech
journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be
able to find the answers they need in as little time as possible, we’ve decided
to develop a new learning tool that will serve as an information hub for common
AIXPRT topics and questions.
The new learning tool
will be available online through our site. It will offer quick bites of
information about the fundamentals of AIXPRT, why the benchmark matters, the
benefits of AIXPRT testing and results, machine learning concepts, key terms,
and practical testing concerns.
We’re still working on the tool’s content and design. Because we’re designing this tool for you, we’d love to hear the topics and questions you think we should include. If you have any suggestions, please let us know!
A few months ago, we invited readers to send in their thoughts and ideas about web
technologies and workload scenarios that may be a good fit for the next WebXPRT. We’d like to share a few of those ideas today, and we invite
you to continue to send your feedback. We’re approaching the time when we need to begin firming up
plans for a WebXPRT 4 development cycle in 2021, but there’s still plenty of
time for you to help shape the future of the benchmark.
One of the most
promising ideas for WebXPRT 4 is the potential addition of one or more WebAssembly (WASM) workloads.
WASM is a low-level, binary instruction format that works across all modern browsers.
It offers web developers a great deal of flexibility and provides the speed and
efficiency necessary for running complex client applications in the browser. WASM
enables a variety of workload scenario options, including gaming, video editing, VR, virtual
machines, image recognition, and interactive educational content.
In addition, the
Chrome team is dropping Portable Native Client (PNaCl) support in favor of
WASM, which is why we had to remove a PNaCl workload when updating CrXPRT 2015 to CrXPRT 2. We
generally model CrXPRT workloads on existing WebXPRT workloads, so
familiarizing ourselves with WASM could ultimately benefit more than one XPRT
benchmark.
We are also
considering adding a web-based machine learning workload with TensorFlow for
JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of
tasks including image classification, object detection, sentence encoding,
natural language processing, and more. We could also use this technology to
enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album
using AI or Encrypt Notes and OCR Scan.
Other ideas include using
a WebGL-based workload to target GPUs and investigating ways to incorporate a
battery life test. What do you think? Let us know!
We’re
happy to announce that we’re planning to release the CloudXPRT Preview next
week! After we take the CloudXPRT Preview installation and source code packages
live, they will be freely available to the public via CloudXPRT.com
and the BenchmarkXPRT GitHub repository.
All interested parties will be able to publish CloudXPRT results. However,
until we begin the formal results submission and review process
in July, we will publish only results we produce in our own lab. We’ll share
more information about that process and the corresponding dates here in the
blog in the coming weeks.
We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We are still conducting testing on that workload to make sure we get it right, and we hope to add it to the CloudXPRT suite in the near future.
If
you missed the earlier workload-related post, here are the details about the
two workloads that will be in the preview build:
In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
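To give a feel for the kind of computation at the heart of this workload, here is a minimal, purely illustrative Monte Carlo stock-price simulation. It is a stand-in sketch only: CloudXPRT's actual implementation, model, and parameters are not shown here, and every function and value below is our own invention for illustration.

```python
import math
import random
import statistics

def monte_carlo_price_paths(spot, annual_return, volatility, days, n_paths, seed=0):
    """Simulate end-of-period stock prices with geometric Brownian motion.

    A simplified stand-in for the kind of Monte Carlo simulation the web
    microservices workload runs; the real workload's model and parameters
    may differ.
    """
    rng = random.Random(seed)
    dt = 1.0 / 252.0  # one trading day as a fraction of a year
    finals = []
    for _ in range(n_paths):
        price = spot
        for _ in range(days):
            shock = rng.gauss(0.0, 1.0)
            price *= math.exp((annual_return - 0.5 * volatility ** 2) * dt
                              + volatility * math.sqrt(dt) * shock)
        finals.append(price)
    return finals

prices = monte_carlo_price_paths(spot=100.0, annual_return=0.05,
                                 volatility=0.2, days=60, n_paths=2000)
print(round(statistics.mean(prices), 2))  # average simulated price, near the spot price
```

In the benchmark, many such simulations run per transaction, so the transactions-per-second metric reflects how quickly the stack can churn through this sort of CPU-bound numerical work.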
The data analytics workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
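The two metrics this workload reports, latency and throughput, are straightforward to picture. Here is a rough sketch of how a tester might collect them for repeated training runs. The `dummy_train` function is a hypothetical stand-in; in CloudXPRT the timed routine is actual XGBoost model training, which is not reproduced here.

```python
import time

def measure_training(train_fn, n_runs, *args):
    """Time repeated training runs; report per-run latency and overall throughput.

    train_fn is a stand-in for the routine under test. In CloudXPRT's data
    analytics workload that routine is XGBoost training, but any callable
    works for illustrating the metrics.
    """
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        train_fn(*args)
        latencies.append(time.perf_counter() - start)
    throughput = n_runs / sum(latencies)  # completed runs per second
    return latencies, throughput

# Hypothetical stand-in "training" job: a small CPU-bound computation.
def dummy_train(n):
    return sum(i * i for i in range(n))

latencies, throughput = measure_training(dummy_train, 5, 50_000)
print(f"median latency: {sorted(latencies)[len(latencies) // 2]:.4f} s")
print(f"throughput: {throughput:.1f} runs/s")
```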
The
CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers
with an opportunity to work with CloudXPRT directly and shape the future of the
benchmark with their feedback. We hope that testers will take this opportunity
to explore the tool and send us their thoughts on its structure, workload
concepts and execution, ease of use, and documentation. That feedback will help
us improve the relevance and accessibility of CloudXPRT testing and results for
years to come.
If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.
About
a month ago, we posted an update
on the CloudXPRT development process. Today, we want to provide more details
about the three workloads we plan to offer in the initial preview build:
In the web-tier microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
The machine learning (ML) training workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web-tier microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
The AI-themed container scaling workload starts up a container and uses a version of the AIXPRT harness to launch Wide and Deep recommender system inference tasks in the container. Each container represents a fixed amount of work, and as the number of Wide and Deep jobs increases, CloudXPRT launches more containers in parallel to handle the load. The workload reports both the startup time for the containers and the Wide and Deep throughput results. Testers can use this workload to compare container startup time between IaaS stacks; optimize the balance between resource allocation, capacity, and throughput on a given stack; and confirm whether a given stack is suitable for specific SLAs.
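The scaling pattern described above — launching more containers in parallel as the job count grows, and timing each startup — can be sketched as follows. This is a toy illustration using threads as stand-ins for containers; the real workload launches actual containers running AIXPRT Wide and Deep inference jobs, and none of the names below come from CloudXPRT itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def start_container(job_id):
    """Stand-in for launching one container and handing it a job.

    Returns the simulated startup latency; a real harness would start an
    actual container here and time that instead.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # simulated fixed container-startup cost
    return job_id, time.perf_counter() - start

def run_scaled(n_jobs):
    # One worker per job, mirroring "launch more containers as load grows."
    with ThreadPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(start_container, range(n_jobs)))

for load in (1, 2, 4):
    results = run_scaled(load)
    worst = max(t for _, t in results)
    print(f"{load} jobs -> slowest startup {worst * 1000:.1f} ms")
```

Because the containers start in parallel, the slowest startup time, rather than the sum of all startups, dominates how quickly added capacity comes online — which is why the workload reports container startup time alongside throughput.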
We’re continuing to move forward with CloudXPRT development and testing and hope to add more workloads in subsequent builds. Like most organizations, we’ve adjusted our work patterns to adapt to the COVID-19 situation. While this has slowed our progress a bit, we still hope to release the CloudXPRT preview build in April. If anything changes, we’ll let folks know as soon as possible here in the blog.
If you have any thoughts or comments about CloudXPRT workloads, please feel free to contact us.
Today, we published the Introduction
to AIXPRT white paper. The paper serves as an overview of the
benchmark and a consolidation of AIXPRT-related information that we’ve
published in the XPRT blog
over the past several months. For folks who are completely new to AIXPRT and veteran
testers who need to brush up on pre-test configuration procedures, we hope this
paper will be a quick, one-stop reference that helps reduce the learning curve.
The paper describes the AIXPRT
toolkits and workloads, and it explains how to adjust key test parameters (batch size, level of
precision, number of concurrent instances, and default number of requests),
use alternate test configuration files, understand and submit results,
and access the source code.
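As a purely hypothetical illustration of the kinds of parameters the paper covers, a test configuration fragment might look something like the following. The key names and values here are stand-ins of our own; consult the white paper and the default configuration files that ship with AIXPRT for the actual structure.

```json
{
  "batch_sizes": [1, 2, 4, 8],
  "precision": "fp32",
  "concurrent_instances": 2,
  "total_requests": 1000
}
```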
We hope that Introduction to AIXPRT will prove to be a valuable resource. Moving forward, readers will be able to access the paper from the Helpful Info box on AIXPRT.com and the AIXPRT section of our XPRT white papers page. If you have any questions about AIXPRT, please let us know!
During last year’s
Consumer Electronics Show (CES), one question
kept coming to mind as I walked the floor: Are we approaching the tipping point
where AI truly affects most people in meaningful ways on a daily basis? I think
it’s safe to say that we’ve reached that point as a result of AI integration with
phones. After all, for many of us, AI improves the quality of our photography,
recommends words and phrases as we text and search the web, and lets us know
when to allow extra drive time because traffic is heavy.
However, for me, the most intriguing aspects of
this year’s CES are the glimpses of how AI will change every area of our lives,
with and without mobile devices. The show floor is jam-packed with ways to
integrate AI with everything from athletic shoes to pet care to the kitchen
sink. Many of these ideas are fascinating on their own, and they’re all part of
a much bigger picture. The next few years will see increased AI utilization in
medicine, transportation, agriculture, water and energy distribution, natural
resource protection, and many more areas. Our personal smart devices will
connect to smart vehicles, smart homes, smart grids, and smart cities. In the
near future, CES shows won’t need AI sections because AI will be a part of
everything.
At each step of this journey, people will need objective data about how well their tech can handle the demands of common AI workloads. We’re excited that AIXPRT is already becoming a go-to tool for testing inference performance on laptops, desktops, and servers. There’s much more to come with AIXPRT in 2020, along with news about XPRTs in the datacenter, so stay tuned to the blog for exciting developments in the weeks to come!
I’ll leave you with pics from three of my favorite displays at this year’s show. The first is a model of Toyota’s Woven City. Toyota announced plans to build an entire mini city on existing company land near Mount Fuji. The city will house 2,000 people and will serve as an enormous real-time lab where designers and engineers can test ubiquitous AI and sensor technology. Toyota will also design the city to be fully sustainable with the use of hydrogen fuel cells and solar panels.
The second picture shows the electric Hyundai Urban Air Mobility prototype. Hyundai is partnering with Uber on this project, and the planned vertical take-off and landing (VTOL) craft will seat five passengers plus a pilot, have a range of 60 miles, and be able to recharge in less than 10 minutes. These concepts aren’t new, but battery and materials science technologies are progressing to the point that this one may get off the ground!
The third picture shows BrainCo’s AI Prosthetic Hand display. The hand provides amputees with new levels of dexterity compared to previous prosthetics, and it uses AI to learn from the user’s patterns of movement. The idea is that the accuracy of gestures and grips will improve over time, allowing users to accomplish tasks that are impossible with existing technology. A young man in the booth was using the hand to paint beautiful and precise Chinese calligraphy. Very cool!