Category: Future of performance evaluation

The HDXPRT 4 Community Preview is now available!

Today we’re releasing the HDXPRT 4 Community Preview (CP). Just like previous versions of HDXPRT, HDXPRT 4 uses trial versions of commercial applications to complete workload tasks. For some of those programs, such as Audacity and HandBrake, HDXPRT 4 includes installers in the HDXPRT installation package. For other programs, such as Adobe Photoshop Elements 2018 and CyberLink Media Espresso 7.5, users need to download the necessary installers prior to testing by using the links and instructions in the HDXPRT 4 User Manual.

In addition to the photo-editing, music-editing, and video-conversion workloads from prior versions of the benchmark, HDXPRT 4 includes two new Photoshop Elements scenarios. The first uses an AI tool that corrects closed eyes in photos, and the second stitches seven separate photos into a single panorama.

HDXPRT 4 is compatible with systems running Windows 10, and the installation package is slightly smaller than in previous versions, at just over 4.7 GB.

Because this is a community preview, it is available only to community members, who may download the preview from the HDXPRT tab in the Members’ Area. We expect results from CP testing to be comparable to results from the general release, so members may publish their CP test results.

After you try the CP, please send us your comments. If you send information that’s relevant to the entire community, we may post an anonymous version of your comments to the forum. Thanks for your participation!

Justin

New XPRTs for the new year

Happy 2019! January is already a busy time for the XPRTs, so we want to share a quick preview of what community members can expect in the coming months.

The MobileXPRT 3 community preview (CP) is still open, but it draws to a close on January 18th. If you’re not familiar with the updates and changes we implemented in the newest version of MobileXPRT, you can read more in the blog. Members can access the CP APK on the MobileXPRT tab in the Members’ Area. We also posted an installation guide that provides both a general overview of the app and detailed instructions for each step. The entire process takes about five minutes on most devices. If you haven’t already, give it a try!

We also recently published the first AIXPRT Request for Comments (RFC) preview build, an early version of one of the tools we’re developing to evaluate machine learning performance. You can find more details in Bill’s most recent blog post and on AIXPRT.com. Only BenchmarkXPRT Development Community members have access to our RFCs and the opportunity to provide feedback. However, because we’re seeking broad input from experts in this field, we’ll gladly make anyone interested in participating a member. To gain access to the AIXPRT repository, please send us a request.

Work on the HDXPRT 4 CP candidate build continues, and we hope to publish the preview for community members this month. We appreciate everyone’s patience as we work to get this right. We think it will be worth the wait.

On a general note, I’ll be traveling to CES 2019 in Las Vegas next week. CES is a great opportunity for us to survey emerging tech and industry trends, and I look forward to sharing my thoughts from the show. If you’ll be there and would like to discuss any aspect of the XPRTs in person, let me know.

Justin

The AIXPRT Request for Comments preview build

In the next few days, we’ll publish the first AIXPRT Request for Comments (RFC) preview build, an early version of one of the tools we’re developing to help evaluate machine learning performance.

We’re inviting folks to run the workload and send in their thoughts and suggestions. Only BenchmarkXPRT Development Community members have access to our RFCs and the opportunity to provide feedback. However, because we’re seeking broad input from experts in this field, we’ll gladly make anyone interested in participating a member.

This AIXPRT RFC preview build includes support for the Intel OpenVINO computer vision toolkit to run image classification workloads with ResNet-50 and SSD-MobileNet v1 networks. The test reports FP32 and FP16 levels of precision. The system requirements are:

  • Operating system = Ubuntu 16.04
  • CPU = 6th to 8th generation Intel Core or Xeon processors, or Intel Pentium processors N4200/5, N3350/5, N3450/5 with Intel HD Graphics
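For readers unfamiliar with the FP32 and FP16 precision levels the test reports, these refer to 32-bit (single) and 16-bit (half) IEEE 754 floating-point inference. As a minimal illustration (not part of AIXPRT itself), this Python sketch uses the standard-library struct module to show the precision a typical network weight value loses when stored at each level:

```python
import struct

def round_trip(value, fmt):
    """Pack a float into the given IEEE 754 format and unpack it,
    revealing the precision that format can actually hold.
    'f' = 32-bit single (FP32); 'e' = 16-bit half (FP16)."""
    return struct.unpack(fmt, struct.pack(fmt, value))[0]

w = 0.1  # a stand-in for a network weight
fp32 = round_trip(w, 'f')  # ~0.10000000149011612
fp16 = round_trip(w, 'e')  # ~0.0999755859375
print(f"FP32: {fp32!r}  error: {abs(fp32 - w):.2e}")
print(f"FP16: {fp16!r}  error: {abs(fp16 - w):.2e}")
```

FP16 halves the memory and bandwidth each weight requires at the cost of precision like the above, which is why inference benchmarks commonly report results at both levels.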


We welcome input on all aspects of the benchmark, including scope, workloads, metrics and scores, user experience, and reporting. We will add support for TensorFlow and TensorRT to the AIXPRT RFC preview build during the preview period. We are accepting feedback through January 25th, 2019, after which we’ll collect and evaluate responses before publishing the next build. Because this is an RFC release, we ask that testers do not publish scores or use the results for comparison purposes.

We’ll send out a community announcement when the RFC preview build is officially available, and we’ll also post an announcement and RFC preview build user guide on AIXPRT.com. We’re hosting the AIXPRT RFC preview build in a dedicated GitHub repository, so please contact us at BenchmarkXPRTsupport@principledtechnologies.com to gain access.

This is just the next step for AIXPRT. With your help, we hope to add more workloads and other frameworks in the coming months. We look forward to receiving your feedback!

Bill

XPRT collaborations: North Carolina State University

For those of us who work on the BenchmarkXPRT tools, a core goal is involving new contributors and interested parties in the benchmark development process. Adding voices to the discussion fosters the collaboration and innovation that lead to powerful benchmark tools with lasting relevance.

One vehicle for outreach that we especially enjoy is sponsoring a student project through North Carolina State University. Each semester, the Senior Design Center in the university’s Department of Computer Science partners with external companies and organizations to provide student teams with an opportunity to work on real-world programming projects. If you’ve followed the XPRTs for a while, you may remember previous student projects such as Nebula Wolf, a mini-game that shows how well different devices handle games, and VR Demo, a virtual reality prototype workload based on a room escape scenario.

This fall, a team of NC State students is developing a software console for automating machine learning tests. Ideally, the tool will let future testers specify custom workload combinations, compute a performance metric, and upload results to our database. The project will also assess the impact of the framework on performance scores. In fact, the console will perform many of the same functions we plan to implement with AIXPRT.
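To give a rough sense of what such a console does, here is a minimal Python sketch of timing a set of workloads and folding the results into one score. Everything here is our own assumption for illustration, not the students’ actual design: the workloads are stand-in tasks, the baseline timings are invented, and the geometric-mean scoring is simply one common way benchmarks combine results.

```python
import math
import time

def run_workload(workload):
    """Time one workload callable; return elapsed seconds."""
    start = time.perf_counter()
    workload()
    return time.perf_counter() - start

def performance_score(measured, baseline):
    """Hypothetical score: geometric mean of baseline/measured
    time ratios, scaled so the baseline system scores 100."""
    ratios = [b / m for b, m in zip(baseline, measured)]
    geomean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return 100.0 * geomean

# Hypothetical workload mix: two stand-in compute tasks
workloads = [lambda: sum(i * i for i in range(100_000)),
             lambda: sorted(range(50_000, 0, -1))]
measured = [run_workload(w) for w in workloads]
baseline = [0.02, 0.01]  # assumed reference timings
print(f"Score: {performance_score(measured, baseline):.1f}")
```

Using a geometric mean of time ratios keeps any single workload from dominating the overall score, one reason it is a common design choice in benchmark scoring.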

The students have worked very hard on the project, and have learned quite a bit about benchmarking practices and several new software tools. The project will wrap up in the next couple of weeks, and we’ll share additional details as soon as possible. Early next year, we’ll publish a video about the experience.

If you’d like to join the NC State students and hundreds of other XPRT community members in the future of benchmark development, please let us know!

Justin

XPRTs in the datacenter

The XPRTs have been very successful on desktops, notebooks, tablets, and phones. People have run WebXPRT over 295,000 times. Together with benchmarks such as MobileXPRT, HDXPRT, and CrXPRT, it is an important tool worldwide for evaluating device performance on a variety of consumer and business client platforms.

We’ve begun branching out with tests for edge devices with AIXPRT, our new artificial intelligence benchmark. While typical consumers won’t be able to run AIXPRT on their devices initially, we feel that it is important for the XPRTs to play an active role in a critical emerging market. (We’ll have some updates on the AIXPRT front in the next few weeks.)

Recently, both community members and others have asked about the possibility of the XPRTs moving into the datacenter. Folks face challenges in evaluating the performance and suitability to task of such datacenter mainstays as servers, storage, networking infrastructure, clusters, and converged solutions. These challenges include the lack of easy-to-run benchmarks, the complexity and cost of the equipment (multi-tier servers, large amounts of storage, and fast networks) necessary to run tests, and confusion about best testing practices.

PT has a lot of expertise in measuring datacenter performance, as the hundreds of datacenter-focused test reports on our website attest. We see great potential in working with the BenchmarkXPRT Development Community to help in this area. It is very possible that, as with AIXPRT, our approach to datacenter benchmarks would differ from the approach we’ve taken with previous benchmarks. While we have ideas for useful benchmarks we might develop down the road, more immediate steps could include drafting white papers, developing testing guidelines, or working with vendors to set up a lab.

Right now, we’re trying to gauge the level of interest in having such tools and in helping us carry out these initiatives. What are the biggest challenges you face in datacenter-focused performance and suitability to task evaluations? Would you be willing to work with us in this area? We’d love to hear from you and will be reaching out to members of the community over the coming weeks.

As always, thanks for your help!

Bill

AI and the next MobileXPRT

As we mentioned a few weeks ago, we’re in the early planning stages for the next version of MobileXPRT—MobileXPRT 3. We’re always looking for ways to make XPRT benchmark workloads more relevant to everyday users, and a new version of MobileXPRT provides a great opportunity to incorporate emerging tech such as AI into our apps. AI is everywhere and is beginning to play a huge role in our everyday lives through smarter-than-ever phones, virtual assistants, and smart homes. The challenge for us is to identify representative mobile AI workloads that have the necessary characteristics to work well in a benchmark setting. For MobileXPRT, we’re researching AI workloads that have the following characteristics:

  • They work offline, not in the cloud.
  • They don’t require additional training prior to use.
  • They support common use cases such as image processing, optical character recognition (OCR), etc.


We’re researching the possibility of using Google’s Mobile Vision library, but there may be other options or concerns that we’re not aware of. If you have tips for places we should look, or ideas for workloads or APIs we haven’t mentioned, please let us know. We’ll keep the community informed as we narrow down our options.

Justin
