We want your thoughts about experimental WebXPRT 4 workloads

Two weeks ago, we discussed how users can automate WebXPRT 4 testing by appending several parameters and values to the benchmark’s URL. One of those parameters lets you enable any available experimental workloads during the test run. While we don’t currently offer any experimental workloads for WebXPRT 4, we are seeking suggestions for possible future workload scenarios, or for specific web technologies that you’d like to be able to test with an experimental workload.
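
To make the idea concrete, here is a minimal sketch of what assembling such a test URL could look like. The base URL and parameter names in this example are placeholders we chose for illustration only; our earlier automation post describes the actual WebXPRT 4 parameters and values.

```typescript
// Hypothetical sketch only: the base URL and parameter names below
// ("tests", "experimental", "result") are placeholders for illustration,
// not WebXPRT 4's documented automation parameters.
const baseUrl = "https://example.com/webxprt4/"; // stand-in for the real WebXPRT 4 test URL

const params = new URLSearchParams({
  tests: "1,2,3,4,5,6", // which core workloads to run (placeholder)
  experimental: "1",    // enable any available experimental workloads (placeholder)
  result: "1",          // handle results automatically at the end of the run (placeholder)
});

// Pointing a browser at this URL would kick off an automated run.
const automatedRunUrl = `${baseUrl}?${params.toString()}`;
console.log(automatedRunUrl);
```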

The main purpose of optional, experimental workloads would be to test cutting-edge browser technologies or new use cases, even if the experimental workload doesn’t work on all browsers or devices. The individual scores for the experimental workloads would stand alone and would not factor into the WebXPRT 4 overall score. WebXPRT 4 testers would be able to run the experimental workloads in one of two ways: by adjusting a value in the WebXPRT 4 automation scripts, as mentioned above, or by manually selecting them on the benchmark’s home screen.

Testers would benefit from experimental workloads by learning how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.

Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.

Justin

A note about AIXPRT

Recently, a member of the tech press asked us about the status of AIXPRT, our benchmark that measures machine learning inference performance. We want to share our answer here in the blog for the benefit of other readers. The writer said it seemed like we had not updated AIXPRT in a long time, and wondered whether we had any immediate plans to do so.

It’s true that we haven’t updated AIXPRT in quite some time. Unfortunately, while a few tech press publications and OEM labs began experimenting with AIXPRT testing, the benchmark never got the traction we hoped for, and we’ve decided to invest our resources elsewhere for the time being. The AIXPRT installation packages are still available for people to use or reference as they wish, but we have not updated the benchmark to work with the latest platform versions (OpenVINO, TensorFlow, etc.). It’s likely that several components in each package are out of date.

If you are interested in AIXPRT and would like us to bring it up to date, please let us know. We can’t promise that we’ll revive the benchmark, but your feedback could be a valuable contribution as we try to gauge the benchmarking community’s interest.

Justin

Here’s what to expect in the WebXPRT 4 Preview

A few months ago, we shared detailed information about the changes we expected to make in WebXPRT 4. We are currently doing internal testing of the WebXPRT 4 Preview build in preparation for releasing it to the public. We want to let our readers know what to expect.

We’ve made some changes since our last update, and some of the details we present below could still change before the preview release. However, we are much closer to the final product. Once we release the WebXPRT 4 Preview, testers will be able to publish scores from Preview build testing. We will limit any changes we make between the Preview and the final release to the UI or to features that are not expected to affect test scores.

General changes

Some of the non-workload changes we’ve made in WebXPRT 4 relate to our typical benchmark update process.

  • We have updated the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We did not significantly change the flow of the UI.
  • We have updated content in some of the workloads to reflect changes in everyday technology, such as upgrading most of the photos in the photo processing workloads to higher resolutions.
  • We have not yet added a looping function to the automation scripts, but are still considering it for the future.
  • We investigated the possibility of shortening the benchmark by reducing the default number of iterations from seven to five, but have decided to stick with seven iterations to ensure that score variability remains acceptable across all platforms.

Workload changes

  • Photo Enhancement. We increased the efficiency of the workload’s Canvas object creation function, and replaced the existing photos with new, higher-resolution photos.
  • Organize Album Using AI. We replaced ConvNetJS with WebAssembly (WASM)-based OpenCV.js for both the face detection and image classification tasks. We also changed the image classification tasks to use images from the ImageNet dataset.
  • Stock Option Pricing. We updated the dygraph.js library.
  • Sales Graphs. We made no changes to this workload.
  • Encrypt Notes and OCR Scan. We replaced ASM.js with WASM for the Notes task and updated the WASM-based Tesseract version for the OCR task.
  • Online Homework. In addition to the existing scenario, which uses four Web Workers, we have added a scenario that uses two Web Workers. The workload now covers a wider range of Web Worker performance, and we calculate the score from the combined run time of both scenarios (see the sketch after this list). We also updated the typo.js library.
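
Below is a minimal sketch of that combined-timing approach, assuming a placeholder compute loop in place of the workload’s actual text-processing tasks. It is only meant to show the structure (two scenarios with different Web Worker counts, scored on their combined run time), not WebXPRT 4’s real code.

```typescript
// Minimal sketch, not WebXPRT 4's actual implementation: time a four-worker
// scenario and a two-worker scenario, then combine the run times.
// The worker body below is a placeholder compute loop, not the real workload.
const workerSource = `
  self.onmessage = (e) => {
    let sum = 0;
    for (let i = 0; i < e.data.iterations; i++) sum += Math.sqrt(i);
    self.postMessage(sum);
  };
`;
const workerUrl = URL.createObjectURL(
  new Blob([workerSource], { type: "text/javascript" })
);

// Run one scenario with the given number of workers; resolve with elapsed time in ms.
function runScenario(workerCount: number, iterations: number): Promise<number> {
  return new Promise((resolve) => {
    const start = performance.now();
    let finished = 0;
    for (let i = 0; i < workerCount; i++) {
      const worker = new Worker(workerUrl);
      worker.onmessage = () => {
        worker.terminate();
        if (++finished === workerCount) resolve(performance.now() - start);
      };
      worker.postMessage({ iterations });
    }
  });
}

// Combine the run times of the four-worker and two-worker scenarios.
async function combinedRunTime(): Promise<number> {
  const fourWorkerTime = await runScenario(4, 5_000_000);
  const twoWorkerTime = await runScenario(2, 5_000_000);
  return fourWorkerTime + twoWorkerTime;
}

combinedRunTime().then((ms) =>
  console.log(`Combined run time: ${ms.toFixed(1)} ms`)
);
```

Summing the elapsed times of both scenarios means that a system’s behavior at each level of parallelism contributes to the single workload score.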

Experimental workloads

As part of the WebXPRT 4 development process, we researched the possibility of including two new workloads: a natural language processing (NLP) workload and an Angular-based message scrolling workload. After much testing and discussion, we have decided not to include these two workloads in WebXPRT 4. They will be good candidates for us to add as experimental WebXPRT 4 workloads in 2022.

The release timeline

Our goal is to publish the WebXPRT 4 Preview build by December 15th, which will allow testers to publish scores in the weeks leading up to the Consumer Electronics Show in Las Vegas in January 2022. We will provide more detailed information about the general availability (GA) timeline here in the blog as soon as possible.

If you have any questions about the details we’ve shared above, please feel free to ask!

Justin

Thinking about experimental WebXPRT workloads in 2022

As the WebXPRT 4 development process has progressed, we’ve started to discuss the possibility of offering experimental WebXPRT 4 workloads in 2022. These would be optional workloads that test cutting-edge browser technologies or new use cases. The individual scores for the experimental workloads would stand alone and would not factor into the WebXPRT 4 overall score.

WebXPRT testers would be able to run the experimental workloads in one of two ways: by manually selecting them on the benchmark’s home screen, or by adjusting a value in the WebXPRT 4 automation scripts.

Testers would benefit from experimental workloads by being able to compare how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.

Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.

Justin

The AIXPRT learning tool is now live (and a CloudXPRT version is on the way)!

We’re happy to announce that the AIXPRT learning tool is now live! We designed the tool to serve as an information hub for common AIXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT find the answers they need in as little time as possible.

The tool features four primary areas of content:

  • The Q&A section provides quick answers to the questions we receive most from testers and the tech press.
  • The AIXPRT: the basics section describes specific topics such as the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
  • The testing and results section covers the testing process, metrics, and how to publish results.
  • The AI/ML primer provides brief, easy-to-understand definitions of key AI and ML terms and concepts for those who want to learn more about the subject.

The first screenshot below shows the home screen. To show how some of the pop-up information sections appear, the second screenshot shows the Inference tasks (workloads) entry in the AI/ML primer section.

We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.

If you have any questions about the tool, please let us know!

Justin

A first look at the upcoming AIXPRT learning tool

Last month, we announced that we’re working on a new AIXPRT learning tool. Because we want tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be able to find the answers they need in as little time as possible, we’re designing this tool to serve as an information hub for common AIXPRT topics and questions.

We’re still finalizing aspects of the tool’s content and design, so some details may change, but we can now share a sneak peek of the main landing page. In the screenshot below, you can see that the tool will feature four primary areas of content:

  • The FAQ section will provide quick answers to the questions we receive most from testers and the tech press.
  • The AIXPRT basics section will describe specific topics such as the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
  • The testing and results section will cover the testing process, the metrics the benchmark produces, and how to publish results.
  • The AI/ML primer will provide brief, easy-to-understand definitions of key AI and ML terms and concepts for those who want to learn more about the subject.

We’re excited about the new AIXPRT learning tool, and will share more information here in the blog as we get closer to a release date. If you have any questions about the tool, please let us know!

Justin
