

Web AI frameworks: Possible paths for the AI-focused WebXPRT 4 auxiliary workload

A few months ago, we announced that we’re moving forward with the development of a new auxiliary WebXPRT 4 workload focused on local, browser-side AI technology. Local AI has many potential benefits, and it now seems safe to say that it will be a common fixture of everyday life for many people in the future. As the growth of browser-based inference technology picks up steam, our goal is to equip WebXPRT 4 users with the ability to quickly and reliably evaluate how well devices can handle substantial local inference tasks in the browser.

To reach our goal, we’ll need to make many well-researched and carefully considered decisions along the development path. Throughout the decision-making process, we’ll be balancing our commitment to core XPRT values, such as ease of use and widespread compatibility, with the practical realities of working with rapidly changing emergent technologies. In today’s blog, we’re discussing one of the first decision points that we face—choosing a Web AI framework.

AI frameworks are suites of tools and libraries that serve as building blocks for developers to create new AI-based models and apps or integrate existing AI functions in custom ways. AI frameworks can be commercial, such as those from OpenAI, or open source, such as Hugging Face Transformers, PyTorch, and TensorFlow. Because the XPRTs are available at no cost to users and we publish our source code, open-source frameworks are the right choice for WebXPRT.

Because the new workload will focus on locally powered, browser-based inference tasks, we also need to choose an AI framework that has browser integration capabilities and does not rely on server-side computing. These types of frameworks—called Web AI—use JavaScript (JS) APIs and other web technologies, such as WebAssembly and WebGPU, to run machine learning (ML) tasks on a device’s CPU, GPU, or NPU.
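As a rough illustration, here is a minimal, hypothetical sketch (plain browser JavaScript, no library assumed) of how a page might probe for two of these backends before attempting local inference. WebAssembly typically serves as the CPU fallback, while WebGPU exposes the GPU:

```
// Minimal capability check a Web AI workload might run before inference.
// WebAssembly is the usual CPU fallback; WebGPU enables GPU acceleration
// in browsers that support it.
async function detectWebAIBackends() {
  const backends = [];

  if (typeof WebAssembly === 'object') {
    backends.push('wasm'); // CPU execution via WebAssembly
  }

  if ('gpu' in navigator) {
    // requestAdapter() resolves to null if no suitable GPU is available.
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) {
      backends.push('webgpu'); // GPU execution via WebGPU
    }
  }

  return backends;
}

detectWebAIBackends().then((backends) =>
  console.log('Available Web AI backends:', backends)
);
```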

Several emerging Web AI frameworks may provide the compatibility and functionality we need for the future WebXPRT workload. Here are a few that we’re currently researching:

  • ONNX Runtime Web: Microsoft and other partners developed the Open Neural Network Exchange (ONNX) as an open standard for ML models. With available tools, users can convert models from several AI frameworks to ONNX, which can then be used by ONNX Runtime Web. ONNX Runtime Web allows developers to leverage the broad compatibility of ONNX-formatted ML models—including pre-trained vision, language, and GenAI models—in their web applications.
  • Transformers.js: Transformers.js, which runs on ONNX Runtime Web, is a JS library that allows users to run AI models in the browser, including offline. Transformers.js supports language, computer vision, and audio ML models, among others. (See the sketch after this list.)
  • MediaPipe: Google developed MediaPipe as a way for developers to adapt TensorFlow-based models for use across many platforms in real-time on-device inference applications such as face detection and gesture recognition. MediaPipe is particularly useful for inference work in images, videos, and live streaming.
  • TensorFlow.js: TensorFlow has been around for a long time, and the TensorFlow ecosystem provides users with a broad variety of models and datasets. TensorFlow is an end-to-end ML solution—training to inference—but with available pre-trained models, developers can focus on inference. TensorFlow.js is an open-source JS library that helps developers integrate TensorFlow with web apps.
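
To make the list above more concrete, here is a minimal sketch of what browser-side inference with Transformers.js can look like. This is only an illustrative default pipeline, not a preview of the WebXPRT workload; the sample sentence is our own:

```
// A minimal Transformers.js sketch: local sentiment analysis in the browser.
// On first use, the library fetches an ONNX-format model and then runs it
// locally via ONNX Runtime Web; later runs can use the cached model.
import { pipeline } from '@xenova/transformers';

// Create a text-classification pipeline with the library's default model.
const classifier = await pipeline('sentiment-analysis');

// Run inference entirely on the local device.
const result = await classifier('WebXPRT runs great on this laptop.');
console.log(result); // e.g., [{ label: 'POSITIVE', score: 0.99 }]
```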

We have not made final decisions about a Web AI framework or any aspect of the future workload. We’re still in the research, discussion, and experimentation stages of development, but we want to be transparent with our readers about where we are in the process. In future blog posts, we’ll discuss some of the other major decision points in play.

Most of all, we invite you to join us in these discussions, make recommendations, and share any other feedback or suggestions you may have!

Justin

Up next for WebXPRT 4: A new AI-focused workload!

We’re always thinking about ways to improve WebXPRT. In the past, we’ve discussed the potential benefits of auxiliary workloads and the role that such workloads might play in future WebXPRT updates and versions. Today, we’re very excited to announce that we’ve decided to move forward with the development of a new WebXPRT 4 workload focused on browser-side AI technology!

WebXPRT 4 already includes timed AI tasks in two of its workloads: the Organize Album using AI workload and the Encrypt Notes and OCR Scan workload. These two workloads reflect the types of light browser-side inference tasks that have been available for a while now, but most heavy-duty inference on the web has historically happened on on-premises servers or in the cloud. Now, local AI technology is growing by leaps and bounds, and the integration of new AI capabilities with browser-based tasks is poised to advance rapidly.

Because of this growth, we believe now is the time to start work on giving WebXPRT 4 the ability to evaluate new browser-based AI capabilities—capabilities that are likely to become a part of everyday life in the next few years. We haven’t yet decided on a test scenario or software stack for the new workload, but we’ll be working to refine our plan in the coming months. There seems to be some initial promise in emerging frameworks such as ONNX Runtime Web, which allows users to run and deploy web-based machine learning models by using JavaScript APIs and libraries. In addition, new web APIs like WebGPU (currently supported in Edge and Chrome, and available as a tech preview in Safari) and WebNN (still in development) may soon help facilitate new browser-side AI workloads.
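For readers curious about what that looks like in practice, here is a brief, hypothetical ONNX Runtime Web sketch. The model file, input name, and tensor shape are placeholder assumptions, not details of any planned workload:

```
// Hypothetical ONNX Runtime Web sketch: load a model and run one inference
// in the browser. 'model.onnx' and the input name 'input' are placeholders.
import * as ort from 'onnxruntime-web';

// Ask for the WebGPU execution provider first, with WebAssembly (CPU) as
// the fallback in browsers that lack WebGPU support.
const session = await ort.InferenceSession.create('model.onnx', {
  executionProviders: ['webgpu', 'wasm'],
});

// Build a dummy input tensor; the real shape and name depend on the model.
const data = new Float32Array(1 * 3 * 224 * 224);
const feeds = { input: new ort.Tensor('float32', data, [1, 3, 224, 224]) };

const outputs = await session.run(feeds);
console.log(Object.keys(outputs)); // names of the model's output tensors
```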

We know that many longtime WebXPRT 4 users will have questions about how this new workload may affect their tests. We want to assure you that the workload will be an optional bonus workload and will not run by default during normal WebXPRT 4 tests. As you consider possibilities for the new workload, here are a few points to keep in mind:

  • The workload will be optional for users to run.
  • It will not affect the main WebXPRT 4 subtest scores or overall score in any way.
  • It will run separately from the main test and will produce its own score(s).
  • Current and future WebXPRT 4 results will still be comparable to one another, so users who’ve already built a database of WebXPRT 4 scores will not have to retest their devices.
  • Because many of the available frameworks don’t currently run on all browsers, the workload may not run on every platform.

As we research available technologies and explore our options, we would love to hear from you. If you have ideas for an AI workload scenario that you think would be useful or thoughts on how we should implement it, please let us know! We’re excited about adding new technologies and new value to WebXPRT 4, and we look forward to sharing more information here in the blog as we make progress.

Justin
