
Web APIs: Possible paths for the AI-focused WebXPRT 4 auxiliary workload

In our last blog post, we discussed one of the major decision points we’re facing as we work on what we hope will be the first new AI-focused WebXPRT 4 auxiliary workload: choosing a Web AI framework. In today’s blog, we’re discussing another significant decision that we need to make for the future workload’s development path: choosing a web API.

Many of you are familiar with the concept of an application programming interface (API). Simply put, an API implements a set of software rules, tools, and/or protocols that serves as an intermediary, making it possible for different computer programs or components to communicate with each other. APIs simplify many development tasks for programmers and provide standardized ways for applications to share data, functions, and system resources.

Web APIs fulfill the intermediary role of an API for the web: server-side web APIs expose a web server's data and functions to other programs through HTTP-based communication, while client-side web APIs let browser-based applications tap into and expand browser functionality. Client-side web APIs execute the kinds of JavaScript, HTML5, and WebAssembly (Wasm) workloads—among other examples—that support the wide variety of browser-based apps and extensions many of us use every day. WebXPRT uses those types of browser-based workloads to evaluate system performance. To lay a solid foundation for the future browser-based AI workload, we need to choose a web API that will be compatible with WebXPRT and with the Web AI framework and AI inference workload(s) we ultimately choose.

Currently, there are three main web API paths for running AI inference in a web browser: Web Neural Network (WebNN), Wasm, and WebGPU. These three web technologies are in various stages of development and standardization, and each has a different level of support within the major browsers. Here are basic overviews of the three options, along with a few of our thoughts on the benefits and limitations each may bring to the table for a future WebXPRT AI workload (for a quick way to check which of these paths a given browser exposes, see the feature-detection sketch after the list):

  • WebNN is a JavaScript API that enables web-based applications to execute machine learning (ML) inference tasks directly on a device's hardware. WebNN makes it easier to integrate ML models into web apps, and it allows web apps to leverage the power of neural processing units (NPUs). WebNN has a lot going for it: it's hardware-agnostic, it works with various ML frameworks, and it's likely to be a major player in future browser-based inference applications. However, as a web standard, WebNN is still in the development stage and is available only in developer previews for Chromium-based browsers. Full default WebNN support could take a year or more.
  • Wasm is a binary instruction format that works across all modern browsers. Wasm provides a sandboxed environment that operates at near-native speeds and takes advantage of common hardware specs across platforms. Wasm's capabilities offer web developers a great deal of flexibility for running complex client applications in the browser. Simply put, Wasm can help developers adapt their existing code for additional platforms and browser-based applications without extensive code rewrites. Wasm's flexibility and cross-platform compatibility are among the reasons we've already made use of Wasm in two existing WebXPRT 4 workloads that feature AI tasks: Organize Album using AI, and Encrypt Notes and OCR Scan. Wasm can also work together with other web APIs, such as WebGPU.
  • WebGPU enables web-based applications to directly access the graphics rendering and computational capabilities of a system's GPU. The parallel computational abilities of GPUs make them especially well-suited to efficiently handle some of the demands of AI inference workloads, such as image-based GenAI tasks and large language models. Google Chrome and Microsoft Edge currently support WebGPU, and it's available in Safari through a tech preview.
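
For readers who want a concrete sense of how these three paths surface in the browser, here's a minimal JavaScript feature-detection sketch. This is an illustration rather than WebXPRT code; the entry points shown (the WebAssembly global, navigator.gpu, and navigator.ml) are the standard ones, but actual availability depends on your browser version and flags, and WebNN in particular typically appears only in Chromium developer previews today.

```js
// Minimal feature-detection sketch: check which in-browser inference
// paths the current browser exposes. Run it in a page or dev-tools console.
function detectInferencePaths() {
  return {
    // WebAssembly has shipped in all modern browsers.
    wasm: typeof WebAssembly === "object",
    // WebGPU's entry point is navigator.gpu.
    webgpu: "gpu" in navigator,
    // WebNN's entry point is navigator.ml (developer previews only, for now).
    webnn: "ml" in navigator,
  };
}

console.log(detectInferencePaths()); // e.g. { wasm: true, webgpu: true, webnn: false }
```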

Right now, we don’t think that WebNN will be fully out of the development phase in time to serve as our go-to web API for a new WebXPRT AI workload, so Wasm and/or WebGPU appear to be our best options for now. When WebNN is fully baked and available by default in mainstream browsers, it’s possible that we could port any existing Wasm- or WebGPU-based WebXPRT AI workloads to WebNN, which may open up the possibility of cross-platform, browser-based NPU performance comparisons.

All that said, and as we mentioned in our previous post about Web AI frameworks, we have not made any final decisions about a web API or any other aspect of the future workload. We’re still in the early stages of this project, and we want your input.

If this discussion has sparked web AI ideas that you think would benefit the process, or if you have feedback you’d like to share, please feel free to contact us!

Justin

Web AI frameworks: Possible paths for the AI-focused WebXPRT 4 auxiliary workload

A few months ago, we announced that we’re moving forward with the development of a new auxiliary WebXPRT 4 workload focused on local, browser-side AI technology. Local AI has many potential benefits, and it now seems safe to say that it will be a common fixture of everyday life for many people in the future. As the growth of browser-based inference technology picks up steam, our goal is to equip WebXPRT 4 users with the ability to quickly and reliably evaluate how well devices can handle substantial local inference tasks in the browser.

To reach our goal, we’ll need to make many well-researched and carefully considered decisions along the development path. Throughout the decision-making process, we’ll be balancing our commitment to core XPRT values, such as ease of use and widespread compatibility, with the practical realities of working with rapidly changing emergent technologies. In today’s blog, we’re discussing one of the first decision points that we face—choosing a Web AI framework.

AI frameworks are suites of tools and libraries that serve as building blocks for developers to create new AI-based models and apps or integrate existing AI functions in custom ways. AI frameworks can be commercial, such as OpenAI’s offerings, or open source, such as Hugging Face, PyTorch, and TensorFlow. Because the XPRTs are available to users at no cost and we publish our source code, open-source frameworks are the right choice for WebXPRT.

Because the new workload will focus on locally powered, browser-based inference tasks, we also need to choose an AI framework that has browser integration capabilities and does not rely on server-side computing. These types of frameworks—called Web AI—use JavaScript (JS) APIs and other web technologies, such as WebAssembly and WebGPU, to run machine learning (ML) tasks on a device’s CPU, GPU, or NPU.

Several emerging Web AI frameworks may provide the compatibility and functionality we need for the future WebXPRT workload. Here are a few that we’re currently researching (after the list, we’ve included a brief sketch of what in-browser inference with one of these frameworks can look like):

  • ONNX Runtime Web: Microsoft and other partners developed the Open Neural Network Exchange (ONNX) as an open standard for ML models. With available tools, users can convert models from several AI frameworks to ONNX, which can then be used by ONNX Runtime Web. ONNX Runtime Web allows developers to leverage the broad compatibility of ONNX-formatted ML models—including pre-trained vision, language, and GenAI models—in their web applications.
  • Transformers.js: Transformers.js, which uses ONNX Runtime Web under the hood, is a JS library that allows users to run AI models in the browser, even offline. Transformers.js supports language, computer vision, and audio ML models, among others.
  • MediaPipe: Google developed MediaPipe as a way for developers to adapt TensorFlow-based models for use across many platforms in real-time on-device inference applications such as face detection and gesture recognition. MediaPipe is particularly useful for inference work in images, videos, and live streaming.
  • TensorFlow.js: TensorFlow has been around for a long time, and the TensorFlow ecosystem provides users with a broad variety of models and datasets. TensorFlow is an end-to-end ML solution—training to inference—but with available pre-trained models, developers can focus on inference. TensorFlow.js is an open-source JS library that helps developers integrate TensorFlow with web apps.
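
To make that research a bit more concrete, here’s a minimal sketch of what browser-side inference with Transformers.js can look like. This is an illustration rather than WebXPRT code; the package name and the default model that the pipeline call downloads are assumptions that depend on the Transformers.js release you install.

```js
// Minimal Transformers.js sketch: run a sentiment-analysis model entirely
// in the browser. (Transformers.js executes the model via ONNX Runtime Web.)
import { pipeline } from "@xenova/transformers"; // package name may vary by release

async function run() {
  // On first use, this downloads and caches a default pre-trained model;
  // all subsequent inference runs locally on the client.
  const classifier = await pipeline("sentiment-analysis");
  const result = await classifier("WebXPRT makes browser benchmarking easy.");
  console.log(result); // e.g. [{ label: "POSITIVE", score: 0.99 }]
}

run();
```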

We have not made final decisions about a Web AI framework or any aspect of the future workload. We’re still in the research, discussion, and experimentation stages of development, but we want to be transparent with our readers about where we are in the process. In future blog posts, we’ll discuss some of the other major decision points in play.

Most of all, we invite you to join us in these discussions, make recommendations, and give us any other feedback or suggestions you may have, so please feel free to share your thoughts!

Justin

Contribute to WebXPRT’s AI capabilities with your NPU-equipped gear

A few weeks ago, we announced that we’re developing a new auxiliary WebXPRT 4 workload focused on local, browser-based AI technology. This is an exciting project for us, and as we work to determine the best approach from the perspective of frameworks, APIs, inference models, and test scenarios, we’re also thinking ahead to the testing process. To best understand how the new workload will impact system performance, we’re going to need to test it on hardware equipped with the latest generation of neural processing units (NPUs).

NPUs are not new, but the technology is advancing rapidly, and a growing number of PC and laptop manufacturers are releasing NPU-equipped systems. Several vendors have announced plans to release systems equipped with all-new NPUs in the latter half of this year. As is often the case with bleeding-edge technology, however, official release dates do not always coincide with widespread availability.

We want to evaluate new AI-focused WebXPRT workloads on the widest possible range of new systems, but getting a wide selection of gear equipped with the latest NPUs may take quite a while through normal channels. For that reason, we’ve decided to ask our readers for help to expedite the process.

If you’re an OEM or vendor representative with access to the latest generation of NPU-equipped gear and want to contribute to WebXPRT’s evolution, consider sending us any PCs, white boxes, laptops, 2-in-1s, or tablets (on loan) that would be suitable for NPU-focused testing. We have decades of experience serving as trusted testers of confidential and pre-release gear, so we’re well-acquainted with concerns about confidentiality that may come into play, and we won’t publish any information about the systems or related test results without your permission.

We will, though, be happy to share our test results on your systems with you, and we’d love to hear any guidance or other feedback you have on the new workload.

We’re open to any suitable gear, but we’re especially interested in AMD Ryzen AI, Apple M4, Intel Lunar Lake and Arrow Lake, and Qualcomm Snapdragon X Elite systems.

If you’re interested in sending us gear for WebXPRT development testing, please contact us. We’ll work out all the necessary details. Thanks in advance for your help!

Justin

Local AI and new frontiers for performance evaluation

Recently, we discussed some ways the PC market may evolve in 2024, and how new Windows on Arm PCs could present the XPRTs with many opportunities for benchmarking. In addition to a potential market shakeup from Arm-based PCs in the coming years, there’s a much broader emerging trend that could eventually revolutionize almost everything about the way we interact with our personal devices—the development of local, dedicated AI processing units for consumer-oriented tech.

AI already impacts daily life for many consumers through technologies such as predictive text, computer vision, adaptive workflow apps, voice recognition, smart assistants, and much more. Generative AI-based technologies are rapidly establishing a permanent, society-altering presence across a wide range of industries. Aside from some localized inference tasks that the CPU and/or GPU typically handle, the bulk of the heavy compute power that fuels those technologies has resided in the cloud or in on-prem servers. Now, several major chipmakers are working to roll out their own versions of AI-optimized neural processing units (NPUs) that will enable local devices to take on a larger share of the AI load.

Examples of dedicated AI hardware in recently released or upcoming consumer devices include Intel’s new Meteor Lake NPU, Apple’s Neural Engine for M-series SoCs, Qualcomm’s Hexagon NPU, and AMD’s XDNA 2 architecture. The potential benefits of localized, NPU-facilitated AI are straightforward. On-device AI could reduce power consumption and extend battery life by offloading AI tasks from the CPU and GPU. It could alleviate certain cloud-related privacy and security concerns. Without the delays inherent in cloud queries, localized AI could execute inference tasks much closer to real time. NPU-powered devices could fine-tune applications around your habits and preferences, even while offline. You could pull and utilize relevant data from cloud-based datasets without pushing private data in return. Theoretically, your device could know a great deal about you and enhance many areas of your daily life without passing all that data to another party.

Will localized AI play out that way? Some tech companies envision a role for on-device AI that enhances the abilities of existing cloud-based subscription services without decoupling personal data. We’ll likely see a wide variety of capabilities and services on offer, with application-specific and SaaS-determined privacy options.

Regardless of the way on-device AI technology evolves in the coming years, it presents an exciting new frontier for benchmarking. Not all NPUs will be created equal, and that’s something buyers will need to understand. Some vendors will optimize their hardware more for computer vision, others for large language models or AI-based graphics rendering, and so on. It won’t be enough for businesses and consumers to simply know that a new system has dedicated AI processing abilities. They’ll need to know whether that system performs well while handling the types of AI-related tasks they do every day.

Here at the XPRTs, we specialize in creating benchmarks built around real-world scenarios that mirror the tasks people perform in their daily lives. That approach means that when people use XPRT scores to compare device performance, they’re using a metric that can help them make a buying decision that will benefit them every day. We look forward to exploring ways that we can bring XPRT benchmarking expertise to the world of on-device AI.

Do you have ideas for future localized AI workloads? Let us know!

Justin

A note about AIXPRT

Recently, a member of the tech press asked us about the status of AIXPRT, our benchmark that measures machine learning inference performance. We want to share our answer here in the blog for the benefit of other readers. The writer said it seemed like we had not updated AIXPRT in a long time, and wondered whether we had any immediate plans to do so.

It’s true that we haven’t updated AIXPRT in quite some time. Unfortunately, while a few tech press publications and OEM labs began experimenting with AIXPRT testing, the benchmark never got the traction we hoped for, and we’ve decided to invest our resources elsewhere for the time being. The AIXPRT installation packages are still available for people to use or reference as they wish, but we have not updated the benchmark to work with the latest platform versions (OpenVINO, TensorFlow, etc.). It’s likely that several components in each package are out of date.

If you are interested in AIXPRT and would like us to bring it up to date, please let us know. We can’t promise that we’ll revive the benchmark, but your feedback could be a valuable contribution as we try to gauge the benchmarking community’s interest.

Justin

Considering WebAssembly for WebXPRT 4

Earlier this month, we discussed a few of our ideas for possible changes in WebXPRT 4, including new web technologies that may work well in a browser benchmark. Today, we’re going to focus on one of those technologies, WebAssembly, in more detail.

WebAssembly (WASM) is a binary instruction format that works across all modern browsers. WASM provides a sandboxed environment that operates at near-native speeds and takes advantage of common hardware specs across platforms. WASM’s capabilities offer web developers a great deal of flexibility for running complex client applications in the browser. That level of flexibility may enable workload scenario options for WebXPRT 4 such as gaming, video editing, VR, virtual machines, and image recognition. We’re excited about those possibilities, but it remains to be seen which WASM use cases will meet the criteria we look for when considering new WebXPRT workloads, such as relevance to real life, consistency and replicability, and the broadest possible level of cross-browser support.
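
For a concrete sense of how the browser loads and runs WASM, here’s a minimal JavaScript sketch. The instantiateStreaming call and the exports object are standard parts of the WebAssembly JS API; the module file and its add function are hypothetical stand-ins for code compiled from a language such as C++ or Rust.

```js
// Minimal WASM loading sketch: fetch, compile, and instantiate a module,
// then call one of its exported functions. "add.wasm" is a hypothetical
// module that exports an add(a, b) function.
async function runWasm() {
  const { instance } = await WebAssembly.instantiateStreaming(fetch("add.wasm"));
  console.log(instance.exports.add(2, 3)); // 5
}

runWasm();
```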

One WASM workload that we’re investigating is a web-based machine learning workload that uses TensorFlow for JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of tasks, including image classification, object detection, sentence encoding, and natural language processing. TensorFlow.js originally used WebGL technology on the back end, but it’s now possible to run models on a WASM back end, as the sketch below shows. We could also use this technology to enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album using AI or Encrypt Notes and OCR Scan.
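
Here’s a minimal sketch of what switching TensorFlow.js to its WASM back end can look like. This is an illustration rather than a WebXPRT workload; the package names are the ones TensorFlow.js currently publishes, and exact setup (for example, telling the backend where to find its .wasm binaries) can vary by version and bundler.

```js
// Minimal TensorFlow.js + WASM sketch: register the WASM backend,
// select it, and run a tiny computation on it.
import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-backend-wasm"; // importing this registers the "wasm" backend

async function main() {
  await tf.setBackend("wasm"); // switch from the default (typically WebGL) backend
  await tf.ready();

  const a = tf.tensor1d([1, 2, 3]);
  const b = tf.tensor1d([4, 5, 6]);
  a.add(b).print(); // [5, 7, 9]
}

main();
```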

We can’t yet say that a WASM workload will definitely appear in WebXPRT 4, but the technology is promising. Do you have any experience with WASM, or ideas for WASM workloads? There’s still time for you to help shape the future of WebXPRT 4, so let us know what you think!

Justin
