
Tag Archives: artificial intelligence

Contribute to WebXPRT’s AI capabilities with your NPU-equipped gear

A few weeks ago, we announced that we’re developing a new auxiliary WebXPRT 4 workload focused on local, browser-based AI technology. This is an exciting project for us, and as we work to determine the best approach from the perspective of frameworks, APIs, inference models, and test scenarios, we’re also thinking ahead to the testing process. To best understand how the new workload will impact system performance, we’re going to need to test it on hardware equipped with the latest generation of neural processing units (NPUs).

NPUs are not new, but the technology is advancing rapidly, and a growing number of PC and laptop manufacturers are releasing NPU-equipped systems. Several vendors have announced plans to release systems equipped with all-new NPUs in the latter half of this year. As is often the case with bleeding-edge technology, however, official release dates do not always coincide with widespread availability.

We want to evaluate new AI-focused WebXPRT workloads on the widest possible range of new systems, but getting a wide selection of gear equipped with the latest NPUs may take quite a while through normal channels. For that reason, we’ve decided to ask our readers for help to expedite the process.

If you’re an OEM or vendor representative with access to the latest generation of NPU-equipped gear and want to contribute to WebXPRT’s evolution, consider sending us any PCs, white boxes, laptops, 2-in-1s, or tablets (on loan) that would be suitable for NPU-focused testing. We have decades of experience serving as trusted testers of confidential and pre-release gear, so we’re well-acquainted with concerns about confidentiality that may come into play, and we won’t publish any information about the systems or related test results without your permission.

We will, though, be happy to share our test results on your systems with you, and we’d love to hear any guidance or other feedback from you on this new workload.

We’re open to any suitable gear, but we’re especially interested in AMD Ryzen AI, Apple M4, Intel Lunar Lake and Arrow Lake, and Qualcomm Snapdragon X Elite systems.

If you’re interested in sending us gear for WebXPRT development testing, please contact us. We’ll work out all the necessary details. Thanks in advance for your help!

Justin

All about the AIXPRT Community Preview

Last week, Bill discussed our plans for the AIXPRT Community Preview (CP). I’m happy to report that, despite some last-minute tweaks and testing, we’re close to being on schedule. We expect to take the CP build live in the coming days, and will send a message to community members to let them know when the build is available in the AIXPRT GitHub repository.

As we mentioned last week, the AIXPRT CP build includes support for the Intel OpenVINO, TensorFlow (CPU and GPU), and TensorFlow with NVIDIA TensorRT toolkits to run image-classification workloads with ResNet-50 and SSD-MobileNet v1 networks. The test reports FP32, FP16, and INT8 levels of precision. Although the minimum CPU and GPU requirements vary by toolkit, the test systems must be running Ubuntu 16.04 LTS. You’ll be able to find more detail on those requirements in the installation instructions that we’ll post on AIXPRT.com.
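To make the kind of numbers these image-classification runs produce more concrete, here is a minimal, purely illustrative Python sketch of how a throughput figure (images per second) can be derived from timed batch inferences. The function names, batch sizes, and the stand-in inference routine are hypothetical examples, not AIXPRT's actual code or metrics.

```python
# Illustrative sketch only: deriving an images/second throughput figure
# from timed batches. All names and numbers here are hypothetical,
# not taken from the AIXPRT source.
import time

def run_batches(infer, batch_size, num_batches):
    """Time num_batches calls to infer() and return images/second."""
    start = time.perf_counter()
    for _ in range(num_batches):
        infer(batch_size)  # stand-in for a real model inference call
    elapsed = time.perf_counter() - start
    return (batch_size * num_batches) / elapsed

def fake_infer(batch_size):
    # Stand-in "inference" that just burns a little CPU time.
    sum(i * i for i in range(1000 * batch_size))

throughput = run_batches(fake_infer, batch_size=32, num_batches=10)
print(f"{throughput:.1f} images/sec")
```

In a real toolkit run, the inference call would invoke a ResNet-50 or SSD-MobileNet v1 model through OpenVINO, TensorFlow, or TensorRT at the chosen precision, and the timing loop would typically include warm-up iterations that are excluded from the measurement.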

We’re making the AIXPRT CP available to anyone interested in participating, but you must have a GitHub account. To gain access to the CP, please contact us and let us know your GitHub username. Once we receive it, we’ll send you an invitation to join the repository as a collaborator.

We’re allowing folks to quote test results during the CP period, and we’ll publish results from our lab and other members of the community at AIXPRT.com. Because this testing involves so many complex variables, we may contact testers if we see published results that seem to be significantly different from those of comparable systems. During the CP period, we’ll provide detailed instructions on the AIXPRT results page on how to send in your results for publication on our site. For each set of results we receive, we’ll disclose all of the detailed test, software, and hardware information that the tester provides. In doing so, our goal is to make it possible for others to reproduce the test and confirm that they get similar numbers.

If you make changes to the code during testing, we ask that you email us and describe those changes. We’ll evaluate whether those changes should become part of AIXPRT. We also require that users not publish results from modified versions of the code during the CP period.

We expect the AIXPRT CP period to last about four to six weeks, placing the public release around the end of March or beginning of April. In the meantime, we welcome your thoughts and suggestions about all aspects of the benchmark.

Please let us know if you have any questions. Stay tuned to AIXPRT.com and the blog for more developments, and we look forward to seeing your results!

JNG

