
Tag Archives: browser benchmark

Good news for WebXPRT 4 testing!

Over the past several weeks, we’ve been working to find a solution to a problem with WebXPRT 4 test failures on Apple devices running iOS 17/17.1, iPadOS 17/17.1, and macOS Sonoma with Safari 17/17.1. While we put significant effort into an updated WebXPRT version that would mitigate this issue, we are happy to report that it now looks like we’ll be able to stick with the current version!

Last Thursday, Apple released the iOS 17.2 beta for participants in the Apple Developer Program. When we tested the current version of WebXPRT 4 on iOS 17.2, the tests completed without any issues. We then successfully completed tests on iPadOS 17.2 and macOS Sonoma 14.2 with Safari 17.2. Now that we have good reasons to believe that the iOS 17.2 release will solve the problem, sticking with the current WebXPRT 4 build will maximize continuity and minimize disruption for WebXPRT users.

Apple has not yet published a public release date for iOS/iPadOS/Safari 17.2. Based on past development schedules, it seems likely that they will release it between mid-November and early December, but that’s simply our best guess. Until then, users who want to test WebXPRT 4 on devices running iOS 17/17.1, iPadOS 17/17.1, or macOS Sonoma with Safari 17/17.1 will need to update those devices to iOS/iPadOS/Safari 17.2 via the Apple Developer Program.

To help Apple users better navigate testing until the public 17.2 release, we’ve added a function to the current WebXPRT 4 start page that notifies users when they need to update their operating system before testing.
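For readers curious about what such a check involves, the sketch below shows the general idea: parse the Safari version from the browser’s user agent string and warn when it falls in the affected 17.0/17.1 range. The function name, warning text, and version logic are our own illustrative assumptions, not the actual WebXPRT 4 start-page code.

```typescript
// Hypothetical sketch: warn testers on affected Safari/WebKit versions.
// Names and messages here are illustrative, not WebXPRT 4's actual code.
function needsOsUpdateWarning(userAgent: string): boolean {
  // Safari reports its version in the "Version/x.y" token of the user agent.
  const match = userAgent.match(/Version\/(\d+)\.(\d+)/);
  if (!match) {
    return false; // Not Safari, or no version reported; no warning needed.
  }
  const major = Number(match[1]);
  const minor = Number(match[2]);
  // Safari 17.0 and 17.1 are affected; 17.2 and later are expected to work.
  return major === 17 && minor < 2;
}

// Example usage on page load:
if (needsOsUpdateWarning(navigator.userAgent)) {
  console.warn("Please update to iOS/iPadOS/Safari 17.2 or later before running WebXPRT 4.");
}
```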

We appreciate everyone’s patience as we worked to find a solution to this problem! If you have any questions or concerns about WebXPRT 4, please let us know.

Justin

Making progress with WebXPRT 4 in iOS 17

In recent blog posts, we discussed an issue that we encountered when attempting to run WebXPRT 4 on iOS 17 devices. If you missed those posts, you can find more details about the nature of the problem here. In short, the issue is that the Encrypt Notes and OCR Scan subtest in WebXPRT 4 gets stuck when the Tesseract.js Optical Character Recognition (OCR) engine attempts to scan a shopping receipt. We’ve verified that the issue occurs on devices running iOS 17, iPadOS 17, and macOS Sonoma with Safari 17.
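For context, the OCR portion of that subtest hands an image of a shopping receipt to Tesseract.js and waits for the recognized text. A minimal sketch of that kind of call, written against a tesseract.js v2-style API, looks like the following; the file name and setup details are illustrative assumptions, not WebXPRT 4’s actual code.

```typescript
import { createWorker } from "tesseract.js";

// Minimal sketch of a tesseract.js v2-style OCR call on a receipt image.
// "receipt.png" is a placeholder, not WebXPRT 4's actual input file.
async function scanReceipt(): Promise<string> {
  const worker = createWorker();
  await worker.load();
  await worker.loadLanguage("eng");
  await worker.initialize("eng");
  const { data } = await worker.recognize("receipt.png");
  await worker.terminate();
  return data.text; // Recognized text from the receipt image.
}
```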

After a good bit of troubleshooting and research to identify the cause of the problem, we decided to build an updated version of WebXPRT 4 that uses a newer version of Tesseract for the OCR task. Aside from updating Tesseract in the new build, we aimed to change as little as possible. To maximize continuity, we’re still using the original input image for the receipt-scanning task, and we decided to stick with the WASM library instead of a WASM-SIMD library. Other than the new version of tesseract.js, WebXPRT 4 version number updates, and updated documentation where necessary, all aspects of WebXPRT 4 will remain the same.
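As an illustration of what choosing the plain WASM core over a WASM-SIMD build can look like in tesseract.js, the sketch below points the worker at a non-SIMD core file via the corePath option of the v2-style createWorker(options) signature. The exact option name, file name, and path vary by tesseract.js release, so treat this as an assumption-laden example rather than WebXPRT 4’s actual configuration.

```typescript
import { createWorker } from "tesseract.js";

// Illustrative only: select a plain-WASM (non-SIMD) Tesseract core by
// overriding corePath. The file name and path are placeholders and differ
// across tesseract.js releases; this is not WebXPRT 4's configuration.
const worker = createWorker({
  corePath: "lib/tesseract-core.wasm.js", // non-SIMD core build (placeholder)
});
```

From there, the same load/loadLanguage/initialize/recognize sequence shown earlier would run against the non-SIMD core.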

We’re currently testing a candidate build of this new version on a wide array of devices. The results so far seem promising, but we want to complete our due diligence and make sure this is the best approach to solving the problem. We know that OEM labs and tech reviewers put a lot of time and effort into compiling databases of results, so we hope to provide a solution that minimizes results disruption and inconvenience for WebXPRT 4 users. Ideally, folks would be able to integrate scores from the new build without any questions or confusion about comparability.

We don’t yet have an exact release date for a new WebXPRT 4 build, but we can say that we’re shooting for the end of October. We appreciate everyone’s patience as we work towards the best possible solution. If you have any questions or concerns about an updated version of WebXPRT 4, please let us know.

Justin

Investigating a possible issue with WebXPRT 4 in iOS 17

Yesterday, Apple revealed the iPhone 15 and iPhone 15 Pro at its annual fall event, along with a new version of the iOS mobile operating system (iOS 17). The official iOS 17 launch will take place on September 18th, but before then, users of newer iPhones can install the OS via the Apple Beta Software Program.

Today, a tech journalist informed us that during their testing of iPhone 15 and iPhone 15 Pro models running the iOS 17 Beta, WebXPRT 4 has been freezing while running the Encrypt Notes and OCR Scan workload in the Safari 17 browser. Here in the lab, we were immediately able to replicate the issue on an iPhone 12 Pro running the iOS 17 Beta.

Our initial troubleshooting confirmed that WebXPRT 3 successfully runs to completion on iOS 17 Beta, so it appears that the problem is specific to WebXPRT 4. We also confirmed that WebXPRT 4 freezes at the same place when running in the Google Chrome browser on iOS 17 Beta, so we know that the problem does not occur only in Safari.

We’re currently investigating the issue, and we will publish our findings here in the blog as soon as we’re confident that we’ve identified both the root cause and a workable solution, if one is necessary. A solution might not be necessary if, for example, the issue turns out to be a bug in the iOS 17 Beta that Apple resolves before the official launch.

We apologize for any inconvenience this issue might cause for tech reviewers and iPhone users, and we appreciate your patience while we figure out what’s going on. If you have any questions about WebXPRT 4, please don’t hesitate to ask!

Justin

Check out the WebXPRT 4 results viewer

New visitors to our site may not be aware of the WebXPRT 4 results viewer and how to use it. The viewer provides WebXPRT 4 users with an interactive, information-packed way to browse test results that is not available for earlier versions of the benchmark. With the viewer, users can explore all of the PT-curated results that we’ve published on WebXPRT.com, find more detailed information about those results, and compare results from different devices. The viewer currently displays over 460 results, and we add new entries each week.

The screenshot below shows the tool’s default display. Each vertical bar in the graph represents the overall score of a single test result, with bars arranged from lowest to highest. To view a single result in detail, the user hovers over a bar until it turns white and a small popup window displays the basic details of the result. If the user clicks to select the highlighted bar, the bar turns dark blue, and the dark blue banner at the bottom of the viewer displays additional details about that result.

In the example above, the banner shows the overall score (227), the score’s percentile rank (66th) among the scores in the current display, the name of the test device, and basic hardware disclosure information. If the source of the result is PT, users can click the Run info button to see the run’s individual workload scores. If the source is an external publisher, users can click the Source link to navigate to the original site.

The viewer includes a drop-down menu that lets users quickly filter results by major device type categories, and a tab with additional filtering options, such as browser type, processor vendor, and result source. The screenshot below shows the viewer after I used the device type drop-down filter to select only desktops.

The screenshot below shows the viewer as I use the filter tab to explore additional filter options, such as processor vendor.

The viewer also lets users pin multiple specific runs, which is helpful for making side-by-side comparisons. The screenshot below shows the viewer after I pinned four runs and viewed them on the Pinned runs screen.

The screenshot below shows the viewer after I clicked the Compare runs button. The overall and individual workload scores of the pinned runs appear in a table.

We’re excited about the WebXPRT 4 results viewer, and we want to hear your feedback. Are there features you’d really like to see, or ways we can improve the viewer? Please let us know, and send us your latest test results!

Justin

The role of potential WebXPRT 4 auxiliary workloads

As we mentioned in our most recent blog post, we’re seeking suggestions for ways to improve WebXPRT 4. We’re open to the prospect of adding both non-workload features and new auxiliary tests, e.g., a battery life or WebGPU-based graphics test scenario.

To prevent any confusion among WebXPRT 4 testers, we want to reiterate that any auxiliary workloads we might add will not affect existing WebXPRT 4 subtest or overall scores in any way. Auxiliary tests would be experimental or targeted workloads that run separately from the main test and produce their own scores. Current and future WebXPRT 4 results will be comparable to one another, so users who’ve already built a database of WebXPRT 4 scores will not have to retest their devices. Any new tests will be add-ons that allow us to continue expanding the rapidly growing body of published WebXPRT 4 test results while making the benchmark even more valuable to users overall.

If you have any thoughts about potential browser performance workloads, or any specific web technologies that you’d like to test, please let us know.

Justin

How we evaluate new WebXPRT workload proposals

A key value of the BenchmarkXPRT Development Community is our openness to user feedback. Whether it’s positive feedback about our benchmarks, constructive criticism, ideas for completely new benchmarks, or proposed workload scenarios for existing benchmarks, we appreciate your input and give it serious consideration.

We’re currently accepting ideas and suggestions for ways we can improve WebXPRT 4. We are open to adding both non-workload features and new auxiliary tests, which can be experimental or targeted workloads that run separately from the main test and produce their own scores. You can read more about experimental WebXPRT 4 workloads here. However, a recent user question about possible WebGPU workloads has prompted us to explain the types of parameters that we consider when we evaluate a new WebXPRT workload proposal.

Community interest and real-life relevance

The first two parameters we use when evaluating a WebXPRT workload proposal are straightforward: are people interested in the workload and is it relevant to real life? We originally developed WebXPRT to evaluate device performance using the types of web-based tasks that people are likely to encounter daily, and real-life relevancy continues to be an important criterion for us during development. There are many technologies, functions, and use cases that we could test in a web environment, but only some of them are both relevant to common applications or usage patterns and likely to be interesting to lab testers and tech reviewers.

Maximum cross-platform support

Currently, WebXPRT runs in almost any web browser, on almost any device that has a web browser, and we would ideally maintain that broad level of cross-platform support when introducing new workloads. However, technical differences in the ways that different browsers execute tasks mean that some types of scenarios would be impossible to include without breaking our cross-platform commitment.

One reason that we’re considering auxiliary workloads with WebXPRT, e.g., a battery life rundown, is that those workloads would allow WebXPRT to offer additional value to users while maintaining the cross-platform nature of the main test. Even if a battery life test ran on only one major browser, it could still be very useful to many people.

Performance differentiation

Computer benchmarks such as the XPRTs exist to provide users with reliable metrics that they can use to gauge how well target platforms or technologies perform certain tasks. With a broadly targeted benchmark such as WebXPRT, if the workloads are so heavy that most devices can’t handle them, or so light that most devices complete them without being taxed, the results will have little to no use for OEM labs, the tech press, or independent users when evaluating devices or making purchasing decisions.

Consequently, with any new WebXPRT workload, we try to find a sweet spot in terms of how demanding it is. We want it to run on a wide range of devices—from low-end devices that are several years old to brand-new high-end devices and everything in between. We also want users to see a wide range of workload scores and resulting overall scores, so they can easily grasp the different performance capabilities of the devices under test.

Consistency and replicability

Finally, workloads should produce scores that consistently fall within an acceptable margin of error, and are easy to replicate with additional testing or comparable gear. Some web technologies are very sensitive to uncontrollable or unpredictable variables, such as internet speed. A workload that measures one of those technologies would be unlikely to produce results that are consistent and easy to replicate.
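As a rough illustration of the kind of consistency check we have in mind, the sketch below computes the coefficient of variation across a set of repeated overall scores; the sample numbers and the five-percent threshold are arbitrary examples, not official WebXPRT acceptance criteria.

```typescript
// Rough illustration: check run-to-run consistency of repeated benchmark
// scores using the coefficient of variation (CV). The 5% threshold is an
// arbitrary example, not an official WebXPRT acceptance criterion.
function coefficientOfVariation(scores: number[]): number {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const variance =
    scores.reduce((sum, s) => sum + (s - mean) ** 2, 0) / (scores.length - 1);
  return Math.sqrt(variance) / mean;
}

const repeatedScores = [227, 231, 225, 229, 228]; // example overall scores
const cv = coefficientOfVariation(repeatedScores);
console.log(`CV: ${(cv * 100).toFixed(2)}%`, cv < 0.05 ? "consistent" : "noisy");
```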

We hope this post will be useful for folks who are contemplating potential new WebXPRT workloads. If you have any general thoughts about browser performance testing, or specific workload ideas that you’d like us to consider, please let us know.

Justin
