Tag Archives: cross-platform

Another milestone for WebXPRT!

Back in November, we discussed some of the trends we were seeing in the total number of completed and reported WebXPRT runs each month. The monthly run totals were increasing at a rate we hadn’t seen before. We’re happy to report that the upward trend has continued and even accelerated through the first quarter of this year! So far in 2024, we’ve averaged 43,744 WebXPRT runs per month, and our run total for the month of March alone (48,791) was more than twice the average monthly run total for 2023 (24,280).

The rapid increase in WebXPRT testing has helped us reach the milestone of 1.5 million runs much sooner than we anticipated. As the chart below shows, it took about six years for WebXPRT to log the first half-million runs and nine years to pass the million-run milestone. It’s only taken about one-and-a-half years to add another half-million.

This milestone means more to us than just reaching some large number. For a benchmark to be successful, it should ideally have widespread confidence and support from the benchmarking community, including manufacturers, OEM labs, the tech press, and other end users. When the number of yearly WebXPRT runs consistently increases, it’s a sign to us that the benchmark is serving as a valuable and trusted performance evaluation tool for more people around the world.

As always, we’re grateful for everyone who has helped us reach this milestone. If you have any questions or comments about using WebXPRT to test your gear, please let us know! And, if you have suggestions for how we can improve the benchmark, please share them. We want to keep making it better and better for you!

Justin

Passing two important WebXPRT milestones

Over the past few months, we’ve been excited to see a substantial increase in the total number of completed WebXPRT runs. To put the increase in perspective, we had more total WebXPRT runs last month alone (40,453) than we had in the first two years WebXPRT was available (36,674)! This boost has helped us to reach two important milestones as we close in on the end of 2023.

The first milestone is that the number of WebXPRT 4 runs per month now exceeds the number of WebXPRT 3 runs per month. When we release a new version of an XPRT benchmark, it can take a while for users to transition from using the older version. For OEM labs and tech journalists, adding a new benchmark to their testing suite often involves a significant investment in back testing and gathering enough test data for meaningful comparisons. When the older version of the benchmark has been very successful, adoption of the new version can take longer. WebXPRT 3 has been remarkably popular around the world, so we’re excited to see WebXPRT 4 gain traction and take the lead even as the total number of WebXPRT runs increases each month. The chart below shows the number of WebXPRT runs per month for each version of WebXPRT over the past ten years. WebXPRT 4 usage first surpassed WebXPRT 3 in August of this year, and after looking at data for the last three months, we think its lead is here to stay.

The second important milestone is the cumulative number of WebXPRT runs, which recently passed 1.25 million, as the chart below shows. For us, this moment represents more than a numerical milestone. For a benchmark to succeed, developers need the trust and support of the benchmarking community. WebXPRT’s consistent year-over-year growth tells us that the benchmark continues to hold value for manufacturers, OEM labs, the tech press, and end users. We see it as a sign of trust that folks repeatedly return to the benchmark for reliable performance metrics. We’re grateful for that trust, and for everyone who has contributed to the WebXPRT development process over the years.

We look forward to seeing how far WebXPRT’s reach can extend in 2024! If you have any questions or comments about using WebXPRT, let us know!

Justin

Good news for WebXPRT 4 testing!

Over the past several weeks, we’ve been working to find a solution to a problem with WebXPRT 4 test failures on Apple devices running iOS 17/17.1, iPadOS 17/17.1, and macOS Sonoma with Safari 17/17.1. Although we put significant effort into developing an updated WebXPRT version that would mitigate the issue, we’re happy to report that it now looks like we’ll be able to stick with the current version!

Last Thursday, Apple released the iOS 17.2 beta for participants in the Apple Developer Program. When we tested the current version of WebXPRT 4 on iOS 17.2, the tests completed without any issues. We then successfully completed tests on iPadOS 17.2 and macOS Sonoma 14.2 with Safari 17.2. Now that we have good reasons to believe that the iOS 17.2 release will solve the problem, sticking with the current WebXPRT 4 build will maximize continuity and minimize disruption for WebXPRT users.

Apple has not yet published a public release date for iOS/iPadOS/Safari 17.2. Based on past development schedules, it seems likely that they will release it between mid-November and early December, but that’s simply our best guess. Until then, users who want to test WebXPRT 4 on devices running iOS 17/17.1, iPadOS 17/17.1, or macOS Sonoma with Safari 17/17.1 will need to update those devices to iOS/iPadOS/Safari 17.2 via the Apple Developer Program.

To help Apple users better navigate testing until the public 17.2 release, we’ve added a function to the current WebXPRT 4 start page that notifies users when they need to update their operating system before testing.
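
For illustration, here’s a minimal sketch of how such a start-page check could work, assuming the affected builds can be identified from the browser’s user-agent string. The regular expressions, the MIN_SAFARI_MINOR constant, and the needsUpdateForSafari17 function are all hypothetical; this is not WebXPRT’s actual code.

```ts
// Hypothetical sketch: detect Safari/iOS/iPadOS 17.x builds that predate
// the 17.2 fix and surface an update notice on a test start page.

const MIN_SAFARI_MINOR = 2; // 17.2 is the first build without the failures

function needsUpdateForSafari17(userAgent: string): boolean {
  // Matches "Version/17.x ... Safari" (macOS) or "OS 17_x" (iOS/iPadOS).
  const match =
    userAgent.match(/Version\/17\.(\d+)/) ?? userAgent.match(/OS 17_(\d+)/);
  if (!match) {
    return false; // not a 17.x build, so the known issue doesn't apply
  }
  return Number(match[1]) < MIN_SAFARI_MINOR;
}

if (typeof navigator !== "undefined" && needsUpdateForSafari17(navigator.userAgent)) {
  console.warn(
    "This browser is affected by a known issue; please update to 17.2 or later before testing."
  );
}
```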

We appreciate everyone’s patience as we worked to find a solution to this problem! If you have any questions or concerns about WebXPRT 4, please let us know.

Justin

Check out the WebXPRT 4 results viewer

New visitors to our site may not be aware of the WebXPRT 4 results viewer and how to use it. The viewer provides WebXPRT 4 users with an interactive, information-packed way to browse test results that is not available for earlier versions of the benchmark. With the viewer, users can explore all of the PT-curated results that we’ve published on WebXPRT.com, find more detailed information about those results, and compare results from different devices. The viewer currently displays over 460 results, and we add new entries each week.

The screenshot below shows the tool’s default display. Each vertical bar in the graph represents the overall score of a single test result, with bars arranged from lowest to highest. To view a single result in detail, the user hovers over a bar until it turns white and a small popup window displays the basic details of the result. Clicking the highlighted bar turns it dark blue, and the banner at the bottom of the viewer displays additional details about that result.

In the example above, the banner shows the overall score (227), the score’s percentile rank (66th) among the scores in the current display, the name of the test device, and basic hardware disclosure information. If the source of the result is PT, users can click the Run info button to see the run’s individual workload scores. If the source is an external publisher, users can click the Source link to navigate to the original site.
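
As an aside, a percentile rank like the 66th shown here can be computed as the percentage of displayed scores at or below the selected score. That formula is a common convention, not necessarily the viewer’s exact method; the percentileRank function and the sample data below are hypothetical.

```ts
// Hypothetical sketch: derive a percentile rank for one overall score
// among all scores currently displayed in the viewer.

function percentileRank(scores: number[], selected: number): number {
  // Percentage of scores at or below the selected score, rounded.
  const atOrBelow = scores.filter((s) => s <= selected).length;
  return Math.round((atOrBelow / scores.length) * 100);
}

// Example: a score of 227 among a small made-up sample of overall scores.
const displayed = [112, 148, 171, 190, 204, 215, 227, 251, 263, 280];
console.log(percentileRank(displayed, 227)); // 70 (7 of 10 scores are ≤ 227)
```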

The viewer includes a drop-down menu that lets users quickly filter results by major device type categories, and a tab with additional filtering options, such as browser type, processor vendor, and result source. The screenshot below shows the viewer after I used the device type drop-down filter to select only desktops.

The screenshot below shows the viewer as I use the filter tab to explore additional filter options, such as processor vendor.

The viewer also lets users pin multiple specific runs, which is helpful for making side-by-side comparisons. The screenshot below shows the viewer after I pinned four runs and viewed them on the Pinned runs screen.

The screenshot below shows the viewer after I clicked the Compare runs button. The overall and individual workload scores of the pinned runs appear in a table.

We’re excited about the WebXPRT 4 results viewer, and we want to hear your feedback. Are there features you’d really like to see, or ways we can improve the viewer? Please let us know, and send us your latest test results!

Justin

The role of potential WebXPRT 4 auxiliary workloads

As we mentioned in our most recent blog post, we’re seeking suggestions for ways to improve WebXPRT 4. We’re open to the prospect of adding both non-workload features and new auxiliary tests, such as a battery life rundown or a WebGPU-based graphics test scenario.

To prevent any confusion among WebXPRT 4 testers, we want to reiterate that any auxiliary workloads we might add will not affect existing WebXPRT 4 subtest or overall scores in any way. Auxiliary tests would be experimental or targeted workloads that run separately from the main test and produce their own scores. Current and future WebXPRT 4 results will be comparable to one another, so users who’ve already built a database of WebXPRT 4 scores will not have to retest their devices. Any new tests will be add-ons that allow us to continue expanding the rapidly growing body of published WebXPRT 4 test results while making the benchmark even more valuable to users overall.

If you have any thoughts about potential browser performance workloads, or any specific web technologies that you’d like to test, please let us know.

Justin

How we evaluate new WebXPRT workload proposals

A key value of the BenchmarkXPRT Development Community is our openness to user feedback. Whether it’s positive feedback about our benchmarks, constructive criticism, ideas for completely new benchmarks, or proposed workload scenarios for existing benchmarks, we appreciate your input and give it serious consideration.

We’re currently accepting ideas and suggestions for ways we can improve WebXPRT 4. We are open to adding both non-workload features and new auxiliary tests, which can be experimental or targeted workloads that run separately from the main test and produce their own scores. You can read more about experimental WebXPRT 4 workloads here. However, a recent user question about possible WebGPU workloads has prompted us to explain the types of parameters that we consider when we evaluate a new WebXPRT workload proposal.

Community interest and real-life relevance

The first two parameters we use when evaluating a WebXPRT workload proposal are straightforward: are people interested in the workload, and is it relevant to real life? We originally developed WebXPRT to evaluate device performance using the types of web-based tasks that people are likely to encounter daily, and real-life relevancy continues to be an important criterion for us during development. There are many technologies, functions, and use cases that we could test in a web environment, but only some of them are both relevant to common applications or usage patterns and likely to be interesting to lab testers and tech reviewers.

Maximum cross-platform support

Currently, WebXPRT runs in almost any web browser, on almost any device that has a web browser, and we would ideally maintain that broad level of cross-platform support when introducing new workloads. However, technical differences in the ways that different browsers execute tasks mean that some types of scenarios would be impossible to include without breaking our cross-platform commitment.

One reason that we’re considering auxiliary workloads with WebXPRT, e.g., a battery life rundown, is that those workloads would allow WebXPRT to offer additional value to users while maintaining the cross-platform nature of the main test. Even if a battery life test ran on only one major browser, it could still be very useful to many people.
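
To illustrate how an auxiliary workload could coexist with a cross-platform main test, here’s a hedged sketch that feature-detects optional browser APIs and only offers the extra tests where they exist. WebGPU-capable browsers expose navigator.gpu, and Chromium-based browsers expose navigator.getBattery(); the gating helpers themselves are hypothetical, not part of WebXPRT.

```ts
// Illustrative sketch: feature-detect optional browser APIs before
// offering an auxiliary workload, so the main cross-platform test is
// unaffected on browsers that lack them.

function supportsWebGpu(): boolean {
  // WebGPU-capable browsers expose a GPU object on navigator.
  return typeof (navigator as any).gpu !== "undefined";
}

function supportsBatteryApi(): boolean {
  // The Battery Status API (Chromium-only) exposes navigator.getBattery().
  return typeof (navigator as any).getBattery === "function";
}

// Only surface auxiliary tests the current browser can actually run.
if (supportsWebGpu()) {
  console.log("WebGPU detected: a graphics auxiliary workload could be offered.");
}
if (supportsBatteryApi()) {
  console.log("Battery Status API detected: a rundown test could be offered.");
}
```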

Performance differentiation

Computer benchmarks such as the XPRTs exist to provide users with reliable metrics that they can use to gauge how well target platforms or technologies perform certain tasks. With a broadly targeted benchmark such as WebXPRT, if the workloads are so heavy that most devices can’t handle them, or so light that most devices complete them without being taxed, the results will have little to no use for OEM labs, the tech press, or independent users when evaluating devices or making purchasing decisions.

Consequently, with any new WebXPRT workload, we try to find a sweet spot in terms of how demanding it is. We want it to run on a wide range of devices—from low-end devices that are several years old to brand-new high-end devices and everything in between. We also want users to see a wide range of workload scores and resulting overall scores, so they can easily grasp the different performance capabilities of the devices under test.

Consistency and replicability

Finally, workloads should produce scores that consistently fall within an acceptable margin of error and are easy to replicate with additional testing or comparable gear. Some web technologies are very sensitive to uncontrollable or unpredictable variables, such as internet speed. A workload that measures one of those technologies would be unlikely to produce results that are consistent and easily replicated.
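
One common way to quantify run-to-run consistency is the coefficient of variation (standard deviation divided by the mean) across repeated runs on the same device. The sketch below uses made-up scores, and the 3 percent threshold is an arbitrary example rather than an actual XPRT criterion.

```ts
// Illustrative sketch: quantify run-to-run consistency with the
// coefficient of variation (CV = standard deviation / mean).

function coefficientOfVariation(runs: number[]): number {
  const mean = runs.reduce((a, b) => a + b, 0) / runs.length;
  const variance =
    runs.reduce((sum, r) => sum + (r - mean) ** 2, 0) / runs.length;
  return Math.sqrt(variance) / mean;
}

const repeatedScores = [224, 227, 231, 226, 229]; // five runs on one device
const cv = coefficientOfVariation(repeatedScores);
// Prints "CV: 1.06% consistent" for the sample data above.
console.log(`CV: ${(cv * 100).toFixed(2)}%`, cv < 0.03 ? "consistent" : "noisy");
```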

We hope this post will be useful for folks who are contemplating potential new WebXPRT workloads. If you have any general thoughts about browser performance testing, or specific workload ideas that you’d like us to consider, please let us know.

Justin
