
Check out the WebXPRT 4 results viewer

New visitors to our site may not be aware of the WebXPRT 4 results viewer and how to use it. The viewer provides WebXPRT 4 users with an interactive, information-packed way to browse test results that is not available for earlier versions of the benchmark. With the viewer, users can explore all of the PT-curated results that we’ve published on WebXPRT.com, find more detailed information about those results, and compare results from different devices. The viewer currently displays over 460 results, and we add new entries each week.

The screenshot below shows the tool’s default display. Each vertical bar in the graph represents the overall score of a single test result, with bars arranged from lowest to highest. To view a single result in detail, the user hovers over a bar; the bar turns white, and a small pop-up window displays the basic details of that result. If the user clicks the highlighted bar, it turns dark blue, and the dark blue banner at the bottom of the viewer displays additional details about the result.

In the example above, the banner shows the overall score (227), the score’s percentile rank (66th) among the scores in the current display, the name of the test device, and basic hardware disclosure information. If the source of the result is PT, users can click the Run info button to see the run’s individual workload scores. If the source is an external publisher, users can click the Source link to navigate to the original site.
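For readers curious about the percentile figure, here’s a minimal sketch, in Python, of one common way to calculate a score’s percentile rank within a set of displayed results. The function and the sample scores below are hypothetical illustrations, not the viewer’s actual implementation:

    def percentile_rank(score, scores):
        """Return the percentage of scores that fall at or below the given score."""
        at_or_below = sum(1 for s in scores if s <= score)
        return round(100 * at_or_below / len(scores))

    # Hypothetical overall scores currently on display
    displayed = [142, 170, 198, 215, 227, 240, 268, 291, 305, 330]
    print(percentile_rank(227, displayed))  # prints 50 for this sample set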

The viewer includes a drop-down menu that lets users quickly filter results by major device type categories, and a tab with additional filtering options, such as browser type, processor vendor, and result source. The screenshot below shows the viewer after I used the device type drop-down filter to select only desktops.

The screenshot below shows the viewer as I use the filter tab to explore additional filter options, such as processor vendor.
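To illustrate the kind of multi-criteria filtering the viewer performs, here’s a short Python sketch. The record fields and values are hypothetical stand-ins, not the viewer’s actual data schema:

    # Hypothetical result records; the field names are illustrative only
    results = [
        {"score": 227, "device": "desktop", "browser": "Chrome", "cpu_vendor": "Intel"},
        {"score": 185, "device": "laptop", "browser": "Edge", "cpu_vendor": "AMD"},
        {"score": 305, "device": "desktop", "browser": "Firefox", "cpu_vendor": "AMD"},
    ]

    def filter_results(records, **criteria):
        """Keep only the records that match every supplied criterion."""
        return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

    desktops = filter_results(results, device="desktop")
    amd_desktops = filter_results(results, device="desktop", cpu_vendor="AMD")
    print(len(desktops), len(amd_desktops))  # prints: 2 1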

The viewer also lets users pin multiple specific runs, which is helpful for making side-by-side comparisons. The screenshot below shows the viewer after I pinned four runs and viewed them on the Pinned runs screen.

The screenshot below shows the viewer after I clicked the Compare runs button. The overall and individual workload scores of the pinned runs appear in a table.

We’re excited about the WebXPRT 4 results viewer, and we want to hear your feedback. Are there features you’d really like to see, or ways we can improve the viewer? Please let us know, and send us your latest test results!

Justin

The role of potential WebXPRT 4 auxiliary workloads

As we mentioned in our most recent blog post, we’re seeking suggestions for ways to improve WebXPRT 4. We’re open to the prospect of adding both non-workload features and new auxiliary tests, e.g., a battery life or WebGPU-based graphics test scenario.

To prevent any confusion among WebXPRT 4 testers, we want to reiterate that any auxiliary workloads we might add will not affect existing WebXPRT 4 subtest or overall scores in any way. Auxiliary tests would be experimental or targeted workloads that run separately from the main test and produce their own scores. Current and future WebXPRT 4 results will be comparable to one another, so users who’ve already built a database of WebXPRT 4 scores will not have to retest their devices. Any new tests will be add-ons that allow us to continue expanding the rapidly growing body of published WebXPRT 4 test results while making the benchmark even more valuable to users overall.

If you have any thoughts about potential browser performance workloads, or any specific web technologies that you’d like to test, please let us know.

Justin

Best practices in benchmarking

From time to time, a tester writes to ask for help determining why they see different WebXPRT scores on two systems that have the same hardware configuration. The scores sometimes differ by a significant percentage. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behavior and environments. While a small amount of variability is normal, these types of questions provide an opportunity to talk about the basic benchmarking practices we follow in the XPRT lab to produce the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. Most of them relate to evaluating browser performance with WebXPRT, but several of these practices apply to other benchmarks as well.

  • Test with clean images: We typically use an out-of-box (OOB) method for testing new devices in the XPRT lab. OOB testing means that other than running the initial OS and browser version updates that users are likely to run after first turning on the device, we change as little as possible before testing. We want to assess the performance that buyers are likely to see when they first purchase the device, before they install additional apps and utilities. While the OOB approach is not appropriate for every type of testing, the key is to avoid testing a device that’s bogged down with programs that will influence results.
  • Turn off automatic updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always double-check update settings before testing.
  • Get a baseline for system processes: Depending on the system and the OS, a significant amount of system-level activity can go on in the background after you turn on the device. As much as possible, we like to wait for a stable (idle) baseline of system activity before kicking off a test. If we start testing immediately after booting the system, we often see higher variance in the first run before the scores start to tighten up.
  • Hardware is not the only important factor: Most people know that different browsers produce different performance scores on the same system. However, testers aren’t always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always worthwhile to record and disclose the extended browser version number for each test run. The same principle applies to any other relevant software.
  • Use more than one data point: Because of natural variance, our standard practice in the XPRT lab is to publish a score that represents the median of three to five runs, if not more. If you run a benchmark only once, and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions. (For a rough illustration of these last two practices, see the sketch after this list.)
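Here’s a Python sketch that waits for an idle CPU baseline before each run and reports the median of several scores. It assumes the third-party psutil package; the thresholds and the placeholder run_benchmark() function are arbitrary examples, not our lab’s actual settings or tooling:

    import statistics
    import time

    import psutil  # third-party package: pip install psutil

    def wait_for_idle(threshold_pct=5.0, stable_samples=6, interval_s=10):
        """Block until CPU utilization stays below the threshold for several
        consecutive samples, approximating a stable idle baseline."""
        quiet = 0
        while quiet < stable_samples:
            usage = psutil.cpu_percent(interval=interval_s)
            quiet = quiet + 1 if usage < threshold_pct else 0

    def run_benchmark():
        """Placeholder: in practice, you would kick off a WebXPRT run here
        and record the overall score the benchmark reports."""
        return 250.0  # hypothetical score, for illustration only

    scores = []
    for _ in range(5):  # three to five runs, per the practice above
        wait_for_idle()
        scores.append(run_benchmark())
        time.sleep(30)  # brief cool-down between runs

    print("Median score:", statistics.median(scores))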

We hope these tips will help make your testing more accurate. If you have any questions about the XPRTs, or about benchmarking in general, feel free to ask!

Justin

Celebrating 10 years of WebXPRT!

We’re excited to announce that it’s been 10 years since the initial launch of WebXPRT! In early 2013, we introduced WebXPRT as a unique browser performance benchmark in a market space that was already crowded with a variety of specialized measurement tools. Our goal was to offer a benchmark that could compare the performance of almost any web-enabled device, using scenarios created to mirror real-world tasks. We wanted it to be a free, easily accessible, easy-to-run, useful, and appealing testing option for OEM labs, vendors, and the tech press.

When we look back on the last 10 years of WebXPRT, we can’t help but conclude that our efforts have been successful. Since those early days, WebXPRT has grown from humble beginnings into a worldwide industry standard. Hundreds of tech press publications have used WebXPRT in thousands of articles and reviews, and testers have now run the benchmark well over 1.1 million times.

Below, I’ve listed some of the WebXPRT team’s accomplishments over the last decade. If you’ve been following WebXPRT from the beginning, this may all be familiar, but if you’re new to the community, it may be interesting to see some of the steps that contributed to making WebXPRT what it is today.

In future blog posts, we’ll look at how the number of WebXPRT runs has grown over time, and how WebXPRT use has grown among OEMs, vendors, and the tech press worldwide. Do you have any thoughts that you’d like to share from your WebXPRT testing experience? If so, let us know!

Justin

Comparing the performance of popular browsers with WebXPRT 4

If you’ve been reading the XPRT blog for a while, you know that we occasionally like to revisit a series of in-house WebXPRT comparison tests to see if recent updates have changed the performance rankings of popular web browsers. We published our most recent comparison last April, when we used WebXPRT 4 to compare the performance of five browsers on the same system.

For this round of tests, we used a Dell XPS 13 7390, which features an Intel Core i3-10110U processor and 4 GB of RAM, running Windows 11 Home updated to version 22H2 (22621.1105). We installed all current Windows updates and updated each of the browsers under test: Brave, Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera.

After the update process completed, we turned off updates to prevent them from interfering with test runs. We ran WebXPRT 4 three times on each of the five browsers. The score we post for each browser is the median of the three test runs.
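To make the arithmetic behind these summaries concrete, here’s a small Python example that computes per-browser medians and the percent gap between the fastest and slowest medians. The scores are made up for illustration; they are not our measured results:

    import statistics

    # Made-up scores from three runs per browser (not our measured results)
    runs = {
        "Browser A": [250, 255, 252],
        "Browser B": [244, 241, 246],
    }

    medians = {name: statistics.median(s) for name, s in runs.items()}
    best, worst = max(medians.values()), min(medians.values())

    # Percent gap between the fastest and slowest medians
    print(f"Gap: {100 * (best - worst) / worst:.1f}%")  # 3.3% for this sample data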

In our last round of tests, Edge was the clear winner, with a 2.2 percent performance advantage over Chrome. Firefox came in last, about 3 percent slower than Opera, which was in the middle of the pack. With updated versions of the browsers, the only change in rank order was that Brave moved into a tie with Opera.

While the rank order from this round of tests was very similar to the previous round, we did observe two clear performance trends: (1) the range between high and low scores was tighter, dropping from a difference of 7.8 percent to 4.3 percent, and (2) every browser demonstrated improved performance. The chart below illustrates both trends. Firefox showed the single largest score improvement at 7.8 percent, but the performance jump for each browser was considerable.

Do these results mean that Microsoft Edge will always provide a speedier web experience, or that Firefox will always be slower than the others? Not necessarily. It’s true that a device with a higher WebXPRT score will probably feel faster during daily web activities than one with a much lower score, but your experience depends in part on the types of things you do on the web, along with your system’s privacy settings, memory load, ecosystem integration, extension activity, and web app capabilities.

In addition, browser speed can noticeably increase or decrease after an update, and OS-specific optimizations can affect performance, such as with Edge on Windows 11 and Chrome on Chrome OS. All these variables are important to keep in mind when considering how WebXPRT results translate to your everyday experience.

Have you used WebXPRT to compare browser performance on the same system? Let us know how it turned out!

Justin

Looking back on 2022 with the XPRTs

Around the beginning of each new year, we like to take the opportunity to look back and summarize the XPRT highlights from the previous year. Readers of our newsletter are familiar with the stats and updates we include each month, but for our blog readers who don’t receive the newsletter, we’ve compiled some highlights from 2022 below.

Benchmarks
In the past year, we released WebXPRT 4 and the CloudXPRT v1.2 update package.

XPRTs in the media
Journalists, advertisers, and analysts referenced the XPRTs thousands of times in 2022. It’s always rewarding to know that the XPRTs have proven to be useful and reliable assessment tools for technology publications around the world. Media sites that used the XPRTs in 2022 include AnandTech, Android Authority, Benchlife.info (China), BodNara (South Korea), ComputerBase (Germany), DISKIDEE (Belgium), eTeknix, Expert Reviews, Gadgets 360, Hardware.info (The Netherlands), Hardware Zone (Singapore), ITC.ua (Ukraine), ITmedia (Japan), Itndaily.ru (Russia), Notebookcheck, PCMag, PC-Welt (Germany), PCWorld, TechPowerUp, Tom’s Guide, TweakTown, and ZOL.com (China).

Downloads and confirmed runs
In 2022, we had more than 10,800 benchmark downloads and 183,300 confirmed runs. Users have run our most popular benchmark, WebXPRT, more than 1,135,500 times since its debut in 2013! WebXPRT continues to be a go-to, industry-standard performance benchmark for OEM labs, vendors, and leading tech press outlets around the globe.

XPRT media, tools, and publications
Part of our mission with the XPRTs is to produce tools and materials that help testers better understand the ins and outs of benchmarking in general and the XPRTs in particular. To help achieve this goal, we published the following in 2022:

We’re thankful for everyone who used the XPRTs, joined the community, and sent questions and suggestions throughout 2022. We’re excited to see what’s in store for the XPRTs in 2023!

Justin
