
The WebXPRT 4 results viewer: A powerful tool for browsing hundreds of test results

In our recent blog post about the XPRT results database, we promised to discuss the WebXPRT 4 results viewer in more detail. We developed the results viewer to serve as a feature-rich interactive tool that visitors to WebXPRT.com can use to browse the test results that we’ve published on our site, dig into the details of each result, and compare scores from multiple devices. The viewer currently has almost 700 test results, and we add new PT-curated entries each week.

Figure 1 shows the tool’s default display. Each vertical bar in the graph represents the overall score of a single test result, with bars arranged left-to-right, from lowest to highest. To view a single result in detail, hover over a bar to highlight it, and a small popup window will display the basic details of the result. You can then click to select the highlighted bar. The bar will turn dark blue, and the dark blue banner at the bottom of the viewer will display additional details about that result.

Figure 1: The WebXPRT 4 results viewer tool’s default display.

In the example in Figure 1, the banner shows the overall score (237), the score’s percentile rank (66th) among the scores in the current display, the name of the test device, and basic hardware configuration information. If the source of the result is PT, you can click the Run info button in the bottom right-hand corner of the display to see the run’s individual workload scores. If the source is an external publisher, you can click the Source link to navigate to the original site.
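To make the percentile-rank idea concrete, here is a minimal TypeScript sketch that assumes a simple “share of displayed scores below the selected score” definition. This is an illustration only; the function name and scores are hypothetical, and the viewer’s actual formula may differ.

```typescript
// Hypothetical sketch only: the viewer's real implementation may differ.
// Percentile rank here is the share of currently displayed scores that
// fall below the selected score.
function percentileRank(selected: number, displayedScores: number[]): number {
  const below = displayedScores.filter((score) => score < selected).length;
  return Math.round((below / displayedScores.length) * 100);
}

// Example with made-up scores: 237 beats six of the ten displayed scores.
console.log(percentileRank(237, [150, 180, 195, 204, 221, 230, 237, 250, 266, 281])); // 60
```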

The viewer includes a drop-down menu that lets you quickly filter results by major device type categories, plus a tab with additional filtering options, such as browser type, processor vendor, and result source. Figure 2 shows the viewer after I used the device type drop-down filter to select only laptops.

Figure 2: Screenshot from the WebXPRT 4 results viewer showing results filtered by the device type drop-down menu.
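The filtering itself is conceptually simple. Here is a hedged TypeScript sketch of the kind of predicate-based filtering the drop-down menu and filter tab perform; the field names and interface are assumptions for illustration, not the viewer’s actual data schema.

```typescript
// Hypothetical result schema for illustration; the viewer's actual fields
// may differ.
interface TestResult {
  deviceType: string; // e.g., "Laptop"
  browser: string;    // e.g., "Chrome"
  cpuVendor: string;  // e.g., "Intel"
  source: string;     // "PT" or an external publisher
  overall: number;
}

// Keep only the results that match every active filter.
function applyFilters(
  results: TestResult[],
  filters: Partial<TestResult>
): TestResult[] {
  return results.filter((result) =>
    Object.entries(filters).every(
      ([field, value]) => result[field as keyof TestResult] === value
    )
  );
}

// Example: show only laptop results, as in Figure 2.
// const laptops = applyFilters(allResults, { deviceType: "Laptop" });
```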

Figure 3 shows the viewer as I use the filter tab to explore additional filter options, such as processor vendor.

Figure 3: Screenshot from the WebXPRT 4 results viewer showing the filter options available with the filter tab.

The viewer will also let you pin multiple specific runs, which is helpful for making side-by-side comparisons. Figure 4 shows the viewer after I pinned four runs and viewed them on the Pinned runs screen.

Figure 4: Screenshot from the WebXPRT 4 results viewer showing four pinned runs on the Pinned runs screen.

Figure 5 shows the viewer after I clicked the Compare runs button. The overall and individual workload scores of the pinned runs appear in a table.

Figure 5: Screenshot from the WebXPRT 4 results viewer showing four pinned runs on the Compare runs screen.
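As a rough illustration of what the Compare runs screen assembles, the sketch below builds a comparison table from pinned runs. The PinnedRun interface is a hypothetical stand-in for the viewer’s real data, not its actual code.

```typescript
// Hypothetical shape of a pinned run; the viewer's actual data may differ.
interface PinnedRun {
  device: string;
  overall: number;
  workloads: Record<string, number>; // workload name -> score
}

// Build one table row per pinned run, with a column for the overall score
// and one for each workload, much like the Compare runs screen.
function compareRuns(pinned: PinnedRun[]): void {
  console.table(
    pinned.map((run) => ({
      Device: run.device,
      Overall: run.overall,
      ...run.workloads,
    }))
  );
}
```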

We hope that you’ll enjoy using the results viewer to browse our WebXPRT 4 results database and that it will become one of your go-to resources for device comparison data.  

Are there additional features you’d like to see in the viewer, or other ways we can improve it? Please let us know, and send us your latest test results!

Justin

Want to know how your device performs? Explore the XPRT results database

If you only recently started using the XPRT benchmarks, you may not know about one of the free resources we offer—the XPRT results database. Our results database currently holds more than 3,650 test results from over 150 sources, including global tech press outlets, OEM labs, and independent testers. It serves as a treasure trove of current and historical performance data across all the XPRT benchmarks and hundreds of devices. By running the same XPRTs on your device and comparing your scores to those in the database, you can get a sense of how well your device performs.

We update the results database several times a week, adding selected results from our own internal lab testing, reliable media sources, and end-of-test user submissions. (After you run one of the XPRTs, you can choose to submit the results, but don’t worry—this is opt-in. Your results do not automatically appear in the database.) Before adding a result, we also look at any available system information and evaluate whether the score makes sense and is consistent with general expectations.

There are three primary ways that you can explore the XPRT results database.

The first is by visiting the main BenchmarkXPRT results browser, which displays results entries for all of the XPRT benchmarks in chronological order (see the screenshot below). You can filter the results by selecting a benchmark from the drop-down menu. You can also type values, such as a vendor name (e.g., Dell) or the name of a tech publication (e.g., PCWorld), into the free-form filter field. For results we’ve produced in our lab, clicking “PT” in the Source column takes you to a page with additional configuration information for the test system. For sources outside our lab, clicking the source name takes you to the original article or review that contains the result.

The second way to access our published results is by visiting the results page for an individual XPRT benchmark. Start by going to the page of the benchmark that interests you (e.g., CrXPRT.com) and looking for the blue View Results button. Clicking the button takes you to a page that displays results for only that benchmark. You can use the free-form filter on the page to filter those results, and you can use the Benchmarks drop-down menu to jump to the other individual XPRT results pages.

The third way to view our results database is with the WebXPRT 4 results viewer. The viewer provides an information-packed, interactive tool with which you can explore data from the curated set of WebXPRT 4 results we’ve published on our site. We’ll discuss the features of the WebXPRT 4 results viewer in more detail in a future post.

You can use any of these approaches to compare the results of an XPRT on your device with our many published results. We hope you’ll take some time to explore the information in our results database and that it proves to be helpful to you. If you have ideas for new features or suggestions for improvement, we’d love to hear from you!

Justin

Another milestone for WebXPRT!

Back in November, we discussed some of the trends we were seeing in the total number of completed and reported WebXPRT runs each month. The monthly run totals were increasing at a rate we hadn’t seen before. We’re happy to report that the upward trend has continued and even accelerated through the first quarter of this year! So far in 2024, we’ve averaged 43,744 WebXPRT runs per month, and our run total for the month of March alone (48,791) was more than twice the average monthly run total for 2023 (24,280).

The rapid increase in WebXPRT testing has helped us reach the milestone of 1.5 million runs much sooner than we anticipated. As the chart below shows, it took about six years for WebXPRT to log the first half-million runs and nine years to pass the million-run milestone. It’s only taken about one-and-a-half years to add another half-million.

This milestone means more to us than just reaching some large number. To be successful, a benchmark should ideally enjoy widespread confidence and support from the benchmarking community, including manufacturers, OEM labs, the tech press, and other end users. When the number of yearly WebXPRT runs consistently increases, it’s a sign to us that the benchmark is serving as a valuable and trusted performance evaluation tool for more people around the world.

As always, we’re grateful for everyone who has helped us reach this milestone. If you have any questions or comments about using WebXPRT to test your gear, please let us know! And, if you have suggestions for how we can improve the benchmark, please share them. We want to keep making it better and better for you!

Justin

Accessing the WebXPRT 4 source code

If you’re new to the XPRTs, you may not be aware that we provide free access to XPRT benchmark source code. Publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. By allowing interested parties to access and review our source code, we’re encouraging openness and honesty in the benchmarking industry. We’re also inviting constructive feedback that can help ensure that the XPRTs continue to improve and contribute to a level playing field for all the types of products they measure.

While we do offer free access to the XPRT source code, we provide the code upon request instead of through a permanent download link. This approach prevents bots and other malicious actors from downloading the code. It also allows us to interact with users who are interested in the source code and answer any questions they may have. We’re always keen to learn what others think about the XPRTs and the types of work they measure.

We recently received some questions about accessing the WebXPRT 4 source code, which made us realize that we needed to provide a clearer way for people to request the code. In response, we added a “Request WebXPRT 4 source code” link to the gray Helpful Info box on WebXPRT.com (see the screenshot below). Clicking the link will allow you to email the BenchmarkXPRT Support team directly and request the code.

After we receive your request, we’ll send you a secure link to the current WebXPRT 4 build package. The package contains all the files and installation instructions necessary for anyone who wishes to set up a local instance of WebXPRT 4 on an internal testbed. We allow folks to set up their own instances for purposes of review, internal testing, or experimentation, but we ask that users publish only test results from the official WebXPRT 4 site.
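If you’re curious what standing up a local instance can look like, here is a hedged sketch that serves an unpacked build with Node’s built-in http module. The ./webxprt4 directory name is an assumption for this sketch, and the installation instructions included in the package take precedence over anything shown here.

```typescript
// A minimal static server using Node's built-in modules. The ./webxprt4
// directory is an assumption for this sketch; follow the build package's
// own installation instructions for real setups. Query strings and path
// traversal are ignored here for brevity.
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";
import { extname, join } from "node:path";

const MIME: Record<string, string> = {
  ".html": "text/html",
  ".js": "text/javascript",
  ".css": "text/css",
  ".json": "application/json",
};

createServer(async (req, res) => {
  const path = join("./webxprt4", req.url === "/" ? "index.html" : req.url ?? "");
  try {
    const body = await readFile(path);
    res.writeHead(200, { "Content-Type": MIME[extname(path)] ?? "application/octet-stream" });
    res.end(body);
  } catch {
    res.writeHead(404);
    res.end("Not found");
  }
}).listen(8080, () => console.log("Serving local WebXPRT build at http://localhost:8080"));
```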

While we offer free access to XPRT source code, our approach to derivative works differs from some traditional open-source models that encourage developers to change products and even take them in different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

If you have any questions about accessing the WebXPRT 4 source code, let us know!

Justin

WebXPRT in PT reports

We don’t just make WebXPRT—we use it, too. If you normally come straight to BenchmarkXPRT.com or WebXPRT.com, you may not even realize that Principled Technologies (PT) does a lot more than manage and administer the BenchmarkXPRT Development Community. We’re also the tech world’s leading provider of hands-on testing and related fact-based marketing services. As part of that work, we’re frequent WebXPRT users.

We use the benchmark when we test devices such as Chromebooks, desktops, mobile workstations, and consumer laptops for our clients. (You can see a lot of that work and many of our clients on our public marketing portfolio page.) We run the benchmark for the same reasons that others do—it’s a reliable and easy-to-use tool for measuring how well devices handle web browsing and other web work.

We also sometimes use WebXPRT simply because our clients request it. They request it for the same reason the rest of us like and use it: it’s a great tool. Regardless of job titles and descriptions, most laptop and tablet users surf the web and access web-based applications every day. Because WebXPRT is a browser benchmark, higher scores indicate that a device is likely to provide a better online experience.

Here are just a few of the recent PT reports that used WebXPRT:

  • In a project for Dell, we compared the performance of a Dell Latitude 7340 Ultralight to that of a 13-inch Apple MacBook Air (2022).
  • In this study for HP, we compared the performance of an HP ZBook Firefly G10, an HP ZBook Power G10, and an HP ZBook Fury G10.
  • Finally, in a set of comparisons for Lenovo, we evaluated the system performance and end-user experience of eight Lenovo ThinkBook, ThinkCentre, and ThinkPad systems along with their Apple counterparts.

All these projects, and many more, show how a variety of companies rely on PT—and on WebXPRT—to help buyers make informed decisions.

P.S. If we publish scores from a client-commissioned study in the WebXPRT 4 results viewer, we list the source as “PT”, because we did the testing.

By Mark L. Van Name and Justin Greene

WebXPRT benchmarking tips from the XPRT lab

Occasionally, we receive inquiries from XPRT users asking for help determining why two systems with the same hardware configuration are producing significantly different WebXPRT scores. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behaviors and environments. While some degree of variability is normal, these types of questions provide us with an opportunity to talk about some of the basic benchmarking practices we follow in the XPRT lab to produce the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. Most of them relate to evaluating browser performance with WebXPRT, but several of these practices apply to other benchmarks as well.

  • Hardware is not the only important factor: Most people know that different browsers produce different performance scores on the same system. Testers are not, however, always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always important to record and disclose the extended browser version number for each test run (the sketch after this list shows one way to capture it). The same principle applies to any other relevant software.
  • Keep a thorough record of system information: We record detailed information about a test system’s key hardware and software components, including full model and version numbers. This information is not only important for later disclosure if we choose to publish a result; it can also help pinpoint system differences that explain why two seemingly identical devices produce very different scores. We also want people to be able to reproduce our results as closely as possible, so we record and disclose more detail than you’ll find in some tech articles and product reviews.
  • Test with clean images: We typically use an out-of-box (OOB) method for testing new devices in the XPRT lab. OOB testing means that, other than running the initial OS and browser updates that users are likely to run after first turning on the device, we change as little as possible before testing. This lets us accurately assess the performance that buyers are likely to see when they first purchase the device, before they install additional software. That said, the OOB method is not appropriate for certain types of testing, such as when you want to compare system images that are as close to identical as possible, or when you want to remove as much pre-loaded software as possible.
  • Turn off automatic updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always double-check update settings before testing.
  • Get a baseline for system processes: Depending on the system and the OS, a significant amount of system-level activity can go on in the background after startup. As much as possible, we like to wait for system activity to settle into a stable, idle baseline before kicking off a test. If we start testing immediately after booting the system, we often see higher variance in the first run before the scores start to tighten up.
  • Use more than one data point: Because of natural variance, our standard practice in the XPRT lab is to publish a score that represents the median of three to five runs, if not more (the sketch after this list shows a simple median calculation). If you run a benchmark only once and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions or over the course of multiple runs.
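Here is a minimal TypeScript sketch, assuming a browser context, of two of these practices: capturing the extended browser version for disclosure and reporting the median of several runs. The userAgentData API (User-Agent Client Hints) is currently Chromium-only, so the sketch falls back to the classic userAgent string elsewhere; the function names and scores are hypothetical.

```typescript
// Capture the extended browser version for disclosure. userAgentData
// (User-Agent Client Hints) is Chromium-only, so fall back to the classic
// userAgent string in other browsers.
async function getBrowserVersion(): Promise<string> {
  const uaData = (navigator as any).userAgentData;
  if (uaData?.getHighEntropyValues) {
    const { fullVersionList } = await uaData.getHighEntropyValues(["fullVersionList"]);
    return fullVersionList
      .map((entry: { brand: string; version: string }) => `${entry.brand} ${entry.version}`)
      .join(", ");
  }
  return navigator.userAgent; // fallback: the version number is embedded here
}

// Report the median of several runs rather than a single score.
function median(scores: number[]): number {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Example with made-up scores from five runs on the same system.
console.log(median([232, 237, 235, 240, 236])); // 236
```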


We hope these tips will help make your testing more accurate. If you have any questions about WebXPRT, the other XPRTs, or benchmarking in general, feel free to ask!

Justin
