
Author Archives: Justin Greene

Which browser is the fastest? It’s complicated.

PCWorld recently published the results of a head-to-head browser performance comparison between Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. As we’ve noted about similar comparisons, no single browser was the fastest in every test. Browser speed sounds like a straightforward metric, but the reality is complex.

For the comparison, PCWorld used three JavaScript-centric test suites (JetStream, SunSpider, and Octane), one benchmark that simulates user actions (Speedometer), several tests of its own design, and one benchmark that simulates real-world web applications (WebXPRT). Edge came out on top in JetStream and SunSpider, Opera won in Octane and WebXPRT, and Chrome had the best results in Speedometer and PCWorld’s custom workloads.

The reason that the benchmarks rank the browsers so differently is that each one has a unique emphasis and tests a specific set of workloads and technologies. Some focus on very low-level JavaScript tasks, some test additional technologies such as HTML5, and some are designed to identify strengths or weaknesses by stressing devices in unusual ways. All of these approaches are valid, but it’s important to understand exactly what a given score represents. Some scores reflect a very broad set of metrics, while others assess a very narrow set of tasks. Some scores help you to understand the performance you can expect from a device in your everyday life, and others measure performance in scenarios that you’re unlikely to encounter. For example, when Eric discussed a similar topic in the past, he said the tests in JetStream 1.1 provided information that “can be very useful for engineers and developers, but may not be as meaningful to the typical user.”

As we do with all the XPRTs, we designed WebXPRT to test how devices handle the types of real-world tasks consumers perform every day. While lab techs, manufacturers, and tech journalists can all glean detailed data from WebXPRT, the test’s real-world focus means that the overall score is relevant to the average consumer. Simply put, a device with a higher WebXPRT score is probably going to feel faster to you during daily use than one with a lower score. In today’s crowded tech marketplace, that piece of information provides a great deal of value to many people.

What are your thoughts on browser testing? We’d love to hear from you.

Justin

A note about a recent CrXPRT update

A tester from Acer recently contacted us about an issue where CrXPRT was freezing indefinitely during the Photo Effects workload. We initially thought the problem was limited to a specific hardware platform or Chrome OS version, but soon discovered the issue was affecting all CrXPRT tests, regardless of the system.

After quite a bit of troubleshooting, we were able to find and fix what turned out to be a simple bug. The problem started with a change we made to increase security and strengthen compliance with GDPR by moving all our web pages to HTTPS. Specifically, we added a redirect that forced principledtechnologies.com to www.principledtechnologies.com. Chrome apps have a manifest property that defines which websites can connect to the application. Because we hadn’t reconfigured the CrXPRT path permissions to account for the new redirect, the test failed. We made the necessary edits to the manifest, tested the fix, and uploaded the updated package (build number 1.0.2.1) to the Chrome Web Store.
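
For readers curious about the mechanics, here’s a simplified, hypothetical example of the kind of manifest entry involved. It is not CrXPRT’s actual manifest; it assumes the allowed sites are declared as match patterns in the manifest’s “permissions” list, and it uses the //-style comments that Chrome tolerates in manifest files purely for annotation:

    {
      "name": "Example Chrome app (illustrative fragment, not the real CrXPRT manifest)",
      "manifest_version": 2,
      "permissions": [
        // A pattern that lists only the bare domain would not cover the
        // redirect target, www.principledtechnologies.com:
        //   "*://principledtechnologies.com/*"
        //
        // A wildcard host pattern matches the bare domain and any
        // subdomain, including www:
        "*://*.principledtechnologies.com/*"
      ]
    }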

If you’re still encountering this problem during testing, check to be sure the app has updated on your system. The changes we made do not affect performance, and all completed CrXPRT test scores from before and after the update are valid and comparable.

We’re grateful whenever community members report issues! If you ever have any problems, questions, or comments regarding any of the XPRTs, please feel free to contact us.

Justin

WebXPRT passes another milestone!

We’re excited to see that users have successfully completed over 250,000 WebXPRT runs! From the original WebXPRT 2013 to the most recent version, WebXPRT 3, this tool has been popular with manufacturers, developers, consumers, and media outlets around the world because it’s easy to run, it runs quickly and on a wide variety of platforms, and it evaluates device performance using real-world tasks.

If you’ve run WebXPRT in any of the more than 458 cities and 64 countries from which we’ve received complete test data—including newcomers Lithuania, Luxembourg, Sweden, and Uruguay—we’re grateful for your help in reaching this milestone. Here’s to another quarter-million runs!

If you haven’t yet transitioned your browser testing to WebXPRT 3, now is a great time to give it a try! WebXPRT 3 includes updated photo workloads with new images and a deep learning task used for image classification. It also adds an optical character recognition task to the Encrypt Notes and OCR scan workload, and its new Online Homework workload combines part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario. Users carry out tasks like these in their browsers daily, making these workloads very effective for assessing how well a device will perform in the real world.

Happy testing to everyone, and if you have any questions about WebXPRT 3 or the XPRTs in general, feel free to ask!

Justin

CrXPRT helps to navigate the changing Chromebook market

Some people envision Chromebooks as low-end, plastic-shelled laptops that large organizations buy in bulk because they’re inexpensive and easy to manage. While many sub-$200 Chromebooks are still available, the platform is no longer limited to budget chipsets and little memory. Consumers can now choose systems that feature up to 16 GB of RAM, 8th generation Intel Core CPUs, and Core i7 configurations for those willing to pay around $1,600. In addition, some Chromebooks can now run Android apps, Microsoft Office mobile apps, Linux apps, and even Windows apps. While Chromebooks still depend heavily on connectivity and cloud storage, an increasing number of Chrome apps let you perform substantial productivity tasks offline. The Chrome OS landscape has changed so much that for certain use cases, the practical hardware gap between Chromebooks and traditional laptops is narrowing.

More consumers might be interested in Chromebooks than was the case a few years ago, but how do they make sense of all the devices on the market? CrXPRT can help by providing objective data on Chromebook performance and battery life. Steven J. Vaughan-Nichols offered a great example of the value CrXPRT can provide in his recent ZDNet article on the new Core i7-based Google Pixelbook. The Pixelbook’s CrXPRT score of 226 showed that it performs everyday tasks faster than any of the Chromebooks in our results database. When trying to decide whether it’s worth spending a few hundred or even a thousand dollars more on a new Chromebook, having the right data in hand can transform guesses into well-informed decisions.

You don’t have to be a tech journalist or even a techie to use CrXPRT. If you’d like to learn more about CrXPRT, we encourage you to read the CrXPRT feature here in the blog or visit CrXPRT.com.

Justin

The WebXPRT 3 results calculation white paper is now available

As we’ve discussed in prior blog posts, transparency is a core value of our open development community. A key part of being transparent is explaining how we design our benchmarks, why we make certain development decisions, and how the benchmarks actually work. This week, to help WebXPRT 3 testers understand how the benchmark calculates results, we published the WebXPRT 3 results calculation and confidence interval white paper.

The white paper explains what the WebXPRT 3 confidence interval is, how it differs from typical benchmark variability, and how the benchmark calculates the individual workload scenario and overall scores. The paper also provides an overview of the statistical techniques WebXPRT uses to translate raw times into scores.

To supplement the white paper’s overview of the results calculation process, we’ve also published a spreadsheet that shows the raw data from a sample test run and reproduces the calculations WebXPRT uses.
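
For a taste of what the paper covers, here’s a deliberately simplified sketch of how a benchmark of this kind might turn raw times into scores. The calibration times, scaling constant, and sample values below are made up for illustration; the white paper and spreadsheet document the actual WebXPRT 3 values and formulas.

    // Simplified sketch of a benchmark-style results calculation.
    // All numbers are invented for illustration; see the white paper
    // and spreadsheet for WebXPRT's actual values and formulas.

    function workloadScore(rawMs: number, calibrationMs: number, scale = 100): number {
      // Faster (lower) raw times yield higher scores, normalized
      // against a fixed calibration time.
      return (calibrationMs / rawMs) * scale;
    }

    function geometricMean(values: number[]): number {
      const logSum = values.reduce((sum, v) => sum + Math.log(v), 0);
      return Math.exp(logSum / values.length);
    }

    function confidenceInterval95(samples: number[]): [number, number] {
      // A textbook 95% confidence interval around the mean of repeated
      // runs (assumes at least two samples); the white paper describes
      // WebXPRT's own approach.
      const n = samples.length;
      const mean = samples.reduce((a, b) => a + b, 0) / n;
      const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
      const half = 1.96 * Math.sqrt(variance / n);
      return [mean - half, mean + half];
    }

    // Three hypothetical workloads: raw times (ms) and calibration times (ms).
    const rawTimes = [1200, 850, 400];
    const calTimes = [1000, 1000, 500];
    const scores = rawTimes.map((t, i) => workloadScore(t, calTimes[i]));
    const overall = geometricMean(scores);
    console.log(scores.map(s => s.toFixed(1)), overall.toFixed(1));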

The paper and spreadsheet are both available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know, and be sure to check out our other XPRT white papers.

Justin

More on the way for the XPRT Weekly Tech Spotlight

In the coming months, we’ll continue to add more devices and helpful features to the XPRT Weekly Tech Spotlight. We’re especially interested in adding data points and visual aids that make it easier to quickly understand the context of each device’s test scores. For instance, those of us who are familiar with WebXPRT 3 scores know that an overall score of 250 is pretty high, but site visitors who are unfamiliar with WebXPRT probably won’t know how that score compares to scores for other devices.

We designed Spotlight to be a source of objective data, in contrast to sites that provide subjective ratings for devices. As we pursue our goal of helping users make sense of scores, we want to maintain this objectivity and avoid presenting information in ways that could be misleading.

Introducing comparison aids to the site is forcing us to make some tricky decisions. Because we value input from XPRT community members, we’d love to hear your thoughts on one of the questions we’re facing: How should our default view present a device’s score?

We see three options:

1) Present the device’s score in relation to the overall high and low scores for that benchmark across all devices.
2) Present the device’s score in relation to the overall high and low scores for that benchmark across the broad category of devices to which that device belongs (e.g., phones).
3) Present the device’s score in relation to the overall high and low scores for that benchmark across a narrower sub-category of devices to which that device belongs (e.g., high-end flagship phones).

To think this through, consider WebXPRT, which runs on desktops, laptops, phones, tablets, and other devices. Typically, the WebXPRT scores for phones and tablets are lower than scores for desktop and laptop systems. The first approach helps to show just how fast high-end desktops and laptops handle the WebXPRT workloads, but it could make a phone or tablet look slow, even if its score is good for its category. The second approach would prevent unfair default comparisons between different device types but would still present comparisons between devices that are not true competitors (e.g., flagship phones vs. budget phones). The third approach is the most careful, but it would introduce an element of subjectivity, because determining the sub-category to which a device belongs is not always clear-cut.
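
For the more technically inclined, here’s a hypothetical sketch of how the three options differ. The data shape and field names are invented for illustration, not taken from Spotlight’s implementation; the point is that each option changes only the pool of results that supplies the high and low bounds.

    // Hypothetical sketch: each option changes only the pool of results
    // that supplies the high/low bounds used for context.
    interface Result {
      device: string;
      category: string;     // e.g., "phone"
      subCategory: string;  // e.g., "high-end flagship phone"
      score: number;
    }

    // Option 1: all devices. Option 2: same broad category.
    // Option 3: same narrower sub-category.
    function poolFor(option: 1 | 2 | 3, device: Result, all: Result[]): Result[] {
      switch (option) {
        case 1: return all;
        case 2: return all.filter(r => r.category === device.category);
        case 3: return all.filter(r => r.subCategory === device.subCategory);
      }
    }

    // Where a score falls between the pool's low and high scores,
    // from 0 (lowest) to 1 (highest).
    function relativePosition(score: number, pool: Result[]): number {
      const scores = pool.map(r => r.score);
      const low = Math.min(...scores);
      const high = Math.max(...scores);
      return high === low ? 1 : (score - low) / (high - low);
    }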

Do you have thoughts on this subject, or recommendations for Spotlight in general? If so, let us know.

Justin
