
Comparing apples and oranges?

My first day at CES, I had breakfast with Brett Howse from AnandTech. It was a great opportunity to get the perspective of a savvy tech journalist and frequent user of the XPRTs.

During our conversation, Brett raised concerns about comparing mobile devices to PCs. As mobile devices get more powerful, the performance and capability gaps between them and PCs are narrowing. That makes it more common to compare upper-end mobile devices to PCs.

People have long used different versions of benchmarks for these two classes of devices. For example, the images for benchmarking a phone might be smaller than those for benchmarking a PC. Also, because of processor differences, the benchmarks might be built differently, say as a 16- or 32-bit executable for a mobile device and a 64-bit version for a PC. That was fine when no one was comparing the devices directly, but it can be a problem now.

This issue is more complicated than it sounds. For those cases where a benchmark uses a dumbed-down version of the workload on mobile devices, comparing the results is clearly not valid. However, let’s assume that the workload stays the same, and that you run a 32-bit version of the benchmark on a tablet and a 64-bit version on a PC. Is the comparison valid? It may be, if you are talking about the day-to-day performance a user is likely to encounter. However, it may not be valid if you are making a statement about the potential performance of the device itself.

Brett would like the benchmarking community to take charge of this issue and provide guidance about how to compare mobile devices and PCs. What are your thoughts?

Eric

Question we get a lot

“How come your benchmark ranks devices differently than [insert other benchmark here]?” It’s a fair question, and the answer is that each benchmark has its own emphasis and tests different things. When you think about it, it would be surprising if all benchmarks agreed.

To illustrate the phenomenon, consider this excerpt from a recent browser shootout in VentureBeat:

[Excerpt: browser benchmark rankings from the VentureBeat shootout]
While this looks very confusing, the simple explanation is that the different benchmarks are testing different things. To begin with, SunSpider, Octane, JetStream, Peacekeeper, and Kraken all measure JavaScript performance. Oort Online measures WebGL performance. WebXPRT measures both JavaScript and HTML5 performance. HTML5Test measures HTML5 compliance.

Even with benchmarks that test the same aspect of browser performance, the tests differ. Kraken and SunSpider both test the speed of JavaScript math, string, and graphics operations in isolation, but run different sets of tests to do so. Peacekeeper profiles the JavaScript from sites such as YouTube and Facebook.
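To make that distinction concrete, here is a minimal sketch of what a micro-benchmark that times JavaScript math in isolation looks like. It is purely illustrative (written here in TypeScript) and is not code from any of the benchmarks named above.

```typescript
// A micro-benchmark in the SunSpider/Kraken style: it times one kind of
// JavaScript work (here, math) in isolation, with no DOM or HTML5 involved.
// Illustrative sketch only -- not code from any benchmark named above.
function timeMathWorkload(iterations: number): number {
  const start = performance.now();
  let sum = 0;
  for (let i = 0; i < iterations; i++) {
    sum += Math.sqrt(i) * Math.sin(i); // pure JavaScript math
  }
  // Reference the result so the loop can't be optimized away entirely.
  if (!Number.isFinite(sum)) {
    console.log(sum);
  }
  return performance.now() - start;
}

console.log(`Math workload: ${timeMathWorkload(1_000_000).toFixed(1)} ms`);
```

A scenario-based benchmark, by contrast, wraps that kind of low-level work inside higher-level tasks, so the two approaches can easily rank the same browsers differently.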

WebXPRT, like the other XPRTs, uses scenarios that model the types of work people do with their devices.

It’s no surprise that the order changes depending on which aspect of the Web experience you emphasize, in much the same way that the most fuel-efficient cars might not be the ones with the best acceleration.

This is a bigger topic than we can deal with in a single blog post, and we’ll examine it more in the future.

Eric

WebXPRT 2015 is here!

Today, we’re releasing WebXPRT 2015, our benchmark for evaluating the performance of Web-enabled devices. The BenchmarkXPRT Development Community has been using a community preview for several weeks, but now that we’ve released the benchmark, anyone can run WebXPRT and publish results.

Run WebXPRT 2015

WebXPRT 2013 is still available here while people transition to WebXPRT 2015. We will provide plenty of notice before discontinuing WebXPRT 2013.

After trying out WebXPRT, please send your comments to BenchmarkXPRTsupport@principledtechnologies.com.

What do you think when you hear “Chromebook”?

We’ve been thinking a lot about Chromebooks while doing all of our testing in preparation for the CrXPRT Community Preview. In both the models we’re testing and the ones announced in the press, we’ve seen just how much the Chromebook market is changing. Some folks even claim that Chromebook sales made up 35 percent of US commercial laptop sales in the first half of 2014. What’s even more interesting to us is the wide variety of Chromebooks on the market.

Choosing between Chromebooks is becoming more complicated than it used to be. There’s a greater range of hardware choices, and those choices can have a direct impact on performance and battery life. Some Chromebooks offer local storage up to 320 GB, touch screens, and 4G/LTE connectivity. Prices range widely, from $199 to $1,499. Even seemingly comparable systems can perform very differently when put to the test. For instance, we recently tested two Chromebooks separated by only $50 in price but by more than 5 hours of estimated battery life!

Whether a consumer’s ultimate purchasing decision is based on price, specs, or a combination of factors, there are few things more valuable to buyers than reliable facts about performance and battery life. Benchmarking is ultimately about gaining useful data for decision making, and that’s why we’re excited about the value that CrXPRT will bring to the Chromebook discussion!

Justin
