A couple of weeks ago, I described a conversation I had with Brett Howse of AnandTech. Brett was kind enough to send a clarification of some of his remarks, which he gave us permission to share with you.
“We are at a point in time where the technology that’s been called mobile since its inception is now at a point where it makes sense to compare it to the PC. However we struggle with the comparisons because the tools used to do the testing do not always perform the same workloads. This can be a major issue when a company uses a mobile workload, and a desktop workload, but then puts the resulting scores side by side, which can lead to misinformed conclusions. This is not only a CPU issue either, since on the graphics side we have OpenGL well established, along with DirectX, in the PC space, but our mobile workloads tend to rely on OpenGL ES, with less precision asked of the GPU, and GPUs designed around this. Getting two devices to run the same work is a major challenge, but one that has people asking what the results would be.”
I really appreciate Brett taking the time to respond. What are your thoughts on these issues? Please let us know!
My first day at CES, I had breakfast with Brett Howse from AnandTech. It was a great opportunity to get the perspective of a savvy tech journalist and frequent user of the XPRTs.
During our conversation, Brett raised concerns about comparing mobile devices to PCs. As mobile devices get more powerful, the performance and capability gaps between them and PCs are narrowing. That makes it more common to compare upper-end mobile devices to PCs.
People have long used different versions of benchmarks when comparing these two classes of devices. For example, the images used to benchmark a phone might be smaller than those used to benchmark a PC. Also, because of processor differences, the benchmarks might be built differently, say as a 16- or 32-bit executable for a mobile device and a 64-bit version for a PC. That was fine when no one was comparing the devices directly, but it can be a problem now.
This issue is more complicated than it sounds. For those cases where a benchmark uses a dumbed-down version of the workload for mobile devices, comparing the results is clearly not valid. However, let’s assume that the workload stays the same, and that you run a 32-bit benchmark on a tablet and a 64-bit version on a PC. Is the comparison valid? It may be, if you are talking about the day-to-day performance a user is likely to encounter. However, it may not be valid if you are making a statement about the potential performance of the device itself.
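To see why scores from different-sized workloads can’t be compared directly, consider a minimal, purely hypothetical sketch. All of the numbers below (image sizes, processing rate, score formula) are illustrative assumptions, not measurements from any real benchmark:

```python
# Hypothetical sketch: why a score from a smaller "mobile" workload is not
# comparable to a score from a larger "desktop" workload, even on the
# same hardware.

def fixed_workload_score(seconds):
    # Many benchmarks report a score inversely proportional to run time:
    # finishing the workload faster yields a higher score.
    return 1000.0 / seconds

PER_PIXEL_SECONDS = 1e-7  # assumed identical per-pixel speed on both runs

mobile_time = (640 * 480) * PER_PIXEL_SECONDS      # smaller test image
desktop_time = (1920 * 1080) * PER_PIXEL_SECONDS   # larger test image

mobile_score = fixed_workload_score(mobile_time)
desktop_score = fixed_workload_score(desktop_time)

# The hardware speed is identical, yet the smaller workload produces a
# much higher score, so the two numbers say nothing about relative
# device performance.
print(mobile_score > desktop_score)  # True
```

The per-pixel model here is deliberately oversimplified; real workloads also differ in precision, memory pressure, and instruction mix, which only widens the gap.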
Brett would like the benchmarking community to take charge of this issue and provide guidance about how to compare mobile devices and PCs. What are your thoughts?
“How come your benchmark ranks devices differently than [insert other benchmark here]?” It’s a fair question, and the answer is that each benchmark has its own emphasis and tests different things. When you think about it, it would be unusual if all benchmarks agreed.
To illustrate the phenomenon, consider this excerpt from a recent browser shootout in VentureBeat:
WebXPRT, like the other XPRTs, uses scenarios that model the types of work people do with their devices.
It’s no surprise that the order changes depending on which aspect of the Web experience you emphasize, in much the same way that the most fuel-efficient cars might not be the ones with the best acceleration.
This is a bigger topic than we can deal with in a single blog post, and we’ll examine it more in the future.
Today, we’re releasing WebXPRT 2015, our benchmark for evaluating the performance of Web-enabled devices. The BenchmarkXPRT Development Community has been using a community preview for several weeks, but now that we’ve released the benchmark, anyone can run WebXPRT and publish results.
Run WebXPRT 2015
WebXPRT 2013 is still available here while people transition to WebXPRT 2015. We will provide plenty of notice before discontinuing WebXPRT 2013.
After trying out WebXPRT, please send your comments to BenchmarkXPRTsupport@principledtechnologies.com.
We’ve been thinking a lot about Chromebooks while doing all of our testing in preparation for the CrXPRT Community Preview. In both the models we’re testing and the ones announced in the press, we’ve seen just how much the Chromebook market is changing. Some folks even claim that Chromebook sales made up 35 percent of US commercial laptop sales in the first half of 2014. What’s even more interesting to us is the wide variety of Chromebooks on the market.
Choosing between Chromebooks is becoming more complicated than it used to be. There’s a greater range of hardware choices, and those choices can have a direct impact on performance and battery life. Some Chromebooks offer local storage up to 320 GB, touch screens, and 4G/LTE connectivity. Prices range widely, from $199 to $1,499. Even seemingly comparable systems can perform very differently when put to the test. For instance, we recently tested two Chromebooks whose prices differed by only $50 but whose estimated battery lives differed by more than five hours!
Whether a consumer’s ultimate purchasing decision is based on price, specs, or a combination of factors, there are few things more valuable to buyers than reliable facts about performance and battery life. Benchmarking is ultimately about gaining useful data for decision making, and that’s why we’re excited about the value that CrXPRT will bring to the Chromebook discussion!
Comment on this post in the forums