
Evolve or die

Last week, Google announced that it would retire its Octane benchmark. The announcement explains that Google designed Octane to spur improvement in JavaScript performance, and while it did just that when first released, those improvements have plateaued in recent years. Google also notes that some operations in Octane encourage optimizations that raise Octane scores but do not reflect real-world scenarios. That’s unfortunate, because Google, like most of us, wants improvements in benchmark scores to mean improvements in end-user experience.

WebXPRT comes at the web performance issue differently. While Octane’s goal was to improve JavaScript performance, the purpose of WebXPRT is to measure performance from the end user’s perspective. By doing the types of work real people do, WebXPRT doesn’t measure only improvements in JavaScript performance; it also measures the quality of the real-world user experience. WebXPRT’s results also reflect the performance of the entire device and software stack, not just the performance of the JavaScript interpreter.

Google’s announcement reminds us that benchmarks have finite life spans: they must constantly evolve to keep pace with changes in technology, or they will become useless. To make sure the XPRT benchmarks do just that, we are always looking at how people use their devices and developing workloads that reflect their actions. This is a core element of the XPRT philosophy.

As we mentioned last week, we’ve been working on the next version of WebXPRT. If you have any thoughts about how it should evolve, let us know!

Eric

Thinking ahead to WebXPRT 2017

A few months ago, Bill discussed our intention to update WebXPRT this year. Today, we want to share some initial ideas for WebXPRT 2017 and ask for your input.

Updates to the workloads provide an opportunity to increase the relevance and value of WebXPRT in the years to come. Here are a few of the ideas we’re considering:

  • For the Photo Enhancement workload, we can increase the data sizes of pictures. We can also experiment with additional types of photo enhancement such as background/foreground subtraction, collage creation, or panoramic/360-degree image viewing.
  • For the Organize Album workload, we can explore machine learning by incorporating open-source JavaScript libraries into web-based inferencing tests (see the sketch after this list).
  • For the Local Notes workload, we’re investigating the possibility of leveraging libraries such as natural-brain for natural language processing functions.
  • For a new workload, we’re investigating the possibility of using online 3D modeling applications such as Tinkercad.
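
To make the inferencing idea above a little more concrete, here is a minimal TypeScript sketch of the shape such a test might take. It times a tiny hand-rolled feed-forward pass rather than calling any particular open-source library, and the layer sizes, iteration count, and scoring approach are illustrative assumptions, not WebXPRT’s actual design.

```typescript
// Illustrative sketch only: times a tiny feed-forward "inference" pass.
// A real workload would load a trained model from an open-source
// JavaScript library; the sizes and counts here are arbitrary.

type Matrix = number[][];

function randomMatrix(rows: number, cols: number): Matrix {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => Math.random() * 2 - 1)
  );
}

// One dense layer with a ReLU activation: output = max(0, W * input)
function denseRelu(weights: Matrix, input: number[]): number[] {
  return weights.map((row) =>
    Math.max(0, row.reduce((sum, w, i) => sum + w * input[i], 0))
  );
}

function timeInference(iterations: number): number {
  const layer1 = randomMatrix(128, 64); // 64 inputs -> 128 hidden units
  const layer2 = randomMatrix(10, 128); // 128 hidden units -> 10 outputs
  const input = Array.from({ length: 64 }, () => Math.random());

  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    denseRelu(layer2, denseRelu(layer1, input));
  }
  return performance.now() - start; // elapsed milliseconds; lower is better
}

console.log(`Inference pass: ${timeInference(1000).toFixed(1)} ms`);
```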

For the UI, we’re considering improvements to features like the in-test progress bars and individual subtest selection. We’re also planning to update the UI to make it visually distinct from older versions.
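
As a rough illustration of the progress-bar idea (and not WebXPRT’s actual code), a hypothetical in-test progress display might be wired up like the sketch below, where the element IDs and subtest names are invented for the example:

```typescript
// Hypothetical sketch; element IDs and subtest names are invented.
const subtests = ["Photo Enhancement", "Organize Album", "Local Notes"];

function reportProgress(completedCount: number): void {
  const pct = Math.round((completedCount / subtests.length) * 100);
  const bar = document.getElementById("progress-bar") as HTMLElement;
  const label = document.getElementById("progress-label") as HTMLElement;
  bar.style.width = `${pct}%`; // fill the bar to the completed percentage
  label.textContent =
    completedCount < subtests.length
      ? `Running ${subtests[completedCount]} (${pct}%)`
      : "Done (100%)";
}
```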

Throughout this process, we want to be careful to maintain the features that have made WebXPRT our most popular tool, with more than 141,000 runs to date. We’re committed to making sure that it runs quickly and simply in most browsers and produces results that are useful for comparing web browsing performance across a wide variety of devices.

Do you have feedback on these ideas or suggestions for browser technologies or test scenarios that we should consider for WebXPRT 2017? Are there existing features we should ditch? Are there elements of the UI that you find especially useful or would like to see improved? Please let us know. We want to hear from you and make sure that we’re crafting a performance tool that continues to meet your needs.

Justin

Looking under the hood

In the next couple of weeks, we’ll publish the source code and build instructions for the latest HDXPRT 2014 and BatteryXPRT 2014 builds. Access to XPRT source code is one of the benefits of BenchmarkXPRT Development Community membership. For readers who may not know, this is a good time to revisit the reasons we make the source code available.

The primary reason is transparency; we want the XPRTs to be as open as possible. As part of our community model for software development, the source code is available to anyone who joins the community. Closed-source benchmark development can lead some people to infer that a benchmark is biased in some way. Our approach makes it impossible to hide any biases.

Another reason we publish source code is to encourage collaborative development and innovation. Community members are involved in XPRT development from the beginning, helping to identify emerging technologies in need of reliable benchmarking tools, suggesting potential workloads and improvements, reviewing design documents, and offering all sorts of general feedback.

Simply put, if you’re interested in benchmarking and the BenchmarkXPRT Development Community, then we’re interested in what you have to say! Community input helps us at every step of the process, and ultimately helps us to create benchmarking tools that are as reliable and relevant as possible.

If you’d like to review XPRT source code, but haven’t yet joined the community, we encourage you to go ahead and join! It’s easy, and if you work for a company or organization with an interest in benchmarking, you can join the community for free. Simply fill out the form with your company e-mail address and click the option to be considered for a free membership. We’ll contact you to verify the address is real and then activate your membership.

If you have any other questions about community membership or XPRT source code, feel free to contact us. We look forward to hearing from you!

Justin

Running Android-oriented XPRTs on Chrome OS

Since last summer, we’ve been following Google’s progress in bringing Android apps and the Google Play store to Chromebooks, along with its plan to gradually phase out support for Chrome apps over the next few years. Because we currently offer apps that assess battery life and performance for Android devices (BatteryXPRT and MobileXPRT) and Chromebooks (CrXPRT), the way this situation unfolds could affect the makeup of the XPRT portfolio in the years to come.

For now, we’re experimenting to see how well the Android app/Chrome OS merger is working with the devices in our lab. One test case is the Samsung Chromebook Plus, which we featured in the XPRT Weekly Tech Spotlight a few weeks ago. Normally, we would publish only CrXPRT and WebXPRT results for a Chromebook, but installing and running MobileXPRT 2015 from the Google Play store was such a smooth and error-free process that we decided to publish the first MobileXPRT score for a device running Chrome OS.

We also tried running BatteryXPRT on the Chromebook Plus, but even though the installation was quick and easy and the test kicked off without a hitch, we could not generate a valid result. Typically, the test would complete several iterations successfully, but terminate before producing a result. We’re investigating the problem, and will keep the community up to date on what we find.

In the meantime, we continue to recommend that Chromebook testers use CrXPRT for performance and battery life assessment. While we haven’t encountered any issues running MobileXPRT 2015 on Chromebooks, CrXPRT has a proven track record.

If you have any questions about running Android-oriented XPRTs on Chrome OS, or insights that you’d like to share, please let us know.

Justin

Digging deeper

From time to time, we like to revisit the fundamentals of the XPRT approach to benchmark development. Today, we’re discussing the need for testers and benchmark developers to consider the multiple factors that influence benchmark results. For every device we test, all of its hardware and software components have the potential to affect performance, and changing the configuration of those components can significantly change results.

For example, we frequently see significant performance differences between different browsers on the same system. In our recent recap of the XPRT Weekly Tech Spotlight’s first year, we highlighted an example of how testing the same device with the same benchmark can produce different results, depending on the software stack under test. In that instance, the Alienware Steam Machine entry included a WebXPRT 2015 score for each of the two browsers that consumers were likely to use. The first score (356) represented the SteamOS browser app in the SteamOS environment, and the second (441) represented the Iceweasel browser (a Firefox variant) in the Linux-based desktop environment. Including only the first score would have given readers an incomplete picture of the Steam Machine’s web-browsing capabilities, so we thought it was important to include both.

We also see performance differences between different versions of the same browser, a fact that is especially relevant to those who use frequently updated browsers such as Chrome. Even benchmarks that measure the same general area of performance (web browsing, for example) usually test very different things.
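
One practical habit these examples suggest: record the software stack alongside every score, so results can be compared apples to apples later. Here’s a minimal sketch, assuming a hypothetical result-record format rather than WebXPRT’s actual results schema:

```typescript
// Hypothetical result record; field names are invented for illustration.
interface ResultRecord {
  score: number;      // the benchmark score
  userAgent: string;  // browser name and version, as the browser reports it
  timestamp: string;  // when the run finished
}

function recordResult(score: number): ResultRecord {
  return {
    score,
    userAgent: navigator.userAgent,
    timestamp: new Date().toISOString(),
  };
}

// Example: each Steam Machine score would be stored with its stack.
console.log(recordResult(441));
```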

OS updates can also have an impact on performance. Consumers might base a purchase on performance or battery life scores and end up with a device that behaves much differently when updated to a new version of Android or iOS, for example.

Other important factors in the software stack include pre-installed software, commonly referred to as bloatware, and the proliferation of apps that sap performance and battery life.

This is a much larger topic than we can cover in the blog. Let the examples we’ve mentioned remind you to think critically about, and dig deeper into, benchmark results. If we see published XPRT scores that differ significantly from our own results, our first question is always “What’s different between the two devices?” Most of the time, the answer becomes clear as we compare hardware and software from top to bottom.

Justin

VR and AR at Mobile World Congress 2017

Spotting the virtual reality (VR) and augmented reality (AR) demos at the recent Mobile World Congress (MWC) in Barcelona was easy: all you had to do was look for the long queues of people waiting to put on a headset and see another world. Though the demos ranged from games to simulated roller-coaster rides to simple how-to tools, the crowd’s interest was always high. A lot of the attraction was clearly due to the novelty of the technologies, but many people seemed focused on using them to create commercially viable products.

Both VR and AR involve a great deal of graphics and data movement, so they can be quite computationally demanding. Right now, that’s not a problem, because most applications and demos are hooked directly to powerful computers. As these technologies become more pervasive, however, they’re going to find their way into our devices, which will almost certainly do some of the processing even as the bulk of the work happens on servers in the cloud. The better the AR and VR experiences our devices can support, the happier we’re likely to be with those technologies.

Along with the crowds at MWC, many of us in the BenchmarkXPRT Development Community are enthusiastic about VR and AR, which is why we’ve been monitoring these fields for some time. We’ve even worked with a group of NC State University students to produce a sample VR workload. If you have thoughts on how we might best support VR and AR, please contact us. Meanwhile, we’ll continue to track both closely and work to get the XPRTs ready to measure how well devices handle these technologies.

Mark
