
Month: May 2015

What makes a good benchmark?

As we discussed recently, we’re working on the design document for the next version of MobileXPRT, and we’re really interested in any ideas you may have. However, we haven’t talked much about what makes for a good benchmark test.

The things we measure need to be quantifiable. A reviewer can talk about the realism of game play, or the innovative look of a game, and those are valid observations. However, it is difficult to convert those kinds of subjective impressions to numbers.

The things we measure must also be repeatable. For example, the response time for an online service may depend on the time of day, number of people using the service at the time, network load, and other factors that change over time. You can measure the responsiveness of such services, but doing so requires repeating the test enough times under enough different circumstances to get a representative sample.
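To make that concrete, here’s a minimal sketch (not code from any of the XPRTs) of how a harness might time a workload several times and report the median, so a single unusually fast or slow run doesn’t skew the result. The runWorkload function and the iteration count are placeholders.

```typescript
// Minimal sketch: time a workload several times and report the median,
// so a single outlier run does not skew the reported result.
// Assumes a browser-like environment where performance.now() is available.
async function measureMedianMs(
  runWorkload: () => Promise<void>, // placeholder for the operation under test
  iterations = 7                    // enough samples to be reasonably representative
): Promise<number> {
  const samples: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await runWorkload();
    samples.push(performance.now() - start);
  }
  samples.sort((a, b) => a - b);
  return samples[Math.floor(samples.length / 2)]; // median of the samples
}
```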

The things we can measure go beyond the speed of the device to include battery life, compatibility with standards, and even fidelity or quality, as with photos and video. BatteryXPRT and CrXPRT test battery life, while the HTML5 tests in WebXPRT are among those that test compatibility. We are currently looking into quality metrics for possible future tools.

I hope this has given you some ideas. If so, let us know!

Eric

It’s not the same

We sometimes get questions about comparing results from older versions of benchmarks to the current version. Unfortunately, it’s never safe to compare the results from different versions of benchmarks. This principle has been around much longer than the XPRTs. A major update will use different workloads and test data, and will probably be built with updated or different tools.

To avoid confusion, we rescale the results every time we release a new version of an existing benchmark. By making the results significantly different, we hope to reduce the likelihood that results from two different versions will get mixed together.
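To give a rough sense of what rescaling involves (this is not the actual WebXPRT formula, and the numbers are invented for illustration), a new version can normalize raw completion times against a calibration baseline and apply a version-specific scale factor so its scores land in a clearly different range from the old version’s.

```typescript
// Illustrative only: normalize a raw result against a calibration baseline,
// then apply a version-specific factor so scores from different benchmark
// versions fall in visibly different ranges. Both constants are invented.
const CALIBRATION_BASELINE_MS = 1200; // hypothetical reference-device time
const VERSION_SCALE_FACTOR = 100;     // hypothetical factor for the new version

function scaledScore(rawTimeMs: number): number {
  // Faster (lower) completion times yield higher scores.
  return (CALIBRATION_BASELINE_MS / rawTimeMs) * VERSION_SCALE_FACTOR;
}

console.log(scaledScore(1200)); // 100 on the hypothetical reference device
console.log(scaledScore(600));  // 200: twice as fast, twice the score
```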

As an example, we scaled the results from WebXPRT 2015 to be significantly lower than those from WebXPRT 2013. Here are some scores from the published results for WebXPRT 2013 and WebXPRT 2015.

WebXPRT 2013 vs. 2015 results

Please note that the results above are not necessarily from the same device configurations, and are meant only to illustrate the difference in results between the two versions of WebXPRT.

If you have any questions, please let us know.

Eric

WebXPRT 2015 source code is now available

As of today, we are making the WebXPRT 2015 source code available to community members.

Download the WebXPRT 2015 source here (login required).

We’ll also post a link to the source on the WebXPRT tab in the Members Area. The source code package contains instructions for setting up a local installation of WebXPRT. However, please note that you must test using WebXPRT.com to get an official result.

If you want more information, please contact BenchmarkXPRTSupport@principledtechnologies.com.

We look forward to your feedback!

What are the implications?

It’s been a couple of weeks since the Microsoft Build 2015 conference. There was a lot of interesting news, and we are still digesting what it means for the XPRTs, especially TouchXPRT.

Rebuilding TouchXPRT as a universal app has the potential to let it run on a much wider range of devices: PCs, tablets, phones, even the Xbox. This would give TouchXPRT the kind of versatility that we enjoy in the Android space with MobileXPRT and BatteryXPRT.

It’s a lot more complicated to sort out the implications of Microsoft Continuum, which lets you use your phone as a computer by connecting it to a docking station. A device’s features and the way its apps behave can change based on the display available: connect the phone to a dock and it behaves like a desktop. That means the hardware and features available on a device could change in the middle of a test, so TouchXPRT would need to detect any such changes and respond appropriately.
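As a loose illustration of the kind of detection involved (TouchXPRT is a Windows app, so this is only a web-based analogy, and the flag name is invented), a test could watch for display changes mid-run and mark the result accordingly:

```typescript
// Loose web-based analogy, not TouchXPRT code: watch for display changes
// during a test run and flag the run if the environment changed (for example,
// the device was docked or undocked mid-test).
let environmentChangedDuringRun = false; // invented flag for illustration

function watchForDisplayChanges(): void {
  const initialWidth = window.screen.width;
  const initialHeight = window.screen.height;

  window.addEventListener("resize", () => {
    if (
      window.screen.width !== initialWidth ||
      window.screen.height !== initialHeight
    ) {
      environmentChangedDuringRun = true; // display configuration changed mid-run
    }
  });
}
```

A harness could then discard, rerun, or annotate any result where that flag was set.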

That’s a lot to think about, and we’ve been experimenting. If you have any thoughts about Windows 10 and benchmarking, please let us know.

Eric

WebXPRT 2015 is here!

Today, we’re releasing WebXPRT 2015, our benchmark for evaluating the performance of Web-enabled devices. The BenchmarkXPRT Development Community has been using a community preview for several weeks, but now that we’ve released the benchmark, anyone can run WebXPRT and publish results.

Run WebXPRT 2015

WebXPRT 2013 is still available here while people transition to WebXPRT 2015. We will provide plenty of notice before discontinuing WebXPRT 2013.

After trying out WebXPRT, please send your comments to BenchmarkXPRTsupport@principledtechnologies.com.

WebXPRT 2015

Tomorrow we’ll be releasing WebXPRT 2015, with a mirror site in Singapore to follow soon. We’ve been talking about it for a while, and we’re delighted to finally make it available to the public.

As we’ve discussed over the past few weeks, the new WebXPRT is a big improvement over WebXPRT 2013. Some of the changes are:

  • An improved UI. In addition to a cleaner, sleeker look, the UI now has a progress indicator and on-screen test descriptions. There is also a Simplified Chinese version of the UI.
  • Test automation. WebXPRT 2015 lets you automate testing, giving labs more flexibility and making it easier to test lots of devices. (There’s a rough sketch of what scripted runs can look like after this list.)
  • New and improved tests. In addition to enhancing the existing tests, WebXPRT 2015 adds two new tests, Explore DNA Sequencing and Sales Graphs.
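As promised above, here’s a rough sketch of what scripted, URL-driven runs could look like. The query parameter names below (autorun, resultlabel) are invented for illustration and are not WebXPRT’s documented automation interface; the instructions that ship with the benchmark describe the real syntax.

```typescript
// Hypothetical sketch of URL-driven automation; the parameter names are
// invented for illustration, not WebXPRT's actual automation interface.
const devicesUnderTest = ["lab-tablet-01", "lab-phone-02"];

for (const deviceId of devicesUnderTest) {
  const url = new URL("https://www.webxprt.com/"); // WebXPRT.com hosts official runs
  url.searchParams.set("autorun", "true");       // hypothetical: start without user input
  url.searchParams.set("resultlabel", deviceId); // hypothetical: tag the result
  console.log(`Open this on ${deviceId}: ${url.toString()}`);
}
```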


If you haven’t checked out the new WebXPRT, now is the time!

And remember, the design document for the next generation of MobileXPRT should be out by the end of the month. If there are things you would like to see, it’s a great time to let us know.

Eric
