

Planning the next version of MobileXPRT

We’re in the early planning stages for the next version of MobileXPRT, and we invite you to send us any suggestions you may have. What do you like or dislike about MobileXPRT? What features would you like to see in a new version?

When we begin work on a new version of any XPRT, one of the first steps we take is to assess the benchmark’s workloads to determine whether they will provide value during the years ahead. This step almost always involves updating test content such as photos and videos to more contemporary file resolutions and sizes, and it can also involve removing workloads or adding completely new scenarios. MobileXPRT currently includes five performance scenarios (Apply Photo Effects, Create Photo Collages, Create Slideshow, Encrypt Personal Content, and Detect Faces to Organize Photos). Should we stick with these five or investigate other use cases? What do you think?

As we did with WebXPRT 3 and the upcoming HDXPRT 4, we’re also planning to update the MobileXPRT UI to improve the look of the benchmark and make it easier to use.

Crucially, we’ll also build the app using the most current Android SDK and tools. Android development has changed significantly since we released MobileXPRT 2015, and apps must now conform to stricter standards that require explicit user permission for many tasks. Navigating these changes shouldn’t be too difficult, but it’s always possible that we’ll encounter unforeseen challenges along the way.
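To give a sense of what “explicit user permission” means in practice, here is a minimal Kotlin sketch of the runtime permission flow that recent Android versions require before an app can access protected resources such as shared storage. This is only an illustration of the platform requirement, not MobileXPRT code; the permission and request code shown are placeholders.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Illustrative only: since Android 6.0, "dangerous" permissions such as storage
// access must be granted by the user at runtime, not just declared in the manifest.
class MainActivity : AppCompatActivity() {

    private val storageRequestCode = 42 // arbitrary value used to match the callback

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val permission = Manifest.permission.READ_EXTERNAL_STORAGE
        if (ContextCompat.checkSelfPermission(this, permission)
            != PackageManager.PERMISSION_GRANTED
        ) {
            // Shows the system permission dialog; the result arrives asynchronously.
            ActivityCompat.requestPermissions(this, arrayOf(permission), storageRequestCode)
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == storageRequestCode &&
            grantResults.firstOrNull() == PackageManager.PERMISSION_GRANTED
        ) {
            // Safe to load test content (photos, videos) from shared storage here.
        }
    }
}
```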

Do you have suggestions for test scenarios we should consider for MobileXPRT? Are there existing features we should remove? Are there elements of the UI that you find especially useful, or do you have ideas for improving it? Please let us know. We want to hear from you and make sure that MobileXPRT continues to meet your needs.

Justin

Which browser is the fastest? It’s complicated.

PCWorld recently published the results of a head-to-head browser performance comparison between Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. As we’ve noted about similar comparisons, no single browser was the fastest in every test. Browser speed sounds like a straightforward metric, but the reality is complex.

For the comparison, PCWorld used three JavaScript-centric test suites (JetStream, SunSpider, and Octane), one benchmark that simulates user actions (Speedometer), a few in-house tests of their own design, and one benchmark that simulates real-world web applications (WebXPRT). Edge came out on top in JetStream and SunSpider, Opera won in Octane and WebXPRT, and Chrome had the best results in Speedometer and PCWorld’s custom workloads.

The reason the benchmarks rank the browsers so differently is that each one has a unique emphasis and tests a specific set of workloads and technologies. Some focus on very low-level JavaScript tasks, some test additional technologies such as HTML5, and some are designed to identify strengths or weaknesses by stressing devices in unusual ways. These approaches are all valid, but it’s important to understand exactly what a given score represents. Some scores reflect a very broad set of metrics, while others assess a very narrow set of tasks. Some scores help you understand the performance you can expect from a device in everyday use, and others measure performance in scenarios you’re unlikely to encounter. For example, when Eric discussed a similar topic in the past, he said the tests in JetStream 1.1 provided information that “can be very useful for engineers and developers, but may not be as meaningful to the typical user.”

As we do with all the XPRTs, we designed WebXPRT to test how devices handle the types of real-world tasks consumers perform every day. While lab techs, manufacturers, and tech journalists can all glean detailed data from WebXPRT, the test’s real-world focus means that the overall score is relevant to the average consumer. Simply put, a device with a higher WebXPRT score is probably going to feel faster to you during daily use than one with a lower score. In today’s crowded tech marketplace, that piece of information provides a great deal of value to many people.

What are your thoughts on browser testing? We’d love to hear from you.

Justin

WebXPRT passes another milestone!

We’re excited to see that users have successfully completed over 250,000 WebXPRT runs! From the original WebXPRT 2013 to the most recent version, WebXPRT 3, this tool has been popular with manufacturers, developers, consumers, and media outlets around the world because it’s easy to run, it runs quickly and on a wide variety of platforms, and it evaluates device performance using real-world tasks.

If you’ve run WebXPRT in any of the more than 458 cities and 64 countries from which we’ve received complete test data—including newcomers Lithuania, Luxembourg, Sweden, and Uruguay—we’re grateful for your help in reaching this milestone. Here’s to another quarter-million runs!

If you haven’t yet transitioned your browser testing to WebXPRT 3, now is a great time to give it a try! WebXPRT 3 includes updated photo workloads with new images and a deep learning task used for image classification. It also uses an optical character recognition task in the Encrypt Notes and OCR scan workload, and it combines part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in the new Online Homework workload. Users carry out tasks like these in their browsers every day, which makes these workloads very effective for assessing how well a device will perform in the real world.
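For readers curious about what an “encrypt notes” style task involves, here is a rough sketch of AES encryption applied to a short note. The actual WebXPRT workload runs as JavaScript inside the browser, so this Kotlin version is only an analogue of the kind of work being timed, not the benchmark’s code.

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator

// Rough analogue of an "encrypt notes" style task: encrypt a block of user text
// with AES-GCM. The real WebXPRT workload is JavaScript running in the browser;
// this sketch only illustrates the nature of the work being timed.
fun encryptNote(note: String): Pair<ByteArray, ByteArray> {
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key)
    val ciphertext = cipher.doFinal(note.toByteArray(Charsets.UTF_8))
    return cipher.iv to ciphertext // the IV must be kept to decrypt later
}

fun main() {
    val (iv, ciphertext) = encryptNote("Sample note: review the WebXPRT 3 results on Friday.")
    println("IV: ${iv.size} bytes, ciphertext: ${ciphertext.size} bytes")
}
```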

Happy testing to everyone, and if you have any questions about WebXPRT 3 or the XPRTs in general, feel free to ask!

Justin

The WebXPRT 3 results calculation white paper is now available

As we’ve discussed in prior blog posts, transparency is a core value of our open development community. A key part of being transparent is explaining how we design our benchmarks, why we make certain development decisions, and how the benchmarks actually work. This week, to help WebXPRT 3 testers understand how the benchmark calculates results, we published the WebXPRT 3 results calculation and confidence interval white paper.

The white paper explains what the WebXPRT 3 confidence interval is, how it differs from typical benchmark variability, and how the benchmark calculates the individual workload scenario and overall scores. The paper also provides an overview of the statistical techniques WebXPRT uses to translate raw times into scores.
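As a rough illustration of the general approach the paper documents, the sketch below converts raw workload times into scores relative to a calibration device and combines them with a geometric mean. The calibration times, scale factor, and sample times are placeholders invented for this example; the white paper and spreadsheet contain WebXPRT 3’s actual constants and formulas, including the confidence interval calculation.

```kotlin
import kotlin.math.pow

// Simplified sketch: lower raw times yield higher scores relative to a calibration
// device, and the overall score is a geometric mean of the workload scores.
// The calibration times and scale factor below are placeholders, not WebXPRT 3's
// actual constants; see the white paper and spreadsheet for the real formulas.
fun workloadScore(rawTimeMs: Double, calibrationTimeMs: Double, scale: Double = 100.0): Double =
    scale * calibrationTimeMs / rawTimeMs

fun overallScore(workloadScores: List<Double>): Double =
    workloadScores.fold(1.0) { acc, s -> acc * s }.pow(1.0 / workloadScores.size)

fun main() {
    // Hypothetical average raw times (ms) for two workloads on the device under test.
    val scores = listOf(
        workloadScore(rawTimeMs = 830.0, calibrationTimeMs = 900.0),
        workloadScore(rawTimeMs = 1510.0, calibrationTimeMs = 1600.0)
    )
    println("Workload scores: $scores")
    println("Overall score: %.1f".format(overallScore(scores)))
}
```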

To supplement the white paper’s overview of the results calculation process, we’ve also published a spreadsheet that shows the raw data from a sample test run and reproduces the calculations WebXPRT uses.

The paper and spreadsheet are both available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know, and be sure to check out our other XPRT white papers.

Justin

The value of speed

I was reading an interesting article on how high-end smartphones like the iPhone X, Pixel 2 XL, and Galaxy S8 generate more in-game revenue than cheaper phones do.

One line stood out to me: “With smartphones becoming faster, larger and more capable of delivering an engaging gaming experience, these monetization key performance indicators (KPIs) have begun to increase significantly.”

It turns out the game companies totally agree with the rest of us that faster devices are better!

Regardless of who is seeking better performance, whether consumers or game companies, the obvious question is how to determine which models are fastest. Many folks rely on device vendors’ claims about how much faster the new model is. Unfortunately, vendors don’t always specify what they base those claims on. Even when they do, it’s hard to know whether the numbers are accurate and applicable to how you use your device.

The key part of any answer is performance tools that are representative, dependable, and open.

  • Representative – Performance tools need to have realistic workloads that do things that you care about.
  • Dependable – Good performance tools run reliably and produce repeatable results, both of which require that significant work go into their development and testing. (A simple repeatability check appears in the sketch after this list.)
  • Open – Performance tools that allow people to access the source code, and even contribute to it, keep things above board and reassure you that you can rely on the results.
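To make the “dependable” criterion a bit more concrete, here is a minimal Kotlin sketch of the kind of run-to-run repeatability check anyone can apply to a benchmark’s scores. The sample scores and the 3 percent threshold are arbitrary examples, not XPRT requirements.

```kotlin
import kotlin.math.sqrt

// Minimal repeatability check: the coefficient of variation (standard deviation
// divided by the mean) across repeated runs. The sample scores and the 3% threshold
// are arbitrary examples, not XPRT specifications.
fun coefficientOfVariation(scores: List<Double>): Double {
    val mean = scores.average()
    val variance = scores.map { (it - mean) * (it - mean) }.average()
    return sqrt(variance) / mean
}

fun main() {
    val runs = listOf(251.0, 248.0, 253.0, 249.0, 250.0) // hypothetical benchmark scores
    val cv = coefficientOfVariation(runs)
    println("Coefficient of variation: %.2f%%".format(cv * 100))
    println(if (cv < 0.03) "Run-to-run results look repeatable." else "Results vary more than expected; investigate.")
}
```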

Our goal with the XPRTs is to provide performance tools that meet all these criteria. WebXPRT 3 and all our other XPRTs exist to help accurately reveal how devices perform. You can run them yourself or rely on the wealth of results that we and others have collected on a wide array of devices.

The best thing about good performance tools is that everyone, even vendors, can use them. I sincerely hope that you find the XPRTs helpful when you make your next technology purchase.

Bill

AIXPRT: We want your feedback!

Today, we’re publishing the AIXPRT Request for Comments (RFC) document. The RFC explains the need for a new artificial intelligence (AI)/machine learning benchmark, shows how the BenchmarkXPRT Development Community plans to address that need, and provides preliminary design specifications for the benchmark.

We’re seeking feedback and suggestions from anyone interested in shaping the future of machine learning benchmarking, including those not currently part of the Development Community. Usually, only members of the BenchmarkXPRT Development Community have access to our RFCs and the opportunity to provide feedback. However, because we’re seeking input from non-members who have expertise in this field, we will be posting this RFC in the New events & happenings section of the main BenchmarkXPRT.com page and making it available at AIXPRT.com.

We welcome input on all aspects of the benchmark, including scope, workloads, metrics and scores, UI design, and reporting requirements. We will accept feedback through May 13, 2018, after which BenchmarkXPRT Development Community administrators will evaluate the responses and publish the final design specification.

Please share the RFC with anyone interested in machine learning benchmarking, and send us your feedback before May 13.

Justin
