
Month: May 2018

The WebXPRT 3 results calculation white paper is now available

As we’ve discussed in prior blog posts, transparency is a core value of our open development community. A key part of being transparent is explaining how we design our benchmarks, why we make certain development decisions, and how the benchmarks actually work. This week, to help WebXPRT 3 testers understand how the benchmark calculates results, we published the WebXPRT 3 results calculation and confidence interval white paper.

The white paper explains what the WebXPRT 3 confidence interval is, how it differs from typical benchmark variability, and how the benchmark calculates the individual workload scenario and overall scores. The paper also provides an overview of the statistical techniques WebXPRT uses to translate raw times into scores.

To supplement the white paper’s overview of the results calculation process, we’ve also published a spreadsheet that shows the raw data from a sample test run and reproduces the calculations WebXPRT uses.
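The paper and spreadsheet are the authoritative references, but to give a rough feel for the general shape of a calculation like this, here's a minimal Python sketch. The workload names, calibration times, scale factor, t value, and iteration counts below are hypothetical placeholders, not WebXPRT's actual constants; the spreadsheet walks through the real values step by step.

```python
import statistics
from math import prod, sqrt

# Hypothetical calibration times (ms) and scale factor, for illustration only.
# WebXPRT's real constants are documented in the white paper and spreadsheet.
CALIBRATION_MS = {"photo_enhancement": 900.0, "stock_option_pricing": 750.0}
SCALE = 100.0

def workload_score(raw_times_ms, calibration_ms):
    """Turn raw iteration times into a score: faster runs yield higher scores."""
    return SCALE * calibration_ms / statistics.median(raw_times_ms)

def overall_score(workload_scores):
    """Combine the individual workload scores with a geometric mean."""
    return prod(workload_scores) ** (1.0 / len(workload_scores))

def ci_95(per_iteration_scores, t_crit=2.776):
    """Half-width of a t-based 95% confidence interval for the mean score.
    t_crit=2.776 assumes five iterations (four degrees of freedom)."""
    n = len(per_iteration_scores)
    return t_crit * statistics.stdev(per_iteration_scores) / sqrt(n)

# Made-up raw times from two workloads, five iterations each
raw = {"photo_enhancement": [870, 902, 895, 880, 910],
       "stock_option_pricing": [760, 748, 755, 770, 752]}
scores = [workload_score(t, CALIBRATION_MS[w]) for w, t in raw.items()]
print(f"Overall score: {overall_score(scores):.1f}")

# Made-up per-iteration overall scores and their 95% CI half-width
iterations = [99.2, 100.4, 99.8, 100.1, 99.6]
print(f"95% CI: +/-{ci_95(iterations):.2f}")
```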

The paper and spreadsheet are both available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know, and be sure to check out our other XPRT white papers.

Justin

More on the way for the XPRT Weekly Tech Spotlight

In the coming months, we’ll continue to add more devices and helpful features to the XPRT Weekly Tech Spotlight. We’re especially interested in adding data points and visual aids that make it easier to quickly understand the context of each device’s test scores. For instance, those of us who are familiar with WebXPRT 3 scores know that an overall score of 250 is pretty high, but site visitors who are unfamiliar with WebXPRT probably won’t know how that score compares to scores for other devices.

We designed Spotlight to be a source of objective data, in contrast to sites that provide subjective ratings for devices. As we pursue our goal of helping users make sense of scores, we want to maintain this objectivity and avoid presenting information in ways that could be misleading.

Introducing comparison aids to the site is forcing us to make some tricky decisions. Because we value input from XPRT community members, we’d love to hear your thoughts on one of the questions we’re facing: How should our default view present a device’s score?

We see three options:

1) Present the device’s score in relation to the overall high and low scores for that benchmark across all devices.
2) Present the device’s score in relation to the overall high and low scores for that benchmark across the broad category of devices to which that device belongs (e.g., phones).
3) Present the device’s score in relation to the overall high and low scores for that benchmark across a narrower sub-category of devices to which that device belongs (e.g., high-end flagship phones).

To think this through, consider WebXPRT, which runs on desktops, laptops, phones, tablets, and other devices. Typically, the WebXPRT scores for phones and tablets are lower than scores for desktop and laptop systems. The first approach helps to show just how fast high-end desktops and laptops handle the WebXPRT workloads, but it could make a phone or tablet look slow, even if its score was good for its category. The second approach would prevent unfair default comparisons between different device types but would still present comparisons between devices that are not true competitors (e.g., flagship phones vs. budget phones). The third approach is the most careful, but would introduce an element of subjectivity because determining the sub-category in which a device belongs is not always clear cut.
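To make the tradeoff concrete, here's a minimal Python sketch of the range-based presentation behind all three options. The device scores are made up, and the only thing that changes between the options is which group of scores you pass in:

```python
def relative_position(score, group_scores):
    """Where a score falls within a group's observed range, from 0.0 to 1.0.

    group_scores: the comparison group (option 1: all devices;
    option 2: a broad category; option 3: a narrower sub-category).
    """
    low, high = min(group_scores), max(group_scores)
    if high == low:
        return 1.0  # degenerate range: only one distinct score
    return (score - low) / (high - low)

# Made-up WebXPRT scores: the same phone looks very different by group.
all_devices = [55, 78, 120, 164, 210, 268]
phones_only = [55, 78, 96, 110, 120]
print(relative_position(120, all_devices))  # ~0.31 -- looks slow
print(relative_position(120, phones_only))  # 1.0  -- top of its category
```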

Do you have thoughts on this subject, or recommendations for Spotlight in general? If so, let us know.

Justin

The XPRTs and GDPR

If you’re like me, your inbox has been receiving a constant stream of “updates to our privacy policy” messages from seemingly every company you’ve ever interacted with. The timing is not coincidental: most of the notifications are the result of companies taking steps to comply with the European Union’s General Data Protection Regulation (GDPR), which goes into effect next Friday, May 25th.

The GDPR was designed to introduce clarity and consistency to privacy laws in Europe, protect the data privacy rights of citizens, and change the way corporations approach data privacy issues. The gist is that if any party collects any personal data from European citizens or organizations, they must follow specific guidelines:

  • They may only collect data for clearly stated, legitimate reasons.
  • They must identify what data they’re collecting.
  • They must allow data subjects to see the data they’ve collected.
  • They must provide a quick, easy way for data subjects to have their data deleted.


Europeans use the XPRTs quite a bit, and some of our community members live in Europe, so we're taking the necessary steps to comply with the new regulations. Thankfully, we don't have to change very much. We built respect for testers' and community members' privacy rights into our processes from the very beginning, and we are committed to clarity and transparency regarding our data collection practices.

Our blog post on data privacy from last month contains more details about how the XPRTs handle your data, and we’ll update our privacy policies and data collection notices where necessary to make things as clear as possible.

We also want to make sure community members know that if they would like to leave the community and have us delete their name, username, and email address from our records, they can simply send us a message at benchmarkxprtsupport@principledtechnologies.com and we'll fulfill the request within 72 hours.

If you have any questions or comments about this topic, please let us know.

Justin

An update on HDXPRT development

It’s been a while since we updated the community on HDXPRT development, and we’ve made a lot of progress since then. Here’s a quick summary of where we are and what to expect in the coming months.

The benchmark’s official name will be HDXPRT 4, and we’re sticking with the basic plan we outlined in the blog, which includes updating the benchmark’s real-world trial applications and workload content and improving the UI.

We’ve updated Adobe Photoshop Elements, Audacity, CyberLink Media Espresso, and HandBrake to more contemporary versions, but decided the benchmark will no longer use Apple iTunes. We sometimes encountered problems with iTunes during testing, and because we can complete the audio-related workloads using Audacity, we decided that it was OK to remove iTunes from the test. Please contact us if you have any concerns about this decision.

In addition to the photo editing, music editing, and video conversion workloads from prior versions of the benchmark, HDXPRT 4 includes two new Photoshop Elements scenarios. The first uses an AI tool that corrects closed eyes in photos, and the second stitches seven separate photos into a single panorama. For the photo and video workloads, we produced new high-resolution photo content and 4K GoPro video footage, respectively.
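To be clear about the mechanics: HDXPRT installs the real applications and times how long they take to complete scripted tasks. Purely to illustrate that timing idea, here's a minimal Python sketch that times a command-line HandBrake conversion. It assumes HandBrakeCLI is installed and on the PATH, the file names are placeholders, and the benchmark itself drives the applications rather than any CLI:

```python
import subprocess
import time

def time_conversion(src, dst, preset="Fast 1080p30"):
    """Time one HandBrake transcode, the way a harness might time a
    video conversion task. File names and preset are placeholders."""
    start = time.perf_counter()
    subprocess.run(
        ["HandBrakeCLI", "-i", src, "-o", dst, "--preset", preset],
        check=True,           # raise if the conversion fails
        capture_output=True,  # keep HandBrake's progress output quiet
    )
    return time.perf_counter() - start

elapsed = time_conversion("gopro_4k_sample.mp4", "converted.mp4")
print(f"Conversion took {elapsed:.1f} s")
```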

For the UI, our goal is to implement a clean and functional design and align it more closely with the themes, colors, and font styles we’ll be implementing in the XPRTs moving forward. The WebXPRT 3 UI will give you a feel for the direction the HDXPRT UI is headed.

Some of these details may change as we test preliminary builds, but we wanted to give you a better sense of where HDXPRT is headed. We’re not ready to share a date for the community preview, but will provide more details as the day approaches.

If you have any questions or comments about HDXPRT, please let us know. It's not too late for us to consider your input for HDXPRT 4.

Justin

The value of speed

I was reading an interesting article about how high-end smartphones like the iPhone X, Pixel 2 XL, and Galaxy S8 generate more in-game revenue than cheaper phones do.

One line stood out to me: “With smartphones becoming faster, larger and more capable of delivering an engaging gaming experience, these monetization key performance indicators (KPIs) have begun to increase significantly.”

It turns out the game companies totally agree with the rest of us that faster devices are better!

Regardless of who is seeking better performance—consumers or game companies—the obvious question is how you determine which models are fastest. Many folks rely on device vendors' claims about how much faster the new model is. Unfortunately, vendors don't always specify what those claims are based on. Even when they do, it's hard to know whether the numbers are accurate and applicable to how you use your device.

The key part of any answer is performance tools that are representative, dependable, and open.

  • Representative – Performance tools need to have realistic workloads that do things that you care about.
  • Dependable – Good performance tools run reliably and produce repeatable results, both of which require that significant work go into their development and testing. (One quick way to quantify repeatability is sketched after this list.)
  • Open – Performance tools that allow people to access the source code, and even contribute to it, keep everything aboveboard and reassure you that you can rely on the results.
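The "dependable" point is measurable. Here's a minimal Python sketch of one common repeatability check, the coefficient of variation across repeated runs; the scores are made up:

```python
import statistics

def coefficient_of_variation(scores):
    """Run-to-run spread as a fraction of the mean; lower means more repeatable."""
    return statistics.stdev(scores) / statistics.mean(scores)

runs = [251, 248, 253, 250, 249]  # made-up scores from five repeated runs
print(f"CV = {coefficient_of_variation(runs):.2%}")  # ~0.77% -- tight repeatability
```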

Our goal with the XPRTs is to provide performance tools that meet all these criteria. WebXPRT 3 and all our other XPRTs exist to help accurately reveal how devices perform. You can run them yourself or rely on the wealth of results that we and others have collected on a wide array of devices.

The best thing about good performance tools is that everyone, even vendors, can use them. I sincerely hope that you find the XPRTs helpful when you make your next technology purchase.

Bill
