Month: August 2017

WebXPRT and user-agent strings

After running WebXPRT in Microsoft Edge, a tester recently asked why the browser information field on the results page displayed “Chrome 52 – Edge 15.15063.” It’s a good question; why would the benchmark report Chrome 52 when Microsoft Edge is the browser under test? The answer lies in understanding user-agent strings and the way that WebXPRT gathers specific bits of information.

When browsers request a web page from a hosting server, they send an array of basic header information that allows the server to determine the client’s capabilities and the best way to provide the requested content. One of these headers, the user-agent, presents a string of tokens that provide information about the application making the request, the operating system and version, rendering engine compatibility, and browser platform details. In effect, the user-agent string is a way for a browser to tell the hosting server the full extent of its capabilities.
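
To make that concrete, here's a minimal TypeScript sketch (running on Node.js, purely for illustration; it isn't part of WebXPRT or any particular hosting setup) of a server reading the User-Agent header from each incoming request:

```ts
import { createServer } from "http";

// Log the user-agent string each client sends, then reply.
// This only illustrates how a hosting server sees the header.
const server = createServer((req, res) => {
  const userAgent = req.headers["user-agent"] ?? "(no user-agent header)";
  console.log(`Request from: ${userAgent}`);
  res.end("ok");
});

server.listen(8080);
```

A server (or a benchmark's own script) can then parse that string to decide which content or code path best fits the client.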

When WebXPRT attempts to identify a browser, it references the browser token in the user-agent string.

The process is generally straightforward, but in some cases, browsers spoof information from other browsers in their user-agent strings, which makes accurate browser detection difficult. The reasons for this are complex, but they involve web development practices and the fact that some web pages are not designed to recognize and work well with new or less-popular browsers. When we released WebXPRT 2015, Microsoft Edge was new. The Edge team wanted to make sure that as much advanced web content as possible would be available to Edge users, so they created a user-agent string that declared itself to be several different browsers at once.

I can see this in action if I check Edge’s user-agent string on my system. Currently, it reports “Mozilla/5.0 (Windows NT 10.0; Win64; x64; ServiceUI 9) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063.” So, because of the way Edge’s user-agent string is constructed, and the way WebXPRT parses that information, the browser information field on WebXPRT’s results page will report “Chrome 52 – Edge 15.15063” on my system.
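
Here's a simple TypeScript sketch, purely for illustration (it is not WebXPRT's actual detection code), showing how a parser that looks for more than one browser token would surface both values from that exact string:

```ts
// Hypothetical parsing sketch only -- not WebXPRT's actual detection code.
const ua =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64; ServiceUI 9) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) " +
  "Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063";

// Return the version that follows a given browser token, if the token is present.
function tokenVersion(source: string, token: string): string | undefined {
  const match = source.match(new RegExp(`${token}/([\\d.]+)`));
  return match ? match[1] : undefined;
}

const chromeVersion = tokenVersion(ua, "Chrome"); // "52.0.2743.116"
const edgeVersion = tokenVersion(ua, "Edge");     // "15.15063"

// Because the string carries both tokens, a parser that reports everything it
// recognizes can reasonably label this browser "Chrome 52 - Edge 15.15063".
console.log(`Chrome ${chromeVersion?.split(".")[0]} - Edge ${edgeVersion}`);
```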

To try this on your system, open Edge, select the ellipsis icon in the top right-hand corner, and then select F12 Developer Tools. Next, select the Console tab and run javascript:alert(navigator.userAgent). A popup window will display the user-agent string.
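
For reference, this is the command from the steps above, plus a popup-free alternative you can paste into the same console:

```ts
// Either line works in the browser's developer console.
alert(navigator.userAgent);        // shows the user-agent string in a popup
console.log(navigator.userAgent);  // prints it to the console instead
```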

You can find instructions for finding the user-agent string in other browsers here: http://techdows.com/2016/07/edge-ie-chrome-firefox-user-agent-strings.html.

In the next version of WebXPRT, we’ll work to refine the way that the test parses user-agent strings, and provide more accurate system information for testers. If you have any questions or suggestions regarding this topic, let us know!

Justin

Introducing the XPRT Selector

We’re proud of all the XPRT tools, each of which serves a different purpose for the people who rely on them. But for those new to the XPRTs, we wanted a way to make it easy to tell which tool will best suit each person’s specific requirements. To that end, today we’re excited to announce the XPRT Selector, an interactive web tool that helps consumers, developers, manufacturers, and reviewers zero in on exactly which XPRT tool is the right match for their needs.

Using the XPRT Selector is easy. Simply spin the dials on the wheel to choose the categories that best describe you, the devices and operating systems you're working with, and the topic that interests you. Once you've aligned the dials, click Get results, and the Selector will present all the free XPRT tools and resources available to you. In addition to identifying the best tools for your needs, the XPRT Selector explains the purpose and capabilities of each tool.

To see the Selector in action, check out the short video below. You can take the XPRT Selector for a spin at http://www.principledtechnologies.com/benchmarkxprt/the-xprt-selector/.

Video: The XPRT Selector

All the XPRT tools have one thing in common: They help take the guesswork out of device evaluation and comparison, making them invaluable for anyone using, making, or writing about tech products. We think the XPRT Selector is a great addition to the fold!

Justin

The XPRT Spotlight Back-to-School Roundup

Today, we’re pleased to announce our second annual XPRT Spotlight Back-to-School Roundup, a free shopping tool that provides side-by-side comparisons of this school year’s most popular Chromebooks, laptops, tablets, and convertibles. We designed the Roundup to help buyers choosing devices for education, such as college students picking out a laptop or school administrators deciding on the devices for a grade. The Roundup can help make those decisions easier by gathering the product and performance facts these buyers need in one convenient place.

We tested the Roundup devices in our lab using the XPRT suite of benchmark tools. In addition to benchmark results, we provide photographs, device specs, and prices.

If you haven’t yet visited the XPRT Weekly Tech Spotlight page, check it out. Every week, the Spotlight highlights a new device, making it easier for consumers to shop for a new laptop, smartphone, tablet, or PC. Recent devices in the spotlight include the Samsung Chromebook Pro, Microsoft Surface Laptop, Microsoft Surface Pro, OnePlus 5, and Apple iPad Pro 10.5”.

Vendors interested in having their devices featured in the XPRT Weekly Tech Spotlight or next year’s Roundup can visit the website for more details.

We’re always working on ways to make the Spotlight an even more powerful tool for helping with buying decisions. If you have any ideas for the page or suggestions for devices you’d like to see, let us know!

Justin

Machine learning everywhere!

I usually think of machine learning as an emerging technology that will have a big impact on our lives in the not-too-distant future through applications like autonomous driving. Everywhere I look, however, I see areas where machine learning will affect our lives much sooner, in a myriad of smaller ways.

A recent article in Wired described one such example: the work some MIT and Google researchers have done using machine learning to retouch photos. I would normally do this by using a photo-editing program to adjust something like the color saturation of an entire photo. Their algorithm, by contrast, applies different filters to different parts of a photo, so faces in the foreground might get different treatment than the sunset in the background.

The researchers train the neural network using professionally retouched photos. I love the idea of a program that automatically improves the look of my less-than-professional personal photos.

What I found more exciting, however, is that the researchers could make their software efficient enough to run on a smartphone in a fraction of a second. That makes it significantly more useful.

This technology is not yet available, but it seems like something that could show up in existing photo or camera apps before long. I hope to see it soon on a smartphone in my hand!

All of that made me think about how we might incorporate such an algorithm in the XPRTs. When I started reading the article, I was thinking it might fit well in our upcoming machine-learning XPRT. By the time I finished it, however, I realized it might belong in a future version of one of the other XPRTs, like MobileXPRT. What do you think?

Bill

Best practices

Recently, a tester wrote in and asked for help determining why they were seeing different WebXPRT scores on two tablets with the same hardware configuration. The scores differed by approximately 7.5 percent. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behavior and environments. While some degree of variability is natural, the question provides us with a great opportunity to talk about the basic benchmarking practices we follow in the XPRT lab, practices that contribute to the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. While we frame them largely in the context of WebXPRT’s focus on evaluating browser performance, several of these practices apply to other benchmarks as well.

  • Test with clean images: We use an out-of-box (OOB) method for testing XPRT Spotlight devices. OOB testing means that, other than the initial OS and browser updates that users are likely to run after first turning on the device, we change as little as possible before testing. We want to assess the performance that retail buyers are likely to see when they first purchase the device, before they install additional apps and utilities. While the OOB approach is not appropriate for certain types of testing, the key principle is to avoid testing a device that’s bogged down with programs that unnecessarily influence results.
  • Turn off updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always account for update settings.
  • Get a feel for system processes: Depending on the system and the OS, quite a lot of system-level activity can go on in the background after you turn the device on. As much as possible, we like to wait for system activity to settle to a stable, idle baseline before kicking off a test. If we start testing immediately after booting the system, we often see higher variability in the first run before the scores start to tighten up.
  • Disclosure is not just about hardware: Most people know that different browsers will produce different performance scores on the same system. However, testers aren’t always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always worthwhile to record and disclose the extended browser version number for each test run. The same principle applies to any other relevant software.
  • Use more than one data point: Because of natural variability, our standard practice in the XPRT lab is to publish a score that represents the median of at least three to five runs (see the sketch after this list). If you run a benchmark only once and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions.
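
Here’s a quick sketch of that last point in TypeScript; the run scores are made-up placeholders rather than real WebXPRT results:

```ts
// Minimal sketch: report the median of several benchmark runs.
function median(scores: number[]): number {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0
    ? sorted[mid]                          // odd number of runs
    : (sorted[mid - 1] + sorted[mid]) / 2; // even number of runs
}

const runs = [231, 236, 229, 233, 235]; // placeholder scores from five runs
console.log(`Median score: ${median(runs)}`); // Median score: 233
```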


We hope those tips will make testing a little easier for you. If you have any questions about the XPRTs, or about benchmarking in general, feel free to ask!

Justin
