

WebXPRT 3 arrives next week!

After much development work and testing, we’re happy to report that we’ll be releasing WebXPRT 3 early next week!

Here are the final workload names and descriptions:

1) Photo Enhancement: Applies three effects, each using Canvas, to two photos (see the sketch after this list).
2) Organize Album Using AI: Detects faces and classifies images using the ConvNetJS neural network library.
3) Stock Option Pricing: Calculates and displays graphic views of a stock portfolio using Canvas, SVG, and dygraphs.js.
4) Encrypt Notes and OCR Scan: Encrypts notes in local storage and scans a receipt using optical character recognition.
5) Sales Graphs: Calculates and displays multiple views of sales data using InfoVis and d3.js.
6) Online Homework: Performs science and English homework using Web Workers and Typo.js spell check.
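To give a sense of the per-pixel Canvas work a task like Photo Enhancement exercises, here’s a minimal sketch of a simple grayscale filter. It’s illustrative only, assuming a canvas that already contains an image; it isn’t taken from WebXPRT 3’s workload code, and the actual effects, images, and libraries differ.

    // Minimal sketch of a per-pixel Canvas effect (grayscale). Illustrative only;
    // not WebXPRT 3's actual workload code.
    function applyGrayscale(canvas) {
      const ctx = canvas.getContext('2d');
      const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
      const pixels = image.data; // RGBA bytes, four per pixel
      for (let i = 0; i < pixels.length; i += 4) {
        // Luminance-weighted average of the red, green, and blue channels
        const gray = 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
        pixels[i] = pixels[i + 1] = pixels[i + 2] = gray;
      }
      ctx.putImageData(image, 0, 0);
    }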

As we mentioned in an earlier blog post, the updated photo workloads contain new images and a deep learning task. We also added an optical character recognition task to the Local Notes workload and combined part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in the new Online Homework workload.

Longtime WebXPRT users will immediately notice a completely new UI. We worked to improve the UI’s appearance on smaller devices such as phones, and we think testers will find it easier to navigate.

Testers can still choose to run individual workloads, and we’re once again offering English, German, and Simplified Chinese language options.

Below my sig, I’ve included pictures of WebXPRT 3’s start test and results pages, as well as an in-test screen capture.

We’re thankful for all the interest in WebXPRT 3 so far and believe the new version of WebXPRT will be as relevant and reliable as WebXPRT 2013 and 2015—and easier to use. We look forward to seeing new results submissions next week!

Justin

WebXPRT 3 start page

WebXPRT 3 in test

Results page

How to submit results for the WebXPRT Processor Comparison Chart

The WebXPRT 2015 Processor Comparison Chart is in its second month, and we’re excited to see that people are browsing the scores. We’re also starting to receive more WebXPRT score submissions for publication, so we thought it would be a good time to describe how that process works.

Unlike sites that publish any results they receive, we hand-select results from internal lab testing, end-of-test user submissions, and reliable tech media sources. In each case, we evaluate whether the score is consistent with general expectations. For sources outside our lab, that evaluation includes confirming that the submission contains enough detailed system information for us to judge whether the score is plausible. We do this for every score on the WebXPRT results page and the general XPRT results page.

If we decide to publish a WebXPRT result, that score automatically appears in the processor comparison chart as well. If you would like to submit your score, the submission process is quick and easy. At the end of the WebXPRT test run, click the Submit button below the individual workload scores, complete the short submission form, and click Submit again. The screenshot below shows how the form would look if I submitted a score at the end of a WebXPRT run on my personal system.

WebXPRT results submission

After you submit your score, we’ll contact you to confirm the name we should display as the source for the data. You can use one of the following:

  • Your first and last name
  • “Independent tester,” if you wish to remain anonymous
  • Your company’s name, provided that you have permission to submit the result in the company’s name. If you want to use a company name, we ask that you provide your work email address.


We will not publish any additional information about you or your company without your permission.

We look forward to seeing your score submissions, and if you have suggestions for the processor chart or any other aspect of the XPRTs, let us know!

Justin

Introducing the WebXPRT 2015 Processor Comparison Chart

Today, we’re excited to announce the WebXPRT 2015 Processor Comparison Chart, a new tool that makes it easier to access hundreds of PT-curated, real-world performance scores from a wide range of devices covering everything from TVs to phones to tablets to PCs.

The chart offers a quick way to browse and compare WebXPRT 2015 results grouped by processor. Unlike benchmark-score charts that may contain results from unknown sources, PT hand-selected each of the results from internal lab testing and reliable tech media sources. If we’ve published multiple scores for an individual processor, the score presented in the chart is the average of those scores. Users can hover over and click individual score bars for additional information about the test results and test sources for each processor.

WebXPRT proc chart capture
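For readers curious about the averaging step, here’s a minimal sketch of how scores could be grouped by processor and averaged. The data and field names are hypothetical placeholders; this isn’t the chart’s actual implementation.

    // Minimal sketch: group published scores by processor and average them.
    // The results array and its field names are hypothetical examples.
    const results = [
      { processor: 'Processor A', score: 180 },
      { processor: 'Processor A', score: 190 },
      { processor: 'Processor B', score: 250 },
    ];

    function averageByProcessor(rows) {
      const totals = new Map(); // processor -> { sum, count }
      for (const { processor, score } of rows) {
        const entry = totals.get(processor) || { sum: 0, count: 0 };
        entry.sum += score;
        entry.count += 1;
        totals.set(processor, entry);
      }
      return [...totals].map(([processor, { sum, count }]) =>
        ({ processor, averageScore: sum / count }));
    }

    console.log(averageByProcessor(results));
    // [ { processor: 'Processor A', averageScore: 185 },
    //   { processor: 'Processor B', averageScore: 250 } ]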

We think the WebXPRT Processor Comparison Chart will be a valuable resource for folks interested in performance testing and product evaluation, but the current iteration is only the beginning. We plan to add capabilities on a regular basis, such as detailed filtering and enhanced viewing and navigation options. It’s also possible that we may integrate other XPRT benchmarks down the road.

Most importantly, we want the chart to be a great asset for its users, and that’s where you come in. We’d love to hear your feedback on the features and types of data you’d like to see. If you have suggestions, please let us know!

Justin

WebXPRT and user-agent strings

After running WebXPRT in Microsoft Edge, a tester recently asked why the browser information field on the results page displayed “Chrome 52 – Edge 15.15063.” It’s a good question: why would the benchmark report Chrome 52 when Microsoft Edge is the browser under test? The answer lies in understanding user-agent strings and the way that WebXPRT gathers specific bits of information.

When browsers request a web page from a hosting server, they send an array of basic header information that allows the server to determine the client’s capabilities and the best way to provide the requested content. One of these headers, the user-agent, presents a string of tokens that provide information about the application making the request, the operating system and version, rendering engine compatibility, and browser platform details. In effect, the user-agent string is a way for a browser to tell the hosting server the full extent of its capabilities.

When WebXPRT attempts to identify a browser, it references the browser token in the user-agent string.

The process is generally straightforward, but in some cases, browsers spoof information from other browsers in their user-agent strings, which makes accurate browser detection difficult. The reasons for this are complex, but they involve web development practices and the fact that some web pages are not designed to recognize and work well with new or less-popular browsers. When we released WebXPRT 2015, Microsoft Edge was new. The Edge team wanted to make sure that as much advanced web content as possible would be available to Edge users, so they created a user-agent string that declared itself to be several different browsers at once.

I can see this in action if I check Edge’s user-agent string on my system. Currently, it reports “Mozilla/5.0 (Windows NT 10.0; Win64; x64; ServiceUI 9) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36 Edge/15.15063.” So, because of the way Edge’s user-agent string is constructed, and the way WebXPRT parses that information, the browser information field on WebXPRT’s results page will report “Chrome 52 – Edge 15.15063” on my system.
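To illustrate how that kind of label can come about, here’s a rough sketch of a detection routine that simply looks for browser tokens in the string. It’s not WebXPRT’s actual parsing code; it just shows why a parser that matches the Chrome token as well as the Edge token ends up reporting both.

    // Rough sketch only; not WebXPRT's actual parser. Legacy Edge advertises
    // a Chrome token in its user-agent string, so a token-based check that
    // looks for both tokens reports both.
    const ua = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; ServiceUI 9) ' +
      'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 ' +
      'Safari/537.36 Edge/15.15063';

    function describeBrowser(userAgent) {
      const chrome = userAgent.match(/Chrome\/(\d+)/);  // major version only
      const edge = userAgent.match(/Edge\/([\d.]+)/);   // full Edge build

      if (chrome && edge) return 'Chrome ' + chrome[1] + ' - Edge ' + edge[1];
      if (edge) return 'Edge ' + edge[1];
      if (chrome) return 'Chrome ' + chrome[1];
      return 'Unknown browser';
    }

    console.log(describeBrowser(ua)); // "Chrome 52 - Edge 15.15063"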

To try this on your system in Edge, select the ellipsis icon in the top right-hand corner, and then select F12 Developer Tools. Next, select the Console tab and run javascript:alert(navigator.userAgent). A popup window will display the user-agent string.

You can find instructions for checking the user-agent string in other browsers here: http://techdows.com/2016/07/edge-ie-chrome-firefox-user-agent-strings.html.

In the next version of WebXPRT, we’ll work to refine the way that the test parses user-agent strings, and provide more accurate system information for testers. If you have any questions or suggestions regarding this topic, let us know!

Justin

Best practices

Recently, a tester wrote in and asked for help determining why they were seeing different WebXPRT scores on two tablets with the same hardware configuration. The scores differed by approximately 7.5 percent. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behavior and environments. While some degree of variability is natural, the question provides us with a great opportunity to talk about the basic benchmarking practices we follow in the XPRT lab, practices that contribute to the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. While we present them in the context of WebXPRT’s focus on evaluating browser performance, several of them apply to other benchmarks as well.

  • Test with clean images: We use an out-of-box (OOB) method for testing XPRT Spotlight devices. OOB testing means that, other than the initial OS and browser updates that users are likely to run after first turning on the device, we change as little as possible before testing. We want to assess the performance that buyers are likely to see when they first purchase the device, before they install additional apps and utilities. While OOB testing is not appropriate for every scenario, the key is to avoid testing a device that’s bogged down with programs that unnecessarily influence results.
  • Turn off updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always account for update settings.
  • Get a feel for system processes: Depending on the system and the OS, quite a lot of system-level activity can be going on in the background after you turn it on. As much as possible, we like to wait for system activity to settle to a stable, idle baseline before kicking off a test. If we start testing immediately after booting the system, we often see higher variability in the first run before the scores start to tighten up.
  • Disclosure is not just about hardware: Most people know that different browsers will produce different performance scores on the same system. However, testers aren’t always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always worthwhile to record and disclose the extended browser version number for each test run. The same principle applies to any other relevant software.
  • Use more than one data point: Because of natural variability, our standard practice in the XPRT lab is to publish a score that represents the median from at least three to five runs (see the sketch after this list). If you run a benchmark only once, and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions.
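Here’s a minimal sketch of the kind of median calculation we mean; the run scores are made-up numbers rather than real WebXPRT results.

    // Minimal sketch: compute the median of several benchmark runs.
    // The scores below are made-up numbers, not real WebXPRT results.
    function median(scores) {
      const sorted = [...scores].sort((a, b) => a - b);
      const mid = Math.floor(sorted.length / 2);
      // Average the two middle values when the count is even.
      return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
    }

    const runs = [212, 208, 215, 211, 209]; // five runs on the same system
    console.log('Median score: ' + median(runs)); // Median score: 211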


We hope those tips will make testing a little easier for you. If you have any questions about the XPRTs, or about benchmarking in general, feel free to ask!

Justin

XPRTs around the world

This week, we posted an updated version of our “XPRTs around the world” infographic. From time to time, we like to give readers a broader view of the impact that the XPRTs are having around the world, and the infographic shows just how far and wide the XPRTs’ reach has grown.

Here are some numbers from the latest update:

  • The XPRTs have been mentioned more than 7,800 times on over 2,500 unique sites.
  • Those mentions include more than 6,800 articles and reviews.
  • Those mentions originated in over 400 cities located in 58 countries on six continents. If you’re a tech reviewer based in Antarctica, we’re counting on you to help us make it a clean sweep!
  • The BenchmarkXPRT Development Community now includes 203 members from 73 companies and organizations around the world.


In addition to the reach numbers, we’re excited that the XPRTs have now delivered more than 250,000 real-world results!

If you’re familiar with the run counter on WebXPRT.com, you may have noticed that the WebXPRT run tally is rising quickly. Starting with the original release of WebXPRT in early 2013, it took more than three and a half years for the combined run tally of WebXPRT 2013 and WebXPRT 2015 to reach 100,000. In the nine months since that happened, users have added 60,000 runs. The pace is picking up significantly!

We’re grateful for everyone who’s helped us get this far. Here’s to another quarter-million runs and downloads!

Justin
