
Find the perfect tech gift with the XPRT Spotlight Black Friday Showcase

With the biggest shopping day of the year fast approaching, you might be feeling overwhelmed by the sea of tech gifts to choose from. Luckily, the XPRTs are here to help. We’ve gathered the product specs and performance facts for the hottest tech devices in one convenient place—the XPRT Spotlight Black Friday Showcase. This free shopping tool provides side-by-side comparisons of some of the season’s most coveted smartphones, laptops, Chromebooks, tablets, and PCs. Most importantly, it helps you make informed buying decisions so you can breeze through this season’s holiday shopping.

Want to know how the Google Pixel 2 compares to the Apple iPhone X or Samsung Galaxy Note 8 in web browsing performance or screen size? Simply select any two devices and click the compare button to see how they stack up against each other. You can also search by device type if you’re interested in a specific form factor such as consoles or tablets.

The Showcase doesn’t go away after Black Friday. We’ll rename it the XPRT Holiday Buying Guide and continue to add devices throughout the shopping season. So be sure to check back in and see how your tech gifts measure up.

If this is your first time reading about the XPRT Weekly Tech Spotlight, here’s a little background. Our hands-on testing process equips consumers with accurate information about how devices function in the real world. We test devices using our industry-standard BenchmarkXPRT tools: WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT. In addition to benchmark results, we include photographs, specs, and prices for all products. New devices come online weekly, and you can browse the full list of the almost 100 devices we’ve featured to date on the Spotlight page.

If you represent a device vendor and want us to feature your product in the XPRT Weekly Tech Spotlight, please visit the website for more details.

Do you have suggestions for the Spotlight page or device recommendations? Let us know!

Justin

Nothing to hide

I recently saw a ZDNet article by my old friend Steven J. Vaughan-Nichols about how NetMarketShare and StatCounter reported a significant jump in operating system market share for Linux and Chrome OS. One frustration Vaughan-Nichols alludes to in the article is the lack of transparency into how these firms calculate market share, which leaves him no way to gauge how reliable their numbers are. Because neither NetMarketShare nor StatCounter disclosed its methods, there’s no sure way for interested observers to verify the figures. Vaughan-Nichols prefers the data from the federal government’s Digital Analytics Program (DAP), which makes its data freely available so you can run your own calculations. Transparency generates trust.

Transparency is a core value for the XPRTs. We’ve written before about how statistics can be misleading. That’s why we’ve always disclosed exactly how the XPRTs calculate performance results, and the way BatteryXPRT calculates battery life. It’s also why we make each XPRT’s source code available to community members. We want to be open and honest about how we do things, and our open development community model fosters the kind of constructive feedback that helps to continually improve the XPRTs.
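
As a small illustration of the kind of disclosure we’re talking about, here’s a sketch of one common way a benchmark can compute an overall score: a geometric mean of workload times normalized to a calibration device. The numbers and scale factor below are made up, and the precise formulas each XPRT uses are spelled out in its own documentation:

    // Normalize each workload time against a calibration device, then take
    // the geometric mean and scale it. All values here are hypothetical.
    function overallScore(times, calibrationTimes, scale) {
      var product = 1;
      for (var i = 0; i < times.length; i++) {
        product *= calibrationTimes[i] / times[i]; // faster than baseline gives a ratio > 1
      }
      return scale * Math.pow(product, 1 / times.length);
    }
    // Two workloads, times in milliseconds:
    console.log(overallScore([1200, 800], [1500, 900], 100).toFixed(1)); // about 118.6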

We’d love for you to be a part of that process, so if you have questions or suggestions for improvement, let us know. If you’d like to gain access to XPRT source code and previews of upcoming benchmarks, today is a great day to join the community!

Eric

Glimpses of the next WebXPRT

Development work on the next version of WebXPRT is well underway, and we think it’s a good time to offer a glimpse of what’s to come.

We’ve updated the photo-related workloads with new images and are experimenting with adding a new task to the Organize Album workload. The task uses ConvNetJS, a JavaScript library for training and running neural networks directly in the browser, to classify a set of images. It’s an example of the type of integrated deep learning task that will be showing up in all sorts of devices in the years to come.
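
For readers curious about what in-browser classification looks like, here’s a minimal sketch using ConvNetJS. The network shape, class count, and labels are illustrative assumptions on our part, not the actual WebXPRT model:

    var layer_defs = [
      { type: 'input', out_sx: 32, out_sy: 32, out_depth: 3 },
      { type: 'conv', sx: 5, filters: 16, stride: 1, pad: 2, activation: 'relu' },
      { type: 'pool', sx: 2, stride: 2 },
      { type: 'softmax', num_classes: 4 } // e.g., hypothetical album categories
    ];
    var net = new convnetjs.Net();
    net.makeLayers(layer_defs);
    // A real workload would load trained weights instead, e.g., net.fromJSON(pretrained)
    var x = new convnetjs.Vol(32, 32, 3); // image pixel data would be copied in here
    var probs = net.forward(x);           // softmax output: one probability per class
    var best = 0;
    for (var i = 1; i < probs.w.length; i++) {
      if (probs.w[i] > probs.w[best]) { best = i; }
    }
    console.log('Predicted class index: ' + best);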

We’re also testing an additional task in the Local Notes workload using Tesseract.js, a popular OCR (optical character recognition) engine. Our scenario uses OCR technology to scan images of purchase receipts and gather relevant information.
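
As a rough illustration of the approach (not the actual WebXPRT code), the snippet below runs Tesseract.js on a hypothetical receipt image and pulls a total out of the recognized text with a regular expression. Note that the shape of the result object differs between Tesseract.js versions:

    Tesseract.recognize('receipt.png', 'eng')
      .then(function (result) {
        // v1 of Tesseract.js exposes result.text; v2+ exposes result.data.text
        var text = result.data ? result.data.text : result.text;
        var match = text.match(/total\s*\$?([\d.,]+)/i); // e.g., "TOTAL $42.17"
        console.log('Recognized total: ' + (match ? match[1] : 'not found'));
      });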

We’re testing these new tasks now and will include them only once we’re confident that they produce consistent and reliable results without extending the benchmark’s runtime unnecessarily. Consequently, the next WebXPRT might contain variations of these tasks, or other new technologies altogether. We mention them now to offer some insight into the types of workload enhancements we’re considering.
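
For a sense of what “consistent” can mean in practice, one simple yardstick is the coefficient of variation across repeated runs of a candidate task. The sketch below uses made-up timings, and any particular pass/fail threshold would be an assumption on our part rather than a WebXPRT rule:

    // Coefficient of variation: standard deviation divided by the mean.
    function coefficientOfVariation(runTimes) {
      var mean = runTimes.reduce(function (a, b) { return a + b; }) / runTimes.length;
      var variance = runTimes.reduce(function (sum, t) {
        return sum + Math.pow(t - mean, 2);
      }, 0) / runTimes.length;
      return Math.sqrt(variance) / mean;
    }
    var runs = [2140, 2185, 2150, 2210, 2165]; // hypothetical task times in ms
    console.log('CV: ' + (coefficientOfVariation(runs) * 100).toFixed(1) + '%'); // about 1.2%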

We’ve been working hard on the new WebXPRT UI as well. The image below shows the new start page from an early development build. We’re still making adjustments, so the final product will probably differ, but it should give you a sense of the new UI’s clean look.

WebXPRT screenshot

As we’ve said before, we’re committed to making sure that WebXPRT runs in most browsers and produces results that are useful for comparing web browsing performance across a wide variety of devices. We appreciate the feedback we’ve gotten so far, and are happy to receive more. Do you have ideas for the next WebXPRT? Let us know!

Justin

Looking for performance clues

We’ve written before about how the operating system and other software can influence test scores and even battery life. Benchmarks like the XPRTs provide overall results, but teasing out which factors affect those results may require some detective work. The key is to collect individual data points as evidence of what may be causing performance changes.

The Apple iOS 11 rollout last month is an excellent example of the effect of software on device performance. Angry tweets started almost immediately after the update, claiming that iOS 11 drained device batteries. iPhone users here at PT experienced similar issues. What was the cause of that performance drop? The hardware remained the same. So, did software cause the problem?

Less than a week after the rollout, Mashable published an explanation of possible causes. The article quotes research from mobile security firm Wandera showing that, for the 50,000 “moderate to heavy iPhone and iPad users” in the study, devices running iOS 11 burned through their batteries at much faster rates than the same devices running iOS 10. The article cites two possible causes:

    • Devices often re-categorize the files stored on them after every new OS install, which may account for some of the battery issues.
    • Many apps were not yet optimized for iOS 11.

While these explanations make sense, a little more digging could get us closer to actually solving the mystery instead of guessing at the causes. After all, it’s also possible that people simply use iOS 11 differently than they used iOS 10. So, how could a dedicated sleuth investigate further? Benchmarks and hands-on testing that sift through various scenarios and configurations could bring us closer to explaining this mystery, and any other software-based performance anomaly. It’s a daunting task, though: changing only one variable at a time and recording the results is like pounding the streets and knocking on doors to solve a case.

In all likelihood, some combination of Apple iOS updates and application changes will improve battery life for iOS 11. In the meantime, we wish we had an XPRT that could test battery life on iOS. Who knows? Maybe a future version of WebXPRT will be able to help with that kind of sleuthing.

Eric

Machine learning performance tool update

Earlier this year we started talking about our efforts to develop a tool to help in evaluating machine learning performance. We’ve given some updates since then, but we’ve also gotten some questions, so I thought I’d do my best to summarize our answers for everyone.

Some have asked what kinds of algorithms we’ve been looking into. As we said in an earlier blog post, we’re looking at algorithms involved in computer vision, natural language processing, and data analytics, with a particular focus on different aspects of computer vision.

One seemingly trivial question we’ve received concerns the proposed name, MLXPRT. We have been thinking of this tool as evaluating machine learning performance, but folks have raised a valid concern that its scope may well be broader than that. Does machine learning include deep learning? What about other artificial intelligence approaches? I’ve certainly seen other approaches lumped into machine learning, probably because machine learning is the hot topic of the moment. It feels like everything is boasting, “Now with machine learning!”

While there is some value in being part of such a hot movement, we’ve begun to wonder if a more inclusive name, such as AIXPRT, would be better. We’d love to hear your thoughts on that.

We’ve also had questions about the kinds of devices the tool will run on. The short answer is that we’re concentrating on edge devices. While there is a need for server AI/ML tools, we’ve been focusing on evaluating the devices closest to end users. As a result, we’re looking at the inference side of machine learning rather than the training side.

Probably the most frequent thing we’ve been asked about is the timetable. While we’d hoped to have something available this year, we were overly optimistic. We’re currently working on a more detailed proposal of what the tool will be, and we aim to make that available by the end of this year. If we achieve that goal, our next one will be to have a preliminary version of the tool itself ready in the first half of 2018.

As always, we seek input from folks, like yourself, who are working in these areas. What would you most like to see in an AI/machine learning performance tool? Do you have any questions?

Bill 

What’s next for HDXPRT?

A few months ago, we discussed some initial ideas for the next version of HDXPRT, including updating the benchmark’s workloads and real-world trial applications and improving the look and feel of the UI. This week, we’d like to share more about the status of the HDXPRT development process.

We’re planning to keep HDXPRT’s three test categories: editing photos, editing music, and converting videos. We’re also planning to use the latest trial versions of the same five applications included in HDXPRT 2014: Adobe Photoshop Elements, Apple iTunes, Audacity, CyberLink MediaEspresso, and HandBrake. The new versions of each of these programs include features and capabilities that may enhance the HDXPRT workloads. For example, Adobe Photoshop Elements 2018 includes interesting new AI tools such as “Open Closed Eyes,” which purports to fix photos ruined by subjects who blinked at the wrong time. We’re evaluating whether any of the new technologies on offer will be a good fit for HDXPRT.

We’re also evaluating how the new Windows 10 SDK and Fall Creators Update will affect HDXPRT. It’s too early to discuss potential changes in any detail, but we know we’ll need to adapt to new development tools, and it’s possible that the Fluent Design System will affect the HDXPRT UI beyond the improvements we already had in mind.

As HDXPRT development progresses, we’ll continue to keep the community up to date. If you have suggestions or insights into the new Fall Creators Update or any of HDXPRT’s real-world applications, we’d love to hear from you! If you’re reading about HDXPRT for the first time, you can find out more about the test’s purpose, structure, and capabilities here.

Justin
