

Principled Technologies and the BenchmarkXPRT Development Community release WebXPRT 3, a free online performance evaluation tool for web-enabled devices

Durham, NC — Principled Technologies and the BenchmarkXPRT Development Community have released WebXPRT 3, a free online tool that gives objective information about how well a laptop, tablet, smartphone, or any other web-enabled device handles common web tasks. Anyone can go to WebXPRT.com and compare existing performance evaluation results on a variety of devices or run a simple evaluation test on their own.

WebXPRT 3 contains six HTML5- and JavaScript-based scenarios created to mirror common web browser tasks: Photo Enhancement, Organize Album Using AI, Stock Option Pricing, Encrypt Notes and OCR Scan, Sales Graphs, and Online Homework.

“WebXPRT is a popular, easy-to-use benchmark run by manufacturers, tech journalists, and consumers all around the world,” said Bill Catchings, co-founder of Principled Technologies, which administers the BenchmarkXPRT Development Community. “We believe that WebXPRT 3 is a great addition to WebXPRT’s legacy of providing relevant and reliable performance data for a wide range of devices.”

WebXPRT is part of the BenchmarkXPRT suite of performance evaluation tools, which also includes MobileXPRT, TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT. The XPRTs help users get the facts before they buy, use, or evaluate tech products such as computers, tablets, and phones.

To learn more about and join the BenchmarkXPRT Development Community, go to www.BenchmarkXPRT.com.

About Principled Technologies, Inc.
Principled Technologies, Inc. is a leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.PrincipledTechnologies.com.

Company Contact
Justin Greene
BenchmarkXPRT Development Community
Principled Technologies, Inc.
1007 Slater Road, Ste. 300
Durham, NC 27703
BenchmarkXPRTsupport@PrincipledTechnologies.com

Looking for performance clues

We’ve written before about how the operating system and other software can influence test scores and even battery life. Benchmarks like the XPRTs provide overall results, but teasing out which factors affect those results may require some detective work. The key is to collect individual data points as evidence of what may be causing performance changes.

The Apple iOS 11 rollout last month is an excellent example of the effect of software on device performance. Angry tweets started almost immediately after the update, claiming that iOS 11 drained device batteries. iPhone users here at PT experienced similar issues. What was the cause of that performance drop? The hardware remained the same. So, did software cause the problem?

Less than a week after the rollout, Mashable published an explanation of possible causes. The article quotes research from mobile security firm Wandera showing that, for the 50,000 “moderate to heavy iPhone and iPad users” in the study, devices running iOS 11 burned through their batteries at much faster rates than the same devices running iOS 10. The article cites two possible causes:

    • Devices often re-categorize the files stored on them after every new OS install, which may account for some of the battery issues.
    • Many apps are not yet optimized for iOS 11.

While these explanations make sense, a little more digging could get us closer to actually solving the mystery instead of guessing at the causes. After all, it’s also possible that people simply use iOS 11 differently than they used iOS 10. So, how could a dedicated sleuth investigate further? Benchmarks and hands-on testing that sift through various scenarios and configurations could get us closer to solving this mystery, and any other software-based performance anomaly. But it’s a daunting task: changing only one variable at a time and recording the results is like pounding the streets and knocking on doors to solve a case.
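For readers who want to try that kind of sleuthing, here’s a minimal sketch of a one-variable-at-a-time test harness in Python. The run_benchmark() helper is hypothetical, standing in for whatever steps actually install a configuration and run a benchmark such as WebXPRT; the harness just varies a single factor, repeats each run, and logs every result for later comparison.

    # A minimal sketch of one-variable-at-a-time testing.
    # run_benchmark() is a hypothetical placeholder, not a real API.
    import csv
    import itertools

    OS_VERSIONS = ["iOS 10.3.3", "iOS 11.0.2"]  # the single variable under test
    TRIALS = 3                                  # repeat runs to smooth out noise

    def run_benchmark(os_version):
        # Placeholder: in real testing, install the OS build on the
        # device, run the benchmark, and return the score it reports.
        # The constant below just keeps this sketch runnable.
        return 0.0

    with open("results.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["os_version", "trial", "score"])
        for os_version, trial in itertools.product(OS_VERSIONS, range(TRIALS)):
            writer.writerow([os_version, trial, run_benchmark(os_version)])

Repeating each configuration several times matters: a single run can’t distinguish a real regression from normal run-to-run noise.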

In all likelihood, some combination of Apple iOS updates and application changes will improve battery life on iOS 11. In the meantime, we wish we had an XPRT that could test battery life on iOS. Who knows, maybe a future version of WebXPRT will be able to help with that kind of sleuthing.

Eric

Machine learning performance tool update

Earlier this year we started talking about our efforts to develop a tool to help in evaluating machine learning performance. We’ve given some updates since then, but we’ve also gotten some questions, so I thought I’d do my best to summarize our answers for everyone.

Some have asked what kinds of algorithms we’ve been looking into. As we said in an earlier blog, we’re looking at algorithms involved in computer vision, natural language processing, and data analytics, with a particular focus on computer vision.

One seemingly trivial question we’ve received regards the proposed name, MLXPRT. We have been thinking of this tool as evaluating machine learning performance, but folks have raised a valid concern that it may well be broader than that. Does machine learning include deep learning? What about other artificial intelligence approaches? I’ve certainly seen other approaches lumped into machine learning, probably because machine learning is the hot topic of the moment. It feels like everything is boasting, “Now with machine learning!”

While there is some value in being part of such a hot movement, we’ve begun to wonder if a more inclusive name, such as AIXPRT, would be better. We’d love to hear your thoughts on that.

We’ve also had questions about the kinds of devices the tool will run on. The short answer is that we’re concentrating on edge devices. While there is a need for server AI/ML tools, we’ve been focusing on evaluating the devices closest to end users. As a result, we’re looking at the inference aspect of machine learning rather than the training aspect.
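For anyone less familiar with the distinction, here’s a toy illustration in Python (not XPRT code, and the model is a made-up linear layer) of why inference suits edge devices better than training: inference is a single forward pass, while training repeats that pass many times and adds gradient computation and weight updates on top.

    # Toy linear model y = W @ x, for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((1000, 1000))  # pretend these weights are already trained
    x = rng.standard_normal(1000)

    # Inference: one matrix-vector product per input.
    y = W @ x

    # Training: many passes, each with a forward pass, a gradient of the
    # loss 0.5 * ||W @ x - target||^2, and a weight update.
    target = rng.standard_normal(1000)
    learning_rate = 1e-4
    for _ in range(100):
        y = W @ x
        grad = np.outer(y - target, x)  # dLoss/dW for the squared-error loss
        W -= learning_rate * grad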

Probably the most frequent thing we’ve been asked about is the timetable. While we’d hoped to have something available this year, we were overly optimistic. We’re currently working on a more detailed proposal of what the tool will be, and we aim to make that available by the end of this year. If we achieve that goal, our next one will be to have a preliminary version of the tool itself ready in the first half of 2018.

As always, we seek input from folks, like yourself, who are working in these areas. What would you most like to see in an AI/machine learning performance tool? Do you have any questions?

Bill 

Introducing the WebXPRT 2015 Processor Comparison Chart

Today, we’re excited to announce the WebXPRT 2015 Processor Comparison Chart, a new tool that makes it easier to access hundreds of PT-curated, real-world performance scores from a wide range of devices covering everything from TVs to phones to tablets to PCs.

The chart offers a quick way to browse and compare WebXPRT 2015 results grouped by processor. Unlike benchmark-score charts that may contain results from unknown sources, this chart contains only results that PT hand-selected from internal lab testing and reliable tech media sources. When we’ve published multiple scores for an individual processor, the chart presents the average of those scores. Users can hover over and click individual score bars for additional information about the test results and sources for each processor.
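As a simple illustration of that averaging, with made-up numbers rather than actual chart data, grouping scores by processor might look like this in Python:

    # Made-up (processor, score) pairs; the real chart uses PT-curated results.
    from collections import defaultdict
    from statistics import mean

    results = [
        ("Processor A", 150), ("Processor A", 158),
        ("Processor B", 212),
    ]

    scores_by_processor = defaultdict(list)
    for processor, score in results:
        scores_by_processor[processor].append(score)

    # A processor with multiple published scores is shown as their average.
    chart = {p: mean(s) for p, s in scores_by_processor.items()}
    print(chart)  # {'Processor A': 154, 'Processor B': 212}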


We think the WebXPRT Processor Comparison Chart will be a valuable resource for folks interested in performance testing and product evaluation, but the current iteration is only the beginning. We plan to add capabilities on a regular basis, such as detailed filtering and enhanced viewing and navigation options. It’s also possible that we may integrate other XPRT benchmarks down the road.

Most importantly, we want the chart to be a great asset for its users, and that’s where you come in. We’d love to hear your feedback on the features and types of data you’d like to see. If you have suggestions, please let us know!

Justin

Decisions, decisions

Back in April, we shared some of our initial ideas for a new version of WebXPRT, and work on the new benchmark is underway. Any time we begin the process of updating one of the XPRT benchmarks, one of the first decisions we face is how to improve workload content so it better reflects the types of technology average consumers use every day. Since benchmarks typically have a life cycle of two to four years, we want the benchmark to be relevant for at least the next couple of years.

For example, WebXPRT contains two photo-related workloads, Photo Effects and Organize Album. Photo Effects applies a series of effects to a set of photos, and Organize Album uses facial recognition technology to analyze a set of photos. In both cases, we want to use photos that represent the most relevant combination of image size, resolution, and data footprint possible. Ideally, the resulting image sizes and resolutions should differentiate processing speed on the latest systems, but not at the expense of being able to run reasonably on most current devices. We also have to confirm that the photos aren’t so large as to impact page load times unnecessarily.

The way this strategy works in practice is that we spend time researching hardware and operating system market share. Given that phones are the cameras that most people use, we look at them to help define photo characteristics. In 2017, the most widespread mobile OS is Android, and while reports vary depending on the metric used, the Samsung Galaxy S5 and Galaxy S7 are at or near the top of global mobile market share. For our purposes, the data tells us that choosing photo sizes and resolutions that mirror those of the Galaxy line is a good start, and a good chunk of Android users are either already using S7-generation technology, or will be shifting to new phones with that technology in the coming year. So, for the next version of WebXPRT, we’ll likely use photos that represent the real-life environment of an S7 user.
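To give a sense of the size math involved, here’s a rough, hypothetical back-of-the-envelope estimate in Python. It is not our actual selection process, and the camera resolution and JPEG size figures are assumptions for illustration only.

    # Rough estimate of the download weight of a photo workload.
    # All numbers are illustrative assumptions, not WebXPRT's actual values.
    MEGAPIXELS = 12                 # assumed: a Galaxy S7-class 12 MP camera
    BYTES_PER_MEGAPIXEL = 250_000   # rough JPEG size at typical quality
    PHOTO_COUNT = 10

    payload_bytes = MEGAPIXELS * BYTES_PER_MEGAPIXEL * PHOTO_COUNT
    print(f"Approximate photo payload: {payload_bytes / 1_000_000:.0f} MB")  # ~30 MB

Thirty megabytes of images would noticeably slow page loads on many connections, which is exactly the kind of tradeoff described above.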

I hope that provides a brief glimpse into the strategies we use to evaluate workload content in the XPRT benchmarks. Of course, since the BenchmarkXPRT Development Community is an open development community, we’d love to hear your comments or suggestions!

Justin

Introducing the XPRT Selector

We’re proud of all the XPRT tools, each of which serves a different purpose for the people who rely on them. But for those new to the XPRTs, we wanted a way to make it easy to tell which tool will best suit each person’s specific requirements. To that end, today we’re excited to announce the XPRT Selector, an interactive web tool that helps consumers, developers, manufacturers, and reviewers zero in on exactly which XPRT tool is the right match for their needs.

Using the XPRT Selector is easy. Simply spin the dials on the wheel to choose the categories that best describe you, the devices and operating systems you’re working with, and the topic that interests you. Once you’ve aligned the dials, click Get results, and the Selector will present all the free XPRT tools and resources available to you. In addition to identifying the best tools for your needs, the XPRT Selector explains the purpose and capabilities of each tool.

To see the Selector in action, check out the short video below. You can take the XPRT Selector for a spin at http://www.principledtechnologies.com/benchmarkxprt/the-xprt-selector/.

The XPRT Selector

All the XPRT tools have one thing in common: They help take the guesswork out of device evaluation and comparison, making them invaluable for anyone using, making, or writing about tech products. We think the XPRT Selector is a great addition to the fold!

Justin
