
Category: Automation

More, faster, better: The future according to Mobile World Congress 2019

More is more data, which the trillions of devices in the coming Internet of Things will be pumping through our air into our (computing) clouds in hitherto unseen quantities.

Faster is the speed at which tomorrow’s 5G networks will carry this data—and the responses and actions from our automated assistants (and possibly overlords).

Better is the quality of the data analysis and recommendations, thanks primarily to the vast army of AI-powered analytics engines that will be poring over everything digital the planet has to say.

Swimming through this perpetual data tsunami will be we humans and our many devices, our laptops and tablets and smartphones and smart watches and, ultimately, implants. If we are to believe the promise of this year’s Mobile World Congress in Barcelona—and of course I do want to believe it, who wouldn’t?—the result of all of this will be a better world for all humanity, no person left behind. As I walked the show floor, I could not help but feel and want to embrace its optimism.

The catch, of course, is that we have a tremendous amount of work to do between where we are today and this fabulous future.

We must, for example, make sure that every computing node that will contribute to these powerful AI programs is up to the task. From the smartphone to the datacenter, AI will end up being a very distributed and very demanding workload. That’s one of the reasons we’ve been developing AIXPRT. Without tools that let us accurately compare different devices, the industry won’t be able to keep delivering the levels of performance improvements that we need to realize these dreams.

We must also think a lot about how to accurately measure all other aspects of our devices’ performance, because the demands this future will place on them are going to be significant. Fortunately, the always evolving XPRT family of tools is up to the task.

The coming 5G revolution, like all tech leaps forward before it, will not come evenly. Different 5G devices will end up behaving differently, some better and some worse. That fact, plus our constant and growing reliance on bandwidth, suggests that maybe the XPRT community should turn its attention to the task of measuring bandwidth. What do you think?
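To make the idea concrete, here is a minimal sketch of the kind of naive throughput measurement a bandwidth tool might start from. Everything in it is an assumption for illustration only: the test file URL is a placeholder, and a real benchmark would need repeated runs, multiple servers, and careful handling of caching and network variability.

```python
# Minimal sketch of a naive download-bandwidth measurement.
# TEST_FILE_URL is a hypothetical placeholder; substitute any large, publicly hosted file.
import time
import urllib.request

TEST_FILE_URL = "https://example.com/testfile.bin"

def measure_download_mbps(url: str, chunk_size: int = 1 << 16) -> float:
    """Download the file in chunks and return average throughput in Mbps."""
    start = time.perf_counter()
    total_bytes = 0
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"Average download speed: {measure_download_mbps(TEST_FILE_URL):.1f} Mbps")
```

A single download like this only scratches the surface, of course, but it shows how little code stands between the question and a first rough answer.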

One thing is certain: we at the BenchmarkXPRT Development Community have a role to play in building the tools necessary to test the tech the world will need to deliver on the promise of this exciting trade show. We look forward to that work.

AI is the heartbeat of CES 2019

This year’s CES features a familiar cast of characters: gigantic, super-thin 8K screens; plenty of signage promising the arrival of 5G; robots of all shapes, sizes, and levels of competency; and acres of personal grooming products that you can pair with your phone. In all seriousness, however, one main question keeps coming to mind as I walk the floor: Are we approaching the tipping point where AI truly starts to affect most people in meaningful ways on a daily basis? I think we’re still a couple of years away from ubiquitous AI, but it’s the heartbeat of this year’s show, and it’s going to play a part in almost everything we do in the very near future. AI applications at this year’s show include manufacturing, transportation, energy, medicine, education, photography, communications, farming, grocery shopping, fitness, sports, defense, and entertainment, just to name a few. The AI revolution is just starting, but once it gets going, AI will continually reshape society for decades to come. This year’s show reinforces our decision to explore the roles that the XPRTs, beginning with AIXPRT, can play in the AI revolution.

Now for the fun stuff. Here’s a peek at a couple of my favorite displays so far. As is often the case, the most awe-inducing displays at CES are those that overwhelm attendees with light and sound. LG’s enormous curved OLED wall, dubbed the Massive Curve of Nature, was truly something to behold.


Another big draw has been Bell’s Nexus prototype, a hybrid-electric VTOL (vertical takeoff and landing) air taxi. Some journalists can’t resist calling it a flying car, but I refuse to do so, because it has nothing in common with cars apart from the fact that people sit in it and use it to travel from place to place. As Elon Musk once said of an earlier, but similar, concept, “it’s just a helicopter in helicopter’s clothing.” Semantics aside, it’s intriguing to imagine urban environments full of nimble aircraft that are quieter, easier to fly, and more energy efficient than traditional helicopters, especially if they’re paired with autonomous flight technologies.


Finally, quite a few companies are displaying props that put some of the “reality” back into “virtual reality.” Driving and flight simulators with full range of motion that are small enough to fit in someone’s basement or game room, full-body VR suits that control your temperature and deliver electrical stimulus based on game play (yikes!), and portable roller-coaster-like VR rides were just a few of the attractions.


It’s been a fascinating show so far!

Justin

Thinking ahead at CES 2018

It may sound trite to say that a show like CES is all about the future, but this year’s show is prompting me to think about how our lives will evolve in the coming years. Some technological breakthroughs change the way we do everyday things, like playing music or hailing a cab, while others transform the way we do everything. For technological innovation to truly shift society on a wide scale, it has to coincide with markets of scale in a way that makes life-changing tech accessible to almost everyone (think: smartphones in 2005 versus smartphones in 2018).

These technical and economic forces are coinciding once again in the areas of AI, automation, the Internet of Things (IoT), and consumer robotics. While many of our daily activities will stay the same, the ways we organize and engage with those activities are changing dramatically.

I’ll leave you with a few general observations from the show:

  • Huawei has a huge presence here. A new tagline for them that I haven’t seen before is a play on their name, “Wow Way.” I suspect we may have a Mate 10 Pro XPRT Spotlight entry in the near future.
  • The Kino-mo Hypervsn 3D holographic display blew me away. People were crowding in to see it and couldn’t stop staring. It’s straight out of sci-fi, and its appearance is similar to Princess Leia’s hologram message in Star Wars. (By the way, it looks way better in real life than in the video.)
  • Sony is making a big push into smart homes by building systems that work cross-platform with a range of smart speakers and assistants. Between their smart home push and some of the cool home theater tech they had on display, I can see them gaining some brand power.
  • To me, the most exciting concepts at the show involved smart infrastructure, which promises enormous potential to boost the efficient distribution of water, energy, and transportation resources.
  • Surprisingly, I saw automation, smart city, and smart infrastructure displays from companies that I don’t always associate with IoT or AI, like Bosch and Panasonic. Panasonic was marketing an array of semi-autonomous vehicle cockpit prototypes, and had a section highlighting their partnership with Tesla.
  • By far, the strangest thing I’ve seen at CES has been the Psychasec booth, staffed by eerie attendants in pure white outfits who talked confidently about “downloading your cortical stack into customized bodies made from organic materials.” The Psychasec staff absolutely refused to break character, which made the whole scene even stranger. Check out the link above for the story behind Psychasec.



And, while I’m probably not supposed to admit this, my favorite part of the show so far has been the vendors of expensive massage chairs doling out free sessions…

More to follow soon,

Justin

Learning about machine learning

Everywhere we look, machine learning is in the news. It’s driving cars and beating the world’s best Go players. Whether we are aware of it or not, it’s in our lives, understanding our voices and identifying our pictures.

Our goal of being able to measure the performance of hardware and software that does machine learning seems more relevant than ever. Our challenge is to scan the vast landscape that is machine learning, and identify which elements to measure first.

There is a natural temptation to see machine learning as being all about neural networks such as AlexNet and GoogLeNet. However, innovations appear all the time, and lots of important work with more classic machine learning techniques is also underway. (Classic machine learning being anything more than a few years old!) Recurrent neural networks used for language translation, reinforcement learning used in robotics, and support vector machine (SVM) learning used in text recognition are just a few examples among the wide array of algorithms to consider.
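To ground the “classic” end of that spectrum, here is a minimal sketch of SVM-based text classification using scikit-learn. The tiny inline dataset and its labels are invented purely for illustration; they are not part of any XPRT workload.

```python
# Minimal sketch: support vector machine (SVM) text classification with scikit-learn.
# The inline training data is purely illustrative; real workloads use far larger corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = [
    "the battery lasted all day",
    "screen brightness is excellent",
    "the package arrived late",
    "shipping took two weeks",
]
train_labels = ["device", "device", "shipping", "shipping"]

# TF-IDF features feeding a linear SVM: a classic text-recognition pipeline.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(train_texts, train_labels)

print(model.predict(["the display looks great", "delivery was delayed"]))
```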

Creating a benchmark or set of benchmarks to cover all those areas, however, is unlikely to be possible. Certainly, creating such an ambitious tool would take so long that it would be of limited usefulness.

Our current thinking is to begin with a small set of representative algorithms. The challenge, of course, is identifying them. That’s where you come in. What would you like to start with?

We anticipate that the benchmark will focus on the types of inference and light training that are likely to occur on edge devices. Extensive training with large datasets takes place in data centers or on systems with extraordinary computing capabilities. We’re interested in use cases that will stress the local processing power of everyday devices.
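As a rough illustration of what stressing the local processing power might look like in practice, here is a minimal sketch that times CPU inference on a lightweight vision model with PyTorch. The choice of MobileNetV2, the input size, and the run counts are arbitrary placeholders, not a proposed workload.

```python
# Minimal sketch of timing on-device (CPU) inference.
# The network, input size, and run counts are arbitrary placeholders for illustration.
import time
import torch
import torchvision.models as models

model = models.mobilenet_v2()          # a lightweight, edge-oriented architecture
model.eval()
batch = torch.randn(1, 3, 224, 224)    # one 224x224 RGB image

with torch.no_grad():
    # Warm up, then time a fixed number of single-image inferences.
    for _ in range(3):
        model(batch)
    runs = 20
    start = time.perf_counter()
    for _ in range(runs):
        model(batch)
    elapsed = time.perf_counter() - start

print(f"Average latency: {1000 * elapsed / runs:.1f} ms per image")
```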

We are, of course, reaching out to folks in the machine learning field—including those in academia, those who create the underlying hardware and software, and those who make the products that rely on that hardware and software.

What do you think?

Bill

WebXPRT 2015 is here!

Today, we’re releasing WebXPRT 2015, our benchmark for evaluating the performance of Web-enabled devices. The BenchmarkXPRT Development Community has been using a community preview for several weeks, but now that we’ve released the benchmark, anyone can run WebXPRT and publish results.

Run WebXPRT 2015

WebXPRT 2013 is still available here while people transition to WebXPRT 2015. We will provide plenty of notice before discontinuing WebXPRT 2013.

After trying out WebXPRT, please send your comments to BenchmarkXPRTsupport@principledtechnologies.com.

WebXPRT 2015

Tomorrow we’ll be releasing WebXPRT 2015, with a mirror site in Singapore to follow soon. We’ve been talking about it for a while and we’re delighted to finally make it available to the public.

As we’ve discussed over the past few weeks, the new WebXPRT is a big improvement over WebXPRT 2013. Some of the changes are:

  • An improved UI. In addition to a cleaner, sleeker look, the UI now has a progress indicator and on-screen test descriptions. There is also a Simplified Chinese version of the UI.
  • Test automation. WebXPRT 2015 lets you automate testing, giving labs more flexibility and making it easier to test lots of devices. (A scripting sketch follows this list.)
  • New and improved tests. In addition to enhancing the existing tests, WebXPRT 2015 adds two new tests: Explore DNA Sequencing and Sales Graphs.
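As a rough illustration of the kind of scripting that test automation enables, here is a minimal browser-automation sketch using Selenium. The URL and element selector below are hypothetical placeholders, not WebXPRT’s actual automation interface; refer to the WebXPRT documentation for the supported parameters.

```python
# Illustrative only: one way a lab might script browser-based test runs.
# The URL and selector are hypothetical placeholders, not WebXPRT's actual markup.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.principledtechnologies.com/benchmarkxprt/webxprt/")  # placeholder URL

# Hypothetical selector for the "Run WebXPRT" control.
driver.find_element(By.ID, "start-button").click()

# ... poll for completion and record the score here ...
driver.quit()
```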

 

If you haven’t checked out the new WebXPRT, now is the time!

And remember, the design document for the next generation of MobileXPRT should be out by the end of the month. If there are things you would like to see, it’s a great time to let us know.

Eric
