BenchmarkXPRT Blog banner

Category: AR

Check out our CES 2023 recap video!

We have a very talented studio team here at Principled Technologies, and this week, the team worked with the XPRTs to put together a CES 2023 recap video. In it, I discuss why we traveled to CES, share my overall impressions, and consider how the ideas and technologies on display may influence the development of future XPRT benchmarks. If you missed last week’s post about my initial thoughts on the advances in AR and VR technology at the show, or you didn’t get a chance to see some of our photos on social media, this week’s video provides a good summary of our CES-related activity.

To view the video, you can follow this link or click the screenshot below. If you followed CES this year and have any thoughts about how the XPRTs can help to evaluate emerging technologies, we’d love to hear from you!

Justin

CES 2023: Adapting to changing realities

The last time the XPRTs attended the Consumer Electronics Show in Las Vegas was in January 2020, shortly before shutdowns due to the global pandemic began. More than 171,000 people attended that year’s show, the 2021 show was totally virtual, and CES shortened the 2022 show after many exhibitors and media pulled out during the Omicron surge. While some aspects of the event are returning to normal this year, about one-third of the typically jam-packed Las Vegas Convention Center space is empty, and only about 100,000 people are likely to attend. Nevertheless, the show is still enormous and full of fascinating new technology.

Just one day into the show, I’ve already noticed some interesting changes in the virtual reality (VR) and augmented reality (AR) areas since I last attended in 2020. One change is a significant expansion in the sensory capabilities of VR equipment. For a long time, VR technologies have focused almost solely on visual and audio input technology and the graphics-rendering capabilities necessary for lag-free, immersive experiences. In 2020, I saw companies working on various types of haptic feedback gear, including full-body suits, that pushed the boundaries of VR beyond sight and sound. Now, several companies are demonstrating significant progress in “real-feel touch” technologies for VR. One such company is HaptX, which is developing a set of gloves (see the picture below) that pump air through “microfluidic actuators” so that users can feel the size and shape of virtual objects they interact with in a VR environment. While we often think of VR being used for gaming and entertainment, advances in realistic, multi-sensory capabilities can lead to VR becoming a valuable tool for all kinds of industrial and professional training applications.

A show attendee tries out HaptX gear.

Another change I’ve noticed is how AR seems poised to move from demos to everyday life by means of integration with all types of smartphone apps. I enjoyed speaking with a representative from a Korean AR company called Arbeon. Arbeon is developing an app that will allow users to point their phone’s camera at an object (a wine bottle in the picture below), and see an array of customizable, interactive AR animations surrounding the object. You’ll be able to find product info, see and leave feedback similar to “likes” and reviews, attach emojis, tag friends, and even purchase the product, all from your phone’s AR-enhanced camera and screen. It’s an interesting concept with limitless applications. While VR is here to stay and getting better all the time, I personally think that AR will become much more integrated into everyday life in the coming years. I also think AR apps for phones will allow the technology to take off more quickly in the near term than clunkier options like AR eyeglasses.

The large screen displays how Arbeon’s AR phone app interacts with objects like a wine bottle.

Of course, thinking about AR has led me to wonder if we’ll be able to incorporate AR-related workloads into future XPRTs. As new technologies place new and unprecedented levels of processing demand on our computing hardware, the need for objective performance evaluation will continue. Providing reliable, objective performance data is why the XPRTs exist, and planning for the future of the XPRTs is why we’re at CES 2023. If you have any thoughts about how the XPRTs can help to evaluate new technologies, we’d love to hear from you!

Justin

MWCS18 and AIXPRT: a new video

A few weeks ago, Bill shared his first impressions from this year’s Mobile World Congress Shanghai (MWCS). “5G +” was the major theme, and there was a heavy emphasis on 5G + AI. This week, we published a video about Bill’s MWCS experience and the role that the XPRTs can play in evaluating emerging technologies such as 5G, AI, and VR. Check it out!

MWC Shanghai 2018: 5G, AI, VR, and the XPRTs


You can read more about AIXPRT development here. We’re still accepting responses to the AIXPRT Request for Comments, so if you would like to share your ideas on developing an AI/machine learning benchmark, please feel free to contact us.

Justin


Thoughts from MWC Shanghai 2018

Ni hao from Shanghai! It’s amazing how much can change in a year. This year’s MWC Shanghai, like last year’s, took up about half of the Shanghai New International Expo Centre (SNIEC). “5G +” is the major theme, but unlike last year, 5G is no longer something in the distant future. It is now assumed to be in progress.

The biggest of the pluses was AI, with a number of booths explicitly sporting 5G + AI signage. There were also 5G plus robots, cars, and cloud services. Many of those are really about AI as well. The show makes it feel like 5G is everywhere and will make everything better (or at least a lot faster). And Asia is leading the way.

5G + robotics at MWCS 18.

Most of the booths touted their 5G support as they did last year, but rather than talking about the future, their message was that their 5G is here now. They claimed their products were in real-world tests with anticipated deployment schedules. One of the keynote speakers talked about 1.2 billion 5G connections by 2025, with more than half of those in Asia. The purported scale and speed of the transition to 5G is staggering.

The keynote stage, displaying some big numbers.

The last two halls I visited showed that the world is not all 5G and AI. These halls featured fun, current applications of mobile technologies and companies developing technologies for the near future. MWC allowed children into one of the halls, where they (and we adults) could fly drones and experience VR technology. I watched in some amusement as people crashed drones, rode bikes with VR gear to simulate horses, were 3D scanned, and generally tried out new tech that didn’t always work.

The second hall included small booths from new companies working on future technologies that might be ready “4 years from now” (4YFN). These companies did not have much to show yet, but each booth displayed the company name and a short phrase summing up its future tech. That led to gems like “Deepscent Labs is a smart scent data company,” “ChineSpain is a marketplace of experiences for Chinese tourists in Spain,” and “Juice is a tech-based music contents startup that creates an ecosystem of music.” The mind boggles!

The XPRTs’ foray into AI with AIXPRT seems well timed based on this show. Other areas from this show that may be worth considering for the XPRTs are 5G and the cloud. We would love to hear your thoughts on those areas. We know they are important, but do you need the XPRTs and their emphasis on real-world benchmarks and workloads in those areas? Drop us a line and let us know!

Bill

Learning about machine learning

Everywhere we look, machine learning is in the news. It’s driving cars and beating the world’s best Go players. Whether we are aware of it or not, it’s in our lives, understanding our voices and identifying our pictures.

Our goal of being able to measure the performance of hardware and software that does machine learning seems more relevant than ever. Our challenge is to scan the vast landscape that is machine learning, and identify which elements to measure first.

There is a natural temptation to see machine learning as being all about neural networks such as AlexNet and GoogLeNet. However, new innovations appear all the time, and lots of important work with more classic machine learning techniques is also underway. (In this field, “classic” means anything more than a few years old!) Recurrent neural networks used for language translation, reinforcement learning used in robotics, and support vector machine (SVM) learning used in text recognition are just a few examples among the wide array of algorithms to consider.
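To see why the classic techniques remain attractive benchmark candidates, consider what an SVM actually does at inference time: a trained linear SVM reduces to a dot product, a bias, and a sign check. Here is a minimal sketch in Python; the weights, bias, and feature values are invented for illustration, not taken from any real model:

```python
def linear_svm_predict(weights, bias, features):
    """Classify a feature vector with a trained linear SVM.

    Inference is just a weighted sum plus a bias, followed by a
    sign check -- far lighter than the training that produced
    the weights, which is one reason classic models suit
    resource-constrained devices.
    """
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score >= 0 else -1

# Toy parameters standing in for a trained model.
weights = [0.8, -0.4, 0.2]
bias = -0.1

print(linear_svm_predict(weights, bias, [1.0, 0.5, 0.0]))  # -> 1
print(linear_svm_predict(weights, bias, [0.0, 1.0, 0.0]))  # -> -1
```

The contrast between this trivial inference step and the heavy training step is exactly the kind of distinction a benchmark has to make explicit.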

Creating a benchmark or set of benchmarks to cover all those areas, however, is unlikely to be possible. Certainly, creating such an ambitious tool would take so long that it would be of limited usefulness.

Our current thinking is to begin with a small set of representative algorithms. The challenge, of course, is identifying them. That’s where you come in. What would you like to start with?

We anticipate that the benchmark will focus on the types of inference and light training that are likely to occur on edge devices. Extensive training with large datasets takes place in data centers or on systems with extraordinary computing capabilities. We’re interested in use cases that will stress the local processing power of everyday devices.
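One simple way to quantify “stress on local processing power” is to time repeated inference calls after a warm-up phase and report the median latency. The sketch below shows the general shape of such a measurement; `run_inference` is a hypothetical stand-in workload (a naive matrix-vector multiply), not an XPRT workload:

```python
import statistics
import time

def run_inference(size=200):
    # Placeholder workload standing in for a real model's
    # inference pass: a naive matrix-vector multiply.
    matrix = [[(i + j) % 7 for j in range(size)] for i in range(size)]
    vector = [1.0] * size
    return [sum(a * b for a, b in zip(row, vector)) for row in matrix]

def measure_latency(fn, warmup=3, runs=20):
    """Return the median per-call latency of fn, in milliseconds."""
    for _ in range(warmup):  # warm caches and runtimes before timing
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

print(f"median latency: {measure_latency(run_inference):.2f} ms")
```

Reporting the median rather than the mean makes the result less sensitive to one-off stalls from background activity, which matters on the everyday devices we have in mind.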

We are, of course, reaching out to folks in the machine learning field—including those in academia, those who create the underlying hardware and software, and those who make the products that rely on that hardware and software.

What do you think?

Bill

VR and AR at Mobile World Congress 2017

Spotting the virtual reality (VR) and augmented reality (AR) demos at the recent Mobile World Congress (MWC) in Barcelona was easy: all you had to do was look for the long queues of people waiting to put on a headset and see another world. Though the demos ranged from games to simulated roller-coaster rides to simple how-to tools, the interest of the crowd was always high. A lot of the attraction was clearly due to the tools’ relative novelty, but many people seemed focused on using the technologies to create commercially viable products.

Both VR and AR involve a great deal of graphics and data movement, so they can be quite computationally demanding. Right now, that’s not a problem, because most applications and demos are hooked directly to powerful computers. As these technologies become more pervasive, however, they’re going to find their way into our devices, which will almost certainly do some of the processing even as the bulk of the work happens on servers in the cloud. The better the AR and VR experiences our devices can support, the happier we’re likely to be with those technologies.

Along with the crowds at MWC, many of us in the BenchmarkXPRT Development Community are enthusiastic about VR and AR, which is why we’ve been monitoring these fields for some time. We’ve even worked with a group of NC State University students to produce a sample VR workload. If you have thoughts on how we might best support VR and AR, please contact us. Meanwhile, we’ll continue to track both closely and work to get the XPRTs ready to measure how well devices handle these technologies.

Mark
