

News from the MobileXPRT 3 team

A few months ago, we shared some of our thoughts during the early planning stages of MobileXPRT 3 development. Since then, we’ve started building the new benchmark with Android Studio SDK 27. We’re now at a place where we can share more details about what to expect in MobileXPRT 3. In a nutshell, one of the five workloads in the previous version, MobileXPRT 2015, is getting a major overhaul, the remaining four workloads are getting updated test content, and we’re adding one completely new workload.

One of the first challenges we tackled was completely rebuilding the Create Slideshow workload. In MobileXPRT 2015, the workload uses FFmpeg to convert photos into video. FFmpeg runs as a native executable that must be compiled separately for each architecture (x86, x86-64, ARM32, ARM64, and so on), and with each new Android release, keeping those builds compatible becomes more complex. MobileXPRT 2015 still works well on most Android devices, but we wanted a more future-proof solution. In MobileXPRT 3, the Create Slideshow workload will use the Android MediaCodec API instead of FFmpeg. This change enables the workload to run successfully on devices that could not complete it in MobileXPRT 2015.
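For readers curious what that change looks like in practice, the MediaCodec approach boils down to configuring a hardware encoder, rendering each slideshow frame to the encoder's input surface, and writing the compressed output to an MP4 container with MediaMuxer. The Java sketch below illustrates that flow under our own assumptions; the class and method names are ours rather than MobileXPRT's, and a real implementation would also handle frame timing, rendering, and error cases.

    import android.media.MediaCodec;
    import android.media.MediaCodecInfo;
    import android.media.MediaFormat;
    import android.media.MediaMuxer;
    import android.view.Surface;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // Hypothetical helper: sets up a hardware H.264 encoder and an MP4 muxer.
    // Slideshow frames would be rendered to the encoder's input Surface
    // (for example, via OpenGL ES) between calls to drainEncoder().
    public class SlideshowEncoder {
        private MediaCodec encoder;
        private MediaMuxer muxer;
        private Surface inputSurface;
        private int trackIndex = -1;
        private boolean muxerStarted = false;

        public void prepare(String outputPath, int width, int height) throws IOException {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", width, height);
            format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                    MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
            format.setInteger(MediaFormat.KEY_BIT_RATE, 8_000_000);
            format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
            format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

            encoder = MediaCodec.createEncoderByType("video/avc");
            encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
            inputSurface = encoder.createInputSurface();  // render slideshow frames here
            encoder.start();

            muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        }

        // Drains encoded output into the MP4 container; call repeatedly while rendering.
        public void drainEncoder(boolean endOfStream) {
            if (endOfStream) {
                encoder.signalEndOfInputStream();
            }
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            while (true) {
                int index = encoder.dequeueOutputBuffer(info, 10_000);
                if (index == MediaCodec.INFO_TRY_AGAIN_LATER) {
                    if (!endOfStream) break;          // no output yet; keep rendering frames
                } else if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                    trackIndex = muxer.addTrack(encoder.getOutputFormat());
                    muxer.start();
                    muxerStarted = true;
                } else if (index >= 0) {
                    if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                        info.size = 0;                // config data already lives in the track format
                    }
                    ByteBuffer encoded = encoder.getOutputBuffer(index);
                    if (muxerStarted && info.size > 0) {
                        muxer.writeSampleData(trackIndex, encoded, info);
                    }
                    encoder.releaseOutputBuffer(index, false);
                    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) break;
                }
            }
        }

        public void release() {
            encoder.stop();
            encoder.release();
            muxer.stop();
            muxer.release();
        }
    }

Because the encoder and container support come from the platform itself, this approach avoids shipping and maintaining separately compiled native binaries for each architecture.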

We are also updating the test content for the following workloads: Apply Photo Effects, Create Photo Collages, Encrypt Personal Content, and Detect Faces to Organize Photos. Where applicable, we will replace items such as photos and videos with content that reflects more contemporary resolutions and file sizes.

In the mobile device market, artificial intelligence and machine learning capabilities are rapidly moving from novelty to integration into many daily tasks, so we wanted to include an AI or ML element in MobileXPRT 3. Our new workload uses Google’s Mobile Vision API to perform an optical character recognition (OCR) task: scanning receipts for personal records or an expense report. The scenario is similar to the OCR receipt-scanning task in WebXPRT 3, though the two workloads are based on different text-recognition technologies.
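For those unfamiliar with the Mobile Vision API, text recognition runs entirely on the device: the app builds a TextRecognizer, wraps a bitmap in a Frame, and reads back the detected text blocks. The snippet below is an illustrative sketch of that pattern, not MobileXPRT 3 code; the class name and receipt bitmap are placeholders of ours.

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.SparseArray;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.text.TextBlock;
    import com.google.android.gms.vision.text.TextRecognizer;

    // Illustrative sketch: on-device OCR with the Mobile Vision API.
    public final class ReceiptOcr {

        // Returns the recognized text from a receipt image, one text block per line.
        public static String scanReceipt(Context context, Bitmap receiptBitmap) {
            TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
            try {
                if (!recognizer.isOperational()) {
                    return "";  // native OCR library not yet available on this device
                }
                Frame frame = new Frame.Builder().setBitmap(receiptBitmap).build();
                SparseArray<TextBlock> blocks = recognizer.detect(frame);

                StringBuilder text = new StringBuilder();
                for (int i = 0; i < blocks.size(); i++) {
                    text.append(blocks.valueAt(i).getValue()).append('\n');
                }
                return text.toString();
            } finally {
                recognizer.release();  // free the native detector's resources
            }
        }
    }

The isOperational() check matters on a device's first run, when the underlying native text-recognition library may still be downloading.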

Finally, we’re updating the MobileXPRT UI to improve the look of the benchmark and make it easier to use. We’ll share a sneak peek of the new UI here in the blog around the time of the community preview. If you have any questions about MobileXPRT 2015 or MobileXPRT 3, please let us know!

Justin

XPRTs in the datacenter

The XPRTs have been very successful on desktops, notebooks, tablets, and phones. People have run WebXPRT over 295,000 times, and it and benchmarks such as MobileXPRT, HDXPRT, and CrXPRT have become important tools worldwide for evaluating device performance across a range of consumer and business client platforms.

We’ve begun branching out with tests for edge devices with AIXPRT, our new artificial intelligence benchmark. While typical consumers won’t be able to run AIXPRT on their devices initially, we feel that it is important for the XPRTs to play an active role in a critical emerging market. (We’ll have some updates on the AIXPRT front in the next few weeks.)

Recently, both community members and others have asked about the possibility of the XPRTs moving into the datacenter. Folks face challenges in evaluating the performance and suitability to task of such datacenter mainstays as servers, storage, networking infrastructure, clusters, and converged solutions. These challenges include the lack of easy-to-run benchmarks, the complexity and cost of the equipment (multi-tier servers, large amounts of storage, and fast networks) necessary to run tests, and confusion about best testing practices.

PT has a lot of expertise in measuring datacenter performance, as you can tell from the hundreds of datacenter-focused test reports on our website. We see great potential in working with the BenchmarkXPRT Development Community to help in this area. It is very possible that, as with AIXPRT, our approach to datacenter benchmarks would differ from the approach we’ve taken with previous benchmarks. While we have ideas for useful benchmarks we might develop down the road, more immediate steps could include drafting white papers, developing testing guidelines, or working with vendors to set up a lab.

Right now, we’re trying to gauge the level of interest in having such tools and in helping us carry out these initiatives. What are the biggest challenges you face in datacenter-focused performance and suitability to task evaluations? Would you be willing to work with us in this area? We’d love to hear from you and will be reaching out to members of the community over the coming weeks.

As always, thanks for your help!

Bill

AI and the next MobileXPRT

As we mentioned a few weeks ago, we’re in the early planning stages for the next version of MobileXPRT—MobileXPRT 3. We’re always looking for ways to make XPRT benchmark workloads more relevant to everyday users, and a new version of MobileXPRT provides a great opportunity to incorporate emerging tech such as AI into our apps. AI is everywhere and is beginning to play a huge role in our everyday lives through smarter-than-ever phones, virtual assistants, and smart homes. The challenge for us is to identify representative mobile AI workloads that have the necessary characteristics to work well in a benchmark setting. For MobileXPRT, we’re researching AI workloads that have the following characteristics:

  • They work offline, not in the cloud.
  • They don’t require additional training prior to use.
  • They support common use cases such as image processing, optical character recognition (OCR), etc.

We’re researching the possibility of using Google’s Mobile Vision library, but there may be other options or concerns that we’re not aware of. If you have tips for places we should look, or ideas for workloads or APIs we haven’t mentioned, please let us know. We’ll keep the community informed as we narrow down our options.
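As a rough illustration of the kind of offline, no-training-required workload we have in mind (the image-processing use case above), here is a minimal sketch of face detection with the Mobile Vision library. The class and method names are placeholders of ours, and any workload that ships in MobileXPRT 3 may look quite different.

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.util.SparseArray;
    import com.google.android.gms.vision.Frame;
    import com.google.android.gms.vision.face.Face;
    import com.google.android.gms.vision.face.FaceDetector;

    // Illustrative sketch: counting faces in a photo with the Mobile Vision face detector.
    public final class FaceCounter {

        public static int countFaces(Context context, Bitmap photo) {
            FaceDetector detector = new FaceDetector.Builder(context)
                    .setTrackingEnabled(false)   // single still image, no tracking needed
                    .build();
            try {
                if (!detector.isOperational()) {
                    return 0;  // native detector not yet available on this device
                }
                Frame frame = new Frame.Builder().setBitmap(photo).build();
                SparseArray<Face> faces = detector.detect(frame);
                return faces.size();
            } finally {
                detector.release();  // free the native detector's resources
            }
        }
    }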

Justin

MWCS18 and AIXPRT: a new video

A few weeks ago, Bill shared his first impressions from this year’s Mobile World Congress Shanghai (MWCS). “5G +” was the major theme, and there was a heavy emphasis on 5G + AI. This week, we published a video about Bill’s MWCS experience and the role that the XPRTs can play in evaluating emerging technologies such as 5G, AI, and VR. Check it out!

MWC Shanghai 2018: 5G, AI, VR, and the XPRTs


You can read more about AIXPRT development here. We’re still accepting responses to the AIXPRT Request for Comments, so if you would like to share your ideas on developing an AI/machine learning benchmark, please feel free to contact us.

Justin


Thoughts from MWC Shanghai 2018

Ni hao from Shanghai! It’s amazing how much can change in a year. This year’s MWC Shanghai, like last year’s, took up about half of the Shanghai New International Expo Centre (SNIEC). “5G +” is the major theme, and, unlike last year, 5G is no longer something in the distant future; it is now assumed to be in progress.

The biggest of the pluses was AI, with a number of booths explicitly sporting 5G + AI signage. There were also 5G plus robots, cars, and cloud services. Many of those are really about AI as well. The show makes it feel like 5G is everywhere and will make everything better (or at least a lot faster). And Asia is leading the way.

[caption id="attachment_3447" align="alignleft" width="640"]5G + robotics at MWCS 18. 5G + robotics at MWCS 18.[/caption]

Most of the booths touted their 5G support, as they did last year, but rather than talking about the future, they presented their 5G as available now, claiming that their products were in real-world tests with anticipated deployment schedules. One of the keynote speakers talked about 1.2 billion 5G connections by 2025, with more than half of those in Asia. The purported scale and speed of the transition to 5G are staggering.

[caption id="attachment_3449" align="alignleft" width="640"]The keynote stage, displaying some big numbers. The keynote stage, displaying some big numbers.[/caption]

The last two halls I visited showed that the world is not all 5G and AI. These halls featured fun current applications of mobile technology alongside companies developing technologies for the near future. MWC allowed children into one of the halls, where they (and we adults) could fly drones and experience VR. I watched with some amusement as people crashed drones, rode bikes with VR gear that simulated horseback riding, got themselves 3D scanned, and generally tried out new tech that didn’t always work.

The second hall included small booths from new companies working on future technologies that might be ready “4 years from now” (4YFN). These companies did not have much to show yet, but each booth displayed the company name and a short phrase summing up its future tech. That led to phrases such as “Deepscent Labs is a smart scent data company,” “ChineSpain is a marketplace of experiences for Chinese tourists in Spain,” and “Juice is a tech-based music contents startup that creates an ecosystem of music.” The mind boggles!

The XPRTs’ foray into AI with AIXPRT seems well timed based on this show. Other areas from this show that may be worth considering for the XPRTs are 5G and the cloud. We would love to hear your thoughts on those areas. We know they are important, but do you need the XPRTs and their emphasis on real-world benchmarks and workloads in those areas? Drop us a line and let us know!

Bill

AIXPRT: We want your feedback!

Today, we’re publishing the AIXPRT Request for Comments (RFC) document. The RFC explains the need for a new artificial intelligence (AI)/machine learning benchmark, shows how the BenchmarkXPRT Development Community plans to address that need, and provides preliminary design specifications for the benchmark.

We’re seeking feedback and suggestions from anyone interested in shaping the future of machine learning benchmarking, including those not currently part of the Development Community. Usually, only members of the BenchmarkXPRT Development Community have access to our RFCs and the opportunity to provide feedback. However, because we’re seeking input from non-members who have expertise in this field, we will be posting this RFC in the New events & happenings section of the main BenchmarkXPRT.com page and making it available at AIXPRT.com.

We welcome input on all aspects of the benchmark, including scope, workloads, metrics and scores, UI design, and reporting requirements. We will accept feedback through May 13, 2018, after which BenchmarkXPRT Development Community administrators will collect and evaluate the feedback and publish the final design specification.

Please share the RFC with anyone interested in machine learning benchmarking, and send us your feedback before May 13.

Justin
