CloudXPRT is up next, and we’re thinking about how to handle results submission and publication

Last month, we provided an update on the CloudXPRT development process and more information about the three workloads that we’re including in the first build. We’d initially hoped to release the build at the end of April, but several technical challenges have caused us to push the timeline out a bit. We believe we’re very close to ready, and we look forward to posting a release announcement soon.

In the meantime, we’d like to hear your thoughts about the CloudXPRT results publication process. Traditionally, we’ve published XPRT results on our site on a rolling basis. When we complete our own tests, receive results submissions from other testers, or see results published in the tech media, we authenticate them and add them to our site. This lets testers make their results public on their own timetable, as frequently as they want.

Some major benchmark organizations use a different approach, and create a schedule of periodic submission deadlines. After each deadline passes, they review the batch of submissions they’ve received and publish all of them together on a single later date. In some cases, they release results only two or three times per year. This process offers a high level of predictability. However, it can pose significant scheduling obstacles for other testers, such as tech journalists who want to publish their results in an upcoming device review and need official results to back up their claims.

We’d like to hear what you think about the different approaches to results submission and publication that you’ve encountered. Are there aspects of the XPRT approach that you like? Are there things we should change? Should we consider periodic results submission deadlines and publication dates for CloudXPRT? Let us know what you think!

Justin

Adapting to a changing tech landscape

The BenchmarkXPRT Development Community started almost 10 years ago with the development of the High Definition Experience & Performance Ratings Test, also known as HDXPRT. Back then, we distributed the benchmark to interested parties by mailing out physical DVDs. We’ve come a long way since then, as testers now freely and easily access six XPRT benchmarks from our site and major app stores.

Developers, hardware manufacturers, and tech journalists—the core group of XPRT testers—work within a constantly changing tech landscape. Because of our commitment to providing those testers with what they need, the XPRTs grew as we developed additional benchmarks to expand the reach of our tools from PCs to servers and all types of notebooks, Chromebooks, and mobile devices.

As today’s tech landscape evolves at a rapid pace, our desire to play an active role in emerging markets drives us to expand our testing capabilities into areas like machine learning (AIXPRT) and cloud-first applications (CloudXPRT). While these new technologies carry the potential to increase efficiency, improve quality, and boost the bottom line for companies around the world, it’s often difficult to decide where and how to invest in new hardware or services. The ever-present need for relevant and reliable data is the reason many organizations use the XPRTs to help make confident choices about their company’s future tech.

We just released a new video that helps to explain what the XPRTs provide and how they can play an important role in a company’s tech purchasing decisions. We hope you’ll check it out!

We’re excited about the continued growth of the XPRTs, and we’re eager to meet the challenges of adapting to the changing tech landscape. If you have any questions about the XPRTs or suggestions for future benchmarks, please let us know!

Justin

CloudXPRT development news

Last month, Bill announced that we were starting work on a new data center benchmark. CloudXPRT will measure the performance of modern, cloud-first applications deployed on infrastructure as a service (IaaS) platforms: on-premises platforms, externally hosted platforms, and hybrid clouds that use a mix of the two. Our ultimate goal is for CloudXPRT to use cloud-native components on an actual stack to produce end-to-end performance metrics that can help users determine the right IaaS configuration for their business.

Today, we want to provide a quick update on CloudXPRT development and testing.

  • Installation. We’ve completely automated the CloudXPRT installation process, which uses Kubernetes or Ansible tools, depending on the target platform. The installation process differs slightly for each platform, but testing is the same.
  • Workloads. We’re currently testing potential workloads that focus on three areas: web microservices, data analytics, and container scaling (for a feel of what a container-scaling measurement might look like, see the sketch after this list). We might not include all of these workloads in the first release, but we’ll keep the community informed and share more details about each workload as the picture becomes clearer. We are designing the workloads so that testers can use them to directly compare IaaS stacks and evaluate whether any given stack can meet service level agreement (SLA) thresholds.
  • Platforms. We want CloudXPRT to eventually support testing on a variety of popular externally hosted platforms. However, constructing a cross-platform benchmark is complicated, and we haven’t yet decided which external platforms the first CloudXPRT release will support. We’ve successfully tested the current build with on-premises IaaS stacks and with one externally hosted platform, Amazon Web Services. Next, we will test the build on Google Cloud Platform and Microsoft Azure.
  • Timeline. We are on track to meet our target of releasing a CloudXPRT preview build in late March and the first official build about two months later. If anything changes, we’ll post an updated timeline here in the blog.
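
To give a feel for the container-scaling idea mentioned above, here’s a minimal sketch of the kind of measurement such a workload might perform: scale up a Kubernetes deployment and time how long the new replicas take to become ready. This is our sketch, not CloudXPRT’s actual harness, and the deployment name, namespace, and replica count are illustrative.

```python
# Sketch: time how long a Kubernetes deployment takes to scale up.
# Assumes a working kubectl context; all names below are hypothetical.
import time

from kubernetes import client, config


def time_scale_up(name, namespace, replicas, timeout_s=300):
    """Scale a deployment and return the seconds until all replicas are ready."""
    config.load_kube_config()  # use the current kubectl context
    apps = client.AppsV1Api()

    start = time.monotonic()
    apps.patch_namespaced_deployment_scale(
        name, namespace, body={"spec": {"replicas": replicas}}
    )

    # Poll until the deployment reports the requested number of ready replicas.
    while time.monotonic() - start < timeout_s:
        status = apps.read_namespaced_deployment_status(name, namespace).status
        if (status.ready_replicas or 0) >= replicas:
            return time.monotonic() - start
        time.sleep(1)
    raise TimeoutError(f"{name} never reached {replicas} ready replicas")


if __name__ == "__main__":
    print(f"Scale-up took {time_scale_up('web-frontend', 'default', 10):.1f}s")
```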

If you would like to share any thoughts or comments related to CloudXPRT or cloud benchmarking, please feel free to contact us.

Justin

CloudXPRT is on the way

A few months ago, we wrote about the possibility of creating a datacenter XPRT. In the intervening time, we’ve discussed the idea with folks both inside and outside of the XPRT Community. We’ve heard from vendors of datacenter products, hosting/cloud providers, and IT professionals who use those products and services.

The common thread that emerged was the need for a cloud benchmark that can accurately measure the performance of modern, cloud-first applications deployed on infrastructure as a service (IaaS) platforms, whether those platforms are on-premises, hosted elsewhere, or some combination of the two (hybrid clouds). Regardless of where clouds reside, applications are increasingly using them in latency-critical, highly available, and high-compute scenarios.

Existing datacenter benchmarks do not give a clear indication of how applications will perform on a given IaaS infrastructure, so a new benchmark should use cloud-native components on the actual stacks used for on-prem and public cloud management.

We are planning to call the benchmark CloudXPRT. Our goal is for CloudXPRT to address the needs described above while also including the elements that have made the other XPRTs successful. We plan for CloudXPRT to

  • Be relevant to on-prem (datacenter), private, and public cloud deployments
  • Run on top of cloud platform software such as Kubernetes
  • Include multiple workloads that address common scenarios like web applications, AI, and media analytics
  • Support multi-tier workloads
  • Report relevant metrics, including both throughput and critical latency for responsiveness-driven applications, and maximum throughput for applications dependent on batch processing (see the sketch after this list for both styles of metric)
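
To make those two styles of metric concrete, here’s a minimal sketch, assuming a list of per-request latencies from a responsiveness-driven run and a hypothetical 100 ms SLA. The numbers and function names are ours for illustration; they don’t come from CloudXPRT.

```python
# Sketch: the two metric styles named above, computed from request latencies.
import statistics


def p95_latency(latencies_s):
    """95th-percentile latency in seconds: the 'critical latency' style of metric."""
    return statistics.quantiles(latencies_s, n=100)[94]


def throughput_under_sla(latencies_s, window_s, sla_s):
    """Requests per second, counting only the requests that met the SLA."""
    met = sum(1 for latency in latencies_s if latency <= sla_s)
    return met / window_s


# Hypothetical latencies (in seconds) gathered over a one-second window.
latencies = [0.021, 0.034, 0.019, 0.057, 0.880, 0.025, 0.031]
print(f"p95 latency: {p95_latency(latencies) * 1000:.0f} ms")
print(f"throughput within 100 ms SLA: {throughput_under_sla(latencies, 1.0, 0.100):.1f} req/s")
```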

CloudXPRT’s workloads will use cloud-native components on an actual stack to provide end-to-end performance metrics that allow users to choose the best IaaS configuration for their business.

We’ve been building and testing preliminary versions of CloudXPRT for the last few months. Based on the progress so far, we are shooting to have a Community Preview of CloudXPRT ready in mid- to late March, with a version for general availability ready about two months later.

Over the coming weeks, we’ll share more information about CloudXPRT and continue talking with interested parties about how they can help. We’d love to hear which workflows would be of most interest to you and what you would most like to see in a datacenter/cloud benchmark. Please feel free to contact us!

Bill

CES 2020: AI in action and a “smart” future

During last year’s Consumer Electronics Show (CES), one question kept coming to mind as I walked the floor: Are we approaching the tipping point where AI truly affects most people in meaningful ways on a daily basis? I think it’s safe to say that we’ve reached that point as a result of AI integration with phones. After all, for many of us, AI improves the quality of our photography, recommends words and phrases as we text and search the web, and lets us know when to allow extra drive time because traffic is heavy.

However, for me, the most intriguing aspects of this year’s CES are the glimpses of how AI will change every area of our lives, with and without mobile devices. The show floor is jam-packed with ways to integrate AI with everything from athletic shoes to pet care to the kitchen sink. Many of these ideas are fascinating on their own, and they’re all part of a much bigger picture. The next few years will see increased AI utilization in medicine, transportation, agriculture, water and energy distribution, natural resource protection, and many more areas. Our personal smart devices will connect to smart vehicles, smart homes, smart grids, and smart cities. In the near future, CES shows won’t need AI sections because AI will be a part of everything.

At each step of this journey, people will need objective data about how well their tech can handle the demands of common AI workloads. We’re excited that AIXPRT is already becoming a go-to tool for testing inference performance on laptops, desktops, and servers. There’s much more to come with AIXPRT in 2020, along with news about XPRTs in the datacenter, so stay tuned to the blog for exciting developments in the weeks to come!

I’ll leave you with pics from three of my favorite displays at this year’s show. The first is a model of Toyota’s Woven City. Toyota announced plans to build an entire mini city on existing company land near Mount Fuji. The city will house 2,000 people and will serve as an enormous real-time lab where designers and engineers can test ubiquitous AI and sensor technology. Toyota will also design the city to be fully sustainable with the use of hydrogen fuel cells and solar panels.

The second picture shows the electric Hyundai Urban Air Mobility prototype. Hyundai is partnering with Uber on this project, and the planned vertical take-off and landing (VTOL) craft will seat five passengers plus a pilot, have a range of 60 miles, and be able to recharge in less than 10 minutes. These concepts aren’t new, but battery and material sciences technologies are progressing to the point that this one may get off the ground!

The third picture shows BrainCo’s AI Prosthetic Hand display. The hand provides amputees with new levels of dexterity compared to previous prosthetics, and it uses AI to learn from the user’s patterns of movement. The idea is that the accuracy of gestures and grips will improve over time, allowing users to accomplish tasks that are impossible with existing technology. A young man in the booth was using the hand to paint beautiful and precise Chinese calligraphy. Very cool!

Justin

AIXPRT’s unique development path

With four separate machine learning toolkits on their own development schedules, three workloads, and a wide range of possible configurations and use cases, AIXPRT has more moving parts than any XPRT benchmark tool to date. Because there are so many components, and because we want AIXPRT to provide consistently relevant evaluation data in the rapidly evolving AI and machine learning spaces, we anticipate updating AIXPRT more frequently than we have the other XPRTs. With that expectation in mind, we want AIXPRT testers to know that when we release an update, they can expect minimized disruption, consideration for their testing needs, and clear communication.

Minimized disruption

Each AIXPRT toolkit (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and we won’t always have much advance notice when new versions are on the way. Hypothetically, a new version of OpenVINO could arrive one month, and a new version of TensorRT just two months later. Thankfully, the modular nature of AIXPRT’s installation packages ensures that we won’t need to revise the entire AIXPRT suite every time a toolkit update goes live. Instead, we’ll update each package individually when necessary. This means that if you test with only a single AIXPRT package, updates to the other packages won’t affect your testing. For us to maintain AIXPRT’s relevance, there’s unfortunately no way to avoid all disruption, but we’ll work to keep it to a minimum.
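
As a rough illustration of that modularity, here’s how a per-toolkit package manifest might look. The structure, package versions, and toolkit versions below are made up for the example; they aren’t AIXPRT’s real metadata.

```python
# Sketch: one manifest entry per toolkit package, each versioned independently.
PACKAGES = {
    "openvino":   {"package_version": "1.1", "toolkit_version": "2019 R3"},
    "tensorflow": {"package_version": "1.0", "toolkit_version": "1.14"},
    "tensorrt":   {"package_version": "1.2", "toolkit_version": "6.0"},
    "mxnet":      {"package_version": "1.0", "toolkit_version": "1.5"},
}


def update_package(name, package_version, toolkit_version):
    """Revise one package entry; the other packages are untouched."""
    PACKAGES[name] = {
        "package_version": package_version,
        "toolkit_version": toolkit_version,
    }


# A new OpenVINO release only bumps the OpenVINO package.
update_package("openvino", "1.2", "2020.1")
```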

Consideration for testers

As we move forward, when software compatibility issues force us to update an AIXPRT package, we may discover that the update has a significant effect on results. If we find that results from the new package are no longer comparable to those from previous tests, we’ll share the differences that we’re seeing in our lab. As always, we will use documentation and versioning to make sure that testers know what to expect and that there’s no confusion about which package to use.
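
One way to picture how that versioning could work in practice: if each result records the package version that produced it, comparison tooling can refuse to mix versions instead of silently comparing non-comparable numbers. A hypothetical sketch, with field names that are ours rather than AIXPRT’s:

```python
# Sketch: guard against comparing results from different package versions.
def assert_comparable(result_a, result_b):
    """Raise instead of letting two non-comparable results be compared."""
    if result_a["package_version"] != result_b["package_version"]:
        raise ValueError(
            "Results come from different AIXPRT package versions "
            f"({result_a['package_version']} vs. {result_b['package_version']}) "
            "and should not be compared directly."
        )


assert_comparable(
    {"package_version": "1.1", "throughput": 412.0},
    {"package_version": "1.1", "throughput": 398.5},
)  # same version: no error raised
```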

Clear communication

When we update any package, we’ll communicate the changes in the new build as clearly as possible. We’ll document all changes thoroughly in the package readmes, and we’ll talk through significant updates here in the blog. We’re also available to answer questions about AIXPRT and any other XPRT-related topic, so feel free to ask!

Justin
