

More details about CloudXPRT’s workloads

About a month ago, we posted an update on the CloudXPRT development process. Today, we want to provide more details about the three workloads we plan to offer in the initial preview build:

  • In the web-tier microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds. (A simplified sketch of this kind of Monte Carlo simulation appears after this list.)
  • The machine learning (ML) training workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. In the context of CloudXPRT, the workload evaluates how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web-tier microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds. (A rough training-time sketch also follows the list.)
  • The AI-themed container scaling workload starts up a container and uses a version of the AIXPRT harness to launch Wide and Deep recommender system inference tasks in the container. Each container represents a fixed amount of work, and as the number of Wide and Deep jobs increases, CloudXPRT launches more containers in parallel to handle the load. The workload reports both the startup time for the containers and the Wide and Deep throughput results. Testers can use this workload to compare container startup time between IaaS stacks; optimize the balance between resource allocation, capacity, and throughput on a given stack; and confirm whether a given stack is suitable for specific SLAs. (A toy container-startup timing example rounds out the sketches below.)
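To make the web-tier workload’s simulation step concrete, here is a minimal sketch of the kind of Monte Carlo calculation it describes: estimating the value of a European call option by simulating terminal prices. The pricing model, parameters, and code are our own illustrative assumptions, not material from CloudXPRT itself.

    import math
    import random

    def monte_carlo_call_value(spot, strike, rate, vol, years, n_paths=100_000):
        """Estimate a European call option's value by simulating terminal
        prices under geometric Brownian motion and averaging the payoffs."""
        total_payoff = 0.0
        for _ in range(n_paths):
            z = random.gauss(0.0, 1.0)
            # Terminal price under risk-neutral geometric Brownian motion
            terminal = spot * math.exp((rate - 0.5 * vol ** 2) * years
                                       + vol * math.sqrt(years) * z)
            total_payoff += max(terminal - strike, 0.0)
        # Discount the average payoff back to the present
        return math.exp(-rate * years) * total_payoff / n_paths

    # Illustrative parameters only
    print(monte_carlo_call_value(spot=100, strike=105, rate=0.02, vol=0.25, years=1.0))

In a workload like this, per-transaction compute cost scales with the number of simulated paths, which is what makes it a useful stress on an IaaS stack.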
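For the ML training workload, the central measurement is how long XGBoost takes to train a model. Assuming the xgboost and scikit-learn Python packages, which may differ from what CloudXPRT actually uses, timing a training run looks roughly like this:

    import time

    import xgboost as xgb
    from sklearn.datasets import make_classification

    # Synthetic stand-in data; the real workload uses its own dataset
    X, y = make_classification(n_samples=100_000, n_features=50, random_state=0)
    dtrain = xgb.DMatrix(X, label=y)

    params = {"objective": "binary:logistic", "max_depth": 8, "tree_method": "hist"}

    start = time.perf_counter()
    booster = xgb.train(params, dtrain, num_boost_round=100)
    print(f"Training time: {time.perf_counter() - start:.2f} s")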
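For the container scaling workload, one of the reported numbers is container startup time. The toy version below times a single container launch with the Docker CLI rather than the Kubernetes-based approach CloudXPRT uses, so treat it purely as an illustration of the measurement:

    import subprocess
    import time

    def container_startup_seconds(image="alpine:latest"):
        """Time how long 'docker run' takes to start a container that
        exits immediately; a rough proxy for container startup cost."""
        start = time.perf_counter()
        subprocess.run(["docker", "run", "--rm", image, "true"], check=True)
        return time.perf_counter() - start

    times = [container_startup_seconds() for _ in range(5)]
    print(f"Mean startup: {sum(times) / len(times):.3f} s")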

We’re continuing to move forward with CloudXPRT development and testing and hope to add more workloads in subsequent builds. Like most organizations, we’ve adjusted our work patterns to adapt to the COVID-19 situation. While this has slowed our progress a bit, we still hope to release the CloudXPRT preview build in April. If anything changes, we’ll let folks know as soon as possible here in the blog.

If you have any thoughts or comments about CloudXPRT workloads, please feel free to contact us.

Justin

CloudXPRT development news

Last month, Bill announced that we were starting work on a new data center benchmark. CloudXPRT will measure the performance of modern, cloud-first applications deployed on infrastructure as a service (IaaS) platforms: on-premises platforms, externally hosted platforms, and hybrid clouds that use a mix of the two. Our ultimate goal is for CloudXPRT to use cloud-native components on an actual stack to produce end-to-end performance metrics that can help users determine the right IaaS configuration for their business.

Today, we want to provide a quick update on CloudXPRT development and testing.

  • Installation. We’ve completely automated the CloudXPRT installation process, which uses Kubernetes or Ansible tooling depending on the target platform. The installation process differs slightly for each platform, but the testing process is the same.
  • Workloads. We’re currently testing potential workloads that focus on three areas: web microservices, data analytics, and container scaling. We might not include all of these workloads in the first release, but we’ll keep the community informed and share more details about each workload as the picture becomes clearer. We are designing the workloads so that testers can use them to directly compare IaaS stacks and evaluate whether any given stack can meet service-level agreement (SLA) thresholds (see the sketch after this list).
  • Platforms. We want CloudXPRT to eventually support testing on a variety of popular externally hosted platforms. However, constructing a cross-platform benchmark is complicated, and we haven’t yet decided which external platforms the first CloudXPRT release will support. We’ve successfully tested the current build with on-premises IaaS stacks and with one externally hosted platform, Amazon Web Services. Next, we will test the build on Google Cloud Platform and Microsoft Azure.
  • Timeline. We are on track to meet our target of releasing a CloudXPRT preview build in late March and the first official build about two months later. If anything changes, we’ll post an updated timeline here in the blog.
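As a rough illustration of the SLA evaluation described in the workloads item above, the sketch below checks whether a run’s 99th-percentile latency stays within a budget. The latency values and the 250 ms budget are hypothetical:

    from statistics import quantiles

    def meets_sla(latencies_ms, p99_budget_ms=250.0):
        """Return True if the run's 99th-percentile latency fits the budget."""
        # quantiles(n=100) yields the 1st through 99th percentile cut points
        p99 = quantiles(latencies_ms, n=100, method="inclusive")[98]
        return p99 <= p99_budget_ms

    # Invented latencies (ms) from two hypothetical IaaS stacks
    stack_a = [120, 135, 150, 160, 170, 210, 240, 260, 300, 480]
    stack_b = [90, 95, 100, 110, 115, 120, 130, 140, 150, 180]
    print(meets_sla(stack_a), meets_sla(stack_b))  # False True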

If you would like to share any thoughts or comments related to CloudXPRT or cloud benchmarking, please feel free to contact us.

Justin

CloudXPRT is on the way

A few months ago, we wrote about the possibility of creating a datacenter XPRT. In the intervening time, we’ve discussed the idea with folks both inside and outside of the XPRT Community. We’ve heard from vendors of datacenter products, hosting/cloud providers, and IT professionals who use those products and services.

The common thread that emerged was the need for a cloud benchmark that can accurately measure the performance of modern, cloud-first applications deployed on modern infrastructure as a service (IaaS) platforms, whether those platforms are on-premises, hosted elsewhere, or some combination of the two (hybrid clouds). Regardless of where clouds reside, applications are increasingly using them in latency-critical, highly available, and high-compute scenarios.

Existing datacenter benchmarks do not give a clear indication of how applications will perform on a given IaaS platform, so a new benchmark should use cloud-native components on the actual software stacks used to manage on-premises and public clouds.

We are planning to call the benchmark CloudXPRT. Our goal is for CloudXPRT to address the needs described above while also including the elements that have made the other XPRTs successful. We plan for CloudXPRT to

  • Be relevant to on-prem (datacenter), private, and public cloud deployments
  • Run on top of cloud platform software such as Kubernetes
  • Include multiple workloads that address common scenarios like web applications, AI, and media analytics
  • Support multi-tier workloads
  • Report relevant metrics, including both critical latency for responsiveness-driven applications and maximum throughput for applications that depend on batch processing (a small sketch of these metrics follows this list)
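To make the two metric types in the last bullet concrete, here is a small sketch that derives throughput and tail latency from a list of request timings; the numbers are invented for the example:

    import statistics

    def summarize_run(latencies_s, wall_clock_s):
        """Derive both metric types: throughput for batch-style work and
        tail latency for responsiveness-driven work."""
        throughput = len(latencies_s) / wall_clock_s  # completed requests per second
        p95 = statistics.quantiles(latencies_s, n=20, method="inclusive")[18]
        return {"throughput_rps": round(throughput, 1), "p95_latency_s": round(p95, 3)}

    # Invented timings (s) from a hypothetical 10-second test window
    latencies = [0.08, 0.09, 0.10, 0.11, 0.12, 0.15, 0.18, 0.22, 0.35, 0.60]
    print(summarize_run(latencies, wall_clock_s=10.0))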

CloudXPRT’s workloads will use cloud-native components on an actual stack to provide end-to-end performance metrics that allow users to choose the best IaaS configuration for their business.

We’ve been building and testing preliminary versions of CloudXPRT for the last few months. Based on the progress so far, we are shooting to have a Community Preview of CloudXPRT ready in mid- to late March, with a version for general availability ready about two months later.

Over the coming weeks, we’ll be working on getting out more information about CloudXPRT and continuing to talk with interested parties about how they can help. We’d love to hear what workflows would be of most interest to you and what you would most like to see in a datacenter/cloud benchmark. Please feel free to contact us!

Bill
