Last month, we announced
that we’re working on an updated CloudXPRT web microservices test package. The purpose
of the update is to fix installation failures on Google Cloud Platform and
Microsoft Azure, and ensure that the web microservices workload works on Ubuntu
22.04, using updated software components such as Kubernetes v1.23.7, Kubespray
v2.18.1, and Kubernetes Metrics Server v1. The update also incorporates some
additional minor script changes.
We are still testing the updated test package with on-premises hardware and Amazon
Web Services, Google Cloud Platform, and Microsoft Azure configurations. So
far, testing is progressing well, and we feel increasingly confident that we
will be able to release the updated test package soon. We would like to share a
more concrete release schedule, but because of the complexity of the workload
and the CSP platforms involved, we are waiting until we are certain that
everything is ready to go.
The name of the updated package will be CloudXPRT v1.2, and it will include only the
updated v1.2 test harness and the updated web microservices workload. It will
not include the data analytics workload. As we stated in last month’s blog, we plan
to publish the updated web microservices package, and see what kind of interest
we receive from users about a possible refresh of the v1.1 data analytics workload.
For now, the v1.1 data analytics workload will continue to be available via CloudXPRT.com
for some time to serve as a reference resource for users who have worked with
the package in the past.
As soon as possible, we’ll provide more information about the CloudXPRT v1.2 release
date here in the blog. If you have any questions about the update or CloudXPRT
in general, please feel free to contact us!
In July, we discussed the Chrome OS team’s decision to end support for Chrome apps, and how that will prevent us from publishing any future fixes or updates for CrXPRT 2. We also announced our goal of beginning development of an all-new Chrome OS XPRT benchmark by the end of this year. While we are actively discussing this benchmark and researching workload technologies and scenarios, we don’t foresee releasing a preview build this year.
The good news is that,
in spite of a lack of formal support from the Chrome OS team, the CrXPRT 2
performance and battery life tests currently run without any known issues. We
continue to monitor the status of CrXPRT and will inform our blog readers of
any significant changes.
If you have any questions about CrXPRT, or ideas about the types of features or workloads you’d like to see in a new Chrome OS benchmark, please let us know!
Last week, we
published the Exploring WebXPRT 4 white paper.
The paper describes the design and structure of WebXPRT 4, including detailed
information about the benchmark’s harness, HTML5 and WebAssembly capability
checks, and the structure of the performance test workloads. This week, to
help WebXPRT 4 testers understand how the benchmark calculates results, we’ve published
the WebXPRT 4 results calculation and confidence interval white paper. The
paper explains the WebXPRT 4 confidence interval and how it differs from typical
benchmark variability, and the formulas the benchmark uses to calculate the
individual workload scenario scores and overall score. The paper also provides
an overview of the statistical techniques WebXPRT uses to translate raw timings
into scores. To supplement the white paper’s discussion of the results calculation process, we’ve also
published a results calculation spreadsheet that shows the
raw data from a sample test run and reproduces the calculations WebXPRT uses to
produce workload scores and the overall score.
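To give a rough sense of the kind of aggregation such a results pipeline involves, here is an illustrative Python sketch. This is not WebXPRT’s actual formula set; the calibration time, scaling factor, and z-value are assumptions, and the white paper and spreadsheet remain the authoritative references.

```python
import math
import statistics

def workload_score(timings_ms, calibration_ms=1000.0, scale=100.0):
    # Hypothetical conversion of a workload's raw completion times into a
    # higher-is-better score relative to an assumed calibration time.
    return scale * calibration_ms / statistics.mean(timings_ms)

def overall_score(workload_scores):
    # Combine per-workload scores with a geometric mean, a common choice
    # for benchmark aggregation (assumed here, not confirmed by the paper).
    return math.exp(statistics.fmean(math.log(s) for s in workload_scores))

def confidence_half_width(run_scores, z=1.96):
    # Approximate 95% confidence interval half-width for the mean score
    # across repeated runs, using a normal-approximation z-value.
    return z * statistics.stdev(run_scores) / math.sqrt(len(run_scores))

# Example: two workloads, then a confidence interval over three runs.
scores = [workload_score([500.0, 500.0]), workload_score([250.0, 250.0])]
print(overall_score(scores))                        # geometric mean of 200 and 400
print(confidence_half_width([100.0, 102.0, 98.0]))  # spread across repeated runs
```

The point of the sketch is the shape of the process the paper walks through: raw timings become per-workload scores, the workload scores fold into one overall score, and repeated runs yield a confidence interval that describes measurement precision rather than device-to-device variability.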
The paper is available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know!
This week, we published the Exploring WebXPRT 4 white paper. It describes the design and structure of WebXPRT 4, including detailed information about the benchmark’s harness, HTML5 and WebAssembly (WASM) capability checks, and changes we’ve made to the structure of the performance test workloads. We explain the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication. The white paper also includes information about the third-party functions and libraries that WebXPRT 4 uses during the HTML5 and WASM capability checks and performance workloads.
The Exploring WebXPRT 4 white paper promotes
the high level of transparency and disclosure that is a core value of the
BenchmarkXPRT Development Community. We’ve always believed that transparency
builds trust, and trust is essential for a healthy benchmarking community.
That’s why we involve community members in the benchmark development process
and disclose how we build our benchmarks and how they work.
You can find the paper on WebXPRT.com and our XPRT white papers page. If you have any questions about WebXPRT 4, please let us know, and be sure to check out our other XPRT white papers.
We developed our first cloud benchmark, CloudXPRT,
to measure the performance of cloud applications deployed on modern infrastructure
as a service (IaaS) platforms. When we first released CloudXPRT in
February of 2021, the benchmark included two test packages: a web microservices
workload and a data analytics workload. Both supported on-premises and cloud
service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud
Platform (GCP), and Microsoft Azure.
CloudXPRT is our most complex benchmark, requiring sustained compatibility between many
software components across multiple independent test environments. As vendors
roll out updates for some components and stop supporting others, it’s
inevitable that something will break. Since CloudXPRT’s launch, we’ve become
aware of installation failures while attempting to set up CloudXPRT on Ubuntu
virtual machines with GCP and Microsoft Azure. Additionally, while the web
microservices workload continues to run in most instances with a few
configuration tweaks and workarounds, the data analytics workload fails
consistently due to compatibility issues with MinIO, Prometheus, and Kafka
within the Kubernetes environment.
In response, we’re working to fix problems with the web microservices workload and
bring all necessary components up to date. We’re developing an updated test
package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray
v2.18.1. We’re also updating Kubernetes Metrics Server from v1beta1 to v1, and will
incorporate some minor script changes. Our goal is to ensure successful
installation and testing with the on-premises and CSP platforms that we
supported when we first launched CloudXPRT.
We are currently focusing on the web microservices workload for two reasons.
First, more users have downloaded it than the data analytics workload. Second, we
think we have a clear path to success. Our plan is to publish the updated web
microservices test package, and see what feedback and interest we receive from
users about a possible data analytics refresh. The existing data analytics workload
will remain available via CloudXPRT.com for the time being to serve as a
reference resource for users who have worked with the package in the past. We
apologize for the inconvenience that these issues have caused. We’ll provide
more information about a release timeline and final test package details here
in the blog as we get closer to publication. If you have any questions about
the future of CloudXPRT, please feel free to contact us!
The new school year is
upon us, and learners of all ages are looking for tech devices that have the
capabilities they will need in the coming year. The tech marketplace can be
confusing, and competing claims can be hard to navigate. The XPRTs are here to
help! Whether you’re shopping for a new phone, tablet, Chromebook, laptop, or
desktop, the XPRTs can provide reliable, industry-trusted performance scores
that can cut through all the noise.
A good place to start looking
for scores is the WebXPRT 4 results viewer. The viewer displays WebXPRT 4 scores from
over 175 devices—including many hot new releases—and we’re adding new scores
all the time. To learn more about the viewer’s capabilities and how you can use
it to compare devices, check out this blog post.
Another resource we
offer is the XPRT results browser. The browser is the most efficient way to access the XPRT
results database, which currently holds more than 3,000 test results from over 120
sources, including major tech review publications around the world, OEMs, and
independent testers. It offers a wealth of current and historical performance
data across all of the XPRT benchmarks and hundreds of devices. You can read
more about how to use the results browser here.
Also, if you’re considering a popular device, chances are good that a recent tech review includes an XPRT score for that device. Two quick ways to find these reviews: (1) go to your favorite tech review site and search for “XPRT” and (2) go to a search engine and enter the device name and XPRT name (e.g., “Apple MacBook Air” and “WebXPRT”). Here are a few recent tech reviews that use one of the XPRTs to evaluate a popular device:
- Notebookcheck used WebXPRT in reviews of the Acer Swift X 16, Apple MacBook Air, ASUS ROG Flow X16, Lenovo V17G2, and Nothing Phone (1), as well as in a recent article titled, “The Best Smartphones.”
- PCMag used WebXPRT 3 to compare the M1 Max and M1 Ultra versions of the Apple Mac Studio, and to review the Apple MacBook Air (2022, M2).
- PCWorld used CrXPRT 2 in a feature called, “The best Chromebooks: Best overall, best battery life, and more.”
- ZDNet used CrXPRT 2 in a review titled, “The 5 best Chromebooks for students: Top back-to-school picks.”
The XPRTs can help consumers make better-informed and more confident tech purchases. As this school year begins, we hope you’ll find the data you need on our site or in an XPRT-related tech review. If you have any questions about the XPRTs, XPRT scores, or the results database, please feel free to ask!