We’ve designed each of the XPRT benchmarks to assess the performance of specific types of devices in scenarios that mirror the ways consumers typically use those devices. While most XPRT benchmark users are interested in producing official overall scores, some members of the tech press have been using the XPRTs in unconventional, creative ways.
One example is the use of WebXPRT by Tweakers, a popular tech review site based in the Netherlands. (The site is in Dutch, so the Google Translate extension in Chrome was helpful for me.) As Tweakers uses WebXPRT to evaluate all kinds of consumer hardware, they also measure the sound output of each device. Tweakers then publishes the LAeq metric for each device, giving readers a sense of how loud a system may be, on average, while it performs common browser tasks.
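For readers unfamiliar with LAeq, it is the equivalent continuous A-weighted sound level: an energy average of the sound pressure level over the measurement period. As a rough illustration (the function and sample readings below are hypothetical and not part of any XPRT benchmark or Tweakers' methodology), this sketch computes LAeq from a series of A-weighted SPL readings taken at equal time intervals:

```python
import math

def laeq(levels_db):
    """Equivalent continuous sound level (LAeq) from a series of
    A-weighted SPL readings (in dB) taken at equal time intervals.
    LAeq is the energy average: 10 * log10(mean(10^(L/10)))."""
    energies = [10 ** (level / 10) for level in levels_db]
    return 10 * math.log10(sum(energies) / len(energies))
```

For example, readings of 40, 45, and 50 dB(A) yield an LAeq of roughly 46.7 dB. Because it averages energy rather than decibel values, LAeq sits closer to the loudest readings than a simple arithmetic mean would, which is why it is a useful single-number summary of how loud a device sounds during a test run.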
If you’re interested
in seeing Tweakers’ use of WebXPRT for sound output testing firsthand, check
out their Apple MacBook Pro M2,
HP Envy 34 All-in-One,
and Samsung Galaxy Book 2 Pro reviews.
Other labs and tech
publications have also used the XPRTs in unusual ways, such as automating the
benchmarks to run during screen burn-in tests or custom battery-life rundowns. If
you’ve used any of the XPRT benchmarks in creative ways, please let us know!
We are interested in learning more about your tests, and your experiences may
provide helpful information that we can share with other XPRT users.
Last week, a member of the tech press let us know that they encountered an error while preparing a system for HDXPRT 4 testing. Specifically, while attempting to install the trial version of Adobe Photoshop Elements (PSE) 2020, they encountered the following error:
Your browser or operating system is no longer supported. You may need to install the latest updates to your operating system.
They were working with
an MSI Sword 15 A12UE, which had all the latest Windows 11 and Microsoft Edge
updates, and they were able to complete installation and testing on other
Windows 11 systems in their lab. This rules out a general incompatibility between the
Adobe PSE 2020 installer package and Windows 11 or Microsoft Edge as the cause.
We do not have the
same MSI Sword system in our lab, but we tried to replicate the issue by performing
the HDXPRT 4 installation and setup process on a Dell G7 15 laptop running an
up-to-date version of Windows 11 (22H2, 22621.521). We successfully installed
Adobe PSE 2020 and completed several HDXPRT 4 iterations.
The error this user
encountered could be specific to their system or situation. However, we would
like to know if other HDXPRT 4 users have run into the same issue. If you’ve experienced
this issue in your testing, please contact us.
We may be able to identify and publish a solution.
Last month, we announced
that we’re working on an updated CloudXPRT web microservices test package. The purpose
of the update is to fix installation failures on Google Cloud Platform and
Microsoft Azure, and ensure that the web microservices workload works on Ubuntu
22.04, using updated software components such as Kubernetes v1.23.7, Kubespray
v2.18.1, and Kubernetes Metrics Server v1. The update also incorporates some
additional minor script changes.
We are still testing the updated test package with on-premises hardware and Amazon
Web Services, Google Cloud Platform, and Microsoft Azure configurations. So
far, testing is progressing well, and we feel increasingly confident that we
will be able to release the updated test package soon. We would like to share a
more concrete release schedule, but because of the complexity of the workload
and the CSP platforms involved, we are waiting until we are certain that
everything is ready to go.
The name of the updated package will be CloudXPRT v1.2, and it will include only the
updated v1.2 test harness and the updated web microservices workload. It will
not include the data analytics workload. As we stated in last month’s blog, we plan
to publish the updated web microservices package first, and then gauge user interest
in a possible refresh of the v1.1 data analytics workload.
For now, the v1.1 data analytics workload will continue to be available via CloudXPRT.com
for some time to serve as a reference resource for users who have worked with
the package in the past.
As soon as possible, we’ll provide more information about the CloudXPRT v1.2 release
date here in the blog. If you have any questions about the update or CloudXPRT
in general, please feel free to contact us!
In July, we discussed the Chrome OS team’s decision to end support for Chrome apps, and how that will prevent us from publishing any future fixes or updates for CrXPRT 2. We also announced our goal of beginning development of an all-new Chrome OS XPRT benchmark by the end of this year. While we are actively discussing this benchmark and researching workload technologies and scenarios, we don’t foresee releasing a preview build this year.
The good news is that,
in spite of a lack of formal support from the Chrome OS team, the CrXPRT 2
performance and battery life tests currently run without any known issues. We
continue to monitor the status of CrXPRT and will inform our blog readers of
any significant changes.
If you have any questions about CrXPRT, or ideas about the types of features or workloads you’d like to see in a new Chrome OS benchmark, please let us know!
Last week, we
published the Exploring WebXPRT 4 white paper.
The paper describes the design and structure of WebXPRT 4, including detailed
information about the benchmark’s harness, HTML5 and WebAssembly capability
checks, and the structure of the performance test workloads. This week, to
help WebXPRT 4 testers understand how the benchmark calculates results, we’ve published
the WebXPRT 4 results calculation and confidence interval white paper. The paper
explains the WebXPRT 4 confidence interval and how it differs from typical
benchmark variability, and describes the formulas the benchmark uses to calculate the
individual workload scenario scores and overall score. The paper also provides
an overview of the statistical techniques WebXPRT uses to translate raw timings
into scores. To supplement the white paper’s discussion of the results calculation process, we’ve also
published a results calculation spreadsheet that shows the
raw data from a sample test run and reproduces the calculations WebXPRT uses to
produce workload scores and the overall score.
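The white paper and spreadsheet are the authoritative references, but the general pattern they document — normalize raw timings into per-workload scores, combine those with a geometric mean, and report a confidence interval across repeated runs — can be sketched as follows. The calibration constant, scale factor, and normal-approximation interval here are illustrative assumptions for the sketch, not WebXPRT’s actual formulas:

```python
import math
from statistics import geometric_mean, mean, stdev

# Illustrative constants -- NOT WebXPRT's actual calibration values.
CALIBRATION_MS = 1000.0
SCALE = 100.0

def workload_score(timings_ms):
    """Convert raw workload timings (ms) into a score, so that
    faster (lower) times produce higher scores."""
    return CALIBRATION_MS / mean(timings_ms) * SCALE

def overall_score(workload_scores):
    """Combine per-workload scores with a geometric mean, which keeps
    any single workload from dominating the overall result."""
    return geometric_mean(workload_scores)

def confidence_interval_95(run_scores):
    """Half-width of an approximate 95% confidence interval for the
    mean overall score across repeated runs (normal approximation)."""
    return 1.96 * stdev(run_scores) / math.sqrt(len(run_scores))
```

A geometric mean is a common choice for combining benchmark sub-scores because it responds proportionally to changes in any one workload; the confidence interval then conveys how much the overall score varies from run to run on the same system.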
The paper is available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know!
This week, we published the Exploring WebXPRT 4 white paper. It describes the design and structure of WebXPRT 4, including detailed information about the benchmark’s harness, HTML5 and WebAssembly (WASM) capability checks, and changes we’ve made to the structure of the performance test workloads. We explain the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication. The white paper also includes information about the third-party functions and libraries that WebXPRT 4 uses during the HTML5 and WASM capability checks and performance workloads.
The Exploring WebXPRT 4 white paper reflects
the high level of transparency and disclosure that is a core value of the
BenchmarkXPRT Development Community. We’ve always believed that transparency
builds trust, and trust is essential for a healthy benchmarking community.
That’s why we involve community members in the benchmark development process
and disclose how we build our benchmarks and how they work.
You can find the paper on WebXPRT.com and our XPRT white papers page. If you have any questions about WebXPRT 4, please let us know, and be sure to check out our other XPRT white papers.