Device reviews in publications
such as AnandTech, Notebookcheck, and PCMag, among many others, often feature
WebXPRT test results, and we appreciate the many members of the tech press who
use WebXPRT. As we move forward with the WebXPRT 4 development process, we’re especially
interested in learning what longtime users would like to see in a new version
of the benchmark.
In previous posts,
we’ve asked people to weigh in on the potential addition of a WebAssembly workload or a battery life test. We’d also like to ask experienced testers some other
test-related questions. To that end, this week we’ll be sending a WebXPRT 4
survey directly to members of the tech press who frequently publish WebXPRT test results.
Regardless of whether you are a member of the tech press, we invite you to participate by sending your answers to any or all of the questions below to email@example.com. We ask you to do so by the end of May.
Do you think WebXPRT 3’s selection of workload scenarios is representative of modern web tasks?
How do you think WebXPRT compares to other common browser-based benchmarks, such as JetStream, Speedometer, and Octane?
Are there web technologies that you’d like us to include in additional workloads?
Are you happy with the WebXPRT 3 user interface? If not, what UI changes would you like to see?
Are there any aspects of WebXPRT 2015 that we changed in WebXPRT 3 that you’d like to see us change back?
Have you ever experienced significant connection issues when testing with WebXPRT?
Given the array of workloads, do you think the WebXPRT runtime is reasonable? Would you mind if the average runtime were a bit longer?
Are there any other aspects of WebXPRT 3 that you’d like to see us change?
If you’d like to discuss any topics
that we did not cover in the questions above, please feel free to include additional
comments in your response. We look forward to hearing your thoughts!
In the coming months,
we’ll be moving forward with the first stages of the WebXPRT 4 development
process. It’s been a while since we last asked readers to send their
thoughts about web technologies and workloads that may be a good fit for
WebXPRT 4, but we’re still very much open to ideas. If you missed our previous
posts about possible changes for WebXPRT 4, we recap the most prominent ideas
below. We also request specific feedback regarding a potential battery life test.
Community members have asked about a WebXPRT 4 battery life test. Any such test would likely be very similar to the performance-weighted battery life test in CrXPRT 2 (as opposed to a simple rundown test). While WebXPRT runs in almost any browser, cross-browser compatibility issues could cause a WebXPRT battery life test to run in only one browser. If this turned out to be the case, would you still be interested in using the battery life test? Please let us know.
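The cross-browser limitation mentioned above stems from uneven support for the Battery Status API, which at the time was fully available only in Chromium-based browsers. As a purely hypothetical sketch (not actual WebXPRT or CrXPRT code), a browser-based battery test would need to sample battery state roughly like this:

```javascript
// Hypothetical sketch of how a browser battery life test might sample
// battery state via the Battery Status API (Chromium-only at the time
// of writing). Not actual XPRT code.
async function sampleBattery() {
  // Outside a supporting browser (e.g. under Node.js), report unavailability.
  if (typeof navigator === "undefined" || !navigator.getBattery) {
    return null;
  }
  const battery = await navigator.getBattery();
  return {
    level: battery.level,       // charge fraction, 0.0 to 1.0
    charging: battery.charging, // true while plugged in
  };
}
```

A performance-weighted test would call a helper like this periodically while running workloads, then combine drain rate with work completed; in browsers without the API, the function above simply returns `null`, which is exactly the single-browser constraint described in the post.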
One of the most promising ideas is the potential addition of one or more WebAssembly (WASM) workloads. WASM is a low-level, binary instruction format that works across all modern browsers. It offers web developers a great deal of flexibility and provides the speed and efficiency necessary for running complex client applications in the browser. WASM enables a variety of workload scenario options, including gaming, video editing, VR, virtual machines, image recognition, and interactive educational content.
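To illustrate just how low-level the WASM binary format is, a complete module fits in a few dozen bytes and runs through the standard `WebAssembly` JavaScript API in any modern browser or Node.js. The hand-encoded `add` module below is a generic illustration, not a WebXPRT workload:

```javascript
// A complete WebAssembly module, hand-encoded: exports add(a, b) -> a + b.
// Generic illustration of the WASM binary format, not a WebXPRT workload.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,                               // magic: "\0asm"
  0x01, 0x00, 0x00, 0x00,                               // binary version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export func 0 as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

async function runAdd(a, b) {
  const { instance } = await WebAssembly.instantiate(bytes);
  return instance.exports.add(a, b);
}
```

Real workloads would of course compile C, C++, or Rust to WASM rather than hand-encoding bytes, but the same `WebAssembly.instantiate` path is what lets compute-heavy scenarios like those listed above run at near-native speed in the browser.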
Other ideas include using a WebGL-based workload to target GPUs, and simulating common web applications.
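Any GPU-targeting workload would first need to confirm what the browser supports. A minimal, hypothetical capability check (helper name and shape are our own, purely for illustration) might look like:

```javascript
// Hypothetical capability check a WebGL-based workload might run before
// starting. Returns null when no DOM is available (e.g. under Node.js).
function webglSupport() {
  if (typeof document === "undefined") return null;
  const canvas = document.createElement("canvas");
  return {
    webgl:  !!canvas.getContext("webgl"),  // baseline WebGL 1.0
    webgl2: !!canvas.getContext("webgl2"), // WebGL 2.0, where available
  };
}
```

A benchmark could use a check like this to select between a WebGL 2.0 workload and a fallback, or to report that a GPU test is unavailable on the current browser.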
We’ll start work on
WebXPRT 4 soon, but there’s still time to send your comments and ideas, so please
do so as quickly as possible!
We’re currently formulating our 2021 development roadmap for the XPRTs. In addition to planning CloudXPRT and WebXPRT updates, we’re discussing the possibility of releasing HDXPRT 5 in 2021. It’s hard for me to believe, but it’s been about two and a half years since we started work on HDXPRT 4, and February 2021 will mark two years since the first HDXPRT 4 release. Windows PCs are more powerful than ever, so it’s a good time to talk about how we can enhance the benchmark’s ability to measure how well the latest systems handle real-world media technologies and applications.
When we plan a new
version of an XPRT benchmark, one of our first steps is updating the
benchmark’s workloads so that they will remain relevant in years to come. We
almost always update application content, such as photos and videos, to
contemporary file resolutions and sizes. For example, we added both higher-resolution
photos and a 4K video conversion task in HDXPRT 4. Are there specific types of
media files that you think would be especially relevant to high-performance
media tasks over the next few years?
Next, we will assess
the suitability of the real-world trial applications that the editing photos,
editing music, and converting videos test scenarios use. Currently, these are Adobe
Photoshop Elements, Audacity, CyberLink MediaEspresso, and HandBrake. Can you
think of other applications that belong in a high-performance media processing benchmark?
In HDXPRT 4, we gave
testers the option to target a system’s discrete graphics card during the video
conversion workload. Has this proven useful in your testing? Do you have
suggestions for new graphics-oriented workloads?
We’ll also strive to
make the UI more intuitive, to simplify installation, and to reduce the size of
the installation package. What elements of the current UI do you find
especially useful or think we could improve?
We welcome your answers to these questions and any additional suggestions or comments on HDXPRT 5. Send them our way!
Last month, we provided an update
on the CloudXPRT development process and more information about the three workloads
that we’re including in the first build. We’d initially hoped to release the
build at the end of April, but several technical challenges have caused us to
push the timeline out a bit. We believe we’re very close to ready, and look
forward to posting a release announcement soon.
In the meantime, we’d like to hear your thoughts about the CloudXPRT results publication
process. Traditionally, we’ve published XPRT results on our site on a rolling
basis. When we complete our own tests, receive results submissions from other
testers, or see results published in the tech media, we authenticate them and add
them to our site. This lets testers make their results public on their
timetable, as frequently as they want.
Some major benchmark organizations use a different approach, and create a schedule
of periodic submission deadlines. After each deadline passes, they review the batch
of submissions they’ve received and publish all of them together on a single
later date. In some cases, they release results only two or three times per
year. This process offers a high level of predictability. However, it can pose
significant scheduling obstacles for other testers, such as tech journalists
who want to publish their results in an upcoming device review and need official
results to back up their claims.
We’d like to hear what you think about the different approaches to results submission and publication that you’ve encountered. Are there aspects of the XPRT approach that you like? Are there things we should change? Should we consider periodic results submission deadlines and publication dates for CloudXPRT? Let us know what you think!
The BenchmarkXPRT Development Community started almost 10 years ago with the development
of the High Definition Experience & Performance Ratings Test, also known as
HDXPRT. Back then, we distributed the benchmark to interested parties by
mailing out physical DVDs. We’ve come a long way since then, as testers now
freely and easily access six XPRT benchmarks from our site and major app stores. Hardware manufacturers and tech journalists—the core group of XPRT testers—work
within a constantly changing tech landscape. Because of our commitment to
providing those testers with what they need, the XPRTs grew as we developed
additional benchmarks to expand the reach of our tools from PCs to servers and
all types of notebooks, Chromebooks, and mobile devices.
As today’s tech landscape continues to evolve at a rapid pace, our desire to play
an active role in emerging markets continues to drive us to expand our testing
capabilities into areas like machine learning (AIXPRT)
and cloud-first applications (CloudXPRT).
While these new technologies carry the potential to increase efficiency, improve
quality, and boost the bottom line for companies around the world, it’s often
difficult to decide where and how to invest in new hardware or services. The
ever-present need for relevant and reliable data is the reason many
organizations use the XPRTs to help make confident choices about their
company’s future tech.
We just released a new video that helps to explain what the XPRTs provide and how they can play an important role in a company’s tech purchasing decisions. We hope you’ll check it out!
We’re excited about the continued growth of the XPRTs, and we’re eager to meet the
challenges of adapting to the changing tech landscape. If you have any questions
about the XPRTs or suggestions for future benchmarks, please let us know!
Last month, Bill announced
that we were starting work on a new data center benchmark. CloudXPRT
will measure the performance of modern, cloud-first applications deployed on infrastructure
as a service (IaaS) platforms—on-premises platforms,
externally hosted platforms, and hybrid clouds that use a mix of the two. Our
ultimate goal is for CloudXPRT to use cloud-native components on an actual
stack to produce end-to-end performance metrics that can help users determine the
right IaaS configuration for their business.
We want to provide a quick update on CloudXPRT development and testing.
Installation. We’ve completely automated the CloudXPRT installation process, which leverages Kubernetes or Ansible tools depending on the target platform. The installation processes differ slightly for each platform, but testing is the same.
Workloads. We’re currently testing potential workloads that focus on three areas: web microservices, data analytics, and container scaling. We might not include all of these workloads in the first release, but we’ll keep the community informed and share more details about each workload as the picture becomes clearer. We are designing the workloads so that testers can use them to directly compare IaaS stacks and evaluate whether any given stack can meet service level agreement (SLA) thresholds.
Platforms. We want CloudXPRT to eventually support testing on a variety of popular externally hosted platforms. However, constructing a cross-platform benchmark is complicated and we haven’t yet decided which external platforms the first CloudXPRT release will support. We’ve successfully tested the current build with on-premises IaaS stacks and with one externally hosted platform, Amazon Web Services. Next, we will test the build on Google Cloud Hosting and Microsoft Azure.
Timeline. We are on track to meet our target of releasing a CloudXPRT preview build in late March and the first official build about two months later. If anything changes, we’ll post an updated timeline here in the blog.
If you would like to share any thoughts or comments related to CloudXPRT or cloud benchmarking, please feel free to contact us.