As the WebXPRT 4 development process has progressed, we’ve started to discuss the possibility of offering experimental WebXPRT 4 workloads in 2022. These would be optional workloads that test cutting-edge browser technologies or new use cases. The individual scores for the experimental workloads would stand alone and would not factor into the WebXPRT 4 overall score.
WebXPRT testers would be able to run the experimental workloads one of two ways: by manually selecting them on the benchmark’s home screen, or by adjusting a value in the WebXPRT 4 automation scripts.
Testers would benefit from experimental workloads by being able to compare how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.
Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.
People choose a default web browser based on several factors.
Speed is sometimes the deciding factor, but privacy settings, memory load,
ecosystem integration, and web app capabilities can also come into play.
Regardless of the motivations behind a person’s go-to browser choice, the
dominance of software-as-a-service (SaaS) computing means that new updates are
always right around the corner. In previous blog posts, we’ve talked about how browser speed can increase
or decrease significantly after an update, only to swing back in the other
direction shortly thereafter. OS-specific optimizations can also affect
performance, such as with Microsoft Edge on Windows and Google Chrome on Chrome
OS.
Windows 11 began rolling out earlier this month, and tech press outlets
such as AnandTech and PCWorld have used WebXPRT
3 to evaluate the impact of the new OS—or
specific settings in the OS—on browser performance. Our own in-house tests, which
we discuss below, show a negligible impact on browser performance when updating
our test system from Windows 10 to Windows 11. It’s important to note that depending
on a system’s hardware setup, the impact might be more significant in certain
scenarios. For more information about such scenarios, we encourage you to read the
PCWorld article discussing the impact of the Windows 11 default virtualization-based
security (VBS) settings on
browser performance in some instances.
In our comparison tests, we used a Dell
XPS 13 7390 with an Intel
Core i3-10110U processor and 4 GB of RAM. For the Windows 10 tests, we used a
clean Windows 10 Home image updated to version 20H2 (19042.1165). For the
Windows 11 tests, we updated the system to Windows 11 Home version 21H2 (22000.282).
On each OS version, we ran WebXPRT 3 three times on the latest versions of five
browsers: Brave, Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. For
each browser, the score we post below is the median of the three test runs.
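For readers reproducing this median-of-three approach in their own test automation, the calculation is straightforward. The sketch below is our own illustration (the function name and sample scores are hypothetical, not WebXPRT data):

```javascript
// Return the median of an array of benchmark scores.
// For an odd number of runs this is the middle value;
// for an even number, it is the mean of the two middle values.
function medianScore(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Example: three hypothetical WebXPRT 3 runs on one browser.
const runs = [243, 251, 248];
console.log(medianScore(runs)); // 248
```

Using the median rather than the mean keeps a single outlier run from skewing the posted score.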
In our last
round of tests on Windows 10, Firefox was the clear winner. Three of the
Chromium-based browsers (Chrome, Edge, and Opera) produced very close scores,
and the performance of Brave lagged by about 7 percent. In this round of
Windows 10 testing, performance on every browser improved slightly, with Google
Chrome taking a slight lead over Firefox.
In our Windows 11 testing, we were interested to find that without exception, browser scores were slightly lower than in Windows 10 testing. However, none of the decreases were statistically significant. Most users performing daily tasks are unlikely to notice that degree of difference.
Have you observed any significant differences in WebXPRT 3 scores
after upgrading to Windows 11? If so, let us know!
Last
week, we shared some new details
about the changes we’re likely to make in WebXPRT 4, and a rough target date
for publishing a preview build. This week, we’re excited to share an early
preview of the new results viewer tool that we plan to release in conjunction
with WebXPRT 4. We hope the tool will help testers and analysts access the
wealth of WebXPRT test results in our database in an efficient, productive, and
enjoyable way. We’re still ironing out many of the details, so some aspects of
what we’re showing today might change, but we’d like to give you an idea of
what to expect.
The screenshot below shows the tool’s default display. In this example, the viewer displays over 650 sample results—from a wide range of device types—that we’re currently using as placeholder data. The viewer will include several sorting and filtering options, such as device type, browser type, hardware specs such as processor vendor, and the source of the result.
Each
vertical bar in the graph represents the overall score of a single test result,
and the graph presents the scores in order from lowest to highest. To view an
individual result in detail, the user simply hovers over and selects the bar
representing the result. The bar turns dark blue, and the dark blue banner at
the bottom of the viewer displays details about that result.
In the example above, the banner shows the overall score (250) and the score’s percentile rank (85th) among the scores in the current display. In the final version of the viewer, the banner will also display the device name of the test system, along with basic hardware disclosure information. Selecting the Run details button will let users see more about the run’s individual workload scores.
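To illustrate one common way a percentile rank like this can be computed—the share of scores at or below a given score—here is a minimal sketch. This is our own illustration with hypothetical data, not the viewer’s actual code, and the viewer may use a different formula:

```javascript
// Percentile rank of a score within a set of scores: the
// percentage of scores at or below the given score, rounded
// to the nearest whole percent.
function percentileRank(score, allScores) {
  const atOrBelow = allScores.filter(s => s <= score).length;
  return Math.round((atOrBelow / allScores.length) * 100);
}

// Hypothetical distribution of overall scores in the viewer.
const scores = [120, 150, 180, 200, 220, 235, 240, 250, 260, 300];
console.log(percentileRank(250, scores)); // 80
```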
We’re
still working on a way for users to pin or save specific runs. This would let
users easily find the results that interest them, or possibly select multiple
runs for a side-by-side comparison.
We’re excited about this new tool, and we look forward to sharing more details here in the blog as we get closer to taking it live. If you have any questions or comments about the results viewer, please feel free to contact us!
The WebXPRT 4 development process is
far enough along that we’d like to share more about changes we are likely to
make and a rough target date for publishing a preview build. While some of the
details below will probably change, this post should give readers a good sense
of what to expect.
General changes
Some of the non-workload changes in
WebXPRT 4 relate to our typical benchmark update process, and a few result
directly from feedback we received from the WebXPRT tech press survey.
We will update the aesthetics of the WebXPRT UI to make
WebXPRT 4 visually distinct from older versions. We do not anticipate
significantly changing the flow of the UI.
We will update content in some of the workloads to
reflect changes in everyday technology. For instance, we will upgrade most
of the photos in the photo processing workloads to higher resolutions.
In response to a request from tech press survey
respondents, we are considering adding a looping function to the
automation scripts.
We are investigating the possibility of shortening the
benchmark by reducing the default number of iterations from seven to five.
We will only make this change if we can ensure that five iterations produce
consistently low score variance.
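One common way to check whether fewer iterations keep run-to-run variance low is the coefficient of variation (CV): the standard deviation expressed as a percentage of the mean. The sketch below is our own illustration of such a check, with hypothetical per-iteration scores—it is not WebXPRT code:

```javascript
// Coefficient of variation (CV) of a set of iteration scores,
// as a percentage of the mean. A lower CV indicates more
// consistent results across iterations.
function coefficientOfVariation(values) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return (Math.sqrt(variance) / mean) * 100;
}

// Hypothetical scores from a five-iteration run.
const iterations = [248, 250, 249, 251, 252];
console.log(coefficientOfVariation(iterations).toFixed(2) + '%'); // 0.57%
```

A benchmark maintainer could compare the CV of five-iteration runs against seven-iteration runs across many systems before committing to the shorter default.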
Changes to existing workloads
Photo
Enhancement. This workload applies three effects
to two photos each (six photos total). It tests HTML5 Canvas, Canvas 2D, and
JavaScript performance. The only change we are considering is adding
higher-resolution photos.
Organize Album Using AI. This workload currently uses the ConvNetJS neural network library to complete two tasks: (1) organizing five images and (2) classifying the five images in an album. We are planning to replace ConvNetJS with WebAssembly (WASM) for both tasks and are considering upgrading the images to higher resolutions.
Stock Option Pricing. This workload calculates and displays graphical views of a stock portfolio using Canvas, SVG, and dygraph.js. The only change we are considering is combining it with the Sales Graphs workload (below).
Sales Graphs. This workload provides a web-based application displaying multiple views of sales data. Sales Graphs exercises HTML5 Canvas and SVG performance. The only change we are considering is combining it with the Stock Option Pricing workload (above).
Encrypt Notes and OCR Scan. This workload uses ASM.js to sync notes, extract text from a scanned receipt using optical character recognition (OCR), and add the scanned text to a spending report. We are planning to replace ASM.js with WASM for the Notes task and with WASM-based Tesseract for the OCR task.
Online Homework. This workload uses regex, arrays, strings, and Web Workers to review DNA and spell-check an essay. We are not planning to change this workload.
Possible new workloads
Natural Language Processing (NLP). We are considering the addition of an NLP workload using ONNX Runtime and/or TensorFlowJS. The workload would use Bidirectional Encoder Representations from Transformers (BERT) to answer questions about a given text. Similar use cases are becoming more prevalent in conversational bot systems, domain-specific document search tools, and various other educational applications.
Message Scrolling. We are considering developing a new workload that would use Angular or React.js to scroll through hundreds of messages. We’ll share more about this possible workload as we firm up the details.
The release timeline
We hope to publish a WebXPRT 4
preview build in the second half of November, with a general release before the
end of the year. If it looks as though that timeline will change significantly,
we’ll provide an update here in the blog as soon as possible.
We’re very grateful for all the
input we received during the WebXPRT 4 planning process. If you have any
questions about the details we’ve shared above, please feel free to ask!
Last week, we discussed the upcoming Windows 11 GA launch on October 5, and our hope is that the transition period from Windows 10 to Windows 11 will go smoothly for the three XPRTs that run on Windows 10: HDXPRT 4, TouchXPRT 2016, and AIXPRT. We’re happy to report that so far, we’ve been able to install HDXPRT 4 and TouchXPRT 2016 on the latest stable preview of Windows 11 without any problems. For TouchXPRT 2016, we successfully installed the benchmark using both available methods—directly from the Microsoft Store and through the manual sideload process—and ran it without issues.
We’re
still testing Windows 11 compatibility with the AIXPRT OpenVINO, TensorFlow,
and TensorRT test packages, and will share our findings here in the blog as
soon as possible. Also, because Microsoft might still publish Windows 11 changes
through the stable preview channel that interfere with the HDXPRT 4 or
TouchXPRT 2016 installation or testing processes, we’ll continue to verify each
benchmark’s Windows 11 compatibility up through and beyond launch day.
If
you’re conducting your own HDXPRT 4, TouchXPRT 2016, or AIXPRT testing on the
Windows 11 beta, you could encounter issues with newly published updates before
we do due to the timing of our update cycles. You could also run into problems
that are specific to your test gear. In either case, please don’t assume that
we already know about the problem. Let us know!
Last
week, Microsoft announced
that the Windows 11 GA build will officially launch on Tuesday, October 5, earlier
than the initial late 2021 estimate. The update will start rolling out with
select new laptops and existing Windows 10 PCs that satisfy specific system requirements,
and only some Windows 10 PCs will be eligible for the update right away.
Through a phased Windows Update process, additional Windows 10 PCs will be able
to access the update throughout the first half of 2022.
Between
the phased Windows 11 rollout and the pledge
Microsoft has made to continue Windows 10 support through October 2025, it will
likely be a while before the majority of Windows users transition to the new version.
We hope the transition period will go smoothly for the XPRTs. However, because we
designed three of our benchmarks to run on Windows 10 (HDXPRT 4,
TouchXPRT 2016,
and AIXPRT),
we might encounter compatibility issues with Windows 11.
Over
the coming weeks, we’ll be testing HDXPRT 4, TouchXPRT 2016, and AIXPRT on beta
versions of Windows 11, and we’ll test again after the GA launch. In addition
to obvious compatibility issues and test failures, we’ll note any changes we
need to make to our documentation to account for differences in the Windows 11
installation or test processes.
We hope that testers will be able to successfully use all three benchmarks on both OS versions throughout the transition process. If problems arise, we will keep our blog readers informed while exploring solutions. As always, we’re also open to feedback from the community, so if you are participating in the Windows Insider Program and have encountered Windows 11 beta compatibility issues with any of the Windows-focused XPRTs, please let us know!