In October, we shared an early preview of the new results viewer tool that we’ve been developing in parallel with WebXPRT 4. The WebXPRT 4 Preview is now available to the public, and we’re excited to announce that the new results viewer is also live. We already have over 65 test results in the viewer, and in the weeks leading up to the WebXPRT 4 general release, we’ll be actively populating the viewer with the latest PT-curated WebXPRT 4 Preview results.
We encourage readers to visit the blog for details about the viewer's features, and to take some time to explore the data.
We’re excited about this new tool, which we view as an ongoing project with
room for expansion and improvement based on user feedback.
If you have any questions or comments about the WebXPRT 4 Preview or the new results viewer, please feel free to contact us!
A few months ago, we shared detailed information about the changes we expected
to make in WebXPRT 4. We are currently doing internal testing of the WebXPRT 4 Preview
build in preparation for releasing it to the public. We want to let our readers
know what to expect.
We’ve made some changes since our
last update and some of the details we present below could still change before
the preview release. However, we are much closer to the final product. Once we
release the WebXPRT 4 Preview, testers will be able to publish scores from Preview
build testing. We will limit any changes that we make between the Preview and
the final release to the UI or features that are not expected to affect test
scores.
General changes
Some of the non-workload changes we’ve
made in WebXPRT 4 relate to our typical benchmark update process.
We have updated the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We did not significantly change the flow of the UI.
We have updated content in some of the workloads to reflect changes in everyday technology, such as upgrading most of the photos in the photo processing workloads to higher resolutions.
We have not yet added a looping function to the automation scripts, but are still considering it for the future.
We investigated the possibility of shortening the benchmark by reducing the default number of iterations from seven to five, but have decided to stick with seven iterations to ensure that score variability remains acceptable across all platforms.
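To give a sense of what "acceptable variability" means in practice, here is a minimal JavaScript sketch of the kind of spread check involved. This is our illustration, not WebXPRT's actual scoring code, and the iteration scores are hypothetical.

```js
// Illustrative only: the coefficient of variation (CV) of per-iteration
// scores. A small CV means the reported overall score is stable.
function coefficientOfVariation(scores) {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const variance =
    scores.reduce((sum, s) => sum + (s - mean) ** 2, 0) / scores.length;
  return Math.sqrt(variance) / mean;
}

// Hypothetical scores from a seven-iteration run: CV of roughly 0.8 percent.
console.log(coefficientOfVariation([231, 228, 233, 229, 230, 232, 228]));
```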
Workload changes
Photo Enhancement. We increased the efficiency of the workload's Canvas object creation function and replaced the existing photos with new, higher-resolution photos.
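As a rough illustration of the kind of efficiency gain described above (a sketch under our own assumptions, not WebXPRT's actual source), reusing one offscreen canvas across photos avoids allocating a new Canvas object for every image:

```js
// Create one scratch canvas up front and reuse it for every photo.
const scratch = document.createElement('canvas');
const ctx = scratch.getContext('2d');

function enhance(img) {
  scratch.width = img.width;   // resizing also clears the canvas
  scratch.height = img.height;
  ctx.drawImage(img, 0, 0);
  const pixels = ctx.getImageData(0, 0, img.width, img.height);
  // ...apply the enhancement filter to pixels.data here...
  ctx.putImageData(pixels, 0, 0);
  return scratch;
}
```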
Organize Album Using AI. We replaced ConvNetJS with WebAssembly (WASM) based OpenCV.js for both the face detection and image classification tasks. We changed the images for the image classification tasks to images from the ImageNet dataset.
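For readers unfamiliar with OpenCV.js, face detection through its WASM build maps onto the Haar cascade API roughly as in the sketch below. The canvas ID and cascade file name are illustrative, the classifier data must be loaded into the module's virtual file system beforehand, and WebXPRT's task internals may differ.

```js
// Read pixels from a <canvas id="inputCanvas"> and detect faces.
const src = cv.imread('inputCanvas');
const gray = new cv.Mat();
cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);

const faces = new cv.RectVector();
const classifier = new cv.CascadeClassifier();
classifier.load('haarcascade_frontalface_default.xml'); // preloaded into the WASM FS

classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
for (let i = 0; i < faces.size(); i++) {
  const r = faces.get(i);
  console.log(`face at (${r.x}, ${r.y}), ${r.width}x${r.height}`);
}

// OpenCV.js objects are WASM-backed, so free them explicitly.
[src, gray, faces, classifier].forEach((obj) => obj.delete());
```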
Stock Option Pricing. We updated the dygraph.js library.
Sales Graphs. We made no changes to this workload.
Encrypt Notes and OCR Scan. We replaced ASM.js with WASM for the Notes task and updated the WASM-based Tesseract version for the OCR task.
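As an example of what the updated OCR stack looks like from JavaScript, recognizing text with a WASM build of Tesseract via the Tesseract.js wrapper goes roughly like this; the image file name is our placeholder, and WebXPRT's task wiring may differ.

```js
import Tesseract from 'tesseract.js';

// Tesseract.js fetches its WASM core and English language data on first
// use, then runs recognition off the main thread. (ES module context.)
const { data } = await Tesseract.recognize('scanned-receipt.png', 'eng');
console.log(data.text);
```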
Online Homework. In addition to the existing scenario which uses four Web Workers, we have added a scenario with two Web Workers. The workload now covers a wider range of Web Worker performance, and we calculate the score by using the combined run time of both scenarios. We also updated the typo.js library.
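To make the two-scenario change concrete, here is a hedged sketch of how a workload might time the same job queue with four workers and then with two, summing the runtimes. The worker script name and message shape are our inventions; we assume the worker posts one message back per completed job.

```js
// Run a queue of jobs across `workerCount` Web Workers and return the
// elapsed time in milliseconds.
function runScenario(workerCount, jobs) {
  return new Promise((resolve) => {
    const start = performance.now();
    let remaining = jobs.length;
    const workers = Array.from(
      { length: workerCount },
      () => new Worker('spellcheck-worker.js') // hypothetical typo.js worker
    );
    for (const w of workers) {
      w.onmessage = () => {
        if (--remaining === 0) {
          workers.forEach((x) => x.terminate());
          resolve(performance.now() - start);
        }
      };
    }
    jobs.forEach((job, i) => workers[i % workerCount].postMessage(job));
  });
}

// The combined run time of both scenarios determines the score.
// (ES module context, so top-level await is available.)
const jobs = ['page 1 text...', 'page 2 text...', 'page 3 text...'];
const totalMs = (await runScenario(4, jobs)) + (await runScenario(2, jobs));
console.log(`combined runtime: ${totalMs.toFixed(0)} ms`);
```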
Experimental workloads
As part of the WebXPRT 4 development
process, we researched the possibility of including two new workloads: a
natural language processing (NLP) workload, and an Angular-based message
scrolling workload. After much testing and discussion, we have decided not to
include these two workloads in WebXPRT 4. They will be good candidates for us
to add as experimental WebXPRT 4 workloads in 2022.
The release timeline
Our goal is to publish the WebXPRT 4 Preview build by December 15th, which will allow testers to publish scores in the weeks leading up to the Consumer Electronics Show (CES) in Las Vegas in January 2022. We will provide more detailed information about the general availability (GA) timeline here in the blog as soon as possible.
If you have any questions about the details we’ve shared above, please feel free to ask!
People choose a default web browser based on several factors.
Speed is sometimes the deciding factor, but privacy settings, memory load,
ecosystem integration, and web app capabilities can also come into play.
Regardless of the motivations behind a person’s go-to browser choice, the
dominance of software-as-a-service (SaaS) computing means that new updates are
always right around the corner. In previous blog posts, we’ve talked about how browser speed can increase
or decrease significantly after an update, only to swing back in the other
direction shortly thereafter. OS-specific optimizations can also affect
performance, such as with Microsoft Edge on Windows and Google Chrome on Chrome
OS.
Windows 11 began rolling out earlier this month, and tech press outlets
such as AnandTech and PCWorld have used WebXPRT
3 to evaluate the impact of the new OS—or
specific settings in the OS—on browser performance. Our own in-house tests, which
we discuss below, show a negligible impact on browser performance when updating
our test system from Windows 10 to Windows 11. It’s important to note that depending
on a system’s hardware setup, the impact might be more significant in certain
scenarios. For more information about such scenarios, we encourage you to read the
PCWorld article discussing the impact of the Windows 11 default virtualization-based
security (VBS) settings on
browser performance in some instances.
In our comparison tests, we used a Dell XPS 13 7390 with an Intel
Core i3-10110U processor and 4 GB of RAM. For the Windows 10 tests, we used a
clean Windows 10 Home image updated to version 20H2 (19042.1165). For the
Windows 11 tests, we updated the system to Windows 11 Home version 21H2 (22000.282).
On each OS version, we ran WebXPRT 3 three times on the latest versions of five
browsers: Brave, Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. For
each browser, the score we post below is the median of the three test runs.
In our last
round of tests on Windows 10, Firefox was the clear winner. Three of the
Chromium-based browsers (Chrome, Edge, and Opera) produced very close scores,
and the performance of Brave lagged by about 7 percent. In this round of
Windows 10 testing, performance on every browser improved slightly, with Google
Chrome taking a slight lead over Firefox.
In our Windows 11 testing, we were interested to find that, without exception, browser scores were slightly lower than in Windows 10 testing. However, none of the decreases were statistically significant. Most users performing daily tasks are unlikely to notice that degree of difference.
Have you observed any significant differences in WebXPRT 3 scores
after upgrading to Windows 11? If so, let us know!
One of our goals during the ongoing WebXPRT 4 development process is to be as
responsive as possible to user feedback, and we want to emphasize that it’s not
too late to send us your ideas. Until we finalize the details for each workload
and complete the code work for the preview build, we still have quite a bit of
flexibility around adding new features.
Just this week, a community member raised the possibility of a WebXPRT 4 feature that
would enable user-specific test ID numbers or accounts. One possible implementation
of the idea would allow a user to sign up for a WebXPRT test account as an
individual or on behalf of their organization. The test accounts would be both
free and optional; you could continue to run the benchmark without an account,
but running it with an account would let you save and view your test history. Another
implementation option we are considering would let users generate a permanent
user ID number for themselves or their organization. They could then use that
number to tag and search for their automated test runs in our database, without
having to log into an account.
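To illustrate the second option, an automated run might simply carry the ID in the page URL, so every result lands in the database already tagged. This is purely hypothetical; neither the parameter name nor the URL structure below is a real WebXPRT feature.

```js
// Entirely hypothetical sketch of ID-tagged automation.
const TEST_ID = 'PT-0042'; // permanent ID generated once per user or org
const url =
  'https://www.principledtechnologies.com/benchmarkxprt/webxprt/' +
  `?testid=${encodeURIComponent(TEST_ID)}`;

// A script could launch runs with this URL, making every submission
// searchable in the results database under "PT-0042".
window.open(url);
```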
Our biggest question at the moment is whether our user base would be interested in WebXPRT user accounts or test IDs. If this concept piques your interest, or you have suggestions for implementation, please let us know!
In May, we surveyed
longtime WebXPRT users regarding the types of changes they would like to see in
WebXPRT 4. We sent the survey to journalists at several tech press outlets,
and invited our blog readers to participate as well. We received some very helpful feedback. As we explore new possibilities for WebXPRT 4, we’ve decided to
open an updated version of the survey. We’ve adjusted the questions a bit based
on previous feedback and added some new ones, so we invite you to respond even
if you participated in the original survey. The survey includes the following questions:
Would you like to see a workload based on Motion UI in WebXPRT 4? Why or why not?
Would you like to see us include any other web technologies in additional workloads?
Are you happy with the WebXPRT 3 user interface? If not, what UI changes would you like to see?
Have you ever experienced significant connection issues when testing with WebXPRT?
Given its array of workloads, do you think the WebXPRT runtime is reasonable? Would you mind if the average runtime increased slightly?
Would you like to see us change any other aspects of WebXPRT 3?
If you would like to share your thoughts on any topics that the questions above do not cover, please include those in your response. We look forward to hearing from you!
It’s
been a while since we last discussed the process for submitting WebXPRT results
to be considered for publication in the WebXPRT results browser
and the WebXPRT Processor Comparison Chart, so we
thought we’d offer a refresher.
Unlike sites that publish all results they receive, we hand-select results from
internal lab testing, user submissions, and reliable tech media sources. In
each case, we evaluate whether the score is consistent with general expectations.
For sources outside of our lab, that evaluation includes confirming that there
is enough detailed system information to help us determine whether the score
makes sense. We do this for every score on the WebXPRT results page and the
general XPRT results page.
All WebXPRT results we publish automatically appear in the processor comparison
chart as well.
Submitting your score is quick and easy. At the end of the WebXPRT test run, click the Submit your results button below the overall score, complete the short submission form, and click Submit again. The screenshot below shows how the form would look if I submitted a score at the end of a WebXPRT 3 run on my personal system.
After you submit your score, we’ll contact you to confirm how we should display
the source. You can choose one of the following:
Your first and last name
“Independent tester” (for those
who wish to remain anonymous)
Your company’s name, provided
that you have permission to submit the result in their name. To use a
company name, we ask that you provide a valid company email address.
We will not publish any additional information about you or your company without your
permission.
We look forward to seeing your score submissions, and if you have suggestions for the processor chart or any other aspect of the XPRTs, let us know!