Soon, we’ll be expanding
our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s
web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the
workload in much greater detail.
In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper will describe the workload’s test configuration variables, structural components, task workflows, and test metrics. It will also discuss how to interpret test results and the process for submitting results for publication.
As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.
We hope that the
upcoming Overview of the CloudXPRT Web Microservices Workload paper will
serve as a go-to resource for CloudXPRT testers, and will answer any questions
you have about the workload. Once it goes live, we’ll provide links in the
Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.
A few months ago, we invited readers to send in their thoughts and ideas about web
technologies and workload scenarios that may be a good fit for the next WebXPRT. We’d like to share a few of those ideas today, and we invite
you to continue to send your feedback. We’re approaching the time when we need to begin firming up
plans for a WebXPRT 4 development cycle in 2021, but there’s still plenty of
time for you to help shape the future of the benchmark.
One of the most
promising ideas for WebXPRT 4 is the potential addition of one or more WebAssembly (WASM) workloads.
WASM is a low-level, binary instruction format that works across all modern browsers.
It offers web developers a great deal of flexibility and provides the speed and
efficiency necessary for running complex client applications in the browser. WASM
enables a variety of workload scenario options, including gaming, video editing, VR, virtual
machines, image recognition, and interactive educational content.
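To give a concrete sense of how a browser (or Node.js) loads and runs a WASM module, here is a minimal sketch. The byte array below is a hand-assembled module that exports a single add function; it is a standard “hello world” of the WebAssembly binary format, not part of any XPRT workload, and real workloads would compile C/C++/Rust to WASM instead of writing bytes by hand.

```javascript
// Minimal hand-assembled WebAssembly module: exports add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// Compile and instantiate synchronously (WebAssembly.instantiate is the async form).
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // prints 5
```

Because the module arrives as a compact binary and is compiled ahead of execution, calls like `add` run at near-native speed, which is what makes the heavier scenarios listed above plausible in a browser.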
In addition, the
Chrome team is dropping Portable Native Client (PNaCl) support in favor of
WASM, which is why we had to remove a PNaCl workload when updating CrXPRT 2015 to CrXPRT 2. We
generally model CrXPRT workloads on existing WebXPRT workloads, so
familiarizing ourselves with WASM could ultimately benefit more than one XPRT
benchmark.
We are also
considering adding a web-based machine learning workload with TensorFlow for
JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of
tasks, including image classification, object detection, sentence encoding,
and natural language processing. We could also use this technology to
enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album
using AI or Encrypt Notes and OCR Scan.
Other ideas include using
a WebGL-based workload to target GPUs and investigating ways to incorporate a
battery life test. What do you think? Let us know!
Durham, NC, April 23, 2020 — Principled Technologies and the BenchmarkXPRT Development Community have released a video on the benefits of consulting the XPRTs before committing to new technology purchases.
AIXPRT, one of the battery of XPRT benchmark tools, runs image-classification and object-detection workloads to determine how well tech handles AI and machine learning.
CloudXPRT, another XPRT tool, accurately measures the end-to-end performance of modern, cloud-first applications deployed on infrastructure as a service (IaaS) platforms – allowing corporate decision-makers to select the best configuration for every objective.
All of the XPRTs give companies the real-world information necessary to determine which prospective future tech purchases will pay off – and which will disappoint.
According to the video, “The XPRTs don’t just look at specs and features; they gauge a technology solution’s real-world performance and capabilities. So you know whether switching environments is worth the investment. How well solutions support machine learning and other AI capabilities. If next-gen releases beat their rivals or fall behind the curve.”
Watch the video at facts.pt/pyt88k5. To learn more about how AIXPRT, CloudXPRT, WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, and HDXPRT can help IT decision-makers make confident choices about future purchases, go to www.BenchmarkXPRT.com.
About Principled Technologies, Inc.
Principled Technologies, Inc. is the leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.
Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.principledtechnologies.com.
Company Contact
Justin Greene
BenchmarkXPRT Development Community
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
BenchmarkXPRTsupport@PrincipledTechnologies.com
It’s been about two years since we released WebXPRT 3, and we’re starting to think about the WebXPRT 4 development cycle. With over 529,000 runs to date, WebXPRT continues to be our most popular benchmark because it’s quick and easy to run, it runs on almost anything with a web browser, and it evaluates performance using the types of web technologies that many people use every day.
For each new version of WebXPRT, we start the development process by looking at browser trends and analyzing the feasibility of incorporating new web technologies into our workload scenarios. For example, in WebXPRT 3, we updated the Organize Album workload to include an image-classification task that uses deep learning. We also added an optical character recognition task to the Encrypt Notes and OCR Scan workload, and introduced a new Online Homework workload that combined part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario.
Here are the current WebXPRT 3 workloads:
Photo Enhancement: Applies three effects, each using Canvas, to two photos.
Organize Album Using AI: Detects faces and classifies images using the ConvNetJS neural network library.
Stock Option Pricing: Calculates and displays graphic views of a stock portfolio using Canvas, SVG, and dygraphs.js.
Encrypt Notes and OCR Scan: Encrypts notes in local storage and scans a receipt using optical character recognition.
Sales Graphs: Calculates and displays multiple views of sales data using InfoVis and d3.js.
Online Homework: Performs science and English assignment tasks using Web Workers and Typo.js spell check.
What new technologies or workload scenarios should we add? Are there any existing features we should remove? Would you be interested in an associated battery life test? We want to hear your thoughts and ideas about WebXPRT, so please tell us what you think!
This week, we
have good news for AIXPRT testers: the AIXPRT source code is now available to the public
via GitHub. As we’ve discussed in the past, publishing XPRT source code is part of our
commitment to making the XPRT development process as transparent as
possible. With other XPRT benchmarks, we’ve only made the source code available
to community members. With AIXPRT, we have released the source code more
widely. By allowing all interested parties, not just community members, to
download and review our source code, we’re taking tangible steps to improve
openness and honesty in the benchmarking industry, and we’re encouraging the
kind of constructive feedback that helps ensure the XPRTs continue to
contribute to a level playing field.
Traditional open-source models encourage developers to change products and even take them in new and different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source code and submit potential workloads for future consideration, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”
We encourage you to download and review the source and send us any feedback you may have. Your questions and suggestions may influence future versions of AIXPRT. If you have any questions about AIXPRT or accessing the source code, please feel free to ask! Please also let us know if you think we should take this approach to releasing the source code with other XPRT benchmarks.
Microsoft recently released a new Chromium-based version of the Edge browser, and several tech press outlets have released reviews and results from head-to-head browser performance comparison tests. Because WebXPRT is a go-to benchmark for evaluating browser performance, PCMag, PCWorld, and VentureBeat, among others, used WebXPRT 3 scores as part of the evaluation criteria for their reviews.
We thought we
would try a quick experiment of our own, so we grabbed a recent laptop from our
Spotlight testbed: a Dell XPS 13 7390 running
Windows 10 Home 1909 (18363.628) with an Intel Core i3-10110U processor and 4
GB of RAM. We tested on a clean system image after installing all current
Windows updates, and after the update process completed, we turned off updates
to prevent them from interfering with test runs. We ran WebXPRT 3 three times on
six browsers: a new browser called Brave, Google Chrome, the legacy version of
Microsoft Edge, the new version of Microsoft Edge, Mozilla Firefox, and Opera.
The posted score for each browser is the median of the three test runs.
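For readers who want to replicate this kind of comparison, the median-of-three approach we used can be sketched in a few lines. The browser names and scores below are hypothetical placeholders for illustration, not our measured results:

```javascript
// Report the median of several benchmark runs for each browser.
// Scores are hypothetical placeholders, not measured WebXPRT 3 results.
function median(scores) {
  const sorted = [...scores].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  // Odd count: middle value; even count: mean of the two middle values.
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

const runs = {
  "Browser A": [182, 179, 184],
  "Browser B": [151, 149, 150],
};

for (const [browser, scores] of Object.entries(runs)) {
  console.log(`${browser}: ${median(scores)}`);
}
```

Using the median rather than the mean keeps a single outlier run (say, one disturbed by a background process) from skewing the posted score.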
As you can
see in the chart below, five of the browsers (legacy Edge, Brave, Opera, Chrome,
and new Edge) produced scores that were nearly identical. Mozilla Firefox was
the only browser that produced a significantly different score. The parity
among Brave, Chrome, Opera, and the new Edge is not that surprising,
considering they are all Chromium-based browsers. The rank order and relative
scaling of these results are similar to the results published by the tech
outlets mentioned above.
Do these
results mean that Mozilla Firefox will provide you with a speedier web
experience? Generally, a device with a higher WebXPRT score is probably going
to feel faster to you during daily use than one with a lower score. For
comparisons on the same system, however, the answer depends in part on the
types of things you do on the web, how the extensions you’ve installed affect
performance, how frequently the browsers issue updates and incorporate new web
technologies, and how accurately the browsers’ default installation settings reflect
how you would set up the same browsers for your daily workflow.
In addition,
browser speed can increase or decrease significantly after an update, only to
swing back in the other direction shortly thereafter. OS-specific optimizations
can also affect performance, such as with Edge on Windows 10 and Chrome on
Chrome OS. All of these variables are important to keep in mind when
considering how browser performance comparison results translate to your
everyday experience. In such a competitive market, and with so many variables
to consider, we’re happy that WebXPRT can help consumers by providing reliable,
objective results.
What are your
thoughts on today’s competitive browser market? We’d love to hear from you.