
Category: Benchmarking

A happy coincidence

I love new gadgets and even the promise of new ones. Samsung just announced the specs for their upcoming Galaxy Tab 3. Initial reactions to the specs have been muted, to say the least. Basically, reviewers have seen it as only an incremental improvement over the current model. The early rumors of a larger screen and bigger improvements, which turned out to be false, surely contributed to the disappointed reactions.

That being said, some sites claim that the performance of the Galaxy Tab 3 is much higher than that of the Galaxy Tab 2, particularly regarding graphics. We look forward to verifying these claims ourselves.

Coincidentally, this week we have been playing with an early version of PhoneXPRT (or whatever we end up calling it). So far, things are looking good. We ran it on several devices, including a Samsung Galaxy Tab 2. Like all the XPRT benchmarks, it uses real-world scenarios, which we think yield more useful and accurate results. We’ll talk more about the scenarios in the next few weeks.

It’s a very exciting time in the Android phone and tablet market. I can’t wait to try out subsequent versions of the new benchmark on the latest and greatest Android devices!

Bill

Comment on this post in the forums

Lies, damned lies, and statistics

No one knows who first said “lies, damned lies, and statistics,” but it’s easy to understand why they said it. It’s no surprise that the bestselling statistics book in history is titled How to Lie with Statistics. While the title is facetious, it is certainly true that statistics can be confusing—consider the word “average,” which can refer to the mean, median, or mode. “Mean average,” in turn, can refer to the arithmetic mean, the geometric mean, or the harmonic mean. It’s enough to make a non-statistician’s head spin.
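To make the distinction concrete, here is a quick sketch of how those three “mean averages” can disagree on the same data. The numbers are invented for illustration; they are not benchmark results.

```python
import statistics

# Hypothetical timings (seconds) -- purely illustrative values
times = [2.0, 4.0, 8.0]

arithmetic = statistics.mean(times)            # sum / count
geometric = statistics.geometric_mean(times)   # nth root of the product
harmonic = statistics.harmonic_mean(times)     # count / sum of reciprocals

# Three different "averages" of the same three numbers:
print(arithmetic, geometric, harmonic)  # ~4.67, 4.0, ~3.43
```

Note that the geometric mean always falls between the harmonic and arithmetic means, which is one reason benchmarks often prefer it for combining ratios.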

In fact, a number of people have been confused by the confidence interval WebXPRT reports. We believe that the best way to stand behind your results is to be completely open about how you crunch the numbers. To this end, we released the white paper WebXPRT 2013 results calculation and confidence interval this past Monday.

This white paper, which does not require a background in mathematics, explains what the WebXPRT confidence interval is and how it differs from the benchmark variability we sometimes talk about. The paper also gives an overview of the statistical and mathematical techniques WebXPRT uses to translate the raw timing numbers into results.

Because sometimes the devil is in the details, we wanted to augment our overview by showing exactly how WebXPRT calculates results. The white paper is accompanied by a spreadsheet that reproduces the calculations WebXPRT uses. If you are mathematically inclined and would like to suggest improvements to the process, by all means let us know!
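The white paper and spreadsheet document WebXPRT’s actual procedure; as a generic, textbook-style illustration of what a confidence interval on repeated runs looks like, consider the sketch below. The scores, the run count, and the t value are invented for the example and do not reflect WebXPRT’s method.

```python
import statistics

# Hypothetical scores from five repeated runs (illustrative only)
scores = [102.0, 98.5, 101.2, 99.8, 100.5]

n = len(scores)
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)  # sample standard deviation
t_value = 2.776                   # t critical value: 95% confidence, 4 degrees of freedom
half_width = t_value * stdev / n ** 0.5

# Report the result as "mean +/- half-width of the 95% confidence interval"
print(f"{mean:.1f} +/- {half_width:.1f}")
```

The key point is that the interval reflects run-to-run spread: the noisier the runs, the wider the interval around the reported score.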

Eric

The HDXPRT 2013 RFC is here

We released the RFC, or request for comments, for HDXPRT 2013 yesterday. Our major objective with the RFC is to get your feedback. Your feedback played an important part in developing HDXPRT 2012, and we are hoping it plays an even larger role in developing HDXPRT 2013.

The RFC includes our thoughts and ideas for the design of HDXPRT 2013 based on the many conversations we’ve had over the six months since the current version of HDXPRT debuted. Indeed, during the last few weeks, we shared some of the feedback we received during and after the Webinar in January.

At this point, nothing is written in stone. Now is the time to let us know where you agree and where you disagree. For example, the current proposal drops support for Windows 7. Do you have an opinion about this? Let us know.

The RFC is available for Development Community members at http://www.principledtechnologies.com/hdxprt/forum/hdxprt2013RFC.php. Our goal is to get your feedback by March 6. We’d like as much of the feedback as possible to appear on the forums to help stimulate discussion. However, if you prefer to send in your comments via email, please send them to BenchmarkXPRTsupport@hdxprt.com.

Of course, you can send comments to us any time, and you don’t have to limit yourself to HDXPRT! Do you have thoughts about TouchXPRT or WebXPRT? They are both moving rapidly toward their official releases. Do you have thoughts about other benchmarks we should consider developing? Send those, too!

Eric

Straight from the source

One of the pillars of our community model of benchmark development is making the source available.  As we’ve said many times, we believe that doing so leads to better benchmarks.

Today we released the source for HDXPRT 2012. As with previous versions of HDXPRT, the source is available only to community members, not to the general public.  We apologize that it has taken so long. HDXPRT is complicated to build, and we wanted to have a simpler and more robust build process before we made the source available.

The source allows you to examine how HDXPRT is implemented and to try some experiments of your own. Because of the size of HDXPRT 2012, the source package does not include the applications or the data files for the workloads. By including only the benchmark source code and associated files, we could keep the package small enough to download. If you want to try some changes for experiments and test them, all you need to do is install HDXPRT 2012 from the distribution DVDs. The compilation instructions will tell you how to copy your modified executables over the shipping versions.

Community members can get instructions on how to download the source code here (registration required).

If you create something interesting while you’re experimenting, let us know! We’d love to have the community consider it for HDXPRT 2013.

Speaking of the community, we’ve sent T-shirts to all community members who’ve supplied their up-to-date mailing address. If you’re a community member who wants a shirt but hasn’t yet let us know, please e-mail benchmarkxprtsupport@principledtechnologies.com with your mailing address by February 15th.

Eric

Keep them coming!

Questions and comments have continued to come in since the Webinar last week. Here are a few of them:

  • How long are results valid? As reviewers, we need to know that we can reuse results for a reasonable length of time. There is a tension between keeping results stable and keeping the benchmark current enough for the results to be relevant. Historically, HDXPRT allowed at least a year between releases. Based on the feedback we’ve received, a year seems like a reasonable length of time.
  • Is HDXPRT command line operable? (asked by a community member with a scripted suite of tests) HDXPRT 2012 is not, but we will consider adding a command line interface for HDXPRT 2013. While most casual users don’t need a command line interface, it could be very valuable to those of us using HDXPRT in labs.
  • I would be hesitant to overemphasize the running time of HDXPRT. The more applications it runs, the better it can differentiate systems and the more interesting it is to those of us who run it at a professional level. If I could say “This gives a complete overview of the performance of this system,” that would actually save time. This comment was a surprise, given the amount of feedback we received saying that HDXPRT was too large. However, it gets to the heart of why we all need to be careful as we consider which applications to include in HDXPRT 2013.

If you missed the Webinar, it’s available at the BenchmarkXPRT 2013 Webinars page.

We’re planning to release the HDXPRT 2013 RFC next week. We’re looking forward to your comments.

Eric

Some good questions

On Tuesday, we had a Webinar for the BenchmarkXPRT community. This Webinar covered the material that Bill would have given in individual presentations at CES. As such, it was an overview of the XPRT family.

The Webinar was well attended. We will be posting the slides and the recording of the Webinar online soon. In the meantime, we got some good questions and thought we’d share our responses with you.

How will updates to TouchXPRT, and other benchmarks, affect results? We will avoid affecting results as much as possible. However, when updates do affect results, we will disclose the effect and the testing we performed to verify it.

Will we provide a way for benchmark users to talk to each other about support issues, perhaps via OpenBlog? We had envisioned the benchmark forums providing this opportunity. However, we are very happy to look into ways to make community communication easier and more effective.

Do you provide company memberships, as opposed to individual memberships? Not currently, although we will certainly look into this. We have no formal voting mechanism, as SPEC and some other organizations have. We may get there one day, but it’s not currently an issue. If your concern is about paying multiple membership fees, contact us, and we’ll work with you to avoid that.

In HDXPRT, can you select the CPU or GPU for video conversion and control the quality of the conversion? We have not investigated this. HDXPRT installs the applications using the default settings. However, because HDXPRT installs the applications in a separate step from running the test, it might be possible to manually change the benchmark settings and then run HDXPRT. We will be looking into this and reporting on it going forward.

How does the server influence WebXPRT results? We have run WebXPRT hosted on different servers in different locations, and seen little influence on the results. However, as part of preparing the WebXPRT general release, we will characterize and document the influence of the server.

Feel free to let us know what you think about these or any other topics. As I said earlier, we’ll be posting the whole Webinar online soon.

Eric
