Category: HDXPRT metrics

Scoring with HDXPRT

Two weeks ago, I began explaining how benchmarks keep score (http://www.hdxprt.com/blog/2011/08/17/keeping-score/). HDXPRT 2011 fundamentally measures the time a PC requires to complete a series of tasks, such as editing photos and converting videos from one format to another. It uses the times of three sets of tasks to come up with three use case times (Edit videos from your camcorder, Create memories from your digital camera, and Prepare media for on-the-go). Because an early version of the benchmark took too long to run, we trimmed the size of the workloads (such as the number of photos) to make it complete more quickly. Because we believed the size of the original workloads was realistic, we extrapolated what the full-size time would have been by multiplying the measured time by the ratio of the original workload size to the trimmed size; for example, if a trimmed workload used half as many photos, we doubled the measured time. That process results in times in minutes.

We could have simply combined the three times into one total time, but doing so would have created a score where smaller is better, which can be confusing. To avoid this, HDXPRT 2011 normalizes the three times to the times a calibration, or base, system required to complete the same work. The benchmark then calculates a geometric mean of those three normalized scores and multiplies that number by 100 to create the overall Create HD Score. This scoring method sets the calibration system’s score to 100 and makes it easy for you to compare multiple systems. For example, if PC A gets a score of 200, and PC B gets a 400, PC B is twice the speed of PC A (and four times the speed of the calibration system) at creating HD content.
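To make the math concrete, here is a minimal Python sketch of that calculation, using invented times rather than real HDXPRT results:

# Hypothetical use case times, in minutes (less is faster).
calibration_times = [20.0, 15.0, 10.0]  # calibration (base) system
test_times = [10.0, 7.5, 5.0]           # system under test

# Normalize: divide the calibration time by the test time,
# so a faster system gets a bigger number.
normalized = [c / t for c, t in zip(calibration_times, test_times)]

# Geometric mean of the normalized scores, scaled by 100.
product = 1.0
for score in normalized:
    product *= score
create_hd_score = 100 * product ** (1 / len(normalized))
print(create_hd_score)  # 200.0, twice the calibration system's 100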

The term “geometric mean” might be unfamiliar. One way to get benchmark geeks arguing is to ask about the correct mean for combining results. (Yes, there really are enough of us for an argument.) At the risk of inflaming my fellow benchmark geeks, I will give a quick summary of the main ways people combine results.

An arithmetic mean is a simple average, where you add all the numbers and divide by the number of numbers. It is good for combining amounts, such as gigabytes of RAM, across multiple computers.

A geometric mean is more mathematically complex. You compute it by multiplying all the numbers and then taking the nth root, where n is the number of numbers. This kind of mean is appropriate for combining normalized numbers. Its advantage over the arithmetic mean is that it keeps one really good number from drowning out all the others.

The final mean is the harmonic. You calculate it by dividing the number of numbers by the sum of the reciprocals of the numbers (1 divided by each element). (If that makes little sense to you, don’t worry about it!) The harmonic mean is appropriate for combining rates, such as megabytes per second.
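For the curious, here is a quick Python sketch of all three means, with made-up numbers. Note how a single outlier pulls the arithmetic mean around far more than it does the geometric mean:

from math import prod

def arithmetic_mean(xs):
    # Simple average: good for amounts, such as gigabytes of RAM.
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # nth root of the product of n numbers: good for normalized scores.
    return prod(xs) ** (1 / len(xs))

def harmonic_mean(xs):
    # n divided by the sum of reciprocals: good for rates, such as MB/s.
    return len(xs) / sum(1 / x for x in xs)

scores = [1.0, 1.2, 10.0]            # normalized scores with one outlier
print(arithmetic_mean(scores))       # about 4.07: the outlier dominates
print(geometric_mean(scores))        # about 2.29: the outlier counts, but less
print(harmonic_mean([50.0, 100.0]))  # about 66.7 MB/s, not the 75.0 a simple average gives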

I should also mention one other result from HDXPRT 2011, the Overall Play HD Experience score. This is a very different kind of score that uses one to five stars to indicate the quality of three HD video playbacks. HDXPRT uses mean opinion scores (MOS) based on smoothness of playback to compute these results. (I’ll discuss MOS in more detail in a future blog.) With this kind of score, a four-star rating is better than a two-star rating, but it is hard to say how much better. The MOS research indicates that people would rate the four-star playback as good and the two-star playback as poor, but you can’t say that one is twice as good as the other because the relationship is not linear.

What do you think of the metrics that HDXPRT 2011 provides? Are there others you would find more useful or meaningful? Your input is vital to improving the benchmark and making sure it does what you want it to do.

Bill

Comment on this post in the forums

Petaflops?

I saw an article earlier this week about Japan’s K Computer, the latest system to be designated the “fastest supercomputer” in the world. Twice a year (June and November), the Top500 list comes out, and the list’s publishers consider the highest-scoring computer on it to be the fastest in the world. The first article I read about the recent rankings did not cite the results, just the rankings. So, I went to another article, which said the K Computer was capable of 8.2 quadrillion calculations per second but did not give the results of the other leading supercomputers. On to the next article, which said the K Computer was capable of 1.2 petaflops per second. (The phrase petaflops per second is redundant, since the “s” in “flops” already means per second; it belongs in the same category as ATM machine or PIN number…) The same article said that the third fastest was able to get 1.75 petaflops per second. OK, now I was definitely confused. (I really miss the old days of good copy editing and fact checking, but that is a blog for another day.)

So, I went to the source, the Top500 Web site (www.top500.org).  It confirmed that the K Computer obtained 8.16 petaflops (or quadrillion calculations per second) on the LINPACK test.  The Chinese Tianhe-1A got 2.56 petaflops and the American Jaguar, 1.76 petaflops.

Once I got over the sloppy reporting and stopped playing with the graphs of the trends and scores over time, I started thinking about the problem of metrics and the importance of making them easy to understand. Some metrics are easy to report and understand. For example, a battery life benchmark reports its results in hours and minutes. We all know what that means, and we know that more is better. Understanding what petaflops are is decidedly harder.

Another issue is the desire for bigger numbers to mean better results. The time to finish a task is fairly easy to understand, but in that case, less time is better. One technique for dealing with this issue is to normalize the numbers. Basically, that means relating each result to a baseline system’s result; for times, you divide the baseline system’s time by the system under test’s time, so that faster systems get bigger numbers. The baseline system’s score is typically set to 1.0 (or some other number, like 10 or 100), and other results are meaningful only in relation to the baseline system or each other. A system scoring 2.0 runs twice as fast as the baseline system’s 1.0. While that is clear, it does take more explanation than just seconds.
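As a rough Python sketch of that technique (again, the times are invented):

# Hypothetical times to finish a task, in seconds (less is better).
baseline_time = 60.0
system_time = 30.0

# Dividing the baseline's time by the measured time flips the scale,
# so bigger is better; the baseline itself lands at 1.0 by definition.
normalized_score = baseline_time / system_time
print(normalized_score)  # 2.0: twice as fast as the baseline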

Finding the right metrics was a challenge we faced with HDXPRT 2011. Do you think we got it right? Please let us know what you think.

Bill

Comment on this post in the forums

Knowing when to wait

Mark mentioned in his blog entry a few weeks ago that waiting sucks.  I think we can all agree with that sentiment.  However, an experience I had while in Taipei for Computex made me reevaluate that thinking a bit.  

I went jogging one morning in a park near my hotel. It was a relatively small park, just a quarter mile around the pond that took up most of its area. I was one of only a couple of people jogging, but the park was full of people. Some were walking around the pond. There were also groups doing some form of Tai Chi in various clearings. The path I was on was narrow, and at times there was no way to get around the walkers without running into the people doing Tai Chi. That in turn meant running in place. Or, put another way, waiting.

Everyone was polite during these encounters, but the contrast between me jogging and the folks doing Tai Chi was stark. I wanted to run my miles as quickly as possible. Those doing Tai Chi were decidedly not in a rush. They were doing their exercises together with others; the goal was to do them at the proper pace, in the proper way.

That got me to thinking about waiting on my computer. (Hey, time to think is one of the main reasons I exercise!) There are times when waiting for a computer infuriates me. Other times, however, the computer is fast enough, or even too fast, like when I’m trying to scroll down to the right cell in Excel and it jumps past it to a whole screen full of empty cells. This phenomenon, of course, relates to benchmarks. Benchmarks should measure the operations that are slow enough to hurt productivity or are downright annoying. There is less value in measuring operations that users don’t have to wait on.

Have you had any thoughts about what makes a good benchmark?  Even if you weren’t exercising when you had the thought, please share it with the community. 

Bill

Comment on this post in the forums

Waiting sucks

You know it does.  Time is the most precious commodity, the one thing you can never get back.  So when someone or something makes you wait, it sucks.

It particularly sucks when you have to wait on your PC.  It’s your computer, after all, and it should do the work and be quick about it.  For many tasks, it is quick, almost instantaneous.  Some, though, require so much work that the computer can spend a lot of time doing them, leaving you waiting. Tasks that involve working with different types of media often fall into that category.

Which is exactly why we have HDXPRT.

It gives you a way to compare how long different PCs require to perform some common media-manipulation tasks.  Because those times can be significant—sometimes many seconds, but also sometimes many minutes—HDXPRT can give you valuable information that you can factor into your PC buying plans.

After all, the faster a PC is at this sort of work, the less time you’ll spend waiting on it—and that’s a good thing.

Mark Van Name

Comment on this post in the forums
