
Quick HDXPRT 2012 status update

Between talking about CES, the new touch benchmark, and sausage making, it seems like it has been a while since I’ve said anything about HDXPRT. Here is a quick status update. The short form is that folks are heads-down coding, debugging, and testing. We still have some significant hurdles to overcome, such as trying to script Picasa. We are also going to have to make some difficult decisions in the near future about possibly swapping out one or two of the applications due to either licensing or scripting issues. (Sausage making at its best!) We’ll keep you posted in the forums when we have to make those decisions.

There is still a lot to get done, but things still appear to be on schedule. That schedule means that we are still hoping to have a beta version available for the Development Community to test in late March. At that point, the beta version will be available to our members and we will really need your help to try to shake things out. (Join at http://hdxprt.com/forum/register.php if you are not yet a member of the Development Community and want to help in our effort.) The more different systems and configurations we can all test together, the better the benchmark will be. There will also be at least some time for feedback on whether HDXPRT 2012 matches the design specification and if there are any last-minute tweaks you think would help make for a better benchmark.

So, stay tuned! We look forward to continuing to work with you on making HDXPRT 2012 even better than the current version.

Bill


Art or sausage?

I discussed in my previous blog how weighing the tradeoffs between real science and real world in a benchmark is a real art. One person felt it was more akin to sausage making than art! In truth, I have made that comparison myself.

That, of course, got me thinking. Is the process of creating a benchmark like that of creating sausage? With sausage, the feeling is that if you knew what went into sausage, you probably wouldn’t eat it. That may well be true, but I would still like to know that someone was inspecting the sausage factory. Sausage that contains strange animal parts is one thing, but sausage containing E. coli is another.

We are trying with the Development Community to use transparency to create better benchmarks. My feeling is that the more inspectors (members) there are, the better the benchmark will be. At least to me, unlike making sausage, creating benchmarks is actually cool. (There are probably sausage artisans who feel the same way about sausage.)

What do you think? Would you prefer to know what goes into making a benchmark? We hope so and hope that is why you are a part of this community. If you are not part of the Development Community, we encourage you to join at http://hdxprt.com/forum/register.php. Come join us in the sausage-making art house!

Bill


The real art of benchmarking

In my last blog entry, I noted the challenge of balancing real-world and real-science considerations when benchmarking Web page loads. That issue, however, is inherent in all benchmarking. Real world argues for benchmarks that emphasize what users and computers actually do. For servers, that might mean something like executing real database transactions against a real database from real client computers. For tablets, that might mean real fingers selecting and displaying real photos. There are obvious issues with both—setting up such a real database environment is difficult, and who wants to be the owner of the real fingers driving the tablet? It is also difficult to understand what causes performance differences—is it the network, the processors, or the disks in the server? There are also more subtle challenges, such as how to make the tests work on servers or tablets other than the original ones. Worse, such real-world environments are subject to all sorts of repeatability and reproducibility issues.

Real science, on the other hand, argues for benchmarks that emphasize repeatable and reproducible results. Further, real science wants benchmarks that isolate the causes of performance differences. For servers, that might mean a suite of tests targeting processor speed, network bandwidth, and disk transfer rate. For tablets, that might mean tests targeting processor speed, touch responsiveness, and graphics-rendering rate. The problem is that it is not always obvious what combination of such factors actually delivers better database server performance or tablet experience. Worse, it is possible that testing different databases and transactions would result in very different characteristics that these tests don’t at all measure.

The good news is that real world and real science are not always in opposition. The bad news is that a third factor exacerbates the situation—benchmarks take real time (and of course real money) to develop. That means benchmark developers need to make compromises if they want to bring tests to market before the real world they are attempting to measure has changed. And, they need to avoid some of the most difficult technical hurdles. Like most things, that means trying to find the right balance between real world and real science.

Unfortunately, there is no formula for determining that balance. Instead, it really is somewhat of an art. I’d love to hear from you some examples of benchmarks (current or from the past) that you think do a good job implementing this balance and showing the real art of benchmarking.

Bill


Web benchmarking challenges

I think that an important part of any touch benchmark will be a Web component. After all, the always (or almost always) connected nature of these devices is a critical part of their identities. I think such a Web benchmark needs to include a measurement of page load speed (how long it takes to download and render a page).

Creating such a test seems straightforward. Pick a set of sites, such as the five or ten most popular, and then time how long the home page of each takes to load. The problem, however, is that those pages are constantly changing. Every few months, most popular sites do a major redesign. That would obviously affect the results for a test and make it difficult to compare the results of a current test to one from a few months back. An even bigger problem is that the page one user sees will differ from the page another sees, because sites typically detect things like your location and your computer and adjust the content to match those characteristics. And, the ads and the content of the site are constantly changing and updating. Even hitting Refresh on a page can give you a different page.
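To make the download half of such a measurement concrete (rendering time is a separate, harder problem), here is a minimal sketch in Python; the site list is purely hypothetical, and a real benchmark would also have to confront the variability issues described above:

```python
import time
import urllib.request

# Hypothetical list of pages to time; a real test would pick the
# most popular sites (and face the redesign/personalization issues
# discussed above).
PAGES = ["https://example.com/", "https://example.org/"]

def time_page_load(url, timeout=10):
    """Return the seconds taken to download the full page body."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # pull down the entire response
    return time.perf_counter() - start

if __name__ == "__main__":
    for url in PAGES:
        try:
            elapsed = time_page_load(url)
            print(f"{url}: {elapsed:.3f} s")
        except OSError as err:  # network errors (URLError is an OSError)
            print(f"{url}: failed ({err})")
```

Note that this times only the download, not the rendering; measuring what a user actually perceives would mean driving a real browser, which is exactly where the real-world versus real-science tension shows up.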

Given all of those problems, how is it possible to test page loads? One way is to create pages that are similar to those of leading Web sites in terms of things like size, amount of graphics, and dynamic elements. This allows the tests to be consistent over time and from different devices and locations. (Or, at least, as consistent as the variability of the Internet from moment to moment allows.) The problem with this approach, however, is that the pages will age out as Web sites update themselves, and they will never be the real sites.

Such are the tradeoffs in benchmarking. The key is how to balance real-science with real-world considerations. What do you think? Which approach is the better balance of real science and real world?

Bill


Who is on board?

While talking with people at CES about HDXPRT and the upcoming touch benchmark, I encountered the same question a few times—Who are the current members of the Development Community? My answer was something along the lines of “About 10 PC hardware vendors, about the same number of press people, and a few other folks from companies around the world.” I was, however, itching to name the companies because the list is really pretty impressive. We haven’t asked for permission from the Development Community members, though, so I left my answer vague.

Given our goal of expanding the Development Community, I find myself weighing two possible outcomes if we were to make public the names of the companies represented. On the one hand, it could encourage others to join us (“All the other cool kids are doing it, I guess I will too!”). On the other hand, it could discourage others from joining us (“Not sure how my company would feel about this. Should I ask Legal? I’m too busy, never mind.”).

My best plan for now is to email each member individually and ask where he or she stands on company anonymity. And to give all new members the option of keeping their affiliation off the record. Rest assured that we will definitely not reveal this information without your permission.

We’d like to know what you think. Would you have joined the Development Community if doing so required identifying your company and allowing us to share it? Would you now be willing to let us say that someone from your company is a member?

Bill


CES: Gadget overload

I never thought I would say this, but there are more electronic gadgets and toys than I want. While walking the many cavernous show floors of CES, I saw cool bicycle gadgets from iBike (www.ibikedash.com). One device is a case for your iPhone that transforms it into a cycling computer. Because it measures wind speed, it actually is more capable than any existing bike computer—it uses data you supply like your type of bike and your weight, GPS info and knowledge of the terrain, and readings on wind speed and your heart rate to calculate your power output. If it works reliably, it would provide data that normally requires a cycle power meter costing a couple thousand dollars. If you are not into cycling, you probably don’t care, but it does show how our phones are becoming the gathering point for a myriad of data sources around us. I definitely need to try one of these out when they become available in March.

I also saw solar panels from Sharp (SunSnap) that have the inverter built in so that they output AC power directly. This gets around the messy inverter and wiring problems of typical panels that output DC power. Now, if I can get my homeowners association to agree, I need some of these.

I also saw TVs that were enormous, like 84-inch LCD, and gorgeous, like the 55-inch OLED, both from LG. I saw Windows 8 tablets and cars and iPhone cases and e-cigarettes. Basically, I reached gadget overload. At least the future of technology does not appear to be boring!

Thanks so much to the folks that stopped by our suite to talk about HDXPRT, the upcoming touch benchmark, and what they see as the future of benchmarking. We will be doing our best over the coming months to incorporate your ideas and suggestions. If you were not able to visit with us, please feel free to drop me an email and let me know what you are thinking.

Bill

