BenchmarkXPRT Blog

Month: March 2012

Tentative TouchXPRT plan and schedule

Since the beginning of the year, and especially in the last couple of weeks, I’ve been using this blog to discuss our thoughts on what should be in TouchXPRT. Based on those thoughts and on feedback we’ve gotten, we are working on scenarios, apps, and workloads for two of the seven possible roles I mentioned in an earlier blog—consuming and manipulating media and browsing the Web. These seemed like two of the more important and common roles, and ones where performance might have a noticeable impact.

For the consuming and manipulating media portion, we are building a limited app (or apps) that can perform some of the functions in the scenario I described in last week’s blog. We’re also working on the necessary content (photos, videos, and sound clips) for TouchXPRT to manipulate and show using the app(s). For the Web browsing role, we are putting together Web pages and HTML5 applications that emulate real sites and applications on the Web.

The goal is to release both of these roles as the first Community Build (CB1) of TouchXPRT by the end of April. As the name implies, CB1 will be available only to members of the Development Community. If you have not joined the Development Community, hopefully TouchXPRT CB1 will give you some additional incentive!

Once we have CB1 ready to release to the community, we will need your help with debugging, results gathering, and general critiquing. As always, thanks in advance for whatever help you are able to offer.

Bill

Comment on this post in the forums

Thinking about TouchXPRT scenarios

Last week I looked at the roles in TouchXPRT that would make sense on a touch-based device like a tablet. I suggested seven possible ones. The next step is to create usage models and scenarios based on those roles. In turn, we would need to develop simple apps to do these things. To get the ball rolling, here are some activity and scenario ideas we came up with for one of the roles—consuming and manipulating media.

After doing email and reading books, this is one of the main things I do on my iPad. Originally, in this role I mostly showed pictures or videos (especially of my grandsons) to people. (Yes, people do hide when they see me coming with my iPad in hand saying, “You gotta see this!”) As the iPad and its apps have grown, I’ve found myself doing some cleaning up of photos, video, and even sound directly on the iPad. I think a person in this role is not necessarily an expert in media, but like most of us enjoys playing with media. So, the person might do something like scale/trim a video or add a nice cross-dissolve between two video clips. Maybe the person would even create a video montage by combining stock travel footage with personal video clips. Beyond simply rotating and cropping photos, the person might add some stock preset effects like making them sepia toned, adding titles, or creating a postcard. The person might create a slideshow based on a set of travel photos and use some visual or audio effects. They might also add sound by manipulating audio clips. Based upon these kinds of usages, the apps would include some of the features found in apps like iMovie, Instagram, SnapSeed, PhotoGene, iPhoto, and GarageBand.
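To give a concrete sense of the kind of primitive such a media app might exercise, here is a minimal, purely illustrative sketch in Python of the classic sepia-tone transform applied to RGB pixels. This is not TouchXPRT code; it is just one hypothetical example of the preset effects mentioned above.

```python
def sepia(pixel):
    """Apply the standard sepia matrix to one (r, g, b) pixel, clamping to 255."""
    r, g, b = pixel
    tr = int(0.393 * r + 0.769 * g + 0.189 * b)
    tg = int(0.349 * r + 0.686 * g + 0.168 * b)
    tb = int(0.272 * r + 0.534 * g + 0.131 * b)
    return (min(tr, 255), min(tg, 255), min(tb, 255))

# A tiny "image": a list of RGB pixels.
image = [(120, 100, 80), (200, 180, 160), (30, 60, 90)]
sepia_image = [sepia(p) for p in image]
```

A benchmark workload would run an operation like this over full-size photos and time it, which is why content (real photos and videos) matters as much as the app code itself.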

What do you think? How do those activities match your usage of touch-based devices? What would you add, subtract, or change? Do you have suggestions for the other roles? Thanks for your help in defining what TouchXPRT will be.

Bill

Comment on this post in the forums

TouchXPRT update

We have been busy the last couple of months with TouchXPRT. We have been investigating and trying out things on Windows 8 Metro. While we are excited by the possibilities for a benchmark in that space, the task is a bit daunting.

The first key question is what people are likely to do with a device using a touch-based environment like Metro. The best way to answer that is to look at what people are currently doing with iOS- and Android-based devices. We have been playing with those as well as some units running the Metro beta. To create an initial list of roles or usage categories, we spent some time looking at what is available in the iTunes App Store, the Google Play store, and the Windows Store. Here, in no particular order, is the list of uses we came up with:

  • Consume and manipulate media – Touch devices are heavily used for consuming media (music, photos, and video), but now are being used for some simple manipulation tasks like adding simple visual effects to video, mixing and changing audio, and enhancing photos.
  • Browse the Web – Touch devices are becoming one of the main ways people consume Web content, both normal Web pages and specially crafted “mobile” pages. A touch device is what I use to find the phone number for the nearest Chinese takeout.
  • Watch video for entertainment – Through movie apps like Netflix and TV network apps, touch devices (especially tablets) are becoming a major force in this area.
  • Play games – This is obviously something folks do on their touch devices. As best we can tell, no consumer device can ship without Angry Birds!
  • Interact with others – Through apps like Facebook and Foursquare, touch devices are becoming a big way that people interact with each other.
  • Get news and information – Another big area is general news and information, including things like stock quotes and weather.
  • Use utilities – This is a broad category—there are a ton of utilities for doing everything from moving files to backing up data.

That list covers a lot of ground, and some of the areas, like games, would be particularly difficult to benchmark. We thought, however, that it would be best to get everything out and then figure out what to tackle first. The big challenges we face are the small number of apps available for Metro and the lack of any good way to script or drive applications. Our current thinking is to write some minimal sample apps that mimic common apps out there. These would not be complete apps, but they would perform some of the key functions. Then, we could build scenarios around those functions. That seems like the best approach to completing something in a timely fashion. Initially, we would aim for two or three of those areas and then add others over time.
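A benchmark built from minimal sample apps ultimately boils down to timing a handful of scenario functions. The sketch below, in Python, shows the general shape of such a harness; the workload functions here are made-up stand-ins, not actual TouchXPRT scenarios.

```python
import time

def resize_photos():
    """Stand-in for a media-manipulation workload (hypothetical)."""
    return sum(i * i for i in range(100_000))

def load_web_page():
    """Stand-in for an HTML5 Web-browsing workload (hypothetical)."""
    return "".join(str(i) for i in range(10_000))

WORKLOADS = {
    "resize photos": resize_photos,
    "load web page": load_web_page,
}

def run_benchmark(workloads):
    """Time each scenario function and return seconds elapsed per workload."""
    results = {}
    for name, fn in workloads.items():
        start = time.perf_counter()
        fn()
        results[name] = time.perf_counter() - start
    return results

results = run_benchmark(WORKLOADS)
```

The hard part, of course, is not the timing loop but making the stand-in functions faithfully represent what real apps do, which is exactly why community feedback on the scenarios matters.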

As always, we need your feedback. Let us know what you think about the list of uses and the approach in general. And, let us know if you can help with any of the sample app development. Thanks!

Bill

Comment on this post in the forums

Here a core, there a core…

Earlier this week, Apple announced its latest iPad. While the improvements seem to be largely incremental, I can’t wait to get my hands on one. (As an aside, I wonder how much work and how many arguments it took to come up with the name “new iPad.” I thought Apple had finally gotten over their longstanding fear of the number 3 with the iPhone 3G, but I guess not.)

One of the incremental improvements that caught my eye, especially in light of trying to test the performance of touch devices, is the new iPad’s processor, the A5X. It’s hard to get a straight story: most reports refer to the chip as a quad-core processor, while Apple referred to “quad-core graphics.” As best I can ferret out amidst the hype, the A5X has four cores for graphics, but for other operations it functions only as a dual-core.

Regardless of the specifics of the chip, it does have multiple cores for general execution and for graphics. Multiple processing units have been an important trend over the last decade for processors in devices from PCs to tablets to phones. The interesting question to me is how best to benchmark devices in light of that trend. The problem is that for some things, the extra cores don’t help. For others, two cores may be twice as fast as one core. Similarly, additional dedicated processing units (such as for graphics) help only for particular operations.
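This "some things scale, some don't" behavior is captured by Amdahl's law: if a fraction p of a task can use extra cores, the overall speedup on n cores is 1 / ((1 - p) + p / n). A quick illustration in Python (a general formula, not a claim about any specific device):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a task scales with cores.

    parallel_fraction: share of the work (0.0 to 1.0) that can run in parallel.
    cores: number of processing units applied to the parallel portion.
    """
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)
```

For a perfectly parallel task, two cores really are twice as fast; for a task that is only half parallelizable, even four cores yield just a 1.6x speedup, and a fully serial task gains nothing. This is why workload mix matters so much when benchmarking multi-core devices.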

The right answer, to me, is to do what we are trying to do with both HDXPRT and TouchXPRT—start with what people really do. That means that some usage scenarios and applications will benefit from additional processing units, while others will not. That should correspond with what people really experience. To make the results more useful, it would be helpful to try to understand which operations are most affected by additional general-purpose or special-purpose processing units.

How do you think we should look at devices with multiple and varied processing units? I’d love to get your feedback and incorporate it into both HDXPRT and TouchXPRT over the coming months.

Bill

Comment on this post in the forums

Bye, bye 32 bits?

In developing HDXPRT 2012, we have encountered a dilemma. The problem is the amount of effort necessary to support 32-bit Windows as well as 64-bit Windows. While the world is moving to 64-bit Windows, some older platforms, as well as possibly some lower-end devices, still use 32-bit Windows. Our feeling is that the effort necessary to support 32-bit Windows would be better spent elsewhere, such as on TouchXPRT. Further, supporting 32-bit Windows might noticeably delay when we can complete HDXPRT 2012.
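For context, the bitness question is something a benchmark (or its installer) can detect at run time. A minimal Python sketch, purely illustrative and not how HDXPRT actually does it:

```python
import platform
import struct

# Pointer size of the running process: 32 or 64 bits.
bits = struct.calcsize("P") * 8

# OS-reported architecture string for the interpreter, e.g. "64bit".
arch, _ = platform.architecture()

print(f"Running as a {bits}-bit process ({arch})")
```

Note that this reports the bitness of the running process, which on Windows can be a 32-bit process on a 64-bit OS, so a real installer would need a more careful check.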

The downside in supporting only 64-bit Windows is that we had hoped to be able to increase the range of devices HDXPRT 2012 supports. The advent of TouchXPRT, however, means that it might be the more appropriate benchmark for those lower-end devices that consume content rather than create it. What do you think? This is one decision where we would really like your input. So, should we support 32-bit Windows or limit HDXPRT to 64-bit? Thanks!

Bill

Comment on this post in the forums
