
Research and Design Process

Different organizations have different development processes, and fads in project management come and go, but the development task remains fundamentally unchanged: research to determine high-level plans, then more directed research to determine requirements, then designing the solution, then building it out, and finally checking for quality assurance. For the design of user experience, the same stages are followed, though with a focus on the user rather than the company.

The chief difference for the mobile channel is that it adds the need to design for each device class being targeted. (EN: My sense is this is similar to developing a software application for multiple operating systems, though there is greater variance in the device and system capabilities of mobile than of desktop.) Market analysis should help to identify which devices a given effort needs to support - e.g., a corporate application will likely have to accommodate only the devices issued by the company.
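(EN: As a rough illustration of that kind of market analysis, here is a minimal sketch in Python - the device names, visit counts, and 90% coverage cutoff are entirely hypothetical - that tallies traffic by device model and shows how many devices must be supported to reach a coverage target:)

    from collections import Counter

    # Hypothetical visit counts per device model, as might be drawn from
    # site analytics or carrier data - real figures come from your own logs.
    visits_by_device = Counter({
        "Nokia 5800": 4200,
        "Samsung Galaxy S": 3100,
        "iPhone 3GS": 2500,
        "BlackBerry Bold": 900,
        "Motorola RAZR": 300,
    })

    total = sum(visits_by_device.values())
    coverage_target = 0.90  # illustrative: support devices covering 90% of traffic

    covered = 0
    for device, visits in visits_by_device.most_common():
        covered += visits
        print(f"{device}: {visits / total:.1%} of traffic")
        if covered / total >= coverage_target:
            print(f"Supporting the devices above covers {covered / total:.1%} of users.")
            break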

MOBILE RESEARCH CHALLENGES

Research, especially in terms of usability testing, is fairly well established in the desktop development world. While much has been worked out, there is still a significant margin of "testing" error: testing is done in a controlled laboratory environment, the hardware/software is on the lab computer, and the user is given a specific task to perform and is generally not distracted or interrupted. Some of these problems are amplified in the mobile channel, and there are additional areas of concern for mobile.

Device proliferation refers to the multitude of different devices on the market, each of which has different capabilities and features. In many instances, an application must be written for a specific device (make, model, version), and even "universal" applications written for a common application platform will be significantly different on the various devices that support that platform.

Proliferation also affects usability, as users are familiar with their own devices and do not necessarily know how to use others. Hence, an application tested on a Nokia device in the hands of a person who normally uses a Samsung phone will show poor usability, whereas the results would have been better had the user been a Nokia owner. The answer is obvious: test on popular devices, and screen participants to ensure you are getting the proper users. If possible, set it up so the test can be done on the user's own device.

(EN: That makes quite a lot of sense - as it's the most realistic. My sense is that desktop applications are tested on a lab machine because you can't expect test subjects to bring their own computer. The one drawback is that if anything being tested could harm the device, you're opening yourself up to potential liability - but it seems a great idea if there is low risk.)

The author suggests that multimedia may be an issue, as usability testing techniques based on desktop software do not often consider things such as the aural experience, and a prototype of a media element will differ more from the final version. (EN: True, but I'm not sure the problem is quite as significant as the author suggests - it's not uncommon to do testing of media, such as radio or television spots - and while it's a bit messier than testing a web site with static content, it's still possible to get useful feedback.)

The notion of environment is mentioned. It's presumed that the test laboratory can replicate an office environment with acceptable accuracy, but because the environment of mobile use is highly varied, it's difficult to find design flaws that would be apparent in some environments and not others (walking past a light source, being in a noisy location, etc.).

USER RESEARCH

The author provides a quick overview of some of the basic methods of user research: ethnography, user interviews, and focus groups.

Ethnography, adapted from anthropology, involves observing people to gain an understanding of their patterns and practices. Ideally, it is done in the wild, where it provides the most accurate results - but it can be difficult to unobtrusively observe people in the actions you wish to study (especially if that action is using a mobile application that hasn't been invented yet). Ethnography can also be done in a laboratory environment, though this introduces a level of variance. (EN: In effect, usability testing is a form of ethnography, but it tends to be more intrusive - interrupting and directing the action.)

User interviews seek to discover information about an audience, their attitudes, and their stated preferences. Generally, interviewers ask open-ended questions to discover, rather than focused questions to validate (EN: the latter is more in the nature of a survey, which is a different technique). Commonly, the results of multiple interviews are analyzed to define common themes, and often to develop "personas" for use in design. (EN: The author does not mention the problem with interviews: the social dynamic in which the interviewee says what they think they should say rather than what they really think, or perhaps even what the interviewer wants to hear. It can be a fairly serious issue that corrupts results.)

Focus groups are common in market research, but do not often provide information useful for design decisions. (EN: My sense is a focus group is simply a group interview, where participants can play off one another as well as the interviewer to broaden the scope of discovery. Also, the social dynamic is even more corruptive in the group setting, so beware.)

The author provides some examples of the kind of information this research can reveal, though it seems much like the kind of information that could be gathered by survey (what device a subject uses, their carrier and plan, what apps they use, etc.) and seems to avoid some of the more qualitative information that is better gathered by the methods named above.

DESIGN PHASE TESTING

Two techniques specific to design-phase testing and research are discussed: card sorting and Wizard-of-Oz testing.

The card sort is generally used to help determine information architecture - to discover how users would organize information on a given topic. This is common for informational Web sites, to arrange a large number of articles into categories, but it is also done with software, to organize commands or functions into groups and menus. Topics are written on cards and users group them. It's fairly simple.
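(EN: To make the analysis step concrete, here is a minimal sketch in Python of how card-sort results are often tallied - the data format, card labels, and majority threshold are my own illustrative assumptions, not the author's:)

    from collections import defaultdict
    from itertools import combinations

    # Each participant's sort: a list of groups, each group a list of cards.
    # (Hypothetical card labels - real data comes from the sorting sessions.)
    sorts = [
        [["billing", "payments"], ["profile", "settings", "alerts"]],
        [["billing", "payments", "profile"], ["settings", "alerts"]],
        [["billing", "payments"], ["profile", "settings"], ["alerts"]],
    ]

    # Count how often each pair of cards lands in the same group.
    pair_counts = defaultdict(int)
    for sort in sorts:
        for group in sort:
            for a, b in combinations(sorted(group), 2):
                pair_counts[(a, b)] += 1

    # Pairs grouped together by a majority of participants suggest
    # categories the information architecture should preserve.
    majority = len(sorts) / 2
    for (a, b), n in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
        if n > majority:
            print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")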

The author uses the notion of a "Wizard of Oz" test to refer to prototype testing, which can be done with built-out screens that are linked together (but have no back-end system), or even with paper prototypes. In general, there is a test subject, a researcher, and a "wizard" who handles the test instrument (in effect, doing what the computer system would do to make the interfaces change according to the user's actions).

The notion of the "wizard" comes from an example where the prototype is done on paper, and the user looks at a fake device - with each click, the "wizard" (the man behind the curtain) drives the interactive presentation, which may even include making quick sketches.

APPLICATION USABILITY TESTING

The author suggests that there is a second round of testing that can be done to further refine the design. At this point, the design has been determined, but you're attempting to discover whether there are any major problems.

(EN: There is a lack of clarity on when this takes place - you can test a prototype before coding begins, components as they are coded out, or the final application prior to release. I think the author means to imply that it can be done at any time - though in practice, I've found that the further along the programming effort, the more costly it is to make design changes, and the more reluctant a company is to undertake the additional cost of doing so.)

The author mentions the use of emulators or simulators (EN: a distinction is made, but my sense is it's the author's own distinction) to test code: the developers do not need to compile and load to a device, but can test it on their own computers, which saves time in coding and debugging. Naturally, this seems like a quick and cheap way to usability test, but it provides inaccurate results because the user interacts with the device by proxy rather than directly, as they will in actual use.

(EN: Much more is said on this, as it's a pretty serious problem - but it's more of a political problem negotiating with sponsors or efficiency managers who want a quick and cheap test, even if the results are no good or even misleading. Perhaps this carping is included in case one of them reads this book - but for designers, the author is preaching to the choir.)

A step up from simulation is laboratory testing using an actual device, which has the benefit of being more realistic (the user is interacting with a device similar to the one on which the "real" application will be used), though it still lacks an accurate simulation of the environment in which it will be used. It's also noted that data collection can be a challenge: usability testing has largely been worked out for the desktop - how to record the screen, camera angles to catch user expressions and actions, even things such as eye-tracking - but this has not yet been worked out for mobile devices.

With mobile, there is the notion of field usability testing - putting the device in the user's hands in a public setting (a shopping mall, a city street, a park, a bus station, etc.) and following them around. This brings in some of the random features of real-world use, at some cost to statistical precision (the author advises against attempting to introduce distractions, unless you can do so for all subjects).

Some research is cited (Kaikkonen) that indicates that field tests take almost twice as much time and money as lab tests with no discernible benefit (they discovered the same design flaws, though field testing made them seem more severe). However, the author admits that this was a single study of one set of tests, so it may not be entirely reliable.

Informal field testing is also mentioned - getting "the man in the street" to test out a simple task and provide feedback. This can be done quickly and cheaply, and people are generally willing to let you interrupt their daily life for a small incentive. (EN: I suspect this is cultural, and I also suspect it skews the group toward more extraverted subjects.) The author admits this is not accurate and not a good replacement for formal testing, but it can be helpful in getting a wider array of feedback.

MARKET ACCEPTANCE (BETA) TESTING

Beta testing is a common practice in marketing that has bled over to other fields. In such a test, a product is introduced to a small group of users to see how well it does, and feedback is solicited.

One common practice is to give the product to participants for a month, then bring them into a facility for surveys and focus groups to gather information. This generally gives a sense of the suitability of the product to the needs of the user, but seldom gathers sufficient detail to identify product design and usability issues (though it may suggest areas where more thorough testing is merited).

Another form of research is mentioned that doesn't seem to fit any of the above: it relies on the mobile device as an instrument for collecting data from subjects - sending surveys to be answered in the wild, asking the subject to send a text or photo from their location, etc. (EN: I skipped a lot of details - innovative ideas for using the mobile device as a research instrument, but a bit off-topic from research that feeds into application design.)