New Methods, New Goals

The author asserts that usability cannot be achieved by merely putting out a bad product and making adjustments to deal with complaints: design and usability must be addressed during the development phase.

An example is the development of text-editing programs (such as vi and pico) that were created by programmers for their own needs and depended on an intimate knowledge of system commands in order to be used at all (a good example: "grep -v ^$" is the incantation for filtering the blank lines out of a document). This approach had the benefit of capitalizing on what those users already knew, but the detriment of making the programs more difficult to use (more keystrokes for a simple action) and completely unfathomable to anyone else.
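To make the cited command concrete, here is a minimal sketch (the sample file and its contents are invented for illustration):

```shell
# Create a small sample file containing blank lines (contents invented)
printf 'first line\n\nsecond line\n\n\nthird line\n' > sample.txt

# grep -v '^$' prints every line that does NOT match the empty-line
# pattern, i.e. the file with its blank lines filtered out
grep -v '^$' sample.txt
```

Note that nothing in the command itself suggests "remove blank lines": -v inverts the match, and ^$ is a regular expression for an empty line. This is exactly the kind of intimate system knowledge the author is describing.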

With the development of the GUI, computing was opened up to a broader user base: it provides a simple interface to the basic operations of a computer (opening a file, launching a program, etc.) without the need to learn commands, and this is the standard to which applications and Web sites should aspire.

The core problem is that this new breed of user wants to achieve a goal (to send a message, to buy a stock, to order a book, to get travel directions, to download a song) without having to learn the complexities of the equipment, and expects the same level of point-and-click functionality that the GUI has provided for the operating system.

To achieve this, user interface design must be integrated into the development process for software.

Methods for Achieving User-Centered Designs

The starting point in any project should be to understand the users: who they are and what they are trying to accomplish. This seems very simple, but it is often the last consideration in a process that focuses instead on the capabilities of the machine or the nature of the data involved.

One method for discovering this is the "needs assessment," which focuses on the needs of the user. An example is given of an inventory database containing a huge amount of data that can be manipulated and handled in various ways. The cloud of everything that is possible, based on the data and systems, dissipates quickly once those two questions (who are the users, and what are they trying to accomplish?) are answered.

Ethnography is another method. When the task is not entirely new, one can (unobtrusively) observe the way in which it is presently performed (even if technology is not involved) to determine the needs of the users and identify current choke-points in their process.

The author suggests a goal of attempting to "minimize cultural impact" when introducing a new system. He then suggests cultural borrowing (e.g., an innovative approach to air traffic control was discovered by watching the way in which teenage girls use computers), which would seem to contradict his original point.

Observation has its limits, and interviewing users may be necessary to uncover information that is not visible to the naked eye - but there is a word of caution about the accuracy of user accounts. Always compare what is claimed in an interview with what is actually done, as there are often stark differences, for various reasons.

After the analysis of users' needs, designers should create mockups of what the interface might be like, beginning with "hasty sketches" and proceeding to a more polished prototype. Importantly, the prototype is a blueprint for developing the software, not an afterthought.

Another method of user-centered design is usability testing, which consists of developing a prototype of the application and having test subjects attempt to use it to perform a task. This is generally done in a laboratory environment, which differs from the real-world environment but is necessary for closely observing the user's interaction.

Usability testing is qualitative: it uncovers problems encountered by users, but these problems may be highly idiosyncratic. Even so, usability testing in practice has been found to improve quality enormously.

Another method is the collection and analysis of user feedback. This includes both behavioral monitoring (collecting data from the system that indicates how it is being used) and the more organic sort of direct feedback gathered by allowing users to express themselves in words. Again, beware of incongruities between what users say and what they actually do. This feedback is useful for improving a program, as well as for avoiding problems in other programs involving similar tasks.
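Behavioral monitoring can be as simple as counting events in a usage log. A minimal sketch, assuming a hypothetical one-event-per-line log format (the feature names and the file are invented):

```shell
# Hypothetical usage log: one feature name per event (all names invented)
printf 'search\nexport\nsearch\nsearch\nhelp\n' > usage.log

# How often was each feature actually used?
sort usage.log | uniq -c | sort -rn

# Count a single feature's uses, e.g. to compare against what users
# claimed about it in interviews
grep -c '^search$' usage.log
```

The value of this kind of data is precisely that it records what users did, not what they say they did.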

A Metaphor for User-Centered Design

(EN: no such section header, but the author switches channels.)

The author suggests looking to architecture for a metaphor for development. The architect must consider the capabilities of the building materials when designing a structure, but is ultimately guided by the nature of the building and the needs of its users - and in the best of cases, the architect consults the actual users to build a custom structure, rather than designing for the perceived needs of an abstract user.

By contrast, the architect who ignores the user and builds according to the technical properties of the construction materials, or who pursues an artistic vision and considers practical use only afterwards, often designs a very bad building.

This brings the author to another random point: that the word "design" is used here in a practical, rather than aesthetic, sense. A quirky, cool, or fanciful Web site is not necessarily a usable one.

Moore's Law Reexamined

"Moore's Law" is the observation that the power of computers doubles roughly every eighteen months, and it has held largely true in the decades since Gordon Moore stated it in 1965. Generally speaking, processors get faster, hard drives gain capacity, connection speeds increase, and devices get smaller. Meanwhile, the cost of technology remains fairly stable.

But again, the capacity of the technology is self-referential. What is important, in a larger sense, is not what the machines can do, but what they enable the users to achieve. And in that regard, progress has been much slower.

There is a reference to another visionary, who indicated that "the new world of globalization is defined by collaboration" rather than competition between firms. The author suggests that this requires a new approach to metrics that considers collaborative interaction - how many messages you receive, how many groups you contribute to, how many other sites link to your pages. (EN: this seems to divorce actions from results - what's the point of this new metric?)

He points to collaboration across organizations (people from various departments coming together) and collaboration among companies to define industry standards and make their products interoperable. He also looks to the "societal collaboration" of individuals who participate in online communities, putting aside differences of culture, nationality, race, gender, and age to work together, and sees this open collaboration as the driver of progress.

(EN: all of this seems very kum-ba-ya, but I'll concede he has a point if you consider things on the macro level: open collaboration moves industries forward, but has limited potential for the individual companies.)

A specific example: e-commerce merchants are beginning to measure conversion rates and returning customers rather than mere hits, which will lead them toward customer experience and, ultimately, toward considering the effectiveness of their Web sites at serving customer needs.
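The arithmetic behind that shift in metrics is trivial but telling. A sketch with invented daily figures (a real merchant would pull these from traffic and order logs):

```shell
# Invented figures: raw hits say little; the conversion rate relates
# visits to completed orders
visits=12500
orders=250

# POSIX shell arithmetic is integer-only, so compute a whole percentage
echo "conversion rate: $(( orders * 100 / visits ))%"
```

A site could double its hits while this number falls, which is why conversion, not traffic, points toward customer experience.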

From Artificial Intelligence to User Interfaces

An early initiative of computing was to arrive at "artificial intelligence," enabling machines to perform the perceptive and cognitive tasks of the human user, thereby replacing people. (EN: the claim that it was for mundane tasks only is likely a political angle). This limited the vision to mere mimicry - finding a different way to do things that are already done - which the author (rightly) dismisses as "misguided and largely counterproductive."

A better approach is the use of machines to facilitate, rather than replace, human performance. (EN: hearkens to the "tool" theory, in which any device is merely an extension and improvement of human capabilities - the microscope is a more discerning eye, but it does not "see" on its own).

Another grand failure was the attempt to humanize machines, to make them more human-like so that they could interact with humans (and be granted equal status, in terms of productivity measures). Examples are provided of talking soda machines and characters such as "Clippy" that would offer advice in a friendly way ... which most people found irritating and decidedly unhelpful. "People don't want a relationship with a machine, they want control over it." Such characters were finally relegated to entertainment in the form of animatronics, which many people find to be creepy.

And so, AI has faded into the background, though it persists in the "provocative fantasy" of expert systems and intelligent machines. It has been largely replaced by a focus on the user interface, and an admission that computer technology cannot replace the human but can empower him - if only there is a method (an interface) by which a person can accurately and effectively operate it.

Guidelines for User-Centered Designs

Presently (EN: recall the book was written in 2002), the movement toward user-centered design is limited to advice on how to use color effectively, write clear instructions, and organize Web pages into a navigable site. These are all low-level tasks that seem to be addressed with little consideration of the high-level goals of user-centered design.

The author goes off to cloud nine again, but arrives at three basic principles: a "good" user interface must be consistent, predictable, and controllable. Consistency is not discussed in detail. Predictable designs give the user a clear indication of his options and knowledge of what the consequences of an action will be before the action is taken (no surprises). A controllable interface empowers the user to do what he wants: the task is easy to do, and the user is not compelled to do anything he does not want to do.

The author shows some examples (Southwest Airlines, Yahoo, and eBay) to illustrate these concepts. It's repetitive, granular, and random, so I'm skipping it.

The author does return, briefly, to consistency (which was neglected earlier), showing ways in which Web designers learn from their peers, which creates a consistency of actions among various Web sites. (EN: Don't know if I completely agree with the rationale that there can only be one "best way" of doing a task and all should follow suit - recall the need to serve the specific audience of a given site).

Why Focus on Human-Computer Interaction?

Human-Computer Interaction (HCI) is an extension of industrial design that has gained popularity and favor (the author cites the growth in degree programs). The author lauds it as a preferable alternative to technology-centered design, but doesn't go into detail about what HCI actually means. (EN: I think he has limited knowledge or an idealized vision of it that prevents him from sensing its inadequacy in its present incarnation.)

The Skeptic's Corner

The author notes that technology advocates are "probably angry by now," because the suggestion that the user's needs should be paramount is a threat to the control and power they presently exercise, to the detriment of the user.

An example is given of a conversation with a highway engineer whose solution to the problem of traffic was limited to "more of the same" - i.e., more and wider roads - adamantly rejecting alternative solutions (better urban planning, telecommuting). Technology advocates are likewise myopic, in their relentless belief that better technology means more technology, not a different approach to how technology is used.

He suggests that proponents of user-centered design should "be aware of the resistance they may face," and alludes to the way that societies treated the first astronomers who suggested that the sun, rather than the earth, is the center of the solar system.