jim.shamlin.com

Credibility and Computers

Credibility is critical to influence, and the ability of a person to change attitudes, motivate, and persuade others is strongly linked to the perception of credibility by those subject to persuasion.

What Is "Credibility"?

Credibility is the inclination of the recipient of information to accept it as true. It is generally gauged along two dimensions: the expertise of the speaker in the subject being discussed, and his honesty or trustworthiness, both assessed from the perspective of the recipient.

As a perceived quality, credibility may be granted to inaccurate information or withheld from accurate information. While the assessment of credibility is subjective, it is not entirely arbitrary: certain factors are common to most considerations of credibility.

Trustworthiness is a more important factor than expertise. Generally, it derives from the perceived objectivity of the speaker - the sense that they are not biased toward a specific agenda. However, a source is often granted credibility in general if they are willing to concede certain facts that seem contrary to their interests. For example, when a company praises its competition, that is taken as unbiased information and builds credibility (which may then be used to gain acceptance of a follow-up statement that is in the company's interest: our competitor's product is cheaper, but ours is better). A source is also granted greater credibility if the recipient perceives certain similarities to the speaker - the assumption being that the factors known to be common imply that other factors are common as well, including one's interests and agenda.

While less significant than trustworthiness, the perception of expertise also makes a recipient more open to persuasion. If a person is called by a title (such as "professor" or "doctor"), or has the physical attributes (a white lab coat) or behavioral attributes (a "clinical" demeanor) of an expert, they are likely to be accepted by a subject as a figure of expertise, and hence of authority.

The two factors do not always coincide: it is a common perception that a mechanic has considerable expertise, but he is viewed with some suspicion as to whether he is trustworthy (he may be conning you into paying for unneeded repairs). Likewise, a Boy Scout may be perceived as trustworthy, but the advice he provides may not be based on much experience (he is, after all, still a child). Where one factor is known to be weak, credibility suffers in general regardless of the strength of the other factor.

In situations where one factor is strong and the other is unknown, the speaker may still have credibility due to the "halo effect", the tendency of the listener to give the speaker the benefit of the doubt in an area where their assessment is uncertain.

And naturally, when both factors are considered to be strong, the speaker is granted a high degree of credibility, and has considerable persuasive power over the listener.

Another factor that comes into play is trust - not the trustworthiness of the speaker (which derives from a perception of their qualities), but the level of trust that the listener is willing to place in any speaker. A person who feels a high level of personal expertise will be critical of statements that do not jibe with their own opinions, regardless of the source. Also, when there is considerable risk involved, discomfort or fear may overcome the qualities of the speaker (a person with a fear of heights is unlikely to be talked into skydiving, even if the suggestion is made by a trusted friend with considerable experience).

Credibility in Human-Computer Interaction

In general, computers have developed credibility where the information they provide is driven by numeric data, owing to their accuracy, speed, and consistency; where the information they provide is of a less mathematical nature, their credibility is correspondingly weaker.

Due to the emergence of the internet and the proliferation of less-than-authoritative sources of information, the Web has lost much of its initial credibility (a topic to be explored in the next chapter), but in some areas, such as consumer information, it is still considered a more credible source than humans, as the assumption is that human salesmen have a history of misrepresenting information to consumers.

Credibility is not always a factor in human-computer interaction. The user may perceive the computer as a tool to be used for a purpose, in which case credibility matters only insofar as the computer is reliable in its performance. However, the author defines seven tasks for which credibility is a critical factor in HCI:

Naturally, the user's interaction with a computer may involve more than one of these contexts.

Types of Credibility

The author concedes that research has not formally classified credibility, but from his own experience he theorizes that it can be subdivided into four main categories: presumed, surface, reputed, and earned. This order reflects the chronological sequence in which credibility is established, and is also roughly indicative of the strength of each perception.

"Presumed" credibility is that which a person grants because of pre-existing beliefs. This is closely related to stereotypes and prejudices, in that the subject applies their assessment of individuals in similar roles (or with similar characteristics) to another situation that closely resembles their previous experience. In this way, the subject's approach to a new device or software is shaped by their previous experience.

(EN: I would add a fifth category of "associated" credibility, based on the subject's experience with similar devices. Arguably, this could be classified as "presumed" credibility, but it has a stronger basis in experience. This concept is drawn upon by brand marketing, where the presumed credibility of a product category may be overcome by the credibility associated with the brand.)

"Surface" credibility is derived from simple inspection. An initial impression is formed quickly, often based on visual appearance of surface traits, that characterizes the way in which the subject reacts to the device immediately. While perception may change with interaction, this is based on a modification of the initial impression made upon the subject. This is especially important when developing the visual design of an object, in that a poor design can "turn off" a subject immediately, and it may be difficult to recover from that position.

"Reputed" credibility is derived from the opinions of third parties. The assessment of credibility by a trusted source is an endorsement that colors the perception of the subject and may prejudice them to be more or less inclined to accept the credibility of an device with which they have no credibility. Naturally, this derives from the credibility of the referring party: the endorsement fo someone whom the subject does not know or trust is of less value, and an endorsement from a source the subject distrusts can harm the credibility of that which is endorsed.

"Earned" credibility is derived from pervious personal experience. Each time the individual re-encounters the device, their trust is based on previous encounters, and the subject generally does not re-evaluate whether the device is credible "this time," but expects consistency with previous experience. So long as their experience is consistent, the credibility is reinforced (but when the experience is inconsistent, credibility wavers a bit, and repeated inconsistency may undermine credibility altogether).

Dynamics of Computer Credibility

In a nutshell, credibility is much easier to lose than to gain. The user must bestow some level of credibility to interact with a computer at all, and gains confidence when the computer produces beneficial and correct results, but the computer may lose ground if a miscalculation or malfunction produces incorrect results, even after a history of positive experience.

The magnitude of a variance is reflected in the magnitude of its impact on credibility: the impact of the variance determines the degree of credibility lost (an error that results in injury is much more serious than one that creates a mere inconvenience).

Also, users may be more forgiving of inaccuracies of newer technologies (knowing the risk of the "bleeding edge") than of more established ones, especially if no alternatives exist. The example is given of navigation systems, which are only 70% accurate - however, users continued to rely on the devices because getting bad information 30% of the time was still not as inconvenient as getting no help at all.
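(EN: The dynamic the author describes - credibility gained slowly through consistent results, and lost sharply in proportion to an error's severity - can be sketched as a toy model. The update rule, rates, and severity scale below are my own invention for illustration, not drawn from the author.)

```python
# Toy model (illustrative only): credibility as a score in [0, 1] that
# rises slowly with each correct result and falls sharply with errors,
# the penalty scaled by the severity of the error.

def update_credibility(score, correct, severity=0.0,
                       gain=0.02, loss=0.3):
    """Return a new credibility score after one interaction.

    severity: 0.0 (minor inconvenience) to 1.0 (serious harm);
    only used when correct is False.
    """
    if correct:
        # Consistent, accurate results reinforce credibility gradually.
        return min(1.0, score + gain * (1.0 - score))
    # An error costs credibility in proportion to its impact.
    return max(0.0, score - loss * (severity + 0.1))

score = 0.5
for _ in range(20):                 # a long run of correct results
    score = update_credibility(score, correct=True)
high = score
# One serious error undoes more than the entire run of successes gained.
score = update_credibility(score, correct=False, severity=0.8)
print(high, score)
```

The asymmetric rates capture the "easier to lose than to gain" observation; a higher severity value models the injury-versus-inconvenience distinction.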

(EN: The author does not mention the ability of the designer to decrease the loss of credibility by using warning messages: if the computer detects a major variance and a message suggests that the outcome may not be accurate, the loss of credibility is decreased, and in some instances the ability to recognize and admit errors may even improve credibility.)

After a loss of credibility, there are two ways in which a computing device can recover. The first is to return to a state of accuracy in future operations, such that the user considers the error to have been an occasional glitch. The second, oddly enough, is to continue to make the exact same mistake in the exact same circumstances - for example, if the auto-correct function of a word processor improperly "corrects" a word, and does so consistently, the user will recognize that the error pertains to that one word, consider it to be a known exception from its typical performance, and generally find a work-around rather than abandoning the use of the auto-correct feature.

However, there remains a level at which the faith of the user is lost, and they will discontinue use, so the product will have no opportunity to regain credibility through any means. This is especially true of medical technology, in that if a device makes a single mistake, most practitioners will abandon its use altogether.

Errors in Credibility Evaluations

There are two kinds of errors in credibility evaluations: gullibility and incredulity. The gullibility error consists of accepting the computer as being credible when it actually is not (the user is unable to detect that there is inaccuracy) and the incredulity error consists of dismissing accurate information (the user suffers a logical or emotional defect that makes them unable to accept the outcome).

The gullibility error has received a great deal of attention, in that there are individuals and organizations who see themselves as guardians of the welfare of others and are quick to react, generally on the presumption that it was the intention of the designer to mislead the audience.

The incredulity error has not been given equal attention. It is generally in the interests of the creators of technology to promote faith in their own products, and there is little third-party advocacy to attempt to instill public confidence in their devices. (EN: I would suggest that the emergence of social media may have addressed this. There are many sites where individuals post ratings and reviews of products, or advocate them in blogs and other sites.)

Adjusting Credibility Perceptions

Addressing errors in credibility evaluation is difficult, as they arise from the mental state of the subject. There is some argument to be made that presenting additional information about a product will have little impact: a gullible person (lacking intellectual maturity) is unlikely to comprehend any additional information, and the incredulous person (lacking emotional maturity) is unlikely to be swayed from their position by factual information, but will regard it with equal incredulity. However, such arguments are based on the behavior of subjects at the extremes of deficiency. It is possible to educate the slightly gullible or to convince the slightly incredulous by presenting additional information - which is, in itself, a concern of rhetorical persuasion.

A more tractable challenge is for developers and marketers of computing products to position their products at the appropriate level of credibility: a product intended for entertainment purposes has little need to establish credibility (with the exception of simulations, whose appeal is in their mimetic capabilities), whereas a productivity tool has a greater need to establish credibility with its market.

While stressing the accuracy and reliability of a product is taken as axiomatic, there is some resistance to being forthcoming about the limitations of a product, as this is perceived as detrimental to the product's image. However, it has been found that admitting a minor shortcoming gives a person greater credibility, and helps them to retain credibility even when an error is made (the subject has been prepared for the eventuality of a variance).

The author concedes that there is a lack of formal research into the effect of admitting shortcomings on computer products, but he "suspects" that it would be as effective as it is for other products, so long as the shortcoming is not a fundamental software flaw that significantly detracts from the value the product is meant to provide. For example, products that hedge their estimates by providing ranges are, in effect, conceding the inability to arrive at a precise conclusion, but are often granted more credibility than those that provide a specific conclusion.

The Future of Computer Credibility

The future of computer credibility seems uncertain.

The level of media attention paid to instances where a disaster is blamed on a computer error, or where a hacker was able to subvert a system, has done much to undermine public confidence in the accuracy of computer systems, and computers are clearly losing their mystique as they become a part of everyday life.

But the latter phenomenon, the increased usage of computers, would seem to suggest growing credibility in technology among consumers, whose personal experience forms a basis of earned credibility for technology in general that the media may be losing the power to prejudice. The general experience of users has been overwhelmingly positive, which may outweigh the periodic problems, and the accuracy of computers improves with each technological advance.