Ethics and Persuasive Technology
(EN: A note of caution: philosophy and psychology are separate disciplines, and while this is a valid area of inquiry, the author's approach is superficial and unsystematic.)
Debate over the ethical use of persuasion is nothing new. Even as Aristotle composed the "Rhetoric," he acknowledged the potential for abuse and the existence of objections.
To suggest that any attempt to change another person's attitudes or behaviors is inherently unethical is extreme. As social creatures, people cannot avoid having an impact on one another, and the majority of human interaction involves some degree of persuasion.
And so, the assessment of the ethics of persuasion examines the persuader's means and intentions. Where force, threat, or deception is used to "persuade," it becomes unethical. Where a persuader takes advantage of a subject's ignorance, it becomes unethical. And where persuasion is used to manipulate the subject into undertaking an action that is harmful to himself and beneficial to the persuader, it becomes unethical.
Ethical Concerns for Persuasive Technology
The use of technology in persuasion has all the ethical portent of persuasion through any medium, but there are a handful of concerns that are considered to be unique to the medium, or to which the medium is particularly prone.
The novelty of technology can mask its persuasive intent. Because the medium is new, people are presumed to be unfamiliar with it: they may not consider it a conduit of persuasion, may not recognize the various tactics of persuasion in the medium, or may not have the experience to defend against unethical persuaders. And because persuasion is often built into games, simulations, and other interactive baubles, there is the appearance that such products are intended to distract users, or that they target children.
The complexity of technology can also be used to mask persuasion. A person who feels unfamiliar with and intimidated by technology is more likely to obey the computer's instructions without considering whether he should. A common example is a registration process that informs users they are "required" to provide personal information that is unrelated to the task, and the novice user assumes this to be true, or even legally required.
The reputation of computers for being intelligent and fair has been widely promoted (the author cites advertising that implies that "computer technology" assures reliability and quality), which enables persuaders to borrow upon this reputation to suggest the validity of their claims or to offer software that is biased by design.
The persistent nature of computers is cited as an ethical concern, in that computers can be programmed to relentlessly pester users into consenting by wearing them down or seeking a moment when they are not as attentive.
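The "relentless pester" pattern described above can be sketched as a simple reminder loop. This is a minimal illustration with hypothetical names and a hypothetical schedule, not a description of any real product:

```python
from dataclasses import dataclass

# Sketch of the "relentless pester" pattern: the program re-prompts on
# a fixed schedule until the user consents, and never tires of asking.
# All names and the schedule here are hypothetical.

@dataclass
class NagScheduler:
    interval: int = 60        # seconds between prompts
    consented: bool = False
    prompts_shown: int = 0

    def tick(self, seconds_elapsed: int) -> int:
        """Return how many prompts fire during the elapsed window."""
        if self.consented:
            return 0
        fired = seconds_elapsed // self.interval
        self.prompts_shown += fired
        return fired

    def respond(self, consent: bool) -> None:
        """Record the user's answer; only consent stops the prompts."""
        self.consented = consent

nag = NagScheduler(interval=60)
print(nag.tick(300))   # 5 prompts in five minutes
nag.respond(True)
print(nag.tick(300))   # 0 once the user has given in
```

Note the asymmetry that critics object to: declining does nothing to quiet the loop, so the machine's patience always outlasts the user's.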
Computers and technology can also be invasive, and critics question the ethics of using technology to intrude, without permission, into times and places where it is unwanted. Examples of this would include telemarketing, spam, and pop-up advertisements.
Another ethical concern occurs when computers control the interactive possibilities, which includes processes in which the user is put "on rails" to prevent the user from escaping a process without abandoning a task entirely, or simulations that are geared to lead the user to specific outcomes by eliminating possible alternatives from the interaction.
The fact that computers can send social and emotional cues to individuals, but are not subject to rebuttal by the same means, is viewed by some as an unfair advantage in negotiation gained by using computers rather than people to interact with subjects.
Computers can also be used to escape or displace responsibility. In an instance when a user is harmed by a computer (for example, a person's health is damaged by using a fitness training program), the developer may attempt to escape responsibility by blaming it on "computer error" or suggest that it was the victim's own fault for failing to use the software appropriately, and there is a tendency for people (and even juries) to accept these suggestions as valid.
Related to this is the ability to use computer technology in an anonymous fashion, such that the individual who is responsible for unethical conduct may falsify credentials or make it difficult for the victim to determine who is behind the computer. This functions both to escape responsibility and to mask the reputation of the operator.
Intentions, Methods, and Outcomes
Ethical concerns in instances of persuasion fall into one of three categories: intentions, methods, and outcomes.
One reasonable approach to assessing ethics is to investigate the intentions of the operator. Certain intentions are generally presumed to be positive, such as promoting anything that is beneficial to the health, safety, or welfare of the subject. In other instances, the ethics of intention are debatable, especially where the goal of persuasion is to sell merchandise or services or to advocate political causes. Even in such instances, however, ethics are subjective: critics seldom object to the use of persuasion to sell products they believe to have merit, nor to advocate political ideologies with which they agree.
The methods of persuasion are called into question when the operator uses deception, coercion, or other influence strategies that succeed by subverting the judgment of the subject. Even in this area, there is room for debate. For example, one may debate whether a given cause-and-effect scenario represents an intention to deceive or the operator's best effort to implement a valid statistical model. There is also some debate over whether the use of emotions or reputation is ethical (though these are acknowledged by Aristotle as valid and necessary elements of argument - pathos and ethos, respectively).
The outcome of persuasion is used to assess the ethics of an act. If the outcome of persuasion is benign, there is no significant ethical concern, but if it is harmful, ethics are called into question. In particular, any action has intended outcomes (addressed above) as well as unintended outcomes. If the unintended consequences are something that the operator could reasonably be expected to have foreseen, or if the operator fails to take corrective action (to stop, and to make amends) once an unintended outcome is discovered, this raises ethical questions.
Operant conditioning is a method of persuasion that is of acute ethical concern. Developing a conditioned response based on reward and punishment is theorized to operate at the subconscious level, rather than through conscious choice. And since most technology products provide cues to the user, there are concerns that conditioning may be intentionally employed to encourage or discourage behavior.
Conditioning is most clearly evident in games and simulations, where reward or punishment is used to cue the user toward a goal, but it is also present in productivity software (the unpleasant error tone, or the pleasant chime that indicates success).
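The reward-and-punishment loop described above can be sketched in a few lines. This is a toy model under assumed names, not a claim about how any real product is built: each rewarded action nudges a simulated behavior tendency up, and each punished action nudges it down.

```python
# Toy model of operant conditioning in software (hypothetical names):
# a reward cue raises the odds the user repeats a behavior, while a
# punishment cue lowers them.

def reinforce(tendency: float, rewarded: bool, step: float = 0.1) -> float:
    """Nudge a behavior tendency in [0, 1] up on reward, down on punishment."""
    tendency += step if rewarded else -step
    return max(0.0, min(1.0, tendency))   # clamp to the valid range

t = 0.5
for _ in range(3):                 # three rewarded repetitions (the chime plays)
    t = reinforce(t, rewarded=True)
print(round(t, 1))                 # 0.8
t = reinforce(t, rewarded=False)   # one error tone
print(round(t, 1))                 # 0.7
```

The ethical worry is precisely that this loop works without the user's conscious deliberation: the operator chooses which behaviors to reward.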
Operant conditioning by use of punishment is most likely to draw fire. If a Web browser were designed to make the user wait longer to download certain sites, or made information more difficult to access unless the user provides personal information (EN: which is actually quite common in subscription-based sites), the ethics become questionable.
A hypothetical example is provided of a software program that punishes users who fail to pay to register their copy by crashing the system or encrypting the user's files until the fee is paid; this is presented as clearly unethical. (EN: This, too, is a common function of demo software, though it tends only to prevent access to files created with the demo version until the user pays for the full version.)
Surveillance (which was mentioned earlier) becomes ethically questionable when the user is unaware of surveillance or is unable to avoid surveillance without inconvenience or penalties.
The area in which surveillance is most hotly argued is an employer's surveillance of employees, which may be done without their knowledge or unfettered consent (as the employee must accept surveillance as a condition of employment). There is also the question of whether surveillance may be used by a government against its citizens for the sake of security or public welfare, as well as instances where surveillance is done by parents on their own children.
Targeting Vulnerable Groups
The intentions, methods, and outcomes of persuasion that may be considered ethically acceptable when targeted at individuals capable of rational thought may become questionable when targeted at vulnerable groups, such as children, the mentally handicapped, the elderly, and individuals in times of crisis or emotional distress.
There is acute attention to this practice, particularly in regard to the use of video games to teach values to children. (EN: This is a particularly hackneyed argument, so I'm skipping the rest).
It is generally accepted to be unethical to intentionally target a vulnerable group for persuasion. It is more debatable whether the operator is morally responsible for the consequences when the product falls into the hands of an unintended subject; such arguments generally turn on social responsibility and the validity of intent.
Education As Key
The author suggests that education, in the sense of awareness, is a factor in addressing and defusing the ethical issues related to technology. With awareness, users of technology are better able to recognize and consider the tactics that are implicit in any technology product, and the operators are better able to understand, assess, and make well-reasoned decisions about their use of persuasive tools in their products.