jim.shamlin.com

6: The Myth of Fingerprints

The author provides a few examples of instances in which an innocent party was prosecuted, and even convicted, of criminal activity based on fingerprint evidence - even where there was evidence to the contrary that they were nowhere near the scene of the crime at the time it occurred. Especially in the wake of terrorist attacks in America, when the tone has changed to one of presuming guilt, detaining a suspect, and subjecting him to "hard persuasion" to elicit a confession, the notion of mistaken identity is a grave concern, even if the facts later prove a person to be innocent.

In spite of the high level of faith that is placed in fingerprint identification, no scientific study has ever shown conclusively that two sets of fingerprints can be matched with total certainty. The millions of dollars spent annually on research to improve the technology are a clear enough indication that it has not been perfected.

(EN: I wasn't able to find much corroborating detail on this in any reliable source, as it would likely be self-defeating for any researcher or equipment manufacturer to be frank about its level of accuracy - but claims that newer techniques and technologies are "more accurate" do seem to suggest that last year's model, which was the newest at the time, was not without its flaws.)

Fingerprinting is only one method of using biometric information to identify an individual - and while facial features, voice patterns, and retinal scans haven't "caught on" yet, they all seem to be based on a presumption that each person's body is entirely unique, that it does not change over time, and that it is possible to accurately measure and interpret readings to render ironclad and indisputable proof of identity. None of this happens to be true, but such assumptions are widely accepted, and there is an almost religious level of faith that it's just a matter of perfecting the technology.

Following a few high-profile cases of mistaken identification, Congress commissioned a study by the National Academy of Sciences. Published in 2009, the report, entitled "Strengthening Forensic Science in the United States," is described by the author as "not very encouraging." With the exception of DNA matching, no form of forensic analysis (fingerprints, bullet casings, paint chips, etc.) has a standard methodology. Each practitioner goes about things in his own way to create the appearance of certainty based on the evidence, and what is used as "proof" in criminal cases often has no scientifically defensible method of validating its accuracy. Ultimately, the study suggests that the field of forensics is highly flawed, and that the presumption of its accuracy has likely put innocent people in jail and exonerated criminals.

Criticisms of Fingerprinting

There has over time been significant evidence of the unreliability of fingerprinting, which has largely been ignored or flatly dismissed without due consideration of its legitimacy. Especially since fingerprints are commonly used by the judicial system, there is great discomfort and embarrassment in admitting, or even allowing, a shadow of a doubt about their absolute accuracy.

One such criticism is that fingerprint analysis focuses on a very few key features of a given print and ignores the rest of the data. Two prints are said to "match" if they have common features, and all other differences between them are ignored, even if the overall patterns are distinctly different.
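This criticism can be sketched in code. The following is a hypothetical illustration, not any vendor's actual algorithm: each print is reduced to a small set of minutiae points, and a "match" is declared when enough points coincide, regardless of how much of the rest of the print disagrees. The feature representation and threshold are invented for the sake of the sketch.

```python
# Hypothetical minutiae-style matching. Each print is reduced to a set of
# (feature_type, x, y) points; the threshold value is invented for
# illustration.

def prints_match(minutiae_a, minutiae_b, threshold=12):
    """Declare a "match" when the prints share at least `threshold` points.

    Note what is NOT checked: the two prints may disagree on any number
    of other features, and those differences are simply ignored.
    """
    return len(minutiae_a & minutiae_b) >= threshold
```

Two prints sharing a dozen points will "match" even if they differ on dozens of others - which is precisely the criticism.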

There is also the notion that each person's fingerprint is entirely unique. In fact, there are only three basic patterns - arch, loop, and whorl - and fewer than twenty variations (plain arch, tented arch, radial loop, ulnar loop, etc.), and the variety in the number of ridges, the degree to which they curve and intersect, etc., yields far fewer "unique" combinations than is generally presumed.

Additionally, fingerprint identification assumes that prints do not change over time. While the basic pattern remains the same, distinguishing features may vary greatly from childhood to adulthood. And even in adulthood, relatively minor injuries can alter the appearance of minute features in ways that can lead to a mismatch. The injuries need not be fresh, as even long-lasting scars change shape and migrate slightly.

Identification systems claim accuracy on points of comparison, some claiming to be able to examine as many as 2.5 billion - but even these systems ignore a significant amount of data, and focus on minute features that may not be unique or specific.

Fingerprint Scanning Systems

In spite of the flaws in fingerprint scanning, technology solutions have attempted to use it as a method of identification. For a time, some models of laptop computers were equipped with fingerprint scanners as a means of authentication, and some retailers have periodically experimented with the use of a fingerprint as a method of identifying customers for loyalty programs or even to access payment card accounts. Some European countries are using fingerprints for immigration control. (EN: no details on which countries, or how it's being used, so I'd toss out that example.)

Aside from consumer frustration at the inability of such systems to correctly identify a person by their fingerprint, they are also relatively simple to defeat. The author mentions the work of a number of researchers, and even popular television shows, in demonstrating how easily they can be fooled. One researcher's technique achieved an 80% success rate by lifting prints from glass and making an imprint with gelatin. Added "security" features that use temperature and moisture detection were fooled by measures as simple as holding the mold in a closed hand to warm and moisten it.

These techniques can be reproduced easily and cheaply, and are well worth the effort to get a few hundred dollars' worth of groceries on someone else's tab. Even more accuracy can be achieved with more sophisticated techniques - and if the reward is access to a sensitive location, it may be worth the investment.

Naturally, all such techniques require a print to reproduce - the hackneyed example being getting someone to touch a glass or a piece of metal - but even a high-resolution photograph can capture sufficient detail to forge a print, and the databases of fingerprints maintained by law enforcement and their vendors are not very well protected. As evidence of the ease of lifting a print, one German hackers' club obtained fingerprint images of government officials and posted them to the Internet.

Case Study: Disney

Disney uses fingerprints to enhance the security of its multi-day and seasonal passes, after bar-codes, photographs, and other methods of security became easier to counterfeit. The author details the way in which the system works (converting the fingerprint to a numerical code rather than storing the actual print, dumping the data when tickets expire, etc.).
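A minimal sketch of the general approach described - the encoding below is invented, since the text does not specify Disney's actual scheme: the scanned features are reduced to a one-way numerical code, so the raw print is never stored, and the code is discarded when the ticket expires.

```python
import hashlib

def ticket_code(minutiae):
    """Reduce scanned features to a short numeric code (one-way).

    Hypothetical encoding: serialize the feature set and hash it, so the
    stored value cannot be reversed into the original print.
    """
    canonical = ",".join(f"{t}:{x}:{y}" for t, x, y in sorted(minutiae))
    return int(hashlib.sha256(canonical.encode()).hexdigest()[:12], 16)

def expire_ticket(registry, ticket_id):
    """Dump the stored code when the ticket expires."""
    registry.pop(ticket_id, None)
```

The same finger scanned at the gate yields the same code, which is compared against the stored value - without the system ever retaining an image of the print.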

Given the volume of tickets sold, and their need to process admissions quickly, the company has much more experience in this area than many government agencies. However, the risk is significantly less: they need not worry about perfect accuracy, and are merely attempting to discourage widespread forgery that would be detrimental to their income, and can still operate profitably even if some tickets are forged.

In spite of their extensive experience, the system is far from perfect: there are instances in which a legitimate ticket-holder might be denied admission (so other methods are used when the fingerprint method fails), there's really no accurate way to estimate the number of forgeries that still succeed, and at very busy times the park doesn't use it at all.

Danger of Biometrics

The author brings up the example of a Malaysian man whose vehicle was protected by a biometric lock. The thieves first threatened him to get him to open the car, then kidnapped him, and eventually cut off the tip of his finger. In this sense, the biometrics used to make the vehicle safer served to put the owner in much greater danger.

(EN: An entirely plausible argument, and the author presents it without exaggeration. I've seen the same notion presented in other sources, wildly exaggerated - but to date, the only incident presented with much credibility is this one case. So the concern can't be denied, but the level of hysteria does much to discredit it.)

Other Biometrics

Another form of biometric is vein-pattern recognition, which exploits the tendency of veins (containing deoxygenated hemoglobin) to appear as a black pattern when scanned with certain wavelengths of light. This pattern is considered to be unique even in identical twins; it ensures that a non-living artifact such as a fake (or even severed) finger cannot be used against the system; and since the veins are internal, the pattern can only be discovered by such a scan.

Japanese ATMs use such a system, and it has been "highly effective" in preventing fraud. One system, developed by Fujitsu, was implemented after a rash of ATM fraud, and it's been suggested that it has been in place for several years "without incident." The same system is being used in hospitals in lieu of bracelets to identify patients.

Another biometric uses camera images of the iris, which can be taken from three or four feet away, to recognize patterns that are unique to the individual - even more distinctive than fingerprints (again, the "different even in identical twins" standard is mentioned) - that do not change over time and are generally not affected by injury. The accuracy level cited is that "not a single false match has ever been reported," even across more than 200 billion comparisons.

This system is used at Heathrow Airport to enable British citizens returning from travel abroad to be processed quickly - in a few seconds - with such a degree of confidence that customs officials do not ask to see a passport. The author suggests that this might be placing too much confidence in a single method of identification - it would be wise to have a back-up plan.

Retinal scans work a bit differently: they measure the pattern of blood vessels on the back wall of the eye, which is also unique to the individual and has a high level of complexity. As such, it is considered by some experts to be "the gold standard" of biometrics.

However, retinal scans are invasive (they require a puff of air to be blown into the eye at close range), and it often takes repeated attempts to get a usable scan. Certain health conditions (astigmatism), disease, age, and even allergies can also affect the pattern sufficiently to prevent a positive match, though it's not considered possible for this to create a false match, and a way to "forge" a retinal scan is presently unknown.

Facial Recognition

Facial recognition was used at the 2001 Super Bowl, using security cameras to record the image of everyone entering the stadium and match them against a database of known or wanted criminals. There were some legal issues raised (attendees were not informed this was being done), but more to the point, it was a technical failure.

(EN: Fourth Amendment protection against illegal search and seizure is cited in this instance, and is often cited in any use of facial recognition. My sense is that the argument is not tenable: it would be difficult to sell the notion that using technology to do what any person can do - look at the exposed face of an individual in a public place and recognize them - constitutes an illegal search.)

It would seem that facial recognition is a natural for security: faces are easy to observe, the human face has a high level of detail and degree of uniqueness, and the ability to recognize a person by physical features (including facial ones) is commonly exercised without technology. And yet, technology can't seem to get it quite right.

(EN: I scanned ahead to see if the author makes reference to the Griffin system, commonly used to identify "cheats" in casinos, which uses candid photos on both ends. While the system isn't perfect and can't stand on its own, it does a remarkable job of using candid images from video surveillance to present a number of possible matches to a human operator, who ultimately makes the decision as to whether the person at a gaming table matches the records in their black book.)

One of the problems is that the human face is highly expressive, and changes in its shape confound software solutions - this is the reason that, when a person is being photographed for an ID card, it takes some effort to position the subject and insist they do not smile, to get a "blank" expression: they have to provide the kind of data that the system is equipped to handle.

The simple act of smiling changes a person's entire face - not just the mouth and surrounding muscles. It changes the shape of a person's forehead, ears, nose, and eyes. To get a set of images that would create a good basis for comparison, the subject would have to go through the ridiculous exercise of miming numerous expressions (relax your face, now smile, now frown, now scowl, etc.), and storing all those images would consume massive amounts of data. Even then, a "fake" expression differs from a genuine one.

Once the image is captured, matching it against another image of the same subject is difficult - facial hair and make-up will confound the algorithm, as will "allergy eyes" or dark circles from exhaustion. If the photograph is taken candidly, without posing the subject, the precise angle and distance, as well as the facial expression, will prevent an accurate match or cause a mismatch.

Biometrics have been attempted since the 1800s (the measurement of convicts to determine whether facial features could show a person's natural proclivity toward crime), and the field has yet to get it quite right. Even human beings, who are somehow able to recognize a familiar face very quickly, are not able to do so with much accuracy when it comes to a person they haven't seen repeatedly.

The notion that people of a given race "all look alike" does have some basis in the amount of training it takes to discern people from one another. A westerner in China generally takes a few months to be able to recognize people in a crowd of others whose features are similar, as the differences can be subtle.

The same is true of objects: set two cell phones of the same model on a table, and the owner may be able to identify his own only because of unique wear patterns that are very subtle. If there aren't many differences, or if the phone hasn't been owned for long enough to take mental note of them, people can't identify their own in a group.

There is also the problem of the two-dimensional nature of computer imagery. Even a person who sees in three dimensions may not recognize a familiar face in dim light or shadow, and computers interpret depth based on the perception of light and darkness. A fair-skinned person in bright light, or a dark-skinned person in dim light, is a featureless blank to computer software.

It's suggested that the very best computerized facial recognition systems are only about 80% accurate, and if it were possible to address all the known problems, it's estimated it would be about 95% accurate. But even that is insufficient to make it practical for most of the purposes for which it is proposed.
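A quick base-rate calculation (my own illustration, not the author's, with invented numbers) shows why even 95% accuracy falls short when a system scans large crowds for rare targets: the false positives swamp the true matches.

```python
# Hypothetical scenario: scan 100,000 faces for 10 wanted individuals using a
# system with a 5% false-positive rate and a 95% true-positive rate.
crowd, targets = 100_000, 10
false_positive_rate, true_positive_rate = 0.05, 0.95

false_alarms = (crowd - targets) * false_positive_rate  # innocents flagged
true_hits = targets * true_positive_rate                # actual matches found
precision = true_hits / (true_hits + false_alarms)      # fraction of flags that are real

print(round(false_alarms), true_hits, round(precision, 4))
```

Roughly 5,000 innocent people are flagged for every nine or ten genuine matches - fewer than one flag in 500 points at a real target, which is why such accuracy levels are impractical for crowd surveillance.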

While facial recognition "in the wild" is dreadful, the ability of systems to compare static photographs has shown some progress. Since the photographs taken of passport and driver's license applicants, and of people who are arrested or detained, are all composed in roughly the same way, there is less inaccuracy, and, if human judgment and common sense are applied, such comparisons can yield beneficial results.

In these instances, the computer match is used to identify a possible match, and a human operator considers the evidence before deciding whether a given incident requires further consideration. Even so, identical twins and people with similar faces have been subjected to scrutiny.

Voice Recognition

Voice recognition is one of the least obtrusive forms of recognition, but it has suffered from marketing problems - simply put, claims of its accuracy were aggrandized to the point that it lost much of its credibility when flaws were identified.

The technology works by considering multiple characteristics of a particular human voice - the tonal quality created by physical features such as the shape of the nasal cavity and throat, idiosyncrasies of accent and pattern of speech, and even the "style" of speech such as vocabulary and diction that are difficult to adjust.

Naturally, each of these factors has shortcomings: physical features may be similar among individuals, a cold can alter a voice, accent and pattern can be disguised, and even the idiosyncrasies can be consciously adjusted - but taken together, and given a long enough sample for comparison, they can be reasonably accurate. There's also the problem of sound quality and background noise when a voice is captured in the wild.
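The "taken together" argument can be sketched as a weighted score over several weak features. The feature names, weights, and threshold below are all hypothetical; real systems extract far more elaborate acoustic measurements.

```python
# Each feature comparison yields a similarity in [0, 1]; no single feature is
# trusted on its own, but the weighted aggregate is more discriminating.
WEIGHTS = {"tone": 0.4, "accent": 0.3, "cadence": 0.2, "diction": 0.1}

def voice_match_score(sample, reference):
    """Combine per-feature similarities into one weighted score in [0, 1]."""
    return sum(w * (1.0 - abs(sample[f] - reference[f]))
               for f, w in WEIGHTS.items())

def same_speaker(sample, reference, threshold=0.85):
    """A head cold may drag one feature down without sinking the total."""
    return voice_match_score(sample, reference) >= threshold
```

For instance, a cold that shifts the "tone" measurement by 0.2 only costs 0.08 of the total score, so the aggregate can still clear the threshold.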

Multiple Vectors

To improve the accuracy, other vectors can be used. The FBI's "next generation identification" system is attempting to pull together multiple vectors - facial recognition, tattoos and scars, iris patterns, fingerprints, and even things such as posture and gait - as a method of compiling a database that can be used to spot someone and accurately identify them by multiple traits.

It's alleged that "massive" databases exist in the US and UK to improve the ability to recognize terrorists, or to avoid wasting time investigating people who might be misidentified because of a few common traits. However, even this system has its flaws, in that a terrorist may give false information to evade being identified and, at the same time, cause an innocent person to be misidentified.

One egregious example is a 62-year-old Dominican nun who ended up on the no-fly list, and is still subjected to special security measures whenever she travels, because a 30-year-old man in Algeria who is suspected of terrorism used her name as an alias. The fact that two individuals so disparate in terms of race, gender, and age can be confused is a clear sign that the system is flawed and the authorities aren't applying a basic level of common sense.

Conclusion

The notion that biometrics are absolutely unique to the individual is in some instances accurate, but this level of faith is too broadly granted to metrics that are not. Additionally, the notion that technology is accurate in measuring and analyzing this information is clearly a matter of faith rather than fact.

While computer technology is given much credit for its ability to perform mathematical comparison with much greater speed and accuracy than the human mind, it is sorely lacking in its ability to analyze and interpret data. Even the most complex computer algorithms are based on simplistic mathematical analysis and are limited to the logic by which they are programmed, which may be inherently flawed or based on incorrect assumptions of the programmer. Even DNA analysis, which is considered to be highly accurate, is based on a simplistic mathematical model of the human genome and the functions developed by programmers to draw comparisons.

It's also noted that while the computational capabilities of a computer processor are largely accurate, the capabilities of input sensors are still very primitive and unreliable. A camera, microphone, or other input device is far from flawless, and the input of sensors must be broken down into simple mathematical values based on assumptions and imperfect human logic.

Aside from imperfect logic, there is also an economic cost to technology. The cost of equipment and software can be significant, but there's also the cost of collecting data. If a crime is committed in a public place (such as a restaurant) or even a place where multiple people have been (a hotel room, even a private home), there is an enormous amount of trace evidence to be collected, and an equally enormous amount of effort in sorting through it to determine what is relevant and accurate.

(EN: The author also doesn't mention that many analyses, especially in the judicial system, are conducted by one side or the other, and begin with a hypothesis - guilty or not guilty - and evidence to the contrary is often ignored or suppressed. Thus, the sense that one fingerprint at a crime scene is relevant but thousands of others must be ignored without any further analysis.)

Every method of identification is conducted to achieve a desired result - that a person is, or is not, a match for a given measurement. And because the methods by which the match is made are known, they are vulnerable - a person who wishes to create a false match, or avoid an accurate one, has a clear indication of what he must do to game the system.

But the fundamental flaw goes back to the faith people put in such systems to deliver as promised, and to ignore or dismiss any evidence to the contrary. We want technology to "work," and assume that it does so flawlessly, and are resistant to the notion that it doesn't, even if the results are wrong in a way that can be plainly observed.

Loose Bits

In the context of one of the examples, the author presents an interesting "trilogy" for authentication: a system that combines "something you have" (such as a card) with "something you know" (a password or code) and "something you are" (a biometric) provides a strong level of security that is not excessively inconvenient to the user.
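A minimal sketch of that three-factor scheme, assuming a card serial number, a hashed PIN, and a biometric similarity score supplied by some external matcher - all names and the threshold here are invented for illustration:

```python
import hashlib

def authenticate(record, card_serial, pin, biometric_score, threshold=0.9):
    """All three factors must pass, or access is denied.

    record: enrolled data - card serial and a PIN hash (never the raw PIN).
    biometric_score: 0..1 similarity reported by an external matcher.
    """
    have = card_serial == record["card_serial"]                            # something you have
    know = hashlib.sha256(pin.encode()).hexdigest() == record["pin_hash"]  # something you know
    are = biometric_score >= threshold                                     # something you are
    return have and know and are
```

The strength of the combination is that defeating any one factor - a stolen card, a shoulder-surfed PIN, a gelatin finger - is not enough; the attacker needs all three at once.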