U.S. Department of Justice, Office of Justice Programs
National Institute of Justice
The Research, Development, and Evaluation Agency of the U.S. Department of Justice

Transcript of "Human Factors in Latent Print Examination"

NIJ Conference
June 2011

Moderator: Melissa Taylor, Program Manager, Office of Law Enforcement Standards, National Institute of Standards and Technology
Panelists:
- Deborah Boehm-Davis, Professor, George Mason University
- Melissa Gische, Physical Scientist, Latent Print Operations Unit, Federal Bureau of Investigation Laboratory

Melissa Taylor: The National Institute of Justice sponsored the National Institute of Standards and Technology Law Enforcement Standards Office to pull a group together to conduct a scientific assessment of the effects of human factors on forensic latent print analysis, with the goal of recommending strategies and approaches to improve its practices and reduce the likelihood of errors. The question of error within forensic disciplines has become a topic of considerable debate in recent years, and the legal community as well as the scientific community are really advocating for the traditional forensic science disciplines to reevaluate their policies and procedures and assess potential sources of error. So the big question that we were asking this working group is "What are the things that prevent an examiner from achieving excellence?" I also provided OMB's definition of scientific assessment, which is "an evaluation of a body of scientific and technical knowledge that typically synthesizes multiple factual inputs, data, models, assumptions, and/or applies best professional judgment to bridge uncertainties in the available information." We chose this definition because there was not a lot of research directly related to human factors in latent print examination when we began, so we really wanted to be able to rely on best professional judgment to fill in where that research wasn't available.

So why did we choose human factors to frame our discussion about error? Well, human factors analysis can be used to advance the understanding of the true nature of errors in complex work settings. Research in this area has identified factors that contribute to inefficiencies and quantified the effects of human and organizational factors on the performance of critical tasks. The forensic science community can really benefit from the research that has been done to reduce the consequences and likelihood of human error in other fields like medicine and aviation. And "human factors" is a term used to describe the interaction of individuals with each other, with facilities and equipment, and with management systems. This interaction is also influenced by both the working environment and the culture in which people work.

So some lessons learned from human factors research are that error occurs in all human endeavors. No matter what your intent is when you first begin or how hard you strive not to make an error, in any human endeavor, error exists. And the root causes of error can be known if you look beyond just the person who is doing the work, take a systems approach, and ask why the error happened. Many errors are caused by activities that rely on weak aspects of cognition, like short-term memory and attention span. There are ways to get around these weak cognitive aspects with things like checklists, and errors can be prevented by designing tasks and processes that minimize dependence on weak cognitive functions. And the fear of punishment for performance errors inhibits error reporting, which is really important when you're trying to improve quality, because you want to know the frequency of the different types of errors that happen and really get a good understanding of how to prevent them.

So as I said before, the working group, through a consensus process, really took the time to take the lessons learned from the human factors community and set out to evaluate how human factors affect current practices within the latent print community. The working group was able to reach substantial agreement on many issues, not just limited to the formal recommendations. But on some significant matters, as you can probably imagine given the size of the group and the diversity of the group, there were some areas where consensus couldn't be reached. And the final report of the working group also spells out those areas as well.

So just to give you an idea of where this working group has been and what we have left to do, we started in about 2008 with the project plan initiation. It was submitted to NIJ for funding. And they funded it and the first meeting was in December of 2008 and we spent the first few meetings really developing baseline knowledge for this multidisciplinary group because not everyone was familiar with human factors, not everyone was familiar with latent print examination. So we took a lot of time doing workshops, bringing in presenters from different fields, to make sure that everyone had a clear understanding of what they were being asked to do. Currently, the draft report is being circulated for external review. It will be released later this summer. So the draft report provides recommendations related to work environment, management, quality assurance and quality control, testimonial reporting, interpretation, training and education issues, as well as technology issues. It also provides a descriptive process map. When the working group first started out, we looked around to see if there were any really good descriptions of how latent print examination is done currently in the U.S., and we really couldn't find one. So we started from scratch with a few latent print examiners who came into the room and just basically said, "Ok, if I'm a piece of evidence, what's the process that someone uses in order to make a decision?" And we came up with, I think, a really good product that will provide people who don't have background in latent print examination a good idea of what goes on. And one of the speakers this afternoon, Melissa Gische, will give you a presentation on that process map.

So one of the things that the working group discussed was bias. As you know, "bias" can be a very loaded term. Some people speak of personal bias, racial bias, gender bias, cultural bias, media bias, political bias, and so on. But this is not the type of bias the working group was concerned about. Mainly we were concerned about cognitive bias, which is far more subtle and usually unknown to the observer. But the working group also looked at two other types of bias: legal bias and statistical bias. In law, bias refers to witness partiality toward one party or against another party as a result of financial, emotional or other interests or attitudes. In statistics, bias refers to the extent to which an average statistic departs from the parameter it is estimating, or the extent to which measurements on individual units systematically depart from true values. In psychology, which will be the focus of our next presentation, cognitive bias is a general term for many observer effects in the human mind, some of which can lead to perceptual distortion, inaccurate judgment or illogical interpretation.
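
[Editor's note: to make the statistical sense of bias concrete, here is a minimal Python simulation. It is an editorial illustration, not from the working group's materials; it shows the classic biased variance estimator, whose average systematically departs from the true parameter.]

```python
import random

# Statistical bias: the systematic gap between an estimator's average
# value and the parameter it estimates. The naive variance estimator
# (dividing by n) is a classic example -- it is biased low.

def naive_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)        # biased

def corrected_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # unbiased

random.seed(1)
n, trials = 5, 20000   # true variance of the population below is 1.0
naive = corrected = 0.0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    naive += naive_variance(sample)
    corrected += corrected_variance(sample)

# The naive average lands near 0.8 (systematically below 1.0); the
# corrected average lands near 1.0. That persistent offset is bias.
print(f"naive:     {naive / trials:.3f}")
print(f"corrected: {corrected / trials:.3f}")
```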

Like "bias," the word "error" has a multitude of possible meanings. In the medical domain and other domains they use various terms for error. But for the purposes of the working group we really focused on two types of error, which is procedural and outcome error. So the procedural error refers to departures from prescribed procedures or protocols. Outcome errors contrast with procedural errors. It is closer, but not identical, to an "adverse event" in fields such as aviation and medicine. You could imagine in aviation it's a plane crash; in medicine it might be that someone dies. But just as procedural errors do not necessarily lead to erroneous outcomes, they nevertheless are important because they may have implications on improving training, so we need to track both procedural and outcome errors.

And our next presentation will be from Dr. Boehm-Davis, and she'll be giving a more in-depth presentation on human factors, specifically on contextual influences on decision-making.

Deborah Boehm-Davis: Context, or contextual influences, result from the way in which we process information. There's nothing magic, nothing unusual, about the fact that context influences us. It's part of the way that we process the information that is around us. We know this based on lots and lots of psychological research. Context typically helps us--in that second case we were able to interpret the word. But it doesn't always help. Sometimes it leads to misunderstandings. Let me give you an example drawn from a domain that is probably alien to most of you, which is aviation. Let's say that you're a pilot, and you've been given directions to descend from 20,000 feet to 10,000 feet by a particular point in space. Further imagine that the pilot wants to do this in the most fuel-efficient manner. It turns out there is a mode in the aircraft called VNAV, for vertical navigation, and that allows you--it computes, actually, for you--the most fuel-efficient path. What it then does is display when you should start down and the path you should follow in order to get down in the most fuel-efficient manner. Now, most advanced aircraft will in fact start you down at the top-of-descent point. That is, the equipment knows that's where you are, and it just starts to bring the aircraft down because you selected vertical navigation. However, in less-equipped aircraft, it gives you the guidance, but it doesn't start you down. You actually have to tell the plane to descend when you hit that top-of-descent point. Now, what happens when a pilot misses that point? They're engaged in conversation, they're talking to air traffic control, they're talking to their co-pilot. They fly past the top-of-descent point. Well, the vertical guidance goes away. Why does it go away? The aircraft is smart enough to know, "I can't get down anymore in the most fuel-efficient manner. In fact, the only way I can get down now is to use some other mode, which is a steeper descent that is not as fuel-efficient." So the aircraft understands that this is what had to happen. But the pilots don't understand why all their vertical navigation went away. They're looking at the plane, and it's giving them the information. They look somewhere else and look back, and it's gone. And, much to my surprise, they really don't understand why it goes away, because when they're trained, they're trained by procedures. Do this, do this, do this. They're never told that the system is smart enough to know when it can't achieve what they've asked it to do. Now, it could achieve it when they started, but it can't now, and so the information has gone away. This is a case where there's a mismatch between what the pilot thinks might be happening and what the system thinks is happening, and that leads to misunderstandings.
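
[Editor's note: a toy Python sketch of the mode mismatch just described -- purely illustrative, not real avionics logic or any actual autopilot interface. It shows how guidance that silently depends on a precondition can vanish without explanation.]

```python
# The system withdraws its fuel-efficient guidance once the
# top-of-descent point is passed. That is sensible from the system's
# point of view, but surprising to a pilot trained only on procedures.

def vnav_guidance(current_position_nm, top_of_descent_nm):
    """Return descent guidance, or None once the efficient path is gone."""
    if current_position_nm <= top_of_descent_nm:
        return "follow computed fuel-efficient descent path"
    # Past top of descent: the fuel-efficient path is no longer
    # achievable, so the guidance simply disappears from the display.
    return None

for pos in (95, 100, 101):
    print(pos, "->", vnav_guidance(pos, top_of_descent_nm=100))
```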

Also, this contextual influence can lead us to biases in the way that we process information. So we don't take in everything around us. Right now, there are sounds coming from the next room, there's lighting, there's temperature. There are all these pieces of information available to us, but we don't process every piece of information that's in the environment. We couldn't. It's just too much. So we focus. And let me give you an example of focus coming from another domain. Here is an image, and the blue lines illustrate where the nurse's eyes looked as she watched the replay of an actual resuscitation in the emergency room. You can see that she's focused primarily on the patient, which is kind of under that big mass in the middle--that's the patient's face. But she's also checking other things in the environment. I want to contrast this with the pathway that the eyes take for the anesthesiologist. The anesthesiologist is only focused on the patient's face and airway. You can see there is almost no excursion beyond that location. That has implications for how I design some kind of monitor or alert for that physician. I can't put it up where the nurses are looking. That anesthesiologist would never, ever see it. And that's because that anesthesiologist is selectively focusing on information that they think is relevant. This is generally a good thing. You want them focused on the airway. And if you're a patient, you probably really want them focused only on the airway. But again it has implications for what information can be perceived by that individual.

Well, what does all this have to do with the topic we're about here today, latent fingerprint analysis? I want to talk to you about two studies. Both of them used actual latent fingerprint examiners. They were given latent fingerprints to analyze, using fingerprints that had previously been identified as a match or a no match by examiners. And then they manipulated the context--the information that surrounded the prints. In one context, on the left, the person had confessed to the crime, an eyewitness had identified him, the detective knows that the individual is guilty. In the second context, someone else confessed to the crime, someone else was identified, or the detective really doesn't think this is the right person. Another way they might have manipulated the context had to do with the sheet that was given to the examiner. In the remarks section--you probably can't read it, but it says, "The above listed suspect is the person who pulled the trigger. Making every effort to place him in the truck." So, the first study consisted of six experts. Each of them had more than five years of experience in latent fingerprint examination, post-training and accreditation. They all consistently passed proficiency tests. They were regarded as very competent investigators. They were approached by their manager, and they were asked to make eight judgments on pairs of prints. They were not told that these were prints they had previously seen. In four of them, they were given the same context as the original crime; in four they were given a different context. So what happened? In five out of 47 judgments, the choice changed, from either match to no match or from no match to match. Second study. Again, five experts. These were new experts, again with lots of experience. They were each given one set, and they were biased toward a no-match decision. These were all cases that had previously been identified as a match, and in four of the five judgments, the match changed to no match with the change in context.

So context, or lack thereof, can lead to misunderstanding of the system, can lead to bias. Again, it's really important--Melissa said this to begin with--bias is not intentional. We are not talking about having some prejudice in your mind. This is without awareness. It's not an ethical issue, and knowing that these things happen does not make it go away. We are influenced by contextual information in terms of both our perception and in decision-making. So we can maybe take the information and apply it to understand how we might improve performance in the specific domains that we're interested in. And I want to talk a little bit both about human factors and then how it's been applied in some other domains so that we can talk about implications it might have for latent fingerprint examination. So as Melissa said, human factors takes what we know about our processing limitations or the ways in which we process information, and applies it to the design and/or training to improve safety or effectiveness of performance. More formally, as she said, it's the study of performance capabilities, limitations and other characteristics that get applied to systems that we work with. And it's also important to point out that it's a science. It's not just kind of what I thought might work. It's based on lots and lots of experimentation and lots of research. I'd like to give you some examples of changes that have been made in two other domains and then think about the implications that might have for latent fingerprint analysis.

So I'll go back to my aviation example. One of the things that's being changed now, after 20-plus years of experience with autopilots--that is, those systems that help you fly the plane and descend from one altitude to another--is the displays. They've now recognized that the display itself is not providing sufficient information for the pilot to understand what the system is doing. So they are physically changing devices. However, there are still lots of existing devices out there--it's very expensive to retrofit aircraft--so we will have those old systems onboard for quite some time. So the other thing that's changing is training. We're developing training that explains those key features of the system and the places where we see breakdowns on the part of the pilots, places where they don't understand what's going on.

Another lesson learned from human factors is to develop procedures that encapsulate the best practices. A lot of times pilots are taught, again, by procedures and given what the instructor pilot thinks is a good thing to do. They haven't necessarily always gone out to find out what's the best way to do this. And so they have been working over the last 20, 30 years to encapsulate those best practices, put them into procedures, and I will note, as, again, Melissa said, errors still occur. Errors occur in every domain. But the consequential errors, the ones that lead to negative consequences, have been reduced by following these procedures because they have catch points in them. They help you figure out where you might have made a mistake. The other thing that they've used in the cockpit is checklists. They provide a memory aid when something happens. They allow you to follow that checklist and to know step by step what you should do next. And I've heard both Sullenberger and his co-pilot, Jeff Skiles, talk about the Hudson River incident. They followed their checklists. Those checklists helped them land that plane safely, even though it wasn't intended to land in the river initially. It was the safest thing they could have done under the circumstances. Following checklists.

In medicine, changing domains, I want to talk about a place where design made a difference. There was a heart valve to be implanted in an individual, and they provided it to the physician in a customized container with a cotton spacer between two pieces that weren't supposed to touch--if they touched before the valve was implanted, it could cause problems. So they packaged it with a cotton spacer. Unfortunately, there were a number of cases where a physician forgot to remove the cotton spacer after implanting the device, which led to massive clots associated with residue from the cotton. What did they do? They could have just told physicians, "Hey guys, pay attention. Take the cotton spacer out." But remember, humans are subject to error, and this is a high-risk, high-casualty environment. Better is what they did: a new design. They packaged it in basically a plastic bubble wrap, so that the pieces were kept apart in the container by the plastic. When you lifted it out, there was no residue that could be placed into the patient, so they took away the opportunity for that error to occur. So by changing the design, they reduced human error. Yes, it was a human who actually left that in there, but there were so many things going on that it's pointless to point the finger. It makes more sense to design out the opportunity to make a mistake.

Now, one of the things that's critical in all of these domains, and has been a critical piece of their infrastructure, is to be able to identify the errors that are being made. If you don't know what errors are happening, you have no hope of knowing what needs to be changed. And it's difficult, because, as Melissa said, there are fears of reprisal: if I report that I made a mistake, I'm going to lose my job, or I'm going to be demoted, or something is going to happen. But they have addressed this in both the aviation and medical domains. In aviation, there's the Aviation Safety Reporting System (ASRS), and in the medical domain, there's the Medical Product Safety Network (MedSun). Let me tell you a little bit about each of them.

ASRS receives, processes and analyzes voluntarily submitted reports of safety violations with the objective of improving safety. And I should note that it is voluntary, although there are a few things that are mandatory to report. It was developed in the mid-70s, and through December of 2009 they had close to 900,000 reports of safety violations. From those they created 5,000 safety alerts, which went out to pilots and airlines everywhere, warning them about things that might happen based on things that almost happened. And over 60 research studies have been done.

MedSun is newer. It started only in 2002, and their goal is to identify, understand and solve problems associated with using medical devices. So far they have roughly 350 participating health care facilities, mostly hospitals at this point. But they have also issued reports, newsletters, educational materials when they see problems that are building.

So both of these systems have been quite effective. However, it's important to note that they have a few key characteristics that have allowed them to be successful: first, they are voluntary; second, they're confidential; and third, they are non-punitive. So let me talk about each of those. Some reporting categories are mandatory--if there is a crash, you need to report it; in MedSun, if there's a death, you do need to report it--but there are lots of things that come close: Almost crashed. Saw that plane out my window. You know, almost left something in the patient, but did this and found that I was able to recover that piece of appliance from the patient. So it's important to focus on those non-mandatory incidents. You don't want to wait until something actually happens, because it turns out that a lot of times, things almost happen to a lot of people, and then it happens to the next one. So you want to know about the things that are leading people to make mistakes.

Second, it's important that it be confidential. People are worried about their jobs. They are worried about things happening. So it's important that the identity of the reporter not be connected to the report. In ASRS, the first thing that happens with the report is it goes to a legal analyst who looks at the report to determine that it's not a negligent action. If in fact something is reported that is negligence, then you are not covered. However, for all other cases, they go on, are then stripped of their identity and go into the system. And it's important to understand if I'm flying in an aircraft, and I'm a passenger, and I see a violation, I can report that. And if the pilot hasn't also reported it, then they can get in trouble. So pilots are incentivized to report before someone else on the flight deck or someone else in the aircraft reports it. In MedSun, it goes to an analytic team. They actually do know, for a short period of time, who made the report so they can actually improve the quality of the report or clarify things that are not clear. As soon as they've gotten past that point, again, they strip out all the identifying information.
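
[Editor's note: a minimal Python sketch of the two-stage intake flow both systems share, as described above -- a brief triage stage while the reporter is still known (legal screening in ASRS, clarifying questions in MedSun), followed by permanent removal of identifying information. All field names are hypothetical.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IncidentReport:
    narrative: str
    reporter_name: Optional[str]     # known only during triage
    reporter_contact: Optional[str]

def triage(report: IncidentReport) -> bool:
    """Screen the report while identity is still available (e.g., rule
    out negligence in ASRS, or ask the reporter follow-up questions)."""
    return len(report.narrative.strip()) > 0

def deidentify(report: IncidentReport) -> IncidentReport:
    """Strip identity so the stored report cannot be traced back."""
    return IncidentReport(report.narrative, None, None)

incoming = IncidentReport("Near miss during descent; guidance lost.",
                          "J. Pilot", "j.pilot@example.com")
if triage(incoming):
    database_entry = deidentify(incoming)
    print(database_entry)  # narrative kept, identity gone
```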

And then finally it's important that it be non-punitive. Focus needs to be on improving the system, not punishing the person who reported it. And this ties to this notion of "just culture," which is just coming up in both aviation and in medicine. Sometimes you have a choice of doing something bad or something worse, and both of them are technically against the rules. But you may decide that rather than following the letter of the law, it makes more sense to do something that will be beneficial under those circumstances. And that is what "just culture" is about: understanding that the person may have taken the best course of action available to them, even though, by letter of the law, it may not have been the right thing to have done; and to not penalize people for being, you know, people and using their brain to select an alternative that makes more sense under the circumstances. So it's important to recognize and track these errors, right? A mistake, an accident, a near miss--all of those are, in some sense, error. But unless we track it, we don't know how big the problem is. So we need to first track in order to quantify the magnitude of the problem. And it's only then that we can address the mechanisms for improving performance.

So how is this relevant to latent fingerprint analysis? We now know that the identification of a match versus no match is influenced by context. It might be influenced by information provided by the investigators, it might be influenced by the source of the potential matches. Did this information come from people who have some reason to be connected to the crime? Or did it come from [unclear]? And what's the history of the prints? Is it a print that's already been identified by another analyst as a match or a no match? All of these things cannot help but influence the investigator, even with the best of intentions. So what implications does this have? Well, maybe once we quantify where we're seeing bias and error we can think about changing training. We might think about changing procedures. We might change the way that we measure and share error data. I don't really know. I'm not an analyst, and I'm not familiar with all the relevant aspects that could lead to improvement. So I'm not here to give you an answer. However, I hope that just as you can no longer look at this picture without seeing the cow, you will no longer think about latent fingerprint analysis without considering the contextual factors that influence the analyst, regardless of his or her intention to remain unbiased. Humans cannot function without bringing context into their decisions, and we need to ensure the context plays the appropriate role. Thank you.

[Applause]

Taylor: I just want to give you a brief overview of process mapping and why it's an important activity for people to undertake. So why is it important to document a process? Understanding the steps in a process--their order and dependencies, who's responsible, how long they take and other key pieces of information--helps an organization raise the visibility of process issues. It also provides a baseline from which to measure productivity and improvement. And it captures knowledge so that it can be used to train others. This enables an organization to reduce the hidden factory and document how the work is really done. The hidden factory is like, "Oh, we don't know what those people are doing; we just know that they go in there and ten minutes later they come out with a report." Everyone within the system should have a clear understanding of what the other pieces of the puzzle are doing. It also eliminates working by folklore: "Oh, this is how they've done it for 20 years, so this is why I'm doing it." The process map allows you to have a clear understanding of why people are doing what they're doing, and at what point they're doing it. It also helps to make a cultural shift from just focusing on who made the error to asking, "What allowed that error to occur?" So when you're looking at the system view, you can look through and say, "Ok, maybe it's not this individual person; maybe the fact that it takes them three days to get information from someone else could be leading to that happening."

So this is an example--it's really hard to see, but Melissa will provide a larger picture of each one of these subsets--of the process map that the working group created. A process map is a workflow diagram: it creates a clear understanding of the processes and shows how the work gets done. It's a great visual representation of the work process, and it describes the sequence of the processing when it's important to know what happens first and what happens next.

So when you're constructing a process map, the first thing to do is determine the boundaries. You want to determine the start and stop points for the flow of your process map--where does the process begin, and where does the process end--so that you have a clear scope for what it is that you're trying to map. The second step is to list the steps--just write down all of the process steps as they exist now. And the rule of thumb is to pretend that you're the evidence. You know, we went through and said, "Ok, if I was a latent print, where would I go? What next? What questions would be asked in order to get to a decision?" And then, some other advice about how to list the steps: use a verb to start a task description. The flowchart can either show just enough information to understand the general process flow or detail every discrete action and decision point. So you have a choice: your process map could be a high-level process map, or you could get down into the nitty-gritty and ask, "What are the questions that the examiner is asking themselves? And if they answer yes, what do they do? If they answer no, what do they do?" So you can adjust the complexity of the process map as well. But at a minimum, you should record the process steps, decision steps and any transportation methods that are involved in the process. The third step is to sequence the steps: now that you have all of the steps needed to conduct a comparison, what is the correct order? The fourth step is to draw the appropriate symbols. And then step five is to check for completeness; there's a sketch of that check below. When the latent print working group was checking for completeness, we really just wanted to make sure that there was a pathway to each one of the three decisions--a pathway that would lead you to identification, a pathway that would lead you to exclusion, and a pathway that would lead you to inconclusive--so that you make sure you are accounting for all the variability in the decision-making process. And then the last step was to finalize the process map and really just make sure that it was representative of the way in which the latent print examination community does its work. Of course there's a lot of variability from agency to agency, but we worked hard to create a generalized workflow so that it could provide a good baseline for folks who didn't have a good understanding of the latent print process.
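
[Editor's note: as an illustration of that completeness check, here is a minimal Python sketch that models a process map as a directed graph and confirms a pathway exists to each of the three decisions. The node names are invented placeholders, not the working group's actual map.]

```python
from collections import deque

# A simplified, hypothetical process map: each node maps to the nodes
# it can flow into.
process_map = {
    "receive latent": ["analyze latent"],
    "analyze latent": ["sufficient?"],
    "sufficient?": ["analyze known", "stop (insufficient)"],
    "analyze known": ["compare"],
    "compare": ["evaluate"],
    "evaluate": ["identification", "exclusion", "inconclusive"],
}

def reachable(graph, start):
    """Breadth-first search: every node reachable from the start."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Step five: verify a pathway exists to each of the three decisions.
decisions = {"identification", "exclusion", "inconclusive"}
missing = decisions - reachable(process_map, "receive latent")
print("complete" if not missing else f"no pathway to: {missing}")
```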

So with that, I will turn it over to Melissa to present a little more on the process map.

Melissa Gische: As Melissa said, we put this process map together for the working group not as a recommendation for how things should be done, but as a description of how we believed things were currently being done, so that we could identify the different decision-making points in the process and see where there might be vulnerabilities. This process map was then given to each of the subcommittees within the larger committee so that they could address those issues, whether they were talking about quality assurance measures, training, new technology, interpretation, whatever the case may be. Now, yes, there's a lot of stuff going on here, but I wanted to start by covering what happens when the examiner actually has a latent print in front of them. We didn't focus on the processing or the development needed to get to that point. We recognize that, yes, things can go wrong during that time, but that was not the focus of this particular group. So if we start in this orange section here, this is the analysis section. Essentially, that top part in purple represents whoever is working with the evidence, whether it's the crime scene technician or the actual examiner back at the lab. And it's not until we get to this orange part that a latent print is set in front of that examiner. And now what is the process, what are the decisions that that examiner has to make? Unlike some other disciplines, the human is the instrument in latent print analysis, so the human factors always have to be accounted for.

So essentially what happens first in the analysis is the examiner gets that print and they're going to be looking at all of the information in that unknown print--looking at the print itself, the ridge detail, which is typically divided into three levels of detail: the overall ridge flow, the specific ridge path, and even minute details such as the locations of pores. Now, given that we only have about 20 minutes or so to discuss this stuff, I'm not going to go into too much detail. The examiner is also going to look at other things that may influence the appearance of those ridge details. And then they're going to ask themselves, "Do I have enough information here to move on to the comparison?" But before we get there, here's just an example of some of the types of information that the examiner is going to look at when analyzing the latent print. They're going to be looking at not only the overall ridge flow but also the specific ridge path. So are there ending ridges, dividing ridges or dots--commonly referred to as characteristics or points? And what is their direction, their type and their spatial relationship to all of the other characteristics in the print? The examiner is also going to take into account some of those factors that may influence the appearance of those ridges, such as the substrate, the surface that the print is left on. Examiners in the room know that when a print is developed off of a curved surface, there are some differences from a print from that same source developed off of a smooth or flat surface--same thing depending on whether it's a piece of paper versus a table versus a textured glass bottle. As well as the matrix, whatever substance is coating the ridges that then gets deposited on the item: blood prints are going to look very different than prints left in sweat. And examiners are trained to recognize these different factors and then use that to properly analyze the print that they're seeing. And then the development medium. There are probably about 30 or so development processes available to the examiner to apply to different types of evidence--you've got superglue, powder, ninhydrin, various other chemicals--and all these different development methods are going to appear differently. If I put up a ninhydrin print and a superglue print, the examiners in the room will immediately be able to tell which was developed with which process. And then finally we also look at the pressure or movement that may have occurred when the print was deposited. Now granted, while we're not there when the print is left behind, there are typically some clues or indications, just from studying what the ridges look like, as to what kind of pressure was applied, so that we know what kind of distortion we may or may not expect to see when we're also looking at that known print. The examiners in the room know that these four prints were taken from the same source, even though, from afar, they look like very different prints. It was actually just the same finger left four successive times with varying pressure.

Ok, so during the analysis, the examiner is taking all of those factors into account and looking at the latent first and then the known. And it's typically at this point that the examiner is going to decide, "Do I have enough information here? Is it of sufficient quality and quantity? Do I have enough reliable information for me to move on and conduct a comparison?" If the answer is yes, then the examiner will move on to the known print and do a similar but separate analysis of it as well. Take the ten-print card: if you work in the government or have ever worked with kids, you've probably had one of these taken of your own fingerprints. And we're going to do a very quick analysis here--was the ten-print card recorded in ink, or was it done digitally with one of the live scan systems available at a lot of police departments? Were all ten fingers properly recorded? Is there smudging? And so on. Granted, this probably takes maybe 5 or 10 seconds, but we're still doing an analysis of the known prints as well. And as you can see from this chart, that orange section is the largest section--in fact, it's probably just as big, if not bigger, than the comparison and evaluation stages put together. Because so much information is gathered during this phase, it is one of the critical phases of the comparison process.

And so some of the recommendations being considered by the working group--and I think this is the first time we've ever discussed where the working group is headed, and again it's not final, so these are just being considered--are recommendations dealing with documentation to make this analysis process more transparent, so that another qualified examiner can assess both the accuracy and validity of the primary examiner's conclusion. In addition, the working group is considering recommending more research into developing analysis metrics. There has been some research trying to quantify, or allow us to measure, the quality of the print, but to my knowledge none of these things is actually being put into practice just yet, actually being applied to casework. So there is research being done, but we certainly would like more, so that we can take some of that subjectiveness, some of that human element, out of the process and allow us to put a number on it, to quantify it.
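
[Editor's note: here is a hypothetical Python sketch of what a documentation record along those lines might capture -- the observed ridge details plus the factors that shaped the print's appearance, so another qualified examiner can later assess the conclusion. All names and fields are invented for illustration, not the working group's proposal.]

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RidgeDetail:
    kind: str          # e.g., "ending ridge", "dividing ridge", "dot"
    direction: str
    neighbors: List[str] = field(default_factory=list)  # spatial relations

@dataclass
class AnalysisRecord:
    substrate: str           # e.g., curved bottle vs. flat paper
    matrix: str              # e.g., sweat vs. blood
    development_medium: str  # e.g., ninhydrin, superglue, powder
    distortion_notes: str    # pressure / movement indications
    details: List[RidgeDetail] = field(default_factory=list)

record = AnalysisRecord(
    substrate="textured glass bottle",
    matrix="sweat",
    development_medium="superglue",
    distortion_notes="lateral pressure toward lower left",
    details=[RidgeDetail("dividing ridge", "up-left", ["dot 2 mm NE"])],
)
print(len(record.details), "detail(s) documented")
```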

So then, after the examiner does this analysis of both the latent and the known print, they're going to move into the comparison and evaluation stages. Now, just quickly to go over this section here--ok, so a comparison. The first thing the examiner is going to do is actually try to exclude the prints. Many, many prints that we look at can be excluded based on level one detail, and by level one detail I mean the overall ridge flow or the general pattern type. So if you look at our latent print here--I know it's probably difficult--you can see that green outline. That's outlining just the general ridge flow of the print. It flows in a circular, tight manner with some ridges pointing off down to the left. Now if we pull up an enlargement of the right thumb, the finger in the number one block of the fingerprint card, we can see that while those ridges also flow in a circular, tight manner, it's actually still quite a different overall ridge flow or pattern than the latent print. So an examiner, when asking themselves, "Do I have a sufficient amount of information here at level one to move on to the evaluation and say that these two prints came from different sources?", would be able to answer, "Yes, in fact, there is a sufficient amount of information that disagrees here, so I can reach an exclusion decision." And this actually happens quite frequently and quite quickly in a number of comparisons. So then the examiner would move on and look at the next print on the card--in this case, if we enlarge the number two, the right index finger on the known card, we see that the overall ridge flow is similar. While I couldn't make an identification at this point, I certainly can't exclude at this point either. And so I would have to move on and look at additional ridge detail.

So the process map then tells me that I have to look for a target group that exists within tolerance--a target group just being a cluster of ridge details that's easy for the examiner to recognize, which they're going to use in the comparison to see if it's present in the known print. And "within tolerance" means this: because the skin is pliable, every latent print is distorted in some manner, and every impression, even from the same finger, is going to appear slightly different. So when we say something is within tolerance, we mean it is within the acceptable amount of movement or change that we would expect to see based on the pliability of the skin or the development method and so on. So then--and it doesn't matter where in the print I start, as long as I start in an area where the comparable area is present in the known print as well--I would look to see if I could find this target group in the latent print and subsequently in the known. And if I do find it, then I would continue comparing ridges in sequence to determine whether there is a sufficient amount of information either in agreement or in disagreement for me to reach a conclusion. Now, I just put these up; obviously it's going to take a little bit longer for the actual examiner doing the process. But they're going to be looking at all of that information that's there. And in this case, I think we can conclude that, yes, there's a sufficient amount of information in agreement that an examiner would reach an identification decision.
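
[Editor's note: a deliberately simplified Python sketch of the comparison sequence just described -- level one exclusion first, then the target group within tolerance, then comparing ridges in sequence. Every function, field and threshold here is a toy stand-in for expert judgment, not an actual examination method.]

```python
def level_one_agrees(latent, known):
    # Overall ridge flow / general pattern type. Many prints are
    # excluded quickly at this level alone.
    return latent["pattern"] == known["pattern"]

def target_group_found(latent, known):
    # "Within tolerance": allowing for the distortion expected from
    # pliable skin, the substrate, the matrix and the development method.
    return latent["target_group"] in known["clusters"]

def compare(latent, known):
    if not level_one_agrees(latent, known):
        return "exclusion"            # frequent, and often very quick
    if not target_group_found(latent, known):
        return "inconclusive"
    # Continue comparing ridges in sequence. A bare count of agreeing
    # details stands in for the examiner's judgment of sufficiency; the
    # transcript notes the literature does NOT support fixed minimum
    # point counts, so treat this threshold as purely a toy.
    agreeing = len(latent["details"] & known["details"])
    return "identification" if agreeing >= 8 else "inconclusive"

latent = {"pattern": "whorl", "target_group": "delta cluster",
          "details": set(range(10))}
known = {"pattern": "whorl", "clusters": {"delta cluster"},
         "details": set(range(12))}
print(compare(latent, known))  # -> identification (in this toy example)
```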

Ok, so that's comparison and evaluation. They kind of go hand in hand. The comparison is essentially the side-by-side observation of the two prints after having gathered all that information during analysis, and the evaluation is the part where the examiner says, "Ok, do I have enough information here that disagrees, so that I would reach an exclusion? Do I have enough information here that agrees, so that I would reach an identification decision? Or do I not have enough to go either way, in which case I would have to reach an inconclusive decision?"

Now, some of the recommendations being considered in dealing with these overall conclusions deal specifically with individualization, and this is probably one of the hot topics in the field these days. It's certainly something that is being considered very strongly. One of the recommendations being considered is to do away with the individualization decision--that is, to say that examiners should refrain from making a source attribution to the exclusion of all other individuals in the world, at least from stating that as an absolute scientific fact. And, you know, the working group is considering: what basis do examiners have for making this type of conclusion? What empirical data is out there? What other studies have been done? That sort of thing--looking to see whether or not an examiner can reach this conclusion.

Other considerations along these same lines are that examiners really need to have more training in probability theory. It wasn't until recently that I realized that I'm essentially doing subjective probabilities in my head when I'm looking at these latent prints and reaching a conclusion. You don't have to have numbers assigned to be able to work with probabilities. And so there needs to be a significant amount of training for latent print examiners in order to understand a lot of these probabilistic concepts.
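
[Editor's note: a worked toy illustration of the subjective probabilities described above, with invented numbers. A likelihood ratio weighs how probable the observed agreement is if the prints share a source against how probable it is under a different source; examiners do not actually assign these values, which is the passage's point.]

```python
# Hypothetical probabilities, purely for illustration.
p_features_if_same_source = 0.95
p_features_if_different_source = 0.0001  # rarity of the arrangement

likelihood_ratio = p_features_if_same_source / p_features_if_different_source
print(f"LR = {likelihood_ratio:,.0f}")  # 9,500: strong support for same source

# An examiner's "I would not expect to see this arrangement repeated
# in another source" is, informally, a judgment that this ratio is
# very large -- even when no explicit numbers are ever assigned.
```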

So those are some of the considerations being discussed when it comes to the ultimate conclusion. Now, after an examiner goes through their own independent analysis, comparison and evaluation, the prints would then be given to another examiner for the verification part of the process. Typically in the field, all identification decisions are verified, and exclusion and inconclusive decisions may be verified; that's currently an agency-by-agency policy. Verifications may also be conducted blindly, and by blindly I mean that the verifying examiner does not know the conclusion that the primary examiner reached. So, unfortunately, this is being done differently throughout the country, but there are still some recommendations dealing with the ultimate conclusion.

And some other areas where the working group is considering recommendations deal with more research into the accuracy and variability of examiner conclusions. There certainly is some research out there. Thankfully, the "black box" study finally got published. Some of the work that Glenn Langenburg has done out in Minnesota helps address some of these areas. But we can certainly always use more research into how accurate examiners are, and whether they are consistently reaching the same conclusions or different conclusions. And so we certainly are looking to recommend more research in that area. One of the issues that a lot of the researchers are having in this area is getting access to a database of prints. They've got these great projects thought up, but actually trying to put them into play has been difficult, because either creating a database of a large enough size to be statistically significant or getting access to an existing database has been a challenge for a lot of researchers. And so it's very probable that one of the recommendations from the working group is going to be for either the development of or access to some of these larger databases, so that this very important research can be conducted.

And then finally, some of the other areas where the working group is considering recommendations deal with training. Right now there is no nationwide standard for certification, or even for competency, for that matter. And so the working group is considering recommending that there be a comprehensive testing program that includes competency testing and certification programs as well as proficiency testing. And along those lines goes continuing education. These aren't new or novel ideas. And then, just as Dr. Boehm-Davis said before, having a culture or a system to identify errors. When I went through training 11 or so years ago, an error meant you were looking for another job. In order to change that, we need to change the culture surrounding what happens if there is a fingerprint error. And so we need systems to identify errors and figure out the cause--not to point fingers and assign blame to somebody, but to figure out, ok, what maybe went wrong in the process? Where can we improve the overall process or system so that we can prevent these vulnerabilities from occurring again in the future? And along with that, we also need that change in culture. I personally am seeing a change in the culture when it comes to talking about error, but it's been a slow process. And so these are just some of the recommendations that the working group is considering when it comes to the comparison process that latent print examiners go through. And again, that was just a quick highlight of the [unclear] process, you know, comparisons and so on.

[Applause]

Taylor: So now we'll open the floor up for questions.

[Inaudible]

Gische: Ultimately it's going to be the human error, because even if technology introduces a problem of some kind, the ultimate conclusion or comparison is still going to be done by the examiner, so even when computers are involved, whether it's for digital processing or the automated fingerprint systems where we're searching the prints, even when those computer systems are involved, the human examiner is still going behind and looking and doing the actual comparison and reaching the decision. So I think the human error would be the most dramatic. [audience member asks follow-up question] No, the examiner is allowed to reach an inconclusive decision. [audience member responds] I may not be understanding your question… [audience member responds] I guess I don't--even in that situation, where is the technology affecting the exam? [audience member responds]

[Inaudible]

Boehm-Davis: There's a part of me that doesn't know the answer, because I'm not working in this domain, but my understanding is that the first step, which is what this working group is doing, is looking at the process. And so at each step of the process where there is a decision to be made, the question is, "Are there contextual factors that might influence an examiner one way or the other?" So to go back to your example of technology: do I know whether the technology is likely to have distorted it or not? Do I know enough about how it did? Maybe it made it more likely to look good versus more likely to be inconclusive. Do I know what those factors might be at each step of the process? So the first step, which is what the working group is doing, is identifying the process. Then it's to look at every step and say, "How might context influence this?" And then once you've done that, you want to go back and say, "What could we do to reduce the influence of context in this circumstance?" Now, it's important to have with that some notion of where most of the errors are being made or where there are difficulties. So let's say we have a print, and we know this is an actual print. Now I distort it, and I distort it in various ways. At what point is the examiner wrong by saying it's inconclusive or that it is a match? Because at some point, if I distort it enough, perhaps they ought to say it's inconclusive even though I know it came from this person. So where's that line? It gets a little bit difficult to know exactly where the cutoff points are. Does that make sense?

[Inaudible]

Gische: Yeah, absolutely. Those are additional considerations. I highlighted maybe 10 or so things that the working group is considering. I think we're down to 40 or 50 overall recommendations in the report, which is down probably from about 100 at some point. And so there are a number of things addressed in the report that I didn't talk about specifically here--national standards, training (there's an entire chapter devoted to training), and an entire chapter devoted to quality assurance measures. So those things are certainly addressed in the report.

[Inaudible]

Taylor: Yes. We actually have it. It's going to be posted online. I presented it and we've handed it out at several forensic conferences, so yeah, it's available, but it will also be in the report.

Audience member: If you eliminate the source attribution, can you give us some idea of what the probability statements sound like? I'm a prosecutor so that interests me a lot.

Gische: No, that's a great question, and this is something that the working group struggled with, because it's quite easy to say don't do this, don't do that, without providing solutions to the problem. I mean, even right now, I don't testify to absolute-fact source attributions. I say it's my opinion, based on everything I know about fingerprints--and I can discuss that and so on--that I wouldn't expect to see this arrangement repeated in another source somewhere out there. I cannot rule it out. I haven't compared all the prints that exist or ever will exist. But based on everything I know, I wouldn't expect to see it. And even just by changing that subtle language, it takes out that "it is this person, to the exclusion of all others that have ever existed in the universe." And so at least for me, that's the direction--I've already made that change in my own testimony.

[Inaudible]

Gische: Yeah, I don't think that the actual ability to help law enforcement solve crimes, as you say, is going to be affected. The examinations are going to remain the same. I do, however, think that there could be significant impact to the smaller identification units that exist throughout the country, those one- and two-man units. How are they going to do identifications? How are they going to do blind verifications? They don't have time to have a person devoted solely to quality assurance measures. And so I think that there is the possibility for there to be a significant impact for them, whether that means they combine resources, work with other agencies in their area, work with their state labs. I think that has yet to be determined.

[Inaudible]

Gische: We didn't conduct any actual application study like what you're referring to. [audience member responds] I think I understand your clarification, and that's one of the things that the working group did consider. When the working group considered saying something to this effect, it wasn't about removing all contextual information. The working group specifically talks about removing what they're calling domain-irrelevant information. So there is information regarding a case or regarding a specific print that the examiner does need in order to do a correct interpretation--what item the print was developed on, what they believe the substance was (was it blood versus something else?). And then there's information that the examiner doesn't necessarily need to know: is this person a suspect or the victim? It doesn't matter to me--if you're saying compare this person, that's all I need to know, and that part wouldn't affect my examination. I don't need to know that this person confessed. That doesn't affect the latent print. So when we're talking about removing those contextual influences, it's those that don't affect the analysis of the image itself.

Audience member: Ms. Gische, for laboratory units that only verify matches, what are the dangers or disadvantages compared to other units that mix in inconclusives or perhaps exclusions along with [unclear] support or independent verifications?

Gische: I don't think we know the answer to that yet. There is some research being done. There is the research published by Langenburg back in 2009, where they inserted false exclusions into their verification process, and none of the false exclusions were caught. So I think there needs to be more research in that area. Now, what we are starting to see is not necessarily the effect of verification on exclusions and inconclusives, but more so the effects of blind verification on all three decision types, and I think that we're going to see more results leading to a push for blind verification of those types of exams. Whereas with regular verification, where the verifier knows the results, we don't know the impact that that's going to have yet.

[Inaudible]

Gische: Yeah, and I think that your question speaks directly to developing that tracking system that we spoke of--not only to identify errors (and in this case maybe it's not even an error; maybe it's just a difference of opinion) but to track why that's happening. We need a system in place to track not only when that happens but also the cause: maybe it is always a less experienced examiner, or somebody who didn't have the same training as the other examiner. Then you can identify those areas where maybe you need better training on distorted prints or whatever the case may be. So I think that speaks directly to that tracking system of errors and differences of opinion. As far as what a manager should do now: labs that are accredited have a documented conflict resolution policy, and that outlines the steps--if there is a difference of opinion, this is what needs to happen. At least I know that in my lab there are six or so different levels, and it would be done the same way each time there is a difference of opinion.

Audience member: [unclear] I'm going to ask Melissa to comment on this, because my understanding of what you said is that "just culture" means the person is choosing the right path for the situation rather than following the strict letter of the law. For an accredited lab, especially in latent fingerprints now, [unclear], I think that just culture is reasonable, because we're scientists, and we should be using what we feel is the best solution to the scientific problem. How would that fit in with accreditation, and how do you feel the general latent print community would feel about adopting just culture into the process?

Gische: I think having the latent fingerprint community adopt anything that they're not used to is a difficult thing to suggest. However, maybe it's the direction we need to go. Specifically something where I could see that coming into play would be potentially in one of these conflict resolution settings, where maybe one examiner says, "I think that's absolutely an exclusion," and another examiner says, "I'm not sure; I think it might be inconclusive." And so maybe that examiner who said, "Exclusion, absolutely," goes with a more conservative decision of inconclusive as part of this just culture. It's not what they truly thought, but under the circumstances it's the right decision to make for that print. Just off the top of my head. Did that answer…?

[Inaudible]

Gische: Well, even now some of our policies allow for examiner discretion. You know, we certainly have guidelines that say when you get a piece of paper in as evidence, these are the processes that you go through. Now, depending upon the case, there might be certain situations where I skip one of those processes. And as long as I adequately document it and the reasons why I did it, I think it would be sufficient to go in that direction.

Same audience member: Is this something that's going to be recommended as part of the document that's coming out, or is that--

Taylor: We don't have very much about "just culture," but I did ask Dr. Boehm-Davis to mention it because it is something that other communities have adopted and it's something that we think could have potential in the forensic domain.

Boehm-Davis: And I mean, there is going to be subjective judgment in all of these things. So, air traffic control says you must have certain spacing between aircraft, and that's for safety reasons. And it may be that there's weather, or other sorts of things, and for whatever reason you can't reach the air traffic controller quickly enough to move to the side to avoid that weather or to avoid the bird strike that you see coming. And so you violate the rule. You've moved close to this other aircraft to avoid the birds that are coming right at you, but you did it because, in your judgment, there was insufficient time to do it in the way that's prescribed, which is to call ATC, ask for clearance, get the clearance and then move. So there's, in some sense, some objective way that you can look at what's happened and make a judgment after the fact about whether this was a reasonable departure under the circumstances.

Taylor: And again, like Melissa said, we'd have to start tracking near-misses and errors in order to be able to have a dialogue about the reasonableness of peoples' decision-making.

[Inaudible]

Gische: "Will latent fingerprints ever have that? Is the working group recommending it?" The working group didn't go down that road. They're certainly recommending more research into the basis that examiners are giving for their conclusions, but as far as requiring a minimum point standard, there's really nothing in the scientific literature that supports having a minimum number of points. And the working group didn't address that issue specifically. [response] On where the human factors, where those decision-making points may affect the process.

[Inaudible]

Gische: Yeah, I know of research being done. In fact, keep your eyes open for the Journal of the Royal Statistical Society; it should be published in the spring. Some research done by Cedric Neumann speaks to just that. And in fact, that research shows why it's not appropriate to try to assign a minimum number of points, because there are some five-point prints, if you will, that are rarer--that have an arrangement more rare--than some six-point or seven-point prints. So there is research being done in that area, but what it is not currently supporting is having this minimum number.

[Inaudible]

Gische: Well, that's what the examiner is doing--when they are looking at a print, they're assessing both the quality and quantity of information. And part of the quality of that information is not just how many points do I have; essentially, the examiner has to assess, "Would I expect to see this arrangement repeated in another source?" And it's only when the examiner can answer no--with this amount of information, I think this is so rare as to come from only this one source--that they reach an identification. So the examiner is assessing the rarity or the selectivity of the features they are seeing.
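
[Editor's note: a toy Python illustration, with invented numbers, of the point made above and in the Neumann line of research -- the weight of a configuration depends on its rarity, not just the count of features. The naive independence assumption here is purely for illustration; real models are far more careful about dependence and distortion.]

```python
from math import prod

# Hypothetical per-feature frequencies in some reference population.
five_rare_features = [0.01, 0.02, 0.01, 0.03, 0.02]
six_common_features = [0.30, 0.25, 0.30, 0.20, 0.25, 0.30]

# Multiplying frequencies assumes independence -- a toy simplification.
p_five = prod(five_rare_features)
p_six = prod(six_common_features)

# A five-feature arrangement of rare details can be far more selective
# than a six-feature arrangement of common ones.
print(f"five rare features:  {p_five:.2e}")   # ~1.2e-09
print(f"six common features: {p_six:.2e}")    # ~3.4e-04 (less selective)
```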

[Inaudible]

Taylor: Yes. Right. Any other questions? Ok, well thank you for attending, and we'll be up here for a few more minutes if you have any questions for us.

[Applause]

