Computing likelihood ratios in the pattern comparison disciplines

When you make an identification, you know that not all identification decisions are created equal. We also don’t know whether the term ‘identification’ is calibrated to the strength of the evidence. In this paper we re-analyze data from our own lab and from the FBI/Noblis black box study to convert the distribution of scores from all the examiners who completed a given comparison into a numerical score called the likelihood ratio. This number reflects the strength of the evidence provided by the full set of examiners who completed a particular comparison.
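
As a rough illustration of the likelihood-ratio idea (not the paper’s actual procedure or data), the evidential strength of a decision can be expressed as how much more often examiners reach that decision on mated pairs than on non-mated pairs. The decision categories, counts, and smoothing below are all assumptions:

```python
# Illustrative sketch only: the decision categories, counts, and smoothing
# are assumptions for demonstration, not the paper's actual data or procedure.
from collections import Counter

def likelihood_ratio(decision, mated_decisions, nonmated_decisions, alpha=1.0):
    """Estimate LR = P(decision | same source) / P(decision | different source)
    from how often that decision was reached on mated vs. non-mated pairs.
    Laplace smoothing (alpha) keeps rare decisions from giving 0 or infinity."""
    categories = set(mated_decisions) | set(nonmated_decisions) | {decision}
    mated, nonmated = Counter(mated_decisions), Counter(nonmated_decisions)
    p_same = (mated[decision] + alpha) / (len(mated_decisions) + alpha * len(categories))
    p_diff = (nonmated[decision] + alpha) / (len(nonmated_decisions) + alpha * len(categories))
    return p_same / p_diff

# Hypothetical examiner decisions on comparisons of known ground truth
mated = ["ID"] * 18 + ["Inconclusive"] * 4 + ["Exclusion"] * 1
nonmated = ["ID"] * 1 + ["Inconclusive"] * 7 + ["Exclusion"] * 20
print(likelihood_ratio("ID", mated, nonmated))  # LR > 1 favors the same-source hypothesis
```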

How does fatigue affect comparisons?

How does fatigue affect latent print comparisons? To answer this question, we collected eye-tracking data from five examiners when they were either fresh or tired. We found a number of interesting results, including:

  1. Examiners suffer from ‘decision fatigue’: when they are tired, they are more likely to reach an inconclusive decision.

  2. When examiners are tired, their search is less efficient and less organized.

  3. Fatigue seems to reduce the capacity of visual working memory. Rather than putting whole target groups in memory, examiners put only one or two features in memory to compare.

Optimizing Decision Thresholds

Forensic scientists make decisions about evidence and communicate that evidence to a consumer such as a detective or a jury. Somehow, the latent print community has coalesced around a decision threshold that, according to error rate studies, produces very few erroneous identifications but a fair number of erroneous exclusions. Is that OK?

In this project, we set out to measure what examiners and the general public consider to be appropriate locations for the identification and exclusion decision thresholds. Normally these thresholds exist only in the mind of the examiner, but we can make them concrete by looking at the outcomes that follow when a particular set of thresholds is adopted.
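
A minimal sketch of that idea: a pair of thresholds on an evidence-strength axis implies a particular mix of outcomes. The Gaussian evidence distributions for mated and non-mated pairs and the threshold values below are illustrative assumptions, not measured quantities:

```python
# Illustrative sketch: the Gaussian evidence-strength distributions and the
# threshold values are assumptions chosen for demonstration only.
from statistics import NormalDist

def outcome_rates(id_threshold, excl_threshold,
                  mated=NormalDist(2.0, 1.0), nonmated=NormalDist(-2.0, 1.0)):
    """Outcome mix implied by a pair of decision thresholds on an evidence axis:
    scores above id_threshold -> identification, below excl_threshold -> exclusion,
    everything in between -> inconclusive."""
    return {
        "correct identifications":   1 - mated.cdf(id_threshold),
        "erroneous exclusions":      mated.cdf(excl_threshold),
        "inconclusives (mated)":     mated.cdf(id_threshold) - mated.cdf(excl_threshold),
        "erroneous identifications": 1 - nonmated.cdf(id_threshold),
        "correct exclusions":        nonmated.cdf(excl_threshold),
        "inconclusives (non-mated)": nonmated.cdf(id_threshold) - nonmated.cdf(excl_threshold),
    }

# A conservative identification threshold yields few erroneous identifications
# but more erroneous exclusions and inconclusive outcomes on mated pairs.
for outcome, rate in outcome_rates(id_threshold=3.0, excl_threshold=-1.0).items():
    print(f"{outcome}: {rate:.3f}")
```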

Varying the Size of the Conclusions Scale

How should conclusions be expressed?

When you conduct a comparison, you accumulate evidence in favor of one of two hypotheses:

  1. the two impressions came from the same source

  2. the two impressions came from different sources

However, if this accumulated evidence is expressed using only one of three conclusions, information gets lost. That’s because, unless you provide supporting information, a conclusion that just barely clears the threshold and one with overwhelming support are treated as the same. Likewise, there are probably some near-threshold comparisons that you called inconclusive but where you really wished you could have provided more information to the detective.
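
One way to see the information loss, assuming purely for illustration that the accumulated evidence could be summarized as a log likelihood ratio and that the scale has fixed cut points (both assumptions, not the project’s actual scale):

```python
# Illustrative sketch: the log-likelihood-ratio values and the cut points are
# made up to show how a three-step scale collapses different evidence strengths.
def report(log_lr, id_cut=4.0, excl_cut=-4.0):
    """Map a continuous evidence value onto a three-conclusion scale."""
    if log_lr >= id_cut:
        return "Identification"
    if log_lr <= excl_cut:
        return "Exclusion"
    return "Inconclusive"

# A barely-over-threshold comparison and an overwhelming one get the same label,
# and a near-threshold comparison is reported the same as a truly uninformative one.
for log_lr in (4.1, 12.0, 3.9, 0.0):
    print(f"log LR = {log_lr:5.1f} -> {report(log_lr)}")
```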

Do examiners know what to look for?

Do latent print examiners know what to look for? We all know that quantity and quality are important for latent print examinations, but what about specificity? I suspect that examiners don’t talk about specificity as much because it is hard to measure. Information theory tells us that if you are trying to individualize an object, the rarest features are the most diagnostic. If you think about faces, knowing that a suspect has two eyes isn’t helpful. However, knowing that he has a heart-shaped mole on his cheek would be very diagnostic, because that is a rare feature.
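
The rarity argument can be made concrete with self-information, -log2(p): the lower a feature’s population frequency p, the more bits it contributes toward individualization. The frequencies below are hypothetical, chosen only to illustrate the point:

```python
# Illustrative sketch: the feature frequencies below are hypothetical, chosen
# only to show that rarer features carry more information (more bits).
import math

def self_information(p):
    """Bits of information gained from observing a feature with population frequency p."""
    return -math.log2(p)

features = {
    "has two eyes": 0.999,                      # nearly universal -> ~0 bits
    "brown eyes": 0.55,                         # common -> modest information
    "heart-shaped mole on the cheek": 0.0005,   # rare -> highly diagnostic
}
for name, frequency in features.items():
    print(f"{name}: {self_information(frequency):.2f} bits")
```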

Don't search databases that are too big!

When you search against a database, how big is that database? Do you always search really big ones, or small local ones? If it depends, what does it depend on?

In this project, I worked with a mathematics undergraduate student at Indiana University to ask whether bigger is always better. In principle, bigger seems better because it increases the chances that your suspect will be in the database. However, as databases have grown, examiners are noticing that the number of close non-mated impressions they encounter is increasing.
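
A toy model of that trade-off, assuming a fixed chance per database entry of containing the true source and a fixed per-entry chance of producing a close non-mated candidate (both numbers invented for illustration, not measured rates):

```python
# Illustrative sketch: the database coverage and the per-entry chance of a
# close non-mate are assumed values, not measured rates.
def search_tradeoff(db_size, p_source_in_db_per_million=0.02, close_nonmate_rate=1e-7):
    """Rough trade-off as a database grows: the chance the true source is present
    rises, but so does the expected number of close non-mated candidates."""
    p_source_present = min(1.0, p_source_in_db_per_million * db_size / 1_000_000)
    expected_close_nonmates = close_nonmate_rate * db_size
    return p_source_present, expected_close_nonmates

for size in (100_000, 1_000_000, 10_000_000, 100_000_000):
    p_hit, n_close = search_tradeoff(size)
    print(f"N = {size:>11,}: P(source in DB) ~ {p_hit:.2f}, "
          f"expected close non-mates ~ {n_close:.2f}")
```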
