
The Entropy of a DNA profile

I’m often asked how much entropy there is in the DNA profiles used in forensic investigations. Specifically, is it more than 33 bits, i.e., can it uniquely identify individuals? The short answer is: yes in theory, but there are many caveats in practice, and false matches are fairly common.

To explain the details, let's start by looking at what is actually stored in a DNA profile. Your entire genome consists of roughly three billion base pairs, but for profiling purposes only a tiny portion of it is looked at: 13 locations, or loci (in the U.S. version, which I will focus on; the U.K. version uses 10 loci). Each of these loci yields a pair of integers that varies from person to person. You can see an example DNA profile on this page.

The degree of variation in the pairs of numbers (genotypes) at each locus has been empirically measured by many studies. Because the genotypes at different loci are statistically independent to a good approximation, we can calculate the total entropy by simply adding up the entropy at the individual loci. I analyzed (source code) the raw data on variation at each locus from a sample of U.S. Caucasians, and arrived at a figure of between 3.0 and 5.6 bits of entropy per locus, or about 54 bits of entropy for the whole 13-locus DNA profile. In addition, there is 1 sex-determining bit.
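The calculation itself is straightforward. Here's a minimal sketch of the idea in Python; the allele frequencies below are made up, whereas the real ones come from published per-population frequency tables:

```python
from math import log2

def genotype_entropy(allele_freqs):
    """Shannon entropy (bits) of the genotype at one locus, assuming
    Hardy-Weinberg equilibrium: P(aa) = p_a^2, P(ab) = 2 * p_a * p_b."""
    alleles = list(allele_freqs.items())
    h = 0.0
    for i, (a, pa) in enumerate(alleles):
        for b, pb in alleles[i:]:
            p = pa * pa if a == b else 2 * pa * pb
            if p > 0:
                h -= p * log2(p)
    return h

# Made-up allele frequencies for one locus:
locus = {9: 0.10, 10: 0.20, 11: 0.30, 12: 0.25, 13: 0.15}
print(genotype_entropy(locus))  # entropy in bits at this one locus
# Under independence, profile entropy = sum over the 13 loci (+1 sex bit).
```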

Since that number is well over 33 bits, with a high probability there is no one else who shares your DNA profile. However, there are many complications to this rosy picture:

Non-uniform genotype probabilities. The entropy calculation doesn’t quite tell the whole story, because some genotypes at each locus are much more common than others. If you happen to end up with a common genotype at all (or most) of the 13 loci, then there might be a significant chance that someone else in the world shares your DNA profile.
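To see this concretely, compare the entropy (an average) with the worst case: the probability of the single most common genotype at each locus, multiplied across loci. A toy calculation with made-up per-locus probabilities:

```python
# Entropy is an average; suppose the most common genotype at each
# locus has probability 0.15 (a made-up but plausible value).
most_common = [0.15] * 13
p = 1.0
for q in most_common:
    p *= q
print(p)        # probability of the most common full profile
print(7e9 * p)  # expected number of people on Earth sharing it: ~0.14
```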

Population structure. The calculation above assumes the Hardy-Weinberg equilibrium, which is only true if mating is random, among other things. In reality, due to the non-random population structure, there is a slight deviation from the theoretical value. This manifests in two ways: first, the allele frequencies for different population groups (ethnic groups) need to be calculated separately. Second, there is a deviation from the expected genotype frequencies even within population groups, which is more difficult to account for (a correction factor called “theta” is applied in forensic calculations).
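The homozygote adjustment is the simplest piece to show. Here's a sketch along the lines of the NRC II-style correction, where theta is a small inbreeding coefficient (values on the order of 0.01 to 0.03 are typical in forensic practice):

```python
def homozygote_freq(p, theta):
    # Substructure correction: inflate the homozygote genotype
    # frequency relative to the Hardy-Weinberg estimate p^2.
    return p * p + p * (1 - p) * theta

p = 0.2
print(p * p)                     # Hardy-Weinberg estimate: 0.04
print(homozygote_freq(p, 0.01))  # corrected upward: 0.0416
```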

Familial relationships. Since we share half of our DNA with each parent and sibling, there is a much higher chance of a profile match between close relatives than between unrelated individuals. Therefore DNA database matches often turn up a relative of the perpetrator even if the perpetrator is not in the database (especially with partial matches; see below).
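The effect is easy to quantify at a single locus using the standard identity-by-descent argument (my arithmetic, with hypothetical allele frequencies):

```python
def unrelated_match_prob(pi, pj):
    # Probability an unrelated person shares the heterozygous
    # genotype (A_i, A_j), under Hardy-Weinberg.
    return 2 * pi * pj

def sibling_match_prob(pi, pj):
    # Same, for a full sibling: with probability 1/4 siblings share
    # both parental alleles, 1/2 share one, 1/4 share none.
    return (1 + pi + pj + 2 * pi * pj) / 4

pi, pj = 0.1, 0.2  # hypothetical allele frequencies at one locus
print(unrelated_match_prob(pi, pj))  # 0.04
print(sibling_match_prob(pi, pj))    # 0.335 -- over 8x higher per locus
```

Compounded over 13 loci, that per-locus gap is why a relative is so much more likely to produce a (partial) match than a stranger.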

In recent years, law enforcement has sometimes adopted the strategy of turning this problem on its head and using these familial leads as starting points of investigation as a way to get to the true perpetrator. This is a controversial practice.

Each of the above factors results in an increase in the probability of a match between different individuals. But the effect is small; even after taking them into account, as long as we’re talking about the full 13-locus profile, most individuals do in fact have a unique DNA profile, albeit fewer than would be predicted by the simple entropy calculation.

Unfortunately, crime-scene sample collection is far from perfect, and profiles are often not extracted accurately from the physical samples due to limitations of technology and the quality of the sample. These inaccuracies in a crime-scene profile introduce errors into the matching process, which are the primary reason for false matches in investigations.

Partial and mixed profiles. Sometimes only a “partial profile” can be extracted from a crime-scene DNA sample. This means that only a subset of the 13 genotypes can be measured. This could be because the quantity of DNA available is too small (interfering with the “amplification” process at some of the loci), because the DNA has degraded, or because it is contaminated with chemicals called PCR inhibitors that interfere with the decoding process.

The other type of inaccuracy occurs when the DNA sample collected is in fact a mixture from multiple individuals. If this happens, multiple values for some genotypes might be measured. There is no foolproof way of separating the genotypes of each individual in the mixture.
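A toy enumeration makes the ambiguity concrete: given the set of alleles observed at one locus of a two-person mixture, here are all the genotype combinations consistent with it (allele values are hypothetical):

```python
from itertools import combinations_with_replacement

def candidate_pairs(observed):
    """All unordered pairs of genotypes (each a pair of alleles) whose
    combined alleles are exactly the ones observed at this locus."""
    genotypes = list(combinations_with_replacement(sorted(set(observed)), 2))
    return [(g1, g2)
            for i, g1 in enumerate(genotypes)
            for g2 in genotypes[i:]
            if set(g1) | set(g2) == set(observed)]

# Three alleles seen at one locus of a two-person mixture:
for pair in candidate_pairs([12, 13, 15]):
    print(pair)  # six distinct ways to split them between contributors
```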

These are very common occurrences, particularly partial profiles. There are no standards on the quality or quantity of the profile data for the evidence to be admissible in court. Instead, an expert witness computes a “likelihood ratio” based on the specific partial or mixed profile, and presents this to the court. Juries are often left not knowing how to interpret the number they are presented with, and are vulnerable to the prosecutor’s fallacy.
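For the simplest case, a single-source partial profile that matches the suspect at every locus that could be typed, the likelihood ratio reduces to the reciprocal of the random match probability over those loci. A sketch with made-up genotype frequencies:

```python
def likelihood_ratio(genotype_freqs):
    """LR for a single-source partial profile matching the suspect:
    P(match | suspect is the source) = 1, versus
    P(match | unrelated person) = product of genotype frequencies."""
    rmp = 1.0
    for f in genotype_freqs:
        rmp *= f
    return 1.0 / rmp

# Made-up genotype frequencies at the 7 loci that could be typed:
print(likelihood_ratio([0.05, 0.12, 0.08, 0.10, 0.06, 0.09, 0.11]))
# ~3.5e7; with fewer loci the number shrinks fast, and what a jury
# should make of it is far from obvious.
```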

The birthday paradox. The history of DNA testing is littered with false matches and leads; one reason is the birthday paradox. The number of pairs of individuals in a database of size N grows in proportion to N². The FBI database, for instance, has about 200,000 crime-scene profiles and 5 million offender profiles, for a total of about 1 trillion pairs of profiles. Due to the use of partial profiles to find matches, the probability of a match between two random profiles is much higher than one in a trillion.
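The back-of-the-envelope arithmetic is sobering (the per-pair match probability below is a made-up illustrative value):

```python
# Expected coincidental matches ~= (number of pairs) x (per-pair
# match probability).
crime_scene, offenders = 200_000, 5_000_000
pairs = crime_scene * offenders   # 1e12 cross-database pairs
p_match = 1e-10                   # illustrative value for partial profiles
print(pairs * p_match)            # ~100 expected spurious matches
```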

This long but fascinating paper has many hilarious stories of false DNA matches. Laboratory errors such as mixing up labels on the samples and contamination of the sample with the technician’s DNA appear to be depressingly common as well. Here is another story of lab contamination that cost $14 million.

Why only 13 loci? One question all this raises: if the use of a small number of loci causes problems when only a partial profile is available, why not use more of the genome, or even all of it? Research on mini-STRs shows how to better utilize degraded DNA and recover genotypes beyond the 13 CODIS loci. The cost of whole-genome genotyping has been falling dramatically, and it enables even individuals contributing trace amounts of DNA to a mixture to be identified!

One stumbling block seems to be the small quantity of DNA available from crime scenes; whole genome amplification is being developed to address that. But I suspect that the main reason is inertia: forensic protocols, procedures and expertise in DNA profiling have evolved over the last two decades, and it would be costly to make any changes at all. Whatever the reasons, I’m certain that things are going to be very different in a decade or two, because there are millions of bits of entropy in the entire genome, and forensic science currently uses about 54 of them.

