Methods Matter
    Fundamental limits of perfect measurements

    Published in July 2024 by Gergo Bohner, PhD, Ryan Olf, PhD and the Trisk team.

    To varying degrees of approximation, modern assays allow us to count exactly. We count the number of cells in a sample with a cell counter, and the number of a certain nucleotide sequence with dPCR. This leads to a characterization of samples that, at least in principle, can approach perfection along the dimension being measured.

    What we really care about, though, is the number of things - cells or matching nucleotide sequences - in a much larger system from which our analyzed sample was randomly drawn. And the properties of the sample do not exactly predict the properties of the system. If our analyzed sample cannot possibly be characterized (counted) more precisely, we will have to make do with whatever uncertainty remains in the extrapolated estimate. The uncertainty that remains is fundamental to the measurement and independent of the measurement technology.

    Of course, most of the time the main limitation of an assay is not fundamental. In fact, often the issue isn’t even statistical, but systematic. Systematic errors can produce consistent results, making them more difficult to detect and leading to estimates that are overconfident in their accuracy. They can arise, for example, if you are not quite counting the right thing (e.g. counting AAV ITRs as GOI), or are counting only some fraction of your items (e.g. the DNA strands are inconsistently annealed) when you believe you are counting all of them. Or the sample, once analyzed, may not be representative of the whole (e.g. due to non-ideal mixing or pipetting during prep).

    Non-fundamental statistical errors and systematic errors are rightfully the most pressing concern for those of us performing bioassays in the real world. But it’s interesting to consider what is possible when and if those errors are overcome, and the extent to which our technology approaches the theoretical ideal. And given the technology that we have, is using it in a way that approaches the theoretical ideal actually ideal, even in theory?

    Perhaps counterintuitively, the answer can be no. As we’ll show, dPCR has the theoretical ability to operate in a regime where the error in concentration estimates approaches fundamental limits, but the most precise measurements a given machine can produce occur in a different regime entirely. Increasing measurement error in the analyzed sample reduces uncertainty in the larger system that we care about!

    To understand this, let’s start at the beginning: fundamentally, what can a sample tell us?

    What a sample tells us

    Our particular question is this: given that we’ve perfectly precisely measured $N$ particles (cells, DNA strands) in an analyzed sample volume $V$, how confident should we be that the system from which the sample was drawn has number concentration $n = N/V$, and in particular, what should our error bounds on this concentration be?

    To get at the fundamental limits, we are assuming that key aspects of our measurement apparatus are effectively ideal. Mixing, subsampling, and dilutions are always perfect. The particles we measure do not interact with each other, and are distributed randomly. Our pipettes have no errors. Obviously, none of this is quite the case. But we find that on our best of days, counting of cells and viral particle genomes is not too far from the ideal we’re describing.

    To get a sense of where the uncertainty arises when perfect sample prep is followed by perfect measurement, consider the simple, concrete example in Figure 1. There, we sample half of the total volume in each of two identical replicate systems, each with 16 cells, and find 7 and 9 cells in each respective sample. Since we sampled half the volume, these perfect measurements imply 14 and 18 total cells, respectively, a spread of 25% around the true value. A single cell moving from one half to the other shifts the estimate by 12.5%!

    Figure 1. 16 cells are randomly distributed in two identical replicates of a system. Cells are counted perfectly in half volume (green), yielding counts of 7 and 9 cells in each respective replicate.

    The fact that we might find 7, 8, or 9 cells in a sample is probably not surprising. Even a 10/6 split between the two halves of the system would not look too out of place — a scenario that would lead to nearly a factor of 2 difference between the independent estimates of the total cell count and number density. Fundamentally, the source of this error is the fact that the systems are not uniform. They can’t be, because the cells are discrete lumps in a continuous space! The cells are also randomly distributed, leading to local variations in the average density. But even if you were able to coordinate the cells, e.g. configure their interaction such that they space out in a regular pattern, the estimates would retain some uncertainty: if we sample half the space, the estimate for total cell number is always an even number.
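    The Figure 1 scenario is easy to check numerically. The sketch below (a minimal simulation, assuming numpy is available) draws many replicate 16-cell systems, counts the cells that land in the sampled half, and looks at the scatter of the scaled-up estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

N_TOTAL = 16      # cells in the whole system
P_SAMPLE = 0.5    # we sample half the volume
TRIALS = 100_000  # replicate systems to simulate

# Each cell independently lands in the sampled half with probability 1/2,
# so the sample count is binomially distributed.
counts = rng.binomial(N_TOTAL, P_SAMPLE, size=TRIALS)
estimates = counts / P_SAMPLE  # perfect count scaled up to the whole system

# How often is the split at least as lopsided as 10/6?
frac_lopsided = np.mean(np.abs(counts - 8) >= 2)
print(f"mean estimate: {estimates.mean():.2f} cells")
print(f"std of estimate: {estimates.std():.2f} cells")
print(f"P(split at least as uneven as 10/6): {frac_lopsided:.2f}")
```

    The estimates average to the true 16 cells, but their standard deviation is about 4 cells (25%), and splits at least as uneven as 10/6 occur nearly half the time.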

    A statistical model of sampling

    When we draw a sample of volume $V_\text{obs}$ from a larger system with volume $V_0$, we can think of each of the $N_0$ particles in the volume $V_0$ as randomly being either “in” or “out” of the sample. In the field of statistics, this is what is known as a Bernoulli trial, with a particle being “in” termed a “success” in that context. In our ideal world, the probability of “success” $p$ of a given particle ending up in the pipetted volume $V_\text{obs}$ is exactly $V_\text{obs}/V_0$.

    While the expected number of particles in each sample under this sampling process is linear in the probability $p$, the variance is not. The total number of successes in a set of independent Bernoulli trials follows a binomial distribution: $P(N_\text{obs}\ |\ N_0, p) \sim \text{Binom}(N_0,\ p)$. The variance $\text{Var}(N_\text{obs}) = N_0 p(1-p)$ implies a variance in the sample concentration of $\text{Var}(N_\text{obs}/V_\text{obs}) = N_0(V_0 - V_\text{obs})/(V_0^2 V_\text{obs})$. This nicely shows that if we take the entire volume (and all particles), then the variance is zero: if we sample the whole system with our perfect measurement, there is only one possible answer.
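    The closed-form concentration variance can be checked against a direct simulation. In the sketch below (numpy assumed; the counts and volumes are arbitrary illustrative numbers), the simulated variance of the sample concentration should match the formula:

```python
import numpy as np

rng = np.random.default_rng(1)

N0, V0, V_obs = 1000, 10.0, 2.0  # system count, system volume, sample volume (arbitrary units)
p = V_obs / V0                   # probability a given particle lands in the sample

# Simulated sample concentrations under binomial sampling
n_obs = rng.binomial(N0, p, size=200_000)
conc = n_obs / V_obs

# Closed-form variance from the text: N0 (V0 - V_obs) / (V0^2 * V_obs)
var_theory = N0 * (V0 - V_obs) / (V0**2 * V_obs)
print(f"theory: {var_theory:.2f}, simulated: {conc.var():.2f}")
```

    Setting $V_\text{obs} = V_0$ in the same formula gives zero variance, matching the sample-the-whole-system limit described above.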

    The binomial distribution describes the distribution of the number of particles (and equivalently concentrations) we would find in the samples — the 7 and 9 cells we saw in the example above — given a fixed system concentration, total volume, and sample volume. However, we do not actually know the system concentration: that’s what we’re hoping the sample will tell us, with some degree of precision. What we really want to know is not the statistics of the samples given the system but the statistics of the system given the sample. In mathematical terms, we want to know the distribution of our estimate of the system particle count, $P(N_0\ |\ N_\text{obs}, p)$, which is not quite the same as the binomial distribution given above.

    In biological applications, where most direct observation assays are destructive, we often are in the situation where our final observed or assayed sample volume $V_\text{obs}$ is only a tiny fraction of the original volume $V_0$. In this case the mathematics describing the uncertainty of our estimate of the system particle count (and concentration) can be greatly simplified by using the Poisson approximation to the binomial distribution.

    Poisson sampling variance

    Under the assumption that $p$ is small, we expect the number of particles in the sample to be Poisson distributed with mean and variance $\text{E}[N_\text{obs}] = \text{Var}(N_\text{obs}\ |\ N_0) = \lambda = pN_0$ (the mean and variance of the Poisson distribution are equal). What’s the mean, $\text{E}[\hat{N}_0]$, and variance, $\text{Var}(\hat{N}_0\ |\ N_\text{obs})$, of our system particle estimate once we’ve observed $N_\text{obs}$ perfectly with one of our awesome direct counting observation methods?

    The maximum likelihood estimator for the system particle number is $\hat{N}_0 = N_\text{obs}/p$, and since this is a linear transformation of a random variable, the variance can be calculated easily as

    $$
    \text{Var}(\hat{N}_0\ |\ N_\text{obs}) = \frac{1}{p^2}\,\text{Var}(N_\text{obs}\ |\ N_0) = \frac{N_0}{p}.
    $$

    The most interesting form of this result comes when we consider the Coefficient of Variation (CV) of this estimator:

    $$
    \text{CV}(\hat{N}_0) = \frac{\sqrt{\text{Var}(\hat{N}_0)}}{\text{E}[\hat{N}_0]} = \frac{\sqrt{N_0/p}}{N_0} = \frac{1}{\sqrt{N_0 p}} = \frac{1}{\sqrt{\text{E}[N_\text{obs}]}}.
    $$

    Fundamentally, the relative uncertainty of our estimator (and this remains the same under linear transformation, i.e. for concentration estimates too) is limited by the expected number of actually observed molecules in the sample. So when we sample our bioreactor and count 100 cells, we should expect our concentration estimates to be good to at best 10% CV. If we count 1000 cells, the best we can hope for is just more than 3% CV. And since our measurement is already “perfect” we can’t do better by making it more precise. All we can do is count more and larger samples.
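    We can confirm this scaling directly. The sketch below (numpy assumed; the sampled fraction is an arbitrary small illustrative value) simulates Poisson sampling at two expected counts and compares the simulated CV of the scaled-up system estimate to the shot-noise limit $1/\sqrt{\text{E}[N_\text{obs}]}$:

```python
import numpy as np

rng = np.random.default_rng(2)

p = 1e-4  # tiny sampled fraction, so the Poisson approximation applies
cvs = {}
for expected_count in (100, 1000):
    n_obs = rng.poisson(expected_count, size=100_000)
    n0_hat = n_obs / p  # scaled-up estimate of the system particle count
    cvs[expected_count] = n0_hat.std() / n0_hat.mean()
    print(f"E[N_obs]={expected_count}: simulated CV={cvs[expected_count]:.4f}, "
          f"limit 1/sqrt(E[N_obs])={1 / np.sqrt(expected_count):.4f}")
```

    Counting 100 particles gives a CV near 10%, counting 1000 gives a CV just over 3%, exactly as the formula predicts.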

    The $\sqrt{N}$ signal-to-noise of our estimator is commonly called shot noise and arises in a range of processes that involve counting “quanta” (because such processes are often well described as Poissonian).

    Beyond Poisson

    The Poisson model describes our sampling situation well in the typical situation where the analyzed sample comprises a small portion of the total system. However, if we count every particle in the system with our perfect counting assay, the CV should not be $1/\sqrt{N_0}$, but 0. Generally, the difference is small, as $N_0$ is typically very large.

    If we do find ourselves in a small-$N_0$ situation, we are not lost. Recall that we think about our sample as consisting of $N_\text{obs}$ “successes” out of $N_0$ Bernoulli trials with success rate $p$. An exact expression for our desired $P(N_0\ |\ N_\text{obs}, p)$ exists when we add the constraint that the “last” ($N_0$th) trial is a success: the negative binomial distribution. This should be a very good approximation so long as the number of successes is moderately large.

    Under this model, the variance is $\text{Var}(\hat{N}_0\ |\ N_\text{obs}) = N_\text{obs}(1-p)/p^2$ and the CV is modified slightly: $\text{CV} = \sqrt{1-p}/\sqrt{N_\text{obs}}$. With $p$ much less than 1, this is the same as under the Poisson model. However, unlike the Poisson model it goes to zero as $p \rightarrow 1$. If you can count everything, you can have perfect information.
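    A quick simulation (numpy assumed; the count and success rates are illustrative) bears this out. numpy draws the number of failures before the $N_\text{obs}$-th success, so successes plus failures is exactly the total-trials estimate described above:

```python
import numpy as np

rng = np.random.default_rng(3)
N_obs = 50  # "successes": particles counted in the sample

results = {}
for p in (0.05, 0.5, 0.95):
    # negative_binomial returns failures before the N_obs-th success;
    # total trials = successes + failures is the system-count estimate.
    n0_hat = N_obs + rng.negative_binomial(N_obs, p, size=200_000)
    results[p] = (n0_hat.std() / n0_hat.mean(),        # simulated CV
                  np.sqrt(1 - p) / np.sqrt(N_obs))     # sqrt(1-p)/sqrt(N_obs)
    print(f"p={p:.2f}  simulated CV={results[p][0]:.4f}  theory={results[p][1]:.4f}")
```

    At small $p$ the CV matches the Poisson value $1/\sqrt{N_\text{obs}}$; as $p$ approaches 1 it shrinks toward zero.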

    In the typical case, however, where only a fraction of the whole is observed, the Poisson model is very good, as shown below.

    Figure 2. Fundamental CV limit for the estimate of system concentration versus number of particles counted $N_\text{obs}$ under three models, two of which are described in the text. With small numbers of observations, the CV is necessarily quite high!

    Application to dPCR: running near saturation is statistically optimal, but farther from fundamental

    A dPCR machine operates by dividing a sample containing DNA into a large number (about 8,500 or 26,000) of partitions. Each partition undergoes a PCR reaction to saturation, which ideally yields a binary readout in each well: a DNA molecule of interest was present or not. The method cannot (yet) distinguish between cases where there are multiple target molecules in a partition.

    When the sample concentration is such that the number of particles analyzed is much smaller than the number of partitions, the method can nearly perfectly count the number of target DNA molecules in the analyzed sample. This is because the chances are low that a given partition contains more than a single target molecule.

    Of course, as the number of target molecules in the sample increases, the chance that a partition holds multiple copies of the target molecule goes up. So the rise in filled partitions lags the rise in the number of target molecules, as shown below.

    Figure 3. Expected number of particles in an analyzed sample and the expected number of partitions with a particle, for 10,000 partitions. The dashed line has unit slope.
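    The lag follows from Poisson statistics: with $n$ molecules spread over $P$ partitions, a partition stays empty with probability roughly $e^{-n/P}$. A minimal sketch (10,000 partitions, matching the figure; numpy assumed):

```python
import numpy as np

P = 10_000  # number of partitions
expected_filled = {}
for n_molecules in (100, 1_000, 10_000, 30_000):
    # Expected number of positive (filled) partitions under Poisson loading:
    # each partition is empty with probability ~exp(-n/P).
    expected_filled[n_molecules] = P * (1 - np.exp(-n_molecules / P))
    print(f"{n_molecules:>6} molecules -> "
          f"{expected_filled[n_molecules]:7.0f} expected positive partitions")
```

    At 100 molecules nearly every molecule gets its own partition; at 30,000 molecules fewer than 10,000 partitions can light up, so the curve saturates.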

    This effect is understood and well described by Poisson statistics similar to those we’ve discussed in this post. Importantly, given an observed number of filled partitions (with one or more target molecule present), the concentration of target particles in the system from which the analyzed sample was taken can be estimated, and the uncertainty of this estimate can be calculated (assuming ideal prep, as we have here). At this point, it may not be too surprising that at low filling fractions, the relative uncertainty in the concentration estimate (“Poisson error”) approaches the fundamental shot noise limit, as shown below. These samples simply cannot be analyzed more precisely.

    Figure 4. Relative uncertainty of the concentration estimates from dPCR (Poisson error) and the fundamental error limit for a given sample. The Poisson error is always higher than the fundamental error limit (as it must be) but approaches the limit at low numbers of filled partitions.

    However, it’s also clear from the graph above that the lowest Poisson error — the most precise measurement of the system concentration, assuming ideal prep and operation — occurs in a regime where the Poisson error is substantially higher than the fundamental limit. The Poisson correction introduces additional uncertainty above shot noise, but the benefit of counting more particles is sufficiently large that it’s worth doing so even as the count becomes less precise — up to a point.
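    To make the trade-off concrete, the sketch below (numpy assumed; the 10,000-partition count is illustrative) compares a delta-method approximation of the Poisson error of the standard dPCR estimate $\hat{\lambda} = -\ln(1 - k/P)$, which works out to $\sqrt{(e^\lambda - 1)/P}\,/\,\lambda$, against the shot-noise limit $1/\sqrt{P\lambda}$:

```python
import numpy as np

P = 10_000                        # partitions
lam = np.linspace(0.01, 5, 2000)  # mean target molecules per partition

# Delta-method CV of the dPCR estimate lambda_hat = -ln(1 - k/P)
cv_poisson = np.sqrt(np.expm1(lam) / P) / lam
# Fundamental (shot-noise) limit: 1/sqrt(number of molecules analyzed)
cv_fundamental = 1 / np.sqrt(P * lam)

best = lam[np.argmin(cv_poisson)]  # occupancy minimizing the Poisson error
print(f"optimal mean occupancy: {best:.2f} molecules/partition")
print(f"optimal filled fraction: {1 - np.exp(-best):.2f}")
```

    The minimum sits near $\lambda \approx 1.6$ molecules per partition, i.e. roughly 80% of partitions filled — well away from the low-occupancy regime where the error is closest to fundamental.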

    The gap between the fundamental error and the Poisson error means there is room to improve in dPCR technology. In particular, the measurements would be better if the number of target molecules in each partition could be determined more precisely, for example by distinguishing partitions with one molecule from those with two molecules, etc. Of course, in reality, there’s likely much more to gain from making existing systems operate closer to their ideal — including improving the sample prep process — so that we can reliably achieve the theoretical limits of accuracy and precision. At least for now.

    Closing thoughts

    Perfect measurement is not the same as perfect information about the quantities you care about. Measurements — even perfect ones — ultimately have fundamental limits on what they can tell us, and those limits may be more relevant than we expect. Fundamental limits are powerful, as they don’t depend on technical details of how an assay is done, and, one might hope, the assumptions behind them might point towards how to better achieve them.

    Fundamental limits also apply to imperfect measurements, and as we’ve seen with dPCR, if we have the freedom to choose the expected attributes of our sample, the best measurement we can make may be imperfect, even if perfect measurement (of a more constrained sample) is available. Sometimes perfection isn’t optimal.

    © 2024 Trisk Bio Ltd. All rights reserved.