Have you ever wondered why your Dynamic Light Scattering (DLS) results occasionally, and without obvious cause, fail to meet your specifications? The answer may lie in the technique’s unique relationship with sampling probability and false positive results. To truly understand the potential causes of false positives, we must understand the algorithms used to calculate the data, the intensity skew inherent to DLS, and the important role of sampling and the value of replicate testing.

The experts at PTL can help by developing testing methods that provide reliable and accurate data. In this article, we will dig into the causes of, and solutions to, false positives in DLS. Together, we can get to the bottom of your mysterious, unexpected results.

## Selecting Appropriate Results from your DLS Data

The first thing to consider is the algorithm used to calculate the mean size by DLS. DLS instruments utilize two sets of algorithms, one for a single monomodal distribution and another for multimodal and/or polydisperse materials. If your specification is applied to an inappropriate algorithm, unexpected variance in results may occur.

Per ASTM E2490-09 (2021): “*8.9.3 For narrow distributions (polydispersity index <0.2), there is little inherent problem in the deconvolution of the raw scattering information to particle size information. For wider distributions (polydispersity index 0.1–0.7), then distribution algorithms are likely to be useful. At higher polydispersity indices (>0.7) then the sample is unlikely to be suitable for PCS and is not likely to give a stable distribution with time*.”

*Note: Photon Correlation Spectroscopy (PCS) is another name for DLS.*

## Understanding Intensity Skew of DLS Data

Another factor contributing to periodic failures in meeting specifications is the intensity skew of DLS data and its relation to sampling probability. First, let’s understand intensity skew. The signal received by the DLS detector is based on the intensity of the scattered light, which scales with the sixth power of the diameter (d⁶) of the particle scattering it. This means a single particle with twice the diameter will scatter as much light as 64 smaller particles (2⁶ = 64). Below are example plots.

**FIGURE 1.** Theoretical normal distribution (number-weighted) with a mean of 100 nm and standard deviation of 15 nm.

**FIGURE 2.** Theoretical intensity conversion of Figure 1’s data. Theoretical **intensity**-weighted mean of approximately 112 nm and standard deviation of approximately 7 nm.

**FIGURE 3.** Overlay of the **number**-weighted distribution and **intensity**-weighted distributions.

Due to the lower intensity contribution of smaller particles, the intensity distribution has a lower standard deviation, which can be misleading and give a false sense of confidence in the repeatability of results. This will come up again when we discuss sampling probability, but keep in mind that as the broadness of the distribution increases, so does the intensity skew:

**FIGURE 4.** Overlay of the **number**-weighted distribution and **intensity**-weighted distributions where the standard deviation on a number-weighted basis has increased to 20 nm.
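The number-to-intensity conversion behind Figures 1–3 can be sketched numerically. The following is a minimal pure-Python illustration that assumes simple Rayleigh (d⁶) weighting of a discretized normal distribution; a real instrument works from the measured correlation function, so treat this as a conceptual check rather than an instrument algorithm:

```python
import math

def intensity_weighted_mean(mu=100.0, sigma=15.0, n_bins=2000):
    """Convert a number-weighted normal size distribution to an
    intensity-weighted mean using the Rayleigh d^6 proportionality.
    (A particle with twice the diameter carries 2**6 = 64x the weight.)"""
    # Discretize the number distribution over +/- 4 standard deviations.
    lo, hi = mu - 4 * sigma, mu + 4 * sigma
    step = (hi - lo) / n_bins
    int_wt, int_wt_d = 0.0, 0.0
    for i in range(n_bins):
        d = lo + (i + 0.5) * step
        w = math.exp(-((d - mu) ** 2) / (2 * sigma ** 2))  # number weight
        int_wt += w * d ** 6       # intensity weight ~ d^6
        int_wt_d += w * d ** 7     # intensity weight times diameter
    return int_wt_d / int_wt

# A 100 nm mean, 15 nm SD number distribution shifts to roughly 112 nm
# on an intensity basis, consistent with Figure 2.
print(round(intensity_weighted_mean(), 1))
```

The upward shift comes entirely from the d⁶ weighting: the large-diameter side of the peak dominates the scattered signal even though it holds a minority of the particles.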

## Intensity Skew’s Impact on Sampling Probability

The intensity skew of DLS is advantageous due to its sensitivity in detecting small populations of agglomerates. For this blog, however, we are primarily interested in its sensitivity to small variances when sampling the larger particles within the expected distribution. Let’s use the distribution from Figure 1 as an example.

Assuming the distribution in Figure 1 represents all the particles in a batch of material, a sub-sample will be required for analysis. Based on the distribution of all the particles in the sample lot, 95% of the particles should fall between 71 and 129 nm, with 2.5% of particles exceeding 129 nm. However, that 2.5% of the particles contributes about 10% of the intensity distribution. Consequently, if a single sub-sample contains more particles from the upper end of the distribution, the mean results will be drastically skewed. Considering how little the sub-90 nm portion of the population contributes to the overall intensity, a sub-sample that disproportionately contains material from both the upper and lower ends of the overall population may still yield a DLS intensity result much higher than expected. Simply put, samplings from the upper and lower ends do not cancel each other out, due to the intensity skew.

**FIGURE 5.** Overlay of the **number**-weighted distribution and **intensity**-weighted distributions with brackets for the 95% confidence interval based on the number distribution.
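To see how heavily the upper tail weighs on the intensity result, we can integrate the same distribution above the 129 nm cutoff. Again, this is a rough pure-Python sketch under the simplified d⁶ assumption, and the exact figure depends on the discretization:

```python
import math

def tail_intensity_fraction(mu=100.0, sigma=15.0, cutoff=129.0, n_bins=4000):
    """Fraction of total scattered intensity contributed by particles
    larger than `cutoff`, for a normal number-weighted distribution
    with d^6 (Rayleigh) intensity weighting."""
    lo, hi = mu - 4 * sigma, mu + 4 * sigma
    step = (hi - lo) / n_bins
    total, tail = 0.0, 0.0
    for i in range(n_bins):
        d = lo + (i + 0.5) * step
        w = math.exp(-((d - mu) ** 2) / (2 * sigma ** 2)) * d ** 6
        total += w
        if d > cutoff:
            tail += w
    return tail / total

# Only ~2.5% of particles sit above 129 nm, but they carry on the
# order of 10% of the total scattered intensity.
print(round(tail_intensity_fraction(), 3))
```

This is the asymmetry at the heart of the sampling problem: a small excess of upper-tail particles in one aliquot moves the intensity mean far more than the same excess of lower-tail particles moves it back.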

To simplify this example, imagine that each time we perform an analysis, we analyze exactly 100 particles from the lot of material. While we know what the overall distribution will look like in this scenario, individual sub-samplings may vary significantly. Let’s assume the entire lot of material contains 1000 particles:

**FIGURE 6. **Distribution of 1000 particles with mean 100 nm and standard deviation of 15 nm.

If we randomly select 100 particles for analysis, the distribution may look like this:

**FIGURE 7. **Overlays of the **number**-weighted distribution and **intensity**-weighted distributions for 10 separate aliquots from Figure 6 sampling 100 particles each.

**TABLE 1: EXAMPLE ALIQUOT DATA FROM THE DISTRIBUTIONS IN FIGURE 7**

This example is a good indication of what one might normally expect. However, what happens when, by chance, you sample particles only at the upper and lower end of the distribution? Alternatively, what if all the particles come exclusively from the upper end of the distribution? How might these affect the intensity results?

**FIGURE 8.** Overlays of the **number**-weighted distribution and **intensity**-weighted distributions where sampling acquired more particles at the upper and lower end of the distribution.

**FIGURE 9.** Overlays of the **number**-weighted distribution and **intensity**-weighted distributions where sampling acquired more particles at the upper end of the distribution.

Based on the results from Table 1, a specification of 105 to 115 nm might seem reasonable for the intensity-weighted results. However, in the two examples from Figures 8 and 9, the obtained results would fall outside of this range (see Table 2).

**TABLE 2: EXAMPLE DATA FROM THE DISTRIBUTIONS IN FIGURES 8 & 9**
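The scenario in Figures 6–9 can be sketched as a small Monte Carlo simulation. This is a hypothetical illustration using a seeded random 1000-particle lot and the simplified d⁶ weighting, not actual measurement data or an instrument’s correlation analysis:

```python
import random

def intensity_mean(diams):
    """Intensity-weighted mean of a list of particle diameters (d^6 weights)."""
    wts = [d ** 6 for d in diams]
    return sum(w * d for w, d in zip(wts, diams)) / sum(wts)

random.seed(7)  # fixed seed so the sketch is repeatable
lot = [random.gauss(100, 15) for _ in range(1000)]  # the 1000-particle lot

# Ten random 100-particle aliquots, as in Figure 7.
random_means = [intensity_mean(random.sample(lot, 100)) for _ in range(10)]

# A worst-case aliquot drawn only from the upper end, as in Figure 9.
upper_tail = sorted(lot)[-100:]
biased_mean = intensity_mean(upper_tail)

print(min(random_means), max(random_means), biased_mean)
```

Running this shows the random aliquots clustering near the lot’s intensity mean, while the upper-tail aliquot lands well above every one of them, the same pattern Table 2 illustrates.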

## Key Takeaways

Consider a sample with minor agglomeration, maybe a QC or stability sample. While the level of agglomeration may not be significant to the overall particle population, variation in sampling these agglomerates can lead to a single aliquot failing the specification. This does not necessarily indicate an instrument malfunction, poor aliquot sampling, or even that the method was executed incorrectly. Instead, it may signify a false positive event.

False positives are defined in statistics as “the event that the test is positive for a given condition, given that the person does not have the condition.”[1] In simple terms, it’s falsely concluding that a lot is failing when it is actually passing. Due to the intensity skew in DLS results, periodic results much higher than the trend can occur with broader distributions. For this reason, testing replicate aliquots at any given test interval is recommended, so that a single false positive does not mislead the assessment of a lot of material. Specifically, ASTM E2490-09 recommends analyzing three separate aliquots for any material.
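As a simple illustration of why replicates help, here is a hypothetical screening rule in Python. The function name, three-tier outcome, and majority logic are illustrative assumptions for this sketch, not an ASTM or PTL procedure; in practice, follow your own SOP for investigating out-of-specification results:

```python
def screen_replicates(results, spec_low, spec_high):
    """Flag a lot only if the majority of replicate aliquots fail spec.
    A single out-of-spec replicate is treated as a possible false
    positive to be investigated, not an automatic lot failure.
    (Hypothetical decision rule for illustration only.)"""
    failures = [r for r in results if not spec_low <= r <= spec_high]
    if not failures:
        return "pass"
    if len(failures) <= len(results) // 2:
        return "investigate"  # lone outlier: candidate false positive
    return "fail"

# Three replicate intensity means against a 105-115 nm specification.
print(screen_replicates([109.8, 111.2, 123.5], 105, 115))  # prints "investigate"
```

With a single aliquot, the 123.5 nm result above would have failed the lot outright; with three replicates, it is correctly flagged as an outlier worth investigating rather than a conclusive failure.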

Have particle size questions? PTL has answers! Contact us today to talk to an expert.

By Ryan Keefer, Laboratory Division Manager.

[1] Mendenhall, W., Beaver, B. M., & Beaver, R. J. (2019). *Introduction to Probability and Statistics*. Brooks/Cole.