How to Compare Particle Count Data

A common question regarding aerosol particle counters is how closely results from different units should match. This becomes particularly important when new particle counters are introduced into an existing fleet.


The new instruments may come from a new vendor or from the existing manufacturer.

Even when they come from the existing manufacturer, new models are commonly sourced because of product obsolescence or improved specifications. All of these factors matter because they can influence the resulting particle count data.

The aim of this article is to offer a practical outline of the expectations an organization should have when comparing particle count data from similar and dissimilar instruments.

There are three common situations that raise questions about the agreement of data between two aerosol optical particle counters:

  • Group 1: Same instruments from the same supplier
  • Group 2: Like instruments from a different supplier*
  • Group 3: Unlike instruments from the same or a different supplier

*The term “like” describes particle counters that share the same key characteristics: sample flow rate, first-channel sensitivity, and the number (and values) of size channels (i.e., resolution).

Relative Data Comparisons and Contamination Trending

It is reasonable to assume that all aerosol optical particle counters in Groups 1 and 2 will provide equivalent relative particle trend information over time.

The data best suited to this comparison is normalized cumulative counts. All of the common particle trend profiles should compare well, including irregular or periodic particle events, periods of stable counts, and the clean-up curve following critical events.

Because steady-state particle levels are often well below the limits or action levels for a given environment, large offsets in the data (a factor of 2 or more) can be tolerated, particularly in the lower ISO classes.
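The normalization and cumulation steps mentioned above can be sketched in a few lines. This is an illustrative example only: the channel sizes, raw counts, and sample volume below are assumed values, not data from the source, and normalization here simply means scaling raw counts to a per-cubic-meter concentration.

```python
# Illustrative sketch: converting raw differential channel counts into
# normalized cumulative counts, the quantity recommended for trend comparison.
# All numeric values below are assumed example data.

def cumulative_counts(differential_counts):
    """Cumulative count for each channel = sum of counts at and above that size."""
    cumulative = []
    running_total = 0
    for count in reversed(differential_counts):
        running_total += count
        cumulative.append(running_total)
    return list(reversed(cumulative))

def normalize(counts, sample_volume_m3):
    """Normalize raw counts to particles per cubic meter of sampled air."""
    return [c / sample_volume_m3 for c in counts]

# Example: six size channels (um) with raw differential counts from one sample
channel_sizes = [0.3, 0.5, 1.0, 2.0, 5.0, 10.0]
raw_counts = [12000, 4300, 900, 150, 20, 3]
sample_volume = 0.0283  # m3 (one cubic foot), an assumed sample volume

cum = cumulative_counts(raw_counts)        # e.g. 0.5 um channel = 4300+900+150+20+3
norm_cum = normalize(cum, sample_volume)   # counts/m3, comparable across instruments

for size, n in zip(channel_sizes, norm_cum):
    print(f">= {size} um: {n:.0f} counts/m3")
```

Expressing both instruments' data in the same normalized cumulative form is what makes relative trend comparison meaningful when flow rates or sample times differ.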

It is best practice to avoid even relative data comparisons for Group 3, as the large number of instrument-related variables can produce substantial variation in the particle count data.

While this conclusion may seem obvious, it is worth stating because such comparisons do occur in practice and should be avoided.

Absolute Data Comparisons

Absolute comparisons of data are sometimes required, and care should be taken when setting expectations for how closely particle data from different instruments will match. The following table presents the three categories of factors that influence the matching of data between aerosol particle counters.

Table 1. Factors affecting instrument matching. Source: Particle Measuring Systems

| Instrument Factors | Calibration Factors | “How Used” Factors |
| --- | --- | --- |
| Laser design (power, wavelength) | PSL size and distribution | Sampling location and technique |
| Laser and beam shaping | PSL nebulization and delivery method | Sample interval |
| Optical detection and signal processing | Reference instrument/calibration equipment and apparatus | Availability of a statistically significant number of particles per channel |
| Sample cell design and flow delivery | Calibration methodology | Measurement of environmental particles vs. PSLs |


For Group 1

Group 1 offers the best chance of close agreement in both differential and cumulative data across several instruments, even ones produced at different times, because most of the data variation arising from instrument and calibration factors is eliminated.

Quality control and manufacturing consistency are the only limiting factors. It is assumed that the “how used” factors can be minimized by carefully considering where and how the aerosol samples are acquired.

Even in this ideal case, some differences in sample data will be found; no two particle counters count exactly alike.

For Group 2

Absolute data comparisons are challenging mainly because of configuration differences between optical particle counter manufacturers. “Like” instruments share the same key specifications but still operate differently.

They will size and count particles differently due to design choices in laser type and beam shaping in the sample region, optical detection and signal processing approach, and sample cell and flow delivery (such as recirculation).

The cumulative effect of these elements produces variation in the differential and cumulative data reported by instruments developed by different suppliers. Calibration factors also influence data correlation between like instruments made by different manufacturers.

There are several suppliers of the monodisperse PSL particles used to calibrate optical particle counters, and these particles differ in mean size and size distribution. Batch-to-batch specification variations are also common within a single part number from the same manufacturer.

The technique used to deliver particles during calibration can also introduce error. Particles must be delivered at the correct concentration and be free of contamination when channel thresholds are set.

Finally, the type of reference instrument used during calibration will influence the counting efficiency of the instrument under test. In addition, the instrument's resolution and its number of size channels affect data correlation: more channels mean more points of potential difference, and a small sizing shift has a much stronger effect on a higher-resolution instrument.

One further point worth noting is that “splits” are a common technique for setting a particle counter's internal channel size thresholds at calibration. Adjusting a size threshold so that a monodisperse particle challenge divides evenly between adjacent channels is not an exact science, and several factors affect this calibration step.

The cumulative error from calibration factors can be significant and does contribute to data mismatch between like particle counters.

For Group 3

No data correlation should be expected for instruments in this category, as both the instrument design and the calibration process are entirely different.

The differences in instrument sensitivity in this category affect cumulative counts and undermine any attempt to establish comparative accuracy for polydisperse aerosol distributions (such as real-world sampling of an environment containing particles of many sizes).

Guidelines for Matching Expectations

The following data matching expectations are suggested based on the discussion above. For the reasons outlined in the table below, differential count data should be excluded and only cumulative counts compared.

The test environment should contain a statistically significant number of particles to support any conclusion with confidence.
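One common way to quantify what "statistically significant" means for a raw particle count (a standard counting-statistics rule of thumb, not a figure from the source) is the Poisson model, where the relative uncertainty of N counted particles is roughly 1/√N:

```python
# Hedged sketch of why a statistically significant count matters: under the
# standard Poisson counting model, the 1-sigma relative uncertainty of a raw
# count of N particles is approximately 1/sqrt(N). The example counts are
# assumed values for illustration.
import math

def relative_uncertainty(n_counts):
    """Approximate 1-sigma relative uncertainty of a raw particle count."""
    if n_counts <= 0:
        return float("inf")
    return 1.0 / math.sqrt(n_counts)

for n in (10, 100, 400, 10000):
    print(f"N={n}: +/-{relative_uncertainty(n):.1%}")
```

With only 10 counts in a channel, the statistical noise (~32%) already rivals the matching tolerances discussed in Table 2, so low-count channels cannot support a confident comparison between instruments.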

Table 2. Suggested data matching expectations. Source: Particle Measuring Systems

| Group | Instruments | Comparison | Expected match | Comments |
| --- | --- | --- | --- | --- |
| 1 | Same instruments from the same manufacturer | First-channel cumulative counts | ± 20% | Best-case scenario. The expectation should be that data between old and new instruments will compare well. Instrument factors and calibration factors should not be a major source of error. Make sure the sampling procedure and sample probe placement are consistent. |
| 1 | | Inter-channel comparison of cumulative counts | ± 50% | |
| 2 | Like instruments from different manufacturers | First-channel cumulative counts | ± 40% | Instrument and calibration factors dominate. First-channel comparisons are reasonable and generally provide acceptable results, but allow for a larger absolute difference in cumulative counts. Inter-channel comparisons are possible, but errors will be larger. |
| 2 | | Inter-channel comparison of cumulative counts | ± 100% | |
| 3 | Unlike instruments from the same or a different manufacturer | Any data comparison | Unable to estimate | Comparisons of this kind should be avoided. |
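The suggested tolerances above can be applied as a simple acceptance check. This is an illustrative sketch: the tolerance values come from Table 2, but the instrument readings are assumed example data, and taking the relative difference against the mean of the two readings is an assumed convention (the source does not specify the reference value).

```python
# Sketch of applying the suggested first-channel matching tolerances from
# Table 2 to cumulative counts from two instruments. Readings are assumed
# example data; using the mean as the reference value is an assumption.

FIRST_CHANNEL_TOLERANCE = {1: 0.20, 2: 0.40}  # Group 3: no comparison recommended

def within_tolerance(counts_a, counts_b, group):
    """Check whether two cumulative counts agree within the group tolerance."""
    if group not in FIRST_CHANNEL_TOLERANCE:
        raise ValueError("No data comparison is recommended for this group")
    mean = (counts_a + counts_b) / 2
    if mean == 0:
        return counts_a == counts_b
    relative_difference = abs(counts_a - counts_b) / mean
    return relative_difference <= FIRST_CHANNEL_TOLERANCE[group]

# Example: two same-model counters (Group 1) sampling the same location
print(within_tolerance(5200, 4700, group=1))  # ~10% apart -> True
print(within_tolerance(5200, 3000, group=1))  # ~54% apart -> False
```

A pair of like instruments from different manufacturers (Group 2) would use the looser ±40% threshold for the same check.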



Care must be taken when comparing data between aerosol optical particle counters. Generally speaking, equivalent models from the same supplier provide the best data matching under similar sampling conditions.

Variations in differential and cumulative data from “like” instruments should always be expected. These baseline offsets in the particle count data are generally tolerable, as they can be reconciled over time and expectations reset.

Total normalized cumulative counts are the recommended data to use when comparisons between like instruments must be performed. Larger variations in inter-channel differential and cumulative data should be expected for like instruments.

Lastly, comparing particle data from dissimilar instruments, whether from the same or different suppliers, should always be avoided due to variations caused by instrument and calibration factors.

Particle Measuring Systems

This information has been sourced, reviewed and adapted from materials provided by Particle Measuring Systems.

For more information on this source, please visit Particle Measuring Systems.

