# Sampling For Aggregate Size Using European Standards

**UK quarries have now adopted European standards for quarry products and supporting test methods, but are the products still the same?**

UK quarries adopted the European standards for quarry products and supporting test methods on 1 January 2004. Existing products were largely rebranded on a one-to-one basis. But are the products still the same? Take the aggregate size distributions of single-size coarse aggregates used in the production of asphalt, as specified by BS EN 13043:2002. Comparison with the former BS 63 suggests that the aggregate size distribution of rebranded products has broadened slightly. For example, a small fraction of previously oversize aggregates is now tolerated. In some cases, a nominal fraction of aggregates in the two largest size classes is mandatory. While this helps to maintain distinctions between the size distributions of different types of aggregate, it ignores the increased uncertainty associated with the measurement process. Measurement uncertainty is due to random errors and possible systematic bias during the selection, preparation and analysis of the sample. It is necessary to account for this uncertainty when judging whether the measured size fractions conform to the standards. When a standard specifies that a size class should contain at most a certain weight fraction, the sample should contain less. Conversely, a minimum value for the fraction in a size class implies that the sample should contain more. How much more or less can be determined by using suitable models to represent the measurement process.

While it is reasonable to assume that bias is avoided when the test methods described in the standards are applied correctly, chance variation can be incurred during any stage of the measurement process. For example, when more than one sample is drawn from the same batch of aggregates, the measured size distributions of the samples are likely to differ because aggregate sizes are distributed within the batch. During sample preparation, the aggregate size distribution of portions obtained after splitting could vary, or repeated analysis of a sample may reveal different aggregate size distributions, eg due to the irregular shape of aggregates. Such chance variations can be described with appropriate statistical models. By using models to establish measures for the variation, the overall variation associated with the measurement process can be estimated. Once this total variation is known, it is possible to identify whether the broader aggregate size distribution, as defined by the new standards, can be detected with the specified minimum sample size.

Chance variation introduced during the process of collecting a sample can differ for each size class that defines the aggregate size distribution. As a rule of thumb, significant chance variation occurs when:

- the size class under consideration contains heavy aggregates
- the tolerated fraction of aggregates in a size class is small
- the total mass of the sample is relatively small.

A relatively large chance variation is therefore expected in the analysis of samples from coarse single-size aggregate products which contain small but mandatory fractions of relatively large aggregates in the upper, or top, size class. Noting that the new BS EN standards specify a maximum of just 2 wt-% in the top size class of coarse single-size products, a significant chance variation will be associated with measurement of the aggregate size distribution of such products.

In practice, the limiting fraction of aggregates in a size class is only corrected if it is larger than zero and smaller than unity. After all, a maximum of 0 wt-% oversize aggregates is automatically exceeded if any are found in the sample. Similarly, the absence of a single aggregate is unacceptable if a minimum of 100 wt-% is specified. When a small range, for example between 0 and 2 wt-%, is specified, a correction is only applied to the maximum value, in this case 2 wt-%. When the correction is exceptionally large, the maximum value may be very close to 0 wt-% and even smaller than the mass of a single aggregate! In such cases, measuring a small fraction of relatively large aggregates is impossible in view of the (insufficient) size of the sample. Given that the provision for a small class of relatively large aggregates is a feature which distinguishes the former BS standards from the new BS EN standards, it is interesting to evaluate the uncertainty, expressed in the chance variation, during the sampling of coarse single-size products.

The overall uncertainty associated with the measurement process consists of three contributions: uncertainty during collection of the initial ‘bulk’ sample; uncertainty during preparation in the laboratory, when the bulk sample is reduced in size to a ‘lab’ sample; and uncertainty during analysis of the sample. Each contribution can be inferred from appropriate statistical models which describe the chance variation rather than any bias.

Models for the collection and preparation of samples are fairly similar and differ markedly from models describing the analysis process, ie sieving of aggregates into size classes. For illustrative purposes, the uncertainty associated with analysis of the sample will not be considered in this article. Instead, potentially optimistic estimates of the overall uncertainty will be presented following a brief outline of the underlying models.

Models for the collection and preparation of a sample attempt to capture the essential process characteristics in the absence of detailed information. When extracting a single, arbitrary sample from a batch of aggregates, the possibility of an infinite number of different samples is retained by assuming that the sample is infinitely smaller than the batch. A further assumption is that a sample can be constituted from an infinite number of different aggregates, which requires that the mass of the heaviest aggregate is infinitely smaller than the sample mass. These conditions will be approximated fairly closely when a sample is not too small, ie significantly larger than the heaviest aggregate in the batch, and not too large, ie significantly smaller than the batch itself. Under these plausible conditions, virtual realizations of samples can be generated by modelling the sampling process as simple random sampling. Although simple random sampling requires the further (questionable) assumption that each aggregate has the same probability of being selected, it has the advantage of providing a straightforward estimate of the variance. The variance is a good measure of the chance variation because it characterizes the spread in the size fractions of virtual samples which have the same mass as a real sample. Another attractive feature is that the variance is based on the masses of aggregates present in a real sample. When the size fraction is small, the variance of the number of aggregates in that size class becomes independent of the sample size and equal to the number of aggregates (N) in the size class of the sample. For comparison with other variances, the relative variance, which is equal to 1/N, can be used.
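This reasoning can be illustrated with a small simulation. The sketch below (all numbers are hypothetical) draws repeated simple random samples from a virtual batch in which 2 wt-% of equally heavy aggregates belong to the top size class, and compares the empirical relative variance of the top-class count with the 1/N approximation:

```python
import random
import statistics

random.seed(42)
BATCH = 50_000       # aggregates in the batch (hypothetical)
SAMPLE = 2_000       # aggregates per sample: batch >> sample >> 1
TOP_FRACTION = 0.02  # 2 wt-% in the top size class; equal masses assumed,
                     # so mass fractions and number fractions coincide

# 1 marks a top-size-class aggregate, 0 any other aggregate.
batch = [1] * int(BATCH * TOP_FRACTION) + [0] * int(BATCH * (1 - TOP_FRACTION))

# Draw many virtual samples by simple random sampling and count the
# top-size-class aggregates in each.
counts = [sum(random.sample(batch, SAMPLE)) for _ in range(2_000)]

N = TOP_FRACTION * SAMPLE  # expected number of top-size-class aggregates
rel_var = statistics.variance(counts) / N**2
print(f"N = {N:.0f}, empirical relative variance = {rel_var:.4f}, 1/N = {1/N:.4f}")
```

Because the sample is a non-negligible part of this small virtual batch, the empirical value falls slightly below 1/N; the approximation improves as the batch grows relative to the sample.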

The process of sample preparation, or sample size reduction, is also characterized with a view to establishing a relative variance. When a riffle splitter is used to divide a sample into two (statistically) equal parts, the probability of an arbitrary aggregate ending up on either side is exactly 0.5. When the riffling of aggregates is simulated using this selection probability, virtual realizations of the aggregates on either side are obtained. Performing the simulation with aggregates belonging to a particular size class reveals the spread in the distribution of these aggregates. As with sample collection, the spread can be characterized in terms of the variance. The relative variance is given by 0.25/N, where N is the initial number of aggregates belonging to a size class. Note that one would expect to find half the aggregates on either side. This expected number is used as the starting point when simulating a second, successive riffling stage. As a result, the relative variance associated with the second riffling stage equals (0.25/0.5N =) 0.5/N. Consequently, the relative variance doubles with every subsequent riffling stage. Because each riffling stage constitutes an independent process, the relative variances for the individual riffling stages can be summed. For a similar reason, the total relative variance associated with sample preparation can be added to the relative variance related to sample collection to provide a measure for the overall uncertainty.
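The riffling model can be checked in the same way. This sketch (with hypothetical numbers) simulates a single riffling stage for a size class containing N aggregates and compares the spread on one side of the splitter with the 0.25/N expression, normalizing the variance by the initial number N as in the text:

```python
import random
import statistics

random.seed(7)
N = 40          # aggregates of one size class entering the splitter (hypothetical)
TRIALS = 50_000

# One riffling stage: each aggregate lands on the left side with probability 0.5.
left_counts = [sum(random.random() < 0.5 for _ in range(N)) for _ in range(TRIALS)]

# Normalize the variance by the initial number N, as in the text.
rel_var = statistics.variance(left_counts) / N**2
print(f"simulated: {rel_var:.5f}, model 0.25/N = {0.25 / N:.5f}")
```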

The described models will be applied to the sampling of three coarse single-size aggregate products from Carnsew Quarry, near Mabe in Cornwall. For each product, BS EN 13043:2002 specifies that the maximum fraction of relatively large aggregates in the top size class is 2 wt-%. Assuming that a batch contains exactly this limiting value of 2 wt-%, the expected number of relatively large aggregates in a sample can be found when the total sample mass and the average mass of an individual large aggregate are known. Noting that the loose bulk density equals 1.4 tonne/m³, the minimum sample mass specified by BS EN 932-1 was adopted. The average mass of an individual relatively large aggregate was determined by analysing relatively large aggregates in a large sample of each product. Table 1 summarizes the results.
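As a rough illustration of how the expected number of relatively large aggregates follows from these quantities, the sketch below uses the guidance often associated with BS EN 932-1 (a minimum bulk sample mass of 6 × √D × ρ_b kg, with D the upper size in mm and ρ_b the loose bulk density in tonne/m³); both this formula and the 0.2 kg average aggregate mass are illustrative assumptions, not the values of table 1:

```python
import math

def min_bulk_sample_kg(upper_size_mm: float, bulk_density_t_m3: float) -> float:
    # Assumed guidance: minimum bulk sample mass grows with the square
    # root of the upper aggregate size D (in mm).
    return 6 * math.sqrt(upper_size_mm) * bulk_density_t_m3

def expected_large_aggregates(sample_kg: float, fraction: float,
                              avg_aggregate_kg: float) -> float:
    # Expected number of top-size-class aggregates at the limiting fraction.
    return sample_kg * fraction / avg_aggregate_kg

M = min_bulk_sample_kg(40, 1.4)              # eg a 20/40 product
N = expected_large_aggregates(M, 0.02, 0.2)  # 0.2 kg per large aggregate (assumed)
print(f"bulk sample = {M:.0f} kg, expected large aggregates N = {N:.1f}")
```

Even at the 2 wt-% limit, only a handful of large aggregates would be expected in the bulk sample under these assumptions.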

During sample preparation by riffling, the average number of aggregates in the bulk sample is distributed over a set of sub-samples, one of which becomes the lab sample. Simple riffling schemes were devised to ensure that the size of the lab sample exceeded the minimum lab sample size specified by BS EN 933-1. Results are summarized in table 2. It is apparent that the average number of relatively large aggregates in the lab sample has become disconcertingly small.

To determine the overall relative variance associated with lab samples, the initial average numbers of relatively large aggregates, quoted in table 1, are denoted N. The corresponding size fractions are assumed to be sufficiently small to allow approximation of the relative variance by 1/N. Maintaining this definition of N, the relative variance associated with successive riffling stages is expressed as (2^(i−1) × 0.25)/N, where i refers to the riffling stage (i = 1, 2, 3...). The derivation of the overall relative variance is presented in table 3.
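The derivation of table 3 can be condensed into a few lines of code. This sketch implements the model as described (a collection term of 1/N plus one term per riffling stage); the values of N and the number of stages are hypothetical:

```python
def overall_relative_variance(N: float, stages: int) -> float:
    """Collection term 1/N plus (2**(i-1) * 0.25)/N per riffling stage i,
    following the model described in the text."""
    collection = 1.0 / N
    preparation = sum(2**(i - 1) * 0.25 / N for i in range(1, stages + 1))
    return collection + preparation

# Hypothetical case: N = 20 large aggregates in the bulk sample, two riffling
# stages; total = (1 + 0.25 + 0.5) / 20 = 0.0875.
print(overall_relative_variance(20, 2))
```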

While the overall relative variance provides a measure for the spread in the analysis result, the aim is to determine an absolute correction to the maximum average number of relatively large aggregates in the lab sample. For this purpose, it is necessary to take the square root of the relative variance to obtain the relative standard deviation. The relative standard deviation can be converted into the absolute standard deviation by multiplying by the average number of relatively large aggregates in the lab sample, denoted n. In principle, the absolute standard deviation could be applied directly as a correction. However, this would not do justice to the link between probability and the standard deviation. For a given probability distribution of virtual sample analyses, the standard deviation is associated with a fixed probability. This fixed probability does not necessarily correspond to the optimum probability. The latter can be determined by an economic assessment in which the combined cost of sampling and the cost of making an incorrect decision with respect to a batch of aggregates is minimized. Alternatively, the optimum probability can be set at a fixed value, eg 95%. Using knowledge of the shape of the probability distribution, the optimum probability finds expression as a factor by which the standard deviation is multiplied. For the sake of simplicity, it is assumed in this case that the probability distribution has the shape of the well-known normal distribution. For this distribution, a probability of 95% corresponds to a factor of 1.65. The final expression for the correction is shown in table 4. Following subtraction of the correction from the average number of relatively large aggregates in the lab sample (n), the maximum tolerated number of relatively large aggregates in the top size class is obtained.
In order to accept a batch of aggregates based on the processing of the minimum specified sample sizes, a sample may contain no more than the corrected number of relatively large aggregates quoted in table 4.
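A minimal sketch of the correction itself, using hypothetical values rather than those of tables 1 to 4:

```python
import math

def corrected_maximum(n: float, rel_var: float, factor: float = 1.65) -> float:
    # Convert the overall relative variance into an absolute standard
    # deviation, scale by the one-sided 95% factor for a normal
    # distribution, and subtract the result from the expected number n.
    return n - factor * n * math.sqrt(rel_var)

# Hypothetical lab sample: n = 5 large aggregates expected at the 2 wt-%
# limit, overall relative variance 0.0875.
print(corrected_maximum(5, 0.0875))
```

With these illustrative numbers the tolerated count drops from five to roughly two and a half aggregates, showing how quickly the correction erodes a small limiting fraction.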

Bearing in mind that the corrected numbers of aggregates in table 4 are based on ideal sample analyses, ie absent chance variation during sample analysis by sieving, as well as various model assumptions, the corrected values should probably be even lower. Notably, for products such as 20/40 and 20/31.5, it would be prudent to analyse more parts of the bulk sample and/or to extract a new bulk sample if any aggregates are found in the top size class.

It is interesting to consider the origin of the trend in the corrected number of aggregates in table 4. The main reason is the scale-up of the bulk sample size as specified in BS EN 932-1. The standard indicates that the bulk sample size should increase with the square root of the upper aggregate size of the product. Assuming a constant limiting size fraction, the corrected tolerated number of relatively large aggregates would only be constant if the bulk sample size scaled with the mass of aggregates in the top size class. Notwithstanding slight variation in aggregate shape, this would require that the bulk sample size scales with the cube of the upper aggregate size. The standard therefore prescribes a restrained scale-up, suggesting a practical desire to avoid large, potentially unmanageable bulk sample sizes. It should be noted that similar variation of the lab sample size (specified in BS EN 933-1) with the square root of the upper aggregate size helps slightly to redress the divergence in the corrected numbers.
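The divergence can be illustrated numerically. The sketch below uses arbitrary proportionality constants, so only the trend with upper size D, not the absolute values, is meaningful:

```python
import math

# Arbitrary proportionality constants: only the trend with D is meaningful.
for D in (10, 20, 31.5, 40):
    sample_mass = math.sqrt(D)     # bulk sample mass scales with sqrt(D)
    aggregate_mass = D**3 * 1e-3   # mass of one large aggregate scales with D**3
    N = 0.02 * sample_mass / aggregate_mass
    print(f"D = {D:>5} mm  relative N = {N:.4f}")
```

Quadrupling the upper size from 10 mm to 40 mm multiplies the mass of a single aggregate by 64 but the sample mass by only 2, so the expected number of large aggregates falls by a factor of 32.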

Do these findings imply that the transition to the new European standards does not necessarily affect the aggregate size distribution of rebranded products? To address this question, the size classes below the top size class should be considered as well. Comparison of the former BS 63 and new BS EN 13043 standards for the aforementioned products suggests that the second-largest class of aggregate was designed to guarantee the coarseness of rebranded products. In the case of 20/40 (formerly 40 mm) and 20/31.5 (formerly 28 mm) products, the minimum size of the second class of aggregates was slightly increased. Perhaps more significantly, it is a new requirement that a minimum of 1 wt-% is retained on the combined top and second size class for all three products. Following the approach presented for the top size class, it is evident that the combined top and second size class will have to contain a much larger fraction than 1 wt-% after correction! However, the standards make provision for deviation from this specification: when less than 1 wt-% is routinely retained in the combined top and second size class, a producer may opt to measure the aggregate size distribution with an alternative set of sieves. On the lower sieves, which retain finer aggregates, comparison of the former and new standards reveals that the new standards allow for larger fractions of fine aggregates. Because the fractions involved are larger, the correction is insignificant and the maximum tolerated fraction remains effectively as specified. However, the absence of a specified minimum fraction implies that the rebranded products do not have to be finer than before. In conclusion, rebranding of the products studied in this article is accompanied by a slight coarsening which is expressed in a more subtle way during measurement than might initially be anticipated. For other rebranded products, the generally applicable methodology can be applied to analyse potential differences in the aggregate size distribution.

**Acknowledgement**

Financial support for this work from the Natural Environment Research Council is gratefully acknowledged.

*The authors, Hylke J. Glass, Bas Geelhoed and Rod de Figueiredo, are with Camborne School of Mines, University of Exeter, and Aram Resources, Colas Ltd*
