QMT Features: September 2014
In-process sampling: how much is too much?
Determining an effective in-process sampling strategy can be tricky. Steve Wise, VP of Statistical Methods at InfinityQS, suggests how to approach the problem


Developing sampling plans for acceptance sampling is typically a well-documented process based on industry-accepted standards and practices. The objective is to detect whether a lot meets an acceptable quality level. Most quality managers use acceptable-quality-level tables to determine the number of parts to sample from a given lot size.

However, developing in-process sampling strategies is more than referring to tables; it requires an understanding of the manufacturing process, patterns of variability, historical stability of the process, and a willingness to use data to drive improvements.

Why in-process sampling matters

Done properly, sampling provides an early detection point so that operators can take corrective action before running any more unacceptable product. It may be common practice to conduct acceptance sampling only at the end of the run, but that approach gives no real-time warning when a process starts to misbehave.
The director of quality at a manufacturer of precision plastics for laboratory use described to me how an incident in which several pallets of finished product had to be scrapped, at significant cost to the company, became the impetus for changing his sampling approach. After determining where the problems lay in the moulding and packaging process, he changed the work procedures and began sampling during setup rather than at the end of the run. The chemical testing is time-consuming, but he now tests for the most likely contaminants first to catch problems early in the process.

What to measure
Deciding what to measure typically falls into one of two categories: part measurements, such as diameter and thickness; or process parameters, such as temperature and pressure. Sampling in both categories can indicate variability and instability in the process, and can be used to bring the process back on track. The goal is to detect special causes of process variability so that immediate corrective action can be taken.

Part measurement sampling uses control charts to track the process’s ability to maintain a stable mean with consistent variability about that mean. Ideally, the mean of the data stream sits very close to the feature’s target value. Any measurement outside the upper or lower control limits indicates that the process mean or variability has deviated from historical norms, and a number of additional patterns within the control limits act as early-detection warnings.
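To make the control-chart arithmetic concrete, here is a minimal sketch in Python using hypothetical bottle-weight subgroups, not data from the article. It computes X-bar and R control limits for subgroups of five using the standard A2, D3 and D4 chart constants and flags any subgroup whose mean or range falls outside those limits.

```python
# A minimal sketch of X-bar / R control limits for subgroups of size 5.
# The bottle-weight data below are hypothetical.
import numpy as np

# Each row is one subgroup of 5 consecutive measurements (e.g. grams).
subgroups = np.array([
    [50.1, 49.8, 50.0, 50.2, 49.9],
    [50.3, 50.1, 49.7, 50.0, 50.2],
    [49.9, 50.4, 50.1, 49.8, 50.0],
    [50.6, 50.5, 50.7, 50.4, 50.8],   # a drifting subgroup
])

A2, D3, D4 = 0.577, 0.0, 2.114        # standard chart constants for n = 5

xbar = subgroups.mean(axis=1)                          # subgroup means
rng = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges

xbar_bar, r_bar = xbar.mean(), rng.mean()
ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

for i, (m, r) in enumerate(zip(xbar, rng), start=1):
    flags = []
    if not (lcl_x <= m <= ucl_x):
        flags.append("mean outside control limits")
    if not (lcl_r <= r <= ucl_r):
        flags.append("range outside control limits")
    print(f"subgroup {i}: mean={m:.2f} range={r:.2f} {'; '.join(flags)}")
```

In this toy data set the fourth subgroup’s mean falls above the upper control limit, which is exactly the kind of deviation from historical norms described above.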

When deciding what process parameters to measure, choose those that have a direct effect on quality, and then determine what the optimum settings should be to deliver consistent quality. For example, if the temperature of an incoming fluid has no effect on the outgoing quality, but the flow rate does, then it’s better to monitor the flow rate.

Setting sampling requirements
After establishing what to measure, the next step is to determine the actual sampling requirements. How often should you take samples? How many measurements per sample? How do you factor in the risks and costs? When deciding how often to sample, it’s helpful to think about how long the process can chug along and still produce good product. If the process tends to be very stable, then minimal sampling, for instance at the beginning, middle and end of the run, may suffice. However, if the process is less predictable, then more sampling is in order.

If in-process adjustments are typically needed every couple of hours, then consider taking at least two samples between adjustment periods. These samplings will let you know what happens with the process within each adjustment period. In addition to time-based sampling intervals, samples should also be taken whenever there is a known change in the process, such as when the shift changes, during setup, at start-up, or when tooling is refreshed.
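As a rough illustration of combining time-based and event-triggered sampling, the sketch below (Python, with an assumed one-hour interval and a hypothetical helper called due_for_sample) triggers a sample when either the interval has elapsed or a known process change occurs.

```python
# A minimal sketch (hypothetical names and interval) of combining time-based
# sampling with event-triggered samples: sample at a fixed interval and
# additionally whenever a known process change occurs.
from datetime import datetime, timedelta

SAMPLE_INTERVAL = timedelta(hours=1)   # at least two samples per two-hour adjustment period
EVENT_TRIGGERS = {"shift_change", "setup", "start_up", "tooling_refresh"}

def due_for_sample(last_sample_time, now, event=None):
    """Return True when a sample should be taken."""
    if event in EVENT_TRIGGERS:
        return True
    return now - last_sample_time >= SAMPLE_INTERVAL

# Example: one hour since the last sample, or a setup event, both trigger a sample.
last = datetime(2014, 9, 1, 8, 0)
print(due_for_sample(last, datetime(2014, 9, 1, 9, 0)))            # True (interval elapsed)
print(due_for_sample(last, datetime(2014, 9, 1, 8, 20), "setup"))  # True (event trigger)
```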

In some cases, there is no historical process knowledge on which to base a reasonable sampling strategy. In these cases, consider sampling 100% for as long as it takes to expose the process variability patterns, and then, if conditions warrant, reduce sampling as you begin to better understand the process behaviour.

Sample size
Most textbooks use sample sizes of 1, 3, 5 or 10. When the sample size is greater than one measurement, the assumption is that the values are consecutive: if three bottle weights make up the subgroup, those three bottles were manufactured consecutively.

The purpose of a subgroup is to provide a snapshot of a process’s mean and the short-term variability about that mean. Capturing five consecutive measurements gives a more definitive measure of the mean and short-term variability than three, but at some point the strength of the statistic does not appreciably improve with increasing sample size. As a rule, you gain more process knowledge by taking samples more frequently than by increasing the number of measurements within a sample.
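A quick calculation shows why the statistic stops improving appreciably. Assuming, for illustration, a short-term standard deviation of 1.0, the standard error of the subgroup mean shrinks only with the square root of the sample size, so moving from five to ten measurements buys far less precision than moving from one to three.

```python
# Diminishing returns from larger subgroups: the standard error of the
# subgroup mean falls with the square root of n (sigma = 1.0 is assumed).
import math

sigma = 1.0  # assumed short-term standard deviation of the process
for n in (1, 3, 5, 10):
    print(f"n={n:2d}  standard error of the mean = {sigma / math.sqrt(n):.3f}")
```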

Sometimes a sample size of one is the only size that makes sense. For example, the differences between three consecutive samples of a homogeneous product (e.g., agitated gravy in a mixing tank) would only be an indication of measurement error, so a better strategy in this situation is a sample size of one. If the mixing tank were sampled again, say 30 minutes later, the difference between the two measurements would indicate how much the feature has changed since the last sample. A sample size of one is also appropriate when only one value exists, such as overtime hours for a given day, maximum amperage draw, or peak temperature for a given oven cycle.
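For a sample size of one, an individuals and moving-range (I-MR) chart is the usual tool. The sketch below uses hypothetical readings taken 30 minutes apart and the standard constants 2.66 and 3.267 for limits based on a moving range of two consecutive values.

```python
# A minimal sketch of individuals / moving-range (I-MR) limits for a sample
# size of one, e.g. a single tank reading every 30 minutes (hypothetical data).
import numpy as np

values = np.array([12.1, 12.3, 12.0, 12.4, 13.9, 12.2])   # one reading per sample
mr = np.abs(np.diff(values))                                # moving ranges between consecutive samples

x_bar, mr_bar = values.mean(), mr.mean()
ucl_x = x_bar + 2.66 * mr_bar        # 2.66 = 3 / d2, with d2 = 1.128 for a moving range of 2
lcl_x = x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar              # D4 constant for n = 2

print(f"individuals limits: {lcl_x:.2f} to {ucl_x:.2f}")
print(f"moving-range upper limit: {ucl_mr:.2f}")
# List any readings outside the limits (none in this tiny example).
print("out-of-control points:", np.where((values > ucl_x) | (values < lcl_x))[0])
```

In practice the limits would be set from a stable baseline period rather than from a handful of readings as in this illustration.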

When should you change your strategy?
There are typically three situations that call for modifying a sampling strategy. The first is when a failure occurs but is not detected until further downstream in the process; this indicates a need to change what is measured upstream or to increase the sampling frequency. The second is when no failures are ever detected, which suggests less frequent sampling may be appropriate. The third is when the measured product feature shows no variation at all, which indicates either that the process produces to tighter tolerances than the measurement system can detect, or that someone is arbitrarily entering a value they know will fall within limits.
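The third situation is easy to screen for automatically. A minimal sketch, assuming a simple list of recent recorded values and a hypothetical helper called no_variation, flags a stream in which the last 20 entries are identical.

```python
# Flag a measurement stream with no variation at all: either the gauge cannot
# resolve the process variation, or values are being entered by hand.
def no_variation(recent_values, window=20):
    """Return True if the last `window` recorded values are identical."""
    tail = recent_values[-window:]
    return len(tail) >= window and len(set(tail)) == 1

print(no_variation([10.0] * 25))            # True  -> investigate gauge resolution or data entry
print(no_variation([10.0, 10.1] * 13))      # False -> normal variation is present
```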

The second life of data
With in-process data collection, the useful life of a single data point is short if it is used only for real-time feedback, but retained as history those same data build an invaluable process database. All data collected for real-time decisions take on a ‘second life’, helping quality professionals determine what to do today to make things better tomorrow. Analysing and mining these data can yield golden nuggets of process improvement, and slicing and dicing them can expose relationships that would otherwise go undetected.

Finally, don’t let in-process sampling improvement efforts stagnate. Make sure there are always at least two people in the organisation who really know the in-process sampling strategies and are constantly looking for new ways to use the SPC software.
www.infinityqs.com
