QMT Features: April 2013
Vision systems – who has the edge?
 Edge detection algorithms allow even complex parts to be measured in a fully automated program without the need for expensive precision fixturing. By Geoff Jackson, Baty.

Non-contact measurement is used to solve many difficult measurement applications. The ability to measure parts without touching them has obvious advantages and the range of applications that benefit from this approach is huge, spanning all industry sectors including medical, packaging, electronics, aerospace, printing, automotive and plastics / rubber to name but a few.

Optical, non-contact measurement has been used for many decades; the traditional Shadograph projects an image of the part to be measured onto a glass screen. In the early years this image was compared with an accurately drawn overlay chart placed over the screen, so that the user could compare the projected image with the drawn master and use their judgement to decide whether the shape was correct (hence the generic term ‘optical comparator’).

Latterly, 2D measurement functionality was added by moving the part in the X and Y axes using a precision coordinate stage with high resolution linear encoders to log the distance moved. In this case, measurements are made by targeting key points of the projected image using a fine crosshair etched onto the projection screen. Simple distances can be measured and, with the aid of some geometric calculations, features such as circles, arcs and lines can be determined from coordinate data collected by targeting points along the edge of such features.
This new measurement functionality has transformed the profile projector from a simple comparator to a cost effective 2D coordinate measuring machine.
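The geometric calculation step can be illustrated with a short sketch: a least-squares circle fit (the Kasa method) recovers centre and radius from coordinate points targeted around a hole's edge. The point values below are invented for the example and do not represent any particular system's algorithm:

```python
import math
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 + D*x + E*y + F = 0
    for D, E, F, then convert to centre and radius."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = math.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r

# Points targeted around a 10 mm diameter hole centred at (12, 8) -- invented data.
pts = [(12 + 5 * math.cos(t), 8 + 5 * math.sin(t))
       for t in (0.1, 0.9, 2.0, 3.1, 4.4, 5.5)]
(cx, cy), r = fit_circle(pts)
print(round(cx, 3), round(cy, 3), round(r, 3))   # → 12.0 8.0 5.0
```

With more points than unknowns, the fit also averages out small targeting errors, which is why collecting many edge points improves repeatability.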

Like the traditional ‘Shadograph’, vision systems use lighting and optics to project a magnified image of the part. Instead of projecting the image onto a screen, however, it is projected onto a camera chip, allowing it to be displayed as a digital image within a software package that works in conjunction with the coordinate information from linear encoders fitted to the system’s workstage.

By analysing various parameters of each pixel and comparing them with those of neighbouring pixels in a specified area of the image, the software can determine the edge of features such as circles, lines and arcs using a suite of edge detection tools. This is one of the major advantages of vision systems over traditional profile projectors, as the measurement is no longer a subjective decision influenced entirely by the operator. Once the feature’s edge pixels are identified, the position of each pixel can be used to return coordinate information, which generates a model from which the feature can be calculated. This makes it possible to collect a relatively large amount of coordinate data from each image grab to calculate the feature sizes.
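The neighbour-comparison idea can be sketched as a simple finite-difference contrast test: pixels whose intensity differs sharply from an adjacent pixel are flagged as edge pixels. The threshold value and the synthetic image below are illustrative only; a commercial edge tool is considerably more sophisticated:

```python
import numpy as np

def edge_pixels(tool_region, threshold=50.0):
    """Return (row, col) coordinates of pixels whose intensity differs
    sharply from a neighbouring pixel -- a crude stand-in for an edge tool."""
    img = tool_region.astype(float)
    # Absolute differences with horizontal and vertical neighbours.
    gx = np.abs(np.diff(img, axis=1))          # shape (h, w-1)
    gy = np.abs(np.diff(img, axis=0))          # shape (h-1, w)
    grad = np.zeros_like(img)
    grad[:, :-1] = np.maximum(grad[:, :-1], gx)
    grad[:-1, :] = np.maximum(grad[:-1, :], gy)
    rows, cols = np.nonzero(grad > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Synthetic 5x6 tool region: dark part (10) on the left, backlight (200) on the right.
region = np.full((5, 6), 10)
region[:, 3:] = 200
print(edge_pixels(region))   # one edge pixel per row, at the light/dark boundary
```

Each flagged pixel contributes one coordinate point, which is how a single image grab can yield a large quantity of data for the feature calculation.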

The measured result is not only produced very quickly; thanks to the averaging effect of such a large quantity of data, it is also very repeatable. But is it correct?

If we consider the Shadograph once again, the image here is usually a silhouette of the part, which provides a very clear contrast between the backlit area of the image and the shadow formed by the part. If such an image were analysed using a typical vision system edge detection algorithm, one would expect to see a sharp change in the characteristics of the pixels as the edge is scanned from light to dark, making it relatively easy to decide at which point in this transition the data points should be taken, i.e. the position of ‘the edge’.
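Deciding where in the transition the edge lies can be sketched as a 1D scan across the intensity profile: one common convention, assumed here purely for illustration, is to place the edge where the profile crosses the 50% level between light and dark, with linear interpolation between pixels giving a sub-pixel position:

```python
def edge_position(profile, low=None, high=None):
    """Locate a light-to-dark edge along a 1D intensity profile by finding
    where it crosses the 50% level, interpolating linearly between pixels."""
    if low is None:
        low = min(profile)
    if high is None:
        high = max(profile)
    level = (low + high) / 2.0
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - level) * (b - level) <= 0 and a != b:
            return i + (level - a) / (b - a)   # sub-pixel position
    return None   # no crossing found in the scan

# Silhouette scan: backlit pixels (~200) then shadow (~10) -- a sharp transition.
scan = [200, 199, 198, 120, 15, 11, 10]
print(edge_position(scan))   # falls between pixels 3 and 4
```

With a clean silhouette the crossing is unambiguous; the difficulty described below arises when a surface image contains several plausible transitions within the same tool.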

When considering a surface image of a component, there are usually several edges that may be included in the area defined by the video measurement tool, so how can the vision system user be sure that the software is taking data points on the intended edge?

Sometimes it is possible to mask edges that are not of interest by adjusting the surface lighting so as to hide them in shadow, leaving the desired edge prominent for the software to detect. In order to achieve this it is necessary to have the ability not only to create variable segments of light but to be able to position the segment at the desired radial position relative to the part. Baty’s 64 LED segmented lighting was designed to do exactly that and the graphical user interface makes it incredibly easy to use. 

There are many times however, when this is not possible. It is essential therefore to have the ability not only to detect multiple edges and distinguish between them, but to know which of them represents the point at which a measurement is to be made.
Baty’s Venture systems all feature a 3 megapixel CMOS sensor which, when combined with their zoom optics, results in a pixel size of 0.39 micron at the highest optical magnification. This allows each edge transition to be analysed in sub-micron steps, giving a more accurate result.

In the example above, figure Edge 2, we can see that there are two obvious edges, and this is also represented by the graphic. Baty’s segmented LED lighting has been used to create good contrast but, due to the bright reflective surface of the top plane, the pixels that offer the highest level of contrast are not on the desired edge.
Using ‘standard’ edge detection criteria, a vision system might return the data points shown in red.

The normal approach here would be to narrow the scanning width of the edge detection tool so as to exclude this reflective area, forcing the software to consider and interpret only a single edge. The problem with this is that at the high magnification required for accurate edge detection, such a small scanning width might equate to only a few tenths of a millimetre. When a large batch of components is measured, if each part is not positioned precisely in a fixture, or if the dimensional range from one part to another varies significantly, the feature may fall completely outside the scanning range of the tool; the measurement then cannot be made and the automated inspection stops, requiring manual intervention by the operator.

All models in Baty’s range of vision systems use the same lighting, camera and optics. The edge detection algorithms in our Fusion metrology software were developed in conjunction with these key components and optimised accordingly. The software features advanced edge detection controls that allow the user to teach the system the edge that is of interest and how to recognise it when compared to similar edges that surround it. Unique edge detection parameters can be taught for every measurement, if required. These are automatically saved along with lighting and camera parameters for every measurement so that the same measurement conditions are used each time the inspection program is run.
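One way such ‘teach and recognise’ behaviour might work is sketched below: each candidate edge found in the tool’s scan is described by a few characteristics (here polarity, contrast and position along the scan), and the system picks the candidate that best matches the taught parameters rather than simply the highest-contrast one. The parameter names and scoring are invented for illustration and are not Fusion’s actual implementation:

```python
def pick_edge(candidates, taught):
    """Return the candidate edge that best matches the taught parameters,
    rather than simply the strongest edge in the scan."""
    def score(edge):
        if edge["polarity"] != taught["polarity"]:
            return float("inf")                       # wrong direction: reject
        # Smaller score = closer to the taught contrast and position.
        return (abs(edge["contrast"] - taught["contrast"])
                + abs(edge["position"] - taught["position"]))
    return min(candidates, key=score)

# Taught during programming: a light-to-dark edge of moderate contrast.
taught = {"polarity": -1, "contrast": 90, "position": 14.0}
# Found at run time: a bright reflection and the intended edge.
candidates = [
    {"polarity": -1, "contrast": 180, "position": 6.5},   # bright reflection
    {"polarity": -1, "contrast": 95,  "position": 13.8},  # intended edge
]
print(pick_edge(candidates, taught)["position"])   # picks the taught edge, 13.8
```

Because the selection no longer depends on the edge being the strongest in the scan, the tool can be given a wide scanning range without locking onto the wrong transition.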

In the above example, using exactly the same lighting conditions, the edge detection setup has been changed to return the correct data points, as shown in figure Edge 1. This approach not only ensures that the correct data points are taken for each measurement; it also allows the edge scanning tool to be created with a much wider range, allowing even complex parts to be measured in a fully automated program without the need for expensive precision fixturing.

