For most engineering companies, using CAD is now the norm. Whether a company was an early adopter or joined later in the uptake curve, there has been sufficient time for a significant build-up of CAD data. There has also been significant growth in the movement of that data.
Data is migrated internally from legacy vaults to new systems, and from legacy and current work in progress to long-term archives. Large volumes of data constantly move between companies working collaboratively.
A common factor in all of this is CAD data translation. Wherever data moves, it typically crosses from one CAD system to another, or from a native CAD format to a neutral format – in either case, a translation. This is the root of a problem that needs to be taken seriously.
CAD data translation is now so effective and so commonplace that, unless there are “in your face” error messages, everybody assumes the translation has worked perfectly and that the destination CAD model is precisely the same as the source CAD model. This is a dangerous assumption, because a translation can introduce changes that are not categorised as errors. They may not be errors at all, just differences that arise from the different ways in which the source and destination CAD systems work. These differences matter, however, when they push the model outside acceptable engineering tolerances.
The challenge is knowing when these differences occur. If there are no messages from the CAD data translation process, the only way to spot the anomalies would be to check every single translation. That means checking every translated model against its corresponding source model – not a trivial exercise, and if done manually a very difficult, slow and expensive operation.
If CAD data translation has become more common and more effective than it used to be, why is there a need to add the further complexity of validation and checking?
The fact that translating and sharing data is so commonplace and relatively straightforward makes users assume it is always successful and always precise. This lulls us into a false sense of security. Translations can fail for many reasons, including poor-quality source data and incorrectly set translator options. In an almost self-contradictory way, a spectacular failure is not a problem because it is so obvious. Very subtle changes, on the other hand – alteration of model faces, removal of data, creation of sliver faces – can go undetected. As a result a file can be sent out with tangible imperfections, and the recipient may receive it and work with it, imperfections and all. Sometimes these imperfections do not become apparent until something has actually been manufactured, and by then significant costs have already been incurred.
Failure to validate and check CAD parts as part of the interoperability process can actually nullify existing manufacturing quality checks. For example, it is a common quality control process to use co-ordinate measuring machines to validate and check machined parts. The precise measurements are checked against the CAD model that was used to create the tool paths, and often a quality certificate is generated to confirm that the manufactured part is within the required tolerances. The problem is that if the manufactured part is compared against translated data that has not itself been validated, that data may contain small but significant imperfections. The ideal comparison would be between the original source design data and the manufactured part. Since in many supply chains this is not possible, the manufactured part should at least be compared against data which has been validated.
Another issue lies in the commercial world that surrounds design, engineering and manufacturing. We are moving into ever more litigious times, with ownership of responsibility brought into question whenever something goes wrong, and with increasingly high costs attached to blame – where it can be proved. The business implications are so significant that not validating and checking translated and/or shared data could be seen as a dereliction of duty.
At face value it might seem that only a few companies need to validate and check large volumes of data, but this too is a wrong assumption. There are three clear circumstances which call for CAD data validation and checking, and most engineering companies are going to experience one or more of them over the next few years, either as a “one-off” or as part of an ongoing business process. They are:
1. The migration of legacy CAD data.
2. Collaborative working in a supply chain.
3. Setting up a long term data archive.
Having established that there is a problem, and that there are circumstances in which most design, engineering and manufacturing companies are going to have to undertake validation and checking, it is time to consider the technical requirements of CAD data validation and checking.
The starting point is to put the process into perspective by establishing a realistic expectation of what is required and what might be achieved. A simplistic view would be to aim to prove that the translated CAD model is exactly the same as its pre-translation source model. A nice idea, but not necessarily a realistic one.
CAD systems are different, even when they use the same internal kernel for holding data. The different ways in which they work, and the different tolerances they apply, can result in a perfectly valid translation producing a model that is not quite the same as the pre-translation source model. Where the CAD systems have entirely different kernels, the possibility of differences is even greater.
These differences are important to engineers because they need to measure and make comparisons; they are trained to ensure that design measurements are carried through to the manufactured part. Sometimes the differences between pre-translation source models and post-translation destination models are so small as to be insignificant, but it is equally possible that, even after a successful translation, the differences are mathematically significant – and sometimes large enough to be significant in engineering terms.
For example, even assuming that no errors were reported during a translation, a comparison of the mass properties of the source and destination models may show differences. Moreover, the mass properties may be shown to be so similar as to be considered the same, and yet the shapes could still be different. Comparison for the purpose of CAD model validation and checking is not simple, and it should include procedures to compare both mass properties and shape.
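As an illustration of the kind of comparison involved, the sketch below checks mass properties of a source and a destination model against a relative tolerance. The property names, figures and tolerance are all assumptions made for illustration, not output from any particular CAD system; in a real process the values would come from the systems' own mass-property reports.

```python
# Minimal sketch: compare mass properties of a source model and its
# translated destination within a relative engineering tolerance.
# All figures below are illustrative.

def within_tolerance(a: float, b: float, rel_tol: float) -> bool:
    """True if a and b agree to within a relative tolerance."""
    scale = max(abs(a), abs(b), 1e-12)
    return abs(a - b) / scale <= rel_tol

def compare_mass_properties(source: dict, dest: dict, rel_tol: float = 1e-4):
    """Return a list of (property, source value, dest value, passed) tuples."""
    results = []
    for prop in ("volume", "surface_area"):
        results.append((prop, source[prop], dest[prop],
                        within_tolerance(source[prop], dest[prop], rel_tol)))
    # Centre of mass is a point; compare coordinate by coordinate.
    for axis, s, d in zip("xyz", source["centre_of_mass"], dest["centre_of_mass"]):
        results.append((f"centre_of_mass.{axis}", s, d,
                        within_tolerance(s, d, rel_tol)))
    return results

# Hypothetical figures for a source model and its translation.
source = {"volume": 125000.0, "surface_area": 15030.2,
          "centre_of_mass": (12.50, 4.80, 7.25)}
dest   = {"volume": 124999.1, "surface_area": 15030.9,
          "centre_of_mass": (12.50, 4.80, 7.26)}

for prop, s, d, ok in compare_mass_properties(source, dest):
    print(f"{prop:20s} {s:12.4f} {d:12.4f} {'PASS' if ok else 'FAIL'}")
```

Note that with these illustrative numbers the volume and surface area pass while one centre-of-mass coordinate fails – exactly the kind of small, silent discrepancy the article is describing. And, as the text points out, even a full set of passes would not prove the shapes are the same.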
Making the comparison
The complexity of the comparisons required to establish that the data is within acceptable engineering tolerances, and the volumes of data likely to be involved, indicate that manual validation and checking is not practical. Such an approach would be costly in terms of equipment and software licences, would require many dedicated man-hours, and would be error prone.
It is also unlikely that many companies have employees with the specific skills needed to make a valid comparison between source and destination CAD models. The STEP community has championed CAD data validation and checking for some time, and its focus is on making the following comparisons:
• Surface area
• Centre of mass
• Cloud of points
Standalone curves and PMI:
• Polyline length
• Polyline centroid
• Unicode string
Additionally, data counts might be made of:
• No. of PMI entities
• No. of surfaces
• No. of solids
And assembly structure validation can be carried out.
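The checks listed above lend themselves to a programmatic treatment. The sketch below is a minimal illustration, assuming the property values and counts have already been extracted from the source and destination files; the property names, tolerances and figures are all hypothetical.

```python
# Minimal sketch: apply STEP-style validation checks to a pair of
# models.  Continuous quantities are compared within a relative
# tolerance; counts and strings must match exactly.

# Per-property relative tolerances (illustrative values).
PROPERTY_TOLERANCES = {
    "surface_area":      1e-4,
    "centre_of_mass":    1e-4,
    "polyline_length":   1e-4,
    "polyline_centroid": 1e-4,
}

# Properties that must match exactly.
EXACT_PROPERTIES = ("unicode_string", "pmi_entity_count",
                    "surface_count", "solid_count")

def check_model_pair(source: dict, dest: dict) -> dict:
    """Return {property_name: True/False} for each check."""
    verdicts = {}
    for prop, tol in PROPERTY_TOLERANCES.items():
        s, d = source[prop], dest[prop]
        # A property may be a scalar or a tuple of coordinates.
        s = s if isinstance(s, tuple) else (s,)
        d = d if isinstance(d, tuple) else (d,)
        verdicts[prop] = all(
            abs(a - b) <= tol * max(abs(a), abs(b), 1e-12)
            for a, b in zip(s, d))
    for prop in EXACT_PROPERTIES:
        verdicts[prop] = source[prop] == dest[prop]
    return verdicts

# Hypothetical extracted values; the destination has gained one extra
# surface (e.g. a sliver face introduced by translation).
source = {"surface_area": 15030.2, "centre_of_mass": (12.5, 4.8, 7.25),
          "polyline_length": 88.4, "polyline_centroid": (3.1, 0.0, 2.2),
          "unicode_string": "Ø10 H7", "pmi_entity_count": 14,
          "surface_count": 212, "solid_count": 1}
dest = dict(source, surface_count=213)

verdicts = check_model_pair(source, dest)
failures = [p for p, ok in verdicts.items() if not ok]
print("Failed checks:", failures)
```

Here the extra surface would be caught by the surface count even though every measured quantity agrees, which is why the counts belong alongside the tolerance-based comparisons.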
There is no reason why the same comparison criteria should not be applied to CAD models that are not in the STEP format, and the only realistic way to address this level of detail is programmatically, as part of a complete process.
CAD data validation and checking should be a component part of a total solution, and each validation and checking solution should be developed to meet the precise needs of the organisation it serves. These processes will also differ for each of the use cases, i.e. data migration, supply chain interoperability and long-term data archiving. This means that no single generic process can detail what is required in all cases. What follows, however, is a list of some of the elements of a CAD data validation module as it might exist in a CAD data migration project.
The key elements are:
• Definition of scope and objectives
• Identification and selection of data
• Identification of preferred method for data translation
• Identification of preferred method for comparison of source and destination models
• Definition of acceptable tolerances for differences identified between source and destination models
• Identification of method for comparing differences
• Definition of how to handle differences identified e.g. exception reporting
• Selection of “automation engine”
• Creation of automated audit trail
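The last few elements above – the automation engine, exception handling and audit trail – can be tied together in a single loop. The sketch below shows one possible shape for such a loop; the `translate()` and `compare()` functions are placeholders standing in for real translator and comparison tools, and every name, path and value in it is an assumption made for illustration.

```python
# Minimal sketch of an "automation engine": translate each selected
# model, compare it with its source, write every result to an audit
# trail, and report only the exceptions for manual review.

import datetime
import json

def translate(path: str) -> str:
    # Placeholder: a real implementation would invoke the chosen translator.
    return path.rsplit(".", 1)[0] + ".step"

def compare(source_path: str, dest_path: str) -> dict:
    # Placeholder: a real implementation would run the mass-property,
    # shape and count comparisons against the agreed tolerances.
    return {"max_deviation": 0.0005, "within_tolerance": True}

def run_migration(models, tolerance_mm=0.001, audit_path="audit_trail.jsonl"):
    """Process each model; return the list of out-of-tolerance records."""
    exceptions = []
    with open(audit_path, "a") as audit:
        for src in models:
            dest = translate(src)
            result = compare(src, dest)
            record = {
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "source": src, "destination": dest,
                "tolerance_mm": tolerance_mm, **result,
            }
            audit.write(json.dumps(record) + "\n")   # audit trail entry
            if not result["within_tolerance"]:
                exceptions.append(record)            # exception report
    return exceptions

exceptions = run_migration(["bracket.catpart"])
print(f"{len(exceptions)} model(s) need manual review")
```

The design point is that every translation is logged, pass or fail, so the audit trail itself becomes the evidence that validation took place – while human attention is reserved for the exceptions.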
What needs to be done?
There is no silver bullet – no single action that will mitigate or avoid all the risks. We have not faced this problem before, and it is unlikely that many engineering companies will be able to solve it themselves. This is a specialised niche within engineering and design computing. For such a process to be designed and implemented effectively, without disrupting normal design and manufacturing throughput, the best approach may be to engage specialists, such as Theorem Solutions' engineering data consultancy team, on an investigatory project.