Elsevier

Computers in Industry

Volume 64, Issue 5, June 2013, Pages 514-523

A coupled penalty matrix approach and principal component based co-linearity index technique to discover product specific foundry process knowledge from in-process data in order to reduce defects

https://doi.org/10.1016/j.compind.2013.02.009

Abstract

The foundry process is a complex process with more than 100 parameters that influence the quality of the final cast component. It is a process with multiple optimal conditions: for two foundries manufacturing the same alloy and cast geometry, the process and alloy conditions used by one foundry will most likely differ from those used by the other. It is also currently difficult for a foundry process engineer to link process knowledge available in the published literature to specific process conditions and defects in a foundry.

A concept of product and foundry specific process knowledge has been introduced so that the intellectual property created every time a cast component is poured can be stored and reused in order to reduce defects. A methodology has been proposed for discovering noise-free correlations and interactions in the data collected during a stable casting process, so that small adjustments can be made to several process factors in order to progress towards a zero-defects manufacturing environment. The concepts have been demonstrated on an actual but anonymised in-process data set on chemical composition for a nickel-based alloy.

Highlights

► Innovative approach for casting process optimization using in-process data.
► Ability to discover foundry and product specific process knowledge.
► Ability to associate casting processing conditions with product specifications.
► Ability to reduce rejection rates by 1% from existing 5% level.
► A toolkit that would be of immediate use to casting process engineers across the world.
► A methodology that is transferable to other net shape forming processes and beyond.

Introduction

The 2010 global casting production was around 91.7 M tonnes [1]. For the same year, according to the European Foundry Association's website, the average production cost for ferrous foundries was €1.15 billion for every million tonnes produced and €2.9 billion for every million tonnes of non-ferrous castings [2]. From Refs. [1] and [2], it can be further inferred that the global casting industry produced castings worth €130 billion in the year 2010. Although enormous developments have taken place in the field of foundry technology relating to simulation software, moulding machines, binder formulations and alloy development, it is common knowledge that foundries still lose 4–5% of their revenue annually. In 2010, the direct cost to the global economy as a result of a 1% rejection rate in foundries was €1.3 billion. In addition, the environmental damage cannot be ignored: a 1% rejection rate corresponds to the production of 0.92 M tonnes of castings, which result in over 0.46 M tonnes of landfill waste [3], equivalent to a third of the landfill waste produced by London in 2010 [4], and 0.92 M tonnes of emissions (equivalent to the annual emissions of 920,000 green cars). These costs can either be avoided or minimised if efforts are made to discover product specific process knowledge from in-process data and re-use it to optimise the casting process for existing and new cast components.
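The cost and waste figures quoted above follow from simple proportions; a minimal back-of-envelope check (the 50% landfill fraction is inferred from the quoted 0.92 and 0.46 M tonne figures, not stated explicitly in the sources):

```python
# Back-of-envelope check of the rejection-cost figures quoted above.
production_mt = 91.7          # 2010 global casting production, million tonnes [1]
total_value_eur_bn = 130.0    # inferred global value of castings, billion euros

rejection_rate = 0.01                                  # a 1% rejection rate
cost_eur_bn = rejection_rate * total_value_eur_bn      # direct cost to the economy
scrapped_mt = rejection_rate * production_mt           # tonnage produced in vain
landfill_mt = 0.5 * scrapped_mt                        # assumed landfill fraction

print(round(cost_eur_bn, 2), round(scrapped_mt, 2), round(landfill_mt, 2))
# 1.3 0.92 0.46
```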

A foundry process employs a series of sub-processes to make castings, whether turbine blades for aerospace applications or cylinder blocks for cars, with over a hundred factors across its sub-processes influencing the quality of castings. It is no longer adequate to focus on each individual sub-process (e.g. the melting process) and to try to control its process parameters (e.g. chemistry) in isolation in order to resolve a shrinkage defect in a casting. For example, in a high pressure sand casting process, sand parameters such as permeability, moisture content, green compression strength, loss on ignition, etc., are equally important along with chemical composition, pouring temperature, melt preparation and pouring practice. Commercial simulation packages may be used to quantify design data for each product (e.g. wall thickness ratio, feeder modulus, max/min temperature ratio, max/min gradient ratio, metallostatic head, cavity fill time, etc.).

The published literature (e.g. journal and conference papers, textbooks) normally gives trends rather than precise tolerance limits. An example of such a trend is given by Kantor et al. [5] for a nickel-based superalloy: ‘increased aluminium and niobium additions result in decreased inter-dendritic shrinkage porosity. For the alloys doped with higher Al and Nb concentration, MC carbides form corresponding to a greater weight fraction liquid. The analysis reveals that the increased fraction liquid for the doped composition improves mush permeability and reduces pressure drop in the interdendritic liquid and therefore reduces shrinkage porosity’. As given in the research paper, for a particular turbine blade in this alloy, %Al and %Nb values of 6.25 and 2.29 may be satisfactory for reducing shrinkage; however, they may not be adequate for another turbine blade that requires %Al and %Nb values of 3.2 and 0.8 respectively. For the same alloy and component, the chemical compositions tend to be foundry specific rather than product specific. Each foundry has its own secret recipe for manufacturing a cast component to its specifications (e.g. dimensional control, surface finish, mechanical properties and cost).

In the literature, data, information and knowledge have many interpretations and definitions. However, for the benefit of foundry process engineers, process knowledge and casting process optimisation have been redefined as follows:

Process knowledge for a given cast component is

  • i. the actionable information

  • ii. in the form of an optimal list of measurable factors and their ranges (e.g. niobium: 0.77–0.827%; aluminium: 3.24–3.306%; zirconium: 0.026–0.05%; carbon: 0.095–0.113%)

  • iii. in order to meet desired business goals (process responses) (e.g. minimize defect rates, porosity scores or rework time, and/or maximize mechanical properties)
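Process knowledge in the sense of (ii) can be stored and reused programmatically as a set of optimal factor ranges. A minimal sketch, using the example ranges quoted in the definition above (the dictionary layout and the `out_of_range` helper are illustrative, not from the paper):

```python
# Process knowledge for one cast component: optimal factor ranges (wt%),
# using the values quoted in the definition above.
process_knowledge = {
    "niobium":   (0.77, 0.827),
    "aluminium": (3.24, 3.306),
    "zirconium": (0.026, 0.05),
    "carbon":    (0.095, 0.113),
}

def out_of_range(heat_chemistry, knowledge):
    """Return the factors of a heat that fall outside the optimal ranges."""
    return {f: v for f, v in heat_chemistry.items()
            if f in knowledge and not (knowledge[f][0] <= v <= knowledge[f][1])}

# A hypothetical heat: aluminium sits above its optimal range.
heat = {"niobium": 0.80, "aluminium": 3.35, "zirconium": 0.03, "carbon": 0.10}
print(out_of_range(heat, process_knowledge))   # {'aluminium': 3.35}
```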

Casting process optimisation is then defined as a methodology of using existing process knowledge to discover new process knowledge by studying patterns in data. To reduce the rejection rate from an existing level of 5% to 2–3%, or to zero, it is necessary to discover and document casting process knowledge, as defined above, for individual parts. In an effort to relate mechanical properties and casting defects to processing conditions, it is essential that product specific actionable information is (a) discovered by analysing cast component specific in-process data, (b) verified against trends available in the published literature and (c) confirmed in a confirmatory trial, in order to develop product specific process knowledge.

The majority of foundries in the world routinely collect in-process data on chemistry, mould/core and melting/pouring parameters. However, very few foundries are able to reuse the data to discover product specific process knowledge as defined above. Typically, when problems occur in precision foundries, the rejection rates are around 4–5%. The number of factors for which in-process data is recorded exceeds 50, and the number of observations available for analysis ranges between 30 and 100. It is likely that a simultaneous adjustment to several factors is necessary to reduce the rejection rate by 1%. The applicability of existing techniques to achieve this goal is reviewed in Section 2. A novel data visualisation tool that discovers correlations in a factor subspace explaining most of the variance is proposed in this paper. The factors that have a significant influence on the total variance are identified and visualised in the proposed penalty matrix format in order to discover ‘optimal’ and ‘avoid’ correlations and interactions. Section 3 describes a penalty based approach to transform process response data, a novel graphical method of representing the correlations of all factors with a given response, and a technique to visualise correlations in a reduced factor space using the principal components that account for the majority of the variance. In Section 4, the results are discussed and compared by verifying main effects and interactions using penalty matrices. The paper is concluded in Section 5.

Section snippets

Methods for the discovery of process knowledge: existing research

In recent years there has been increasing interest in using data mining techniques to discover new process knowledge from existing data, either to verify certain hypotheses or to discover new patterns that can lead to the generation of new hypotheses [6]. Knowledge discovery can be achieved by using a variety of well-established methods, some of which are rooted in statistics, like analysis of variance (ANOVA) or principal component analysis, while others are based on artificial intelligence

Visualising in-process data via bubble diagrams and penalty matrices

One of the most frequent objectives of a process improvement study is to optimise one or more foundry process responses. These are typically measured in terms of defects, mechanical properties, dimensional variation, rework/quality cost, etc., and categorised as higher-the-better, lower-the-better or nominal-the-best. The in-process data is normally retrieved in an Excel format, with columns representing responses and factors and the values for each batch or heat being stored in the
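A penalty based transformation of a response, as named in the introduction, can be sketched for the nominal-the-best case as follows. This is a hedged illustration only: the 0–100 scale and the piecewise-linear interpolation between target band and tolerance limits are assumptions, not the paper's exact scheme.

```python
def penalty(value, target_low, target_high, limit_low, limit_high):
    """Map a nominal-the-best response to a penalty score:
    0 inside the target band, 100 beyond the tolerance limits,
    linear in between (interpolation scheme is an assumption)."""
    if target_low <= value <= target_high:
        return 0.0
    if value < target_low:
        if value <= limit_low:
            return 100.0
        return 100.0 * (target_low - value) / (target_low - limit_low)
    if value >= limit_high:
        return 100.0
    return 100.0 * (value - target_high) / (limit_high - target_high)

# A porosity score halfway between the target band and the upper limit:
print(penalty(7.5, target_low=2.0, target_high=5.0,
              limit_low=0.0, limit_high=10.0))   # 50.0
```

Higher-the-better and lower-the-better responses reduce to the same form with one side of the band left open.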

Visualisation of co-linearity indices

The signals, or noise-free correlations, identified using the co-linearity index are shown in Fig. 9. It should be noted that the new loading vectors (in the reduced space) displayed in Fig. 9 now have different magnitudes compared to the loading vectors in the original PC space, depicted in Fig. 6. The magnitude of a loading vector can be interpreted in terms of its contribution to the chosen PC subspace: the higher the magnitude, the greater the contribution to the PC
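The idea of loading-vector magnitudes and angles in a reduced PC subspace can be illustrated with a small generic PCA sketch on synthetic data (this is not the paper's implementation: the SVD-based PCA, the two retained components and the synthetic factors are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))                    # 50 heats x 6 factors (synthetic)
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=50)   # factor 1 strongly tied to factor 0

Xc = (X - X.mean(0)) / X.std(0)                 # standardise each factor
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2                                           # retained PC subspace (PC1-PC2)
loadings = (Vt[:k].T * s[:k]) / np.sqrt(len(X) - 1)   # loading vectors, shape (6, 2)

magnitude = np.linalg.norm(loadings, axis=1)    # contribution to the chosen subspace
angles = np.degrees(np.arctan2(loadings[:, 1], loadings[:, 0]))

# Co-linearity: a small angle between two loading vectors signals a strong
# positive correlation between the corresponding factors in this subspace.
print(abs(angles[0] - angles[1]) < 10)          # True: factors 0 and 1 are co-linear
```

A long, near-parallel pair of loading vectors is the kind of "signal" a co-linearity index plot is meant to surface; short vectors contribute little to the chosen subspace and are candidates for noise.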

Conclusions

Foundry process engineers need to think beyond the Six Sigma philosophy in order to progress towards a zero defect manufacturing environment. Foundries conduct design of experiments and undertake research and innovation tasks every day. Every time a batch of castings is manufactured, in-process data on over a hundred process variables across foundry sub-processes is generated. Some of the process variables have robust tolerance limits, which means that the variation within the tolerance limit or

Acknowledgements

The second author would like to thank the ASTUTE (http://www.astutewales.com/) project, funded via the European Regional Development Fund through the Welsh Government, for its financial support.


References (24)

  • P.A. Ferrari et al., An imputation method for categorical variables with application to nonlinear principal component analysis, Computational Statistics and Data Analysis (2011)
  • 45th Census Report, Modern Casting (2011). http://www.afsinc.org/files/2010censuslowres.pdf. Accessed...

Dr. Rajesh S. Ransing, an Associate Professor at Swansea University, holds one US patent and has published 40 papers in international journals, with an equal number of papers in conferences. He is a co-author of a John Wiley book titled “Fluid Properties at Nano Meso Scales” (238 pages). He has successfully supervised a number of post-doctoral and PhD researchers. He has led a number of research projects, with over one million pounds of research funding from sources such as the EPSRC (Engineering and Physical Sciences Research Council), Knowledge Transfer Partnerships and collaborative industrial research projects on foundry process optimisation related topics. He is currently leading the European 7Epsilon consortium. He is secretary of the Natural Computing Applications Forum (www.ncaf.org.uk).

C. Giannetti graduated in Applied and Computational Mathematics from the University of Pisa in 1996. After working for several years as an R&D software engineer for world-leading telecom companies, she is now a Research Officer in the College of Engineering at Swansea University. Her current research interests include the application of data mining techniques for the discovery of process knowledge, and ontology engineering.

Dr. Meghana Ransing is a graduate in Computer Engineering from Pune University, India, and holds a Ph.D. degree in Casting Defects Analysis from Swansea University, UK. Dr. Ransing carried out three years of post-doctoral research at Swansea University on a project jointly funded by the EPSRC (Engineering and Physical Sciences Research Council) and Rolls-Royce to develop a method based on self-learning principles to analyse in-process data in foundries. She later worked as project manager on the Knowledge Exploitation Fund (KEF) project and managed a team of seven developers to develop a web-based prototype of the researched method. She currently leads a start-up company, p-matrix Ltd, and gives her time to work as a co-ordinator for the 7Epsilon consortium (www.7Epsilon.org) and as treasurer of the Natural Computing Applications Forum (www.ncaf.org.uk).

    Morgan James graduated from Swansea University in July 2012 with a Master's in Mechanical Engineering. He is now a Graduate Engineer with Wales and West Utilities, a local gas transporter, and is working towards Chartered Engineer status. He lives with his wife Sally and 7-month old son Charlie.
