1 Introduction
Textile industries constitute one of the traditional sectors that must adapt continuously to meet ever-changing customer demand [12]. Approximately 75 million people work in these industries worldwide [24]. Textile industries play a significant role in employment in developing countries such as Bangladesh, where the sector generates employment for 4.4 million people and approximately 84% of foreign earnings come from exported ready-made garments [20]. In this competitive sector, quality control is the key to survival [22]. Since this sector in Bangladesh depends heavily on manual processes, it faces various problems in meeting buyers' requirements.
Shade variation of yarn is one of the significant problems faced by the textile dyeing sector. It refers to the discrepancy between a produced color and the desired color of a textile material [26]. Since yarn is the primary raw material used in fabric manufacturing, shade variation of yarn poses a problem when producing the buyer's required fabric shade [22]. This fault can occur for several reasons, such as operator irresponsibility, management issues, technical issues such as improper calibration of equipment, batching issues, and incorrect operational procedures [2]. Temperature, pH, dye liquor concentration, and fiber properties are also significant factors behind this problem. Due to the variability of these factors, about 20% shade variation can occur in dyed products when yarn from different spinners is used [2]. Such inconsistency in yarn dyeing results in buyers rejecting a product [3]. Consequently, due to this unwanted variation of color depth in yarn, industries dealing with yarn-dyed products must invest considerable money and time in redyeing or washing, which causes a significant annual loss [22]. Through continuous monitoring, one can avoid the shade variation problem by taking proper steps toward achieving the buyer's recommended shade and reducing the cost of wastage in bulk production. Hence, along with control of the dyeing factors, a system for monitoring yarn shade can minimize this loss [27].
A significant number of research studies have already been performed to monitor the shade variation of textile materials. For example, Abreu et al. [4] and Arikan et al. [1] suggested systems using visual and semi-automated inspection, respectively. However, these methods depend greatly on human judgment, which may create variability in yarn shade matching. To avoid this problem, Kandi et al. [11] proposed a system that monitors the difference between reference and sample color depth with a spectrophotometer; however, its installation cost and complexity make it impractical for small-scale textile industries.
Several research studies have measured the color difference between two samples using image processing techniques. For instance, Ding et al. [6] proposed the DISTS (Deep Image Structure and Texture Similarity) metric to analyze texture differences between images. Similarly, Deshpande et al. [5] suggested the PSNR metric, Li et al. [13] proposed the MS-SSIM metric, and Ieremeiev et al. [8] advocated the MDSI metric for assessing differences between images. However, these conventional metrics are complex to measure and cannot provide precise information on the color difference of textile materials. In contrast, Pandey et al. [17] proposed a specialized metric called Delta-E to compare the color depth between Digitally Modulated Screening (DMS) and Hybrid Modulated Screening (HMS). Again, Irshad et al. [9] used the Delta-E metric to develop a predictive model for assessing the shade difference of fabric. Because yarn, the raw material of fabric, is much finer, the effectiveness of these systems on yarn is not reported in these works, and these methods cannot provide a precise, real-time shade measurement system for yarn.
To address these limitations, we propose a low-cost, real-time yarn shade monitoring system based on an image processing technique. We attempt to find a suitable metric for detecting yarn shade variation, and we use Delta-E as the primary metric due to its consistency in yarn shade measurement compared with other conventional metrics. In our system, a user can check the shade status using a mobile phone camera. When the user clicks the 'Yarn shade check' button on the monitoring webpage, the system asks for images of the reference yarn shade and the sample yarn shade. Once both are provided, the system analyzes the images according to the pre-programmed color similarity metric and reports the status of the shade difference.
To create a precise yarn shade monitoring system, we make the following contributions in this article:
•
We propose a low-cost shade-checking system for yarn that can be applicable to the textile industry.
•
We determine a suitable metric over fourteen traditionally used metrics for precisely monitoring the shade variation of yarn.
•
We design a system where users can check yarn shade in real time through continuous monitoring on the web.
•
We apply this system to real-time data obtained from textile industries.
•
We evaluate the methodology by capturing the images directly from the bobbins for both cotton and synthetic yarn.
2 Background and Related Work
In this section, we discuss the existing shade variation monitoring systems and their effectiveness in determining yarn shade differences.
Various manual and semi-automatic approaches have been proposed to monitor the shade of textile materials. Abreu et al. [4] proposed a system in which shade matching is done with the help of a rating scale. Values on that scale range from 0 to 4, where 0 represents an excellent match and 4 represents a significant mismatch. In this system, a worker compares the actual shade with the produced shade using this scale, which may create person-to-person variability in the result. Similarly, Arikan et al. [1] suggested a fabric inspection machine with a PLC-controlled digital display for monitoring shade differences. With this digital display, a worker still needs to compare the produced sample with the actual sample manually. Moreover, the high cost of the instruments makes this system difficult to implement in small-scale industries.
Kandi et al. [11] suggested a system in which color matching between reference and sample fabric is done with the help of a spectrophotometer. In this process, the reference and sample were placed under a light at a 45° angle while the sensor measured the light reflected perpendicularly from the sample. Such instruments measure the color of textile fabrics under controlled conditions, creating calibration databases by analyzing target colors, and color-matching software is used to measure shade variation. However, this process may not be directly applicable in all sectors where shade measurement is critical or constrained, such as for raw yarn, dyed yarn, fiber, loose cotton, stone, marble, etc. A similar type of system was suggested by Park et al. [18], in which dyes were evaluated before the dyeing process by a shade matching system. Color strength was evaluated using color matching tests with the help of a spectrophotometer, along with fastness and solubility tests. A major disadvantage of this system is that dye quality is affected by moisture; hence, dye strength cannot be measured accurately by this shade-matching system.
A clinical study performed by Liberato et al. [15] compared shade-matching accuracy among spectrophotometers, intraoral scanners, and visual inspection. According to this article, the spectrophotometer provided the most accurate shade-matching result. However, the high installation cost and the skill required to operate a spectrophotometer make this system inefficient for many factories. Another notable approach was suggested by Wang et al. [25], in which the important factors involved in shade difference were determined using the principal component analysis (PCA) technique. Parameters such as the gram weight, thickness, and tightness of the fabric, the linear density of the yarn, and the weft and warp densities of the fabric were taken into consideration, and the most important factor determined by PCA can then be used in shade variation monitoring. However, these parameters cannot precisely determine shade variation, and they apply only to fabric shade, not to yarn.
2.1 Studies on Approaches Based on Image Similarity Metrics
Shade monitoring can be done using conventional and modern image similarity metrics, and significant approaches have been made to monitor the texture of different products. One of the most well-known parameters for assessing image similarity is Deep Image Structure and Texture Similarity (DISTS). Ding et al. [6] proposed a system that identifies structure and texture similarity using deep learning. According to this work, machine learning and image processing techniques can be utilized to identify differences among various materials; the system applies DISTS and DSS (Dynamic Structural Similarity) to assess the difference between the actual and produced shade. A significant limitation of this system is that values are challenging to obtain due to half-wave rectification, and the metric may not perform appropriately on global texture similarities.
On the other hand, a comparative study on assessing image similarity by Sara et al. [21] presents a comparison among the MSE (Mean Square Error), FSIM (Feature Similarity Index Method), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Method) metrics. In this work, the authors compared SSIM, which examines structural information such as edges, overall layout, and texture rather than pixel differences; FSIM, which measures how similar the features present in two images are; and PSNR and MSE, which calculate pixel-wise differences between two images. They also reported that when the quality of two images is very close, FSIM and SSIM produce values so close that the difference cannot be distinguished by the human eye. FSIM and SSIM yield values in the range 0 to 1 and are therefore called normalized metrics, whereas MSE and PSNR only compare two images without giving a bounded value and are called non-normalized metrics. The values of these metrics vary depending on noise, light, dust, dirt, and environment [7]. A similar approach was proposed by Deshpande et al. [5], who designed a system for video quality assessment based on the PSNR metric. In this system, the RGB color model is converted to YUV color space, where Y is the luma and U and V carry the color information; the PSNR is then calculated for the luma component. PSNR is related to the MSE value: a lower MSE means a higher PSNR, which represents a better-quality image.
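The pixel-wise metrics described above can be sketched in a few lines. Below is a minimal numpy-only illustration of a single-window SSIM and of PSNR derived from MSE; the cited systems use windowed SSIM and a YUV pipeline, so this is a simplification rather than their implementation:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM; the full metric averages this over local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # stabilizers
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

def psnr(reference, distorted, max_val=255.0):
    """PSNR in dB from the pixel-wise MSE; lower MSE gives higher PSNR."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(distorted, dtype=np.float64)) ** 2)
    return float("inf") if mse == 0.0 else 10.0 * np.log10(max_val ** 2 / mse)
```

Identical images give SSIM = 1 and infinite PSNR; as the images diverge, SSIM falls below 1 and PSNR drops.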
Li et al. [13] proposed a system of image similarity metrics based on edges, texture, and smooth regions. The authors reported MS-SSIM (Multi-Scale Structural Similarity Index) as a metric that measures structural similarity between two images. MS-SSIM is designed to assess image quality at different scales or resolutions and aligns well with human visual assessment. Both SSIM and MS-SSIM are measured by comparing structural information between the reference image and the captured image. On the other hand, Ieremeiev et al. [8] argued that the MDSI (Mean Deviation Similarity Index) metric is better than PSNR and SSIM for assessing texture similarity between images; its performance was evaluated with the Spearman rank-order correlation coefficient under Gaussian noise, blur, and impulse noise. A crucial limitation is that the MDSI calculation is more complicated than the others. Jia et al. [10] developed an image quality assessment system using the VSI index, which reflects human perception; however, due to its complexity, there is insufficient practical evidence that it works for assessing shades of textile materials. To compare the similarity between images, Li et al. [14] proposed the GMSD (Gradient Magnitude Similarity Deviation) metric. A significant disadvantage of this metric is its inability to precisely analyze large datasets, and its testing time is too long.
Proskuriakov et al. [19] suggested a specialized metric called Delta-E that can measure the color difference of images using the CIE76, CIE94, and CIEDE2000 methods; average color differences can be measured with the Python PIL library. Similarly, Pandey et al. [17] evaluated the color difference between Digitally Modulated Screening (DMS) and Hybrid Modulated Screening (HMS) using the Delta-E metric. Irshad et al. [9] developed a predictive system to assess the shade variation of fabric after water-repellent finishing using Delta-E; five neural networks were trained to predict Delta-E values, and the authors reported an accuracy of about 85%. However, these methods did not focus on determining the shade variation of yarn and did not demonstrate effectiveness for real-time yarn shade variation assessment.
3 Proposed Methodology
In this section, we discuss our proposed system and how we implement it for yarn shade variation assessment. In our system, we analyze conventional metrics alongside an advanced color difference calculation for precise yarn shade variation monitoring. The process flow of the methodology is shown in Figure 1.
3.1 Image Acquisition
We capture images of yarn bobbins using an Android phone camera. First, we capture the image of the reference sample's shade (the sample approved by the buyer). Afterward, we take images of the trial samples (the samples made to achieve the buyer's given shade). To ensure consistency among the samples, we maintain the following pre-conditions:
•
Consistent light and camera: We use a Poco X3 Android phone equipped with a 64-megapixel rear camera with a Sony Exmor IMX682 primary sensor. Images of dyed yarn bobbins are captured under tube light.
•
Fixed distance: We set our camera at a fixed distance of 22 cm from the base of the tripod.
•
Same bobbin size: In our case, each bobbin is 8.4 cm in diameter, 14 cm in length, and approximately 2.2 kg in weight.
•
Same angle of image capturing: The capture angle is approximately 32 degrees, and we maintain this angle for all images.
We take the captured images as input for the analysis of the different metrics. After capturing the images, we store them in Google Drive.
3.2 Various Image Similarity Metrics Analysis
Initially, we analyze the yarn images using 14 traditionally used image similarity metrics: DISTS, DSS index, FSIM, GMSD, HaarPSI index, MS-SSIM index, MDSI index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM. We choose these metrics for their relevance to analyzing image similarity. Here, we use different Python libraries, such as piq and cv2, for image analysis. We also fix the dimensions of all images at 500 × 500 pixels to ensure equal dimensions across all samples. Finally, we consider the image of the intended shade (the buyer's approved sample) as the reference image and compare the trial samples with it to obtain the metric values.
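To make the 500 × 500 normalization step concrete, here is a minimal sketch. It assumes numpy, with nearest-neighbour resampling standing in for the cv2.resize call used in the actual pipeline, and a hypothetical metric registry in place of the piq metric functions:

```python
import numpy as np

TARGET = 500  # fixed 500 x 500 pixels so all samples share equal dimensions

def resize_nearest(img, size=TARGET):
    """Nearest-neighbour resize; the real pipeline uses cv2.resize instead."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return img[rows][:, cols]

def compare_pair(reference, sample, metrics):
    """Evaluate every metric in a name -> function registry on one image pair."""
    ref, smp = resize_nearest(reference), resize_nearest(sample)
    return {name: fn(ref, smp) for name, fn in metrics.items()}
```

In practice the registry would map the 14 metric names to the corresponding piq/cv2 calls; the MSE lambda below is only a placeholder showing the interface.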
3.3 Principal Component Analysis
After determining the image similarity metric values, we need to find the significant metrics for yarn shade variation, so we use Principal Component Analysis (PCA) for feature extraction. After analyzing the variances of the principal components, we identify the significant metrics for our data from the principal component coefficients. Here, 'significant metrics' refers to the metrics that capture greater variance in our dataset. Metrics representing higher variances can effectively capture the primary variations in yarn shade across the samples, as they are more sensitive to minute variations. These differences may include intermediate shades falling among the captured samples, which makes the system reliable for comparing closely related shades. In this way, we identify three metrics among the initial fourteen that represent larger variances than the remaining eleven.
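The selection step can be sketched with a plain numpy PCA. The matrix `X` of samples by metric values, the metric names, and the top-3 cut-off below are illustrative assumptions, not our dataset:

```python
import numpy as np

def pca_significant_metrics(X, names, top_k=3):
    """Rank metrics by their loading magnitude on the first principal component.

    X: (n_samples, n_metrics) matrix of metric values; names: metric labels.
    Returns the explained-variance ratios and the top_k metric names.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each metric
    cov = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    ratios = eigvals[order] / eigvals.sum()    # explained variance per PC
    pc1_loadings = np.abs(eigvecs[:, order[0]])
    ranked = [names[i] for i in np.argsort(pc1_loadings)[::-1]]
    return ratios, ranked[:top_k]
```

The metrics with the largest absolute loadings on the leading components are the ones retained for comparison with visual inspection.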
Since factory practice assesses yarn shades manually by eye, we perform a human visual inspection to observe the effectiveness of these three critical metrics in determining shade differences. During this inspection, the reference (buyer's approved) sample and the trial samples are placed side by side under consistent lighting on a shade-checking table. Finally, we compare this visual assessment with the values of the significant metrics obtained from the PCA analysis.
3.4 Delta-E Metric Analysis
Recognizing the limitations of these conventional metrics for determining yarn shade variation, we employ the Delta-E metric. Delta-E is a specialized metric for detecting the color difference between images as perceived by the human eye. In this work, we use CIE Delta E 2000 to detect yarn shade variation. This metric calculates the distance between two colors in a three-dimensional color space (CIELAB), considering factors such as lightness, chroma, hue, and their interaction. Therefore, we perform the Delta-E analysis based on the following formula [23]:

ΔE00 = √[(ΔL′/(kL·SL))² + (ΔC′/(kC·SC))² + (ΔH′/(kH·SH))² + RT·(ΔC′/(kC·SC))·(ΔH′/(kH·SH))]

This formula calculates the color difference ΔE00 from ΔL′, ΔC′, and ΔH′, the lightness, chroma, and hue differences. kL, kC, and kH are weighting factors, while SL, SC, and SH adjust for human sensitivity to these differences, and RT accounts for the chroma-hue interaction.
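A direct transcription of this formula into code follows the standard CIEDE2000 formulation. This is a sketch operating on single LAB triples; the intermediate terms (G, T, RT, and the hue averaging rules) follow the published reference formulation rather than our deployed code:

```python
import math

def delta_e_2000(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """CIEDE2000 color difference between two (L, a, b) triples."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2
    G = 0.5 * (1 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))  # a-axis rescaling
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360
    h2p = math.degrees(math.atan2(b2, a2p)) % 360
    dLp, dCp = L2 - L1, C2p - C1p
    dhp = h2p - h1p
    if C1p * C2p == 0:
        dhp = 0.0
    elif dhp > 180:
        dhp -= 360
    elif dhp < -180:
        dhp += 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)
    Lbp, Cbp = (L1 + L2) / 2, (C1p + C2p) / 2
    if C1p * C2p == 0:                       # mean hue angle h'bar
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hbp = (h1p + h2p) / 2
    elif h1p + h2p < 360:
        hbp = (h1p + h2p) / 2 + 180
    else:
        hbp = (h1p + h2p) / 2 - 180
    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
         + 0.24 * math.cos(math.radians(2 * hbp))
         + 0.32 * math.cos(math.radians(3 * hbp + 6))
         - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    dtheta = 30 * math.exp(-(((hbp - 275) / 25) ** 2))
    RC = 2 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
    RT = -RC * math.sin(math.radians(2 * dtheta))  # chroma-hue interaction
    return math.sqrt((dLp / (kL * SL)) ** 2 + (dCp / (kC * SC)) ** 2
                     + (dHp / (kH * SH)) ** 2
                     + RT * (dCp / (kC * SC)) * (dHp / (kH * SH)))
```

For identical colors the result is 0, and the function reproduces the published CIEDE2000 test values.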
Based on the Delta-E formula, we consider the image of the buyer's approved shade as the reference and determine the Delta-E values for the other samples of that shade group. We find that the Delta-E metric provides better accuracy and that its results align with human-perceived shade differences. To facilitate quick interpretation of the Delta-E results, we categorize the values into four groups, each representing a different degree of shade variation. These groups are based on a published reference [16] and modified to align with the buyer's tolerance for yarn shade matching. The groups are defined as follows:
•
Group 1: ΔE < 1: Colors are considered indistinguishable to the human eye.
•
Group 2: 1 < ΔE < 2: Colors have a very slight difference, barely perceptible to the human eye.
•
Group 3: 2 < ΔE < 5: Colors have a noticeable difference, but they are still relatively similar.
•
Group 4: ΔE > 5: Colors have a significant difference and are easily distinguishable to the human eye.
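These thresholds translate into a small helper function. This is a sketch; how boundary values such as ΔE = 1 are assigned would follow the buyer's tolerance in practice:

```python
def shade_group(delta_e):
    """Map a Delta-E value onto the four shade-variation groups."""
    if delta_e < 1:
        return "Group 1"   # indistinguishable to the human eye
    if delta_e < 2:
        return "Group 2"   # very slight, barely perceptible difference
    if delta_e < 5:
        return "Group 3"   # noticeable but still relatively similar
    return "Group 4"       # significant, easily distinguishable difference
```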
3.5 Data Visualization
For real-time monitoring, we deploy the system as a web-based Flask application on the PythonAnywhere hosting platform. We consider Delta-E the primary metric for this web-based system due to its accuracy in determining yarn shade variation. The platform allows users to capture images directly with an Android mobile camera or to upload them from the phone's gallery. The uploaded reference image and the sample image to be compared are then saved in a specified folder on the server for processing. The Flask application converts the images from RGB to LAB color space to determine the average color values for each image, then calculates the Delta-E value and presents the result on the monitoring webpage, showing the numerical value and its significance based on the pre-defined groups in subsection 3.4. Since the processing code runs on a free hosting platform, the system may take time to process the images depending on server load. Note that maintaining consistent lighting and distance while capturing images is necessary for precise shade variation results.
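The server-side conversion from RGB to LAB and the average-color comparison can be sketched as follows, assuming numpy arrays in place of uploaded image files. For brevity, this sketch uses the simpler CIE76 distance between the mean LAB colors rather than the full CIEDE2000 formula used in the deployed system:

```python
import numpy as np

# sRGB-to-XYZ matrix and D65 reference white used for the LAB conversion
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])
WHITE = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb):
    """Convert an (H, W, 3) uint8 sRGB image to CIELAB (D65)."""
    c = rgb.astype(np.float64) / 255.0
    lin = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    xyz = lin @ M.T / WHITE
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def average_delta_e76(ref_rgb, sample_rgb):
    """CIE76 distance between the mean LAB colors of two images."""
    ref = srgb_to_lab(ref_rgb).reshape(-1, 3).mean(axis=0)
    smp = srgb_to_lab(sample_rgb).reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(ref - smp))
```

In the web application, the two saved images are loaded from the server folder, passed through this conversion, and the resulting value is classified into the groups of subsection 3.4 before being shown on the webpage.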
4 Experimental Evaluation
We collect real data for both cotton and synthetic yarn from two factories: cotton yarn images from Knit Concern Limited and synthetic yarn images from Colortex Ltd. We capture over 200 cotton yarn samples and 62 shades of synthetic yarn.
4.1 Evaluation of Yarn Images using Conventional Metrics
We perform factory-level experiments using an Android camera. Unlike lab experiments, where fewer and smaller bobbins are used to achieve the buyer's approved shade, factory-level testing (during sample trials) is conducted with larger bobbins in greater quantity to assess whether the recipe and dyeing parameters will match the intended shade in bulk production. In this work, a Poco X3 Android phone equipped with a 64-megapixel camera is used, set according to the pre-conditions of fixed lighting, distance, angle, and bobbin size. Images are analyzed by taking the image of the buyer's approved sample as the reference and comparing the other trial samples with it.
Initially, the images are analyzed using the 14 conventional metrics: DISTS, DSS index, FSIM, GMSD, HaarPSI index, MDSI index, MS-SSIM index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM. These metrics are selected due to their significant impact on analyzing image similarity. Python libraries are used to determine the metric values for the images, with results shown in Tables 1 and 2.
Significant metrics for yarn shade variation are then determined using a dimensionality reduction technique. Principal Component Analysis (PCA) is performed to extract the significant parameters (metrics with higher variances) for our data, yielding 53.87% variance for the first principal component (PC1), 18.62% for the second, and 9.31% for the third. By forming equations with these variances and substituting values from the principal component coefficient matrix, we find that the MS-SSIM, MDSI, and SSIM metrics have the greatest significance for our sample data. Finally, the values of these three metrics are compared with the human visual inspection results. Yarn shade variation can be identified using these metrics, although occasional contradictions with the actual shade difference are observed. Therefore, the specialized Delta-E metric is used to validate the findings.
4.2 Evaluation of Yarn Images using Delta-E Metric
In this work, we calculate the Delta-E metric values for both cotton and synthetic yarn samples. To determine the Delta-E values using the CIE 2000 formula, we take the shade of the buyer's approved sample as the reference and compare the trial samples against it. The Delta-E results are presented in tabular and graphical form, allowing us to observe the shade variation across yarn samples. For instance, in Figure 2, a noticeable shade variation can be observed in the fourth image (i3) relative to the image of the intended shade (i0), quantified by a Delta-E value of 4.5 in Table 3. This value belongs to Group 3, which indicates a noticeable difference between shades. Again, the i1 shade is closer to the reference shade than the i3 shade, resulting in a lower Delta-E value (2.11) than that of i3. Similarly, for the cotton yarn samples in Figure 3, the shade of i7 shows more significant variation than i1 when both are compared with the reference image (i0). This change is also reflected in the Delta-E values shown in Table 4.
The effectiveness of the Delta-E metric is examined against the traditional method of yarn shade checking practiced in factories. For each yarn sample, the Delta-E value is compared with the assessment of an expert (the person who usually conducts shade matching in the factory) of the shade variation from the buyer's approved sample. In this experiment, the Delta-E metric correctly categorizes 51 out of 62 samples into the shade group assigned by the expert, achieving an accuracy of 82.26%. This accuracy demonstrates Delta-E's effectiveness in determining yarn shade variations in alignment with human evaluation.
4.3 Findings and Data Visualization
Initially, we attempt to monitor shade variation using the conventional image similarity metrics. From the PCA results, we find MS-SSIM, MDSI, and SSIM to be the crucial metrics for our data. However, these metrics do not perform well for determining yarn shade variation. In contrast, the Delta-E metric gives nearly exact results for the shade difference between yarn samples. We also observe the changes in the Delta-E metric for the synthetic and cotton yarn bobbin samples graphically in Figure 4 and Figure 5.
For real-time data visualization, we have developed a web-based application for shade variation determination on the PythonAnywhere hosting platform, shown in Figure 6. Here, automatic shade variation determination is possible by capturing yarn images. After logging into the website, the user sees the option to upload a yarn image. The user can then capture images of yarn bobbins directly with an Android mobile camera or upload them from the phone's gallery. Finally, by analyzing the shade differences between the images, the system displays the Delta-E value along with its significance.
5 Conclusion
This work aims to provide a reliable, low-cost system for monitoring yarn shade variation. For this purpose, various image similarity metrics from image processing are applied to yarn bobbin images captured by a smartphone camera. Using PCA, we determine the important features (metrics) among the 14 traditionally used image similarity metrics. However, the initial metrics, DISTS, DSS index, FSIM, GMSD, HaarPSI index, MDSI index, MS-SSIM index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM, do not work well for determining yarn shade variation. The Delta-E metric emerges as a reliable feature for monitoring yarn shade variation. Finally, the system is implemented on a web-based platform so that a person can observe shade variation directly using a smartphone camera.
Future work can explore additional image processing techniques with machine learning algorithms to build a more consistent automatic shade variation monitoring system. We plan to integrate a specialized camera with a microprocessor to automate the system so that it can itself capture images of the yarn bobbins and analyze the shade variation. Moreover, to obtain more accurate results, an expanded dataset covering different yarn types should be incorporated. In conclusion, this research provides a foundation for monitoring yarn shade variation using image processing techniques and contributes significantly to yarn dyeing quality control.