DOI: 10.1145/3704522.3704530
Research Article, Open Access

Monitoring Yarn Shade Variation using Computer Vision Techniques

Published: 03 January 2025

Abstract

Shade variation of yarn is a regular problem in the textile industry. It causes various problems, such as inconsistent product quality, re-dyeing costs, and customer dissatisfaction. Several studies have been carried out to minimize yarn shade variation, but previous work using different image processing metrics to analyze image quality and inter-image differences has not identified a metric suitable for measuring yarn shade variation. In this work, we propose a novel yarn shade variation detection system using an advanced image processing technique. After capturing images of yarn bobbins with an Android mobile camera, we initially evaluated 14 image similarity metrics, including DISTS, DSS index, FSIM, GMSD, HaarPSI index, MDSI index, MS-SSIM index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM, to determine the shade variation of yarn. To identify the important metrics for our data, we performed principal component analysis (PCA). However, these metrics cannot represent the shade variation of yarn images precisely. We therefore use a specialized metric called Delta-E on the captured yarn images. This metric achieves approximately 82.26% accuracy in determining yarn shade variation. Accordingly, we implement a web-based platform with Delta-E as the primary metric, which allows users to capture or upload images directly from dyed yarn bobbins; the system then analyzes the images and reports the shade variation. This low-cost, novel approach can be useful in yarn dyeing industries by minimizing manual human visual errors and reducing issues regarding yarn shade variation.

1 Introduction

Textile industries form one of the traditional sectors that must adapt continually to ever-changing customer demand [12]. Approximately 75 million people work in these industries worldwide [24]. Textile industries play a significant role in employment in developing countries like Bangladesh, where this sector generates employment for 4.4 million people and approximately 84% of foreign earnings come from exported ready-made garments [20]. In this competitive sector, quality control is the key to survival [22]. Since this sector in Bangladesh depends heavily on manual processes, it faces various problems in meeting buyers’ requirements.
Shade variation of yarn is one of the significant problems faced by the textile dyeing sector. It refers to the discrepancy between a produced color and the desired color of a textile material [26]. The primary raw material used in fabric manufacturing is yarn; hence, shade variation of yarn poses a problem when producing the buyer’s required fabric shade [22]. This fault can occur for several reasons, such as operator irresponsibility, management issues, technical issues like improper calibration of equipment, batching issues, and incorrect operational procedures [2]. Temperature, pH, dye liquor concentration, and fiber properties are also significant factors behind this problem. Due to the variability of these factors, about 20% shade variation in dyed products can occur when using yarn from different spinners [2]. Such inconsistency in yarn dyeing results in buyers’ rejection of a product [3]. Consequently, due to this unwanted variation of color depth in yarn, industries dealing with yarn-dyed products must invest considerable money and time in redyeing or washing, which causes a significant annual loss [22]. With continuous monitoring, proper steps can be taken to achieve the buyer’s recommended shade and reduce the cost of wastage in bulk production. Hence, along with controlling the dyeing factors, a yarn shade monitoring system can minimize this loss [27].
A significant number of research studies have already been performed to monitor the shade variation of textile material. For example, Abreu et al. [4] and Arikan et al. [1] suggested systems using visual and semi-automated inspection, respectively. However, these methods depend greatly on human judgment, which may create variability in shade matching of yarn. To avoid this problem, Kandi et al. [11] proposed a system to monitor the shade difference between reference and sample color depth using a spectrophotometer. However, the installation cost and complexity made it impractical for small-scale textile industries.
Several research studies have measured the color difference between two samples using image processing techniques. For instance, Ding et al. [6] proposed the DISTS (Deep Image Structure and Texture Similarity) metric to analyze texture differences between images. Similarly, Deshpande et al. [5] suggested the PSNR metric, Li et al. [13] proposed the MS-SSIM metric, and Ieremeiev et al. [8] advocated the MDSI metric to assess the difference between images. However, these conventional metrics are complex to compute and cannot provide precise information on the color difference of textile material. In contrast, Pandey et al. [17] used a specialized metric called Delta-E to compare the color depth between Digitally Modulated Screening (DMS) and Hybrid Modulated Screening (HMS), and Irshad et al. [9] used the Delta-E metric to develop a predictive model for assessing the shade difference of fabric. Because yarn, the raw material of fabric, is much finer than fabric, the effectiveness of these systems for yarn is not addressed in those works; they cannot provide a precise, real-time shade measurement system for yarn.
To address these limitations, we propose a low-cost, real-time yarn shade monitoring system using an image processing technique. We attempt to find a suitable metric for yarn shade variation detection and use Delta-E as the primary metric due to its consistency in yarn shade measurement compared with other conventional metrics. In our system, a user can check the shade status using a mobile phone camera. Whenever the user clicks the ‘Yarn shade check’ button on the monitoring webpage, the system asks for images of the reference yarn shade and the sample yarn shade. Once these are provided, the system analyzes the images according to the pre-programmed color similarity metric and reports the shade difference status.
To create a precise yarn shade monitoring system, we make the following contributions in this article:
We propose a low-cost shade-checking system for yarn that can be applicable to the textile industry.
We determine a suitable metric over fourteen traditionally used metrics for precisely monitoring the shade variation of yarn.
We design a system where users can check yarn shade in real time through continuous monitoring on the web.
We apply this system to real-time data obtained from textile industries.
We evaluate the methodology by capturing the images directly from the bobbins for both cotton and synthetic yarn.

2 Background and Related Work

In this section, we discuss the existing shade variation monitoring systems and their effectiveness in determining yarn shade differences.
Various manual and semi-automatic approaches have been proposed to monitor the shade of textile material. Abreu et al. [4] proposed a system where shade matching is done with the help of a rating scale, with values assigned from 0 to 4, where 0 represents an excellent match and 4 a significant mismatch. In this system, a worker compares the actual shade with the produced shade using this scale, which may create person-to-person variability in the result. Similarly, Arikan et al. [1] suggested a fabric inspection machine with a PLC-controlled digital display to monitor shade differences. With this digital display, a worker must manually compare the produced sample with the actual sample. Moreover, the high cost of the instruments made this system difficult to implement for small-scale industries.
Kandi et al. [11] suggested a system where color matching between reference and sample fabric is done with the help of a spectrophotometer. In this process, the reference and sample were placed under light at a 45° angle while a sensor measured the light reflected perpendicularly from the sample. Such instruments measure the color of textile fabrics under controlled conditions, creating calibration databases by analyzing target colors, and color-matching software is used to measure shade variation. However, this process may not be directly applicable in all sectors where shade measurement is critical or constrained, such as for raw yarn, dyed yarn, fiber, loose cotton, stone, and marble. A similar type of system was suggested by Park et al. [18], where dyes were evaluated before the dyeing process by a shade-matching system; color strength was evaluated using color-matching tests with a spectrophotometer, fastness tests, and solubility tests. A major disadvantage of this system is that dye quality is affected by moisture; hence, dye strength cannot be measured accurately by this shade-matching system.
A clinical study by Liberato et al. [15] compared shade-matching accuracy among spectrophotometers, intraoral scanners, and visual inspection. According to that article, the spectrophotometer provided the most accurate shade-matching results. However, the high installation cost and the skill required to operate a spectrophotometer made this system inefficient for many factories. Another notable approach was suggested by Wang et al. [25], where the important factors involved in shade difference were determined using principal component analysis (PCA). In this system, parameters such as the gram weight of the fabric, thickness, tightness, the linear density of yarn, and the weft and warp densities of the fabric were taken into consideration. According to this system, the most important factor determined by PCA can be used in shade variation monitoring. However, these parameters cannot precisely determine shade variation, and they apply only to fabric shade, not to yarn.

2.1 Studies on Approaches Based on Image Similarity Metrics

Shade monitoring can be done using conventional and modern image similarity metrics, and significant approaches have been made to monitor the texture of different products. One of the most well-known parameters for assessing image similarity is Deep Image Structure and Texture Similarity (DISTS). Ding et al. [6] proposed a system to identify structure and texture similarity using deep learning. According to this system, machine learning and image processing techniques can be utilized to identify differences among various materials. The system uses DISTS and DSS (Dynamic Structural Similarity) to assess the difference between the actual and produced shade. A significant limitation of this system is that values are challenging to obtain due to half-wave rectification, and the metric may not perform well with global texture similarities.
On the other hand, a comparative study on assessing image similarity by Sara et al. [21] compares the MSE (Mean Squared Error), FSIM (Feature Similarity Index Method), PSNR (Peak Signal-to-Noise Ratio), and SSIM (Structural Similarity Index Method) metrics. In this work, the authors examined SSIM, which compares structural information such as edges, overall layout, and texture rather than pixel differences; FSIM, which evaluates how similar the features present in two images are; and PSNR and MSE, which calculate pixel-wise differences between two images. They reported that when two images are of very similar quality, FSIM and SSIM can measure values close enough that the difference cannot be distinguished by the human eye. FSIM and SSIM produce values in the range 0 to 1 and are therefore called normalized metrics, whereas MSE and PSNR can only compare two images without yielding a bounded value and are called non-normalized metrics. The values of these metrics vary depending on noise, light, dust, dirt, and environment [7]. A similar approach was proposed by Deshpande et al. [5], who designed a video quality assessment system based on the PSNR metric. In this system, the RGB color model is converted to YUV color space, where Y is luma and U and V carry color information; PSNR is then calculated for the luma component. PSNR is related to the MSE value: a lower MSE means a higher PSNR, which represents a better-quality image.
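The RGB-to-luma step described above can be sketched in a few lines. Note that the BT.601 weighting coefficients below are an assumption for illustration, since the cited system does not state which YUV standard it uses:

```python
def rgb_to_luma(r: float, g: float, b: float) -> float:
    """Luma (Y) of an 8-bit RGB pixel using BT.601 weights.

    The U and V (chrominance) components are omitted here because
    the PSNR described above is computed on the luma channel only.
    """
    return 0.299 * r + 0.587 * g + 0.114 * b
```

A full pipeline would apply this per pixel to both images before computing MSE and PSNR on the resulting luma planes.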
Li et al. [13] proposed a system of image similarity metrics based on edges, texture, and smooth regions. The authors reported MS-SSIM (Multi-Scale Structural Similarity Index) as a metric that measures structural similarity between two images at different scales or resolutions and aligns well with human visual assessment. Both SSIM and MS-SSIM are measured by comparing structural information between the reference image and the captured image. On the other hand, Ieremeiev et al. [8] argued that the MDSI (Mean Deviation Similarity Index) metric outperforms PSNR and SSIM for assessing texture similarity between images; its performance was evaluated using the Spearman rank-order correlation coefficient under distortions such as Gaussian noise, blur, and impulse noise. A crucial limitation of this system is that the MDSI calculation is more complicated than the others. Jia et al. [10] developed an image quality assessment system using the VSI index, which reflects human perception; however, due to its complexity, there is insufficient practical evidence that it works for assessing shades of textile materials. To compare the similarity between images, Li et al. [14] proposed the GMSD (Gradient Magnitude Similarity Deviation) metric, whose significant disadvantages are its inability to precisely analyze large datasets and its lengthy testing time.
Proskuriakov et al. [19] suggested a special metric called Delta-E that can measure the color difference of images with the help of the CIE76, CIE94, and CIEDE2000 methods; it can measure average color differences using the Python PIL library. Similarly, Pandey et al. [17] evaluated the color difference between Digitally Modulated Screening (DMS) and Hybrid Modulated Screening (HMS) using the Delta-E metric. Irshad et al. [9] developed a predictive system to assess shade variation of fabric after water-repellent finishing using Delta-E. In this system, five neural networks were trained to predict Delta-E values, and the authors reported about 85% prediction accuracy. However, these methods did not focus on determining the shade variation of yarn and did not demonstrate effectiveness for real-time yarn shade variation assessment.
Figure 1: Block diagram of the proposed methodology

3 Proposed Methodology

In this section, we discuss our proposed system and how we implement it for yarn shade variation assessment. In our system, we analyze conventional metrics alongside an advanced color difference calculation for precise yarn shade variation monitoring. The process flow of the methodology is shown in Figure 1.

3.1 Image Acquisition

We capture images of yarn bobbins using an Android camera. At first, we capture the image of the shade of the reference sample (the sample that is approved by the buyer). Afterward, we take the images of trial samples (the samples that are made to achieve the buyer’s given shade). To ensure consistency among the samples, we maintain some pre-conditions, such as:
Consistent light and camera: We use a Poco X3 Android phone equipped with a 64-megapixel rear camera whose primary sensor is a Sony Exmor IMX682. Images of dyed yarn bobbins are captured under tube lighting.
Fixed distance: We set our camera at a fixed distance of 22 cm from the base of the tripod.
Same bobbin size: In our case, each bobbin is 8.4 cm in diameter, 14 cm in length, and approximately 2.2 kg in weight.
Same angle of image capture: The capture angle is approximately 32 degrees, and we maintain this angle for all images.
We take the captured images as input for the different metric analyses. After capturing the images, we store them in Google Drive.

3.2 Various Image Similarity Metrics Analysis

Initially, we analyze the yarn images using 14 traditionally used image similarity metrics: DISTS, DSS index, FSIM, GMSD, HaarPSI index, MS-SSIM index, MDSI index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM. We choose these metrics for their established role in analyzing image similarity. We use Python libraries such as piq and cv2 for image analysis, and we resize all images to a fixed 500 × 500 pixels to ensure equal dimensions across samples. Finally, we take the image of the intended shade (buyer’s approved sample) as the reference image and compare the trial samples against it to obtain the metric values.
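To illustrate how the simpler, non-normalized metrics in this list behave, here is a minimal pure-Python sketch of MSE and PSNR for two equal-sized grayscale images. This is for intuition only; the actual pipeline uses the piq and cv2 libraries on 500 × 500 RGB images:

```python
import math

def mse(ref, test):
    """Pixel-wise mean squared error between two equal-sized grayscale
    images given as flat lists of 0-255 intensity values."""
    assert len(ref) == len(test), "images must have equal dimensions"
    return sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means more similar images.
    Identical images have zero error, hence infinite PSNR."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * math.log10(max_val ** 2 / err)
```

A uniform intensity shift of 10 levels, for example, gives an MSE of 100 and a PSNR of roughly 28 dB, consistent with the PSNR magnitudes reported in Tables 1 and 2.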
Figure 2: Synthetic yarn samples

3.3 Principal Component Analysis

After determining the image similarity metric values, we need to find out the significant metrics for yarn shade variation. So, we use Principal Component Analysis (PCA) for feature extraction. After analyzing the variances of principal components, we get significant metrics for our data using the principal component coefficients. Here, the term ’significant metrics’ refers to the metrics that are able to capture greater variances for our dataset. Metrics representing higher variances can effectively capture the primary variations in yarn shade across the samples, as they are more sensitive to minute variations. These differences may contain the intermediate shades that may fall among the captured samples, which can make the system reliable in comparing closely related shades. So, we identify three metrics among the initial fourteen metrics that represent larger variances than the remaining eleven.
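The variance-based metric selection described above can be sketched as PCA via eigendecomposition of the covariance matrix of the samples-by-metrics data. The function name and return layout are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def pca_variance(X):
    """Return explained-variance ratios and principal-component
    coefficients (loadings) for a samples-by-metrics matrix X."""
    Xc = X - X.mean(axis=0)            # center each metric column
    cov = np.cov(Xc, rowvar=False)     # covariance matrix of the metrics
    vals, vecs = np.linalg.eigh(cov)   # eigh: symmetric matrix
    order = np.argsort(vals)[::-1]     # sort by descending variance
    vals, vecs = vals[order], vecs[:, order]
    return vals / vals.sum(), vecs
```

The columns of the returned coefficient matrix correspond to the principal components; the metrics with the largest absolute coefficients in the leading components are the "significant metrics" in the sense used above.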
Since yarn shades are assessed manually by eye in factory practice, we perform a human visual inspection to observe the effectiveness of these three critical metrics in determining shade differences. During this inspection, the reference (buyer’s approved) sample and the trial samples are placed side by side under consistent lighting on a shade-checking table. Finally, we compare this visual assessment with the values of the significant metrics obtained from the PCA analysis.

3.4 Delta-E Metric Analysis

Recognizing the limitations of these conventional metrics for determining yarn shade variation, we employ the Delta-E metric. Delta-E is a specialized metric for detecting the color difference between images as perceived by the human eye. In this work, we use CIE Delta E 2000 to detect yarn shade variation. This metric calculates the distance between two colors in a three-dimensional color space (CIELAB), considering factors such as lightness, chroma, hue, and their interaction. We perform the Delta-E analysis using the following formula [23]:
\[\begin{eqnarray*} \Delta E_{00} = \sqrt {\left(\frac{\Delta L^{\prime }}{k_L S_L} \right)^2 + \left(\frac{\Delta C^{\prime }}{k_C S_C} \right)^2 + \left(\frac{\Delta H^{\prime }}{k_H S_H} \right)^2 + R_T \left(\frac{\Delta C^{\prime }}{k_C S_C} \right) \left(\frac{\Delta H^{\prime }}{k_H S_H} \right)} \end{eqnarray*}\]
This formula calculates color difference, ΔE00, using ΔL′, ΔC′, and ΔH′ for lightness, chroma, and hue differences. kL, kC, and kH are weighting factors, while SL, SC, and SH adjust for human sensitivity to these differences, and RT accounts for chroma-hue interactions.
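For intuition, the predecessor formula CIE76 drops all of these correction terms and reduces to a plain Euclidean distance in CIELAB space. The sketch below shows that simplified form for illustration only; it is not the CIEDE2000 metric used in this work:

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: the Euclidean distance between two
    (L, a, b) triples in CIELAB space. CIEDE2000 refines this with
    the weighting and interaction terms shown in the formula above."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))
```

For example, two colors at equal lightness separated by (Δa, Δb) = (3, 4) differ by exactly 5 units under CIE76.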
Based on the Delta-E formula, we take the image of the buyer’s approved shade as the reference and determine the Delta-E values for the other samples of that shade group. We find that the Delta-E metric provides better accuracy, and its results align with human-perceived shade differences. To facilitate quick interpretation of Delta-E results, we categorize the values into four groups, each representing a different degree of shade variation. These groups are based on a published reference [16] and modified to align with the buyer’s tolerance for yarn shade matching. The groups are defined as follows:
Group A: ΔE < 1: Colors are considered indistinguishable to the human eye.
Group B: 1 ≤ ΔE < 2: Colors have a very slight difference, barely perceptible to the human eye.
Group C: 2 ≤ ΔE < 5: Colors have a noticeable difference, but they are still relatively similar.
Group D: ΔE ≥ 5: Colors have a significant difference and are easily distinguishable to the human eye.
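The grouping above maps directly to a small helper function; the letter labels follow the shade groups used in Tables 3 and 4, and the assignment of exact boundary values to the higher group is an assumption, since the source does not specify it:

```python
def shade_group(delta_e: float) -> str:
    """Map a Delta-E value to a shade group:
    A: indistinguishable, B: barely perceptible,
    C: noticeable, D: easily distinguishable."""
    if delta_e < 1:
        return "A"
    elif delta_e < 2:
        return "B"
    elif delta_e < 5:
        return "C"
    return "D"
```

Applied to the measured values, i3 of the synthetic samples (ΔE = 4.5007) falls in group C, while i1 of the cotton samples (ΔE = 0.6614) falls in group A, matching the tables.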

3.5 Data Visualization

For real-time monitoring, we deploy the system as a web-based Flask application on the ‘PythonAnywhere’ hosting platform. We use the Delta-E metric as the primary metric for this web-based system due to its accuracy in determining yarn shade variation. The platform allows users to capture images directly with their Android mobile camera or upload them from the phone’s gallery. The uploaded reference image and the sample image to be compared are then saved in a specified folder on the server for processing. The Flask application converts the images from RGB to LAB color space to determine the average color values for each image, calculates the Delta-E value, and presents the result on the monitoring webpage, showing the numerical value and its significance based on the pre-defined groups in subsection 3.4. Since the processing code runs on a free hosting platform, the system may take time to process the images depending on server load. Maintaining consistent lighting and distance while capturing images is necessary to obtain precise shade variation results.
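The server-side RGB-to-LAB conversion can be sketched with the standard sRGB (D65) formulas; the constants below are the usual CIE reference values, assumed for illustration rather than taken from the application's actual code:

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIELAB (D65 white point).
    In the web application this would be applied per pixel and the
    L, a, b channels averaged before computing Delta-E."""
    def linearize(c):  # undo sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # sRGB -> XYZ (D65)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # normalize by the D65 reference white
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):  # CIELAB companding
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

As a sanity check, pure white maps to approximately (100, 0, 0) and pure black to (0, 0, 0) in LAB.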
Table 1: Data of image similarity metrics for synthetic yarn sample

Parameter         i1        i2        i3        i4
DISTS             0.2537    0.2403    0.3246    0.2762
DSS index         0.4792    0.4972    0.4419    0.4057
FSIM index        0.7183    0.7228    0.7451    0.6939
GMSD index        0.1922    0.1851    0.1810    0.2248
HaarPSI index     0.5070    0.5224    0.5173    0.4476
MDSI index        0.4105    0.4234    0.4113    0.4822
MS-SSIM index     0.5997    0.5576    0.5373    0.3412
MS-GMSDc index    0.1869    0.1804    0.1777    0.2277
PSNR index        25.4086   24.3041   21.9586   20.4613
SSIM index        0.4777    0.4586    0.4284    0.1765
TV index          28.6915   28.6915   28.6915   28.6915
VIFp index        0.0328    0.0232    0.0195    0.0162
VSI index         0.9068    0.9045    0.8953    0.8888
SR-SIM index      0.8271    0.7898    0.7911    0.7772
Figure 3: Cotton yarn samples
Table 2: Data of image similarity metrics for cotton yarn sample

Parameter         i1        i2        i3        i4        i5        i6        i7        i8        i9
DISTS             0.1047    0.1408    0.1011    0.0912    0.1027    0.1234    0.1076    0.1128    0.1096
DSS index         0.4988    0.4896    0.5012    0.5065    0.4926    0.4724    0.5021    0.4966    0.5082
FSIM index        0.7751    0.7746    0.7758    0.7776    0.7779    0.7801    0.7765    0.7733    0.7782
GMSD index        0.1786    0.1800    0.1788    0.1791    0.1760    0.1768    0.1788    0.1788    0.1789
HaarPSI index     0.5137    0.5119    0.5129    0.5146    0.5150    0.5154    0.5141    0.5084    0.5116
MDSI index        0.4268    0.4282    0.4279    0.4290    0.4239    0.4232    0.4280    0.4246    0.4264
MS-SSIM index     0.3657    0.3606    0.3618    0.3432    0.3643    0.3808    0.3448    0.3713    0.3554
MS-GMSDc index    0.1782    0.1792    0.1774    0.1774    0.1750    0.1762    0.1782    0.1772    0.1779
PSNR index        19.7888   19.3466   19.4192   19.5850   19.7002   20.4362   18.5430   19.4564   19.4543
SSIM index        0.2158    0.2068    0.2117    0.1944    0.2186    0.2395    0.1941    0.2148    0.2123
TV index          88.3115   88.3115   88.3115   88.3115   88.3115   88.3115   88.3115   88.3115   88.3115
VIFp index        0.0074    0.0071    0.0069    0.0066    0.0064    0.0087    0.0063    0.0063    0.0071
VSI index         0.9142    0.9160    0.9167    0.9177    0.9139    0.9136    0.9170    0.9133    0.9146
SR-SIM index      0.8614    0.7878    0.8619    0.8683    0.8779    0.8620    0.8654    0.8611    0.8788

4 Experimental Evaluation

We collect real data for both cotton and synthetic yarn from two factories: cotton yarn images from Knit Concern Limited and synthetic yarn images from Colortex Ltd. In total, we capture over 200 cotton yarn samples and 62 shade types of synthetic yarn.

4.1 Evaluation of Yarn Images using Conventional Metrics

We perform factory-level experiments using an Android camera. Unlike lab experiments, where fewer and smaller bobbins are used to achieve the buyer’s approved shade, factory-level testing (during sample trials) is conducted with larger bobbins in greater quantity to assess whether the recipe and dyeing parameters will match the intended shade in bulk production. In this work, a Poco X3 Android phone equipped with a 64-megapixel camera is used, set up according to the pre-conditions of fixed lighting, distance, angle, and bobbin size. Images are captured and analyzed by taking the image of the buyer’s approved sample as the reference and comparing the other trial samples with it.
Initially, images are analyzed using 14 conventional metrics: DISTS, DSS index, FSIM, GMSD, HaarPSI index, MDSI index, MS-SSIM index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM. These metrics are selected for their significant role in analyzing image similarity. Python libraries are used to determine the metric values for the images, with results shown in Tables 1 and 2.
Significant metrics for yarn shade variation are then determined using a dimension reduction technique. Principal Component Analysis (PCA) is performed to extract significant parameters (metrics with higher variances) for our data, yielding 53.87% variance for the first principal component (PC1), 18.62% for the second, and 9.31% for the third. By forming equations with these variances and substituting values from the principal component coefficient matrix, it is found that MS-SSIM, MDSI, and SSIM metrics have greater significance for our sample data. Finally, the values of these three metrics are compared with human visual inspection results. Yarn shade variation is identified using these metrics, although occasional contradictions with the actual shade difference are observed. Therefore, a special metric named Delta-E is used to validate the findings.

4.2 Evaluation of Yarn Images using Delta-E Metric

In this work, we calculate the Delta-E metric values for both cotton and synthetic yarn samples. To determine the Delta-E metric values using the formula of CIE 2000, we take the shade of the buyer’s approved sample as a reference and then compare the trial samples against it. These Delta-E results are presented in tabular and graphical form, allowing us to observe the shade variation across yarn samples. For instance, if we look at Figure 2, a noticeable shade variation can be observed in the 4th image (i3) relative to the image of intended shade (i0), which is quantified by a Delta-E value of 4.5 in Table 3. This value belongs to group C, which indicates a noticeable difference between shades. Again, the i1 shade is closer to the reference shade than the i3 shade, resulting in a lower value (2.11) of Delta-E than that of i3. Similarly, for cotton yarn samples in Figure 3, the shade of i7 shows more significant variation than i1 when both are compared with the reference image (i0). This change is also reflected by the Delta-E metric values, shown in Table 4.
The effectiveness of the Delta-E metric is examined following the traditional method of yarn shade checking, which is practiced in factories. For each yarn sample, the Delta-E value is compared with an expert’s (the person who usually conducts the shade matching in the factory) assessment of the shade variation with the buyer’s approved sample. In this experiment, the result of the Delta-E metric correctly categorizes 51 out of 62 samples, which aligns with the shade group assigned by the expert, achieving an accuracy of 82.26%. Therefore, this accuracy demonstrates Delta-E’s effectiveness in determining shade variations of yarn that align with human evaluation.
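The reported 82.26% is a simple agreement ratio between the Delta-E shade group and the expert's visually assigned group. A sketch of that computation follows; the sample lists in the usage note are synthetic stand-ins for the real 62-sample evaluation, chosen only to reproduce the 51-of-62 match count:

```python
def agreement(expert_groups, metric_groups):
    """Percentage of samples where the Delta-E shade group matches
    the expert's visual assessment of the same sample."""
    matches = sum(e == m for e, m in zip(expert_groups, metric_groups))
    return 100.0 * matches / len(expert_groups)
```

With 51 matches out of 62 samples, this yields 51 / 62 × 100 ≈ 82.26%.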
Table 3: Delta-E values for synthetic yarn sample

Image no.   Delta-E value   Shade group
i1          2.1112          C
i2          3.0552          C
i3          4.5007          C
i4          2.2791          C
Table 4: Delta-E values for cotton yarn sample

Image no.   Delta-E value   Shade group
i1          0.6614          A
i2          0.1855          A
i3          1.1057          B
i4          0.4056          A
i5          0.3698          A
i6          0.8360          A
i7          3.7850          C
i8          1.0553          B
i9          1.6079          B

4.3 Findings and Data Visualization

Initially, we attempt to monitor shade variation using the conventional image similarity metrics. From the PCA results, we find MS-SSIM, MDSI, and SSIM to be the crucial metrics for our data. However, the results using these metrics are not well suited to yarn shade variation determination. In contrast, the Delta-E metric gives nearly precise results for determining the shade difference between yarn samples. We also observe the variation of the Delta-E metric for the synthetic and cotton yarn bobbin samples graphically in Figure 4 and Figure 5.
For real-time data visualization, we have developed a web-based application for shade variation determination on the PythonAnywhere hosting platform, shown in Figure 6. Here, automatic shade variation determination is possible by capturing yarn images. After logging into the website, a user sees the option to upload a yarn image. The user can then capture images of yarn bobbins directly with an Android mobile camera or upload them from the phone’s gallery. Finally, by analyzing the shade differences between the images, the system displays the Delta-E value along with its significance.
Figure 4: Graph of Delta-E for synthetic yarn samples
Figure 5: Graph of Delta-E for cotton yarn samples
Figure 6: Web visualization

5 Conclusion

This work aims to provide a reliable, low-cost system for monitoring yarn shade variation. For this purpose, various image similarity metrics are applied to yarn bobbin images captured with a smartphone camera. Using PCA, we determine the important features (metrics) among the 14 traditionally used image similarity metrics. However, the initial metrics (DISTS, DSS index, FSIM, GMSD, HaarPSI index, MDSI index, MS-SSIM index, MS-GMSDc index, PSNR, SSIM, TV, VIFp index, VSI index, and SR-SIM) do not work well for determining yarn shade variation. The Delta-E metric emerges as a reliable feature for monitoring yarn shade variation. Finally, the system is implemented on a web-based platform so that a person can observe shade variation using a smartphone camera.
Future work can explore additional image processing techniques with machine learning algorithms to build a more consistent automatic shade variation monitoring system. We plan to integrate a specialized camera with a microprocessor to automate the system so that it can capture images from the yarn bobbins and analyze the shade variation by itself. Moreover, to obtain more accurate results, an expanded dataset covering different yarn types should be incorporated. In conclusion, this research provides a foundation for monitoring yarn shade variation using image processing techniques and contributes significantly to quality control in yarn dyeing.

Acknowledgments

We would like to acknowledge Knit Concern Limited and Colortex Limited, Bangladesh for providing us with the facility for real-time factory data collection.


Published In

NSysS '24: Proceedings of the 11th International Conference on Networking, Systems, and Security
December 2024
278 pages
ISBN:9798400711589
DOI:10.1145/3704522
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Shade variation
  2. Yarn
  3. Image processing
  4. Textile industry


Conference

NSysS '24

Acceptance Rates

Overall Acceptance Rate 12 of 44 submissions, 27%
