1 Introduction

The composition and content of fibers are important in the import and export trade of textile and clothing products. To guarantee product quality and protect the interests of consumers, the composition and content of fibers must be strictly controlled. The longitudinal and cross-sectional features are important parameters for the automatic identification of different types of fibers [1, 2]. These indicators often depend on the precise measurement of a single fiber. However, multiple overlapped or adhesive fibers are often found in practical application scenarios, which makes it difficult to extract the key parameters of the fibers. It is therefore necessary to study how to segment overlapped fibers [3, 4].

Traditional methods of fiber image segmentation include morphological image processing [5], the watershed transform [6], and separation based on the branch slope at each intersection [7]. The morphological method obtains the contour curves of fibers by means of dilation and erosion, but the number of iterations must be specified manually, which deforms the contours in the fiber image. The watershed method is simple and accurate, but it is very sensitive to noise and often produces severe over-segmentation. The method based on the cross-branch slope matches edges with similar slopes, but its accuracy drops significantly when the number of crossed fibers is high. As a result, these methods are not practical in real application scenarios.

Researchers have found that human vision can quickly search for a target of interest. If the target fiber can be accurately located during preprocessing, the interference of the image background can be reduced when extracting the fiber contour. Based on visual significance theory [9], an image preprocessing method is proposed to extract the fiber contour curves. The second stage of this methodology detects the corners on the curves and locates the concave points within the corner set. The third stage is the matching process advanced in this paper, using the concave points obtained from the second stage. The final stage segments the overlapped fibers using the matched points. In this stage, we also perform comprehensive experiments on different types of overlapped fibers to demonstrate the usefulness of our approach and its generalization power.

2 Fiber Image Preprocessing

When the system receives the fiber grayscale images, the gray level of the edge contour is similar to that of the background noise, making the fiber contour discontinuous. To avoid digitization errors and noise effects, the fiber images have to be denoised [8] before the fiber contour is extracted.

As discussed in the introduction, common image preprocessing methods cannot remove noise completely, so it is not trivial to extract a clear fiber contour curve. To obtain a clear and accurate fiber target, B-spline surface fitting and complex-domain nonlinear anisotropic diffusion are used here to obtain the edges of the fiber image.

2.1 Target Fiber Protrusion Method Based on Visual Significance Theory

In this section, the B-spline surface fitting method is used to obtain the fitted background of the image [10, 11], and the fitted background is then subtracted from the original fiber image to obtain the target fiber image; this process also corrects uneven illumination. The B-spline function is often applied to surface fitting because of its good smoothness and high fitting precision. Considering practical issues such as timeliness and computational cost, the cubic B-spline function is chosen here for its low computational cost and good stability. Therefore, under the constraint of the least squares criterion, a cubic B-spline surface is used to fit the background.

A gray-scale fiber image of size m × n is represented as G(xi, yj, zij), 1 ≤ i ≤ m, 1 ≤ j ≤ n, where zij = z(xi, yj) is the gray value of the image at the point (xi, yj). The parameter intervals are πu: 0 < u < 1 and πv: 0 < v < 1. For convenience, the (m + 1) cubic B-spline functions on πu are denoted by Fi(u), and the (n + 1) cubic B-spline functions on πv are denoted by Fj(v). The fitted background can then be represented as follows:

$$ S_{mn} :F(u,v) = \sum\nolimits_{i = 0}^{m + 1} {\sum\nolimits_{j = 0}^{n + 1} {F_{i} (u)} } F_{j} (v)P_{i,j} $$
(1)

Here u and v are obtained by normalization, Pi,j are the control vertices, and Fi(u) and Fj(v) are the cubic B-spline basis functions, defined as:

$$ F_{i} (X) = \frac{1}{N!}\sum\limits_{k = 0}^{N - i} {( - 1)^{k} C_{N + 1}^{k} (X + N - i - k)^{N} ,0 \le X \le 1} $$
(2)

If (m + 1) × (n + 1) control vertices are set and arranged as a matrix of size (m + 1) × (n + 1), then for m = n = 3 the feature grid of the bicubic B-spline surface is constructed, and the corresponding matrix form of formula (1) is expressed as:

$$ S_{mn} :r(u,v) = UAPA^{T} V^{T} $$
(3)

where U = [u3, u2, u, 1], V = [v3, v2, v, 1],

$$ A = \frac{1}{6}\left[ {\begin{array}{*{20}c} { - 1} & 3 & { - 3} & 1 \\ 3 & { - 6} & 3 & 0 \\ { - 3} & 0 & 3 & 0 \\ 1 & 4 & 1 & 0 \\ \end{array} } \right];\;P = \left[ {\begin{array}{*{20}c} {P_{11} } & {P_{12} } & {P_{13} } & {P_{14} } \\ {P_{21} } & {P_{22} } & {P_{23} } & {P_{24} } \\ {P_{31} } & {P_{32} } & {P_{33} } & {P_{34} } \\ {P_{41} } & {P_{42} } & {P_{43} } & {P_{44} } \\ \end{array} } \right] $$

In order to control the fitting accuracy, the feature control vertices are computed inversely by minimizing the objective function in formula (4). The least squares solution of Pi,j is obtained from:

$$ F_{obj} = \sum\nolimits_{i = 0}^{m + 1} {\sum\nolimits_{j = 0}^{n + 1} {\left[ {z(x_{i} ,y_{j} ) - r(u_{i} ,v_{j} )} \right]^{2} } } $$
(4)

Differentiating the above objective with respect to P and setting the derivative to zero gives the least squares solution of Pi,j, and the resulting least squares fitting surface is taken as the background of the image.
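To make this background estimation concrete, the following minimal Python sketch fits a smooth bicubic spline surface to the gray-scale image and subtracts it. SciPy's RectBivariateSpline is used here as a stand-in for the least-squares bicubic B-spline fit described above; the smoothing factor and the rescaling of the result are illustrative choices, not values from the paper.

```python
# Minimal sketch: estimate a smooth bicubic-spline background for a grayscale
# fiber image and subtract it to highlight the target fiber.
import numpy as np
from scipy.interpolate import RectBivariateSpline

def estimate_background(gray, s_factor=1e4):
    rows, cols = gray.shape
    x = np.arange(rows, dtype=float)
    y = np.arange(cols, dtype=float)
    # Cubic (kx = ky = 3) smoothing spline fitted to the full image grid;
    # s_factor controls how smooth the fitted background surface is.
    spline = RectBivariateSpline(x, y, gray.astype(float), kx=3, ky=3, s=s_factor)
    return spline(x, y)

def subtract_background(gray):
    background = estimate_background(gray)
    target = gray.astype(float) - background
    # Rescale to [0, 255] so the target fiber image can be processed further.
    target -= target.min()
    if target.max() > 0:
        target *= 255.0 / target.max()
    return target.astype(np.uint8), background
```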

The presence of image noise reduces image quality and affects the segmentation result. The improved complex-domain anisotropic diffusion filter [12, 13] not only suppresses noise but also achieves high edge detection accuracy.

Discretizing the Laplacian operator over the four-neighborhood, the processed image can be expressed as:

$$ I_{i,j}^{t + 1} = I_{i,j}^{t} + \lambda [c_{N} \cdot \partial_{N} I + c_{S} \cdot \partial_{S} I + c_{E} \cdot \partial_{E} I + c_{W} \cdot \partial_{W} I] $$
(5)

where \( \lambda = 0.2 \), \( I_{i,j}^{t} \) is the image at iteration t (the original image when t = 0), and cN, cS, cE, cW are the diffusion coefficients, given by c(x) = 1/(1 + (x/k)^2), where x is the image gradient in the corresponding direction and k is the edge threshold. The image gradients in the four directions are defined as:

$$ \begin{aligned} & \partial_{N} I_{i,j} = I_{i - 1,j}^{t} - I_{i,j}^{t} \, , \, \partial_{S} I_{i,j} = I_{i + 1,j}^{t} - I_{i,j}^{t} \\ & \partial_{E} I_{i,j} = I_{i,j + 1}^{t} - I_{i,j}^{t} \, , \, \partial_{W} I_{i,j} = I_{i,j - 1}^{t} - I_{i,j}^{t} \\ \end{aligned} $$
(6)

In Eq. (6), ∂ denotes the difference operation between two neighboring pixels.
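A minimal sketch of the diffusion in Eqs. (5)-(6) in its real-valued (Perona-Malik) form is given below; the complex-domain extension of [12, 13] is not reproduced here, and the edge threshold k and the iteration count are illustrative values.

```python
# Minimal sketch of the diffusion update of Eq. (5) with the four-neighbour
# gradients of Eq. (6) and the diffusion coefficient c(x) = 1 / (1 + (x/k)^2).
import numpy as np

def anisotropic_diffusion(image, iterations=15, k=30.0, lam=0.2):
    I = image.astype(float)
    for _ in range(iterations):
        # Four-neighbour differences, Eq. (6); np.roll wraps at the image
        # borders, which is adequate for a sketch.
        dN = np.roll(I, 1, axis=0) - I
        dS = np.roll(I, -1, axis=0) - I
        dE = np.roll(I, -1, axis=1) - I
        dW = np.roll(I, 1, axis=1) - I
        # Diffusion coefficients in each direction.
        cN = 1.0 / (1.0 + (dN / k) ** 2)
        cS = 1.0 / (1.0 + (dS / k) ** 2)
        cE = 1.0 / (1.0 + (dE / k) ** 2)
        cW = 1.0 / (1.0 + (dW / k) ** 2)
        # Update rule of Eq. (5).
        I = I + lam * (cN * dN + cS * dS + cE * dE + cW * dW)
    return I
```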

2.2 Contour Extraction of Target Fiber Image

After the B-spline surface fitting and diffusion treatment, a clear fiber structure can be obtained. To meet the demands of subsequent corner extraction, the fiber image must be processed morphologically [14]. Mathematical morphology analyzes the image using set theory, mainly through intersection and union operations between structuring elements and the image, so the selection of structuring elements matters a great deal. Dilation and erosion are applied to the fiber image to remove noise, and a clean fiber binary image is finally obtained after morphological filling.
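As a rough illustration of this morphological clean-up and contour extraction, the sketch below uses OpenCV; the Otsu threshold, the 3 × 3 elliptical structuring element and the single opening pass are illustrative choices rather than the parameters used in the paper.

```python
# Minimal sketch: binarize the target fiber image, clean it up morphologically,
# fill the fiber interior, and extract the outer contour pixels.
import cv2
import numpy as np

def binarize_and_extract_contours(target_fiber):
    # Otsu threshold turns the target fiber image into a binary mask.
    _, binary = cv2.threshold(target_fiber, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    # Erosion followed by dilation (opening) removes small noise specks.
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Fill the fiber interior so that only the outer boundary remains.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    filled = np.zeros_like(binary)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
    # Contours of the filled mask give the smooth fiber boundary pixels.
    contours, _ = cv2.findContours(filled, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    return filled, contours
```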

As shown in Fig. 1(b), the fitted image background is obtained by processing Fig. 1(a) with bicubic B-spline surface fitting under the least squares criterion [15] and the complex-domain filtering method. Subtracting the fitted background from the original gray-scale fiber image yields the target fiber image shown in Fig. 1(c). The edges of the target image are detected as shown in Fig. 1(d); then the mathematical morphology method and fiber internal filling are used to eliminate the false edges inside the fiber. After this processing, the fiber binary image with smooth edges shown in Fig. 1(e) is obtained. Finally, to detect the corners, the binary fiber contour image is extracted from Fig. 1(e), as shown in Fig. 1(f).

Fig. 1.

Fiber image preprocessing. (a) Gray scale image; (b) fitting background; (c) clear fiber image after preprocessing; (d) edge detection of (c); (e) fiber binary image after morphological process; (f) binary image of fiber contours;

3 Concave Points Matching and Fiber Segmentation Algorithm

The connection regions produced by the crossed boundaries of fibers often appear as concave points. Two overlapped fibers may generate numerous concave points, but only concave points on the same fiber boundary can be used for fiber segmentation. In this paper, the features of the concave points and the variation of the triangle area are used to find matching concave points, and these matched points are then used to segment the overlapped fibers.

3.1 Extraction Method of Fiber Contour Concave Points

The points with the maximum bending degree on a curve, which also have locally maximal curvature, are called contour corners. For a digital curve composed of pixels with integer coordinates, the curvature of points on the curve cannot easily be computed directly. We therefore need a mathematical method to calculate the discrete curvature of each candidate corner point; the general process is as follows:

(1) Select a suitable scale for measuring the discrete curvature;

(2) Design a measurement method for the discrete curvature and compute the discrete curvature of each point on the curve;

(3) Determine the local maxima of the absolute value of the discrete curvature;

(4) Set a threshold; a point whose curvature value exceeds the threshold is recognized as a corner.

There are many contour corner detection algorithms [16,17,18]. Because of its low complexity and computational cost, the K-cosine curvature algorithm [19, 20] is used here to detect the corner points on the contour curve. The t-th contour is expressed as:

$$ L_{t} = \{ P_{i} (x_{i} ,y_{i} )|i = 1,2,3 \ldots n\} $$
(7)

where n is the number of contour points on the t-th contour curve. For a point Pi(xi, yi), take Pi−k(xi−k, yi−k) and Pi+k(xi+k, yi+k) on either side of it. The k-vectors at point Pi are defined as:

$$ \begin{array}{*{20}l} {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}}{{\alpha_{i} }} (k) = (x_{i + k} - x_{i} ,y_{i + k} - y_{i} )} \hfill \\ {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}}{{\beta_{i} }} (k) = (x_{i - k} - x_{i} ,y_{i - k} - y_{i} )} \hfill \\ \end{array} $$

We compute the k-cosine, which is the cosine of the angle between the two k-vectors above, as follows:

$$ C_{i} (k) = \cos \vartheta_{i} = \frac{{\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}}{{\alpha_{i} }} (k) \cdot \overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}}{{\beta_{i} }} (k)}}{{\left| {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}}{{\alpha_{i} }} (k)} \right|\left| {\overset{\lower0.5em\hbox{$\smash{\scriptscriptstyle\rightharpoonup}$}}{{\beta_{i} }} (k)} \right|}} $$
(8)

Through this method, the cosine of the angle between the two vectors formed by Pi and the points at distance k on either side of it can be calculated over a range of k. The largest cosine value is selected and denoted Cmax, and the corresponding k is denoted Km; that is, Ci(Km) is the largest of the candidate cosine values.

Points with locally maximal curvature also have a local maximum of the cosine. A point whose cosine value exceeds the threshold CT is regarded as a point of maximum curvature. In this paper, a point whose vector angle is below 165° is regarded as a corner, i.e. the value of Ci should lie in [−0.9, 1) (Fig. 2).
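A minimal sketch of this K-cosine corner detection is given below. For simplicity it uses a single fixed k instead of searching for Km at every point, and the cosine threshold of −0.9 mirrors the angle criterion above; both are illustrative choices.

```python
# Minimal sketch of K-cosine corner detection on a closed contour (Eq. (8)).
import numpy as np

def k_cosine_corners(contour, k=5, cos_threshold=-0.9):
    # contour: (n, 2) array of (x, y) points stored in order along the curve.
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    cosines = np.empty(n)
    for i in range(n):
        # Forward and backward k-vectors (indices wrap on a closed contour).
        a = pts[(i + k) % n] - pts[i]
        b = pts[(i - k) % n] - pts[i]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        cosines[i] = np.dot(a, b) / denom if denom > 0 else -1.0
    corners = []
    for i in range(n):
        # Keep local maxima of the cosine that exceed the threshold
        # (large cosine = sharp vector angle = strong corner).
        if cosines[i] >= cos_threshold and \
           cosines[i] >= cosines[(i - 1) % n] and \
           cosines[i] >= cosines[(i + 1) % n]:
            corners.append(i)
    return corners, cosines
```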

Fig. 2.

The schematic of K-Cosine curvature algorithm

The corners in the fiber contour image include concave points and convex points. The concave points among the corners are determined by the vector angle method [21]. The vectors formed by Pi with Pi−k and Pi+k are shown in Fig. 3: Pi is a point located on the contour curve L, and Pi−k and Pi+k are the preceding and succeeding points at a distance of k pixels from Pi.

Fig. 3.

The judgment of concave points

The pixels of the fiber contour are stored in counter-clockwise order in this paper, and Vt is the threshold on the angle formed by a concave point with its preceding and succeeding points. If the angle measured counter-clockwise from the vector PiPi−k to the vector PiPi+k is less than or equal to Vt, the corner Pi is judged to be a concave point; otherwise Pi is judged to be a convex point. All concave points in each region can be obtained by traversing all corners. Obviously, Vt is set to 180° in this paper.
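A small sketch of this vector-angle test is shown below, assuming the contour is an (n, 2) array of (x, y) points stored counter-clockwise and that the corner indices come from the K-cosine step; the neighbourhood distance k is an illustrative value.

```python
# Minimal sketch: classify detected corners into concave and convex points
# using the counter-clockwise angle between the two k-vectors at each corner.
import numpy as np

def classify_corners(contour, corner_indices, k=5, vt_deg=180.0):
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    concave, convex = [], []
    for i in corner_indices:
        v1 = pts[(i - k) % n] - pts[i]          # vector Pi -> Pi-k
        v2 = pts[(i + k) % n] - pts[i]          # vector Pi -> Pi+k
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        # Counter-clockwise angle from PiPi-k to PiPi+k, in [0, 360) degrees.
        ang = np.degrees(np.arctan2(cross, dot)) % 360.0
        (concave if ang <= vt_deg else convex).append(i)
    return concave, convex
```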

3.2 Concave Points Matching Method

Algorithm Implementation.

Based on the above discussion and other features of the concave points of overlapped fibers, the following steps are used to implement the matching process (a condensed sketch is given after the list):

(1) For a concave point Pi on the contour curve, first obtain its coordinates and serial number on the contour curve; then take a concave point Pj (i ≠ j) and apply the following test to decide whether Pi and Pj can possibly match: if there are other corners between Pi and Pj, they may be matching points and the next step is used for further judgement; otherwise the match fails and the next concave point Pj+1 is tried.

(2) If the midpoint of the segment between Pi and Pj lies inside the fiber contour, proceed to the next step; otherwise the match fails and the next concave point is tried.

(3) The area of the triangular support region formed by the concave point, its precursor (or subsequent) point, and the target concave point is used to determine whether Pi and Pj lie on the same fiber contour. The matched concave points are divided into two types of matching set: an overlapped (crossed) fiber set contains four matching points in total, and an adhesive fiber set contains two.

(4) Examination: to prevent interference from other incoherent concave points, the matched points are placed into the same set in counter-clockwise order; if the number of matched concave points in a set exceeds four, wrong concave points have been included. If the distance between two matched concave points is far greater than the distance between the other two matched concave points, those matched points should also be removed.

(5) Repeat the above steps until all the concave points have been matched.
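The condensed sketch below strings the five steps together. The helpers corners_between, midpoint_inside and support_area are hypothetical names for the tests of steps (1)-(3); sketches of the latter two appear below, and the pruning of step (4) is omitted.

```python
# Minimal sketch of the concave point matching loop, steps (1)-(3).
def match_concave_points(contour, concave, corners, area_threshold=15.0):
    matches = []
    for a in range(len(concave)):
        for b in range(a + 1, len(concave)):
            pi, pj = concave[a], concave[b]
            # Step (1): at least one other corner must lie between Pi and Pj.
            if not corners_between(pi, pj, corners):
                continue
            # Step (2): the midpoint of Pi and Pj must fall inside the fiber.
            if not midpoint_inside(contour, pi, pj):
                continue
            # Step (3): the triangle support region formed with the precursor
            # (or subsequent) point must be nearly degenerate.
            if support_area(contour, pi, pj) < area_threshold:
                matches.append((pi, pj))
    # Step (4), pruning oversized sets and overly distant pairs, is omitted.
    return matches
```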

Algorithm Description.

The intersection of two convex polygons in the plane is still a convex polygon; that is, the overlapped region of two fiber contours is a convex polygon, and the cross points in its vertex set are the concave points of the fiber contour. Since the contour curve between two matching concave points contains at least one boundary of this convex polygon, there is at least one other corner between two concave points that belong to the same contour at the fiber overlapped region.

In a convex polygon, the midpoint of any two vertices must lie inside the polygon. Therefore, the midpoint of two matching concave points in the overlapped region is bound to lie inside the fiber contour curve.
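For illustration, this midpoint test can be written with OpenCV's point-in-polygon test, as in the sketch below; the contour is assumed to be an (n, 2) array of (x, y) pixel coordinates.

```python
# Minimal sketch of the step-(2) test: the midpoint of two candidate concave
# points must lie on or inside the fiber contour.
import cv2
import numpy as np

def midpoint_inside(contour, i, j):
    pts = np.asarray(contour, dtype=np.float32)
    mid = (pts[i] + pts[j]) / 2.0
    # pointPolygonTest returns +1 inside, 0 on the contour, -1 outside.
    return cv2.pointPolygonTest(pts.reshape(-1, 1, 2),
                                (float(mid[0]), float(mid[1])), False) >= 0
```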

The concave points are always located at the intersection of two contour curves. Take Pi(x, y) as the current detection point; according to the order in which the pixels are stored, PL(xi−m, yi−m) is the precursor point of Pi and, in the same way, PR(xi+m, yi+m) is the subsequent point of Pi.

The triangle △PiPLPj is constructed from Pi, PL and Pj, and the distance DIL between Pi and PL, the distance DIJ between Pi and Pj, and the distance DJL between PL and Pj are calculated. Heron's formula is then used to calculate the area of the triangular support region:

$$ A = (D_{IL} + D_{IJ} + D_{JL} )/2 $$
(9)
$$ S_{iL} = \sqrt {A(A - D_{IL} )(A - D_{IJ} )(A - D_{JL} )} $$
(10)

Obviously, the closer the area SiL is to 0, the closer Pi, PL and Pj are to lying on a straight line, in which case Pi and Pj may be on the same contour. Conversely, the larger the area SiL, the less likely it is that Pi and Pj are on the same contour.

Similarly, Pi, PR and Pj form a triangle △PiPRPj, whose area is calculated as follows:

$$ S_{iR} = \sqrt {A(A - D_{IR} )(A - D_{IJ} )(A - D_{JR} )} $$
(11)

The same method can be used to judge whether Pi and Pj are on the same contour. An experiment illustrating the procedure is shown in Fig. 4.
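A small sketch of this area test, following Eqs. (9)-(11), is given below; the neighbourhood distance k and the use of the smaller of the two support-region areas are illustrative choices.

```python
# Minimal sketch of Eqs. (9)-(11): Heron's formula gives the area of the
# triangle support region formed by a concave point, its precursor (or
# subsequent) point and the candidate matching point.
import numpy as np

def heron_area(p, q, r):
    d_pq = np.linalg.norm(np.subtract(p, q))
    d_pr = np.linalg.norm(np.subtract(p, r))
    d_qr = np.linalg.norm(np.subtract(q, r))
    s = (d_pq + d_pr + d_qr) / 2.0                                  # Eq. (9)
    return np.sqrt(max(s * (s - d_pq) * (s - d_pr) * (s - d_qr), 0.0))  # Eq. (10)/(11)

def support_area(contour, i, j, k=5):
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    s_left = heron_area(pts[i], pts[(i - k) % n], pts[j])   # triangle Pi, PL, Pj
    s_right = heron_area(pts[i], pts[(i + k) % n], pts[j])  # triangle Pi, PR, Pj
    # Either support region being nearly degenerate suggests Pi and Pj lie on
    # the same fiber contour, so the smaller area is compared to the threshold.
    return min(s_left, s_right)
```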

Fig. 4.

Matching process of concave points. (a) The gray-scale fiber image; (b) the fiber binary image with false corners; (c) the fiber binary image after removing false corners; (d) fiber contour curve

The original fiber image is shown in Fig. 4(a); after image preprocessing, the binarized fiber image is obtained. The binary image contains concave points, convex points, false concave points and false convex points. An appropriate method is used to remove the false corner points, and the fiber edge is then extracted as a clear binary contour, as shown in Fig. 4(b)–(d).

In Fig. 4(d), the contour points P1 and P2 belong to the same fiber contour. On one side of P1, take the precursor point P1L, and on the other side take the subsequent point P1R. P1L, P1 and P2 form a triangle △P1P1LP2; because the three vertices lie on the same fiber contour curve, the area of △P1P1LP2 satisfies S1L < T, where T is the threshold. The area of the triangle formed by P1, P1L and P3 is greater than the threshold T, indicating that they are not on the same side of the contour curve. It can be seen that introducing P1L lets P1 find its matching point P2. Similarly, △P1P1RP4 is constructed, so that P1 can find its other matching point P4. The area threshold is set to find points that are nearly collinear; in theory the smaller the better, and 0 would be ideal. However, the fiber boundaries may be distorted to some degree. The magnification of the fiber images used in this paper is 200×; based on a large number of tests, the threshold is set to about 15.

By the above algorithm, the matched point pairs are obtained as {P1, P2}, {P1, P4}, {P2, P3}, {P3, P4}. The segmentation method can then be adopted to segment the overlapped fibers in the binary image and finally obtain two independent fibers.

3.3 Fiber Image Segmentation Method Based on Concave Points Matching

In the previous section, the concave points in the binary fiber image have been matched. The subsequent work is to separate the fibers in the image using the matched points.

First, determine whether a crossed or adhesive concave point set exists on each fiber contour. If not, the fiber is an independent one. After the independent fibers are separated out, only crossed or adhesive fibers remain in the image.

The concave points are then classified into cross and adhesive sets. The adhesive fibers can be separated using the adhesive-type matching points; after this step, only cross-type fibers remain in the image. The segmentation of overlapped fibers of the cross type is based on the four matched concave points in the intersection area: drawing lines between adjacent matched points segments the crossed fibers, as sketched below.
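A minimal sketch of this cutting step for one crossed region is given below, assuming the four matched concave points are supplied counter-clockwise as (x, y) pixel coordinates; drawing background-coloured lines between adjacent points breaks the overlapped region apart so that the pieces of each fiber can then be regrouped via the matched pairs.

```python
# Minimal sketch: cut one crossed region out of the binary fiber image by
# drawing lines between adjacent matched concave points.
import cv2

def draw_cut_lines(binary, matched_points):
    # matched_points: the four matched concave points of one crossed region,
    # stored counter-clockwise as (x, y) pixel coordinates.
    separated = binary.copy()
    n = len(matched_points)
    for t in range(n):
        p = tuple(int(v) for v in matched_points[t])
        q = tuple(int(v) for v in matched_points[(t + 1) % n])
        # Background-coloured (zero) lines between adjacent matched points cut
        # the overlapped region out of the binary fiber image.
        cv2.line(separated, p, q, color=0, thickness=1)
    return separated
```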

Figure 5 shows the separation process of two adhesive fibers. The image preprocessing is the same as in Fig. 4. The concave points of the two adhesive fibers are detected by the K-cosine curvature algorithm and the vector angle method, and the matching algorithm described in this paper then verifies whether they match each other. Finally, the segmentation method is used to separate the adhesive fibers into two independent fibers.

Fig. 5.

The segmentation of adhesive fibers

4 Experimental Results

4.1 The Basic Morphology of Overlapped Fiber Images

The samples in this paper are provided by the Shenzhen Entry-Exit Inspection and Quarantine Bureau. Each sample image contains two or more fibers, which exist mainly in three forms: independent, crossed, or adhesive.

Independent fibers are easy to mark, but overlapped fibers must be separated into individual ones before they can be correctly marked. Figure 4 shows the basic morphology of a crossed fiber image: after removing the noise and extracting the contour, the crossed position contains four concave points. To restore the original outline shapes of the two fibers, the concave point matching algorithm has to be used. Figure 5 shows another basic morphology, that of an adhesive fiber image, whose adhesive position contains two concave points.

4.2 Discussion of Matching and Segmentation Results

Figures 6 and 7 show the segmentation process for more than two fibers. As B-spline surface fitting and complex-domain filtering are used, clear contours are obtained. Figure 6 contains three crossed fibers, with two crossed positions and eight concave points. The tests show that the fiber contours are extracted accurately by the preprocessing algorithm described above and that the concave points in this image are identified correctly. The individual fibers, obtained by the segmentation method after the matching process, are separated without any deformation.

Fig. 6.

Three fibers segmentation of the crossed type

Fig. 7.

The fiber segmentation of the mixed type

Figure 7 shows the segmentation process of four fibers, one of which is an isolated fiber while the other three cross or adhere to each other. The isolated fiber can be identified using connected-domain analysis. For the other three overlapped fibers, the algorithm described in this paper finds the matching points, and the segmentation method then separates them.

To test the algorithm, a number of samples are used. The segmentation accuracy for the crossed type, the adhesive type, and the mixed type of cross and adhesion is given in Table 1. Each crossed-type sample has only two fibers, the adhesive-type samples have two to four fibers, and the mixed-type samples contain three to seven fibers.

Table 1. The fiber segmentation accuracy

4.3 Matching Error and Resolvent

In Fig. 8(a), there are actually five adherent fibers, but six fibers are separated. This is because the contours on both sides of two concave points that should not match (points P2 and P8) are similar; worse, they are concentrated in a narrow area, so the lines connecting P2 and P8 with their precursor or subsequent points are close to straight lines, resulting in a matching error. As a result, the number of separated fibers is greater than the actual number of fibers.

Figure 8(b) actually includes five adherent fibers, but only four were separated. The reason is that the distance between the two concave points (P1, P4) is too long compared with the others; worse, the area of the triangle formed by P1, its forward (or successor) point and the target point is too large during concave point matching, causing the algorithm used in this paper to fail.

Fig. 8.

False segmentation caused by matching errors. (a) Over-segmentation (b) under-segmentation

According to the algorithm of this paper, the matched concave points of one overlapped region are stored in counter-clockwise order, giving {P1, P2, P7, P8}, and the separation of fibers using non-adjacent concave points is prohibited. Because P1 and P7, and likewise P2 and P8, are not adjacent in this ordering, they cannot be used for segmentation, which solves the problem in Fig. 8(a). In Fig. 8(b), the dilation step of the image preprocessing smooths and fills the small region so that the concave point feature is weakened. Therefore, during image preprocessing, the number of mathematical morphology iterations should be minimized when using the B-spline fitting to obtain the contour.
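A small sketch of this adjacency rule is shown below: with the matched concave points of one overlapped region stored counter-clockwise, only neighbouring points in that ordering are allowed to form a cut line.

```python
# Minimal sketch of the adjacency check used to reject invalid cut pairs.
def is_adjacent(ordered_points, i, j):
    # ordered_points: matched concave points of one overlapped region,
    # stored counter-clockwise; i, j index into this list.
    n = len(ordered_points)
    return abs(i - j) == 1 or abs(i - j) == n - 1

# Example: for {P1, P2, P7, P8} stored counter-clockwise, pairs such as
# (P1, P2) or (P8, P1) are valid cut pairs, while (P1, P7) and (P2, P8)
# are rejected because they are not adjacent in the ordering.
```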

5 Conclusions and Perspectives

In this paper, we have presented a fiber segmentation method that classifies the corners of an overlapped fiber image into concave points and convex points. The characteristic relationships between the concave points, together with the area of the triangular support region computed with Heron's formula, are used to judge whether two points belong to the same fiber contour. The experimental results show that the algorithm resolves the problems of separating multiple fibers and of inaccurate segmentation when multiple fibers cross or adhere to each other. The algorithm is not only suitable for extracting the concave points at the overlapped boundaries in a fiber image, but also realizes the segmentation of the overlapped fibers through the matching algorithm.

The proposed image segmentation approach is inspired by the corner properties of fiber contours, so this framework may be suitable for separating other fibrous objects. An interesting question is whether it could also be used to distinguish different fiber kinds, e.g. cotton and fibrilia, or wool and cashmere. Building on the segmentation results, the next step will be to use the separated fibers to measure diameters and other important features. This will be investigated in the future.