Computer-Aided Design, Volume 39, Issue 7, July 2007, Pages 568-582

Automatic body feature extraction from a marker-less scanned human body

https://doi.org/10.1016/j.cad.2007.03.003

Abstract

In this paper, we propose a novel method for extracting body features from a marker-less scanned body. The descriptions of human body features, mostly defined in ASTM (1999) and ISO (1989), are interpreted into logical mathematical definitions. Using these definitions, we employ image processing and computational geometry techniques to identify body features automatically from the torso point cloud. We currently extract 21 feature points and 35 feature lines on the human torso; this number may be extended if necessary. Moreover, feature extraction takes less than 2 min of processing time, starting from the raw point cloud. The algorithm has been successfully tested on several Asian female adults aged from 18 to 60.

Introduction

Anthropometry is an important issue in the field of human-factors-related production. A precise sizing system provides a useful foundation for the manufacture of daily commodities. Among these, the design of human apparel, such as clothes, glasses, hats and footwear, is most important and needs precise civilian anthropometric data. Body dimensions are usually tape-measured by tailors, so the accuracy of measurement is affected by the expertise of the operators and the cooperation of the person being measured. Conducting nationwide large-scale anthropometry is time consuming and tedious, and there is a great need for an automatic anthropometry system. Recently, the 3D body scanner has become a notable tool for anthropometry; Anthroscan [1], BodyShape [2], Cyberware [3], Gemini [4], Hamamatsu [5], Inspeck [6], TC2 [7], TriForm [8] and Vitus [9] are some examples. However, the raw data taken from such 3D scanners cannot be readily used by industry because the scanned points contain little meaningful information. The key steps in solving this problem are data extraction, feature identification, and data symbolization.

Much research effort has therefore been put into the post-processing of the raw data. Nurre separated the data into six regions by finding cusps on every slice of discrete points [10], [11]. Ju et al. segmented the body into 5 parts and determined the feature points layer by layer by computing their circumferences [12]. Wang et al. used fuzzy logic to recognize features in unorganized point clouds [13]. Pargas used the 3D scanner to measure body dimensions for the garment industry [14]. In the well-known CAESAR project, Robinette et al. used a 3D body scanner to collect body dimensions [15]. Prior to scanning, reflective markers were attached to the anatomical landmarks of the subject, and the positions of the landmarks were then determined semi-automatically; a neural net was used to identify the positions of the landmarks by means of feature points [16]. Ashdown constructed a sizing system using data from a 3D body scanner [17]. Wang developed an algorithm to extract key features of the human body and then built a set of parametric surfaces to represent the scanned subject [18]. Simmons compared different 3D scanners and concluded that a standard feature terminology and common feature-recognition software are needed for the various scanners [19]. Turning the scanned data into useful information for design purposes is still a long way off.

In this paper, we propose a novel method for body feature extraction from the scanned human body. The semantic definitions of body features found in ISO 8559 were interpreted into a series of mathematical definitions [20]. A total of 21 feature points and 35 feature lines on the human torso were identified. Each feature stands for an important landmark for garment making or the ergonomics industry. In contrast to previous studies, markers were not needed: the features were identified from geometric properties and common proportions, thus eliminating the variations due to different operators. The overall goal of this research is to generate a digital mannequin automatically from the scanned body. With the anatomical features embedded in the digital mannequin, the automatic anthropometry system can readily be applied in the garment design industry, for example, as well as in ergonomic applications. Fig. 1 shows the workflow for digital mannequin generation. First, a subject is scanned by a full-body scanner. The generated point cloud is aligned along its principal axes and segmented into 5 major parts: the arms, the legs, and a torso-and-head segment [21]. The torso is then encoded into a range map in order to eliminate noise and fill the voids inside it. Geometric features are easily discovered in the encoded image. Finally, a parametric triangular tessellation is generated which retains every feature. This paper focuses on feature extraction as the basis for the next stages of tessellation and automatic anthropometry.
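
The alignment step is only summarized here. As a rough illustration, the following C++ sketch aligns a point cloud along its principal axes using PCA; the use of the Eigen library and the function name alignToPrincipalAxes are assumptions made for illustration, not details of the paper's implementation.

```cpp
// Sketch: align a point cloud along its principal axes via PCA.
// Assumes the Eigen library; the paper does not specify its implementation.
#include <Eigen/Dense>
#include <vector>

// Rotate the cloud so that its principal axes coincide with x, y, z,
// and translate its centroid to the origin.
std::vector<Eigen::Vector3d> alignToPrincipalAxes(const std::vector<Eigen::Vector3d>& cloud)
{
    // Centroid of the cloud.
    Eigen::Vector3d centroid = Eigen::Vector3d::Zero();
    for (const auto& p : cloud) centroid += p;
    centroid /= static_cast<double>(cloud.size());

    // 3x3 covariance matrix of the centered points.
    Eigen::Matrix3d cov = Eigen::Matrix3d::Zero();
    for (const auto& p : cloud) {
        Eigen::Vector3d d = p - centroid;
        cov += d * d.transpose();
    }
    cov /= static_cast<double>(cloud.size());

    // Eigenvectors of the covariance matrix are the principal axes
    // (columns sorted by ascending eigenvalue).
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> solver(cov);
    Eigen::Matrix3d axes = solver.eigenvectors();

    // Express every point in the principal-axis frame.
    std::vector<Eigen::Vector3d> aligned;
    aligned.reserve(cloud.size());
    for (const auto& p : cloud)
        aligned.push_back(axes.transpose() * (p - centroid));
    return aligned;
}
```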

This paper is organized as follows. In Section 2, a detailed description is given of the techniques used for feature identification. In Section 3, we describe the methods for data alignment, segmentation and coded image representation. Next, we describe the definition and search procedure for each feature on the torso. Finally, we discuss the results of our approach.

Section snippets

Mathematical theory

The feature extraction system in this study is based on the descriptions of feature points and feature lines in the garment design literature, handbooks, and standards (ASTM and ISO). These descriptions are interpreted into logical mathematical definitions, with reasonable proportions used where the descriptions are not directly applicable, and are finally coded into a computer algorithm. In this way, the body features can be correctly and uniquely found without ambiguity. The theorem and methodology for

Pre-processing

The 3D data in its original format not only contains no geometric features but also occupies a great deal of memory. Manipulating the data directly in 3D space is computationally difficult and time consuming. The purpose of post-processing the raw scanned data is to sort it into a more meaningful format so that it can be conveniently used for feature recognition.
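
As an illustration of what such a coded (range-image) format might look like, the sketch below rasterizes a point cloud into a 2D depth map; the grid resolution, the sentinel value for empty cells, and the rule of keeping the depth nearest the scanner are illustrative assumptions, not details taken from the paper.

```cpp
// Sketch: encode a point cloud as a 2D depth (range) image.
// Grid resolution and the "keep nearest depth" rule are illustrative assumptions.
#include <limits>
#include <vector>

struct Point3 { double x, y, z; };

// Project points onto the x-y plane and store, for each grid cell, the depth (z)
// of the point closest to the scanner; empty cells keep a sentinel value.
std::vector<std::vector<double>> encodeDepthImage(const std::vector<Point3>& cloud,
                                                  double xmin, double xmax,
                                                  double ymin, double ymax,
                                                  int width, int height)
{
    const double kEmpty = std::numeric_limits<double>::lowest();
    std::vector<std::vector<double>> depth(height, std::vector<double>(width, kEmpty));

    for (const Point3& p : cloud) {
        int col = static_cast<int>((p.x - xmin) / (xmax - xmin) * (width  - 1));
        int row = static_cast<int>((p.y - ymin) / (ymax - ymin) * (height - 1));
        if (col < 0 || col >= width || row < 0 || row >= height) continue;
        // Keep the largest z per cell, i.e. the surface facing the scanner.
        if (p.z > depth[row][col]) depth[row][col] = p.z;
    }
    return depth;
}
```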

During the full-body scanning, subjects are asked to keep their arms and legs slightly separated. Although footprints inside the

Feature identification

In this research, body features are roughly classified into feature points and feature lines. Feature points are mostly located on the extremities of the body surface; thus, feature points are defined by their respective geometries on the human body. Feature lines, in turn, are defined as groups of points sharing the same property, such as zero-crossing points of the Sobel mask. In other words, feature lines can also be defined as the intersection curve of a plane passing through
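
The snippet mentions zero-crossing points of the Sobel mask as one property that groups points into a feature line. The sketch below, which applies a horizontal 3x3 Sobel operator to the depth image and flags sign changes between neighbouring responses, is only a minimal illustration of that idea, not the paper's exact criterion.

```cpp
// Sketch: 3x3 Sobel response on the depth image and zero-crossing detection
// along image rows; an illustration of the idea, not the paper's exact criterion.
#include <utility>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Horizontal Sobel response Gx at every interior pixel (borders stay zero).
Image sobelX(const Image& depth)
{
    const int h = static_cast<int>(depth.size());
    const int w = static_cast<int>(depth[0].size());
    Image gx(h, std::vector<double>(w, 0.0));
    for (int r = 1; r + 1 < h; ++r)
        for (int c = 1; c + 1 < w; ++c)
            gx[r][c] = (depth[r-1][c+1] + 2.0*depth[r][c+1] + depth[r+1][c+1])
                     - (depth[r-1][c-1] + 2.0*depth[r][c-1] + depth[r+1][c-1]);
    return gx;
}

// Pixels where the Sobel response changes sign between horizontal neighbours;
// such zero-crossings are candidate points for feature lines.
std::vector<std::pair<int, int>> zeroCrossings(const Image& gx)
{
    std::vector<std::pair<int, int>> pts;
    for (std::size_t r = 0; r < gx.size(); ++r)
        for (std::size_t c = 0; c + 1 < gx[r].size(); ++c)
            if (gx[r][c] * gx[r][c + 1] < 0.0)
                pts.emplace_back(static_cast<int>(r), static_cast<int>(c));
    return pts;
}
```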

Results

The automatic feature extraction software was developed in C++ and executed on a Pentium IV 3.0 GHz personal computer. The whole process takes less than 2 min to identify every body feature, starting from the raw point cloud. Each feature stands for an important landmark in apparel design and anthropometry. Currently, 21 feature points and 35 feature lines can be identified, and this number can be extended if needed.

Several Asian

Discussion and conclusions

An automatic body feature extraction algorithm based on image processing and computational geometry has been presented. Our method conducts the computations in a 2D depth image, which is much more efficient than computing in the original complex 3D point cloud. Moreover, the voids produced by the body scanner can be filled using a simple interpolation method on the depth image, and the noise generated by the body scanner is easily eliminated using image processing techniques.
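
As a rough sketch of the void-filling step mentioned above, the code below performs row-wise linear interpolation across empty cells of the depth image. The assumption that each void is bounded by valid pixels on the same row, and the sentinel value for empty cells, are illustrative choices; the paper only states that a simple interpolation method is used.

```cpp
// Sketch: fill voids in a depth image by row-wise linear interpolation.
// Assumes each void is bounded left and right by valid pixels on the same row.
#include <limits>
#include <vector>

void fillVoids(std::vector<std::vector<double>>& depth)
{
    const double kEmpty = std::numeric_limits<double>::lowest();
    for (auto& row : depth) {
        for (std::size_t c = 0; c < row.size(); ++c) {
            if (row[c] != kEmpty) continue;
            // Find the extent of the empty run and its valid neighbours.
            std::size_t l = c;                                   // first empty cell
            std::size_t r = c;
            while (r < row.size() && row[r] == kEmpty) ++r;      // first valid cell after the run
            if (l == 0 || r == row.size()) { c = r; continue; }  // void touches the border: skip
            double left  = row[l - 1];
            double right = row[r];
            // Linear interpolation across the run of empty cells.
            for (std::size_t k = l; k < r; ++k) {
                double t = static_cast<double>(k - l + 1) / static_cast<double>(r - l + 1);
                row[k] = (1.0 - t) * left + t * right;
            }
            c = r;
        }
    }
}
```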

Future work

The ongoing work for this study can be catalogued into six approaches. First, building up a database of standard models with their representative ages, sexes, and figures. Second, applying a similar methodology to build up a database for the human head, arms, and legs, with the aim of constructing a realistic digital human. Third, amending the feature definitions so that the algorithm can be applied to scanned men or children. Fourth, expanding the outcomes of this study to

References (32)

  • Hamamatsu. http://usa.hamamatsu.com/sys-industrial/blscanner,...
  • Inspeck. http://www.inspeck.com/,...
  • TC2. http://www.tc2.com/,...
  • TriForm. http://www.wwl.co.uk/,...
  • Vitus. http://www.vitronic.com/,...
  • Nurre JH. Locating landmarks on human body scan data. In: Proc. of international conference on recent advances in 3-D...

Iat-Fai Leong is a Ph.D. student at National Cheng Kung University. He graduated in 1998 with a BS degree in Mechanical Engineering and received an M.Sc. degree in 2000, all at National Cheng Kung University, Taiwan. His research interests are in the areas of computer graphics and computer-aided geometric design.

Jing-Jing Fang is an associate professor in the Department of Mechanical Engineering at National Cheng Kung University, Taiwan. She leads her research team working in the areas of digital mannequins, 3D garment design, pattern generation, image-based surgical planning, and surgical navigation. Her research interests are geometric modeling, object-oriented design, and virtual reality applications. She received her BS and M.Sc. in applied mathematics in Taiwan in 1984, and her Ph.D. in mechanical and chemical engineering from Heriot-Watt University, Britain, in 1996.

Ming-June Tsai is a professor in the Department of Mechanical Engineering at National Cheng Kung University, Taiwan. He received his Ph.D. in Mechanical Engineering from Ohio State University in 1986. His research interests are robotics and automation, image processing and feature recognition, design of optical inspection systems, and geometric reverse engineering systems.
