DOI: 10.1145/3486637.3489485
Research article

Point cloud capture and segmentation of animal images using classification and clustering

Published: 02 November 2021

Abstract

Measuring characteristics of animals in the wild is not always possible, due to their demeanour and lack of human contact. Remote capture and processing methods, including the segmentation of animal data into relevant body parts, are required. Existing solutions are either costly or too cumbersome to use in the wild. This study explores the use of RGB depth (RGB-D) cameras for data capture of a target animal from a distance. In addition, this study explores the extraction and segmentation of the resulting animal data into point clouds, and the creation of machine learning models for the automated segmentation of this data. Results of this study, including an experimental evaluation, demonstrate the feasibility of utilizing RGB-D cameras for animal data capture, and that classification outperformed clustering for automated animal data segmentation.
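The abstract compares classification and clustering for segmenting captured animal point clouds into body parts. As a minimal illustrative sketch (not the authors' implementation, whose details are not given here), the clustering route can be demonstrated by running plain k-means with farthest-point initialisation over a synthetic 3D point cloud; the two blobs stand in for two well-separated body regions:

```python
# Illustrative sketch only: unsupervised segmentation of a 3D point cloud
# via k-means clustering. The blob positions, scales, and the k-means
# variant used here are assumptions for demonstration, not taken from
# the paper.
import numpy as np

def kmeans_segment(points, k, iters=20):
    """Assign each 3D point in `points` (N x 3 array) to one of k segments."""
    # Farthest-point initialisation: start from point 0, then repeatedly
    # take the point farthest from all centroids chosen so far.
    centroids = [points[0]]
    for _ in range(k - 1):
        dists = np.min(
            [np.linalg.norm(points - c, axis=1) for c in centroids], axis=0
        )
        centroids.append(points[dists.argmax()])
    centroids = np.array(centroids)

    for _ in range(iters):
        # Distance from every point to every centroid; nearest wins.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels

# Two well-separated synthetic "body parts" as Gaussian point blobs.
rng = np.random.default_rng(1)
torso = rng.normal(loc=(0.0, 0.0, 0.0), scale=0.1, size=(200, 3))
head = rng.normal(loc=(2.0, 0.0, 0.0), scale=0.1, size=(200, 3))
cloud = np.vstack([torso, head])

labels = kmeans_segment(cloud, k=2)
# Each blob ends up with a single, distinct segment label.
```

On real RGB-D captures the points are far noisier and body parts are not spherical blobs, which is consistent with the paper's finding that supervised classification outperformed clustering for this task.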


Cited By

  • (2022) HaniMob 2021 Workshop Report: The 1st ACM SIGSPATIAL Workshop on Animal Movement Ecology and Human Mobility. SIGSPATIAL Special 13, 3 (33-36). DOI: 10.1145/3578484.3578492. Online publication date: 23-Dec-2022.


Published In

HANIMOB '21: Proceedings of the 1st ACM SIGSPATIAL International Workshop on Animal Movement Ecology and Human Mobility
November 2021
53 pages
ISBN:9781450391221
DOI:10.1145/3486637
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. classification
  2. clustering
  3. point cloud
  4. segmentation


Conference

SIGSPATIAL '21



Article Metrics

  • Downloads (last 12 months): 11
  • Downloads (last 6 weeks): 2
Reflects downloads up to 27 Feb 2025.
