
Robust Embedded Autonomous Driving Positioning System Fusing LiDAR and Inertial Sensors

Published: 19 January 2024

Abstract

Autonomous driving demands precise multi-sensor fusion positioning on resource-constrained embedded systems. LiDAR-centered sensor fusion is a mainstream navigation approach because it is insensitive to changes in illumination and viewpoint. However, such systems struggle to process large-scale sequential LiDAR data with the limited resources available on board, which makes fully LiDAR-centralized sensor fusion impractical. As a result, most mainstream positioning methods rely on hand-crafted features such as planes and edges to ease this burden, and this design has become a cornerstone of LiDAR-inertial sensor fusion. However, such ultra-lightweight feature extraction, although it meets the real-time constraints of LiDAR-centered fusion, is severely vulnerable to high-speed rotational or translational perturbation. In this paper, we propose a sparse-tensor-based LiDAR-inertial fusion method for autonomous driving embedded systems. Leveraging sparse tensors, our method extracts global geometric features and thereby alleviates the sparsity of the point cloud. The inertial sensor is used to avoid the time-consuming coarse-level point-wise inlier matching step. We conduct experiments on both representative benchmark datasets and real-world scenes. The results show that the proposed solution is more robust and accurate than classical methods.
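The abstract's two key ideas, compressing each raw scan into a sparse voxel representation and seeding registration with an inertial prior instead of an exhaustive coarse point-wise inlier search, can be sketched in a few lines. The snippet below is a minimal illustration using Open3D's generic point-to-point ICP as a stand-in for the authors' sparse-tensor pipeline; the function name `register_with_imu_prior`, the IMU-integrated initial pose `T_imu`, and the 0.3 m voxel size are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
import open3d as o3d

def register_with_imu_prior(src_pts, tgt_pts, T_imu=None, voxel=0.3):
    """Hypothetical sketch: sparse voxel quantization + IMU-seeded ICP.

    src_pts, tgt_pts : (N, 3) NumPy arrays of LiDAR points
    T_imu            : 4x4 initial pose guess integrated from the IMU
    voxel            : voxel edge length in meters (assumed value)
    """
    if T_imu is None:
        T_imu = np.eye(4)  # no inertial prior: degenerates to plain ICP

    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(src_pts))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(tgt_pts))

    # Sparse-tensor-style quantization: keep one representative point per
    # occupied voxel, bounding the per-scan workload on embedded hardware.
    src_d = src.voxel_down_sample(voxel)
    tgt_d = tgt.voxel_down_sample(voxel)

    # The inertial prior stands in for the expensive coarse point-wise
    # inlier matching: ICP only has to refine a pose that is already close.
    result = o3d.pipelines.registration.registration_icp(
        src_d, tgt_d,
        max_correspondence_distance=2 * voxel,
        init=T_imu,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation
```

In a full LiDAR-inertial system, `T_imu` would come from IMU preintegration between consecutive scans, and the learned sparse-tensor features described in the paper would replace the plain point-to-point objective used here.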


• Published in

ACM Transactions on Embedded Computing Systems, Volume 23, Issue 1
January 2024, 406 pages
ISSN: 1539-9087
EISSN: 1558-3465
DOI: 10.1145/3613501
• Editor: Tulika Mitra


        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 19 January 2024
        • Online AM: 17 October 2023
        • Accepted: 14 September 2023
        • Revised: 22 June 2023
        • Received: 23 February 2023
