DOI: 10.1145/3297156.3297178

The Ceprei-Scape Dataset for the Validation of Autonomous Driving

Published: 08 December 2018

Abstract

Validation and testing are enabling factors for autonomous driving across a wide range of applications, especially vision-based ones. Deep learning methods have been widely deployed in vision-based autonomous driving applications and have benefited enormously from large-scale datasets. However, no current dataset focuses on the validation of autonomous driving or captures the characteristics of the validation process. To address this, we propose the Ceprei-Scape dataset, which consists of RGB videos and corresponding dense LiDAR point clouds. Compared with existing datasets, ours has the following unique properties. First, it is large in scale, containing 50k images with pixel-level semantic labels. Second, each image is tagged with high-accuracy 3D attributes obtained from dense 3D point cloud data. Third, the standard validation procedures required by laws and regulations (Euro NCAP, C-NCAP, etc.) are included in the dataset to reflect the characteristics of autonomous driving validation. We expect the new dataset to promote the validation and testing of autonomous driving and to benefit research and applications in the field.
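The abstract describes per-frame samples that pair an RGB image, a pixel-level semantic label map, and a dense LiDAR point cloud. The sketch below illustrates one plausible in-memory representation of such a sample; the field names, array shapes, and `class_histogram` helper are illustrative assumptions, not the paper's actual data layout or API.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical layout for one annotated frame in a dataset like Ceprei-Scape:
# an RGB image, a per-pixel semantic class map, and the matching LiDAR points.
# Shapes and names are assumptions for illustration only.

@dataclass
class Sample:
    image: np.ndarray    # (H, W, 3) uint8 RGB frame
    labels: np.ndarray   # (H, W) integer class id per pixel
    points: np.ndarray   # (N, 3) LiDAR points, e.g. in metres

    def class_histogram(self, num_classes: int) -> np.ndarray:
        """Pixel count per semantic class, useful for dataset statistics."""
        return np.bincount(self.labels.ravel(), minlength=num_classes)

# Tiny synthetic example standing in for one frame.
rng = np.random.default_rng(0)
sample = Sample(
    image=rng.integers(0, 256, size=(4, 6, 3), dtype=np.uint8),
    labels=rng.integers(0, 3, size=(4, 6)),
    points=rng.normal(size=(100, 3)),
)
hist = sample.class_histogram(num_classes=3)
print(hist.sum())  # total pixel count = 4 * 6 = 24
```

Aggregating such per-class histograms over all 50k frames is the usual way to report label distribution statistics for a semantic segmentation dataset.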



    Published In

    CSAI '18: Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence
    December 2018
    641 pages
ISBN: 978-1-4503-6606-9
DOI: 10.1145/3297156
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    In-Cooperation

• Shenzhen University

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Scene dataset
    2. autonomous driving
    3. image labelling
    4. validation
    5. vision-based

    Qualifiers

    • Research-article
    • Research
    • Refereed limited
