DOI: 10.1145/3441852.3476519 · ASSETS Conference Proceedings · Poster

SynSLaG: Synthetic Sign Language Generator

Published: 17 October 2021

Abstract

Machine learning techniques have the potential to play an important role in sign language recognition, but existing sign language datasets lack the volume and variety these techniques need to perform well. To enlarge such datasets, we introduce SynSLaG, a tool that synthetically generates sign language datasets from 3D motion capture data. SynSLaG renders realistic images of varied body shapes together with ground-truth 2D/3D poses, depth maps, body-part segmentations, optical flow, and surface normals. These large synthetic datasets open possibilities for advancing sign language recognition and analysis.
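The key idea the abstract describes is that a synthetic generator produces paired (image, annotation) data for free: because the 3D pose is known before rendering, every 2D label is exact. The following is a minimal illustrative sketch of that idea only, assuming a simple pinhole camera; `project` and `synthesize_sample` are hypothetical names invented here, not part of SynSLaG's actual tooling or API.

```python
# Toy sketch (not the authors' pipeline): given 3D joint positions
# from motion capture, produce exact 2D pose labels by pinhole
# projection -- the kind of paired sample a synthetic generator
# emits at scale, with no manual annotation.
import random


def project(joints_3d, focal=500.0, cx=320.0, cy=240.0):
    """Pinhole-project 3D joints (camera coordinates, z > 0) to 2D pixels."""
    return [(focal * x / z + cx, focal * y / z + cy) for x, y, z in joints_3d]


def synthesize_sample(rng):
    """One synthetic sample: a perturbed 3D pose plus its exact 2D labels."""
    # Three illustrative joints (e.g. head and both hands), in meters.
    base_pose = [(0.0, -0.5, 2.0), (0.2, 0.0, 2.1), (-0.2, 0.0, 2.1)]
    # Perturb the pose slightly, as a generator varies body shape and posture.
    pose_3d = [(x + rng.uniform(-0.05, 0.05),
                y + rng.uniform(-0.05, 0.05),
                z + rng.uniform(-0.05, 0.05)) for x, y, z in base_pose]
    return pose_3d, project(pose_3d)  # ground truth comes free


rng = random.Random(0)
dataset = [synthesize_sample(rng) for _ in range(100)]
```

A real generator would render a textured body model (e.g. SMPL in Blender, per the paper's references) instead of projecting bare joints, and would additionally emit depth, segmentation, flow, and normal maps from the same known geometry.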

Supplementary Material

• VTT File (PD1017.vtt)
• Supplemental materials (supplementary.zip)
• MP4 File (PD1017.mp4): presentation video


Cited By

  • (2025) Synthetic Datasets for Hand Gesture Recognition. Proceedings of IEMTRONICS 2024, pp. 35–46. DOI: 10.1007/978-981-97-4780-1_3. Online publication date: 30 Jan 2025.
  • (2023) Dynamic Hand Gesture Recognition for Human-Robot Collaborative Assembly. Artificial Intelligence and Soft Computing, pp. 112–121. DOI: 10.1007/978-3-031-42505-9_10. Online publication date: 18 Jun 2023.

            Published In

            ASSETS '21: Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility
October 2021, 730 pages
ISBN: 9781450383066
DOI: 10.1145/3441852
            Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.


Publisher: Association for Computing Machinery, New York, NY, United States


            Author Tags

            1. Database
            2. Sign language
            3. Synthetic data generator

            Qualifiers

            • Poster
            • Research
            • Refereed limited


            Conference

            ASSETS '21

            Acceptance Rates

ASSETS '21 paper acceptance rate: 36 of 134 submissions (27%).
Overall acceptance rate: 436 of 1,556 submissions (28%).


