
Semantic Scene Builder: Towards a Context Sensitive Text-to-3D Scene Framework

  • Conference paper
Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management (HCII 2023)

Abstract

We introduce Semantic Scene Builder (SeSB), a VR-based text-to-3D scene framework that uses SemAF (Semantic Annotation Framework) as a scheme for annotating discourse structures. SeSB integrates a variety of tools and resources through SemAF and UIMA as a unified data structure to generate 3D scenes from textual descriptions. Based on VR, SeSB allows its users to change annotations through body movements instead of symbolic manipulations: from annotations in texts to corrections in editing steps to adjustments in generated scenes, all of this is done by grabbing and moving objects. We evaluate SeSB against a state-of-the-art open-source text-to-scene method (the only one that is publicly available) and find that our approach not only performs better but also allows a greater variety of scenes to be modeled.
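The abstract describes a pipeline that turns textual spatial descriptions into annotated relations and then into object placements. The Python sketch below is purely illustrative and is not SeSB's implementation: the function names (`extract_relation`, `place_objects`) and the toy pattern matcher are hypothetical, whereas the actual framework builds on SemAF/ISO-Space annotations, UIMA data structures, and VR-based correction of the generated scene.

```python
# Toy text-to-scene sketch (hypothetical, for illustration only).
# SeSB itself uses SemAF/ISO-Space annotations and UIMA, not this matcher.

SPATIAL_PREPOSITIONS = {"on", "under", "beside"}

def extract_relation(sentence):
    """Naive matcher for sentences shaped like 'the X is <prep> the Y'."""
    tokens = sentence.lower().rstrip(".").split()
    if len(tokens) == 6 and tokens[2] == "is" and tokens[3] in SPATIAL_PREPOSITIONS:
        return tokens[1], tokens[3], tokens[5]  # (figure, relation, ground)
    return None

def place_objects(figure, relation, ground):
    """Assign toy 3D positions (x, y, z) so the relation roughly holds."""
    positions = {ground: (0.0, 0.0, 0.0)}
    if relation == "on":
        positions[figure] = (0.0, 1.0, 0.0)   # stacked above the ground object
    elif relation == "under":
        positions[figure] = (0.0, -1.0, 0.0)  # below the ground object
    else:  # beside
        positions[figure] = (1.0, 0.0, 0.0)   # next to the ground object
    return positions

triple = extract_relation("The cup is on the table.")
scene = place_objects(*triple)
```

In SeSB the equivalent of the annotation step is produced and corrected interactively in VR, so a failed or ambiguous extraction can be fixed by grabbing and moving objects rather than by editing symbols.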


Notes

  1. https://laion.ai/.

  2. VR here stands for fully-immersive virtual reality, supported by hand tracking and head-mounted displays [73].

  3. https://dictionary.cambridge.org/dictionary/english/on.

  4. http://jamespusto.com/wp-content/uploads/2014/07/SpaceEval-guidelines.pdf.

  5. https://www.wikihow.com/.

  6. https://neo4j.com/.

  7. V-Framework, A-Framework, and ML-Framework are synonyms to comply with the guidelines for author anonymity.

  8. https://unity.com/.

  9. https://huggingface.co/spaces/dalle-mini/dalle-mini.

References

  1. Alayrac, J.B., et al.: Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198 (2022)

  2. Anderson, C.A.: Imagination and expectation: the effect of imagining. J. Personality Soc. Psychol. 4(2), 293–330 (1983)

  3. Baden-Powell, C.: Architect’s Pocket Book of Kitchen Design. Routledge (2006)

  4. Biewald, L.: Experiment tracking with Weights and Biases (2020). https://www.wandb.com/, software available from wandb.com

  5. Bisk, Y., et al.: Experience grounds language. arXiv preprint arXiv:2004.10151 (2020)

  6. Chang, A.X., et al.: Matterport3D: learning from RGB-D data in indoor environments. In: International Conference on 3D Vision (3DV) (2017)

  7. Chang, A.X., Eric, M., Savva, M., Manning, C.D.: SceneSeer: 3D scene design with natural language. arXiv preprint arXiv:1703.00050 (2017)

  8. Chang, A.X., et al.: ShapeNet: an information-rich 3D model repository. Tech. Rep. arXiv:1512.03012 [cs.GR], Stanford University – Princeton University – Toyota Technological Institute at Chicago (2015)

  9. Chang, A.X., Monroe, W., Savva, M., Potts, C., Manning, C.D.: Text to 3D scene generation with rich lexical grounding. In: Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP) (2015)

  10. Chang, A.X., Savva, M., Manning, C.D.: Interactive learning of spatial knowledge for text to 3D scene generation. In: Association for Computational Linguistics (ACL) Workshop on Interactive Language Learning, Visualization, and Interfaces (ILLVI) (2014)

  11. Chang, A.X., Savva, M., Manning, C.D.: Learning spatial knowledge for text to 3D scene generation. In: Empirical Methods in Natural Language Processing (EMNLP) (2014)

  12. Chen, K., Choy, C.B., Savva, M., Chang, A.X., Funkhouser, T., Savarese, S.: Text2Shape: generating shapes from natural language by learning joint embeddings. arXiv preprint arXiv:1803.08495 (2018)

  13. Chu, C.X., Tandon, N., Weikum, G.: Distilling task knowledge from how-to communities. In: Proceedings of the 26th International Conference on World Wide Web, pp. 805–814 (2017)

  14. Clark, H.H.: Using Language. Cambridge University Press, Cambridge (1996)

  15. Coyne, B., Bauer, D., Rambow, O.: VigNet: grounding language in graphics using frame semantics. In: Proceedings of the ACL 2011 Workshop on Relational Models of Semantics, pp. 28–36. Association for Computational Linguistics, Portland, Oregon, USA (Jun 2011). https://www.aclweb.org/anthology/W11-0905

  16. Coyne, B., Sproat, R.: WordsEye: an automatic text-to-scene conversion system. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pp. 487–496 (2001)

  17. Dayma, B., et al.: DALL·E Mini (Jul 2021). https://doi.org/10.5281/zenodo.5146400, https://github.com/borisdayma/dalle-mini

  18. Dennerlein, K.: Narratologie des Raumes. de Gruyter (2009)

  19. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol. 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota (Jun 2019). https://doi.org/10.18653/v1/N19-1423

  20. Ding, M., Zheng, W., Hong, W., Tang, J.: CogView2: faster and better text-to-image generation via hierarchical transformers. arXiv preprint arXiv:2204.14217 (2022)

  21. Dumas, B., Lalanne, D., Oviatt, S.: Multimodal interfaces: a survey of principles, models and frameworks. In: Lalanne, D., Kohlas, J. (eds.) Human Machine Interaction. LNCS, vol. 5440, pp. 3–26. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00437-7_1

  22. D’Souza, J., Ng, V.: UTD: ensemble-based spatial relation extraction. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 862–869 (2015)

  23. Eberts, M., Ulges, A.: Span-based joint entity and relation extraction with transformer pre-training. CoRR abs/1909.07755 (2019). https://arxiv.org/abs/1909.07755

  24. Etzioni, O., Fader, A., Christensen, J., Soderland, S., et al.: Open information extraction: the second generation. In: Twenty-Second International Joint Conference on Artificial Intelligence (2011)

  25. Feist, M.I., Gentner, D.: On plates, bowls, and dishes: factors in the use of English in and on. In: Proceedings of the Twentieth Annual Conference of the Cognitive Science Society, pp. 345–349. Routledge (1998)

  26. Ferrucci, D., Lally, A.: UIMA: an architectural approach to unstructured information processing in the corporate research environment. In: Natural Language Engineering, pp. 1–26 (2004)

  27. Fisher, M., Ritchie, D., Savva, M., Funkhouser, T., Hanrahan, P.: Example-based synthesis of 3D object arrangements. In: ACM SIGGRAPH Asia 2012 Papers, SIGGRAPH Asia 2012 (2012)

  28. Fu, H., et al.: 3D-FRONT: 3D furnished rooms with layouts and semantics. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10933–10942 (2021)

  29. Fu, H., et al.: 3D-FRONT: 3D furnished rooms with layouts and semantics. arXiv preprint arXiv:2011.09127 (2020)

  30. Fu, H., et al.: 3D-FUTURE: 3D furniture shape with texture. Int. J. Comput. Vision 129(12), 3313–3337 (2021)

  31. Gardner, M., et al.: AllenNLP: a deep semantic natural language processing platform (2017)

  32. Garrod, S., Pickering, M.J.: Why is conversation so easy? Trends Cogn. Sci. 8(1), 8–11 (2004)

  33. Gibney, M.J., et al.: Breakfast in human nutrition: the international breakfast research initiative. Nutrients 10(5), 559 (2018)

  34. Hassani, K., Lee, W.S.: Visualizing natural language descriptions: a survey. ACM Comput. Surv. 49(1) (Jun 2016). https://doi.org/10.1145/2932710

  35. Henlein, A., Abrami, G., Kett, A., Mehler, A.: Transfer of ISOSpace into a 3D environment for annotations and applications. In: Proceedings of the 16th Joint ACL - ISO Workshop on Interoperable Semantic Annotation, pp. 32–35. European Language Resources Association, Marseille (May 2020). https://www.aclweb.org/anthology/2020.isa-1.4

  36. Herskovits, A.: Language and Spatial Cognition, vol. 12. Cambridge University Press, Cambridge (1986)

  37. Hua, B.S., Pham, Q.H., Nguyen, D.T., Tran, M.K., Yu, L.F., Yeung, S.K.: SceneNN: a scene meshes dataset with annotations. In: International Conference on 3D Vision (3DV) (2016)

  38. Huguet Cabot, P.L., Navigli, R.: REBEL: relation extraction by end-to-end language generation. In: Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 2370–2381. Association for Computational Linguistics, Punta Cana, Dominican Republic (Nov 2021). https://aclanthology.org/2021.findings-emnlp.204

  39. Hwang, J.D., et al.: COMET-ATOMIC 2020: on symbolic and neural commonsense knowledge graphs. In: AAAI (2021)

  40. Ide, N., Pustejovsky, J. (eds.): Handbook of Linguistic Annotation. Springer, Dordrecht (2017). https://doi.org/10.1007/978-94-024-0881-2

  41. ISO: Language resource management - Semantic annotation framework (SemAF) - Part 4: Semantic roles (SemAF-SR). Standard ISO/IEC TR 24617-4:2014, International Organization for Standardization (2014). https://www.iso.org/standard/56866.html

  42. ISO: Language resource management - Semantic annotation framework (SemAF) - Part 7: Spatial information (ISO-Space). Standard ISO/IEC TR 24617-7:2014, International Organization for Standardization (2014). https://www.iso.org/standard/60779.html

  43. ISO: Language resource management - Semantic annotation framework (SemAF) - Part 7: Spatial information (ISO-Space). Standard ISO/IEC TR 24617-7:2019, International Organization for Standardization (2019). https://www.iso.org/standard/76442.html

  44. Kermani, Z.S., Liao, Z., Tan, P., Zhang, H.: Learning 3D scene synthesis from annotated RGB-D images. In: Computer Graphics Forum, vol. 35, pp. 197–206. Wiley Online Library (2016)

  45. Klie, J.C., de Castilho, R.E.: DKPro cassis - reading and writing UIMA CAS files in Python (2020)

  46. Kordjamshidi, P., Moens, M.F., van Otterlo, M.: Spatial role labeling: task definition and annotation scheme. In: Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC 2010), pp. 413–420. European Language Resources Association (ELRA) (2010)

  47. Kordjamshidi, P., Van Otterlo, M., Moens, M.F.: Spatial role labeling: towards extraction of spatial relations from natural language. ACM Trans. Speech Lang. Process. (TSLP) 8(3), 1–36 (2011)

  48. Kumar, A.A.: Semantic memory: a review of methods, models, and current challenges. Psychon. Bull. Rev. 28(1), 40–80 (2021)

  49. Li, M., et al.: GRAINS: generative recursive autoencoders for indoor scenes. ACM Trans. Graph. (TOG) 38(2), 1–16 (2019)

  50. Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740–755. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10602-1_48

  51. Loureiro, D., Camacho-Collados, J.: Don’t neglect the obvious: on the role of unambiguous words in word sense disambiguation. In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 3514–3520. Association for Computational Linguistics, Online (Nov 2020). https://doi.org/10.18653/v1/2020.emnlp-main.283

  52. Loureiro, D., Jorge, A.: Language modelling makes sense: propagating representations through WordNet for full-coverage word sense disambiguation. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5682–5691. Association for Computational Linguistics, Florence, Italy (Jul 2019). https://doi.org/10.18653/v1/P19-1569

  53. Ma, R., et al.: Language-driven synthesis of 3D scenes from scene databases. ACM Trans. Graph. (TOG) 37(6), 1–16 (2018)

  54. Mainwaring, S.D., Tversky, B., Ohgishi, M., Schiano, D.J.: Descriptions of simple spatial scenes in English and Japanese. Spat. Cogn. Comput. 3(1), 3–42 (2003)

  55. Manning, C.D., Surdeanu, M., Bauer, J., Finkel, J., Bethard, S.J., McClosky, D.: The Stanford CoreNLP natural language processing toolkit. In: Association for Computational Linguistics (ACL) System Demonstrations, pp. 55–60 (2014). https://www.aclweb.org/anthology/P/P14/P14-5010

  56. Marcus, G., Davis, E., Aaronson, S.: A very preliminary analysis of DALL·E 2. arXiv preprint arXiv:2204.13807 (2022)

  57. Marszalek, M., Laptev, I., Schmid, C.: Actions in context. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2929–2936. IEEE (2009)

  58. McClelland, J.L., Hill, F., Rudolph, M., Baldridge, J., Schütze, H.: Extending machine language models toward human-level language understanding. CoRR abs/1912.05877 (2019). https://arxiv.org/abs/1912.05877

  59. Miller, G.A.: WordNet: a lexical database for English. Commun. ACM 38(11), 39–41 (1995)

  60. Neumann, B., Möller, R.: On scene interpretation with description logics. Image Vis. Comput. 26(1), 82–101 (2008)

  61. Nichols, E., Botros, F.: SpRL-CWW: spatial relation classification with independent multi-class models. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 895–901 (2015)

  62. Oliva, A., Torralba, A.: The role of context in object recognition. Trends Cogn. Sci. 11(12), 520–527 (2007)

  63. Petrovich, M., Black, M.J., Varol, G.: TEMOS: generating diverse human motions from textual descriptions. arXiv preprint arXiv:2204.14109 (2022)

  64. Pustejovsky, J., et al.: The specification language TimeML (2005)

  65. Pustejovsky, J., Kordjamshidi, P., Moens, M.F., Levine, A., Dworman, S., Yocum, Z.: SemEval-2015 task 8: SpaceEval. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 884–894. Association for Computational Linguistics, Denver, Colorado (Jun 2015). https://doi.org/10.18653/v1/S15-2149

  66. Pustejovsky, J., Krishnaswamy, N.: VoxML: a visualization modeling language. In: Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pp. 4606–4613. European Language Resources Association (ELRA), Portorož, Slovenia (May 2016). https://aclanthology.org/L16-1730

  67. Pustejovsky, J., Moszkowicz, J.L., Verhagen, M.: ISO-Space: the annotation of spatial information in language. In: Proceedings of the Sixth Joint ISO-ACL SIGSEM Workshop on ISA, pp. 1–9 (2011)

  68. Qi, P., Zhang, Y., Zhang, Y., Bolton, J., Manning, C.D.: Stanza: a Python natural language processing toolkit for many human languages. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations (2020). https://nlp.stanford.edu/pubs/qi2020stanza.pdf

  69. Radford, A., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning, pp. 8748–8763. PMLR (2021)

  70. Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., Chen, M.: Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125 (2022)

  71. Ramesh, A., et al.: Zero-shot text-to-image generation. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 139, pp. 8821–8831. PMLR (18–24 Jul 2021)

  72. Randell, D.A., Cui, Z., Cohn, A.G.: A spatial logic based on regions and connection. In: KR 1992, pp. 165–176 (1992)

  73. Riva, G.: Virtual reality. In: Wiley Encyclopedia of Biomedical Engineering (2006)

  74. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models (2021)

  75. Sadoski, M., Goetz, E.T., Olivarez, A., Jr., Lee, S., Roberts, N.M.: Imagination in story reading: the role of imagery, verbal recall, story analysis, and processing levels. J. Reading Behav. 22(1), 55–70 (1990)

  76. Sadoski, M., Paivio, A.: Imagery and Text: A Dual Coding Theory of Reading and Writing. Routledge (2013)

  77. Saharia, C., et al.: Photorealistic text-to-image diffusion models with deep language understanding (2022). https://doi.org/10.48550/ARXIV.2205.11487

  78. Salaberri, H., Arregi, O., Zapirain, B.: IXAGroupEHUSpaceEval: (X-Space) a WordNet-based approach towards the automatic recognition of spatial information following the ISO-Space annotation scheme. In: Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pp. 856–861 (2015)

  79. Sap, M., et al.: ATOMIC: an atlas of machine commonsense for if-then reasoning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 3027–3035 (2019)

  80. Savva, M., Chang, A.X., Hanrahan, P.: Semantically-enriched 3D models for common-sense knowledge. In: CVPR 2015 Workshop on Functionality, Physics, Intentionality and Causality (2015)

  81. Settles, B.: Active learning literature survey (2009)

  82. Shi, P., Lin, J.: Simple BERT models for relation extraction and semantic role labeling. CoRR abs/1904.05255 (2019). https://arxiv.org/abs/1904.05255

  83. Shin, H.J., Park, J.Y., Yuk, D.B., Lee, J.S.: BERT-based spatial information extraction. In: Proceedings of the Third International Workshop on Spatial Language Understanding, pp. 10–17 (2020)

  84. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7576, pp. 746–760. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33715-4_54

  85. Song, S., Yu, F., Zeng, A., Chang, A.X., Savva, M., Funkhouser, T.: Semantic scene completion from a single depth image. In: Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition (2017)

  86. Speer, R., Chin, J., Havasi, C.: ConceptNet 5.5: an open multilingual graph of general knowledge (2017). https://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14972

  87. Stubbs, A.: MAE and MAI: lightweight annotation and adjudication tools. In: Proceedings of the 5th Linguistic Annotation Workshop, pp. 129–133 (2011)

  88. Tan, F., Feng, S., Ordonez, V.: Text2Scene: generating compositional scenes from textual descriptions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6710–6719 (2019)

  89. Tosi, A., Pickering, M.J., Branigan, H.P.: Speakers’ use of agency and visual context in spatial descriptions. Cognition 194, 104070 (2020)

  90. Ulinski, M., Coyne, B., Hirschberg, J.: SpatialNet: a declarative resource for spatial relations. In: Proceedings of the Combined Workshop on Spatial Language Understanding (SpLU) and Grounded Communication for Robotics (RoboNLP), pp. 61–70 (2019)

  91. Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)

  92. Wang, X., Yeshwanth, C., Nießner, M.: SceneFormer: indoor scene generation with transformers. In: 2021 International Conference on 3D Vision (3DV), pp. 106–115. IEEE (2021)

  93. Ye, D., Lin, Y., Li, P., Sun, M.: Packed levitated marker for entity and relation extraction. In: Muresan, S., Nakov, P., Villavicencio, A. (eds.) Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Vol. 1: Long Papers), ACL 2022, Dublin, Ireland, 22–27 May 2022, pp. 4904–4917. Association for Computational Linguistics (2022). https://aclanthology.org/2022.acl-long.337

  94. Zhang, H., Khashabi, D., Song, Y., Roth, D.: TransOMCS: from linguistic graphs to commonsense knowledge. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) 2020 (2020)

  95. Zhang, S.H., Zhang, S.K., Liang, Y., Hall, P.: A survey of 3D indoor scene synthesis. J. Comput. Sci. Technol. 34(3), 594–608 (2019)

  96. Zhao, X., Hu, R., Guerrero, P., Mitra, N., Komura, T.: Relationship templates for creating scene variations. ACM Trans. Graph. (TOG) 35(6), 1–13 (2016)


Author information


Corresponding author

Correspondence to Alexander Henlein.


A Appendix

Fig. 4. DALL·E Mini results for different inputs.

Table 3. IsoSpaceSpERT hyperparameters
Fig. 5. Generated example scenes from the evaluation.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Henlein, A. et al. (2023). Semantic Scene Builder: Towards a Context Sensitive Text-to-3D Scene Framework. In: Duffy, V.G. (ed.) Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management. HCII 2023. Lecture Notes in Computer Science, vol 14029. Springer, Cham. https://doi.org/10.1007/978-3-031-35748-0_32

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-35748-0_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-35747-3

  • Online ISBN: 978-3-031-35748-0

  • eBook Packages: Computer Science, Computer Science (R0)
