DOI: 10.1145/3306306.3328008 · ACM SIGGRAPH Conference Proceedings

Hands-on: rapid interactive application prototyping for media arts and performing arts in illimitable space

Published: 28 July 2019

ABSTRACT

We complement the three previous editions of this course at SIGGRAPH Asia (2015, 2016, and 2018) and the edition at SIGGRAPH (2017) by giving it a more hands-on character and by including OpenISS. We explore rapid prototyping of interactive graphical applications for the stage and beyond using Max/Jitter and Processing with OpenGL and shaders, featuring connectivity with various devices. Such a rapid prototyping environment is ideal for entertainment computing, as well as for artists and live performances that use real-time interactive graphics. We share the expertise we have developed in connecting real-time graphics with on-stage performance through the Illimitable Space System (ISS) v2 and its OpenISS core framework for creative near-real-time broadcasting, and through the use of AI and HCI techniques in art.
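The device connectivity mentioned above is commonly wired between tools such as Max/Jitter and Processing using Open Sound Control (OSC) messages over UDP. As an illustrative sketch only (the address pattern `/tracker/position`, the two float arguments, and port 12000 are assumptions for the example, not details from the course), a minimal OSC float message can be encoded and sent with nothing but the Python standard library:

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    # Address pattern, then type-tag string (',' plus one 'f' per arg),
    # each null-terminated and padded, then big-endian float32 payloads.
    msg = osc_pad(address.encode("ascii"))
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for value in floats:
        msg += struct.pack(">f", value)
    return msg

# Fire-and-forget a tracked 2D position to a sketch listening on
# UDP port 12000 (a port often used in OSC examples; adjust to taste).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
packet = osc_message("/tracker/position", 0.25, 0.75)
sock.sendto(packet, ("127.0.0.1", 12000))
```

In practice one would use an existing OSC library on both ends (e.g. oscP5 on the Processing side); hand-encoding is shown here only to make the wire format of the connectivity concrete.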

