ABSTRACT
We complement the previous four editions of this course at SIGGRAPH Asia (2015, 2016, 2018) and SIGGRAPH (2017) by making it more hands-on in nature and by including OpenISS. We explore rapid prototyping of interactive graphical applications for the stage and beyond using Jitter/Max and Processing with OpenGL, shaders, and connectivity with various devices. Such a rapid prototyping environment is ideal for entertainment computing, as well as for artists and live performances that use real-time interactive graphics. We share the expertise we have developed in connecting real-time graphics with on-stage performance through the Illimitable Space System (ISS) v2 and its OpenISS core framework for creative near-real-time broadcasting, and through the use of AI and HCI techniques in art.
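In this kind of prototyping pipeline, devices and applications commonly exchange control data with Max/Jitter and Processing over Open Sound Control (OSC), e.g. via the oscP5 library listed in the references. As a minimal sketch of what travels on the wire, the following encodes an OSC 1.0 message by hand and sends it over UDP; the address `/iss/skeleton` and port 7400 are hypothetical illustrations, not taken from the course materials.

```python
import socket
import struct

def osc_message(address, *args):
    """Encode a minimal OSC 1.0 message: address pattern, type-tag string,
    then arguments. Strings are null-terminated and padded to 4-byte
    boundaries; numbers are big-endian."""
    def pad(b):
        # OSC strings get at least one null and end on a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode("ascii"))
    typetags = ","          # type-tag string always starts with a comma
    payload = b""
    for a in args:
        if isinstance(a, float):
            typetags += "f"
            payload += struct.pack(">f", a)   # 32-bit big-endian float
        elif isinstance(a, int):
            typetags += "i"
            payload += struct.pack(">i", a)   # 32-bit big-endian int
        else:
            typetags += "s"
            payload += pad(str(a).encode("ascii"))
    return msg + pad(typetags.encode("ascii")) + payload

# Hypothetical example: a Max patch listening with [udpreceive 7400]
# could route this with [OSC-route /iss/skeleton].
packet = osc_message("/iss/skeleton", 1, 0.5, 0.25)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 7400))
sock.close()
```

The same bytes can be produced by oscP5 in Processing or consumed by Pure Data and Max; hand-rolling the packet once makes the 4-byte padding and type-tag conventions easy to debug with a network sniffer.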
- Edward A. Ashcroft, Anthony A. Faustini, Rangaswamy Jagannathan, and William W. Wadge. 1995. Multidimensional Programming. Oxford University Press, London. ISBN: 978-0195075977.
- Edward A. Ashcroft and William W. Wadge. 1977. Lucid, a nonprocedural language with iteration. Commun. ACM 20, 7 (July 1977), 519--526.
- Sebouh-Steve Bardakjian, Miao Song, Serguei A. Mokhov, and Sudhir P. Mudur. 2016. ISSv3: From Human Motion in the Real to the Interactive Documentary Film in AR/VR. In Proceedings of the SIGGRAPH ASIA 2016 Workshop on Virtual Reality Meets Physical Reality (VR Meets PR 2016). ACM, New York, NY, USA.
- Greg Borenstein. 2013. OpenCV for Processing. [online]. (July 2013). https://github.com/atduskgreg/opencv-processing.
- Niels Böttcher. 2007--2013. An introduction to Max/MSP. [online], Medialogy, Aalborg University Copenhagen. (2007--2013). http://imi.aau.dk/~nib/maxmsp/introduction_to_MaxMsp.ppt.
- Tom Butterworth and Anton Marini. 2013. Syphon for Jitter. [online]. (Nov. 2013). https://github.com/Syphon/Jitter/releases/.
- Craig Caldwell. 2015. Bringing Story to Life: For Programmers, Animators, VFX Artists, and Interactive Designers. In ACM SIGGRAPH 2015 Courses (SIGGRAPH'15). ACM, New York, NY, USA, 6:1--6:10.
- Andres Colubri. 2014. Syphon for Processing. [online]. (2014). https://github.com/Syphon/Processing/releases.
- Cycling '74. 2005--2015. Max/MSP/Jitter. [online]. (2005--2015). http://cycling74.com/products/max/.
- Peter Elsea. 2007--2013. Max/MSP/Jitter Tutorials. [online], University of California, Santa Cruz. (2007--2013). ftp://arts.ucsc.edu/pub/ems/MaxTutors/Jit.tutorials/.
- Ben Fry and Casey Reas. 2001--2015. Processing - a programming language, development environment, and online community. [online]. (2001--2015). http://www.processing.org/.
- Google LLC. 2017--2018. Google Brain Team: Machine Learning Algorithms. [online]. (2017--2018). https://magenta.tensorflow.org/.
- Peter Grogono. 2002. Getting Started with OpenGL. [online]. (2002). Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada.
- Intel Corporation, Willow Garage, and Itseez. 2000--2018. Itseez: Image Processing Algorithms. [online]. (2000--2018). https://opencv.org/.
- Joris and The Resolume Team. 2014. Resolume Arena Blog: Spout - Sharing Video between Applications on Windows. [online]. (May 2014). http://resolume.com/blog/11110/spout-sharing-video-between-applications-on-windows.
- Gene Kogan. 2014. Kinect Projector Toolkit for image mapping and calibration. [online, GitHub]. (July 2014). https://github.com/genekogan/KinectProjectorToolkit.
- Joseph J. LaViola, Jr. 2015. Context Aware 3D Gesture Recognition for Games and Virtual Reality. In ACM SIGGRAPH 2015 Courses (SIGGRAPH'15). ACM, New York, NY, USA, 10:1--10:61.
- Hao Li, Anshuman Das, Tristan Swedish, Hyunsung Park, and Ramesh Raskar. 2015. Modeling and Capturing the Human Body: For Rendering, Health and Visualization. In ACM SIGGRAPH 2015 Courses (SIGGRAPH'15). ACM, New York, NY, USA, 16:1--16:160.
- V. J. Manzo. 2011. Max/MSP/Jitter for Music: A Practical Guide to Developing Interactive Music Systems for Education and More. Oxford University Press.
- Microsoft. 2012a. Human Interface Guidelines: Kinect for Windows v. 1.5. [online]. (2012). http://go.microsoft.com/fwlink/?LinkId=247735.
- Microsoft. 2012b. The Kinect for Windows SDK v. 1.5. [online]. (21 May 2012). http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx and http://msdn.microsoft.com/en-us/library/hh855347.
- Serguei A. Mokhov, Miao Song, Satish Chilkaka, Zinia Das, Jie Zhang, Jonathan Llewellyn, and Sudhir P. Mudur. 2016. Agile Forward-Reverse Requirements Elicitation as a Creative Design Process: A Case Study of Illimitable Space System v2. Journal of Integrated Design and Process Science 20, 3 (Sept. 2016), 3--37.
- Serguei A. Mokhov, Kin-Fung Yiu, Brian Ye, Jie Zhang, Haotao Lai, and Miao Song. 2017. Real-time Motion Capture for Performing Arts and Stage. [online], TEDxConcordia. (Sept. 2017). https://www.youtube.com/watch?v=YgwnEmHFwI8.
- R. Molich and Jakob Nielsen. 1990. Improving a human-computer dialogue. Commun. ACM 33, 3 (March 1990), 338--348.
- OpenKinect Contributors. 2011--2018. OpenKinect: Open Source Drivers for Kinect v1. [online]. (2011--2018). http://openkinect.org.
- Jean-Marc Pelletier. 2012. jit.freenect.grab - a Max/MSP/Jitter external for Microsoft Kinect. [online]. (7 March 2012). RC5, http://jmpelletier.com/freenect/.
- Bill Polson. 2015. Pipeline Design Patterns. In ACM SIGGRAPH 2015 Courses (SIGGRAPH'15). ACM, New York, NY, USA, 21:1--21:59.
- Konstantinos Psimoulis, Paul Palmieri, Inna Taushanova-Atanasova, Yasmine Chiter, Amjrali Shirkhodaei, Navid Golabian, Mohammad-Ali Eghtesadi, Behrooz Hedayati, Piratheeban Annamalai, and Andrew Laramee. 2018. OpenISS Web Services API Implementation for OpenISS-as-a-Service. [online], SOEN487 Team 10 and Team 11, Serguei Mokhov. (April 2018). https://github.com/OpenISS/OpenISS/tree/master/src/api/java.
- Miller Puckette and PD Community. 2007--2014. Pure Data. [online]. (2007--2014). http://puredata.org.
- Theresa-Marie Rhyne. 2015. Applying Color Theory to Digital Media and Visualization. In ACM SIGGRAPH 2015 Courses (SIGGRAPH'15). ACM, New York, NY, USA, 5:1--5:112.
- Christian Richardt, James Tompkin, Jiamin Bai, and Christian Theobalt. 2015. User-centric Computational Videography. In ACM SIGGRAPH 2015 Courses (SIGGRAPH'15). ACM, New York, NY, USA, 25:1--25:6.
- Yvonne Rogers, Helen Sharp, and Jenny Preece. 2011. Interaction Design: Beyond Human-Computer Interaction (3rd ed.). Wiley Publishing. Online resources: id-book.com.
- Andreas Schlegel. 2011. oscP5 - An implementation of the OSC protocol for Processing. [online]. (2011). http://www.sojamo.de/libraries/oscP5/.
- Miao Song. 2012. Computer-Assisted Interactive Documentary and Performance Arts in Illimitable Space. Ph.D. Dissertation. Special Individualized Program/Computer Science and Software Engineering, Concordia University, Montreal, Canada. Online at http://spectrum.library.concordia.ca/975072 and http://arxiv.org/abs/1212.6250.
- Miao Song et al. 2014a. Real-Time Motion-Based Shadow and Green Screen Visualization, and Video Feedback for the Like Shadows Theatre Performance with the ISS. [theatre production, video, news]. (2--12 April 2014). http://www.concordia.ca/encs/cunews/main/stories/2014/06/04/digital-art-thatillustratesthelandofthelivingandthedead.html and http://www.concordia.ca/content/dam/encs/csse/news/docs/like-shadows-cse-academy.pdf.
- Miao Song and Serguei A. Mokhov. 2014. Dynamic Motion-Based Background Visualization for the Ascension Dance with the ISS. [dance show, video]. (18--19 Jan. 2014). http://vimeo.com/85049604.
- Miao Song, Serguei A. Mokhov, et al. 2015b. Illimitable Space System at CG in Asia International Resources. Talk and Demo. (10 Aug. 2015). http://s2015.siggraph.org/attendees/acm-siggraph-theater-events.
- Miao Song, Serguei A. Mokhov, Julie Chaffarod, et al. 2015a. Dynamic Motion-Based Visualization for the District 3 Demo Day with the ISSv2 and Processing. [demo, video]. (4 June 2015). https://vimeo.com/130122925 and https://vimeo.com/129692753.
- Miao Song, Serguei A. Mokhov, and Peter Grogono. 2014b. A Brief Technical Note on Haptic Jellyfish with Falcon and OpenGL. In Proceedings of the CHI'14 Extended Abstracts: ACM SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1525--1530. Includes video and poster.
- Miao Song, Serguei A. Mokhov, Peter Grogono, and Sudhir P. Mudur. 2014a. Illimitable Space System as a Multimodal Interactive Artists' Toolbox for Real-time Performance. In Proceedings of the SIGGRAPH ASIA 2014 Workshop on Designing Tools for Crafting Interactive Artifacts (SIGGRAPH ASIA'14). ACM, New York, NY, USA, 2:1--2:4.
- Miao Song, Serguei A. Mokhov, Peter Grogono, and Sudhir P. Mudur. 2014b. On a Non-Web-Based Multimodal Interactive Documentary Production. In Proceedings of the 2014 International Conference on Virtual Systems and Multimedia (VSMM'2014), Harold Thwaites, Sarah Kenderdine, and Jeffrey Shaw (Eds.). IEEE, 329--336.
- Miao Song, Serguei A. Mokhov, Alison R. Loader, and Maureen J. Simmonds. 2009. A Stereoscopic OpenGL-based Interactive Plug-in Framework for Maya and Beyond. In Proceedings of VRCAI'09. ACM, New York, NY, USA, 363--368.
- Miao Song, Serguei A. Mokhov, Sudhir P. Mudur, and Peter Grogono. 2015. Rapid Interactive Real-time Application Prototyping for Media Arts and Stage Performance. In ACM SIGGRAPH Asia 2015 Courses (SIGGRAPH Asia'15). ACM, New York, NY, USA, 14:1--14:11.
- Miao Song, Serguei A. Mokhov, Sudhir P. Mudur, and Peter Grogono. 2016. Hands-on: Rapid Interactive Application Prototyping for Media Arts and Stage Production. In ACM SIGGRAPH Asia 2016 Courses (SIGGRAPH Asia'16). ACM, New York, NY, USA, 19:1--19:29.
- Miao Song, Serguei A. Mokhov, Jilson Thomas, et al. 2015b. Dynamic Motion-Based Background Visualization for the Gray Zone Dance with the ISSv2. [dance show, video]. (14 Feb. 2015). https://vimeo.com/121177927.
- Miao Song, Serguei A. Mokhov, Jilson Thomas, and Sudhir P. Mudur. 2015a. Applications of the Illimitable Space System in the Context of Media Technology and On-Stage Performance: a Collaborative Interdisciplinary Experience. In Proceedings of GEM'15. IEEE. To appear.
- Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey Minocha. 2005. User Interface Design and Evaluation (1st ed.). Wiley Publishing.
- Marian F. Ursu, Vilmos Zsombori, John Wyver, Lucie Conrad, Ian Kegel, and Doug Williams. 2009. Interactive Documentaries: A Golden Age. Comput. Entertain. 7, 3, Article 41 (Sept. 2009), 29 pages.
- William W. Wadge and Edward A. Ashcroft. 1985. Lucid, the Dataflow Programming Language. Academic Press, London.
- Todd Winkler. 2001. Composing Interactive Music: Techniques and Ideas Using Max. MIT Press.
- Jie Zhang, Sebouh Bardakjian, Milin Li, Miao Song, Serguei A. Mokhov, Sudhir P. Mudur, and Jean-Claude Bustros. 2015. Towards Historical Exploration of Sites With an Augmented Reality Interactive Documentary Prototype App. In Proceedings of Appy Hour, SIGGRAPH'2015. ACM.
Index Terms
- Hands-on: rapid interactive application prototyping for media arts and performing arts in illimitable space