DOI: 10.1145/3084863.3084864

Hands-on: rapid interactive application prototyping for media arts and stage performance

Published: 30 July 2017

Abstract

We complement the last two editions of this course at SIGGRAPH Asia by giving it a more hands-on character. We explore rapid prototyping of interactive graphical applications using Jitter/Max and Processing with OpenGL and shaders, featuring connectivity with various devices such as the Kinect, Wii, and iDevice-based controls, among others. Such a rapid prototyping environment is ideal for entertainment computing, as well as for artists and live performances using real-time interactive graphics. We share the expertise we have developed in connecting real-time graphics with on-stage performance through the Illimitable Space System (ISS) v2.
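The device connectivity between a Processing sketch and Max/Jitter described above typically travels over Open Sound Control (OSC), e.g. via the oscP5 library on the Processing side. As a minimal illustration of what such a library puts on the wire, the following self-contained Java sketch hand-encodes an OSC 1.0 message; the address `/iss/skeleton` and the two joint coordinates are hypothetical examples, not names from the course itself.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;

public class OscDemo {
    // Pad a string to a multiple of 4 bytes with at least one NUL,
    // as the OSC 1.0 specification requires for addresses and type tags.
    static byte[] oscString(String s) {
        int len = s.length() + 1;        // content plus terminating NUL
        int padded = (len + 3) & ~3;     // round up to a multiple of 4
        byte[] out = new byte[padded];   // trailing bytes are already 0
        System.arraycopy(s.getBytes(), 0, out, 0, s.length());
        return out;
    }

    // Encode an OSC message carrying float arguments (big-endian 32-bit,
    // which is ByteBuffer's default byte order).
    static byte[] oscMessage(String address, float... args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try {
            out.write(oscString(address));
            StringBuilder tags = new StringBuilder(",");
            for (int i = 0; i < args.length; i++) tags.append('f');
            out.write(oscString(tags.toString()));
            for (float f : args)
                out.write(ByteBuffer.allocate(4).putFloat(f).array());
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for an in-memory stream
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // 16 bytes of padded address + 4 bytes of ",ff" + 2 * 4 bytes of floats
        byte[] msg = oscMessage("/iss/skeleton", 0.5f, -1.25f);
        System.out.println(msg.length); // prints 28
    }
}
```

Sent over UDP, such a packet can be received and unpacked on the Max side with the built-in [udpreceive] object, closing the Processing-to-Jitter loop the course demonstrates.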


Cited By

  • (2020) Dataflow VFX Programming and Processing for Artists and OpenISS. ACM SIGGRAPH 2020 Labs. DOI: 10.1145/3388763.3407760, pp. 1-32. Online publication date: 17 Aug 2020.
  • (2018) Real-time motion capture for performing arts and stage. ACM SIGGRAPH 2018 Educator's Forum. DOI: 10.1145/3215641.3215642, pp. 1-2. Online publication date: 12 Aug 2018.


Published In

SIGGRAPH '17: ACM SIGGRAPH 2017 Studio
July 2017
64 pages
ISBN:9781450350099
DOI:10.1145/3084863
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. OpenGL
  2. computer graphics education
  3. human-computer interfaces
  4. illimitable space system (ISS)
  5. interaction
  6. jitter/MAX
  7. kinect
  8. processing
  9. real-time

Qualifiers

  • Abstract

Conference

SIGGRAPH '17

Acceptance Rates

Overall Acceptance Rate 1,822 of 8,601 submissions, 21%

Article Metrics

  • Downloads (last 12 months): 3
  • Downloads (last 6 weeks): 0
Reflects downloads up to 06 Jan 2025
