Research article (UIST conference proceedings)
DOI: 10.1145/2807442.2807477

Joint 5D Pen Input for Light Field Displays

Published: 05 November 2015

Abstract

Light field displays allow viewers to see view-dependent 3D content as if looking through a window; however, existing work on light field display interaction is limited. Yet, these displays have the potential to parallel 2D pen and touch screen systems, which present a joint input and display surface for natural interaction. We propose a 4D display and interaction space using a dual-purpose lenslet array, which combines light field display and light field pen sensing, and allows us to estimate the 3D position and 2D orientation of the pen. This method is simple and fast (150 Hz), with position accuracy of 2-3 mm and precision of 0.2-0.6 mm from 0-350 mm away from the lenslet array, and orientation accuracy of 2 degrees and precision of 0.2-0.3 degrees within a 45 degree field of view. Further, we 3D print the lenslet array with embedded baffles to reduce out-of-bounds cross-talk, and use an optical relay to allow interaction behind the focal plane. We demonstrate our joint display/sensing system with interactive light field painting.
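The abstract describes recovering the pen's 3D position by sensing its light through the same lenslet array used for display. As a rough illustration of one standard way such an estimate can be computed (a minimal sketch under our own assumptions, not the paper's implementation: the function name, the assumption that per-lenslet observations have already been calibrated into rays, and the toy data are ours), the pen light seen through several lenslets yields a bundle of rays whose least-squares intersection gives a position estimate:

```python
# Hypothetical sketch (not the authors' code): recover a pen light's 3D
# position by least-squares intersection of the rays that observe it through
# the lenslet array. Assumes each contributing lenslet has already been
# calibrated into a ray (origin at the lenslet, direction toward the pen).
import numpy as np

def nearest_point_to_rays(origins, directions):
    """Return the 3D point minimizing summed squared distance to all rays."""
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, di in zip(origins, d):
        P = np.eye(3) - np.outer(di, di)   # projects onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)           # solvable when the rays are not all parallel

# Toy example: three lenslets (origins in mm) observing a light at (10, 20, 100) mm.
p_true = np.array([10.0, 20.0, 100.0])
origins = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
directions = p_true - origins
print(nearest_point_to_rays(origins, directions))   # ~ [10. 20. 100.]
```

A closed-form solve of this kind is inexpensive even with many contributing lenslets, which is consistent with the update rates on the order of the 150 Hz quoted in the abstract.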




Published In

UIST '15: Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology
November 2015
686 pages
ISBN:9781450337793
DOI:10.1145/2807442


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 November 2015


Author Tags

  1. joint io
  2. light fields
  3. pen input
  4. through-the-lens sensing

Qualifiers

  • Research-article

Funding Sources

  • NSF
  • ERC

Conference

UIST '15

Acceptance Rates

UIST '15 Paper Acceptance Rate 70 of 297 submissions, 24%;
Overall Acceptance Rate 561 of 2,567 submissions, 22%

Article Metrics

  • Downloads (last 12 months): 12
  • Downloads (last 6 weeks): 0
Reflects downloads up to 17 Feb 2025


Cited By

  • (2024) OptiBasePen: Mobile Base+Pen Input on Passive Surfaces by Sensing Relative Base Motion Plus Close-Range Pen Position. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-9. DOI: 10.1145/3654777.3676467. Online publication date: 13-Oct-2024.
  • (2024) Are Multi-view Edges Incomplete for Depth Estimation? International Journal of Computer Vision 132(7), 2639-2673. DOI: 10.1007/s11263-023-01890-y. Online publication date: 12-Feb-2024.
  • (2022) Three-dimensional interactive cursor based on voxel patterns for autostereoscopic displays. Journal of Information Display 23(2), 137-150. DOI: 10.1080/15980316.2022.2029591. Online publication date: 6-Feb-2022.
  • (2019) DesignAR: Immersive 3D-Modeling Combining Augmented Reality with Interactive Displays. Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces, 29-41. DOI: 10.1145/3343055.3359718. Online publication date: 10-Nov-2019.
  • (2018) TeleHuman2. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1-10. DOI: 10.1145/3173574.3174096. Online publication date: 21-Apr-2018.
  • (2018) Widening Viewing Angles of Automultiscopic Displays Using Refractive Inserts. IEEE Transactions on Visualization and Computer Graphics 24(4), 1554-1563. DOI: 10.1109/TVCG.2018.2794599. Online publication date: 1-Apr-2018.
  • (2018) Natural and Fluid 3D Operations with Multiple Input Channels of a Digital Pen. Intelligent Computing Methodologies, 585-598. DOI: 10.1007/978-3-319-95957-3_61. Online publication date: 6-Jul-2018.
  • (2017) DodecaPen. Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 365-374. DOI: 10.1145/3126594.3126664. Online publication date: 20-Oct-2017.
  • (2017) Analyzing Interfaces and Workflows for Light Field Editing. IEEE Journal of Selected Topics in Signal Processing 11(7), 1162-1172. DOI: 10.1109/JSTSP.2017.2746263. Online publication date: Oct-2017.
  • (2016) AnyLight. Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, 39-48. DOI: 10.1145/2992154.2992188. Online publication date: 6-Nov-2016.
