Abstract:
Cutting-edge telepresence systems equipped with multiple cameras that capture the whole scene of a collaboration space face the challenge of transmitting huge amounts of dynamic data from multiple viewpoints. With the introduction of Light Field Displays (LFDs) into the remote collaboration space, it became possible to produce an impression of 3D virtual presence. Current-generation LFDs also rely on images obtained from cameras arranged in various spatial configurations. To achieve realistic and natural 3D collaboration using LFDs, the data in the form of multiple camera images must be transmitted in real time over the available bandwidth. Classical compression methods can mitigate this issue to a certain level; however, in many cases the achieved compression is by far insufficient. Moreover, the available compression schemes do not consider any display-related attributes. Here, we propose a method that reduces the data from each camera image by discarding unused parts of the images at the acquisition site in a predetermined way, using the display model and geometry as well as the mapping between the captured and displayed light field. The proposed method is simple to implement and excludes the unnecessary data automatically. While similar methods exist for 2D screens and display walls, this is the first such algorithm for light fields. Our experimental results show that the reduced data set yields a light field reconstruction identical to the one we would have obtained if all the data were transmitted. Moreover, the devised method provides very good processing speed.
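The core idea described above — precomputing, from the display model and the captured-to-displayed light field mapping, which camera pixels are ever used, and cropping each camera image to that region before transmission — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`used_pixel_mask`, `crop_to_used`) and the representation of the mapping as a list of `(u, v)` pixel coordinates hit by display rays are assumptions for the example.

```python
def used_pixel_mask(cam_w, cam_h, ray_to_pixel):
    """Mark camera pixels that some display ray maps to.

    ray_to_pixel: iterable of (u, v) pixel coordinates, one per display
    ray that samples this camera (hypothetical mapping representation).
    This is precomputed once per camera from the display geometry.
    """
    mask = [[False] * cam_w for _ in range(cam_h)]
    for u, v in ray_to_pixel:
        if 0 <= u < cam_w and 0 <= v < cam_h:
            mask[v][u] = True
    return mask


def crop_to_used(image, mask):
    """Crop the image to the bounding box of used pixels.

    Everything outside the box is discarded at the acquisition site,
    so only the region needed for light field reconstruction is sent.
    """
    used = [(y, x) for y, row in enumerate(mask)
            for x, m in enumerate(row) if m]
    if not used:
        return []  # no display ray uses this camera at all
    ys = [y for y, _ in used]
    xs = [x for _, x in used]
    return [row[min(xs):max(xs) + 1]
            for row in image[min(ys):max(ys) + 1]]
```

Because the mask depends only on the fixed display geometry and camera arrangement, it is computed once offline; at run time each frame is merely sliced, which is consistent with the very good processing speed reported.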
Published in: 2013 3DTV Vision Beyond Depth (3DTV-CON)
Date of Conference: 07-08 October 2013
Date Added to IEEE Xplore: 02 December 2013
Electronic ISBN: 978-1-4799-1369-5