Abstract
In studies of virtual image reconstruction in integral imaging, increasing the resolution and broadening the display range of the reconstructed image are major concerns. The resolution of the reconstructed image can be improved by a reconstruction technique based on a lens array model. However, this technique forces us to discard some pixels of the reconstructed image; otherwise, the display quality suffers from stripe distortions. We propose a novel reconstruction technique that explicitly analyzes the distortion and uses matching blocks to replace the distortion areas. The display range is greatly broadened and the display quality is improved, thus improving the integral imaging display.
1 Introduction
Integral imaging has attracted attention as a new type of true 3D display technique because of its excellent display quality, glasses-free viewing, and freedom from visual fatigue [1,2,3]. Studies on integral imaging have focused on the acquisition of the elementary image array [4,5,6,7,8,9,10], the reconstruction of displayed images [11, 12], and applications in display devices [13]. Integral imaging can be of three main types: total optical integral imaging, computer-assisted integral imaging, and virtual reconstruction integral imaging. In virtual reconstruction integral imaging, we simulate an optical lens array and a charge-coupled device (CCD) to pick up an elementary image array, and then reconstruct virtual images from this array. Through this procedure, we can obtain images at different depths or from different viewing points. In recent years, ray-tracing techniques [14, 15] have been widely used for virtual reconstruction owing to their excellent reconstruction quality. However, we can only trace the light ray that passes through the center of each microlens and obtain the single pixel it produces. Thus, the resolution of the reconstructed image remains very low, at most equal to the number of microlenses in the array.
Therefore, many studies have tried to improve the resolution of the reconstructed image. One study [16] proposed a novel reconstruction technique based on a lens array model that is similar to the traditional ray-tracing technique. Instead of picking up one pixel from each elementary image, a square block of a certain size that contains the accurate pixel is picked up from each elementary image to constitute a reconstructed image for each viewing point. Thus, the resolution is increased by a factor equal to the side length of the square block. However, because of stray-light interference during pickup and the need to discard the distortion area, the quality of the reconstructed image is greatly degraded. Therefore, eliminating stray light and correcting the distortion area remain major unsolved problems in virtual reconstruction.
Instead of using optical devices such as a lens array and CCD for pickup, we use the MAYA software to create a virtual camera array to simulate an actual picking-up system by properly adapting the parameters of virtual cameras. In other words, we use a virtual picking-up process instead of an optical one to effectively eliminate the interference of stray light, as shown in Fig. 1. After we eliminate the interference caused by stray light, we analyze how stripe distortion is produced, and we propose a method to find the matching block for the distortion area. Then, we substitute the distortion area with its matching block rather than directly discarding this area; doing so helps greatly expand the display range of the reconstructed image.
2 Simulating Optical Lens Array Using Virtual Camera Array
We develop a virtual picking-up system in MAYA to eliminate stray light, so that each elementary image contains only the light arriving through its own (nearest) microlens, as shown in Fig. 1(a). To set the parameters of the virtual camera array correctly, it is necessary to first understand the picking-up procedure.
Figure 1(b) shows a part of the picking-up system, which is itself part of a complete optical system established according to the Gaussian lens formula. In the figure, we define h as the object distance, g as the image distance, and p as the pitch between adjacent microlenses, which also equals the size of each microlens. To use a virtual camera array in place of the optical picking-up system, we must keep the viewing angle in the virtual setup identical to that of the optical system; we do so by altering the horizontal and vertical aperture values of each virtual camera while keeping its focal length equal to that of the lens array. Altering the aperture value changes the size of the camera's negative (film back), which guarantees that the shooting range of each camera is the same as that of its matching microlens, as shown in Eq. (1).
In Eq. (1), θ is the horizontal or vertical viewing angle and d is the focal length of the virtual camera. In the experiment, we use a 50 × 50 camera array to simulate a 50 × 50 lens array. Figure 2(a) shows the setup of the picking-up system in MAYA, and Fig. 2(b) shows the resulting elementary image array of size 10000 × 10000. In the virtual environment, we can both eliminate the interference caused by stray light and adjust the resolution of each elementary image flexibly, thus obtaining a better image than the low-resolution one produced by an optical system. We set the resolution of each elementary image to 200 × 200, which would be difficult to achieve with an optical picking-up system, giving the 10000 × 10000 elementary image array.
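Since Eq. (1) is not reproduced in this text, the viewing-angle match can be sketched under a standard pinhole assumption: the half-angle of a microlens of pitch p at focal length d satisfies tan(θ/2) = p/(2d), and the virtual camera's film back is sized to give the same field of view. The function name and the exact form of the relation are our assumptions.

```python
import math

def matched_viewing_angle(pitch_mm: float, focal_mm: float) -> float:
    """Viewing angle of a microlens of size `pitch_mm` at focal length
    `focal_mm`, assuming tan(theta/2) = p / (2d); the virtual camera's
    film back (negative) is chosen so its field of view equals this."""
    return 2.0 * math.atan(pitch_mm / (2.0 * focal_mm))

# Experiment parameters: 1 mm pitch, 3 mm focal length.
theta = matched_viewing_angle(1.0, 3.0)
print(f"matched viewing angle: {math.degrees(theta):.1f} deg")
# → matched viewing angle: 18.9 deg
```

With these values the camera's film back must subtend about 18.9° at the focal plane, which is what the aperture setting in Sect. 4 achieves.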
Then, we apply the method proposed in [16] to reconstruct images from different viewing points. Figure 3 shows the reconstructed image whose viewing point is at the top left corner. We can clearly identify a striped distortion area, which is circled in the figure. Traditional image filters cannot fully remove this error, so previous work has had to discard the distortion area, which decreases the display range: the building in the bottom right corner of Fig. 3 would be entirely lost if we discarded the circled area. However, Fig. 3 also shows that the distortion area is not a total display error, indicating that these areas can be corrected instead of discarded. Having eliminated the interference of stray light, we conclude from further analysis that the distortion arises from improper pixel extraction; we can fix it by finding the correct pixel blocks to substitute for the distortion area, thereby improving the image quality.
3 Analysis of Distortion
We need to analyze the origin of the distortion before we correct it. Figure 4 shows the principle of the method proposed in [16]. We call the areas extracted from the elementary image array, such as ba, cb, and dc, extraction blocks. In the reconstruction process, the extraction blocks ba and cb, generated from AB and BC respectively, each lie within a single elementary image because these areas are close to the viewing point O. By contrast, the extraction block dc spans two elementary images because it is far from the viewing point. Because this block is not generated from a single elementary image, the object picked up in this area is discontinuous, which generates the distortion.
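The block-extraction reconstruction can be sketched as follows. The offsets used here are our reading of the method, consistent with Sect. 4, where the block interval is a + Δ pixels; the function and parameter names are hypothetical.

```python
import numpy as np

def reconstruct(eia: np.ndarray, a: int, s: int, m: int, n: int, delta: int) -> np.ndarray:
    """Sketch of the block-extraction reconstruction of [16] (our reading).

    eia    elementary image array, shape (N*a, N*a) for an N x N lens array
    a      elementary image size; s: extraction block size
    (m, n) coordinate of the first extracted pixel
    delta  non-periodic extraction distance"""
    N = eia.shape[0] // a
    out = np.zeros((N * s, N * s), dtype=eia.dtype)
    for i in range(N):
        for j in range(N):
            # Block (i, j) is taken at offset (m + i*delta, n + j*delta)
            # inside elementary image (i, j). Far from the viewing point
            # the window can spill into the neighboring elementary image;
            # that spill is exactly the stripe distortion of Fig. 3.
            y = i * a + m + i * delta
            x = j * a + n + j * delta
            block = eia[y:y + s, x:x + s]
            out[i * s:i * s + block.shape[0],
                j * s:j * s + block.shape[1]] = block
    return out
```

The resolution of the output is (N·s) × (N·s) instead of the N × N of plain ray tracing, which is the factor-of-s gain described above.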
As shown in Fig. 4, the correct extraction area is area 1. Area 2 is discontinuous with area 1, which causes the distortion. If we substitute area 2 with its matching area to make extraction block dc continuous, we can eliminate the distortion and expand the display range of the reconstructed image. Therefore, our next step is to locate the distortion area and search for its matching block. We set the viewing point on the optical axis of the first camera at the top left corner, as shown in Fig. 5; we use virtual reconstruction, rather than building an optical reconstruction system, to make the position of the viewing point explicit. We classify the distortion into three types.
3.1 Distortion in Bottom Left Corner
First, we analyze the distortion at the bottom left corner, as shown in Fig. 6. The reconstructed image is shown on the left-hand side of Fig. 6. M and N are extraction blocks in the same column that contain the distortion area. The colored area is the distortion area, and it grows wider toward the bottom of the image; if we simply discarded every extraction block containing a colored area, the display range of the reconstructed image would shrink. We magnify this area at the top of Fig. 6 for a clearer analysis. From the reconstructed image, it can be inferred that the plane light arrays AA' and BB' match AA'' and BB'', respectively. The matching block 2' of distortion area 2 is found in the upper elementary image; distortion area 1 and its matching block 1' are found in the same way. We use the matching block to replace the distortion area during image reconstruction, correcting the distortion in the bottom left corner. Suppose the coordinate of the first extracted pixel is (m, n) at the top left corner of the elementary image array, the size of each elementary image is a × a, and the size of the extraction block is s × s. We then define the parameter u in Eq. (2):
where i is the rank number of the extraction block in the vertical direction, l is the distance between the viewing point and the reconstruction plane, and Δ is the non-periodic extraction distance introduced in [16]. When i is small, u < 0 and there is no distortion in the extraction block; as i increases, the extraction block moves farther from the viewing point, so u eventually becomes positive and distortion emerges in the block. From this calculation, we obtain the size of the distortion in the extraction block: u × s.
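Eq. (2) itself is not reproduced in this text. One plausible reading, stated here purely as an assumption, treats u as the overflow of extraction block i past the bottom edge of its elementary image; this grows by Δ per block once it turns positive, matching the widening stripes of Fig. 6, though the paper's exact onset also involves l.

```python
def distortion_width(i: int, a: int, s: int, m: int, delta: int) -> int:
    """Assumed width u (in pixels) of the distorted part of extraction
    block i (vertical rank): the number of pixels by which the window
    starting at m + i*delta, of height s, spills past the elementary
    image boundary a. This is our reconstruction, not Eq. (2) verbatim."""
    return max(0, m + i * delta + s - a)

# With the Sect. 4 values (a=200, s=31, m=84, delta=2), the width grows
# by delta = 2 pixels per extraction block after the overflow begins.
```

Once u is known for each block rank, the distorted strip of each block can be located and scheduled for replacement.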
With the above method, we can determine the position and size of the distortion area. One more condition must be considered. As shown in Fig. 6, we search for the matching block in the nearest elementary image to minimize the deviation in the reconstructed image. This means the system parameters should be set so that the matching block does not span two elementary images; if it does, we instead search for the matching block in the second-nearest elementary image, using the same searching method as before.
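The substitution step can be sketched as follows. We assume the matching rows for a bottom-edge overflow sit one elementary image (a rows) higher at the same horizontal position, per Fig. 6; the paper's exact search offset follows its Eq. (2) analysis, which is not reproduced here, so the offsets below are illustrative assumptions.

```python
import numpy as np

def patch_bottom_distortion(eia, a, s, m, n, delta, i, j, u):
    """Return extraction block (i, j) with its u distorted bottom rows
    replaced by matching rows taken one elementary image up (sketch)."""
    y = i * a + m + i * delta
    x = j * a + n + j * delta
    block = eia[y:y + s, x:x + s].copy()
    if u > 0:
        # The bottom u rows of the window lie in the elementary image
        # below; the assumed matching content sits a rows higher, i.e.,
        # back inside the block's own elementary image.
        src = y + s - u - a
        block[s - u:, :] = eia[src:src + u, x:x + s]
    return block
```

In the full method this replacement is applied to every block whose u (or o, for horizontal overflow) is positive, instead of discarding those blocks.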
3.2 Distortion in Top Right Corner
Figure 7 shows the distortion in the top right corner. The reconstructed image is on the left-hand side of the figure, and extraction blocks M and N contain colored distortion areas; the distortion grows wider toward the right-hand side of the image. We again magnify this area to analyze the distortion. Based on the reconstruction plane, we conclude that the light arrays AA' and BB' match AA'' and BB'', respectively. For distortion area 2, the matching block, marked 2', is found in the elementary image to the left; the other matching blocks are found in the same way. We again replace each distortion area with its matching block when reconstructing the image, improving the display quality of the top right corner. We define the parameter o in Eq. (4):
where j is the rank number of the extraction block in the horizontal direction. When j is small, o < 0 and there is no distortion in the extraction block; as j increases, the extraction block moves away from the viewing point, so o eventually becomes positive and distortion emerges. From this calculation, we obtain the size of the distortion in the extraction block: s × o. By this method, we obtain the position and size of the distortion area.
3.3 Distortion in Bottom Right Corner
This distortion area can be divided into three parts, marked 1, 2, and 3 in Fig. 8. The matching blocks for distortion areas 1 and 2 are found in the upper and left elementary images, respectively. Distortion area 3 is a special case: as shown in Fig. 8, its nominal matching block 3' itself lies within a distortion area, so we must instead search for the matching block of area 3 in the elementary image diagonally above and to the left. Continuing with the parameters defined above, we conclude that as i and j increase, the distortion area eventually emerges. The size of distortion area 1 is u × (s − o), that of distortion area 2 is (s − u) × o, and that of distortion area 3 is u × o.
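These three region sizes come directly from the text; a quick check confirms that they exactly tile the L-shaped strip left after removing the undistorted (s − u) × (s − o) sub-block from the s × s extraction block. The function name is ours.

```python
def corner_distortion_parts(s: int, u: int, o: int):
    """Areas (in pixels) of the three bottom-right-corner distortion
    regions of Sect. 3.3: area 1 is u x (s - o), area 2 is (s - u) x o,
    area 3 is u x o. Their sum equals the full block minus its
    undistorted sub-block, so the tiling is exact."""
    parts = (u * (s - o), (s - u) * o, u * o)
    assert sum(parts) == s * s - (s - u) * (s - o)
    return parts
```

For example, with s = 31, u = 4, o = 6 (hypothetical values), the three areas are 100, 162, and 24 pixels.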
Thus far, we have corrected the distortion area. When the viewing points are at other positions, the analysis and substitution can be performed in the same way. By using this method, we can reconstruct images based on different viewing points with a larger display range.
4 Experiment Result
The virtual optical system is set according to Eq. (5); we set the distance between the viewing point at the top left corner and the elementary image array as 3.5 mm + 21 mm + 329 mm = 353.5 mm. Therefore, the distance l between the viewing point and the reconstructed image plane is 329 mm. According to Eq. (1), we set the virtual cameras' aperture in MAYA to 0.034 × 0.034, which means the size of each camera's negative is 1 mm × 1 mm. Furthermore, we set the focal length to 3 mm, the distance between adjacent cameras to 1 mm, and the size of the camera array to 50 × 50. This camera array simulates a 50 × 50 lens array whose focal length is 3 mm and whose microlens pitch is 1 mm. We set the resolution of each elementary image to 200 × 200 to obtain an elementary image array whose resolution is 10000 × 10000, as shown in Fig. 2.
In the reconstruction procedure, with the pixel as the unit, we set the size of the extraction block to 31 × 31 pixels and the coordinate of the first pixel in the elementary image array to (84, 84). According to [16], the interval of the extraction block is 202 pixels, which means that Δ = 202 − 200 = 2. From the above calculation, we conclude that distortion emerges after the 45th extraction block in both the vertical and the horizontal directions, and the width of the distortion area then increases by 2, 4, 6, 8, … pixels. Accordingly, we search for the matching block of each distortion area and replace the distortion area with its matching block during reconstruction. Figure 9 compares the reconstructed image before and after correction.
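The Δ value follows directly from the Sect. 4 parameters; the onset rank can also be estimated under a simple overflow assumption (ours, not the paper's Eq. (2), which is not reproduced here and evidently carries additional terms, e.g. in l, since the paper reports onset at the 45th block).

```python
# Sect. 4 parameters.
a, s, m = 200, 31, 84          # elementary image size, block size, first pixel
interval = 202                 # extraction-block interval from [16]
delta = interval - a           # non-periodic extraction distance

# First block whose extraction window would leave its elementary image,
# under a simple overflow reading (an assumption for illustration only).
onset = next(i for i in range(200) if m + i * delta + s > a)
print(delta, onset)            # → 2 43
```

The width then grows by Δ = 2 pixels per block, consistent with the 2, 4, 6, 8, … progression reported above.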
Figure 9 shows the evident improvement in the reconstructed image and the expansion of the display range, which is important for the overall display quality. According to Eq. (6), the display range of the reconstructed image is improved by 29.1%. Figures 10, 11 and 12 compare reconstructed images for three other viewing points, produced by the same principle as before. Although some evident errors remain in the corrected reconstructed images, the overall display quality is greatly improved and the display range is expanded.
The above figures show the marked improvement in display range and display quality, especially where the distortion area is evident to viewers. However, this method still has some limitations: the computational cost is high, the method is somewhat complicated to implement in a program, the procedure is time-consuming, and the viewing angle remains narrow and needs to be optimized.
5 Summary
We devise a virtual picking-up system in which a virtual camera array simulates an optical picking-up system, and we expand the display range of the reconstructed image by analyzing how the distortion area is generated and substituting it with its matching block. In this way, we improve the display quality of the reconstructed image and increase the usage efficiency of the elementary image array. In a practical integral imaging application, the viewer's viewing position could be tracked with sensor equipment; the elementary image array could then be corrected to avoid the generation of distortion, achieving better display quality and making this method practical and valuable.
References
1. Lippmann, G.: Epreuves reversibles donnant la sensation du relief. J. Phys. Theor. Appl. 7, 821–825 (1908)
2. Ives, H.E.: Optical properties of a Lippmann lenticulated sheet. JOSA 21, 171–176 (1931)
3. Javidi, B., Sola-Pikabea, J., Martinez-Corral, M.: Breakthroughs in photonics 2014: recent advances in 3-D integral imaging sensing and display. IEEE Photonics J. 7, 0700907 (2015)
4. Jiao, T.T., Wang, Q.H., Deng, H., Zhou, L., Wang, F.: Computer-generated integral imaging based on 3DS MAX. Chin. J. Liq. Cryst. Disp. 23, 621–623 (2008)
5. Jiao, X.X., Zhao, X., Yang, Y., Fang, Z.L., Yuan, X.C.: Optical acquiring technique of three-dimensional integral imaging based on optimal pick-up distance. Opt. Precis. Eng. 19, 2805–2811 (2011)
6. Lyu, Y.Z., Wang, S.G., Ren, G.X., Yu, J.Q., Wang, X.Y.: Two dimensional multiview image array pickup and virtual viewpoint image synthesis algorithm. J. Harbin Eng. Univ. 34, 763–767 (2013)
7. Yang, S.W., et al.: Influences of the pickup process on the depth of field of integral imaging display. Opt. Commun. 386, 22–26 (2017)
8. Yuan, X.C., Xu, Y.P., Yang, Y., Zhao, X., Bu, J.: Design parameters of elemental images formed by camera array for crosstalk reduction in integral imaging. Opt. Precis. Eng. 19, 2050–2055 (2011)
9. Deng, H., Wang, Q.H., Liu, Y.: Generation method of elemental image array using micro-lens array with different specifications. Optoelectron. Technol. 34, 73–77 (2014)
10. Jang, J., Cho, M.: Fast computational integral imaging reconstruction by combined use of spatial filtering and rearrangement of elemental image pixels. Opt. Lasers Eng. 75, 57–62 (2015)
11. Fan, J., Wu, F., Lyu, G.J., Zhao, B.C., Ma, W.Y.: One-dimensional integral imaging display with large viewing angle. Optik 127, 5219–5220 (2016)
12. Lyu, Y.Z., Wang, S.G., Zhang, D.: Elemental image array generation and sparse viewpoint pickup in integral imaging. J. Jilin Univ. (Eng. Technol. Ed.) 43, 1–5 (2013)
13. Li, D., Zhao, X., Yang, Y., Fang, Z.L., Yuan, X.C.: Non-flipping reconstruction system design implementation in three dimension integral imaging. J. Optoelectron. Laser 23, 35–40 (2012)
14. Xing, S.J., et al.: High-efficient computer-generated integral imaging based on the backward ray-tracing technique and optical reconstruction. Opt. Express 25, 330–338 (2017)
15. Piao, Y., Wang, Y.: Non-periodic reconstruction technique of computational integral imaging. J. Inf. Comput. Sci. 5, 1259–1264 (2008)
16. Piao, Y., Wang, Y.: Technique of Integral Imaging, 1st edn. Publishing House of Electronics Industry, Beijing (2005)
© 2019 Springer Nature Switzerland AG
Zhang, L., Wang, S., Wu, W., Wei, J., Li, T. (2019). A New Method to Expand the Showing Range of a Virtual Reconstructed Image in Integral Imaging. In: Zhao, Y., Barnes, N., Chen, B., Westermann, R., Kong, X., Lin, C. (eds) Image and Graphics. ICIG 2019. Lecture Notes in Computer Science(), vol 11902. Springer, Cham. https://doi.org/10.1007/978-3-030-34110-7_28