
1 Introduction

Integral imaging has attracted attention as a true 3D display technique because it offers excellent display quality, requires no viewing aids, and causes no visual fatigue [1,2,3]. Studies on integral imaging have focused on the acquisition of the elementary image array [4,5,6,7,8,9,10], the reconstruction of displayed images [11, 12], and applications in display devices [13]. Integral imaging can be divided into three main types: fully optical integral imaging, computer-assisted integral imaging, and virtual reconstruction integral imaging. In virtual reconstruction integral imaging, we simulate an optical lens array and a charge-coupled device (CCD) to pick up an elementary image array, and then reconstruct virtual images from this array. Through this procedure, we can obtain images of different depths or from different viewing points. In recent years, ray-tracing techniques [14, 15] have been widely used for virtual reconstruction owing to their excellent reconstruction quality. However, only the light ray that passes through the center of each microlens can be traced, yielding one pixel per microlens. Thus, the resolution of the reconstructed image remains very low, at most equal to the number of microlenses in the array.

Therefore, many studies have tried to improve the resolution of the reconstructed image. One study [16] proposed a novel reconstruction technique based on lens array models that is similar to the traditional ray-tracing technique. Instead of picking up one pixel from each elementary image, a square block of a certain size containing the accurate pixel is extracted from each elementary image, and these blocks are combined into reconstructed images from different viewing points. Thus, the resolution is increased by a factor of the side length of the square block. However, interference from stray light during pickup and the discarding of the distortion area greatly degrade the quality of the reconstructed image. Eliminating stray light and correcting the distortion area therefore remain major unsolved problems in virtual reconstruction.

Instead of using optical devices such as a lens array and a CCD for pickup, we use the MAYA software to create a virtual camera array that simulates an actual picking-up system by setting the parameters of the virtual cameras appropriately. In other words, we replace the optical picking-up process with a virtual one, which effectively eliminates the interference of stray light, as shown in Fig. 1. Having eliminated this interference, we analyze how the stripe distortion is produced and propose a method to find a matching block for the distortion area. We then substitute the distortion area with its matching block rather than discarding it; doing so greatly expands the display range of the reconstructed image.

Fig. 1. (a) Analysis of the generation of the distortion: in the virtual picking-up system, it derives mainly from the interference of redundant light; (b) picking-up range of a single lens.

2 Simulating Optical Lens Array Using Virtual Camera Array

We develop a virtual picking-up system in MAYA to eliminate stray light, so that each elementary image contains only the objects picked up through its own, nearest microlens, as shown in Fig. 1(a). To set the parameters of the virtual camera array correctly, it is necessary to first understand the picking-up procedure.

Figure 1(b) shows part of the picking-up system, which in turn is part of a complete optical system established according to the Gaussian formula. In the figure, h is the object distance, g is the image distance, and p is the pitch of the microlenses, which also equals the size of each microlens. If a virtual camera array is to substitute for the optical picking-up system, the viewing angle of each virtual camera must be kept identical to that of the corresponding microlens. We achieve this by altering the horizontal and vertical aperture values of the virtual camera, i.e., the size of its negative, while keeping its focal length equal to that of the lens array, so that the shooting range of each camera matches that of its microlens, as shown in Eq. (1).

$$ \tan \frac{\theta }{2} = \frac{p}{2g} = \frac{d}{2f} $$
(1)

In Eq. (1), θ is the horizontal or vertical viewing angle, d is the size of the virtual camera's negative, and f is its focal length. In the experiment, we use a camera array of size 50 × 50 to simulate a lens array of size 50 × 50. Figure 2(a) shows the setup of the picking-up system in MAYA, and Fig. 2(b) shows the resulting elementary image array with a resolution of 10000 × 10000. In the virtual environment, we can eliminate the interference caused by stray light and also adjust the resolution of each elementary image flexibly, thus obtaining a better image than the low-resolution one produced by an optical system. In the experiment, we set the resolution of each elementary image to 200 × 200, which is higher than a practical optical picking-up system can achieve, and obtained an elementary image array with a resolution of 10000 × 10000.
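To make the parameter setting concrete, the following minimal Python sketch (our own illustration, not the authors' code) derives the negative size and the corresponding MAYA aperture value from the lens-array parameters using the Gaussian formula and Eq. (1); the parameter values are those used later in Sect. 4, and MAYA expresses the film aperture in inches.

f = 3.0   # focal length of the microlenses and virtual cameras (mm)
p = 1.0   # microlens pitch, equal to the microlens size (mm)
h = 21.0  # object distance (mm)

g = 1.0 / (1.0 / f - 1.0 / h)  # image distance from the Gaussian formula, Eq. (5): 3.5 mm
d = f * p / g                  # negative size from tan(theta/2) = p/(2g) = d/(2f), Eq. (1)

print(f"image distance g = {g} mm")             # 3.5 mm
print(f"negative size d  = {d:.3f} mm")         # ~0.857 mm
print(f"MAYA aperture    = {d / 25.4:.3f} in")  # ~0.034 in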

Fig. 2. (a) Virtual setup and elementary image generation. The green part is the 50 × 50 camera array of the virtual picking-up system, and the objects in front are virtual buildings and balloons. The system is built in MAYA, with the camera parameters set according to the lens array being simulated; (b) elementary image array generated by the virtual system. Each virtual camera generates an elementary image with a resolution of 200 × 200; stitching these together yields an elementary image array with a resolution of 10000 × 10000. (Color figure online)

Then, we apply the method proposed in [16] to reconstruct images from different viewing points. Figure 3 shows the reconstructed image whose viewing point is at the top left corner. A striped distortion area, circled in the figure, is clearly visible; traditional image filters cannot remove it without introducing small display errors. In [16], such areas therefore have to be discarded, which decreases the display range: the building in the bottom right corner of Fig. 3 would become entirely invisible if the circled distortion area were discarded. However, Fig. 3 also shows that the distortion area is not entirely erroneous, indicating that these areas can be corrected instead of being discarded. Having eliminated the interference of stray light, we can conclude from further analysis that the distortion arises from improper pixel extraction, and we can fix it by substituting correct pixel blocks for the distortion area to improve the image quality.

Fig. 3. Reconstructed image generated by the method proposed in [16], with the distortion area circled. The viewing point is set at the top left corner, and the distortion is mainly located in the circled area. According to [16], these distortion areas have to be discarded, which reduces the display range of the reconstructed image.

3 Analysis of Distortion

We need to analyze the origin of the distortion before we can correct it. Figure 4 shows the principle of the method proposed in [16]. We call the areas extracted from the elementary image array, such as ba, cb, and dc, extraction blocks. In the reconstruction process, the extraction blocks ba and cb, generated from AB and BC, respectively, each lie within a single elementary image because these areas are close to the viewing point O. By contrast, the extraction block dc is far from the viewing point and crosses two elementary images. Because this block is not generated from a single elementary image, the object picked up in this area is discontinuous, which produces the distortion.

Fig. 4. Principle of the generation of distortion. The background picture is the elementary image array. For an elementary image far from the viewing point O, the extraction block is not located within a single elementary image; area 2 is the distortion area that needs to be modified. Merely discarding the whole extraction block would not only reduce the display range of the reconstructed image but also lower the usage efficiency of the elementary image array, because area 1 is picked up correctly.

As shown in Fig. 4, the correct extraction area is area 1. Area 2 is discontinuous with area 1, which causes the distortion. If we substitute area 2 with its matching area to make the extraction block dc continuous, we can eliminate the distortion and expand the display range of the reconstructed image. Therefore, our next step is to locate the distortion area and search for its matching block. We set the viewing point on the optical axis of the first camera, at the top left corner, as shown in Fig. 5; because we use virtual reconstruction, the position of the viewing point can be specified directly rather than by building an optical reconstruction system. We classify the distortion into three types.

Fig. 5. Position of the virtual viewing point. The viewing point is simulated according to the parameters of the picking-up system and can be set anywhere. Once it is set, we can calculate the size of the extraction areas and their positions in the elementary image array to obtain the reconstructed image at this viewing point.

3.1 Distortion in Bottom Left Corner

First, we analyze the distortion at the bottom left corner, as shown in Fig. 6. The reconstructed image is shown on the left-hand side of Fig. 6. M and N are extraction blocks in the same column that contain the distortion area. The colored area is the distortion area, which grows wider toward the bottom of the image; in [16], extraction blocks containing such areas have to be discarded, which reduces the display range of the reconstructed image. We magnify this area at the top of Fig. 6 for a clearer analysis. From the reconstructed image, it can be inferred that the light rays AA' and BB' match AA'' and BB'', respectively. The matching block of distortion area 2, marked 2', is found in the upper elementary image; distortion area 1 and its matching block 1' are found in the same way. During image reconstruction, we replace each distortion area with its matching block to correct the distortion in the bottom left corner. Let (m, n) be the coordinate of the first pixel we extract, at the top left corner of the elementary image array, let a × a be the size of an elementary image, and let s × s be the size of an extraction block. Then we define the parameter u in Eq. (2):

Fig. 6. Distortion in the bottom left corner and matching blocks for the distortion area. The blue stripe and the red stripe are distortion areas that need to be modified; their matching blocks are searched for in adjacent elementary images. We replace each distortion area with its matching block and stitch all the corrected extraction blocks together to generate reconstructed images for different viewing points.

$$ u = m + s + (a + \Delta)(i - 1) - ai = m + s + \Delta(i - 1) - a $$
(2)
$$ \Delta = \frac{pg}{h + l} $$
(3)

where i is the rank number of the extraction block in the vertical direction, l is the distance between the viewing point and the reconstruction plane, and Δ is the non-periodic extraction distance introduced in [16]. When i is small, u < 0, and there is no distortion in the extraction block. As i increases, the extraction block moves further away from the viewing point, u eventually becomes positive, and distortion emerges in the extraction block. From this calculation, we obtain the size of the distortion in the extraction block, which is u × s.
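As a sketch in the paper's notation, Eqs. (2) and (3) transcribe directly into code:

def nonperiodic_distance(p, g, h, l):
    """Delta of Eq. (3), the non-periodic extraction distance of [16].
    With the values of Sect. 4 (p = 1 mm, g = 3.5 mm, h = 21 mm, l = 329 mm)
    this gives 0.01 mm, i.e. 2 pixels at 200 px per 1 mm elementary image."""
    return p * g / (h + l)

def vertical_overrun(i, m, s, a, delta):
    """u of Eq. (2) for the i-th block row (1-based). u > 0 means the block
    crosses into the elementary image below, leaving a distorted u-by-s strip."""
    return m + s + delta * (i - 1) - a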

Using the above method, we can determine the position and size of the distortion area. One more condition must be considered. As shown in Fig. 6, the matching block should be searched for in the nearest elementary image to minimize the deviation in the reconstructed image, which requires setting the system parameters so that the matching block does not itself cross two elementary images. If it does, the matching block must be searched for in the second-nearest elementary image, using the same searching method as before.

3.2 Distortion in Top Right Corner

Figure 7 shows the distortion in the top right corner. The reconstructed image is on the left-hand side of the figure, and the extraction blocks M and N contain colored distortion areas; the distortion grows wider toward the right-hand side of the image. We again magnify this area to analyze the distortion. Based on the reconstruction plane, we conclude that the light rays AA' and BB' match AA'' and BB'', respectively. For distortion area 2, the matching block, marked 2', is found in the elementary image to the left; the other matching blocks are found in the same way. We again replace each distortion area with its matching block when reconstructing the image, improving the display quality of the top right corner. We define o in Eq. (4):

Fig. 7. Distortion in the top right corner and matching blocks for the distortion area. The distortion area is again stripe-shaped; following the method of Fig. 6, we obtain corrected extraction blocks and generate reconstructed images.

$$ o = n + s + (a + \Delta)(j - 1) - aj = n + s + \Delta(j - 1) - a $$
(4)

where j is the rank number of the extraction block in the horizontal direction. When j is small, o < 0, and there is no distortion in the extraction block. As j increases, the extraction block moves away from the viewing point, o eventually becomes positive, and distortion emerges in the extraction block. From this calculation, we obtain the size of the distortion in the extraction block, which is s × o. Using this method, we can determine the position and size of the distortion area.
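The horizontal case is the mirror image of Eq. (2); as a sketch:

def horizontal_overrun(j, n, s, a, delta):
    """o of Eq. (4) for the j-th block column (1-based). o > 0 means the block
    crosses into the elementary image to the right, leaving an s-by-o strip."""
    return n + s + delta * (j - 1) - a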

3.3 Distortion in Bottom Right Corner

This distortion area can be divided into three parts, marked 1, 2, and 3 in Fig. 8. The matching blocks of distortion areas 1 and 2 are searched for in the upper and left elementary images, respectively. Distortion area 3 is a special case: as shown in Fig. 8, its nominal matching block 3' is itself located in a distortion area, so the matching block of area 3 must be searched for in the elementary image at its top left. Using the parameters defined above, we conclude that as i and j increase, this distortion eventually emerges; the size of distortion area 1 is u × (s − o), that of distortion area 2 is (s − u) × o, and that of distortion area 3 is u × o.

Fig. 8. Distortion at the bottom right corner and matching blocks for the distortion area. Here the distortion area is not stripe-shaped, and the method of finding its matching blocks differs from that of Figs. 6 and 7. We separate the distortion into three small blocks and find their individual matching blocks; then we stitch the three matching blocks and the correct area together to obtain the corrected block.

Thus far, we have corrected the distortion areas. When the viewing point is at another position, the analysis and substitution can be performed in the same way. Using this method, we can reconstruct images with a larger display range from different viewing points.

4 Experimental Results

The virtual optical system is set according to Eq. (5). We set the distance between the viewing point at the top left corner and the elementary image array to 3.5 mm + 21 mm + 329 mm = 353.5 mm; the distance l between the viewing point and the reconstructed image plane is therefore 329 mm. According to Eq. (1), we set the aperture of the virtual cameras in MAYA to 0.034 × 0.034 (MAYA specifies the film aperture in inches), corresponding to a negative of roughly 0.86 mm × 0.86 mm. Furthermore, we set the focal length to 3 mm, the distance between adjacent cameras to 1 mm, and the size of the camera array to 50 × 50. This camera array simulates a lens array with a focal length of 3 mm, a size of 50 × 50, and a microlens pitch of 1 mm. We set the resolution of each elementary image to 200 × 200, obtaining an elementary image array with a resolution of 10000 × 10000, as shown in Fig. 2.

$$ \frac{1}{3} = \frac{1}{3.5} + \frac{1}{21} $$
(5)

In the reconstruction procedure, with the pixel as the unit, we set the size of the extraction block to 31 × 31 pixels and the coordinate of the first pixel in the elementary image array to (84, 84). According to [16], the interval between extraction blocks is 202 pixels, which means that Δ is 202 − 200 = 2. From the above calculation, we conclude that distortion emerges from the 45th extraction block onward in both the vertical and the horizontal directions, with the width of the distortion area increasing as 2, 4, 6, 8, … pixels. Accordingly, we can search for the matching block of each distortion area and replace the distortion area with its matching block in the reconstruction process. Figure 9 compares the images before and after the correction.
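These figures can be checked with a short script (our own verification; the 1-based pixel indexing used to decide where a block crosses an elementary-image boundary is an assumption):

# Check where distortion emerges and how wide it grows (values from this section).
a, s, m, delta = 200, 31, 84, 2

for i in range(1, 51):
    start = m + (a + delta) * (i - 1)  # first row/column of the i-th extraction block
    end = start + s - 1                # last row/column of the block
    spill = end - a * i                # pixels spilling past the i-th elementary image
    if spill > 0:
        print(f"block {i}: distortion width {spill} px")  # first at i = 45: 2, 4, 6, ...

# Display-range gain of Eq. (6): 50 x 50 corrected blocks vs. 44 x 44 retained ones.
gain = ((50 * 31) ** 2 - (44 * 31) ** 2) / (44 * 31) ** 2
print(f"display range improved by {gain:.1%}")  # -> 29.1%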

Fig. 9. Comparison of the reconstructed image before and after the correction.

Figure 9 shows the evident improvement in the reconstructed image and the expansion of the display range, which is important for the overall improvement of the display quality. According to Eq. (6), the display range of the reconstructed image is improved by 29.1%. Figures 10, 11 and 12 compare reconstructed images for three other viewing points, acquired by the same procedure as before. Although some visible errors remain in the corrected reconstructed images, the overall display quality is greatly improved and the display range is expanded.

Fig. 10. Comparison of reconstructed images for the viewing point at the top right corner: (a) before correction; (b) after correction. The distortion areas are correctly replaced by their matching blocks, and the display range of the reconstructed image is expanded.

Fig. 11. Comparison of reconstructed images for the viewing point at the bottom right corner: (a) before correction; (b) after correction. The viewing point differs from that in Fig. 10; accordingly, a larger part of the building roof is visible here than in Fig. 10. The distortion area is mainly located in the black background, which makes the contrast between (a) and (b) less obvious.

Fig. 12. Comparison of reconstructed images for the viewing point at the bottom left corner: (a) before correction; (b) after correction. From this viewing point, more of the right side of each building is visible. The distortion area is mainly located at the right side of the image; after processing, the distortion is eliminated and the right side of the image is completely reconstructed.

$$ \frac{(50 \times 31) \times (50 \times 31) - (44 \times 31) \times (44 \times 31)}{(44 \times 31) \times (44 \times 31)} \approx 29.1\% $$
(6)

The above figures show a marked improvement in the display range and display quality, especially where the distortion is evident to viewers. However, the method still has limitations: it is computationally heavy and somewhat complicated to implement in a program. Furthermore, the procedure is time-consuming, and the viewing angle is narrow and still needs to be optimized.

5 Summary

We devise a virtual picking-up system in which a virtual camera array simulates an optical picking-up system, and we expand the display range of the reconstructed image by analyzing how the distortion area is generated and substituting it with its matching block. In this way, we improve the display quality of the reconstructed image and increase the usage efficiency of the elementary image array. In a practical integral imaging application, the position of the viewer's viewing point could be tracked with sensors; the elementary image array could then be corrected accordingly to avoid the generation of distortion, achieving better display quality and making this method valuable and practical.