Abstract
The performance of coded exposure imaging critically depends on finding good binary sequences. Previous coded exposure imaging methods have mostly relied on random search to find the binary codes, but this approach easily fails to find good long sequences because the search space grows exponentially with sequence length. In this paper, we present two algorithms for generating binary sequences, which are especially well suited to short and long sequences, respectively. We show that the concept of low autocorrelation binary sequences, which has been successfully exploited in information theory, can be applied to generate shutter fluttering patterns. We also propose a new measure of sequence quality. Based on this measure, we introduce two new algorithms for coded exposure imaging: a modified Legendre sequence method and a memetic algorithm. Experiments on both synthetic and real data show that our algorithms consistently generate better binary sequences for the coded exposure problem, yielding better deblurring and resolution-enhancement results than previous code-generation methods.
Notes
Note that either \(\{0,1\}\) or \(\{-1,1\}\) can be used to represent the sequence value as shown in No et al. (1996).
We set the number of candidates to 100 for the experiment.
In McCloskey et al. (2012), a weighted sum of six metrics was used: (1) the minimum of the MTF, (2) the mean of the MTF, (3) the variance of the MTF, (4) the number of peaky frequencies, (5) weighted peaky frequencies and (6) the number of open chops. A rough sketch of how such a score could be assembled is given after these notes.
When using Trigger mode 5, the frame rate of the Flea 3 camera dips to 5 frames per second.
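For concreteness, the Python sketch below illustrates how a weighted score over the six ingredients listed in the note above could be assembled. It is only an illustration: the weights, the zero-padding length, and the tests used for "peaky frequencies" are assumptions of this sketch, not the values or definitions used in McCloskey et al. (2012).

```python
import numpy as np

def code_mtf(code, pad=1024):
    """Magnitude of the DFT of a binary fluttering code (its MTF),
    evaluated on a zero-padded grid, excluding the DC term."""
    c = np.asarray(code, dtype=float)
    mtf = np.abs(np.fft.rfft(c, n=pad))
    return mtf[1:]  # drop the DC component

def count_open_chops(code):
    """Number of contiguous runs of 1s, i.e. how often the shutter opens."""
    c = np.asarray(code, dtype=int)
    return int(np.sum(np.diff(np.concatenate(([0], c))) == 1))

def weighted_code_score(code, weights, null_thresh=0.1):
    """Hypothetical weighted score over the six ingredients listed in the note.
    'Peaky frequencies' are taken here to be frequencies whose MTF dips below
    a fraction of the mean MTF; this test and the weights are placeholders."""
    mtf = code_mtf(code)
    peaky = mtf < null_thresh * mtf.mean()
    terms = np.array([
        mtf.min(),              # (1) minimum of the MTF
        mtf.mean(),             # (2) mean of the MTF
        mtf.var(),              # (3) variance of the MTF
        peaky.sum(),            # (4) number of peaky frequencies
        (peaky * mtf).sum(),    # (5) weighted peaky frequencies
        count_open_chops(code), # (6) number of open chops
    ])
    return float(np.dot(weights, terms))
```

A candidate code would then be ranked by weighted_code_score(code, weights), with the signs and magnitudes of the weights chosen to reflect which of the six terms are rewarded and which are penalized.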
References
Agrawal, A., & Raskar, R. (2007). Resolving objects at higher resolution from a single motion-blurred image. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Agrawal, A., & Raskar, R. (2009). Optimal single image capture for motion deblurring. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Agrawal, A., & Xu, Y. (2009). Coded exposure deblurring: Optimized codes for PSF estimation and invertibility. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Asif, M. S., Ayremlou, A., Sankaranarayanan, A., Veeraraghavan, A., & Baraniuk, R. (2015). Flatcam: Thin, bare-sensor cameras using coded aperture and computation. arXiv preprint arXiv:1509.00116.
Baden, J. M. (2011). Efficient optimization of the merit factor of long binary sequences. IEEE Transactions on Information Theory, 57(12), 8084–8094.
Borwein, P., Choi, K. K., & Jedwab, J. (2004). Binary sequences with merit factor greater than 6.34. IEEE Transactions on Information Theory, 50(12), 3234–3249.
Borwein, P., Kaltofen, E., & Mossinghoff, M. J. (2007). Irreducible polynomials and Barker sequences. ACM Communications in Computer Algebra, 41(4), 118–121.
Boufounos, P. (2007). Generating binary processes with all-pole spectra. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
Chen, X., Ong, Y.-S., Lim, M.-H., & Tan, K. C. (2011). A multi-facet survey on memetic computation. IEEE Transactions on Evolutionary Computation, 15(5), 591–607.
Cossairt, O., Gupta, M., & Nayar, S. K. (2013). When does computational imaging improve performance? IEEE Transactions on Image Processing (TIP), 22(2), 447–458.
Fergus, R., Singh, B., Hertzmann, A., Roweis, S. T., & Freeman, W. T. (2006). Removing camera shake from a single photograph. ACM Transactions on Graphics, 25(3), 787–794.
Franzen, R. (1999). Kodak lossless true color image suite. http://www.r0k.us/graphics/kodak/.
Gallardo, J. E., Cotta, C., & Fernández, A. J. (2009). Finding low autocorrelation binary sequences with memetic algorithms. Applied Soft Computing, 9(4), 1252–1262.
Golay, M. (1983). The merit factor of Legendre sequences (Corresp.). IEEE Transactions on Information Theory, 29, 934–936.
Golay, M. J. (1977). Sieves for low autocorrelation binary sequences. IEEE Transactions on Information Theory, 23(1), 43–51.
Gorthi, S. S., Schaak, D., & Schonbrun, E. (2013). Fluorescence imaging of flowing cells using a temporally coded excitation. Optics Express, 21(4), 5164–5170.
Høholdt, T., & Jensen, H. E. (1988). Determination of the merit factor of Legendre sequences. IEEE Transactions on Information Theory, 34(1), 161–164.
Jedwab, J. (2005). A survey of the merit factor problem for binary sequences. In: Proceedings of Sequences and Their Applications.
Jensen, J. M., Jensen, H. E., & Høholdt, T. (1991). The merit factor of binary sequences related to difference sets. IEEE Transactions on Information Theory, 37(3), 617–626.
Jeon, H.-G., Lee, J.-Y., Han, Y., Kim, S. J., & Kweon, I. S. (2013). Fluttering pattern generation using modified Legendre sequence for coded exposure imaging. In: Proceedings of International Conference on Computer Vision (ICCV).
Jeon, H.-G., Lee, J.-Y., Han, Y., Kim, S. J., & Kweon, I. S. (2015). Complementary sets of shutter sequences for motion deblurring. In: Proceedings of International Conference on Computer Vision (ICCV).
Krishnan, D., & Fergus, R. (2009). Fast image deconvolution using hyper-Laplacian priors. In: Advances in Neural Information Processing Systems (NIPS).
Lempel, A., Cohn, M., & Eastman, W. (1977). A class of balanced binary sequences with optimal autocorrelation properties. IEEE Transactions on Information Theory, 23(1), 38–42.
Levin, A., Fergus, R., Durand, F., & Freeman, W. T. (2007). Image and depth from a conventional camera with a coded aperture. ACM Transactions on Graphics, 26(3).
Levin, A., Weiss, Y., Durand, F., & Freeman, W. (2009) Understanding and evaluating blind deconvolution algorithms. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1964–1971.
Lucy, L. B. (1974). An iterative technique for the rectification of observed distributions. Astronomical Journal, 79, 745–754.
Ma, C., Liu, Z., Tian, L., Dai, Q., & Waller, L. (2015). Motion deblurring with temporally coded illumination in an LED array microscope. Optics Letters, 40(10), 2281–2284.
Martin, D., Fowlkes, C., Tal, D., & Malik, J. (2001) A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In: Proceedings of International Conference on Computer Vision (ICCV).
McCloskey, S. (2010) Velocity-dependent shutter sequences for motion deblurring. In: Proceedings of European Conference on Computer Vision (ECCV).
McCloskey, S., Ding, Y., & Yu, J. (2012). Design and estimation of coded exposure point spread functions. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 34(10), 2071–2077.
Mertens, S. (1996). Exhaustive search for low-autocorrelation binary sequences. Journal of Physics A, 29, 473–481.
Michalewicz, Z. (1996). Genetic algorithms + data structures = evolution programs. Springer.
Militzer, B., Zamparelli, M., & Beule, D. (1998). Evolutionary search for low autocorrelated binary sequences. IEEE Trans Evolutionary Computation, 2(1), 34–39.
Nagahara, H., Zhou, C., Watanabe, T., Ishiguro, H., & Nayar, S. K. (2010). Programmable aperture camera using LCoS. In: Proceedings of European Conference on Computer Vision (ECCV).
No, J.-S., Lee, H.-K., Chung, H., Song, H.-Y., & Yang, K. (1996). Trace representation of Legendre sequences of Mersenne prime period. IEEE Transactions on Information Theory, 42(6), 2254–2255.
Raskar, R., Agrawal, A., & Tumblin, J. (2006). Coded exposure photography: motion deblurring using fluttered shutter. ACM Transactions on Graphics, 25(3), 795–804.
Richardson, W. H. (1972). Bayesian-based iterative method of image restoration. Journal of the Optical Society of America (JOSA), 62, 55–59.
Schechner, Y. Y., Nayar, S. K., & Belhumeur, P. N. (2007). Multiplexing for optimal lighting. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 29(8), 1339–1354.
Shan, Q., Jia, J., & Agarwala, A. (2008). High-quality motion deblurring from a single image. ACM Transactions on Graphics, 27(3), 73:1–73:10.
Tai, Y.-W., Kong, N., Lin, S., & Shin, S. Y. (2010) Coded exposure imaging for projective motion deblurring. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (TIP), 13(4), 600–612.
Wiener, N. (1964). Extrapolation, interpolation, and smoothing of stationary time series. The MIT Press.
Xiong, T., & Hall, J. I. (2011). Modifications of modified Jacobi sequences. IEEE Transactions on Information Theory, 57(1), 493–504.
Zhou, C., Lin, S., & Nayar, S. (2011). Coded aperture pairs for depth from defocus and defocus deblurring. International Journal of Computer Vision (IJCV), 93(1), 53.
Zuo, C., Sun, J., Feng, S., Zhang, M., & Chen, Q. (2016). Programmable aperture microscopy: A computational method for multi-modal phase contrast and light field imaging. Optics and Lasers in Engineering, 80, 24–31.
Acknowledgements
This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP) (Nos. 2010-0028680 and 2016-4014610). Hae-Gon Jeon was partially supported by the Global Ph.D. Fellowship Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-20151034617).
Additional information
Communicated by Cordelia Schmid.
Appendix
1.1 Appendix 1: Derivation of Eq. (7)
Let \({\hat{U}}\) be a fluttering shutter pattern with elements \({\hat{u}}_{i}\in \{0, 1\}\). We define \(B={\hat{U}}-\mu \), with elements \(b_{i}\in \{-\mu , 1-\mu \}\), where \(\mu \) is the mean of the elements of \({\hat{U}}\). We then introduce \(U=2(B+\mu -0.5)\), with elements \(u_{i}\in \{-1, 1\}\); that is, \(U\) is \({\hat{U}}\) with its values mapped from \(\{0,1\}\) to \(\{-1,1\}\).
Let \({\hat{a}}_{k}\) be the autocovariance of \({\hat{U}}\) and \(t_{k}\) the autocorrelation of \(B\); then \({\hat{a}}_{k}=t_{k}\) (Boufounos 2007). Let \(a_{k}\) denote the autocorrelation of \(U\), which can be derived as follows.
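Writing \(m = 2\mu - 1\) for the constant offset, so that \(u_{i} = 2b_{i} + m\) (this choice of notation for \(m\) is an assumption of this sketch), the autocorrelation of \(U\) can be expanded as

\[
\begin{aligned}
a_{k} &= \sum_{i=0}^{n-k-1} u_{i}\,u_{i+k}
       = \sum_{i=0}^{n-k-1} \bigl(2b_{i}+m\bigr)\bigl(2b_{i+k}+m\bigr)\\
      &= 4\sum_{i=0}^{n-k-1} b_{i}\,b_{i+k}
         + 2m\sum_{i=0}^{n-k-1}\bigl(b_{i}+b_{i+k}\bigr)
         + m^{2}(n-k)\\
      &= 4\,t_{k}
         + 2m\sum_{i=0}^{n-k-1}\bigl(b_{i}+b_{i+k}\bigr)
         + m^{2}(n-k)
       = 4\,{\hat{a}}_{k}
         + 2m\sum_{i=0}^{n-k-1}\bigl(b_{i}+b_{i+k}\bigr)
         + m^{2}(n-k),
\end{aligned}
\]

where \(n\) is the sequence length.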
As mentioned in the original paper, \(m\) becomes 0 under the assumption that the sequence is balanced, i.e., it contains an equal number of zeros and ones, which is required for optimal autocorrelation properties.
1.2 Appendix 2: Example Sequences
See Table 2.
About this article
Cite this article
Jeon, HG., Lee, JY., Han, Y. et al. Generating Fluttering Patterns with Low Autocorrelation for Coded Exposure Imaging. Int J Comput Vis 123, 269–286 (2017). https://doi.org/10.1007/s11263-016-0976-4