
6 - Multi-sensor fusion for motion deblurring

Published online by Cambridge University Press:  05 June 2014

Jingyi Yu
Affiliation:
University of Delaware
A. N. Rajagopalan
Affiliation:
Indian Institute of Technology, Madras
Rama Chellappa
Affiliation:
University of Maryland, College Park

Summary

This chapter presents multi-sensor fusion techniques for motion deblurring. With recent advances in digital imaging, the use of high-resolution, high-speed, or high-dynamic-range cameras has become common practice. However, thus far no single image sensor can satisfy the diverse requirements of all current industrial camera applications. For example, high-speed (HS) cameras can capture fast motion with little motion blur but require expensive sensors, bandwidth, and storage. The image resolution of HS cameras is often much lower than that of many commercial still cameras. This is mainly because the image resolution needs to scale linearly with the exposure time (Ben-Ezra & Nayar 2003) to maintain the signal-to-noise ratio (SNR), i.e. higher speed maps to lower resolution. In addition, the relatively low bandwidth of common interfaces such as USB 2.0 or FireWire IEEE 1394a restricts the image resolution, especially when streaming video at 100–200 fps.
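The speed-versus-resolution tradeoff above can be made concrete with a shot-noise-limited sensor model: the photon count per pixel grows with pixel area and exposure time, and SNR under Poisson statistics is the square root of that count. The sketch below is illustrative only (the flux and area numbers are hypothetical, not from the chapter); it shows that cutting the exposure time by 4x demands 4x the pixel area, i.e. 4x fewer pixels on the same die, to hold SNR fixed.

```python
import math

def shot_noise_snr(photon_flux, pixel_area, exposure_time):
    """SNR under a Poisson (shot-noise) model: mean / sqrt(mean) = sqrt(mean).

    photon_flux   -- photons per unit area per second reaching the sensor
    pixel_area    -- light-gathering area of one pixel (arbitrary units)
    exposure_time -- shutter time in seconds
    """
    photons = photon_flux * pixel_area * exposure_time
    return math.sqrt(photons)

# Baseline: 30 fps at unit pixel area (numbers are illustrative).
base = shot_noise_snr(photon_flux=1e6, pixel_area=1.0, exposure_time=1 / 30)

# Quadrupling the frame rate cuts exposure time by 4x; to keep the same
# SNR each pixel must gather 4x the light, so resolution drops by 4x.
fast = shot_noise_snr(photon_flux=1e6, pixel_area=4.0, exposure_time=1 / 120)
```

Under this model `base` and `fast` are equal, which is exactly the linear resolution-exposure scaling the chapter attributes to Ben-Ezra & Nayar (2003).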

The problem of acquiring high-quality imagery with little motion blur under low light is particularly challenging. To guarantee sufficient exposure, one can use either a wide aperture or a slow shutter. For example, by coupling a wide aperture with a fast shutter, we can capture fast-moving scene objects with low noise. However, wide apertures lead to a shallow depth of field (DoF) in which only part of the scene can be brought into sharp focus. In contrast, by coupling a slow shutter with a narrow aperture, one can capture all depth layers in focus, although the long exposure then reintroduces motion blur for moving objects.
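The aperture/shutter tradeoff can be sketched numerically with standard thin-lens formulas (not taken from the chapter; the focal length, f-numbers, and circle-of-confusion value below are illustrative assumptions). Two settings with equal total exposure — wide aperture + fast shutter vs. narrow aperture + slow shutter — yield very different depths of field.

```python
import math

def exposure(shutter_s, f_number):
    # Relative exposure: proportional to shutter time and aperture area (1/N^2).
    return shutter_s / f_number**2

def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus from the hyperfocal distance.

    All distances in mm; coc_mm is the assumed circle of confusion.
    """
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# Two equal-exposure settings: f/2 at 1/512 s vs. f/16 at 1/8 s.
wide_open = exposure(1 / 512, 2.0)
stopped_down = exposure(1 / 8, 16.0)

# Depth of field for a 50 mm lens focused at 2 m (illustrative values):
near_wide, far_wide = dof_limits(50.0, 2.0, 2000.0)    # shallow DoF
near_narrow, far_narrow = dof_limits(50.0, 16.0, 2000.0)  # deep DoF
```

The narrow-aperture setting keeps far more of the scene in focus, but only by stretching the shutter time 64x, which is precisely where motion blur re-enters.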

Type: Chapter
In: Motion Deblurring: Algorithms and Systems, pp. 123–140
Publisher: Cambridge University Press
Print publication year: 2014


References

Baker, S. & Matthews, I. (2004). Lucas–Kanade 20 years on: a unifying framework. International Journal of Computer Vision, 56(3), 221–55.
Ben-Ezra, M. & Nayar, S. K. (2004). Motion-based motion deblurring. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6), 689–98.
Ben-Ezra, M. & Nayar, S. K. (2003). Motion deblurring using hybrid imaging. In Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 657–64.
Bergen, J. R., Anandan, P., Hanna, K. J. & Hingorani, R. (1992). Hierarchical model-based motion estimation. In Proceedings of the Second European Conference on Computer Vision, pp. 237–52.
Boykov, Y. & Funka-Lea, G. (2006). Graph cuts and efficient N-D image segmentation. International Journal of Computer Vision, 70(2), 109–31.
Dabov, K., Foi, A., Katkovnik, V. & Egiazarian, K. (2007). Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Transactions on Image Processing, 16(8), 2080–95.
Eisemann, E. & Durand, F. (2004). Flash photography enhancement via intrinsic relighting. ACM Transactions on Graphics, 23(3), 673–8.
Fergus, R., Singh, B., Hertzmann, A., Roweis, S. T. & Freeman, W. T. (2006). Removing camera shake from a single photograph. ACM Transactions on Graphics, 25(3), 787–94.
Kolmogorov, V. & Zabih, R. (2002). Multi-camera scene reconstruction via graph cuts. In Proceedings of the 7th European Conference on Computer Vision, Part III, pp. 82–96.
Kopf, J., Cohen, M. F., Lischinski, D. & Uyttendaele, M. (2007). Joint bilateral upsampling. ACM Transactions on Graphics, 26(3), 96:1–10.
Krishnan, D. & Fergus, R. (2009). Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems, 22, 1–9.
Li, F., Ji, Y. & Yu, J. (2013). A Hybrid Camera Array for Low Light Imaging. University of Delaware Technical Report UD-CIS-2013-01.
Li, F., Yu, J. & Chai, J. (2008). A hybrid camera for motion deblurring and depth map super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8.
Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
Mertens, T., Kautz, J. & Van Reeth, F. (2007). Exposure fusion. In IEEE 15th Pacific Conference on Computer Graphics and Applications, pp. 382–90.
Petschnigg, G., Szeliski, R., Agrawala, M., Cohen, M., Hoppe, H. & Toyama, K. (2004). Digital photography with flash and no-flash image pairs. ACM Transactions on Graphics, 23(3), 664–72.
Tai, Y.-W., Du, H., Brown, M. S. & Lin, S. (2010). Correction of spatially varying image and video motion blur using a hybrid camera. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6), 1012–28.
Tomasi, C. & Manduchi, R. (1998). Bilateral filtering for gray and color images. In Proceedings of the 6th International Conference on Computer Vision, pp. 839–46.
Wang, Y., Yang, J., Yin, W. & Zhang, Y. (2008). A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3), 248–72.
Yang, Q., Yang, R., Davis, J. & Nistér, D. (2007). Spatial-depth super resolution for range images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8.
Yitzhaky, Y., Mor, I., Lantzman, A. & Kopeika, N. S. (1998). Direct method for restoration of motion-blurred images. Journal of the Optical Society of America A: Optics, Image Science & Vision, 15(6), 1512–19.
Yu, Z., Thorpe, C., Yu, X., Grauer-Gray, S., Li, F. & Yu, J. (2011). Dynamic depth of field on live video streams: a stereo solution. In Computer Graphics International, pp. 1–9.
Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330–4.
