
Fixed-Lens camera setup and calibrated image registration for multifocus multiview 3D reconstruction

  • S.I.: DICTA 2019
  • Published in Neural Computing and Applications

Abstract

Image-based 3D reconstruction, or 3D photogrammetry, of small-scale objects such as insects and biological specimens is challenging because of the high-magnification lens required, with its inherently limited depth of field, and the fine structures of the objects. Traditional 3D reconstruction techniques therefore cannot be applied without additional image preprocessing. One such preprocessing technique is multifocus stacking (fusion), which combines a set of partially focused images captured from the same viewing angle at different distances into a single in-focus image. We found that traditional multifocus image capture and stacking techniques do not properly account for image formation: the resulting in-focus images contain artifacts that violate perspective projection, and 3D reconstruction from such images often fails to produce accurate models of the captured objects. This paper shows how this problem can be solved effectively by a new multifocus multiview 3D reconstruction procedure comprising a new fixed-lens multifocus image capture and a calibrated image registration technique based on an analytic homography transformation. Experimental results on real and synthetic images demonstrate the effectiveness of the proposed solutions: both the fixed-lens image capture and the multifocus stacking with calibrated image alignment significantly reduce camera pose errors and produce more complete 3D models than conventional moving-lens image capture and multifocus stacking.
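
As a concrete illustration of the two ingredients named above, the sketch below first aligns a fixed-lens focus stack with an analytic homography and then fuses it using a focus measure. This is a minimal sketch, not the authors' implementation (their code is linked under Notes below): it assumes the analytic homography reduces to an isotropic scaling about the principal point, as it does for a fixed lens imaging an object translated along the optical axis, and it uses an absolute-Laplacian focus measure for the stacking step. The function names and the per-slice scale factors are hypothetical.

```python
import cv2
import numpy as np

def scaling_homography(s, cx, cy):
    """Analytic homography for a pure isotropic scaling by s about the
    principal point (cx, cy): maps x to s*(x - cx) + cx (assumed form)."""
    return np.array([[s,   0.0, cx * (1.0 - s)],
                     [0.0, s,   cy * (1.0 - s)],
                     [0.0, 0.0, 1.0]])

def align_focus_stack(images, scales, principal_point):
    """Warp each slice into the reference frame using its known,
    calibration-derived scale -- no feature matching involved."""
    cx, cy = principal_point
    h, w = images[0].shape[:2]
    return [cv2.warpPerspective(img, scaling_homography(s, cx, cy), (w, h))
            for img, s in zip(images, scales)]

def fuse_focus_stack(aligned, ksize=5):
    """Fuse aligned slices: per pixel, keep the slice with the strongest
    local absolute-Laplacian response (a common focus measure)."""
    focus = []
    for img in aligned:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
        fm = np.abs(cv2.Laplacian(gray, cv2.CV_32F, ksize=ksize))
        focus.append(cv2.GaussianBlur(fm, (ksize, ksize), 0))  # smooth decision map
    best = np.argmax(np.stack(focus), axis=0)   # (H, W) index of sharpest slice
    stack = np.stack(aligned)                   # (N, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

Because the lens-to-sensor geometry is fixed, each slice's scale relative to the reference follows from the calibrated camera model rather than from image content; this is what makes the registration "calibrated" and keeps the fused image consistent with a single perspective projection, as the downstream multiview reconstruction requires.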


Notes

  1. https://github.com/chuong/multifocus_multiview_stereo_reconstruction.


Acknowledgements

The authors would like to thank Dr. Matt Adcock and Mr. Stuart Anderson from CSIRO Data61 CPS Quantitative Imaging and Mr. Nunzio Knerr from the CSIRO National Research Collections Australia for their help with building the image capture system, Mr. Fabien Castan from Mikros Image for his help with using Meshroom for this project, and Mr. Julien Haudegond and Mr. Enguerrand De Smet from Mikros Image for their Blender add-on to create synthetic images. This research was partially funded by a CSIRO Julius Career Award.

Author information


Corresponding author

Correspondence to Shah Ariful Hoque Chowdhury.

Ethics declarations

Conflict of interest

The authors report no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (PDF 30,940 KB)


About this article


Cite this article

Chowdhury, S.A.H., Nguyen, C., Li, H. et al. Fixed-Lens camera setup and calibrated image registration for multifocus multiview 3D reconstruction. Neural Comput & Applic 33, 7421–7440 (2021). https://doi.org/10.1007/s00521-021-05926-7

