
FisherRF: Active View Selection and Mapping with Radiance Fields Using Fisher Information

  • Conference paper
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

This study addresses the challenging problem of active view selection and uncertainty quantification within the domain of Radiance Fields. Neural Radiance Fields (NeRF) have greatly advanced image rendering and reconstruction, but the cost of acquiring images makes it essential to select the most informative viewpoints efficiently. Existing approaches depend on modifying the model architecture or on a hypothetical perturbation field to approximate model uncertainty indirectly. However, selecting views based on such indirect approximations does not guarantee optimal information gain for the model. By leveraging Fisher Information, we directly quantify the observed information on the parameters of Radiance Fields and select candidate views by maximizing the Expected Information Gain (EIG). Our method achieves state-of-the-art results on multiple tasks, including view selection, active mapping, and uncertainty quantification, demonstrating its potential to advance the field of Radiance Fields.
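
To make the selection rule described above concrete, the following is a minimal sketch of Fisher-Information-based next-best-view selection in PyTorch. It is not the authors' released implementation: the render callable, the function names, and the diagonal empirical-Fisher approximation are illustrative assumptions made here for brevity; the candidate-view Hessian is likewise approximated by squared gradients of the rendered image, which requires no ground-truth image for the candidate.

    # A minimal sketch (in PyTorch) of Fisher-Information-based next-best-view
    # selection, written for illustration only; it is NOT the authors' released
    # FisherRF implementation. `render` is a hypothetical differentiable renderer
    # mapping (params, camera_pose) -> image tensor, and `params` is assumed to
    # be a single flat parameter tensor with requires_grad=True.
    import torch


    def empirical_fisher_diag(render, params, observed_views, eps=1e-6):
        """Diagonal empirical-Fisher approximation over the observed views.

        Each element of `observed_views` is a (pose, ground_truth_image) pair.
        """
        fisher = torch.zeros_like(params)
        for pose, gt in observed_views:
            loss = 0.5 * ((render(params, pose) - gt) ** 2).sum()
            (grad,) = torch.autograd.grad(loss, params)
            fisher += grad.detach() ** 2
        return fisher + eps  # small regularizer keeps the inverse well defined


    def eig_score(render, params, fisher, candidate_pose):
        """Approximate Expected Information Gain of one candidate pose.

        Scores tr(H_cand @ H_obs^{-1}) under a diagonal approximation; the
        candidate-view Hessian is approximated by squared gradients of the
        rendered image, so no ground-truth image is needed for the candidate.
        """
        pseudo_loss = 0.5 * (render(params, candidate_pose) ** 2).sum()
        (grad,) = torch.autograd.grad(pseudo_loss, params)
        return ((grad.detach() ** 2) / fisher).sum()


    def select_next_view(render, params, observed_views, candidate_poses):
        """Greedy next-best-view: return the index of the highest-EIG candidate."""
        fisher = empirical_fisher_diag(render, params, observed_views)
        scores = torch.stack([
            eig_score(render, params, fisher, pose) for pose in candidate_poses
        ])
        return int(scores.argmax())

In use, render would wrap a trained Radiance Field (e.g., a NeRF or 3D Gaussian Splatting model), and select_next_view would be called after each training round to decide which view to acquire next.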

Acknowledgements

The authors gratefully acknowledge support through the following grants: NSF FRR 2220868, NSF IIS-RI 2212433, NSF TRIPODS 1934960, and NSF CPS 2038873. The authors thank Pratik Chaudhari for the insightful discussion and Yinshuang Xu for proofreading the drafts.

Author information

Corresponding author

Correspondence to Wen Jiang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 66096 KB)

Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jiang, W., Lei, B., Daniilidis, K. (2025). FisherRF: Active View Selection and Mapping with Radiance Fields Using Fisher Information. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15071. Springer, Cham. https://doi.org/10.1007/978-3-031-72624-8_24

  • DOI: https://doi.org/10.1007/978-3-031-72624-8_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72623-1

  • Online ISBN: 978-3-031-72624-8

  • eBook Packages: Computer Science, Computer Science (R0)
