
Employing multimodal co-learning to evaluate the robustness of sensor fusion for industry 5.0 tasks


Abstract

Industry 5.0 focuses on collaboration between humans and machines and demands robustness, efficiency, and accuracy from the intelligent components it employs. The use of sensors, and the fusion of data obtained from multiple sensors or modalities, has grown with the rise of the Internet of Things. Multimodal sensor fusion achieves better accuracy than single-source, single-modality systems. Typically, multimodal fusion assumes that data from all sources are available at all times, aligned, and noiseless. This assumption is highly unrealistic: in most real-world applications, data from one or more sources are often unavailable or noisy. Hence, robust sensor fusion is needed, reflecting the more realistic scenario that is critical for Industry 5.0 implementations. Multimodal co-learning is one approach for studying the robustness of sensor fusion to missing and noisy modalities. In this work, gas detection systems are considered as a case study to demonstrate the effectiveness of multimodal co-learning for robustness evaluation. Such gas detection systems are widespread, crucial for preventing accidents due to gas leaks in many industries, and form part of the Industry 5.0 setup. A primary dataset of gas sensor readings and thermal images is used for the robustness experiments. The results demonstrate that multi-task fusion is more robust to missing and noisy modalities than intermediate fusion. An additional low-resolution thermal modality supports co-learning and makes the system robust to 20% missing sensor data, 90% missing thermal image data, and Gaussian and normal noise. The proposed end-to-end system architecture can be easily extended to other multimodal applications in Industry 5.0. This study is a step towards creating standard practices for a multimodal co-learning charter for various industrial applications.
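The robustness protocol summarised above (dropping a fraction of one modality and injecting Gaussian noise before evaluating the fused model) can be illustrated with a small perturbation sketch. The snippet below is a minimal, hypothetical example and not the authors' implementation: the array shapes, the helper names drop_modality and add_gaussian_noise, and the noise level sigma are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def drop_modality(x, missing_fraction):
    """Zero out a random fraction of samples for one modality to
    simulate missing data (e.g., 0.2 for sensors, 0.9 for thermal images)."""
    x = x.copy()
    n = x.shape[0]
    missing_idx = rng.choice(n, size=int(missing_fraction * n), replace=False)
    x[missing_idx] = 0.0
    return x

def add_gaussian_noise(x, sigma=0.1):
    """Corrupt a modality with additive zero-mean Gaussian noise."""
    return x + rng.normal(loc=0.0, scale=sigma, size=x.shape)

# Hypothetical shapes: 7 gas-sensor channels and 32x32 thermal images.
sensor_data = rng.random((1000, 7))
thermal_images = rng.random((1000, 32, 32))

sensor_missing = drop_modality(sensor_data, missing_fraction=0.2)
thermal_missing = drop_modality(thermal_images, missing_fraction=0.9)
sensor_noisy = add_gaussian_noise(sensor_data, sigma=0.1)
```

A trained fusion model would then be evaluated on these perturbed inputs, and the accuracy drop relative to clean inputs taken as the robustness measure.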


Availability of data and material

The multimodal gas detection dataset presented in Narkhede et al. (2021) is used for this study; it is available upon request for non-commercial use. Readers may contact the corresponding authors of this paper.

Code availability

The code can be made available upon request, for non-commercial use, after publication of the paper.


Acknowledgments

This research uses a primary gas detection dataset created as part of a minor research project funded by Symbiosis International (Deemed University), Pune 412115, India.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

Conceptualization: Anil Rahate, Rahee Walambe; Methodology: Anil Rahate, Ketan Kotecha, Rahee Walambe; Software, Data Curation, Experimentation, Validation: Anil Rahate, Shruti Mandaokar, Pulkit Chandel; Writing – original draft & editing, Visualization, Investigation: Anil Rahate; Writing – review & editing: Rahee Walambe, Sheela Ramanna, Ketan Kotecha; Project Administration, Supervision, Approval: Ketan Kotecha, Rahee Walambe.

Corresponding authors

Correspondence to Rahee Walambe or Ketan Kotecha.

Ethics declarations

Conflict of interest

The authors have no financial or non-financial interests to declare that are relevant to the content of this article.

Ethical approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

The authors provide consent for publication in this journal.

Additional information

Communicated by Deepak Kumar Jain.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Rahate, A., Mandaokar, S., Chandel, P. et al. Employing multimodal co-learning to evaluate the robustness of sensor fusion for industry 5.0 tasks. Soft Comput 27, 4139–4155 (2023). https://doi.org/10.1007/s00500-022-06802-9

