Deep Learning for Multi-modal Sensor Data Fusion in Autonomous Vehicles

How to Cite

[1]
Dr. Daniel Vega, “Deep Learning for Multi-modal Sensor Data Fusion in Autonomous Vehicles”, Journal of AI in Healthcare and Medicine, vol. 3, no. 1, pp. 70–89, Jun. 2023, Accessed: Dec. 23, 2024. [Online]. Available: https://healthsciencepub.com/index.php/jaihm/article/view/55

Abstract

The design of a multi-modal sensor data fusion system for autonomous vehicles is determined by several key factors. A significant increase in traffic on highways and rural roads dictates the selection of efficient methodological solutions in road transport [1]. The variety of environmental conditions, which include not only atmospheric phenomena but also lighting conditions such as bright insolation and adverse weather, imposes significant constraints on sensory data processing. To ensure an adequate level of road safety, the sensor suite must function stably and efficiently regardless of the prevailing environmental conditions. A near-ideal environment in which an autonomous vehicle would perceive its surroundings would consist of homogeneous, monochromatic, non-reflective surfaces under a single lighting intensity, unaffected by time of day or weather. The research problem would be far easier to solve if the system only had to operate smoothly under conditions that do not deviate significantly from this ideal, hypothetical environment; real driving conditions, however, deviate from it substantially. Therefore, when designing a multi-modal sensor data fusion system for autonomous vehicles intended for dynamic and changing conditions, the influence of the following parameters must not be disregarded: frequent changes in lighting, e.g. sudden darkness when entering and leaving tunnels and other transition states, as well as weather phenomena such as snow, rain, fog, and hail [2].
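To make the fusion concept discussed above concrete, the sketch below shows one common pattern: mid-level fusion of camera and LiDAR features with a learned per-modality gate, which lets the network down-weight a sensor degraded by lighting or weather. This is a minimal illustration under assumed feature sizes and module names, not the architecture proposed in the paper.

```python
# Minimal sketch of gated camera-LiDAR feature fusion (illustrative only).
# All dimensions, module names, and the 10-class head are assumptions.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Stand-ins for real camera / LiDAR backbones that output pooled features.
        self.cam_encoder = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())
        self.lidar_encoder = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU())
        # Gate predicts a reliability weight per modality from both feature sets,
        # e.g. a low camera weight in fog or inside an unlit tunnel.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(feat_dim, 10)  # hypothetical 10 object classes

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor):
        c = self.cam_encoder(cam_feat)
        l = self.lidar_encoder(lidar_feat)
        w = self.gate(torch.cat([c, l], dim=-1))   # (batch, 2) modality weights
        fused = w[:, 0:1] * c + w[:, 1:2] * l      # reliability-weighted sum
        return self.head(fused)

if __name__ == "__main__":
    model = GatedFusion()
    cam = torch.randn(4, 512)      # pooled camera backbone features
    lidar = torch.randn(4, 128)    # pooled LiDAR point-cloud features
    print(model(cam, lidar).shape)  # torch.Size([4, 10])
```

Because the gate is learned from data, training on sequences that include tunnels, rain, or fog would, in principle, teach it when each modality is trustworthy; this is one design option among several (early fusion, late fusion, attention-based fusion) surveyed in the references.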


References

[1] S. Samaras, E. Diamantidou, D. Ataloglou, N. Sakellariou et al., "Deep Learning on Multi Sensor Data for Counter UAV Applications—A Systematic Review," 2019. ncbi.nlm.nih.gov

[2] D. Xu, H. Li, Q. Wang, Z. Song et al., "M2DA: Multi-Modal Fusion Transformer Incorporating Driver Attention for Autonomous Driving," 2024. [PDF]

[3] S. Tatineni, "Ethical Considerations in AI and Data Science: Bias, Fairness, and Accountability," International Journal of Information Technology and Management Information Systems (IJITMIS), vol. 10, no. 1, pp. 11–21, 2019.

[4] V. Vemoori, "Towards Secure and Trustworthy Autonomous Vehicles: Leveraging Distributed Ledger Technology for Secure Communication and Exploring Explainable Artificial Intelligence for Robust Decision-Making and Comprehensive Testing," Journal of Science & Technology, vol. 1, no. 1, pp. 130–137, Nov. 2020. https://thesciencebrigade.com/jst/article/view/224

[5] M. Shaik et al., "Granular Access Control for the Perpetually Expanding Internet of Things: A Deep Dive into Implementing Role-Based Access Control (RBAC) for Enhanced Device Security and Privacy," British Journal of Multidisciplinary and Advanced Studies, vol. 2, no. 2, pp. 136–160, 2018.

[6] V. Vemori, "Human-in-the-Loop Moral Decision-Making Frameworks for Situationally Aware Multi-Modal Autonomous Vehicle Networks: An Accessibility-Focused Approach," Journal of Computational Intelligence and Robotics, vol. 2, no. 1, pp. 54–87, 2022.

[7] T. L. Kim and T. H. Park, "Camera-LiDAR Fusion Method with Feature Switch Layer for Object Detection Networks," 2022. ncbi.nlm.nih.gov

[8] Q. Zhang, X. Hu, Z. Su, and Z. Song, "3D car-detection based on a Mobile Deep Sensor Fusion Model and real-scene applications," 2020. ncbi.nlm.nih.gov

[9] L. Chen, P. Wu, K. Chitta, B. Jaeger et al., "End-to-end Autonomous Driving: Challenges and Frontiers," 2023. [PDF]

[10] Q. V. Lai-Dang, J. Lee, B. Park, and D. Har, "Sensor Fusion by Spatial Encoding for Autonomous Driving," 2023. [PDF]

[11] D. Jong Yeong, G. Velasco-Hernandez, J. Barry, and J. Walsh, "Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review," 2021. ncbi.nlm.nih.gov

[12] Z. Wei, F. Zhang, S. Chang, Y. Liu et al., "MmWave Radar and Vision Fusion for Object Detection in Autonomous Driving: A Review," 2022. ncbi.nlm.nih.gov

[13] S. Pankavich, N. Neri, and D. Shutt, "Bistable Dynamics and Hopf Bifurcation in a Refined Model of Early Stage HIV Infection," 2019. [PDF]

[14] J. Elfring, R. Appeldoorn, S. van den Dries, and M. Kwakkernaat, "Effective World Modeling: Multisensor Data Fusion Methodology for Automated Driving," 2016. ncbi.nlm.nih.gov

[15] F. Manfio Barbosa and F. Santos Osório, "Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics," 2023. [PDF]

[16] F. Jibrin Abdu, Y. Zhang, M. Fu, Y. Li et al., "Application of Deep Learning on Millimeter-Wave Radar Signals: A Review," 2021. ncbi.nlm.nih.gov

[17] M. Rahimi, H. Liu, I. Durazo Cardenas, A. Starr et al., "A Review on Technologies for Localisation and Navigation in Autonomous Railway Maintenance Systems," 2022. ncbi.nlm.nih.gov

[18] L. Caltagirone, M. Bellone, L. Svensson, M. Wahde et al., "Lidar–Camera Semi-Supervised Learning for Semantic Segmentation," 2021. ncbi.nlm.nih.gov

[19] Z. Wang, X. Zeng, S. Leon Song, and Y. Hu, "Towards Efficient Architecture and Algorithms for Sensor Fusion," 2022. [PDF]

[20] Y. Qu, M. Yang, J. Zhang, W. Xie et al., "An Outline of Multi-Sensor Fusion Methods for Mobile Agents Indoor Navigation," 2021. ncbi.nlm.nih.gov

[21] B. Shahian Jahromi, T. Tulabandhula, and S. Cetin, "Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles," 2019. ncbi.nlm.nih.gov

[22] Y. Gong, J. Lu, J. Wu, and W. Liu, "Multi-modal Fusion Technology based on Vehicle Information: A Survey," 2022. [PDF]
