Transduction-Based Deep Belief Network for Learning-Based Multi-Camera Fusion and Robust Scene Reconstruction
Conventional scene reconstruction methods often struggle with occlusions, lighting variations, and noisy data. To address these limitations, this paper introduces a Transduction-based Deep Belief Network (T-DBN) within a learning-based multi-camera fusion framework for robust scene reconstruction. The T-DBN fuses information from multiple cameras through a transduction scheme, allowing it to adapt to varying imaging conditions, and it learns to decipher scene structures and characteristics by training on a diverse dataset. Experimental results demonstrate the superiority of the proposed T-DBN in achieving accurate and reliable scene reconstruction compared to existing techniques. This work advances multi-camera fusion and scene reconstruction by integrating deep learning with transduction strategies.
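The abstract does not specify the T-DBN architecture or training procedure. As a rough illustration of the general idea of fusing multi-camera features through a deep belief network, the following NumPy sketch pre-trains a small stack of Bernoulli RBMs, layer by layer with CD-1, on per-camera feature vectors fused by simple concatenation. The layer sizes, the early-fusion-by-concatenation step, the toy random features, and the training hyperparameters are all illustrative assumptions, not the authors' method, and the transduction scheme itself is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    """Bernoulli restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # Contrastive-divergence parameter updates.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def fuse_views(view_features):
    """Early fusion (assumed here): concatenate per-camera features."""
    return np.concatenate(view_features, axis=1)

# Toy data: 3 cameras, each providing a 64-D feature per sample.
n_samples, n_cams, feat_dim = 256, 3, 64
views = [rng.random((n_samples, feat_dim)) for _ in range(n_cams)]
fused = fuse_views(views)                 # shape (256, 192)

# Greedy layer-wise DBN pre-training on the fused representation.
layer_sizes = [fused.shape[1], 128, 64]   # illustrative choice
dbn, x = [], fused
for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
    rbm = RBM(n_vis, n_hid)
    for epoch in range(5):
        for i in range(0, n_samples, 32):
            rbm.cd1_step(x[i:i + 32])
    x = rbm.hidden_probs(x)               # activations feed the next layer
    dbn.append(rbm)

print("top-level representation shape:", x.shape)  # (256, 64)
```

In a full pipeline, the top-level representation would feed a reconstruction head, and a transductive scheme would additionally exploit the unlabeled test-time views during adaptation; both are beyond this sketch.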
Keywords
Transduction, Deep Belief Networks, Multi-Camera Fusion, Scene Reconstruction, Robustness