Abstract
Self-driving vehicles are envisioned as automated, safety-focused vehicles that facilitate smooth movement on roads. This research proposes a novel, robust, and intelligent navigation framework for such vehicles through an integrated fusion of advanced technologies, such as predictive analytics combined with remote sensing and detection, for accurate obstacle/object detection. TaskTrek, ViewVerse, and RuleRise form the core of the model governing vehicle-environment interaction. TaskTrek handles kinematic trajectory synthesis and space-time traffic modeling, ViewVerse provides LiDAR-based volumetric perception and radar-assisted navigational intelligence, and RuleRise manages topological localization, vehicle actuation, and autonomous decision-making through multimodal sensory fusion. The model applies an iterative Multi-FacBiNet method, which uses a cognitive Fully Convolutional Neural Network (FCNN) to detect and classify obstacles during vehicle movement on the road. When simulated during vehicle movement, the model produced encouraging outcomes. The fusion of predictive intelligence, radar, and sensing technologies achieved a proficiency of 95.3%. Minimum obstacle detection, processing, and response delays of 0.116 seconds, 0.105 seconds, and 0.36 seconds, respectively, were recorded. The computed mean obstacle detection accuracies for the right, left, front, and back camera angles were 88.3%, 83.8%, 91.4%, and 89.9%, respectively. Further, a comprehensive analysis of the model's performance in different on-road scenarios was conducted, considering metrics such as traffic load, road type, and region density; the model achieved high obstacle detection accuracy across all parameters. The results of this study not only aid in accelerating the development of self-driving vehicles with precise navigation but also contribute to environmentally friendly mobility and motion-tracking solutions.