Depth-Aware Rover: A Study of Edge AI and Monocular Vision for Real-World Implementation
cs.CV updates on arXiv.org
Lomash Relia, Jai G Singla, Amitabh, Nitant Dube
arXiv:2604.22331v1 Announce Type: new Abstract: This study analyses simulated and real-world implementations of depth-aware rover navigation, highlighting the transition from stereo vision to monocular depth estimation using edge AI. A Unity-based lunar terrain simulator with stereo cameras and OpenCV's StereoSGBM algorithm was used to generate disparity maps. A physical rover built on a Raspberry Pi 4 employed UniDepthV2 for monocular metric depth estimation and YOLO12n for real-time object detection. While stereo vision yielded higher accuracy in simulation, the monocular approach proved more robust and cost-effective in real-world deployment, achieving 0.1 FPS for depth estimation and 10 FPS for object detection.
