Abstract: In unknown environments without global obstacle location information, real-time obstacle avoidance is a challenging task for unmanned platforms. To address this issue, this paper proposes a method that fuses deep neural networks (DNNs) with an improved artificial potential field (APF) algorithm. First, YOLOv5s and a lightweight depth estimation model are used to construct an obstacle perception module that detects the location and depth of obstacles. The detection bounding box and an equivalent depth are then used to describe the three-dimensional information of surrounding obstacles. Next, the unmanned platform is projected onto the image plane to define a core area; based on the positional relationship between each obstacle's equivalent-depth grid and the core area, a virtual potential field in the image plane yields the force acting on the core area, from which the required yaw angle and linear velocity are computed. Finally, the control system receives these signals and guides the unmanned platform to steer or brake, ensuring that the depth within the core area remains greater than the safety distance. Obstacle avoidance experiments based on monocular visible-light and infrared imagery demonstrate that the perception module accurately detects the location and depth of common obstacles, and that the core area intuitively reflects the positional relationship between the unmanned platform and surrounding obstacles. Compared with traditional methods, the proposed method relies on a monocular sensor alone to avoid obstacles effectively, achieving lower cost and more flexible deployment, and providing a new approach for unmanned platforms to avoid obstacles in unknown environments during both daytime and nighttime.
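To make the image-plane potential-field idea concrete, the following Python sketch computes a yaw command and a linear speed from detected obstacle bounding boxes with equivalent depths. The core-area geometry, the gain `k_rep`, the speed law, and the function name are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import math

def apf_steering_command(obstacles, image_width=640,
                         safety_depth=2.0, max_speed=1.0, k_rep=1.0):
    """Illustrative image-plane APF sketch (assumed model, not the paper's).

    obstacles: list of (x1, y1, x2, y2, depth) tuples, with pixel-space
    bounding boxes and an equivalent depth in metres for each obstacle.
    Returns (yaw, speed): a signed yaw angle in radians and a linear speed.
    """
    core_x = image_width / 2.0   # core area assumed centred horizontally
    fx_total = 0.0               # net horizontal repulsive force
    min_depth = float("inf")
    for (x1, y1, x2, y2, depth) in obstacles:
        cx = (x1 + x2) / 2.0     # obstacle centre column in the image
        min_depth = min(min_depth, depth)
        if depth >= safety_depth:
            continue             # distant obstacles exert no force
        # Repulsive magnitude grows as equivalent depth shrinks;
        # the sign pushes the core area away from the obstacle column.
        magnitude = k_rep * (1.0 / depth - 1.0 / safety_depth)
        direction = -1.0 if cx >= core_x else 1.0
        fx_total += direction * magnitude
    # Map horizontal force to a yaw angle; brake fully when any obstacle
    # is at or inside the safety distance, otherwise scale speed by margin.
    yaw = math.atan2(fx_total, 1.0)
    if min_depth <= safety_depth:
        speed = 0.0
    else:
        speed = min(max_speed, max_speed * (min_depth - safety_depth))
    return yaw, speed
```

For example, an obstacle centred right of the image midline and closer than the safety distance yields a negative (leftward) yaw and zero speed, while an empty scene yields zero yaw at full speed.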