COLLABORATIVE ALTITUDE-ADAPTIVE REINFORCEMENT LEARNING FOR ACTIVE SEARCH WITH UNMANNED AERIAL VEHICLE SWARMS

Blog Article

Active search with unmanned aerial vehicle (UAV) swarms in cluttered and unpredictable environments poses a critical challenge in search and rescue missions, where rapid localization of survivors is of paramount importance, as the majority of urban disaster victims are surface casualties. However, the altitude-dependent sensor performance of UAVs introduces a crucial trade-off between coverage and accuracy, significantly influencing the coordination and decision-making of UAV swarms. The optimal strategy has to strike a balance between exploring larger areas at higher altitudes and exploiting regions of high target probability at lower altitudes. To address these challenges, collaborative altitude-adaptive reinforcement learning (CARL) was proposed, which incorporates an altitude-aware sensor model, a confidence-informed assessment module, and an altitude-adaptive planner based on the proximal policy optimization (PPO) algorithm.
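The coverage-accuracy trade-off described above can be illustrated with a toy altitude-aware sensor model. This is a minimal sketch under assumed parameters (field of view, exponential accuracy decay), not the paper's actual sensor model:

```python
import math

def footprint_radius(altitude_m, half_fov_deg=30.0):
    """Ground radius seen by a downward-facing sensor: grows linearly with altitude."""
    return altitude_m * math.tan(math.radians(half_fov_deg))

def detection_prob(altitude_m, p_max=0.95, decay_per_m=0.02):
    """Per-cell detection probability: assumed to decay exponentially with altitude."""
    return p_max * math.exp(-decay_per_m * altitude_m)

# Higher altitude -> wider coverage but lower per-cell accuracy.
for alt in (20.0, 50.0, 100.0):
    print(f"alt={alt:5.1f} m  radius={footprint_radius(alt):6.1f} m  "
          f"p_detect={detection_prob(alt):.2f}")
```

Flying higher widens the footprint while degrading per-cell confidence, which is exactly the tension the altitude-adaptive planner has to resolve.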

CARL enables UAVs to dynamically adjust their sensing location and make informed decisions. Furthermore, a tailored reward-shaping strategy was introduced to maximize search efficiency in extensive environments. Comprehensive simulations under diverse conditions demonstrate that CARL surpasses baseline methods, achieving a 12% improvement in full recovery rate and showcasing its potential for enhancing the effectiveness of UAV swarms in active search missions.
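The reward-shaping idea can be sketched as a weighted sum of an exploration term, an exploitation term, and a movement cost. The terms and weights below are illustrative assumptions, not the reward actually used by CARL:

```python
def shaped_reward(new_cells, detection_confidence, altitude_change_m,
                  w_explore=1.0, w_exploit=5.0, w_move=0.1):
    """Toy shaped reward (hypothetical weights): reward newly covered cells
    (exploration), reward confident detections more heavily (exploitation),
    and charge a small cost for altitude changes (energy/time)."""
    return (w_explore * new_cells
            + w_exploit * detection_confidence
            - w_move * abs(altitude_change_m))

# Covering fresh area at a steady altitude vs. descending to confirm a detection.
r_explore = shaped_reward(new_cells=10, detection_confidence=0.0, altitude_change_m=0.0)
r_exploit = shaped_reward(new_cells=2, detection_confidence=0.9, altitude_change_m=-10.0)
```

Tuning the relative weights is what steers a PPO policy between wide high-altitude sweeps and low-altitude confirmation passes.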
