Abstract: [Background] With the development of the low-altitude economy, low-altitude airspace has become increasingly pivotal to the transformation of urban transportation. Unmanned aerial vehicles (UAVs) and other novel modes of transport are being used ever more widely in urban spaces; however, they face challenges such as path planning, dynamic obstacle avoidance, and multi-UAV coordination. [Objective] To optimize UAV path-planning algorithms, enhance the path-planning and obstacle-avoidance capabilities of UAVs in complex low-altitude environments, and ensure the efficiency and safety of multi-UAV collaborative operations. [Methods] An NP-MTDDQN algorithm based on reinforcement learning is proposed, incorporating an N-step update strategy and an improved prioritized experience replay mechanism. A three-dimensional low-altitude environment featuring diverse buildings and dynamic obstacles is constructed using wind-flow data from Shizhong District, Jinan City, Shandong Province, and simulation experiments are conducted to validate the effectiveness of the algorithm. [Results] Three sets of controlled experiments indicate that the NP-MTDDQN algorithm can identify optimal paths, distinguish building priorities, and identify and avoid dynamic obstacles in complex environments with varying building densities, different distributions of priority buildings, and different dynamic obstacles, thereby enabling multi-UAV collaboration. Compared with the conventional DQN and DDQN algorithms, the NP-MTDDQN algorithm exhibits improved efficiency and accuracy in path planning. [Conclusions] The NP-MTDDQN algorithm provides a novel solution to the multi-UAV path-planning problem in low-altitude intelligent transportation networks, potentially enhancing the management efficiency and response speed of urban air traffic.
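The [Methods] description combines two standard reinforcement-learning mechanisms on a DDQN base: an N-step update, which bootstraps the Q-target from a discounted sum of several rewards rather than a single transition, and prioritized experience replay, which samples transitions in proportion to their TD error. The paper's actual NP-MTDDQN implementation is not reproduced here; the sketch below only illustrates these two generic mechanisms, with all function names, array shapes, and the `gamma`/`alpha` parameters chosen for illustration.

```python
import numpy as np

def n_step_double_dqn_target(rewards, q_online_next, q_target_next, gamma=0.99):
    """N-step double-DQN target for one trajectory slice.

    rewards       : list of n immediate rewards r_t ... r_{t+n-1}
    q_online_next : online-network Q-values at state s_{t+n} (selects the action)
    q_target_next : target-network Q-values at state s_{t+n} (evaluates that action)
    """
    n = len(rewards)
    # Discounted sum of the n intermediate rewards.
    g = sum((gamma ** k) * r for k, r in enumerate(rewards))
    # Double-DQN decoupling: online net picks the action, target net scores it.
    a_star = int(np.argmax(q_online_next))
    return g + (gamma ** n) * q_target_next[a_star]

def per_probabilities(td_errors, alpha=0.6, eps=1e-6):
    """Prioritized-replay sampling probabilities from TD errors.

    alpha controls how strongly priority shapes sampling (alpha=0 is uniform);
    eps keeps zero-error transitions sampleable.
    """
    p = (np.abs(td_errors) + eps) ** alpha
    return p / p.sum()

# Toy example: 3-step return with hand-picked Q-values.
target = n_step_double_dqn_target(
    rewards=[1.0, 0.0, 2.0],
    q_online_next=np.array([0.5, 1.5]),   # online net prefers action 1
    q_target_next=np.array([2.0, 1.0]),   # target net evaluates action 1 as 1.0
    gamma=0.9,
)  # target ≈ 1.0 + 0.81*2.0 + 0.9**3 * 1.0 ≈ 3.349

probs = per_probabilities(np.array([0.5, -2.0, 0.1]))  # largest |TD error| sampled most
```

In practice the sampling bias introduced by `per_probabilities` is usually corrected with importance-sampling weights during the loss computation; that correction is omitted here for brevity.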
Basic information:
DOI: 10.19961/j.cnki.1672-4747.2024.12.007
CLC number: V279; V355
Citation:
[1] DING Jie, WANG Di. Urban low-altitude UAV path planning based on reinforcement learning[J]. Journal of Transportation Engineering and Information, 2025, 23(02): 189-206. DOI: 10.19961/j.cnki.1672-4747.2024.12.007.
Funding:
Shandong Provincial Natural Science Foundation (ZR2024QG174); Key Research Project of the Shandong Urban Renewal Society (SURS240603)