In mobile edge computing, unmanned aerial vehicles (UAVs) equipped with computing servers have emerged as a promising solution due to their exceptional attributes of high mobility, flexibility, rapid deployment, and terrain agnosticism. These attributes enable UAVs to reach designated areas swiftly, thereby addressing temporary computing demands in scenarios where ground-based servers are overloaded or unavailable. However, the inherent broadcast nature of the line-of-sight transmission methods employed by UAVs renders them vulnerable to eavesdropping attacks. Meanwhile, real UAV operation areas often contain obstacles that affect flight safety, and collisions between UAVs may also occur. To solve these problems, we propose an innovative A*SAC deep reinforcement learning algorithm, which integrates the benefits of the Soft Actor-Critic (SAC) and A* (A-Star) algorithms. The algorithm jointly optimizes the hovering position and task offloading proportion of the UAV through a task offloading function. Furthermore, it incorporates a path-planning function that identifies the most energy-efficient route for the UAV to reach its optimal hovering point. This approach not only reduces the flight energy consumption of the UAV but also lowers overall energy consumption, thereby improving system-level energy efficiency. Extensive simulation results demonstrate that our approach achieves superior system benefits compared to other algorithms: on average, a 13.18% improvement across different computing task sizes, a 25.61% improvement across different levels of electromagnetic-interference power directed at the UAVs by the auxiliary UAVs, and a 35.78% improvement across different maximum computing frequencies of the auxiliary UAVs. As for path planning, the simulation results indicate that our algorithm determines an optimal collision-avoidance path for each auxiliary UAV, enabling them to safely reach their designated endpoints in diverse obstacle-ridden environments.
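The collision-avoidance path planning above is built on A* search. As a self-contained illustration only (not the paper's implementation — the occupancy grid, obstacle layout, 4-connected movement, and unit step costs are all assumptions made for this sketch), a minimal grid A* with an admissible Manhattan heuristic looks like:

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2-D occupancy grid (1 = obstacle).

    Returns the shortest collision-free list of cells from start to
    goal, or None if no path exists. With 4-connected moves and unit
    step costs, the Manhattan heuristic never overestimates, so the
    first time the goal is popped the path is optimal.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]  # (f = g + h, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better f
            continue
        came_from[cur] = parent
        if cur == goal:               # reconstruct path by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_cost.get(nxt, float("inf"))):
                g_cost[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None

# Example 4x4 area with obstacles (1) blocking the direct route.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
path = a_star(grid, (0, 0), (3, 3))
```

In the paper's setting the step cost would instead reflect the UAV's flight energy, so the returned route minimizes energy rather than hop count; the search structure is unchanged.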
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce the latency and energy consumption of edge computing, deep learning is used to learn task offloading strategies by interacting with the entities. In practical application scenarios, the users of edge computing change dynamically, and existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing that leverages the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDPs) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
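The joint delay/energy optimization rests on NSGA-II's notion of Pareto dominance: one offloading plan dominates another if it is no worse in both objectives and strictly better in at least one. As a hedged sketch (the candidate (delay, energy) values below are invented for illustration; NSGA-II itself adds ranking into successive fronts, crowding distance, and genetic operators on top of this test), the first non-dominated front can be computed as:

```python
def pareto_front(points):
    """Return the non-dominated (delay, energy) pairs: the first front
    that NSGA-II's non-dominated sorting produces when both objectives
    are minimized."""
    def dominates(a, b):
        # a dominates b: a is <= b everywhere and < b somewhere.
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidate offloading plans as (delay, energy) costs.
candidates = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
front = pareto_front(candidates)
```

Here (8, 7) is eliminated because (8, 6) matches its delay at lower energy, while (10, 5), (12, 4), and (8, 6) survive as mutually incomparable trade-offs between delay and energy.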
Funding: Supported by the Central University Basic Research Business Fee Fund Project (J2023-027), the Open Fund of the Key Laboratory of Flight Techniques and Flight Safety, CAAC (No. FZ2022KF06), and the China Postdoctoral Science Foundation (No. 2022M722248).
Funding: Supported by the Fundamental Research Funds for the Central Universities (J2023-024, J2023-027).