Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy built on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks according to their priority. Subsequently, we apply a Dueling Double Deep Q-Network (Dueling-DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy and reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
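The offloading agent above relies on prioritized experience replay inside the Dueling-DDQN framework. As a rough illustration of that mechanism only (not the authors' implementation; the capacity, alpha, and beta values below are arbitrary choices), a proportional prioritized replay buffer can be sketched in a few lines of numpy:

```python
import numpy as np

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha          # how strongly priorities shape sampling
        self.data = []              # stored transitions
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0

    def add(self, transition):
        # New transitions get the current maximum priority so they are seen at least once.
        max_prio = self.priorities.max() if self.data else 1.0
        if len(self.data) < self.capacity:
            self.data.append(transition)
        else:
            self.data[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        prios = self.priorities[:len(self.data)] ** self.alpha
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by non-uniform sampling.
        weights = (len(self.data) * probs[idx]) ** (-beta)
        weights /= weights.max()
        return [self.data[i] for i in idx], idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Larger TD errors -> higher replay priority.
        self.priorities[idx] = np.abs(td_errors) + eps
```

Transitions with larger temporal-difference error are replayed more often, which is the property the improved replay mechanism described in the abstract builds on.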
Optoelectronic memristors are attracting growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents when 4-bit input electrical/optical signals are applied. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, and an accuracy of 98.7% is achieved. The optoelectronic memristor thus provides a promising route toward multi-mode RC systems.
To address the increasing demand for massive data storage and processing, brain-inspired neuromorphic computing systems based on artificial synaptic devices have been actively developed in recent years. Among the various materials investigated for the fabrication of synaptic devices, silicon carbide (SiC) has emerged as a preferred choice due to its high electron mobility, superior thermal conductivity, and excellent thermal stability, which give it promising potential for neuromorphic applications in harsh environments. In this review, recent progress in SiC-based synaptic devices is summarized. Firstly, an in-depth discussion is conducted regarding the categories, working mechanisms, and structural designs of these devices. Subsequently, several application scenarios for SiC-based synaptic devices are presented. Finally, a few perspectives and directions for their future development are outlined.
Recurrent neural networks (RNNs) have proven to be indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expenses and slow convergence, which impede their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review offers a comprehensive explanation of RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge material science developments, providing readers with a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
With the advancement of artificial intelligence, optical in-sensing reservoir computing based on emerging semiconductor devices is highly desirable for real-time analog signal processing. Here, we report a flexible optomemristor based on a C_(27)H_(30)O_(15)/FeOx heterostructure that is highly sensitive to light stimuli and exhibits artificial optical synaptic features such as short- and long-term plasticity (STP and LTP). These properties enable the optomemristor to implement complex analog signal processing through a real physical-dynamics-based in-sensing reservoir computing algorithm, yielding an accuracy of 94.88% for speech recognition. Charge trapping and detrapping mediated by the optically active layer of C_(27)H_(30)O_(15), which is extracted from the lotus flower, is responsible for the positive photoconductance memory in the prepared optomemristor. This work provides a feasible organic-inorganic heterostructure as well as an optical in-sensing vision-computing approach for advanced optical computing systems in future complex signal processing.
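The in-sensing RC pipeline described here ultimately trains only a linear readout on top of the device's physical dynamics. Below is a minimal sketch of that readout stage, assuming the measured device response currents have already been collected into a feature matrix; the array sizes and ridge parameter are illustrative, not taken from the paper.

```python
import numpy as np

def train_readout(reservoir_states, labels, ridge=1e-3):
    """Train the linear readout of a physical reservoir with ridge regression.

    reservoir_states: (n_samples, n_features) device response features
                      (e.g., sampled response currents of the optomemristor array).
    labels:           (n_samples,) integer class labels.
    """
    n_classes = labels.max() + 1
    Y = np.eye(n_classes)[labels]                                  # one-hot targets
    X = np.hstack([reservoir_states, np.ones((len(labels), 1))])   # bias column
    # Closed-form ridge regression: W = (X^T X + ridge*I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)
    return W

def predict(reservoir_states, W):
    X = np.hstack([reservoir_states, np.ones((len(reservoir_states), 1))])
    return np.argmax(X @ W, axis=1)

# Toy usage with random "device" features standing in for measured currents.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 16))
labels = rng.integers(0, 10, size=200)
W = train_readout(states, labels)
print(predict(states[:5], W))
```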
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by vehicle mobility and the unbalanced computing load of edge nodes pose challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs) and Roadside Units (RSUs), together with a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is used to optimize real-time transmission power and subchannel selection. Simulation results show that the proposed PG-MRL algorithm achieves significant improvements over baseline algorithms in terms of system delay.
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form an LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed integer nonlinear program (MINLP). We propose a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and use a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation. In addition, an expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility values at a considerable time cost compared with other algorithms.
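The server-side resource allocation here is solved with the Lagrange multiplier method. As a hedged, generic example of how such a convex subproblem works out, using an illustrative objective (total computation delay proportional to required cycles over allocated frequency) rather than the paper's exact formulation, the KKT conditions give a closed-form square-root allocation:

```python
import numpy as np

def allocate_cpu(cycles, F_total):
    """Minimize sum_i cycles_i / f_i  subject to  sum_i f_i <= F_total, f_i > 0.

    Stationarity of the Lagrangian gives cycles_i / f_i**2 = lam, i.e.
    f_i = sqrt(cycles_i / lam); the budget constraint then fixes lam in closed form,
    so each task receives capacity proportional to the square root of its demand.
    """
    cycles = np.asarray(cycles, dtype=float)
    weights = np.sqrt(cycles)
    return F_total * weights / weights.sum()

demands = [2e9, 8e9, 4e9]              # CPU cycles required by three offloaded tasks
alloc = allocate_cpu(demands, F_total=10e9)
print(alloc, alloc.sum())              # allocations sum to the server budget
```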
In this study, we investigate the efficacy of a hybrid parallel algorithm aimed at accelerating the evaluation of two-electron repulsion integrals (ERI) and Fock matrix generation on the Hygon C86/DCU (deep computing unit) heterogeneous computing platform. Multiple hybrid parallel schemes are assessed using a range of model systems, including those with up to 1200 atoms and 10000 basis functions. Our results reveal that, during Hartree-Fock (HF) calculations, a single DCU exhibits a 33.6-fold speedup over 32 C86 CPU cores. Compared with the efficiency of the Wuhan Electronic Structure Package on Intel X86 and NVIDIA A100 computing platforms, the Hygon platform exhibits good cost-effectiveness, showing great potential in quantum chemistry calculations and other high-performance scientific computations.
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
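The joint delay-energy optimization above relies on NSGA-II, whose core ingredient is non-dominated sorting of candidate offloading plans. A minimal sketch of extracting the first Pareto front for (delay, energy) pairs, with toy numbers standing in for simulated costs:

```python
import numpy as np

def non_dominated_sort(objs):
    """Return the indices of the Pareto front for a set of (delay, energy) points.

    objs: (n, m) array where every objective is minimized.
    """
    n = len(objs)
    dominated = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            # j dominates i if it is no worse in all objectives and better in at least one.
            if np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i]):
                dominated[i] = True
                break
    return np.where(~dominated)[0]

points = np.array([[3.0, 5.0], [2.0, 7.0], [4.0, 4.0], [5.0, 6.0]])  # (delay, energy)
print(non_dominated_sort(points))   # -> indices of the non-dominated trade-off solutions
```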
With the development of vehicle networks and the construction of roadside units, Vehicular Ad Hoc Networks (VANETs) are increasingly promoting cooperative computing patterns among vehicles. Vehicular edge computing (VEC) offers an effective solution to mitigate resource constraints by enabling task offloading to edge cloud infrastructure, thereby reducing the computational burden on connected vehicles. However, this sharing-based and distributed computing paradigm necessitates ensuring the credibility and reliability of the various computation nodes. Existing vehicular edge computing platforms have not adequately considered the misbehavior of vehicles. We propose a practical task offloading algorithm based on reputation assessment to address the task offloading problem in vehicular edge computing under an unreliable environment, integrating deep reinforcement learning and reputation management. Simulation experiments conducted using Veins demonstrate the feasibility and effectiveness of the proposed method.
As Internet of Vehicles (IoV) technology continues to advance, edge computing has become an important tool for assisting vehicles in handling complex tasks. However, the process of offloading tasks to edge servers may expose vehicles to malicious external attacks, resulting in information loss or even tampering, thereby creating serious security vulnerabilities. Blockchain technology can maintain a shared ledger among servers. In the Raft consensus mechanism, as long as more than half of the nodes remain operational, the system will not collapse, effectively maintaining the system's robustness and security. To protect vehicle information, we propose a security framework that integrates the Raft consensus mechanism from blockchain technology with edge computing. To address the additional latency introduced by blockchain, we derive a theoretical formula for system delay and propose a convex optimization solution that minimizes the system latency, ensuring that the system meets the requirements for low latency and high reliability. Simulation results demonstrate that the optimized data extraction rate significantly reduces system delay, with relatively stable variations in latency. Moreover, the proposed optimization solution can provide valuable insights for enhancing security and efficiency in future network environments, such as 5G and next-generation smart city systems.
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Without accurate prediction, hardware resource allocation in cloud computing still results in host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms, which permit dynamic cloud scalability while maintaining superior service quality. For host load prediction, we therefore present a hybrid convolutional neural network-long short-term memory (CNN-LSTM) model in this work. First, the input to the hybrid model is subjected to a vector autoregression technique, which filters the multivariate data prior to analysis to eliminate linear interdependencies. The remaining data are then processed and fed into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step employs the long short-term memory layer, which is suitable for representing the temporal information of irregular trends in time-series components. A key element of the entire process is the use of the most appropriate activation function for this type of model: a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degree of unpredictability in data centers. Accordingly, two real load traces were used to assess performance in this study, one of which comes from a typical distributed system. In comparison with CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our approach offers state-of-the-art performance, with higher accuracy on both datasets.
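To make the VAR → CNN → LSTM pipeline concrete, the sketch below shows one plausible shape of the hybrid network in PyTorch. The layer sizes are assumptions, and ReLU stands in for the scaled polynomial constant unit activation, whose exact form is not given here.

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """Hybrid CNN + LSTM host-load forecaster (illustrative layer sizes, not the paper's)."""
    def __init__(self, n_features=2, hidden=64, horizon=1):
        super().__init__()
        # 1-D convolution extracts local utilization patterns across the time window.
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # LSTM models the longer-range temporal trend of the convolved features.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                      # x: (batch, time, n_features)
        z = self.conv(x.transpose(1, 2))       # -> (batch, 32, time)
        out, _ = self.lstm(z.transpose(1, 2))  # -> (batch, time, hidden)
        return self.head(out[:, -1])           # predict the next host-load value(s)

model = CNNLSTMForecaster()
window = torch.randn(8, 24, 2)                 # 8 windows of 24 steps x (CPU, memory) load
print(model(window).shape)                     # torch.Size([8, 1])
```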
Blockchain-based audiovisual transmission systems have been built to create a distributed and flexible smart transport system (STS), letting customers, video creators, and service providers connect with each other directly. Blockchain-based STS devices need substantial computing power to transcode video feeds of different quality and formats into the various versions and structures that different users require. However, existing blockchains cannot support live streaming because their processing latency is too high and their computing power is insufficient, and the large volume of video data being transmitted and analyzed places excessive stress on vehicular networks. This paper proposes a video surveillance method to improve the data performance of the blockchain system and lower the latency across the multi-access edge computing (MEC) system, within a framework integrating MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS). The joint optimization problem is modeled as a Markov decision process and solved with an asynchronous advantage actor-critic deep reinforcement learning method. Simulation results show that the proposed method converges quickly and improves the joint performance of MEC and blockchain for video surveillance in self-driving cars compared with other methods.
Model-free, data-driven prediction of chaotic motions is a long-standing challenge in nonlinear science. Stimulated by the recent progress in machine learning, considerable attention has been given to the inference of chaos by the technique of reservoir computing (RC). In particular, by incorporating a parameter-control channel into the standard RC, it is demonstrated that the machine is able to not only replicate the dynamics of the training states, but also infer new dynamics not included in the training set. The new machine-learning scheme, termed parameter-aware RC, opens up new avenues for data-based analysis of chaotic systems, and holds promise for predicting and controlling many real-world complex systems. Here, using typical chaotic systems as examples, we give a comprehensive introduction to this powerful machine-learning technique, including the algorithm, the implementation, the performance, and the open questions calling for further studies.
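A parameter-aware reservoir differs from a standard echo state network mainly in that the control parameter of the target system is appended to the input at every step. Below is a minimal numpy sketch under that reading; the reservoir size, spectral radius, and ridge strength are arbitrary choices, not values from the paper.

```python
import numpy as np

class ParameterAwareESN:
    """Echo-state reservoir with an extra parameter-control input channel (sketch)."""
    def __init__(self, n_in, n_res=200, rho=0.9, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(n_res, n_res))
        self.W = W * (rho / np.max(np.abs(np.linalg.eigvals(W))))   # set spectral radius
        # +1 input dimension carries the bifurcation parameter of the target system.
        self.Win = rng.uniform(-1, 1, size=(n_res, n_in + 1))
        self.Wout = None

    def _run(self, u, p):
        r = np.zeros(self.W.shape[0])
        states = []
        for t in range(len(u)):
            inp = np.append(u[t], p[t])          # state input + control parameter
            r = np.tanh(self.W @ r + self.Win @ inp)
            states.append(r.copy())
        return np.array(states)

    def fit(self, u, p, target, ridge=1e-6):
        R = self._run(u, p)
        self.Wout = np.linalg.solve(R.T @ R + ridge * np.eye(R.shape[1]), R.T @ target)

    def predict(self, u, p):
        return self._run(u, p) @ self.Wout

# Toy usage: learn a one-step-ahead prediction of a driving signal at a fixed parameter value.
T = 500
u = np.sin(np.linspace(0, 20, T)).reshape(-1, 1)
p = np.full(T, 0.5)                       # constant control parameter for this run
esn = ParameterAwareESN(n_in=1)
esn.fit(u[:-1], p[:-1], u[1:, 0])
print(esn.predict(u[:10], p[:10]).shape)  # (10,)
```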
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL work exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against swiftly evolving malware threats and are constrained by the availability of resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments conducted on the Edge-IIoTset dataset show that GENOME achieves high classification performance with models such as Random Forest and Logistic Regression while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirm GENOME's scalability and adaptability across diverse datasets and algorithms. The study emphasizes GENOME's potential to address critical challenges such as the rapid mutation of malware, real-time processing demands, and resource limitations, offering an efficient and scalable security solution for edge computing environments.
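GENOME's standard and compressed encodings are not reproduced here; the following is a generic illustration of the basic idea of mapping malware bytes onto nucleotide sequences at two bits per base, which is lossless and reversible. The sample artifact bytes are hypothetical.

```python
BASES = "ACGT"

def dna_encode(payload: bytes) -> str:
    """Map every byte to four nucleotides (2 bits per base); a generic encoding sketch."""
    out = []
    for b in payload:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def dna_decode(seq: str) -> bytes:
    vals = [BASES.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

artifact = b"\x7fELF\x02\x01"            # first bytes of a hypothetical malware artifact
encoded = dna_encode(artifact)
assert dna_decode(encoded) == artifact   # the mapping is reversible
print(encoded)                           # one 4-base word per input byte
```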
Cognitive computing has emerged as a transformative force in artificial intelligence (AI) education, bridging theoretical advancements with practical applications. This article explores the role of cognitive models in enhancing learning systems, from intelligent tutoring and personalized recommendations to virtual laboratories and special education support. It examines key technologies, such as knowledge graphs, natural language processing, and multimodal data analysis, that enable adaptive, human-like responsiveness. The study also addresses technical challenges like interpretability and data privacy, alongside ethical concerns including equity and bias. Looking forward, it discusses how cognitive computing could reshape future learning modalities and how it aligns with trends like artificial general intelligence and interdisciplinary learning science. By tracing the path from theory to practice, this work underscores the potential of cognitive computing to create an inclusive, dynamic educational landscape, while highlighting the need for ethical and technical rigor to ensure its responsible evolution.
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with apparent needs for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing the utilization of cloud computing resources in the domain of task scheduling within a cloud computing environment.
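The quasi-opposition-based learning step samples, for each candidate, a point between the search-space centre and its opposite, then keeps the fitter individuals from the merged population. A generic sketch of that initialization on a toy sphere objective is shown below; AWCO's exact update rules are not reproduced.

```python
import numpy as np

def quasi_opposite(pop, lower, upper, rng):
    """Sample each candidate's quasi-opposite point.

    The quasi-opposite of x lies uniformly between the search-space centre
    c = (lower + upper) / 2 and the opposite point o = lower + upper - x.
    """
    centre = (lower + upper) / 2.0
    opposite = lower + upper - pop
    lo = np.minimum(centre, opposite)
    hi = np.maximum(centre, opposite)
    return rng.uniform(lo, hi)

def qobl_init(n, dim, lower, upper, fitness, seed=0):
    """Initialize a population, generate quasi-opposites, keep the n fittest of both sets."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(n, dim))
    qpop = quasi_opposite(pop, lower, upper, rng)
    merged = np.vstack([pop, qpop])
    scores = np.apply_along_axis(fitness, 1, merged)
    return merged[np.argsort(scores)[:n]]          # minimization: keep the best n

sphere = lambda x: float(np.sum(x ** 2))
best = qobl_init(n=10, dim=5, lower=-10.0, upper=10.0, fitness=sphere)
print(best.shape)                                   # (10, 5)
```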
Container-based virtualization technology has been more widely used in edge computing environments recently due to its advantages of lighter resource occupation, faster startup capability, and better resource utilization efficiency. To meet the diverse needs of tasks, it is usually necessary to instantiate multiple network functions in the form of containers and interconnect the generated containers to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks brings great challenges to the optimal placement of CCs. This paper regards the charges for the resources occupied in providing services as revenue and the service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic-based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, through the introduction of GCN, the features of the association relationships between the containers in a CC can be effectively extracted to improve the quality of placement. The experimental results show that, under different scales of service nodes and task requests, the proposed method obtains improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
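The GCN component extracts features from the container-association graph of each CC. As a minimal sketch, a single graph-convolution layer with symmetric normalization is shown on a toy three-container cluster; the feature dimensions are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 · H · W).

    A: (n, n) container-to-container adjacency of the cluster, H: (n, f) node features
    (e.g., CPU/memory demands), W: (f, f') learnable weights.
    """
    A_hat = A + np.eye(len(A))                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # 3 containers, 2 links
H = np.array([[2.0, 1.0], [1.0, 0.5], [0.5, 0.5]])        # toy resource-demand features
W = np.random.default_rng(0).normal(size=(2, 4))
print(gcn_layer(A, H, W).shape)                            # (3, 4)
```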
Energy storage power plants are critical in balancing power supply and demand. However, the scheduling of these plants faces significant challenges, including high network transmission costs and inefficient inter-device energy utilization. To tackle these challenges, this study proposes an optimal scheduling model for energy storage power plants based on edge computing and the improved whale optimization algorithm (IWOA). The proposed model designs an edge computing framework, transferring a large share of data processing and storage tasks to the network edge. This architecture effectively reduces transmission costs by minimizing data travel time. In addition, the model considers demand response strategies and builds an objective function based on the minimization of the sum of electricity purchase cost and operation cost. The IWOA enhances the optimization process by utilizing adaptive weight adjustments and an optimal neighborhood perturbation strategy, preventing the algorithm from converging to suboptimal solutions. Experimental results demonstrate that the proposed scheduling model maximizes the flexibility of the energy storage plant, facilitating efficient charging and discharging. It successfully achieves peak shaving and valley filling for both electrical and heat loads, promoting the effective utilization of renewable energy sources. The edge-computing framework significantly reduces transmission delays between energy devices. Furthermore, IWOA outperforms traditional algorithms in optimizing the objective function.
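For reference, the baseline whale optimization update that IWOA builds on alternates between encircling the best solution, searching around a random whale, and a spiral move. The sketch below adds a simple adaptive weight on the leader position as one illustrative reading of such an improvement; it is not the paper's exact IWOA, and all coefficients are assumptions.

```python
import numpy as np

def iwoa_minimize(fitness, dim, lower, upper, n_whales=20, iters=200, seed=0):
    """Whale optimization with an adaptive leader weight (illustrative variant)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lower, upper, size=(n_whales, dim))
    scores = np.apply_along_axis(fitness, 1, X)
    best = X[scores.argmin()].copy()

    for t in range(iters):
        a = 2.0 * (1 - t / iters)                     # exploration factor shrinks over time
        w = 0.9 - 0.5 * t / iters                     # adaptive weight on the leader position
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.all(np.abs(A) < 1):             # exploit: encircle the current best
                    X[i] = w * best - A * np.abs(C * best - X[i])
                else:                                 # explore: follow a random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                     # spiral (bubble-net) update
                D = np.abs(best - X[i])
                l = rng.uniform(-1, 1, dim)
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + w * best
            X[i] = np.clip(X[i], lower, upper)
        scores = np.apply_along_axis(fitness, 1, X)
        if scores.min() < fitness(best):
            best = X[scores.argmin()].copy()
    return best, fitness(best)

best, val = iwoa_minimize(lambda x: float(np.sum(x ** 2)), dim=4, lower=-5.0, upper=5.0)
print(val)   # close to 0 on this toy objective
```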