Journal Articles
250,391 articles found
Offload Strategy for Edge Computing in Satellite Networks Based on Software Defined Network
1
Authors: Zhiguo Liu, Yuqing Gui, Lin Wang, Yingru Jiang. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 863-879 (17 pages)
Satellite edge computing has garnered significant attention from researchers; however, processing a large volume of tasks within multi-node satellite networks still poses considerable challenges. The sharp increase in user demand for latency-sensitive tasks has inevitably led to offloading bottlenecks and insufficient computational capacity on individual satellite edge servers, making it necessary to implement effective task offloading scheduling to enhance user experience. In this paper, we propose a priority-based task scheduling strategy based on a Software-Defined Network (SDN) framework for satellite-terrestrial integrated networks, which clarifies the execution order of tasks based on their priority. Subsequently, we apply a Dueling-Double Deep Q-Network (DDQN) algorithm enhanced with prioritized experience replay to derive a computation offloading strategy, improving the experience replay mechanism within the Dueling-DDQN framework. Next, we utilize the Deep Deterministic Policy Gradient (DDPG) algorithm to determine the optimal resource allocation strategy to reduce the processing latency of sub-tasks. Simulation results demonstrate that the proposed d3-DDPG algorithm outperforms other approaches, effectively reducing task processing latency and thus improving user experience and system efficiency.
Keywords: satellite network, edge computing, task scheduling, computing offloading
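The Dueling-DDQN variant above hinges on prioritized experience replay. As a rough sketch of the proportional-priority sampling idea (the class name, eviction policy, and hyperparameters below are illustrative assumptions, not the authors' implementation):

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative)."""
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priorities skew sampling
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        # Priority is proportional to |TD error|^alpha; evict oldest when full.
        p = (abs(td_error) + 1e-6) ** self.alpha
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(p)

    def sample(self, batch_size):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=probs, k=batch_size)
        return [self.buffer[i] for i in idx], idx

    def update_priorities(self, indices, td_errors):
        for i, e in zip(indices, td_errors):
            self.priorities[i] = (abs(e) + 1e-6) ** self.alpha
```

High-TD-error transitions are drawn more often, which is the mechanism the abstract credits for improving the replay stage of the offloading policy.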
Optoelectronic memristor based on a-C:Te film for multi-mode reservoir computing (cited: 1)
2
Authors: Qiaoling Tian, Kuo Xun, Zhuangzhuang Li, Xiaoning Zhao, Ya Lin, Ye Tao, Zhongqiang Wang, Daniele Ielmini, Haiyang Xu, Yichun Liu. Journal of Semiconductors, 2025, Issue 2, pp. 144-149 (6 pages)
Optoelectronic memristors are generating growing research interest for highly efficient computing and sensing-memory applications. In this work, an optoelectronic memristor with an Au/a-C:Te/Pt structure is developed. Synaptic functions, i.e., excitatory post-synaptic current and paired-pulse facilitation, are successfully mimicked with the memristor under electrical and optical stimulation. More importantly, the device exhibits distinguishable response currents when adjusting 4-bit input electrical/optical signals. A multi-mode reservoir computing (RC) system is constructed with the optoelectronic memristors to emulate human tactile-visual fusion recognition, and an accuracy of 98.7% is achieved. The optoelectronic memristor shows potential for developing multi-mode RC systems.
Keywords: optoelectronic memristor, volatile switching, multi-mode reservoir computing
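Paired-pulse facilitation, one of the synaptic behaviors mimicked above, is commonly modeled as an exponentially decaying facilitation of the second response. A minimal sketch, where the magnitude `c` and time constant `tau_ms` are illustrative values rather than the device's measured parameters:

```python
import math

def ppf_index(dt_ms, c=0.6, tau_ms=40.0):
    """Paired-pulse facilitation: ratio of the 2nd to the 1st post-synaptic
    response, decaying exponentially with the inter-pulse interval dt."""
    return 1.0 + c * math.exp(-dt_ms / tau_ms)
```

Shorter pulse intervals yield stronger facilitation, and the index relaxes to 1 (no facilitation) for widely spaced pulses.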
Synaptic devices based on silicon carbide for neuromorphic computing (cited: 1)
3
Authors: Boyu Ye, Xiao Liu, Chao Wu, Wensheng Yan, Xiaodong Pi. Journal of Semiconductors, 2025, Issue 2, pp. 38-51 (14 pages)
To address the increasing demand for massive data storage and processing, brain-inspired neuromorphic computing systems based on artificial synaptic devices have been actively developed in recent years. Among the various materials investigated for the fabrication of synaptic devices, silicon carbide (SiC) has emerged as a preferred choice due to its high electron mobility, superior thermal conductivity, and excellent thermal stability, which give it promising potential for neuromorphic applications in harsh environments. In this review, recent progress in SiC-based synaptic devices is summarized. First, an in-depth discussion is conducted regarding the categories, working mechanisms, and structural designs of these devices. Subsequently, several application scenarios for SiC-based synaptic devices are presented. Finally, a few perspectives and directions for their future development are outlined.
Keywords: silicon carbide, wide bandgap semiconductors, synaptic devices, neuromorphic computing, high temperature
Nano device fabrication for in-memory and in-sensor reservoir computing
4
Authors: Yinan Lin, Xi Chen, Qianyu Zhang, Junqi You, Renjing Xu, Zhongrui Wang, Linfeng Sun. International Journal of Extreme Manufacturing, 2025, Issue 1, pp. 46-71 (26 pages)
Recurrent neural networks (RNNs) have proven to be indispensable for processing sequential and temporal data, with extensive applications in language modeling, text generation, machine translation, and time-series forecasting. Despite their versatility, RNNs are frequently beset by significant training expenses and slow convergence times, which impinge upon their deployment in edge AI applications. Reservoir computing (RC), a specialized RNN variant, is attracting increased attention as a cost-effective alternative for processing temporal and sequential data at the edge. RC's distinctive advantage stems from its compatibility with emerging memristive hardware, which leverages the energy efficiency and reduced footprint of analog in-memory and in-sensor computing, offering a streamlined and energy-efficient solution. This review offers a comprehensive explanation of RC's underlying principles and fabrication processes, and surveys recent progress in nano-memristive-device-based RC systems from the viewpoints of in-memory and in-sensor RC function. It covers a spectrum of memristive devices, from established oxide-based devices to cutting-edge material science developments, providing readers with a lucid understanding of RC's hardware implementation and fostering innovative designs for in-sensor RC systems. Lastly, we identify prevailing challenges and suggest viable solutions, paving the way for future advancements in in-sensor RC technology.
Keywords: reservoir computing, memristive device fabrication, compute-in-memory, in-sensor computing
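The principle underlying all the RC systems surveyed above is a fixed random recurrent network whose state is driven by the input, with only a linear readout trained. A tiny software analogue (in hardware the random weight matrix is replaced by memristive device dynamics; the size, spectral radius, and seed below are arbitrary illustrative choices):

```python
import math
import random

def run_reservoir(inputs, n=20, rho=0.9, seed=1):
    """Drive a small random reservoir with a 1-D input series and return the
    sequence of state vectors: x[t+1] = tanh(W x[t] + w_in * u[t])."""
    rng = random.Random(seed)
    W = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    # Crude echo-state scaling: bound the spectral radius via the max row sum.
    scale = rho / max(sum(abs(v) for v in row) for row in W)
    W = [[v * scale for v in row] for row in W]
    w_in = [rng.uniform(-1, 1) for _ in range(n)]
    x, states = [0.0] * n, []
    for u in inputs:
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + w_in[i] * u)
             for i in range(n)]
        states.append(x)
    return states
```

A linear (e.g., ridge) regression from the collected states to targets would complete the pipeline; only that readout is trained, which is the source of RC's low training cost.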
Flexible artificial vision computing system based on FeOx optomemristor for speech recognition
5
Authors: Jie Li, Yue Xin, Bai Sun, Dengshun Gu, Changrong Liao, Xiaofang Hu, Lidan Wang, Shukai Duan, Guangdong Zhou. Journal of Semiconductors, 2025, Issue 1, pp. 225-232 (8 pages)
With the advancement of artificial intelligence, optical in-sensing reservoir computing based on emerging semiconductor devices is highly desirable for real-time analog signal processing. Here, we disclose a flexible optomemristor based on a C_(27)H_(30)O_(15)/FeOx heterostructure that presents high sensitivity to light stimuli and artificial optic synaptic features such as short- and long-term plasticity (STP and LTP), enabling the developed optomemristor to implement complex analog signal processing by building an in-sensing reservoir computing algorithm based on real physical dynamics, yielding an accuracy of 94.88% for speech recognition. Charge trapping and detrapping mediated by the optically active layer of C_(27)H_(30)O_(15), which is extracted from the lotus flower, is responsible for the positive photoconductance memory in the prepared optomemristor. This work provides a feasible organic-inorganic heterostructure as well as optic in-sensing vision computing for advanced optical computing systems in future complex signal processing.
Keywords: reservoir computing, flexible optomemristor, analog signal processing, optic computing
Joint offloading decision and resource allocation in vehicular edge computing networks
6
Authors: Shumo Wang, Xiaoqin Song, Han Xu, Tiecheng Song, Guowei Zhang, Yang Yang. Digital Communications and Networks, 2025, Issue 1, pp. 71-82 (12 pages)
With the rapid development of Intelligent Transportation Systems (ITS), many new applications for Intelligent Connected Vehicles (ICVs) have sprung up. To tackle the conflict between delay-sensitive applications and resource-constrained vehicles, the computation offloading paradigm, which transfers computation tasks from ICVs to edge computing nodes, has received extensive attention. However, the dynamic network conditions caused by the mobility of vehicles and the unbalanced computing load of edge nodes present challenges for ITS. In this paper, we propose a heterogeneous Vehicular Edge Computing (VEC) architecture with Task Vehicles (TaVs), Service Vehicles (SeVs), and Roadside Units (RSUs), and propose a distributed algorithm, namely PG-MRL, which jointly optimizes offloading decisions and resource allocation. In the first stage, the offloading decisions of TaVs are obtained through a potential game. In the second stage, a multi-agent Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm with centralized training and distributed execution, is proposed to optimize the real-time transmission power and subchannel selection. The simulation results show that the proposed PG-MRL algorithm offers significant improvements over baseline algorithms in terms of system delay.
Keywords: computation offloading, resource allocation, vehicular edge computing, potential game, multi-agent deep deterministic policy gradient
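The first stage above obtains offloading decisions via a potential game. In a congestion-style potential game, simple best-response dynamics (each vehicle repeatedly switching to its cheapest node) provably reaches a pure Nash equilibrium. A sketch under a simplified cost model of load divided by node speed (the cost function and node set are illustrative assumptions, not the paper's model):

```python
def best_response_offloading(n_tasks, node_speeds, max_rounds=100):
    """Each task vehicle greedily re-picks the node with the lowest estimated
    cost (resulting load / node speed); iterate until no vehicle switches."""
    choice = [0] * n_tasks                      # start: everyone offloads to node 0
    for _ in range(max_rounds):
        changed = False
        for v in range(n_tasks):
            load = [choice.count(k) for k in range(len(node_speeds))]
            load[choice[v]] -= 1                # load excluding vehicle v itself
            costs = [(load[k] + 1) / node_speeds[k] for k in range(len(node_speeds))]
            best = min(range(len(node_speeds)), key=lambda k: costs[k])
            if costs[best] < costs[choice[v]] - 1e-12:
                choice[v] = best
                changed = True
        if not changed:                         # pure Nash equilibrium reached
            break
    return choice
```

At the fixed point no vehicle can lower its own cost by unilaterally moving, which is the stability property the potential-game stage relies on.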
DDPG-Based Intelligent Computation Offloading and Resource Allocation for LEO Satellite Edge Computing Network
7
Authors: Jia Min, Wu Jian, Zhang Liang, Wang Xinyu, Guo Qing. China Communications, 2025, Issue 3, pp. 1-15 (15 pages)
Low earth orbit (LEO) satellites with wide coverage can carry mobile edge computing (MEC) servers with powerful computing capabilities to form a LEO satellite edge computing system, providing computing services for global ground users. In this paper, the computation offloading and resource allocation problems are formulated as a mixed-integer nonlinear programming (MINLP) problem. This paper proposes a computation offloading algorithm based on deep deterministic policy gradient (DDPG) to obtain the user offloading decisions and user uplink transmission power, and uses a convex optimization algorithm based on the Lagrange multiplier method to obtain the optimal MEC server resource allocation scheme. In addition, the expression for the suboptimal user local CPU cycles is derived by a relaxation method. Simulation results show that the proposed algorithm achieves excellent convergence and significantly reduces the system utility value, at a considerable time cost, compared with other algorithms.
Keywords: computation offloading, deep deterministic policy gradient, low earth orbit satellite, mobile edge computing, resource allocation
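The Lagrange-multiplier step above admits a well-known closed form in the simplest setting. If the server minimizes total processing delay Σᵢ cᵢ/fᵢ (cᵢ = CPU cycles of task i) subject to Σᵢ fᵢ = F, the KKT conditions give fᵢ ∝ √cᵢ. This is a standard textbook instance, not necessarily the paper's exact objective:

```python
import math

def allocate_cpu(cycles, total_f):
    """Minimize sum_i cycles_i / f_i subject to sum_i f_i = total_f.
    Stationarity of the Lagrangian (-c_i/f_i^2 + lam = 0) yields the
    square-root rule f_i = total_f * sqrt(c_i) / sum_j sqrt(c_j)."""
    roots = [math.sqrt(c) for c in cycles]
    s = sum(roots)
    return [total_f * r / s for r in roots]
```

Heavier tasks get more than a proportional share of frequency would suggest (√cᵢ, not cᵢ), which reduces total delay relative to an equal split.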
Accelerating Hartree-Fock Self-consistent Field Calculation on C86/DCU Heterogeneous Computing Platform
8
Authors: Ji Qi, Huimin Zhang, Dezun Shan, Minghui Yang. Chinese Journal of Chemical Physics, 2025, Issue 1, pp. 81-94, I0056 (15 pages)
In this study, we investigate the efficacy of a hybrid parallel algorithm aimed at enhancing the speed of evaluation of two-electron repulsion integrals (ERI) and Fock matrix generation on the Hygon C86/DCU (deep computing unit) heterogeneous computing platform. Multiple hybrid parallel schemes are assessed using a range of model systems, including those with up to 1200 atoms and 10000 basis functions. Our findings reveal that, during Hartree-Fock (HF) calculations, a single DCU exhibits a 33.6x speedup over 32 C86 CPU cores. Compared with the efficiency of the Wuhan Electronic Structure Package on the Intel X86 and NVIDIA A100 computing platform, the Hygon platform exhibits good cost-effectiveness, showing great potential in quantum chemistry calculations and other high-performance scientific computations.
Keywords: quantum chemistry, self-consistent field, Hartree-Fock, electron repulsion integrals, heterogeneous parallel computing, C86/deep computing unit
Dynamic Task Offloading Scheme for Edge Computing via Meta-Reinforcement Learning
9
Authors: Jiajia Liu, Peng Xie, Wei Li, Bo Tang, Jianhua Liu. Computers, Materials & Continua, 2025, Issue 2, pp. 2609-2635 (27 pages)
As an important complement to cloud computing, edge computing can effectively reduce the workload of the backbone network. To reduce latency and energy consumption of edge computing, deep learning is used to learn the task offloading strategies by interacting with the entities. In actual application scenarios, users of edge computing are always changing dynamically. However, the existing task offloading strategies cannot be applied to such dynamic scenarios. To solve this problem, we propose a novel dynamic task offloading framework for distributed edge computing, leveraging the potential of meta-reinforcement learning (MRL). Our approach formulates a multi-objective optimization problem aimed at minimizing both delay and energy consumption. We model the task offloading strategy using a directed acyclic graph (DAG). Furthermore, we propose a distributed edge computing adaptive task offloading algorithm rooted in MRL. This algorithm integrates multiple Markov decision processes (MDP) with a sequence-to-sequence (seq2seq) network, enabling it to learn and adapt task offloading strategies responsively across diverse network environments. To achieve joint optimization of delay and energy consumption, we incorporate the non-dominated sorting genetic algorithm II (NSGA-II) into our framework. Simulation results demonstrate the superiority of our proposed solution, achieving a 21% reduction in time delay and a 19% decrease in energy consumption compared to alternative task offloading schemes. Moreover, our scheme exhibits remarkable adaptability, responding swiftly to changes in various network environments.
Keywords: edge computing, adaptive, META, task offloading, joint optimization
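The NSGA-II component above ranks candidate offloading plans by Pareto dominance over the two objectives (delay, energy). A compact sketch of non-dominated sorting — this is the simple O(n^2)-per-front formulation, not NSGA-II's bookkeeping-optimized version:

```python
def fast_nondominated_sort(points):
    """Partition (delay, energy) objective vectors into Pareto fronts.
    Point a dominates b if a is no worse in every objective and strictly
    better in at least one (both objectives minimized)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        # Front = points not dominated by any other remaining point.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Front 0 is the set of non-dominated (delay, energy) trade-offs from which a final offloading plan would be selected.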
A Task Offloading Method for Vehicular Edge Computing Based on Reputation Assessment
10
Authors: Jun Li, Yawei Dong, Liang Ni, Guopeng Feng, Fangfang Shan. Computers, Materials & Continua, 2025, Issue 5, pp. 3537-3552 (16 pages)
With the development of vehicle networks and the construction of roadside units, Vehicular Ad Hoc Networks (VANETs) are increasingly promoting cooperative computing patterns among vehicles. Vehicular edge computing (VEC) offers an effective solution to mitigate resource constraints by enabling task offloading to edge cloud infrastructure, thereby reducing the computational burden on connected vehicles. However, this sharing-based and distributed computing paradigm necessitates ensuring the credibility and reliability of the various computation nodes. Existing vehicular edge computing platforms have not adequately considered the misbehavior of vehicles. We propose a practical task offloading algorithm based on reputation assessment to address the task offloading problem in vehicular edge computing under an unreliable environment. This approach integrates deep reinforcement learning and reputation management to address task offloading challenges. Simulation experiments conducted using Veins demonstrate the feasibility and effectiveness of the proposed method.
Keywords: vehicular edge computing, task offloading, reputation assessment
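The abstract does not detail its reputation model, but a common building block for such schemes is an exponentially weighted update that blends a node's history with its latest verified task outcome. A hypothetical sketch (the smoothing factor and 0/1 outcome encoding are assumptions):

```python
def update_reputation(rep, outcome, alpha=0.8):
    """Exponentially weighted reputation update: blend the historical score
    with the latest task outcome (1.0 = result verified correct,
    0.0 = faulty or malicious result)."""
    return alpha * rep + (1 - alpha) * outcome
```

Repeated bad outcomes drive a node's score down geometrically, so a threshold (e.g., excluding nodes below 0.5 from offloading) quickly filters out misbehaving vehicles while letting honest nodes recover gradually.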
Optimizing System Latency for Blockchain-Encrypted Edge Computing in Internet of Vehicles
11
Authors: Cui Zhang, Maoxin Ji, Qiong Wu, Pingyi Fan, Qiang Fan. Computers, Materials & Continua, 2025, Issue 5, pp. 3519-3536 (18 pages)
As Internet of Vehicles (IoV) technology continues to advance, edge computing has become an important tool for assisting vehicles in handling complex tasks. However, the process of offloading tasks to edge servers may expose vehicles to malicious external attacks, resulting in information loss or even tampering, thereby creating serious security vulnerabilities. Blockchain technology can maintain a shared ledger among servers. In the Raft consensus mechanism, as long as more than half of the nodes remain operational, the system will not collapse, effectively maintaining the system's robustness and security. To protect vehicle information, we propose a security framework that integrates the Raft consensus mechanism from blockchain technology with edge computing. To address the additional latency introduced by blockchain, we derived a theoretical formula for system delay and proposed a convex optimization solution to minimize the system latency, ensuring that the system meets the requirements for low latency and high reliability. Simulation results demonstrate that the optimized data extraction rate significantly reduces system delay, with relatively stable variations in latency. Moreover, the proposed optimization solution based on this model can provide valuable insights for enhancing security and efficiency in future network environments, such as 5G and next-generation smart city systems.
Keywords: blockchain, edge computing, internet of vehicles, latency optimization
Modified Neural Network Used for Host Utilization Prediction in Cloud Computing Environment
12
Authors: Arif Ullah, Siti Fatimah Abdul Razak, Sumendra Yogarayan, Md Shohel Sayeed. Computers, Materials & Continua, 2025, Issue 3, pp. 5185-5204 (20 pages)
Networking, storage, and hardware are just a few of the virtual computing resources that the infrastructure service model offers, depending on what the client needs. One essential aspect of cloud computing that improves resource allocation techniques is host load prediction. Without it, hardware resource allocation in cloud computing suffers from host initialization issues, which add several minutes to response times. To solve this issue and accurately predict cloud capacity, cloud data centers use prediction algorithms, permitting dynamic cloud scalability while maintaining superior service quality. For host prediction, we therefore present a hybrid convolutional neural network-long short-term memory model in this work. First, the input of the suggested hybrid model is processed with the vector autoregression technique, which filters the multivariate data to eliminate linear interdependencies prior to analysis. The remaining data are then processed and fed into the convolutional neural network layer, which gathers intricate details about the utilization of each virtual machine and central processing unit. The next step involves long short-term memory, which is suitable for representing the temporal information of irregular trends in time-series components. Key to the entire process is that we used the most appropriate activation function for this type of model, a scaled polynomial constant unit. Cloud systems require accurate prediction due to the increasing degrees of unpredictability in data centers. Because of this, two real load traces were used in this study's performance assessment, one of them from a typical distributed system. In comparison to CNN, VAR-GRU, VAR-MLP, ARIMA-LSTM, and other models, the experimental results demonstrate that our suggested approach offers state-of-the-art performance with higher accuracy on both datasets.
Keywords: cloud computing, datacenter, virtual machine (VM), prediction algorithm
Blockchain-Enabled Edge Computing Techniques for Advanced Video Surveillance in Autonomous Vehicles
13
Authors: Mohammad Tabrez Quasim, Khair Ul Nisa. Computers, Materials & Continua, 2025, Issue 4, pp. 1239-1255 (17 pages)
Blockchain-based audiovisual transmission systems have been built to create a distributed and flexible smart transport system (STS) that lets customers, video creators, and service providers connect with each other directly. Blockchain-based STS devices need substantial computing power to transcode video feeds of differing quality and format into the versions and structures that meet the needs of different users. Existing blockchains, however, cannot support live streaming because their processing times are too long and their computing power insufficient, and the large amounts of video data being transmitted and analyzed put excessive stress on vehicular networks. This paper proposes a video surveillance method to improve the data performance of the blockchain system and lower latency across the multiple access edge computing (MEC) system, and presents an integration of MEC and blockchain for video surveillance in autonomous vehicles (IMEC-BVS) framework. To deal with this problem, the joint optimization problem is formulated as a Markov Choice Progression (MCP) and addressed using the actor-critical asynchronous advantage (ACAA) method with deep reinforcement training. Simulation results show that the suggested method converges quickly and improves the joint performance of MEC and blockchain for video surveillance in self-driving cars compared with other methods.
Keywords: blockchain, multiple access edge computing, video surveillance, autonomous vehicles
Model-free prediction of chaotic dynamics with parameter-aware reservoir computing
14
Authors: Jianmin Guo, Yao Du, Haibo Luo, Xuan Wang, Yizhen Yu, Xingang Wang. Chinese Physics B, 2025, Issue 4, pp. 143-152 (10 pages)
Model-free, data-driven prediction of chaotic motions is a long-standing challenge in nonlinear science. Stimulated by the recent progress in machine learning, considerable attention has been given to the inference of chaos by the technique of reservoir computing (RC). In particular, by incorporating a parameter-control channel into the standard RC, it is demonstrated that the machine is able to not only replicate the dynamics of the training states, but also infer new dynamics not included in the training set. The new machine-learning scheme, termed parameter-aware RC, opens up new avenues for data-based analysis of chaotic systems, and holds promise for predicting and controlling many real-world complex systems. Here, using typical chaotic systems as examples, we give a comprehensive introduction to this powerful machine-learning technique, including the algorithm, the implementation, the performance, and the open questions calling for further studies.
Keywords: chaos prediction, time-series analysis, bifurcation diagram, parameter-aware reservoir computing
FedCLCC:A personalized federated learning algorithm for edge cloud collaboration based on contrastive learning and conditional computing
15
Authors: Kangning Yin, Xinhui Ji, Yan Wang, Zhiguo Wang. Defence Technology, 2025, Issue 1, pp. 80-93 (14 pages)
Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both data volume and infrastructure challenges in edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years. One example of pFL involves exploiting the global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing. Contrastive learning determines the feature representation similarity to adjust the local model. Conditional computing separates the global and local information and feeds it to the corresponding heads for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
Keywords: federated learning, statistical heterogeneity, personalized model, conditional computing, contrastive learning
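The contrastive-learning step above compares the similarity of feature representations. A common primitive for such comparisons is cosine similarity between feature vectors; this sketch is a generic building block, not FedCLCC's specific loss:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors, e.g. a client's local
    representation and the global one; 1.0 = aligned, 0.0 = orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

In a contrastive setting, similarities to a positive (the global representation) are pushed up while similarities to negatives are pushed down, which steers the local model's adjustment.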
GENOME:Genetic Encoding for Novel Optimization of Malware Detection and Classification in Edge Computing
16
Authors: Sang-Hoon Choi, Ki-Woong Park. Computers, Materials & Continua, 2025, Issue 3, pp. 4021-4039 (19 pages)
The proliferation of Internet of Things (IoT) devices has established edge computing as a critical paradigm for real-time data analysis and low-latency processing. Nevertheless, the distributed nature of edge computing presents substantial security challenges, rendering it a prominent target for sophisticated malware attacks. Existing signature-based and behavior-based detection methods are ineffective against the swiftly evolving nature of malware threats and are constrained by the availability of resources. This paper proposes the Genetic Encoding for Novel Optimization of Malware Evaluation (GENOME) framework, a novel solution intended to improve the performance of malware detection and classification in edge computing environments. GENOME optimizes data storage and computational efficiency by converting malware artifacts into compact, structured sequences through a Deoxyribonucleic Acid (DNA) encoding mechanism. The framework employs two DNA encoding algorithms, standard and compressed, which substantially reduce data size while preserving high detection accuracy. Experiments on the Edge-IIoTset dataset showed that GENOME achieves high classification performance using models such as Random Forest and Logistic Regression, while reducing data size by up to 42%. Further evaluations with the CIC-IoT-23 dataset and deep learning models confirmed GENOME's scalability and adaptability across diverse datasets and algorithms. This study emphasizes the potential of GENOME to address critical challenges, such as the rapid mutation of malware, real-time processing demands, and resource limitations, offering a security solution for edge computing environments that is both efficient and scalable.
Keywords: edge computing, IoT security, malware, machine learning, malware classification, malware detection
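The abstract does not specify GENOME's exact encoding tables, but the generic idea of DNA encoding is to map binary data onto the four-letter nucleotide alphabet, two bits per base. A minimal, reversible sketch of that idea (the A/C/G/T ordering is an arbitrary assumption):

```python
BASES = "ACGT"  # 2 bits per nucleotide: A=00, C=01, G=10, T=11 (assumed)

def dna_encode(data: bytes) -> str:
    """Map each byte to four bases, most-significant bit pair first."""
    return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_decode(seq: str) -> bytes:
    """Invert dna_encode: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)
```

This plain 2-bit mapping keeps the data size unchanged; the size reductions reported for GENOME would come from its compressed variant and feature selection, whose details are not given here.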
Cognitive Computing Models in Artificial Intelligence Education: From Theory to Practice
17
Authors: Changkui Li. Artificial Intelligence Education Studies, 2025, Issue 1, pp. 1-14 (14 pages)
Cognitive computing has emerged as a transformative force in artificial intelligence (AI) education, bridging theoretical advancements with practical applications. This article explores the role of cognitive models in enhancing learning systems, from intelligent tutoring and personalized recommendations to virtual laboratories and special education support. It examines key technologies, such as knowledge graphs, natural language processing, and multimodal data analysis, that enable adaptive, human-like responsiveness. The study also addresses technical challenges like interpretability and data privacy, alongside ethical concerns including equity and bias. Looking forward, it discusses how cognitive computing could reshape future learning modalities and aligns with trends like artificial general intelligence and interdisciplinary learning science. By tracing the path from theory to practice, this work underscores the potential of cognitive computing to create an inclusive, dynamic educational landscape, while highlighting the need for ethical and technical rigor to ensure its responsible evolution.
Keywords: cognitive computing, artificial intelligence education, adaptive learning, ethical AI, future learning
Innovative Approaches to Task Scheduling in Cloud Computing Environments Using an Advanced Willow Catkin Optimization Algorithm
18
Authors: Jeng-Shyang Pan, Na Yu, Shu-Chuan Chu, An-Ning Zhang, Bin Yan, Junzo Watada. Computers, Materials & Continua, 2025, Issue 2, pp. 2495-2520 (26 pages)
The widespread adoption of cloud computing has underscored the critical importance of efficient resource allocation and management, particularly in task scheduling, which involves assigning tasks to computing resources for optimized resource utilization. Several meta-heuristic algorithms have shown effectiveness in task scheduling, among which the relatively recent Willow Catkin Optimization (WCO) algorithm has demonstrated potential, albeit with apparent needs for enhanced global search capability and convergence speed. To address these limitations of WCO in cloud computing task scheduling, this paper introduces an improved version termed the Advanced Willow Catkin Optimization (AWCO) algorithm. AWCO enhances the algorithm's performance by augmenting its global search capability through a quasi-opposition-based learning strategy and accelerating its convergence speed via sinusoidal mapping. A comprehensive evaluation utilizing the CEC2014 benchmark suite, comprising 30 test functions, demonstrates that AWCO achieves superior optimization outcomes, surpassing conventional WCO and a range of established meta-heuristics. The proposed algorithm also considers trade-offs among the cost, makespan, and load balancing objectives. Experimental results of AWCO are compared with those obtained using the other meta-heuristics, illustrating that the proposed algorithm provides superior performance in task scheduling. The method offers a robust foundation for enhancing the utilization of cloud computing resources in the domain of task scheduling within a cloud computing environment.
Keywords: willow catkin optimization algorithm; cloud computing; task scheduling; opposition-based learning strategy
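The abstract names two enhancements, quasi-opposition-based learning for initialization/diversity and a sinusoidal map for chaotic sequences, without giving formulas. The sketch below uses common textbook formulations of both (the map x_{k+1} = a·x_k²·sin(π·x_k) with a = 2.3, and quasi-opposite points sampled between the interval centre and the opposite point); all function names, the parameter a, and the population scheme are illustrative assumptions, not the paper's actual AWCO definitions.

```python
import math
import random

def sinusoidal_map(x, a=2.3):
    # Sinusoidal chaotic map: x_{k+1} = a * x_k^2 * sin(pi * x_k).
    # For x in (0, 1) and a = 2.3 the orbit stays in (0, 1).
    return a * x * x * math.sin(math.pi * x)

def quasi_opposite(x, lb, ub, rng):
    # Quasi-opposition-based learning: sample uniformly between the
    # interval centre and the opposite point xo = lb + ub - x.
    centre = (lb + ub) / 2.0
    xo = lb + ub - x
    lo, hi = (centre, xo) if centre <= xo else (xo, centre)
    return rng.uniform(lo, hi)

def init_population(n, dim, lb, ub, seed=0):
    """Chaotic initialization plus quasi-opposite candidates (2n total)."""
    rng = random.Random(seed)
    c = rng.random() * 0.7 + 0.1  # chaotic seed kept away from fixed point 0
    pop = []
    for _ in range(n):
        ind = []
        for _ in range(dim):
            c = sinusoidal_map(c) % 1.0       # stays in [0, 1)
            ind.append(lb + c * (ub - lb))    # scale into the search box
        pop.append(ind)
    # Augment with quasi-opposite counterparts to widen initial coverage.
    quasi = [[quasi_opposite(x, lb, ub, rng) for x in ind] for ind in pop]
    return pop + quasi
```

In a full optimizer, the fitter half of this doubled population would typically be kept as the starting swarm.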
Container cluster placement in edge computing based on reinforcement learning incorporating graph convolutional networks scheme
19
Authors: Zhuo Chen, Bowen Zhu, Chuan Zhou 《Digital Communications and Networks》 2025, Issue 1, pp. 60-70 (11 pages)
Container-based virtualization technology has recently seen wider use in edge computing environments due to its lighter resource occupation, faster startup, and better resource utilization efficiency. To meet the diverse needs of tasks, multiple network functions usually need to be instantiated in the form of containers, and the generated containers interconnected to build a Container Cluster (CC). CCs are then deployed on edge service nodes with relatively limited resources. However, the increasingly complex and time-varying nature of tasks poses great challenges to the optimal placement of CCs. This paper regards the charges for the resources occupied by providing services as revenue, and service efficiency and energy consumption as cost, and thus formulates a Mixed Integer Programming (MIP) model to describe the optimal placement of CCs on edge service nodes. Furthermore, an Actor-Critic based Deep Reinforcement Learning (DRL) framework incorporating Graph Convolutional Networks (GCN), named RL-GCN, is proposed to solve the optimization problem. The framework obtains an optimal placement strategy through self-learning according to the requirements and objectives of CC placement. In particular, the introduction of GCN allows the features of the association relationships among the containers in a CC to be effectively extracted, improving placement quality. Experiment results show that, under different scales of service nodes and task requests, the proposed method achieves improved system performance in terms of placement error ratio, time efficiency of solution output, and cumulative system revenue compared with other representative baseline methods.
Keywords: edge computing; network virtualization; container cluster; deep reinforcement learning; graph convolutional network
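The abstract says GCN layers extract features of the association relationships among containers in a CC, but gives no architecture details. The sketch below shows the standard single GCN propagation step, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W), applied to a small container-dependency graph; the function name and the idea of feeding the resulting node embeddings to an actor network are assumptions for illustration, not the paper's RL-GCN specification.

```python
import math

def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    adj:    n x n adjacency matrix of the container-dependency graph
    feats:  n x f node feature matrix (e.g. CPU/memory demands per container)
    weight: f x o learnable weight matrix
    """
    n = len(adj)
    # Add self-loops so each container keeps its own features.
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # Symmetric degree normalisation of the adjacency matrix.
    norm = [[a[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbour features: norm @ feats.
    f_dim = len(feats[0])
    agg = [[sum(norm[i][k] * feats[k][f] for k in range(n))
            for f in range(f_dim)] for i in range(n)]
    # Linear transform plus ReLU: relu(agg @ weight).
    o_dim = len(weight[0])
    return [[max(0.0, sum(agg[i][f] * weight[f][o] for f in range(f_dim)))
             for o in range(o_dim)] for i in range(n)]
```

In an Actor-Critic placement framework, the per-container embeddings produced here would be pooled into a state vector for the actor's placement decision.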
Research on the Optimal Scheduling Model of Energy Storage Plant Based on Edge Computing and Improved Whale Optimization Algorithm
20
Authors: Zhaoyu Zeng, Fuyin Ni 《Energy Engineering》 2025, Issue 3, pp. 1153-1174 (22 pages)
Energy storage power plants are critical in balancing power supply and demand. However, the scheduling of these plants faces significant challenges, including high network transmission costs and inefficient inter-device energy utilization. To tackle these challenges, this study proposes an optimal scheduling model for energy storage power plants based on edge computing and an improved whale optimization algorithm (IWOA). The proposed model designs an edge computing framework that transfers a large share of data processing and storage tasks to the network edge, effectively reducing transmission costs by minimizing data travel time. In addition, the model considers demand response strategies and builds an objective function that minimizes the sum of electricity purchase cost and operation cost. The IWOA enhances the optimization process through adaptive weight adjustments and an optimal-neighborhood perturbation strategy, preventing the algorithm from converging to suboptimal solutions. Experimental results demonstrate that the proposed scheduling model maximizes the flexibility of the energy storage plant, facilitating efficient charging and discharging. It achieves peak shaving and valley filling for both electrical and heat loads, promoting effective utilization of renewable energy sources. The edge computing framework significantly reduces transmission delays between energy devices, and IWOA outperforms traditional algorithms in optimizing the objective function.
Keywords: energy storage plant; edge computing; optimal energy scheduling; improved whale optimization algorithm
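The abstract credits IWOA's gains to "adaptive weight adjustments" and an "optimal neighborhood perturbation strategy" without defining either. The sketch below illustrates one plausible reading: a linearly decaying inertia weight applied to the move toward the incumbent best, plus a bounded random perturbation around the best solution, with greedy acceptance. The weight schedule, perturbation radius, and function names are all assumptions, not the paper's actual IWOA.

```python
import random

def adaptive_weight(t, t_max, w_max=0.9, w_min=0.4):
    # One common adaptive scheme: inertia weight decays linearly with iteration.
    return w_max - (w_max - w_min) * t / t_max

def perturb_best(best, lb, ub, radius, rng):
    # Optimal-neighbourhood perturbation: resample near the incumbent best,
    # clamped to the search bounds.
    return [min(ub, max(lb, x + rng.uniform(-radius, radius))) for x in best]

def iwoa_step(pop, best, cost, t, t_max, lb, ub, rng):
    """One simplified IWOA iteration: weighted move toward the best solution
    plus a neighbourhood perturbation of the best, with greedy acceptance."""
    w = adaptive_weight(t, t_max)
    new_pop = [[min(ub, max(lb, w * x + (1 - w) * b + rng.gauss(0, 0.1)))
                for x, b in zip(ind, best)] for ind in pop]
    # The perturbed best acts as an extra candidate to escape shallow optima.
    candidates = new_pop + [perturb_best(best, lb, ub, 0.1 * (ub - lb), rng)]
    for cand in candidates:
        if cost(cand) < cost(best):
            best = cand
    return new_pop, best
```

For the plant-scheduling problem, `cost` would be the paper's objective (electricity purchase cost plus operation cost) and each decision vector a charge/discharge schedule; here any bounded objective works.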