Journal Articles
5,660 articles found
1. LoRa Sense: Sensing and Optimization of LoRa Link Behavior Using Path-Loss Models in Open-Cast Mines
Authors: Bhanu Pratap Reddy Bhavanam, Prashanth Ragam. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025(1): 425-466 (42 pages).
The Internet of Things (IoT) has orchestrated various domains in numerous applications, contributing significantly to the growth of the smart world, even in regions with low literacy rates, boosting socio-economic development. This study provides valuable insights into optimizing wireless communication, paving the way for a more connected and productive future in the mining industry. The IoT revolution is advancing across industries, but harsh geometric environments, including open-pit mines, pose unique challenges for reliable communication. The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency (RF) protocols such as Bluetooth, Wi-Fi, GSM/GPRS, Narrow Band (NB)-IoT, SigFox, ZigBee, and Long Range Wireless Area Network (LoRaWAN). This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols, ZigBee and LoRaWAN. Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation. ZigBee is tested in the Tadicherla open-cast coal mine in India. Similarly, LoRaWAN field tests are conducted at a limestone mine of the Associated Cement Companies (ACC) in Bargarh, India, covering both Indoor-to-Outdoor (I2O) and Outdoor-to-Outdoor (O2O) environments. A robust framework of path-loss models, namely the Free Space, Egli, Okumura-Hata, Cost231-Hata and Ericsson models, combined with key performance metrics, is employed to evaluate the patterns of signal attenuation. Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment, with a coefficient of determination (R²) of 0.907 and balanced error metrics: a Normalized Root Mean Square Error (NRMSE) of 0.030, a Mean Square Error (MSE) of 4.950, a Mean Absolute Percentage Error (MAPE) of 0.249 and a Scatter Index (SI) of 2.723. In the O2O scenario, the Ericsson model showed superior performance, with the highest R² value of 0.959, supported by strong correlation metrics: NRMSE of 0.026, MSE of 8.685, MAPE of 0.685, Mean Absolute Deviation (MAD) of 20.839 and SI of 2.194. For the LoRaWAN protocol, the Cost-231 model achieved the highest R² value of 0.921 in the I2O scenario, complemented by the lowest error metrics: NRMSE of 0.018, MSE of 1.324, MAPE of 0.217, MAD of 9.218 and SI of 1.238. In the O2O environment, the Okumura-Hata model achieved the highest R² value of 0.978, indicating a strong fit, with NRMSE of 0.047, MSE of 27.807, MAPE of 27.494, MAD of 37.287 and SI of 3.927. This advancement in reliable communication networks promises to transform the open-cast landscape into a well-connected environment despite its challenging signal attenuation. These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
Keywords: Internet of Things; long range wireless area network; ZigBee; mining environments; path-loss models; coefficient of determination; mean square error
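The abstract above compares empirical path-loss models against field measurements using R², NRMSE, MSE, MAPE and SI. As a hedged illustration of that evaluation pattern (not the authors' code), the sketch below predicts path loss with the textbook free-space (Friis) formula and scores it against measurements; the exact NRMSE and Scatter Index normalizations vary across papers, so the ones used here are assumptions.

```python
# Hedged sketch of the model-vs-measurement comparison described above.
# The free-space formula is the standard Friis expression; the NRMSE and
# Scatter Index normalizations are assumptions, as papers define them differently.
import numpy as np

def free_space_path_loss_db(distance_m, freq_mhz):
    """Friis free-space path loss in dB for distance in metres and frequency in MHz."""
    d_km = np.asarray(distance_m, dtype=float) / 1000.0
    return 20 * np.log10(d_km) + 20 * np.log10(freq_mhz) + 32.44

def fit_metrics(measured_db, predicted_db):
    """R^2, MSE, NRMSE, MAPE and Scatter Index between measured and predicted loss."""
    measured = np.asarray(measured_db, dtype=float)
    predicted = np.asarray(predicted_db, dtype=float)
    residual = measured - predicted
    mse = np.mean(residual ** 2)
    ss_res = np.sum(residual ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MSE": mse,
        "NRMSE": np.sqrt(mse) / (measured.max() - measured.min()),  # assumed range-normalized
        "MAPE": np.mean(np.abs(residual / measured)),
        "SI": np.sqrt(mse) / measured.mean() * 100,                 # assumed percent scatter index
    }

# Toy usage with made-up measurements at 868 MHz (a common LoRaWAN band):
distances = np.array([50, 100, 200, 400, 800])  # metres
measured = free_space_path_loss_db(distances, 868) + np.random.normal(0, 2, 5)
print(fit_metrics(measured, free_space_path_loss_db(distances, 868)))
```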
2. The applied principles of EEG analysis methods in neuroscience and clinical neurology
Authors: Hao Zhang, Qing-Qi Zhou, He Chen, Xiao-Qing Hu, Wei-Guang Li, Yang Bai, Jun-Xia Han, Yao Wang, Zhen-Hu Liang, Dan Chen, Feng-Yu Cong, Jia-Qing Yan, Xiao-Li Li. Military Medical Research (CSCD), 2024(6): 907-941 (35 pages).
Electroencephalography (EEG) is a non-invasive measurement method for brain activity. Due to its safety, high resolution, and hypersensitivity to dynamic changes in brain neural signals, EEG has aroused much interest in scientific research and medical fields. This article reviews the types of EEG signals, multiple EEG signal analysis methods, and the application of relevant methods in the neuroscience field and for diagnosing neurological diseases. First, three types of EEG signals, including time-invariant EEG, accurate event-related EEG, and random event-related EEG, are introduced. Second, five main directions for the methods of EEG analysis, including power spectrum analysis, time-frequency analysis, connectivity analysis, source localization methods, and machine learning methods, are described in the main section, along with different sub-methods and effect evaluations for solving the same problem. Finally, the application scenarios of different EEG analysis methods are emphasized, and the advantages and disadvantages of similar methods are distinguished. This article is expected to assist researchers in selecting suitable EEG analysis methods based on their research objectives, provide references for subsequent research, and summarize current issues and prospects for the future.
Keywords: Electroencephalogram analysis methods; Applied principles; Neuroscience; Diagnosis; Neurological diseases
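Of the five analysis directions the review covers, power spectrum analysis is the most common starting point. The short sketch below is a hedged, generic illustration (not taken from the article): it estimates the power spectral density of one synthetic EEG channel with Welch's method and integrates alpha-band power; the sampling rate and band edges are assumed values.

```python
# Minimal sketch of power spectrum analysis for one EEG channel (illustrative only).
# Sampling rate, window length and the 8-13 Hz alpha band are assumed values.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # synthetic 10 Hz rhythm

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # Welch power spectral density
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = np.trapz(psd[alpha], freqs[alpha])      # integrate PSD over the alpha band
print(f"Alpha-band power: {alpha_power:.3f}")
```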
3. Detection and Recognition of Spray Code Numbers on Can Surfaces Based on OCR
Authors: Hailong Wang, Junchao Shi. Computers, Materials & Continua (SCIE, EI), 2025(1): 1109-1128 (20 pages).
A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition. In the coding number detection stage, the Differentiable Binarization Network is used as the backbone network, combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model detection effect. In terms of text recognition, using the Scene Visual Text Recognition coding number recognition network for end-to-end training can alleviate the problem of coding recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using the dataset of can bottom spray code numbers collected on-site, and a transfer experiment was conducted using the dataset of packaging box production dates. The experimental results show that the algorithm proposed in this study can effectively locate the coding of cans at different positions on the roller conveyor and can accurately identify the coding numbers at high production line speeds. The Hmean value of the coding number detection is 97.32%, and the accuracy of the coding number recognition is 98.21%. This verifies that the algorithm proposed in this paper has high accuracy in coding number detection and recognition.
Keywords: Can coding recognition; differentiable binarization network; scene visual text recognition; model pruning and quantification; transport model
4. A Review of Artificial Intelligence Applications in Contemporary Computer Network Technologies
Authors: Ackim Lutepo, Kai Zhang. Communications and Network, 2024(3): 90-107 (18 pages).
Rapid advancement in science and technology has seen computer network technology being upgraded constantly, and computer technology, in particular, has been applied more and more extensively, which has brought convenience to people's lives. The number of people using the internet around the globe has also increased significantly, exerting a profound influence on artificial intelligence. Further, the constant upgrading and development of artificial intelligence has led to the continuous innovation and improvement of computer technology. Countries around the world have also registered an increase in investment, paying more attention to artificial intelligence. Through an analysis of the current development situation and the existing applications of artificial intelligence, this paper explicates the role of artificial intelligence in the face of the unceasing expansion of computer network technology.
Keywords: Artificial Intelligence; Network Technology; Internet of Things (IoT); Cybersecurity; Mobile Communication
5. Early identification of stroke through deep learning with multi-modal human speech and movement data
Authors: Zijun Ou, Haitao Wang, Bin Zhang, Haobang Liang, Bei Hu, Longlong Ren, Yanjuan Liu, Yuhu Zhang, Chengbo Dai, Hejun Wu, Weifeng Li, Xin Li. Neural Regeneration Research (SCIE, CAS), 2025(1): 234-241 (8 pages).
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life. Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Test (FAST) are commonly used for stroke screening, accurate administration is dependent on specialized training. In this study, we proposed a novel multimodal deep learning approach, based on the FAST, for assessing suspected stroke patients exhibiting symptoms such as limb weakness, facial paresis, and speech disorders in acute settings. We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements, facial expressions, and speech tests based on the FAST. We compared the constructed deep learning model, which was designed to process multi-modal datasets, with six prior models that achieved good action classification performance, including the I3D, SlowFast, X3D, TPN, TimeSformer, and MViT. We found that the findings of our deep learning model had a higher clinical value compared with the other approaches. Moreover, the multi-modal model outperformed its single-module variants, highlighting the benefit of utilizing multiple types of patient data, such as action videos and speech audio. These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early identification of stroke, thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
Keywords: artificial intelligence; deep learning; diagnosis; early detection; FAST; screening; stroke
6. Evolutionary Particle Swarm Optimization Algorithm Based on Collective Prediction for Deployment of Base Stations
Authors: Jiaying Shen, Donglin Zhu, Yujia Liu, Leyi Wang, Jialing Hu, Zhaolong Ouyang, Changjun Zhou, Taiyong Li. Computers, Materials & Continua (SCIE, EI), 2025(1): 345-369 (25 pages).
The wireless signals emitted by base stations serve as a vital link connecting people in today's society and have been occupying an increasingly important role in real life. The development of the Internet of Things (IoT) relies on the support of base stations, which provide a solid foundation for achieving a more intelligent way of living. In a specific area, achieving higher signal coverage with fewer base stations has become an urgent problem. Therefore, this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization (EPSO) algorithm based on collective prediction, referred to herein as ECPPSO. The introduction of a new strategy called neighbor-based evolution prediction (NEP) addresses the issue of premature convergence often encountered by PSO. ECPPSO also employs a strengthening evolution (SE) strategy to enhance the algorithm's global search capability and efficiency, ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems. To better adapt to the actual communication needs of base stations, this article conducts simulation experiments by changing the number of base stations. The experimental results demonstrate that, under the condition of 50 or more base stations, ECPPSO consistently achieves the best coverage rate, exceeding 95% and peaking at 99.4400% when the number of base stations reaches 80. These results validate the optimization capability of the ECPPSO algorithm, proving its feasibility and effectiveness. Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
Keywords: Particle swarm optimization; effective coverage area; global optimization; base station deployment
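The quantity ECPPSO optimizes is the effective coverage rate of a candidate base-station layout. The paper's exact objective and the NEP/SE update rules are not given in the abstract, so the sketch below is only a plausible stand-in: it evaluates coverage on a sampled grid, which is the kind of fitness function a PSO variant would maximize.

```python
# Hedged sketch of a coverage-rate objective for base-station placement.
# Grid resolution, area size and coverage radius are assumed values; the
# paper's actual ECPPSO update rules are not reproduced here.
import numpy as np

def coverage_rate(stations_xy, radius, area=(1000.0, 1000.0), grid=100):
    """Fraction of grid points within `radius` of at least one station."""
    xs = np.linspace(0, area[0], grid)
    ys = np.linspace(0, area[1], grid)
    gx, gy = np.meshgrid(xs, ys)
    points = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (grid*grid, 2)
    stations = np.asarray(stations_xy)                           # (n_stations, 2)
    d = np.linalg.norm(points[:, None, :] - stations[None, :, :], axis=2)
    covered = d.min(axis=1) <= radius
    return covered.mean()

# Toy usage: 50 random candidate stations with an assumed 120 m effective radius.
candidate = np.random.uniform(0, 1000, size=(50, 2))
print(f"Coverage rate: {coverage_rate(candidate, radius=120.0):.2%}")
```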
7. The Impact of COVID-19 on Cardiovascular Disease: A Machine Learning Predictive Study
Authors: Nidhi Priyadarshini, Phillip Smith. World Journal of Cardiovascular Diseases, 2025(2): 19-47 (29 pages).
The COVID-19 pandemic has profoundly impacted global health, with far-reaching consequences beyond respiratory complications. Increasing evidence highlights the link between COVID-19 and cardiovascular diseases (CVD), raising concerns about long-term health risks for those recovering from the virus. This study rigorously investigates the influence of COVID-19 on cardiovascular disease risk, focusing on conditions such as heart failure and myocardial infarction. Using a dataset of 52,683 individuals aged 30 to 80, including both COVID-19 survivors and those unaffected, the study employs machine learning models (logistic regression, decision trees, and random forests) to predict cardiovascular outcomes. The multifaceted approach allowed for a comprehensive evaluation of the models' predictive capabilities, with logistic regression yielding the highest binary F1 score of 0.94, effectively identifying cardiovascular risks in both the COVID-19 and non-COVID-19 groups. The correlation matrix revealed significant associations between COVID-19 and key symptoms of heart disease, emphasizing the need for early cardiovascular risk assessment. These findings underscore the importance of machine learning in enhancing early diagnosis and developing preventive strategies for COVID-19-related heart complications. Ultimately, this research contributes to a broader understanding of the pandemic's lasting health effects, highlighting the critical role of cardiovascular care in post-COVID-19 recovery.
Keywords: Cardiovascular Diseases; COVID-19; Logistic Regression; Decision Tree Classifier; Random Forest; F1 Macro
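To make the reported pipeline concrete, the following is a hedged sketch of the logistic-regression baseline and binary F1 evaluation on synthetic stand-in data; the real cohort's features and preprocessing are not described in the abstract, so everything below is illustrative.

```python
# Illustrative sketch only: synthetic data stands in for the study's cohort,
# and the feature set is an assumption. It shows the logistic-regression /
# binary-F1 evaluation pattern the abstract describes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=12, weights=[0.8, 0.2],
                           random_state=0)            # y = cardiovascular event yes/no
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)
print(f"Binary F1: {f1_score(y_test, pred):.2f}")      # the abstract reports 0.94 on the real data
```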
8. Foundations of Holographic Quantum Computation
Author: Logan Nye. Journal of Applied Mathematics and Physics, 2025(1): 11-60 (50 pages).
We present a comprehensive mathematical framework establishing the foundations of holographic quantum computing, a novel paradigm that leverages holographic phenomena to achieve superior error correction and algorithmic efficiency. We rigorously demonstrate that quantum information can be encoded and processed using holographic principles, establishing fundamental theorems characterizing the error-correcting properties of holographic codes. We develop a complete set of universal quantum gates with explicit constructions and prove exponential speedups for specific classes of computational problems. Our framework demonstrates that holographic quantum codes achieve a code rate scaling as O(1/log n), superior to traditional quantum LDPC codes, while providing inherent protection against errors via geometric properties of the code structures. We prove a threshold theorem establishing that arbitrary quantum computations can be performed reliably when physical error rates fall below a constant threshold. Notably, our analysis suggests certain algorithms, including those involving high-dimensional state spaces and long-range interactions, achieve exponential speedups over both classical and conventional quantum approaches. This work establishes the theoretical foundations for a new approach to quantum computation that provides natural fault tolerance and scalability, directly addressing longstanding challenges of the field.
Keywords: Holographic Quantum Computing; Error Correction; Universal Quantum Gates; Exponential Speedups; Fault Tolerance
9. SFPBL: Soft Filter Pruning Based on Logistic Growth Differential Equation for Neural Network
Authors: Can Hu, Shanqing Zhang, Kewei Tao, Gaoming Yang, Li Li. Computers, Materials & Continua, 2025(3): 4913-4930 (18 pages).
The surge of large-scale models in recent years has led to breakthroughs in numerous fields, but it has also introduced higher computational costs and more complex network architectures. These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization. To address this issue, various network compression techniques have been developed, such as network pruning. A typical pruning algorithm follows a three-step pipeline involving training, pruning, and retraining. Existing methods often directly set the pruned filters to zero during retraining, significantly reducing the parameter space. However, this direct pruning strategy frequently results in irreversible information loss. In the early stages of training, a network still contains much uncertainty, and evaluating filter importance may not be sufficiently rigorous. To manage the pruning process effectively, this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation, considering the characteristics of network training. Unlike other pruning algorithms that directly reduce filter weights, this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation. It employs a gentle decay rate in the initial training stage, a rapid decay rate during the intermediate stage, and a slower decay rate in the network convergence stage. Additionally, the decay rate is adjusted adaptively based on the filter weights at each stage. By controlling the adaptive decay rate at each stage, the pruning of neural network filters can be effectively managed. In experiments conducted on the CIFAR-10 and ILSVRC-2012 datasets, the pruning of neural networks significantly reduces the floating-point operations while maintaining the same pruning rate. Specifically, when implementing a 30% pruning rate on the ResNet-110 network, the pruned neural network not only decreases floating-point operations by 40.8% but also enhances the classification accuracy by 0.49% compared to the original network.
Keywords: Filter pruning; channel pruning; CNN complexity; deep neural networks; filtering theory; logistic model
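The core idea is a decay multiplier for pruned filters that follows a logistic, slow-fast-slow curve over training rather than zeroing them outright. The paper's exact decay law and its adaptive coupling to filter magnitudes are not given in the abstract, so the sketch below only illustrates the three-stage shape using the closed-form logistic solution.

```python
# Hedged sketch: a logistic-shaped decay schedule for pruned filter weights.
# The growth rate r, the midpoint and the way the schedule couples to filter
# magnitudes are assumptions; only the slow-fast-slow shape follows the abstract.
import numpy as np

def logistic_decay_factor(epoch, total_epochs, r=10.0):
    """Multiplier in (0, 1] applied to pruned filters; decays along a logistic curve."""
    t = epoch / total_epochs                                   # normalized training progress
    decayed_fraction = 1.0 / (1.0 + np.exp(-r * (t - 0.5)))    # logistic growth solution
    return 1.0 - decayed_fraction                              # gentle early, rapid mid, gentle late

total_epochs = 100
for epoch in (0, 25, 50, 75, 100):
    print(epoch, round(float(logistic_decay_factor(epoch, total_epochs)), 3))

# During retraining, pruned filters would be scaled rather than zeroed outright, e.g.:
# filter_weights[pruned_idx] *= logistic_decay_factor(epoch, total_epochs)
```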
10. PathActMarker: an R package for inferring pathway activity of complex diseases
Authors: Xingyi LI, Jun HAO, Zhelin ZHAO, Junming LI, Xingyu LIAO, Min LI, Xuequn SHANG. Frontiers of Computer Science, 2025(3): 123-124 (2 pages).
1 Introduction
The process of complex diseases is closely linked to the disruption of key biological pathways; it is therefore crucial to identify the dysfunctional pathways and quantify the degree of dysregulation at the individual sample level [1].
Keywords: pathway activity; dysregulation; complex diseases; biological pathways; disruption; key biological pathways; identify dysfunctional pathways
11. Teaching Reform and Practice of Statistics Courses in Big Data Management and Applications Major in the Context of New Quality Productivity
Authors: Tinghui Huang, Junchao Dong, Liang Min. Journal of Contemporary Educational Research, 2025(2): 23-31 (9 pages).
In the new era, the impact of emerging productive forces has permeated every sector of industry. As the core production factor of these forces, data plays a pivotal role in industrial transformation and social development. Consequently, many domestic universities have introduced majors or courses related to big data. Among these, the Big Data Management and Applications major stands out for its interdisciplinary approach and emphasis on practical skills. However, as an emerging field, it has not yet accumulated a robust foundation in teaching theory and practice. Current instructional practices face issues such as unclear training objectives, inconsistent teaching methods and course content, insufficient integration of practical components, and a shortage of qualified faculty; these factors hinder both the development of the major and the overall quality of education. Taking the statistics course within the Big Data Management and Applications major as an example, this paper examines the challenges faced by statistics education in the context of emerging productive forces and proposes corresponding improvement measures. By introducing innovative teaching concepts and strategies, the teaching system for professional courses is optimized, and authentic classroom scenarios are recreated through illustrative examples. Questionnaire surveys and statistical analyses of data collected before and after the teaching reforms indicate that the curriculum changes effectively enhance instructional outcomes, promote the development of the major, and improve the quality of talent cultivation.
Keywords: New quality productivity; Big data; Compound talents; Statistics course; Teaching examples
12. DeepSwarm: towards swarm deep learning with bi-directional optimization of data acquisition and processing
Authors: Sicong LIU, Bin GUO, Ziqi WANG, Lehao WANG, Zimu ZHOU, Xiaochen LI, Zhiwen YU. Frontiers of Computer Science, 2025(3): 125-127 (3 pages).
1 Introduction
On-device deep learning (DL) on mobile and embedded IoT devices drives various applications [1] like robotics image recognition [2] and drone swarm classification [3]. Efficient local data processing preserves privacy, enhances responsiveness, and saves bandwidth. However, current on-device DL relies on predefined patterns, leading to accuracy and efficiency bottlenecks. It is difficult to provide feedback on data processing performance during the data acquisition stage, as processing typically occurs after data acquisition.
Keywords: drone swarm classification; efficient local data processing; data processing; deep learning; on-device deep learning; bi-directional optimization; IoT devices; swarm deep learning
13. Aspect-Level Sentiment Analysis of Bi-Graph Convolutional Networks Based on Enhanced Syntactic Structural Information
Authors: Junpeng Hu, Yegang Li. Journal of Computer and Communications, 2025(1): 72-89 (18 pages).
Aspect-oriented sentiment analysis is a meticulous sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most of the current research builds graph convolutional networks based on dependency syntactic trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependency syntactic trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both semantic and syntactic structural information of nodes, which affects the final sentence classification. To cope with the above problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependency syntactic tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic information feature representations of the sentences are obtained by the graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.
Keywords: Aspect-Level Sentiment Analysis; Sentiment Knowledge; Multi-Head Attention Mechanism; Graph Convolutional Networks
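The structural channel of the model reduces to GCN propagation over an adjacency that merges the dependency tree with the hierarchical phrase matrix. As a hedged illustration (the paper's fusion rule and layer sizes are not specified in the abstract), the sketch below combines the two matrices element-wise and applies one standard normalized GCN layer.

```python
# Illustrative sketch of one GCN layer over a combined syntactic adjacency.
# The element-wise maximum used to merge the two matrices and the layer size
# are assumptions; the standard GCN rule H' = ReLU(D^-1/2 A D^-1/2 H W) is used.
import numpy as np

def gcn_layer(adj, features, weight):
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))  # symmetric degree normalization
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight, 0.0)  # ReLU

n_tokens, in_dim, out_dim = 6, 16, 8
dependency_adj = np.random.randint(0, 2, (n_tokens, n_tokens)).astype(float)
dependency_adj = np.maximum(dependency_adj, dependency_adj.T)   # symmetric dependency links
phrase_matrix = np.random.rand(n_tokens, n_tokens)              # stand-in hierarchical phrase matrix
combined = np.maximum(dependency_adj, phrase_matrix)            # assumed fusion of the two channels

H = np.random.randn(n_tokens, in_dim)                           # token embeddings
W = np.random.randn(in_dim, out_dim) * 0.1
print(gcn_layer(combined, H, W).shape)                          # -> (6, 8)
```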
14. Quality Assurance and Evaluation of Software Engineering Education in China and Europe: Theoretical Framework and Practical Pathways
Authors: Jianguo Chen, Mingzhi Mao, Xiangyuan Zhu, Qingzhen Xu, Zibin Zheng. 计算机教育 (Computer Education), 2025(3): 145-155 (11 pages).
With the rapid advancement of information technology, the quality assurance and evaluation of software engineering education have become pivotal concerns for higher education institutions. In this paper, we focus on a comparative study of software engineering education in China and Europe, aiming to explore the theoretical frameworks and practical pathways employed in both regions. Initially, we introduce and contrast the engineering education accreditation systems of China and Europe, including the Chinese engineering education accreditation framework and the European EUR-ACE (European Accreditation of Engineering Programmes) standards, highlighting their core principles and evaluation methodologies. Subsequently, we provide case studies of several universities in China and Europe, such as Sun Yat-sen University, Tsinghua University, the Technical University of Munich, and Imperial College London. Finally, we offer recommendations to foster mutual learning and collaboration between Chinese and European institutions, aiming to enhance the overall quality of software engineering education globally. This work provides valuable insights for educational administrators, faculty members, and policymakers, contributing to the ongoing improvement and innovative development of software engineering education in China and Europe.
Keywords: Software engineering education; Quality assurance and evaluation; Chinese engineering education accreditation; European accreditation of engineering programmes
15. Recognition of carrier-based aircraft flight deck operations based on dynamic graph
Authors: Xingyu GUO, Jiaxin LI, Hua WANG, Junnan LIU, Yafei LI, Mingliang XU. Chinese Journal of Aeronautics, 2025(3): 474-490 (17 pages).
Accurate recognition of flight deck operations for carrier-based aircraft, based on operation trajectories, is critical for optimizing carrier-based aircraft performance. This recognition involves understanding short-term and long-term spatial collaborative relationships among support agents and positions from long spatial–temporal trajectories. While the existing methods excel at recognizing collaborative behaviors from short trajectories, they often struggle with long spatial–temporal trajectories. To address this challenge, this paper introduces a dynamic graph method to enhance flight deck operation recognition. First, spatial–temporal collaborative relationships are modeled as a dynamic graph. Second, a discretized and compressed method is proposed to assign values to the states of this dynamic graph. To extract features that represent diverse collaborative relationships among agents and account for the duration of these relationships, a biased random walk is then conducted. Subsequently, the Swin Transformer is employed to comprehend spatial–temporal collaborative relationships, and a fully connected layer is applied to deck operation recognition. Finally, to address the scarcity of real datasets, a simulation pipeline is introduced to generate deck operations in virtual flight deck scenarios. Experimental results on the simulation dataset demonstrate the superior performance of the proposed method.
Keywords: Carrier-based aircraft; Flight deck operation; Operation recognition; Long spatial-temporal trajectories; Dynamic graph; Biased random walk; Graph embeddings
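The biased random walk is what turns the dynamic graph's collaborative relationships into node sequences for embedding. The paper's bias terms are not spelled out in the abstract, so the sketch below simply weights each step by edge weight, in the spirit of node2vec-style walks; the toy graph and weights are assumptions.

```python
# Hedged sketch of an edge-weight-biased random walk on a small weighted graph.
# The graph, weights and walk length are toy values; the paper's actual bias
# terms (relationship duration, agent type, etc.) are not reproduced.
import random

graph = {                      # node -> list of (neighbor, edge weight)
    "tractor": [("aircraft", 3.0), ("deck_crew", 1.0)],
    "aircraft": [("tractor", 3.0), ("fuel_team", 2.0)],
    "deck_crew": [("tractor", 1.0), ("fuel_team", 1.0)],
    "fuel_team": [("aircraft", 2.0), ("deck_crew", 1.0)],
}

def biased_walk(start, length, rng=random.Random(0)):
    walk = [start]
    for _ in range(length - 1):
        neighbors, weights = zip(*graph[walk[-1]])
        walk.append(rng.choices(neighbors, weights=weights, k=1)[0])  # weight-proportional step
    return walk

print(biased_walk("tractor", 8))
```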
16. Exploration of NWDAF Development Architecture for 6G AI-Native Networks
Authors: HE Shiwen, PENG Shilin, DONG Haolei, WANG Liangpeng, AN Zhenyu. ZTE Communications, 2025(1): 45-52 (8 pages).
Artificial intelligence (AI)-native communication is considered one of the key technologies for the development of 6G mobile communication networks. This paper investigates the architecture for developing the network data analytics function (NWDAF) in 6G AI-native networks. The architecture integrates two key components: data collection and management, and model training and management. It achieves real-time data collection and management, establishing a complete workflow encompassing AI model training, deployment, and intelligent decision-making. The architecture workflow is evaluated through a vertical scaling use case by constructing an AI-native network testbed on Kubernetes. Within this proposed NWDAF, several machine learning (ML) models are trained to make vertical scaling decisions for user plane function (UPF) instances based on data collected from various network functions (NFs). These decisions are executed through the Kubernetes API, which dynamically allocates appropriate resources to UPF instances. The experimental results show that all implemented models demonstrate satisfactory predictive capabilities. Moreover, compared with the threshold-based method in Kubernetes, all models show a significant advantage in response time. This study not only introduces a novel AI-native NWDAF architecture but also demonstrates the potential of AI models to significantly improve network management and resource scaling in 6G networks.
Keywords: 6G; AI-native; NWDAF; UPF scaling
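The scaling decision is ultimately applied through the Kubernetes API. The paper does not show its client code, so the sketch below is only a plausible shape for that step using the official Python client: it patches the CPU requests/limits of an assumed UPF deployment once an ML model has predicted the required resources; all names and values are hypothetical.

```python
# Hedged sketch of applying a vertical-scaling decision via the Kubernetes API.
# The deployment/namespace/container names and the predicted CPU value are
# assumptions; this is not the paper's implementation.
from kubernetes import client, config

def scale_upf_cpu(predicted_millicores: int,
                  deployment="upf", namespace="core5g", container="upf"):
    config.load_kube_config()              # or config.load_incluster_config() inside a pod
    patch = {"spec": {"template": {"spec": {"containers": [{
        "name": container,
        "resources": {
            "requests": {"cpu": f"{predicted_millicores}m"},
            "limits": {"cpu": f"{predicted_millicores * 2}m"},
        },
    }]}}}}
    client.AppsV1Api().patch_namespaced_deployment(
        name=deployment, namespace=namespace, body=patch)

# Example (requires cluster access): scale_upf_cpu(500) after an NWDAF model
# predicts the UPF needs roughly 500 millicores.
```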
17. A Software Defect Prediction Method Using a Multivariate Heterogeneous Hybrid Deep Learning Algorithm
Authors: Qi Fei, Haojun Hu, Guisheng Yin, Zhian Sun. Computers, Materials & Continua, 2025(2): 3251-3279 (29 pages).
Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This approach may overlook the differences and potential value of various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, Abstract Syntax Tree (AST), Code Dependency Network (CDN), and code static quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated using ten PROMISE defect repository projects, and performance is assessed with three metrics: F1, Area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.
Keywords: Software defect prediction; multiple heterogeneous data; graph convolutional network models based on adjacency and spatial topologies; CNN-BiLSTM; TabNet
18. Maximum Power Point Tracking Control of Offshore Wind-Photovoltaic Hybrid Power Generation System with Crane-Assisted
Authors: Xiangyang Cao, Yaojie Zheng, Hanbin Xiao, Min Xiao. Computer Modeling in Engineering & Sciences, 2025(4): 289-334 (46 pages).
This study investigates the Maximum Power Point Tracking (MPPT) control method of an offshore wind-photovoltaic hybrid power generation system with offshore crane assistance. A new algorithm, Global Fast Integral Sliding Mode Control (GFISMC), is proposed based on the tip speed ratio method and sliding mode control. The algorithm uses a fast integral sliding mode surface and fuzzy fast switching control items to ensure that the offshore wind power generation system can track the maximum power point quickly and with low jitter. An offshore wind power generation system model is presented to verify the algorithm's effect. An offshore off-grid wind-solar hybrid power generation system is built in MATLAB/Simulink. Compared with other MPPT algorithms, this study achieves specific quantitative improvements in terms of convergence speed, tracking accuracy and computational efficiency. Finally, the improved algorithm is further analyzed and tested using Yuankuan Energy's ModelingTech semi-physical simulation platform. The results verify the feasibility and effectiveness of the improved algorithm in the offshore wind-solar hybrid power generation system.
Keywords: Offshore wind power generation efficiency; maximum power point tracking (MPPT); integral sliding mode control; grey wolf optimization algorithm; offshore photovoltaic cells
19. Large eddy simulation of low-Reynolds-number flow past the SD7003 airfoil with an improved high-precision IPDG method
Authors: Shixi Hao, Ming Zhao, Qiushi Ding, Jiabing Xiao, Yanan Chen, Wei Liu, Xiaojian Li, Zhengxian Liu. Acta Mechanica Sinica, 2025(2): 70-87 (18 pages).
At low Reynolds numbers, the performance of an airfoil is known to be greatly affected by the formation and burst of a laminar separation bubble (LSB), which requires a more precise simulation of the delicate flow structures. A framework based on the interior penalty discontinuous Galerkin method and the large eddy simulation approach was adopted in the present study. The performances of various subgrid models, including the Smagorinsky (SM) model, the dynamic Smagorinsky (DSM) model, the wall-adapting local-eddy-viscosity (WALE) model, and the VREMAN model, have been analyzed through flow simulations of the SD7003 airfoil at a Reynolds number of 60,000. It turns out that the SM model fails to predict the emergence of the LSB, even when modified by the Van Driest damping function. On the contrary, the best agreement is generally achieved by the WALE model in terms of flow separation, reattachment, and transition locations, together with the aerodynamic loads. Furthermore, the influence of numerical dissipation has also been discussed through the comparison of skin friction and resolved Reynolds stresses. As numerical dissipation decreases, the prediction accuracy of the WALE model degrades. Meanwhile, nonlinear variation can be observed in the performance of the DSM model, which could be attributed to the interaction between the numerical dissipation and the subgrid model.
Keywords: Discontinuous Galerkin; Interior penalty method; Subgrid-scale model; Large eddy simulation; Laminar separation
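For readers comparing the subgrid closures named above, the classical Smagorinsky (SM) model computes the subgrid eddy viscosity from the resolved strain rate, and the Van Driest correction damps its length scale near walls. The expressions below are the commonly quoted textbook forms, not values or formulations taken from this paper.

```latex
% Standard Smagorinsky subgrid eddy viscosity with Van Driest wall damping
\nu_{t} = \left(C_s \Delta\right)^{2} \left|\bar{S}\right|,
\qquad
\left|\bar{S}\right| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
\qquad
\Delta \;\rightarrow\; \Delta\left(1 - e^{-y^{+}/A^{+}}\right),\quad A^{+} \approx 25 .
```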
20. Enhancing the generalization capability of 2D array pointer networks through multiple teacher-forcing knowledge distillation
Authors: Qidong Liu, Xin Shen, Chaoyue Liu, Dong Chen, Xin Zhou, Mingliang Xu. Journal of Automation and Intelligence, 2025(1): 29-38 (10 pages).
The Heterogeneous Capacitated Vehicle Routing Problem (HCVRP), which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost, poses an NP-hard challenge in combinatorial optimization. Recently, reinforcement learning approaches such as 2D Array Pointer Networks (2D-Ptr) have demonstrated remarkable speed in decision-making by modeling multiple agents' concurrent choices as a sequence of consecutive actions. However, these learning-based models often struggle with generalization, meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining. Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model, we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation (MTKD). We initially train 12 unique 2D-Ptr models under various settings to serve as teacher models. Subsequently, we randomly sample a teacher model and a batch of problem instances, focusing on those where the chosen teacher performed best. This teacher model then solves these instances, generating high-reward action sequences to guide knowledge transfer to the student model. We conduct rigorous evaluations across four distinct datasets, each comprising four HCVRP instances of varying scales. Our empirical findings underscore the proposed method's superiority over existing learning-based methods in terms of both computational efficiency and solution quality.
Keywords: Vehicle routing problem; Multi-teacher knowledge distillation; Teacher-forcing; Pointer network
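The distillation loop described above has a simple skeleton: sample a teacher, keep the instances it solves best, and update the student with teacher forcing on the teacher's action sequences. The sketch below shows only that selection-and-imitation skeleton; `teachers`, `student`, `solve`, `reward` and `imitation_update` are hypothetical placeholders, not the paper's 2D-Ptr implementation.

```python
# Hedged skeleton of the multi-teacher, teacher-forcing distillation loop.
# `teachers`, `student`, `solve`, `reward` and `imitation_update` are
# placeholder callables standing in for the paper's 2D-Ptr models.
import random

def mtkd_step(teachers, student, instance_batch, rng=random.Random(0)):
    teacher = rng.choice(teachers)                       # randomly sample one teacher model
    scored = [(inst, teacher.reward(teacher.solve(inst))) for inst in instance_batch]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    best_instances = [inst for inst, _ in scored[: len(scored) // 2]]  # keep where the teacher excels

    for inst in best_instances:
        actions = teacher.solve(inst)                    # high-reward action sequence
        student.imitation_update(inst, actions)          # teacher-forced student update
```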