The Internet of Things(IoT)has orchestrated various domains in numerous applications,contributing significantly to the growth of the smart world,even in regions with low literacy rates,boosting socio-economic developm...The Internet of Things(IoT)has orchestrated various domains in numerous applications,contributing significantly to the growth of the smart world,even in regions with low literacy rates,boosting socio-economic development.This study provides valuable insights into optimizing wireless communication,paving the way for a more connected and productive future in the mining industry.The IoT revolution is advancing across industries,but harsh geometric environments,including open-pit mines,pose unique challenges for reliable communication.The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency(RF)protocols such as Bluetooth,Wi-Fi,GSM/GPRS,Narrow Band(NB)-IoT,SigFox,ZigBee,and Long Range Wireless Area Network(LoRaWAN).This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols such as ZigBee and LoRaWAN.Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation.ZigBee is tested in the Tadicherla open-cast coal mine in India.Similarly,LoRaWAN field tests are conducted at one of the associated cement companies(ACC)in the limestone mine in Bargarh,India,covering both Indoor-toOutdoor(I2O)and Outdoor-to-Outdoor(O2O)environments.A robust framework of path-loss models,referred to as Free space,Egli,Okumura-Hata,Cost231-Hata and Ericsson models,combined with key performance metrics,is employed to evaluate the patterns of signal attenuation.Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment,with a coefficient of determination(R^(2))of 0.907,balanced error metrics such as Normalized Root Mean Square Error(NRMSE)of 0.030,Mean Square Error(MSE)of 4.950,Mean Absolute Percentage Error(MAPE)of 0.249 and Scatter Index(SI)of 2.723.In the O2O scenario,the Ericsson model showed superior performance,with the highest R^(2)value of 0.959,supported by strong correlation metrics:NRMSE of 0.026,MSE of 8.685,MAPE of 0.685,Mean Absolute Deviation(MAD)of 20.839 and SI of 2.194.For the LoRaWAN protocol,the Cost-231 model achieved the highest R^(2)value of 0.921 in the I2O scenario,complemented by the lowest metrics:NRMSE of 0.018,MSE of 1.324,MAPE of 0.217,MAD of 9.218 and SI of 1.238.In the O2O environment,the Okumura-Hata model achieved the highest R^(2)value of 0.978,indicating a strong fit with metrics NRMSE of 0.047,MSE of 27.807,MAPE of 27.494,MAD of 37.287 and SI of 3.927.This advancement in reliable communication networks promises to transform the opencast landscape into networked signal attenuation.These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.展开更多
Electroencephalography(EEG)is a non-invasive measurement method for brain activity.Due to its safety,high resolution,and hypersensitivity to dynamic changes in brain neural signals,EEG has aroused much interest in sci...Electroencephalography(EEG)is a non-invasive measurement method for brain activity.Due to its safety,high resolution,and hypersensitivity to dynamic changes in brain neural signals,EEG has aroused much interest in scientific research and medical felds.This article reviews the types of EEG signals,multiple EEG signal analysis methods,and the application of relevant methods in the neuroscience feld and for diagnosing neurological diseases.First,3 types of EEG signals,including time-invariant EEG,accurate event-related EEG,and random event-related EEG,are introduced.Second,5 main directions for the methods of EEG analysis,including power spectrum analysis,time-frequency analysis,connectivity analysis,source localization methods,and machine learning methods,are described in the main section,along with diferent sub-methods and effect evaluations for solving the same problem.Finally,the application scenarios of different EEG analysis methods are emphasized,and the advantages and disadvantages of similar methods are distinguished.This article is expected to assist researchers in selecting suitable EEG analysis methods based on their research objectives,provide references for subsequent research,and summarize current issues and prospects for the future.展开更多
A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can ...A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition.In the coding number detection stage,Differentiable Binarization Network is used as the backbone network,combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model detection effect.In terms of text recognition,using the Scene Visual Text Recognition coding number recognition network for end-to-end training can alleviate the problem of coding recognition errors caused by image color distortion due to variations in lighting and background noise.In addition,model pruning and quantization are used to reduce the number ofmodel parameters to meet deployment requirements in resource-constrained environments.A comparative experiment was conducted using the dataset of tank bottom spray code numbers collected on-site,and a transfer experiment was conducted using the dataset of packaging box production date.The experimental results show that the algorithm proposed in this study can effectively locate the coding of cans at different positions on the roller conveyor,and can accurately identify the coding numbers at high production line speeds.The Hmean value of the coding number detection is 97.32%,and the accuracy of the coding number recognition is 98.21%.This verifies that the algorithm proposed in this paper has high accuracy in coding number detection and recognition.展开更多
Rapid advancement in science and technology has seen computer network technology being upgraded constantly, and computer technology, in particular, has been applied more and more extensively, which has brought conveni...Rapid advancement in science and technology has seen computer network technology being upgraded constantly, and computer technology, in particular, has been applied more and more extensively, which has brought convenience to people’s lives. The number of people using the internet around the globe has also increased significantly, exerting a profound influence on artificial intelligence. Further, the constant upgrading and development of artificial intelligence has led to the continuous innovation and improvement of computer technology. Countries around the world have also registered an increase in investment, paying more attention to artificial intelligence. Through an analysis of the current development situation and the existing applications of artificial intelligence, this paper explicates the role of artificial intelligence in the face of the unceasing expansion of computer network technology.展开更多
Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are...Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are commonly used for stroke screening,accurate administration is dependent on specialized training.In this study,we proposed a novel multimodal deep learning approach,based on the FAST,for assessing suspected stroke patients exhibiting symptoms such as limb weakness,facial paresis,and speech disorders in acute settings.We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements,facial expressions,and speech tests based on the FAST.We compared the constructed deep learning model,which was designed to process multi-modal datasets,with six prior models that achieved good action classification performance,including the I3D,SlowFast,X3D,TPN,TimeSformer,and MViT.We found that the findings of our deep learning model had a higher clinical value compared with the other approaches.Moreover,the multi-modal model outperformed its single-module variants,highlighting the benefit of utilizing multiple types of patient data,such as action videos and speech audio.These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification of stroke,thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.展开更多
The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and have been occupying an increasingly important role in real life.The development of the Internet of Things(I...The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and have been occupying an increasingly important role in real life.The development of the Internet of Things(IoT)relies on the support of base stations,which provide a solid foundation for achieving a more intelligent way of living.In a specific area,achieving higher signal coverage with fewer base stations has become an urgent problem.Therefore,this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization(EPSO)algorithm based on collective prediction,referred to herein as ECPPSO.Introducing a new strategy called neighbor-based evolution prediction(NEP)addresses the issue of premature convergence often encountered by PSO.ECPPSO also employs a strengthening evolution(SE)strategy to enhance the algorithm’s global search capability and efficiency,ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems.To better adapt to the actual communication needs of base stations,this article conducts simulation experiments by changing the number of base stations.The experimental results demonstrate thatunder the conditionof 50 ormore base stations,ECPPSOconsistently achieves the best coverage rate exceeding 95%,peaking at 99.4400%when the number of base stations reaches 80.These results validate the optimization capability of the ECPPSO algorithm,proving its feasibility and effectiveness.Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.展开更多
The COVID-19 pandemic has profoundly impacted global health, with far-reaching consequences beyond respiratory complications. Increasing evidence highlights the link between COVID-19 and cardiovascular diseases (CVD),...The COVID-19 pandemic has profoundly impacted global health, with far-reaching consequences beyond respiratory complications. Increasing evidence highlights the link between COVID-19 and cardiovascular diseases (CVD), raising concerns about long-term health risks for those recovering from the virus. This study rigorously investigates the influence of COVID-19 on cardiovascular disease risk, focusing on conditions such as heart failure and myocardial infarction. Using a dataset of 52,683 individuals aged 30 to 80, including both COVID-19 survivors and those unaffected, the study employs machine learning models—logistic regression, decision trees, and random forests—to predict cardiovascular outcomes. The multifaceted approach allowed for a comprehensive evaluation of the model’s predictive capabilities, with logistic regression yielding the highest Binary F1 score of 0.94, effectively identifying cardiovascular risks in both the COVID-19 and non-COVID-19 groups. The correlation matrix revealed significant associations between COVID-19 and key symptoms of heart disease, emphasizing the need for early cardiovascular risk assessment. These findings underscore the importance of machine learning in enhancing early diagnosis and developing preventive strategies for COVID-19-related heart complications. Ultimately, this research contributes to a broader understanding of the pandemic’s lasting health effects, highlighting the critical role of cardiovascular care in post-COVID-19 recovery.展开更多
We present a comprehensive mathematical framework establishing the foundations of holographic quantum computing, a novel paradigm that leverages holographic phenomena to achieve superior error correction and algorithm...We present a comprehensive mathematical framework establishing the foundations of holographic quantum computing, a novel paradigm that leverages holographic phenomena to achieve superior error correction and algorithmic efficiency. We rigorously demonstrate that quantum information can be encoded and processed using holographic principles, establishing fundamental theorems characterizing the error-correcting properties of holographic codes. We develop a complete set of universal quantum gates with explicit constructions and prove exponential speedups for specific classes of computational problems. Our framework demonstrates that holographic quantum codes achieve a code rate scaling as O(1/logn), superior to traditional quantum LDPC codes, while providing inherent protection against errors via geometric properties of the code structures. We prove a threshold theorem establishing that arbitrary quantum computations can be performed reliably when physical error rates fall below a constant threshold. Notably, our analysis suggests certain algorithms, including those involving high-dimensional state spaces and long-range interactions, achieve exponential speedups over both classical and conventional quantum approaches. This work establishes the theoretical foundations for a new approach to quantum computation that provides natural fault tolerance and scalability, directly addressing longstanding challenges of the field.展开更多
The surge of large-scale models in recent years has led to breakthroughs in numerous fields,but it has also introduced higher computational costs and more complex network architectures.These increasingly large and int...The surge of large-scale models in recent years has led to breakthroughs in numerous fields,but it has also introduced higher computational costs and more complex network architectures.These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization.To address this issue,various network compression techniques have been developed,such as network pruning.A typical pruning algorithm follows a three-step pipeline involving training,pruning,and retraining.Existing methods often directly set the pruned filters to zero during retraining,significantly reducing the parameter space.However,this direct pruning strategy frequently results in irreversible information loss.In the early stages of training,a network still contains much uncertainty,and evaluating filter importance may not be sufficiently rigorous.To manage the pruning process effectively,this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation,considering the characteristics of network training.Unlike other pruning algorithms that directly reduce filter weights,this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation.It employs a gentle decay rate in the initial training stage,a rapid decay rate during the intermediate stage,and a slower decay rate in the network convergence stage.Additionally,the decay rate is adjusted adaptively based on the filter weights at each stage.By controlling the adaptive decay rate at each stage,the pruning of neural network filters can be effectively managed.In experiments conducted on the CIFAR-10 and ILSVRC-2012 datasets,the pruning of neural networks significantly reduces the floating-point operations while maintaining the same pruning rate.Specifically,when implementing a 30%pruning rate on the ResNet-110 network,the pruned neural network not only decreases floating-point operations by 40.8%but also enhances the classification accuracy by 0.49%compared to the original network.展开更多
In the new era,the impact of emerging productive forces has permeated every sector of industry.As the core production factor of these forces,data plays a pivotal role in industrial transformation and social developmen...In the new era,the impact of emerging productive forces has permeated every sector of industry.As the core production factor of these forces,data plays a pivotal role in industrial transformation and social development.Consequently,many domestic universities have introduced majors or courses related to big data.Among these,the Big Data Management and Applications major stands out for its interdisciplinary approach and emphasis on practical skills.However,as an emerging field,it has not yet accumulated a robust foundation in teaching theory and practice.Current instructional practices face issues such as unclear training objectives,inconsistent teaching methods and course content,insufficient integration of practical components,and a shortage of qualified faculty-factors that hinder both the development of the major and the overall quality of education.Taking the statistics course within the Big Data Management and Applications major as an example,this paper examines the challenges faced by statistics education in the context of emerging productive forces and proposes corresponding improvement measures.By introducing innovative teaching concepts and strategies,the teaching system for professional courses is optimized,and authentic classroom scenarios are recreated through illustrative examples.Questionnaire surveys and statistical analyses of data collected before and after the teaching reforms indicate that the curriculum changes effectively enhance instructional outcomes,promote the development of the major,and improve the quality of talent cultivation.展开更多
Aspect-oriented sentiment analysis is a meticulous sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most of the current research builds graph convolutional networks based on dep...Aspect-oriented sentiment analysis is a meticulous sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most of the current research builds graph convolutional networks based on dependent syntactic trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependent syntactic trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both semantic and syntactic structural information of nodes, which affects the final sentence classification. To cope with the above problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependent syntactic tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic information feature representations of the sentences are obtained by the graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.展开更多
With the rapid advancement of information technology,the quality assurance and evaluation of software engineering education have become pivotal concerns for higher education institutions.In this paper,we focus on a co...With the rapid advancement of information technology,the quality assurance and evaluation of software engineering education have become pivotal concerns for higher education institutions.In this paper,we focus on a comparative study of software engineering education in China and Europe,aiming to explore the theoretical frameworks and practical pathways employed in both regions.Initially,we introduce and contrast the engineering education accreditation systems of China and Europe,including the Chinese engineering education accreditation framework and the European EUR-ACE(European Accreditation of Engineering Programmes)standards,highlighting their core principles and evaluation methodologies.Subsequently,we provide case studies of several universities in China and Europe,such as Sun Yat-sen University,Tsinghua University,Technical University of Munich,and Imperial College London.Finally,we offer recommendations to foster mutual learning and collaboration between Chinese and European institutions,aiming to enhance the overall quality of software engineering education globally.This work provides valuable insights for educational administrators,faculty members,and policymakers,contributing to the ongoing improvement and innovative development of software engineering education in China and Europe.展开更多
Artificial intelligence(AI)-native communication is considered one of the key technologies for the development of 6G mobile communication networks.This paper investigates the architecture for developing the network da...Artificial intelligence(AI)-native communication is considered one of the key technologies for the development of 6G mobile communication networks.This paper investigates the architecture for developing the network data analytics function(NWDAF)in 6G AI-native networks.The architecture integrates two key components:data collection and management,and model training and management.It achieves real-time data collection and management,establishing a complete workflow encompassing AI model training,deployment,and intelligent decision-making.The architecture workflow is evaluated through a vertical scaling use case by constructing an AI-native network testbed on Kubernetes.Within this proposed NWDAF,several machine learning(ML)models are trained to make vertical scaling decisions for user plane function(UPF)instances based on data collected from various network functions(NFs).These decisions are executed through the Ku-bernetes API,which dynamically allocates appropriate resources to UPF instances.The experimental results show that all implemented models demonstrate satisfactory predictive capabilities.Moreover,compared with the threshold-based method in Kubernetes,all models show a significant advantage in response time.This study not only introduces a novel AI-native NWDAF architecture but also demonstrates the potential of AI models to significantly improve network management and resource scaling in 6G networks.展开更多
Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detecti...Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This approach may overlook the differences and potential value of various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, Abstract Syntax Tree (AST), Code Dependency Network (CDN), and code static quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated using ten promise defect repository projects, and performance is assessed with three metrics: F1, Area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.展开更多
This study investigates the Maximum Power Point Tracking(MPPT)control method of offshore windphotovoltaic hybrid power generation system with offshore crane-assisted.A new algorithm of Global Fast Integral Sliding Mod...This study investigates the Maximum Power Point Tracking(MPPT)control method of offshore windphotovoltaic hybrid power generation system with offshore crane-assisted.A new algorithm of Global Fast Integral Sliding Mode Control(GFISMC)is proposed based on the tip speed ratio method and sliding mode control.The algorithm uses fast integral sliding mode surface and fuzzy fast switching control items to ensure that the offshore wind power generation system can track the maximum power point quickly and with low jitter.An offshore wind power generation system model is presented to verify the algorithm effect.An offshore off-grid wind-solar hybrid power generation systemis built in MATLAB/Simulink.Compared with other MPPT algorithms,this study has specific quantitative improvements in terms of convergence speed,tracking accuracy or computational efficiency.Finally,the improved algorithm is further analyzed and carried out by using Yuankuan Energy’s ModelingTech semi-physical simulation platform.The results verify the feasibility and effectiveness of the improved algorithm in the offshore wind-solar hybrid power generation system.展开更多
The Heterogeneous Capacitated Vehicle Routing Problem(HCVRP),which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost,poses an NP-hard challenge in combin...The Heterogeneous Capacitated Vehicle Routing Problem(HCVRP),which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost,poses an NP-hard challenge in combinatorial optimization.Recently,reinforcement learning approaches such as 2D Array Pointer Networks(2D-Ptr)have demonstrated remarkable speed in decision-making by modeling multiple agents’concurrent choices as a sequence of consecutive actions.However,these learning-based models often struggle with generalization,meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining.Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model,we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation(MTKD).We initially train 12 unique 2D-Ptr models under various settings to serve as teacher models.Subsequently,we randomly sample a teacher model and a batch of problem instances,focusing on those where the chosen teacher performed best.This teacher model then solves these instances,generating high-reward action sequences to guide knowledge transfer to the student model.We conduct rigorous evaluations across four distinct datasets,each comprising four HCVRP instances of varying scales.Our empirical findings underscore the proposed method superiority over existing learning-based methods in terms of both computational efficiency and solution quality.展开更多
This paper explores the construction methods of“Same Course with Different Structures”curriculum resources based on knowledge graphs and their applications in the field of education.By reviewing the theoretical foun...This paper explores the construction methods of“Same Course with Different Structures”curriculum resources based on knowledge graphs and their applications in the field of education.By reviewing the theoretical foundations of knowledge graph technology,the“Same Course with Different Structures”teaching model,and curriculum resource construction,and integrating existing literature,the paper analyzes the methods for constructing curriculum resources using knowledge graphs.The research finds that knowledge graphs can effectively integrate multi-source data,support personalized teaching and precision education,and provide both a scientific foundation and technical support for the development of curriculum resources within the“Same Course with Different Structures”framework.展开更多
We experimentally analyze the effect of the optical power on the time delay signature identification and the random bit generation in chaotic semiconductor laser with optical feedback.Due to the inevitable noise durin...We experimentally analyze the effect of the optical power on the time delay signature identification and the random bit generation in chaotic semiconductor laser with optical feedback.Due to the inevitable noise during the photoelectric detection and analog-digital conversion,the varying of output optical power would change the signal to noise ratio,then impact time delay signature identification and the random bit generation.Our results show that,when the optical power is less than-14 dBm,with the decreasing of the optical power,the actual identified time delay signature degrades and the entropy of the chaotic signal increases.Moreover,the extracted random bit sequence with lower optical power is more easily pass through the randomness testing.展开更多
In response to the limitations and low computational efficiency of one-dimensional water and sediment models in representing complex hydrological conditions, this paper proposes a dual branch convolution method based ...In response to the limitations and low computational efficiency of one-dimensional water and sediment models in representing complex hydrological conditions, this paper proposes a dual branch convolution method based on deep learning. This method utilizes the ability of deep learning to extract data features and introduces a dual branch convolutional network to handle the non-stationary and nonlinear characteristics of noise and reservoir sediment transport data. This method combines permutation variant structure to preserve the original time series information, constructs a corresponding time series model, models and analyzes the changes in the outbound water and sediment sequence, and can more accurately predict the future trend of outbound sediment changes based on the current sequence changes. The experimental results show that the DCON model established in this paper has good predictive performance in monthly, bimonthly, seasonal, and semi-annual predictions, with determination coefficients of 0.891, 0.898, 0.921, and 0.931, respectively. The results can provide more reference schemes for personnel formulating reservoir scheduling plans. Although this study has shown good applicability in predicting sediment discharge, it has not been able to make timely predictions for some non-periodic events in reservoirs. Therefore, future research will gradually incorporate monitoring devices to obtain more comprehensive data, in order to further validate and expand the conclusions of this study.展开更多
The Cross-domain Heuristic Search Challenge(CHeSC)is a competition focused on creating efficient search algorithms adaptable to diverse problem domains.Selection hyper-heuristics are a class of algorithms that dynamic...The Cross-domain Heuristic Search Challenge(CHeSC)is a competition focused on creating efficient search algorithms adaptable to diverse problem domains.Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process.Numerous selection hyper-heuristics have different imple-mentation strategies.However,comparisons between them are lacking in the literature,and previous works have not highlighted the beneficial and detrimental implementation methods of different components.The question is how to effectively employ them to produce an efficient search heuristic.Furthermore,the algorithms that competed in the inaugural CHeSC have not been collectively reviewed.This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance.A summary of the main characteristics and classification of the algorithms is presented.The analysis underlines efficient and inefficient methods in eight key components,including search points,search phases,heuristic selection,move acceptance,feedback,Tabu mechanism,restart mechanism,and low-level heuristic parameter control.This review analyzes the components referencing the competition’s final leaderboard and discusses future research directions for these components.The effective approaches,identified as having the highest quality index,are mixed search point,iterated search phases,relay hybridization selection,threshold acceptance,mixed learning,Tabu heuristics,stochastic restart,and dynamic parameters.Findings are also compared with recent trends in hyper-heuristics.This work enhances the understanding of selection hyper-heuristics,offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.展开更多
文摘The Internet of Things(IoT)has orchestrated various domains in numerous applications,contributing significantly to the growth of the smart world,even in regions with low literacy rates,boosting socio-economic development.This study provides valuable insights into optimizing wireless communication,paving the way for a more connected and productive future in the mining industry.The IoT revolution is advancing across industries,but harsh geometric environments,including open-pit mines,pose unique challenges for reliable communication.The advent of IoT in the mining industry has significantly improved communication for critical operations through the use of Radio Frequency(RF)protocols such as Bluetooth,Wi-Fi,GSM/GPRS,Narrow Band(NB)-IoT,SigFox,ZigBee,and Long Range Wireless Area Network(LoRaWAN).This study addresses the optimization of network implementations by comparing two leading free-spreading IoT-based RF protocols such as ZigBee and LoRaWAN.Intensive field tests are conducted in various opencast mines to investigate coverage potential and signal attenuation.ZigBee is tested in the Tadicherla open-cast coal mine in India.Similarly,LoRaWAN field tests are conducted at one of the associated cement companies(ACC)in the limestone mine in Bargarh,India,covering both Indoor-toOutdoor(I2O)and Outdoor-to-Outdoor(O2O)environments.A robust framework of path-loss models,referred to as Free space,Egli,Okumura-Hata,Cost231-Hata and Ericsson models,combined with key performance metrics,is employed to evaluate the patterns of signal attenuation.Extensive field testing and careful data analysis revealed that the Egli model is the most consistent path-loss model for the ZigBee protocol in an I2O environment,with a coefficient of determination(R^(2))of 0.907,balanced error metrics such as Normalized Root Mean Square Error(NRMSE)of 0.030,Mean Square Error(MSE)of 4.950,Mean Absolute Percentage Error(MAPE)of 0.249 and Scatter Index(SI)of 2.723.In the O2O scenario,the Ericsson model showed superior performance,with the highest R^(2)value of 0.959,supported by strong correlation metrics:NRMSE of 0.026,MSE of 8.685,MAPE of 0.685,Mean Absolute Deviation(MAD)of 20.839 and SI of 2.194.For the LoRaWAN protocol,the Cost-231 model achieved the highest R^(2)value of 0.921 in the I2O scenario,complemented by the lowest metrics:NRMSE of 0.018,MSE of 1.324,MAPE of 0.217,MAD of 9.218 and SI of 1.238.In the O2O environment,the Okumura-Hata model achieved the highest R^(2)value of 0.978,indicating a strong fit with metrics NRMSE of 0.047,MSE of 27.807,MAPE of 27.494,MAD of 37.287 and SI of 3.927.This advancement in reliable communication networks promises to transform the opencast landscape into networked signal attenuation.These results support decision-making for mining needs and ensure reliable communications even in the face of formidable obstacles.
基金supported by the STI2030 Major Projects(2021ZD0204300)the National Natural Science Foundation of China(61803003,62003228).
文摘Electroencephalography(EEG)is a non-invasive measurement method for brain activity.Due to its safety,high resolution,and hypersensitivity to dynamic changes in brain neural signals,EEG has aroused much interest in scientific research and medical felds.This article reviews the types of EEG signals,multiple EEG signal analysis methods,and the application of relevant methods in the neuroscience feld and for diagnosing neurological diseases.First,3 types of EEG signals,including time-invariant EEG,accurate event-related EEG,and random event-related EEG,are introduced.Second,5 main directions for the methods of EEG analysis,including power spectrum analysis,time-frequency analysis,connectivity analysis,source localization methods,and machine learning methods,are described in the main section,along with diferent sub-methods and effect evaluations for solving the same problem.Finally,the application scenarios of different EEG analysis methods are emphasized,and the advantages and disadvantages of similar methods are distinguished.This article is expected to assist researchers in selecting suitable EEG analysis methods based on their research objectives,provide references for subsequent research,and summarize current issues and prospects for the future.
文摘A two-stage algorithm based on deep learning for the detection and recognition of can bottom spray codes and numbers is proposed to address the problems of small character areas and fast production line speeds in can bottom spray code number recognition.In the coding number detection stage,Differentiable Binarization Network is used as the backbone network,combined with the Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance the model detection effect.In terms of text recognition,using the Scene Visual Text Recognition coding number recognition network for end-to-end training can alleviate the problem of coding recognition errors caused by image color distortion due to variations in lighting and background noise.In addition,model pruning and quantization are used to reduce the number ofmodel parameters to meet deployment requirements in resource-constrained environments.A comparative experiment was conducted using the dataset of tank bottom spray code numbers collected on-site,and a transfer experiment was conducted using the dataset of packaging box production date.The experimental results show that the algorithm proposed in this study can effectively locate the coding of cans at different positions on the roller conveyor,and can accurately identify the coding numbers at high production line speeds.The Hmean value of the coding number detection is 97.32%,and the accuracy of the coding number recognition is 98.21%.This verifies that the algorithm proposed in this paper has high accuracy in coding number detection and recognition.
文摘Rapid advancement in science and technology has seen computer network technology being upgraded constantly, and computer technology, in particular, has been applied more and more extensively, which has brought convenience to people’s lives. The number of people using the internet around the globe has also increased significantly, exerting a profound influence on artificial intelligence. Further, the constant upgrading and development of artificial intelligence has led to the continuous innovation and improvement of computer technology. Countries around the world have also registered an increase in investment, paying more attention to artificial intelligence. Through an analysis of the current development situation and the existing applications of artificial intelligence, this paper explicates the role of artificial intelligence in the face of the unceasing expansion of computer network technology.
基金supported by the Ministry of Science and Technology of China,No.2020AAA0109605(to XL)Meizhou Major Scientific and Technological Innovation PlatformsProjects of Guangdong Provincial Science & Technology Plan Projects,No.2019A0102005(to HW).
文摘Early identification and treatment of stroke can greatly improve patient outcomes and quality of life.Although clinical tests such as the Cincinnati Pre-hospital Stroke Scale(CPSS)and the Face Arm Speech Test(FAST)are commonly used for stroke screening,accurate administration is dependent on specialized training.In this study,we proposed a novel multimodal deep learning approach,based on the FAST,for assessing suspected stroke patients exhibiting symptoms such as limb weakness,facial paresis,and speech disorders in acute settings.We collected a dataset comprising videos and audio recordings of emergency room patients performing designated limb movements,facial expressions,and speech tests based on the FAST.We compared the constructed deep learning model,which was designed to process multi-modal datasets,with six prior models that achieved good action classification performance,including the I3D,SlowFast,X3D,TPN,TimeSformer,and MViT.We found that the findings of our deep learning model had a higher clinical value compared with the other approaches.Moreover,the multi-modal model outperformed its single-module variants,highlighting the benefit of utilizing multiple types of patient data,such as action videos and speech audio.These results indicate that a multi-modal deep learning model combined with the FAST could greatly improve the accuracy and sensitivity of early stroke identification of stroke,thus providing a practical and powerful tool for assessing stroke patients in an emergency clinical setting.
基金supported by the National Natural Science Foundation of China(Nos.62272418,62102058)Basic Public Welfare Research Program of Zhejiang Province(No.LGG18E050011)the Major Open Project of Key Laboratory for Advanced Design and Intelligent Computing of the Ministry of Education under Grant ADIC2023ZD001,National Undergraduate Training Program on Innovation and Entrepreneurship(No.202410345054).
文摘The wireless signals emitted by base stations serve as a vital link connecting people in today’s society and have been occupying an increasingly important role in real life.The development of the Internet of Things(IoT)relies on the support of base stations,which provide a solid foundation for achieving a more intelligent way of living.In a specific area,achieving higher signal coverage with fewer base stations has become an urgent problem.Therefore,this article focuses on the effective coverage area of base station signals and proposes a novel Evolutionary Particle Swarm Optimization(EPSO)algorithm based on collective prediction,referred to herein as ECPPSO.Introducing a new strategy called neighbor-based evolution prediction(NEP)addresses the issue of premature convergence often encountered by PSO.ECPPSO also employs a strengthening evolution(SE)strategy to enhance the algorithm’s global search capability and efficiency,ensuring enhanced robustness and a faster convergence speed when solving complex optimization problems.To better adapt to the actual communication needs of base stations,this article conducts simulation experiments by changing the number of base stations.The experimental results demonstrate thatunder the conditionof 50 ormore base stations,ECPPSOconsistently achieves the best coverage rate exceeding 95%,peaking at 99.4400%when the number of base stations reaches 80.These results validate the optimization capability of the ECPPSO algorithm,proving its feasibility and effectiveness.Further ablative experiments and comparisons with other algorithms highlight the advantages of ECPPSO.
文摘The COVID-19 pandemic has profoundly impacted global health, with far-reaching consequences beyond respiratory complications. Increasing evidence highlights the link between COVID-19 and cardiovascular diseases (CVD), raising concerns about long-term health risks for those recovering from the virus. This study rigorously investigates the influence of COVID-19 on cardiovascular disease risk, focusing on conditions such as heart failure and myocardial infarction. Using a dataset of 52,683 individuals aged 30 to 80, including both COVID-19 survivors and those unaffected, the study employs machine learning models—logistic regression, decision trees, and random forests—to predict cardiovascular outcomes. The multifaceted approach allowed for a comprehensive evaluation of the model’s predictive capabilities, with logistic regression yielding the highest Binary F1 score of 0.94, effectively identifying cardiovascular risks in both the COVID-19 and non-COVID-19 groups. The correlation matrix revealed significant associations between COVID-19 and key symptoms of heart disease, emphasizing the need for early cardiovascular risk assessment. These findings underscore the importance of machine learning in enhancing early diagnosis and developing preventive strategies for COVID-19-related heart complications. Ultimately, this research contributes to a broader understanding of the pandemic’s lasting health effects, highlighting the critical role of cardiovascular care in post-COVID-19 recovery.
文摘We present a comprehensive mathematical framework establishing the foundations of holographic quantum computing, a novel paradigm that leverages holographic phenomena to achieve superior error correction and algorithmic efficiency. We rigorously demonstrate that quantum information can be encoded and processed using holographic principles, establishing fundamental theorems characterizing the error-correcting properties of holographic codes. We develop a complete set of universal quantum gates with explicit constructions and prove exponential speedups for specific classes of computational problems. Our framework demonstrates that holographic quantum codes achieve a code rate scaling as O(1/logn), superior to traditional quantum LDPC codes, while providing inherent protection against errors via geometric properties of the code structures. We prove a threshold theorem establishing that arbitrary quantum computations can be performed reliably when physical error rates fall below a constant threshold. Notably, our analysis suggests certain algorithms, including those involving high-dimensional state spaces and long-range interactions, achieve exponential speedups over both classical and conventional quantum approaches. This work establishes the theoretical foundations for a new approach to quantum computation that provides natural fault tolerance and scalability, directly addressing longstanding challenges of the field.
基金supported by the National Natural Science Foundation of China under Grant No.62172132.
文摘The surge of large-scale models in recent years has led to breakthroughs in numerous fields,but it has also introduced higher computational costs and more complex network architectures.These increasingly large and intricate networks pose challenges for deployment and execution while also exacerbating the issue of network over-parameterization.To address this issue,various network compression techniques have been developed,such as network pruning.A typical pruning algorithm follows a three-step pipeline involving training,pruning,and retraining.Existing methods often directly set the pruned filters to zero during retraining,significantly reducing the parameter space.However,this direct pruning strategy frequently results in irreversible information loss.In the early stages of training,a network still contains much uncertainty,and evaluating filter importance may not be sufficiently rigorous.To manage the pruning process effectively,this paper proposes a flexible neural network pruning algorithm based on the logistic growth differential equation,considering the characteristics of network training.Unlike other pruning algorithms that directly reduce filter weights,this algorithm introduces a three-stage adaptive weight decay strategy inspired by the logistic growth differential equation.It employs a gentle decay rate in the initial training stage,a rapid decay rate during the intermediate stage,and a slower decay rate in the network convergence stage.Additionally,the decay rate is adjusted adaptively based on the filter weights at each stage.By controlling the adaptive decay rate at each stage,the pruning of neural network filters can be effectively managed.In experiments conducted on the CIFAR-10 and ILSVRC-2012 datasets,the pruning of neural networks significantly reduces the floating-point operations while maintaining the same pruning rate.Specifically,when implementing a 30%pruning rate on the ResNet-110 network,the pruned neural network not only decreases floating-point operations by 40.8%but also enhances the classification accuracy by 0.49%compared to the original network.
文摘In the new era,the impact of emerging productive forces has permeated every sector of industry.As the core production factor of these forces,data plays a pivotal role in industrial transformation and social development.Consequently,many domestic universities have introduced majors or courses related to big data.Among these,the Big Data Management and Applications major stands out for its interdisciplinary approach and emphasis on practical skills.However,as an emerging field,it has not yet accumulated a robust foundation in teaching theory and practice.Current instructional practices face issues such as unclear training objectives,inconsistent teaching methods and course content,insufficient integration of practical components,and a shortage of qualified faculty-factors that hinder both the development of the major and the overall quality of education.Taking the statistics course within the Big Data Management and Applications major as an example,this paper examines the challenges faced by statistics education in the context of emerging productive forces and proposes corresponding improvement measures.By introducing innovative teaching concepts and strategies,the teaching system for professional courses is optimized,and authentic classroom scenarios are recreated through illustrative examples.Questionnaire surveys and statistical analyses of data collected before and after the teaching reforms indicate that the curriculum changes effectively enhance instructional outcomes,promote the development of the major,and improve the quality of talent cultivation.
文摘Aspect-oriented sentiment analysis is a meticulous sentiment analysis task that aims to analyse the sentiment polarity of specific aspects. Most of the current research builds graph convolutional networks based on dependent syntactic trees, which improves the classification performance of the models to some extent. However, the technical limitations of dependent syntactic trees can introduce considerable noise into the model. Meanwhile, it is difficult for a single graph convolutional network to aggregate both semantic and syntactic structural information of nodes, which affects the final sentence classification. To cope with the above problems, this paper proposes a bi-channel graph convolutional network model. The model introduces a phrase structure tree and transforms it into a hierarchical phrase matrix. The adjacency matrix of the dependent syntactic tree and the hierarchical phrase matrix are combined as the initial matrix of the graph convolutional network to enhance the syntactic information. The semantic information feature representations of the sentences are obtained by the graph convolutional network with a multi-head attention mechanism and fused to achieve complementary learning of dual-channel features. Experimental results show that the model performs well and improves the accuracy of sentiment classification on three public benchmark datasets, namely Rest14, Lap14 and Twitter.
基金supported by the Guangdong Higher Education Association’s“14th Five Year Plan”2024 Higher Education Research Project(24GYB03)the Natural Science Foundation of Guangdong Province(2024A1515010255)。
文摘With the rapid advancement of information technology,the quality assurance and evaluation of software engineering education have become pivotal concerns for higher education institutions.In this paper,we focus on a comparative study of software engineering education in China and Europe,aiming to explore the theoretical frameworks and practical pathways employed in both regions.Initially,we introduce and contrast the engineering education accreditation systems of China and Europe,including the Chinese engineering education accreditation framework and the European EUR-ACE(European Accreditation of Engineering Programmes)standards,highlighting their core principles and evaluation methodologies.Subsequently,we provide case studies of several universities in China and Europe,such as Sun Yat-sen University,Tsinghua University,Technical University of Munich,and Imperial College London.Finally,we offer recommendations to foster mutual learning and collaboration between Chinese and European institutions,aiming to enhance the overall quality of software engineering education globally.This work provides valuable insights for educational administrators,faculty members,and policymakers,contributing to the ongoing improvement and innovative development of software engineering education in China and Europe.
基金supported by the National Key Research and Development Program of China under Grant No.2023YFE0200700National Natural Science Foundation of China under Grant No.62171474ZTE Industry University-Institute Cooperation Funds under Grant No.IA20241014013。
文摘Artificial intelligence(AI)-native communication is considered one of the key technologies for the development of 6G mobile communication networks.This paper investigates the architecture for developing the network data analytics function(NWDAF)in 6G AI-native networks.The architecture integrates two key components:data collection and management,and model training and management.It achieves real-time data collection and management,establishing a complete workflow encompassing AI model training,deployment,and intelligent decision-making.The architecture workflow is evaluated through a vertical scaling use case by constructing an AI-native network testbed on Kubernetes.Within this proposed NWDAF,several machine learning(ML)models are trained to make vertical scaling decisions for user plane function(UPF)instances based on data collected from various network functions(NFs).These decisions are executed through the Ku-bernetes API,which dynamically allocates appropriate resources to UPF instances.The experimental results show that all implemented models demonstrate satisfactory predictive capabilities.Moreover,compared with the threshold-based method in Kubernetes,all models show a significant advantage in response time.This study not only introduces a novel AI-native NWDAF architecture but also demonstrates the potential of AI models to significantly improve network management and resource scaling in 6G networks.
文摘Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This approach may overlook the differences and potential value of various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, Abstract Syntax Tree (AST), Code Dependency Network (CDN), and code static quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated using ten promise defect repository projects, and performance is assessed with three metrics: F1, Area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.
基金supported by the 2022 Sanya Science and Technology Innovation Project,China(No.2022KJCX03)the Sanya Science and Education Innovation Park,Wuhan University of Technology,China(Grant No.2022KF0028)the Hainan Provincial Joint Project of Sanya Yazhou Bay Science and Technology City,China(Grant No.2021JJLH0036).
文摘This study investigates the Maximum Power Point Tracking(MPPT)control method of offshore windphotovoltaic hybrid power generation system with offshore crane-assisted.A new algorithm of Global Fast Integral Sliding Mode Control(GFISMC)is proposed based on the tip speed ratio method and sliding mode control.The algorithm uses fast integral sliding mode surface and fuzzy fast switching control items to ensure that the offshore wind power generation system can track the maximum power point quickly and with low jitter.An offshore wind power generation system model is presented to verify the algorithm effect.An offshore off-grid wind-solar hybrid power generation systemis built in MATLAB/Simulink.Compared with other MPPT algorithms,this study has specific quantitative improvements in terms of convergence speed,tracking accuracy or computational efficiency.Finally,the improved algorithm is further analyzed and carried out by using Yuankuan Energy’s ModelingTech semi-physical simulation platform.The results verify the feasibility and effectiveness of the improved algorithm in the offshore wind-solar hybrid power generation system.
基金in part by the National Science Foundation of China under Grant No.62276238in part by the National Science Foundation for Distinguished Young Scholars of China under Grant No.62325602in part by the Natural Science Foundation of Henan,China under Grant No.232300421095.
文摘The Heterogeneous Capacitated Vehicle Routing Problem(HCVRP),which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost,poses an NP-hard challenge in combinatorial optimization.Recently,reinforcement learning approaches such as 2D Array Pointer Networks(2D-Ptr)have demonstrated remarkable speed in decision-making by modeling multiple agents’concurrent choices as a sequence of consecutive actions.However,these learning-based models often struggle with generalization,meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining.Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model,we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation(MTKD).We initially train 12 unique 2D-Ptr models under various settings to serve as teacher models.Subsequently,we randomly sample a teacher model and a batch of problem instances,focusing on those where the chosen teacher performed best.This teacher model then solves these instances,generating high-reward action sequences to guide knowledge transfer to the student model.We conduct rigorous evaluations across four distinct datasets,each comprising four HCVRP instances of varying scales.Our empirical findings underscore the proposed method superiority over existing learning-based methods in terms of both computational efficiency and solution quality.
基金Educational and Teaching Reform Project of Beihua University:Research on the Construction of“Same Course with Different Structures”Course Resources Based on Knowledge Graphs。
文摘This paper explores the construction methods of“Same Course with Different Structures”curriculum resources based on knowledge graphs and their applications in the field of education.By reviewing the theoretical foundations of knowledge graph technology,the“Same Course with Different Structures”teaching model,and curriculum resource construction,and integrating existing literature,the paper analyzes the methods for constructing curriculum resources using knowledge graphs.The research finds that knowledge graphs can effectively integrate multi-source data,support personalized teaching and precision education,and provide both a scientific foundation and technical support for the development of curriculum resources within the“Same Course with Different Structures”framework.
基金Project supported in part by the National Natural Science Foundation of China(Grant Nos.62005129 and 62175116)。
文摘We experimentally analyze the effect of the optical power on the time delay signature identification and the random bit generation in chaotic semiconductor laser with optical feedback.Due to the inevitable noise during the photoelectric detection and analog-digital conversion,the varying of output optical power would change the signal to noise ratio,then impact time delay signature identification and the random bit generation.Our results show that,when the optical power is less than-14 dBm,with the decreasing of the optical power,the actual identified time delay signature degrades and the entropy of the chaotic signal increases.Moreover,the extracted random bit sequence with lower optical power is more easily pass through the randomness testing.
基金NationalNatural Science Foundation of China(U2243236,51879115,U2243215),Recipients of funds:Xinjie Li,URL:https://www.nsfc.gov.cn/(accessed on 25 November 2024).
文摘In response to the limitations and low computational efficiency of one-dimensional water and sediment models in representing complex hydrological conditions, this paper proposes a dual branch convolution method based on deep learning. This method utilizes the ability of deep learning to extract data features and introduces a dual branch convolutional network to handle the non-stationary and nonlinear characteristics of noise and reservoir sediment transport data. This method combines permutation variant structure to preserve the original time series information, constructs a corresponding time series model, models and analyzes the changes in the outbound water and sediment sequence, and can more accurately predict the future trend of outbound sediment changes based on the current sequence changes. The experimental results show that the DCON model established in this paper has good predictive performance in monthly, bimonthly, seasonal, and semi-annual predictions, with determination coefficients of 0.891, 0.898, 0.921, and 0.931, respectively. The results can provide more reference schemes for personnel formulating reservoir scheduling plans. Although this study has shown good applicability in predicting sediment discharge, it has not been able to make timely predictions for some non-periodic events in reservoirs. Therefore, future research will gradually incorporate monitoring devices to obtain more comprehensive data, in order to further validate and expand the conclusions of this study.
基金funded by Ministry of Higher Education(MoHE)Malaysia,under Transdisciplinary Research Grant Scheme(TRGS/1/2019/UKM/01/4/2).
文摘The Cross-domain Heuristic Search Challenge(CHeSC)is a competition focused on creating efficient search algorithms adaptable to diverse problem domains.Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process.Numerous selection hyper-heuristics have different imple-mentation strategies.However,comparisons between them are lacking in the literature,and previous works have not highlighted the beneficial and detrimental implementation methods of different components.The question is how to effectively employ them to produce an efficient search heuristic.Furthermore,the algorithms that competed in the inaugural CHeSC have not been collectively reviewed.This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance.A summary of the main characteristics and classification of the algorithms is presented.The analysis underlines efficient and inefficient methods in eight key components,including search points,search phases,heuristic selection,move acceptance,feedback,Tabu mechanism,restart mechanism,and low-level heuristic parameter control.This review analyzes the components referencing the competition’s final leaderboard and discusses future research directions for these components.The effective approaches,identified as having the highest quality index,are mixed search point,iterated search phases,relay hybridization selection,threshold acceptance,mixed learning,Tabu heuristics,stochastic restart,and dynamic parameters.Findings are also compared with recent trends in hyper-heuristics.This work enhances the understanding of selection hyper-heuristics,offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.