Funding: National Key Research and Development Program (2021YFB2900604).
Abstract: Low Earth orbit (LEO) satellite networks exhibit distinct characteristics, e.g., limited resources on individual satellite nodes and a dynamic network topology, which pose many challenges for routing algorithms. To satisfy the quality of service (QoS) requirements of various users, it is critical to research efficient routing strategies that fully utilize satellite resources. This paper proposes a multi-QoS-information optimized routing algorithm based on reinforcement learning for LEO satellite networks. It prioritizes services with high assurance demands under limited satellite resources while accounting for the load-balancing performance of the network for services with low assurance demands, so that satellite resources are used fully and effectively. An auxiliary path search algorithm is proposed to accelerate the convergence of the routing algorithm. Simulation results show that the generated routing strategy can promptly process and fully meet the QoS demands of high-assurance services while effectively improving the load-balancing performance of the links.
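For illustration, the sketch below shows the kind of per-node tabular Q-learning update such a QoS-aware routing scheme could build on: the reward blends delivery progress with a link-load penalty, weighted by the flow's assurance level. The topology, reward shape, and all names (neighbors, link_load, qos_weight) are assumptions for this sketch, not the paper's implementation.

```python
# Toy Q-learning next-hop selection with a QoS-weighted reward (illustrative).
import random

# toy topology: node -> list of neighbor nodes (symmetric)
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
link_load = {(u, v): random.random() for u in neighbors for v in neighbors[u]}

Q = {(s, a): 0.0 for s in neighbors for a in neighbors[s]}
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(u, v, dst, qos_weight):
    """Blend delivery progress with a load-balancing penalty: high-assurance
    flows (large qos_weight) weight reaching dst more heavily, while
    low-assurance flows are steered away from congested links."""
    goal = 10.0 if v == dst else -1.0   # delivery bonus / per-hop cost
    balance = -link_load[(u, v)]        # congestion penalty
    return qos_weight * goal + (1 - qos_weight) * balance

def step(u, dst, qos_weight):
    # epsilon-greedy next-hop choice, then a standard Q-learning update
    a = (random.choice(neighbors[u]) if random.random() < eps
         else max(neighbors[u], key=lambda v: Q[(u, v)]))
    r = reward(u, a, dst, qos_weight)
    best_next = max(Q[(a, w)] for w in neighbors[a])
    Q[(u, a)] += alpha * (r + gamma * best_next - Q[(u, a)])
    return a

# train a few episodes for a high-assurance flow from node 0 to node 3
for _ in range(500):
    node = 0
    for _ in range(8):
        node = step(node, dst=3, qos_weight=0.8)
        if node == 3:
            break
```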
Funding: Supported in part by the National Key Research and Development Program of China under Grant No. 2021YFF0901300, and in part by the National Natural Science Foundation of China under Grant Nos. 62173076 and 72271048.
Abstract: The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search, by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that this mechanism improves the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we find that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
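As a rough illustration of the IGA skeleton the paper builds on, the sketch below runs destruction and greedy reinsertion on a single-factory permutation flow shop (the DPFSP adds a factory-assignment layer that is omitted here), with a cube-root-shaped acceptance curve; the curve and its scale are assumptions, not MLIGA's actual criterion.

```python
# Illustrative iterated-greedy loop for a permutation flow shop (single factory).
import random

def makespan(perm, p):                # p[job][machine] processing times
    m = len(p[0])
    c = [0.0] * m                     # rolling completion times per machine
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def destroy_reconstruct(perm, p, d=2):
    # remove d random jobs, then reinsert each at its best position
    perm = perm[:]
    removed = [perm.pop(random.randrange(len(perm))) for _ in range(d)]
    for j in removed:
        best = min(range(len(perm) + 1),
                   key=lambda i: makespan(perm[:i] + [j] + perm[i:], p))
        perm.insert(best, j)
    return perm

def accept_prob(delta, scale=1.0):
    """Hypothetical cube-root acceptance curve: worsening moves are accepted
    with probability decaying like 1 / (1 + (delta/scale)^(1/3))."""
    return 1.0 if delta <= 0 else 1.0 / (1.0 + (delta / scale) ** (1 / 3))

random.seed(0)
p = [[random.randint(1, 9) for _ in range(3)] for _ in range(8)]  # 8 jobs, 3 machines
cur = list(range(8))
best = cur[:]
for _ in range(200):
    cand = destroy_reconstruct(cur, p)
    delta = makespan(cand, p) - makespan(cur, p)
    if random.random() < accept_prob(delta):
        cur = cand
        if makespan(cur, p) < makespan(best, p):
            best = cur[:]
print(best, makespan(best, p))
```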
Abstract: Every second, a large volume of useful data is created on social media about various kinds of online purchases, among other forms of reviews. In particular, review data for purchased products grows enormously in different database repositories every day. Most of this review data is useful to new customers for their further purchases, as well as to companies that want to view customer feedback about various products. Data mining and machine learning techniques are commonly used to analyse such data, to visualise it, and to understand how purchased items are used. Customers convey the quality of products through their sentiments about items purchased from different online companies. In this research work, sentiments are analysed in headphone review data collected from online repositories. For the analysis, several machine learning techniques, namely Support Vector Machines, Naive Bayes, Decision Trees, and Random Forest algorithms, as well as a hybrid method, are applied to assess quality via customers' sentiments. The accuracy and performance of these algorithms are also analysed across three types of sentiment: positive, negative, and neutral.
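A minimal scikit-learn version of the described comparison might look as follows, with a tiny inline corpus standing in for the headphone review data; the paper's hybrid method and preprocessing are not reproduced.

```python
# Compare four classifiers on toy three-class sentiment data (illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

reviews = ["great bass and comfy fit", "terrible crackling sound",
           "okay for the price", "broke after a week",
           "crystal clear audio", "average build, decent sound"]
labels = ["positive", "negative", "neutral",
          "negative", "positive", "neutral"]

models = {
    "SVM": LinearSVC(),
    "NaiveBayes": MultinomialNB(),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF features + classifier
    scores = cross_val_score(pipe, reviews, labels, cv=2)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```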
Funding: Supported by the Shandong Provincial Natural Science Foundation (grant number: ZR2023MD036), the Key Research and Development Project in Shandong Province (grant number: 2019GGX101064), and the project for the excellent youth foundation of the innovation teacher team, Shandong (grant number: 2022KJ310).
Abstract: The reasonable quantification of the concrete freezing environment on the Qinghai–Tibet Plateau (QTP) is the primary issue in frost-resistant concrete design and one of the challenges that QTP engineering managers must take into account. In this paper, we propose a more realistic method to calculate the number of concrete freeze–thaw cycles (NFTCs) on the QTP. The calculated results show that the NFTCs increase as the altitude of the meteorological station increases, with the average NFTCs being 208.7. Four machine learning methods, i.e., the random forest (RF) model, generalized boosting method (GBM), generalized linear model (GLM), and generalized additive model (GAM), are used to fit the NFTCs. The root mean square error (RMSE) values of the RF, GBM, GLM, and GAM are 32.3, 4.3, 247.9, and 161.3, respectively, and the corresponding R^2 values are 0.93, 0.99, 0.48, and 0.66. As these RMSE and R^2 values show, the GBM method performs best among the four. The quantitative results from the GBM method indicate that the lowest, medium, and highest NFTC values are distributed in the northern, central, and southern parts of the QTP, respectively. The annual NFTCs in the QTP region are mainly concentrated at 160 and above, and the average NFTCs is 200 across the QTP. Our results can provide scientific guidance and a theoretical basis for the freezing-resistance design of concrete in various projects on the QTP.
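The RF-versus-GBM part of this comparison can be sketched with scikit-learn as below, on synthetic stand-in data (the real NFTC observations and the GLM/GAM fits are not reproduced).

```python
# Fit two regressors to a toy NFTC-like signal and report RMSE and R^2.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: altitude (m), latitude, longitude -- rough QTP-like ranges
X = rng.uniform([2000, 25, 75], [5500, 40, 105], size=(300, 3))
y = 0.05 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 10, 300)  # toy NFTC signal

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, model in [("RF", RandomForestRegressor(random_state=0)),
                    ("GBM", GradientBoostingRegressor(random_state=0))]:
    pred = model.fit(Xtr, ytr).predict(Xte)
    rmse = mean_squared_error(yte, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.1f}, R2={r2_score(yte, pred):.2f}")
```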
Abstract: To address the challenges facing large-scale UAV ad hoc networks, such as diverse mission requirements, complex electromagnetic environments, and high node mobility, and taking full account of the high-speed movement of UAV nodes, a UAV multi-point relay (MPR) selection method is designed based on UAV topology stability and link communication capacity metrics. To reduce the network route update time and to increase the stability and reliability of the routing strategy of the UAV ad hoc network, a Q-learning based adaptive link state routing protocol (QALSR) is proposed. Simulation results show that the performance metrics of the proposed algorithm are superior to those of existing proactive routing protocols.
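A plausible reading of the MPR selection step is a greedy two-hop cover in which candidates are ranked by a combined stability and capacity score; the sketch below illustrates that idea with hypothetical scores and weights, not the paper's exact metrics.

```python
# Greedy MPR selection weighted by topology stability and link capacity (toy).
def select_mpr(one_hop, two_hop_of, stability, capacity, w=0.6):
    """one_hop: candidate relays; two_hop_of[n]: two-hop nodes reachable via n;
    score blends normalized stability and capacity with weight w, scaled by
    how many still-uncovered two-hop nodes a candidate reaches."""
    uncovered = set().union(*two_hop_of.values())
    mpr = set()
    while uncovered:
        best = max(
            (n for n in one_hop if two_hop_of[n] & uncovered),
            key=lambda n: (w * stability[n] + (1 - w) * capacity[n])
                          * len(two_hop_of[n] & uncovered),
        )
        mpr.add(best)
        uncovered -= two_hop_of[best]
    return mpr

one_hop = {"A", "B", "C"}
two_hop_of = {"A": {"X", "Y"}, "B": {"Y", "Z"}, "C": {"Z"}}
stability = {"A": 0.9, "B": 0.5, "C": 0.8}   # e.g. from relative node velocity
capacity = {"A": 0.6, "B": 0.9, "C": 0.4}    # e.g. from SNR / bandwidth
print(select_mpr(one_hop, two_hop_of, stability, capacity))  # -> {'A', 'B'}
```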
Abstract: To address the security problems in wireless sensor networks, a Q-learning based trust management mechanism for clustered wireless sensor networks (QLTMM-CWSN) is proposed. The mechanism mainly considers three aspects: communication trust, data trust, and energy trust. During network operation, node trust values are updated with the Q-learning algorithm based on each node's communication behavior, data distribution, and energy consumption, and the node with the highest trust value in a cluster is selected as the trusted cluster head. When the trust value of the primary cluster head falls below a threshold, the trusted cluster head replaces it in managing the member nodes of the cluster and maintains normal data transmission. The results show that QLTMM-CWSN can effectively resist communication attacks, forged local data attacks, energy attacks, and hybrid attacks.
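The trust bookkeeping described above might be sketched as follows: each node's trust blends communication, data, and energy components, a Q-learning-style update smooths it over rounds, and the highest-trust member takes over when the primary head falls below a threshold. The weights, threshold, and names here are assumptions.

```python
# Toy per-node trust update and cluster-head failover (illustrative).
def combined_trust(comm, data, energy, w=(0.4, 0.3, 0.3)):
    # weighted blend of the three trust components
    return w[0] * comm + w[1] * data + w[2] * energy

def update_trust(q, node, observed, alpha=0.2):
    # Q-learning-style exponential smoothing toward the observed score
    q[node] += alpha * (observed - q[node])
    return q[node]

trust = {n: 0.5 for n in ("n1", "n2", "n3", "head")}
observations = {                      # per-round (comm, data, energy) scores
    "n1": (0.9, 0.8, 0.9), "n2": (0.4, 0.3, 0.6),
    "n3": (0.8, 0.9, 0.7), "head": (0.2, 0.1, 0.5),  # misbehaving head
}
for _ in range(5):                    # several observation rounds
    for node, obs in observations.items():
        update_trust(trust, node, combined_trust(*obs))

THRESHOLD = 0.4
if trust["head"] < THRESHOLD:         # head no longer trusted: fail over
    backup = max((n for n in trust if n != "head"), key=trust.get)
    print("trusted backup cluster head:", backup)
```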
Funding: Supported by the National Natural Science Foundation of China (62173333, 12271522), the Beijing Natural Science Foundation (Z210002), and the Research Fund of Renmin University of China (2021030187).
Abstract: For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incurs a low memory footprint and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
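For reference, the plain P-type ILC update u_{k+1}(t) = u_k(t) + L e_k(t+1) can be demonstrated numerically on a toy scalar system as below; the paper's gain design and extended sampling scheme are not reproduced.

```python
# P-type iterative learning control on a toy scalar system (illustrative).
import numpy as np

a, b = 0.8, 1.0                       # x(t+1) = a*x(t) + b*u(t), y = x
T, L = 20, 0.5                        # horizon and learning gain (|1 - L*b| < 1)
ref = np.sin(np.linspace(0, np.pi, T + 1))

u = np.zeros(T)
for k in range(50):                   # repeated trials over the same horizon
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    e = ref - x                       # tracking error on this trial
    u += L * e[1:]                    # P-type update uses e_k(t+1)
    if k % 10 == 0:
        print(f"trial {k}: max |error| = {np.abs(e[1:]).max():.4f}")
```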
Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective evolutionary algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions when optimizing the binary variables in Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines frequent itemsets from multiple particles with better objective values to find mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
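The frequent-itemset idea can be illustrated as below: count which non-zero positions co-occur in the masks of the best particles and seed new masks from frequently co-occurring pairs. TELSO's actual mining and update rules are richer; the support threshold and masks here are toy assumptions.

```python
# Mine frequently co-occurring non-zero positions from good particles' masks.
from itertools import combinations
from collections import Counter

best_masks = [                        # binary masks of particles with good objectives
    [1, 0, 1, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 1, 0],
    [1, 0, 1, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 0, 0],
]
pair_counts = Counter()
for mask in best_masks:
    ones = [i for i, v in enumerate(mask) if v]
    pair_counts.update(combinations(ones, 2))

min_support = 3                       # keep pairs appearing in >= 3 masks
frequent = {p for p, c in pair_counts.items() if c >= min_support}
print("frequent co-occurring positions:", frequent)   # -> {(0, 2), (2, 4)}

# seed a new sparse mask from the frequent pairs
new_mask = [0] * 8
for i, j in frequent:
    new_mask[i] = new_mask[j] = 1
print("seeded mask:", new_mask)
```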
Abstract: To optimize regional traffic signal timing plans and improve regional traffic efficiency, this paper proposes a regional traffic signal coordination control method based on improved multi-agent Nash Q-Learning. First, a discretized encoding method is adopted, dividing the space into cells to convert continuous state information into a discrete form. Second, a Long Short-Term Memory (LSTM) module is integrated into the algorithm to mine more hidden information from the state data and enrich the state data in the Q-value table. Finally, simulation tests based on the microscopic traffic simulator SUMO (Simulation of Urban Mobility) show that, compared with the original Nash Q-Learning traffic signal control method, the proposed method reduces the average vehicle waiting time by 11.5%, 16.2%, and 10.0% under low, medium, and high traffic volumes, respectively; the average queue length by 9.1%, 8.2%, and 7.6%; and the average number of stops by 18.3%, 16.1%, and 10.0%. The results prove that the algorithm achieves better control performance.
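The discretization step can be sketched as below: continuous queue lengths and waiting times are binned into grid cells that index a Q-table. The Nash-equilibrium computation and the LSTM module are omitted; the bin edges, actions, and reward are assumptions.

```python
# Discretize a continuous traffic state into a cell id and update a Q-table.
import numpy as np

QUEUE_BINS = [5, 10, 20]              # vehicles
WAIT_BINS = [15, 30, 60]              # seconds

def discretize(queue_len, wait_time):
    """Map a continuous (queue, wait) observation to a discrete cell id."""
    qi = int(np.digitize(queue_len, QUEUE_BINS))
    wi = int(np.digitize(wait_time, WAIT_BINS))
    return qi * (len(WAIT_BINS) + 1) + wi

n_states = (len(QUEUE_BINS) + 1) * (len(WAIT_BINS) + 1)
n_actions = 2                         # e.g. keep phase / switch phase
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.95):
    # single-agent Q-learning step standing in for the Nash-Q update
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

s = discretize(queue_len=12, wait_time=40)      # cell for a busy approach
q_update(s, a=1, r=-12.0, s_next=discretize(6, 20))
print("state cell:", s, "updated Q-row:", Q[s])
```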
Funding: Funded by the Ministry of Science and Higher Education of the Republic of Kazakhstan, grant numbers AP14969403 and AP23485820.
Abstract: Myocardial infarction (MI) is one of the leading causes of death globally among cardiovascular diseases, necessitating modern and accurate diagnostics for cardiac patients. Among the available functional diagnostic methods, electrocardiography (ECG) is particularly well known for its ability to detect MI. However, confirming its accuracy, particularly in identifying the localization of myocardial damage, often presents challenges in practice. This study therefore proposes a new approach based on machine learning models for the analysis of 12-lead ECG data to accurately identify the localization of MI. In particular, the learning vector quantization (LVQ) algorithm was applied, considering the contribution of each ECG lead in the 12-channel system, and obtained an accuracy of 87% in localizing damaged myocardium. The developed model was tested on verified data from the PTB database, including 445 ECG recordings from both healthy individuals and patients diagnosed with MI. The results demonstrated that the 12-lead ECG system allows for a comprehensive understanding of cardiac activity in myocardial infarction patients, serving as an essential tool for diagnosing myocardial conditions and localizing damage. A comprehensive comparison with CNN, SVM, and logistic regression models was performed to evaluate the proposed LVQ model. The results demonstrate that the LVQ model achieves competitive performance on diagnostic tasks while maintaining computational efficiency, making it suitable for resource-constrained environments. The study also applies a carefully designed data pre-processing flow, including class balancing and noise removal, which improves the reliability and reproducibility of the results. These aspects highlight the potential of the LVQ model in cardiac diagnostics, opening up prospects for its use alongside more complex neural network architectures.
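The core LVQ1 update (move the nearest prototype toward same-class samples and away from others) can be sketched on synthetic stand-ins for per-lead ECG features as below; the paper's lead weighting and preprocessing pipeline are not shown.

```python
# Minimal LVQ1 on synthetic 12-dimensional feature vectors (illustrative).
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 12)), rng.normal(2, 1, (50, 12))])
y = np.array([0] * 50 + [1] * 50)     # e.g. two MI localization classes (toy)

# initialize one prototype per class at the class mean
protos = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
proto_y = np.array([0, 1])

lr = 0.05
for epoch in range(20):
    for i in rng.permutation(len(X)):
        j = np.argmin(((protos - X[i]) ** 2).sum(axis=1))   # nearest prototype
        sign = 1.0 if proto_y[j] == y[i] else -1.0          # attract / repel
        protos[j] += sign * lr * (X[i] - protos[j])

# classify each sample by its nearest prototype
pred = proto_y[((protos[None] - X[:, None]) ** 2).sum(-1).argmin(1)]
print("training accuracy:", (pred == y).mean())
```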
Abstract: Heart disease prediction is a critical issue in healthcare, where accurate early diagnosis can save lives and reduce healthcare costs. The problem is inherently complex due to the high dimensionality of medical data, irrelevant or redundant features, and variability in risk factors such as age, lifestyle, and medical history. These challenges often lead to inefficient and less accurate models. Traditional prediction methodologies face limitations in effectively handling large feature sets and optimizing classification performance, which can result in overfitting, poor generalization, and high computational cost. This work proposes a novel classification model for heart disease prediction that addresses these challenges by integrating feature selection through a Genetic Algorithm (GA) with an ensemble deep learning approach optimized using the Tunicate Swarm Algorithm (TSA). The GA selects the most relevant features, reducing dimensionality and improving model efficiency. The selected features are then used to train an ensemble of deep learning models, in which the TSA optimizes the weight of each model to enhance prediction accuracy. This hybrid approach addresses key challenges in the field, such as high dimensionality, redundant features, and classification performance, by introducing an efficient feature selection mechanism and optimizing the weighting of the deep learning models in the ensemble. These enhancements yield a model with superior accuracy, generalization, and efficiency compared to traditional methods. Specifically, the proposed model achieved an accuracy of 97.5%, a sensitivity of 97.2%, and a specificity of 97.8%. Additionally, with a 60-40 data split and 5-fold cross-validation, the model showed a significant reduction in training time (90 s), memory consumption (950 MB), and CPU usage (80%), highlighting its effectiveness in processing large, complex medical datasets for heart disease prediction.
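Both building blocks can be sketched compactly: a GA-style bitmask fitness for feature selection, and a weighted soft vote over ensemble members. To stay self-contained, the deep models are swapped for logistic regressions and the TSA-optimized weights are replaced by fixed illustrative values; the dataset is synthetic.

```python
# GA-style feature-mask fitness plus a weighted soft-vote ensemble (toy).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):                    # GA evaluates each candidate feature mask
    if not mask.any():
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.01 * mask.sum()    # small penalty for large feature sets

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(10, 20))       # random initial population
best = max(pop, key=fitness)                  # selection step only, no crossover
print("selected features:", np.flatnonzero(best))

# weighted soft vote over 3 members; weights are what the TSA would tune
Xtr, Xte, ytr, yte = train_test_split(X[:, best.astype(bool)], y, random_state=0)
members = [LogisticRegression(max_iter=1000, C=c).fit(Xtr, ytr)
           for c in (0.1, 1.0, 10.0)]
w = np.array([0.2, 0.5, 0.3])                 # fixed illustrative weights
proba = sum(wi * m.predict_proba(Xte)[:, 1] for wi, m in zip(w, members))
print("ensemble accuracy:", ((proba > 0.5).astype(int) == yte).mean())
```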