Journal Articles
395 articles found
1. A Study on the Explainability of Thyroid Cancer Prediction: SHAP Values and Association-Rule Based Feature Integration Framework
Authors: Sujithra Sankar, S. Sathyalakshmi. Computers, Materials & Continua, SCIE EI, 2024, Issue 5, pp. 3111-3138 (28 pages)
In the era of advanced machine learning techniques, the development of accurate predictive models for complex medical conditions, such as thyroid cancer, has shown remarkable progress. Accurate predictive models for thyroid cancer enhance early detection, improve resource allocation, and reduce overtreatment. However, the widespread adoption of these models in clinical practice demands predictive performance along with interpretability and transparency. This paper proposes a novel association-rule based feature-integrated machine learning model which shows better classification and prediction accuracy than present state-of-the-art models. Our study also focuses on the application of SHapley Additive exPlanations (SHAP) values as a powerful tool for explaining thyroid cancer prediction models. In the proposed method, the association-rule based feature integration framework identifies frequently occurring attribute combinations in the dataset. The original dataset is used in training machine learning models and in generating SHAP values from these models. In the next phase, the dataset is integrated with the dominant feature sets identified through association-rule based analysis. This new integrated dataset is used to re-train the machine learning models. The new SHAP values generated from these models help in validating the contributions of feature sets in predicting malignancy. Conventional machine learning models lack interpretability, which can hinder their integration into clinical decision-making systems. In this study, SHAP values are introduced along with association-rule based feature integration as a comprehensive framework for understanding the contributions of feature sets in modelling the predictions. The study discusses the importance of reliable predictive models for early diagnosis of thyroid cancer and a validation framework for explainability. The proposed model shows an accuracy of 93.48%. Performance metrics such as precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUROC) are also higher than those of the baseline models. The results of the proposed model help identify the dominant feature sets that impact thyroid cancer classification and prediction. The features {calcification} and {shape} consistently emerged as the top-ranked features associated with thyroid malignancy, both in the association-rule based interestingness metric values and in the SHAP methods. The paper highlights the potential of rule-based integrated models with SHAP in bridging the gap between machine learning predictions and the interpretability of those predictions, which is required for real-world medical applications.
Keywords: Explainable AI; machine learning; clinical decision support systems; thyroid cancer; association-rule based framework; SHAP values; classification and prediction
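As an illustration of the kind of pipeline this abstract describes, the sketch below mines frequent attribute combinations with mlxtend's apriori, appends the dominant itemset as an extra indicator feature, re-trains an XGBoost classifier, and recomputes SHAP values. The column names and synthetic data are assumptions for the example, not the paper's dataset, and the itemset-selection rule is a simplification.

```python
# Sketch only: association-rule based feature integration followed by SHAP analysis.
import numpy as np
import pandas as pd
import shap
from mlxtend.frequent_patterns import apriori
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for one-hot encoded nodule attributes; column names are illustrative.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.integers(0, 2, size=(600, 4)).astype(bool),
                 columns=["calcification", "shape_irregular", "echo_low", "margin_illdefined"])
y = ((X["calcification"] & X["shape_irregular"]) ^ (rng.random(600) < 0.1)).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: mine frequently co-occurring attribute combinations on the training split.
itemsets = apriori(X_tr, min_support=0.15, use_colnames=True)
multi = itemsets[itemsets["itemsets"].apply(len) > 1]
dominant = multi.sort_values("support", ascending=False)["itemsets"].iloc[0]

# Step 2: integrate the dominant feature set as an extra indicator column.
def add_combo(df, itemset):
    out = df.copy()
    out["combo_" + "_".join(sorted(itemset))] = df[list(itemset)].all(axis=1)
    return out

X_tr_aug, X_te_aug = add_combo(X_tr, dominant), add_combo(X_te, dominant)

# Step 3: re-train the classifier and recompute SHAP values on the integrated data.
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_tr_aug.astype(int), y_tr)
shap_values = shap.TreeExplainer(model).shap_values(X_te_aug.astype(int))
ranking = pd.Series(np.abs(shap_values).mean(axis=0), index=X_te_aug.columns)
print(ranking.sort_values(ascending=False))  # mean |SHAP| per feature, combo column included
```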
2. Explainability and Interpretability in Electric Load Forecasting Using Machine Learning Techniques – A Review (Cited by 1)
Authors: Lukas Baur, Konstantin Ditschuneit, Maximilian Schambach, Can Kaymakci, Thomas Wollmann, Alexander Sauer. Energy and AI, EI, 2024, Issue 2, pp. 483-496 (14 pages)
Electric Load Forecasting (ELF) is the central instrument for planning and controlling demand response programs, electricity trading, and consumption optimization. Due to the increasing automation of these processes, meaningful and transparent forecasts become more and more important, yet at the same time the complexity of the machine learning models and architectures used increases. Because there is growing interest in interpretable and explainable load forecasting methods, this work conducts a literature review of approaches already applied to explainability and interpretability for load forecasts using machine learning. Based on extensive literature research covering eight publication portals, recurring modeling approaches, trends, and modeling techniques are identified and clustered by the properties that make load forecasts more interpretable and explainable. The results on interpretability show an increase in the use of probabilistic models, methods for time series decomposition, and fuzzy logic, in addition to classically interpretable models. The dominant explainable approaches are feature importance and attention mechanisms. The discussion shows that much knowledge from the related field of time series forecasting still needs to be adapted to the problems in ELF. Compared to other applications of explainable and interpretable methods, such as clustering, there are currently relatively few research results, but with an increasing trend.
Keywords: Electric load forecasting; explainability; interpretability; structured review
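Feature importance is one of the dominant explainability approaches the review identifies. The following generic sketch (not taken from any surveyed paper) scores calendar and weather features of a load-forecasting regressor with scikit-learn's permutation importance; the synthetic data and column names are assumptions.

```python
# Illustrative only: permutation feature importance for a load-forecasting model.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = pd.DataFrame({
    "hour": rng.integers(0, 24, n),
    "weekday": rng.integers(0, 7, n),
    "temperature": rng.normal(15, 8, n),
    "lag_24h_load": rng.normal(500, 50, n),
})
# Synthetic load: mostly driven by the 24-hour lag and temperature.
y = (0.8 * X["lag_24h_load"] - 2.0 * X["temperature"]
     + 10 * np.sin(X["hour"] / 24 * 2 * np.pi) + rng.normal(0, 5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = GradientBoostingRegressor().fit(X_tr, y_tr)

# Permutation importance: drop in test score when each feature is shuffled.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=1)
for name, imp in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```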
3. A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Authors: Enyan Dai, Tianxiang Zhao, Huaisheng Zhu, Junjie Xu, Zhimeng Guo, Hui Liu, Jiliang Tang, Suhang Wang. Machine Intelligence Research, EI CSCD, 2024, Issue 6, pp. 1011-1061 (51 pages)
Graph neural networks (GNNs) have developed rapidly in recent years. Due to their great ability in modeling graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential in benefiting humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into giving the outcome they desire with unnoticeable perturbations of the training graph. GNNs trained on social networks may embed discrimination in their decision process, strengthening undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.
Keywords: Graph neural networks (GNNs); trustworthy; privacy; robustness; fairness; explainability
4. Imbalanced rock burst assessment using variational autoencoder-enhanced gradient boosting algorithms and explainability
Authors: Shan Lin, Zenglong Liang, Miao Dong, Hongwei Guo, Hong Zheng. Underground Space, SCIE EI CSCD, 2024, Issue 4, pp. 226-245 (20 pages)
We conducted a study to evaluate the potential and robustness of gradient boosting algorithms in rock burst assessment, established a variational autoencoder (VAE) to address the imbalanced rock burst dataset, and proposed a multilevel explainable artificial intelligence (XAI) approach tailored for tree-based ensemble learning. We collected 537 records from real-world rock burst cases and selected four critical features contributing to rock burst occurrences. Initially, we employed data visualization to gain insight into the data's structure and performed correlation analysis to explore the data distribution and feature relationships. Then, we set up a VAE model to generate samples for the minority class due to the imbalanced class distribution. In conjunction with the VAE, we compared and evaluated six state-of-the-art ensemble models, including gradient boosting algorithms and the classical logistic regression model, for rock burst prediction. The results indicated that the gradient boosting algorithms outperformed the classical single models, and the VAE-classifier outperformed the original classifier, with the VAE-NGBoost model yielding the most favorable results. Compared to other resampling methods combined with NGBoost for imbalanced datasets, such as the synthetic minority oversampling technique (SMOTE), SMOTE-edited nearest neighbours (SMOTE-ENN), and SMOTE-Tomek links (SMOTE-Tomek), the VAE-NGBoost model yielded the best performance. Finally, we developed a multilevel XAI model using feature sensitivity analysis, Tree Shapley Additive exPlanations (Tree SHAP), and Anchor to provide an in-depth exploration of the decision-making mechanics of VAE-NGBoost, further enhancing the accountability of tree-based ensemble models in predicting rock burst occurrences.
Keywords: Gradient boosting; VAE; ensemble learning; explainable artificial intelligence (XAI); rock burst
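For readers unfamiliar with the resampling comparison mentioned above, the sketch below contrasts SMOTE, SMOTE-ENN, and SMOTE-Tomek on an artificial imbalanced dataset; the paper's VAE generator and NGBoost classifier are replaced by imbalanced-learn samplers and scikit-learn gradient boosting purely for illustration.

```python
# Sketch of an imbalance-handling comparison; data and models are stand-ins.
from imblearn.combine import SMOTEENN, SMOTETomek
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Stand-in for an imbalanced rock burst dataset (4 features, ~10% positive class).
X, y = make_classification(n_samples=1500, n_features=4, n_informative=4, n_redundant=0,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {"none": None, "SMOTE": SMOTE(random_state=0),
            "SMOTE-ENN": SMOTEENN(random_state=0), "SMOTE-Tomek": SMOTETomek(random_state=0)}
for name, sampler in samplers.items():
    # Resample only the training split, then evaluate on the untouched test split.
    X_res, y_res = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = GradientBoostingClassifier(random_state=0).fit(X_res, y_res)
    print(f"{name:12s} macro-F1 = {f1_score(y_te, clf.predict(X_te), average='macro'):.3f}")
```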
5. Detecting anomalies in blockchain transactions using machine learning classifiers and explainability analysis
Authors: Mohammad Hasan, Mohammad Shahriar Rahman, Helge Janicke, Iqbal H. Sarker. Blockchain (Research and Applications), EI, 2024, Issue 3, pp. 106-122 (17 pages)
As the use of blockchain for digital payments continues to rise, it becomes susceptible to various malicious attacks. Successfully detecting anomalies within blockchain transactions is essential for bolstering trust in digital payments. However, the task of anomaly detection in blockchain transaction data is challenging due to the infrequent occurrence of illicit transactions. Although several studies have been conducted in the field, a limitation persists: the lack of explanations for the model's predictions. This study seeks to overcome this limitation by integrating explainable artificial intelligence (XAI) techniques and anomaly rules into tree-based ensemble classifiers for detecting anomalous Bitcoin transactions. The SHapley Additive exPlanation (SHAP) method is employed to measure the contribution of each feature, and it is compatible with ensemble models. Moreover, we present rules for interpreting whether a Bitcoin transaction is anomalous or not. Additionally, we introduce an under-sampling algorithm named XGBCLUS, designed to balance anomalous and non-anomalous transaction data. This algorithm is compared against other commonly used under-sampling and over-sampling techniques. Finally, the outcomes of various tree-based single classifiers are compared with those of stacking and voting ensemble classifiers. Our experimental results demonstrate that: (i) XGBCLUS enhances true positive rate (TPR) and receiver operating characteristic-area under curve (ROC-AUC) scores compared to state-of-the-art under-sampling and over-sampling techniques, and (ii) our proposed ensemble classifiers outperform traditional single tree-based machine learning classifiers in terms of accuracy, TPR, and false positive rate (FPR) scores.
Keywords: Anomaly detection; blockchain; Bitcoin transactions; data imbalance; data sampling; explainable AI; machine learning; decision tree; anomaly rules
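A minimal sketch of the under-sampling plus ensemble idea follows; RandomUnderSampler stands in for the paper's XGBCLUS algorithm, the transaction data are synthetic, and the classifier choices are illustrative.

```python
# Illustrative sketch: under-sampling then a soft-voting tree ensemble for rare anomalies.
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier, VotingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for Bitcoin transaction features with rare anomalies (~2% positives).
X, y = make_classification(n_samples=20000, n_features=12, weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balance the training data (generic under-sampler standing in for XGBCLUS).
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_tr, y_tr)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
                ("xgb", XGBClassifier(n_estimators=300, eval_metric="logloss"))],
    voting="soft")
ensemble.fit(X_bal, y_bal)

proba = ensemble.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, proba > 0.5).ravel()
print("TPR:", tp / (tp + fn), "FPR:", fp / (fp + tn), "ROC-AUC:", roc_auc_score(y_te, proba))
```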
6. High-throughput screening of CO_(2) cycloaddition MOF catalyst with an explainable machine learning model
Authors: Xuefeng Bai, Yi Li, Yabo Xie, Qiancheng Chen, Xin Zhang, Jian-Rong Li. Green Energy & Environment, SCIE EI CAS, 2025, Issue 1, pp. 132-138 (7 pages)
The high porosity and tunable chemical functionality of metal-organic frameworks (MOFs) make them a promising catalyst design platform. High-throughput screening of catalytic performance is feasible because a large MOF structure database is available. In this study, we report a machine learning model for high-throughput screening of MOF catalysts for the CO_(2) cycloaddition reaction. The descriptors for model training were judiciously chosen according to the reaction mechanism, which leads to a high accuracy of up to 97% with the 75% quantile of the training set as the classification criterion. The feature contributions were further evaluated with SHAP and PDP analysis to provide physical understanding. Using the model, 12,415 hypothetical MOF structures and 100 reported MOFs were evaluated under 100 °C and 1 bar within one day, and 239 potentially efficient catalysts were discovered. Among them, MOF-76(Y) achieved the top performance experimentally among the reported MOFs, in good agreement with the prediction.
Keywords: Metal-organic frameworks; high-throughput screening; machine learning; explainable model; CO_(2) cycloaddition
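The screening workflow can be illustrated roughly as below: MOFs are labelled "promising" when a target property exceeds the 75th percentile, a classifier is trained on mechanism-inspired descriptors, and feature effects are inspected with SHAP and partial dependence. The descriptor names and synthetic data are assumptions, not the paper's descriptors.

```python
# Sketch of a quantile-threshold screening classifier with SHAP and PDP inspection.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
X = pd.DataFrame({
    "pore_diameter": rng.uniform(3, 20, n),            # illustrative descriptors only
    "open_metal_site_density": rng.uniform(0, 2, n),
    "lewis_acidity_proxy": rng.uniform(0, 1, n),
    "surface_area": rng.uniform(500, 5000, n),
})
yield_proxy = (X["open_metal_site_density"] * X["lewis_acidity_proxy"]
               + 1e-4 * X["surface_area"] + rng.normal(0, 0.1, n))
y = (yield_proxy > np.quantile(yield_proxy, 0.75)).astype(int)  # 75% quantile criterion

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("screening accuracy:", clf.score(X_te, y_te))

shap_values = shap.TreeExplainer(clf).shap_values(X_te)   # per-feature SHAP contributions
PartialDependenceDisplay.from_estimator(clf, X_te, ["open_metal_site_density"])  # PDP analysis
```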
7. Intrumer: A Multi Module Distributed Explainable IDS/IPS for Securing Cloud Environment
Authors: Nazreen Banu A, S. K. B. Sangeetha. Computers, Materials & Continua, SCIE EI, 2025, Issue 1, pp. 579-607 (29 pages)
The increasing use of cloud-based devices has brought cybersecurity and unwanted network traffic to a critical point. Cloud environments pose significant challenges in maintaining privacy and security, and global approaches such as IDS have been developed to tackle these issues. However, most conventional Intrusion Detection System (IDS) models struggle with unseen cyberattacks and complex, high-dimensional data. This paper introduces a novel distributed, explainable, heterogeneous transformer-based intrusion detection system, named INTRUMER, which offers balanced accuracy, reliability, and security in cloud settings through multiple modules working together. The traffic captured from cloud devices is first passed to the TC&TM module, in which the Falcon Optimization Algorithm optimizes the feature selection process and the Naïve Bayes algorithm performs feature classification. The selected features are then forwarded to the Heterogeneous Attention Transformer (HAT) module, where the contextual interactions of the network traffic are taken into account to classify it as normal or malicious. The classified results are further analyzed by the Explainable Prevention Module (XPM) to ensure trustworthiness by providing interpretable decisions. With the explanations from the classifier, emergency alarms are transmitted to nearby IDS modules, servers, and underlying cloud devices to strengthen preventive measures. Extensive experiments on the benchmark IDS datasets CICIDS 2017, Honeypots, and NSL-KDD demonstrate the efficiency of the INTRUMER model in detecting different types of network traffic with high accuracy. The proposed model outperforms state-of-the-art approaches, obtaining better performance metrics: 98.7% accuracy, 97.5% precision, 96.3% recall, and 97.8% F1-score. Such results validate the robustness and effectiveness of INTRUMER in securing diverse cloud environments against sophisticated cyber threats.
Keywords: Cloud computing; intrusion detection system; transformers; explainable artificial intelligence (XAI)
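A heavily reduced sketch of only the first INTRUMER stage (feature selection followed by Naïve Bayes classification) is given below; mutual information replaces the paper's Falcon Optimization Algorithm, the data are synthetic stand-ins for flow features, and the transformer and explanation modules are omitted.

```python
# Reduced sketch: select traffic features, then classify with Gaussian Naive Bayes.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Stand-in for flow-level features from an IDS dataset such as CICIDS 2017.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Mutual-information ranking used here as a generic stand-in for the optimizer-driven selection.
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
nb = GaussianNB().fit(selector.transform(X_tr), y_tr)
print(classification_report(y_te, nb.predict(selector.transform(X_te))))
```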
8. AI-Powered Threat Detection in Online Communities: A Multi-Modal Deep Learning Approach
Authors: Ravi Teja Potla. Journal of Computer and Communications, 2025, Issue 2, pp. 155-171 (17 pages)
The rapid growth of online communities has brought an increase in cyber threats, including cyberbullying, hate speech, misinformation, and online harassment, making content moderation a pressing necessity. Traditional single-modal AI-based detection systems, which analyze text, images, or videos in isolation, have proven ineffective at capturing multi-modal threats, in which malicious actors spread harmful content across multiple formats. To address these challenges, we propose a multi-modal deep learning framework that integrates Natural Language Processing (NLP), Convolutional Neural Networks (CNNs), and Long Short-Term Memory (LSTM) networks to identify and mitigate online threats effectively. Our proposed model combines BERT for text classification, ResNet50 for image processing, and a hybrid LSTM-3D CNN network for video content analysis. We constructed a large-scale dataset comprising 500,000 textual posts, 200,000 offensive images, and 50,000 annotated videos from multiple platforms, including Twitter, Reddit, YouTube, and online gaming forums. The system was carefully evaluated using standard machine learning metrics, including accuracy, precision, recall, F1-score, and ROC-AUC curves. Experimental results demonstrate that our multi-modal approach significantly outperforms single-modal AI classifiers, achieving an accuracy of 92.3%, precision of 91.2%, recall of 90.1%, and an AUC score of 0.95. The findings validate the necessity of integrating multi-modal AI for real-time, high-accuracy online threat detection and moderation. Future work will focus on improving adversarial robustness, enhancing scalability for real-world deployment, and addressing ethical concerns associated with AI-driven content moderation.
Keywords: Multi-modal AI; deep learning; Natural Language Processing (NLP); explainable AI (XAI); federated learning; cyber threat detection; LSTM; CNNs
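A structural skeleton of the late-fusion idea is sketched below in PyTorch: precomputed text, image, and video embeddings are projected and concatenated before a shared classification head. The upstream BERT/ResNet50/LSTM-CNN encoders are assumed to produce these embeddings and are not reproduced; all dimensions are arbitrary.

```python
# Skeleton only: late fusion of text, image, and video embeddings for threat classification.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=2048, video_dim=512, n_classes=2):
        super().__init__()
        # Project each modality to a common size before fusing.
        self.text_proj = nn.Linear(text_dim, 256)
        self.image_proj = nn.Linear(image_dim, 256)
        self.video_proj = nn.Linear(video_dim, 256)
        self.head = nn.Sequential(
            nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(3 * 256, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, text_emb, image_emb, video_emb):
        fused = torch.cat([self.text_proj(text_emb),
                           self.image_proj(image_emb),
                           self.video_proj(video_emb)], dim=-1)
        return self.head(fused)

model = FusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048), torch.randn(4, 512))
print(logits.shape)  # (4, 2) class logits for a batch of 4 posts
```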
9. Explainability-based Trust Algorithm for electricity price forecasting models
Authors: Leena Heistrene, Ram Machlev, Michael Perl, Juri Belikov, Dmitry Baimel, Kfir Levy, Shie Mannor, Yoash Levron. Energy and AI, 2023, Issue 4, pp. 141-158 (18 pages)
Advanced machine learning (ML) algorithms have outperformed traditional approaches in various forecasting applications, especially electricity price forecasting (EPF). However, the prediction accuracy of ML models reduces substantially if the input data are not similar to the data seen by the model during training. This is often observed in EPF problems when market dynamics change owing to a rise in fuel prices, an increase in renewable penetration, a change in operational policies, etc. While the dip in model accuracy for unseen data is a cause for concern, what is more challenging is not knowing when the ML model will respond in such a manner. Such uncertainty makes power market participants, like bidding agents and retailers, vulnerable to substantial financial loss caused by the prediction errors of EPF models. Therefore, it becomes essential to identify whether or not the model prediction at a given instance is trustworthy. In this light, this paper proposes a trust algorithm for EPF users based on explainable artificial intelligence techniques. The suggested algorithm generates trust scores that reflect the model's prediction quality for each new input. These scores are formulated in two stages: in the first stage, a coarse version of the score is formed using correlations of local and global explanations, and in the second stage, the score is fine-tuned further by the Shapley additive explanations values of different features. Such score-based explanations are more straightforward than feature-based visual explanations for EPF users like asset managers and traders. Datasets from Italy's and ERCOT's electricity markets validate the efficacy of the proposed algorithm. Results show that the algorithm has more than 85% accuracy in identifying good predictions when the data distribution is similar to the training dataset. In the case of distribution shift, the algorithm shows the same accuracy level in identifying bad predictions.
Keywords: Electricity price forecasting; EPF; explainable AI model; XAI; SHAP; explainability
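The coarse, first-stage trust score can be illustrated as below: the local SHAP explanation of one forecast is rank-correlated with the global importance profile learned on training data. The exact score formulation and the fine-tuning stage from the paper are not reproduced; the data and model are synthetic stand-ins.

```python
# Sketch of a coarse trust score from local-vs-global explanation agreement.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=8, noise=5.0, random_state=0)  # price-forecast stand-in
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
explainer = shap.TreeExplainer(model)
global_importance = np.abs(explainer.shap_values(X_tr)).mean(axis=0)  # global profile

def coarse_trust_score(x_row):
    """Rank-correlate the local attribution pattern with the global profile."""
    local = np.abs(explainer.shap_values(x_row.reshape(1, -1)))[0]
    rho, _ = spearmanr(local, global_importance)
    return (rho + 1) / 2  # map [-1, 1] correlation to a [0, 1] trust score

print("trust score for one test instance:", round(coarse_trust_score(X_te[0]), 3))
```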
10. Causal temporal graph attention network for fault diagnosis of chemical processes (Cited by 1)
Authors: Jiaojiao Luo, Zhehao Jin, Heping Jin, Qian Li, Xu Ji, Yiyang Dai. Chinese Journal of Chemical Engineering, SCIE EI CAS CSCD, 2024, Issue 6, pp. 20-32 (13 pages)
Fault detection and diagnosis (FDD) plays a significant role in ensuring the safety and stability of chemical processes. With the development of artificial intelligence (AI) and big data technologies, data-driven approaches with excellent performance are widely used for FDD in chemical processes. However, improved predictive accuracy has often been achieved through increased model complexity, which turns models into black-box methods and causes uncertainty regarding their decisions. In this study, a causal temporal graph attention network (CTGAN) is proposed for fault diagnosis of chemical processes. A chemical causal graph is built by causal inference to represent the propagation path of faults. The attention mechanism and the chemical causal graph are combined to highlight the key variables related to fault fluctuations. Experiments on the Tennessee Eastman (TE) process and the green ammonia (GA) process showed that CTGAN achieves high performance and good explainability.
Keywords: Chemical processes; safety; fault diagnosis; causal discovery; attention mechanism; explainability
11. Transparency: The Missing Link to Boosting AI Transformations in Chemical Engineering
Authors: Yue Yuan, Donovan Chaffart, Tao Wu, Jesse Zhu. Engineering, SCIE EI CAS CSCD, 2024, Issue 8, pp. 45-60 (16 pages)
The opacity of data-driven artificial intelligence (AI) algorithms has become an impediment to their extensive utilization, especially within sensitive domains concerning health, safety, and high profitability, such as chemical engineering (CE). In order to promote reliable AI utilization in CE, this review discusses the concept of transparency in AI applications, defined on the basis of both explainable AI (XAI) concepts and key features of the CE field. This review also highlights the requirements of reliable AI from the aspects of causality (i.e., the correlations between the predictions and inputs of an AI), explainability (i.e., the operational rationales of the workflows), and informativeness (i.e., the mechanistic insights into the investigated systems). Related techniques are evaluated together with state-of-the-art applications to highlight the significance of establishing reliable AI applications in CE. Furthermore, a comprehensive transparency analysis case study is provided as an example to enhance understanding. Overall, this work provides a thorough discussion of the subject that is, for the first time, particularly geared toward chemical engineers in order to raise awareness of responsible AI utilization. With this vital missing link, AI is anticipated to serve as a novel and powerful tool that can tremendously aid chemical engineers in solving bottleneck challenges in CE.
Keywords: Transparency; explainable AI; reliability; causality; explainability; informativeness; hybrid modeling; physics-informed
12. Meta databases of steel frame buildings for surrogate modelling and machine learning-based feature importance analysis (Cited by 1)
Authors: Delbaz Samadian, Imrose B. Muhit, Annalisa Occhipinti, Nashwan Dawood. Resilient Cities and Structures, 2024, Issue 1, pp. 20-43 (24 pages)
Traditionally, nonlinear time history analysis (NLTHA) is used to assess the performance of structures under future hazards, which is necessary to develop effective disaster risk management strategies. However, this method is computationally intensive and not suitable for analyzing a large number of structures on a city-wide scale. Surrogate models offer an efficient and reliable alternative and facilitate evaluating the performance of multiple structures under different hazard scenarios. However, creating a comprehensive database for surrogate modelling at the city level presents challenges. To overcome this, the present study proposes meta databases and a general framework for surrogate modelling of steel structures. The dataset includes 30,000 steel moment-resisting frame buildings, representing low-rise, mid-rise, and high-rise buildings, with criteria for connections, beams, and columns. Pushover analysis is performed and structural parameters are extracted. Finally, incorporating two different machine learning algorithms, random forest and Shapley additive explanations, sensitivity and explainability analyses of the structural parameters are performed to identify the most significant factors in designing steel moment-resisting frames. The framework and databases can serve as a validated source for surrogate modelling of steel frame structures in disaster risk management.
Keywords: Surrogate models; meta database; pushover analysis; steel moment resisting frames; sensitivity and explainability analyses; machine learning
13. Machine Learning-Driven Classification for Enhanced Rule Proposal Framework
Authors: B. Gomathi, R. Manimegalai, Srivatsan Santhanam, Atreya Biswas. Computer Systems Science & Engineering, 2024, Issue 6, pp. 1749-1765 (17 pages)
In enterprise operations, maintaining manual rules for enterprise processes can be expensive, time-consuming, and dependent on specialized domain knowledge. Recently, rule generation has been automated in enterprises, particularly through machine learning, to streamline routine tasks. Typically, these machine learning models are black boxes where the reasons for decisions are not always transparent, and end users need to verify the model proposals as part of user acceptance testing in order to trust them. In such scenarios, rules excel over machine learning models because end users can verify the rules and place more trust in them. In many scenarios the truth label changes frequently, making it difficult for a machine learning model to learn until a considerable amount of data has accumulated, whereas rules can be adapted to the changed truth directly. This paper presents a novel framework for generating human-understandable rules using the Classification and Regression Tree (CART) decision tree method, which ensures both optimization and user trust in automated decision-making processes. The framework generates comprehensible rules of the form "if condition, then predicted class", even in domains where noise is present. The proposed system transforms enterprise operations by automating the production of human-readable rules from structured data, resulting in increased efficiency and transparency. Removing the need for manual rule construction saves time and money while guaranteeing that users can readily check and trust the system's automatic judgments. The strong performance metrics of the framework, which achieves 99.85% accuracy and 96.30% precision, further support its efficiency in translating complex data into comprehensible rules, ultimately empowering users and enhancing organizational decision-making processes.
Keywords: Classification and regression tree; process automation; rules engine; model interpretability; explainability; model trust
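A minimal sketch of turning a CART tree into human-readable if-then rules with scikit-learn follows; a standard public dataset stands in for enterprise process data, and the tree depth is an arbitrary choice.

```python
# Minimal sketch: train a shallow CART tree and print its decision paths as if-then rules.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # stand-in for structured enterprise data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

cart = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
cart.fit(X_tr, y_tr)
print("hold-out accuracy:", round(cart.score(X_te, y_te), 3))

# Each root-to-leaf path reads as "if condition and condition ... then class".
print(export_text(cart, feature_names=list(X.columns)))
```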
14. Machine learning in predicting postoperative complications in Crohn's disease
Authors: Li-Fan Zhang, Liu-Xiang Chen, Wen-Juan Yang, Bing Hu. World Journal of Gastrointestinal Surgery, SCIE, 2024, Issue 8, pp. 2745-2747 (3 pages)
Crohn's disease (CD) is a chronic inflammatory bowel disease of unknown origin that can cause significant disability and morbidity as it progresses. Due to the unique nature of CD, surgery is often necessary for many patients during their lifetime, and the incidence of postoperative complications is high, which can affect patient prognosis. Therefore, it is essential to identify and manage postoperative complications. Machine learning (ML) has become increasingly important in the medical field, and ML-based models can be used to predict postoperative complications of intestinal resection for CD. Recently, a valuable article titled "Predicting short-term major postoperative complications in intestinal resection for Crohn's disease: A machine learning-based study" was published by Wang et al. We appreciate the authors' creative work, and we are willing to share our views and discuss them with the authors.
Keywords: Crohn's disease; intestinal resection; postoperative complications; machine learning; explainability
15. IDS-INT: Intrusion detection system using transformer-based transfer learning for imbalanced network traffic (Cited by 6)
Authors: Farhan Ullah, Shamsher Ullah, Gautam Srivastava, Jerry Chun-Wei Lin. Digital Communications and Networks, SCIE CSCD, 2024, Issue 1, pp. 190-204 (15 pages)
A network intrusion detection system is critical for cyber security against illegitimate attacks. From a feature perspective, network traffic may include a variety of elements such as attack reference, attack type, a subcategory of attack, host information, malicious scripts, etc. From a network perspective, traffic may contain an imbalanced number of harmful attacks compared to normal traffic. It is challenging to identify a specific attack due to complex features and data imbalance issues. To address these issues, this paper proposes an Intrusion Detection System using transformer-based transfer learning for Imbalanced Network Traffic (IDS-INT). IDS-INT uses transformer-based transfer learning to learn feature interactions in both network feature representation and imbalanced data. First, detailed information about each type of attack is gathered from network interaction descriptions, which include network nodes, attack type, reference, host information, etc. Second, a transformer-based transfer learning approach is developed to learn detailed feature representations using their semantic anchors. Third, the Synthetic Minority Oversampling Technique (SMOTE) is implemented to balance abnormal traffic and detect minority attacks. Fourth, a Convolutional Neural Network (CNN) model is designed to extract deep features from the balanced network traffic. Finally, a hybrid CNN-Long Short-Term Memory (CNN-LSTM) model is developed to detect different types of attacks from the deep features. Detailed experiments are conducted to test the proposed approach using three standard datasets, i.e., UNSW-NB15, CIC-IDS2017, and NSL-KDD. An explainable AI approach is implemented to interpret the proposed method and develop a trustable model.
Keywords: Network intrusion detection; transfer learning; feature extraction; imbalanced data; explainable AI; cybersecurity
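Two of the stages described above, SMOTE balancing and CNN-LSTM classification, can be sketched as follows; the transformer-based transfer learning and explainability steps are omitted, and the shapes, data, and hyperparameters are arbitrary.

```python
# Compact sketch: SMOTE-balanced training data fed to a small CNN-LSTM classifier.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import tensorflow as tf

X, y = make_classification(n_samples=6000, n_features=40, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Treat the 40 flow features as a length-40 sequence with one channel.
def to_seq(a):
    return a.reshape((a.shape[0], a.shape[1], 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # local feature extraction
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(32),                                      # sequential dependencies
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(to_seq(X_bal), y_bal, epochs=3, batch_size=64, verbose=0)
print(model.evaluate(to_seq(X_te), y_te, verbose=0))  # [loss, accuracy] on the untouched test split
```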
16. An automated, explainable machine learning model for landslide susceptibility assessment
Authors: Ma Xianglong, Wen Haijia, Zhang Tingbin, Sun Deliang, Pan Mingchen. Journal of Beijing Normal University (Natural Science), CSCD, Peking University Core, 2024, Issue 6, pp. 806-818 (13 pages)
The complexity of model training and the difficulty of interpreting prediction results have greatly limited the development of machine learning in landslide susceptibility assessment. This study builds a comprehensively interpretable landslide susceptibility assessment model based on the SHAP-XGBoost algorithm, introducing explainable artificial intelligence (XAI) and automated machine learning (AutoML) into landslide susceptibility research and automating complex model training, hyperparameter optimization, susceptibility mapping, and model interpretation. Tests of the model at two scales, grid units and slope units, in Fengjie County of the Three Gorges Reservoir area show that the model achieves interpretable, automated landslide susceptibility assessment with high predictive accuracy. The test-set AUC values of the grid-unit and slope-unit models are 0.875 and 0.873, and accuracy, precision, recall, and F1 scores are all well above 0.5. The SHAP algorithm explains the model both globally and locally, helping to understand the causes of model decisions and the occurrence patterns of landslide hazards. In addition, SHAP can also explain the predictions for individual assessment units with high credibility. The results provide an important reference for research on automated machine learning and model interpretability.
Keywords: AutoML; explainable; SHAP; landslide susceptibility zonation
17. Machine Fault Diagnosis Using Audio Sensors Data and Explainable AI Techniques – LIME and SHAP (Cited by 1)
Authors: Aniqua Nusrat Zereen, Abir Das, Jia Uddin. Computers, Materials & Continua, SCIE EI, 2024, Issue 9, pp. 3463-3484 (22 pages)
Machine fault diagnostics are essential for industrial operations, and advancements in machine learning have significantly advanced these systems by providing accurate predictions and expedited solutions. Machine learning models, especially those utilizing complex algorithms like deep learning, have demonstrated major potential in extracting important information from large operational datasets. Despite their efficiency, machine learning models face challenges, making Explainable AI (XAI) crucial for improving their understandability and fine-tuning. This study examines the importance of feature contribution and selection using XAI in the diagnosis of machine faults. The technique is applied to evaluate different machine learning algorithms: Extreme Gradient Boosting, Support Vector Machine, Gaussian Naïve Bayes, and Random Forest classifiers are used alongside Logistic Regression (LR) as a baseline model, and their efficacy and simplicity are evaluated thoroughly with empirical analysis. XAI is used as a targeted feature-selection technique to select among 29 features of the time and frequency domain. The XAI approach is lightweight, trained with only the targeted features, and achieves results similar to the traditional approach. The accuracy without XAI on the baseline LR is 79.57%, whereas the approach with XAI on LR is 80.28%.
Keywords: Explainable AI; feature selection; machine learning; machine fault diagnosis
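The "XAI as targeted feature selection" idea can be sketched as below: features are ranked by mean absolute SHAP value from a reference model and only the top-ranked subset is used to re-train the lightweight baseline. The 29 time/frequency-domain audio features are simulated here, not extracted from real sensor recordings.

```python
# Sketch: SHAP-ranked feature selection followed by re-training a lighter baseline model.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=3000, n_features=29, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: fit a reference model and rank features by mean |SHAP value|.
ref = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
sv = shap.TreeExplainer(ref).shap_values(X_tr)
sv = np.array(sv[1]) if isinstance(sv, list) else sv   # shap-version dependent output
importance = np.abs(sv).mean(axis=0)
if importance.ndim > 1:                                 # newer shap keeps a per-class axis
    importance = importance.mean(axis=-1)
top = np.argsort(importance)[::-1][:10]                 # keep the 10 most influential features

# Step 2: retrain the baseline Logistic Regression on the targeted feature subset.
scaler = StandardScaler().fit(X_tr[:, top])
lr = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr[:, top]), y_tr)
print("accuracy with SHAP-selected features:",
      round(lr.score(scaler.transform(X_te[:, top]), y_te), 3))
```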
18. Spatial Attention Integrated EfficientNet Architecture for Breast Cancer Classification with Explainable AI
Authors: Sannasi Chakravarthy, Bharanidharan Nagarajan, Surbhi Bhatia Khan, Vinoth Kumar Venkatesan, Mahesh Thyluru Ramakrishna, Ahlam AlMusharraf, Khursheed Aurungzeb. Computers, Materials & Continua, SCIE EI, 2024, Issue 9, pp. 5029-5045 (17 pages)
Breast cancer is a type of cancer responsible for higher mortality rates among women. The severity of breast cancer always requires a promising approach for its earlier detection. In light of this, the proposed research leverages the representation ability of a pretrained EfficientNet-B0 model and the classification ability of the XGBoost model for the binary classification of breast tumors. In addition, the transfer learning model is modified so that it focuses more on tumor cells in the input mammogram. Accordingly, the work proposes an EfficientNet-B0 with a Spatial Attention Layer and XGBoost (ESA-XGBNet) for binary classification of mammograms. The model is trained, tested, and validated using original and augmented mammogram images from three public datasets, namely the CBIS-DDSM, INbreast, and MIAS databases. Maximum classification accuracies of 97.585% (CBIS-DDSM), 98.255% (INbreast), and 98.91% (MIAS) are obtained using the proposed ESA-XGBNet architecture, compared with existing models. Furthermore, the decision-making of the proposed ESA-XGBNet architecture is visualized and validated using the Attention-Guided GradCAM-based Explainable AI technique.
Keywords: EfficientNet; mammograms; breast cancer; explainable AI; deep learning; transfer learning
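A structural sketch of the described architecture, assuming TensorFlow/Keras, is shown below: EfficientNet-B0 feature maps are re-weighted by a simple CBAM-style spatial-attention map, globally pooled, and fed to an XGBoost classifier. This is not the authors' exact layer design or training setup; the input size, attention layer, and toy data are assumptions.

```python
# Structural sketch only: EfficientNet-B0 features + spatial attention + XGBoost head.
import numpy as np
import tensorflow as tf
from xgboost import XGBClassifier

def build_feature_extractor(input_shape=(224, 224, 3)):
    backbone = tf.keras.applications.EfficientNetB0(include_top=False, weights=None,
                                                    input_shape=input_shape)
    x = backbone.output  # (H, W, C) feature maps
    # Spatial attention: a sigmoid map built from channel-wise average and max pooling.
    avg_pool = tf.keras.layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    max_pool = tf.keras.layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    attn = tf.keras.layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        tf.keras.layers.Concatenate(axis=-1)([avg_pool, max_pool]))
    x = tf.keras.layers.Multiply()([x, attn])            # re-weight locations by attention
    features = tf.keras.layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(backbone.input, features)

extractor = build_feature_extractor()

# Toy stand-in for preprocessed mammogram patches and benign/malignant labels.
images = np.random.rand(16, 224, 224, 3).astype("float32")
labels = np.array([0, 1] * 8)

feats = extractor.predict(images, verbose=0)
clf = XGBClassifier(n_estimators=50, eval_metric="logloss").fit(feats, labels)
print("predicted classes:", clf.predict(feats[:4]))
```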
19. Assessor Feedback Mechanism for Machine Learning Model
Authors: Musulmon Lolaev, Anand Paul, Jeonghong Kim. Computers, Materials & Continua, SCIE EI, 2024, Issue 12, pp. 4707-4726 (20 pages)
Evaluating artificial intelligence (AI) systems is crucial for their successful deployment and safe operation in real-world applications. The assessor meta-learning model has recently been introduced to assess AI system behaviors, developed from emergent characteristics of AI systems and their responses on a test set. The original approach does not cover continuous ranges, for example regression problems, and it produces only the probability of success. In this work, to address existing limitations and enhance practical applicability, we propose an assessor feedback mechanism designed to identify and learn from AI system errors, enabling the system to perform the target task more effectively while concurrently correcting its mistakes. Our empirical analysis demonstrates the efficacy of this approach. Specifically, we introduce a transition methodology that converts prediction errors into relative success, which is particularly beneficial for regression tasks. We then apply this framework to both neural network and support vector machine models across regression and classification tasks, thoroughly testing its performance on a comprehensive suite of 30 diverse datasets. Our findings highlight the robustness and adaptability of the assessor feedback mechanism, showcasing its potential to improve model accuracy and reliability across varied data contexts.
Keywords: Artificial intelligence; assessor model; evaluation; meta-learning; trustworthy; explainable AI
20. Explainable Artificial Intelligence (XAI) Model for Cancer Image Classification
Authors: Amit Singhal, Krishna Kant Agrawal, Angeles Quezada, Adrian Rodriguez Aguiñaga, Samantha Jiménez, Satya Prakash Yadav. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 10, pp. 401-441 (41 pages)
The use of Explainable Artificial Intelligence (XAI) models is becoming increasingly important for decision-making in smart healthcare environments, to ensure that decisions are based on trustworthy algorithms and that healthcare workers understand the decisions these algorithms make. Such models can potentially enhance interpretability and explainability in decision-making processes that rely on artificial intelligence. Nevertheless, the intricate nature of the healthcare field necessitates sophisticated models to classify cancer images. This research presents an advanced investigation of XAI models for cancer image classification. It describes the different levels of explainability and interpretability associated with XAI models and the challenges faced in deploying them in healthcare applications. In addition, this study proposes a novel framework for cancer image classification that incorporates XAI models with deep learning and advanced medical imaging techniques. The proposed model integrates several techniques, including end-to-end explainable evaluation, rule-based explanation, and user-adaptive explanation. The proposed XAI model reaches 97.72% accuracy, 90.72% precision, 93.72% recall, 96.72% F1-score, 9.55% FDR, 9.66% FOR, and 91.18% DOR. The study also discusses potential applications of the proposed XAI models in the smart healthcare environment, helping to ensure trust and accountability in AI-based decisions, which is essential for achieving a safe and reliable smart healthcare environment.
Keywords: Explainable artificial intelligence; artificial intelligence; XAI; healthcare; cancer; image classification
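As one concrete example of a visual explanation that such a framework can produce, the sketch below computes a Grad-CAM heatmap for a toy CNN cancer-image classifier. Grad-CAM is used here only as a common, generic XAI technique; it is not claimed to be the specific explanation method proposed in the paper, and the model and data are toys.

```python
# Illustrative only: Grad-CAM heatmap for a small CNN image classifier.
import numpy as np
import tensorflow as tf

# Toy CNN standing in for a trained cancer-image classifier.
inputs = tf.keras.Input(shape=(96, 96, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same", name="last_conv")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

def grad_cam(model, image, conv_layer_name="last_conv"):
    """Return a heatmap of the regions that most increase the predicted class score."""
    grad_model = tf.keras.Model(model.input,
                                [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_index = int(tf.argmax(preds[0]))
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)           # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))     # global-average-pooled gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalised coarse saliency map

heatmap = grad_cam(model, np.random.rand(96, 96, 3).astype("float32"))
print("heatmap shape:", heatmap.shape)
```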