Journal Articles
10 articles found
1. Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4379-4397 (19 pages).
Software project outcomes heavily depend on natural language requirements, often causing diverse interpretations and issues like ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, these studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, which reduces the dimensionality and redundancy of features and retains only the relevant ones; transfer learning, which trains and tests the model on different datasets to analyze how much of the learning carries over to other datasets; and an ensemble method, which explores the increase in performance obtained by combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four PROMISE datasets are used in the study, showing an increase in the model's performance by providing better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The results reveal that an amalgam of techniques such as those used in this study, namely feature selection, transfer learning, and ensemble methods, helps optimize software bug prediction models and yields a high-performing, useful end model.
Keywords: natural language processing, software bug prediction, transfer learning, ensemble learning, feature selection
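A minimal sketch of the general recipe this abstract describes (feature selection, cross-dataset evaluation as a stand-in for transfer learning, and a soft-voting ensemble), not the authors' actual pipeline; the synthetic data, choice of classifiers, and the k value are illustrative assumptions.

```python
# Sketch only: synthetic stand-ins for one source (e.g., NASA) and one target
# (e.g., PROMISE) defect dataset; the paper's concrete datasets are not used here.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

X_src, y_src = make_classification(n_samples=800, n_features=40, weights=[0.8], random_state=0)
X_tgt, y_tgt = make_classification(n_samples=400, n_features=40, weights=[0.8], random_state=1)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft",
)
model = make_pipeline(SelectKBest(f_classif, k=15), ensemble)  # drop redundant features
model.fit(X_src, y_src)                                        # train on the source project
auc = roc_auc_score(y_tgt, model.predict_proba(X_tgt)[:, 1])   # test on a different project
print(f"cross-dataset AUC-ROC: {auc:.3f}")
```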
2. AI-Driven Resource and Communication-Aware Virtual Machine Placement Using Multi-Objective Swarm Optimization for Enhanced Efficiency in Cloud-Based Smart Manufacturing
Authors: Praveena Nuthakki, Pavan Kumar T., Musaed Alhussein, Muhammad Shahid Anwar, Khursheed Aurangzeb, Leenendra Chowdary Gunnam. Computers, Materials & Continua (SCIE, EI), 2024, Issue 12, pp. 4743-4756 (14 pages).
Cloud computing has emerged as a vital platform for processing resource-intensive workloads in smart manufacturing environments, enabling scalable and flexible access to remote data centers over the internet. In these environments, Virtual Machines (VMs) are employed to manage workloads, with their optimal placement on Physical Machines (PMs) being crucial for maximizing resource utilization. However, achieving high resource utilization in cloud data centers remains a challenge due to multiple conflicting objectives, particularly in scenarios involving inter-VM communication dependencies, which are common in smart manufacturing applications. This manuscript presents an AI-driven approach utilizing a modified Multi-Objective Particle Swarm Optimization (MOPSO) algorithm, enhanced with improved mutation and crossover operators, to efficiently place VMs. This approach aims to minimize the impact on networking devices during inter-VM communication while enhancing resource utilization. The proposed algorithm is benchmarked against other multi-objective algorithms, such as the Multi-Objective Evolutionary Algorithm with Decomposition (MOEA/D), demonstrating its superiority in optimizing resource allocation in cloud-based environments for smart manufacturing.
Keywords: resource utilization, smart manufacturing, efficiency, inter-VM communication, virtual machine placement, cloud computing, multi-objective optimization
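The paper's modified MOPSO is not reproduced here; the toy sketch below only shows how a VM-to-PM placement can be encoded and scored against the two stated objectives (load imbalance and cross-PM communication volume), using a naive mutation-based search in place of a Pareto-based swarm. All demands and traffic volumes are invented.

```python
import random

random.seed(0)
NUM_VMS, NUM_PMS = 12, 4
vm_cpu = [random.randint(1, 4) for _ in range(NUM_VMS)]        # CPU demand per VM
traffic = {(i, j): random.randint(0, 5)                        # inter-VM traffic matrix
           for i in range(NUM_VMS) for j in range(i + 1, NUM_VMS)}

def objectives(placement):
    """Return (CPU-load imbalance across PMs, cross-PM communication volume)."""
    load = [0] * NUM_PMS
    for vm, pm in enumerate(placement):
        load[pm] += vm_cpu[vm]
    imbalance = max(load) - min(load)
    comm = sum(t for (i, j), t in traffic.items() if placement[i] != placement[j])
    return imbalance, comm

def mutate(placement):
    child = placement[:]
    child[random.randrange(NUM_VMS)] = random.randrange(NUM_PMS)  # move one VM
    return child

best = [random.randrange(NUM_PMS) for _ in range(NUM_VMS)]
for _ in range(2000):                       # naive scalarized search, not a Pareto swarm
    cand = mutate(best)
    if sum(objectives(cand)) < sum(objectives(best)):
        best = cand
print("placement:", best, "objectives:", objectives(best))
```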
3. A systematic mapping to investigate the application of machine learning techniques in requirement engineering activities
Authors: Shoaib Hassan, Qianmu Li, Khursheed Aurangzeb, Affan Yasin, Javed Ali Khan, Muhammad Shahid Anwar. CAAI Transactions on Intelligence Technology, 2024, Issue 6, pp. 1412-1434 (23 pages).
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing capacity. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirement Engineering (RE) activities to solve the problems that occur in those activities. The authors present a systematic mapping of past work to investigate studies that focused on the application of supervised learning techniques in RE activities between 2002 and 2023. The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. Forty-five research studies were selected based on the exclusion and inclusion criteria. The results show that the scientific community used 57 algorithms. Among those algorithms, researchers most often used the following five ML algorithms in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-nearest neighbour classifier, and Random Forest. The results show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
Keywords: data sources, machine learning, requirement engineering, supervised learning algorithms
4. Three-Stage Transfer Learning with AlexNet50 for MRI Image Multi-Class Classification with Optimal Learning Rate
Authors: Suganya Athisayamani, A. Robert Singh, Gyanendra Prasad Joshi, Woong Cho. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, Issue 1, pp. 155-183 (29 pages).
In radiology, magnetic resonance imaging (MRI) is an essential diagnostic tool that provides detailed images of a patient's anatomical and physiological structures. MRI is particularly effective for detecting soft tissue anomalies. Traditionally, radiologists manually interpret these images, which can be labor-intensive and time-consuming due to the vast amount of data. To address this challenge, machine learning and deep learning approaches can be utilized to improve the accuracy and efficiency of anomaly detection in MRI scans. This manuscript presents the use of the Deep AlexNet50 model for MRI classification with discriminative learning methods. Learning proceeds in three stages: in the first stage, the whole dataset is used to learn the features; in the second stage, some layers of AlexNet50 are frozen and trained with an augmented dataset; and in the third stage, AlexNet50 is trained with the augmented dataset. The method used three publicly available MRI classification datasets for analysis: the Harvard whole brain atlas (HWBA-dataset), the School of Biomedical Engineering of Southern Medical University dataset (SMU-dataset), and The National Institute of Neuroscience and Hospitals brain MRI dataset (NINS-dataset). Various hyperparameter optimizers, such as Adam, stochastic gradient descent (SGD), root mean square propagation (RMSprop), Adamax, and AdamW, have been used to compare the performance of the learning process. The HWBA-dataset registers the maximum classification performance. We evaluated the performance of the proposed classification model using several quantitative metrics, achieving an average accuracy of 98%.
Keywords: MRI, tumors, classification, AlexNet50, transfer learning, hyperparameter tuning, optimizer
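A hedged sketch of the staged fine-tuning idea (train all layers, freeze the convolutional feature extractor, then unfreeze everything at a smaller rate), using torchvision's AlexNet as a stand-in for the paper's "AlexNet50"; the class count, learning rates, and optimizer choice are placeholders, and the training loops themselves are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4                                             # placeholder MRI class count
net = models.alexnet(weights="DEFAULT")                     # ImageNet-pretrained backbone
net.classifier[6] = nn.Linear(4096, NUM_CLASSES)            # new head for MRI classes

def make_optimizer(model, name="adamw", lr=1e-4):
    params = [p for p in model.parameters() if p.requires_grad]
    opts = {"adam": torch.optim.Adam, "sgd": torch.optim.SGD,
            "rmsprop": torch.optim.RMSprop, "adamax": torch.optim.Adamax,
            "adamw": torch.optim.AdamW}
    return opts[name](params, lr=lr)

# Stage 1: fine-tune every layer on the full (non-augmented) dataset.
opt = make_optimizer(net, "adamw", lr=1e-4)

# Stage 2: freeze the convolutional feature extractor, train the head on augmented data.
for p in net.features.parameters():
    p.requires_grad = False
opt = make_optimizer(net, "adamw", lr=1e-3)

# Stage 3: unfreeze everything again and fine-tune at a smaller, discriminative rate.
for p in net.parameters():
    p.requires_grad = True
opt = make_optimizer(net, "adamw", lr=1e-5)

print(sum(p.numel() for p in net.parameters()), "parameters ready for staged training")
```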
5. NPBMT: A Novel and Proficient Buffer Management Technique for Internet of Vehicle-Based DTNs
Authors: Sikandar Khan, Khalid Saeed, Muhammad Faran Majeed, Salman A. AlQahtani, Khursheed Aurangzeb, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1303-1323 (21 pages).
Delay Tolerant Networks (DTNs) have the major problem of message delay in the network due to a lack of end-to-end connectivity between the nodes, especially when the nodes are mobile. The nodes in DTNs have limited buffer storage for storing delayed messages. This instantaneous sharing of data creates a buffer shortage problem. Consequently, buffer congestion occurs and no more space is available in the buffer for upcoming messages. To address this problem, a buffer management policy named "A Novel and Proficient Buffer Management Technique (NPBMT) for Internet of Vehicle-Based DTNs" is proposed. NPBMT combines appropriate-size messages with the lowest Time-to-Live (TTL) and then drops a combination of the appropriate messages to accommodate newly arrived messages. To evaluate the performance of the proposed technique, a comparison is made with Drop Oldest (DOL), Size Aware Drop (SAD), and Drop Largest (DLA). The proposed technique is implemented in the Opportunistic Network Environment (ONE) simulator. The shortest path map-based movement model has been used as the movement model for the nodes, with the epidemic routing protocol. From the simulation results, a significant change has been observed in the delivery probability, as the proposed policy delivered 380 messages, DOL delivered 186 messages, SAD delivered 190 messages, and DLA delivered only 95 messages. A significant decrease has been observed in the overhead ratio: the SAD overhead ratio is 324.37, the DLA overhead ratio is 266.74, and the DOL and NPBMT overhead ratios are 141.89 and 52.85, respectively, which reveals a significant reduction in overhead ratio for NPBMT compared to existing policies. The average network latency of DOL is 7785.5, of DLA is 5898.42, and of SAD is 5789.43, whereas the NPBMT latency average is 3909.4. This reveals that the proposed policy keeps messages in the network for a short time, which reduces the overhead ratio.
Keywords: delay tolerant networks, buffer management, message drop policy, ONE simulator, NPBMT
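One plausible greedy reading of the drop rule sketched in the abstract: when the buffer is full, free just enough space by discarding the lowest-TTL messages of suitable size. The actual NPBMT selection implemented in the ONE simulator may differ; message sizes and TTLs below are made up.

```python
from dataclasses import dataclass

@dataclass
class Msg:
    mid: str
    size: int   # bytes
    ttl: float  # remaining time-to-live (s)

class Buffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.msgs = []

    def used(self):
        return sum(m.size for m in self.msgs)

    def admit(self, new):
        need = self.used() + new.size - self.capacity
        if need > 0:                                            # congestion: must drop
            victims, freed = [], 0
            for m in sorted(self.msgs, key=lambda m: m.ttl):    # lowest TTL first
                if freed >= need:
                    break
                victims.append(m)
                freed += m.size
            if freed < need:
                return False                                    # cannot make room, reject
            self.msgs = [m for m in self.msgs if m not in victims]
        self.msgs.append(new)
        return True

buf = Buffer(capacity=100)
for i, (size, ttl) in enumerate([(40, 300), (30, 60), (50, 20), (45, 500)]):
    print(f"msg{i} admitted:", buf.admit(Msg(f"msg{i}", size, ttl)))
```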
6. Network Traffic Synthesis and Simulation Framework for Cybersecurity Exercise Systems
Authors: Dong-Wook Kim, Gun-Yoon Sin, Kwangsoo Kim, Jaesik Kang, Sun-Young Im, Myung-Mook Han. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 3637-3653 (17 pages).
In the rapidly evolving field of cybersecurity, the challenge of providing realistic exercise scenarios that accurately mimic real-world threats has become increasingly critical. Traditional methods often fall short in capturing the dynamic and complex nature of modern cyber threats. To address this gap, we propose a comprehensive framework designed to create authentic network environments tailored for cybersecurity exercise systems. Our framework leverages advanced simulation techniques to generate scenarios that mirror actual network conditions faced by professionals in the field. The cornerstone of our approach is the use of a conditional tabular generative adversarial network (CTGAN), a sophisticated tool that synthesizes realistic synthetic network traffic by learning from real data patterns. This technology allows us to handle technical components and sensitive information with high fidelity, ensuring that the synthetic data maintains statistical characteristics similar to those observed in real network environments. By meticulously analyzing the data collected from various network layers and translating it into structured tabular formats, our framework can generate network traffic that closely resembles that found in actual scenarios. An integral part of our process involves deploying this synthetic data within a simulated network environment, structured on software-defined networking (SDN) principles, to test and refine the traffic patterns. This simulation not only facilitates a direct comparison between the synthetic and real traffic but also enables us to identify discrepancies and refine the accuracy of our simulations. Our initial findings indicate an error rate of approximately 29.28% between the synthetic and real traffic data, highlighting areas for further improvement and adjustment. By providing a diverse array of network scenarios through our framework, we aim to enhance the exercise systems used by cybersecurity professionals. This not only improves their ability to respond to actual cyber threats but also ensures that the exercise is cost-effective and efficient.
Keywords: cybersecurity exercise, synthetic network traffic, generative adversarial network, traffic generation, software-defined networking
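A minimal sketch of the tabular-synthesis step using the open-source ctgan package; the flow-record columns, value ranges, and epoch count below are invented stand-ins for the paper's multi-layer network captures and training setup.

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN   # pip install ctgan

# Toy flow-style records standing in for real captures translated to tabular form.
rng = np.random.default_rng(0)
n = 500
flows = pd.DataFrame({
    "duration_ms": rng.integers(1, 2000, n),
    "bytes": rng.integers(60, 100_000, n),
    "protocol": rng.choice(["tcp", "udp", "icmp"], n),
    "dst_port": rng.choice([53, 80, 123, 443, 8080], n),
})

model = CTGAN(epochs=10)                                   # small run just to demo the API
model.fit(flows, discrete_columns=["protocol", "dst_port"])
synthetic = model.sample(200)                              # synthetic flow records
print(synthetic.head())
```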
7. ResMHA-Net: Enhancing Glioma Segmentation and Survival Prediction Using a Novel Deep Learning Framework
Authors: Novsheena Rasool, Javaid Iqbal Bhat, Najib Ben Aoun, Abdullah Alharthi, Niyaz Ahmad Wani, Vikram Chopra, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 885-909 (25 pages).
Gliomas are aggressive brain tumors known for their heterogeneity, unclear borders, and diverse locations on Magnetic Resonance Imaging (MRI) scans. These factors present significant challenges for MRI-based segmentation, a crucial step for effective treatment planning and monitoring of glioma progression. This study proposes a novel deep learning framework, ResNet Multi-Head Attention U-Net (ResMHA-Net), to address these challenges and enhance glioma segmentation accuracy. ResMHA-Net leverages the strengths of both residual blocks from the ResNet architecture and multi-head attention mechanisms. This powerful combination empowers the network to prioritize informative regions within the 3D MRI data and capture long-range dependencies. By doing so, ResMHA-Net effectively segments intricate glioma sub-regions and reduces the impact of uncertain tumor boundaries. We rigorously trained and validated ResMHA-Net on the BraTS 2018, 2019, 2020, and 2021 datasets. Notably, ResMHA-Net achieved superior segmentation accuracy on the BraTS 2021 dataset compared to the previous years, demonstrating its remarkable adaptability and robustness across diverse datasets. Furthermore, we collected the predicted masks obtained from three datasets to enhance survival prediction, effectively augmenting the dataset size. Radiomic features were then extracted from these predicted masks and, along with clinical data, were used to train a novel ensemble learning-based machine learning model for survival prediction. This model employs a voting mechanism that aggregates predictions from multiple models, leading to significant improvements over existing methods. This ensemble approach capitalizes on the strengths of various models, resulting in more accurate and reliable predictions of patient survival. Importantly, we achieved an impressive accuracy of 73% for overall survival (OS) prediction.
Keywords: glioma, MRI, segmentation, multi-head attention, survival prediction, deep learning
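An illustrative PyTorch sketch of the core combination the abstract names: a 3D residual block followed by multi-head self-attention over voxel tokens. Channel count, normalization, and head count are assumptions, and this is a toy building block, not the published ResMHA-Net architecture.

```python
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Plain 3D residual block (conv-norm-act twice plus skip connection)."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)
        self.norm1 = nn.InstanceNorm3d(ch)
        self.norm2 = nn.InstanceNorm3d(ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return self.act(x + h)

class ResMHABlock(nn.Module):
    """Residual conv block followed by multi-head self-attention over voxel tokens."""
    def __init__(self, ch, heads=4):
        super().__init__()
        self.res = ResBlock3D(ch)
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, D, H, W)
        x = self.res(x)
        b, c, d, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, D*H*W, C) voxel tokens
        attended, _ = self.attn(tokens, tokens, tokens)
        return (tokens + attended).transpose(1, 2).reshape(b, c, d, h, w)

x = torch.randn(1, 16, 8, 16, 16)              # tiny 3D feature map
print(ResMHABlock(16)(x).shape)                # torch.Size([1, 16, 8, 16, 16])
```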
8. Anomaly Detection Based on Discrete Wavelet Transformation for Insider Threat Classification
Authors: Dong-Wook Kim, Gun-Yoon Shin, Myung-Mook Han. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 7, pp. 153-164 (12 pages).
Unlike external attacks, insider threats arise from legitimate users who belong to the organization. These individuals may pose a potential threat of hostile behavior depending on their motives. For insider detection, many intrusion detection systems learn and prevent known scenarios, but because malicious behavior has patterns similar to normal behavior, in reality these systems can be evaded. Furthermore, because insider threats share a feature space similar to normal behavior, identifying them by detecting anomalies has limitations. This study proposes an improved anomaly detection methodology for insider threats in cybersecurity, in which a discrete wavelet transformation technique is applied to classify normal vs. malicious users. The discrete wavelet transformation technique easily discovers new patterns or decomposes synthesized data, making it possible to distinguish between shared characteristics. To verify the efficacy of the proposed methodology, experiments were conducted in which normal users and malicious users were classified based on insider threat scenarios provided in Carnegie Mellon University's Computer Emergency Response Team (CERT) dataset. The experimental results indicate that the proposed methodology with discrete wavelet transformation reduced the false-positive rate by 82% to 98% compared to the case with no wavelet applied. Thus, the proposed methodology has high potential for application to similar feature spaces.
Keywords: anomaly detection, cybersecurity, discrete wavelet transformation, insider threat classification
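A toy sketch of the wavelet step only: decompose a per-user activity series with a discrete wavelet transform (PyWavelets) and flag unusually energetic detail coefficients. The paper's CERT feature engineering and classifier are not reproduced, and the activity series here is simulated.

```python
import numpy as np
import pywt   # pip install PyWavelets

rng = np.random.default_rng(42)
activity = rng.poisson(lam=20, size=64).astype(float)   # e.g., logon/file events per day
activity[50:53] += 40                                   # inject a burst (possible insider action)

coeffs = pywt.wavedec(activity, "db4", level=3)         # [approximation, detail3, detail2, detail1]
detail_energy = np.concatenate([np.abs(c) for c in coeffs[1:]])
threshold = detail_energy.mean() + 3 * detail_energy.std()
print("anomalous detail coefficients:", int(np.sum(detail_energy > threshold)))
```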
9. Improved Speech Emotion Recognition Focusing on High-Level Data Representations and Swift Feature Extraction Calculation
Authors: Akmalbek Abdusalomov, Alpamis Kutlimuratov, Rashid Nasimov, Taeg Keun Whangbo. Computers, Materials & Continua (SCIE, EI), 2023, Issue 12, pp. 2915-2933 (19 pages).
The performance of a speech emotion recognition (SER) system is heavily influenced by the efficacy of its feature extraction techniques. This study was designed to advance the field of SER by optimizing feature extraction techniques, specifically through the incorporation of high-resolution Mel-spectrograms and the expedited calculation of Mel Frequency Cepstral Coefficients (MFCC). This initiative aimed to refine the system's accuracy by identifying and mitigating the shortcomings commonly found in current approaches. Ultimately, the primary objective was to elevate both the intricacy and effectiveness of our SER model, with a focus on augmenting its proficiency in the accurate identification of emotions in spoken language. The research employed a dual-strategy approach for feature extraction. Firstly, a rapid computation technique for MFCC was implemented and integrated with a Bi-LSTM layer to optimize the encoding of MFCC features. Secondly, a pretrained ResNet model was utilized in conjunction with feature stats pooling and dense layers for the effective encoding of Mel-spectrogram attributes. These two sets of features underwent separate processing before being combined in a Convolutional Neural Network (CNN) outfitted with a dense layer, with the aim of enhancing their representational richness. The model was rigorously evaluated using two prominent databases: CMU-MOSEI and RAVDESS. Notable findings include an accuracy rate of 93.2% on the CMU-MOSEI database and 95.3% on the RAVDESS database. Such exceptional performance underscores the efficacy of this innovative approach, which not only meets but also exceeds the accuracy benchmarks established by traditional models in the field of speech emotion recognition.
Keywords: feature extraction, MFCC, ResNet, speech emotion recognition
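A hedged sketch of the MFCC-plus-Bi-LSTM branch described above; the parallel ResNet/Mel-spectrogram branch and the CNN fusion stage are omitted. The sine wave stands in for a real utterance, and the emotion-class count and layer sizes are placeholders.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 16_000
y = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr * 2) / sr)      # 2 s dummy "speech" signal
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)               # (40, frames)
frames = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)  # (1, frames, 40)

class MfccBiLSTM(nn.Module):
    def __init__(self, n_mfcc=40, hidden=64, n_emotions=6):
        super().__init__()
        self.lstm = nn.LSTM(n_mfcc, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_emotions)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out.mean(dim=1))    # average over time, then classify

logits = MfccBiLSTM()(frames)
print("emotion logits:", logits.shape)       # torch.Size([1, 6])
```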
10. Towards Cache-Assisted Hierarchical Detection for Real-Time Health Data Monitoring in IoHT
Authors: Muhammad Tahir, Mingchu Li, Irfan Khan, Salman A. AlQahtani, Rubia Fatima, Javed Ali Khan, Muhammad Shahid Anwar. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 2529-2544 (16 pages).
Real-time health data monitoring is pivotal for bolstering the safety, intelligence, and efficiency of road services within the Internet of Health Things (IoHT) framework. Yet, delays in data retrieval can markedly hinder the efficacy of big data awareness detection systems. We advocate for a collaborative caching approach involving edge devices and cloud networks to combat this. This strategy is devised to streamline the data retrieval path, subsequently diminishing network strain. Crafting an adept cache processing scheme poses its own set of challenges, especially given the transient nature of monitoring data and the imperative for swift data transmission, intertwined with resource allocation tactics. This paper unveils a novel mobile healthcare solution that harnesses the power of our collaborative caching approach, facilitating nuanced health monitoring via edge devices. The system capitalizes on cloud computing for intricate health data analytics, especially in pinpointing health anomalies. Given the dynamic locational shifts and possible connection disruptions, we have architected a hierarchical detection system, particularly for crises. This system caches data efficiently and incorporates a detection utility to assess data freshness and potential lag in response times. Furthermore, we introduce the Cache-Assisted Real-Time Detection (CARD) model, crafted to optimize utility. Addressing the inherent complexity of the NP-hard CARD model, we have championed a greedy algorithm as a solution. Simulations reveal that our collaborative caching technique markedly elevates the Cache Hit Ratio (CHR) and data freshness, outshining its contemporaneous benchmark algorithms. The empirical results underscore the strength and efficiency of our innovative IoHT-based health monitoring solution. To encapsulate, this paper tackles the nuances of real-time health data monitoring in the IoHT landscape, presenting a joint edge-cloud caching strategy paired with a hierarchical detection system. Our methodology yields enhanced cache efficiency and data freshness. The corroborative numerical data accentuates the feasibility and relevance of our model, casting a beacon for the future trajectory of real-time health data monitoring systems.
Keywords: real-time health data monitoring, Cache-Assisted Real-Time Detection (CARD), edge-cloud collaborative caching scheme, hierarchical detection, Internet of Health Things (IoHT)
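A toy rendering of the greedy, utility-driven caching idea the abstract attributes to the CARD model: cache the monitoring items with the best utility per byte, where utility mixes request popularity with data freshness. The real CARD utility function and constraints are not specified here, and every item, size, and rate is invented.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    size: int          # KB in the edge cache
    popularity: float  # expected requests per minute
    age: float         # seconds since the reading was produced
    max_age: float     # freshness horizon

    def utility(self):
        freshness = max(0.0, 1.0 - self.age / self.max_age)
        return self.popularity * freshness

items = [
    Item("heart_rate", 20, popularity=9.0, age=2, max_age=30),
    Item("ecg_window", 400, popularity=5.0, age=10, max_age=60),
    Item("spo2", 15, popularity=6.0, age=25, max_age=30),
    Item("gait_report", 250, popularity=1.5, age=40, max_age=300),
]

capacity = 450  # KB of edge cache
cached, used = [], 0
for it in sorted(items, key=lambda i: i.utility() / i.size, reverse=True):  # greedy by density
    if used + it.size <= capacity:
        cached.append(it.name)
        used += it.size
print("cached:", cached, f"({used}/{capacity} KB)")
```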