Journal Articles
298 articles found
1. Segmentation of Head and Neck Tumors Using Dual PET/CT Imaging: Comparative Analysis of 2D, 2.5D, and 3D Approaches Using UNet Transformer
Authors: Mohammed A. Mahdi, Shahanawaj Ahamad, Sawsan A. Saad, Alaa Dafhalla, Alawi Alqushaibi, Rizwan Qureshi. Computer Modeling in Engineering & Sciences (indexed in SCIE, EI), 2024, No. 12, pp. 2351-2373 (23 pages)
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared to related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
Keywords: PET/CT imaging, tumor segmentation, weighted fusion, transformer, multi-modal imaging, deep learning, neural networks, clinical oncology
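A 2.5D input of the kind compared above is usually built by stacking a slice with its neighbours from both modalities as channels. A minimal sketch, assuming co-registered PET and CT volumes stored as (depth, H, W) NumPy arrays; the function and parameter names are illustrative, not the paper's:

```python
import numpy as np

def make_25d_input(pet, ct, index, context=1):
    """Build a 2.5D input for slice `index`: the slice plus `context`
    neighbours on each side, from both PET and CT, stacked as channels.
    Edge slices are clamped so the channel count stays constant."""
    depth = pet.shape[0]
    idxs = [min(max(index + o, 0), depth - 1)
            for o in range(-context, context + 1)]
    channels = [pet[i] for i in idxs] + [ct[i] for i in idxs]
    return np.stack(channels, axis=0)  # shape: (2*(2*context+1), H, W)

pet = np.random.rand(16, 64, 64)
ct = np.random.rand(16, 64, 64)
x = make_25d_input(pet, ct, index=0, context=1)
print(x.shape)  # (6, 64, 64)
```

The stacked array can then be fed to an ordinary 2D network, which is what gives the 2.5D configuration its extra spatial context at 2D cost.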
2. Bifurcation analysis and control study of improved full-speed differential model in connected vehicle environment
Authors: 艾文欢, 雷正清, 李丹洋, 方栋梁, 刘大为. Chinese Physics B (indexed in SCIE, EI, CAS, CSCD), 2024, No. 7, pp. 245-266 (22 pages)
In recent years, traffic congestion has become increasingly serious, and research on traffic system control has become a new focus. Studying the bifurcation characteristics of traffic flow systems and designing control schemes for unstable pivots can alleviate traffic congestion from a new perspective. In this work, the full-speed differential model considering the vehicle network environment is improved in order to adjust the traffic flow from the perspective of bifurcation control. The existence conditions of Hopf bifurcation and saddle-node bifurcation in the model are proved theoretically, and the mutation point for the stability of the transportation system is found. For the unstable bifurcation point, a nonlinear system feedback controller is designed using Chebyshev polynomial approximation and a stochastic feedback control method. The advancement, postponement, and elimination of Hopf bifurcation are achieved without changing the system equilibrium point, and the mutation behavior of the transportation system is controlled so as to alleviate traffic congestion. The changes in the stability of complex traffic systems are explained through the bifurcation analysis, which can better capture the characteristics of the traffic flow. By adjusting the control parameters in the feedback controllers, the influence of the boundary conditions on the stability of the traffic system is adequately described, and the effects of the unstable foci and saddle points on the system are suppressed to smooth the traffic flow. In addition, the unstable bifurcation points can be eliminated and the Hopf bifurcation can be controlled to advance, be delayed, or disappear, so as to control the stability behavior of the traffic system, which helps alleviate traffic congestion and describes actual traffic phenomena as well.
Keywords: bifurcation analysis, vehicle queuing, bifurcation control, Hopf bifurcation
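The Hopf conditions proved in the paper come down to eigenvalues of the linearized system crossing the imaginary axis as a parameter varies. A toy numerical sketch of that crossing on a generic 2D system (not the improved full-speed differential model itself; the system and scan range are illustrative):

```python
import numpy as np

def hopf_crossing(jacobian, mus):
    """Scan parameter values and return the first interval where the
    largest real part of the Jacobian eigenvalues changes sign --
    a necessary condition for a Hopf bifurcation."""
    reals = [max(np.linalg.eigvals(jacobian(m)).real) for m in mus]
    for a, b, ra, rb in zip(mus, mus[1:], reals, reals[1:]):
        if ra < 0 <= rb:  # stable -> unstable crossing
            return (a, b)
    return None

# Toy system with eigenvalues mu +/- i: loses stability exactly at mu = 0.
J = lambda mu: np.array([[mu, -1.0], [1.0, mu]])
mus = np.linspace(-0.5, 0.5, 101)
print(hopf_crossing(J, mus))
```

A continuation package would refine the bracketing interval to the exact bifurcation point; the sketch only locates it on a grid.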
3. An Optimized Approach to Deep Learning for Botnet Detection and Classification for Cybersecurity in Internet of Things Environment
Authors: Abdulrahman Alzahrani. Computers, Materials & Continua (indexed in SCIE, EI), 2024, No. 8, pp. 2331-2349 (19 pages)
The recent development of the Internet of Things (IoT) has resulted in the growth of IoT-based DDoS attacks. Botnet detection in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices. Anomaly detection models evaluate transmission patterns, network traffic, and device behaviour to detect deviations from usual activities. Machine learning (ML) techniques detect patterns signalling botnet activity, namely sudden traffic increases, unusual command-and-control patterns, or irregular device behaviour. In addition, intrusion detection systems (IDSs) and signature-based techniques are applied to recognize known malware signatures related to botnets. Various ML and deep learning (DL) techniques have been developed to detect botnet attacks in IoT systems. To overcome security issues in an IoT environment, this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification (GTODL-BADC) technique. The GTODL-BADC technique follows feature selection (FS) with optimal DL-based classification to accomplish security in an IoT environment. For data preprocessing, the min-max data normalization approach is primarily used. The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets. Moreover, the multi-head attention-based long short-term memory (MHA-LSTM) technique is applied for botnet detection. Finally, the tree seed algorithm (TSA) is used to select the optimum hyperparameters for the MHA-LSTM method. The GTODL-BADC technique was experimentally validated on a benchmark dataset. The simulation results highlight that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.
Keywords: botnet detection, Internet of Things, gorilla troops optimizer, hyperparameter tuning, intrusion detection system
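The preprocessing and feature-selection steps above can be sketched as follows. The min-max scaling matches the abstract; the subset-fitness score is a stand-in illustration (a between-class centroid distance), not the actual GTO algorithm or an MHA-LSTM-based fitness:

```python
import numpy as np

def min_max_normalize(X):
    """Scale each feature to [0, 1] -- the preprocessing step used above."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def subset_fitness(X, y, mask):
    """Proxy fitness for a binary feature mask: distance between the two
    class centroids on the selected columns (a stand-in for the paper's
    classifier-based scoring)."""
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    return np.linalg.norm(c0 - c1)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[:50, 0] += 5.0  # make feature 0 the informative one
y = np.array([0] * 50 + [1] * 50)
Xn = min_max_normalize(X)
masks = rng.integers(0, 2, size=(20, 8))  # random candidate subsets
best = max((m for m in masks if m.any()), key=lambda m: subset_fitness(Xn, y, m))
print(best)
```

A metaheuristic such as GTO replaces the random candidate masks with a guided population search over the same kind of fitness landscape.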
4. Deep Reinforcement Learning-Based Task Offloading and Service Migrating Policies in Service Caching-Assisted Mobile Edge Computing
Authors: Ke Hongchang, Wang Hui, Sun Hongbin, Halvin Yang. China Communications (indexed in SCIE, CSCD), 2024, No. 4, pp. 88-103 (16 pages)
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long-term period, it is vital to predeploy the particular service cachings required by the request tasks at the MEC server. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server holding the service caching required by the tasks. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS learns a near-optimal offloading and migrating decision-making policy by centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that our proposed MBOMS converges well after training and outperforms five baseline algorithms.
Keywords: deep reinforcement learning, mobile edge computing, service caching, service migrating
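A long-term average weighted cost of the kind MBOMS minimizes typically combines delay and energy per task, with an extra delay when a task must migrate to an edge server that holds the required service caching. A hedged sketch with illustrative parameter values (not the paper's system model):

```python
def task_cost(cycles, data_bits, where, w_delay=0.6, w_energy=0.4,
              f_local=1e9, f_edge=8e9, rate=20e6, p_tx=0.5, k=1e-27,
              migrate_delay=0.0):
    """Weighted delay/energy cost of one task, executed locally or offloaded.
    All parameter values are illustrative, not the paper's settings."""
    if where == "local":
        delay = cycles / f_local
        energy = k * cycles * f_local ** 2  # classic CPU energy model
    else:  # offload to an edge server, optionally after a migration
        delay = data_bits / rate + cycles / f_edge + migrate_delay
        energy = p_tx * (data_bits / rate)  # radio energy while transmitting
    return w_delay * delay + w_energy * energy

local = task_cost(2e9, 1e6, "local")
edge = task_cost(2e9, 1e6, "edge")
migrated = task_cost(2e9, 1e6, "edge", migrate_delay=0.5)
print(min(("local", local), ("edge", edge), ("migrated", migrated),
          key=lambda t: t[1]))
```

A DRL agent would learn which of these options to pick per task from observed state, rather than evaluating a closed-form cost as the sketch does.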
5. Improving Prediction of Chronic Kidney Disease Using KNN Imputed SMOTE Features and TrioNet Model
Authors: Nazik Alturki, Abdulaziz Altamimi, Muhammad Umer, Oumaima Saidani, Amal Alshardan, Shtwai Alsubai, Marwan Omar, Imran Ashraf. Computer Modeling in Engineering & Sciences (indexed in SCIE, EI), 2024, No. 6, pp. 3513-3534 (22 pages)
Chronic kidney disease (CKD) is a major health concern today, requiring early and accurate diagnosis. Machine learning has emerged as a powerful tool for disease detection, and medical professionals are increasingly using ML classifier algorithms to identify CKD early. This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California, Irvine (UCI) Machine Learning Repository. The research introduces TrioNet, an ensemble model combining extreme gradient boosting, random forest, and extra tree classifiers, which excels in providing highly accurate predictions for CKD. Furthermore, a K-nearest neighbor (KNN) imputer is utilized to deal with missing values, while synthetic minority oversampling (SMOTE) is used for class-imbalance problems. To ascertain the efficacy of the proposed model, a comprehensive comparative analysis is conducted with various machine learning models. The proposed TrioNet using the KNN imputer and SMOTE outperformed other models with 98.97% accuracy for detecting CKD. This in-depth analysis demonstrates the model's capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
Keywords: precision medicine, chronic kidney disease detection, SMOTE, missing values, healthcare, KNN imputer, ensemble learning
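The SMOTE step used above generates synthetic minority samples by interpolating between a minority sample and one of its nearest minority neighbours. A minimal NumPy sketch of that interpolation rule (the real study presumably uses a library implementation):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate synthetic minority samples: each new point lies on the
    segment between a minority sample and one of its k nearest minority
    neighbours -- the core SMOTE interpolation rule."""
    rng = rng or np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        out.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.rand(10, 4)  # stand-in for the minority-class rows
synthetic = smote_oversample(X_min, n_new=5)
print(synthetic.shape)  # (5, 4)
```

Because each synthetic point is a convex combination of two minority samples, the new points stay inside the minority class's region of feature space.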
6. Arabic Dialect Identification in Social Media: A Comparative Study of Deep Learning and Transformer Approaches
Authors: Enas Yahya Alqulaity, Wael M. S. Yafooz, Abdullah Alourani, Ayman Jaradat. Intelligent Automation & Soft Computing, 2024, No. 5, pp. 907-928 (22 pages)
Arabic dialect identification is essential in Natural Language Processing (NLP) and forms a critical component of applications such as machine translation, sentiment analysis, and cross-language text generation. The difficulties in differentiating between Arabic dialects have garnered more attention in the last 10 years, particularly on social media. These difficulties result from the overlapping vocabulary of the dialects, the fluidity of online language use, and the challenge of telling apart closely related dialects. Managing dialects with limited resources and adjusting to ever-changing linguistic trends on social media platforms present additional challenges. A strong dialect recognition technique is essential for improving communication technology and cross-cultural understanding in light of the increase in social media usage. To distinguish Arabic dialects on social media, this research proposes a hybrid Deep Learning (DL) approach combining the Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) architectures. A new textual dataset focusing on three main dialects, i.e., Levantine, Saudi, and Egyptian, is also made available. Approximately 11,000 user-generated comments from Twitter are included in this dataset, which has been carefully annotated to guarantee accuracy in dialect classification. Transformers, DL models, and basic machine learning classifiers are used in several experiments to evaluate the performance of the proposed model, employing various methodologies including TF-IDF, word embeddings, and self-attention mechanisms. The proposed model fares better than the other models in terms of accuracy, obtaining a remarkable 96.54% according to the experimental results. This study advances the discipline by presenting a new dataset and putting forth a practical model for Arabic dialect identification. This model may prove crucial for future work in sociolinguistic studies and NLP.
Keywords: dialectal Arabic, transformers, deep learning, natural language processing systems
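Among the feature schemes compared above (TF-IDF, word embeddings, self-attention), TF-IDF is simple enough to sketch directly. This toy version uses generic English tokens rather than the Arabic dataset, and omits the smoothing variants common in library implementations:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Plain TF-IDF vectors (as sparse dicts) for tokenised documents:
    term frequency within the document times log inverse document
    frequency across the corpus."""
    df = Counter(term for doc in docs for term in set(doc))
    n = len(docs)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t])
                        for t in tf})
    return vectors

docs = [["going", "home", "now"],
        ["home", "sweet", "home"],
        ["now", "or", "never"]]
vecs = tf_idf(docs)
print(vecs[1]["home"] > 0, vecs[0].get("sweet", 0.0))  # True 0.0
```

Note that a term appearing in every document gets weight zero under this formula, which is exactly why TF-IDF downweights dialect-neutral function words.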
7. Potential Benefits and Obstacles of the Use of Internet of Things in Saudi Universities: Empirical Study
Authors: Najmah Adel Fallatah, Fahad Mahmoud Ghabban, Omair Ameerbakhsh, Ibrahim Alfadli, Wael Ghazy Alheadary, Salem Sulaiman Alatawi, Ashwaq Hasen Al-Shehri. Advances in Internet of Things, 2024, No. 1, pp. 1-20 (20 pages)
The Internet of Things (IoT), among all the technology revolutions, has been considered the next evolution of the internet and has become a far more popular area in the computing world. The IoT connects a huge number of things (devices) through the internet. Purpose: this paper aims to explore the concept of the IoT generally and outline its main definitions. The paper also aims to examine and discuss the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature across several databases to draw on recent studies and research related to the IoT. They then used a quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected using a questionnaire distributed online among academic staff, and a total of 150 participants completed the survey. Findings: the results of this study reveal twelve factors that affect the potential benefits of using IoT, such as reducing human errors and increasing business income and workers' productivity. They also show eighteen factors that affect the obstacles to IoT use, for example sensor cost, data privacy, and data security. These factors have the most influence on the use of IoT in Saudi universities.
Keywords: Internet of Things (IoT), M2M, factors, obstacles, potential benefits, universities
8. Automated Extraction and Analysis of CBC Test from Scanned Images
Authors: Iman S. Alansari. Journal of Software Engineering and Applications, 2024, No. 2, pp. 129-141 (13 pages)
Health care is an important part of human life and a right for everyone: one of the most basic human rights is to receive health care whenever it is needed. However, this is simply not an option for everyone, due to the social conditions in which some communities live, and not everyone has access to it. This paper aims to serve as a reference point and guide for users who are interested in monitoring their health, particularly their blood analysis, to be aware of their health condition in an easy way. This study introduces an algorithmic approach for extracting and analyzing Complete Blood Count (CBC) parameters from scanned images. The algorithm employs Optical Character Recognition (OCR) technology to process images containing tabular data, specifically targeting CBC parameter tables. Upon image processing, the algorithm extracts the data and identifies CBC parameters and their corresponding values. It evaluates the status (High, Low, or Normal) of each parameter and subsequently presents the evaluations and any potential diagnoses. The primary objective is to automate the extraction and evaluation of CBC parameters, aiding healthcare professionals in swiftly assessing blood analysis results. The algorithmic framework aims to streamline the interpretation of CBC tests, potentially improving efficiency and accuracy in clinical diagnostics.
Keywords: image processing, optical character recognition, Tesseract OCR, health care application
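The evaluation step (flagging each parameter High, Low, or Normal against a reference range) can be sketched as below. The OCR output format and the reference ranges are illustrative assumptions, not the paper's, and are certainly not clinical guidance:

```python
# Reference ranges are illustrative only, not clinical guidance.
REFERENCE = {
    "WBC": (4.0, 11.0),     # 10^9/L
    "HGB": (12.0, 17.5),    # g/dL
    "PLT": (150.0, 450.0),  # 10^9/L
}

def evaluate_cbc(ocr_lines):
    """Parse 'NAME VALUE' lines (as OCR might emit from a CBC table) and
    flag each recognised parameter as Low, Normal, or High."""
    results = {}
    for line in ocr_lines:
        parts = line.split()
        if len(parts) < 2 or parts[0].upper() not in REFERENCE:
            continue  # skip table headers and unrecognised rows
        name = parts[0].upper()
        try:
            value = float(parts[1])
        except ValueError:
            continue  # OCR noise in the value column
        lo, hi = REFERENCE[name]
        status = "Low" if value < lo else "High" if value > hi else "Normal"
        results[name] = (value, status)
    return results

scanned = ["Parameter Value", "WBC 13.2", "HGB 14.1", "PLT 98"]
print(evaluate_cbc(scanned))
```

In the full pipeline, an OCR engine such as Tesseract would produce the text lines that this parser consumes.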
9. Research and Application of Caideng Model Rendering Technology for Virtual Reality
Authors: Xuefeng Wang, Yadong Wu, Yan Luo, Dan Luo. Journal of Computer and Communications, 2024, No. 4, pp. 95-110 (16 pages)
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with it. To address the problem of not being able to directly render the lighting effect of Caideng (fancy lanterns) in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting effect of Caideng scenes to design an optimized lighting model algorithm that fuses the bidirectional transmittance distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment, and image optimization processing methods enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and provides a good immersive experience.
Keywords: virtual reality, Caideng model, lighting model, point light, rendering
10. Identification of blasting vibration and coal-rock fracturing microseismic signals (Citations: 9)
Authors: Zhang Xing-Li, Jia Rui-Sheng, Lu Xin-Ming, Peng Yan-Jun, Zhao Wei-Dong. Applied Geophysics (indexed in SCIE, CSCD), 2018, No. 2, pp. 280-289, 364 (11 pages)
A new method based on variational mode decomposition (VMD) is proposed to distinguish between coal-rock fracturing and blasting vibration microseismic signals. First, the signals are decomposed to obtain the variational mode components, which are ranked by frequency in descending order. Second, the energy of each mode component is extracted to form the energy eigenvector of the original signal, and the center-of-gravity coefficient of the energy distribution plane is calculated. Finally, the coal-rock fracturing and blasting vibration signals are classified using a decision tree stump. Experimental results suggest that VMD can effectively separate the signal components of coal-rock fracturing and blasting vibration signals by frequency. The contrast in the energy-distribution center coefficient after dimension reduction of the energy-distribution eigenvector accurately identifies the two types of microseismic signals. The method is verified by comparison with EMD and wavelet packet decomposition.
Keywords: coal-rock fracturing microseismic, blasting vibration, variational mode decomposition, signal identification
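The energy-eigenvector and centre-of-gravity steps can be sketched once the mode components exist. This sketch assumes the VMD decomposition has already been done (surrogate modes are used below) and uses a 1-D centroid over the mode index as a stand-in for the paper's energy-distribution-plane coefficient:

```python
import numpy as np

def energy_features(modes):
    """Normalised per-mode energy vector and its centre-of-gravity
    coefficient (energy-weighted mean mode index), computed on
    already-decomposed mode components."""
    energies = np.array([np.sum(m ** 2) for m in modes])
    p = energies / energies.sum()
    centroid = np.sum(np.arange(1, len(p) + 1) * p)
    return p, centroid

t = np.linspace(0, 1, 1000)
# Surrogate "modes": one signal dominated by the low-frequency component,
# one dominated by the high-frequency component.
low = [np.sin(2 * np.pi * 5 * t), 0.1 * np.sin(2 * np.pi * 80 * t)]
high = [0.1 * np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 80 * t)]
_, c_low = energy_features(low)
_, c_high = energy_features(high)
print(c_low < c_high)  # True
```

The separation of the two centroids is what a simple classifier such as a decision-tree stump can threshold on.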
11. Formal Modeling and Discovery of Multi-instance Business Processes: A Cloud Resource Management Case Study (Citations: 3)
Authors: Cong Liu. IEEE/CAA Journal of Automatica Sinica (indexed in SCIE, EI, CSCD), 2022, No. 12, pp. 2151-2160 (10 pages)
Process discovery, one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches have been invented in the past twenty years; however, most of them have difficulty handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. Formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of a discovered MBPM against the input event log by transforming the MBPM into a classical Petri net, such that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches in discovering process models with multi-instance sub-processes.
Keywords: cloud resource management process, multi-instance Petri nets (MPNs), multi-instance sub-processes, process discovery, quality evaluation
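The classical Petri net that an MBPM is transformed into follows the standard firing rule: a transition consumes tokens from its input places and produces tokens in its output places. A minimal sketch of that rule (plain Petri net with indistinguishable tokens, so without the MPNs' extension), on a hypothetical resource-allocation net:

```python
def enabled(marking, transition):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Fire a transition: consume input tokens, produce output tokens."""
    assert enabled(marking, transition), "transition not enabled"
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# Tiny illustrative net: allocate a resource, then release it.
allocate = {"in": {"start": 1, "resource": 1}, "out": {"busy": 1}}
release = {"in": {"busy": 1}, "out": {"done": 1, "resource": 1}}
m0 = {"start": 1, "resource": 1}
m1 = fire(m0, allocate)
m2 = fire(m1, release)
print(m2)  # {'start': 0, 'resource': 1, 'busy': 0, 'done': 1}
```

Fitness and precision metrics replay an event log against exactly this token game, which is why the MBPM-to-Petri-net transformation makes the existing metrics reusable.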
12. Evolution of the Internet AS-level topology: From nodes and edges to components (Citations: 2)
Authors: Xiao Liu, Jinfa Wang, Wei Jing, Hai Zhao. Chinese Physics B (indexed in SCIE, EI, CAS, CSCD), 2018, No. 12, pp. 200-210 (11 pages)
Studying the topology of infrastructure communication networks (e.g., the Internet) has become a means to understand and develop complex systems. Investigating the evolution of Internet network topology might therefore elucidate the disciplines governing the dynamic processes of complex systems, and may also contribute to a more intelligent communication network framework based on its autonomous behavior. In this paper, the Internet Autonomous Systems (AS) topology from 1998 to 2013 is studied by deconstructing and analysing topological entities on three different scales: nodes, edges, and three network components (single-edge component M1, binary component M2, and triangle component M3). The results indicate that: a) 95% of the Internet edges are internal edges (as opposed to external and boundary edges); b) the Internet network consists mainly of internal components, particularly M2 internal components; c) in most cases, a node initially connects with multiple nodes to form an M2 component to take part in the network; d) the Internet network evolves toward lower entropy. Furthermore, we find that, as a complex system, the evolution of the Internet exhibits a behavioral series similar to biological phenomena studied in connection with metabolism and replication. To the best of our knowledge, this is the first study of the evolution of the Internet network through analysis of the dynamic features of its nodes, edges, and components, and our study therefore represents an innovative approach to the subject.
Keywords: complex system, Internet AS-level topology, evolution, network component
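Counting the M3 (triangle) components on a small example can be sketched as follows, assuming the AS-level graph is given as an undirected edge list; the paper's internal/external/boundary edge classification is not reproduced here:

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangle motifs (the M3 component above) in an undirected
    graph given as an edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

# Toy AS graph: one triangle (1-2-3) plus a pendant node 4.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(count_triangles(edges))  # 1
```

The brute-force enumeration over node triples is O(n^3), fine for an illustration; real AS-level snapshots need neighbour-intersection counting instead.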
13. Segmentation and Classification of Stomach Abnormalities Using Deep Learning (Citations: 2)
Authors: Javeria Naz, Muhammad Attique Khan, Majed Alhaisoni, Oh-Young Song, Usman Tariq, Seifedine Kadry. Computers, Materials & Continua (indexed in SCIE, EI), 2021, No. 10, pp. 607-625 (19 pages)
An automated system is proposed for the detection and classification of gastrointestinal (GI) abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and all channels are then merged through mutual information and pixel-based techniques, yielding the segmented image. Texture and deep learning features are extracted for the classification task: the transfer learning (TL) approach is used to extract deep features, and the Local Binary Pattern (LBP) method to extract texture features. An entropy-based feature selection approach is then implemented to select the best features from both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets: a private dataset and KVASIR. The accuracy achieved is 99.8% on the private dataset and 86.4% on the KVASIR dataset, confirming that the proposed method is effective in detecting and classifying GI abnormalities and exceeds comparable methods.
Keywords: gastrointestinal tract, contrast stretching, segmentation, deep learning, feature selection
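The LBP texture features mentioned above assign each pixel a byte encoding which of its neighbours are at least as bright as the centre. A minimal 8-neighbour sketch (without the rotation-invariant or multi-radius variants the literature also uses):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel gets a
    byte whose bits record which neighbours are >= the centre value."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = image.shape
    centre = image[1:h - 1, 1:w - 1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [9, 9, 9]])
print(lbp_8(img))  # [[119]]
```

A histogram of these codes over an image patch is the texture vector that gets concatenated with the deep features in pipelines like the one above.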
14. Exploring and Modelling IoT Offloading Policies in Edge Cloud Environments (Citations: 2)
Authors: Jaber Almutairi, Mohammad Aldossary. Computer Systems Science & Engineering (indexed in SCIE, EI), 2022, No. 5, pp. 611-624 (14 pages)
The Internet of Things (IoT) has recently become a popular technology that plays increasingly important roles in every aspect of our daily life. For collaboration between IoT devices and edge cloud servers, edge server nodes provide computation and storage capabilities to IoT devices through the task offloading process, accelerating tasks with large resource requests. However, the quantitative impact of different offloading architectures and policies on the performance of IoT applications remains far from clear, especially with a dynamic and unpredictable range of connected physical and virtual devices. To this end, this work models the performance impact by exploiting the potential latency that arises within the edge cloud environment. It also investigates and compares the effects of loosely-coupled (LC) and orchestrator-enabled (OE) architectures. The LC scheme can smoothly address task redistribution with less time consumption in offloading scenarios with small scale and small task requests, whereas the OE scheme not only outperforms the LC scheme for large-scale task requests and offloading but also reduces the overall time by 28.19%. Finally, orchestration is important for achieving optimized offloading placement under different constraints.
Keywords: Internet of Things, application deployment, latency-sensitive, edge orchestrator
15. Software Defect Prediction Using Supervised Machine Learning and Ensemble Techniques: A Comparative Study (Citations: 5)
Authors: Abdullah Alsaeedi, Mohammad Zubair Khan. Journal of Software Engineering and Applications, 2019, No. 5, pp. 85-100 (16 pages)
An essential objective of software development is to locate and fix, ahead of schedule, the defects that could be expected under diverse circumstances. Many software development activities are performed by individuals, which may introduce different software bugs over the course of development, causing failures in the not-so-distant future. Thus, the prediction of software defects in the early stages has become a primary interest in the field of software engineering. Various software defect prediction (SDP) approaches that rely on software metrics have been proposed in the last two decades. Bagging, support vector machines (SVM), decision tree (DT), and random forest (RF) classifiers are known to perform well in predicting defects. This paper studies and compares these supervised machine learning and ensemble classifiers on 10 NASA datasets. The experimental results showed that, in the majority of cases, RF was the best-performing classifier compared to the others.
Keywords: machine learning, ensembles, prediction, software metrics, software defect
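The ensemble side of the comparison ultimately rests on combining base-classifier votes. A minimal hard-voting sketch with hypothetical predictions (not the paper's NASA-dataset results):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by hard voting -- the simplest
    combination rule behind ensemble schemes such as bagging."""
    combined = []
    for votes in zip(*predictions):  # one tuple of votes per module
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Hypothetical predictions of three base classifiers on four software
# modules; 1 = predicted defective, 0 = predicted clean.
svm_pred = [1, 0, 0, 1]
dt_pred = [1, 1, 0, 0]
rf_pred = [1, 0, 1, 1]
print(majority_vote([svm_pred, dt_pred, rf_pred]))  # [1, 0, 0, 1]
```

Random forest applies the same voting idea internally, over many decision trees trained on bootstrapped samples and random feature subsets.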
16. Investigating and Modelling of Task Offloading Latency in Edge-Cloud Environment (Citations: 1)
Authors: Jaber Almutairi, Mohammad Aldossary. Computers, Materials & Continua (indexed in SCIE, EI), 2021, No. 9, pp. 4143-4160 (18 pages)
Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks to release heavy computation and storage to resource-rich nodes such as edge computing and cloud computing. However, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents an Edge-Cloud system architecture that supports scheduling offloading tasks of IoT applications in order to minimize the enormous amount of data transmitted in the network. It also introduces offloading latency models to investigate the delay of different offloading scenarios/schemes and explores the effect of computational and communication demand on each one. A series of experiments conducted on EdgeCloudSim show that different offloading decisions within the Edge-Cloud system can lead to various service times due to the computational resources and communication types. Finally, this paper presents a comprehensive review of the current state-of-the-art research on task offloading issues in the Edge-Cloud environment.
Keywords: edge-cloud computing, resource management, latency models, scheduling, task offloading, internet of things
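An offloading latency model of the kind introduced above typically splits service time per target into a transmission delay and a processing delay. A sketch with illustrative bandwidth and processing figures (not the paper's EdgeCloudSim settings), showing the edge/cloud trade-off between network proximity and compute capacity:

```python
def service_time(data_mb, cycles_g, target,
                 wan_mbps=10.0, lan_mbps=500.0,
                 edge_gips=10.0, cloud_gips=100.0):
    """End-to-end service time (s) for one IoT task: upload delay plus
    processing delay. All figures are illustrative assumptions."""
    if target == "edge":
        return data_mb * 8 / lan_mbps + cycles_g / edge_gips
    if target == "cloud":
        return data_mb * 8 / wan_mbps + cycles_g / cloud_gips
    raise ValueError(target)

small = (0.5, 2.0)    # data (MB), compute demand (G instructions)
heavy = (0.5, 200.0)  # same payload, much heavier computation
print(service_time(*small, "edge") < service_time(*small, "cloud"),
      service_time(*heavy, "cloud") < service_time(*heavy, "edge"))  # True True
```

Under these assumed figures, the nearby edge wins for the light task (LAN upload dominates) while the cloud wins for the compute-heavy one, which is the qualitative effect the paper's latency models quantify.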
17. Deep Integration of Innovation and Entrepreneurship (InE) Education in Chinese University Classrooms (Citations: 1)
Authors: Yuli Zhao, Yin Zhang, Bin Zhang, Kening Gao, Hai Yu, Zhiliang Zhu. 计算机教育 (Computer Education), 2021, No. 12, pp. 25-33 (9 pages)
Recently, InE has been regarded as a popular education strategy in Chinese universities. However, problems have been exposed in the adoption of InE, for example in InE courses and competitions. The purpose of this paper is to provide a possible solution: organizing effective InE courses by integrating InE with Inter-Course-level Problem-Based Learning (ICPBL). A detailed case is demonstrated through an ICPBL elective course design that deeply integrates InE into the teaching, learning, and assessments. This paper contributes a new curriculum design for promoting InE education in practice in Chinese universities.
Keywords: engineering education; innovation and entrepreneurship; Inter-Course-level Problem-Based Learning (ICPBL)
18. Towards Intelligent Detection and Classification of Rice Plant Diseases Based on Leaf Image Dataset (Cited: 1)
Authors: Fawad Ali Shah, Habib Akbar, Abid Ali, Parveen Amna, Maha Aljohani, Eman A. Aldhahri, Harun Jamil. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 11, pp. 1385-1413 (29 pages)
The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to advance production and lower the risk of disease. In this era of rapid globalization, information technology use has grown, and a sensing system is needed to detect rice diseases using Artificial Intelligence (AI). AI is being adopted across medical and plant sciences to measure the accuracy of detection results while lowering the risk of disease. A Deep Neural Network (DNN) can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. Further, this paper adopts a mixed-method approach based on a Deep Convolutional Neural Network (Deep CNN) to increase the effectiveness of the proposed method. A Deep CNN is a class of deep-learning neural networks used for image recognition, where CNNs are popular and widely applied. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After image acquisition and preprocessing, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Keywords: rice plant disease detection; convolutional neural network; image classification; biological classification
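The Deep CNN pipeline described in this abstract rests on convolution, nonlinearity, and pooling applied to leaf images. The sketch below shows those three building blocks on a synthetic grayscale patch; the kernel values and image are illustrative only, not the paper's trained weights or dataset.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(x, 0.0)

def max_pool2(x: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2 (odd edges truncated)."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.random((32, 32))             # stand-in for a grayscale leaf patch
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])  # vertical-edge detector (e.g., lesion borders)

features = max_pool2(relu(conv2d(image, edge_kernel)))
print(features.shape)  # (15, 15)
```

A trained Deep CNN stacks many such layers with learned kernels and ends in a classifier head over the disease labels; this sketch only makes the per-layer mechanics concrete.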
19. Review of the Impact of Grassland Degradation on Ecosystem Service Value (Cited: 2)
Authors: Bo Xiao, Liangjun Zhao, Liping Zheng, Liang Tan, Fengling Zheng, A. Siya•Man Like. Open Journal of Applied Sciences (CAS), 2022, No. 7, pp. 1083-1097 (15 pages)
In recent years, grassland degradation has become one of China's most critical environmental problems due to the interaction of natural environmental factors and human causes. Based on a systematic analysis of the spatial characteristics of grassland degradation and the current state of research on its environmental drivers, this paper summarizes research methods on the impact of grassland degradation on natural ecological service functions and on social and economic value, in order to further understand these effects. The results show that, since the grassland ecosystem's service functions far exceed the biomass value it provides, grassland should be managed effectively from the design concept of ecological service functions so as to achieve sustainable development. Future work should focus on the comprehensive application of various ecosystem and service-value evaluation methods.
Keywords: grassland degradation; remote sensing monitoring; ecological problems; eco-service functions
20. Leveraging User-Generated Comments and Fused BiLSTM Models to Detect and Predict Issues with Mobile Apps (Cited: 2)
Authors: Wael M.S. Yafooz, Abdullah Alsaeedi. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 735-759 (25 pages)
In the last decade, technical advancements and faster Internet speeds have led to increasing numbers of mobile devices and users. Thus, all members of society, whether young or old, can use mobile apps. These apps ease our daily lives, and any customer who needs a service can access it easily, comfortably, and efficiently through mobile apps. In particular, Saudi Arabia depends greatly on digital services to assist residents and visitors. Such mobile apps are used to organize daily work schedules and services, particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such as slowness, conflicts, unreliability, or user-unfriendliness, and they comment on these issues on mobile app platforms through reviews of their experiences with these digital services. Scholars have made several attempts to solve such mobile issues by reporting bugs or non-functional requirements identified from user comments. However, solving such issues remains a great challenge, and the issues persist. Therefore, this study proposes a hybrid deep learning model to classify and predict the mobile app software issues encountered by millions of pilgrims during the Hajj and Umrah periods, from the user perspective. First, a dataset was constructed from user-generated comments on relevant mobile apps using natural language processing methods, including information extraction, an annotation process, and pre-processing steps, treating the task as a multi-class classification problem.
Then, several experiments were conducted with common machine learning classifiers and with Artificial Neural Network (ANN), Long Short-Term Memory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures to examine the performance of the proposed model. The proposed model achieved 96% in F1-score and accuracy, outperforming the above models.
Keywords: mobile app issues; Play Store; user comments; deep learning; LSTM; bidirectional LSTM
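Before any (Bi)LSTM can consume user comments, the pre-processing steps the abstract mentions must turn raw review text into fixed-length index sequences with multi-class labels. A minimal stdlib sketch of that stage follows; the example reviews, label set, and sequence length are invented for illustration, not drawn from the paper's dataset.

```python
import re
from collections import Counter

# Hypothetical multi-class labels matching the issue types named in the abstract.
LABELS = ["slowness", "conflict", "unreliability", "user-unfriendliness"]

def tokenize(text: str) -> list:
    """Lowercase and split a review into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def build_vocab(texts, min_count=1):
    """Map each frequent token to an integer id; 0 = padding, 1 = out-of-vocabulary."""
    counts = Counter(tok for t in texts for tok in tokenize(t))
    vocab = {"<pad>": 0, "<oov>": 1}
    for tok, c in counts.items():
        if c >= min_count:
            vocab[tok] = len(vocab)
    return vocab

def encode(text, vocab, max_len=8):
    """Convert a review to a fixed-length id sequence, truncating or zero-padding."""
    ids = [vocab.get(tok, 1) for tok in tokenize(text)][:max_len]
    return ids + [0] * (max_len - len(ids))

reviews = ["the app is very slow during hajj",
           "app crashes and conflicts with maps"]
vocab = build_vocab(reviews)
encoded = [encode(r, vocab) for r in reviews]
print(encoded)
```

The resulting index sequences would then feed an embedding layer ahead of the LSTM/BiLSTM stacks; that downstream model is deliberately out of scope for this sketch.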