The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared to related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
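The two headline overlap metrics above can be computed directly from binary segmentation masks; the following minimal NumPy sketch is illustrative and not the paper's evaluation code:

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard_index(pred, target):
    """Jaccard = |A∩B| / |A∪B| on binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```

The same definitions extend unchanged to 3D volumes, since both reduce to voxel counts.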
In recent years, the traffic congestion problem has become increasingly serious, and research on traffic system control has become a new hot spot. Studying the bifurcation characteristics of traffic flow systems and designing control schemes for unstable pivots can alleviate traffic congestion from a new perspective. In this work, the full-speed differential model considering the vehicle network environment is improved in order to adjust the traffic flow from the perspective of bifurcation control; the existence conditions of Hopf bifurcation and saddle-node bifurcation in the model are proved theoretically, and the mutation point of the transportation system's stability is found. For the unstable bifurcation point, a nonlinear system feedback controller is designed using Chebyshev polynomial approximation and a stochastic feedback control method. The advancement, postponement, and elimination of the Hopf bifurcation are achieved without changing the system equilibrium point, and the mutation behavior of the transportation system is controlled so as to alleviate traffic congestion. The changes in the stability of complex traffic systems are explained through bifurcation analysis, which can better capture the characteristics of the traffic flow. By adjusting the control parameters in the feedback controllers, the influence of the boundary conditions on the stability of the traffic system is adequately described, and the effects of the unstable focuses and saddle points on the system are suppressed to slow down the traffic flow. In addition, the unstable bifurcation points can be eliminated and the Hopf bifurcation can be controlled to advance, delay, or disappear, so as to control the stability behavior of the traffic system, which can help to alleviate traffic congestion and describe actual traffic phenomena as well.
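The Hopf-bifurcation condition discussed above — a complex-conjugate eigenvalue pair of the linearized system crossing the imaginary axis as a parameter varies — can be located numerically. The sketch below uses a hypothetical toy Jacobian with eigenvalues μ ± i, not the paper's full-speed differential model:

```python
import numpy as np

def hopf_crossing(jacobian, mus):
    """Scan a parameter range and return the value where a
    complex-conjugate eigenvalue pair crosses into the right
    half-plane (the Hopf condition for the linearization)."""
    prev = None
    for mu in mus:
        eig = np.linalg.eigvals(jacobian(mu))
        lead = max(eig, key=lambda z: z.real)  # rightmost eigenvalue
        if (prev is not None and prev.real < 0 <= lead.real
                and abs(lead.imag) > 1e-9):
            return mu
        prev = lead
    return None

# Toy Jacobian with eigenvalues mu ± i: Hopf bifurcation at mu = 0.
J = lambda mu: np.array([[mu, -1.0], [1.0, mu]])
```

Feedback control of the kind described shifts the detected crossing point without moving the equilibrium itself.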
The recent development of the Internet of Things (IoT) has resulted in the growth of IoT-based DDoS attacks. Botnet detection in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices. Anomaly detection models evaluate transmission patterns, network traffic, and device behaviour to detect deviations from usual activities. Machine learning (ML) techniques detect patterns signalling botnet activity, namely sudden traffic increases, unusual command-and-control patterns, or irregular device behaviour. In addition, intrusion detection systems (IDSs) and signature-based techniques are applied to recognize known malware signatures related to botnets. Various ML and deep learning (DL) techniques have been developed to detect botnet attacks in IoT systems. To overcome security issues in an IoT environment, this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification (GTODL-BADC) technique. The GTODL-BADC technique follows feature selection (FS) with optimal DL-based classification for accomplishing security in an IoT environment. For data preprocessing, the min-max data normalization approach is primarily used. The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets. Moreover, the multi-head attention-based long short-term memory (MHA-LSTM) technique was applied for botnet detection. Finally, the tree seed algorithm (TSA) was used to select the optimum hyperparameters for the MHA-LSTM method. The GTODL-BADC technique was experimentally validated on a benchmark dataset. The simulation results highlighted that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.
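The min-max normalization used in the preprocessing step scales each feature column to [0, 1]; a minimal NumPy sketch:

```python
import numpy as np

def min_max_normalize(X, eps=1e-12):
    """Scale each feature column of X to the [0, 1] range.

    eps guards against division by zero for constant columns.
    """
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.maximum(hi - lo, eps)
```

The per-column minimum and range would normally be fitted on training traffic only and reused at inference time, so new records are scaled consistently.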
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Due to the homogeneity of request tasks from one MWE over a long-term time period, it is vital to predeploy the particular service cachings required by the request tasks at the MEC server. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server with the service caching required by the tasks. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS can learn the near-optimal offloading and migrating decision-making policy by centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that our proposed MBOMS converges well after training and outperforms the other five baseline algorithms.
Chronic kidney disease (CKD) is a major health concern today, requiring early and accurate diagnosis. Machine learning has emerged as a powerful tool for disease detection, and medical professionals are increasingly using ML classifier algorithms to identify CKD early. This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California, Irvine (UCI) Machine Learning Repository. The research introduces TrioNet, an ensemble model combining extreme gradient boosting, random forest, and extra tree classifiers, which excels in providing highly accurate predictions for CKD. Furthermore, a K-nearest neighbor (KNN) imputer is utilized to deal with missing values, while synthetic minority oversampling (SMOTE) is used for class-imbalance problems. To ascertain the efficacy of the proposed model, a comprehensive comparative analysis is conducted with various machine learning models. The proposed TrioNet using the KNN imputer and SMOTE outperformed other models with 98.97% accuracy for detecting CKD. This in-depth analysis demonstrates the model's capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
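A TrioNet-style soft-voting ensemble with KNN imputation can be sketched with scikit-learn. This is an approximation under stated assumptions: GradientBoostingClassifier stands in for extreme gradient boosting (XGBoost), and the SMOTE resampling step from the paper is omitted for brevity:

```python
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, VotingClassifier)
from sklearn.impute import KNNImputer
from sklearn.pipeline import make_pipeline

def make_trionet_like():
    """Soft-voting ensemble of three tree-based classifiers,
    preceded by KNN imputation of missing values."""
    ensemble = VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
            ("et", ExtraTreesClassifier(n_estimators=50, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),  # XGBoost stand-in
        ],
        voting="soft")  # average predicted probabilities
    return make_pipeline(KNNImputer(n_neighbors=5), ensemble)
```

Soft voting averages the three models' class probabilities, which is typically more stable than hard majority voting when the base learners are well-calibrated.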
Arabic dialect identification is essential in Natural Language Processing (NLP) and forms a critical component of applications such as machine translation, sentiment analysis, and cross-language text generation. The difficulties in differentiating between Arabic dialects have garnered more attention in the last 10 years, particularly in social media. These difficulties result from the overlapping vocabulary of the dialects, the fluidity of online language use, and the challenge of telling apart dialects that are closely related. Managing dialects with limited resources and adjusting to the ever-changing linguistic trends on social media platforms present additional challenges. A strong dialect recognition technique is essential to improving communication technology and cross-cultural understanding in light of the increase in social media usage. To distinguish Arabic dialects on social media, this research suggests a hybrid Deep Learning (DL) approach. The model comprises Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) architectures. A new textual dataset that focuses on three main dialects, i.e., Levantine, Saudi, and Egyptian, is also made available. Approximately 11,000 user-generated comments from Twitter are included in this dataset, which has been painstakingly annotated to guarantee accuracy in dialect classification. Transformers, DL models, and basic machine learning classifiers are used in several experiments to evaluate the performance of the suggested model. Various methodologies, including TF-IDF, word embeddings, and self-attention mechanisms, are used. The suggested model fares better than the other models in terms of accuracy, obtaining a remarkable 96.54% according to the trial results. This study advances the discipline by presenting a new dataset and putting forth a practical model for Arabic dialect identification. This model may prove crucial for future work in sociolinguistic studies and NLP.
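As a rough point of comparison for such a system, a character n-gram TF-IDF baseline (one of the "basic machine learning classifiers" mentioned above, not the proposed BiLSTM model) can be sketched in a few lines of scikit-learn; the example texts and labels below are purely illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def make_dialect_baseline():
    """Character n-gram TF-IDF + logistic regression: a common
    baseline for distinguishing closely related dialects, since
    sub-word character patterns survive noisy social-media text."""
    return make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000))
```

Character n-grams ("char_wb" restricts them to within word boundaries) tend to outperform word features for dialects with heavy vocabulary overlap, which is exactly the difficulty the abstract describes.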
The Internet of Things (IoT), among all the technology revolutions, has been considered the next evolution of the internet, and it has become a far more popular area in the computing world. The IoT connects a huge number of things (devices) through the internet. Purpose: this paper aims to explore the concept of the Internet of Things (IoT) generally and outline the main definitions of IoT. The paper also aims to examine and discuss the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature, drawing on several databases to use recent studies and research related to the IoT. The researchers then used a quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected using a questionnaire distributed online among academic staff, and a total of 150 participants completed the survey. Findings: the results of this study reveal twelve factors that affect the potential benefits of using IoT, such as reducing human errors and increasing business income and workers' productivity. They also show eighteen factors that affect obstacles to IoT use, for example sensors' cost, data privacy, and data security. These factors have the most influence on using IoT in Saudi universities.
Health care is an important part of human life and is a right for everyone. One of the most basic human rights is to receive health care whenever it is needed. However, due to the social conditions in which some communities live, this is simply not an option for everyone. This paper aims to serve as a reference point and guide for users who are interested in monitoring their health, particularly their blood analysis, to be aware of their health condition in an easy way. This study introduces an algorithmic approach for extracting and analyzing Complete Blood Count (CBC) parameters from scanned images. The algorithm employs Optical Character Recognition (OCR) technology to process images containing tabular data, specifically targeting CBC parameter tables. Upon image processing, the algorithm extracts data and identifies CBC parameters and their corresponding values. It evaluates the status (High, Low, or Normal) of each parameter and subsequently presents evaluations and any potential diagnoses. The primary objective is to automate the extraction and evaluation of CBC parameters, aiding healthcare professionals in swiftly assessing blood analysis results. The algorithmic framework aims to streamline the interpretation of CBC tests, potentially improving efficiency and accuracy in clinical diagnostics.
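Once OCR has extracted parameter-value pairs, the High/Low/Normal evaluation step reduces to a range lookup. The sketch below uses illustrative reference ranges, not clinically validated values:

```python
# Illustrative reference ranges only -- NOT clinical values.
CBC_RANGES = {
    "WBC": (4.0, 11.0),    # white blood cells, 10^9/L
    "HGB": (12.0, 17.5),   # hemoglobin, g/dL
    "PLT": (150.0, 450.0), # platelets, 10^9/L
}

def evaluate_cbc(values):
    """Flag each extracted CBC parameter as Low, High, or Normal
    against its reference range."""
    report = {}
    for name, value in values.items():
        lo, hi = CBC_RANGES[name]
        report[name] = "Low" if value < lo else "High" if value > hi else "Normal"
    return report
```

In a full pipeline the `values` dictionary would be produced by the OCR stage; the lookup table would be extended to the complete CBC panel with laboratory-specific ranges.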
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR technology. In response to the problem of not being able to directly render the lighting effect of Caideng in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting effect of Caideng scenes to design an optimized lighting model algorithm that fuses the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment, and image optimization processing methods are used to enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and provides a good immersive experience.
A new method based on variational mode decomposition (VMD) is proposed to distinguish between coal-rock fracturing and blasting vibration microseismic signals. First, the signals are decomposed to obtain the variational mode components, which are ranked by frequency in descending order. Second, the energy of each mode component is extracted to form the eigenvector of the original signal, and the center-of-gravity coefficient of the energy distribution plane is calculated. Finally, the coal-rock fracturing and blasting vibration signals are classified using a decision tree stump. Experimental results suggest that VMD can effectively separate the signal components of coal-rock fracturing and blasting vibration signals by frequency. The contrast in the energy distribution center coefficient after dimension reduction of the energy distribution eigenvector accurately identifies the two types of microseismic signals. The method is verified by comparing it to empirical mode decomposition (EMD) and wavelet packet decomposition.
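The energy eigenvector and the centroid of the energy distribution described in the second step can be sketched as follows; this is a simplified one-dimensional analogue of the paper's center-of-gravity coefficient, applied to already-decomposed mode components:

```python
import numpy as np

def energy_center_of_gravity(modes):
    """Given frequency-ordered mode components (e.g., from VMD),
    return the normalized energy distribution and its centroid.

    The centroid (center of gravity over mode indices) summarizes
    whether signal energy concentrates in high- or low-frequency modes.
    """
    energies = np.array([np.sum(m ** 2) for m in modes])  # per-mode energy
    dist = energies / energies.sum()                      # normalize to 1
    centroid = np.sum(np.arange(len(modes)) * dist)       # weighted index
    return dist, centroid
```

A decision stump on such a scalar then amounts to a single threshold separating the two signal classes.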
Process discovery, as one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches were invented in the past twenty years; however, most of them have difficulties in handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. Formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of the discovered MBPMs against the input event logs by transforming an MBPM into a classical Petri net such that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches in discovering process models with multi-instance sub-processes.
Studying the topology of infrastructure communication networks (e.g., the Internet) has become a means to understand and develop complex systems. Therefore, investigating the evolution of Internet network topology might elucidate disciplines governing the dynamic process of complex systems. It may also contribute to a more intelligent communication network framework based on its autonomous behavior. In this paper, the Internet Autonomous Systems (ASes) topology from 1998 to 2013 was studied by deconstructing and analysing topological entities on three different scales (i.e., nodes, edges and 3 network components: single-edge component M1, binary component M2 and triangle component M3). The results indicate that: a) 95% of the Internet edges are internal edges (as opposed to external and boundary edges); b) the Internet network consists mainly of internal components, particularly M2 internal components; c) in most cases, a node initially connects with multiple nodes to form an M2 component to take part in the network; d) the Internet network evolves to lower entropy. Furthermore, we find that, as a complex system, the evolution of the Internet exhibits a behavioral series which is similar to biological phenomena concerned with metabolism and replication. To the best of our knowledge, this is the first study of the evolution of the Internet network through analysis of dynamic features of its nodes, edges and components, and therefore our study represents an innovative approach to the subject.
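Counting triangle (M3-style) components on a snapshot of the AS graph reduces to enumerating closed node triples. Below is a naive stdlib sketch, adequate for small graphs but not for Internet-scale data, where neighbor-intersection methods would be used instead:

```python
from itertools import combinations

def count_triangles(edges):
    """Count triangles in an undirected graph given as (u, v) pairs.

    Builds an adjacency map, then checks every node triple for
    mutual connectivity -- O(n^3), fine for illustration only.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return sum(1 for a, b, c in combinations(sorted(adj), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])
```

M1 (single-edge) and M2 (binary) components would be counted analogously from the same adjacency map.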
An automated system is proposed for the detection and classification of gastrointestinal (GI) abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach. A threshold is applied to each channel extracted from the original RGB image; all channels are then merged through mutual information and pixel-based techniques, producing the segmented image. Texture and deep learning features are extracted in the classification task. The transfer learning (TL) approach is used for the extraction of deep features, and the Local Binary Pattern (LBP) method is used for texture features. An entropy-based feature selection approach is then implemented to select the best features of both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets: a private dataset and KVASIR. The accuracy achieved is 99.8% for the private dataset and 86.4% for the KVASIR dataset. It can be confirmed that the proposed method is effective in detecting and classifying GI abnormalities and outperforms comparable methods.
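An entropy-based feature selection step like the one above can be approximated by ranking feature columns by the Shannon entropy of their value histograms and keeping the top-k. This is a simplified sketch, not the paper's exact criterion:

```python
import numpy as np

def entropy_rank(features, bins=10, k=2):
    """Rank feature columns by the Shannon entropy of their value
    histograms and return the indices of the k most informative
    (highest-entropy) columns."""
    scores = []
    for col in features.T:
        counts, _ = np.histogram(col, bins=bins)
        p = counts[counts > 0] / counts.sum()      # empirical distribution
        scores.append(-np.sum(p * np.log2(p)))     # Shannon entropy in bits
    return np.argsort(scores)[::-1][:k]
```

The selected indices from the deep-feature and texture vectors would then be concatenated (the "serial-based" fusion) before classification.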
The Internet of Things (IoT) has recently become a popular technology that can play increasingly important roles in every aspect of our daily life. For collaboration between IoT devices and edge cloud servers, edge server nodes provide computation and storage capabilities to IoT devices through the task offloading process, accelerating tasks with large resource requests. However, the quantitative impact of different offloading architectures and policies on IoT applications' performance remains far from clear, especially with a dynamic and unpredictable range of connected physical and virtual devices. To this end, this work models the performance impact by exploiting the potential latency that exhibits within the edge cloud environment. It also investigates and compares the effects of loosely-coupled (LC) and orchestrator-enabled (OE) architectures. The LC scheme can smoothly address task redistribution with less time consumption in offloading scenarios with small scale and small task requests. Moreover, the OE scheme not only outperforms the LC scheme when large-scale task requests and offloading occur but also reduces the overall time by 28.19%. Finally, orchestration is important for achieving optimized solutions for optimal offloading placement under different constraints.
An essential objective of software development is to locate and fix defects ahead of schedule that could be expected under diverse circumstances. Many software development activities are performed by individuals, which may lead to different software bugs occurring over the development, causing disappointments in the not-so-distant future. Thus, the prediction of software defects in the early stages has become a primary interest in the field of software engineering. Various software defect prediction (SDP) approaches that rely on software metrics have been proposed in the last two decades. Bagging, support vector machine (SVM), decision tree (DT), and random forest (RF) classifiers are known to perform well in predicting defects. This paper studies and compares these supervised machine learning and ensemble classifiers on 10 NASA datasets. The experimental results showed that, in the majority of cases, RF was the best performing classifier compared to the others.
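The comparison protocol can be reproduced in outline with scikit-learn's cross-validation utilities; the sketch below runs on any labeled feature matrix, not the NASA data specifically, and the hyperparameters are illustrative:

```python
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_sdp_classifiers(X, y, cv=3):
    """Mean cross-validated accuracy for the four classifier
    families the study compares."""
    models = {
        "RF": RandomForestClassifier(n_estimators=50, random_state=0),
        "DT": DecisionTreeClassifier(random_state=0),
        "SVM": SVC(),
        "Bagging": BaggingClassifier(random_state=0),
    }
    return {name: cross_val_score(m, X, y, cv=cv).mean()
            for name, m in models.items()}
```

On imbalanced defect datasets, accuracy alone can be misleading; adding `scoring="f1"` to `cross_val_score` would give a fairer picture.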
Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks to release heavy computation and storage to resource-rich nodes such as Edge Computing and Cloud Computing. However, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents an Edge-Cloud system architecture that supports scheduling offloading tasks of IoT applications in order to minimize the enormous amount of data transmitted in the network. It also introduces offloading latency models to investigate the delay of different offloading scenarios/schemes and explores the effect of computational and communication demand on each one. A series of experiments conducted on EdgeCloudSim show that different offloading decisions within the Edge-Cloud system can lead to various service times due to the computational resources and communication types. Finally, this paper presents a comprehensive review of the current state-of-the-art research on task offloading issues in the Edge-Cloud environment.
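A basic offloading latency model of the kind investigated above weighs local execution time against transmission delay plus remote execution time. The parameter names below are assumptions for illustration, not the paper's notation:

```python
def offload_latency(data_bits, cycles, f_local, f_edge, bandwidth):
    """Compare local execution against offloading to an edge node.

    data_bits : task input size to transmit (bits)
    cycles    : CPU cycles the task requires
    f_local   : local CPU frequency (cycles/s)
    f_edge    : edge-server CPU frequency (cycles/s)
    bandwidth : uplink rate (bits/s)

    Returns the cheaper option and its service time in seconds.
    """
    t_local = cycles / f_local
    t_edge = data_bits / bandwidth + cycles / f_edge  # transmit + compute
    return ("edge", t_edge) if t_edge < t_local else ("local", t_local)
```

The model makes plain the trade-off the experiments explore: compute-heavy, data-light tasks favor offloading, while data-heavy tasks over slow links favor local execution.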
Recently, InE has been regarded as a popular education strategy in Chinese universities. However, problems have been exposed in the adoption of InE, for example, in InE courses and competitions. The purpose of this paper is to provide a possible solution to these problems, which is to organize effective InE courses by integrating InE with Inter-Course-level Problem-Based Learning (ICPBL). A detailed case is demonstrated through an ICPBL elective course design with deep integration of InE in the teaching, learning, and assessments. This paper contributes a new curriculum design for promoting InE education in practice in Chinese universities.
The detection of rice leaf disease is significant because, as an agricultural and rice-exporting country, Pakistan needs to advance in production and lower the risk of diseases. In this rapid globalization era, information technology has increased. A sensing system is mandatory to detect rice diseases using Artificial Intelligence (AI). It is being adopted in all medical and plant science fields to access and measure the accuracy of results and detection while lowering the risk of diseases. A Deep Neural Network (DNN) is a novel technique that will help detect disease present on a rice leaf because the DNN is also considered a state-of-the-art solution in image detection using sensing nodes. Further in this paper, the adoption of the mixed-method approach of a Deep Convolutional Neural Network (Deep CNN) has assisted the research in increasing the effectiveness of the proposed method. The Deep CNN is used for image recognition and is a class of deep-learning neural networks; CNNs are popular and mostly used in the field of image recognition. A dataset of images with three main leaf diseases is selected for training and testing the proposed model. After the image acquisition and preprocessing process, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
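The building blocks of such a CNN — convolution, ReLU activation, and max pooling — can be illustrated in plain NumPy. This is a didactic sketch of the operations, not the trained Deep CNN itself:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation, the core convolutional-layer op."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling, halving each spatial dimension."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

Stacking conv → ReLU → pool several times, then flattening into a small classifier head, yields the three-way disease classifier the abstract describes.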
In recent years, grassland degradation has become one of China's most critical environmental problems due to the interaction of natural environmental factors and human causes. Based on a systematic analysis of the spatial characteristics of grassland degradation and the current research status of its environmental drivers, this paper summarizes the research methods on the impact of grassland degradation on natural ecological service functions and social and economic value, in order to further understand the natural ecological service function of grassland degradation and its impact on social and economic benefits. The results show that, since the value of grassland ecosystem services is much larger than the biomass value grasslands provide, we should focus on the effective management of grassland from the design concept of ecological service function to achieve the sustainable development of grassland. Future work should also address the comprehensive application of various ecosystem and service value evaluation methods.
In the last decade, technical advancements and faster Internet speeds have also led to an increasing number of mobile devices and users. Thus, all contributors to society, whether young or old members, can use these mobile apps. The use of these apps eases our daily lives, and all customers who need any type of service can access it easily, comfortably, and efficiently through mobile apps. Particularly, Saudi Arabia greatly depends on digital services to assist people and visitors. Such mobile devices are used in organizing daily work schedules and services, particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such as slowness, conflict, unreliability, or user-unfriendliness. Pilgrims comment on these issues on mobile app platforms through reviews of their experiences with these digital services. Scholars have made several attempts to solve such mobile issues by reporting bugs or non-functional requirements by utilizing user comments. However, solving such issues is a great challenge, and the issues still exist. Therefore, this study aims to propose a hybrid deep learning model to classify and predict mobile app software issues encountered by millions of pilgrims during the Hajj and Umrah periods from the user perspective. Firstly, a dataset was constructed using user-generated comments from relevant mobile apps using natural language processing methods, including information extraction, the annotation process, and pre-processing steps, considering a multi-class classification problem. Then, several experiments were conducted using common machine learning classifiers, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures, to examine the performance of the proposed model. Results show 96% F1-score and accuracy, and the proposed model outperformed the mentioned models.
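For a multi-class problem like this, the reported F1-score is typically macro-averaged over the issue classes; a stdlib sketch of that computation:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 computed from true/false
    positives and negatives, then averaged with equal class weight."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Macro averaging treats rare issue categories the same as frequent ones, which matters when user-review classes are imbalanced.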
Funding: Supported by the Scientific Research Deanship at the University of Ha'il, Saudi Arabia, through project number RG-23137.
Abstract: The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we developed and evaluated advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values and indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing the other model configurations. The 3D model showed strong boundary delineation but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared with the related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
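For reference, the overlap and volume metrics named above can be sketched on flat binary masks as follows (a minimal illustration, not the evaluation code used in the study):

```python
def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A|+|B|) on binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

def jaccard(pred, truth):
    """Jaccard index: |A∩B| / |A∪B| on binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def ravd(pred, truth):
    """Relative absolute volume difference: |V_pred - V_truth| / V_truth."""
    vp, vt = sum(pred), sum(truth)
    return abs(vp - vt) / vt
```

Note that Dice and Jaccard are related by Dice = 2J/(1+J), which is why the two metrics usually rank models the same way.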
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 72361031) and the Gansu Province University Youth Doctoral Support Project (Grant No. 2023QB-049).
Abstract: In recent years, the traffic congestion problem has become more and more serious, and research on traffic system control has become a new hot spot. Studying the bifurcation characteristics of traffic flow systems and designing control schemes for unstable points can alleviate the traffic congestion problem from a new perspective. In this work, the full-speed differential model considering the vehicle network environment is improved in order to adjust the traffic flow from the perspective of bifurcation control; the existence conditions of Hopf bifurcation and saddle-node bifurcation in the model are proved theoretically, and the mutation point for the stability of the transportation system is found. For the unstable bifurcation point, a nonlinear system feedback controller is designed using Chebyshev polynomial approximation and a stochastic feedback control method. The advancement, postponement, and elimination of the Hopf bifurcation are achieved without changing the system equilibrium point, and the mutation behavior of the transportation system is controlled so as to alleviate traffic congestion. The changes in the stability of complex traffic systems are explained through bifurcation analysis, which better captures the characteristics of the traffic flow. By adjusting the control parameters in the feedback controllers, the influence of the boundary conditions on the stability of the traffic system is adequately described, and the effects of the unstable foci and saddle points on the system are suppressed to stabilize the traffic flow. In addition, the unstable bifurcation points can be eliminated, and the Hopf bifurcation can be advanced, delayed, or removed, so as to control the stability behavior of the traffic system, which can help to alleviate traffic congestion and describe actual traffic phenomena.
Abstract: The recent development of the Internet of Things (IoT) has resulted in the growth of IoT-based DDoS attacks. Botnet detection in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets among interconnected devices. Anomaly detection models evaluate transmission patterns, network traffic, and device behaviour to detect deviations from usual activity. Machine learning (ML) techniques detect patterns signalling botnet activity, namely sudden traffic increases, unusual command-and-control patterns, or irregular device behaviour. In addition, intrusion detection systems (IDSs) and signature-based techniques are applied to recognize known malware signatures related to botnets. Various ML and deep learning (DL) techniques have been developed to detect botnet attacks in IoT systems. To overcome security issues in an IoT environment, this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification (GTODL-BADC) technique. The GTODL-BADC technique follows feature selection (FS) with optimal DL-based classification to accomplish security in an IoT environment. For data preprocessing, min-max data normalization is primarily used. The GTODL-BADC technique uses the GTO algorithm to select optimal feature subsets. Moreover, a multi-head attention-based long short-term memory (MHA-LSTM) technique is applied for botnet detection. Finally, the tree seed algorithm (TSA) is used to select the optimal hyperparameters for the MHA-LSTM method. The GTODL-BADC technique was experimentally validated on a benchmark dataset. The simulation results highlight that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.
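The min-max preprocessing step mentioned above is straightforward; a minimal sketch (the function name is ours, and the handling of a constant column is one possible convention):

```python
def min_max_normalize(column):
    """Scale a numeric feature column linearly to the [0, 1] range."""
    lo, hi = min(column), max(column)
    if hi == lo:                      # constant column: map everything to 0
        return [0.0] * len(column)
    return [(x - lo) / (hi - lo) for x in column]
```

In practice the minimum and maximum are taken from the training split only and reused on the test split, to avoid leaking test statistics into training.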
Funding: Supported by the Jilin Provincial Science and Technology Department Natural Science Foundation of China (20210101415JC) and the Jilin Provincial Science and Technology Department Free Exploration Research Project of China (YDZJ202201ZYTS642).
Abstract: Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Due to the homogeneity of request tasks from one MWE over a long-term period, it is vital to predeploy at the MEC server the particular service cachings required by the request tasks. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server with the service caching required by the tasks. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS can learn a near-optimal offloading and migrating decision-making policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that the proposed MBOMS converges well after training and outperforms five baseline algorithms.
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2024R333, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Chronic kidney disease (CKD) is a major health concern today, requiring early and accurate diagnosis. Machine learning has emerged as a powerful tool for disease detection, and medical professionals are increasingly using ML classifier algorithms to identify CKD early. This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California, Irvine (UCI) Machine Learning Repository. The research introduces TrioNet, an ensemble model combining extreme gradient boosting, random forest, and extra trees classifiers, which excels at providing highly accurate predictions for CKD. Furthermore, a K-nearest neighbor (KNN) imputer is utilized to deal with missing values, while synthetic minority oversampling (SMOTE) is used for the class-imbalance problem. To ascertain the efficacy of the proposed model, a comprehensive comparative analysis is conducted with various machine learning models. The proposed TrioNet using the KNN imputer and SMOTE outperformed the other models with 98.97% accuracy in detecting CKD. This in-depth analysis demonstrates the model's capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
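The KNN imputation idea can be sketched as follows: for a row with a missing entry, find the k complete rows nearest in the observed features and average their values for the missing feature. This is a simplified stand-in for a library imputer such as scikit-learn's KNNImputer, not the study's code:

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that feature over the k nearest
    complete rows, measured by Euclidean distance on observed features."""
    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        def dist(c):
            return math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(r, c) if a is not None))
        nearest = sorted(complete, key=dist)[:k]
        filled.append([v if v is not None
                       else sum(c[i] for c in nearest) / len(nearest)
                       for i, v in enumerate(r)])
    return filled
```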
Funding: The authors thank the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support (QU-APC-2024-9/1).
Abstract: Arabic dialect identification is essential in Natural Language Processing (NLP) and forms a critical component of applications such as machine translation, sentiment analysis, and cross-language text generation. The difficulties in differentiating between Arabic dialects have garnered more attention in the last 10 years, particularly in social media. These difficulties result from the overlapping vocabulary of the dialects, the fluidity of online language use, and the challenge of telling apart closely related dialects. Managing dialects with limited resources and adjusting to the ever-changing linguistic trends on social media platforms present additional challenges. A strong dialect recognition technique is essential to improving communication technology and cross-cultural understanding in light of the increase in social media usage. To distinguish Arabic dialects on social media, this research proposes a hybrid Deep Learning (DL) approach. The model is composed of Long Short-Term Memory (LSTM) and Bidirectional Long Short-Term Memory (BiLSTM) architectures. A new textual dataset that focuses on three main dialects, i.e., Levantine, Saudi, and Egyptian, is also released. Approximately 11,000 user-generated comments from Twitter are included in this dataset, which has been painstakingly annotated to guarantee accuracy in dialect classification. Transformers, DL models, and basic machine learning classifiers are used in several experiments to evaluate the performance of the proposed model. Various methodologies, including TF-IDF, word embeddings, and self-attention mechanisms, are used. The proposed model outperforms the other models in terms of accuracy, obtaining a remarkable 96.54%. This study advances the discipline by presenting a new dataset and a practical model for Arabic dialect identification, which may prove useful for future work in sociolinguistic studies and NLP.
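The TF-IDF representation named above weighs a term by its frequency within a document, discounted by how many documents contain it. A minimal sketch on pre-tokenized documents (the exact weighting variant used in the paper is not stated; this uses raw log IDF without smoothing):

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF vectors for tokenized documents: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    vectors = []
    for d in docs:
        tf = Counter(d)
        vectors.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return vectors
```

A term appearing in every document (like "a" below) gets weight zero, which is the intended behavior: it carries no dialect-discriminating signal.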
Abstract: The Internet of Things (IoT), among all technology revolutions, has been considered the next evolution of the Internet. IoT has become a far more popular area in the computing world, connecting a huge number of things (devices) through the Internet. Purpose: this paper aims to explore the concept of the Internet of Things (IoT) generally and outline the main definitions of IoT. The paper also aims to examine and discuss the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature across several databases to draw on recent studies and research related to the IoT. They then used a quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected using a questionnaire distributed online among academic staff, and a total of 150 participants completed the survey. Findings: the results of this study reveal twelve factors that affect the potential benefits of using IoT, such as reducing human error and increasing business income and worker productivity. They also show eighteen factors that affect the obstacles to IoT use, for example sensor cost, data privacy, and data security. These factors have the most influence on IoT use in Saudi universities.
Abstract: Health care is an important part of human life and a right for everyone. One of the most basic human rights is to receive health care whenever it is needed. However, this is simply not an option for everyone due to the social conditions in which some communities live, and not everyone has access to it. This paper aims to serve as a reference point and guide for users who are interested in monitoring their health, particularly their blood analysis, to be aware of their health condition in an easy way. This study introduces an algorithmic approach for extracting and analyzing Complete Blood Count (CBC) parameters from scanned images. The algorithm employs Optical Character Recognition (OCR) technology to process images containing tabular data, specifically targeting CBC parameter tables. Upon image processing, the algorithm extracts the data and identifies CBC parameters and their corresponding values. It evaluates the status (High, Low, or Normal) of each parameter and subsequently presents the evaluations and any potential diagnoses. The primary objective is to automate the extraction and evaluation of CBC parameters, aiding healthcare professionals in swiftly assessing blood analysis results. The algorithmic framework aims to streamline the interpretation of CBC tests, potentially improving efficiency and accuracy in clinical diagnostics.
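Once OCR has extracted parameter names and values, the High/Low/Normal evaluation step reduces to comparing each value against a reference range. A minimal sketch (the reference ranges below are illustrative placeholders only; real ranges vary by laboratory, age, and sex):

```python
# Hypothetical reference ranges for a few CBC parameters
# (illustrative only; actual lab ranges differ).
REFERENCE_RANGES = {
    "WBC": (4.0, 11.0),     # 10^9/L
    "HGB": (12.0, 17.5),    # g/dL
    "PLT": (150.0, 450.0),  # 10^9/L
}

def flag_cbc(results):
    """Classify each extracted CBC value as Low, Normal, or High."""
    flags = {}
    for name, value in results.items():
        lo, hi = REFERENCE_RANGES[name]
        flags[name] = "Low" if value < lo else "High" if value > hi else "Normal"
    return flags
```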
Abstract: With the development of virtual reality (VR) technology, more and more industries are beginning to integrate VR technology. To address the problem of being unable to directly render the lighting effects of Caideng (festive lanterns) in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting effects of Caideng scenes to design an optimized lighting-model algorithm that incorporates the bidirectional transmittance distribution function (BTDF). This algorithm can efficiently render the lighting effects of Caideng models in a virtual environment, and image optimization processing methods enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and provides a good immersive experience.
Funding: This work was supported by the National Key Research and Development Program of China (No. 2016YFC0801406), the Shandong Key Research and Development Program (Nos. 2016ZDJS02A05 and 2018GGX109013), and the Shandong Provincial Natural Science Foundation (No. ZR2018MEE008).
Abstract: A new method based on variational mode decomposition (VMD) is proposed to distinguish between coal-rock fracturing and blasting vibration microseismic signals. First, the signals are decomposed to obtain the variational mode components, which are ranked by frequency in descending order. Second, the energy of each mode component is extracted to form the eigenvector of the original signal's energy, and the center-of-gravity coefficient of the energy distribution plane is calculated. Finally, the coal-rock fracturing and blasting vibration signals are classified using a decision stump. Experimental results suggest that VMD can effectively separate the signal components of coal-rock fracturing and blasting vibration signals by frequency. The contrast in the energy-distribution center coefficient after dimension reduction of the energy-distribution eigenvector accurately identifies the two types of microseismic signals. The method is verified by comparison with EMD and wavelet packet decomposition.
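The energy eigenvector step can be sketched as follows. Each decomposed mode contributes its signal energy, and a center-of-gravity index summarizes where that energy concentrates across modes; note the exact coefficient formula is not given in the abstract, so the weighted-mean definition below is our illustrative assumption:

```python
def mode_energies(modes):
    """Energy of each decomposed mode component (sum of squared samples)."""
    return [sum(x * x for x in m) for m in modes]

def energy_center_of_gravity(energies):
    """Illustrative center-of-gravity index of the energy distribution:
    mean mode index weighted by each mode's share of total energy."""
    total = sum(energies)
    return sum(i * e for i, e in enumerate(energies)) / total
```

A signal whose energy sits in the high-frequency modes yields a different center value than one concentrated in low-frequency modes, which is the kind of contrast the classifier exploits.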
Funding: Supported by the National Natural Science Foundation of China (61902222), the Taishan Scholars Program of Shandong Province (tsqn201909109), the Natural Science Excellent Youth Foundation of Shandong Province (ZR2021YQ45), and the Youth Innovation Science and Technology Team Foundation of Shandong Higher School (2021KJ031).
Abstract: Process discovery, one of the most challenging process analysis techniques, aims to uncover business process models from event logs. Many process discovery approaches have been invented in the past twenty years; however, most of them have difficulty handling multi-instance sub-processes. To address this challenge, we first introduce a multi-instance business process model (MBPM) to support the modeling of processes with multiple sub-process instantiations. The formal semantics of MBPMs are precisely defined using multi-instance Petri nets (MPNs), an extension of Petri nets with distinguishable tokens. Then, a novel process discovery technique is developed to support the discovery of MBPMs from event logs with sub-process multi-instantiation information. In addition, we propose to measure the quality of a discovered MBPM against the input event log by transforming the MBPM into a classical Petri net, so that existing quality metrics, e.g., fitness and precision, can be used. The proposed discovery approach is implemented as plugins in the ProM toolkit. Based on a cloud resource management case study, we compare our approach with state-of-the-art process discovery techniques. The results demonstrate that our approach outperforms existing approaches in discovering process models with multi-instance sub-processes.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61671142).
Abstract: Studying the topology of infrastructure communication networks (e.g., the Internet) has become a means to understand and develop complex systems. Therefore, investigating the evolution of Internet network topology might elucidate the disciplines governing the dynamic processes of complex systems. It may also contribute to a more intelligent communication network framework based on its autonomous behavior. In this paper, the Internet Autonomous Systems (ASes) topology from 1998 to 2013 is studied by deconstructing and analysing topological entities on three different scales (i.e., nodes, edges, and three network components: the single-edge component M1, the binary component M2, and the triangle component M3). The results indicate that: (a) 95% of the Internet's edges are internal edges (as opposed to external and boundary edges); (b) the Internet network consists mainly of internal components, particularly M2 internal components; (c) in most cases, a node initially connects with multiple nodes to form an M2 component to join the network; and (d) the Internet network evolves toward lower entropy. Furthermore, we find that, as a complex system, the evolution of the Internet exhibits a behavioral series similar to biological phenomena studied in metabolism and replication. To the best of our knowledge, this is the first study of the evolution of the Internet network through analysis of the dynamic features of its nodes, edges, and components, and it therefore represents an innovative approach to the subject.
Funding: This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D Program (Project No. P0016038), and in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: An automated system is proposed for the detection and classification of GI abnormalities. The proposed method operates as a two-stage pipeline: (a) segmentation of the bleeding infection region and (b) classification of GI abnormalities by deep learning. First, the bleeding region is segmented using a hybrid approach: a threshold is applied to each channel extracted from the original RGB image, and the channels are then merged through mutual information and pixel-based techniques, yielding the segmented image. Texture and deep learning features are extracted for the classification task. A transfer learning (TL) approach is used for the extraction of deep features, and the Local Binary Pattern (LBP) method is used for texture features. An entropy-based feature selection approach is then implemented to select the best features from both the deep learning and texture vectors. The selected optimal features are combined with a serial-based technique, and the resulting vector is fed to an ensemble learning classifier. The experimental process is evaluated on two datasets: a private dataset and KVASIR. The accuracy achieved is 99.8 percent on the private dataset and 86.4 percent on the KVASIR dataset. This confirms that the proposed method is effective in detecting and classifying GI abnormalities and exceeds the compared methods.
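One simple form of entropy-based feature selection scores each (discretized) feature by its Shannon entropy and keeps the most informative ones. The abstract does not specify the exact criterion, so the ranking below is an illustrative assumption; the function names are ours:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (bits) of a discrete feature's value distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_top_features(feature_columns, k):
    """Keep the k feature indices whose columns carry the most entropy."""
    ranked = sorted(range(len(feature_columns)),
                    key=lambda i: shannon_entropy(feature_columns[i]),
                    reverse=True)
    return sorted(ranked[:k])
```

A constant column has entropy 0 and is dropped first, which matches the intuition that it cannot help discriminate between classes.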
Abstract: The Internet of Things (IoT) has recently become a popular technology that plays increasingly important roles in every aspect of our daily life. For collaboration between IoT devices and edge cloud servers, edge server nodes provide computation and storage capabilities to IoT devices through the task-offloading process, accelerating tasks with large resource requests. However, the quantitative impact of different offloading architectures and policies on the performance of IoT applications remains far from clear, especially with a dynamic and unpredictable range of connected physical and virtual devices. To this end, this work models the performance impact by exploiting the potential latency that arises within the edge cloud environment. It also investigates and compares the effects of loosely-coupled (LC) and orchestrator-enabled (OE) architectures. The LC scheme can smoothly handle task redistribution with less time consumption in offloading scenarios with small scale and small task requests. The OE scheme not only outperforms the LC scheme for large-scale task requests and offloading but also reduces the overall time by 28.19%. Finally, orchestration is important for achieving optimal offloading placement under different constraints.
Abstract: An essential objective of software development is to locate and fix defects ahead of schedule under diverse circumstances. Many software development activities are performed by individuals, which may lead to different software bugs occurring over the course of development, causing disappointments in the not-so-distant future. Thus, the prediction of software defects at the earliest stages has become a primary interest in the field of software engineering. Various software defect prediction (SDP) approaches that rely on software metrics have been proposed in the last two decades. Bagging, support vector machines (SVM), decision tree (DT), and random forest (RF) classifiers are known to perform well in predicting defects. This paper studies and compares these supervised machine learning and ensemble classifiers on 10 NASA datasets. The experimental results show that, in the majority of cases, RF was the best-performing classifier.
Funding: The authors would like to thank the Deanship of Scientific Research, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia, for supporting this work.
Abstract: Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks to release heavy computation and storage to resource-rich nodes such as edge computing and cloud computing. However, different service architectures and offloading strategies have different impacts on the service-time performance of IoT applications. Therefore, this paper presents an Edge-Cloud system architecture that supports scheduling the offloading tasks of IoT applications in order to minimize the enormous amount of data transmitted in the network. It also introduces offloading latency models to investigate the delay of different offloading scenarios/schemes and explores the effect of computational and communication demand on each one. A series of experiments conducted on EdgeCloudSim show that different offloading decisions within the Edge-Cloud system can lead to various service times due to the computational resources and communication types. Finally, the paper presents a comprehensive review of the current state-of-the-art research on task-offloading issues in the Edge-Cloud environment.
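A basic offloading latency model of the kind described decomposes service time into transmission, propagation, and processing delay. The sketch below (our own simplification, not the paper's model) shows how such a model lets a scheduler compare an edge node against a cloud node for a given task:

```python
def service_time(task_bits, bandwidth_bps, task_cycles, cpu_hz,
                 propagation_s=0.0):
    """Service time of an offloaded task: transmission delay (bits/bandwidth)
    + propagation delay + processing delay (required cycles / CPU speed)."""
    return task_bits / bandwidth_bps + propagation_s + task_cycles / cpu_hz

def best_node(task_bits, task_cycles, nodes):
    """Pick the (name, bandwidth, cpu, propagation) node with minimal
    estimated service time, e.g. edge vs. cloud."""
    return min(nodes, key=lambda n: service_time(task_bits, n[1],
                                                 task_cycles, n[2], n[3]))[0]

# Illustrative numbers: a close edge server with a modest CPU versus a
# distant cloud server with a fast CPU.
edge = ("edge", 50e6, 2e9, 0.005)
cloud = ("cloud", 10e6, 10e9, 0.05)
```

For a compute-heavy task (many cycles, few bits) the cloud wins despite its slower link, which is exactly the trade-off such latency models are built to expose.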
Funding: This research was financially supported in part by the PBL Research and Application Project of Northeastern University (Grant Nos. PBL-JX2021yb029 and PBL-JX2021yb027).
Abstract: Recently, InE has been regarded as a popular education strategy in Chinese universities. However, problems have been exposed in its adoption, for example in InE courses and competitions. The purpose of this paper is to provide a possible solution to these problems: organizing effective InE courses by integrating InE with Inter-Course-level Problem-Based Learning (ICPBL). A detailed case is demonstrated through an ICPBL elective course design with deep integration of InE into the teaching, learning, and assessment. This paper contributes a new curriculum design for promoting InE education in practice in Chinese universities.
Funding: Funded by the University of Haripur, KP, Pakistan, Researchers Supporting Project number PKURFL2324L33.
Abstract: The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to advance production and lower the risk of disease. In this era of rapid globalization, the use of information technology has increased, and a sensing system is needed to detect rice diseases using Artificial Intelligence (AI). AI is being adopted in all fields of medical and plant science to improve the accuracy of detection while lowering the risk of disease. A Deep Neural Network (DNN) can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. Further, in this paper, the adoption of a mixed-method approach based on a Deep Convolutional Neural Network (Deep CNN) increased the effectiveness of the proposed method. Deep CNNs, a class of deep-learning neural networks, are widely used for image recognition. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After the image acquisition and preprocessing steps, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Abstract: In recent years, grassland degradation has become one of China's most critical environmental problems due to the interaction of natural environmental factors and human causes. Based on a systematic analysis of the spatial characteristics of grassland degradation and the current state of research on its environmental drivers, this paper summarizes the research methods concerning the impact of grassland degradation on natural ecological service functions and socio-economic value, in order to further understand how grassland degradation affects ecological service functions and socio-economic benefits. The results show that, since the value of grassland ecosystem services far exceeds the biomass value grasslands provide, effective grassland management should be designed around the concept of ecological service functions to achieve sustainable grassland development. Future work should comprehensively apply various ecosystem and service-value evaluation methods.