Measuring software quality requires software engineers to understand the system’s quality attributes and their measurements.The quality attribute is a qualitative property;however,the quantitative feature is needed f...Measuring software quality requires software engineers to understand the system’s quality attributes and their measurements.The quality attribute is a qualitative property;however,the quantitative feature is needed for software measurement,which is not considered during the development of most software systems.Many research studies have investigated different approaches for measuring software quality,but with no practical approaches to quantify and measure quality attributes.This paper proposes a software quality measurement model,based on a software interconnection model,to measure the quality of software components and the overall quality of the software system.Unlike most of the existing approaches,the proposed approach can be applied at the early stages of software development,to different architectural design models,and at different levels of system decomposition.This article introduces a software measurement model that uses a heuristic normalization of the software’s internal quality attributes,i.e.,coupling and cohesion,for software quality measurement.In this model,the quality of a software component is measured based on its internal strength and the coupling it exhibits with other component(s).The proposed model has been experimented with nine software engineering teams that have agreed to participate in the experiment during the development of their different software systems.The experiments have shown that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit,which degrades their quality and the overall quality of the software system.The introduced model can help in understanding the quality of software design.In addition,it identifies the locations in software design that exhibit unnecessary couplings that degrade the quality of the software systems,which can be eliminated.展开更多
Our dependability on software in every aspect of our lives has exceeded the level that was expected in the past. We have now reached a point where we are currently stuck with technology, and it made life much easier t...Our dependability on software in every aspect of our lives has exceeded the level that was expected in the past. We have now reached a point where we are currently stuck with technology, and it made life much easier than before. The rapid increase of technology adoption in the different aspects of life has made technology affordable and has led to an even stronger adoption in the society. As technology advances, almost every kind of technology is now connected to the network like infrastructure, automobiles, airplanes, chemical factories, power stations, and many other systems that are business and mission critical. Because of our high dependency on technology in most, if not all, aspects of life, a system failure is considered to be very critical and might result in harming the surrounding environment or put human life at risk. We apply our conceptual framework to integration between security and safety by creating a SaS (Safety and Security) domain model. Furthermore, it demonstrates that it is possible to use goal-oriented KAOS (Knowledge Acquisition in automated Specification) language in threat and hazard analysis to cover both safety and security domains making their outputs, or artifacts, well-structured and comprehensive, which results in dependability due to the comprehensiveness of the analysis. The conceptual framework can thereby act as an interface for active interactions in risk and hazard management in terms of universal coverage, finding solutions for differences and contradictions which can be overcome by integrating the safety and security domains and using a unified system analysis technique (KAOS) that will result in analysis centrality. For validation we chose the Systems-Theoretic Accident Model and Processes (STAMP) approach and its modelling language, namely System-Theoretic Process Analysis for safety (STPA), on the safety side and System-Theoretic Process Analysis for Security (STPA-sec) on the security side in order to be the base of the experiment in comparison to what was done in SaS. The concepts of SaS domain model were applied on STAMP approach using the same example @RemoteSurgery.展开更多
Software engineering has been taught at many institutions as individual course for many years. Recently, many higher education institutions offer a BSc degree in Software Engineering. Software engineers are required, ...Software engineering has been taught at many institutions as individual course for many years. Recently, many higher education institutions offer a BSc degree in Software Engineering. Software engineers are required, especially at the small enterprises, to play many roles, and sometimes simultaneously. Beside the technical and managerial skills, software engineers should have additional intellectual skills such as domain-specific abstract thinking. Therefore, software engineering curriculum should help the students to build and improve their skills to meet the labor market needs. This study aims to explore the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking. Also, we explore the role of the course project in improving their domain-specific abstract thinking. The study results have shown that, most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly on specific domain. However, this finding is influenced by the students’ lack of the comprehension of some modeling and design aspects (e.g., generalization). We believe that, such aspects should be introduced to the students at early levels of software engineering curriculum, which certainly will improve their ability to think abstractly on specific domain.展开更多
Deep reinforcement learning(DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management.However, due to the model's inherent uncertainty...Deep reinforcement learning(DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management.However, due to the model's inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the “black-box” nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata,which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications.First, a probabilistic automaton is constructed from the historical trajectory of the DRL system by abstracting the state to generate probabilistic decision-making units(PDMUs), and a reverse breadth-first search(BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under the key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action and new action are encapsulated as monitors to guide the DRL system to obtain more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.展开更多
Electrolysis tanks are used to smeltmetals based on electrochemical principles,and the short-circuiting of the pole plates in the tanks in the production process will lead to high temperatures,thus affecting normal pr...Electrolysis tanks are used to smeltmetals based on electrochemical principles,and the short-circuiting of the pole plates in the tanks in the production process will lead to high temperatures,thus affecting normal production.Aiming at the problems of time-consuming and poor accuracy of existing infrared methods for high-temperature detection of dense pole plates in electrolysis tanks,an infrared dense pole plate anomalous target detection network YOLOv5-RMF based on You Only Look Once version 5(YOLOv5)is proposed.Firstly,we modified the Real-Time Enhanced Super-Resolution Generative Adversarial Network(Real-ESRGAN)by changing the U-shaped network(U-Net)to Attention U-Net,to preprocess the images;secondly,we propose a new Focus module that introduces the Marr operator,which can provide more boundary information for the network;again,because Complete Intersection over Union(CIOU)cannot accommodate target borders that are increasing and decreasing,replace CIOU with Extended Intersection over Union(EIOU),while the loss function is changed to Focal and Efficient IOU(Focal-EIOU)due to the different difficulty of sample detection.On the homemade dataset,the precision of our method is 94%,the recall is 70.8%,and the map@.5 is 83.6%,which is an improvement of 1.3%in precision,9.7%in recall,and 7%in map@.5 over the original network.The algorithm can meet the needs of electrolysis tank pole plate abnormal temperature detection,which can lay a technical foundation for improving production efficiency and reducing production waste.展开更多
In pursuit of enhancing the Wireless Sensor Networks(WSNs)energy efficiency and operational lifespan,this paper delves into the domain of energy-efficient routing protocols.InWSNs,the limited energy resources of Senso...In pursuit of enhancing the Wireless Sensor Networks(WSNs)energy efficiency and operational lifespan,this paper delves into the domain of energy-efficient routing protocols.InWSNs,the limited energy resources of Sensor Nodes(SNs)are a big challenge for ensuring their efficient and reliable operation.WSN data gathering involves the utilization of a mobile sink(MS)to mitigate the energy consumption problem through periodic network traversal.The mobile sink(MS)strategy minimizes energy consumption and latency by visiting the fewest nodes or predetermined locations called rendezvous points(RPs)instead of all cluster heads(CHs).CHs subsequently transmit packets to neighboring RPs.The unique determination of this study is the shortest path to reach RPs.As the mobile sink(MS)concept has emerged as a promising solution to the energy consumption problem in WSNs,caused by multi-hop data collection with static sinks.In this study,we proposed two novel hybrid algorithms,namely“ Reduced k-means based on Artificial Neural Network”(RkM-ANN)and“Delay Bound Reduced kmeans with ANN”(DBRkM-ANN)for designing a fast,efficient,and most proficient MS path depending upon rendezvous points(RPs).The first algorithm optimizes the MS’s latency,while the second considers the designing of delay-bound paths,also defined as the number of paths with delay over bound for the MS.Both methods use a weight function and k-means clustering to choose RPs in a way that maximizes efficiency and guarantees network-wide coverage.In addition,a method of using MS scheduling for efficient data collection is provided.Extensive simulations and comparisons to several existing algorithms have shown the effectiveness of the suggested methodologies over a wide range of performance indicators.展开更多
Cyberbullying,a critical concern for digital safety,necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces.To tackle this challenge,our study introduces a ...Cyberbullying,a critical concern for digital safety,necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces.To tackle this challenge,our study introduces a new approach employing Bidirectional Encoder Representations from the Transformers(BERT)base model(cased),originally pretrained in English.This model is uniquely adapted to recognize the intricate nuances of Arabic online communication,a key aspect often overlooked in conventional cyberbullying detection methods.Our model is an end-to-end solution that has been fine-tuned on a diverse dataset of Arabic social media(SM)tweets showing a notable increase in detection accuracy and sensitivity compared to existing methods.Experimental results on a diverse Arabic dataset collected from the‘X platform’demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods.E-BERT shows a substantial improvement in performance,evidenced by an accuracy of 98.45%,precision of 99.17%,recall of 99.10%,and an F1 score of 99.14%.The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models in regional language applications,offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.展开更多
With the development of hardware devices and the upgrading of smartphones,a large number of users save privacy-related information in mobile devices,mainly smartphones,which puts forward higher demands on the protecti...With the development of hardware devices and the upgrading of smartphones,a large number of users save privacy-related information in mobile devices,mainly smartphones,which puts forward higher demands on the protection of mobile users’privacy information.At present,mobile user authenticationmethods based on humancomputer interaction have been extensively studied due to their advantages of high precision and non-perception,but there are still shortcomings such as low data collection efficiency,untrustworthy participating nodes,and lack of practicability.To this end,this paper proposes a privacy-enhanced mobile user authentication method with motion sensors,which mainly includes:(1)Construct a smart contract-based private chain and federated learning to improve the data collection efficiency of mobile user authentication,reduce the probability of the model being bypassed by attackers,and reduce the overhead of data centralized processing and the risk of privacy leakage;(2)Use certificateless encryption to realize the authentication of the device to ensure the credibility of the client nodes participating in the calculation;(3)Combine Variational Mode Decomposition(VMD)and Long Short-TermMemory(LSTM)to analyze and model the motion sensor data of mobile devices to improve the accuracy of model certification.The experimental results on the real environment dataset of 1513 people show that themethod proposed in this paper can effectively resist poisoning attacks while ensuring the accuracy and efficiency of mobile user authentication.展开更多
The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use ofBody Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-bas...The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use ofBody Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-based BANs is impacted by challenges related to heterogeneous data traffic requirements among nodes, includingcontention during finite backoff periods, association delays, and traffic channel access through clear channelassessment (CCA) algorithms. These challenges lead to increased packet collisions, queuing delays, retransmissions,and the neglect of critical traffic, thereby hindering performance indicators such as throughput, packet deliveryratio, packet drop rate, and packet delay. Therefore, we propose Dynamic Next Backoff Period and Clear ChannelAssessment (DNBP-CCA) schemes to address these issues. The DNBP-CCA schemes leverage a combination ofthe Dynamic Next Backoff Period (DNBP) scheme and the Dynamic Next Clear Channel Assessment (DNCCA)scheme. The DNBP scheme employs a fuzzy Takagi, Sugeno, and Kang (TSK) model’s inference system toquantitatively analyze backoff exponent, channel clearance, collision ratio, and data rate as input parameters. Onthe other hand, the DNCCA scheme dynamically adapts the CCA process based on requested data transmission tothe coordinator, considering input parameters such as buffer status ratio and acknowledgement ratio. As a result,simulations demonstrate that our proposed schemes are better than some existing representative approaches andenhance data transmission, reduce node collisions, improve average throughput, and packet delivery ratio, anddecrease average packet drop rate and packet delay.展开更多
The segmentation of head and neck(H&N)tumors in dual Positron Emission Tomography/Computed Tomogra-phy(PET/CT)imaging is a critical task in medical imaging,providing essential information for diagnosis,treatment p...The segmentation of head and neck(H&N)tumors in dual Positron Emission Tomography/Computed Tomogra-phy(PET/CT)imaging is a critical task in medical imaging,providing essential information for diagnosis,treatment planning,and outcome prediction.Motivated by the need for more accurate and robust segmentation methods,this study addresses key research gaps in the application of deep learning techniques to multimodal medical images.Specifically,it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution.The primary research questions guiding this study are:(1)How can the integration of convolutional neural networks(CNNs)and transformer networks enhance segmentation accuracy in dual PET/CT imaging?(2)What are the comparative advantages of 2D,2.5D,and 3D model configurations in this context?To answer these questions,we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers.Our proposed methodology involved a comprehensive preprocessing pipeline,including normalization,contrast enhancement,and resampling,followed by segmentation using 2D,2.5D,and 3D UNet Transformer models.The models were trained and tested on three diverse datasets:HeckTor2022,AutoPET2023,and SegRap2023.Performance was assessed using metrics such as Dice Similarity Coefficient,Jaccard Index,Average Surface Distance(ASD),and Relative Absolute Volume Difference(RAVD).The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics,achieving the highest Dice and Jaccard values,indicating superior segmentation accuracy.For instance,on the HeckTor2022 dataset,the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705,surpassing other model configurations.The 3D model showed strong boundary delineation performance but exhibited variability across datasets,while the 2D model,although effective,generally underperformed compared to its 2.5D and 3D counterparts.Compared to related literature,our study confirms the advantages of incorporating additional spatial context,as seen in the improved performance of the 2.5D model.This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.展开更多
In the digital age, the global character of the Internet has significantly improved our daily lives by providing access to large amounts of knowledge and allowing for seamless connections. However, this enormously int...In the digital age, the global character of the Internet has significantly improved our daily lives by providing access to large amounts of knowledge and allowing for seamless connections. However, this enormously interconnected world is not without its risks. Malicious URLs are a powerful menace, masquerading as legitimate links while holding the intent to hack computer systems or steal sensitive personal information. As the sophistication and frequency of cyberattacks increase, identifying bad URLs has emerged as a critical aspect of cybersecurity. This study presents a new approach that enables the average end-user to check URL safety using Microsoft Excel. Using the powerful VirusTotal API for URL inspections, this study creates an Excel add-in that integrates Python and Excel to deliver a seamless, user-friendly interface. Furthermore, the study improves Excel’s capabilities by allowing users to encrypt and decrypt text communications directly in the spreadsheet. Users may easily encrypt their conversations by simply typing a key and the required text into predefined cells, enhancing their personal cybersecurity with a layer of cryptographic secrecy. This strategy democratizes access to advanced cybersecurity solutions, making attentive digital integrity a feature rather than a daunting burden.展开更多
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision...Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel’s graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python’s scripting capabilities. This paper covers the integration solution of empowering non-programmers to leverage Python’s capabilities within the familiar Excel environment. This enables users to perform advanced data analysis and automation tasks without requiring extensive programming knowledge. Based on Soliciting feedback from non-programmers who have tested the integration solution, the case study shows how the solution evaluates the ease of implementation, performance, and compatibility of Python with Excel versions.展开更多
One of the most basic and difficult areas of computer vision and image understanding applications is still object detection. Deep neural network models and enhanced object representation have led to significant progre...One of the most basic and difficult areas of computer vision and image understanding applications is still object detection. Deep neural network models and enhanced object representation have led to significant progress in object detection. This research investigates in greater detail how object detection has changed in the recent years in the deep learning age. We provide an overview of the literature on a range of cutting-edge object identification algorithms and the theoretical underpinnings of these techniques. Deep learning technologies are contributing to substantial innovations in the field of object detection. While Convolutional Neural Networks (CNN) have laid a solid foundation, new models such as You Only Look Once (YOLO) and Vision Transformers (ViTs) have expanded the possibilities even further by providing high accuracy and fast detection in a variety of settings. Even with these developments, integrating CNN, YOLO and ViTs, into a coherent framework still poses challenges with juggling computing demand, speed, and accuracy especially in dynamic contexts. Real-time processing in applications like surveillance and autonomous driving necessitates improvements that take use of each model type’s advantages. The goal of this work is to provide an object detection system that maximizes detection speed and accuracy while decreasing processing requirements by integrating YOLO, CNN, and ViTs. Improving real-time detection performance in changing weather and light exposure circumstances, as well as detecting small or partially obscured objects in crowded cities, are among the goals. We provide a hybrid architecture which leverages CNN for robust feature extraction, YOLO for rapid detection, and ViTs for remarkable global context capture via self-attention techniques. Using an innovative training regimen that prioritizes flexible learning rates and data augmentation procedures, the model is trained on an extensive dataset of urban settings. Compared to solo YOLO, CNN, or ViTs models, the suggested model exhibits an increase in detection accuracy. This improvement is especially noticeable in difficult situations such settings with high occlusion and low light. In addition, it attains a decrease in inference time in comparison to baseline models, allowing real-time object detection without performance loss. This work introduces a novel method of object identification that integrates CNN, YOLO and ViTs, in a synergistic way. The resultant framework extends the use of integrated deep learning models in practical applications while also setting a new standard for detection performance under a variety of conditions. Our research advances computer vision by providing a scalable and effective approach to object identification problems. Its possible uses include autonomous navigation, security, and other areas.展开更多
In this study,the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based,deep-learning(LSTM)and ensemble learning(Light-GBM)models.These models were trained with four different f...In this study,the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based,deep-learning(LSTM)and ensemble learning(Light-GBM)models.These models were trained with four different feature sets and their performances were evaluated in terms of accuracy and F-measure metrics.While the first experiments directly used the own stock features as the model inputs,the second experiments utilized reduced stock features through Variational AutoEncoders(VAE).In the last experiments,in order to grasp the effects of the other banking stocks on individual stock performance,the features belonging to other stocks were also given as inputs to our models.While combining other stock features was done for both own(named as allstock_own)and VAE-reduced(named as allstock_VAE)stock features,the expanded dimensions of the feature sets were reduced by Recursive Feature Elimination.As the highest success rate increased up to 0.685 with allstock_own and LSTM with attention model,the combination of allstock_VAE and LSTM with the attention model obtained an accuracy rate of 0.675.Although the classification results achieved with both feature types was close,allstock_VAE achieved these results using nearly 16.67%less features compared to allstock_own.When all experimental results were examined,it was found out that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features.It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained by own stock features.展开更多
One of the most complex tasks for computer-aided diagnosis(Intelligent decision support system)is the segmentation of lesions.Thus,this study proposes a new fully automated method for the segmentation of ovarian and b...One of the most complex tasks for computer-aided diagnosis(Intelligent decision support system)is the segmentation of lesions.Thus,this study proposes a new fully automated method for the segmentation of ovarian and breast ultrasound images.The main contributions of this research is the development of a novel Viola–James model capable of segmenting the ultrasound images of breast and ovarian cancer cases.In addition,proposed an approach that can efficiently generate region-of-interest(ROI)and new features that can be used in characterizing lesion boundaries.This study uses two databases in training and testing the proposed segmentation approach.The breast cancer database contains 250 images,while that of the ovarian tumor has 100 images obtained from several hospitals in Iraq.Results of the experiments showed that the proposed approach demonstrates better performance compared with those of other segmentation methods used for segmenting breast and ovarian ultrasound images.The segmentation result of the proposed system compared with the other existing techniques in the breast cancer data set was 78.8%.By contrast,the segmentation result of the proposed system in the ovarian tumor data set was 79.2%.In the classification results,we achieved 95.43%accuracy,92.20%sensitivity,and 97.5%specificity when we used the breast cancer data set.For the ovarian tumor data set,we achieved 94.84%accuracy,96.96%sensitivity,and 90.32%specificity.展开更多
The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,whi...The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.展开更多
Recommendation services become an essential and hot research topic for researchers nowadays.Social data such asReviews play an important role in the recommendation of the products.Improvement was achieved by deep lear...Recommendation services become an essential and hot research topic for researchers nowadays.Social data such asReviews play an important role in the recommendation of the products.Improvement was achieved by deep learning approaches for capturing user and product information from a short text.However,such previously used approaches do not fairly and efficiently incorporate users’preferences and product characteristics.The proposed novel Hybrid Deep Collaborative Filtering(HDCF)model combines deep learning capabilities and deep interaction modeling with high performance for True Recommendations.To overcome the cold start problem,the new overall rating is generated by aggregating the Deep Multivariate Rating DMR(Votes,Likes,Stars,and Sentiment scores of reviews)from different external data sources because different sites have different rating scores about the same product that make confusion for the user to make a decision,either product is truly popular or not.The proposed novel HDCF model consists of four major modules such as User Product Attention,Deep Collaborative Filtering,Neural Sentiment Classifier,and Deep Multivariate Rating(UPA-DCF+NSC+DMR)to solve the addressed problems.Experimental results demonstrate that our novel model is outperforming state-of-the-art IMDb,Yelp2013,and Yelp2014 datasets for the true top-n recommendation of products using HDCF to increase the accuracy,confidence,and trust of recommendation services.展开更多
The Internet of Vehicles(IoV)is a networking paradigm related to the intercommunication of vehicles using a network.In a dynamic network,one of the key challenges in IoV is traffic management under increasing vehicles...The Internet of Vehicles(IoV)is a networking paradigm related to the intercommunication of vehicles using a network.In a dynamic network,one of the key challenges in IoV is traffic management under increasing vehicles to avoid congestion.Therefore,optimal path selection to route traffic between the origin and destination is vital.This research proposed a realistic strategy to reduce traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access.Firstly,this work proposed a novel use of the Ant Colony Optimization(ACO)algorithm and formulated the path planning optimization problem as an Integer Linear Program(ILP).This integrates the future estimation metric to predict the future arrivals of the vehicles,searching the optimal routes.Considering the mobile nature of IOV,fuzzy logic is used for congestion level estimation along with the ACO to determine the optimal path.The model results indicate that the suggested scheme outperforms the existing state-of-the-art methods by identifying the shortest and most cost-effective path.Thus,this work strongly supports its use in applications having stringent Quality of Service(QoS)requirements for the vehicles.展开更多
The overgrowth of weeds growing along with the primary crop in the fields reduces crop production.Conventional solutions like hand weeding are labor-intensive,costly,and time-consuming;farmers have used herbicides.The...The overgrowth of weeds growing along with the primary crop in the fields reduces crop production.Conventional solutions like hand weeding are labor-intensive,costly,and time-consuming;farmers have used herbicides.The application of herbicide is effective but causes environmental and health concerns.Hence,Precision Agriculture(PA)suggests the variable spraying of herbicides so that herbicide chemicals do not affect the primary plants.Motivated by the gap above,we proposed a Deep Learning(DL)based model for detecting Eggplant(Brinjal)weed in this paper.The key objective of this study is to detect plant and non-plant(weed)parts from crop images.With the help of object detection,the precise location of weeds from images can be achieved.The dataset is collected manually from a private farm in Gandhinagar,Gujarat,India.The combined approach of classification and object detection is applied in the proposed model.The Convolutional Neural Network(CNN)model is used to classify weed and non-weed images;further DL models are applied for object detection.We have compared DL models based on accuracy,memory usage,and Intersection over Union(IoU).ResNet-18,YOLOv3,CenterNet,and Faster RCNN are used in the proposed work.CenterNet outperforms all other models in terms of accuracy,i.e.,88%.Compared to other models,YOLOv3 is the least memory-intensive,utilizing 4.78 GB to evaluate the data.展开更多
文摘Measuring software quality requires software engineers to understand the system’s quality attributes and their measurements.The quality attribute is a qualitative property;however,the quantitative feature is needed for software measurement,which is not considered during the development of most software systems.Many research studies have investigated different approaches for measuring software quality,but with no practical approaches to quantify and measure quality attributes.This paper proposes a software quality measurement model,based on a software interconnection model,to measure the quality of software components and the overall quality of the software system.Unlike most of the existing approaches,the proposed approach can be applied at the early stages of software development,to different architectural design models,and at different levels of system decomposition.This article introduces a software measurement model that uses a heuristic normalization of the software’s internal quality attributes,i.e.,coupling and cohesion,for software quality measurement.In this model,the quality of a software component is measured based on its internal strength and the coupling it exhibits with other component(s).The proposed model has been experimented with nine software engineering teams that have agreed to participate in the experiment during the development of their different software systems.The experiments have shown that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit,which degrades their quality and the overall quality of the software system.The introduced model can help in understanding the quality of software design.In addition,it identifies the locations in software design that exhibit unnecessary couplings that degrade the quality of the software systems,which can be eliminated.
文摘Our dependability on software in every aspect of our lives has exceeded the level that was expected in the past. We have now reached a point where we are currently stuck with technology, and it made life much easier than before. The rapid increase of technology adoption in the different aspects of life has made technology affordable and has led to an even stronger adoption in the society. As technology advances, almost every kind of technology is now connected to the network like infrastructure, automobiles, airplanes, chemical factories, power stations, and many other systems that are business and mission critical. Because of our high dependency on technology in most, if not all, aspects of life, a system failure is considered to be very critical and might result in harming the surrounding environment or put human life at risk. We apply our conceptual framework to integration between security and safety by creating a SaS (Safety and Security) domain model. Furthermore, it demonstrates that it is possible to use goal-oriented KAOS (Knowledge Acquisition in automated Specification) language in threat and hazard analysis to cover both safety and security domains making their outputs, or artifacts, well-structured and comprehensive, which results in dependability due to the comprehensiveness of the analysis. The conceptual framework can thereby act as an interface for active interactions in risk and hazard management in terms of universal coverage, finding solutions for differences and contradictions which can be overcome by integrating the safety and security domains and using a unified system analysis technique (KAOS) that will result in analysis centrality. For validation we chose the Systems-Theoretic Accident Model and Processes (STAMP) approach and its modelling language, namely System-Theoretic Process Analysis for safety (STPA), on the safety side and System-Theoretic Process Analysis for Security (STPA-sec) on the security side in order to be the base of the experiment in comparison to what was done in SaS. The concepts of SaS domain model were applied on STAMP approach using the same example @RemoteSurgery.
文摘Software engineering has been taught at many institutions as individual course for many years. Recently, many higher education institutions offer a BSc degree in Software Engineering. Software engineers are required, especially at the small enterprises, to play many roles, and sometimes simultaneously. Beside the technical and managerial skills, software engineers should have additional intellectual skills such as domain-specific abstract thinking. Therefore, software engineering curriculum should help the students to build and improve their skills to meet the labor market needs. This study aims to explore the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking. Also, we explore the role of the course project in improving their domain-specific abstract thinking. The study results have shown that, most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly on specific domain. However, this finding is influenced by the students’ lack of the comprehension of some modeling and design aspects (e.g., generalization). We believe that, such aspects should be introduced to the students at early levels of software engineering curriculum, which certainly will improve their ability to think abstractly on specific domain.
基金supported by the Shanghai Science and Technology Committee (22511105500)the National Nature Science Foundation of China (62172299, 62032019)+2 种基金the Space Optoelectronic Measurement and Perception LaboratoryBeijing Institute of Control Engineering(LabSOMP-2023-03)the Central Universities of China (2023-4-YB-05)。
文摘Deep reinforcement learning(DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management.However, due to the model's inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the “black-box” nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata,which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications.First, a probabilistic automaton is constructed from the historical trajectory of the DRL system by abstracting the state to generate probabilistic decision-making units(PDMUs), and a reverse breadth-first search(BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under the key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action and new action are encapsulated as monitors to guide the DRL system to obtain more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
文摘Electrolysis tanks are used to smeltmetals based on electrochemical principles,and the short-circuiting of the pole plates in the tanks in the production process will lead to high temperatures,thus affecting normal production.Aiming at the problems of time-consuming and poor accuracy of existing infrared methods for high-temperature detection of dense pole plates in electrolysis tanks,an infrared dense pole plate anomalous target detection network YOLOv5-RMF based on You Only Look Once version 5(YOLOv5)is proposed.Firstly,we modified the Real-Time Enhanced Super-Resolution Generative Adversarial Network(Real-ESRGAN)by changing the U-shaped network(U-Net)to Attention U-Net,to preprocess the images;secondly,we propose a new Focus module that introduces the Marr operator,which can provide more boundary information for the network;again,because Complete Intersection over Union(CIOU)cannot accommodate target borders that are increasing and decreasing,replace CIOU with Extended Intersection over Union(EIOU),while the loss function is changed to Focal and Efficient IOU(Focal-EIOU)due to the different difficulty of sample detection.On the homemade dataset,the precision of our method is 94%,the recall is 70.8%,and the map@.5 is 83.6%,which is an improvement of 1.3%in precision,9.7%in recall,and 7%in map@.5 over the original network.The algorithm can meet the needs of electrolysis tank pole plate abnormal temperature detection,which can lay a technical foundation for improving production efficiency and reducing production waste.
基金Research Supporting Project Number(RSP2024R421),King Saud University,Riyadh,Saudi Arabia.
文摘In pursuit of enhancing the Wireless Sensor Networks(WSNs)energy efficiency and operational lifespan,this paper delves into the domain of energy-efficient routing protocols.InWSNs,the limited energy resources of Sensor Nodes(SNs)are a big challenge for ensuring their efficient and reliable operation.WSN data gathering involves the utilization of a mobile sink(MS)to mitigate the energy consumption problem through periodic network traversal.The mobile sink(MS)strategy minimizes energy consumption and latency by visiting the fewest nodes or predetermined locations called rendezvous points(RPs)instead of all cluster heads(CHs).CHs subsequently transmit packets to neighboring RPs.The unique determination of this study is the shortest path to reach RPs.As the mobile sink(MS)concept has emerged as a promising solution to the energy consumption problem in WSNs,caused by multi-hop data collection with static sinks.In this study,we proposed two novel hybrid algorithms,namely“ Reduced k-means based on Artificial Neural Network”(RkM-ANN)and“Delay Bound Reduced kmeans with ANN”(DBRkM-ANN)for designing a fast,efficient,and most proficient MS path depending upon rendezvous points(RPs).The first algorithm optimizes the MS’s latency,while the second considers the designing of delay-bound paths,also defined as the number of paths with delay over bound for the MS.Both methods use a weight function and k-means clustering to choose RPs in a way that maximizes efficiency and guarantees network-wide coverage.In addition,a method of using MS scheduling for efficient data collection is provided.Extensive simulations and comparisons to several existing algorithms have shown the effectiveness of the suggested methodologies over a wide range of performance indicators.
基金funded by Scientific Research Deanship at University of Ha’il-Saudi Arabia through Project Number RG-23092。
文摘Cyberbullying,a critical concern for digital safety,necessitates effective linguistic analysis tools that can navigate the complexities of language use in online spaces.To tackle this challenge,our study introduces a new approach employing Bidirectional Encoder Representations from the Transformers(BERT)base model(cased),originally pretrained in English.This model is uniquely adapted to recognize the intricate nuances of Arabic online communication,a key aspect often overlooked in conventional cyberbullying detection methods.Our model is an end-to-end solution that has been fine-tuned on a diverse dataset of Arabic social media(SM)tweets showing a notable increase in detection accuracy and sensitivity compared to existing methods.Experimental results on a diverse Arabic dataset collected from the‘X platform’demonstrate a notable increase in detection accuracy and sensitivity compared to existing methods.E-BERT shows a substantial improvement in performance,evidenced by an accuracy of 98.45%,precision of 99.17%,recall of 99.10%,and an F1 score of 99.14%.The proposed E-BERT not only addresses a critical gap in cyberbullying detection in Arabic online forums but also sets a precedent for applying cross-lingual pretrained models in regional language applications,offering a scalable and effective framework for enhancing online safety across Arabic-speaking communities.
基金Wenzhou Key Scientific and Technological Projects(No.ZG2020031)Wenzhou Polytechnic Research Projects(No.WZY2021002)+3 种基金Key R&D Projects in Zhejiang Province(No.2021C01117)Major Program of Natural Science Foundation of Zhejiang Province(LD22F020002)the Cloud Security Key Technology Research Laboratorythe Researchers Supporting Project Number(RSP2023R509),King Saud University,Riyadh,Saudi Arabia.
文摘With the development of hardware devices and the upgrading of smartphones,a large number of users save privacy-related information in mobile devices,mainly smartphones,which puts forward higher demands on the protection of mobile users’privacy information.At present,mobile user authenticationmethods based on humancomputer interaction have been extensively studied due to their advantages of high precision and non-perception,but there are still shortcomings such as low data collection efficiency,untrustworthy participating nodes,and lack of practicability.To this end,this paper proposes a privacy-enhanced mobile user authentication method with motion sensors,which mainly includes:(1)Construct a smart contract-based private chain and federated learning to improve the data collection efficiency of mobile user authentication,reduce the probability of the model being bypassed by attackers,and reduce the overhead of data centralized processing and the risk of privacy leakage;(2)Use certificateless encryption to realize the authentication of the device to ensure the credibility of the client nodes participating in the calculation;(3)Combine Variational Mode Decomposition(VMD)and Long Short-TermMemory(LSTM)to analyze and model the motion sensor data of mobile devices to improve the accuracy of model certification.The experimental results on the real environment dataset of 1513 people show that themethod proposed in this paper can effectively resist poisoning attacks while ensuring the accuracy and efficiency of mobile user authentication.
基金Research Supporting Project Number(RSP2024R421),King Saud University,Riyadh,Saudi Arabia。
文摘The increased adoption of Internet of Medical Things (IoMT) technologies has resulted in the widespread use ofBody Area Networks (BANs) in medical and non-medical domains. However, the performance of IEEE 802.15.4-based BANs is impacted by challenges related to heterogeneous data traffic requirements among nodes, includingcontention during finite backoff periods, association delays, and traffic channel access through clear channelassessment (CCA) algorithms. These challenges lead to increased packet collisions, queuing delays, retransmissions,and the neglect of critical traffic, thereby hindering performance indicators such as throughput, packet deliveryratio, packet drop rate, and packet delay. Therefore, we propose Dynamic Next Backoff Period and Clear ChannelAssessment (DNBP-CCA) schemes to address these issues. The DNBP-CCA schemes leverage a combination ofthe Dynamic Next Backoff Period (DNBP) scheme and the Dynamic Next Clear Channel Assessment (DNCCA)scheme. The DNBP scheme employs a fuzzy Takagi, Sugeno, and Kang (TSK) model’s inference system toquantitatively analyze backoff exponent, channel clearance, collision ratio, and data rate as input parameters. Onthe other hand, the DNCCA scheme dynamically adapts the CCA process based on requested data transmission tothe coordinator, considering input parameters such as buffer status ratio and acknowledgement ratio. As a result,simulations demonstrate that our proposed schemes are better than some existing representative approaches andenhance data transmission, reduce node collisions, improve average throughput, and packet delivery ratio, anddecrease average packet drop rate and packet delay.
基金supported by Scientific Research Deanship at University of Ha’il,Saudi Arabia through project number RG-23137.
文摘The segmentation of head and neck(H&N)tumors in dual Positron Emission Tomography/Computed Tomogra-phy(PET/CT)imaging is a critical task in medical imaging,providing essential information for diagnosis,treatment planning,and outcome prediction.Motivated by the need for more accurate and robust segmentation methods,this study addresses key research gaps in the application of deep learning techniques to multimodal medical images.Specifically,it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution.The primary research questions guiding this study are:(1)How can the integration of convolutional neural networks(CNNs)and transformer networks enhance segmentation accuracy in dual PET/CT imaging?(2)What are the comparative advantages of 2D,2.5D,and 3D model configurations in this context?To answer these questions,we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers.Our proposed methodology involved a comprehensive preprocessing pipeline,including normalization,contrast enhancement,and resampling,followed by segmentation using 2D,2.5D,and 3D UNet Transformer models.The models were trained and tested on three diverse datasets:HeckTor2022,AutoPET2023,and SegRap2023.Performance was assessed using metrics such as Dice Similarity Coefficient,Jaccard Index,Average Surface Distance(ASD),and Relative Absolute Volume Difference(RAVD).The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics,achieving the highest Dice and Jaccard values,indicating superior segmentation accuracy.For instance,on the HeckTor2022 dataset,the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705,surpassing other model configurations.The 3D model showed strong boundary delineation performance but exhibited variability across datasets,while the 2D model,although effective,generally underperformed compared to its 2.5D and 3D counterparts.Compared to related literature,our study confirms the advantages of incorporating additional spatial context,as seen in the improved performance of the 2.5D model.This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
文摘In the digital age, the global character of the Internet has significantly improved our daily lives by providing access to large amounts of knowledge and allowing for seamless connections. However, this enormously interconnected world is not without its risks. Malicious URLs are a powerful menace, masquerading as legitimate links while holding the intent to hack computer systems or steal sensitive personal information. As the sophistication and frequency of cyberattacks increase, identifying bad URLs has emerged as a critical aspect of cybersecurity. This study presents a new approach that enables the average end-user to check URL safety using Microsoft Excel. Using the powerful VirusTotal API for URL inspections, this study creates an Excel add-in that integrates Python and Excel to deliver a seamless, user-friendly interface. Furthermore, the study improves Excel’s capabilities by allowing users to encrypt and decrypt text communications directly in the spreadsheet. Users may easily encrypt their conversations by simply typing a key and the required text into predefined cells, enhancing their personal cybersecurity with a layer of cryptographic secrecy. This strategy democratizes access to advanced cybersecurity solutions, making attentive digital integrity a feature rather than a daunting burden.
文摘Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel’s graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python’s scripting capabilities. This paper covers the integration solution of empowering non-programmers to leverage Python’s capabilities within the familiar Excel environment. This enables users to perform advanced data analysis and automation tasks without requiring extensive programming knowledge. Based on Soliciting feedback from non-programmers who have tested the integration solution, the case study shows how the solution evaluates the ease of implementation, performance, and compatibility of Python with Excel versions.
文摘One of the most basic and difficult areas of computer vision and image understanding applications is still object detection. Deep neural network models and enhanced object representation have led to significant progress in object detection. This research investigates in greater detail how object detection has changed in the recent years in the deep learning age. We provide an overview of the literature on a range of cutting-edge object identification algorithms and the theoretical underpinnings of these techniques. Deep learning technologies are contributing to substantial innovations in the field of object detection. While Convolutional Neural Networks (CNN) have laid a solid foundation, new models such as You Only Look Once (YOLO) and Vision Transformers (ViTs) have expanded the possibilities even further by providing high accuracy and fast detection in a variety of settings. Even with these developments, integrating CNN, YOLO and ViTs, into a coherent framework still poses challenges with juggling computing demand, speed, and accuracy especially in dynamic contexts. Real-time processing in applications like surveillance and autonomous driving necessitates improvements that take use of each model type’s advantages. The goal of this work is to provide an object detection system that maximizes detection speed and accuracy while decreasing processing requirements by integrating YOLO, CNN, and ViTs. Improving real-time detection performance in changing weather and light exposure circumstances, as well as detecting small or partially obscured objects in crowded cities, are among the goals. We provide a hybrid architecture which leverages CNN for robust feature extraction, YOLO for rapid detection, and ViTs for remarkable global context capture via self-attention techniques. Using an innovative training regimen that prioritizes flexible learning rates and data augmentation procedures, the model is trained on an extensive dataset of urban settings. Compared to solo YOLO, CNN, or ViTs models, the suggested model exhibits an increase in detection accuracy. This improvement is especially noticeable in difficult situations such settings with high occlusion and low light. In addition, it attains a decrease in inference time in comparison to baseline models, allowing real-time object detection without performance loss. This work introduces a novel method of object identification that integrates CNN, YOLO and ViTs, in a synergistic way. The resultant framework extends the use of integrated deep learning models in practical applications while also setting a new standard for detection performance under a variety of conditions. Our research advances computer vision by providing a scalable and effective approach to object identification problems. Its possible uses include autonomous navigation, security, and other areas.
文摘In this study,the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based,deep-learning(LSTM)and ensemble learning(Light-GBM)models.These models were trained with four different feature sets and their performances were evaluated in terms of accuracy and F-measure metrics.While the first experiments directly used the own stock features as the model inputs,the second experiments utilized reduced stock features through Variational AutoEncoders(VAE).In the last experiments,in order to grasp the effects of the other banking stocks on individual stock performance,the features belonging to other stocks were also given as inputs to our models.While combining other stock features was done for both own(named as allstock_own)and VAE-reduced(named as allstock_VAE)stock features,the expanded dimensions of the feature sets were reduced by Recursive Feature Elimination.As the highest success rate increased up to 0.685 with allstock_own and LSTM with attention model,the combination of allstock_VAE and LSTM with the attention model obtained an accuracy rate of 0.675.Although the classification results achieved with both feature types was close,allstock_VAE achieved these results using nearly 16.67%less features compared to allstock_own.When all experimental results were examined,it was found out that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features.It was also concluded that the results obtained with the VAE-reduced stock features were similar to those obtained by own stock features.
文摘One of the most complex tasks for computer-aided diagnosis(Intelligent decision support system)is the segmentation of lesions.Thus,this study proposes a new fully automated method for the segmentation of ovarian and breast ultrasound images.The main contributions of this research is the development of a novel Viola–James model capable of segmenting the ultrasound images of breast and ovarian cancer cases.In addition,proposed an approach that can efficiently generate region-of-interest(ROI)and new features that can be used in characterizing lesion boundaries.This study uses two databases in training and testing the proposed segmentation approach.The breast cancer database contains 250 images,while that of the ovarian tumor has 100 images obtained from several hospitals in Iraq.Results of the experiments showed that the proposed approach demonstrates better performance compared with those of other segmentation methods used for segmenting breast and ovarian ultrasound images.The segmentation result of the proposed system compared with the other existing techniques in the breast cancer data set was 78.8%.By contrast,the segmentation result of the proposed system in the ovarian tumor data set was 79.2%.In the classification results,we achieved 95.43%accuracy,92.20%sensitivity,and 97.5%specificity when we used the breast cancer data set.For the ovarian tumor data set,we achieved 94.84%accuracy,96.96%sensitivity,and 90.32%specificity.
文摘The quick spread of the CoronavirusDisease(COVID-19)infection around the world considered a real danger for global health.The biological structure and symptoms of COVID-19 are similar to other viral chest maladies,which makes it challenging and a big issue to improve approaches for efficient identification of COVID-19 disease.In this study,an automatic prediction of COVID-19 identification is proposed to automatically discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful moderns are traditional machine learning methods(e.g.,artificial neural network(ANN),support vector machine(SVM),linear kernel and radial basis function(RBF),k-nearest neighbor(k-NN),Decision Tree(DT),andCN2 rule inducer techniques)and deep learningmodels(e.g.,MobileNets V2,ResNet50,GoogleNet,DarkNet andXception).A largeX-ray dataset has been created and developed,namely the COVID-19 vs.Normal(400 healthy cases,and 400 COVID cases).To the best of our knowledge,it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases.Based on the results obtained from the experiments,it can be concluded that all the models performed well,deep learning models had achieved the optimum accuracy of 98.8%in ResNet50 model.In comparison,in traditional machine learning techniques, the SVM demonstrated the best result for an accuracy of 95% and RBFaccuracy 94% for the prediction of coronavirus disease 2019.
文摘Recommendation services become an essential and hot research topic for researchers nowadays.Social data such asReviews play an important role in the recommendation of the products.Improvement was achieved by deep learning approaches for capturing user and product information from a short text.However,such previously used approaches do not fairly and efficiently incorporate users’preferences and product characteristics.The proposed novel Hybrid Deep Collaborative Filtering(HDCF)model combines deep learning capabilities and deep interaction modeling with high performance for True Recommendations.To overcome the cold start problem,the new overall rating is generated by aggregating the Deep Multivariate Rating DMR(Votes,Likes,Stars,and Sentiment scores of reviews)from different external data sources because different sites have different rating scores about the same product that make confusion for the user to make a decision,either product is truly popular or not.The proposed novel HDCF model consists of four major modules such as User Product Attention,Deep Collaborative Filtering,Neural Sentiment Classifier,and Deep Multivariate Rating(UPA-DCF+NSC+DMR)to solve the addressed problems.Experimental results demonstrate that our novel model is outperforming state-of-the-art IMDb,Yelp2013,and Yelp2014 datasets for the true top-n recommendation of products using HDCF to increase the accuracy,confidence,and trust of recommendation services.
基金supported by“Human Resources Program in Energy Technology”of the Korea Institute of Energy Technology Evaluation and Planning(KETEP),granted financial resources from the Ministry of Trade,Industry&Energy,Republic of Korea.(No.20204010600090).
文摘The Internet of Vehicles(IoV)is a networking paradigm related to the intercommunication of vehicles using a network.In a dynamic network,one of the key challenges in IoV is traffic management under increasing vehicles to avoid congestion.Therefore,optimal path selection to route traffic between the origin and destination is vital.This research proposed a realistic strategy to reduce traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access.Firstly,this work proposed a novel use of the Ant Colony Optimization(ACO)algorithm and formulated the path planning optimization problem as an Integer Linear Program(ILP).This integrates the future estimation metric to predict the future arrivals of the vehicles,searching the optimal routes.Considering the mobile nature of IOV,fuzzy logic is used for congestion level estimation along with the ACO to determine the optimal path.The model results indicate that the suggested scheme outperforms the existing state-of-the-art methods by identifying the shortest and most cost-effective path.Thus,this work strongly supports its use in applications having stringent Quality of Service(QoS)requirements for the vehicles.
基金funded by the Researchers Supporting Project Number(RSP2023R 509),King Saud University,Riyadh,Saudi Arabia.
文摘The overgrowth of weeds growing along with the primary crop in the fields reduces crop production.Conventional solutions like hand weeding are labor-intensive,costly,and time-consuming;farmers have used herbicides.The application of herbicide is effective but causes environmental and health concerns.Hence,Precision Agriculture(PA)suggests the variable spraying of herbicides so that herbicide chemicals do not affect the primary plants.Motivated by the gap above,we proposed a Deep Learning(DL)based model for detecting Eggplant(Brinjal)weed in this paper.The key objective of this study is to detect plant and non-plant(weed)parts from crop images.With the help of object detection,the precise location of weeds from images can be achieved.The dataset is collected manually from a private farm in Gandhinagar,Gujarat,India.The combined approach of classification and object detection is applied in the proposed model.The Convolutional Neural Network(CNN)model is used to classify weed and non-weed images;further DL models are applied for object detection.We have compared DL models based on accuracy,memory usage,and Intersection over Union(IoU).ResNet-18,YOLOv3,CenterNet,and Faster RCNN are used in the proposed work.CenterNet outperforms all other models in terms of accuracy,i.e.,88%.Compared to other models,YOLOv3 is the least memory-intensive,utilizing 4.78 GB to evaluate the data.