Journal Articles
2 articles found
1. Semantic Segmentation and YOLO Detector over Aerial Vehicle Images
Authors: Asifa Mehmood Qureshi, Abdul Haleem Butt, Abdulwahab Alazeb, Naif Al Mudawi, Mohammad Alonazi, Nouf Abdullah Almujally, Ahmad Jalal, Hui Liu. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 3315-3332 (18 pages).
Abstract: Intelligent vehicle tracking and detection are crucial tasks in highway management. However, vehicles come in a range of sizes, which makes them challenging to detect and affects the traffic monitoring system's overall accuracy. Deep learning is considered an efficient method for object detection in vision-based systems. In this paper, we propose a vision-based vehicle detection and tracking system based on a You Only Look Once version 5 (YOLOv5) detector combined with a segmentation technique. The model consists of six steps. In the first step, all extracted traffic sequence images are pre-processed to remove noise and enhance contrast. These pre-processed images are then segmented by labelling each pixel to extract uniform regions that aid the detection phase. The single-stage detector YOLOv5 is used to detect and locate vehicles in the images. Each detection is subjected to Speeded-Up Robust Features (SURF) extraction to track multiple vehicles: a unique number is assigned to each vehicle so it can be located in succeeding frames through feature matching. Further, a Kalman filter is implemented to track multiple vehicles. Finally, each vehicle's path is estimated from the centroid points of the rectangular bounding box predicted by the tracking algorithm. The experimental results and comparisons reveal that the proposed vehicle detection and tracking system outperforms other state-of-the-art systems, providing 94.1% detection precision on the Roundabout dataset and 96.1% on the Vehicle Aerial Imaging from Drone (VAID) dataset, respectively.
Keywords: semantic segmentation; YOLOv5; vehicle detection and tracking; Kalman filter; SURF
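For readers who want to prototype the detect-then-track pipeline described in this abstract, the sketch below shows how its main components (YOLOv5 detection, SURF descriptors for re-identifying boxes across frames, and a constant-velocity Kalman filter on box centroids) could be wired together in Python. This is a minimal sketch under assumptions, not the authors' implementation: it presumes the PyTorch Hub ultralytics/yolov5 model and an opencv-contrib build that still ships SURF, and the threshold and noise-covariance values are illustrative.

```python
# Minimal sketch of the detect-then-track loop described above (not the authors' code):
# YOLOv5 detects vehicles per frame, SURF descriptors from each box support re-identification,
# and a per-vehicle constant-velocity Kalman filter smooths the centroid path.
import cv2
import numpy as np
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")        # single-stage detector
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)       # contrib-only; value is illustrative

def make_kalman(cx, cy):
    """Constant-velocity Kalman filter over a box centroid; state = (x, y, vx, vy)."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    kf.errorCovPost = np.eye(4, dtype=np.float32)
    kf.statePost = np.array([[cx], [cy], [0], [0]], np.float32)
    return kf

def detect_vehicles(frame_bgr):
    """Return (box, SURF descriptors) pairs for every vehicle YOLOv5 finds in a frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    detections = model(rgb).xyxy[0].cpu().numpy()               # rows: x1, y1, x2, y2, conf, cls
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    out = []
    for x1, y1, x2, y2, conf, cls in detections:
        crop = gray[int(y1):int(y2), int(x1):int(x2)]
        _, descriptors = surf.detectAndCompute(crop, None)      # matched across frames to keep IDs
        out.append(((x1, y1, x2, y2), descriptors))
    return out

def track_step(kf, cx, cy):
    """One predict/correct cycle; returns the filtered centroid used for path estimation."""
    kf.predict()
    state = kf.correct(np.array([[cx], [cy]], np.float32))
    return float(state[0]), float(state[1])
```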
2. Drone-Based Public Surveillance Using 3D Point Clouds and Neuro-Fuzzy Classifier
Authors: Yawar Abbas, Aisha Ahmed Alarfaj, Ebtisam Abdullah Alabdulqader, Asaad Algarni, Ahmad Jalal, Hui Liu. Computers, Materials & Continua, 2025, No. 3, pp. 4759-4776 (18 pages).
Abstract: Human Activity Recognition (HAR) in drone-captured videos has become popular because of interest in various fields such as video surveillance, sports analysis, and human-robot interaction. However, recognizing actions from such videos poses several challenges: variations in human motion, complex backdrops, motion blur, occlusions, and restricted camera angles. This research presents a human activity recognition system that addresses these challenges using drones' red-green-blue (RGB) videos. The first step in the proposed system partitions the videos into frames and applies bilateral filtering to improve the quality of object foregrounds while reducing background interference, before converting the RGB frames to grayscale. The YOLO (You Only Look Once) algorithm then detects and extracts humans from each frame and obtains their skeletons for further processing. The extracted features include joint angles, displacement and velocity, histogram of oriented gradients (HOG), 3D points, and geodesic distance. These features are optimized using Quadratic Discriminant Analysis (QDA) and fed to a Neuro-Fuzzy Classifier (NFC) for activity classification. Real-world evaluations on the Drone-Action, Unmanned Aerial Vehicle (UAV)-Gesture, and Okutama-Action datasets substantiate the proposed system's superiority in accuracy over existing methods. In particular, the system achieves recognition rates of 93% on Drone-Action, 97% on UAV-Gesture, and 81% on Okutama-Action, demonstrating its reliability and ability to learn human activity from drone videos.
Keywords: activity recognition; geodesic distance; pattern recognition; neuro-fuzzy classifier
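The sketch below illustrates, under stated assumptions, two pieces of the pipeline this abstract names: the bilateral-filter-plus-grayscale pre-processing step and two of the listed skeleton features (joint angles, and per-joint displacement/velocity between consecutive frames). It is not the paper's code; the keypoint layout, the frame rate, and the filter parameters are placeholders chosen for illustration.

```python
# Minimal sketch (assumptions, not the paper's code) of the pre-processing step and two of the
# skeleton features named above. Keypoints are assumed to be (N, 2) pixel coordinates produced
# by a YOLO-based pose detector.
import cv2
import numpy as np

def preprocess(frame_bgr):
    """Bilateral filtering (edge-preserving denoising) followed by grayscale conversion."""
    smoothed = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)  # illustrative params
    return cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)

def joint_angle(a, b, c):
    """Angle in radians at joint b, formed by the segments b->a and b->c."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def displacement_velocity(kpts_prev, kpts_curr, fps=30.0):
    """Per-joint displacement (pixels) and velocity (pixels/s) between consecutive frames."""
    disp = np.linalg.norm(np.asarray(kpts_curr) - np.asarray(kpts_prev), axis=1)
    return disp, disp * fps

# Example with made-up coordinates: elbow angle and wrist motion between two frames.
elbow = joint_angle(a=(120, 80), b=(140, 120), c=(170, 150))   # shoulder, elbow, wrist
disp, vel = displacement_velocity(np.array([[170, 150]]), np.array([[173, 148]]))
```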