Journal Articles
4 articles found
1. Tool Wear Monitoring in Drilling Using Multiple Feature Fusion of the Cutting Force
Authors: ZHENG Jian-ming, LI Yan, HUANG Yu-mei, LI Shu-juan, XIAO Ji-ming, YUAN Qi-long (Institute of Mechanical and Precision Instrument Engineering, Xi'an University of Technology, Xi'an 710048, P. R. China)
International Journal of Plant Engineering and Management, 2001, No. 1, pp. 33-40 (8 pages)
This paper presents a tool wear monitoring method for the drilling process based on the cutting force signal. The kurtosis coefficient and the energy of a specific frequency band of the cutting force signal were taken as signal features of tool wear, together with the mean value and the standard deviation from the time and frequency domains. The relationships between the signal features and tool wear were discussed; the feature vectors were then input to an artificial neural network for fusion in order to realize intelligent identification of tool wear. The experimental results show that the artificial neural network can fuse multiple features effectively, but the identification precision and the generalization ability are not ideal, because the relationship between the features and tool wear is fuzzy rather than deterministic.
Keywords: tool wear monitoring; multiple feature fusion; neural network
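The four features named in the abstract (mean, standard deviation, kurtosis coefficient, and the energy of a frequency band of the cutting force signal) can be sketched as below. This is a minimal illustration, not the paper's implementation: the band limits and sampling rate are assumed values, and the paper does not specify its spectral estimator.

```python
import numpy as np

def drilling_features(signal, fs, band=(500.0, 1500.0)):
    """Compute the four tool-wear features described in the abstract:
    mean, standard deviation, kurtosis coefficient, and the energy of
    a chosen frequency band of the cutting force signal.
    The band limits are illustrative, not from the paper."""
    signal = np.asarray(signal, dtype=float)
    mean = signal.mean()
    std = signal.std()
    # Kurtosis coefficient: normalized fourth central moment.
    kurtosis = np.mean((signal - mean) ** 4) / std ** 4
    # Band energy from the one-sided power spectrum.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = spectrum[mask].sum()
    return np.array([mean, std, kurtosis, band_energy])
```

A feature vector like this, computed per drilling pass, would be the input the abstract describes feeding to the neural network.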
2. Non-destructive detection of chicken freshness based on multiple features image fusion and support vector machine
Authors: Xiuguo Zou, Chengrui Xin, Chenyang Wang, Yuhua Li, Shuchen Wang, Wentian Zhang, Jiao Jiao Li, Steven Su, Maohua Xiao
International Journal of Agricultural and Biological Engineering, 2024, No. 6, pp. 264-272 (9 pages)
With the rise in global meat consumption and chicken becoming a principal source of white meat, methods for efficiently and accurately determining the freshness of chicken are of increasing importance, since traditional detection methods fail to satisfy modern production needs. A non-destructive method based on machine vision and machine learning was proposed for detecting chicken breast freshness. A self-designed machine vision system was first used to collect images of chicken breast samples stored at 4°C for 1-7 d. The Region of Interest (ROI) of each image was then extracted, yielding a total of 700 ROI images. Six color features were extracted from two color spaces, RGB (red, green, blue) and HSI (hue, saturation, intensity). Six main Gray Level Co-occurrence Matrix (GLCM) texture feature parameters were also calculated from four directions. Principal Component Analysis (PCA) was used to reduce the dimension of these 30 extracted feature parameters for multiple features image fusion. Four principal components were taken as input and chicken breast freshness level as output. A 10-fold cross-validation was used to partition the dataset. Four machine learning methods, Particle Swarm Optimization-Support Vector Machine (PSO-SVM), Random Forest (RF), Gradient Boosting Decision Tree (GBDT), and Naive Bayes Classifier (NBC), were used to establish a chicken breast freshness level prediction model. Among these, PSO-SVM had the best prediction effect, with prediction accuracy reaching 0.9867. The results prove the feasibility of a detection method based on multiple features image fusion and machine learning, providing a theoretical reference for the non-destructive detection of chicken breast freshness.
Keywords: chicken freshness; color space; gray level co-occurrence matrix; multiple features image fusion; machine learning
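The fusion step the abstract describes, reducing a 700 × 30 matrix of color and GLCM features to four principal components, can be sketched with a plain SVD-based PCA. This is a minimal numpy sketch under assumed data; the paper's actual feature values and PCA tooling are not reproduced here.

```python
import numpy as np

def pca_fuse(features, n_components=4):
    """Project a (samples x features) matrix onto its leading principal
    components, as the abstract does to fuse 30 color/texture features
    into 4 inputs for the classifiers."""
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)            # center each feature column
    # SVD of the centered data; rows of Vt are the principal directions,
    # ordered by decreasing explained variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores on the top components
```

The resulting (700 × 4) score matrix would then feed the PSO-SVM, RF, GBDT, and NBC models compared in the paper.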
3. Two-Stream Deep Learning Architecture-Based Human Action Recognition
Authors: Faheem Shehzad, Muhammad Attique Khan, Muhammad Asfand E. Yar, Muhammad Sharif, Majed Alhaisoni, Usman Tariq, Arnab Majumdar, Orawit Thinnukool
Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 5931-5949 (19 pages)
Human action recognition (HAR) based on artificial intelligence reasoning is an important research area in computer vision. Big breakthroughs in this field have been observed in the last few years, and research interest continues to grow in topics such as understanding actions and scenes, studying human joints, and human posture recognition. Many HAR techniques have been introduced in the literature; nonetheless, redundant and irrelevant features reduce recognition accuracy. These techniques also face other challenges, such as differing perspectives, environmental conditions, and temporal variations. In this work, a framework based on deep learning and an improved whale optimization algorithm is proposed for HAR. The proposed framework consists of a few core stages: initial preprocessing of frames, fine-tuning pre-trained deep learning models through transfer learning (TL), feature fusion using a modified serial-based approach, and improved whale optimization-based selection of the best features for final classification. Two pre-trained deep learning models, InceptionV3 and ResNet101, are fine-tuned, and TL is employed to train them on action recognition datasets. The fusion process increases the length of the feature vectors; therefore, an improved whale optimization algorithm is proposed to select the best features. The selected features are finally classified using machine learning (ML) classifiers. Four publicly accessible datasets, UT-Interaction, Hollywood, Free Viewpoint Action Recognition using Motion History Volumes (IXMAS), and UCF Sports, are employed, achieving testing accuracies of 100%, 99.9%, 99.1%, and 100%, respectively. Compared with state-of-the-art (SOTA) techniques, the proposed method showed improved accuracy.
Keywords: human action recognition; deep learning; transfer learning; fusion of multiple features; features optimization
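The serial fusion step the abstract mentions is, at its core, concatenation of the two backbones' feature vectors, after which a selector prunes the lengthened vector. The sketch below illustrates that pipeline shape only: the vectors stand in for InceptionV3 and ResNet101 embeddings (no models are loaded), and the score-ranking selector is a simplified stand-in for the paper's improved whale optimization algorithm, which searches feature subsets rather than ranking individual features.

```python
import numpy as np

def serial_fusion(f_a, f_b):
    """Serial (concatenation) fusion of two backbone feature vectors.
    f_a and f_b stand in for InceptionV3 and ResNet101 embeddings."""
    return np.concatenate([np.ravel(f_a), np.ravel(f_b)])

def select_top_k(features, scores, k):
    """Keep the k highest-scoring features (indices kept in order).
    A simplified stand-in for the whale-optimization selector."""
    idx = np.argsort(scores)[::-1][:k]
    return features[np.sort(idx)]
```

In the paper's framework the selected vector would then go to conventional ML classifiers for the final action label.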
4. See clearly on rainy days: Hybrid multiscale loss guided multifeature fusion network for single image rain removal
Authors: Huiyuan Fu, Yu Zhang, Huadong Ma
Computational Visual Media (EI, CSCD), 2021, No. 4, pp. 467-482 (16 pages)
The quality of photos is highly susceptible to severe weather such as heavy rain, which can also degrade the performance of various visual tasks like object detection. Rain removal is a challenging problem because rain streaks have different appearances even within one image: regions where rain accumulates appear foggy or misty, while rain streaks can be seen clearly in areas where rain is less heavy. We propose removing various rain effects in pictures using a hybrid multiscale loss guided multiple feature fusion de-raining network (MSGMFFNet). Specifically, to deal with rain streaks, our method generates a rain streak attention map, while preprocessing uses gamma correction and contrast enhancement to enhance images, addressing the problem of rain accumulation. With these tools, the model can restore a result with abundant details. Furthermore, a hybrid multiscale loss combining L1 loss and edge loss is used to guide the training process to pay attention to both edge and content information. Comprehensive experiments conducted on both synthetic and real-world datasets demonstrate the effectiveness of our method.
Keywords: single image rain removal; multiple feature fusion; deep learning; hybrid multiscale loss
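The hybrid loss the abstract describes, an L1 content term plus an edge term, can be sketched as follows. This is an illustrative single-scale version only: the difference-based edge operator and the 0.1 edge weight are assumptions, and the paper's loss is additionally computed at multiple scales.

```python
import numpy as np

def edge_map(img):
    """Crude edge strength via forward differences (padded to keep shape).
    A stand-in for whatever edge operator the paper uses."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def hybrid_loss(pred, target, edge_weight=0.1):
    """L1 content loss plus an L1 penalty on edge-map differences,
    mirroring the hybrid loss in the abstract (weight is assumed)."""
    l1 = np.mean(np.abs(pred - target))
    edge = np.mean(np.abs(edge_map(pred) - edge_map(target)))
    return l1 + edge_weight * edge
```

Weighting the edge term steers training toward preserving structure that plain L1 tends to blur, which is the motivation the abstract gives for combining the two terms.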