Journal Articles
4 articles found
1. A Novelty Framework in Image-Captioning with Visual Attention-Based Refined Visual Features
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Mohammed ELAffendi, Sajid Shah. Computers, Materials & Continua, 2025, No. 3, pp. 3943-3964 (22 pages).
Image captioning, the task of generating descriptive sentences for images, has advanced significantly with the integration of semantic information. However, traditional models still rely on static visual features that do not evolve with the changing linguistic context, which can hinder the ability to form meaningful connections between the image and the generated captions. This limitation often leads to captions that are less accurate or descriptive. In this paper, we propose a novel approach to enhance image captioning by introducing dynamic interactions where visual features continuously adapt to the evolving linguistic context. Our model strengthens the alignment between visual and linguistic elements, resulting in more coherent and contextually appropriate captions. Specifically, we introduce two innovative modules: the Visual Weighting Module (VWM) and the Enhanced Features Attention Module (EFAM). The VWM adjusts visual features using partial attention, enabling dynamic reweighting of the visual inputs, while the EFAM further refines these features to improve their relevance to the generated caption. By continuously adjusting visual features in response to the linguistic context, our model bridges the gap between static visual features and dynamic language generation. We demonstrate the effectiveness of our approach through experiments on the MS-COCO dataset, where our method outperforms state-of-the-art techniques in terms of caption quality and contextual relevance. Our results show that dynamic visual-linguistic alignment significantly enhances image captioning performance.
Keywords: image captioning, visual attention, deep learning, visual features
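The abstract above describes the general idea of letting visual features adapt to the evolving linguistic context at each decoding step. As a rough illustration only, the PyTorch-style sketch below reweights region features with the decoder's current hidden state; the module name, dimensions, and gating formulation are assumptions made for this example and are not the authors' actual VWM/EFAM implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualVisualReweighting(nn.Module):
    """Illustrative sketch: reweight region features using the decoder's hidden state.

    This is NOT the paper's VWM/EFAM code; it only shows the generic idea of
    visual features adapting to the evolving linguistic context.
    """
    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int = 512):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hid_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, region_feats: torch.Tensor, hidden: torch.Tensor):
        # region_feats: (batch, num_regions, feat_dim)
        # hidden:       (batch, hidden_dim) -- current decoder state
        joint = torch.tanh(self.feat_proj(region_feats) +
                           self.hid_proj(hidden).unsqueeze(1))
        weights = F.softmax(self.score(joint).squeeze(-1), dim=-1)  # (batch, num_regions)
        reweighted = region_feats * weights.unsqueeze(-1)           # per-region reweighting
        context = reweighted.sum(dim=1)                             # pooled context vector
        return reweighted, context

# Tiny usage example with random tensors
if __name__ == "__main__":
    module = ContextualVisualReweighting(feat_dim=2048, hidden_dim=512)
    feats = torch.randn(2, 36, 2048)   # e.g., 36 detected regions per image
    h_t = torch.randn(2, 512)          # decoder hidden state at step t
    reweighted, ctx = module(feats, h_t)
    print(reweighted.shape, ctx.shape)  # (2, 36, 2048) and (2, 2048)
```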
2. AI-Driven Pattern Recognition in Medicinal Plants: A Comprehensive Review and Comparative Analysis
Authors: Mohd Asif Hajam, Tasleem Arif, Akib Mohi Ud Din Khanday, Mudasir Ahmad Wani, Muhammad Asim. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 2077-2131 (55 pages).
The pharmaceutical industry increasingly values medicinal plants due to their perceived safety and cost-effectiveness compared to modern drugs. Throughout the extensive history of medicinal plant usage, various plant parts, including flowers, leaves, and roots, have been acknowledged for their healing properties and employed in plant identification. Leaf images, however, stand out as the preferred and easily accessible source of information. Manual plant identification by plant taxonomists is intricate, time-consuming, and prone to errors, relying heavily on human perception. Artificial intelligence (AI) techniques offer a solution by automating plant recognition processes. This study thoroughly examines cutting-edge AI approaches for leaf image-based plant identification, drawing insights from literature across renowned repositories. This paper critically summarizes relevant literature based on AI algorithms, extracted features, and results achieved. Additionally, it analyzes extensively used datasets in automated plant classification research. It also offers deep insights into implemented techniques and methods employed for medicinal plant recognition. Moreover, this rigorous review study discusses opportunities and challenges in employing these AI-based approaches. Furthermore, in-depth statistical findings and lessons learned from this survey are highlighted, along with novel research areas, with the aim of offering insights to the readers and motivating new research directions. This review is expected to serve as a foundational resource for future researchers in the field of AI-based identification of medicinal plants.
Keywords: pattern recognition, artificial intelligence, machine learning, deep learning, image processing, plant leaf identification
3. A Concise and Varied Visual Features-Based Image Captioning Model with Visual Selection
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Naveed Ahmed, Mohammed Ali Alshara. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 2873-2894 (22 pages).
Image captioning has gained increasing attention in recent years. Visual characteristics found in input images play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the effectiveness of identifying relevant image regions at each step of caption generation. However, providing image captioning models with the capability of selecting the most relevant visual features from the input image and attending to them can significantly improve the utilization of these features. Consequently, this leads to enhanced captioning network performance. In light of this, we present an image captioning framework that efficiently exploits the extracted representations of the image. Our framework comprises three key components: the Visual Feature Detector module (VFD), the Visual Feature Visual Attention module (VFVA), and the language model. The VFD module is responsible for detecting a subset of the most pertinent features from the local visual features, creating an updated visual features matrix. Subsequently, the VFVA directs its attention to the visual features matrix generated by the VFD, resulting in an updated context vector employed by the language model to generate an informative description. Integrating the VFD and VFVA modules introduces an additional layer of processing for the visual features, thereby contributing to enhancing the image captioning model's performance. Using the MS-COCO dataset, our experiments show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code can be found here: https://github.com/althobhani/VFDICM (accessed on 30 July 2024).
Keywords: visual attention, image captioning, visual feature detector, visual feature visual attention
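The abstract describes selecting a subset of the most pertinent region features (VFD) and then attending over that subset (VFVA). The sketch below illustrates that two-stage idea only; the scoring head, fixed top-k selection, and all dimensions are assumptions for this example and differ from the authors' released code at https://github.com/althobhani/VFDICM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKFeatureSelector(nn.Module):
    """Illustrative sketch: select the k most relevant region features, then
    attend over the selected subset conditioned on the decoder state.

    This is NOT the paper's VFD/VFVA code; it only sketches the general pattern.
    """
    def __init__(self, feat_dim: int, hidden_dim: int, k: int = 16):
        super().__init__()
        self.k = k
        self.relevance = nn.Linear(feat_dim, 1)       # scores each region feature
        self.query = nn.Linear(hidden_dim, feat_dim)  # maps decoder state to a query

    def forward(self, region_feats: torch.Tensor, hidden: torch.Tensor):
        # region_feats: (batch, num_regions, feat_dim); hidden: (batch, hidden_dim)
        scores = self.relevance(region_feats).squeeze(-1)     # (batch, num_regions)
        topk = scores.topk(self.k, dim=-1).indices            # indices of the k best regions
        idx = topk.unsqueeze(-1).expand(-1, -1, region_feats.size(-1))
        selected = region_feats.gather(1, idx)                # (batch, k, feat_dim)

        # Attention over the selected subset, conditioned on the decoder state
        q = self.query(hidden).unsqueeze(1)                   # (batch, 1, feat_dim)
        attn = F.softmax((selected * q).sum(-1), dim=-1)      # (batch, k)
        context = (selected * attn.unsqueeze(-1)).sum(dim=1)  # (batch, feat_dim)
        return selected, context

# Tiny usage example
if __name__ == "__main__":
    selector = TopKFeatureSelector(feat_dim=2048, hidden_dim=512, k=16)
    feats = torch.randn(2, 36, 2048)
    h_t = torch.randn(2, 512)
    selected, ctx = selector(feats, h_t)
    print(selected.shape, ctx.shape)  # (2, 16, 2048) and (2, 2048)
```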
4. A Survey on Enhancing Image Captioning with Advanced Strategies and Techniques
Authors: Alaa Thobhani, Beiji Zou, Xiaoyan Kui, Amr Abdussalam, Muhammad Asim, Sajid Shah, Mohammed ELAffendi. Computer Modeling in Engineering & Sciences, 2025, No. 3, pp. 2247-2280 (34 pages).
Image captioning has seen significant research efforts over the last decade. The goal is to generate meaningful semantic sentences that describe visual content depicted in photographs and are syntactically accurate. Many real-world applications rely on image captioning, such as helping people with visual impairments to see their surroundings. To formulate a coherent and relevant textual description, computer vision techniques are utilized to comprehend the visual content within an image, followed by natural language processing methods. Numerous approaches and models have been developed to deal with this multifaceted problem. Several models prove to be state-of-the-art solutions in this field. This work offers an exclusive perspective emphasizing the most critical strategies and techniques for enhancing image caption generation. Rather than reviewing all previous image captioning work, we analyze various techniques that significantly improve image caption generation and achieve substantial performance gains, including image captioning with visual attention methods, exploring semantic information types in captions, and employing multi-caption generation techniques. Further, advancements such as neural architecture search, few-shot learning, multi-phase learning, and cross-modal embedding within image caption networks are examined for their transformative effects. The comprehensive quantitative analysis conducted in this study identifies cutting-edge methodologies and sheds light on their profound impact, driving forward the forefront of image captioning technology.
Keywords: image captioning, semantic attention, multi-caption, natural language processing, visual attention methods