Journal Articles
7,633 articles found
1. Large language models in traditional Chinese medicine: a systematic review
Authors: Zhe Chen, Hui Wang, Chengxian Li, Chunxiang Liu, Fengwen Yang, Dong Zhang, Alice Josephine Fauci, Junhua Zhang. Acupuncture and Herbal Medicine, 2025, No. 1, pp. 57–67.
Objective: Generative artificial intelligence (AI) technology, represented by large language models (LLMs), has gradually been developed for traditional Chinese medicine (TCM); however, challenges remain in effectively enhancing AI applications for TCM. Therefore, this study is the first systematic review to analyze LLMs in TCM retrospectively, focusing on and summarizing the evidence of their performance in generative tasks. Methods: We extensively searched electronic databases for articles published until June 2024 to identify publicly available studies on LLMs in TCM. Two investigators independently selected and extracted the related information and evaluation metrics. Based on the available data, this study used descriptive analysis for a comprehensive systematic review of LLM technology related to TCM. Results: Ten studies published between 2023 and 2024 met our eligibility criteria and were included in this review, including 40% LLMs in the TCM vertical domain, 40% containing TCM data, and 20% honoring the TCM contribution, with foundational model parameters ranging from 1.8 to 33 billion. All included studies used manual or automatic evaluation metrics to evaluate model performance and fully discussed the challenges and contributions through an overview of LLMs in TCM. Conclusions: LLMs have achieved significant advantages in TCM applications and can effectively address intelligent TCM tasks. Further in-depth development of LLMs is needed in various vertical TCM fields, including clinical and fundamental research. Focusing the development of generative AI technologies on functionally segmented TCM application scenarios, so as to meet the practical needs of TCM digitalization, is essential.
Keywords: generative artificial intelligence; intelligent clinical applications; large language model; systematic review; traditional Chinese medicine
2. Research on SysML-Based Modeling of the Air Distributed Operations System-of-Systems
Authors: Wang Xiaolong, Wang Nuanchen, Mu Ge, Zhang Xudong, Li Xinjin. 电光与控制 (Electronics Optics & Control), PKU Core, 2025, No. 2, pp. 1–6.
To support research on air distributed operations systems-of-systems with intelligent, unmanned characteristics, and to underpin the development of equipment and capabilities, a SysML-based system-of-systems modeling method is proposed. Building on a review of the concept's development, the characteristics of the air distributed operations system-of-systems are summarized and its winning mechanisms analyzed. Drawing on metamodeling ideas, a data metamodel of the air distributed operations system-of-systems is constructed on the basis of the DoDAF 2.0 metamodel; system-of-systems models are then selected, a modeling framework is built, and the modeling workflow is laid out according to the graphical characteristics of SysML diagrams. The effectiveness of the proposed method is verified with an example of an intelligent UAV swarm combat system-of-systems, providing ideas and technical support for modeling new types of combat systems-of-systems.
Keywords: air distributed operations; system-of-systems modeling; SysML; DoDAF 2.0; metamodel
3. Security Vulnerability Analyses of Large Language Models (LLMs) through Extension of the Common Vulnerability Scoring System (CVSS) Framework
Authors: Alicia Biju, Vishnupriya Ramesh, Vijay K. Madisetti. Journal of Software Engineering and Applications, 2024, No. 5, pp. 340–358.
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
Keywords: Common Vulnerability Scoring System (CVSS); large language models (LLMs); DALL-E; prompt injections; training data poisoning; CVSS metrics
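To make the kind of scoring the paper extends concrete, here is a minimal Python sketch of a CVSS v3.1 base-score calculation applied to a hypothetical prompt-injection finding. The weights follow the public CVSS v3.1 specification; the metric choices below are illustrative assumptions, not the scores derived in the paper.

```python
import math

# CVSS v3.1 base-metric weights (scope unchanged), from the public spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for an unchanged-scope vulnerability."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10.0) * 10) / 10  # CVSS "round up"

# Hypothetical prompt-injection flaw: network-reachable, low complexity,
# no privileges or user interaction, low confidentiality / high integrity impact.
print(base_score("N", "L", "N", "N", c="L", i="H", a="N"))  # -> 8.2
```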
4. Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 307–325.
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) Larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) The effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) In quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, reducing the trainable parameters in a larger layer is more effective in preserving fine-tuning accuracy than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: large-scale language model; parameter-efficient fine-tuning; parameter quantization; key variables; trainable parameters; experimental analysis
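A minimal sketch of how the headline finding could be acted on, assuming the Hugging Face peft library and LLaMA-style module names: keep the default adapter rank on the smaller attention projections and shrink it on the large MLP projections. The specific ranks are illustrative, not the paper's settings.

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,                      # default adapter rank
    lora_alpha=32,             # balancing factor (scaling = alpha / r)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    # Per-module overrides: the large MLP projections tolerate a much
    # smaller rank; attention projections keep the default r=16.
    rank_pattern={"gate_proj": 4, "up_proj": 4, "down_proj": 4},
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
# With a 4-bit quantized base model (e.g., loaded via bitsandbytes),
# wrapping it with get_peft_model(model, config) gives a QLoRA-style
# quantization-aware fine-tuning setup.
```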
5. Robust Detection and Analysis of Smart Contract Vulnerabilities with Large Language Model Agents
Authors: Nishank P. Kuppa, Vijay K. Madisetti. Journal of Information Security, 2025, No. 1, pp. 197–226.
Smart contracts on the Ethereum blockchain continue to revolutionize decentralized applications (dApps) by allowing for self-executing agreements. However, bad actors have continuously found ways to exploit smart contracts for personal financial gain, which undermines the integrity of the Ethereum blockchain. This paper proposes a computer program called SADA (Static and Dynamic Analyzer), a novel approach to smart contract vulnerability detection using multiple Large Language Model (LLM) agents to analyze and flag suspicious Solidity code for Ethereum smart contracts. SADA not only improves upon existing vulnerability detection methods but also paves the way for more secure smart contract development practices in the rapidly evolving blockchain ecosystem.
Keywords: blockchain; Ethereum; smart contracts; security; decentralized applications; Web3; cryptocurrency; large language models
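A minimal sketch of a SADA-style multi-agent review pass, assuming the OpenAI Python client; the agent roles, prompts, and model choice are invented from the abstract, and the paper's actual aggregation logic is not reproduced here.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Two assumed analyzer roles, mirroring the "static and dynamic" naming.
AGENT_PROMPTS = {
    "static":  "You are a static analyzer. Flag reentrancy, integer overflow, "
               "and unchecked external calls in this Solidity code.",
    "dynamic": "You are a dynamic analyzer. Reason about execution traces and "
               "flag state changes exploitable across transactions.",
}

def review(solidity_source: str) -> dict:
    """Collect each agent's findings for the given Solidity source."""
    findings = {}
    for name, system_prompt in AGENT_PROMPTS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[{"role": "system", "content": system_prompt},
                      {"role": "user", "content": solidity_source}],
        )
        findings[name] = resp.choices[0].message.content
    return findings
```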
6. Multilingual Text Summarization in Healthcare Using Pre-Trained Transformer-Based Language Models
Authors: Josua Käser, Thomas Nagy, Patrick Stirnemann, Thomas Hanne. Computers, Materials & Continua, 2025, No. 4, pp. 201–217.
We analyze the suitability of existing pre-trained transformer-based language models (PLMs) for abstractive text summarization of German technical healthcare texts. The study focuses on the multilingual capabilities of these models and their ability to perform abstractive text summarization in the healthcare field. The research hypothesis was that large language models could perform high-quality abstractive text summarization of German technical healthcare texts, even if the model is not specifically trained in that language. Through experiments, the research questions explore the performance of transformer language models in dealing with complex syntactic constructs, the difference in performance between models trained in English and German, and the impact of translating the source text to English before conducting the summarization. We evaluated four PLM approaches (GPT-3; a translation-based approach also utilizing GPT-3; a German-language model; and a domain-specific biomedical model). The evaluation considered informativeness, using three types of metrics based on Recall-Oriented Understudy for Gisting Evaluation (ROUGE), and the quality of results, which was manually evaluated on five aspects. The results show that text summarization models could be used in the German healthcare domain and that domain-independent language models achieved the best results. The study proves that text summarization models can simplify the search for pre-existing German knowledge in various domains.
Keywords: text summarization; pre-trained transformer-based language models; large language models; technical healthcare texts; natural language processing
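For readers unfamiliar with the informativeness metrics, the sketch below scores a candidate summary against a reference with the rouge_score package. The example sentences are invented, and the assumption that the three metric types are ROUGE-1/2/L is ours.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])

# Invented German reference/candidate pair for illustration.
reference = "Der Patient erhielt nach der Operation eine Antibiotikatherapie."
candidate = "Nach der Operation bekam der Patient Antibiotika."

for name, score in scorer.score(reference, candidate).items():
    print(f"{name}: recall={score.recall:.2f} f1={score.fmeasure:.2f}")
```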
7. A Dynamic Knowledge Base Updating Mechanism-Based Retrieval-Augmented Generation Framework for Intelligent Question-and-Answer Systems
Authors: Yu Li. Journal of Computer and Communications, 2025, No. 1, pp. 41–58.
In the context of power generation companies, vast amounts of specialized data and expert knowledge have been accumulated. However, challenges such as data silos and fragmented knowledge hinder the effective utilization of this information. This study proposes a novel framework for intelligent Question-and-Answer (Q&A) systems based on Retrieval-Augmented Generation (RAG) to address these issues. The system efficiently acquires domain-specific knowledge by leveraging external databases, including Relational Databases (RDBs) and graph databases, without additional fine-tuning of Large Language Models (LLMs). Crucially, the framework integrates a Dynamic Knowledge Base Updating Mechanism (DKBUM) and a Weighted Context-Aware Similarity (WCAS) method to enhance retrieval accuracy and mitigate inherent limitations of LLMs, such as hallucinations and lack of specialization. The proposed DKBUM dynamically adjusts knowledge weights within the database, ensuring that the most recent and relevant information is utilized, while WCAS refines the alignment between queries and knowledge items through enhanced context understanding. Experimental validation demonstrates that the system can generate timely, accurate, and context-sensitive responses, making it a robust solution for managing complex business logic in specialized industries.
Keywords: retrieval-augmented generation; question-and-answer; large language models; dynamic knowledge base updating mechanism; weighted context-aware similarity
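A minimal numpy sketch of the two named mechanisms, under our own illustrative weighting and update rules (the abstract does not give the exact formulas): DKBUM-style per-item weight updates, and WCAS-style retrieval that ranks items by weight-adjusted cosine similarity.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class KnowledgeBase:
    def __init__(self, embeddings, weights):
        self.embeddings = embeddings          # (n_items, dim) item vectors
        self.weights = weights                # per-item relevance/recency weight

    def retrieve(self, query_vec, k=3):
        """Rank items by weight-adjusted cosine similarity, return top-k indices."""
        scores = [w * cosine(e, query_vec)
                  for e, w in zip(self.embeddings, self.weights)]
        return np.argsort(scores)[::-1][:k]

    def reinforce(self, idx, lr=0.1):
        """Bump the weight of an item that proved useful; slightly decay the rest."""
        self.weights *= (1 - lr / 10)
        self.weights[idx] += lr

kb = KnowledgeBase(np.random.rand(5, 8), np.ones(5))
print(kb.retrieve(np.random.rand(8)))
```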
8. A Critical Review of Methods and Challenges in Large Language Models
Authors: Milad Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari. Computers, Materials & Continua, 2025, No. 2, pp. 1681–1698.
This critical review provides an in-depth analysis of Large Language Models (LLMs), encompassing their foundational principles, diverse applications, and advanced training methodologies. We critically examine the evolution from Recurrent Neural Networks (RNNs) to Transformer models, highlighting the significant advancements and innovations in LLM architectures. The review explores state-of-the-art techniques such as in-context learning and various fine-tuning approaches, with an emphasis on optimizing parameter efficiency. We also discuss methods for aligning LLMs with human preferences, including reinforcement learning frameworks and human feedback mechanisms. The emerging technique of retrieval-augmented generation, which integrates external knowledge into LLMs, is also evaluated. Additionally, we address the ethical considerations of deploying LLMs, stressing the importance of responsible and mindful application. By identifying current gaps and suggesting future research directions, this review provides a comprehensive and critical overview of the present state and potential advancements in LLMs. This work serves as an insightful guide for researchers and practitioners in artificial intelligence, offering a unified perspective on the strengths, limitations, and future prospects of LLMs.
Keywords: large language models; artificial intelligence; natural language processing; machine learning; generative artificial intelligence
9. Evaluating research quality with Large Language Models: An analysis of ChatGPT's effectiveness with different settings and inputs
Authors: Mike Thelwall. Journal of Data and Information Science, 2025, No. 1, pp. 7–25.
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
Keywords: ChatGPT; large language models (LLMs); scientometrics; research assessment
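A short sketch of the score post-processing described in the findings, with invented stand-in data: average the repeated ChatGPT scores per paper, then fit a linear regression mapping them onto the human scale.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented stand-ins for the paper's data: 51 papers, 30 score iterations each.
chatgpt_scores = np.random.default_rng(0).uniform(1, 4, size=(51, 30))
human_scores = np.random.default_rng(1).integers(1, 5, size=51)

mean_scores = chatgpt_scores.mean(axis=1).reshape(-1, 1)  # average over runs
model = LinearRegression().fit(mean_scores, human_scores)

# Convert a new paper's averaged ChatGPT score to the human scale.
print(model.predict([[2.7]]))
```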
10. Large language models for robotics: Opportunities, challenges, and perspectives
Authors: Jiaqi Wang, Enze Shi, Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Bao Ge, Shu Zhang. Journal of Automation and Intelligence, 2025, No. 1, pp. 52–64.
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
Keywords: large language models; robotics; generative AI; embodied intelligence
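A minimal sketch of the instruction-plus-image planning call such a framework builds on, using the OpenAI chat API as a stand-in for GPT-4V; the prompt wording, model name, and action-plan format are assumptions.

```python
import base64
from openai import OpenAI

client = OpenAI()

def plan(instruction: str, image_path: str) -> str:
    """Send a camera frame plus a natural-language instruction, get an action plan."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the GPT-4V-class model in the paper
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Instruction: {instruction}\n"
                         "Return a numbered action plan for the robot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```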
11. Smart Contract Vulnerability Detection Using Large Language Models and Graph Structural Analysis
Authors: Ra-Yeon Choi, Yeji Song, Minsoo Jang, Taekyung Kim, Jinhyun Ahn, Dong-Hyuk Im. Computers, Materials & Continua, 2025, No. 4, pp. 785–801.
Smart contracts are self-executing programs on blockchains that manage complex business logic with transparency and integrity. However, their immutability after deployment makes programming errors particularly critical, as such errors can be exploited to compromise blockchain security. Existing vulnerability detection methods often rely on fixed rules or target specific vulnerabilities, limiting their scalability and adaptability to diverse smart contract scenarios. Furthermore, natural language processing approaches for source code analysis frequently fail to capture program flow, which is essential for identifying structural vulnerabilities. To address these limitations, we propose a novel model that integrates textual and structural information for smart contract vulnerability detection. Our approach employs the CodeBERT NLP model for textual analysis, augmented with structural insights derived from control flow graphs created using the abstract syntax tree and opcode of smart contracts. Each graph node is embedded using Sent2Vec, and centrality analysis is applied to highlight critical paths and nodes within the code. The extracted features are normalized and combined into a prompt for a large language model to detect vulnerabilities effectively. Experimental results demonstrate the superiority of our model, achieving an accuracy of 86.70%, a recall of 84.87%, a precision of 85.24%, and an F1-score of 84.46%. These outcomes surpass existing methods, including CodeBERT alone (accuracy: 81.26%, F1-score: 79.84%) and CodeBERT combined with abstract syntax tree analysis (accuracy: 83.48%, F1-score: 79.65%). The findings underscore the effectiveness of incorporating graph structural information alongside text-based analysis, offering improved scalability and performance in detecting diverse vulnerabilities.
Keywords: blockchain; smart contract; vulnerability detection; large language model
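A small sketch of the structural half of such a pipeline, assuming networkx and a toy control flow graph: a centrality analysis picks out critical nodes, whose names are folded into an LLM prompt. The node labels are invented, and the Sent2Vec node embeddings used in the paper are omitted for brevity.

```python
import networkx as nx

# Toy control flow graph of a withdraw-style function.
cfg = nx.DiGraph()
cfg.add_edges_from([
    ("entry", "check_balance"),
    ("check_balance", "external_call"),   # call before state update: suspicious
    ("external_call", "update_balance"),
    ("update_balance", "exit"),
])

centrality = nx.betweenness_centrality(cfg)
critical = sorted(centrality, key=centrality.get, reverse=True)[:2]

prompt = (
    "Critical CFG nodes (by betweenness centrality): "
    + ", ".join(critical)
    + ". Given this control flow, does the contract update state only after "
      "the external call (potential reentrancy)?"
)
print(prompt)
```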
12. Learning Temporal User Features for Repost Prediction with Large Language Models
Authors: Wu-Jiu Sun, Xiao Fan Liu. Computers, Materials & Continua, 2025, No. 3, pp. 4117–4136.
Predicting information dissemination on social media, specifically users' reposting behavior, is crucial for applications such as advertising campaigns. Conventional methods use deep neural networks to make predictions based on features related to user topic interests and social preferences. However, these models frequently fail to account for the difficulties arising from limited training data and model size, which restrict their capacity to learn and capture the intricate patterns within microblogging data. To overcome this limitation, we introduce a novel model, Adapt pre-trained Large Language model for Reposting Prediction (ALL-RP), which incorporates two key steps: (1) extracting features from post content and social interactions using a large language model with extensive parameters trained on a vast corpus, and (2) performing semantic and temporal adaptation to transfer the large language model's knowledge of natural language, vision, and graph structures to reposting prediction tasks. Specifically, the temporal adapter in the ALL-RP model captures multi-dimensional temporal information from evolving patterns of user topic interests and social preferences, thereby providing a more realistic reflection of user attributes. Additionally, to enhance the robustness of feature modeling, we introduce a variant of the temporal adapter that implements multiple temporal adaptations in parallel while maintaining structural simplicity. Experimental results on real-world datasets demonstrate that the ALL-RP model surpasses state-of-the-art models in predicting both individual user reposting behavior and group sharing behavior, with performance gains of 2.81% and 4.29%, respectively.
Keywords: reposting prediction; large language model; semantic adaptation; temporal adaptation
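A minimal PyTorch sketch of the parallel temporal adapter idea: one small bottleneck adapter per time window, applied in parallel over frozen-backbone features and pooled. The dimensions, pooling rule, and residual wiring are assumptions; ALL-RP's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class ParallelTemporalAdapter(nn.Module):
    def __init__(self, hidden_dim=768, bottleneck=64, n_windows=3):
        super().__init__()
        # One bottleneck adapter per temporal window.
        self.adapters = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, bottleneck),
                          nn.GELU(),
                          nn.Linear(bottleneck, hidden_dim))
            for _ in range(n_windows)
        ])

    def forward(self, hidden, window_states):
        # hidden: (batch, hidden_dim) frozen-LLM features of the post
        # window_states: list of (batch, hidden_dim) user features per window
        out = [ad(hidden + w) for ad, w in zip(self.adapters, window_states)]
        return hidden + torch.stack(out).mean(dim=0)   # residual connection

x = torch.randn(4, 768)
windows = [torch.randn(4, 768) for _ in range(3)]
print(ParallelTemporalAdapter()(x, windows).shape)     # torch.Size([4, 768])
```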
13. On large language models safety, security, and privacy: A survey
Authors: Ran Zhang, Hong-Wei Li, Xin-Yuan Qian, Wen-Bo Jiang, Han-Xiao Chen. Journal of Electronic Science and Technology, 2025, No. 1, pp. 1–21.
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Keywords: large language models; privacy issues; safety issues; security issues
14. TIPS: Tailored Information Extraction in Public Security Using Domain-Enhanced Large Language Model
Authors: Yue Liu, Qinglang Guo, Chunyao Yang, Yong Liao. Computers, Materials & Continua, 2025, No. 5, pp. 2555–2572.
Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, utilizing models to extract information from police incident data poses a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these issues, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Keywords: public security; information extraction; large language model; prompt engineering
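A sketch of the template-based data synthesis step, assuming the OpenAI client as a stand-in generator (the paper itself fine-tunes open-source models such as ChatGLM-4-9B); the template, slot names, and prompt wording are invented for illustration.

```python
import json
from openai import OpenAI

client = OpenAI()  # placeholder generator; assumes OPENAI_API_KEY is set

# Invented de-identified incident template with slots for the LLM to fill.
TEMPLATE = ("At {time}, {reporter} reported that {suspect} was involved in "
            "{incident_type} near {location}.")

def synthesize(n: int = 5) -> list[dict]:
    """Generate n fictional incident records as slot-value dictionaries."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (f"Fill this template {n} times with plausible, "
                        "entirely fictional values. Return a JSON object with "
                        "key 'examples' holding a list of objects with keys "
                        "time, reporter, suspect, incident_type, location.\n"
                        + TEMPLATE),
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)["examples"]
```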
15. Large Language Models in Software Engineering Education: A Preliminary Study on Software Requirements Engineering Courses
Authors: Feng Chen, Shaomin Zhu, Xin Liu, Ying Qian. 计算机教育 (Computer Education), 2025, No. 3, pp. 24–33.
The advent of large language models (LLMs) has made knowledge acquisition and content creation increasingly easier and cheaper, which in turn redefines learning and urges transformation in software engineering education. To do so, there is a need to understand the impact of LLMs on software engineering education. In this paper, we conducted a preliminary case study on three software requirements engineering classes where students are allowed to use LLMs to assist in their projects. Based on the students' experience, performance, and feedback from a survey conducted at the end of the courses, we characterized the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
Keywords: large language models; software engineering; software requirements engineering; education
16. Potential role of large language models and personalized medicine to innovate cardiac rehabilitation
Authors: Rishith Mishra, Hersh Patel, Aleena Jamal, Som Singh. World Journal of Clinical Cases, 2025, No. 19, pp. 1–4.
Cardiac rehabilitation is a crucial multidisciplinary approach to improve patient outcomes. There is a growing body of evidence suggesting that these programs contribute towards reducing cardiovascular mortality and recurrence. Despite this, cardiac rehabilitation is underutilized, and adherence to these programs has been a demonstrated barrier to achieving these outcomes. As a result, there is a growing focus on innovating these programs, especially from the standpoint of digital health and personalized medicine. This editorial discusses the possible roles of large language models, such as those underlying ChatGPT, in further personalizing cardiac rehabilitation programs by simplifying medical jargon and employing motivational interviewing techniques, thus boosting patient engagement and adherence. However, these possibilities must be further investigated in the clinical literature. Likewise, the integration of large language models in cardiac rehabilitation will be challenging in its nascent stages, particularly in ensuring accurate and ethical information delivery.
Keywords: cardiac rehabilitation; large language models; patient education; motivational interviewing; artificial intelligence
17. Quantitative Assessment of Generative Large Language Models on Design Pattern Application
Authors: Dae-Kyoo Kim. Computers, Materials & Continua, 2025, No. 3, pp. 3843–3872.
The recent introduction of generative large language models (LLMs) like ChatGPT and CoPilot has demonstrated significant promise in software development. They assist with a variety of tasks including code generation, modeling, bug fixing, and testing, leading to enhanced efficiency and productivity. Although initial uses of these LLMs have had a positive effect on software development, their potential influence on the application of design patterns remains unexplored. This study introduces a method to quantify LLMs' ability to implement design patterns, using the Role-Based Metamodeling Language (RBML) for a rigorous specification of a pattern's problem, solution, and transformation rules. The method evaluates the pattern applicability of a software application using the pattern's problem specification. If deemed applicable, the application is input to the LLM for pattern application. The resulting application is assessed for conformance to the pattern's solution specification and for completeness against the pattern's transformation rules. Evaluating the method with ChatGPT 4 across three applications reveals ChatGPT's high proficiency, achieving averages of 98% in conformance and 87% in completeness, thereby demonstrating the effectiveness of the method. Using RBML, this study confirms that LLMs, specifically ChatGPT 4, have great potential for effective and efficient application of design patterns with high conformance and completeness. This opens avenues for further integrating LLMs into complex software engineering processes.
Keywords: design patterns; large language models; pattern application; pattern-based refactoring; quantitative assessment
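A small sketch of how the two reported scores can be computed once the specifications are in hand: conformance counts the solution-specification elements the transformed application realizes, and completeness counts the transformation rules actually applied. The element sets and rule counts are invented; RBML itself is not modeled here.

```python
# Invented Observer-pattern example: elements required by the solution spec
# versus elements found in the LLM-transformed application.
required_solution_elements = {"Subject", "Observer", "attach", "detach", "notify"}
found_in_application = {"Subject", "Observer", "attach", "notify"}

transformation_rules = 10   # rules defined in the pattern's RBML spec
rules_applied = 9           # rules the LLM's refactoring actually performed

conformance = (len(required_solution_elements & found_in_application)
               / len(required_solution_elements))
completeness = rules_applied / transformation_rules
print(f"conformance={conformance:.0%} completeness={completeness:.0%}")
# -> conformance=80% completeness=90%
```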
18. Causal Representation Enhances Cross-Domain Named Entity Recognition in Large Language Models
Authors: Jiahao Wu, Jinzhong Xu, Xiaoming Liu, Guan Yang, Jie Liu. Computers, Materials & Continua, 2025, No. 5, pp. 2809–2828.
In cross-domain named entity recognition, large language models face a scarcity of labeled data in specific domains; entity bias arising from variation in entity information across domains makes them prone to spurious correlations when dealing with specific domains and entities. To solve this problem, this paper proposes a cross-domain named entity recognition method based on causal graph structure enhancement, which captures cross-domain-invariant causal structural representations between the feature representations of text sequences and annotation sequences by establishing a causal learning and intervention module. This improves the large language model's utilization of causal structural features in the target domain and thus effectively alleviates the false entity bias triggered by spurious correlations. Meanwhile, a semantic feature fusion module effectively combines the semantic information of the source and target domains. The results show improvements of 2.47% and 4.12% over the benchmark model in the political and medical domains, respectively, and excellent performance in small-sample scenarios, demonstrating the effectiveness of causal graph structure enhancement in improving cross-domain entity recognition accuracy and reducing spurious correlations.
Keywords: large language model; entity bias; causal graph structure
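A minimal PyTorch sketch of a gated semantic feature fusion module of the kind the abstract names: source- and target-domain token features are combined through a learned gate. The gating design is our assumption, and the causal learning and intervention module is not reproduced here.

```python
import torch
import torch.nn as nn

class SemanticFusion(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)  # learned per-dimension mixing gate

    def forward(self, src_feats, tgt_feats):
        # src_feats, tgt_feats: (batch, seq_len, dim) domain feature tensors
        g = torch.sigmoid(self.gate(torch.cat([src_feats, tgt_feats], dim=-1)))
        return g * src_feats + (1 - g) * tgt_feats

fused = SemanticFusion()(torch.randn(2, 16, 768), torch.randn(2, 16, 768))
print(fused.shape)  # torch.Size([2, 16, 768])
```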
19. Amalgamation of Classical and Large Language Models for Duplicate Bug Detection: A Comparative Study
Authors: Sai Venkata Akhil Ammu, Sukhjit Singh Sehra, Sumeet Kaur Sehra, Jaiteg Singh. Computers, Materials & Continua, 2025, No. 4, pp. 435–453.
Duplicate bug reporting is a critical problem in the mining of software repositories. Duplicate bug reports can lead to redundant effort, wasted resources, and delayed software releases; thus, their accurate identification is essential for streamlining the bug triage process. Several researchers have explored classical information retrieval, natural language processing, text and data mining, and machine learning approaches. The emergence of large language models (LLMs) (ChatGPT and Hugging Face models) has presented a new line of models for semantic textual similarity (STS). Although LLMs have shown remarkable advancements, there remains a need for longitudinal studies to determine whether performance improvements are due to the scale of the models or the unique embeddings they produce compared to classical encoding models. This study systematically investigates this issue by comparing classical word embedding techniques against LLM-based embeddings for duplicate bug detection. We propose an amalgamation of models to detect duplicate bug reports using textual and non-textual information about bug reports. The empirical evaluation was performed on open-source datasets and assessed with established metrics: mean reciprocal rank (MRR), mean average precision (MAP), and recall rate. The experimental results show that combined LLMs can outperform (recall-rate@k of 68%–74%) other individual models for duplicate bug detection. These findings highlight the effectiveness of amalgamating multiple techniques in improving duplicate bug report detection accuracy.
Keywords: duplicate bug detection; large language models; information retrieval
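For reference, the ranking metrics reported above can be computed as in the sketch below; each query is a new bug report, `ranked` is the candidate list a model returns, and `truth` is the known duplicate master report. The mini result lists are invented.

```python
def mrr(results: list[tuple[list[str], str]]) -> float:
    """Mean reciprocal rank over (ranked_candidates, true_duplicate) pairs."""
    total = 0.0
    for ranked, truth in results:
        if truth in ranked:
            total += 1.0 / (ranked.index(truth) + 1)
    return total / len(results)

def recall_at_k(results: list[tuple[list[str], str]], k: int) -> float:
    """Fraction of queries whose true duplicate appears in the top k."""
    hits = sum(truth in ranked[:k] for ranked, truth in results)
    return hits / len(results)

results = [(["b12", "b7", "b3"], "b7"), (["b5", "b9", "b2"], "b2")]
print(mrr(results))             # (1/2 + 1/3) / 2 ~= 0.417
print(recall_at_k(results, 2))  # 0.5
```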
20. Improving Machine Translation Formality with Large Language Models
Authors: Murun Yang, Fuxue Li. Computers, Materials & Continua, 2025, No. 2, pp. 2061–2075.
Preserving formal style in neural machine translation (NMT) is essential, yet often overlooked as an optimization objective of the training process. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose INMTF, a method that improves NMT formality with large language models (LLMs) by combining the style transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. INMTF encompasses two approaches. The first involves a revision approach, using an LLM to revise the NMT-generated translation and ensure a formal translation style. The second employs an LLM as a reward model for scoring translation formality and then uses reinforcement learning algorithms to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in terms of translation formality and translation quality, with an improvement of +9.19 style accuracy points in the German-to-English task and +2.16 COMET score in the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in various real-world translation scenarios.
Keywords: neural machine translation; formality; large language model; text style transfer; style evaluation; reinforcement learning
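A minimal sketch of the reward-model half of the second approach, assuming the OpenAI client as the scoring LLM; the prompt wording is invented, and the policy-gradient update itself is only indicated in a comment.

```python
from openai import OpenAI

client = OpenAI()

def formality_reward(translation: str) -> float:
    """Ask an LLM to score the formality of an NMT hypothesis in [0, 1]."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder reward model
        messages=[{
            "role": "user",
            "content": "Rate the formality of this English sentence from 0 "
                       "(very informal) to 1 (very formal). Reply with only "
                       f"the number.\n\n{translation}",
        }],
    )
    return float(resp.choices[0].message.content.strip())

# In training, the NMT model samples a translation, receives this reward,
# and is updated with e.g. REINFORCE: loss = -reward * log_prob(sample).
```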