Journal Articles
102,951 articles found
1. Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 307-325 (19 pages)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on the specific values chosen than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, shrinking the adapter of a larger layer preserves fine-tuning accuracy better than shrinking that of a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: large-scale language model; parameter-efficient fine-tuning; parameter quantization; key variables; trainable parameters; experimental analysis
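Finding (1) above has a simple arithmetic backdrop: a LoRA adapter on a weight matrix of shape (d_in, d_out) adds rank * (d_in + d_out) trainable parameters, so cutting the rank of a wide MLP projection frees far more parameters than the same cut on a self-attention projection. A minimal sketch (the layer shapes below are illustrative choices, not taken from the paper):

```python
# LoRA adds two low-rank factors, (d_in x r) and (r x d_out), per adapted layer.
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    return rank * (d_in + d_out)

# Hypothetical Transformer dimensions: an MLP up-projection is typically ~4x
# wider than a self-attention projection of the same model.
attn_r16 = lora_params(4096, 4096, rank=16)   # attention projection, rank 16
mlp_r16 = lora_params(4096, 16384, rank=16)   # MLP up-projection, rank 16
attn_r8 = lora_params(4096, 4096, rank=8)     # same layers with rank halved
mlp_r8 = lora_params(4096, 16384, rank=8)

# Halving the rank frees 2.5x more parameters on the MLP layer.
print(attn_r16 - attn_r8)  # 65536 parameters saved on attention
print(mlp_r16 - mlp_r8)    # 163840 parameters saved on the MLP
```

The study's observation that larger layers tolerate such reductions better suggests spending a fixed rank budget on the small, sensitive layers first.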
2. Robust Detection and Analysis of Smart Contract Vulnerabilities with Large Language Model Agents
Authors: Nishank P. Kuppa, Vijay K. Madisetti. Journal of Information Security, 2025, Issue 1, pp. 197-226 (30 pages)
Smart contracts on the Ethereum blockchain continue to revolutionize decentralized applications (dApps) by allowing for self-executing agreements. However, bad actors have continuously found ways to exploit smart contracts for personal financial gain, which undermines the integrity of the Ethereum blockchain. This paper proposes a computer program called SADA (Static and Dynamic Analyzer), a novel approach to smart contract vulnerability detection using multiple Large Language Model (LLM) agents to analyze and flag suspicious Solidity code for Ethereum smart contracts. SADA not only improves upon existing vulnerability detection methods but also paves the way for more secure smart contract development practices in the rapidly evolving blockchain ecosystem.
Keywords: blockchain; Ethereum; smart contracts; security; decentralized applications; Web3; cryptocurrency; large language models
3. A Critical Review of Methods and Challenges in Large Language Models
Authors: Milad Moradi, Ke Yan, David Colwell, Matthias Samwald, Rhona Asgari. Computers, Materials & Continua, 2025, Issue 2, pp. 1681-1698 (18 pages)
This critical review provides an in-depth analysis of Large Language Models (LLMs), encompassing their foundational principles, diverse applications, and advanced training methodologies. We critically examine the evolution from Recurrent Neural Networks (RNNs) to Transformer models, highlighting the significant advancements and innovations in LLM architectures. The review explores state-of-the-art techniques such as in-context learning and various fine-tuning approaches, with an emphasis on optimizing parameter efficiency. We also discuss methods for aligning LLMs with human preferences, including reinforcement learning frameworks and human feedback mechanisms. The emerging technique of retrieval-augmented generation, which integrates external knowledge into LLMs, is also evaluated. Additionally, we address the ethical considerations of deploying LLMs, stressing the importance of responsible and mindful application. By identifying current gaps and suggesting future research directions, this review provides a comprehensive and critical overview of the present state and potential advancements in LLMs. This work serves as an insightful guide for researchers and practitioners in artificial intelligence, offering a unified perspective on the strengths, limitations, and future prospects of LLMs.
Keywords: large language models; artificial intelligence; natural language processing; machine learning; generative artificial intelligence
4. Towards efficient and effective unlearning of large language models for recommendation
Authors: Hangyu Wang, Jianghao Lin, Bo Chen, Yang Yang, Ruiming Tang, Weinan Zhang, Yong Yu. Frontiers of Computer Science, 2025, Issue 3, pp. 119-121 (3 pages)
Large Language Models (LLMs) possess massive parameters and are trained on vast datasets, demonstrating exceptional proficiency in various tasks. The remarkable advancements in LLMs also inspire the exploration of leveraging LLMs as recommenders (LLMRec), whose effectiveness stems from the extensive open-world knowledge and reasoning ability of LLMs [1]. LLMRec obtains its recommendation ability through instruction tuning on user interaction data. But in many cases, it is also crucial for LLMRec to forget specific user data, which is referred to as recommendation unlearning [2], as shown in Fig. 1.
Keywords: large language models; user interaction data; instruction tuning; recommendation unlearning
5. Workplace English Language Needs for Medical Students in China Learning and Using English as Non-Native Speakers
Authors: Haiying Liang, Michael Reiss, Talia Isaacs. Chinese Journal of Applied Linguistics, 2025, Issue 1, pp. 114-135, 156 (23 pages)
This mixed-methods study presents a needs analysis investigating the workplace English language needs of medical students in China who are learning and using English as non-native speakers, the circumstances in which the various language skills are required, and stakeholders' perceived workplace preparedness in the light of language-related instructional provision during medical training. A leading university in China was chosen as the case for the study. Altogether, 294 online questionnaires were collected from undergraduate medical students, graduate medical students, and recent graduates working as physicians, and 33 semi-structured individual interviews were conducted with undergraduate medical students, graduate medical students, recent graduates working as physicians, medical teachers, English for Medical Purposes (EMP) teachers, program leaders, and English-speaking patients. Results showed that in addition to physicians experiencing pressure to publish scientific articles internationally, participants attached greater importance to physicians' oral English communication ability, especially in undertaking clinical consultations in English, working with medical interpreters, or acting as ad hoc interpreters. The participants also reported a lack of relevant EMP courses or training available at this university. Given these communicative events that physicians face in China, EMP courses need to include training in these specific areas.
Keywords: English for medical purposes; health communication; language for specific purposes; medical education; mixed methods; needs analysis; second language
6. The Teaching of the Chinese Language: A True Asset in Consolidating Bilateral Relations Between Congo-Brazzaville and China
Author: Grace Boukete. 《法语国家与地区研究(中法文)》, 2025, Issue 2, pp. 16-28 (13 pages)
In contemporary society, language transcends its function as a mere tool for communication. It is intrinsically linked to a people's culture, values, and worldview. This observation is particularly salient in our globalized world, where linguistic considerations have assumed strategic importance. Recognizing this potential, numerous states are seeking to bolster their international relations by prioritizing the teaching of their languages. It is within this context that we focus our attention on the teaching of Chinese in Congo-Brazzaville. Our objective is to investigate the significance of this language in consolidating bilateral relations between China and Congo. To achieve this, we analyze the current state of Chinese language instruction within the Congolese educational system, emphasize the implications of its dissemination, and propose potential avenues for improvement. Our approach is qualitative, drawing upon observations, interviews, and a review of existing literature. We aim to understand how the teaching of Chinese can contribute to the economic, cultural, and social development of Congo, while simultaneously strengthening ties with China.
Keywords: Congo-Brazzaville; Chinese language instruction; bilateral relations
7. Evaluating research quality with Large Language Models: An analysis of ChatGPT's effectiveness with different settings and inputs
Author: Mike Thelwall. Journal of Data and Information Science, 2025, Issue 1, pp. 7-25 (19 pages)
Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author; it only includes one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into human-scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
Keywords: ChatGPT; large language models; LLMs; scientometrics; research assessment
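Two of the steps above, correlating averaged model scores with human scores and linearly rescaling them onto the human scale, fit in a few lines of plain Python. The scores below are invented for illustration; the study averages 30 iterations over 51 papers:

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def linear_fit(x, y):
    """Least-squares slope and intercept mapping model scores to human scores."""
    mx, my = mean(x), mean(y)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Hypothetical per-paper mean ChatGPT scores vs. human quality scores.
model_scores = [2.1, 2.8, 3.4, 1.9, 3.0]
human_scores = [2.0, 3.0, 4.0, 2.0, 3.0]
r = pearson(model_scores, human_scores)
slope, intercept = linear_fit(model_scores, human_scores)
print(round(r, 2))  # correlation of averaged scores with human scores
```

The fitted slope and intercept are then applied to new model scores to express them on the human scale, the conversion the abstract reports as 31% more accurate than guessing.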
8. Large language models for robotics: Opportunities, challenges, and perspectives
Authors: Jiaqi Wang, Enze Shi, Huawen Hu, Chong Ma, Yiheng Liu, Xuhui Wang, Yincheng Yao, Xuan Liu, Bao Ge, Shu Zhang. Journal of Automation and Intelligence, 2025, Issue 1, pp. 52-64 (13 pages)
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
Keywords: large language models; robotics; generative AI; embodied intelligence
9. The Impact of English Language Anxiety on the Cross-Cultural Adaptability of Chinese Overseas Students in Malaysia
Authors: Yang Xiaohan, Liu Yu. Contemporary Social Sciences, 2025, Issue 1, pp. 83-101 (19 pages)
With the deepening of cross-cultural educational cooperation between China and Malaysia, the cross-cultural challenges that Chinese overseas students face in Malaysia due to language and cultural differences have become increasingly prominent. Focusing on Chinese graduate students at a public university in Malaysia where English is the medium of instruction, this study employs a scale survey method in conjunction with IBM SPSS 26.0 and SmartPLS 4.0 for data analysis to quantitatively explore the level of language anxiety and its relationship with cross-cultural adaptability and learning motivation. The results indicate that most Chinese graduate students experience notable language anxiety, which is significantly negatively correlated with cross-cultural adaptability, especially academic adaptability, but is not related to learning motivation. Furthermore, the study reveals the complex influencing mechanism of language anxiety within multicultural educational environments and offers suggestions for improvement tailored to Malaysia's unique educational context. These include utilizing technological tools for language interventions, optimizing classroom teaching strategies, enhancing language learning motivation through external incentives, strengthening training for cross-cultural adaptation skills, and promoting deeper cross-cultural communication. This study provides theoretical support and practical references for alleviating language anxiety and enhancing the cross-cultural adaptability of Chinese overseas students.
Keywords: language anxiety; cross-cultural adaptability; learning motivation; Malaysia; overseas students
10. TIPS: Tailored Information Extraction in Public Security Using Domain-Enhanced Large Language Model
Authors: Yue Liu, Qinglang Guo, Chunyao Yang, Yong Liao. Computers, Materials & Continua, 2025, Issue 5, pp. 2555-2572 (18 pages)
Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, utilizing models for extracting information from police incident data poses a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these challenges, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
Keywords: public security; information extraction; large language model; prompt engineering
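The template-and-slot generation step can be illustrated with ordinary string formatting. The template and slot values here are invented stand-ins, not items from the de-identified police data, and in TIPS an LLM rather than a simple cross-product populates the slots:

```python
import itertools

# Hypothetical incident template; slot values would come from expert-curated
# lists or LLM generation in the real pipeline.
template = "At {time}, {person} reported a {incident} near {location}."
slots = {
    "time": ["08:15", "22:40"],
    "person": ["a resident", "a shop owner"],
    "incident": ["theft", "dispute"],
    "location": ["the station", "the market"],
}

# Enumerate slot combinations to densify and diversify the training data.
records = [
    template.format(time=t, person=p, incident=i, location=l)
    for t, p, i, l in itertools.product(*slots.values())
]
print(len(records))  # 2 * 2 * 2 * 2 = 16 simulated records
```

Even this toy cross-product shows why templating raises data density: a handful of slot values multiplies into many distinct training records.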
11. Learning Temporal User Features for Repost Prediction with Large Language Models
Authors: Wu-Jiu Sun, Xiao Fan Liu. Computers, Materials & Continua, 2025, Issue 3, pp. 4117-4136 (20 pages)
Predicting information dissemination on social media, specifically users' reposting behavior, is crucial for applications such as advertising campaigns. Conventional methods use deep neural networks to make predictions based on features related to user topic interests and social preferences. However, these models frequently fail to account for the difficulties arising from limited training data and model size, which restrict their capacity to learn and capture the intricate patterns within microblogging data. To overcome this limitation, we introduce a novel model, Adapt pre-trained Large Language model for Reposting Prediction (ALL-RP), which incorporates two key steps: (1) extracting features from post content and social interactions using a large language model with extensive parameters trained on a vast corpus, and (2) performing semantic and temporal adaptation to transfer the large language model's knowledge of natural language, vision, and graph structures to reposting prediction tasks. Specifically, the temporal adapter in the ALL-RP model captures multi-dimensional temporal information from evolving patterns of user topic interests and social preferences, thereby providing a more realistic reflection of user attributes. Additionally, to enhance the robustness of feature modeling, we introduce a variant of the temporal adapter that implements multiple temporal adaptations in parallel while maintaining structural simplicity. Experimental results on real-world datasets demonstrate that the ALL-RP model surpasses state-of-the-art models in predicting both individual user reposting behavior and group sharing behavior, with performance gains of 2.81% and 4.29%, respectively.
Keywords: reposting prediction; large language model; semantic adaptation; temporal adaptation
12. Large Language Models in Software Engineering Education: A Preliminary Study on Software Requirements Engineering Courses
Authors: Feng Chen, Shaomin Zhu, Xin Liu, Ying Qian. 《计算机教育》, 2025, Issue 3, pp. 24-33 (10 pages)
The advent of large language models (LLMs) has made knowledge acquisition and content creation increasingly easier and cheaper, which in turn redefines learning and urges transformation in software engineering education. To that end, there is a need to understand the impact of LLMs on software engineering education. In this paper, we conducted a preliminary case study on three software requirements engineering classes where students were allowed to use LLMs to assist in their projects. Based on the students' experience, performance, and feedback from a survey conducted at the end of the courses, we characterize the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
Keywords: large language models; software engineering; software requirements engineering; education
13. Causal Representation Enhances Cross-Domain Named Entity Recognition in Large Language Models
Authors: Jiahao Wu, Jinzhong Xu, Xiaoming Liu, Guan Yang, Jie Liu. Computers, Materials & Continua, 2025, Issue 5, pp. 2809-2828 (20 pages)
In cross-domain named entity recognition, large language models face a scarcity of labeled data in specific domains, and entity bias arising from the variation of entity information between domains makes them prone to spurious correlations when dealing with specific domains and entities. To solve this problem, this paper proposes a cross-domain named entity recognition method based on causal graph structure enhancement, which captures cross-domain invariant causal structural representations between the feature representations of text sequences and annotation sequences by establishing a causal learning and intervention module, so as to improve the utilization of causal structural features by large language models in target domains and thus effectively alleviate the entity bias triggered by the spurious correlation problem. Meanwhile, a semantic feature fusion module effectively combines the semantic information of the source and target domains. The results show improvements of 2.47% and 4.12% in the political and medical domains, respectively, compared with the benchmark model, and excellent performance in small-sample scenarios, which proves the effectiveness of causal graph structure enhancement in improving the accuracy of cross-domain entity recognition and reducing spurious correlations.
Keywords: large language model; entity bias; causal graph structure
14. Developing Language Assessment Literacy of Pre-Service English Teachers: Frameworks and Cultivation Strategies
Author: Jie Cao. Journal of Contemporary Educational Research, 2025, Issue 1, pp. 1-8 (8 pages)
Assessment is a crucial aspect of the teaching process for teachers. Teachers' assessment literacy is closely related to students' learning outcomes. The language assessment literacy of foreign language teachers is a significant component of both teachers' professional development and students' learning, and it has become a research hotspot in the field of domestic language testing. Based on clarifying the theoretical framework of language assessment literacy, this paper proposes the main cultivation paths for pre-service English teachers' language assessment literacy, aiming to provide inspiration and references for the cultivation, reform, and development of teachers in basic foreign language education.
Keywords: pre-service English teachers; language assessment literacy; cultivation strategies
15. Assessing the possibility of using large language models in ocular surface diseases
Authors: Qian Ling, Zi-Song Xu, Yan-Mei Zeng, Qi Hong, Xian-Zhe Qian, Jin-Yu Hu, Chong-Gang Pei, Hong Wei, Jie Zou, Cheng Chen, Xiao-Yu Wang, Xu Chen, Zhen-Kai Wu, Yi Shao. International Journal of Ophthalmology (English edition), 2025, Issue 1, pp. 1-8 (8 pages)
AIM: To assess the possibility of using different large language models (LLMs) in ocular surface diseases by selecting five LLMs and testing their accuracy in answering specialized questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM 2, and SenseNova. METHODS: A group of experienced ophthalmology professors was asked to develop a 100-question single-choice examination on ocular surface diseases designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions. The exam includes questions on the following topics: keratitis (20 questions); keratoconus, keratomalacia, corneal dystrophy, corneal degeneration, erosive corneal ulcers, and corneal lesions associated with systemic diseases (20 questions); conjunctivitis (20 questions); trachoma, pterygium, and conjunctival tumor diseases (20 questions); and dry eye disease (20 questions). The total score of each LLM was then calculated, and their mean scores, mean correlations, variances, and confidence were compared. RESULTS: GPT-4 exhibited the highest performance among the LLMs. Comparing the average scores of the LLM group with the four human groups (chief physician, attending physician, regular trainee, and graduate student), it was found that except for ChatGPT-4, the total score of each of the other LLMs was lower than that of the graduate student group, which had the lowest score among the human groups. Both ChatGPT-4 and PaLM 2 were more likely to give exact and correct answers, with very little chance of an incorrect answer. ChatGPT-4 showed higher credibility when answering questions, with a success rate of 59%, but gave the wrong answer 28% of the time. CONCLUSION: The GPT-4 model exhibits excellent performance in both answer relevance and confidence. PaLM 2 shows a positive correlation (up to 0.8) in terms of answer accuracy during the exam. In terms of answer confidence, PaLM 2 is second only to GPT-4 and surpasses Claude 2, SenseNova, and GPT-3.5. Even though ocular surface disease is a highly specialized discipline, GPT-4 still exhibits superior performance, suggesting that its potential and ability to be applied in this field are enormous, perhaps with the potential to be a valuable resource for medical students and clinicians in the future.
Keywords: ChatGPT-4.0; ChatGPT-3.5; large language models; ocular surface diseases
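The comparison above boils down to grading single-choice answer sheets against a key and comparing mean scores. A toy sketch with a fabricated 10-question key (the real exam has 100 questions, and these answers are not the models' actual responses):

```python
def grade(answers: dict, key: dict) -> float:
    """Percentage score of a single-choice answer sheet against the key."""
    correct = sum(1 for q, a in answers.items() if key.get(q) == a)
    return 100.0 * correct / len(key)

key = {q: "A" for q in range(1, 11)}                           # fabricated key
model_a = {q: ("A" if q <= 8 else "B") for q in range(1, 11)}  # 8/10 correct
model_b = {q: ("A" if q <= 6 else "C") for q in range(1, 11)}  # 6/10 correct
print(grade(model_a, key), grade(model_b, key))  # 80.0 60.0
```

The study's per-group means, variances, and correlations are then ordinary statistics over such per-participant scores.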
16. Potential role of large language models and personalized medicine to innovate cardiac rehabilitation
Authors: Rishith Mishra, Hersh Patel, Aleena Jamal, Som Singh. World Journal of Clinical Cases, 2025, Issue 19, pp. 1-4 (4 pages)
Cardiac rehabilitation is a crucial multidisciplinary approach to improve patient outcomes. There is a growing body of evidence that suggests these programs contribute towards reducing cardiovascular mortality and recurrence. Despite this, cardiac rehabilitation is underutilized, and adherence to these programs has been a demonstrated barrier to achieving these outcomes. As a result, there is a growing focus on innovating these programs, especially from the standpoint of digital health and personalized medicine. This editorial discusses the possible roles of large language models, such as those underlying ChatGPT, in further personalizing cardiac rehabilitation programs through simplifying medical jargon and employing motivational interviewing techniques, thus boosting patient engagement and adherence. However, these possibilities must be further investigated in the clinical literature. Likewise, the integration of large language models in cardiac rehabilitation will be challenging in its nascent stages to ensure accurate and ethical information delivery.
Keywords: cardiac rehabilitation; large language models; patient education; motivational interviewing; artificial intelligence
Quantitative Assessment of Generative Large Language Models on Design Pattern Application
17
作者 Dae-Kyoo Kim 《Computers, Materials & Continua》 2025年第3期3843-3872,共30页
Design patterns offer reusable solutions for common software issues,enhancing quality.The advent of generative large language models(LLMs)marks progress in software development,but their efficacy in applying design pa... Design patterns offer reusable solutions for common software issues,enhancing quality.The advent of generative large language models(LLMs)marks progress in software development,but their efficacy in applying design patterns is not fully assessed.The recent introduction of generative large language models(LLMs)like ChatGPT and CoPilot has demonstrated significant promise in software development.They assist with a variety of tasks including code generation,modeling,bug fixing,and testing,leading to enhanced efficiency and productivity.Although initial uses of these LLMs have had a positive effect on software development,their potential influence on the application of design patterns remains unexplored.This study introduces a method to quantify LLMs’ability to implement design patterns,using Role-Based Metamodeling Language(RBML)for a rigorous specification of the pattern’s problem,solution,and transformation rules.The method evaluates the pattern applicability of a software application using the pattern’s problem specification.If deemed applicable,the application is input to the LLM for pattern application.The resulting application is assessed for conformance to the pattern’s solution specification and for completeness against the pattern’s transformation rules.Evaluating the method with ChatGPT 4 across three applications reveals ChatGPT’s high proficiency,achieving averages of 98%in conformance and 87%in completeness,thereby demonstrating the effectiveness of the method.Using RBML,this study confirms that LLMs,specifically ChatGPT 4,have great potential in effective and efficient application of design patterns with high conformance and completeness.This opens avenues for further integrating LLMs into complex software engineering processes. 展开更多
Keywords: design patterns; large language models; pattern application; pattern-based refactoring; quantitative assessment
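The assessment step described in the abstract above, conformance to the pattern's solution specification and completeness against its transformation rules, can be sketched as simple coverage ratios. This is an illustrative sketch only, not the author's RBML tooling; the role and rule names below are hypothetical.

```python
def conformance(solution_roles, found_roles):
    """Share of pattern-solution roles realized in the refactored application."""
    if not solution_roles:
        return 1.0
    return len(set(solution_roles) & set(found_roles)) / len(set(solution_roles))

def completeness(rules, applied_rules):
    """Share of transformation rules the LLM actually applied."""
    if not rules:
        return 1.0
    return len(set(rules) & set(applied_rules)) / len(set(rules))

# Hypothetical example: an Observer-like pattern with three solution roles,
# of which the LLM's output realizes two.
roles = ["Subject", "Observer", "notify"]
found = ["Subject", "Observer"]
print(round(conformance(roles, found), 2))  # 0.67
```

In the paper these checks are driven by RBML specifications rather than flat name sets; the ratios here only illustrate how per-pattern percentages like 98% conformance and 87% completeness could be averaged.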
VTAN: A Novel Video Transformer Attention-Based Network for Dynamic Sign Language Recognition
18
Authors: Ziyang Deng, Weidong Min, Qing Han, Mengxue Liu, Longfei Li, Computers, Materials & Continua, 2025, No. 2, pp. 2793-2812 (20 pages)
Dynamic sign language recognition holds significant importance, particularly with the application of deep learning to address its complexity. However, existing methods face several challenges. Firstly, recognizing dynamic sign language requires identifying keyframes that best represent the signs, and missing these keyframes reduces accuracy. Secondly, some methods do not focus enough on hand regions, which are small within the overall frame, leading to information loss. To address these challenges, we propose a novel Video Transformer Attention-based Network (VTAN) for dynamic sign language recognition. Our approach prioritizes informative frames and hand regions effectively. To tackle the first issue, we designed a keyframe extraction module enhanced by a convolutional autoencoder, which focuses on selecting information-rich frames and eliminating redundant ones from the video sequences. For the second issue, we developed a soft attention-based transformer module that emphasizes extracting features from hand regions, ensuring that the network pays more attention to hand information within sequences. This dual-focus approach improves effective dynamic sign language recognition by addressing the key challenges of identifying critical frames and emphasizing hand regions. Experimental results on two public benchmark datasets demonstrate the effectiveness of our network, outperforming most of the typical methods in sign language recognition tasks.
Keywords: dynamic sign language recognition; transformer; soft attention; attention-based visual feature aggregation
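The soft-attention aggregation the VTAN abstract describes, weighting informative frames before pooling them into a clip-level descriptor, can be sketched as a softmax over per-frame relevance scores. A minimal NumPy sketch under assumed shapes; the scoring vector is random here for illustration, whereas the paper learns it inside a transformer module.

```python
import numpy as np

def soft_attention_pool(frame_feats, w):
    """Aggregate T per-frame feature vectors into one clip descriptor.

    frame_feats: (T, D) array of per-frame features.
    w: (D,) scoring vector producing one relevance score per frame.
    Returns (attention weights over frames, weighted-sum descriptor).
    """
    scores = frame_feats @ w                   # (T,) relevance per frame
    weights = np.exp(scores - scores.max())    # stable softmax
    weights /= weights.sum()
    return weights, weights @ frame_feats      # (D,) attended clip feature

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))                # 8 frames, 4-dim features (assumed)
weights, clip_vec = soft_attention_pool(feats, rng.normal(size=4))
print(weights.sum())  # ~1.0: weights form a distribution over frames
```

Frames with higher relevance scores dominate the pooled descriptor, which is the mechanism that lets the network favor keyframes and hand-region features over redundant frames.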
The Application of Task-Based Language Teaching in College English Teaching in the Information Age
19
Author: Hui Zhang, Journal of Contemporary Educational Research, 2025, No. 3, pp. 149-154 (6 pages)
With the advent of the information age, profound changes have taken place in education. As an important part of higher education, college English teaching is also continually exploring innovative teaching methods to improve teaching quality. Task-based language teaching, with its unique teaching philosophy and practice, emphasizes the use of language for meaningful communication during task completion, which is in line with the goal of cultivating students' comprehensive English language skills. This paper first examines the basic characteristics of task-based language teaching and its application value in college English teaching, and then discusses specific application strategies for task-based language teaching in college English teaching practice in the information age, to provide a useful reference for the reform and innovation of college English teaching in the new era.
Keywords: application value; task-based language teaching; college English teaching; information age
Application of large language models in disease diagnosis and treatment
20
Authors: Xintian Yang, Tongxin Li, Qin Su, Yaling Liu, Chenxi Kang, Yong Lyu, Lina Zhao, Yongzhan Nie, Yanglin Pan, Chinese Medical Journal, 2025, No. 2, pp. 130-142 (13 pages)
Large language models (LLMs) such as ChatGPT, Claude, Llama, and Qwen are emerging as transformative technologies for the diagnosis and treatment of various diseases. With their exceptional long-context reasoning capabilities, LLMs are proficient in clinically relevant tasks, particularly in medical text analysis and interactive dialogue. They can enhance diagnostic accuracy by processing vast amounts of patient data and medical literature, and have demonstrated their utility in diagnosing common diseases and facilitating the identification of rare diseases by recognizing subtle patterns in symptoms and test results. Building on their image-recognition abilities, multimodal LLMs (MLLMs) show promising potential for diagnosis based on radiography, chest computed tomography (CT), electrocardiography (ECG), and common pathological images. These models can also assist in treatment planning by suggesting evidence-based interventions and improving clinical decision support systems through integrated analysis of patient records. Despite these promising developments, significant challenges persist regarding the use of LLMs in medicine, including concerns regarding algorithmic bias, the potential for hallucinations, and the need for rigorous clinical validation. Ethical considerations also underscore the importance of maintaining human oversight in clinical practice. This paper highlights the rapid advancements in research on the diagnostic and therapeutic applications of LLMs across different medical disciplines and emphasizes the importance of policymaking, ethical supervision, and multidisciplinary collaboration in promoting more effective and safer clinical applications of LLMs. Future directions include the integration of proprietary clinical knowledge, the investigation of open-source and customized models, and the evaluation of real-time effects in clinical diagnosis and treatment practices.
Keywords: large language models; artificial intelligence; diagnosis; treatment planning; clinical decision support