Journal Articles
4,404 articles found
Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
1
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu. Computers, Materials & Continua (SCIE, EI), 2025, Issue 1, pp. 307-325 (19 pages)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) Larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) The effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) In quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, cutting trainable parameters in a larger layer preserves fine-tuning accuracy better than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Keywords: Large-scale language Model; Parameter-Efficient Fine-Tuning; parameter quantization; key variable; trainable parameters; experimental analysis
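The abstract above reports that larger MLP layers tolerate smaller adapter ranks than self-attention layers. A minimal LoRA sketch in Python can make that concrete; it is illustrative only, not the paper's code, and the layer-name convention, rank values, and scaling factor are assumptions.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # base stays frozen (it may be quantized)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)             # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

def add_lora(module: nn.Module, attn_rank: int = 16, mlp_rank: int = 4) -> nn.Module:
    """Recursively wrap Linear layers; matching 'mlp' in the child name is a hypothetical convention."""
    for name, child in list(module.named_children()):
        if isinstance(child, nn.Linear):
            rank = mlp_rank if "mlp" in name.lower() else attn_rank
            setattr(module, name, LoRALinear(child, rank))
        else:
            add_lora(child, attn_rank, mlp_rank)
    return module

Only the low-rank factors are trained, which is what keeps fine-tuning memory low; the per-layer rank is the kind of "key variable" the study varies by layer type.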
Robust Detection and Analysis of Smart Contract Vulnerabilities with Large Language Model Agents
2
Authors: Nishank P. Kuppa, Vijay K. Madisetti. Journal of Information Security, 2025, Issue 1, pp. 197-226 (30 pages)
Smart contracts on the Ethereum blockchain continue to revolutionize decentralized applications (dApps) by allowing for self-executing agreements. However, bad actors have continuously found ways to exploit smart contracts for personal financial gain, which undermines the integrity of the Ethereum blockchain. This paper proposes a computer program called SADA (Static and Dynamic Analyzer), a novel approach to smart contract vulnerability detection using multiple Large Language Model (LLM) agents to analyze and flag suspicious Solidity code for Ethereum smart contracts. SADA not only improves upon existing vulnerability detection methods but also paves the way for more secure smart contract development practices in the rapidly evolving blockchain ecosystem.
Keywords: Blockchain; Ethereum; Smart Contracts; Security; Decentralized Applications; WEB3; Cryptocurrency; Large Language Models
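As a rough illustration of the multi-agent idea (the prompts, agent roles, and the ask_llm callable below are placeholders, not SADA's actual design), several LLM agents can each scan the same Solidity source from a different angle and their findings can then be merged:

from typing import Callable, Dict, List

AGENT_ROLES: Dict[str, str] = {
    "static":  "List reentrancy, integer overflow, and access-control issues.",
    "dynamic": "Describe transaction sequences that could drain contract funds.",
    "auditor": "Flag deviations from common Solidity security checklists.",
}

def review_contract(source: str, ask_llm: Callable[[str], str]) -> List[str]:
    """Run each agent prompt against the contract and collect flagged findings."""
    findings: List[str] = []
    for role, task in AGENT_ROLES.items():
        prompt = (
            f"You are a {role} analysis agent for Ethereum smart contracts.\n"
            f"{task}\nReturn one finding per line, or NONE.\n\n{source}"
        )
        reply = ask_llm(prompt)                     # any LLM client can be injected here
        findings += [f"[{role}] {line.strip()}" for line in reply.splitlines()
                     if line.strip() and line.strip() != "NONE"]
    return findings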
The Impact of English Language Anxiety on the Cross-Cultural Adaptability of Chinese Overseas Students in Malaysia
3
Authors: Yang Xiaohan, Liu Yu. Contemporary Social Sciences, 2025, Issue 1, pp. 83-101 (19 pages)
With the deepening of cross-cultural educational cooperation between China and Malaysia, the cross-cultural challenges that Chinese overseas students face in Malaysia due to language and cultural differences have become increasingly prominent. Focusing on Chinese graduate students at a public university in Malaysia where English is the medium of instruction, this study employs a scale survey method in conjunction with IBM SPSS 26.0 and Smart PLS 4.0 for data analysis to quantitatively explore the level of language anxiety and its relationship with cross-cultural adaptability and learning motivation. The results indicate that most Chinese graduate students experience notable language anxiety, which is significantly negatively correlated with cross-cultural adaptability, especially academic adaptability, but is not related to learning motivation. Furthermore, the study reveals the complex influencing mechanism of language anxiety within multicultural educational environments and offers suggestions for improvement tailored to Malaysia’s unique educational context. These include utilizing technological tools for language interventions, optimizing classroom teaching strategies, enhancing language learning motivation through external incentives, strengthening training for cross-cultural adaptation skills, and promoting deeper cross-cultural communication. This study provides theoretical support and practical references for alleviating language anxiety and enhancing the cross-cultural adaptability of Chinese overseas students.
Keywords: language anxiety; cross-cultural adaptability; learning motivation; MALAYSIA; overseas students
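The core quantitative claim (a significant negative correlation between anxiety and adaptability scores) corresponds to an analysis that can be sketched in a few lines of Python. The survey columns and data below are invented, and the paper itself uses IBM SPSS 26.0 and SmartPLS 4.0 rather than this code.

import pandas as pd
from scipy.stats import pearsonr

# Invented example data: per-student mean scale scores on a Likert-style survey.
df = pd.DataFrame({
    "language_anxiety":      [4.2, 3.8, 2.5, 4.6, 3.1, 2.2],
    "academic_adaptability": [2.1, 2.6, 3.9, 1.8, 3.0, 4.1],
})
r, p = pearsonr(df["language_anxiety"], df["academic_adaptability"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")   # a negative r echoes the reported relationship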
Potential role of large language models and personalized medicine to innovate cardiac rehabilitation
4
Authors: Rishith Mishra, Hersh Patel, Aleena Jamal, Som Singh. World Journal of Clinical Cases, 2025, Issue 19, pp. 1-4 (4 pages)
Cardiac rehabilitation is a crucial multidisciplinary approach to improve patient outcomes. There is a growing body of evidence that suggests that these programs contribute towards reducing cardiovascular mortality and recurrence. Despite this, cardiac rehabilitation is underutilized, and adherence to these programs has been a demonstrated barrier in achieving these outcomes. As a result, there is a growing focus on innovating these programs, especially from the standpoint of digital health and personalized medicine. This editorial discusses the possible roles of large language models, such as those underlying ChatGPT, in further personalizing cardiac rehabilitation programs through simplifying medical jargon and employing motivational interviewing techniques, thus boosting patient engagement and adherence. However, these possibilities must be further investigated in the clinical literature. Likewise, the integration of large language models in cardiac rehabilitation will be challenging in its nascent stages to ensure accurate and ethical information delivery.
Keywords: Cardiac rehabilitation; Large language models; Patient education; Motivational interviewing; Artificial intelligence
On large language models safety, security, and privacy: A survey
5
Authors: Ran Zhang, Hong-Wei Li, Xin-Yuan Qian, Wen-Bo Jiang, Han-Xiao Chen. Journal of Electronic Science and Technology, 2025, Issue 1, pp. 1-21 (21 pages)
The integration of artificial intelligence (AI) technology, particularly large language models (LLMs), has become essential across various sectors due to their advanced language comprehension and generation capabilities. Despite their transformative impact in fields such as machine translation and intelligent dialogue systems, LLMs face significant challenges. These challenges include safety, security, and privacy concerns that undermine their trustworthiness and effectiveness, such as hallucinations, backdoor attacks, and privacy leakage. Previous works often conflated safety issues with security concerns. In contrast, our study provides clearer and more reasonable definitions for safety, security, and privacy within the context of LLMs. Building on these definitions, we provide a comprehensive overview of the vulnerabilities and defense mechanisms related to safety, security, and privacy in LLMs. Additionally, we explore the unique research challenges posed by LLMs and suggest potential avenues for future research, aiming to enhance the robustness and reliability of LLMs in the face of emerging threats.
Keywords: Large language models; Privacy issues; Safety issues; Security issues
Large Language Models in Software Engineering Education: A Preliminary Study on Software Requirements Engineering Courses
6
Authors: Feng Chen, Shaomin Zhu, Xin Liu, Ying Qian. 计算机教育 (Computer Education), 2025, Issue 3, pp. 24-33 (10 pages)
The advent of large language models (LLMs) has made knowledge acquisition and content creation increasingly easier and cheaper, which in turn redefines learning and urges transformation in software engineering education. To do so, there is a need to understand the impact of LLMs on software engineering education. In this paper, we conducted a preliminary case study on three software requirements engineering classes where students are allowed to use LLMs to assist in their projects. Based on the students’ experience, performance, and feedback from a survey conducted at the end of the courses, we characterized the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
Keywords: Large language models; Software engineering; Software requirements engineering; EDUCATION
Assessing the possibility of using large language models in ocular surface diseases
7
Authors: Qian Ling, Zi-Song Xu, Yan-Mei Zeng, Qi Hong, Xian-Zhe Qian, Jin-Yu Hu, Chong-Gang Pei, Hong Wei, Jie Zou, Cheng Chen, Xiao-Yu Wang, Xu Chen, Zhen-Kai Wu, Yi Shao. International Journal of Ophthalmology (English edition), 2025, Issue 1, pp. 1-8 (8 pages)
AIM: To assess the possibility of using different large language models (LLMs) in ocular surface diseases by selecting five different LLMs and testing their accuracy in answering specialized questions related to ocular surface diseases: ChatGPT-4, ChatGPT-3.5, Claude 2, PaLM2, and SenseNova. METHODS: A group of experienced ophthalmology professors were asked to develop a 100-question single-choice examination on ocular surface diseases designed to assess the performance of LLMs and human participants in answering ophthalmology specialty exam questions. The exam includes questions on the following topics: keratitis (20 questions); keratoconus, keratomalacia, corneal dystrophy, corneal degeneration, erosive corneal ulcers, and corneal lesions associated with systemic diseases (20 questions); conjunctivitis (20 questions); trachoma, pterygium, and conjunctival tumor diseases (20 questions); and dry eye disease (20 questions). The total score of each LLM was then calculated, and their mean scores, mean correlations, variances, and confidence levels were compared. RESULTS: GPT-4 exhibited the highest performance among the LLMs. Comparing the average scores of the LLM group with the four human groups (chief physician, attending physician, regular trainee, and graduate student), the total scores of all LLMs except ChatGPT-4 were lower than that of the graduate student group, which had the lowest score among the human groups. Both ChatGPT-4 and PaLM2 were more likely to give exact and correct answers, giving very little chance of an incorrect answer. ChatGPT-4 showed higher credibility when answering questions, with a success rate of 59%, but gave the wrong answer to the question 28% of the time. CONCLUSION: The GPT-4 model exhibits excellent performance in both answer relevance and confidence. PaLM2 shows a positive correlation (up to 0.8) in terms of answer accuracy during the exam. In terms of answer confidence, PaLM2 is second only to GPT-4 and surpasses Claude 2, SenseNova, and GPT-3.5. Despite the fact that ocular surface disease is a highly specialized discipline, GPT-4 still exhibits superior performance, suggesting that its potential and ability to be applied in this field are enormous, perhaps with the potential to be a valuable resource for medical students and clinicians in the future.
Keywords: ChatGPT-4.0; ChatGPT-3.5; large language models; ocular surface diseases
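The scoring described in METHODS amounts to grading each model's answer sheet against the key and then comparing summary statistics. A minimal Python sketch with invented answer data follows; the real exam has 100 items and five models.

import statistics

answer_key = ["A", "C", "B", "D", "A"]                       # 100 items in the actual exam
model_answers = {
    "GPT-4":    ["A", "C", "B", "A", "A"],
    "PaLM2":    ["A", "C", "D", "D", "A"],
    "Claude 2": ["B", "C", "B", "D", "C"],
}

# Count correct answers per model, then compare mean and variance across models.
scores = {
    name: sum(given == correct for given, correct in zip(answers, answer_key))
    for name, answers in model_answers.items()
}
print("per-model scores:", scores)
print("mean:", statistics.mean(scores.values()),
      "variance:", statistics.pvariance(scores.values()))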
A Study on the Cross-Cultural Communication of Chinese Opera Cultural Elements in Teaching Materials of Chinese as a Foreign Language: Taking New Practical Chinese Readers as an Example
8
Authors: Xi Wang, Dong Yao. Journal of Contemporary Educational Research, 2025, Issue 1, pp. 74-80 (7 pages)
This paper selects the widely used New Practical Chinese Readers, a comprehensive teaching material for Chinese as a foreign language, analyzing its content selection, presentation format, and organizational characteristics. By reviewing the inclusion of Chinese opera cultural elements in this material, the study identifies existing issues and provides recommendations for improvement. Introducing opera culture into Chinese language teaching materials can align with global cultural exchanges, helping more people learn about traditional Chinese culture and enhancing China’s international influence.
Keywords: Chinese opera cultural elements; Teaching materials; Chinese as a foreign language; Cross-cultural communication
Developing Language Assessment Literacy of Pre-Service English Teachers: Frameworks and Cultivation Strategies
9
Author: Jie Cao. Journal of Contemporary Educational Research, 2025, Issue 1, pp. 1-8 (8 pages)
Assessment is a crucial aspect of the teaching process for teachers. Teachers’ assessment literacy is closely related to students’ learning outcomes. The language assessment literacy of foreign language teachers is a significant component of both teachers’ professional development and students’ learning, and it has become a research hotspot in the field of domestic language testing. Based on clarifying the theoretical framework of language assessment literacy, this paper proposes the main cultivation paths for pre-service English teachers’ language assessment literacy, aiming to provide inspiration and references for the cultivation, reform, and development of teachers in basic foreign language education.
Keywords: Pre-service English teachers; language assessment literacy; Cultivation strategies
Systematizing Teacher Development: A Review of Foreign Language Teacher Learning
10
Author: Guang ZENG. Chinese Journal of Applied Linguistics, 2024, Issue 3, pp. 518-523, 526 (7 pages)
Foreign language teaching practice is developing rapidly, but research on foreign language teacher learning is currently relatively fragmented and unstructured. The book Foreign Language Teacher Learning, written by Professor Kang Yan from Capital Normal University and published in September 2022, makes a systematic introduction to foreign language teacher learning, which to some extent makes up for this shortcoming. Her book presents the lineage of foreign language teacher learning research at home and abroad, analyzes both theoretical and practical aspects, reviews the cutting-edge research results, and foresees the future development trend, painting a complete research picture for researchers in the field of foreign language teaching and teacher education as well as front-line teachers interested in foreign language teacher learning. This is an important inspiration for conducting foreign language teacher learning research in the future. This paper reviews the book in terms of its content, major characteristics, contributions, and limitations.
Keywords: foreign language teacher learning; foreign language teacher education; foreign language teaching; teacher development
The Dynamic Interplay Between Language Motivation and English Speaking Fluency: Implications for Effective Teaching Strategies
11
Author: Yiran Yang. Journal of Contemporary Educational Research, 2024, Issue 7, pp. 56-62 (7 pages)
In this study, we aim to investigate the reciprocal influence between language motivation and English speaking fluency among language learners, and to draw implications for effective teaching methodologies. By analyzing multiple cases of language learners in conjunction with relevant theories and practical insights, the study uncovers a dynamic correlation between language motivation and speaking fluency. The research findings indicate that heightened language motivation can positively impact learners’ speaking fluency, while improved oral skills, in turn, bolster learners’ language confidence and motivation. Building on these insights, the study proposes impactful teaching approaches, such as cultivating learners’ enthusiasm for language acquisition, providing diverse opportunities for oral practice, and fostering active engagement in language communication. These strategies are designed to enhance language motivation and speaking fluency among learners, offering valuable guidance and reference for educators.
Keywords: language motivation; English speaking fluency; language learners; Teaching methodologies; Oral proficiency; language confidence
A Comprehensive Study on Gender Language and Its Differences in China
12
Author: Suofeiya Fan. Journal of Contemporary Educational Research, 2024, Issue 7, pp. 187-191 (5 pages)
The study of language and gender, especially the study of gender language differences, involves many fields such as psychology, sociology, anthropology, language and literature, news media, and education. Starting from the broad definition of gender language, this paper reviews and organizes the research history of domestic gender language and its differences. Around the research history of domestic gender language, the research period is divided along the timeline into germination, genesis, and growth. Divided by theme and content, the main content is the phenomenon of sexism in language; the second is the study of gender language style differences; the third is the root causes of sexism and verbal gender differences, i.e., the construction of the corresponding theories; and the fourth is the discussion of the limitations of the study of gender language in foreign countries.
Keywords: language and gender; Gender language differences; language differences
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
13
Authors: R. Sujatha, K. Nimala. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder for Representation of Transformer (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-Trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with a Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1_score of 0.88.
Keywords: Bidirectional encoder for representation of transformer; conversation; ensemble model; fine-tuning; generalized autoregressive pretraining for language understanding; generative pre-trained transformer; hyperparameter tuning; natural language processing; robustly optimized BERT pretraining approach; sentence classification; transformer models
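One common way to realize such an ensemble is to average per-label probabilities across the fine-tuned models. The sketch below is an assumption-laden illustration, not the EPLM-HT implementation: the checkpoint names are placeholders, and each is presumed to have been fine-tuned with the four labels from the abstract as its id2label mapping.

import numpy as np
from transformers import pipeline

LABELS = ["information", "question", "directive", "commission"]
CHECKPOINTS = ["my-bert-conv", "my-roberta-conv", "my-distilbert-conv"]   # hypothetical fine-tuned models

classifiers = [pipeline("text-classification", model=ckpt, top_k=None)
               for ckpt in CHECKPOINTS]

def classify(sentence: str) -> str:
    """Average each label's score across the ensemble and return the winning category."""
    totals = np.zeros(len(LABELS))
    for clf in classifiers:
        for item in clf([sentence])[0]:            # list of {"label", "score"} dicts per input
            totals[LABELS.index(item["label"])] += item["score"]
    return LABELS[int(totals.argmax())]

Averaging scores rather than majority voting keeps ties rare and lets a confident model outweigh uncertain ones; the hyperparameter tuning the abstract mentions would happen during each model's fine-tuning, before this inference step.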
Plain language in the healthcare of Japan: a systematic review of “plain Japanese”
14
Authors: Hatsune Kido, Soichiro Saeki, Mayu Hiraiwa, Masashi Yasunaga, Rie Tomizawa, Chika Honda, Toshio Fukuoka, Kaori Minamitani. Global Health Journal, 2024, Issue 3, pp. 113-118 (6 pages)
Objective: Despite the decrease in the number of foreign visitors and residents in Japan due to the coronavirus disease 2019, a resurgence has been remarkable since 2022. However, Japan’s medical support system for foreign patients, especially residents, is inadequate, with language barriers potentially causing health disparities. Comprehensive interpretation and translation services are challenging, but “plain Japanese” may be a viable alternative for foreign patients with basic Japanese language skills. This study explores the application and obstacles of plain Japanese in the medical sector. Methods: A literature review was performed across these databases: Web of Science, PubMed, Google Scholar, Scopus, CINAHL Plus, Springer Link, and Ichushi-Web (Japanese medical literature). The search covered themes related to healthcare, care for foreign patients, and scholarly articles, and was conducted in July 2023. Results: The study incorporated five papers. Each paper emphasized the language barriers foreign residents in Japan face when accessing healthcare, highlighting the critical role and necessity of plain Japanese in medical environments. Most of the reports focused on the challenges of delivering medical care to foreign patients and the training of healthcare professionals in using plain Japanese for communication. Conclusion: The knowledge and application of plain Japanese among healthcare professionals are inadequate, and literature also remains scarce. With the increasing number of foreign residents in Japan, the establishment of a healthcare system that effectively uses plain Japanese is essential. However, plain Japanese may not be the optimal linguistic assistance in certain situations, thus it is imperative to encourage more research and reports on healthcare services using plain Japanese.
Keywords: Plain Japanese; Easy Japanese; Plain language; Foreign residents; Healthcare access; language barriers; Emigrants and immigrants
Enhancing Communication Accessibility: UrSL-CNN Approach to Urdu Sign Language Translation for Hearing-Impaired Individuals
15
Authors: Khushal Das, Fazeel Abid, Jawad Rasheed, Kamlish, Tunc Asuroglu, Shtwai Alsubai, Safeeullah Soomro. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 689-711 (23 pages)
Deaf people or people facing hearing issues can communicate using sign language (SL), a visual language. Many works based on rich-resource languages have been proposed; however, work using poor-resource languages is still lacking. Unlike other SLs, the visuals of the Urdu Language are different. This study presents a novel approach to translating Urdu sign language (UrSL) using the UrSL-CNN model, a convolutional neural network (CNN) architecture specifically designed for this purpose. Unlike existing works that primarily focus on languages with rich resources, this study addresses the challenge of translating a sign language with limited resources. We conducted experiments using two datasets containing 1500 and 78,000 images, employing a methodology comprising four modules: data collection, pre-processing, categorization, and prediction. To enhance prediction accuracy, each sign image was transformed into a greyscale image and underwent noise filtering. Comparative analysis with machine learning baseline methods (support vector machine, Gaussian Naive Bayes, random forest, and the k-nearest neighbors algorithm) on the UrSL alphabets dataset demonstrated the superiority of UrSL-CNN, achieving an accuracy of 0.95. Additionally, our model exhibited superior performance in Precision, Recall, and F1-score evaluations. This work not only contributes to advancing sign language translation but also holds promise for improving communication accessibility for individuals with hearing impairments.
Keywords: Convolutional neural networks; Pakistan sign language; visual language
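The pre-processing the abstract mentions (greyscale conversion plus noise filtering) feeding a small CNN might look roughly like this in Python; the median filter, image size, and layer sizes are assumptions, not the published UrSL-CNN architecture.

import cv2
import torch
import torch.nn as nn

def preprocess(path: str, size: int = 64) -> torch.Tensor:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)      # greyscale conversion
    img = cv2.medianBlur(img, 3)                       # simple noise filtering
    img = cv2.resize(img, (size, size)) / 255.0
    return torch.tensor(img, dtype=torch.float32).unsqueeze(0)   # (1, H, W); stack into a batch before the model

class SignCNN(nn.Module):
    """Small illustrative classifier for sign-alphabet images."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)   # assumes 64x64 inputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))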
Unlocking the Potential: A Comprehensive Systematic Review of ChatGPT in Natural Language Processing Tasks
16
Author: Ebtesam Ahmad Alomari. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 43-85 (43 pages)
As Natural Language Processing (NLP) continues to advance, driven by the emergence of sophisticated large language models such as ChatGPT, there has been a notable growth in research activity. This rapid uptake reflects increasing interest in the field and induces critical inquiries into ChatGPT’s applicability in the NLP domain. This review paper systematically investigates the role of ChatGPT in diverse NLP tasks, including information extraction, Named Entity Recognition (NER), event extraction, relation extraction, Part of Speech (PoS) tagging, text classification, sentiment analysis, emotion recognition, and text annotation. The novelty of this work lies in its comprehensive analysis of the existing literature, addressing a critical gap in understanding ChatGPT’s adaptability, limitations, and optimal application. In this paper, we employed a systematic stepwise approach following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework to direct our search process and seek relevant studies. Our review reveals ChatGPT’s significant potential in enhancing various NLP tasks. Its adaptability in information extraction tasks, sentiment analysis, and text classification showcases its ability to comprehend diverse contexts and extract meaningful details. Additionally, ChatGPT’s flexibility in annotation tasks reduces manual effort and accelerates the annotation process, making it a valuable asset in NLP development and research. Furthermore, GPT-4 and prompt engineering emerge as a complementary mechanism, empowering users to guide the model and enhance overall accuracy. Despite its promising potential, challenges persist. The performance of ChatGPT needs to be tested using more extensive datasets and diverse data structures. Subsequently, its limitations in handling domain-specific language and the need for fine-tuning in specific applications highlight the importance of further investigations to address these issues.
Keywords: Generative AI; large language model (LLM); natural language processing (NLP); ChatGPT; GPT (generative pretraining transformer); GPT-4; sentiment analysis; NER; information extraction; ANNOTATION; text classification
Smaller & Smarter: Score-Driven Network Chaining of Smaller Language Models
17
Authors: Gunika Dhingra, Siddansh Chawla, Vijay K. Madisetti, Arshdeep Bahga. Journal of Software Engineering and Applications, 2024, Issue 1, pp. 23-42 (20 pages)
With the continuous evolution and expanding applications of Large Language Models (LLMs), there has been a noticeable surge in the size of the emerging models. It is not solely the growth in model size, primarily measured by the number of parameters, but also the subsequent escalation in computational demands and hardware and software prerequisites for training, all culminating in a substantial financial investment as well. In this paper, we present novel techniques like supervision, parallelization, and scoring functions to get better results out of chains of smaller language models, rather than relying solely on scaling up model size. Firstly, we propose an approach to quantify the performance of a Smaller Language Model (SLM) by introducing a corresponding supervisor model that incrementally corrects the encountered errors. Secondly, we propose an approach to utilize two smaller language models (in a network) performing the same task and retrieving the best relevant output from the two, ensuring peak performance for a specific task. Experimental evaluations establish the quantitative accuracy improvements on financial reasoning and arithmetic calculation tasks from utilizing techniques like supervisor models (in a network-of-models scenario), threshold scoring, and parallel processing over a baseline study.
Keywords: Large language Models (LLMs); Smaller language Models (SLMs); FINANCE; NETWORKING; Supervisor Model; Scoring Function
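A bare-bones sketch of the "two SLMs in a network plus a supervisor" idea described above, with every model callable and the scoring function left as placeholders rather than the paper's actual components:

from typing import Callable

def chain(task: str,
          slm_a: Callable[[str], str],
          slm_b: Callable[[str], str],
          supervisor: Callable[[str, str], str],
          score: Callable[[str, str], float],
          threshold: float = 0.7) -> str:
    """Run two small models, keep the higher-scoring output, escalate below a threshold."""
    candidates = [slm_a(task), slm_b(task)]        # the two calls could run in parallel
    best = max(candidates, key=lambda out: score(task, out))
    if score(task, best) >= threshold:
        return best
    return supervisor(task, best)                  # supervisor incrementally corrects weak answers

The threshold plays the role of the abstract's threshold scoring: it decides when the cheaper pair of small models is trusted and when the supervisor is invoked.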
Exploration of the Impact of Language Proficiency on Vocabulary Association Patterns in English Language Education
18
Author: Yuxin Jin. Journal of Contemporary Educational Research, 2024, Issue 11, pp. 76-81 (6 pages)
This paper explores the lexical association patterns of English as a second language and their relationship with language proficiency. Through the vocabulary association test, the study analyzes the differences in vocabulary association between learners with different language levels. The participants were 100 non-native English-speaking undergraduate students from a top-200 university, such as the University of Nottingham, and a university outside the top 200, such as the University of Aberdeen; the two groups of learners differed in their vocabulary size and learning style. It was found that the two groups of learners differed significantly in vocabulary size, language background, and learning experience. In addition, the study raises three core questions: first, learners’ lexical association patterns; second, differences in association among learners with different language proficiency levels; and third, other variables that affect vocabulary association ability. The limitations of the study are that reaction time was not measured and the influence of native language background on word association was not fully considered; future research should further explore these aspects.
Keywords: Lexical association patterns; Vocabulary association; Learning style; language proficiency; Second language learners
The Auxiliary Role of Large Language Models in Clinical Dialogues
19
Authors: Haipeng Luo, Yuchu Zhang. Journal of Electronic Research and Application, 2024, Issue 6, pp. 72-78 (7 pages)
In recent years, large language models (LLMs) have made significant progress in natural language processing (NLP). These models not only perform well in a variety of language tasks but also show great potential in the medical field. This paper aims to explore the application of LLMs in clinical dialogues, analyzing their role in improving the efficiency of doctor-patient communication, aiding in diagnosis and treatment, and providing emotional support. The paper also discusses the challenges and limitations of the model in terms of privacy protection, ethical issues, and practical applications. Through comprehensive analysis, we conclude that applying LLMs in clinical dialogues is promising. However, it requires careful consideration and caution by practitioners in practice.
Keywords: Large language models; Clinical dialogues; Natural language processing; Healthcare assistance; Ethical issues
A Review of Research on Second Language Acquisition from a Positive Psychology Perspective
20
Author: Yanhui Wu. Journal of Contemporary Educational Research, 2024, Issue 6, pp. 189-193 (5 pages)
This paper reviews the research on second language acquisition from the perspective of positive psychology. First, it introduces the background and purpose of the study and discusses the significance of the application of positive psychology in the field of language acquisition. Then, the basic theories of positive psychology, including the core concepts and principles of positive psychology, are summarized. Subsequently, the theory of second language acquisition is defined and outlined, including the definition, characteristics, and related developmental theories of second language acquisition. On this basis, the study of second language acquisition from the perspective of positive psychology is discussed in detail. By combing and synthesizing the literature, this paper summarizes the current situation and trends of second language acquisition research under the perspective of positive psychology and puts forward some future research directions and suggestions.
Keywords: Positive psychology; Second language acquisition; language learning motivation; Positive emotion; Positive mindset