Purpose: Evaluating the quality of academic journal articles is a time-consuming but critical task for national research evaluation exercises, appointments, and promotion. It is therefore important to investigate whether Large Language Models (LLMs) can play a role in this process. Design/methodology/approach: This article assesses which ChatGPT inputs (full text without tables, figures, and references; title and abstract; title only) produce better quality score estimates, and the extent to which scores are affected by ChatGPT models and system prompts. Findings: The optimal input is the article title and abstract, with average ChatGPT scores based on these (30 iterations on a dataset of 51 papers) correlating at 0.67 with human scores, the highest ever reported. ChatGPT 4o is slightly better than 3.5-turbo (0.66) and 4o-mini (0.66). Research limitations: The data is a convenience sample of the work of a single author, it covers only one field, and the scores are self-evaluations. Practical implications: The results suggest that article full texts might confuse LLM research quality evaluations, even though complex system instructions for the task are more effective than simple ones. Thus, whilst abstracts contain insufficient information for a thorough assessment of rigour, they may contain strong pointers about originality and significance. Finally, linear regression can be used to convert the model scores into the human scale scores, which is 31% more accurate than guessing. Originality/value: This is the first systematic comparison of the impact of different prompts, parameters, and inputs for ChatGPT research quality evaluations.
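The calibration step mentioned in the findings can be made concrete. Below is a minimal sketch, assuming hypothetical score arrays rather than the paper's actual data, of fitting a linear regression that maps averaged ChatGPT scores onto the human quality scale.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: mean ChatGPT scores (e.g., averaged over 30 iterations)
# and human quality scores on the evaluation scale for the same articles.
chatgpt_means = np.array([[3.2], [4.1], [2.8], [3.9], [4.5], [3.0]])
human_scores = np.array([2.0, 3.0, 2.0, 3.0, 4.0, 2.0])

# Fit a linear mapping from model scores to the human scale.
calibrator = LinearRegression().fit(chatgpt_means, human_scores)

# Convert new averaged model scores into human-scale estimates.
new_means = np.array([[3.5], [4.2]])
print(calibrator.predict(new_means))
```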
Smart contracts on the Ethereum blockchain continue to revolutionize decentralized applications (dApps) by allowing for self-executing agreements. However, bad actors have continuously found ways to exploit smart contracts for personal financial gain, which undermines the integrity of the Ethereum blockchain. This paper proposes a computer program called SADA (Static and Dynamic Analyzer), a novel approach to smart contract vulnerability detection using multiple Large Language Model (LLM) agents to analyze and flag suspicious Solidity code for Ethereum smart contracts. SADA not only improves upon existing vulnerability detection methods but also paves the way for more secure smart contract development practices in the rapidly evolving blockchain ecosystem.
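The abstract does not describe SADA's internal architecture. As a loose illustration only, a multi-agent flagging loop with a majority vote might look like the following; the agent prompts and the `ask_llm` helper are invented stand-ins, not SADA's actual design.

```python
# Hypothetical multi-agent flagging loop; ask_llm() is a stand-in for any
# chat-completion client and does not reflect SADA's real architecture.
AGENT_PROMPTS = [
    "You audit reentrancy risks. Answer FLAG or OK for this Solidity code:\n",
    "You audit integer overflow/underflow. Answer FLAG or OK:\n",
    "You audit access control. Answer FLAG or OK:\n",
]

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM API client here")

def is_suspicious(solidity_source: str) -> bool:
    # Each agent votes independently; flag the contract on a simple majority.
    votes = [ask_llm(p + solidity_source).strip().upper().startswith("FLAG")
             for p in AGENT_PROMPTS]
    return sum(votes) > len(votes) // 2
```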
This critical review provides an in-depth analysis of Large Language Models (LLMs), encompassing their foundational principles, diverse applications, and advanced training methodologies. We critically examine the evolution from Recurrent Neural Networks (RNNs) to Transformer models, highlighting the significant advancements and innovations in LLM architectures. The review explores state-of-the-art techniques such as in-context learning and various fine-tuning approaches, with an emphasis on optimizing parameter efficiency. We also discuss methods for aligning LLMs with human preferences, including reinforcement learning frameworks and human feedback mechanisms. The emerging technique of retrieval-augmented generation, which integrates external knowledge into LLMs, is also evaluated. Additionally, we address the ethical considerations of deploying LLMs, stressing the importance of responsible and mindful application. By identifying current gaps and suggesting future research directions, this review provides a comprehensive and critical overview of the present state and potential advancements in LLMs. This work serves as an insightful guide for researchers and practitioners in artificial intelligence, offering a unified perspective on the strengths, limitations, and future prospects of LLMs.
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans based on natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights towards bridging the gap in Human-Robot-Environment interaction.
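The paper's framework itself is not detailed here, but a single multimodal planning query of the kind described, combining an instruction with a robot camera frame, could be issued as follows. The prompt text and image URL are illustrative placeholders; the client call follows the public OpenAI chat-completions interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # a current multimodal model; the paper used GPT-4V
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "You control a mobile robot. Given the camera view, "
                     "produce a numbered action plan to 'bring the red cup "
                     "to the table'."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/robot_camera.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```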
Predicting information dissemination on social media, specifically users' reposting behavior, is crucial for applications such as advertising campaigns. Conventional methods use deep neural networks to make predictions based on features related to user topic interests and social preferences. However, these models frequently fail to account for the difficulties arising from limited training data and model size, which restrict their capacity to learn and capture the intricate patterns within microblogging data. To overcome this limitation, we introduce a novel model, Adapt pre-trained Large Language model for Reposting Prediction (ALL-RP), which incorporates two key steps: (1) extracting features from post content and social interactions using a large language model with extensive parameters, trained on a vast corpus, and (2) performing semantic and temporal adaptation to transfer the large language model's knowledge of natural language, vision, and graph structures to reposting prediction tasks. Specifically, the temporal adapter in the ALL-RP model captures multi-dimensional temporal information from evolving patterns of user topic interests and social preferences, thereby providing a more realistic reflection of user attributes. Additionally, to enhance the robustness of feature modeling, we introduce a variant of the temporal adapter that implements multiple temporal adaptations in parallel while maintaining structural simplicity. Experimental results on real-world datasets demonstrate that the ALL-RP model surpasses state-of-the-art models in predicting both individual user reposting behavior and group sharing behavior, with performance gains of 2.81% and 4.29%, respectively.
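The abstract gives no equations for the adapters. The sketch below shows a generic bottleneck adapter with several parallel branches averaged under a residual connection, which mirrors the described parallel temporal adaptation only in spirit; the dimensions and branch count are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class ParallelTemporalAdapter(nn.Module):
    """Illustrative bottleneck adapter with parallel branches (not ALL-RP's code)."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64, branches: int = 3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_dim, bottleneck_dim),
                nn.GELU(),
                nn.Linear(bottleneck_dim, hidden_dim),
            )
            for _ in range(branches)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection plus the average of the parallel branch outputs.
        return x + torch.stack([b(x) for b in self.branches]).mean(dim=0)

features = torch.randn(8, 128, 768)   # (batch, time steps, LLM feature dim)
adapted = ParallelTemporalAdapter(768)(features)
```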
Processing police incident data in public security involves complex natural language processing (NLP) tasks, including information extraction. This data contains extensive entity information, such as people, locations, and events, while also involving reasoning tasks like personnel classification, relationship judgment, and implicit inference. Moreover, using models to extract information from police incident data faces a significant challenge: data scarcity, which limits the effectiveness of traditional rule-based and machine-learning methods. To address these challenges, we propose TIPS. In collaboration with public security experts, we used de-identified police incident data to create templates that enable large language models (LLMs) to populate data slots and generate simulated data, enhancing data density and diversity. We then designed schemas to efficiently manage complex extraction and reasoning tasks, constructing a high-quality dataset and fine-tuning multiple open-source LLMs. Experiments showed that the fine-tuned ChatGLM-4-9B model achieved an F1 score of 87.14%, nearly 30% higher than the base model, significantly reducing error rates. Manual corrections further improved performance by 9.39%. This study demonstrates that combining large-scale pre-trained models with limited high-quality domain-specific data can greatly enhance information extraction in low-resource environments, offering a new approach for intelligent public security applications.
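A toy version of the slot-filling data generation step might look like this; the template, slot values, and the idea of seeding an LLM rewrite are illustrative assumptions, since the real TIPS templates are not published in the abstract.

```python
import itertools

# Hypothetical de-identified template with data slots; the real TIPS templates
# were designed with public security experts and are not shown in the abstract.
TEMPLATE = ("At {time}, a caller at {location} reported a {incident_type} "
            "involving {num_people} people.")

SLOTS = {
    "time": ["08:15", "23:40"],
    "location": ["XX Road", "XX Square"],
    "incident_type": ["dispute", "theft"],
    "num_people": ["two", "three"],
}

def filled_prompts():
    keys = list(SLOTS)
    for values in itertools.product(*(SLOTS[k] for k in keys)):
        record = dict(zip(keys, values))
        # An LLM would rewrite/expand each seed into a varied simulated report.
        yield TEMPLATE.format(**record)

for prompt in filled_prompts():
    print(prompt)
```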
Cardiac rehabilitation is a crucial multidisciplinary approach to improve patient outcomes. A growing body of evidence suggests that these programs contribute towards reducing cardiovascular mortality and recurrence. Despite this, cardiac rehabilitation is underutilized, and poor adherence to these programs has been a demonstrated barrier to achieving these outcomes. As a result, there is a growing focus on innovating these programs, especially from the standpoint of digital health and personalized medicine. This editorial discusses the possible roles of large language models, such as those underlying ChatGPT, in further personalizing cardiac rehabilitation programs through simplifying medical jargon and employing motivational interviewing techniques, thus boosting patient engagement and adherence. However, these possibilities must be further investigated in the clinical literature. Likewise, the integration of large language models in cardiac rehabilitation will be challenging in its nascent stages to ensure accurate and ethical information delivery.
Design patterns offer reusable solutions for common software issues, enhancing quality. The recent introduction of generative large language models (LLMs) like ChatGPT and Copilot has demonstrated significant promise in software development. They assist with a variety of tasks, including code generation, modeling, bug fixing, and testing, leading to enhanced efficiency and productivity. Although initial uses of these LLMs have had a positive effect on software development, their efficacy in applying design patterns remains unexplored. This study introduces a method to quantify LLMs' ability to implement design patterns, using Role-Based Metamodeling Language (RBML) for a rigorous specification of a pattern's problem, solution, and transformation rules. The method evaluates the pattern applicability of a software application using the pattern's problem specification. If deemed applicable, the application is input to the LLM for pattern application. The resulting application is assessed for conformance to the pattern's solution specification and for completeness against the pattern's transformation rules. Evaluating the method with ChatGPT 4 across three applications reveals ChatGPT's high proficiency, achieving averages of 98% in conformance and 87% in completeness, thereby demonstrating the effectiveness of the method. Using RBML, this study confirms that LLMs, specifically ChatGPT 4, have great potential for effective and efficient application of design patterns with high conformance and completeness. This opens avenues for further integrating LLMs into complex software engineering processes.
The advent of large language models (LLMs) has made knowledge acquisition and content creation increasingly easier and cheaper, which in turn redefines learning and urges transformation in software engineering education. To guide this transformation, there is a need to understand the impact of LLMs on software engineering education. In this paper, we conducted a preliminary case study on three software requirements engineering classes where students were allowed to use LLMs to assist in their projects. Based on the students' experience, performance, and feedback from a survey conducted at the end of the courses, we characterized the challenges and benefits of applying LLMs in software engineering education. This research contributes to the ongoing discourse on the integration of LLMs in education, emphasizing both their prominent potential and the need for balanced, mindful usage.
Preserving formal style in neural machine translation (NMT) is essential, yet often overlooked as an optimization objective of the training process. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), which combines the style transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first involves a revision approach using an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second approach employs an LLM as a reward model for scoring translation formality, and then uses reinforcement learning algorithms to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in terms of translation formality and translation quality, with an improvement of +9.19 style accuracy points in the German-to-English task and +2.16 COMET score in the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in various real-world translation scenarios.
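The first (revision) approach can be sketched with a generic chat-completion helper; the prompt wording below is an assumption, not INMTF's actual template.

```python
def revise_for_formality(nmt_translation: str, ask_llm) -> str:
    """Ask an LLM to rewrite an NMT output in a formal register.

    `ask_llm` is any function mapping a prompt string to a completion string;
    the prompt below is illustrative, not INMTF's actual template.
    """
    prompt = (
        "Rewrite the following English translation in a formal style, "
        "preserving its meaning exactly. Return only the rewritten text.\n\n"
        f"Translation: {nmt_translation}"
    )
    return ask_llm(prompt)
```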
BACKGROUND: Inflammatory bowel disease (IBD) is a global health burden that affects millions of individuals worldwide, necessitating extensive patient education. Large language models (LLMs) hold promise for addressing patient information needs. However, the use of LLMs to deliver accurate and comprehensible IBD-related medical information has yet to be thoroughly investigated. AIM: To assess the utility of three LLMs (ChatGPT-4.0, Claude-3-Opus, and Gemini-1.5-Pro) as a reference point for patients with IBD. METHODS: In this comparative study, two gastroenterology experts generated 15 IBD-related questions that reflected common patient concerns. These questions were used to evaluate the performance of the three LLMs. The answers provided by each model were independently assessed by three IBD-related medical experts using a Likert scale focusing on accuracy, comprehensibility, and correlation. Simultaneously, three patients were invited to evaluate the comprehensibility of the answers. Finally, a readability assessment was performed. RESULTS: Overall, each of the LLMs achieved satisfactory levels of accuracy, comprehensibility, and completeness when answering IBD-related questions, although their performance varied. All of the investigated models demonstrated strengths in providing basic disease information, such as the definition of IBD as well as its common symptoms and diagnostic methods. Nevertheless, when dealing with more complex medical advice, such as medication side effects, dietary adjustments, and complication risks, the quality of answers was inconsistent between the LLMs. Notably, Claude-3-Opus generated answers with better readability than the other two models. CONCLUSION: LLMs have potential as educational tools for patients with IBD; however, there are discrepancies between the models. Further optimization and the development of specialized models are necessary to ensure the accuracy and safety of the information provided.
In cross-domain named entity recognition, labeled data are scarce in specific domains, and entity information varies between domains. The resulting entity bias makes large language models prone to spurious-correlation problems when dealing with specific domains and entities. To solve this problem, this paper proposes a cross-domain named entity recognition method based on causal graph structure enhancement. It captures cross-domain invariant causal structural representations between the feature representations of text sequences and annotation sequences by establishing a causal learning and intervention module, improving the large language model's use of causal structural features in the target domains and thus effectively alleviating the false entity bias triggered by spurious correlations. Meanwhile, a semantic feature fusion module effectively combines the semantic information of the source and target domains. The results show improvements of 2.47% and 4.12% in the political and medical domains, respectively, compared with the benchmark model, as well as excellent performance in small-sample scenarios, demonstrating the effectiveness of causal graph structure enhancement in improving the accuracy of cross-domain entity recognition and reducing spurious correlations.
Objective: Generative artificial intelligence (AI) technology, represented by large language models (LLMs), has gradually been developed for traditional Chinese medicine (TCM); however, challenges remain in effectively enhancing AI applications for TCM. Therefore, this study is the first systematic review to retrospectively analyze LLMs in TCM, focusing on and summarizing the evidence of their performance in generative tasks. Methods: We extensively searched electronic databases for articles published until June 2024 to identify publicly available studies on LLMs in TCM. Two investigators independently selected and extracted the related information and evaluation metrics. Based on the available data, this study used descriptive analysis for a comprehensive systematic review of LLM technology related to TCM. Results: Ten studies published between 2023 and 2024 met our eligibility criteria and were included in this review: 40% were LLMs in the TCM vertical domain, 40% contained TCM data, and 20% acknowledged TCM contributions, with foundational model parameters ranging from 1.8 to 33 billion. All included studies used manual or automatic evaluation metrics to evaluate model performance and fully discussed the challenges and contributions through an overview of LLMs in TCM. Conclusions: LLMs have achieved significant advantages in TCM applications and can effectively address intelligent TCM tasks. Further in-depth development of LLMs is needed in various vertical TCM fields, including clinical and fundamental research. It is essential to focus on the functional segmentation of generative AI technologies in TCM application scenarios to meet the practical, needs-oriented demands of TCM digitalization.
The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
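One of the simplest membership-inference signals the study's attack categories build on is per-example loss. The sketch below scores a candidate string with a small Hugging Face causal LM; the model here is a stand-in, and a real attack would compare the loss against calibrated thresholds or reference models.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper studies larger deployed LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def example_loss(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the model returns mean token cross-entropy.
        return model(ids, labels=ids).loss.item()

candidate = "Jane Doe's phone number is 555-0100."
# Unusually low loss relative to comparable strings suggests memorization.
print(example_loss(candidate))
```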
Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities such that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
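For context, the standard CVSS v3.1 base-score arithmetic (scope unchanged) is shown below; the paper's contribution is extending such scoring to LLM-specific vulnerabilities like prompt injection, which this sketch does not attempt to reproduce.

```python
import math

def cvss31_base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score for scope-unchanged vulnerabilities.

    Arguments are the standard numeric metric weights from the v3.1 spec,
    e.g. AV:N=0.85, AC:L=0.77, PR:N=0.85, UI:N=0.85, and C/I/A High=0.56.
    """
    iss = 1 - (1 - c) * (1 - i) * (1 - a)
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    # The spec's Roundup: round up to one decimal place.
    return math.ceil(min(impact + exploitability, 10) * 10) / 10

# Example: network-exploitable, low complexity, no privileges or interaction,
# high impact on confidentiality, integrity, and availability.
print(cvss31_base_score(0.85, 0.77, 0.85, 0.85, 0.56, 0.56, 0.56))  # 9.8
```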
Objective: To develop and evaluate a fine-tuned large language model (LLM) for traditional Chinese medicine (TCM) prescription recommendation, named TCMLLM-PR. Methods: First, we constructed an instruction-tuning dataset containing 68,654 samples (approximately 10 million tokens) by integrating data from eight sources, including four TCM textbooks, the Pharmacopoeia of the People's Republic of China 2020 (CHP), Chinese Medicine Clinical Cases (CMCC), and hospital clinical records covering lung disease, liver disease, stroke, diabetes, and splenic-stomach disease. Then, we trained TCMLLM-PR using ChatGLM-6B with P-Tuning v2 technology. The evaluation consisted of three aspects: (i) comparison with traditional prescription recommendation models (PTM, TCMPR, and PresRecST); (ii) comparison with TCM-specific LLMs (ShenNong, Huatuo, and HuatuoGPT) and general-domain ChatGPT; (iii) assessment of model migration capability across different disease datasets. We employed precision, recall, and F1 score as evaluation metrics. Results: The experiments showed that TCMLLM-PR significantly outperformed baseline models on the TCM textbooks and CHP datasets, with F1@10 improvements of 31.80% and 59.48%, respectively. In cross-dataset validation, the model performed best when migrating from TCM textbooks to the liver disease dataset, achieving an F1@10 of 0.1551. Analysis of real-world cases demonstrated that TCMLLM-PR's prescription recommendations most closely matched actual doctors' prescriptions. Conclusion: This study integrated LLMs into TCM prescription recommendation, leveraging a tailored instruction-tuning dataset to develop TCMLLM-PR. The best model parameters of TCMLLM-PR will be publicly released to promote the development of decision-making processes in TCM practice (https://github.com/2020MEAI/TCMLLM).
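The F1@10 metric can be illustrated concretely. The following assumes the usual top-k definition for recommendation tasks; the paper's exact variant may differ.

```python
def f1_at_k(recommended, actual, k=10):
    """Precision/recall/F1 of the top-k recommended herbs vs. the true prescription.

    Follows the common top-k definition; the paper's exact computation may differ.
    """
    top_k = recommended[:k]
    hits = len(set(top_k) & set(actual))
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(actual) if actual else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example with placeholder herb identifiers.
print(f1_at_k(["herb%d" % n for n in range(1, 11)], ["herb2", "herb5", "herb99"]))
```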
With the improvement of multisource information sensing and data acquisition capabilities inside tunnels, the availability of multimodal data in tunnel engineering has significantly increased. However, due to structural differences in multimodal data, traditional intelligent advanced geological prediction models have limited capacity for data fusion. Furthermore, the lack of pre-trained models makes it difficult for neural networks trained from scratch to deeply explore the features of multimodal data. To address these challenges, we utilize the fusion capability of knowledge graphs for multimodal data and the pre-trained knowledge of large language models (LLMs) to establish an intelligent advanced geological prediction model (GeoPredict-LLM). First, we develop an advanced geological prediction ontology model, forming a knowledge graph database. Using knowledge graph embeddings, multisource and multimodal data are transformed into low-dimensional vectors with a unified structure. Second, pre-trained LLMs, through reprogramming, reconstruct these low-dimensional vectors, imparting linguistic characteristics to the data. This transformation effectively reframes the complex task of advanced geological prediction as a "language-based" problem, enabling the model to approach the task from a linguistic perspective. Moreover, we propose the prompt-as-prefix method, which enables output generation while freezing the core of the LLM, thereby significantly reducing the number of training parameters. Finally, evaluations show that, compared with neural network models without pre-training, GeoPredict-LLM significantly improves prediction accuracy. It is worth noting that as long as a knowledge graph database can be established, GeoPredict-LLM can be adapted to multimodal data mining tasks with minimal modifications.
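The reprogramming step is described only at a high level. One way to realize a frozen LLM core with a trainable input mapping is a learned projection from knowledge-graph embeddings into the model's embedding space, sketched below with a small stand-in backbone and placeholder dimensions.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

llm = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in backbone
for param in llm.parameters():
    param.requires_grad = False  # freeze the LLM core, as the paper describes

kg_dim, llm_dim = 128, llm.config.hidden_size
projector = nn.Linear(kg_dim, llm_dim)  # the only trainable component here

kg_vectors = torch.randn(1, 16, kg_dim)       # KG-embedded multimodal data
inputs_embeds = projector(kg_vectors)         # mapped into the LLM's space
outputs = llm(inputs_embeds=inputs_embeds)    # frozen forward pass
print(outputs.logits.shape)
```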
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to the discussion of LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Large language models (LLMs) have significantly advanced artificial intelligence (AI) by excelling in tasks such as understanding, generation, and reasoning across multiple modalities. Despite these achievements, LLMs have inherent limitations, including outdated information, hallucinations, inefficiency, lack of interpretability, and challenges in domain-specific accuracy. To address these issues, this survey explores three promising directions in the post-LLM era: knowledge empowerment, model collaboration, and model co-evolution. First, we examine methods of integrating external knowledge into LLMs to enhance factual accuracy, reasoning capabilities, and interpretability, including incorporating knowledge into training objectives, instruction tuning, retrieval-augmented inference, and knowledge prompting. Second, we discuss model collaboration strategies that leverage the complementary strengths of LLMs and smaller models to improve efficiency and domain-specific performance through techniques such as model merging, functional model collaboration, and knowledge injection. Third, we delve into model co-evolution, in which multiple models collaboratively evolve by sharing knowledge, parameters, and learning strategies to adapt to dynamic environments and tasks, thereby enhancing their adaptability and continual learning. We illustrate how the integration of these techniques advances AI capabilities in science, engineering, and society, particularly in hypothesis development, problem formulation, problem-solving, and interpretability across various domains. We conclude by outlining future pathways for further advancement and applications.
A large language model (LLM) is constructed to address the sophisticated demands of data retrieval and analysis, detailed well profiling, computation of key technical indicators, and the solution of complex problems in reservoir performance analysis (RPA). The LLM is constructed for RPA scenarios with incremental pre-training, fine-tuning, and the coupling of functional subsystems. Functional subsystems and efficient coupling methods are proposed based on named entity recognition (NER), tool invocation, and Text-to-SQL construction, all aimed at resolving pivotal challenges in developing specific applications of LLMs for RPA. This study conducted a detailed accuracy test on feature extraction models, tool classification models, data retrieval models, and analysis recommendation models. The results indicate that these models demonstrate good performance in various key aspects of RPA. The research takes some injection and production well groups in the PK3 Block of the Daqing Oilfield as an example for testing. Testing results show that the model has significant potential and practical value in assisting reservoir engineers with RPA. The research results provide powerful support for the application of LLMs in reservoir performance analysis.
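The Text-to-SQL subsystem can be pictured as schema-grounded prompting. The table schema, prompt, and `ask_llm` helper below are invented for illustration; the system's actual schemas, NER, and tool routing are not described in enough detail here to reproduce.

```python
# Hypothetical production-data schema; the real system's database layout
# is not given in the abstract.
SCHEMA = """CREATE TABLE production (
    well_id TEXT, prod_date DATE, oil_rate REAL, water_cut REAL
);"""

def to_sql(question: str, ask_llm) -> str:
    # ask_llm is any function mapping a prompt string to a completion string.
    prompt = (
        "Given this SQLite schema, answer with a single SQL query only.\n"
        f"{SCHEMA}\n"
        f"Question: {question}\nSQL:"
    )
    return ask_llm(prompt)

# e.g. to_sql("Monthly average oil rate for well PK3-12 in 2023?", ask_llm)
```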