Abstract
[Purpose/significance] The powerful "human-like expression" capability demonstrated by generative AI, as represented by ChatGPT, has shifted knowledge acquisition in research activities from a retrieval mode to a generation mode, and the thinking process from a synthesis mode to a selection mode. This shift in research methods in turn brings generative AI into conflict with existing research evaluation rules, raising two risks in research scenarios: the output of low-quality information and fraudulent misuse by researchers. Existing research evaluation rules therefore need to be innovated and improved in order to regulate the research risks arising from generative AI. [Method/process] Through literature analysis, this paper argues that the current academic discussion of the rights-granting (empowerment) approach, which proceeds from copyrightability, can hardly meet the needs of risk regulation. [Result/conclusion] A behavioral regulation model should be adopted instead: industry soft law should establish obligations of disclosure of use, review of content, and explanation of innovation; researchers' AI use scenarios should be classified into use as a language aid, use as a fact-discovery tool, and use as an idea-generation tool; and obligations of differing degrees should be assigned according to each scenario.
Source
《情报理论与实践》
CSSCI
Peking University Core Journals (北大核心)
2023, Issue 6, pp. 24-32 (9 pages)
Information Studies: Theory & Application
Funding
Supported by the Major Project of Philosophy and Social Science Research in Jiangsu Universities, "Research on the Criminal Law Regulation of Data Transaction Security from the Perspective of Market-Based Allocation of Production Factors" (Project No. 2022SJZD001).