Funding: Supported by the National Science and Technology Council (NSTC), Taiwan, under Grant Numbers 112-2622-E-029-009 and 112-2221-E-029-019.
Abstract: In the domain of knowledge graph embedding, conventional approaches typically transform entities and relations into continuous vector spaces. However, parameter efficiency becomes increasingly crucial when dealing with large-scale knowledge graphs that contain vast numbers of entities and relations. In particular, resource-intensive embeddings often lead to increased computational costs and may limit scalability and adaptability in practical environments, such as low-resource settings or real-world applications. This paper explores an approach to knowledge graph representation learning that leverages small, reserved entity and relation sets for parameter-efficient embedding. We introduce a hierarchical attention network designed to refine and maximize the representational quality of embeddings by selectively focusing on these reserved sets, thereby reducing model complexity. Empirical assessments validate that our model achieves high performance on the benchmark dataset with fewer parameters and smaller embedding dimensions. The ablation studies further highlight the impact and contribution of each component in the proposed hierarchical attention structure.
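A minimal sketch of how the reserved-set idea could be realized in practice: each entity keeps only a small query vector, and its full-width representation is composed by attending over a shared codebook of reserved embeddings, so parameters scale with the codebook size rather than the entity count. The PyTorch module below is a hypothetical single-level illustration (the paper's actual network is hierarchical); all names and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReservedSetEmbedding(nn.Module):
    """Compose a full-width entity embedding by attending over a small
    reserved codebook, so parameters scale with the codebook size
    rather than the number of entities."""

    def __init__(self, num_entities, num_reserved, query_dim, embed_dim):
        super().__init__()
        # The only table that grows with |E|: a small per-entity query.
        self.entity_query = nn.Embedding(num_entities, query_dim)
        # Shared reserved embeddings (the compact codebook).
        self.reserved = nn.Parameter(torch.randn(num_reserved, embed_dim))
        self.key_proj = nn.Linear(embed_dim, query_dim, bias=False)

    def forward(self, entity_ids):
        q = self.entity_query(entity_ids)                    # (B, query_dim)
        k = self.key_proj(self.reserved)                     # (R, query_dim)
        attn = F.softmax(q @ k.t() / q.size(-1) ** 0.5, -1)  # (B, R)
        return attn @ self.reserved                          # (B, embed_dim)

# Usage: 100k entities represented with a 64-entry codebook.
emb = ReservedSetEmbedding(100_000, 64, 32, 256)
vec = emb(torch.tensor([3, 17, 99_999]))                     # (3, 256)
```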
Abstract: Deep neural network-based relation extraction research has made significant progress in recent years, and it provides data support for many downstream natural language processing tasks such as knowledge graph construction, sentiment analysis, and question-answering systems. However, previous studies ignored much unused structural information in sentences that could enhance the performance of the relation extraction task. Moreover, most existing dependency-based models utilize self-attention to distinguish the importance of context, which hardly deals with multiple-structure information. To efficiently leverage multiple-structure information, this paper proposes a dynamic structure attention mechanism model based on textual structure information, which deeply integrates word embeddings, named entity recognition labels, part of speech, the dependency tree, and dependency types into a graph convolutional network. Specifically, our model extracts text features of different structures from the input sentence. The Textual Structure information Graph Convolutional Network employs the dynamic structure attention mechanism to learn multi-structure attention, effectively distinguishing important contextual features in the various structural information. In addition, multi-structure weights are carefully designed as a merging mechanism in the different structure attentions to dynamically adjust the final attention. This paper combines these features and trains a graph convolutional network for relation extraction. We experiment on supervised relation extraction datasets including SemEval 2010 Task 8, TACRED, TACREV, and Re-TACRED; the results significantly outperform previous methods.
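The merging mechanism described above, per-structure attention combined by learned weights, can be sketched compactly. The hypothetical PyTorch layer below (not the paper's code) scores nodes from each structural feature stream (e.g. POS, NER-label, or dependency-type embeddings), combines the scores with softmax-normalized merge weights, and uses the result to gate a GCN aggregation over the dependency adjacency; all names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiStructureGCNLayer(nn.Module):
    """GCN layer whose message passing is gated by attention scores
    computed from several structural feature streams and merged by
    learned, softmax-normalized per-structure weights."""

    def __init__(self, hidden_dim, num_structures):
        super().__init__()
        self.score = nn.ModuleList(
            nn.Linear(hidden_dim, 1) for _ in range(num_structures))
        self.merge = nn.Parameter(torch.zeros(num_structures))  # dynamic merge weights
        self.lin = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h, structure_feats, adj):
        # h: (N, D) node states; structure_feats: list of (N, D) streams
        # (e.g. POS / NER / dependency-type embeddings); adj: (N, N) 0/1.
        w = F.softmax(self.merge, dim=0)
        logits = sum(wi * self.score[i](f)
                     for i, (wi, f) in enumerate(zip(w, structure_feats)))
        gate = torch.sigmoid(logits)            # (N, 1) per-node importance
        msg = adj @ (gate * self.lin(h))        # aggregate over dependency edges
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return F.relu(msg / deg + h)            # normalized + residual
```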
Funding: Supported by the National Key Research & Development Program of China (No. 2017YFC0820503), the National Natural Science Foundation of China (No. 62072149), the National Social Science Foundation of China (No. 19ZDA348), the Primary Research & Development Plan of Zhejiang (No. 2021C03156), and the Public Welfare Research Program of Zhejiang (No. LGG19F020017).
Abstract: Event temporal relation extraction is an important part of natural language processing. Many models have been applied to this task with the development of deep learning. However, most existing methods cannot accurately obtain the degree of association between different tokens and events, and event-related information cannot be effectively integrated. In this paper, we propose an event information integration model that integrates event information through a multilayer bidirectional long short-term memory (Bi-LSTM) network and an attention mechanism. Although this scheme improves extraction performance, it can still be further optimized. To further improve on it, we propose a novel relational graph attention network that incorporates edge attributes. In this approach, we first build a semantic dependency graph through dependency parsing, model a semantic graph that considers the edges' attributes by using top-k attention mechanisms to learn hidden semantic contextual representations, and finally predict event temporal relations. We evaluate the proposed models on the TimeBank-Dense dataset. Compared to previous baselines, the Micro-F1 scores obtained by our models improve by 3.9% and 14.5%, respectively.
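As an illustration of edge-attribute-aware top-k attention, the sketch below scores each dependency edge from its two endpoint states plus an edge-attribute vector (for instance, a dependency-type embedding), keeps only the k strongest neighbors per node, and aggregates. This is a hypothetical PyTorch rendering under assumed shapes, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAwareTopKAttention(nn.Module):
    """Graph attention that scores each neighbor from the two node
    states and the edge attribute, keeps the top-k neighbors per node,
    and aggregates their states."""

    def __init__(self, node_dim, edge_dim, k):
        super().__init__()
        self.k = k
        self.score = nn.Linear(2 * node_dim + edge_dim, 1)
        self.out = nn.Linear(node_dim, node_dim)

    def forward(self, h, edge_attr, adj):
        # h: (N, D); edge_attr: (N, N, E); adj: (N, N) 0/1 dependency mask.
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1),
                          edge_attr], dim=-1)
        logits = self.score(pair).squeeze(-1)              # (N, N)
        logits = logits.masked_fill(adj == 0, float('-inf'))
        k = min(self.k, n)
        kth = logits.topk(k, dim=-1).values[..., -1:]      # k-th largest per row
        logits = logits.masked_fill(logits < kth, float('-inf'))
        attn = F.softmax(logits, dim=-1).nan_to_num(0.0)   # isolated rows -> 0
        return F.relu(self.out(attn @ h) + h)              # aggregate + residual
```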
Abstract: Transformer-based semantic segmentation approaches, which divide the image into different regions by sliding windows and model the relations inside each window, have achieved outstanding success. However, since relation modeling between windows was not the primary emphasis of previous work, it was not fully utilized. To address this issue, we propose Graph-Segmenter, comprising a graph transformer and a boundary-aware attention module: an effective network for simultaneously modeling the deeper relations between windows in a global view and among the pixels inside each window as a local view, and for substantial low-cost boundary adjustment. Specifically, we treat every window and every pixel inside a window as nodes to construct graphs for both views and devise the graph transformer. The introduced boundary-aware attention module optimizes the edge information of the target objects by modeling the relationships between pixels on an object's edge. Extensive experiments on three widely used semantic segmentation datasets (Cityscapes, ADE20K, and PASCAL Context) demonstrate that our proposed network, a Graph Transformer with Boundary-aware Attention, achieves state-of-the-art segmentation performance.
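One way to read the global/local design: pool each window into a node, let window nodes attend to one another (the global view), broadcast the result back, then let pixels attend within their window (the local view). The PyTorch sketch below is a simplified, hypothetical rendering of that two-level scheme; standard dense attention stands in for the paper's graph transformer, and the boundary-aware module is omitted.

```python
import torch
import torch.nn as nn

class TwoLevelWindowAttention(nn.Module):
    """Global attention over window-level nodes (mean-pooled features),
    then local attention over the pixels inside each window."""

    def __init__(self, dim, window, heads=4):
        super().__init__()
        self.window = window
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (B, H, W, C) with H and W divisible by the window size.
        B, H, W, C = x.shape
        s = self.window
        win = x.view(B, H // s, s, W // s, s, C).permute(0, 1, 3, 2, 4, 5)
        win = win.reshape(B, -1, s * s, C)            # (B, nW, s*s, C)
        # Global view: one node per window.
        g = win.mean(dim=2)                           # (B, nW, C)
        g, _ = self.global_attn(g, g, g)
        win = win + g.unsqueeze(2)                    # broadcast back to pixels
        # Local view: pixels attend within their window.
        flat = win.reshape(-1, s * s, C)
        flat, _ = self.local_attn(flat, flat, flat)
        win = flat.view(B, H // s, W // s, s, s, C)
        return win.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

# Usage: 32x32 feature map, 64 channels, 8x8 windows.
y = TwoLevelWindowAttention(dim=64, window=8)(torch.randn(1, 32, 32, 64))
```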
Abstract: Distant supervision for relation extraction automatically aligns natural-language text with a knowledge base to generate labeled training data, addressing the cost of manual annotation. However, most existing distant-supervision research pays no attention to long-tail data, so most of the sentence bags produced by distant supervision contain too few sentences to reflect the data truthfully and comprehensively. We therefore propose PG+PTATT, a distantly supervised relation extraction model based on a position-type attention mechanism and graph convolutional networks. A graph convolutional network (GCN) aggregates the latent higher-order features of similar sentence bags and refines each bag to obtain richer, more comprehensive bag features. In parallel, a position-type attention mechanism (PTATT) is built to address the wrong-label problem in distantly supervised relation extraction: it models the positional and type relations between entity words and non-entity words, reducing the influence of noise words. Experiments on the New York Times dataset show that the proposed model effectively resolves these problems in distantly supervised relation extraction and improves extraction accuracy.
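To illustrate the position-type idea, the hypothetical sketch below scores each token from its hidden state, embeddings of its relative offsets to the head and tail entity mentions, and a token-type embedding, then pools the sentence with those scores so noise words are down-weighted. This is one PyTorch reading of PTATT under assumed inputs, not the released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PositionTypeAttention(nn.Module):
    """Score tokens from their relative positions to the two entity
    mentions plus a token-type embedding, then attention-pool the
    sentence so noise words get down-weighted."""

    def __init__(self, hidden_dim, max_len, num_types, pos_dim=16):
        super().__init__()
        self.pos_head = nn.Embedding(2 * max_len, pos_dim)  # offset to head entity
        self.pos_tail = nn.Embedding(2 * max_len, pos_dim)  # offset to tail entity
        self.type_emb = nn.Embedding(num_types, pos_dim)    # e.g. entity vs. non-entity
        self.score = nn.Linear(hidden_dim + 3 * pos_dim, 1)
        self.max_len = max_len

    def forward(self, h, head_off, tail_off, token_types):
        # h: (B, T, D); offsets in [-max_len, max_len); token_types: (B, T).
        p1 = self.pos_head(head_off + self.max_len)
        p2 = self.pos_tail(tail_off + self.max_len)
        ty = self.type_emb(token_types)
        feats = torch.cat([h, p1, p2, ty], dim=-1)
        a = F.softmax(self.score(feats).squeeze(-1), dim=-1)  # (B, T)
        return (a.unsqueeze(-1) * h).sum(dim=1)               # (B, D) sentence vector
```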