TACFN:Transformer-Based Adaptive Cross-Modal Fusion Network for Multimodal Emotion Recognition
Authors: Feng Liu, Ziwang Fu, Yunlong Wang, Qijian Zheng. CAAI Artificial Intelligence Research, 2023, Issue 1, pp. 75-82 (8 pages).
The fusion technique is the key to the multimodal emotion recognition task. Recently, cross-modal attention-based fusion methods have demonstrated high performance and strong robustness. However, cross-modal attention suffers from redundant features and does not capture complementary features well. We find that it is not necessary to use the entire information of one modality to reinforce the other during cross-modal interaction, and the features that can reinforce a modality may contain only a part of it. To this end, we design an innovative Transformer-based Adaptive Cross-modal Fusion Network (TACFN). Specifically, to handle redundant features, we make one modality perform intra-modal feature selection through a self-attention mechanism, so that the selected features can adaptively and efficiently interact with another modality. To better capture the complementary information between the modalities, we obtain the fused weight vector by splicing and use the weight vector to achieve feature reinforcement of the modalities. We apply TACFN to the RAVDESS and IEMOCAP datasets. For a fair comparison, we use the same unimodal representations to validate the effectiveness of the proposed fusion method. The experimental results show that TACFN brings a significant performance improvement compared to other methods and reaches state-of-the-art performance. All code and models can be accessed at https://github.com/shuzihuaiyu/TACFN.
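The fusion idea the abstract describes can be illustrated with a minimal NumPy sketch: one modality first performs intra-modal feature selection via self-attention, the selected features are spliced (concatenated) with the other modality, and a sigmoid-gated weight vector derived from the splice reinforces that modality's features. This is only a toy sketch of the general mechanism, not the authors' implementation; the names (`self_select`, `fuse`, `w_gate`) and the single-head, unparameterized attention are illustrative assumptions, and the released code at the repository above should be consulted for the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_select(x):
    """Intra-modal feature selection via scaled dot-product self-attention
    (toy single-head version without learned Q/K/V projections)."""
    scores = x @ x.T / np.sqrt(x.shape[-1])   # (T, T) attention scores
    return softmax(scores) @ x                # (T, d) selected features

def fuse(a, b, w_gate):
    """Sketch of adaptive cross-modal fusion: select features of `a`,
    splice with `b`, and use a sigmoid gate as the fused weight vector
    to reinforce modality `b`. `w_gate` is a hypothetical learned matrix."""
    a_sel = self_select(a)                         # (T, d)
    spliced = np.concatenate([a_sel, b], axis=-1)  # (T, 2d) spliced features
    gate = 1.0 / (1.0 + np.exp(-spliced @ w_gate)) # (T, d) weight vector in (0, 1)
    return b + gate * b                            # reinforced modality features

T, d = 4, 8
a = rng.standard_normal((T, d))          # e.g. audio-modality features
b = rng.standard_normal((T, d))          # e.g. visual-modality features
w_gate = rng.standard_normal((2 * d, d)) * 0.1
out = fuse(a, b, w_gate)
print(out.shape)  # (4, 8)
```

In a trained network the gate and attention would carry learned projection weights; the point of the sketch is the data flow: selection, splicing, weighting, reinforcement.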
Keywords: multimodal emotion recognition, multimodal fusion, adaptive cross-modal blocks, Transformer, computational perception