
Short-Time Prediction Model for Urban Traffic Flow Based on Joint Spatio-Temporal Learning (cited by: 2)
Abstract: Joint spatio-temporal analysis reflects how a studied object changes across the spatial and temporal dimensions, and is important for revealing the spatio-temporal interactions and mechanisms of regional processes. Focusing on the learning of joint spatio-temporal features and the modeling of the physical characteristics of traffic flow, this paper proposes a hierarchical dynamic network model, JST-DHNet, which fuses joint spatio-temporal learning at different scales with embedded domain-knowledge (physics-informed) learning. First, instead of simply concatenating adjacent graph snapshots as in previous studies, multiple spatio-temporal graph structures are constructed using the graph product. Next, based on the joint time-vertex wavelet transform and the joint time-vertex Fourier transform, two spatio-temporal synchronous learning modules at different scales are designed to learn the global and local spatio-temporal characteristics of traffic flow, respectively. Then, drawing on the macroscopic fluid-dynamic properties of traffic flow, a novel spatio-temporal diffusion convolution is developed from a graph-based generalized partial differential equation, enabling the model to learn the propagation mechanism of traffic waves in real-world scenarios. Finally, an attention mechanism fuses the joint spatio-temporal features learned at different scales. Experiments on four real-world traffic flow datasets with road networks of different sizes show that JST-DHNet outperforms prediction models that learn spatial and temporal features separately. Compared with the existing joint spatio-temporal learning model, the Spatial-Temporal Synchronous Graph Convolutional Network (STSGCN), JST-DHNet reduces the Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) by 4.46%, 6.65%, and 10.11%, respectively, and shortens the training time by nearly 80%.
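To make the methods named in the abstract more concrete, here is a minimal sketch (in Python, not the authors' implementation) of two of the ideas it mentions: constructing a joint spatio-temporal graph with a Cartesian graph product instead of concatenating adjacent graph snapshots, and taking one explicit step of a heat-equation-style diffusion on that joint graph as a stand-in for the paper's graph-based partial-differential-equation diffusion convolution. The function names, the toy graphs, and the step size tau are illustrative assumptions, not details taken from the paper.

import numpy as np

def cartesian_product_adjacency(a_space: np.ndarray, a_time: np.ndarray) -> np.ndarray:
    """Adjacency matrix of the Cartesian product of a road graph (N sensors)
    and a temporal path graph (T steps): the Kronecker sum of A_time and A_space."""
    n, t = a_space.shape[0], a_time.shape[0]
    return np.kron(a_time, np.eye(n)) + np.kron(np.eye(t), a_space)

def diffusion_step(x: np.ndarray, adj: np.ndarray, tau: float = 0.1) -> np.ndarray:
    """One explicit-Euler step of the graph heat equation dx/dt = -L x,
    i.e. x <- x - tau * L x, with L the combinatorial Laplacian of adj."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return x - tau * lap @ x

# Toy example: 3 road sensors observed over 4 time steps.
a_space = np.array([[0., 1., 0.],
                    [1., 0., 1.],
                    [0., 1., 0.]])                          # small road graph
a_time = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)   # path graph over 4 time steps
a_joint = cartesian_product_adjacency(a_space, a_time)      # (3*4) x (3*4) joint adjacency
x = np.random.default_rng(0).random(a_joint.shape[0])       # flow signal on the joint graph
x_smoothed = diffusion_step(x, a_joint)                     # diffused (smoothed) signal

On the product graph, each sensor at a given time step is linked both to its spatial neighbours at that step and to its own readings at adjacent steps, so a single diffusion step already mixes information along both dimensions at once; the learned diffusion convolution described in the abstract can be read as a generalisation of this fixed operator.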
Authors: GE Yuran; FU Qiang (Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China)
Source: Computer Engineering (《计算机工程》), 2023, Issue 1, pp. 270-278 (9 pages); indexed by CAS, CSCD, and the Peking University Core Journals list.
Funding: Key Program of the National Natural Science Foundation of China (71734004); Shanghai Science and Technology Research Program (19DZ1208900).
Keywords: Intelligent Traffic System (ITS); spatio-temporal union; traffic flow prediction; Graph Signal Processing (GSP); traffic flow theory