Journal Articles
2 articles found
Recent advances in efficient computation of deep convolutional neural networks (Cited by: 37)
Authors: Jian CHENG, Pei-song WANG, Gang LI, Qing-hao HU, Han-qing LU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2018, Issue 1, pp. 64-77 (14 pages)
Deep neural networks have evolved remarkably over the past few years, and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.
Keywords: deep neural networks; acceleration; compression; hardware accelerator
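One of the compression techniques the survey covers, network quantization, can be made concrete with a small sketch. The snippet below shows symmetric per-tensor int8 post-training quantization of a weight matrix; the function names, the 64x64 weight shape, and the random weights are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: w ~= scale * q, with q stored as int8."""
    scale = np.max(np.abs(w)) / 127.0    # map the largest-magnitude weight to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)      # stand-in weight matrix
q, scale = quantize_int8(w)
max_err = np.max(np.abs(w - dequantize(q, scale)))  # rounding error, bounded by scale / 2
```

Storing `q` instead of `w` cuts weight memory 4x (int8 vs. float32) and lets accelerators use integer arithmetic, which is one reason quantization pairs well with the FPGA/ASIC designs the survey discusses.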
DyPipe: A Holistic Approach to Accelerating Dynamic Neural Networks with Dynamic Pipelining
Authors: Yi-Min Zhuang, Xing Hu, Xiao-Bing Chen, Tian Zhi. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2023, Issue 4, pp. 899-910 (12 pages)
Dynamic neural network (NN) techniques are increasingly important because they facilitate deep learning techniques with more complex network architectures. However, existing studies, which predominantly optimize static computational graphs by static scheduling methods, usually focus on optimizing static neural networks in deep neural network (DNN) accelerators. We analyze the execution process of dynamic neural networks and observe that dynamic features introduce challenges for efficient scheduling and pipelining in existing DNN accelerators. We propose DyPipe, a holistic approach to optimizing dynamic neural network inferences in enhanced DNN accelerators. DyPipe achieves significant performance improvements for dynamic neural networks while introducing negligible overhead for static neural networks. Our evaluation demonstrates that DyPipe achieves a 1.7x speedup on dynamic neural networks and maintains more than 96% performance for static neural networks.
Keywords: dynamic neural network (NN); deep neural network (DNN) accelerator; dynamic pipelining
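The "dynamic features" the abstract refers to can be illustrated with a toy early-exit network, one common form of dynamic NN, where the amount of computation per input is decided at run time. This is not the paper's method; the weights, threshold, and two-stage structure are arbitrary assumptions for the sketch:

```python
import numpy as np

# Toy two-stage network with an input-dependent early exit.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 8))   # stage-1 weights (always used)
W2 = rng.standard_normal((8, 4))   # stage-2 weights (used only when needed)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_forward(x, threshold=0.9):
    h = np.maximum(x @ W1, 0.0)      # stage 1
    early = softmax(h[:4] - h[4:])   # cheap early classifier head (toy)
    if early.max() >= threshold:     # run-time branch: confident inputs stop here
        return early, "early"
    return softmax(h @ W2), "full"   # other inputs pay for stage 2
```

Because the branch taken depends on input values, a schedule that statically pipelines the two stages must either stall or speculate when an input exits early; this run-time variability is the kind of behavior dynamic pipelining is designed to handle.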