Abstract: With the rapid expansion of datacenters and substantial growth in network bandwidth, traditional software network protocol stacks impose heavy processor overhead and struggle to meet the throughput and latency requirements of many datacenter applications. Remote direct memory access (RDMA) adopts zero-copy, kernel bypass, and processor offloading to read and write remote host memory with high bandwidth and low latency. Ethernet-compatible RDMA is being deployed in datacenters, and the Ethernet RDMA NIC, as the principal device carrying this functionality, plays a key role in such deployments. This survey analyzes the field from three perspectives: architecture, optimization, and implementation and evaluation. 1) It summarizes the general architecture of Ethernet RDMA NICs and introduces their key functional components. 2) It focuses on optimization techniques in three areas: storage resources, reliable transport, and application-related concerns; these cover connection scalability for NIC cache resources and registration/access optimizations for host memory resources; congestion control, flow control, and retransmission optimizations for reliable transport over lossy Ethernet; and optimizations for different storage types in distributed storage, database systems, and cloud storage systems, as well as multi-tenant performance isolation, security, and programmability for datacenter applications. 3) It surveys different implementation and evaluation approaches. Finally, a summary and outlook are given.
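To make the registration and one-sided-verb machinery the survey refers to concrete, below is a minimal libibverbs sketch in C of registering a buffer and posting an RDMA WRITE. It assumes a protection domain and a connected reliable-connection queue pair set up elsewhere and elides most error handling; it illustrates the standard verbs API, not any specific NIC optimization from the survey.

```c
/* Minimal libibverbs sketch of the memory-registration and one-sided
 * RDMA WRITE path that NIC-cache/registration optimizations target.
 * Assumes pd (protection domain) and qp (a connected RC queue pair)
 * were created elsewhere; error handling is abbreviated. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       uint64_t remote_addr, uint32_t rkey)
{
    const size_t len = 4096;
    void *buf = malloc(len);
    if (!buf) return -1;
    memset(buf, 0xab, len);

    /* Registration pins the buffer and installs a translation entry in
     * the NIC; this per-buffer state is exactly the on-NIC cache
     * resource whose scalability the survey discusses. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) return -1;

    struct ibv_sge sge = {
        .addr = (uintptr_t)buf, .length = (uint32_t)len, .lkey = mr->lkey,
    };
    struct ibv_send_wr wr = {
        .opcode = IBV_WR_RDMA_WRITE,      /* one-sided: no remote CPU */
        .sg_list = &sge, .num_sge = 1,
        .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma = { .remote_addr = remote_addr, .rkey = rkey },
    };
    struct ibv_send_wr *bad = NULL;
    return ibv_post_send(qp, &wr, &bad);  /* kernel-bypass doorbell */
}
```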
Abstract: Collaborative simulation of complex systems relies on the Run Time Infrastructure (RTI) support software to exchange data between heterogeneous models and heterogeneous simulation software. However, RTI's TCP/IP communication mechanism cannot exploit the full advantage of the InfiniBand (IB) high-speed network of a High Performance Computer (HPC) in simulation. To address this problem, this paper optimizes RTI with an RDMA (Remote Direct Memory Access) communication mechanism on the IB network architecture and, building on the open-source HLA project CERTI, develops IB-CERTI, which runs over IB networks. Comparative experiments in different network environments demonstrate the efficiency of IB-CERTI in simulation communication: the larger the volume of interaction data between federates, the greater the improvement in simulation data transfer efficiency.
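As a rough illustration of the kind of transport substitution IB-CERTI performs, the sketch below replaces a TCP-style connect with librdmacm connection establishment. The ib_connect() helper and its queue sizes are illustrative assumptions, not code from CERTI or the paper; only standard librdmacm calls are used.

```c
/* Hypothetical sketch of swapping a TCP connect for an rdma_cm
 * connection, the kind of transport substitution IB-CERTI performs
 * inside RTI. The integration points are assumed for illustration. */
#include <rdma/rdma_cma.h>
#include <stddef.h>

struct rdma_cm_id *ib_connect(const char *host, const char *port)
{
    struct rdma_addrinfo hints = { .ai_port_space = RDMA_PS_TCP }, *res;
    if (rdma_getaddrinfo(host, port, &hints, &res))
        return NULL;

    struct ibv_qp_init_attr attr = {
        .cap = { .max_send_wr = 64, .max_recv_wr = 64,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,          /* reliable connection */
        .sq_sig_all = 1,
    };
    struct rdma_cm_id *id = NULL;
    /* rdma_create_ep() resolves the route and creates the QP -- the
     * moral equivalent of socket(); rdma_connect() plays connect(). */
    if (rdma_create_ep(&id, res, NULL, &attr)) {
        rdma_freeaddrinfo(res);
        return NULL;
    }
    rdma_freeaddrinfo(res);
    if (rdma_connect(id, NULL)) {
        rdma_destroy_ep(id);
        return NULL;
    }
    return id;  /* ready for rdma_post_send()/rdma_post_recv() */
}
```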
Abstract: In-memory object caching systems are constrained by the high latency of traditional Ethernet for communication and by the memory capacity deployable in a single server for storage; they urgently need to incorporate a new generation of high-performance I/O technologies to improve performance and expand capacity. Taking the widely used Memcached as an example, this work focuses on the data path of in-memory object caching systems and applies high-performance I/O to accelerate communication and extend storage. First, the communication protocol is redesigned around the increasingly popular remote direct memory access (RDMA) semantics, with different strategies for different Memcached operations and message sizes, reducing communication latency. Second, high-performance NVMe SSDs extend Memcached storage: a log structure manages the two-level memory/external-storage hierarchy, and a user-level driver provides direct SSD access, reducing software overhead. The result is U2cache, a high-performance caching system that supports the JVM environment. U2cache bypasses the OS kernel and the JVM runtime and pipelines memory copying, RDMA communication, and SSD access, significantly reducing data access overhead. Experimental results show that U2cache's communication latency approaches that of the underlying RDMA hardware; for large messages, performance improves by more than 20% over the unoptimized version; and when accessing data on the SSD, latency is reduced by up to 31% compared with going through the kernel I/O software stack.
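The size-dependent strategy described above can be sketched as a simple dispatch: small values are returned inline in a SEND, while large values are exposed for a one-sided RDMA READ by the client. This is an illustrative sketch, not U2cache's actual protocol; send_inline(), send_descriptor(), and the 256-byte cutoff are assumptions.

```c
/* Illustrative size-aware reply path (not U2cache's actual code):
 * small values ride inline in a SEND for lowest latency, large values
 * are described by (addr, rkey, len) so the client can fetch them with
 * a zero-copy RDMA READ. */
#include <stdint.h>
#include <stddef.h>

#define INLINE_CUTOFF 256   /* hypothetical threshold */

struct value_desc {          /* descriptor the client READs from */
    uint64_t remote_addr;
    uint32_t rkey;
    uint32_t length;
};

void reply_get(const void *val, size_t len,
               void (*send_inline)(const void *, size_t),
               void (*send_descriptor)(const struct value_desc *),
               uint64_t addr, uint32_t rkey)
{
    if (len <= INLINE_CUTOFF) {
        /* Small message: one SEND, payload copied inline. */
        send_inline(val, len);
    } else {
        /* Large message: hand back a descriptor for a one-sided READ. */
        struct value_desc d = { addr, rkey, (uint32_t)len };
        send_descriptor(&d);
    }
}
```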
Funding: the National Key Research and Development Program of China under Grant No. 2016YFB1000500; the National Natural Science Foundation of China under Grant No. 61572314; and the National Youth Top-Notch Talent Support Program of China.
Abstract: The multicore evolution has stimulated renewed interest in scaling up applications on shared-memory multiprocessors, significantly improving the scalability of many applications. But this scalability is limited to a single node; programmers still have to redesign applications to scale out over multiple nodes. This paper revisits the design and implementation of distributed shared memory (DSM) as a way to scale out applications optimized for the non-uniform memory access (NUMA) architecture over a well-connected cluster. This paper presents MAGI, an efficient DSM system that provides a transparent shared address space with scalable performance on a cluster with fast network interfaces. MAGI is unique in that it presents a NUMA abstraction to fully harness the multicore resources in each node through hierarchical synchronization and memory management. MAGI also exploits the memory access patterns of big-data applications and leverages a set of optimizations for remote direct memory access (RDMA) to reduce the number of page faults and the cost of the coherence protocol. MAGI has been implemented as a user-space library with pthread-compatible interfaces and can run existing multithreaded applications with minimal modifications. We deployed MAGI over an 8-node RDMA-enabled cluster. Experimental evaluation shows that MAGI achieves up to a 9.25x speedup over an unoptimized implementation, delivering scalable performance for large-scale data-intensive applications.
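For intuition about the page-fault-driven path such a DSM rides on, here is a toy C sketch that traps SIGSEGV, remaps the faulting page, and pulls its contents from a remote home node. fetch_page_rdma() is a hypothetical stand-in for an RDMA READ of one page; MAGI's actual NUMA-aware, hierarchical protocol is considerably richer.

```c
/* Toy sketch of page-fault-driven DSM coherence: a faulting page is
 * made writable, fetched from its remote home node, and the faulting
 * instruction retries. fetch_page_rdma() is a hypothetical stand-in
 * for an RDMA READ of one page. */
#include <signal.h>
#include <sys/mman.h>
#include <stdint.h>

#define PAGE_SIZE 4096
extern void fetch_page_rdma(void *local_page, uintptr_t remote_page);

static void dsm_fault_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(PAGE_SIZE - 1));

    /* Make the page writable locally, pull its contents from the home
     * node, then return so the faulting access retries. */
    mprotect(page, PAGE_SIZE, PROT_READ | PROT_WRITE);
    fetch_page_rdma(page, (uintptr_t)page);
}

void dsm_install_handler(void)
{
    struct sigaction sa = { .sa_sigaction = dsm_fault_handler,
                            .sa_flags = SA_SIGINFO };
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);
}
```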
Funding: This work was supported by the Key-Area Research and Development Program of Guangdong Province of China under Grant No. 2020B0101390001; the National Natural Science Foundation of China under Grant Nos. 61772265 and 62072228; the Fundamental Research Funds for the Central Universities of China; the Collaborative Innovation Center of Novel Software Technology and Industrialization of Jiangsu Province of China; and the Jiangsu Innovation and Entrepreneurship (Shuangchuang) Program of China.
Abstract: Remote direct memory access (RDMA) has become one of the state-of-the-art high-performance network technologies in datacenters. RDMA's reliable transport is designed for a lossless underlying network and cannot tolerate a high packet loss rate. However, besides switch buffer overflow, there is another kind of packet loss in RDMA networks, namely packet corruption, which has not been discussed in depth. Packet corruption causes timeout retransmissions and thus long application tail latency. Solving packet corruption in RDMA networks is challenging because: 1) packet corruption is inevitable even with remedial mechanisms, and 2) RDMA hardware is not programmable. This paper proposes designs that guarantee the expected tail latency of applications in the presence of packet corruption. The key idea is to control the probability of timeout events caused by packet corruption by transforming timeout retransmissions into out-of-order retransmissions. We build a probabilistic model to estimate the occurrence probabilities and real effects of the corruption patterns. We implement the two proposed mechanisms with the help of programmable switches and the zero-byte-message RDMA feature. We build an ns-3 simulation and implement the optimization mechanisms on our testbed. The simulation and testbed experiments show that the optimizations can decrease the flow completion time by several orders of magnitude with less than 3% bandwidth cost at different packet corruption rates.
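To see why tail losses are the dangerous case, consider a toy model (an assumption for illustration, not the paper's exact formulation): a corrupted packet in the middle of a message is revealed by later out-of-order arrivals, but a corrupted final packet leaves nothing behind it to expose the gap, so only a timeout recovers it. Appending k zero-byte probe packets turns a tail loss into an out-of-order event unless the tail and all k probes are lost:

```c
/* Toy probabilistic model (not the paper's formulation) of per-message
 * timeout probability under independent per-packet corruption rate p,
 * where a timeout occurs only if the last data packet AND all k
 * zero-byte probes behind it are corrupted. */
#include <math.h>
#include <stdio.h>

double p_timeout(double p, int k)
{
    return pow(p, k + 1);   /* last data packet plus all k probes lost */
}

int main(void)
{
    double p = 1e-4;        /* example per-packet corruption rate */
    for (int k = 0; k <= 3; k++)
        printf("k=%d probes -> P(timeout) = %.3e\n", k, p_timeout(p, k));
    return 0;
}
```

Under this model a single probe already drops the per-message timeout probability from 1e-4 to 1e-8, which is the flavor of probability reduction the paper's mechanisms aim at.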
Funding: supported in part by the U.S. National Science Foundation under Grant No. CCF-2132049, a Google Research Award, and a Meta Faculty Research Award; and by the Expanse cluster at SDSC (San Diego Supercomputer Center) through allocation CIS210053 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by the U.S. National Science Foundation under Grant Nos. 2138259, 2138286, 2138307, 2137603, and 2138296.
Abstract: Machine learning techniques have become ubiquitous in both industry and academic applications. Increasing model sizes and training data volumes necessitate fast and efficient distributed training approaches. Collective communications greatly simplify inter- and intra-node data transfer and are an essential part of the distributed training process, as information such as gradients must be shared between processing nodes. In this paper, we survey the current state-of-the-art collective communication libraries (namely the xCCLs, including NCCL, oneCCL, RCCL, MSCCL, ACCL, and Gloo), with a focus on the industry-led ones for deep learning workloads. We investigate the design features of these xCCLs, discuss their use cases in industry deep learning workloads, compare their performance with industry-made benchmarks (i.e., NCCL Tests and PARAM), and discuss key takeaways and interesting observations. We believe our survey sheds light on potential research directions for future xCCL designs.
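As a concrete taste of the xCCL APIs the survey compares, the following minimal C program runs a single-process, multi-GPU all-reduce with NCCL's public API. Error checking is elided for brevity, and the gradient buffers stand in for real model state shared between training workers.

```c
/* Minimal single-process, multi-GPU all-reduce using NCCL (one of the
 * xCCLs the survey compares). Error checks are elided for brevity. */
#include <nccl.h>
#include <cuda_runtime.h>

int main(void)
{
    const int ndev = 2;
    int devs[2] = {0, 1};
    const size_t count = 1 << 20;        /* 1M floats per GPU */

    ncclComm_t comms[2];
    cudaStream_t streams[2];
    float *grad[2];

    ncclCommInitAll(comms, ndev, devs);  /* one communicator per GPU */
    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(devs[i]);
        cudaMalloc((void **)&grad[i], count * sizeof(float));
        cudaStreamCreate(&streams[i]);
    }

    /* Group the per-device calls so NCCL can launch them together. */
    ncclGroupStart();
    for (int i = 0; i < ndev; i++)
        ncclAllReduce(grad[i], grad[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < ndev; i++) {
        cudaSetDevice(devs[i]);
        cudaStreamSynchronize(streams[i]);
        cudaFree(grad[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```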