Maurice Herlihy "Cache-Conscious Concurrent Data Structures for Near-Memory Computing"

SPTDC 578 1 year ago

The performance gap between memory and CPU has grown exponentially. To bridge this gap, hardware architects have proposed near-memory computing (NMC), where a lightweight processor (called an NMC core) is located close to memory. Due to its proximity to memory, memory access from an NMC core is much faster than from a CPU core. New advances in 3D integration and die-stacked memory make NMC viable in the near future.

Prior work has shown significant performance improvements by using NMC for embarrassingly parallel and data-intensive applications, as well as for pointer-chasing traversals in sequential data structures. However, current server machines have hundreds of cores, and algorithms for concurrent data structures exploit these cores to achieve high throughput and scalability, with significant benefits over sequential data structures. Thus, it is important to examine how NMC performs with respect to modern concurrent data structures and to understand how concurrent data structures can be developed to take advantage of NMC.

This talk focuses on specific examples of cache-optimized data structures, such as skiplists and B+ trees, where lookups begin at a small number of top-level nodes and diverge onto many different node paths as they move down the hierarchy. These data structures exploit a memory layout split into a host-managed portion consisting of the higher-level nodes and an NMC-managed portion consisting of the remaining lower-level nodes.

Joint work with Jiwon Choe, Andrew Crotty, Tali Moreshet, and Iris Bahar.
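The host/NMC split described above can be illustrated with a toy skiplist sketch. This is not the authors' implementation; it is a minimal single-threaded model in which the top `HOST_LEVELS` levels stand in for the host-managed portion and a separate method models handing the traversal off to an NMC core for the lower levels. The names `SplitSkipList`, `HOST_LEVELS`, and `_nmc_lookup` are illustrative assumptions, not from the talk.

```python
import random

MAX_LEVEL = 8     # total skiplist levels
HOST_LEVELS = 2   # top levels modeled as host-managed (assumed split point)

class Node:
    def __init__(self, key, level):
        self.key = key
        self.next = [None] * (level + 1)

class SplitSkipList:
    """Toy skiplist whose top HOST_LEVELS levels model the host-managed
    portion; the remaining levels model the NMC-managed portion."""

    def __init__(self):
        self.head = Node(float('-inf'), MAX_LEVEL)

    def _random_level(self):
        lvl = 0
        while random.random() < 0.5 and lvl < MAX_LEVEL:
            lvl += 1
        return lvl

    def insert(self, key):
        # Standard skiplist insert: record predecessors at every level.
        update = [self.head] * (MAX_LEVEL + 1)
        node = self.head
        for lvl in range(MAX_LEVEL, -1, -1):
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
            update[lvl] = node
        lvl = self._random_level()
        new = Node(key, lvl)
        for i in range(lvl + 1):
            new.next[i] = update[i].next[i]
            update[i].next[i] = new

    def _nmc_lookup(self, node, key, level):
        # Models the offloaded part of the lookup: an NMC core finishes
        # the pointer chase through the lower, NMC-managed levels.
        for lvl in range(level, -1, -1):
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
        nxt = node.next[0]
        return nxt is not None and nxt.key == key

    def lookup(self, key):
        # Host traverses the few frequently shared top-level nodes...
        node = self.head
        boundary = MAX_LEVEL - HOST_LEVELS
        for lvl in range(MAX_LEVEL, boundary, -1):
            while node.next[lvl] and node.next[lvl].key < key:
                node = node.next[lvl]
        # ...then hands the predecessor node to the "NMC core".
        return self._nmc_lookup(node, key, boundary)
```

The design intuition mirrors the abstract: the top-level nodes are few and hot, so they cache well on the host, while the many divergent lower-level paths are cache-unfriendly pointer chases that benefit from executing close to memory.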
