Scaling Parallel Programming Beyond Threads

2020 
Session: PPoPP Keynote (Garden Pavilion)

Abstract: Parallel hardware is ubiquitous, as are the essential components of the hardware/software interface necessary to leverage such systems. Low-level mechanisms such as threads, atomic memory operations, and synchronization constructs are available in many mainstream languages, either directly or via libraries such as MPI. These foundations are necessary for building effective parallel programming systems, but they do not directly address the needs of programmers whose domain of expertise lies outside parallel programming. Mainstream programming languages are beginning to provide higher-level support for parallelism, such as the parallel algorithm extensions introduced in C++17. However, the needs of many programmers remain unmet. This talk will examine ongoing hardware trends, explore design directions for parallel programming systems that can scale to meet the needs of a broad range of users, and describe some of our recent work to build high-performance, scalable platforms for data science.

Bio: Michael Garland is the Senior Director of Programming Systems and Applications research at NVIDIA. He completed his Ph.D. at Carnegie Mellon University and was previously on the faculty of the Department of Computer Science at the University of Illinois at Urbana-Champaign. He joined NVIDIA in 2006 as one of the first members of NVIDIA Research and has been working to develop effective parallel programming systems ever since. His research goal is to develop tools and techniques that will equip programmers to realize the full potential of modern, massively parallel computing systems.
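As a point of reference for the higher-level support the abstract mentions, the following is a minimal sketch (not from the talk itself) of the C++17 parallel algorithm extensions: a standard algorithm is parallelized simply by passing an execution policy, with no explicit threads, atomics, or locks in user code.

```cpp
// Minimal sketch of C++17 parallel algorithms via execution policies.
// Note: with GCC, linking a parallel backend (e.g. -ltbb) may be required.
#include <algorithm>
#include <execution>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1'000'000);
    std::iota(v.begin(), v.end(), 0.0);  // fill with 0, 1, 2, ...

    // Square each element. std::execution::par is a request to the
    // library; the implementation decides how to run it in parallel.
    std::transform(std::execution::par, v.begin(), v.end(), v.begin(),
                   [](double x) { return x * x; });

    // Parallel reduction over the transformed values.
    double sum = std::reduce(std::execution::par, v.begin(), v.end());
    std::cout << sum << '\n';
}
```

The design choice this illustrates is exactly the one the abstract highlights: the policy argument raises the level of abstraction above threads and synchronization, letting programmers whose expertise lies elsewhere express parallelism declaratively.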