Memory Abstractions for Parallel Programming

A memory abstraction is an abstraction layer between the program execution and the memory that provides a different “view” of a memory location depending on the execution context in which the memory access is made. Properly designed memory abstractions help ease the task of parallel programming by mitigating the complexity of synchronization and by admitting more efficient use of resources. In this talk, I will demonstrate this point with two case studies, each examining a different memory abstraction.
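A familiar, much simpler instance of this idea (used here only as an analogy, not the mechanism Cilk-M employs) is thread-local storage: the same variable name denotes a different view of storage depending on which thread, i.e., which execution context, performs the access. A minimal C++ sketch:

```cpp
#include <cstdio>
#include <thread>

thread_local int counter = 0;   // each thread gets its own copy

void worker() {
    for (int i = 0; i < 1000; ++i)
        ++counter;              // updates this thread's view only
    std::printf("worker's counter = %d\n", counter);   // prints 1000
}

int main() {
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
    std::printf("main's counter = %d\n", counter);     // still 0
    return 0;
}
```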

The first memory abstraction is the cactus-stack memory abstraction in Cilk-M, a Cilk-based work-stealing runtime system. Many multithreaded concurrency platforms that use a work-stealing runtime system incorporate a “cactus stack” to support multiple stack views for all active children simultaneously. The use of cactus stacks, albeit essential, forces concurrency platforms to trade off among performance, memory consumption, and interoperability with serial code, because cactus stacks are incompatible with linear stacks. I propose a new strategy for building a cactus stack using thread-local memory mapping, which allows worker threads to have their respective linear views of the cactus stack. This cactus-stack memory abstraction enables a concurrency platform that employs a work-stealing runtime system to satisfy all three criteria simultaneously.
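To make the cactus-stack problem concrete, the sketch below uses Cilk Plus-style spawn/sync syntax, assumed here purely for illustration (Cilk-M’s exact source dialect may differ). After the spawn, fib(n-1) may execute on a different worker than the parent, yet both must see the parent’s frame as though it sat on an ordinary linear stack:

```cpp
#include <cilk/cilk.h>
#include <cstdio>

// Parallel recursion: every spawned child logically extends its parent's
// stack frame, so the runtime's tree of frames branches like a cactus.
long fib(long n) {
    if (n < 2) return n;
    long x, y;
    x = cilk_spawn fib(n - 1);   // child may run on another worker...
    y = fib(n - 2);              // ...while the parent continues here
    cilk_sync;                   // wait for the spawned child
    return x + y;                // both need a consistent view of n, x, y
}

int main() {
    std::printf("fib(30) = %ld\n", fib(30));
    return 0;
}
```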

The second memory abstraction is reducer hyperobjects (or reducers for short), a linguistic mechanism that helps avoid determinacy races in dynamic multithreaded programs. The Cilk-M runtime system supports reducers using a memory-mapping approach, which employs thread-local memory mapping and leverages the virtual-address translation provided by the underlying hardware to implement this memory abstraction. The memory-mapping approach yields access times close to 4x faster than the existing approach to implementing reducers.
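The following sketch shows the reducer mechanism itself, using the Cilk Plus C++ interface (cilk::reducer_opadd) purely for illustration; Cilk-M supports the same linguistic construct, with the per-worker views implemented via memory mapping underneath:

```cpp
#include <cilk/cilk.h>
#include <cilk/reducer_opadd.h>
#include <cstdio>

int main() {
    // Without the reducer, concurrent `sum += i` updates from parallel
    // iterations would be a determinacy race. With it, each strand
    // updates its own local view, and the runtime combines views with +
    // when strands join, producing the same result as the serial loop.
    cilk::reducer_opadd<long long> sum(0);

    cilk_for (long long i = 0; i < 1000000; ++i)
        sum += i;

    std::printf("sum = %lld\n", sum.get_value());
    return 0;
}
```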

Speaker Details

I-Ting Angelina Lee recently obtained her Ph.D. in computer science from the Massachusetts Institute of Technology (MIT) under the supervision of Prof. Charles E. Leiserson. Starting in fall 2012, she will be a Postdoctoral Associate in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Her primary research interest is the design and implementation of programming models, languages, and runtime systems to support multithreaded software, with an emphasis on efficient implementations built on theoretical foundations. She received her Bachelor of Science in Computer Science from UC San Diego in 2003, where she worked on the Simultaneous Multithreading Simulator for the DEC Alpha under the supervision of Prof. Dean Tullsen.

Date:
Speakers: Angelina Lee
Affiliation: MIT

Series: Microsoft Research Talks