

Rethinking Parallel Execution for Multicore Processors, Guri Sohi, University of Wisconsin

Parallel processing has finally come into the mainstream of computing, as almost every processor chip going forward will have multiple processing cores. Software, which is typically written as a sequential program intended to run on a uniprocessor, now has to be written to run correctly and efficiently on multiple processor cores. This is a decades-old problem: several decades of research have gone into finding solutions, but success has been very limited. New ways of achieving parallel execution are needed to make it easy to exploit multicore processors.

The prevailing wisdom, based on many decades of experience, holds that a parallel program representation is required to achieve parallel execution on multiple processors. This talk will challenge that decades-old wisdom and present a new paradigm for achieving parallel execution. This paradigm, which we call Dynamic Serialization, achieves parallel execution from a sequential program representation written in an ordinary object-oriented programming language (C++), running on stock commercial hardware and compiled with an ordinary stock compiler. Unlike other approaches being explored, such as thread-level speculation and transactional memory, Dynamic Serialization currently employs no speculation in either hardware or software. The talk will present experimental results gathered on real machines (AMD Barcelona, Intel Nehalem, Sun Niagara) showing that the new paradigm can achieve parallel speedups comparable to traditional parallel execution techniques, while doing so with a sequential program representation that does not suffer from the challenges and drawbacks of a parallel representation.
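
To make the distinction between the two kinds of representation concrete, the sketch below contrasts an ordinary sequential C++ loop with the kind of explicitly threaded version that the prevailing wisdom says is required. It is only a generic illustration of the terminology used in this abstract, not the Dynamic Serialization mechanism described in the talk; the per-item work and function names are hypothetical.

 // Illustrative contrast only -- NOT the Dynamic Serialization mechanism,
 // just a sketch of the two program representations the abstract distinguishes.
 #include <algorithm>
 #include <cstddef>
 #include <iostream>
 #include <thread>
 #include <vector>
 
 double process(double x) { return x * x; }  // hypothetical per-item work
 
 // Sequential representation: an ordinary loop -- no threads, no locks.
 void run_sequential(std::vector<double>& data) {
     for (std::size_t i = 0; i < data.size(); ++i)
         data[i] = process(data[i]);
 }
 
 // Explicitly parallel representation: the programmer partitions the work
 // across threads and must reason about sharing and synchronization.
 void run_threaded(std::vector<double>& data, unsigned nthreads) {
     std::vector<std::thread> workers;
     const std::size_t chunk = (data.size() + nthreads - 1) / nthreads;
     for (unsigned t = 0; t < nthreads; ++t) {
         const std::size_t begin = std::min(data.size(), t * chunk);
         const std::size_t end   = std::min(data.size(), begin + chunk);
         workers.emplace_back([&data, begin, end] {
             for (std::size_t i = begin; i < end; ++i)
                 data[i] = process(data[i]);
         });
     }
     for (auto& w : workers) w.join();
 }
 
 int main() {
     std::vector<double> a(1000, 2.0), b(1000, 2.0);
     run_sequential(a);
     run_threaded(b, 4);
     std::cout << (a == b ? "results match\n" : "results differ\n");
 }

The sequential version carries none of the partitioning, race, or synchronization concerns visible in the threaded version; the abstract's claim is that parallel execution can be obtained without ever asking the programmer to write the latter.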