000 06088nam a22005175i 4500
001 978-3-031-01720-9
003 DE-He213
005 20240730164219.0
007 cr nn 008mamaa
008 220601s2007 sz | s |||| 0|eng d
020 _a9783031017209
_9978-3-031-01720-9
024 7 _a10.1007/978-3-031-01720-9
_2doi
050 4 _aTK7867-7867.5
072 7 _aTJFC
_2bicssc
072 7 _aTEC008010
_2bisacsh
072 7 _aTJFC
_2thema
082 0 4 _a621.3815
_223
100 1 _aOlukotun, Kunle.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_982976
245 1 0 _aChip Multiprocessor Architecture
_h[electronic resource] :
_bTechniques to Improve Throughput and Latency /
_cby Kunle Olukotun, Lance Hammond, James Laudon.
250 _a1st ed. 2007.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2007.
300 _aVIII, 145 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Computer Architecture,
_x1935-3243
505 0 _aThe Case for CMPs -- Improving Throughput -- Improving Latency Automatically -- Improving Latency using Manual Parallel Programming -- A Multicore World: The Future of CMPs.
520 _aChip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simple processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems.
After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University. To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and transactional memory. The transactional memory model can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic execution of blocks of instructions, a technique that makes parallel coding much less error-prone.
650 0 _aElectronic circuits.
_919581
650 0 _aMicroprocessors.
_982979
650 0 _aComputer architecture.
_93513
650 1 4 _aElectronic Circuits and Systems.
_982980
650 2 4 _aProcessor Architectures.
_982983
700 1 _aHammond, Lance.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_982984
700 1 _aLaudon, James.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_982985
710 2 _aSpringerLink (Online service)
_982987
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031005923
776 0 8 _iPrinted edition:
_z9783031028489
830 0 _aSynthesis Lectures on Computer Architecture,
_x1935-3243
_982988
856 4 0 _uhttps://doi.org/10.1007/978-3-031-01720-9
912 _aZDB-2-SXSC
942 _cEBK
999 _c85437
_d85437