000 04140nam a22005535i 4500
001 978-3-031-01756-8
003 DE-He213
005 20240730164224.0
007 cr nn 008mamaa
008 220601s2017 sz | s |||| 0|eng d
020 _a9783031017568
_9978-3-031-01756-8
024 7 _a10.1007/978-3-031-01756-8
_2doi
050 4 _aTK7867-7867.5
072 7 _aTJFC
_2bicssc
072 7 _aTEC008010
_2bisacsh
072 7 _aTJFC
_2thema
082 0 4 _a621.3815
_223
100 1 _aReagen, Brandon.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_983072
245 1 0 _aDeep Learning for Computer Architects
_h[electronic resource] /
_cby Brandon Reagen, Robert Adolf, Paul Whatmough, Gu-Yeon Wei, David Brooks.
250 _a1st ed. 2017.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2017.
300 _aXIV, 109 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Computer Architecture,
_x1935-3243
505 0 _aPreface -- Introduction -- Foundations of Deep Learning -- Methods and Models -- Neural Network Accelerator Optimization: A Case Study -- A Literature Survey and Review -- Conclusion -- Bibliography -- Authors' Biographies.
520 _aMachine learning, and specifically deep learning, has been hugely disruptive in many fields of computer science. The success of deep learning techniques in solving notoriously difficult classification and regression problems has resulted in their rapid adoption in solving real-world problems. The emergence of deep learning is widely attributed to a virtuous cycle whereby fundamental advancements in training deeper models were enabled by the availability of massive datasets and high-performance computer hardware. This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s and track the key developments leading up to the emergence of the powerful deep learning techniques that emerged in the last decade. Next, we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools and show how aspiring practitioners can use the tools with the workloads to characterize and optimize DNNs. The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. As high-performance hardware was so instrumental in the success of machine learning becoming a practical solution, this chapter recounts a variety of optimizations proposed recently to further improve future designs. Finally, we present a review of recent research published in the area as well as a taxonomy to help readers understand how various contributions fall in context.
650 0 _aElectronic circuits.
_919581
650 0 _aMicroprocessors.
_983073
650 0 _aComputer architecture.
_93513
650 1 4 _aElectronic Circuits and Systems.
_983076
650 2 4 _aProcessor Architectures.
_983077
700 1 _aAdolf, Robert.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_983078
700 1 _aWhatmough, Paul.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_983079
700 1 _aWei, Gu-Yeon.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_983080
700 1 _aBrooks, David.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_983081
710 2 _aSpringerLink (Online service)
_983085
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031000546
776 0 8 _iPrinted edition:
_z9783031006289
776 0 8 _iPrinted edition:
_z9783031028847
830 0 _aSynthesis Lectures on Computer Architecture,
_x1935-3243
_983086
856 4 0 _uhttps://doi.org/10.1007/978-3-031-01756-8
912 _aZDB-2-SXSC
942 _cEBK
999 _c85449
_d85449