000 04146nam a22005295i 4500
001 978-3-031-02165-7
003 DE-He213
005 20240730163825.0
007 cr nn 008mamaa
008 220601s2017 sz | s |||| 0|eng d
020 _a9783031021657
_9978-3-031-02165-7
024 7 _a10.1007/978-3-031-02165-7
_2doi
050 4 _aQ334-342
050 4 _aTA347.A78
072 7 _aUYQ
_2bicssc
072 7 _aCOM004000
_2bisacsh
072 7 _aUYQ
_2thema
082 0 4 _a006.3
_223
100 1 _aGoldberg, Yoav.
_eauthor.
_4aut
_4http://id.loc.gov/vocabulary/relators/aut
_980679
245 1 0 _aNeural Network Methods for Natural Language Processing
_h[electronic resource] /
_cby Yoav Goldberg.
250 _a1st ed. 2017.
264 1 _aCham :
_bSpringer International Publishing :
_bImprint: Springer,
_c2017.
300 _aXX, 292 p.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aSynthesis Lectures on Human Language Technologies,
_x1947-4059
505 0 _aPreface -- Acknowledgments -- Introduction -- Learning Basics and Linear Models -- From Linear Models to Multi-layer Perceptrons -- Feed-forward Neural Networks -- Neural Network Training -- Features for Textual Data -- Case Studies of NLP Features -- From Textual Features to Inputs -- Language Modeling -- Pre-trained Word Representations -- Using Word Embeddings -- Case Study: A Feed-forward Architecture for Sentence Meaning Inference -- Ngram Detectors: Convolutional Neural Networks -- Recurrent Neural Networks: Modeling Sequences and Stacks -- Concrete Recurrent Neural Network Architectures -- Modeling with Recurrent Networks -- Conditioned Generation -- Modeling Trees with Recursive Neural Networks -- Structured Output Prediction -- Cascaded, Multi-task and Semi-supervised Learning -- Conclusion -- Bibliography -- Author's Biography.
520 _aNeural networks are a family of powerful machine learning models. This book focuses on the application of neural network models to natural language data. The first half of the book (Parts I and II) covers the basics of supervised machine learning and feed-forward neural networks, the basics of working with machine learning over language data, and the use of vector-based rather than symbolic representations for words. It also covers the computation-graph abstraction, which makes it possible to easily define and train arbitrary neural networks and is the basis behind the design of contemporary neural network software libraries. The second half of the book (Parts III and IV) introduces more specialized neural network architectures, including 1D convolutional neural networks, recurrent neural networks, conditioned-generation models, and attention-based models. These architectures and techniques are the driving force behind state-of-the-art algorithms for machine translation, syntactic parsing, and many other applications. Finally, we discuss tree-shaped networks, structured prediction, and the prospects of multi-task learning.
650 0 _aArtificial intelligence.
_93407
650 0 _aNatural language processing (Computer science).
_94741
650 0 _aComputational linguistics.
_96146
650 1 4 _aArtificial Intelligence.
_93407
650 2 4 _aNatural Language Processing (NLP).
_931587
650 2 4 _aComputational Linguistics.
_96146
710 2 _aSpringerLink (Online service)
_980680
773 0 _tSpringer Nature eBook
776 0 8 _iPrinted edition:
_z9783031001765
776 0 8 _iPrinted edition:
_z9783031010378
776 0 8 _iPrinted edition:
_z9783031032936
830 0 _aSynthesis Lectures on Human Language Technologies,
_x1947-4059
_980681
856 4 0 _uhttps://doi.org/10.1007/978-3-031-02165-7
912 _aZDB-2-SXSC
942 _cEBK
999 _c85012
_d85012