000 03884nam a2200529 i 4500
001 6267405
003 IEEE
005 20220712204655.0
006 m o d
007 cr |n|||||||||
008 151223s1994 maua ob 001 eng d
020 _z9780262111935
_qprint
020 _a9780262276863
_qebook
020 _z0585350531
_qelectronic
020 _z9780585350530
_qelectronic
020 _z0262276860
_qelectronic
035 _a(CaBNVSL)mat06267405
035 _a(IDAMS)0b000064818b43f1
040 _aCaBNVSL
_beng
_erda
_cCaBNVSL
_dCaBNVSL
050 4 _aQ325.5
_b.K44 1994eb
100 1 _aKearns, Michael J.,
_eauthor.
_922634
245 1 3 _aAn introduction to computational learning theory /
_cMichael J. Kearns, Umesh V. Vazirani.
264 1 _aCambridge, Massachusetts :
_bMIT Press,
_cc1994.
264 2 _a[Piscataway, New Jersey] :
_bIEEE Xplore,
_c[1994]
300 _a1 PDF (xii, 207 pages) :
_billustrations.
336 _atext
_2rdacontent
337 _aelectronic
_2isbdmedia
338 _aonline resource
_2rdacarrier
504 _aIncludes bibliographical references (p. [193]-203) and index.
505 0 _aThe probably approximately correct learning model -- Occam's razor -- The Vapnik-Chervonenkis dimension -- Weak and strong learning -- Learning in the presence of noise -- Inherent unpredictability -- Reducibility in PAC learning -- Learning finite automata by experimentation -- Appendix: some tools for probabilistic analysis.
506 1 _aRestricted to subscribers or individual electronic text purchasers.
520 _aEmphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics. Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning. Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems, and new presentations of the standard proofs. The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.
530 _aAlso available in print.
538 _aMode of access: World Wide Web.
588 _aDescription based on PDF viewed 12/23/2015.
650 0 _aNeural networks (Computer science)
_93414
650 0 _aAlgorithms.
_93390
650 0 _aArtificial intelligence.
_93407
650 0 _aMachine learning.
_91831
655 0 _aElectronic books.
_93294
700 1 _aVazirani, Umesh Virkumar,
_eauthor.
_922635
710 2 _aIEEE Xplore (Online Service),
_edistributor.
_922636
710 2 _aMIT Press,
_epublisher.
_922637
776 0 8 _iPrint version:
_z9780262111935
856 4 2 _3Abstract with links to resource
_uhttps://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=6267405
942 _cEBK
999 _c73059
_d73059