| Tag | Ind1 | Ind2 | Content |
|---|---|---|---|
| 000 | | | 03974nam a22005175i 4500 |
| 001 | | | 978-3-031-01817-6 |
| 003 | | | DE-He213 |
| 005 | | | 20240730163723.0 |
| 007 | | | cr nn 008mamaa |
| 008 | | | 220601s2017 sz \| s \|\|\|\| 0\|eng d |
| 020 | | | _a9783031018176 _9978-3-031-01817-6 |
| 024 | 7 | | _a10.1007/978-3-031-01817-6 _2doi |
| 050 | | 4 | _aTA1501-1820 |
| 050 | | 4 | _aTA1634 |
| 072 | | 7 | _aUYT _2bicssc |
| 072 | | 7 | _aCOM016000 _2bisacsh |
| 072 | | 7 | _aUYT _2thema |
| 082 | 0 | 4 | _a006 _223 |
| 100 | 1 | | _aScheirer, Walter J. _eauthor. _4aut _4http://id.loc.gov/vocabulary/relators/aut _980180 |
| 245 | 1 | 0 | _aExtreme Value Theory-Based Methods for Visual Recognition _h[electronic resource] / _cby Walter J. Scheirer. |
| 250 | | | _a1st ed. 2017. |
| 264 | | 1 | _aCham : _bSpringer International Publishing : _bImprint: Springer, _c2017. |
| 300 | | | _aXV, 115 p. _bonline resource. |
| 336 | | | _atext _btxt _2rdacontent |
| 337 | | | _acomputer _bc _2rdamedia |
| 338 | | | _aonline resource _bcr _2rdacarrier |
| 347 | | | _atext file _bPDF _2rda |
| 490 | 1 | | _aSynthesis Lectures on Computer Vision, _x2153-1064 |
| 505 | 0 | | _aPreface -- Acknowledgments -- Figure Credits -- Extrema and Visual Recognition -- A Brief Introduction to Statistical Extreme Value Theory -- Post-recognition Score Analysis -- Recognition Score Normalization -- Calibration of Supervised Machine Learning Algorithms -- Summary and Future Directions -- Bibliography -- Author's Biography. |
| 520 | | | _aA common feature of many approaches to modeling sensory statistics is an emphasis on capturing the "average." From early representations in the brain, to highly abstracted class categories in machine learning for classification tasks, central-tendency models based on the Gaussian distribution are a seemingly natural and obvious choice for modeling sensory data. However, insights from neuroscience, psychology, and computer vision suggest an alternate strategy: preferentially focusing representational resources on the extremes of the distribution of sensory inputs. The notion of treating extrema near a decision boundary as features is not necessarily new, but a comprehensive statistical theory of recognition based on extrema is only now just emerging in the computer vision literature. This book begins by introducing the statistical Extreme Value Theory (EVT) for visual recognition. In contrast to central-tendency modeling, it is hypothesized that distributions near decision boundaries form a more powerful model for recognition tasks by focusing coding resources on data that are arguably the most diagnostic features. EVT has several important properties: strong statistical grounding, better modeling accuracy near decision boundaries than Gaussian modeling, the ability to model asymmetric decision boundaries, and accurate prediction of the probability of an event beyond our experience. The second part of the book uses the theory to describe a new class of machine learning algorithms for decision making that are a measurable advance beyond the state-of-the-art. This includes methods for post-recognition score analysis, information fusion, multi-attribute spaces, and calibration of supervised machine learning algorithms. |
| 650 | | 0 | _aImage processing _xDigital techniques. _94145 |
| 650 | | 0 | _aComputer vision. _980181 |
| 650 | | 0 | _aPattern recognition systems. _93953 |
| 650 | 1 | 4 | _aComputer Imaging, Vision, Pattern Recognition and Graphics. _931569 |
| 650 | 2 | 4 | _aComputer Vision. _980182 |
| 650 | 2 | 4 | _aAutomated Pattern Recognition. _931568 |
| 710 | 2 | | _aSpringerLink (Online service) _980183 |
| 773 | 0 | | _tSpringer Nature eBook |
| 776 | 0 | 8 | _iPrinted edition: _z9783031006890 |
| 776 | 0 | 8 | _iPrinted edition: _z9783031029455 |
| 830 | | 0 | _aSynthesis Lectures on Computer Vision, _x2153-1064 _980184 |
| 856 | 4 | 0 | _uhttps://doi.org/10.1007/978-3-031-01817-6 |
| 912 | | | _aZDB-2-SXSC |
| 942 | | | _cEBK |
| 999 | | | _c84913 _d84913 |
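
The 520 abstract above describes EVT-based post-recognition score analysis and score normalization only at a conceptual level. As a minimal, hedged sketch of what that style of analysis can look like in practice, the snippet below fits a Weibull distribution to the extreme tail of synthetic non-match scores and uses its CDF as a normalized, probability-like confidence. Everything here (the use of `scipy.stats.weibull_min`, the synthetic scores, the tail size of 50) is an illustrative assumption, not the book's actual formulation.

```python
# Illustrative sketch only (not taken from this record): fit a Weibull to the
# tail of non-match scores and use its CDF as a normalized confidence.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
# Hypothetical raw similarity scores for non-matching gallery items.
non_match_scores = rng.normal(loc=0.30, scale=0.05, size=10_000)

# Keep only the largest non-match scores: the extreme tail near the decision boundary.
tail = np.sort(non_match_scores)[-50:]

# Fit a Weibull to the tail, fixing the location just below the smallest tail value.
shape, loc, scale = weibull_min.fit(tail, floc=tail.min() - 1e-6)

def normalized_score(raw_score: float) -> float:
    """Fraction of the fitted tail distribution lying below raw_score."""
    return float(weibull_min.cdf(raw_score, shape, loc=loc, scale=scale))

print(f"score 0.45 -> {normalized_score(0.45):.3f}")  # inside the fitted tail: intermediate
print(f"score 0.60 -> {normalized_score(0.60):.3f}")  # far beyond any observed non-match: near 1.0
```

The design intuition, matching the abstract, is that only the scores nearest the decision boundary are modeled; a Gaussian fit to all 10,000 scores would be dominated by the uninformative bulk of the distribution.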