
Latent Semantic Mapping [electronic resource] : Principles and Applications / by Jerome R. Bellegarda.

By: Bellegarda, Jerome R. [author].
Contributor(s): SpringerLink (Online service).
Material type: Book
Series: Synthesis Lectures on Speech and Audio Processing
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2007
Edition: 1st ed. 2007
Description: X, 101 p. online resource
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783031025563
Subject(s): Electrical engineering | Signal processing | Acoustical engineering | Electrical and Electronic Engineering | Signal, Speech and Image Processing | Engineering Acoustics
DDC classification: 621.3
Contents:
I. Principles -- Introduction -- Latent Semantic Mapping -- LSM Feature Space -- Computational Effort -- Probabilistic Extensions -- II. Applications -- Junk E-mail Filtering -- Semantic Classification -- Language Modeling -- Pronunciation Modeling -- Speaker Verification -- TTS Unit Selection -- III. Perspectives -- Discussion -- Conclusion -- Bibliography.
In: Springer Nature eBook
Summary: Latent semantic mapping (LSM) is a generalization of latent semantic analysis (LSA), a paradigm originally developed to capture hidden word patterns in a text document corpus. In information retrieval, LSA enables retrieval on the basis of conceptual content, instead of merely matching words between queries and documents. It operates under the assumption that there is some latent semantic structure in the data, which is partially obscured by the randomness of word choice with respect to retrieval. Algebraic and/or statistical techniques are brought to bear to estimate this structure and get rid of the obscuring "noise." This results in a parsimonious continuous parameter description of words and documents, which then replaces the original parameterization in indexing and retrieval. This approach exhibits three main characteristics: discrete entities (words and documents) are mapped onto a continuous vector space; this mapping is determined by global correlation patterns; and dimensionality reduction is an integral part of the process. Such fairly generic properties are advantageous in a variety of different contexts, which motivates a broader interpretation of the underlying paradigm. The outcome (LSM) is a data-driven framework for modeling meaningful global relationships implicit in large volumes of (not necessarily textual) data. This monograph gives a general overview of the framework, and underscores the multifaceted benefits it can bring to a number of problems in natural language understanding and spoken language processing. It concludes with a discussion of the inherent tradeoffs associated with the approach, and some perspectives on its general applicability to data-driven information extraction.
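The summary describes the core LSA/LSM mechanism: a word–document co-occurrence matrix is factored (classically via a truncated singular value decomposition) so that both words and documents land in one low-dimensional continuous space, where global usage patterns, rather than exact word matches, determine proximity. A minimal sketch in plain NumPy, with a toy five-term, four-document count matrix and latent dimension k = 2 chosen purely for illustration (none of these specifics come from the book):

```python
import numpy as np

# Toy term-document count matrix W (rows = terms, columns = documents).
# The corpus is invented for illustration: d1/d2 are "animal" documents,
# d3/d4 are "finance" documents, with one stray "cat" count in d3.
terms = ["cat", "dog", "pet", "stock", "market"]
W = np.array([
    [2, 0, 1, 0],   # cat
    [1, 1, 0, 0],   # dog
    [1, 2, 0, 0],   # pet
    [0, 0, 2, 1],   # stock
    [0, 0, 1, 2],   # market
], dtype=float)

# Rank-k truncated SVD: W ~= U_k S_k V_k^T. Rows of U_k S_k embed the
# terms, and rows of V_k S_k embed the documents, in the same
# k-dimensional latent space -- the "continuous vector space" the
# summary refers to, with dimensionality reduction built in.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 2                               # latent dimension (an assumption)
word_vecs = U[:, :k] * s[:k]        # one row per term
doc_vecs = Vt[:k, :].T * s[:k]      # one row per document

def cos(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Terms with similar global usage patterns land close together, even
# though "cat" and "pet" never need to co-occur in the same document.
print("cat vs pet:  ", cos(word_vecs[0], word_vecs[2]))
print("cat vs stock:", cos(word_vecs[0], word_vecs[3]))
```

The decisive point, which the sketch makes concrete, is that similarity is computed from the global correlation structure captured by the top singular directions, not from literal term overlap; LSM applies the same decomposition to co-occurrence matrices of entities that need not be words and documents at all.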


