
Machine Learning for Multimodal Interaction [electronic resource] : 5th International Workshop, MLMI 2008, Utrecht, The Netherlands, September 8-10, 2008, Proceedings / edited by Andrei Popescu-Belis, Rainer Stiefelhagen.

Contributor(s): Popescu-Belis, Andrei [editor.] | Stiefelhagen, Rainer [editor.] | SpringerLink (Online service).
Material type: Book
Series: Information Systems and Applications, incl. Internet/Web, and HCI: 5237
Publisher: Berlin, Heidelberg : Springer Berlin Heidelberg : Imprint: Springer, 2008
Edition: 1st ed. 2008.
Description: XII, 364 p. online resource.
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783540858539.
Subject(s): User interfaces (Computer systems) | Human-computer interaction | Natural language processing (Computer science) | Artificial intelligence | Computers and civilization | Computer vision | User Interfaces and Human Computer Interaction | Natural Language Processing (NLP) | Artificial Intelligence | Computers and Society | Computer Vision
Additional physical formats: Printed edition: No title; Printed edition: No title
DDC classification: 005.437 | 004.019
Online resources: Click here to access online
Contents:
Face, Gesture and Nonverbal Communication -- Visual Focus of Attention in Dynamic Meeting Scenarios -- Fast and Robust Face Tracking for Analyzing Multiparty Face-to-Face Meetings -- What Does the Face-Turning Action Imply in Consensus Building Communication? -- Distinguishing the Communicative Functions of Gestures -- Optimised Meeting Recording and Annotation Using Real-Time Video Analysis -- Ambiguity Modeling in Latent Spaces -- Audio-Visual Scene Analysis and Speech Processing -- Inclusion of Video Information for Detection of Acoustic Events Using the Fuzzy Integral -- Audio-Visual Clustering for 3D Speaker Localization -- A Hybrid Generative-Discriminative Approach to Speaker Diarization -- A Neural Network Based Regression Approach for Recognizing Simultaneous Speech -- Hilbert Envelope Based Features for Far-Field Speech Recognition -- Multimodal Unit Selection for 2D Audiovisual Text-to-Speech Synthesis -- Social Signal Processing -- Decision-Level Fusion for Audio-Visual Laughter Detection -- Detection of Laughter-in-Interaction in Multichannel Close-Talk Microphone Recordings of Meetings -- Automatic Recognition of Spontaneous Emotions in Speech Using Acoustic and Lexical Features -- Daily Routine Classification from Mobile Phone Data -- Human-Human Spoken Dialogue Processing -- Hybrid Multi-step Disfluency Detection -- Exploring Features and Classifiers for Dialogue Act Segmentation -- Detecting Action Items in Meetings -- Modeling Topic and Role Information in Meetings Using the Hierarchical Dirichlet Process -- Time-Compressing Speech: ASR Transcripts Are an Effective Way to Support Gist Extraction -- Meta Comments for Summarizing Meeting Speech -- HCI and Applications -- A Generic Layout-Tool for Summaries of Meetings in a Constraint-Based Approach -- A Probabilistic Model for User Relevance Feedback on Image Retrieval -- The AMIDA Automatic Content Linking Device: Just-in-Time Document Retrieval in Meetings -- Introducing Additional Input Information into Interactive Machine Translation Systems -- Computer Assisted Transcription of Text Images and Multimodal Interaction -- User Requirements and Evaluation of Meeting Browsers and Assistants -- Designing and Evaluating Meeting Assistants, Keeping Humans in Mind -- Making Remote 'Meeting Hopping' Work: Assistance to Initiate, Join and Leave Meetings -- Physicality and Cooperative Design -- Developing and Evaluating a Meeting Assistant Test Bed -- Extrinsic Summarization Evaluation: A Decision Audit Task.
In: Springer Nature eBook
Summary: This book constitutes the refereed proceedings of the 5th International Workshop on Machine Learning for Multimodal Interaction, MLMI 2008, held in Utrecht, The Netherlands, in September 2008. The 12 revised full papers and 15 revised poster papers presented together with 5 papers of a special session on user requirements and evaluation of multimodal meeting browsers/assistants were carefully reviewed and selected from 47 submissions. The papers cover a wide range of topics related to human-human communication modeling and processing, as well as to human-computer interaction, using several communication modalities. Special focus is given to the analysis of non-verbal communication cues and social signal processing, the analysis of communicative content, audio-visual scene analysis, speech processing, and interactive systems and applications.