Context Aware Human-Robot and Human-Agent Interaction [electronic resource] / edited by Nadia Magnenat-Thalmann, Junsong Yuan, Daniel Thalmann, Bum-Jae You.

Contributor(s): Magnenat-Thalmann, Nadia [editor.] | Yuan, Junsong [editor.] | Thalmann, Daniel [editor.] | You, Bum-Jae [editor.] | SpringerLink (Online service).
Material type: Book
Series: Human-Computer Interaction Series
Publisher: Cham : Springer International Publishing : Imprint: Springer, 2016
Description: XIII, 298 p. 143 illus. online resource.
Content type: text
Media type: computer
Carrier type: online resource
ISBN: 9783319199474
Subject(s): Computer science | User interfaces (Computer systems) | Artificial intelligence | Computer graphics | Computer Science | User Interfaces and Human Computer Interaction | Computer Imaging, Vision, Pattern Recognition and Graphics | Artificial Intelligence (incl. Robotics)
Additional physical formats: Printed edition: No title
DDC classification: 005.437 | 4.019
Online resources: Click here to access online
Contents:
Preface -- Introduction -- Part I User Understanding through Multisensory Perception -- Face and Facial Expressions Recognition and Analysis -- Body Movement Analysis and Recognition -- Sound Source Localization and Tracking -- Modelling Conversation -- Part II Facial and Body Modelling Animation -- Personalized Body Modelling -- Parameterized Facial Modelling and Animation -- Motion Based Learning -- Responsive Motion Generation -- Shared Object Manipulation -- Part III Modelling Human Behaviours -- Modelling Personality, Mood and Emotions -- Motion Control for Social Behaviours -- Multiple Virtual Humans Interactions -- Multi-Modal and Multi-Party Social Interactions.
In: Springer eBooks. Summary: This is the first book to describe how Autonomous Virtual Humans and Social Robots can interact with real people, be aware of the environment around them, and react to various situations. Researchers from around the world present the main techniques for tracking and analysing humans and their behaviour, and consider the potential for these virtual humans and robots to replace or stand in for their human counterparts. They tackle areas such as awareness of and reaction to real-world stimuli, using the same modalities humans do: verbal and body gestures, facial expressions and gaze, to support seamless human-computer interaction (HCI). The research presented in this volume is split into three sections:
• User Understanding through Multisensory Perception: deals with the analysis and recognition of a given situation or stimulus, addressing issues of facial recognition, body gestures and sound localization.
• Facial and Body Modelling Animation: presents the methods used in modelling and animating faces and bodies to generate realistic motion.
• Modelling Human Behaviours: presents the behavioural aspects of virtual humans and social robots when interacting and reacting to real humans and each other.
Context Aware Human-Robot and Human-Agent Interaction will be of great use to students, academics and industry specialists in areas such as robotics, HCI and computer graphics.
No physical items for this record
