Book description
Embodied conversational agents (ECAs) and speech-based human–machine interfaces can together enable more advanced and more natural human–machine interaction. Fusing the two is a challenging agenda in both research and industry. A central goal of human–machine interfaces is to deliver content or functionality through dialog that resembles face-to-face conversation. Natural interfaces strive to exploit communication strategies that add meaning to the content, whether they are interfaces for controlling an application or ECA-based interfaces that directly simulate face-to-face conversation.
Coverbal Synchrony in Human-Machine Interaction presents state-of-the-art concepts for advanced, environment-independent multimodal human–machine interfaces that can be used in contexts ranging from simple multimodal web browsers (for example, a multimodal content reader) to more complex interfaces for ambient-intelligent environments (such as supportive environments for the elderly and agent-guided households). They can also be deployed across computing environments, from pervasive computing to the desktop. Within these concepts, the contributors discuss several communication strategies used to support different aspects of human–machine interaction.
Table of contents
- Cover
- Preface
- Contents
- List of Contributors
- CHAPTER 1: Speech Technology and Conversational Activity in Human-Machine Interaction
- CHAPTER 2: A Framework for Studying Human Multimodal Communication
- CHAPTER 3: Giving Computers Personality? Personality in Computers is in the Eye of the User
- CHAPTER 4: Multi-Modal Classifier-Fusion for the Recognition of Emotions
- CHAPTER 5: A Framework for Emotions and Dispositions in Man-Companion Interaction
- CHAPTER 6: French Face-to-Face Interaction: Repetition as a Multimodal Resource
- CHAPTER 7: The Situated Multimodal Facets of Human Communication
- CHAPTER 8: From Annotation to Multimodal Behavior
- CHAPTER 9: Co-speech Gesture Generation for Embodied Agents and its Effects on User Evaluation
- CHAPTER 10: A Survey of Listener Behavior and Listener Models for Embodied Conversational Agents
- CHAPTER 11: Human and Virtual Agent Expressive Gesture Quality Analysis and Synthesis
- CHAPTER 12: A Distributed Architecture for Real-time Dialogue and On-task Learning of Efficient Co-operative Turn-taking
- CHAPTER 13: TTS-driven Synthetic Behavior Generation Model for Embodied Conversational Agents
- CHAPTER 14: Modeling Human Communication Dynamics for Virtual Human
- CHAPTER 15: Multimodal Fusion in Human-Agent Dialogue
- Color Plate Section
- Back Cover
Product information
- Title: Coverbal Synchrony in Human-Machine Interaction
- Author(s):
- Release date: October 2013
- Publisher(s): CRC Press
- ISBN: 9781466598263