The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations

Morgan & Claypool, June 1, 2017 - 600 pages
The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also offers an in-depth look at the most common multimodal-multisensor combinations—for example, touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process either gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. These handbook chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and on how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.
 

Contents

Scope, Trends, and Paradigm Shift in the Field of Computer Interfaces
1
PART I THEORY AND NEUROSCIENCE FOUNDATIONS
17
1 Theoretical Foundations of Multimodal Interfaces and Systems
19
2 The Impact of Multimodal-Multisensory Learning on Human Performance and Brain Activation Patterns
51
PART II APPROACHES TO DESIGN AND USER MODELING
95
3 Understanding the Sense and Designing for It
97
4 A Background Perspective on Touch as a Multimodal and Multisensor Construct
143
5 Understanding and Supporting Modality Choices
201
6 The Case for Speech and Gesture Production
239
7 Haptics, Non-Speech Audio, and Their Applications
277
8 Challenges and Opportunities
319
PART III COMMON MODALITY COMBINATIONS
363
9 Gaze-Informed Multimodal Interaction
365
10 Multimodal Speech and Pen Interfaces
403
11 Multimodal Gesture Recognition
449
12 Audio and Visual Modality Combination in Speech Processing Applications
489
PART IV PERSPECTIVES ON LEARNING WITH MULTIMODAL TECHNOLOGY
545
13 Perspectives on Learning with Multimodal Technology
547
Index
571
Biographies
599
Volume 1 Glossary
609
Copyright



Author Profiles (2017)

Incaa Designs

University of Passau and Imperial College London

VoiceBox Technologies

German Research Center for Artificial Intelligence

University of Thessaly
