The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations
Morgan & Claypool, June 1, 2017, 600 pages

The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces: user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smartphones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas.

This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. The volume also provides an in-depth look at the most common multimodal-multisensor combinations, for example, touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. The chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of the volume, experts exchange views on a timely and controversial challenge topic: how multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.
Contents
2 The Impact of Multimodal-Multisensory Learning on Human Performance and Brain Activation Patterns | 51 |
PART II APPROACHES TO DESIGN AND USER MODELING | 95 |
3 Understanding the Sense and Designing for It | 97 |
4 A Background Perspective on Touch as a Multimodal and Multisensor Construct | 143 |
5 Understanding and Supporting Modality Choices | 201 |
6 The Case for Speech and Gesture Production | 239 |
7 Haptics, Non-Speech Audio, and Their Applications | 277 |
8 Challenges and Opportunities | 319 |
PART III COMMON MODALITY COMBINATIONS | 363 |
9 Gaze-Informed Multimodal Interaction | 365 |
10 Multimodal Speech and Pen Interfaces | 403 |
11 Multimodal Gesture Recognition | 449 |
12 Audio and Visual Modality Combination in Speech Processing Applications | 489 |
PERSPECTIVES ON LEARNING WITH MULTIMODAL TECHNOLOGY | 545 |
13 Perspectives on Learning with Multimodal Technology | 547 |
Biographies | 599 |
Volume 1 Glossary | 609 |