The handbook of multimodal-multisensor interfaces. Volume 1, Foundations, user modeling, and common modality combinations
Title:
The handbook of multimodal-multisensor interfaces. Volume 1, Foundations, user modeling, and common modality combinations
Author:
Oviatt, Sharon, author.
ISBN:
9781970001655
Edition:
First edition.
Physical Description:
1 PDF (xxvii, 633 pages) : illustrations.
Series:
ACM books, #14
Contents:
Introduction: scope, trends, and paradigm shift in the field of computer interfaces -- Why multimodal-multisensor interfaces have become dominant -- Flexible multiple-component tools as a catalyst for performance -- More expressively powerful tools are capable of stimulating cognition -- One example of how multimodal-multisensor interfaces are changing today -- Insights in the chapters ahead -- Expert exchange on multidisciplinary challenge topic -- References --

Part I. Theory and neuroscience foundations --

Chapter 1. Theoretical foundations of multimodal interfaces and systems / Sharon Oviatt -- 1.1 Gestalt theory: understanding multimodal coherence, stability, and robustness -- 1.2 Working memory theory: performance advantages of distributing multimodal processing -- 1.3 Activity theory, embodied cognition, and multisensory-multimodal facilitation of cognition -- Focus questions -- References --

Chapter 2. The impact of multimodal-multisensory learning on human performance and brain activation patterns / Karin H. James, Sophia Vinci-Booher, Felipe Munoz-Rubke -- 2.1 Introduction -- 2.2 The multimodal-multisensory body -- 2.3 Facilitatory effects on learning as a function of active interactions -- 2.4 Behavioral results in children -- 2.5 Neuroimaging studies in adults -- 2.6 Neuroimaging studies in developing populations -- 2.7 Theoretical implications: embodied cognition -- 2.8 Implications for multimodal-multisensor interface design -- Focus questions -- References --

Part II. Approaches to design and user modeling --

Chapter 3. Multisensory haptic interactions: understanding the sense and designing for it / Karon E. MacLean, Oliver S. Schneider, Hasti Seifi -- 3.1 Introduction -- 3.2 Interaction models for multimodal applications -- 3.3 Physical design space of haptic media -- 3.4 Making haptic media -- 3.5 Frontiers for haptic design -- Focus questions -- References --

Chapter 4. A background perspective on touch as a multimodal (and multisensor) construct / Ken Hinckley -- 4.1 Introduction -- 4.2 The duality of sensors and modalities -- 4.3 A model of foreground and background interaction -- 4.4 Seven views of touch interaction -- 4.5 Summary and discussion -- Focus questions -- References --

Chapter 5. Understanding and supporting modality choices / Anthony Jameson, Per Ola Kristensson -- 5.1 Introduction -- 5.2 Synthesis of research on modality choices -- 5.3 Brief introduction to the aspect and arcade models -- 5.4 Consequence-based choice -- 5.5 Trial-and-error-based choice -- 5.6 Policy-based choice -- 5.7 Experience-based choice -- 5.8 Other choice patterns -- 5.9 Recapitulation and ideas for future research -- Focus questions -- References --

Chapter 6. Using cognitive models to understand multimodal processes: the case for speech and gesture production / Stefan Kopp, Kirsten Bergmann -- 6.1 Introduction -- 6.2 Multimodal communication with speech and gesture -- 6.3 Models of speech and gesture production -- 6.4 A computational cognitive model of speech and gesture production -- 6.5 Simulation-based testing -- 6.6 Summary -- Focus questions -- References --

Chapter 7. Multimodal feedback in HCI: haptics, non-speech audio, and their applications / Euan Freeman, Graham Wilson, Dong-Bach Vo, Alex Ng, Ioannis Politis, Stephen Brewster -- 7.1 Overview of non-visual feedback modalities -- 7.2 Applications of multimodal feedback: accessibility and mobility -- 7.3 Conclusions and future directions -- Focus questions -- References --

Chapter 8. Multimodal technologies for seniors: challenges and opportunities / Cosmin Munteanu, Albert Ali Salah -- 8.1 Introduction -- 8.2 Senior users and challenges -- 8.3 Specific application areas -- 8.4 Available multimodal-multisensor technologies -- 8.5 Multimodal interaction for older adults: usability, design, and adoption challenges -- 8.6 Conclusions -- Focus questions -- References --

Part III. Common modality combinations --

Chapter 9. Gaze-informed multimodal interaction / Pernilla Qvarfordt -- 9.1 Introduction -- 9.2 Eye movements and eye tracking data analysis -- 9.3 Eye movements in relation to other modalities -- 9.4 Gaze in multimodal interaction and systems -- 9.5 Conclusion and outlook -- Focus questions -- References --

Chapter 10. Multimodal speech and pen interfaces / Philip R. Cohen, Sharon Oviatt -- 10.1 Introduction -- 10.2 Empirical research on multimodal speech and pen interaction -- 10.3 Design prototyping and data collection -- 10.4 Flow of signal and information processing -- 10.5 Distributed architectural components -- 10.6 Multimodal fusion and semantic integration architectures -- 10.7 Multimodal speech and pen systems -- 10.8 Conclusion and future directions -- Focus questions -- References --

Chapter 11. Multimodal gesture recognition / Athanasios Katsamanis, Vassilis Pitsikalis, Stavros Theodorakis, Petros Maragos -- 11.1 Introduction -- 11.2 Multimodal communication and gestures -- 11.3 Recognizing speech and gestures -- 11.4 A system in detail -- 11.5 Conclusions and outlook -- Focus questions -- References --

Chapter 12. Audio and visual modality combination in speech processing applications / Gerasimos Potamianos, Etienne Marcheret, Youssef Mroueh, Vaibhava Goel, Alexandros Koumbaroulis, Argyrios Vartholomaios, Spyridon Thermos -- 12.1 Introduction -- 12.2 Bimodality in perception and production of human speech -- 12.3 AVASR applications and resources -- 12.4 The visual front-end -- 12.5 Audio-visual fusion models and experimental results -- 12.6 Other audio-visual speech applications -- 12.7 Conclusions and outlook -- Focus questions -- References --

Part IV. Multidisciplinary challenge topic: perspectives on learning with multimodal technology --

Chapter 13. Perspectives on learning with multimodal technology / Karin H. James, James Lester, Dan Schwartz, Katherine M. Cheng, Sharon Oviatt -- 13.1 Perspectives from neuroscience and human-centered interfaces -- 13.2 Perspectives from artificial intelligence and adaptive computation -- 13.3 The enablers: new techniques and models -- 13.4 Opening up new research horizons -- 13.5 Conclusion -- References --

Index -- Biographies -- Volume 1 glossary.
Summary:
The content of this handbook is most appropriate for graduate students, and of primary interest to students of computer science and information technology, human-computer interfaces, mobile and ubiquitous interfaces, and related multidisciplinary majors. When teaching graduate classes with this book, whether on a quarter or semester schedule, we recommend that students first spend two weeks reading the introductory textbook The Paradigm Shift to Multimodality in Contemporary Interfaces (Morgan & Claypool, Human-Centered Interfaces Synthesis Series, 2015), which is suitable for upper-division undergraduate and graduate students. With this orientation, a graduate class providing an overview of multimodal-multisensor interfaces can then select handbook chapters distributed across topics in the different sections.
Electronic Access:
Abstract with links to full text http://dx.doi.org/10.1145/3015783