tionship indicates how multimodal medical image processing can be unified to a large extent, e.g. multi-channel segmentation and image registration, and how information-theoretic registration can be extended to features other than image intensities. The framework is not restricted to medical images, however, and this is illustrated by applying it to multimedia sequences as well. In Chapter 4, the main results from the developments in plastic UIs and multimodal UIs are brought together from a theoretical and conceptual perspective as a unifying approach. The chapter aims to define models useful for supporting UI plasticity by relying on multimodality, to introduce and discuss basic principles that can drive the development of such UIs, and to describe some techniques as proofs of concept of the aforementioned models and principles. The authors introduce running examples that serve as illustrations throughout the discussion of the use of multimodality to support plasticity.
Multimodality Theory.-
Information-Theoretic Framework for Multimodal Signal Processing.-
Multimodality for Plastic User Interfaces: Models, Methods, and Principles.-
Face and Speech Interaction.-
Recognition of Emotional States in Natural Human-Computer Interaction.-
Two SIMILAR Different Speech and Gestures Multimodal Interfaces.-
Multimodal User Interfaces in Ubiquitous Environments.-
Software Engineering for Multimodal Interactive Systems.-
Gestural Interfaces for Hearing-Impaired Communication.-
Modality Replacement Framework for Applications for the Disabled.-
A Medical Component-based Framework for Image Guided Surgery.-
Multimodal Interfaces for Laparoscopic Training.