Schedule

General Conference Schedule

Tuesday Nov. 4th, 2003

6:00pm-9:00pm Evening Reception & Demos at NewMic - Held jointly with UIST'03

Wednesday Nov. 5th, 2003

8:30-8:45am ICMI Welcome & Introduction
8:45-10:00 Keynote Speaker (Marshall)
10:00-10:30 Break
10:30-12:10 Joint (UIST and ICMI) paper session
12:10-2:00 Lunch (stations) and Joint Poster Session
2:00-3:30 UIST papers
3:30-4:00 Break
4:00-5:30 ICMI papers
5:30-6:00 UIST announcements & wrap-up
7:00pm Evening Reception & Demos - UBC's Human Computer Interaction Group invites UIST and ICMI-PUI attendees to a "demo reception" on the evening of Wed. Nov. 5th. The cost is $10. Click here for more details.

Thursday Nov. 6th, 2003

9:00-10:00am Keynote Speaker (Spence)
10:00-10:30 Break
10:30-12:15 Paper session
12:15-1:45 Lunch (sit-down, with informal discussions)
1:45-3:00 Panel: Standards for Multimodal User Interfaces, James Larson, chair
3:00-4:30 Posters/Demos and Break
4:30-6:00 Paper session
7:00pm ICMI Banquet at Aqua Riva Restaurant on the waterfront

Friday Nov. 7th, 2003

9:00-10:00am Keynote Speaker (Jain)
10:00-10:30 Break
10:30-12:10 Paper session
12:10-1:30 Lunch (sit-down, with informal discussions)
1:30-2:45 Panel: Funding for Multimodal Interface Research, Phil Cohen and Oliviero Stock, co-chairs
2:45-3:00 Break
3:00-4:30 Paper session
4:30-5:30 ICMI Town Hall Meeting
5:30-7:00pm ICMI Business Meeting & Dinner

Note: Continental breakfast, coffee breaks, and lunch are included each day with paid registration, as is Thursday evening's banquet.


Schedule of Papers

WEDNESDAY 10:30-12:10

JOINT SESSION WITH UIST

[UIST] VisionWand: Interaction Techniques for Large Displays using a Passive Wand Tracked in 3D
Xiang Cao and Ravin Balakrishnan, University of Toronto

[UIST] Perceptually-Supported Image Editing of Text and Graphics
Eric Saund, David Fleet, Daniel Larner, and James Mahoney, Palo Alto Research Center

A System for Fast Full-Text Entry for Small Electronic Devices
Saied Nesbat, ExIdeas, Inc.

Mutual Disambiguation of 3D Multimodal Interaction in Augmented and Virtual Reality
Ed Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini, Xiaoguang Li, Phil Cohen, and Steven Feiner, Oregon Health and Science University/OGI School of Science & Engineering, Columbia University, and Pacific Northwest National Laboratory


WEDNESDAY 4:00-5:30

ATTENTION and INTEGRATION

Learning and Reasoning about Interruption
Eric Horvitz and Johnson Apacible, Microsoft Research

Providing the Basis for Human-Robot-Interaction: A Multimodal Attention System for a Mobile Robot
Sebastian Lang, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch, Gernot A. Fink, and Gerhard Sagerer, Bielefeld University, Faculty of Technology

Selective Perception Policies for Limiting Computation in Multimodal Systems: A Comparative Analysis
Nuria Oliver and Eric Horvitz, Microsoft Research

Toward a Theory of Organized Multimodal Integration Patterns during Human-Computer Interaction
Sharon Oviatt, Rachel Coulston, Stefanie Tomko, Benfang Xiao, Rebecca Lunsford, Matt Wesson, and Lesley Carmichael, Oregon Health and Science University/OGI School of Science & Engineering, Carnegie Mellon University, and University of Washington


THURSDAY 10:30-12:15

HAPTICS and BIOMETRICS

TorqueBAR: An Ungrounded Haptic Feedback Device
Colin Swindells, Alex Unden, and Tao Sang, University of British Columbia

Towards Tangibility in Gameplay: Building a Tangible Affective Interface for a Computer Game
Ana Paiva, Rui Prada, Ricardo Chaves, Marco Vala, Adrian Bullock, Gerd Andersson, and Kristina Höök, IST-Technical University of Lisbon and INESC-ID, SICS, and DSV, IT-University in Kista

Multimodal Biometrics: Issues in Design and Testing
Robert Snelick, Mike Indovina, James Yen, and Alan Mink, National Institute of Standards and Technology

(Short Papers)
Sensitivity to Haptic-Audio Asynchrony
Bernard D. Adelstein, Durand R. Begault, Mark R. Anderson, and Elizabeth M. Wenzel, NASA Ames Research Center and QSS Group Inc.

A Multimodal Approach for Determining Speaker Location and Focus
Michael Siracusa, Louis-Philippe Morency, Kevin Wilson, John Fisher, and Trevor Darrell, MIT Computer Science and Artificial Intelligence Laboratory

Distributed and Local Sensing Techniques for Face-to-Face Collaboration
Ken Hinckley, Microsoft Research


THURSDAY 4:30-6:00

MULTIMODAL ARCHITECTURES and FRAMEWORKS

The Georgia Tech Gesture Toolkit: Supporting Experiments in Gesture Recognition
Tracy Westeyn, Helene Brashear, Amin Atrash, and Thad Starner, Georgia Institute of Technology

Architecture and Implementation of Multimodal Plug and Play
Christian Elting, Stefan Rapp, Gregor Moehler, and Michael Strube, European Media Laboratory GmbH and Sony Corporate Laboratories Europe

SmartKom - Adaptive and Flexible Multimodal Access to Multiple Applications
Norbert Reithinger, Jan Alexandersson, Tilman Becker, Anselm Blocher, Ralf Engel, Markus Löckelt, Jochen Müller, Norbert Pfleger, Peter Poller, Michael Streit, and Valentin Tschernomas, DFKI GmbH - German Research Center for Artificial Intelligence

A Framework for Rapid Development of Multimodal Interfaces
Frans Flippo, Allan Meng Krebs, and Ivan Marsic, Rutgers University, Dept. of ECE, and Delft University of Technology


FRIDAY 10:30-12:10

USER TESTS and MULTIMODAL GESTURE

Capturing User Tests in a Multimodal, Multidevice Informal Prototyping Tool
Anoop K. Sinha and James A. Landay, Group for User Interface Research, UC Berkeley

Large Vocabulary Sign Language Recognition Based on Hierarchical Decision Trees
Gaolin Fang, Wen Gao, and Debin Zhao, Harbin Institute of Technology

Hand Motion Gestural Oscillations and Multimodal Discourse
Yingen Xiong, Francis Quek, and David McNeill, Vision Interfaces and System Laboratory (VISLab), Wright State University and The University of Chicago

Pointing Gesture Recognition based on 3D-Tracking of Face, Hands and Head Orientation
Kai Nickel and Rainer Stiefelhagen, Interactive Systems Labs, University of Karlsruhe

(Short Paper)
Untethered Gesture Acquisition and Recognition for a Multimodal Conversational System
Teresa Ko, David Demirdjian, and Trevor Darrell, MIT Computer Science and Artificial Intelligence Laboratory


FRIDAY 3:00-4:30

SPEECH and GAZE

Where is "it"? Event Synchronization in Gaze-Speech Input Systems
Manpreet Kaur, Marilyn Tremaine, Ning Huang, Joseph Wilder, Frans Flippo, Zoran Gacovski, and Chandra Sekhar Mantravadi, Rutgers University, Center for Advanced Information Processing (CAIP) and New Jersey Institute of Technology, Department of Information Systems

Eyetracking in Cognitive State Detection for HCI
George McConkie, University of Illinois at Urbana-Champaign

A Multimodal Learning Interface for Grounding Spoken Language in Sensory Perceptions
Chen Yu and Dana H. Ballard, Department of Computer Science, University of Rochester

(Short Papers)
A Computer-Animated Tutor for Spoken and Written Language Learning
Dominic W. Massaro, UC Santa Cruz

Augmenting User Interfaces with Adaptive Speech Commands
Peter Gorniak and Deb Roy, MIT Media Laboratory


POSTERS: WEDNESDAY 12:30-2:00 and THURSDAY 3:00-4:30

(Long Papers)


Combining Speech and Haptics for Intuitive and Efficient Navigation through Image Databases
Thomas Käster, Michael Pfeiffer, and Christian Bauckhage, Bielefeld University

Interactive Skills Using Active Gaze Tracking
Rowel Atienza and Alexander Zelinsky, Research School of Information Sciences and Engineering, The Australian National University

Error Recovery in a Blended Style Eye Gaze and Speech Interface
Yeow Kee Tan, Nasser Sherkat, and Tony Allen, Nottingham Trent University

Using an Autonomous Cube for Basic Navigation and Input
Kristof Van Laerhoven, Nicolas Villar, Albrecht Schmidt, Gerd Kortuem and Hans-Werner Gellersen, Computing Department, Lancaster University, United Kingdom

GWindows: Robust Stereo Vision for Gesture-Based Control of Windows
Andrew Wilson and Nuria Oliver, Microsoft Research

A Visually Grounded Natural Language Interface for Reference to Spatial Scenes
Peter Gorniak and Deb Roy, MIT Media Laboratory

Perceptual User Interfaces using Vision-Based Eye Tracking
Ravikrishna Ruddarraju, Antonio Haro, Kris Nagel, Irfan Essa, Quan T. Tran, Gregory Abowd, and Elizabeth D. Mynatt, Georgia Institute of Technology

Sketching Informal Presentations
Yang Li, James A. Landay, Zhiwei Guan, Xiangshi Ren, and Guozhong Dai, Group for User Interface Research, UC Berkeley, Chinese Academy of Sciences, and Kochi University of Technology

Gestural Communication over Video Stream: Supporting Multimodal Interaction for Remote Collaborative Physical Tasks
Jiazhi Ou, Susan R. Fussell, Xilin Chen, Leslie D. Setlock, and Jie Yang, School of Computer Science, Carnegie Mellon University

The Role of Spoken Feedback in Experiencing Multimodal Interfaces as Human-like
Pernilla Qvarfordt, Arne Jönsson, and Nils Dahlbäck, Department of Computer and Information Science, Linköping University, Sweden

Real Time Facial Expression Recognition in Video using Support Vector Machines
Philipp Michel and Rana El Kaliouby, University of Cambridge

Modeling Multimodal Integration Patterns and Performance in Seniors: Toward Adaptive Processing of Individual Differences
Benfang Xiao, Rebecca Lunsford, Rachel Coulston, Matt Wesson, and Sharon Oviatt, Oregon Health and Science University/OGI School of Science & Engineering


(Short Papers)

Auditory, Graphical and Haptic Contact Cues for a Reach, Grasp, and Place Task in an Augmented Environment
Mihaela A. Zahariev and Christine L. MacKenzie, Simon Fraser University

Mouthbrush: Drawing and Painting by Hand and Mouth
Chi-ho Chan, Michael J. Lyons, and Nobuji Tetsutani, ATR Media Information Science Labs, Kyoto, Japan

XISL: A Language for Describing Multimodal Interaction Scenarios
Kouichi Katsurada, Yusaku Nakamura, Hirobumi Yamada, and Tsuneo Nitta, Toyohashi University of Technology

IRYS: A Visualization Tool for Temporal Analysis of Multimodal Interaction
Dan Bauer and Jim Hollan, UC San Diego

Towards Robust Person Recognition On Handheld Devices Using Face and Speaker Identification Technologies
Timothy J. Hazen, Eugene Weinstein and Alex Park, MIT Computer Science and Artificial Intelligence Laboratory

Algorithms for Controlling Cooperation between Output Modalities in 2D Embodied Conversational Agents
Sarkis Abrilian, Jean-Claude Martin and Stéphanie Buisine, LIMSI-CNRS and LINC-Univ Paris 8

Towards an Attentive Robotic Dialog Partner
Torsten Wilhelm, Hans-Joachim Böhme, and Horst-Michael Gross, Ilmenau Technical University

List of Demos

LTE: A Multimodal Training Environment for Surgeons
Shahram Payandeh, John Dill, Graham Wilson, Hui Zhang, Lilong Shi, Alan Lomax, and Christine MacKenzie, Simon Fraser University

Playing FantasyA with SenToy
Ana Paiva, Rui Prada, Ricardo Chaves, Marco Vala, Adrian Bullock, Gerd Andersson, and Kristina Höök, IST-Technical University of Lisbon and INESC-ID, SICS AB, and DSV, IT-University in Kista

Baldi: A Computer-Animated Tutor for Spoken and Written Language Learning
Dominic Massaro, UC Santa Cruz

MessagEase: A System for Fast Full-Text Entry for Small Electronic Devices
Saied Nesbat, ExIdeas, Inc.

Mouthbrush: Drawing and Painting by Hand and Mouth
Chi-ho Chan, Michael J. Lyons, and Nobuji Tetsutani, ATR Media Information Science Research Labs, Kyoto, Japan

TorqueBAR: An Ungrounded Haptic Feedback Device
Colin Swindells, Alex Unden, and Tao Sang, University of British Columbia

Discovering the Mind's Eye
Sandra Marshall and Cassandra Davis, EyeTracking, Inc.

Pen-based Access Control and Retrieval of Digital Ink
Anoop Namboodiri and Anil Jain, Michigan State University

Multimodal Interaction with Paper
Phil Cohen and David McGee, Natural Interaction Systems, LLC

SmartKom
David Gelbart and Norbert Reithinger, Berkeley and DFKI