Multimodal conversational search

[Image: Four people in a classroom looking at a large screen showing a picture of a robot, text, a search box, and two images. At the front of the classroom, two researchers work on their laptops.]

The purpose of this study, which is part of the ARC project “A Pictorial Communication Framework for Inclusion”, is to explore the design of applications that would allow people to point to images, in addition to using their voice, when interacting with online search engines. These interactions could take place in groups (on interactive whiteboards) or individually (on tablets).

At the beginning of the study, an experimenter (a member of the research team) will simulate the search engine's responses, ensuring that participants are not limited by what search engines can currently do and understand.

[Image: Four people in a classroom, with a large screen showing text and two pictures of a horse. The principal researcher is at the front, with similar images showing on a laptop.]

We built our own software to ensure that it is accessible to all participants, and that experimenters can respond to voice, touch, and text input as quickly as possible, replying with voice, text, and images all at once.
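To make this concrete, here is a minimal sketch of how a wizard console might represent participant input and compose a reply that carries every modality in a single message, which is what lets the experimenter respond in all modalities at once. This is a hypothetical illustration, not the project's actual software: the message types, field names, and composing logic are all assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical message types for a Wizard-of-Oz search interface.
# Field names are illustrative assumptions, not the project's schema.

@dataclass
class ParticipantInput:
    """One turn of participant input, in any modality."""
    modality: str            # "voice", "touch", or "text"
    transcript: str = ""     # speech-to-text output or typed query
    touched_image: str = ""  # id of an image the participant pointed to

@dataclass
class WizardResponse:
    """A composed response the wizard sends back all at once."""
    spoken_text: str    # read aloud via text-to-speech
    display_text: str   # shown on the shared screen
    image_urls: list = field(default_factory=list)  # images to display

def compose_response(inp: ParticipantInput) -> WizardResponse:
    """Pre-fill a multimodal reply so the experimenter can answer
    quickly, whatever modality the participant used."""
    if inp.modality == "touch" and inp.touched_image:
        text = "Here are results similar to the image you pointed at."
        return WizardResponse(spoken_text=text, display_text=text,
                              image_urls=[inp.touched_image])
    text = f"Here is what I found for: {inp.transcript}"
    return WizardResponse(spoken_text=text, display_text=text)

if __name__ == "__main__":
    turn = ParticipantInput(modality="touch", touched_image="horse_1.jpg")
    print(compose_response(turn))
```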


Team

Associate Professor Laurianne Sitbon

Sirin Roomkham

Shannon Terris

Alicia Mitchell