Given a query where the answer is sitting on a single web page, search
engines do a decent job of identifying that web page. Some queries,
however, are difficult to express in text: an image is worth a
thousand query terms. At other times, a textual query is sufficient,
but it's difficult to type (e.g., while walking or driving). And some
research tasks
don't have a simple answer already computed: it's necessary to
synthesize an answer from multiple sources. I'll talk about some of
the research projects at Google directed at solving these problems,
including Google Goggles, Search by Voice, and Google Squared,
and some of the other challenges that we're tackling that go beyond
returning ten blue links.
Disclaimer: I'm not an expert on image or voice recognition, so that
part of the talk won't be at a deep technical level.
Note: The speaker will be available to meet undergraduate and
graduate students in CS and related areas between 2 and 3 at Core A.