Distinguished Lecture Series
1/16/2015 11:00 am
Via Video Wall in CoRE A (Room 301) and CBIM (Room 22)

Deep Learning

Geoff Hinton, University of Toronto

Faculty Host: Dimitris Metaxas

Abstract

I will give a brief history of deep learning, explaining what it is, what kinds of tasks it should be good for, and why it was largely abandoned in the 1990s. I will then describe how ideas from statistical physics were used to make deep learning work much better on small datasets. Finally, I will describe how deep learning is now used by Google for speech recognition and object recognition, and how it may soon be used for machine translation.

Bio

Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He spent five years as a faculty member in the Computer Science department at Carnegie Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years, from 1998 until 2001, setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto. Since 2013, he has been splitting his time between the University of Toronto and Google.

Geoffrey Hinton is a fellow of the Royal Society, an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He was awarded the first David E. Rumelhart Prize (2001), the IJCAI Award for Research Excellence (2005), the Killam Prize for Engineering (2012), and the NSERC Herzberg Gold Medal (2010), which is Canada's top award in science and engineering.

Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets, and to show that this is how the brain learns to see. He was one of the researchers who introduced the back-propagation algorithm, which has been widely used in practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts, and deep belief nets. His students used deep learning to change the way speech recognition and object recognition are done.