Many real-world applications involve extracting information from data efficiently. Supported by an ever-increasing amount of collected data, we want computers to describe our environment more holistically, which involves predicting several random variables that are statistically related. A great tool for encoding those dependencies is a structured model such as a Markov random field.
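To make the idea concrete, here is a minimal sketch (not the algorithm presented in the talk) of a tiny chain-structured Markov random field: unary costs express per-variable evidence, pairwise costs encode the statistical dependencies, and the most likely configuration (MAP) is the joint assignment of lowest energy. The specific potentials are illustrative assumptions; real models with millions of variables cannot be enumerated like this, which is exactly the difficulty the talk addresses.

```python
import itertools

# Three binary variables with illustrative unary costs (per-variable evidence).
unary = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]  # unary[i][label]

def pairwise(a, b):
    # Potts-style smoothness prior: penalize neighboring variables
    # that take different labels.
    return 0.0 if a == b else 0.6

def energy(labels):
    # Total energy = sum of unary costs + sum of pairwise costs along the chain.
    e = sum(unary[i][l] for i, l in enumerate(labels))
    e += sum(pairwise(labels[i], labels[i + 1]) for i in range(len(labels) - 1))
    return e

# MAP configuration = lowest-energy joint assignment (exhaustive search,
# feasible only because this toy model has 2**3 = 8 configurations).
best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))
```

Note how the smoothness term couples the variables: the MAP assignment is not simply the per-variable minimum of the unary costs, which is why joint inference is needed at all.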
In the first part of this talk I will discuss the difficulties of finding the most likely variable configuration of a structured model. I will then present a model-parallel inference algorithm and illustrate its effectiveness by jointly estimating the disparity of more than 12 million variables.
In the second part, I will show how to combine structured distributions with deep learning to estimate complex representations that take the dependencies between the random variables into account. To train these deep structured distributions I will present a sample-parallel algorithm and demonstrate its applicability on, among other tasks, 3D scene understanding.