Towards Verifying AI Systems: Testing of Samplers

Wednesday, February 12, 2020, 10:30am

Speaker: Kuldeep Meel, National University of Singapore

Bio

Kuldeep Meel is an Assistant Professor of Computer Science in the School of Computing at the National University of Singapore, where he holds the Sung Kah Kay Assistant Professorship. He received his Ph.D. (2017) and M.S. (2014) degrees from Rice University, and his B.Tech. (with Honors) degree (2012) in Computer Science and Engineering from the Indian Institute of Technology Bombay. His research interests lie at the intersection of Artificial Intelligence and Formal Methods. He is a recipient of the 2019 NRF Fellowship for AI. His work received the 2018 Ralph Budd Award for Best PhD Thesis in Engineering, the 2014 Outstanding Masters Thesis Award from the Vienna Center for Logic and Algorithms, and the Best Student Paper Award at CP 2015.

Location: CBIM 22

Event Type: Seminar

Abstract: Modern AI systems differ significantly from traditional systems in their reliance on probabilistic reasoning. We focus on one core component of probabilistic reasoning: sampling from a discrete distribution specified by a probabilistic model. The widespread use of heuristics in different sampling-based techniques creates a gap between theory and practice: in theory, heuristics nullify guarantees, while in practice the heuristics seem to work well on problems arising from real-world instances. Statistical tests are often employed to argue for the quality of the generated distributions, but such tests are usually performed on a tiny number of samples, for which no theoretical guarantees about their accuracy exist. In contrast to testing deterministic programs, where a single trace suffices to prove the existence of a bug, one sample is typically not sufficient to prove a sampler's non-conformity to the desired distribution. This raises the question: is it possible to design a testing methodology that checks whether a sampler under test generates samples close to a given distribution? We will discuss, to the best of our knowledge, the first algorithmic framework, Barbarik, for testing whether the distribution generated by a sampler is close to the uniform distribution. In contrast to sampling techniques that require an exponential or sub-exponential number of samples for a sampler whose support can be represented by n bits, Barbarik requires a number of samples independent of n. We present a prototype implementation of Barbarik and use it to test three state-of-the-art uniform samplers over supports defined by combinatorial constraints. Barbarik is able to provide a certificate of uniformity for one sampler and to demonstrate non-uniformity for the other two. (Joint work with Sourav Chakraborty.)
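To make the abstract's critique concrete, here is a minimal sketch of the naive statistical-testing approach it contrasts with Barbarik: a chi-squared-style statistic that compares a sampler's empirical frequencies against the uniform expectation. All names and parameters below are illustrative assumptions, not part of Barbarik; the point is that such a statistic only distinguishes samplers reliably when enough samples are drawn, with no a priori guarantee from a tiny sample.

```python
import random
from collections import Counter

def naive_uniformity_statistic(sampler, support_size, num_samples=20000):
    """Chi-squared-style statistic vs. the uniform distribution.

    This is the naive baseline the abstract critiques, not Barbarik:
    a small statistic is weak evidence of uniformity, and the test
    offers no theoretical guarantee when num_samples is small.
    """
    counts = Counter(sampler() for _ in range(num_samples))
    expected = num_samples / support_size  # uniform expectation per outcome
    return sum((counts.get(x, 0) - expected) ** 2 / expected
               for x in range(support_size))

# Hypothetical samplers over the support {0, ..., 7}:
uniform_sampler = lambda: random.randrange(8)            # genuinely uniform
skewed_sampler = (lambda: 0 if random.random() < 0.5     # favors outcome 0
                  else random.randrange(8))
```

With many samples, the skewed sampler produces a far larger statistic than the uniform one; with only a handful of samples, the two are hard to tell apart, which is precisely the gap that motivates a framework with sample-complexity guarantees.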

Faculty Host: Santosh Nagarakatte