CS Events
Faculty Candidate Talk
Variational Auto Encoder (VAE): Theory and its Applications
Tuesday, April 22, 2025, 11:00am - 12:30pm
Speaker: Diana Kim
Bio
Diana Kim is a part-time lecturer at Rutgers and taught "Machine Learning Principles" to senior students in 2024 and 2025. She earned her Ph.D. in Computer Science at Rutgers in 2022 under the supervision of Prof. Elgammal at the Art and AI Lab, and completed a postdoc at the Vision CAIR group at KAUST in Saudi Arabia in 2024. Her research interprets art patterns within the latent spaces of various deep neural networks, using language models and art principles. Her work has been published and presented at AI venues including ICSC, ICCC, AAAI (2018, 2022), and CVPR workshops (2024). Her current research focuses on making AI vision and language systems more structured by grounding them in general domain knowledge, so that they rely less on purely empirical, data-driven approaches. She enjoys teaching, serving as a mentor for undergraduate research internships and as an instructor for machine learning and AI classes. Before transitioning to computer science, her interests were in Electrical Engineering with a specialization in communication theory. She received an M.S. from USC (2009) and a B.S. from Ewha Womans University (South Korea, 2003). Outside academia, she worked at Samsung Electronics as a software engineer (South Korea, 2003 – 2006).
Location: CoRE 301
Event Type: Faculty Candidate Talk
Abstract: In machine learning, computing posterior probabilities is crucial for revealing the key factors underlying high-dimensional data. However, finding an exact posterior is often intractable. The Variational Auto Encoder (VAE) is a framework for finding an approximate posterior that is still rich enough to capture valuable information from the data. To review the theoretical and practical aspects of the VAE, in this class we will talk about how the approximation is realized through the following components: (1) an architecture that integrates the bidirectional transitions between data and posterior (encoder & decoder) into a unified neural net; (2) an optimization that targets a lower bound (the ELBO) to tackle the intractability of the original loss; and (3) the reparameterization trick, which enables backpropagation through a VAE's random source, something that would otherwise be impossible. In the latter part of the class, we will focus on practical applications of the VAE. The instructor will explain how the probabilistic approach learns a continuous latent space. Through various experimental examples, we will see how the latent space can be used to disentangle data factors and to generate new data via the VAE's decoder block.
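For attendees who want a concrete picture of the three components named in the abstract, below is a minimal sketch in PyTorch. It is not the speaker's implementation; the layer sizes, names, and Gaussian/Bernoulli modeling choices are illustrative assumptions.

    # Minimal VAE sketch (illustrative, not the speaker's code).
    # Assumes flattened 28x28 inputs in [0, 1]; all dimensions are arbitrary choices.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, x_dim=784, h_dim=400, z_dim=20):
            super().__init__()
            # (1) Encoder: maps data x to the parameters (mu, log-variance)
            #     of the approximate posterior q(z|x).
            self.enc = nn.Linear(x_dim, h_dim)
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            # (1) Decoder: maps a latent sample z back to data space, p(x|z).
            self.dec1 = nn.Linear(z_dim, h_dim)
            self.dec2 = nn.Linear(h_dim, x_dim)

        def encode(self, x):
            h = F.relu(self.enc(x))
            return self.mu(h), self.logvar(h)

        def reparameterize(self, mu, logvar):
            # (3) Reparameterization trick: z = mu + sigma * eps moves the
            #     random source into eps ~ N(0, I), so gradients can flow
            #     through mu and sigma during backpropagation.
            std = torch.exp(0.5 * logvar)
            eps = torch.randn_like(std)
            return mu + std * eps

        def decode(self, z):
            return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

        def forward(self, x):
            mu, logvar = self.encode(x)
            z = self.reparameterize(mu, logvar)
            return self.decode(z), mu, logvar

    def elbo_loss(x_hat, x, mu, logvar):
        # (2) Negative ELBO: reconstruction term plus the closed-form KL
        #     divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I).
        recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

    # Usage (illustrative): for a batch x of shape (batch, 784),
    #   model = VAE(); x_hat, mu, logvar = model(x)
    #   loss = elbo_loss(x_hat, x, mu, logvar); loss.backward()

Sampling new data then reduces to drawing z ~ N(0, I) and passing it through the decoder, which is the generation use case the abstract mentions.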
Contact: Professor Richard Martin
Join Zoom Meeting
https://rutgers.zoom.us/j/2014444359?pwd=WW9ybFNCNVFrUWlycHowSHdNZjhzUT09
Meeting ID: 201 444 4359
Password: 550978