Events Feed

Start Date: 04 Apr 2023;
Start Time: 10:30AM - 11:30AM
Title: Building Better Data-Intensive Systems Using Machine Learning

Bio:

Ibrahim Sabek is a postdoc at MIT and an NSF/CRA Computing Innovation Fellow. He is interested in building the next generation of machine learning-empowered data management, processing, and analysis systems. Before MIT, he received his Ph.D. from the University of Minnesota, Twin Cities, where he studied machine learning techniques for spatial data management and analysis. His Ph.D. work received the university-wide Best Doctoral Dissertation Honorable Mention from the University of Minnesota in 2021. He also won first place in the graduate Student Research Competition (SRC) at ACM SIGSPATIAL 2019 and the best paper runner-up award at ACM SIGSPATIAL 2018.


Speaker:
Abstract: Database systems have traditionally relied on handcrafted approaches and rules to store large-scale data and process user queries over them. These well-tuned approaches and rules work well for the general-purpose case, but are seldom optimal for any actual application because they are not tailored to the specific application's properties (e.g., user workload patterns). One possible solution is to build a specialized system from scratch, tailored to each application's needs. Although such a specialized system is able to achieve orders-of-magnitude better performance, building it is time-consuming and requires substantial manual effort. This creates a need for automated solutions that abstract away system-building complexities while getting as close as possible to the performance of specialized systems. In this talk, I will show how we leverage machine learning to instance-optimize the performance of query scheduling and execution operations in database systems. In particular, I will show how deep reinforcement learning can fully replace a traditional query scheduler. I will also show that—in certain situations—even simpler learned models, such as piece-wise linear models approximating the cumulative distribution function (CDF) of data, can help improve the performance of fundamental data structures and execution operations, such as hash tables and in-memory join algorithms.
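
To make the learned-CDF idea above concrete, here is a rough sketch in Python (my own toy example; the segment count, quantile breakpoints, and table size are illustrative assumptions, not details from the speaker's systems): fit a piece-wise linear approximation of the keys' empirical CDF and use it to map keys to hash-table slots.

```python
import numpy as np

def fit_cdf_model(keys, num_segments=16):
    """Piece-wise linear CDF model: breakpoints at evenly spaced quantiles."""
    keys = np.sort(np.asarray(keys, dtype=float))
    qs = np.linspace(0.0, 1.0, num_segments + 1)
    xs = np.quantile(keys, qs)          # segment boundaries; CDF(xs[i]) ~ qs[i]
    return xs, qs

def learned_slot(key, xs, qs, table_size):
    """Map a key to a slot via linear interpolation of the learned CDF."""
    cdf = np.interp(key, xs, qs)        # piece-wise linear CDF estimate in [0, 1]
    return min(int(cdf * table_size), table_size - 1)

keys = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=10_000)
xs, qs = fit_cdf_model(keys)
print(learned_slot(keys[0], xs, qs, table_size=2 ** 14))
```

Because the CDF maps keys roughly uniformly onto [0, 1], slots fill evenly even for heavily skewed key distributions, which is what makes such models useful inside hash tables and joins.
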
Location: CoRE 301
Committee:
Start Date: 06 Apr 2023;
Start Time: 10:30AM - 11:30AM
Title: Optimization When You Don’t Know the Future

Bio:

Roie is a Fulbright Postdoctoral Fellow at Tel Aviv University, working with Niv Buchbinder. He received his PhD in Algorithms, Combinatorics and Optimization (ACO) from Carnegie Mellon University, where he was advised by Anupam Gupta. Before that, he was a research engineer at the Allen Institute for AI in Seattle, and before that he received bachelor's degrees in math and computer science from Brown University. Roie's research spans approximation algorithms, algorithms for uncertain environments, and submodular optimization (the discrete cousin of convex optimization).


Speaker:
Abstract: Discrete optimization is a powerful toolbox used ubiquitously in computer science and beyond; yet, for many applications, it is unrealistic to expect a complete and accurate description of the problem at hand. How should we approach solving problems when we are uncertain about the input? In this talk I will survey my research on algorithms under uncertainty, which is a framework for answering such questions. I will talk about algorithmic models that try to capture different kinds of uncertainty in optimization problems, the interplay between computational hardness and information, and applications to a variety of common algorithmic tasks. My work has focused on three different kinds of uncertain environments: (a) Online settings, where the input is revealed piecemeal and the algorithm must commit to irrevocable decisions as it maintains feasibility. (b) Dynamic settings, where the input changes over time and the goal is to maintain a feasible solution that moves as little as possible between updates. (c) Streaming settings, where the input is too large to hold in memory all at once and the algorithm must compute a solution with only limited memory after a few sequential passes over the data. An important motif throughout my research is the study of submodular functions, which are a natural discrete analog of convex/concave functions.
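
For readers unfamiliar with submodularity, the toy snippet below (my own example, not material from the talk) shows the diminishing-returns property for a simple coverage function: adding an element to a smaller set helps at least as much as adding it to a larger one.

```python
# f(S) = number of distinct items covered by the sets chosen in S
sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}}

def coverage(S):
    return len(set().union(*(sets[s] for s in S))) if S else 0

def marginal_gain(S, x):
    return coverage(S | {x}) - coverage(S)

small, large = {"a"}, {"a", "b"}
# Diminishing returns: the gain of adding "c" never increases as the set grows.
print(marginal_gain(small, "c"), marginal_gain(large, "c"))   # 3 2
```
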
Location: CoRE 301
Committee:
Start Date: 06 Apr 2023;
Start Time: 03:30PM - 05:00PM
Title: Enhancing Language Models with Logical Reasoning and Automatic Error Analysis

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Yongfeng Zhang (Advisor)

Professor He Zhu

Professor Hao Wang

Professor Peng Zhang

 

Start Date: 10 Apr 2023;
Start Time: 10:30AM - 11:30AM
Title: Gaps in My Research

Bio:

Bender is Professor and David R. Smith Leading Scholar in Computer Science at Stony Brook University. He was Founder and Chief Scientist at Tokutek, Inc., an enterprise database company, which was acquired by Percona in 2014. Bender's research interests span the areas of data structures and algorithms, I/O-efficient computing, scheduling, and parallel computing. He has coauthored over 180 articles on these and other topics. He has won several awards, including an R&D 100 Award, a Test-of-Time Award, a Distinguished Paper Award, two Best Paper Awards, and five awards for graduate and undergraduate teaching.


 

Bender received his B.A. in Applied Mathematics from Harvard University in 1992 and obtained a D.E.A. in Computer Science from the École Normale Supérieure de Lyon, France, in 1993. He completed a Ph.D. on scheduling algorithms at Harvard University in 1998. He has held Visiting Scientist positions at both MIT and King's College London. He is a Fellow of the European Association for Theoretical Computer Science (EATCS).


Speaker:
Abstract: In my first computer science course, we learned that insertion sort runs in $O(n^2)$ time---each insertion into the array takes time $O(n)$ and there are $n$ insertions. I distinctly remember asking, "why not do what librarians do? Why not leave gaps in the array in anticipation of future insertions?" Some years later, I would find the answer to this question---adding gaps to insertion sort improves its running time to $O(n \log n)$. This technique of strategically leaving gaps in arrays to support future insertions is surprisingly powerful. I'll explain how leaving gaps leads to a general approach for designing platform-independent data structures. I'll also present two recent theoretical breakthroughs: how we (1) solved a 40-year-old problem on how efficiently one can maintain a dynamic set of sorted items in an array, and (2) overturned 60-year-old conventional wisdom on the performance of linear-probing hash tables. Throughout the talk, I'll emphasize the surprising bi-directional bridge between algorithms and real-world systems building.
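
A rough sketch of the gap idea in Python (a toy illustration under my own simplifying assumptions, not the speaker's actual data structure): keep empty slots interleaved with the sorted elements so an insertion only shifts items up to the nearest gap, and re-spread the elements whenever the array becomes half full.

```python
def insertion_sort_with_gaps(items):
    """Toy 'library sort': a gapped array in which None marks an empty slot."""
    slots = [None, None]
    count = 0

    def respread(capacity):
        # Lay the current elements out with one gap after each of them.
        elems = [x for x in slots if x is not None]
        fresh = [None] * capacity
        for i, x in enumerate(elems):
            fresh[2 * i] = x
        return fresh

    for x in items:
        count += 1
        if 2 * count > len(slots):              # keep the array at most half full
            slots = respread(4 * count)
        pos, last_occupied = 0, -1
        while pos < len(slots):                 # first occupied slot holding >= x
            if slots[pos] is not None:
                if slots[pos] >= x:
                    break
                last_occupied = pos
            pos += 1
        if pos == len(slots):                   # x is the largest so far
            slots[last_occupied + 1] = x
            continue
        gap = pos                               # shift right only until the first gap
        while slots[gap] is not None:
            gap += 1
        slots[pos + 1:gap + 1] = slots[pos:gap]
        slots[pos] = x

    return [x for x in slots if x is not None]

print(insertion_sort_with_gaps([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```

With a suitable re-spreading policy, the expected shift per insertion stays small, which is the intuition behind the $O(n \log n)$ bound mentioned in the abstract.
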
Location: CoRE 301
Committee:
Start Date: 11 Apr 2023;
Start Time: 12:00PM - 02:00PM
Title: Towards Generalized Modeling for Physics-based Simulation in Computer Graphics

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Mridul Aanjaneya

Professor Dimitris Metaxas

Professor Abdeslam Boularias

Professor Bo Zhu (Dartmouth College)

 

Start Date: 12 Apr 2023;
Start Time: 01:30PM - 03:00PM
Title: Programmatic Reinforcement Learning

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Prof. He Zhu (advisor)

Prof. Shiqing Ma

Prof. Srinivas Narayana

Prof. Qiong Zhang

 

Start Date: 14 Apr 2023;
Start Time: 10:00AM - 12:00PM
Title: Integrate Logical Reasoning and Machine Learning for Decision Making

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Yongfeng Zhang (Advisor)

Professor Hao Wang

Professor Dong Deng

Professor Sudarsun Kannan

 

Start Date: 19 Apr 2023;
Start Time: 03:00PM - 04:30PM
Title: CrossPrefetch: Accelerating I/O Prefetching for Modern Storage

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Sudarsun Kannan (Advisor)

Professor Srinivas Narayana

Professor Badri Nath

Professor Karthik C. S.

Professor Manish Parashar (University of Utah)

 

Start Date: 20 Apr 2023;
Start Time: 10:30AM - 11:30AM
Title: Lift-and-Project for Statistical Machine Learning Models

Bio:

My research interests are in theoretical computer science, cryptography and data privacy, and machine learning theory. I am also interested in understanding the interfaces between these areas.

I have been on the faculty of the Department of Management Science as an assistant and associate professor since September 2015. Since then I have co-supervised Hafiz Asif, a Ph.D. student in my department and now an assistant professor at Hofstra University. Currently, I am also the advisor of Nathaniel Hobbs, whose expected graduation from the Ph.D. program is in August 2023. Hafiz did his Ph.D. in the theoretical foundations of data privacy. Nathaniel is doing his Ph.D. on problems at the intersection of Machine Learning and Cryptography, in particular on obfuscating and interpreting deep networks. Before Rutgers, I was an assistant professor (February 2010 - July 2015) at Andrew Yao's Institute, where four Ph.D. students graduated under my direct supervision (I was a habilitated Ph.D. supervisor for the duration of my appointment at Tsinghua). Bangsheng Tang did his Ph.D. with me in proof complexity and is now with Facebook Research; Hao Song did his Ph.D. with me in communication complexity and is now an engineer at Pony.AI; Guang Yang did his Ph.D. with me in cryptography and is an assistant professor at the Chinese Academy of Sciences (Institute of Computing Technology); and Shiteng Chen did his Ph.D. with me in circuit complexity and is now an associate professor at the Chinese Academy of Sciences (Institute of Software). I also supervised numerous diploma and M.Sc. theses. These students continued their Ph.D.s in Computer Science at Princeton, Harvard, and CMU and are now postdoctoral fellows, research assistant professors, and assistant professors at CMU, UPenn, and elsewhere.


Speaker:
Abstract: In supervised learning, the prediction accuracy is critically bounded by learning errors. We introduce Lift-and-Project (LnP), a meta algorithm for probabilistic models that boosts multi-class classification accuracy. Unlike previous learning error reduction methods, LnP maps each class into a number of new classes and learns new class distributions "lifted" to a higher dimension. Specifically, instead of estimating the probability of a class c given an instance x, we estimate the probability of (c,c') given x, where (c,c') indicates that c is more likely to be the correct label for x than c', and c' encodes errors of the standard model. By marginalizing the new distributions for c, we "project" the lifted model back to the form of the original problem. We prove that in principle our method reduces the learning error exponentially. Experiments demonstrate significant improvements in prediction accuracy on standard datasets for discriminative and generative models.
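
As a reading aid for the abstract above, here is how I understand the "project" step in code (the shapes and variable names are my own illustrative assumptions, not the paper's API): the lifted model outputs a distribution over ordered class pairs (c, c'), and marginalizing out c' recovers scores over the original classes.

```python
import numpy as np

num_classes = 4
rng = np.random.default_rng(0)

# Stand-in for the lifted model's output on one instance x:
# lifted[c, c_prime] is proportional to P((c, c') | x).
lifted = rng.random((num_classes, num_classes))
lifted /= lifted.sum()                 # normalize the joint over pairs

# "Project" back to the original problem: P(c | x) = sum over c' of P((c, c') | x).
projected = lifted.sum(axis=1)
print(projected.argmax(), projected)
```
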
Location: CoRE 301
Committee:
Start Date: 20 Apr 2023;
Start Time: 04:00PM - 06:00PM
Title: Unsupervised Learning of Cardiac Wall Motion from Imaging Sequences

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Dimitris Metaxas (Advisor)

Professor Yongfeng Zhang

Professor Hao Wang

Professor Richard Martin

 

Start Date: 21 Apr 2023;
Start Time: 11:00AM - 12:30PM
Title: Synthesizing Program-guided Machine Learning Models

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor He Zhu (Advisor)

Professor Yongfeng Zhang

Professor Santosh Nagarakatte

Professor Konstantinos Michmizos

 

Start Date: 24 Apr 2023;
Start Time: 02:00PM - 04:00PM
Title: Unlocking Artificially Intelligent Video Understanding through Object-Centric Relational Reasoning

Bio:
Speaker:
Abstract: See above
Location: CBIM 22
Committee:

Professor Mubbasir Kapadia (Chair)

Professor Vladimir Pavlovic

Professor Dimitris Metaxas

Dr. Iqbal Mohomed (Toronto AI Research Centre)

Start Date: 27 Apr 2023;
Start Time: 01:00PM - 03:00PM
Title: Learning Explicit Shape Abstractions with Deep Deformable Models

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Dimitris Metaxas (Advisor)

Professor Yongfeng Zhang

Professor Konstantinos Michmizos

Professor Jie Gao

 

Start Date: 27 Apr 2023;
Start Time: 04:00PM - 06:00PM
Title: Leveraging Powerful Attention Mechanisms for Biological Image Segmentation

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Dimitris Metaxas (Chair)

Professor Konstantinos Michmizos

Professor Yongfeng Zhang

Professor Aaron Bernstein

 

Start Date: 09 May 2023;
Start Time: 10:00AM - 12:00PM
Title: Efficient Quantum Circuit Compilation with Permutable Operators through a Time-Optimal SWAP Insertion Approach

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Zheng Zhang (Advisor)

Professor Yipeng Huang

Professor Mario Szegedy

Professor Casimir Kulikowski

 

Start Date: 10 May 2023;
Start Time: 01:30PM - 03:30PM
Title: Multi-pass Semi-streaming Lower Bounds for Approximating Maximum Matching

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Sepehr Assadi (Advisor)

Professor Aaron Bernstein

Professor Mike Saks

Professor Yongfeng Zhang

 

Start Date: 10 May 2023;
Start Time: 04:00PM - 06:00PM
Title: Visual Learning In-the-Wild with Limited Supervision

Bio:
Speaker:
Abstract: See above
Location: CBIM 22
Committee:

Professor Vladimir Pavlovic (Chair)

Professor Yongfeng Zhang

Professor Hao Wang

Professor Adriana Kovashka (University of Pittsburgh)

 

Start Date: 12 May 2023;
Start Time: 03:00PM - 05:00PM
Title: Motion Planning and System Identification for Reliable Robot Actions

Bio:
Speaker:
Abstract: See above
Location: SPR-403 (1 Spring Street, New Brunswick, NJ)
Committee:

Professor Kostas Bekris (Advisor)

Professor Abdeslam Boularias

Professor Mridul Aanjaneya

Professor Yipeng Huang

 

Start Date: 12 May 2023;
Start Time: 04:00PM - 05:30PM
Title: Skeleton-Based Isolated Sign Recognition using Graph Convolutional Networks

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Dimitris N. Metaxas (Advisor)

Professor Konstantinos Michmizos

Professor Vladimir Pavlovic

Professor Zheng Zhang

 

Start Date: 15 May 2023;
Start Time: 11:00AM - 01:00PM
Title: Cyber-Physical Systems for Logistics Delivery

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Desheng Zhang (Advisor)

Professor Hao Wang

Professor Dong Deng

Professor Xiong Fan

 

Start Date: 16 May 2023;
Start Time: 09:00AM - 11:00AM
Title: Cyber-Physical Systems for Location-based Services

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Desheng Zhang (Advisor)

Professor Yongfeng Zhang

Professor Jie Gao

Professor Jingjin Yu

 

Start Date: 16 May 2023;
Start Time: 10:30AM - 12:00PM
Title: Cyber-Physical Systems for Urban Mobility

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Desheng Zhang (Advisor)

Professor Yongfeng Zhang

Professor Dong Deng

Professor Karl Stratos

 

Start Date: 17 May 2023;
Start Time: 09:00AM - 11:00AM
Title: Coherence as a Key Ingredient to Learn Effective Communication Strategies

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Matthew Stone (Chair)

Professor Yongfeng Zhang

Professor Karl Stratos

Professor Matthew Purver (Queen Mary, University of London)

 

Start Date: 29 May 2023;
Start Time: 11:00AM - 12:30PM
Title: Defending against Backdoor Attacks on Deep Neural Networks

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Shiqing Ma (Advisor)

Professor Dimitris Metaxas

Professor Hao Wang

Professor Sepehr Assadi

 

Start Date: 05 Jun 2023;
Start Time: 01:30PM - 03:00PM
Title: Scaling Stateful Applications with Adaptive Scheduling

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Sudarsun Kannan (Advisor)

Professor Richard Martin

Professor Srinivas Narayana Ganapathy

Professor James Abello

 

Start Date: 14 Jun 2023;
Start Time: 10:00AM - 12:00PM
Title: Context-Sensitive Narrative Generation for Virtual Populations and Application to Human-Building Interaction

Bio:
Speaker:
Abstract: See above
Location: Virtual
Committee:

Professor Mubbasir Kapadia (Chair)

Professor Mridul Aanjaneya

Professor Jingjin Yu

Professor Nuria Pelechano (Polytechnic University of Catalonia)

 

Start Date: 19 Jun 2023;
Start Time: 04:00PM - 06:00PM
Title: Some Problems on Multi-Sensor Layout Optimization

Bio:
Speaker:
Abstract: See above
Location: Virtual
Committee:

Prof. Jingjin Yu (Chair)

Prof. Kostas Bekris

Prof. Abdeslam Boularias

Dr. Zherong Pan (Tencent America)

 

Start Date: 20 Jul 2023;
Start Time: 04:00PM - 06:00PM
Title: The Power of Low Associativity

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Martin Farach-Colton (Chair)

Professor Aaron Bernstein

Professor Sepehr Assadi

Professor Dominik Kempa (Stony Brook University)

 

Start Date: 01 Aug 2023;
Start Time: 12:00PM - 02:00PM
Title: Program Compilation and Optimization Enhancement Through Graph Theoretical Methods

Bio:
Speaker:
Abstract: See above
Location: Virtual
Committee:

Professor Zheng Zhang (Chair)

Professor Mario Szegedy

Professor Ulrich Kremer

Professor Fred Chong (University of Chicago)

 

Start Date: 03 Aug 2023;
Start Time: 12:00PM - 02:00PM
Title: Differentially Private Auditing and Monitoring

Bio:
Speaker:
Abstract: See above
Location: CoRE 301
Committee:

Professor Anand Sarwate (Chair)

Professor Santosh Nagarakatte

Professor Rebecca Wright (Barnard College)

Professor David Cash (University of Chicago)

 

Start Date: 03 Aug 2023;
Start Time: 01:00PM - 03:00PM
Title: Multi-Object Manipulation Leveraging Object Dependencies

Bio:
Speaker:
Abstract: See above
Location: CoRE 305
Committee:

Professor Kostas Bekris (Advisor)

Professor Jingjin Yu

Professor Matthew Stone

Professor Jie Gao

 

Start Date: 21 Aug 2023;
Start Time: 10:00AM - 12:00PM
Title: Context-Sensitive Narrative Generation for Virtual Populations and Application to Human-Building Interaction

Bio:
Speaker:
Abstract: See above
Location: Virtual
Committee:

Professor Mubbasir Kapadia (Chair)

Professor Mridul Aanjaneya

Professor Jingjin Yu

Professor Nuria Pelechano (Polytechnic University of Catalonia)

 

Start Date: 22 Aug 2023;
Start Time: 03:00PM - 05:00PM
Title: Self-Supervised Object-Centric Representation Learning of Computer Vision and Language Understanding Models

Bio:
Speaker:
Abstract: See above
Location: CoRE 301 and Virtual
Committee:

Professor Gerard De Melo (Chair)

Professor Matthew Stone

Professor Yongfeng Zhang

Professor Daniel Khashabi (Johns Hopkins University)

 

Start Date: 31 Aug 2023;
Start Time: 01:00PM - 03:00PM
Title: Meta-Complexity: Connections to One-Way Functions and Zero-Knowledge Protocols

Bio:
Speaker:
Abstract: See above
Location: CoRE 305 and Zoom
Committee:

Professor Eric Allender (Chair)

Professor Mike Saks

Professor Sepehr Assadi

Professor Valentine Kabanets (Simon Fraser University)

 

Start Date: 08 Sep 2023;
Start Time: 04:00PM - 06:00PM
Title: Image Generation for Healthcare Applications

Bio:
Speaker:
Abstract: The rapid advancement of artificial intelligence (AI) and machine learning (ML) techniques has opened new horizons for their application in healthcare, notably in the domain of image generation. The capacity to synthetically generate medical images can aid in various areas, including model training, disease diagnosis, treatment planning, and patient education. Techniques such as Generative Adversarial Networks (GANs) and diffusion models have demonstrated significant proficiency in generating high-resolution, realistic medical images. These synthetically generated images can bolster the available dataset, especially in cases where real medical images are scarce or privacy concerns inhibit sharing. This can be especially crucial in rare disease diagnosis, where sample images may be limited. Moreover, image generation can improve the resolution of medical images, potentially revealing missing information that is crucial to diagnosing disease. In this talk, I will review two works of mine that utilize different methods to address the lack of data in healthcare settings. The first work lifts the barrier of data sharing in the healthcare industry by utilizing federated learning. The second work improves the spatial resolution of a patient's cine magnetic resonance imaging (cMRI) so that it can be used for downstream cardiac disease diagnosis. Both sets of experimental results show superior performance compared to previous methods.
Location: CoRE 301
Committee:

Professor Dimitris Metaxas (Chair)

Professor Konstantinos Michmizos

Professor Hao Wang

Professor David M Pennock

Start Date: 11 Sep 2023;
Start Time: 10:30AM - 11:30AM
Title: Fast Algorithms for Massive Graphs

Bio:

Aaron Bernstein is an assistant professor at Rutgers University working on graph algorithms. He is funded by an NSF CAREER grant on sublinear algorithms and a Google Research Grant, and he is the recipient of the 2023 Presburger Award for distinguished young scientists in theoretical computer science.


Speaker:
Abstract: In this talk, I will discuss my recent work on fast algorithms for graphs, especially algorithms for alternative models of computation that address the challenges of processing very large graphs. I will focus on two branches of my work. The first topic is new algorithmic tools for processing directed graphs, which are used to represent asymmetric relationships between objects; such graphs are much more difficult to process because they do not permit the natural notions of clustering that are widely used in undirected graphs. The second topic is fast algorithms for finding a large matching in a graph.
Location: CoRE 301
Committee:
Start Date: 12 Sep 2023;
Start Time: 10:30AM - 11:30AM
Title: Trustworthy AI for Human and Science

Bio:

Yongfeng Zhang is an Assistant Professor in the Department of Computer Science at Rutgers University. His research interests are in Machine Learning, Machine Reasoning, Information Retrieval, Recommender Systems, Natural Language Processing, Explainable AI, and Fairness in AI. His research appears in top-tier computer science conferences and journals such as SIGIR, WWW, KDD, ICLR, RecSys, ACL, NAACL, CIKM, WSDM, AAAI, IJCAI, TOIS, TORS, TIST, etc. His research is generously supported by funds from Rutgers, NSF, NIH, Google, Facebook, eBay, Adobe, and NVIDIA. He serves as Associate Editor for ACM Transactions on Information Systems (TOIS), ACM Transactions on Recommender Systems (TORS), and Frontiers in Big Data. He is a Siebel Scholar of the class of 2015 and an NSF CAREER awardee in 2021.


Speaker:
Abstract: Artificial Intelligence (AI) has become an essential part of our society, and it is widely adopted in both human-oriented and science-oriented tasks. However, irresponsible use of AI techniques may bring counter-effects such as compromised user trust due to non-transparency and unfair treatment of different populations. In this talk, we will introduce our recent research on Trustworthy AI with a focus on explainability, fairness, robustness, privacy, and controllability as well as their implications, which are some of the most important perspectives to consider when building Trustworthy AI systems. We will introduce Trustworthy AI in terms of both methodology and application. On methodology, we will introduce causal and counterfactual reasoning, neural-symbolic reasoning, knowledge reasoning, explainable graph neural networks, and large language models for building Trustworthy AI systems. On application, we will cover both human-oriented tasks such as search engines, recommender systems, and e-commerce, and science-oriented tasks such as molecule analysis, drug design, and protein structure prediction.
Location: CoRE 301
Committee:
Start Date: 15 Sep 2023;
Start Time: 10:30AM - 11:30AM
Title: Towards Designing Generalized Constitutive Models for Versatile Physics Simulation and Inverse Learning

Bio:

Dr. Mridul Aanjaneya is an Assistant Professor in the Department of Computer Science at Rutgers University. Prior to joining Rutgers, he was a postdoctoral researcher in the Department of Computer Sciences at the University of Wisconsin - Madison, where he was advised by Prof. Eftychios Sifakis. He obtained his Ph.D. in Computer Science from Stanford University under the supervision of Prof. Ronald Fedkiw. While at Stanford, he also worked as a consultant in the Spatial Technologies team at the Nokia Research Center for two years. His research lies at the intersection of Computer Graphics, Scientific Computing, and Computational Physics, with the overarching goal of designing scalable physics engines for applications in engineering and the physical sciences. His research is supported by the National Science Foundation. He is a recipient of the Ralph E. Powe Junior Faculty Enhancement Award 2019, sponsored by Oak Ridge Associated Universities (ORAU), and the NSF CAREER Award 2023.


Speaker:
Abstract: Physics simulation is an active area of research in computer graphics but has now started being used in many other fields for inverse learning purposes. Many of these applications cannot impose the assumptions that are typically used in forward simulation methods and require "generalized" models that can allow for achieving different physical behaviors by changing the values of appropriate parameters. In this talk, I will explain the steps taken by my research group for designing such generalized constitutive models. The key idea is to exploit non-local modeling techniques that have the potential to unify seemingly disjoint and complex physical processes under one umbrella. This effort has also revealed the striking promise of providing possible explanations for some real-world observations that cannot be described by existing scientific theories.
Location: CoRE 301
Committee:
Start Date: 18 Sep 2023;
Start Time: 10:30AM - 11:30AM
Title: Tackling Mapping and Scheduling Problems for Quantum Program Compilation

Bio:

Zheng (Eddy) Zhang is an Associate Professor at Rutgers University. Her research is in compilers, systems, and quantum computing. A central tenet of her research is to develop efficient compiler techniques for emerging computing architectures such as many-core GPUs and quantum processing units. Her recent work focuses on the synergistic interaction between algorithms, programming languages, intermediate representations, and micro-architectures for noisy intermediate-scale quantum (NISQ) computing devices. She will be talking about mapping and scheduling problems that arise in the compilation of quantum programs in the NISQ era.


Speaker:
Abstract: We are on the verge of a quantum revolution. Google has demonstrated quantum supremacy with fewer than 100 qubits by performing a specific calculation (on a random number generator) that is beyond reach even for the best classical supercomputer. Quantum computers may soon be able to solve large-scale problems in chemistry, physics, cryptography, machine learning, and database search. However, there is a significant gap between quantum algorithms and the physical devices that can support them. Most well-known quantum algorithms are designed with perfect hardware in mind, but hardware has constraints. A compiler framework is needed to efficiently convert a quantum algorithm from a high-level specification into hardware-compliant code. This talk will focus on mapping and scheduling problems in the compilation process for superconducting quantum computers. Tackling these problems improves not only the performance but also the fidelity of quantum programs.
Location: CoRE 301
Committee:
Start Date: 21 Sep 2023;
Start Time: 01:30PM - 03:00PM
Title: Multi-Modal Vector Query Processing

Bio:
Speaker:
Abstract: In recent years, various machine learning models, e.g., word2vec, doc2vec, and node2vec, have been developed to effectively represent real-world objects such as images, documents, and graphs as high-dimensional feature vectors. Simultaneously, these real-world objects frequently come with structured attributes or fields, such as timestamps, prices, and quantities. Many scenarios need to jointly query the vector representations of the objects together with their associated attributes. In this talk, I will outline our research efforts in the domains of range-filtering approximate nearest neighbor search (ANNS) and the construction of all-range approximate K-Nearest Neighbor Graphs (KNNG). In the context of range-filtering ANNS, queries are characterized by a query vector and a specified range within which the attribute values of data vectors must fall. We introduce an innovative indexing methodology addressing this challenge, encompassing ANNS indexes for all potential query ranges. Our approach facilitates the retrieval of the corresponding ANNS index for a given query range, thereby improving query processing efficiency. Furthermore, we design an index that takes a search key range as the query input and generates a KNNG composed of vectors falling within that specified range. Looking ahead, our future work aims to develop a comprehensive database management system for vector data. This system will integrate all of our indexing techniques, providing durable storage and efficient querying capabilities.
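
For intuition about the query semantics described above, here is a brute-force baseline in Python (my own illustration with made-up data and attribute names, not the indexing method from the talk): filter the data vectors by an attribute range, then scan for the k nearest neighbors among the survivors.

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.normal(size=(10_000, 64)).astype(np.float32)   # feature vectors
prices = rng.uniform(0, 1000, size=10_000)                   # one structured attribute per vector

def range_filtered_knn(query, lo, hi, k=10):
    mask = (prices >= lo) & (prices <= hi)        # attribute predicate
    candidates = np.flatnonzero(mask)
    dists = np.linalg.norm(vectors[candidates] - query, axis=1)
    return candidates[np.argsort(dists)[:k]]      # ids of the k nearest in range

print(range_filtered_knn(rng.normal(size=64).astype(np.float32), lo=100, hi=200))
```

The indexing work described in the talk aims to answer the same kind of query without this linear scan over every in-range vector.
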
Location: CoRE 301
Committee:

Professor Dong Deng (Advisor)

Professor Yongfeng Zhang

Professor Amélie Marian

Professor Karl Stratos

Start Date: 28 Sep 2023;
Start Time: 10:30AM - 11:30AM
Title: The versatile platelet: a bridge to translational medicine

Bio:

Anandi Krishnan is a translational scientist and principal investigator at Stanford University School of Medicine. Dr. Krishnan’s current research focuses on transcriptional and epigenetic mechanisms of blood cell function and dysfunction in human disease. In particular, she is interested in expanding our understanding of the multifaceted function of blood platelets in cancer, inflammation, or immunity, beyond their classical role in hemostasis and thrombosis. Her work integrates omics-based discovery (from large clinical cohorts) with experimental and computational systems biology approaches toward a deeper understanding of disease mechanisms, risk stratification, and novel therapeutic strategies.  Recent findings have outlined a number of heretofore unrecognized platelet mechanisms that are central to platelet response in disease.

Her interest in the field was primarily influenced by her experiences at the Duke Translational Research Institute, studying RNA-based aptamer-antidote molecules for antithrombotic therapy (laboratory of Drs. Bruce Sullenger, PhD and Richard Becker, MD) and her doctoral work at Penn State Biomedical Engineering (with Dr. Erwin Vogler, PhD) establishing the biophysical mechanisms of contact activation in blood coagulation. Funding for Anandi’s research includes her current NIH NHGRI genomic medicine career development award, MPN Research Foundation Challenge grant, multiple Stanford internal awards and NIH NCATS diversity/research re-entry award.


Speaker:
Abstract: Evolving evidence suggests that blood platelets have cross-functional roles beyond their traditional function in hemostasis, and therefore, that their molecular signatures may be altered in diverse settings of cancer, heart disease, metabolic or neurogenerative disorders. This lecture will present recent data from multi-omic profiling of platelets from patients with chronic bone marrow disorders (myeloproliferative neoplasms). Emphasis will be on demonstrating the translational relevance of platelet-omics and systems biology approaches, and their possible bench-to-bedside utility in patient care. Methods of platelet RNA/protein sequencing and associated analyses, and application of predictive machine learning algorithms will be discussed. Extending this work on omics-based discovery to ongoing and future research on molecular, cellular, and computational validation approaches will also be discussed.
Location: CoRE 301
Committee:
Start Date: 29 Sep 2023;
Start Time: 02:30PM - 03:30PM
Title: Simulation of Diffusion Effects with Physics-Based Methods

Bio:
Speaker:
Abstract: Physics-based simulation has yielded numerous vivid and realistic results as a research area in computer graphics, and the simulation of diffusion effects applies across diverse problems. In this talk, I will present two works centered on simulating diffusion effects. First, I'll discuss our introduction of the C-F diffusion model to computer graphics. This model enhances the commonly used Fick’s/Fourier’s law and allows for a finite propagation speed for diffusion. It captures characteristic visual aspects of diffusion-driven physics, such as hydrogel swelling and snowflake formation. Then, I will discuss our Lagrangian particle-based model for fracture and diffusion in thin membranes, such as aluminum foil, rubbery films, and seaweed flakes. The deformation-diffusion coupling framework generates a detailed and heterogeneous growth of fractures for both in-plane and out-of-plane motions. To the best of our knowledge, our work is the first to simulate the complex fracture patterns of single-layered membranes in computer graphics and introduce heterogeneity induced by the diffusion process, which generates more geometrically rich fracture patterns.
Location: CBIM #17
Committee:

Assistant Professor Mridul Aanjaneya

Professor Kostas Bekris

Associate Professor Abdeslam Boularias

Assistant Professor Peng Zhang

Start Date: 05 Oct 2023;
Start Time: 10:30AM - 11:30AM
Title: Promoting Fairness in Dynamic and Multimodal Information Systems

Bio:

Dr. Vivek Singh is the Founding Director of the Behavioral Informatics Lab and an Associate Professor at the School of Communication and Information at Rutgers University. He earned his Ph.D. in Computer Science from the University of California, Irvine and completed his post-doctoral training at MIT. His research has been published in prestigious disciplinary and interdisciplinary venues (e.g., Science, ACM Multimedia, ACM CHI) and has garnered media attention from outlets like The New York Times, BBC, Wall Street Journal. In 2016, he was recognized as a “Rising Star Speaker” by ACM SIG-Multimedia. He continues to contribute to the academic community through roles such as Program Co-chair for ACM ICMR’22 and ACM Multimedia’24, and Co-editor for ACM CSCW’23.


Speaker:
Abstract: As multimodal information systems, such as face matching systems, become increasingly integrated into our daily lives, it is crucial to ensure their fairness. This means they should perform equally well across different demographic groups. Moreover, these systems are not static; they evolve over time. Therefore, maintaining fairness and accuracy throughout their evolution is a significant challenge. In this talk, we will explore some of our recent work on auditing algorithms for bias and developing strategies to mitigate bias. We will pay particular attention to dynamic settings where preempting and countering bias is vital. Our discussion will span multiple data modalities, including visual, textual, and social data, with exemplars in diverse application domains (e.g., facial image matching, skin cancer detection).
Location: CoRE 301
Committee:
Start Date: 17 Oct 2023;
Start Time: 10:00AM - 11:30AM
Title: Towards Trustworthy Recommender Systems

Bio:
Speaker:
Abstract: Recommender systems (RS), serving at the forefront of human-centered AI, are widely deployed in almost every corner of the web and facilitate the human decision-making process. However, despite their enormous capabilities and potential, RS may also lead to undesired effects on users, items, producers, platforms, or even society at large, such as compromised user trust due to non-transparency, unfair treatment of different consumers or producers, and privacy concerns due to the extensive use of users' private data for personalization, just to name a few. All of these underscore a pressing need for the development of Trustworthy Recommender Systems (TRS) to alleviate or circumvent such detrimental impacts and risks. In this thesis, we study three core dimensions of trustworthiness in RS: fairness, robustness, and explainability. These are not the only perspectives of a trustworthy recommender system, but they are among the most widely discussed topics in the literature and are deeply connected with the trustworthiness of an intelligent system such as a recommender system. Moreover, unlike many existing works on TRS, which consider only one perspective, we concentrate on studying the interplay between them. To this end, we address this multi-faceted problem from three distinct angles: (1) the trade-off between fairness and recommendation performance; (2) the robustness of fairness under the dynamic nature of RS; and (3) the transparency and explainability of model fairness. For the first angle, we seek to identify Pareto efficient/optimal solutions that guarantee optimal compromises between utility and fairness, where a Pareto efficient/optimal solution means no single objective can be further improved without hurting the others. For the second angle, we delve into the concept of "robustness of fairness", which explores the capability of RS to sustain fairness amidst the uncertainties, disturbances, and changes encountered throughout the recommendation process. Lastly, for the third angle, we explore the idea of "explainable fairness", intending to furnish coherent explanations that can aid users, system architects, or policymakers in comprehending the reasons behind the perceived fairness or unfairness of the recommender system. Our proposed methods have outperformed several state-of-the-art methods on numerous real-world datasets, demonstrating their effectiveness in achieving both satisfying recommendation accuracy and recommendation fairness.
Location: CoRE 305
Committee:

Professor Yongfeng Zhang (Chair)

Professor Jie Gao

Professor Desheng Zhang

Professor Jiliang Tang (Michigan State University)

Start Date: 17 Oct 2023;
Start Time: 01:00PM - 02:00PM
Title: Unlocking Lifelong Robot Learning With Modularity

Bio:

Jorge Mendez-Mendez is a postdoctoral fellow at MIT CSAIL. He received his Ph.D. (2022) and M.S.E. (2018) from the GRASP Lab at the University of Pennsylvania, and his Bachelor's degree (2016) in Electronics Engineering from Universidad Simon Bolivar in Venezuela. His research focuses on creating versatile, intelligent, embodied agents that accumulate knowledge over their lifetimes, leveraging techniques from transfer and multitask learning, modularity and compositionality, reinforcement learning, and task and motion planning. His work has been recognized with an MIT-IBM Distinguished Postdoctoral Fellowship, a third place prize of the Two Sigma Ph.D. Diversity Fellowship, and a Best Paper Award in the Lifelong Machine Learning Workshop (ICML).

 


Speaker:
Abstract:  Embodied intelligence is the ultimate lifelong learning problem. If you had a robot in your home, you would likely ask it to do all sorts of varied chores, like setting the table for dinner, preparing lunch, and doing a load of laundry. The things you would ask it to do might also change over time, for example to use new appliances. You would want your robot to learn to do your chores and adapt to any changes quickly. In this talk, I will explain how we can leverage various forms of modularity that arise in robot systems to develop powerful lifelong learning mechanisms. My talk will then dive into two algorithms that exploit these notions. The first approach operates in a pure reinforcement learning setting using modular neural networks. In this context, I will also introduce a new benchmark domain designed to assess the compositional capabilities of reinforcement learning methods for robots. The second method operates in a novel, more structured framework for task and motion planning systems. I will close my talk by describing a vision for how we can construct the next generation of home assistant robots that leverage large-scale data to continually improve their own capabilities.
Location: Room 402, 4th floor, 1 Spring Street, Downtown New Brunswick
Committee:
Start Date: 18 Oct 2023;
Start Time: 04:00PM - 06:00PM
Title: Advancing Domain Adaptation with Domain Index

Bio:
Speaker:
Abstract: In machine learning, we usually assume that the training dataset and the testing dataset follow the same distribution. However, this assumption often breaks down when data comes from different domains. Domain Adaptation (DA) aims to solve this problem by producing features that are invariant across domains. Traditional DA methods, such as DANN, treat each domain equally; they neglect the nuanced relationships between different domains and thus achieve inferior performance. To capture the heterogeneity of domains, we center our work on the Domain Index, a vector representation that embeds the domain semantics. Our work can be divided according to whether extra metadata is available for deriving the Domain Index. Such metadata takes several forms: for instance, if the dataset contains weather data from different locations, the location can serve as the metadata. When such metadata is unavailable, we have developed a variational framework that learns the Domain Index directly from the data distribution. Using these methods, we not only improve model performance on unseen domains but also facilitate interpretability of the model's generalization.
Location: CBIM #22
Committee:

Prof. Hao Wang (chair)

Prof. Dimitris Metaxas

Prof. Yongfeng Zhang

Prof. Srinivas Narayana Ganapathy (4th member)

Start Date: 26 Oct 2023;
Start Time: 02:00PM - 03:30PM
Title: Verified Static Analyzers for Kernel Extensions

Bio:
Speaker:
Abstract: OS extensions enable novel functionality with low performance costs, e.g., implementing custom yet high-speed packet processing. I focus on eBPF in Linux, which allows user code to be loaded into the kernel and executed with minimal run-time overhead. An in-kernel static analyzer, called the eBPF verifier, is critical in this context: it checks the safety of user code before loading it into the kernel. It is paramount that eBPF static analysis is sound and precise. The eBPF verifier uses abstract interpretation, which consists of algorithms that succinctly track all possible values taken by program variables across any execution. Abstract interpretation in the kernel uses abstract domains like the tristate domain (to track individual bits) and the interval domain (to track ranges). This presentation will highlight two of my recent research endeavors that enable sound and precise abstract interpretation in the eBPF verifier: (1) Soundness and precision of the tristate domain [CGO '22]. This work proved the soundness and optimal precision of abstract algorithms for addition and subtraction of tristate numbers. Additionally, it introduced a novel, provably sound algorithm for tristate multiplication. (2) Soundness of the value-tracking analysis [CAV '23]. The verifier refines abstract values maintained in one domain (e.g., interval) using values maintained in other domains (e.g., tristate). This work analyzed the soundness of the refinement, which is non-standard and distinct from refinements proposed in abstract interpretation theory. This work also developed program synthesis techniques that automatically generate programs that expose soundness bugs found by our analysis. These research directions tie into my broader vision of verified static analyzers for kernel extensions in production systems.
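
For context on the tristate domain mentioned above, the sketch below writes out (in Python, with 64-bit wraparound) the addition rule for tristate numbers: a (value, mask) pair where mask bits are unknown and value holds the known bits. It mirrors the Linux kernel's tnum_add as I understand it and is only an illustration, not verified code from the papers cited in the abstract.

```python
from collections import namedtuple

Tnum = namedtuple("Tnum", ["value", "mask"])   # mask bit = "unknown", value = known bits
MOD = 1 << 64                                  # 64-bit wraparound

def tnum_add(a, b):
    sm = (a.mask + b.mask) % MOD       # how far carries from unknown bits can reach
    sv = (a.value + b.value) % MOD     # sum of the known parts
    sigma = (sm + sv) % MOD
    chi = sigma ^ sv                   # bits that unknown carries might flip
    mu = chi | a.mask | b.mask         # all bits that end up unknown
    return Tnum(sv & ~mu & (MOD - 1), mu)

# {4, 5} (value=0b100, mask=0b001) plus the constant 1 yields a sound abstraction of {5, 6}:
print(tnum_add(Tnum(0b100, 0b001), Tnum(0b001, 0)))   # Tnum(value=4, mask=3), i.e. {4,5,6,7}
```
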
Location: CoRE 305
Committee:

Assistant Professor Srinivas Narayana 

Professor Santosh Nagarakatte

Assistant Professor He Zhu

Assistant Professor Hao Wang

Start Date: 02 Nov 2023;
Start Time: 03:30PM - 04:30PM
Title: Identifying and Protecting Privacy of Website Network Traffic Using Neural Networks

Bio:
Speaker:
Abstract: The increasing expansion of the Internet and online applications has heightened the significance of Internet traffic classification. Consequently, a multitude of studies have been conducted in this domain, resulting in the development of various approaches. We investigated using a Convolutional Neural Network as a traffic classifier to associate TCP traffic with specific websites using only packet sizes and inter-packet delay. This method identifies sites despite encryption, onion routing, and VPNs. Data was collected from cloud providers in the US and Europe, a university server, and home networks. Traffic was generated to 25 popular sites using a headless browser. With tens of thousands of packets per site for training, average classification accuracy reached nearly 95% with as few as 500 packets. Using jitter and size alone decreased accuracy by only 2-4%. Accuracy dropped significantly if the training and testing data sources differed, e.g., training on cloud data but testing on home network traffic. We show that a domain adaptation approach ensures high accuracy even when data sources differ. In terms of privacy protection (i.e., preventing attackers from correctly classifying traffic), we found that randomizing jitter and packet sizes to preserve privacy does not greatly affect accuracy, but adding just 3% extra data can plummet accuracy to below 10%. Paper URL: https://people.cs.rutgers.edu/~rmartin/papers/Flows-draft_01.pdf
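
To make the setup concrete, here is a minimal sketch of the kind of classifier the abstract describes (my own assumed architecture in PyTorch; the layer sizes, 500-packet sequence length, and 25-site output are illustrative, not the paper's exact model): a 1-D CNN over sequences of (packet size, inter-packet delay) features.

```python
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    """Classify a flow into one of `num_sites` websites from per-packet features."""
    def __init__(self, num_sites=25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # pool over the packet dimension
        )
        self.classifier = nn.Linear(64, num_sites)

    def forward(self, x):                       # x: (batch, 2, seq_len) -- sizes and delays
        return self.classifier(self.features(x).squeeze(-1))

model = TrafficCNN()
fake_flows = torch.randn(8, 2, 500)             # 8 flows, 500 packets each
print(model(fake_flows).shape)                  # torch.Size([8, 25])
```
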
Location: CoRE 305
Committee:

Professor Richard Martin 

Professor Srinivas Narayana

Professor Hao Wang

Professor Abdeslam Boularias

Start Date: 28 Nov 2023;
Start Time: 08:30AM - 10:00AM
Title: Open Vocabulary Object Detection with Pretrained Vision and Language Models

Bio:
Speaker:
Abstract: Recent studies show promising performance in open-vocabulary object detection (OVD) using pseudo labels (PLs) from pretrained vision and language models (VLMs). However, PLs generated by VLMs are extremely noisy due to the gap between the pretraining objective of VLMs and OVD, which blocks further advances built on PLs. In this paper, we aim to reduce the noise in PLs and propose a method called online Self-training And a Split-and-fusion head for OVD (SAS-Det). First, the self-training finetunes VLMs to generate high-quality PLs while preventing forgetting of the knowledge learned during pretraining. Second, a split-and-fusion (SAF) head is designed to remove the noise in the localization of PLs, which is usually ignored by existing methods. It also fuses complementary knowledge learned from both precise ground truth and noisy pseudo labels to boost performance. Extensive experiments demonstrate that SAS-Det is both efficient and effective. Our pseudo labeling is 3 times faster than prior methods. SAS-Det outperforms prior state-of-the-art models of the same scale by a clear margin and achieves 37.4 AP_50 and 27.3 AP_r on novel categories of the COCO and LVIS benchmarks, respectively.
Location: CoRE 305
Committee:

Professor Dimitris Metaxas (Chair)

Professor Konstantinos Michmizos

Professor Dong Deng

Professor Desheng Zhang

Start Date: 28 Nov 2023;
Start Time: 12:00PM - 02:00PM
Title: Toward Scalable and High-Performance I/O with Cross-layered Storage Design

Bio:
Speaker:
Abstract: Storage technologies have evolved rapidly in the past decades with the advancement of new features and capabilities such as byte-addressability and near-hardware computational capability. However, software innovations to manage this storage hardware advancement lag behind. State-of-the-art approaches in the OS or at user level either suffer from high I/O overheads (e.g., system calls, data movement, I/O scalability bottlenecks) or fail to manage storage devices in a way that maximizes the utilization of their capabilities. This dissertation addresses these problems with a cross-layered system design that divides the I/O software stack into user space, the OS kernel, and storage firmware to capitalize on the advantages of each layer. At the level of a single storage device, to reduce software overheads in the traditional OS I/O stack with emerging computational storage devices, our first work proposed and designed a cross-layered file system that disaggregates file systems across user space, the OS, and storage devices to eliminate I/O scalability bottlenecks and scale I/O performance with fine-grained concurrency. To accelerate I/O and data processing, our second work proposed and implemented a novel CISC I/O interface, compatible with POSIX, which packs multiple I/O operations with related computational tasks into a single compound I/O operation to utilize the compute power in storage devices and effectively reduce data copy overheads. Scoping beyond a single device, the third part of this dissertation presents a novel solution that exploits the collective hardware and software capabilities offered by multiple storage devices by delegating resource management to user space while retaining important properties such as permission enforcement and sharing in the OS kernel. As a tangible outcome, through a fundamental, principled, and end-to-end redesign of the I/O stack, the work in this thesis showcases the advantages of a cross-layered design by significantly accelerating production-level applications.
Location: CoRE 301
Committee:

Professor Sudarsun Kannan (Rutgers), Committee Chair

Professor Ulrich Kremer (Rutgers)

Professor Santosh Nagarakatte (Rutgers)

Professor Thu Nguyen (Rutgers)

Professor Sanidhya Kashyap (EPFL)

Professor Yuanchao Xu (UC Santa Cruz)

Start Date: 30 Nov 2023;
Start Time: 10:30AM - 12:00PM
Title: Counterfactual Explainable AI for Human and Science

Bio:
Speaker:
Abstract: Artificial Intelligence (AI) goes beyond merely making predictions. Its explainability is crucial not only for enhancing user satisfaction but also for facilitating more effective decision-making. Among all available methods for achieving explainable AI, this dissertation focuses on the specialized domain of counterfactual explanations. Counterfactual explanations offer a unique interpretation of systems by providing ``what-if'' scenarios that illuminate how a given outcome could differ if the system input were altered. The model-agnostic nature of counterfactual explanations makes them exceptionally well-suited for elucidating the intrinsic mechanisms of advanced AI systems. This is particularly critical in an era where such systems, especially those employing deep neural networks, are becoming increasingly opaque and complex. An in-depth investigation is conducted into the applicability of counterfactual explainable AI across both human-centered and science-oriented AI models. Within the context of human-centered AI systems, such as recommender systems, the incorporation of counterfactual explanations can enhance user trust and satisfaction. In the scientific field, counterfactual explainable AI offers a valuable contribution. It helps researchers identify key factors behind model predictions in a straightforward manner and promotes trust and credibility in AI-generated outcomes, thereby accelerating both the human comprehension of natural phenomena and the pace of scientific innovation. This dissertation offers a thorough and methodical exploration of counterfactual explainable AI, encompassing its underlying philosophy, stated objectives, methodological framework, practical applications, and evaluation metrics.
Location: CoRE 305
Committee:

Professor Yongfeng Zhang (Chair)

Professor Jie Gao

Professor Dong Deng

Professor Quanquan Gu (University of California, Los Angeles)

Start Date: 05 Dec 2023;
Start Time: 10:30AM - 12:00PM
Title: Identifying Hardness of Covering and Coloring in Clustering and Steiner Tree

Bio:

Karthik C. S. is an Assistant Professor in the Department of Computer Science at Rutgers University supported by a Simons Foundation Junior Faculty Fellowship and a grant from the National Science Foundation. He received his Ph.D. in 2019 from Weizmann Institute of Science where he was advised by Irit Dinur and his M.S. in 2014 from École Normale Supérieure de Lyon. He has held postdoctoral appointments at Tel Aviv University (hosted by Amir Shpilka) and New York University (hosted by Subhash Khot). He is broadly interested in complexity theory and discrete geometry with an emphasis on hardness of approximation, fine-grained complexity, and parameterized complexity.


Speaker:
Abstract: In this talk, I will discuss my recent efforts in revitalizing the subarea of inapproximability of geometric optimization problems, which lies at the confluence of hardness of approximation and geometric optimization, with the main aim of determining the boundaries of efficient approximation. My work is motivated by a lack of explanations as to why algorithm designers are unable to exploit the structure of L_p-metrics, and in particular the Euclidean metric, to design efficient algorithms with better approximation guarantees than for arbitrary metric spaces. I will focus on inapproximability results for clustering objectives and Steiner tree computation. A recurring technical motif in these results is the embedding of hard instances of set cover and graph coloring problems into L_p-metrics.
Location: CoRE 301
Committee:
Start Date: 07 Dec 2023;
Start Time: 03:00PM - 05:00PM
Title: Toward Universal Medical Image Segmentation

Bio:
Speaker:
Abstract: A major enduring focus of clinical workflows is disease analytics and diagnosis, leading to medical imaging datasets where the modalities and annotations are strongly tied to specific clinical objectives. To date, the prevailing training paradigm for medical image segmentation revolves around developing separate models for specific medical objects (e.g., organs or tumors) and image modalities (e.g., CT or MR). This traditional paradigm can hinder the robustness and generalizability of these AI models, inflate costs when further scaling data volumes, and fail to exploit potential synergies among various medical imaging tasks. By observing the training program of radiology residency, we recognize that radiologists’ expertise arises from routine exposure to a diverse range of medical images across body regions, diseases, and imaging modalities. This observation motivates us to explore a new training paradigm, “universal medical image segmentation”, whose key goal is to learn from diverse medical imaging sources. In the qualification exam, I’ll delve into challenges in the new paradigm including issues with partial labeling, conflicting class definitions, and significant data heterogeneity. I’ll also present our work, aimed at tackling these challenges. We demonstrate that our proposed universal paradigm not only offers enhanced performance and scalability, but also excels in transfer learning, incremental learning and generalization. This innovative approach opens up new perspectives for the construction of foundational models in a broad range of medical image analysis.
Location: CoRE 305
Committee:

Professor Dimitris Metaxas (Chair)

Assistant Professor Hao Wang

Assistant Professor Yongfeng Zhang

Assistant Professor Karthik Srikanta

Start Date: 08 Dec 2023;
Start Time: 10:30AM - 12:00PM
Title: Enhanced Multi-Agent Trajectory Forecasting Using Ordinary Differential Equations

Bio:
Speaker:
Abstract: Multi-agent trajectory forecasting aims to estimate future agent trajectories given the historical trajectories of multiple agents. It has recently attracted a lot of attention due to its widespread applications including autonomous driving, physical system modeling and urban data mining. It is challenging because both complex temporal dynamics and agent interaction jointly affect each agent. Existing methods often fall short in capturing these two factors explicitly, because they neglect the continuous nature of the system and distance information between agents, which leads to limited forecasting accuracy and poor interpretability. Innovatively, Neural Ordinary Differential Equations (ODEs) introduce a novel paradigm of continuous-time neural networks by solving ODEs. In this talk, I will review my works that utilize ODEs to enhance multi-agent trajectory forecasting by incorporating distance information and explicitly modeling underlying continuous temporal dynamics. Our experiments demonstrate that our works not only improve the trajectory forecasting accuracy, but also adeptly deal with unexpected events which are not in the training dataset.
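
A toy sketch of the continuous-time idea (my own illustrative setup, not the talk's model): learn each agent's time derivative with a small network and forecast by integrating dx/dt = f(x) forward with a simple Euler scheme.

```python
import torch
import torch.nn as nn

class Dynamics(nn.Module):
    """Estimate dx/dt for every agent from its current state."""
    def __init__(self, state_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(), nn.Linear(hidden, state_dim)
        )

    def forward(self, x):                         # x: (num_agents, state_dim)
        return self.net(x)

def rollout(f, x0, steps=12, dt=0.1):
    """Forecast future states by Euler integration of the learned dynamics."""
    xs, x = [], x0
    for _ in range(steps):
        x = x + dt * f(x)
        xs.append(x)
    return torch.stack(xs)                        # (steps, num_agents, state_dim)

future = rollout(Dynamics(), torch.randn(3, 4))   # 3 agents with 4-dimensional states
print(future.shape)                               # torch.Size([12, 3, 4])
```

Interaction terms (e.g., distance-dependent coupling between agents) would go inside the dynamics network; the rollout loop stays the same.
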
Location: CoRE 301
Committee:

Professor Dimitris Metaxas (Chair)

Professor Hao Wang

Professor Konstantinos Michmizos

Professor Dong Deng

Start Date: 08 Dec 2023;
Start Time: 04:15PM - 05:30PM
Title: Advancing AI Sustainability

Bio:
Speaker:
Abstract: Recent advances in AI have occasionally surpassed human capabilities in specific domains. Yet, these models often lag behind human adaptability due to their static nature, compartmentalized knowledge, and limited extensibility. This gap becomes evident as rapidly evolving environments and task requirements render even the most sophisticated models quickly outdated. In contrast, humans possess a dynamic learning capability that allows for continuous adaptation and relevance across their lifespan. Addressing this challenge, my presentation focuses on the development of more resilient and sustainable AI models that can integrate effectively into real-world applications, both at scale and within practical budget constraints. I will introduce two innovative approaches in this domain. The first, "Towards Self-Supervised and Weight-preserving Neural Architecture Search," facilitates label-free model design, eliminating the dependency on extensive labeled datasets and reducing the time and resources required for model training. The second, "Steering Prototypes for Rehearsal-free Continual Learning," addresses the issue of model obsolescence by enabling AI systems to learn continually, adapt to new information, and update their knowledge base without the need for constant retraining or external guidance. These methodologies not only enhance the sustainability of AI models but also bring them a step closer to mirroring the human capacity for lifelong learning and adaptation. My talk will delve into the technical aspects of these approaches and discuss their potential implications for the future of AI development.
Location: CoRE 305
Committee:

Professor Dimitris N Metaxas (Chair)

Assistant Professor Yongfeng Zhang

Associate Professor Konstantinos Michmizos

Professor Badri Nath

Start Date: 12 Dec 2023;
Start Time: 10:30AM - 12:00PM
Title: Efficient Algorithms for Data Science: Designing Randomized Controlled Trials and Solving Linear Equations

Bio:

Peng Zhang is an assistant professor in the Department of Computer Science at Rutgers University. Peng is broadly interested in designing efficient algorithms for Data Science, particularly in causal inference and linear equation solving. Her work has been recognized with an NSF CAREER Award, an Adobe Data Science Research Award, a Rutgers Research Council Individual Fulcrum Award, and a FOCS Best Student Paper award. Before joining Rutgers, she received her Ph.D. in Computer Science from Georgia Tech and was a postdoc at Yale University.


Speaker:
Abstract: Two key components of a data science pipeline are collecting data from carefully planned experiments and analyzing data using tools such as linear equations and linear programs. I will discuss my recent work on fast algorithms for designing randomized controlled trials and solving structured linear equations. In the first part of the talk, I will present efficient algorithms that improve the design of randomized controlled trials (RCTs). In an RCT, we want to randomly partition experimental subjects into two treatment groups to balance subject-specific variables, which might correlate with treatment outcomes. We formulate such a task as a discrepancy question and employ recent advances in algorithmic discrepancy theory to improve the design of RCTs. In the second part of the talk, I will briefly present my recent research on fast solvers for linear equations in generalized Laplacians arising from topological data analysis.
Location: CoRE 301
Committee:
Start Date: 13 Dec 2023;
Start Time: 02:15PM - 03:30PM
Title: A Trio of Graph Problems

Bio:
Speaker:
Abstract: I will talk (in brief) about three graph problems, each considered in disparate models of computation. First, I will go over problems pertaining to structural balance in the streaming model. Here we consider complete graphs where each edge signals either a positive or negative relation. Such a graph is said to be balanced if there is no incentive to flip any of these relations. We give an $O(\log n)$ space algorithm for determining whether a graph is balanced, and a $\widetilde{O}(n)$ space algorithm which provides a certificate saying approximately how far away a graph is from being balanced. These results are complemented by various lower bounds. Then, I plan to talk about the classic problem of single source shortest paths (SSSP), but now in distributed and parallel models of computation. Most SSSP algorithms in these models do not work when there are negative weight edges present. We show how any such algorithm which pays $T$ can be used in a blackbox fashion to construct an almost as good SSSP algorithm which pays $Tn^{o(1)}$, but which now works when there are negative weight edges. Finally, I want to discuss the problem of sending probes from a selection of $k$ vantage points to every other vertex in a graph, with the aim of maximizing the number of bottleneck edges discovered by these probes. We give an efficient polytime $(1-1/e)$ approximation algorithm in the non-adaptive setting, along with results (and tentative approaches) when we seek instance-optimality and/or are afforded adaptivity.
List of Papers:
Evaluating Stability in Massive Social Networks: Efficient Streaming Algorithms for Structural Balance (https://arxiv.org/pdf/2306.00668.pdf)
Parallel and Distributed Exact Single-Source Shortest Paths with Negative Edge Weights (https://arxiv.org/pdf/2303.00811.pdf)
Vantage Point Selection Algorithms for Bottleneck Capacity Estimation (not available yet)
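As plain background for the structural balance problem above (not the streaming algorithm itself), here is a small offline sketch using the classical two-camp characterization of balance; the example graph and function names are made up for illustration:

from collections import deque

def is_balanced(n, signed_edges):
    """Offline check of structural balance via 2-coloring.

    signed_edges: list of (u, v, sign) with sign in {+1, -1}.
    A signed graph is balanced iff its vertices can be 2-colored so that every
    +1 edge joins same-colored endpoints and every -1 edge joins opposite colors.
    """
    adj = [[] for _ in range(n)]
    for u, v, s in signed_edges:
        adj[u].append((v, s))
        adj[v].append((u, s))

    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = color[u] if s > 0 else 1 - color[u]
                if color[v] is None:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False   # a cycle with an odd number of negative edges
    return True

# "The friend of my enemy is my enemy" holds here, so this triangle is balanced.
print(is_balanced(3, [(0, 1, +1), (1, 2, -1), (0, 2, -1)]))  # True
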
Location: CoRE 305
Committee:

Professor Jie Gao

Professor Kostas Bekris

Assistant Professor Aaron Bernstein

Assistant Professor Karthik CS

Start Date: 14 Dec 2023;
Start Time: 01:00PM - 02:30PM
Title: Divide-and-conquer quantum computing with graphical models

Bio:

Yipeng is an assistant professor of computer science at Rutgers University. His research is in quantum and unconventional computer architectures. He is developing ways in which reconfigurable classical computer hardware can aid critical tasks in realizing quantum computer control and simulation. His work has been recognized with top picks or honorable mentions at computer architecture conferences in 2021, 2017, and 2016.


Speaker:
Abstract: Quantum circuit cutting is an attractive strategy for expanding the capabilities of current quantum computers. The idea is to decompose quantum programs into smaller fragments which do fit on existing prototypes, and then use classical high-performance computing to recombine the results. As identified in some recent prior work, the challenge in realizing such a strategy is in solving three related problems: 1) Deciding how to cut up the circuit, 2) Running the circuit fragments efficiently on limited-capacity quantum prototype computers, and 3) Recombining the results as efficiently as possible despite the fact that the cost of recombination scales exponentially in the number of cuts. Our group previously explored using probabilistic graphical models (e.g., Bayesian networks, Markov networks) as flexible and correct models for quantum circuit simulation and modelling of correlated errors. Our group also explored using advanced PGM inference techniques based on algebraic model counting (AMC) on decomposable negation normal forms (DNNFs) to simulate quantum circuits. In this work, we observe that the quantum circuit cutting and recombination approach is mathematically identical to transforming PGMs to factor graphs, and then performing exact inference on the factor graphs. This observation, combined with the AMC approach, provides new and powerful techniques for each of the three tasks above to realize quantum circuit cutting and recombination.
Location: CoRE 301
Committee:
Start Date: 18 Dec 2023;
Start Time: 01:00PM - 02:00PM
Title: Towards an emotionally expressive embodied conversational agent

Bio:
Speaker:
Abstract: Recent success in generative AI and large language models (LLMs) has led to a wide range of interesting applications, notably interactive virtual assistants and embodied conversational agents (ECAs). These applications are crucial in advancing human-computer interaction by providing immersive and expressive dialogues across various domains and enhancing user engagement. Constructing an ECA with immersive realism requires synchronous multi-modal responses as well as the integration of emotionally expressive behaviors in voice, facial expressions, and body gestures. However, most existing works entail a simple concatenation of modules such as text-to-speech and speech-to-face-and-gesture, resulting in expressionless responses, primarily because of the dilution of emotions across the modalities. In this talk, I plan to delve into this challenge and propose solutions through three distinct research works. The first work focuses on refining the synchronization of audio and facial expressions with conditioned emotions. The second involves synthesizing body gestures that align with audio and textual inputs, ensuring semantic appropriateness. The last highlights the importance of emotion conditioning and affect consistency for multimodal ECAs via large-scale user studies. All three combined provide insights for the development of expressive embodied conversational agents.
Location: CoRE 305
Committee:

Associate Professor Mubbasir Kapadia (Chair)

Professor Vladimir Pavlovic

Distinguished Professor Dimitris Metaxas

Assistant Professor He Zhu

Start Date: 19 Dec 2023;
Start Time: 10:30AM - 12:00PM
Title: Majority Rule, Visibility Graphs, External Memory Algorithms, and Graph Cities

Bio:

James Abello is an Associate Professor of Practice in the Department of Computer Science at Rutgers University. He received his PhD in Combinatorial Algorithms from the University of California San Diego and was awarded a University of California President’s Postdoctoral Fellowship (1985-1987). James has been a senior member of technical staff at AT&T Shannon Laboratories, Bell Labs, Senior Scientist at Ask.com, and a DIMACS Research Professor. He was the Director of the MS program in the Computer Science Department from 2016 to 2022.

James is the co-editor of External Memory Algorithms, Vol. 50 of the AMS-DIMACS Series (with Jeff Vitter, 1999), The Kluwer Handbook of Massive Data Sets (with P. Pardalos and M. Resende, 2002), and Discrete Methods in Epidemiology (with Graham Cormode, 2006).

James is broadly interested in algorithmic artifacts that facilitate processing, visualization, and data interactions, with the overall goal of making sense of massive amounts of information.


Speaker:
Abstract: We will highlight connections between the Majority Rule of Social Choice and Visibility Graphs of Polygons via the Symmetric Group; External Memory Algorithms and visualizations of graphs with over a billion edges, called "Graph Cities"; and the use of Graph Edge Partitions to extract semantic digests from social media. The talk will be "almost" self-contained.
Location: CoRE 301
Committee:
Start Date: 20 Dec 2023;
Start Time: 09:30AM - 10:30AM
Title: Human-AI Collaboration for Cyber-Physical Systems

Bio:
Speaker:
Abstract: In the domain of Cyber-Physical Systems, the integration of human intelligence and artificial intelligence plays a key role. This talk highlights a practical application of this integration in the context of last-mile delivery services, which include various delivery stations segmented into distinct delivery areas. Each area is assigned a courier, responsible for all deliveries within it. The necessity for data-driven methods to assess delivery area difficulty is well-recognized, yet the significant expenses associated with precise workload measurement curtail the availability of dependable ground truth data, constraining the scalability of current machine learning solutions. In this work, we turn to a frequently overlooked resource: the couriers' firsthand knowledge of their delivery areas. We design the $AD^2I$ (Assessing Delivery Area Difficulty Isotonic) framework, which includes two modules: (i) a Preference Rank Aggregation module, which collects individual courier preferences and enriches them with historical delivery data to assess their familiarity with each delivery area; (ii) an Isotonic Integration module, which combines the aggregated preferences with the assessments of an existing machine learning model through isotonic regression to enhance the accuracy of delivery area difficulty assessments. Our $AD^2I$ framework, tested on six months of data from a major logistics company, demonstrated a 13.3% accuracy increase and a 0.175 improvement in Kendall's tau. Its adoption improved income fairness, reducing the Gini coefficient for courier salaries by 0.29 and increasing on-time delivery rates by 1.67%.
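To make the isotonic-regression ingredient concrete, here is a toy sketch (assuming scikit-learn is available) that fits a monotone map from a hypothetical aggregated courier ranking to hypothetical model scores; it illustrates the general technique only, not the $AD^2I$ framework itself:

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Hypothetical inputs: an ML model's difficulty scores per delivery area and
# an aggregated courier-preference rank (higher = reported as harder).
model_scores = np.array([0.20, 0.35, 0.30, 0.60, 0.55, 0.80])
courier_rank = np.array([1, 2, 3, 4, 5, 6])

# Fit a monotone (isotonic) map from the courier ranking to the model scores,
# so the combined estimate respects the couriers' ordering while staying
# anchored to the model's scale.
iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
combined = iso.fit_transform(courier_rank, model_scores)

print(np.round(combined, 3))  # non-decreasing difficulty estimates per area
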
Location: CoRE 301
Committee:

Associate Professor Desheng Zhang

Assistant Professor Dong Deng

Assistant Professor Qiong Zhang

Assistant Professor Yongfeng Zhang

Start Date: 21 Dec 2023;
Start Time: 10:30AM - 12:00PM
Title: Promoting Fairness and Accuracy in Dynamic and Multimodal Algorithms

Bio:
Speaker:
Abstract: A multitude of decision-making tasks, such as content moderation, medical diagnosis, misinformation detection, and recidivism prediction, can now be easily automated due to recent developments in machine learning (ML) capabilities. ML models excel at large-scale data processing and complex pattern recognition. However, their effectiveness may diminish when the assumption of stationarity, i.e., that data are independent and identically distributed (iid), is violated. Specifically, the aforementioned tasks are not static; they evolve over time. In this study, we explore these challenges and propose strategies to alleviate their adverse effects in multimodal settings spanning visual, textual, and social data. We first introduce an "Anticipatory Bias Correction" method designed to address algorithmic fairness and accuracy jointly in temporally shifting settings, meeting both proactivity and adaptability objectives. Subsequently, we investigate ML performance on a dermatological image processing task for skin-cancer detection, where datasets are collected from diverse locations, and propose a fair and accurate methodological framework. Lastly, we summarize observed issues and provide recommendations for potential solutions.
Location: CoRE 301
Committee:

Vivek K. Singh 

David M. Pennock

Amélie Marian 

Pradeep K. Atrey (University at Albany)

Start Date: 22 Dec 2023;
Start Time: 09:00AM - 10:30AM
Title: Human Behavior Detection for Cyber-Physical Systems

Bio:
Speaker:
Abstract: As the integration of cyber-physical systems into everyday life advances, understanding human behavior becomes crucial for ensuring seamless, safe, and efficient interaction between humans and technology. One such application is in the realm of micro-mobility, a mode of transportation that has gained immense popularity among commuters for its convenience, low cost, and eco-friendly nature. However, despite these advantages, micro-mobility presents unique challenges, particularly in terms of rider safety. Unlike traditional transportation methods such as cars and public transit, micro-mobility vehicles often lack comprehensive safety features or designs to mitigate potential hazards. The high rate of accidents in micro-mobility, often linked to distraction, affecting riders across all experience levels, underscores the need for enhanced safety measures. Therefore, understanding where riders are looking (i.e., gaze following) is essential in preventing potential accidents and enhancing road safety. In this work, we propose a novel two-stage coarse-to-fine gaze following framework utilizing video frames streamed from smartphone dual cameras. Initially, gaze vectors are estimated from riders' facial appearances using a lightweight deep network, enabling the cropping of approximate gaze target regions. The next stage of our framework involves leveraging the visual information within these estimated regions to predict areas likely to attract attention (saliency, a bottom-up mechanism). Furthermore, we acknowledge that human gaze behavior is heavily influenced by intentional directives (a top-down mechanism). We categorize riders' gaze behavior into three distinct types: forward-fixation, target pursuit, and saccade. By integrating both bottom-up and top-down mechanisms, our approach facilitates implicit calibration and refinement specific to the riding context. This methodology aims to provide a more accurate and contextually relevant understanding of rider behavior and attention, ultimately contributing to the safety and efficiency of micro-mobility systems.
Location: CoRE 301
Committee:

Associate Professor Desheng Zhang

Assistant Professor Hao Wang

Distinguished Professor Dimitris Metaxas

Assistant Professor Aaron Bernstein

Start Date: 18 Jan 2024;
Start Time: 03:00PM - 05:00PM
Title: Unlocking Visual Reasoning: Exploring Representations for Enhanced Problem-Solving

Bio:
Speaker:
Abstract: The success of deep learning systems in various applications hinges on their ability to extract structured and invariant representations. However, visual reasoning remains challenging due to the complexity of high-dimensional sensory input and the necessity for high-level abstraction. In contrast, humans excel at this complex process by using simple design principles based on realizing low-level abstractions and their relations from the visual input. Hence, the need to understand why humans excel at this cognitive task while current computational models fall short of solving visual reasoning tasks in a human-like hierarchical manner is increasingly apparent. Current reasoning models require enormous training data, exhibit sensitivity to perturbations, and lack the capacity to generalize to new reasoning tasks. In this dissertation, we aim to address these limitations of visual perception and visual reasoning. The thesis comprises two main parts. The first part, devoted to Visual Reasoning via Disentangled Representations, delves into extracting high-quality disentangled representations and devising modules to tackle reasoning tasks using these representations. We begin with the pursuit of understanding and learning disentangled representations that encode the salient (data-generative) factors of variation in the data independently. To achieve this, we present a novel VAE-based approach capable of disentangling latent representations in a fully unsupervised manner. Our approach harnesses the total correlation (TC) within the latent space by introducing a relevance indicator variable. This variable pinpoints and emphasizes significant factors, characterized by substantial prior KL divergence, while filtering out noise-associated factors with minimal variation. Our method automatically identifies and assimilates genuine factors, even in scenarios where the count of such factors remains explicitly unknown. Furthermore, it outperforms existing methods both quantitatively and qualitatively. These disentangled latent factors, adept at independently mapping generative factors, prove invaluable in reasoning puzzles where visual attributes correspond to specific rules like constancy, progression, or arithmetic. They enable the derivation of rules capable of solving various puzzles. Additionally, these representations exhibit sample efficiency and superior generalization, rendering them ideal for solving visual reasoning problems. Expanding on this concept, we propose a computational model that addresses visual reasoning tasks as an end-to-end joint representation-reasoning learning framework. This framework leverages the weak inductive bias present in reasoning datasets to accomplish these tasks concurrently. Specifically focusing on Raven's Progressive Matrices (RPMs) as our reasoning task, we introduce a general generative graphical model (GM-RPM). Subsequently, we propose the "Disentangling-based Abstract Reasoning Network (DAReN)", aligning with the principles of GM-RPM. Evaluating our model across disentanglement and reasoning benchmarks demonstrates consistent improvement over existing state-of-the-art models in both domains. Our results underscore the necessity of structured representations for solving visual reasoning tasks. The second part of my dissertation is devoted to learning tokenized spatial representations that grasp low-level visual concepts within each RPM image. We introduce "Spatially Attentive Transformers for Abstract Visual Reasoning (SARN)", a novel computational model which integrates spatial semantics within visual elements, represented as spatio-visual tokens, capturing both intra-image and inter-image relationships within the puzzle. The reasoning module groups these tokens (by row or column) to capture the underlying rule binding the puzzle, thereby solving the visual reasoning task. Through extensive experiments on established RPM benchmarks, we demonstrate that our results surpass existing approaches. Furthermore, we validate that the learned rule representation exhibits increased robustness in novel tasks and better generalization to test-time domain shifts compared to current methods. In a nutshell, this work underscores the necessity of acquiring structured representations to enhance visual reasoning performance. Thus, we address certain limitations in AI model design, as well as narrow the gap between machine intelligence and human cognitive abilities.
Location: CBIM multipurpose room - 22
Committee:

Prof. Vladimir Pavlovic (advisor)

Prof. Dimitris Metaxas

Prof. Yongfeng Zhang

Prof. Junsong Yuan (external)

Start Date: 25 Jan 2024;
Start Time: 10:30AM - 11:30AM
Title: Geometry, Arithmetic and Computation of Polynomials

Bio:

Akash Kumar Sengupta is a postdoctoral fellow in the Department of Mathematics and a member of the Algorithms & Complexity group at the University of Waterloo. Previously, he was a J. F. Ritt Assistant Professor at Columbia University. He received his PhD in Mathematics from Princeton University in 2019, advised by János Kollár. He is broadly interested in theoretical computer science, algebraic geometry, number theory and their interconnections. In particular, his research in algebraic complexity theory has focused on the Polynomial Identity Testing (PIT) problem. Two prominent highlights of his research are: the solution to Gupta’s radical Sylvester-Gallai conjecture for obtaining PIT algorithms, and the solution to the geometric consistency problem of Manin’s conjecture.


Speaker:
Abstract: Polynomials are ubiquitous in various branches of mathematics and sciences ranging from computer science to number theory. A remarkable phenomenon is that the algebraic-geometric properties of polynomials govern their arithmetic and computational behavior. As a result, algebraic-geometric techniques have led to exciting progress towards fundamental problems in complexity theory and number theory. I’ll begin with an overview of my research in these areas, including the problems of counting rational solutions and efficient computation of polynomials. Then, we will dig deeper into the Polynomial Identity Testing (PIT) problem, a central problem in computational complexity. PIT has applications to a wide range of problems, such as circuit lower bounds, perfect matching and primality testing. In this talk, I’ll discuss an algebraic-geometric approach towards polynomial-time deterministic algorithms for PIT via Sylvester-Gallai configurations. In particular, we will see that dimension bounds on SG-configurations yield poly-time PIT algorithms. I’ll talk about my work on the geometry of SG-configurations, showing that radical SG-configurations are indeed low-dimensional, as conjectured by Gupta in 2014.
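For readers unfamiliar with PIT, the sketch below shows the standard randomized (Schwartz-Zippel style) test on black-box polynomials over a large prime field; the talk's focus is on deterministic algorithms, so this is background only, with made-up example polynomials:

import random

P = 2_147_483_647  # a large prime; evaluations are taken over the field Z_P

def randomized_pit(poly, n_vars, trials=20):
    """Schwartz-Zippel style identity test for a black-box polynomial over Z_P.

    If poly (of total degree d) is not identically zero, a uniformly random
    evaluation is nonzero with probability at least 1 - d/P, so a handful of
    trials makes a false "identically zero" verdict extremely unlikely.
    """
    for _ in range(trials):
        point = [random.randrange(P) for _ in range(n_vars)]
        if poly(point) % P != 0:
            return False   # a nonzero evaluation witnesses a nonzero polynomial
    return True            # all evaluations were zero: almost surely the zero polynomial

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; (x + y)^2 - x^2 - y^2 is not.
print(randomized_pit(lambda v: (v[0] + v[1])**2 - (v[0]**2 + 2*v[0]*v[1] + v[1]**2), n_vars=2))  # True
print(randomized_pit(lambda v: (v[0] + v[1])**2 - v[0]**2 - v[1]**2, n_vars=2))                  # False
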
Location: CoRE 301
Committee:
Start Date: 31 Jan 2024;
Start Time: 12:10PM - 02:10PM
Title: Computational Learning Theory through a New Lens: Scalability, Uncertainty, Practicality, and Beyond

Bio:
Speaker:
Abstract: Computational learning theory studies the design and analysis of learning algorithms, and it is integral to the foundation of machine learning. In the modern era, classical computational learning theory is increasingly unable to keep up with new practical demands. In particular, problems arise in the scalability to large inputs, uncertainty about how inputs are formed, and the discrepancy between theoretical and practical efficiency. There are several promising approaches to tackle the above challenges. For scalability, we can consider learning algorithms under sublinear models, e.g., streaming and sublinear time models, that use resources substantially smaller than the input size. For uncertainty, we can resort to learning algorithms that naturally take noisy inputs, e.g., algorithms that deal with multi-armed bandits (MABs). Finally, for practicality, we should design algorithms that strike a balance between theoretical guarantees and experimental performance. In light of the above discussion, we will discuss results in three areas of study in this talk. In the first part, we present recent results in streaming multi-armed bandits, where the arms arrive one by one in a stream. We study the fundamental problems of pure exploration and regret minimization under this model and present optimal algorithms and lower bounds. In the second part, we discuss graph clustering problems in sublinear settings. We consider two important problems: correlation clustering and hierarchical clustering. We give various sublinear algorithms for these problems in the streaming, sublinear time, and parallel computation settings. Finally, in the third part, we move to the more practically driven problems of differential privacy (DP) range queries and weak-strong oracle learning. Both problems are motivated by practical industry settings, and we give near-optimal algorithms with strong experimental performance.
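As background for the pure-exploration problem mentioned above, here is a classical (non-streaming) successive-elimination sketch for best-arm identification; the confidence radius and the simulated arms are illustrative placeholders, not the algorithms from the talk:

import math
import random

def successive_elimination(arms, delta=0.05, max_rounds=2000):
    """Classical best-arm identification by successive elimination.

    `arms` is a list of pull functions, each returning a reward in [0, 1].
    In each round every surviving arm is pulled once; arms whose empirical
    mean falls a confidence width below the current best are eliminated.
    """
    active = list(range(len(arms)))
    sums = [0.0] * len(arms)
    for t in range(1, max_rounds + 1):
        for i in active:
            sums[i] += arms[i]()
        radius = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        best_mean = max(sums[i] / t for i in active)
        active = [i for i in active if sums[i] / t >= best_mean - 2 * radius]
        if len(active) == 1:
            break
    return active[0], t

random.seed(0)
means = [0.3, 0.5, 0.8, 0.6]
bandit = [lambda p=p: 1.0 if random.random() < p else 0.0 for p in means]
print(successive_elimination(bandit))  # likely identifies arm 2 (mean 0.8)
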
Location: CoRE 301
Committee:

Assistant Professor Sepehr Assadi (Advisor)

Assistant Professor Aaron Bernstein

Professor Jie Gao

Rajesh Jayaram (External) 

Professor Qin Zhang (External)

Start Date: 01 Feb 2024;
Start Time: 01:30PM - 03:00PM
Title: Measuring Uncertainty

Bio:

Diana Kim is a Ph.D. graduate in computer science from Rutgers University (2016-2022) and a postdoctoral researcher in the Vision CAIR group at KAUST in Saudi Arabia (2023-present). Her research interest is interpreting massive art patterns in the latent space of various deep neural nets by using language models and fine-grained art principal semantics. Her work has been published in several AI conferences (ICSC 2018, ICCC 2019, and AAAI 2018 and 2022). She enjoys teaching students, having served as a mentor for undergraduate research internships and a recitation instructor at Rutgers.


Speaker:
Abstract: Understanding probability is key to critical reasoning and rational decisions. In this lecture, we will learn the mathematical machinery for computing probabilities, from building a probability space and its axioms to using tools such as partitioning the sample space (Bayes' Theorem), tree diagrams, and induction. For empirical probability computation, we will learn how relative frequency reveals the probability hidden in data; the Galton board will be presented and analyzed. The convergence of relative frequency will be proved using Chebyshev's inequality. From the derivation, we can understand the relation between the amount of data and the reliability of probability inference from data.
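A small simulation, assuming a 10-row board and synthetic random left/right choices, that illustrates how the Galton board's relative frequencies approach binomial probabilities and how a Chebyshev-style bound controls the deviation:

import numpy as np
from math import comb

rng = np.random.default_rng(1)
depth, n_balls = 10, 100_000

# Each ball makes `depth` independent left/right choices; its final bin is the
# number of rights, so the bin index follows a Binomial(depth, 1/2) distribution.
bins = rng.integers(0, 2, size=(n_balls, depth)).sum(axis=1)
relative_freq = np.bincount(bins, minlength=depth + 1) / n_balls
theoretical = np.array([comb(depth, k) / 2**depth for k in range(depth + 1)])

print(np.round(relative_freq, 4))
print(np.round(theoretical, 4))

# Chebyshev: for a bin of probability p, the relative frequency over n balls
# deviates from p by more than eps with probability at most p(1-p)/(n*eps^2).
p, eps = theoretical[5], 0.01
print("Chebyshev bound:   ", p * (1 - p) / (n_balls * eps**2))
print("observed deviation:", abs(relative_freq[5] - p))
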
Location: CoRE 301
Committee:
Start Date: 02 Feb 2024;
Start Time: 02:00PM - 03:00PM
Title: Eliciting Information without Verification from Humans and Machines

Bio:

Yuqing Kong is currently an assistant professor at the Center on Frontiers of Computing Studies (CFCS), Peking University. She obtained her Ph.D. degree from the Computer Science and Engineering Department at the University of Michigan in 2018 and her bachelor's degree in mathematics from the University of Science and Technology of China in 2013. Her research interests lie at the intersection of theoretical computer science and economics: information elicitation, prediction markets, mechanism design, and the future applications of these areas to crowdsourcing and machine learning.


Speaker:
Abstract: Many application domains rely on eliciting high-quality (subjective) information. This presentation will discuss how to elicit and aggregate information from both human and machine participants, especially when the information cannot be directly verified. The first part of the talk presents a mechanism, DMI-Mechanism, designed to incentivize truth-telling in the setting where participants are assigned multiple multiple-choice questions (e.g., what is the quality of the above content? High/Low). DMI-Mechanism ensures that truthful responses are more rewarding than any less informative strategy. The implementation of DMI-Mechanism is straightforward, requiring no verification or prior knowledge, and involves only two participants and four questions for binary-choice scenarios. When applied to machine learning, DMI-Mechanism results in a loss function that is invariant to label noise. The second part of the talk discusses the elicitation of information not just from humans but also from machines. Recognizing the limitations in time and resources that humans and machines have, the talk introduces a method to elicit and analyze the 'thinking hierarchy' of both entities. This approach not only facilitates the aggregation of information when the majority of agents are at less sophisticated 'thinking' levels but also provides a unique way to compare humans and machines. This talk is based on a series of works including Kong (SODA 2020, ITCS 2022, JACM 2024), Xu, Cao, Kong, Wang (NeurIPS 2019), Kong, Li, Zhang, Huang, Wu (NeurIPS 2022), and Huang, Kong, Mei (2024).
Location: CoRE 301
Committee:
Start Date: 08 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Matching Algorithms in Theory and Practice

Bio:

Abraham Gale is a graduating PhD student at Rutgers working under Amélie Marian. His research focuses on fair and explainable algorithms, specifically designing algorithms for high-stakes applications that are understandable to stakeholders. He looks forward to teaching courses that range from introductory algorithms and data structures to more advanced networking, database, and theory electives.


Speaker:
Abstract: This teaching demonstration is aimed at helping students understand the canonical matching algorithms and their real-world implications. Matching algorithms are used widely in the real world, for everything from kidney donations to public high school admissions. We will discuss Deferred Acceptance as well as older algorithms such as Immediate Acceptance. The goal is for students to gain insight into how to choose from available algorithmic tools, starting from theory and continuing to implementation. The lecture will start with a brief explanation of what these algorithms are and their properties, with some theoretical discussion. We will then move on to a brief discussion of implementation details.
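A compact sketch of textbook student-proposing Deferred Acceptance (Gale-Shapley) with capacities, using made-up students and schools; implementation details such as tie handling are simplified for the lecture setting:

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing deferred acceptance (Gale-Shapley).

    student_prefs[s]: list of schools in order of preference.
    school_prefs[c]:  list of students in order of preference.
    capacities[c]:    number of seats at school c.
    """
    rank = {c: {s: i for i, s in enumerate(prefs)} for c, prefs in school_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}   # next school each student proposes to
    tentative = {c: [] for c in school_prefs}     # current tentative admits per school
    free = list(student_prefs)                    # students without a tentative seat

    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                              # s has exhausted their preference list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        tentative[c].append(s)
        tentative[c].sort(key=lambda x: rank[c][x])   # keep best-ranked admits first
        if len(tentative[c]) > capacities[c]:
            free.append(tentative[c].pop())           # reject the worst tentative admit

    return {c: sorted(admits) for c, admits in tentative.items()}

students = {"ana": ["north", "south"], "bo": ["north", "south"], "cy": ["north", "south"]}
schools = {"north": ["bo", "ana", "cy"], "south": ["ana", "cy", "bo"]}
print(deferred_acceptance(students, schools, {"north": 1, "south": 2}))
# {'north': ['bo'], 'south': ['ana', 'cy']}
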
Location: CoRE 301
Committee:
Start Date: 12 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Security of Quantum Computing Systems

Bio:

Prof. Jakub Szefer's research focuses on computer architecture and hardware security. His research encompasses secure processor architectures, cloud security, FPGA (Field Programmable Gate Array) attacks and defenses, hardware FPGA implementation of cryptographic algorithms, and most recently quantum computer cybersecurity. Among others, Prof. Szefer is the author of the first book focusing on processor architecture security, "Principles of Secure Processor Architecture Design", published in 2018, and he is a co-editor of a book on "Security of FPGA-Accelerated Cloud Computing Environments", published in 2023. He is a recipient of awards such as the NSF CAREER award and is a senior member of IEEE (2019) and ACM (2022).


Speaker:
Abstract: Quantum computer device research continues to advance rapidly to improve the size and fidelity of quantum computers. In parallel, an increasing number of existing quantum computing systems are being made available for use by researchers and the general public through cloud-based services. In particular, more and more quantum computer systems are becoming available as cloud-based services thanks to IBM Quantum, Amazon Braket, Microsoft Azure, and other cloud providers. Ease of access makes these computers accessible to almost anybody and can help advance developments in algorithms, quantum programs, compilers, etc. However, open, cloud-based access may make these systems vulnerable to novel security threats that could affect the operation of the quantum computers or the users of these devices. Further, as with any cloud-based computing system, users do not have physical control of the remote devices. Untrusted cloud providers, or malicious insiders within an otherwise trusted cloud provider, also pose novel security threats. Users' programs could be stolen or manipulated, or output data could be leaked out. The goal of this seminar will be to introduce the audience to recent research on the security of quantum computing systems. During the seminar, novel security attacks on quantum computing systems will be discussed, as well as corresponding defenses. The focus of the seminar will be on superconducting qubit quantum computers; however, the security ideas can be applied to other types of quantum computers.
Location: CoRE 301
Committee:
Start Date: 13 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Data Privacy in the Decentralized Era

Bio:

Amrita Roy Chowdhury is a CRA/CCC CIFellow at the University of California, San Diego, working with Prof. Kamalika Chaudhuri. She graduated with her PhD from the University of Wisconsin-Madison, where she was advised by Prof. Somesh Jha. She completed her Bachelor of Engineering in Computer Science at the Indian Institute of Engineering Science and Technology, Shibpur, where she was awarded the President of India Gold Medal. Her work explores the synergy between differential privacy and cryptography through novel algorithms that expose the rich interconnections between the two areas, both in theory and practice. She has been recognized as a Rising Star in EECS in 2020 and 2021, and was a Facebook Fellowship finalist in 2021. She was also selected as a UChicago Rising Star in Data Science in 2021.


Speaker:
Abstract: Data is today generated on smart devices at the edge, shaping a decentralized data ecosystem comprising multiple data owners (clients) and a service provider (server). Clients share their personal data with the server for specific services, while the server performs analysis on the joint dataset. However, the sensitive nature of the involved data, coupled with an inherent misalignment of incentives between clients and the server, breeds mutual distrust. Consequently, a key question arises: how can we facilitate private data analytics within a decentralized data ecosystem comprising multiple distrusting parties? My research shows a way forward by designing systems that offer strong and provable privacy guarantees while preserving complete data functionality. I accomplish this by systematically exploring the synergy between cryptography and differential privacy, exposing their rich interconnections in both theory and practice. In this talk, I will focus on two systems, CryptE and EIFFeL, which enable privacy-preserving query analytics and machine learning, respectively.
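As a generic illustration of the differential-privacy side of this work (not CryptE or EIFFeL themselves), here is a minimal Laplace-mechanism sketch for a counting query over hypothetical client data:

import numpy as np

def laplace_count(values, predicate, epsilon, rng=None):
    """Release a counting query with epsilon-differential privacy.

    A count changes by at most 1 when one client's record changes
    (sensitivity 1), so adding Laplace(1/epsilon) noise satisfies epsilon-DP.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(1 for v in values if predicate(v))
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical client data: 1 = clicked, 0 = did not click.
clicks = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
print(laplace_count(clicks, lambda v: v == 1, epsilon=0.5))  # noisy count near 6
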
Location: CoRE 301
Committee:
Start Date: 16 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Graph Exploration and Applications of Breadth-First Search (BFS)

Bio:

Surya Teja Gavva graduated with a Ph.D. in mathematics from Rutgers University in May 2023 and is currently a doctoral lecturer at Queens College, City University of New York. His research interests include analytic problems in theoretical computer science and number theory, specifically Harmonic analysis (Analysis of Boolean functions, L-functions and Automorphic Forms), Discrepancy Theory and Discrete Probability. He is passionate about teaching, mentoring students and community organizing. He has taught a wide variety of courses in mathematics and computer science since 2010.


Speaker:
Abstract: This lecture will delve into the concept of graph exploration, focusing on one of its most fundamental algorithms, Breadth-First Search (BFS). We will explore the inner workings of BFS, including a queue implementation and its wide range of applications that make it an essential tool in computer science and beyond.
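A minimal queue-based BFS sketch of the kind the lecture describes, computing hop distances on a small made-up adjacency-list graph:

from collections import deque

def bfs_distances(adj, source):
    """Breadth-first search from `source` over an adjacency-list graph.

    Returns the minimum number of edges from `source` to every reachable
    vertex; the FIFO queue guarantees vertices are settled in distance order.
    """
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:            # first visit gives the shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3], 5: []}
print(bfs_distances(graph, 0))  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}; vertex 5 is unreachable
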
Location: CoRE 301
Committee:
Start Date: 19 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Modern Algorithms for Massive Graphs: Structure and Compression

Bio:

Zihan Tan is a postdoctoral associate at DIMACS, Rutgers University. Before joining DIMACS, he obtained his Ph.D. from the University of Chicago, where he was advised by Julia Chuzhoy. He is broadly interested in theoretical computer science, with a focus on graph algorithms and graph theory.


Speaker:
Abstract: In the era of big data, the significant growth in graph size renders numerous traditional algorithms, including those with polynomial or even linear time complexity, inefficient. Therefore, we need novel approaches for efficiently processing massive graphs. In this talk, I will discuss two modern approaches towards this goal: structure exploitation and graph compression. I will first show how to utilize graph structure to design better approximation algorithms, showcasing my work on the Graph Crossing Number problem. I will then show how to compress massive graphs into smaller ones while preserving their flow/cut/distance structures and thereby obtaining faster algorithms.
Location: CoRE 301
Committee:
Start Date: 20 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: The Computational Cost of Detecting Hidden Structures: from Random to Deterministic

Bio:

Tim Kunisky is a postdoctoral associate at Yale University, hosted by Dan Spielman in the Department of Computer Science. He previously graduated with a bachelor's degree in mathematics from Princeton University, worked on machine learning for ranking problems and natural language processing at Google, and received his PhD in Mathematics from the Courant Institute of Mathematical Sciences at New York University, where he was advised by Afonso Bandeira and Gérard Ben Arous. His main research interests concern how probability theory and mathematical statistics interact with computational complexity and the theory of algorithms.


Speaker:
Abstract: I will present a line of work on the computational complexity of algorithmic tasks on random inputs, including hypothesis testing, sampling, and certifying bounds on optimization problems. Surprisingly, these diverse tasks admit a unified analysis involving the same two main ingredients. The first is the study of algorithms that output low-degree polynomial functions of their inputs. Such algorithms are believed to be optimal for many statistical tasks and can be understood with the theory of orthogonal polynomials, leading to strong evidence for the difficulty of certain hypothesis testing problems. The second is a strategy of "planting" unusual structures in problem instances, which shows that algorithms for sampling and certification can be interpreted as implicitly performing hypothesis testing. I will focus on examples of hypothesis testing related to principal component analysis (PCA), and their connections with problems motivated by statistical physics: (1) sampling from Ising models, and (2) certifying bounds on random functions associated with models of spin glasses. Next, I will describe more recent results probing the computational cost of certification not just in random settings under strong distributional assumptions, but also for more generic problem instances. As an extreme example, by considering the sum-of-squares hierarchy of semidefinite programs, I will show how some of the above ideas may be completely derandomized and applied in a deterministic setting. Using as a testbed the long-standing open problem of computing the clique number of the number-theoretic Paley graph, I will give an analysis of semidefinite programming that leads both to new approaches to this combinatorial optimization problem and to refined notions of pseudorandomness capturing deterministic versions of phenomena from random matrix theory and free probability.
Location: CoRE 301
Committee:
Start Date: 22 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Economics Meets Approximations

Bio:

Kangning Wang is currently a Motwani Postdoctoral Fellow at Stanford University. He earned his Ph.D. from Duke University in 2022 and subsequently held the position of J.P. Morgan Research Fellow at the Simons Institute at UC Berkeley. Kangning's research is at the interface of computer science, economics, and operations research, with a focus on developing economic and societal solutions from an algorithmic perspective. His research has been recognized by an ACM SIGecom Doctoral Dissertation Award Honorable Mention, a Duke CS Best Dissertation Award, and Best Paper Awards at SODA 2024 and WINE 2018.


Speaker:
Abstract: Traditional economic research often focuses on solutions that are exactly optimal. However, these exact optima frequently prove undesirable, due to concerns surrounding incentives, robustness, fairness, computational efficiency, and more. This has led to the formulation of several renowned "impossibility theorems." More recently, the emerging interdisciplinary field of *economics and computation* has brought about a shift in perspective, embracing an approximation-based approach to classical problems. This shift opens up avenues for novel economic solutions that both hold theoretical significance and provide practical guidelines to complex real-world applications. In this presentation, I will explore this approximation viewpoint applied to various well-established economic concepts, highlighting its power of uncovering the once-impossible possibilities.
Location: CoRE 301
Committee:
Start Date: 22 Feb 2024;
Start Time: 02:00PM - 04:00PM
Title: Hybrid CPU-GPU Architectures for Processing Large-Scale Data on Limited Hardware

Bio:
Speaker:
Abstract: In the dynamic field of data processing, efficiently managing large-scale data, both offline and in real-time, is a growing challenge. With the limitations of hardware as a focal concern, this dissertation introduces hybrid CPU-GPU frameworks. These are designed specifically to meet the computational needs of data-intensive environments in real time. A central feature of these designs is a unique shared-memory-space approach, which is effective in facilitating data transfers and ensuring synchronization across multiple computations. The research highlights the increasing trend towards swift processing of large-scale data. In sectors like distributed fiber optic sensing, there's a consistent demand for immediate real-time data processing. These designs combine the advantages of both CPU and GPU components, effectively handling fluctuating workloads and addressing computational challenges. Designed for optimal performance in diverse computing environments with limited hardware, the system architecture offers scalability, adaptability, and increased efficiency. Key components of the design, such as shared memory space utilization, process replication, CPU-GPU synchronization, and real-time visualization capabilities, are thoroughly analyzed to demonstrate its capability in real-time data processing.
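A toy, CPU-only sketch of the shared-memory-space idea, assuming Python's multiprocessing.shared_memory is available: a producer writes one data frame into a shared block and a consumer processes it without copying (the actual CPU-GPU pipeline is not shown):

import numpy as np
from multiprocessing import Process, shared_memory

def consumer(name, shape):
    """Attach to an existing shared block and process it in place (no copy)."""
    shm = shared_memory.SharedMemory(name=name)
    data = np.ndarray(shape, dtype=np.float32, buffer=shm.buf)
    print("consumer sees mean:", float(data.mean()))
    shm.close()

if __name__ == "__main__":
    samples = np.random.rand(4, 1024).astype(np.float32)   # e.g. one sensor frame
    shm = shared_memory.SharedMemory(create=True, size=samples.nbytes)
    buf = np.ndarray(samples.shape, dtype=samples.dtype, buffer=shm.buf)
    buf[:] = samples                                        # producer writes once

    p = Process(target=consumer, args=(shm.name, samples.shape))
    p.start(); p.join()

    shm.close(); shm.unlink()                               # release the shared block
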
Location: CoRE 301
Committee:

Prof. Badri Nath (Chair)

Prof. Srinivas Narayana

Prof. Zheng Zhang

Prof. Kazem Cheshmi (External)

Start Date: 23 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Fundamental Problems in AI: Transferability, Compressibility and Generalization

Bio:

Tomer Galanti is a Postdoctoral Associate at the Center for Brains, Minds, and Machines at MIT, where he focuses on the theoretical and algorithmic aspects of deep learning. He received his Ph.D. in Computer Science from Tel Aviv University, during which he served as a Research Scientist Intern at Google DeepMind's Foundations team. He has published numerous papers in top-tier conferences and journals, including NeurIPS, ICML, ICLR, and JMLR. Notably, his paper "On the Modularity of Hypernetworks" was awarded an oral presentation at NeurIPS 2020.


Speaker:
Abstract: In this talk, we delve into several fundamental questions in deep learning. We start by addressing the question, "What are good representations of data?" Recent studies have shown that the representations learned by a single classifier over multiple classes can be easily adapted to new classes with very few samples. We offer a compelling explanation for this behavior by drawing a relationship between transferability and an emergent property known as neural collapse. Later, we explore why certain architectures, such as convolutional networks, outperform fully-connected networks, providing theoretical support for how their inherent sparsity aids learning with fewer samples. Lastly, I present recent findings on how training hyperparameters implicitly control the ranks of weight matrices, consequently affecting the model's compressibility and the dimensionality of the learned features. Additionally, I will describe how this research integrates into a broader research program where I aim to develop realistic models of contemporary learning settings to guide practices in deep learning and artificial intelligence. Utilizing both theory and experiments, I study fundamental questions in the field of deep learning, including why certain architectural choices improve performance or convergence rates, when transfer learning and self-supervised learning work, and what kinds of data representations are learned in practical settings.
Location: CoRE 301
Committee:
Start Date: 26 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: The Marriage of (provable) Algorithm Design and Machine Learning

Bio:

Sandeep is a final year PhD student at MIT, advised by Piotr Indyk. His interests are broadly in fast algorithm design. Recently, he has been working in the intersection of machine learning and classical algorithms by designing provable algorithms in various ML settings, such as efficient algorithms for processing large datasets, as well as using ML to inspire algorithm design.


Speaker:
Abstract: The talk is motivated by two questions at the interplay between algorithm design and machine learning: (1) How can we leverage the predictive power of machine learning in algorithm design? and (2) How can algorithms alleviate the computational demands of modern machine learning? Towards the first question, I will demonstrate the power of data-driven and learning-augmented algorithm design. I will argue that data should be a central component in the algorithm design process itself. Indeed, in many instances, inputs are similar across different algorithm executions. Thus, we can hope to extract information from past inputs or other learned information to improve future performance. Towards this end, I will zoom in on a fruitful template for incorporating learning into algorithm design and highlight a success story in designing space-efficient data structures for processing large data streams. I hope to convey that learning-augmented algorithm design should be a tool in every algorithmist's toolkit. Then I will discuss algorithms for scalable ML computations to address the second question. I will focus on my work on understanding global similarity relationships in large high-dimensional datasets, encoded in a similarity matrix. By exploiting the geometric structure of specific similarity functions, such as distance or kernel functions, we can understand the capabilities -- and fundamental limitations -- of computing on similarity matrices. Overall, my main message is that sublinear algorithm design principles are instrumental in designing scalable algorithms for big data. I will conclude with some exciting directions in pushing the boundaries of learning-augmented algorithms, as well as new algorithmic challenges in scalable computations for faster ML.
Location: CoRE 301
Committee:
Start Date: 27 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: A Step Further Toward Scalable and Automatic Distributed Large Language Model Pre-training

Bio:

Hongyi Wang is a Senior Project Scientist at the Machine Learning Department of CMU working with Prof. Eric Xing. He obtained his Ph.D. degree from the Department of Computer Sciences at the University of Wisconsin-Madison, where he was advised by Prof. Dimitris Papailiopoulos. Dr. Wang received the Rising Stars Award from the Conference on Parsimony and Learning in 2024 and the Baidu Best Paper Award at the Spicy FL workshop at NeurIPS 2020. He led the distributed training effort of LLM360, an academic research initiative advocating for fully transparent open-source LLMs. His research has been adopted by companies like IBM, Sony, and FedML Inc., and he is currently funded by NSF, DARPA, and Semiconductor Research Corporation.


Speaker:
Abstract: Large Language Models (LLMs), such as GPT and LLaMA, are at the forefront of advances in the field of AI. Nonetheless, training these models is computationally daunting, necessitating distributed training methods. Distributed training, however, generally suffers from bottlenecks like heavy communication costs and the need for extensive performance tuning. In this talk, I will first introduce a low-rank training framework for enhancing communication efficiency in data parallelism. The proposed framework achieves almost linear scalability without sacrificing model quality, by leveraging a full-rank to low-rank training strategy and a layer-wise adaptive rank selection mechanism. Hybrid parallelism, which combines data and model parallelism, is essential for LLM pre-training. However, designing effective hybrid parallelism strategies requires heavy tuning effort and strong expertise. I will discuss how to automatically design high-throughput hybrid-parallelism training strategies using system cost models. Finally, I will demonstrate how to use the automatically designed hybrid parallelism strategies to train state-of-the-art LLMs.
Location: CoRE 301
Committee:
Start Date: 29 Feb 2024;
Start Time: 10:30AM - 12:00PM
Title: Bridging the Gap Between Theory and Practice: Solving Intractable Problems in a Multi-Agent Machine Learning World

Bio:

Emmanouil-Vasileios (Manolis) Vlatakis Gkaragkounis is currently a Foundations of Data Science Institute (FODSI) Postdoctoral Fellow at the Simons Institute for the Theory of Computing, UC Berkeley, mentored by Prof. Michael Jordan. He completed his Ph.D. in Computer Science at Columbia University, under the guidance of Professors Mihalis Yannakakis and Rocco Servedio, and holds B.Sc. and M.Sc. degrees in Electrical and Computer Engineering. Manolis specializes in the theoretical aspects of Data Science, Machine Learning, and Game Theory, with expertise in beyond worst-case analysis, optimization, and data-driven decision-making in complex environments. His work has applications across multiple areas, including privacy, neural networks, economics and contract theory, statistical inference, and quantum machine learning.


Speaker:
Abstract: "Traditional computing sciences have made significant advances with tools like Complexity and Worst-Case Analysis. However, Machine Learning has unveiled optimization challenges, from image generation to autonomous vehicles, that surpass the analytical capabilities of past decades. Despite their theoretical complexity, such tasks often become more manageable in practice, thanks to deceptively simple yet efficient techniques like Local Search and Gradient Descent.In this talk, we will delve into the effectiveness of these algorithms in complex environments and discuss developing a theory that transcends traditional analysis by bridging theoretical principles with practical applications. We will also explore the behavior of these heuristics in multi-agent strategic environments, evaluating their ability to achieve equilibria using advanced tools from Optimization, Statistics, Dynamical Systems, and Game Theory. The discussion will conclude with an outline of future research directions and my vision for a computational understanding of multi-agent Machine Learning.
Location: CoRE 301
Committee:
Start Date: 01 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Mitigating the Risks of Large Language Model Deployments for a Trustworthy Cyberspace

Bio:

Tianxing He is a postdoctoral researcher working with Prof. Yulia Tsvetkov at the University of Washington. His research is focused on natural language generation (NLG) with large language models (LLMs). He did his Ph.D. at MIT CSAIL under the guidance of Prof. James Glass, where he worked towards a better understanding of LM generation. He received his Master's degree from the SpeechLab at Shanghai Jiao Tong University (SJTU) with Prof. Kai Yu, and a Bachelor's degree from the ACM class at SJTU. Tianxing currently works on developing algorithms or protocols for a trustworthy cyberspace in the era of large language models. He is also interested in the critical domains of monitoring, detecting, and mitigating various behaviors exhibited by language models under diverse scenarios.

Tianxing’s research is recognized with accolades, including the UW Postdoc Research Award, the CCF-Tencent Rhino-Bird Young Faculty Open Research Fund, and The ORACLE Project Award. He is a recipient of the Best Paper Award at NeurIPS-ENLSP 2022.


Speaker:
Abstract: Large language models (LLMs) have ushered in transformative possibilities and critical challenges within our cyberspace. While offering innovative applications, they also introduce substantial AI safety concerns. In my recent research, I employ a comprehensive approach encompassing both red teaming, involving meticulous examination of LLM-based systems to uncover potential vulnerabilities, and blue teaming, entailing the development of algorithms and protocols to enhance system robustness. In this talk, I will delve into three recent projects focused on evaluation, detection, and privacy. (1) Can we trust LLMs as reliable natural language generation (NLG) evaluation metrics? We subject popular LLM-based metrics to extensive stress tests, uncovering significant blind spots. Our findings illuminate clear avenues for enhancing the robustness of these metrics. (2) How can we ensure robust detection of machine-generated text? We introduce SemStamp, a semantic watermark algorithm that performs rejection sampling in the semantic space during LLM generation. The inherent properties of semantic mapping render the watermark resilient to paraphrasing attacks. (3) How do we protect decoding-time privacy in prompted generation with online services like ChatGPT? The current paradigm gives no option to users who want to keep the generated text to themselves. We propose LatticeGen, a cooperative framework in which the server still handles most of the computation while the client controls the sampling operation. The key idea is that the true generated sequence is mixed with noise tokens by the client and hidden in a noised lattice. To wrap up, I will outline future directions in the realm of AI safety, addressing the evolving challenges and opportunities that lie ahead.
Location: CoRE 301
Committee:
Start Date: 04 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Towards a Unified Theory of Approximability of Globally Constrained CSPs

Bio:

Suprovat Ghoshal is currently a postdoc at Northwestern University and Toyota Technological Institute at Chicago, hosted by Konstantin Makarychev and Yury Makarychev. Before this, he was a postdoc at the University of Michigan, hosted by Euiwoong Lee. He received his Ph.D. from the Indian Institute of Science (IISc), where he was advised by Arnab Bhattacharyya and Siddharth Barman. His thesis received an honorable mention at the ACM India Doctoral Dissertation Award and a Best Alumni Thesis Award from IISc. He is primarily interested in exploring the landscape of optimization problems using the theory of approximation algorithms and the hardness of approximation.


Speaker:
Abstract: Approximation algorithms are a natural way of dealing with the intractability barrier faced by many fundamental computational problems in discrete and continuous optimization. The past couple of decades have seen vast progress in this area, culminating in unified theories of algorithms and hardness for several fundamental classes of problems, such as Maximum Constraint Satisfaction Problems (Max-CSPs). However, a similarly complete understanding of many fundamental generalizations of these classes is yet to be realized, and in particular, the challenges in this direction represent some of the central open questions in the theory of algorithms and hardness. In this talk, I will present some recent results that build towards a theory of optimal algorithms and hardness for Globally Constrained CSPs (Constraint Satisfaction Problems), a class of problems that vastly generalizes Max-CSPs. These results are derived using recently emerging tools in the theory of approximation, such as the Small-Set Expansion Hypothesis and the Sum-of-Squares Hierarchy, and they yield the first nearly tight bounds for classical fundamental problems such as Densest-k-Subgraph and Densest-k-SubHypergraph, in the regime where k is linear in the number of vertices. I will conclude by describing this research program's broad, long-term goals, as well as some specific open questions that represent key bottlenecks towards its realization.
Location: CoRE 301
Committee:
Start Date: 05 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Building Transparency in Representation Learning

Bio:

Yaodong Yu is a PhD student in the EECS department at UC Berkeley, advised by Michael I. Jordan and Yi Ma. His research focuses on the foundations and applications of trustworthy machine learning, including interpretable deep neural networks, privacy-preserving foundation models, and uncertainty quantification in complex environments. His research has been recognized with the CPAL 2024 Rising Star Award, and he won first place in the NeurIPS 2018 Adversarial Vision Challenge.


Speaker:
Abstract: Machine learning models trained on vast amounts of data have achieved remarkable success across various applications. However, they also pose new challenges and risks for deployment in real-world high-stakes domains. Decisions made by deep learning models are often difficult to interpret, and the underlying mechanisms remain poorly understood. Given that deep learning models operate as black boxes, it is challenging to understand, much less resolve, various types of failures in current machine learning systems. In this talk, I will describe our work towards building transparent machine learning systems through the lens of representation learning. First, I will present a white-box approach to understanding transformer models. I will show how to derive a family of mathematically interpretable transformer-like deep network architectures by maximizing the information gain of the learned representations. Furthermore, I will demonstrate that the proposed interpretable transformer achieves competitive empirical performance on large-scale real-world datasets, while learning more interpretable and structured representations than black-box transformers. Next, I will present our work on training the first set of vision and vision-language foundation models with rigorous differential privacy guarantees, and demonstrate the promise of high-utility differentially private representation learning. To conclude, I will discuss future directions towards transparent and safe AI systems we can understand and trust.
Location: CoRE 301
Committee:
Start Date: 07 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Mathematical Foundations for Trustworthy Machine Learning

Bio:

Lunjia Hu is a final-year Computer Science PhD student at Stanford University, advised by Moses Charikar and Omer Reingold. He works on advancing the theoretical foundations of trustworthy machine learning, addressing fundamental questions about interpretability, fairness, robustness, and uncertainty quantification. His works on algorithmic fairness and machine learning theory have received Best Student Paper awards at ALT 2022 and ITCS 2023.


Speaker:
Abstract: Machine learning holds significant potential for positive societal impact. However, in critical applications involving people, such as healthcare, employment, and lending, machine learning raises serious concerns of fairness, robustness, and interpretability. Addressing these concerns is crucial for making machine learning more trustworthy. This talk will focus on three lines of my recent research establishing the mathematical foundations of trustworthy machine learning. First, I will introduce a theory that optimally characterizes the amount of data needed for achieving multicalibration, a recent fairness notion with many impactful applications. This result is an instance of a broader theory developed in my research, which gives the first sample complexity characterizations for learning tasks with multiple interacting function classes (ALT’22 Best Student Paper, ITCS’23 Best Student Paper). Next, I will discuss my research in omniprediction, a new approach to robust learning that allows for simultaneous optimization of different loss functions and fairness constraints (ITCS’23, ICML’23). Finally, I will present a principled theory of calibration of neural networks (STOC’23). This theory provides an essential tool for understanding uncertainty quantification and interpretability in deep learning, allowing rigorous explanations for interesting empirical phenomena (NeurIPS’23 spotlight, ITCS’24).
Location: CoRE 301
Committee:
Start Date: 08 Mar 2024;
Start Time: 10:00AM - 11:00AM
Title: Simplification of Boolean Expressions using Karnaugh Map

Bio:

Murtadha Aldeer graduated with a Ph.D. in Electrical and Computer Engineering from WINLAB/Rutgers University in October 2023, where he was supervised by Professor Richard Martin and Professor Jorge Ortiz. He is currently an Assistant Teaching Professor at Montclair State University. His research interests include Cyber-Physical Systems, Connected Health, Internet of Things, and Smart Buildings. He is passionate about teaching, mentoring students, and community organizing. He has taught a variety of courses in computer science and computer engineering since 2018.


Speaker:
Abstract: This lecture will explore the fundamentals of Boolean algebra, with a particular emphasis on simplifying Boolean expressions using Karnaugh maps (K-maps). We will demonstrate how this method efficiently simplifies Boolean expressions, which is invaluable for minimizing logic circuits.
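As a minimal, concrete taste of this kind of simplification (an illustration using SymPy, not part of the lecture materials): SOPform minimizes a truth table into a sum-of-products expression, the same minimal form one would read off a Karnaugh map for this function.

    from sympy import symbols
    from sympy.logic import SOPform

    a, b, c = symbols('a b c')

    # F(a, b, c) = 1 on minterms 1, 3, 5, 7, i.e., whenever c = 1;
    # the three-variable K-map groups all four cells into the single term c.
    minterms = [[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
    print(SOPform([a, b, c], minterms))   # prints: c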
Location: CoRE 301
Committee:
Start Date: 13 Mar 2024;
Start Time: 09:00AM - 11:00AM
Title: Causal Collaborative Filtering

Bio:
Speaker:
Abstract: In the era of information explosion, recommender systems have become essential for fulfilling users' personalized and complex demands across various services like e-commerce and social media. Collaborative filtering algorithms, fundamental to these systems, traditionally leverage similarities between users and items to provide recommendations, focusing on mining correlative patterns. However, this dissertation introduces causal collaborative filtering methods based on the structural causal model framework to address issues like Simpson's paradox, confounding bias, and echo chambers. These methods shift from correlative to causal learning by formulating recommendations as "what if" questions and applying causal inference techniques. The dissertation presents a comprehensive approach that mitigates various challenges through different types of causal graphs and inference techniques, providing a significant advancement in the field of recommender systems.
Location: CoRE 305
Committee:

Prof. Yongfeng Zhang (advisor)

Prof. Hao Wang

Prof. Desheng Zhang

Prof. Hamed Zamani (external)

Start Date: 13 Mar 2024;
Start Time: 04:00PM - 05:30PM
Title: Fairness in Recommender Systems

Bio:
Speaker:
Abstract: As one of the most pervasive applications of machine learning, recommender systems play an important role in assisting human decision making, which gives rise to essential concerns regarding the fairness of such systems. Research on fair machine learning has mainly focused on classification and ranking tasks. Although a recommendation algorithm can usually be considered a type of ranking algorithm, the fairness concerns in recommender systems are more complicated and should be extended to multiple stakeholders. In particular, beyond the item exposure fairness typically considered in ranking problems, we should also attend to the fairness demands of users in recommender systems. To improve user-side fairness in recommendation, we have proposed three works, which concentrate on user group-level fairness, user individual-level fairness, and enhancing fairness for cold-start users, respectively.
Location: CoRE 301
Committee:

Prof. Yongfeng Zhang (Advisor)

Prof. Hao Wang

Prof. Amélie Marian

Prof. Yi Zhang (external)

Start Date: 15 Mar 2024;
Start Time: 08:45PM - 10:00PM
Title: Large Language Models for Data Driven Applications

Bio:
Speaker:
Abstract: This dissertation presents a series of innovative approaches leveraging deep learning and Large Language Models to address challenges in various steps of the pipeline of real-world data-driven applications. First, we explore the enhancement of locality-sensitive hashing (LSH) for entity blocking through a neuralization approach. Entity blocking is an important data pre-processing step that finds similar data records that might refer to the same real-world entity. We train deep neural networks to act as hashing functions for complex metrics, which surpasses the limitations of generic similarity metrics in traditional LSH-based methods. Our methodology, embodied in NLSHBlock (Neural-LSH Block), leverages pre-trained language models fine-tuned with a novel LSH-based loss function. NLSHBlock achieves significant performance improvements in entity blocking tasks and can boost the performance of later steps in the data processing pipeline. Next, we introduce Sudowoodo, a multi-purpose data integration framework based on contrastive representation learning and large language models, which offers a unified solution for data integration tasks like entity matching. Entity matching is a process that determines whether a pair of data records represent the same real-world entity and plays an essential role in many applications. To tackle the common issue of scarce high-quality labeled data, Sudowoodo utilizes similarity-aware data representations learned without labels and enables effective fine-tuning in the semi-supervised setting where only a small amount of labeled data is available. Sudowoodo also applies to other data integration tasks such as data cleaning and semantic type detection. Finally, we propose a Generate-and-Retrieve with Reasoning (GTR) framework for recommender systems, inspired by generative large language models. Entity recommendation is usually the last step in a data-driven application pipeline. Our framework views recommendation tasks as a process of instruction following by generative large language models, employing natural language instructions to express and decipher user preferences and intentions. GTR innovates by directly generating item names, employing state-of-the-art retrieval models for item alignment, and enhancing model performance through reasoning distillation. Through rigorous experimentation on diverse real-world datasets, we validate the effectiveness of these approaches, setting new benchmarks in their respective domains. The findings of this dissertation not only advance the state of the art in crucial steps of industrial application pipelines, including entity blocking, entity matching, and entity recommendation, but also open promising avenues for the application of deep learning and large language models in complex data integration and recommendation tasks, fostering improved accuracy, efficiency, and user interaction.
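To make the blocking step concrete, here is a hedged, self-contained sketch of the classical MinHash-banding baseline that NLSHBlock improves on by learning the hash functions (the records and parameters below are illustrative, not from the dissertation): records whose token sets collide in at least one signature band become candidate pairs for downstream matching.

    import hashlib
    from collections import defaultdict
    from itertools import combinations

    def minhash_signature(tokens, num_hashes=32):
        # One MinHash value per seeded hash function over the record's tokens.
        return [min(int(hashlib.md5(f"{seed}:{t}".encode()).hexdigest(), 16)
                    for t in tokens)
                for seed in range(num_hashes)]

    def candidate_pairs(records, num_hashes=32, bands=16):
        # Band the signatures; records sharing any full band become candidates.
        rows = num_hashes // bands
        buckets = defaultdict(list)
        for rid, text in records.items():
            sig = minhash_signature(text.lower().split(), num_hashes)
            for b in range(bands):
                buckets[(b, tuple(sig[b * rows:(b + 1) * rows]))].append(rid)
        pairs = set()
        for ids in buckets.values():
            pairs.update(combinations(sorted(ids), 2))
        return pairs

    records = {
        1: "Apple iPhone 13 128GB blue",
        2: "iPhone 13 blue 128 GB by Apple",
        3: "Samsung Galaxy S22 256GB",
    }
    print(candidate_pairs(records))  # with high probability: {(1, 2)}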
Location: FULLY REMOTE
Committee:

Prof. Yongfeng Zhang (advisor)

Prof. Dong Deng (co-advisor)

Prof. Hao Wang

Dr. Xiao Qin (external member)

Start Date: 18 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Quantum and Quantum-Inspired Computation for Next-Generation Wireless Networks

Bio:

Minsung Kim is a postdoctoral associate in the Department of Computer Science at Yale University. He received his Ph.D. in Computer Science from Princeton University and his B.E. in Electrical Engineering (Great Honor) from Korea University. His research focuses on quantum and emerging computing systems for next-generation wireless networks. His work has been published in the premier venues of mobile computing and wireless networking such as ACM SIGCOMM and MobiCom. He is a recipient of the 2021 Qualcomm Innovation Fellowship, the 2022 Princeton SEAS Award for Excellence, and the 2023 Adiabatic Quantum Computing (AQC) Junior Scientist Award. He was named a Siebel Scholar (Class of 2024).


Speaker:
Abstract: A central design challenge for future generations of wireless networks is to meet the ever-increasing demand for capacity, throughput, and connectivity. While significant progress has been made in designing advanced wireless technologies, the current computational capacity at base stations to support them has been consistently identified as the bottleneck, due to limitations in processing time. Quantum computing is a potential tool to address this computational challenge. It exploits unique information processing capabilities based on quantum mechanics to perform fast calculations that are intractable by traditional digital methods. In this talk, I will present design directions for quantum compute-enabled base station systems in wireless networks and introduce our prototype systems that are implemented on real-world quantum processors. The prototypes are designed for quantum-accelerated near-optimal wireless signal processing in Multiple-Input Multiple-Output (MIMO) systems that could drastically increase wireless performance for tomorrow's next-generation wireless cellular networking standards, as well as in next-generation wireless local area networks. I will provide design guidance for quantum, quantum-inspired classical, and hybrid classical-quantum optimization in these systems, covering the underlying principles and technical details, and discuss future research directions based on the current challenges and opportunities.
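To illustrate the computational bottleneck being targeted (a hedged sketch, not the speaker's system): maximum-likelihood MIMO detection searches every combination of transmitted symbols for the one minimizing ||y - Hx||^2, a cost that grows exponentially with the number of antennas and is the kind of optimization that quantum and quantum-inspired processors are being explored for.

    import itertools
    import numpy as np

    def ml_detect(H, y, constellation=(-1, 1)):
        # Brute-force maximum-likelihood detection: try every symbol vector.
        n = H.shape[1]
        best, best_cost = None, float("inf")
        for x in itertools.product(constellation, repeat=n):
            cost = np.linalg.norm(y - H @ np.array(x)) ** 2
            if cost < best_cost:
                best, best_cost = x, cost
        return best

    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 4))            # 4x4 MIMO channel
    x_true = np.array([1, -1, 1, 1])       # BPSK symbols
    y = H @ x_true + 0.1 * rng.normal(size=4)
    print(ml_detect(H, y))                 # typically recovers (1, -1, 1, 1)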
Location: CoRE 301
Committee:
Start Date: 19 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Towards Optimal Sampling Algorithms

Bio:

Thuy-Duong “June” Vuong is a 5th year PhD student at Stanford. Her research is in designing and analyzing algorithms for sampling from complex high-dimensional distributions, with a focus on Markov chain analysis. Her work gives optimal bounds on the runtime of Markov chains using entropy analysis and the theory of high-dimensional expanders. She received Bachelor of Science degrees in Mathematics and Computer Science from the Massachusetts Institute of Technology. Her research is supported by a Microsoft Research PhD fellowship.


Speaker:
Abstract: Sampling is a fundamental task with applications in various areas including physics, statistics, and combinatorics. Many applications involve the challenging task of sampling from complex high-dimensional distributions. Markov chains are a widely adopted approach for tackling these critical problems, but current runtime analyses are suboptimal. In this talk, I will introduce “entropic independence”, a novel and powerful framework for analyzing Markov chains, and use it to obtain the tightest possible runtime bounds. My work gives the first near-linear time sampling algorithms for classical statistical physics models in the tractable regime, resolving a 70-year-old research program. My research results in highly practical algorithms and settles several long-standing open problems in sampling and approximate counting.
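As a concrete example of the kind of Markov chain such analyses concern (an illustrative sketch, not the speaker's algorithm): Glauber dynamics for the Ising model resamples one spin at a time from its conditional distribution, and the central question answered by mixing-time bounds is how many such steps suffice before the state is close to the target distribution.

    import math
    import random

    def glauber_step(spins, neighbors, beta):
        # Resample one uniformly random spin from its conditional distribution
        # under the Ising model P(sigma) proportional to exp(beta * sum over edges of s_u * s_v).
        v = random.randrange(len(spins))
        field = sum(spins[u] for u in neighbors[v])
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        spins[v] = 1 if random.random() < p_plus else -1

    # Ising model on a 20-vertex cycle; how many steps are enough to mix is
    # exactly the question that entropy-based runtime bounds address.
    n, beta = 20, 0.3
    neighbors = [[(v - 1) % n, (v + 1) % n] for v in range(n)]
    spins = [random.choice([-1, 1]) for _ in range(n)]
    for _ in range(10_000):
        glauber_step(spins, neighbors, beta)
    print(spins)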
Location: CoRE 301
Committee:
Start Date: 19 Mar 2024;
Start Time: 02:00PM - 04:00PM
Title: Techniques for Increasing the Efficiency of Physics Simulation

Bio:
Speaker:
Abstract: This work focuses on methods for improving the efficiency of simulations of 3D tetrahedral meshes representing elastic solids. These kinds of simulations are used in computer graphics for stretch/squash animations, as well as in engineering to simulate the elastic deformation of materials. Our general-purpose method speeds up such simulations, so that for any given level of processing capacity, one can simulate larger and more complicated models in a shorter period of time. Our approach focuses on reformulating mesh data for greater cache efficiency. It is much faster to access data from a cache than from memory, so whenever a datum is needed, it is better to find it in the cache than to fetch it from memory. When a mesh datum is accessed by the physics engine, its neighbors in memory are loaded into the cache as well. Since certain vertices and mesh elements are accessed together in the simulation code, we reorder the mesh data so that they are stored together in memory. If mesh data that are accessed together are stored together, they will be loaded into the cache together, reducing cache misses and page faults while the algorithms execute. We measured substantial speedups using this approach for two separate elastic-solid physics simulation engines.
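The following is a hedged illustration of locality-driven reordering in this spirit (not necessarily the exact scheme used in this work): vertices are renumbered by a breadth-first traversal of the adjacency induced by the tetrahedra, so that vertices referenced by the same elements end up near each other in memory, and the elements are rewritten to match.

    from collections import deque

    def reorder_mesh(num_vertices, tets):
        # Build vertex adjacency from the tetrahedra.
        adj = [set() for _ in range(num_vertices)]
        for tet in tets:
            for a in tet:
                for b in tet:
                    if a != b:
                        adj[a].add(b)
        # A breadth-first traversal gives the new vertex order.
        order, seen = [], [False] * num_vertices
        for start in range(num_vertices):
            if seen[start]:
                continue
            seen[start] = True
            queue = deque([start])
            while queue:
                v = queue.popleft()
                order.append(v)
                for u in sorted(adj[v]):
                    if not seen[u]:
                        seen[u] = True
                        queue.append(u)
        # Renumber vertices and rewrite the elements with the new indices.
        new_id = {old: new for new, old in enumerate(order)}
        new_tets = [tuple(new_id[v] for v in tet) for tet in tets]
        return order, new_tets

    # Two tetrahedra sharing a face: their vertices receive contiguous indices.
    print(reorder_mesh(8, [(0, 2, 4, 6), (2, 4, 6, 7)]))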
Location: CBIM #22
Committee:

Assistant Professor Mridul Aanjaneya

Professor Santosh Nagarakatte

Associate Professor Abdeslam Boularias

Assistant Professor Roie Levin

Start Date: 21 Mar 2024;
Start Time: 10:00AM - 12:00PM
Title: All-Norm Load Balancing in Massive Graphs

Bio:
Speaker:
Abstract: We address the all-norm load balancing problem in bipartite graphs in the distributed and semi-streaming models. In the all-norm load balancing problem, we are given a bipartite graph with weights on the clients. The goal is to assign each client to an adjacent server such that the Lp norm of the server load vector is approximately minimized for all p simultaneously. We present the first O(1)-approximation algorithm in the CONGEST model that runs in polylog(n) rounds. In the semi-streaming model, we develop an O(1)-approximation O(log n)-pass algorithm using different techniques. Additionally, these algorithms can be ported to the sequential model, yielding the first O(1)-approximation for the problem that runs in near-linear time.
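To make the objective concrete, here is a hedged sketch of the problem with a naive greedy baseline (not one of the O(1)-approximation algorithms from this work): each weighted client is assigned to an adjacent server, and the all-norm objective asks that the resulting server load vector be small in every Lp norm simultaneously.

    def greedy_assign(clients, num_servers):
        # clients: list of (weight, list of adjacent server ids).
        # Naive heuristic: heaviest clients first, each to its least-loaded server.
        load = [0.0] * num_servers
        for weight, servers in sorted(clients, reverse=True):
            s = min(servers, key=lambda j: load[j])
            load[s] += weight
        return load

    def lp_norm(load, p):
        return sum(x ** p for x in load) ** (1.0 / p)

    clients = [(3.0, [0, 1]), (2.0, [1, 2]), (2.0, [0]), (1.0, [1, 2])]
    load = greedy_assign(clients, num_servers=3)
    # Report the L1, L2, and L-infinity norms of the resulting load vector.
    print(load, lp_norm(load, 1), lp_norm(load, 2), max(load))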
Location: CoRE 305
Committee:

Aaron Bernstein (advisor)

Sepehr Assadi

Jie Gao

Christian Konrad (external)

Start Date: 21 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Reliable Machine Learning by Integrating Context

Bio:

Chengzhi Mao is a postdoctoral research fellow in the Department of Computer Science at Columbia University, where he completed his Ph.D. advised by Prof. Carl Vondrick and Prof. Junfeng Yang. He has also been a core faculty member at MILA, Quebec AI Institute, since 2023. He received his Bachelor's degree from Tsinghua University. His research resides in trustworthy machine learning and computer vision. His work has led to over 20 publications and orals at top computer vision and machine learning conferences and has been covered by Science and MIT News. He is a recipient of the CVPR doctoral award in 2023.


Speaker:
Abstract: Machine learning is now widely used and deeply embedded in our lives. However, despite the excellent performance of machine learning models on benchmarks, state-of-the-art methods like neural networks often fail once they encounter realistic settings. Because neural networks often learn correlations without reasoning over the right signals and knowledge, they fail when facing shifting distributions, unforeseen corruptions, and worst-case scenarios. And because neural networks are black boxes, they are neither interpretable nor trusted by users. In this talk, I will show how to build reliable machine learning by tightly integrating context into the models. The context has two aspects: the intrinsic structure of natural data, and the extrinsic structure of domain knowledge. Both are crucial: by capitalizing on the intrinsic structure in natural images, I show that we can create adaptive computer vision systems that are robust, even in the worst case, an analytical result that also enjoys strong empirical gains. Through the integration of external knowledge, such as causal structure, my framework can instruct models to use the right signals for visual recognition, enabling new opportunities for controllable and interpretable models. I will also talk about future work on reliable foundation models.
Location: CoRE 301
Committee:
Start Date: 22 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Learning from Interaction

Bio:

Kianté Brantley is a Postdoctoral Associate in the Department of Computer Science at Cornell University, working with Thorsten Joachims. He completed his Ph.D. in Computer Science at the University of Maryland, College Park, advised by Dr. Hal Daumé III. His research focuses on developing machine learning models that can make automated decisions in the real world with minimal supervision. His research lies at the intersection of imitation learning, reinforcement learning, and natural language processing. He is a recipient of the NSF LSAMP BD Fellowship, ACM SIGHPC Computational and Data Science Fellowship, Microsoft Dissertation Research Grant, Ann G. Wylie Dissertation Fellowship, and NSF CIFellow Postdoctoral Fellowship.


Speaker:
Abstract: Machine learning systems have seen advancements due to large models pre-trained on vast amounts of data. These pre-trained models have led to progress on various downstream tasks when fine-tuned. However, for machine learning systems to function in real-world environments, they must overcome certain challenges that are not influenced by model or dataset sizes. One potential solution is to fine-tune machine learning models based on online interactions. In this talk, I will present my research on developing natural language processing systems that learn from interacting in an environment. I will begin by describing the issues that arise when systems are trained on offline data and then deployed in interactive environments. Additionally, I will present an algorithm that addresses these issues using only environmental interaction without additional supervision. Moreover, I will demonstrate how learning from interaction can improve natural language processing systems. Finally, I will present a set of new interactive learning algorithms explicitly designed for natural language processing systems.
Location: CoRE 301
Committee:
Start Date: 25 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Types and Metaprogramming for Correct, Safe, and Performant Software Systems

Bio:

Guannan Wei is currently a postdoctoral researcher at Purdue University. His research interests lie in programming languages and software engineering. His contributions have been published in flagship programming languages and software engineering venues, such as POPL, OOPSLA, ICFP, ECOOP, ICSE, and ESEC/FSE. Guannan received his PhD degree (2023) in Computer Science from Purdue University, advised by Tiark Rompf. He is the 2022 recipient of the Maurice H. Halstead Memorial Award for Software Engineering Research. More of Guannan’s work can be found at https://continuation.passing.style.


Speaker:
Abstract: In this talk, I will present some novel directions to build correct, safe, and performant software systems using programming languages and metaprogramming techniques. In the first part of the talk, I will present reachability type systems, a family of static type systems to track sharing, separation, and side effects in higher-order imperative programs. Reachability types lead to a smooth combination of Rust-style ownership types with higher-level programming abstractions (such as first-class functions). In the second part, I will discuss how metaprogramming techniques can help build correct, flexible, and performant program analyzers. I will present GenSym, a parallel symbolic-execution compiler that is derived from a high-level definitional symbolic interpreter using program generation techniques. GenSym generates code in continuation-passing style to perform parallel symbolic execution of LLVM IR programs, and significantly outperforms similar state-of-the-art tools. The talk also covers my future research agenda, including applications of reachability types in quantum computing.
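For readers unfamiliar with the term, a minimal illustration of continuation-passing style, the code shape the abstract says GenSym emits (shown here in Python for brevity; GenSym itself targets LLVM IR programs): each function receives an explicit continuation representing the rest of the computation, which is one reason the style is convenient for suspending, duplicating, or parallelizing execution paths at branch points.

    def add_cps(x, y, k):
        # Instead of returning x + y, pass it to the continuation k.
        return k(x + y)

    def square_cps(x, k):
        return k(x * x)

    # Compute (2 + 3) ** 2 with the "rest of the computation" made explicit.
    result = add_cps(2, 3, lambda s: square_cps(s, lambda r: r))
    print(result)  # 25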
Location: CoRE 301
Committee:
Start Date: 26 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: Building Networked Systems & Protocols for Terabit Ethernet

Bio:

Qizhe Cai is a Ph.D. student in the Computer Science Department at Cornell University, advised by Prof. Rachit Agarwal. His research lies at the intersection of networking and operating systems, with a focus on building efficient network systems and protocols to exploit the benefits of Terabit Ethernet. He was awarded the Meta Fellowship in 2022.


Speaker:
Abstract: Datacenter servers will soon have Terabit Ethernet. However, the design of host network stacks that can fully exploit the benefits of Terabit Ethernet hardware remains elusive. In this talk, I will first present insights from an in-depth study of the fundamental limitations of existing network stacks in exploiting the benefits of Terabit hardware. I will then present NetChannel, a new host network stack architecture that enables elastic allocation and scheduling of host resources across applications, enabling many previously unachievable operating points (e.g., allowing single-threaded applications to saturate multi-hundred-gigabit links). I will also discuss how my recent work has led to a myriad of new questions on redesigning network protocols, stacks, and hardware for Terabit Ethernet.
Location: CoRE 301
Committee:
Start Date: 27 Mar 2024;
Start Time: 03:00PM - 05:00PM
Title: Automated Machine Learning for Intelligent Systems

Bio:
Speaker:
Abstract: The fast progress of machine learning techniques has led to a growing impact of Artificial Intelligence (AI) on various aspects of people's lives. AI model learning comprises three vital components: data inputs, model design, and loss functions. Each of these components makes a significant contribution to the AI system's performance. Traditionally, these components required skilled domain experts to design them meticulously, creating a high barrier to entry into AI. Moreover, manual approaches are relatively inefficient and often fail to achieve optimal results. Recently, automated machine learning (AutoML), such as neural architecture search, has tried to tackle this challenge by automating the design and parameters of deep models. However, traditional AutoML research mainly focuses on automated model design rather than the inputs and loss functions, and AutoML research on recent large language models is still in its infancy due to the enormous computation required. Through the development of automated machine learning techniques at various stages of an AI pipeline, we conduct three studies to further advance both small-scale and large-scale intelligent systems, making them more precise, effective, and user-friendly.
Location: CoRE 305
Committee:

Professor Yongfeng Zhang (Chair)

Assistant Professor Hao Wang

Assistant Professor He Zhu

Professor Mengnan Du (external)

Start Date: 29 Mar 2024;
Start Time: 10:30AM - 12:00PM
Title: When and why do simpler-yet-accurate models exist?

Bio:

Lesia Semenova is a final-year Ph.D. candidate at Duke University in the Department of Computer Science, advised by Cynthia Rudin and Ronald Parr. Her research interests span responsible and trustworthy AI, interpretable machine learning, reinforcement learning, and AI in healthcare. She has developed a foundation for the existence of simpler-yet-accurate machine learning models. She was selected as one of the 2024 Rising Stars in Computational and Data Sciences. The student teams she has coached won the ASA Data Challenge Expo twice and placed third in a competition on scholarly document processing. Prior to joining Duke, she worked for two years at the Samsung Research and Development Institute Ukraine.


Speaker:
Abstract: Finding optimal, sparse, accurate models of various forms (such as linear models with integer coefficients, rule lists, and decision trees) is generally NP-hard. Often, we do not know whether the search for a simpler model will be worthwhile, and thus we do not undertake the effort to find one. This talk addresses an important practical question: for which types of datasets would we expect interpretable models to perform as well as black-box models? I will present a mechanism of the data generation process, coupled with choices usually made by the analyst during the learning process, that leads to the existence of simpler-yet-accurate models. This mechanism indicates that such models exist in practice more often than one might expect. 
Location: CoRE 301
Committee:
Start Date: 29 Mar 2024;
Start Time: 02:00PM - 04:00PM
Title: Exploiting pre-trained large-scale text-to-image diffusion models for image & video editing

Bio:
Speaker:
Abstract: Recent advancements in diffusion models have significantly impacted both image and video domains, showcasing remarkable capabilities in text-guided synthesis and editing. In the realm of image editing, particularly for single images like iconic paintings, existing approaches often face challenges such as overfitting and poor content preservation. To overcome these, a novel approach has been developed that introduces a model-based guidance technique, enhancing pre-trained diffusion models with the ability to maintain original content while incorporating new features as directed by textual descriptions. This method includes a patch-based fine-tuning process, enabling the generation of high-resolution images and demonstrating impressive editing capabilities, including style modification, content addition, and object manipulation. Extending the prowess of diffusion models to video, text-guided video inpainting presents its own set of challenges, including maintaining temporal consistency, handling various inpainting types with different structural fidelity, and accommodating variable video lengths. The introduction of the Any-Length Video Inpainting with Diffusion Model (AVID) addresses these issues head-on. AVID incorporates effective motion modules and adjustable structure guidance for fixed-length video inpainting and introduces a Temporal MultiDiffusion sampling pipeline with a middle-frame attention guidance mechanism. This approach enables the creation of videos of any desired length, ensuring high-quality inpainting across a wide range of durations and types. Through extensive experimentation, both methodologies have proven their effectiveness, pushing the boundaries of what's possible in image and video editing with diffusion models.

Publications:

1. Zhang, Zhixing, Ligong Han, Arnab Ghosh, Dimitris N. Metaxas, and Jian Ren. "SINE: Single Image Editing with Text-to-Image Diffusion Models." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6027-6037. 2023.

2. Zhang, Zhixing, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, and Licheng Yu. "AVID: Any-Length Video Inpainting with Diffusion Model." arXiv preprint arXiv:2312.03816 (2023).
Location: CoRE 305
Committee:

Professor Dimitris Metaxas

Assistant Professor Yongfeng Zhang

Associate Professor Konstantinos Michmizos

Professor Mario Szegedy

Start Date: 01 Apr 2024;
Start Time: 10:30AM - 12:00PM
Title: Programmable Software Systems for Correct High-performance Applications

Bio:

Konstantinos Kallas is a PhD student at the University of Pennsylvania working with Rajeev Alur. He is interested in building systems that enable the development of high-performance applications with robust correctness guarantees, both in theory and in practice. His research has appeared at several venues including OSDI, NSDI, EuroSys, POPL, OOPSLA, and VLDB, and has received the best paper award at EuroSys 21, the best presentation award at HotOS 21, and second place at the ACM SRC Grand Finals. His research on optimizing shell scripts for parallel and distributed computing environments is supported by the Linux Foundation, and part of his research on serverless computing is incorporated in the Durable Functions framework offered by Azure, which serves thousands of active users. You can find more information about him on his website: https://www.cis.upenn.edu/~kallas/.


Speaker:
Abstract: We live in an era of unprecedented compute availability. The advent of the cloud allows anyone to deploy critical high-performance applications that serve millions of users without owning or managing any computational resources. The goal of my research is to enable the development of such high-performance applications with robust correctness guarantees. To achieve this goal, I build practical programmable software systems that target realistic workloads in widely used environments. My systems are rooted in solid foundations, incorporating formal specifications and techniques drawn from the programming languages, compilers, and formal methods literature. In this talk, I will present some of my work on such systems, including PaSh, the first optimization system for the Unix shell since its inception 50 years ago, as well as MuCache, a caching system for microservice graphs. Surprisingly, the shell and microservices have a key characteristic in common: they are both used to compose black-box components to create applications that are greater than the sum of their parts. I will conclude the talk by arguing that systems research is a key requirement to support the increased compute demands of new applications and enable future breakthroughs.
Location: CoRE 301
Committee: