Events Feed

Start Date: 12 Aug 2020;
Start Time: 11:00AM - 12:00PM
Title: Bayes-Factor-VAE: Hierarchical Bayesian Deep Auto-Encoder Models for Factor Disentanglement

Bio:
Speaker:
Abstract: The success of machine learning techniques, including few-shot learning, transfer learning, and image synthesis, relies on how data is represented. Flagship approaches, such as variational autoencoders (VAEs), model the nonlinear generative relationship between data and latent factors in an unsupervised manner. However, such representations entangle and, to varying degrees, hide the factors of variation inherent in the data. To tackle this limitation, we argue that the desired representation should satisfy two properties. First, its latent space should explain the variation of data in a way that the factors are disentangled: when one factor is changed while the others are frozen, variation is observed in only one aspect of the data. Second, relevant factors that are statistically dependent on the data should be discerned from nuisance factors. Inspired by these properties, we extend VAEs to a hierarchical Bayesian model that introduces hyper-priors on the variances of the Gaussian latent priors, thus mimicking an infinite mixture while maintaining the tractable learning and inference of traditional VAEs. Our key insight is that the prior distribution of disentangled latent factors should be learned from data instead of being fixed to a standard Gaussian as in VAEs. By doing so, we are also able to better model nuisance factors. Extensive analysis shows the importance of partitioning the latent dimensions into those corresponding to relevant factors and those corresponding to nuisances, and treating them in different ways. Our proposed model outperforms the state of the art both quantitatively and qualitatively in terms of latent disentanglement on several challenging benchmarks, including the Sprites, 3D-Face, Teapots, and Celeb-A datasets.
Location: Remote via Webex
Committee:

Prof. Vladimir Pavlovic

Prof. Yongfeng Zhang

Prof. Konstantinos Michmizos

Prof. Martin Farach-Colton
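The hierarchical prior the abstract describes can be sketched schematically as follows (this is an illustration of the general idea, not the talk's exact formulation; the choice of inverse-Gamma hyper-prior is an assumption). Replacing the fixed standard-Gaussian prior with a learned per-dimension variance yields, after marginalization, a scale mixture of Gaussians, which is the "infinite mixture" behavior the abstract mentions:

```latex
% Standard VAE prior: fixed for every latent dimension j
p(z_j) = \mathcal{N}(z_j \mid 0, 1)

% Hierarchical prior: the variance itself is a random variable
\sigma_j^2 \sim p(\sigma_j^2) \quad \text{(hyper-prior, e.g.\ inverse-Gamma)}
\qquad
z_j \mid \sigma_j^2 \sim \mathcal{N}(0, \sigma_j^2)

% Marginalizing over \sigma_j^2 gives a scale mixture of Gaussians:
p(z_j) = \int \mathcal{N}(z_j \mid 0, \sigma_j^2)\, p(\sigma_j^2)\, d\sigma_j^2
```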

Start Date: 14 Aug 2020;
Start Time: 03:00PM - 04:30PM
Title: Optimally Covering Critical Sets in R^2 with A Team of Mobile Sensing Robots

Bio:
Speaker:
Abstract: Optimal coverage is an important problem in robotics, e.g., monitoring changing environments, guarding the perimeter of a building for security, or detecting emergencies in an area. In this domain, we investigated several challenging variations, modelling the robots’ sensing abilities in different ways (1D / 2D range sensing) and examining different types of sets to guard (1D perimeter / 2D region). Some of the problems can be solved optimally quite efficiently, while for others it is intractable to obtain optimal solutions, or even to approximate them within certain ratios. Specifically, in a 1D scenario where the sets to guard are line segments and a single robot can only cover a continuous boundary segment, the maximum coverage length of a robot can be minimized efficiently, in low polynomial time. However, when the group of robots is heterogeneous, the corresponding problems of minimizing deployment costs or maximum capacity exploitation can be NP-hard. In a 2D scenario, it is even NP-hard to approximate within 15% the minimum coverage radius needed to cover the boundary of a simple polygon, or the polygon itself. Yet, it is still possible to obtain good computational results experimentally for these problems by applying some realistic assumptions or by using methods such as dynamic programming and integer programming.
Location: Remote via Webex
Committee:

Prof. Jingjin Yu (Advisor)

Prof. Kostas Bekris

Prof. Jie Gao

Prof. Richard Martin
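To make the 1D setting above concrete, here is a hedged sketch of the min-max coverage objective: each robot guards one contiguous interval, and we want the smallest per-robot length that still covers all segments. This is not the speaker's algorithm (the abstract only says the problem is solvable in low polynomial time); the binary-search-plus-greedy approach and all names below are illustrative assumptions.

```python
from typing import List, Tuple

def robots_needed(segments: List[Tuple[float, float]], length: float) -> int:
    """Greedy sweep: each robot covers a contiguous interval of the given
    length, starting at the leftmost point not yet covered."""
    count = 0
    covered_until = float("-inf")
    for a, b in sorted(segments):
        pos = max(a, covered_until)
        while pos < b:
            count += 1
            covered_until = pos + length
            pos = covered_until
    return count

def min_max_coverage_length(segments: List[Tuple[float, float]], k: int,
                            tol: float = 1e-7) -> float:
    """Binary-search the smallest per-robot length L such that k robots,
    each covering one interval of length L, cover every segment."""
    lo = 0.0
    hi = max(b for _, b in segments) - min(a for a, _ in segments)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if robots_needed(segments, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi
```

For example, with segments [(0, 1), (2, 3)], two robots can each take one segment (length 1), while a single robot must also span the gap (length 3).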

Start Date: 10 Sep 2020;
Start Time: 10:30AM - 12:00PM
Title: Decision Making for Many Mobile Objects

Bio:

Jingjin Yu is an Assistant Professor in the Department of Computer Science at Rutgers University. He received his B.S. from the University of Science and Technology of China (USTC) in 1998. He obtained his M.S. in Computer Science (2010) and Ph.D. in Electrical and Computer Engineering (2013), both from the University of Illinois, where he briefly stayed as a postdoctoral researcher. He was a postdoctoral researcher at the Massachusetts Institute of Technology from 2013 to 2015. He is broadly interested in the area of algorithmic robotics, control, and optimization, focusing on issues related to optimality, complexity, and the design of efficient decision-making methods for single- and multi-robot systems. He is a recipient of the NSF CAREER award.


Speaker:
Abstract: Systems composed of many mobile bodies (e.g., robots or movable objects) are often highly complex, due to interactions between the bodies. For example, in a multi-robot motion planning scenario, applicable to warehouse automation, port automation, and autonomous driving settings, as the density of robots increases, it becomes increasingly difficult to coordinate robots’ collision-free motion while simultaneously seeking to optimize the overall system throughput. In this talk, I will discuss several practical decision-making problems for systems containing many mobile objects, examine the involved computational challenges, and highlight state-of-the-art algorithmic solutions for dealing with the challenges.
Location: Via Zoom
Committee:
Start Date: 14 Sep 2020;
Start Time: 10:30AM -
Title: Intelligence beyond the neuron: Energy-Efficient and Robust Brain-inspired Computing Algorithms Evaluated in Neuromorphic and Neuro-Rehabilitation Robots

Bio:

Konstantinos P. Michmizos is an Assistant Professor of Computer Science, an Executive Council Faculty at Center for Cognitive Science, and a Faculty member of the Brain Health Institute and CBIM at Rutgers University. Previously he held positions at Harvard Medical School and MIT.


Speaker:
Abstract: In our quest for ever more efficient and intelligent systems, the Von Neumann computing paradigm that has served us for fifty years is no longer sufficient. The problem lies in the paradox that, despite the pioneering early connections between computation and brain studies, today’s algorithms and systems diverge from what we have recently learned about the brain’s information processing and learning. The brain’s neuronal and non-neuronal computing principles lead to dramatic efficiencies in nature but are extremely ill-suited for execution on today’s energy- and memory-greedy processors. This talk will present our efforts to advance non-Von Neumann computation that draws on functional analogies with the brain and redefines algorithms as spiking neuronal-astrocytic networks, where memory, learning, and computing are tightly integrated. Specifically, it will describe how we a) develop and synthesize computational models and learning algorithms grounded in recent breakthroughs in neuroscience, b) translate this spectrum of knowledge into scalable computing primitives and deploy them on large-scale neuromorphic processors, and c) evaluate the emerging behavior as a solution to foundational problems in robotics, leading to solutions that are two orders of magnitude more energy-efficient and significantly more robust than state-of-the-art deep learning. The talk will conclude with lessons learned and open challenges in harnessing intelligence for robots, whether by shaping it to serve people’s lives or by targeting it to assist in their recovery.
Location: Via Webex meeting
Committee:
Start Date: 15 Sep 2020;
Start Time: 10:30AM -
Title: Model Identification for Robotic Manipulation

Bio:

Abdeslam Boularias is an Assistant Professor of computer science at Rutgers, The State University of New Jersey, where he works on robot learning. Previously, he was a Project Scientist in the Robotics Institute of Carnegie Mellon University, and a Research Scientist in the Department of Empirical Inference at the Max Planck Institute for Intelligent Systems in Tübingen, Germany. He received a Ph.D. in computer science from Laval University in Canada, an M.S. in computer science from Paris-Sud University in France, and a B.S. in computer science from the École Nationale Supérieure d’Informatique (ESI) in Algeria. He is broadly interested in the areas of reinforcement learning, robotic manipulation, and robot vision. He is a recipient of the NSF CAREER award.


Speaker:
Abstract: A popular approach in robot learning is model-free reinforcement learning (RL), where a control policy is learned directly from sensory inputs by trial and error, without explicitly modeling the effects of the robot’s actions on the controlled objects or system. While this approach has proved very effective in learning motor skills, it suffers from several drawbacks in the context of object manipulation, because the types of objects and their arrangements vary significantly across different tasks and environments. An alternative approach that may address these issues more efficiently is model-based RL. A model in RL generally refers to a transition function that maps a state and an action to a probability distribution over all possible future states. In this talk, I will present my recent work on data-efficient, physics-driven techniques for identifying models of manipulated objects. To perform a new task in an environment with unknown objects, a robot first identifies from sequences of images the 3D mesh models of the objects, as well as their physical properties, such as mass distributions, moments of inertia, and friction coefficients. The robot then reconstructs the observed scene in a physics simulation and imagines the future motions of the objects when manipulated. The predicted motions are used to select a sequence of appropriate actions to apply to the real objects. The proposed techniques are evaluated on a wide range of applications in robotics, such as grasping, nonprehensile manipulation, rearrangement, and warehouse automation.
Location: Via YouTube
Committee:
Start Date: 17 Sep 2020;
Start Time: 10:30AM -
Title: Intelligent Human-Aware Building Design

Bio:

Mubbasir Kapadia is an Assistant Professor in the Computer Science department at Rutgers University, and the Director of the Intelligent Visual Interfaces Lab. Previously, he held positions at Disney Research Zurich and the Human Modeling and Simulation Lab at the University of Pennsylvania. He received his PhD in Computer Science at the University of California, Los Angeles. Kapadia’s research lies at the intersection of artificial intelligence, visual computing, and human-computer interaction, with a mission to develop intelligent visual interfaces that empower content creation for human-aware architectural design, digital storytelling, and serious games. He has published more than 100 journal and conference papers at premier venues in Computer Graphics and Artificial Intelligence. Kapadia’s research is funded by DARPA and NSF, and through generous support from industrial partners including Disney Research, Autodesk Research, Adobe Research, and Unity Labs.

Lab Website: https://ivi.cs.rutgers.edu/ 

Project Website: https://www.humanbuildinginteraction.com/ 


Speaker:
Abstract: My research lies at the intersection of artificial intelligence, visual computing, and human-computer interaction, with a focus on human modeling and simulation. My overarching career goal is to address key limitations and simplifying assumptions in virtual crowds, to make them an enabling technology for architectural design, urban simulation, and other forms of content creation. Towards this goal, I have helped lay the foundation for realism in virtual crowds, introduced new experimental paradigms for studying collective human behavior, developed model-free techniques for multi-agent learning from multi-modal data, and enhanced computer-aided design pipelines to seamlessly account for human behavior. My research contributions are actively used by researchers and practitioners all over the world and have helped establish a burgeoning multi-disciplinary research community in “Human-Building Interaction”. In this talk, I will describe our ongoing work along three main thrusts: (a) computational modeling of the cognitive processes underlying how people inhabit spaces, (b) predicting the performance of buildings with respect to their inhabitants, and (c) integrating human-centric performance metrics within computer-aided design pipelines. These research objectives culminate in the goal of democratizing human behavior simulation as a service and developing end-to-end solutions for intelligent human-aware building design.
Location: Via Webex
Committee:
Start Date: 18 Sep 2020;
Start Time: 10:15AM - 12:00PM
Title: Computational Approaches for Semantics-Aware Typographical Choices

Bio:
Speaker:
Abstract: Typographic signals carry strong semantic connotations, e.g., they may convey excitement, anger or even sweetness, which empowers them to affect almost any aspect of life, from perception of an email, to the perceived sweetness of a cup of coffee. This thesis explores some of the possibilities that can be offered by computational approaches to support users in understanding and taking advantage of this impact. More specifically, the focus is on learning font semantics from crowdsourced and Web data, and using this information to facilitate font search and recommendation. Among the novel contributions are the use of CNN-based embeddings to represent fonts in attribute learning, leveraging emotion theories and lexical relations to infer font semantics, a multimodal font search method that allows specifying a reference font together with the target semantic additions, enabled by a cross-modal representation of fonts and words, and the proposal of affect-aware word clouds that let users specify a target emotion, which is used to recommend fonts and color palettes with congruent affective connotations.
Location: Remote via Webex
Committee:

Gerard de Melo

Matthew Stone

Abdeslam Boularias

Rada Mihalcea

 

Start Date: 18 Sep 2020;
Start Time: 02:00PM - 04:00PM
Title: Failure Explanation Planning for Robotic Manipulation: Planning with Discrete Uncertainty and Many Objects

Bio:
Speaker:
Abstract: Traditional task and motion planning problems in robot manipulation focus on finding valid plans for robot arms to fulfill different tasks. In highly challenging manipulation tasks, however, collision-free plans are often not immediately available. The first part of the talk will focus on picking an item in the presence of obstacles, while taking into account the uncertainty of the perception process due to occlusions and partial views. The focus is on the case where no obvious collision-free solution can be found. The objective is to minimize the probability of collision with uncertain obstacles and to maximize the probability of picking the target object, given a discrete set of pose hypotheses obtained from perception. Our approach follows the concept of conformant probabilistic planning (CPP) and models the problem as a stochastic variant of the Minimum Constraint Removal (MCR) problem so as to achieve the desired objective. This approach is shown experimentally to be effective and is a promising direction for closing the gap between sensing uncertainty and motion planning in manipulation. The second part of the talk will cover the related topic of multi-object rearrangement. Rearrangement can be used either to move blocking obstacles before manipulating the target object when picking in clutter, or it can be the task itself in applications such as logistics, where multiple objects need to be presented to a consumer in a specific manner. Failure explanation planning is also suitable for solving such multi-object task planning problems, especially for non-monotone problem instances, where an object has to be moved to an intermediate location before being moved to its target pose so that the problem becomes solvable. The talk covers our exploration of this problem, in collaboration with colleagues, for the case where the rearrangement process has to respect object-to-object interactions but no manipulator-to-object interactions.
Location: Remote via Webex
Committee:

Prof. Kostas Bekris (Advisor)

Prof. Jingjin Yu

Prof. Fred Roberts

Prof. Sepehr Assadi

Start Date: 23 Sep 2020;
Start Time: 11:00AM - 12:00PM
Title: Improved Bounds for Distributed Load Balancing

Bio:
Speaker:
Abstract: We consider the problem of load balancing in the distributed setting. The input is a bipartite graph on clients and servers, where each client comes with a positive weight. The algorithm must assign each client to an adjacent server, minimizing the maximum weighted sum of clients assigned to any one server. This problem has a variety of applications and has been widely studied under several different names, including scheduling with restricted assignment, semi-matching, and distributed backup placement. We show the first distributed algorithms to compute an O(1)-approximation to the load balancing problem in polylog(n) rounds. In the CONGEST model, we give an O(1)-approximation algorithm in polylog(n) rounds for unweighted clients. For weighted clients, the approximation ratio is O(log(n)). In the less constrained LOCAL model, we give an O(1)-approximation algorithm for weighted clients in polylog(n) rounds. Based on joint work with Sepehr Assadi and Aaron Bernstein.
Location: via Zoom
Committee:
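For intuition about the objective above, here is a small sketch that formalizes it (all names are assumptions; this is a centralized sequential greedy heuristic, not the distributed polylog-round algorithm the talk presents): each weighted client is assigned to its currently least-loaded adjacent server, and the quality of an assignment is the maximum resulting server load.

```python
from collections import defaultdict

def greedy_assign(clients, adjacency, weights):
    """Assign each client to its least-loaded adjacent server.
    clients: iterable of client ids; adjacency: client -> list of servers;
    weights: client -> positive weight.
    Returns (assignment dict, maximum server load)."""
    load = defaultdict(float)
    assignment = {}
    # Placing heavier clients first is a common heuristic.
    for c in sorted(clients, key=lambda c: -weights[c]):
        server = min(adjacency[c], key=lambda s: load[s])
        assignment[c] = server
        load[server] += weights[c]
    return assignment, max(load.values())
```

For example, with client a (weight 2, adjacent only to server 1), b (weight 1, adjacent to both servers), and c (weight 1, adjacent only to server 2), the greedy rule sends b and c to server 2, for a maximum load of 2.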
Start Date: 06 Oct 2020;
Start Time: 03:00PM - 04:30PM
Title: A Novel Discretization and Numerical Solver for Non-Fourier Diffusion

Bio:
Speaker:
Abstract: We introduce to computer graphics the C-F diffusion model [Anderson and Tamma 2006; Xue et al. 2018] for diffusion-driven problems, which has several attractive properties: (a) it fundamentally explains diffusion from the perspective of the non-equilibrium statistical mechanical Boltzmann Transport Equation, (b) it allows for a finite propagation speed for diffusion, in contrast to the widely employed Fick's/Fourier's law, and (c) it can capture some of the most characteristic visual aspects of diffusion-driven physics, such as hydrogel swelling, a limited diffusive domain for smoke flow, and snowflake and dendrite formation, which span from Fourier-type to non-Fourier-type diffusive phenomena. We propose a unified convection-diffusion formulation using this model that treats both the diffusive quantity and its associated flux as the primary unknowns, and that recovers traditional Fourier-type diffusion as a limiting case. We design a novel semi-implicit discretization for this formulation on staggered MAC grids and a geometric Multigrid-preconditioned Conjugate Gradients solver for efficient numerical solution. To highlight the efficacy of our method, we demonstrate end-to-end examples of elastic porous media simulated with the Material Point Method (MPM), and diffusion-driven Eulerian incompressible fluids.
Location: Remote via Webex
Committee:

Prof. Mridul Aanjaneya 

Prof. Mario Szegedy

Prof. Jingjin Yu

Prof. He Zhu
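For context, the contrast the abstract draws with Fick's/Fourier's law can be sketched with a textbook Cattaneo-type relaxation model (a schematic only; the exact form of the C-F model in the talk may differ). Treating the flux as a primary unknown that lags the gradient by a relaxation time gives a hyperbolic equation with finite propagation speed:

```latex
% Fick's/Fourier's law: flux responds instantaneously to the gradient,
% giving the parabolic diffusion equation (infinite propagation speed)
\mathbf{q} = -D \nabla u,
\qquad
\frac{\partial u}{\partial t} = D \nabla^2 u

% Cattaneo-type relaxation: the flux q is a primary unknown that lags
% the gradient by a relaxation time \tau
\tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -D \nabla u,
\qquad
\frac{\partial u}{\partial t} + \nabla \cdot \mathbf{q} = 0

% Eliminating q yields the telegraph equation, which is hyperbolic
% with finite propagation speed \sqrt{D/\tau}
\tau \frac{\partial^2 u}{\partial t^2} + \frac{\partial u}{\partial t} = D \nabla^2 u
```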

Start Date: 07 Oct 2020;
Start Time: 11:00AM - 12:00PM
Title: Network Coding Gaps for Completion Times of Multiple Unicasts

Bio:
Speaker:
Abstract: We study network coding gaps for the problem of makespan minimization of multiple unicasts. In this problem, distinct packets at different nodes in a network need to be delivered to a destination specific to each packet, as fast as possible. The *network coding gap* specifies how much coding packets together in a network can help compared to the more natural approach of routing. While makespan minimization using routing has been intensely studied for the multiple unicasts problem, no bounds on network coding gaps for this problem are known. We develop new techniques which allow us to upper bound the network coding gap for the makespan of $k$ unicasts, proving this gap is at most polylogarithmic in $k$. Complementing this result, we show there exist instances of $k$ unicasts for which this coding gap is polylogarithmic in $k$. Our results also hold for average completion time, and more generally any $\ell_p$ norm of completion times. Based on joint work with Bernhard Haeupler and Goran Zuzic, to appear in FOCS’2020.
Location: Via Zoom
Committee:
Start Date: 07 Oct 2020;
Start Time: 12:30PM - 02:00PM
Title: Modelling Goal Oriented Referential Dialogue using Discourse Coherence, Cognitive Modelling, and RL

Bio:
Speaker:
Abstract: Most of the dialogue literature either focuses on evaluating utterances in isolation, creating problems in reference resolution, or builds end-to-end dialogue systems that are hard to customize. We present a dialogue system that represents discourse state as a graph of coherence relations. We use a referential dialogue task, in which a director has to describe an ambiguous target to a matcher, to show that a coherence-based representation can successfully model dialogue contributions at the discourse level and accomplish reference resolution of ambiguous referents. We use data-driven methods to implement the individual dialogue modules, considerably reducing human effort. This representation is coupled with an RL method to learn flexible dialogue strategies. Second, we show that by combining this method with cognitive modelling we can learn context-sensitive clarification strategies that improve the interactive experience with human subjects and achieve a better task success rate.
Location: Remote via Webex
Committee:

Dr. Matthew Stone (Advisor)

Dr. Yongfeng Zhang

Dr. Abdeslam Boularias

Dr. Zheng Zhang

Start Date: 21 Oct 2020;
Start Time: 04:00PM - 06:00PM
Title: Dynamic Heavy-Case Balls-and-Bins, Low-Associativity Paging, and the Address Translation Problem

Bio:
Speaker:
Abstract: Balls-and-bins games are a powerful mathematical abstraction used to model a wide variety of scenarios. We consider bounds on the fullest bin in the dynamic case with an unrestricted oblivious adversary: balls can be inserted and deleted in any order, and they can be reinserted. There are n bins, and at no point does the system have more than m balls. We give a bin-selection rule using d + 1 ≥ 3 hash functions that achieves a maximum load of at most (1 + o(1))m/n + (log log n)/d + O(1), with high probability in n, against any oblivious adversary in the dynamic setting. This result settles a long-standing open question and is tight up to low-order terms when m = ω(n log log n). We apply this result to a paging problem that requires a tight analysis in this range. Specifically, we use this balls-and-bins rule to show that for any fully-associative paging algorithm on a memory of size M, there is a (1 + o(1))-competitive paging algorithm with (1 + o(1))-memory augmentation that has nearly log log M associativity, where the associativity of an algorithm is the maximum number of memory locations in which a page can be placed. This low-associativity paging algorithm is in turn used to reduce the number of bits a translation lookaside buffer (TLB) needs to encode each virtual-to-physical address translation. We show how to pack these small encodings into a TLB and implement what we call hashed huge pages. These do not suffer from the main drawbacks of standard huge pages. We prove that hashed huge pages are (2 + o(1))-competitive with the optimal offline TLB/paging algorithm that uses standard huge pages.
Location: Remote via Webex
Committee:

Prof. Martin Farach-Colton (Advisor)

Prof. Aaron Bernstein

Prof. Yongfeng Zhang

Prof. Sepehr Assadi
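As background, the classic static "power of d choices" insertion rule, which the dynamic result above generalizes, is easy to simulate (an illustrative sketch with assumed names; the talk's rule uses d + 1 hash functions and additionally handles adversarial deletions and reinsertions):

```python
import random

def insert_ball(loads, d, rng):
    """Insert one ball into the least-loaded of d uniformly random bins
    (the classic static 'power of d choices' rule)."""
    choices = [rng.randrange(len(loads)) for _ in range(d)]
    best = min(choices, key=lambda i: loads[i])
    loads[best] += 1
    return best

def max_load_after(n, m, d, seed=0):
    """Simulate m insertions into n bins and report the fullest bin."""
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(m):
        insert_ball(loads, d, rng)
    return max(loads)
```

With n = m, a single random choice (d = 1) typically leaves some bin with Θ(log n / log log n) balls, while d ≥ 2 choices drops the maximum load to roughly log log n; the result above shows how to retain a comparable guarantee when balls are also deleted and reinserted by an oblivious adversary.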

Start Date: 22 Oct 2020;
Start Time: 04:30PM - 06:00PM
Title: Enabling data-driven adaptations for large-scale in-situ scientific workflows

Bio:
Speaker:
Abstract: As scientific workflows increasingly target extreme-scale resources, the imbalance between higher computational capabilities, generated data volumes, and available I/O bandwidth limits the ability to translate these scales into insights. In-situ workflows (and the in-situ approach) leverage storage levels close to the computation in novel ways to reduce the amount of data movement and I/O required. However, managing the execution of in-situ workflows presents several challenges, especially in cases where computations are dynamic and data-driven, and resources are constrained and shared. In these cases, the placement of tasks and data, the execution of the workflow, and its management have to be adapted at runtime to ensure that objectives in terms of performance and scalability are satisfied. In this presentation, I will first discuss the state of the art in data-driven adaptive management of in-situ workflows aimed at reducing overheads and improving execution time. Specifically, I will explore key scenarios that trigger adaptation and summarize mechanisms that have been used to address each trigger. I will use this exploration to formulate an approach for data-driven adaptation of in-situ workflows based on change detection, adaptation goals, adaptation policies, and adaptation mechanisms for different types of triggers, and I will analyze the associated research challenges. I will then present our recent work, “Staging Based Task Execution for Data-driven, In-Situ Scientific Workflows,” which compares different paradigms for in-situ task placement. In this work, we show that the proper choice of scheduling for in-situ tasks can significantly decrease workflow execution overheads. Specifically, I will discuss a model that captures the different factors that influence the mapping, execution, and performance of data-driven in-situ workflows, and experimentally study the impact of different mapping decisions and execution patterns.
I will finally discuss the design, implementation, and experimental evaluation of a data-driven in-situ workflow execution framework that leverages user-defined task-triggers to enable efficient and scalable in-situ workflow execution.
Location: Remote via Webex
Committee:

Prof. Manish Parashar (Advisor)

Prof. Uli Kremer

Prof. Sudarsun Kannan

Prof. Mridul Aanjaneya

Start Date: 23 Oct 2020;
Start Time: 10:00AM - 11:30AM
Title: Mechanism Design and Data Science

Bio:

Prof. Hartline’s research introduces design and analysis methodologies from computer science to understand and improve outcomes of economic systems. Optimal behavior and outcomes in complex environments are complex and, therefore, should not be expected; instead, the theory of approximation can show that simple and natural behaviors are approximately optimal in complex environments. This approach is applied to auction theory and mechanism design in his graduate textbook, Mechanism Design and Approximation, which is in preparation.

Prof. Hartline received his Ph.D. in 2003 from the University of Washington under the supervision of Anna Karlin. He was a postdoctoral fellow at Carnegie Mellon University under the supervision of Avrim Blum; and subsequently a researcher at Microsoft Research in Silicon Valley. He joined Northwestern University in 2008 where he is currently a professor of computer science. He was on sabbatical at Harvard University in the Economics Department during the 2014 calendar year and visiting Microsoft Research, New England for the Spring of 2015. Prof. Hartline codirects the Institute for Data, Econometrics, Algorithms, and Learning (an NSF HDR TRIPODS institute) and is a cofounder of virtual conference organizing platform Virtual Chair.

Presented in association with the DATA-INSPIRE TRIPODS Institute


Speaker:
Abstract: Computer systems have become the primary mediator of social and economic interactions. A defining aspect of such systems is that the participants have preferences over system outcomes and will manipulate their behavior to obtain outcomes they prefer. Such manipulation interferes with data-driven methods for designing and testing system improvements. A standard approach to resolve this interference is to infer preferences from behavioral data and employ the inferred preferences to evaluate novel system designs. In this talk, Prof. Hartline will describe a method for estimating and comparing the performance of novel systems directly from behavioral data from the original system. This approach skips the step of estimating preferences and is more accurate. Estimation accuracy can be further improved by augmenting the original system; its accuracy then compares favorably with ideal controlled experiments, a.k.a., A/B testing, which are often infeasible. A motivating example will be the paradigmatic problem of designing an auction for the sale of advertisements on an Internet search engine.
Location: Join this event via Webex. Details on how to join are listed below.
Committee:
Start Date: 27 Oct 2020;
Start Time: 10:00AM -
Title: Using satellite imagery and deep learning to target aid in data-sparse contexts

Bio:
Speaker:
Abstract: Aid policy has the potential to alleviate global poverty by targeting areas of concentrated need. A critical question remains, however, over whether aid is reaching the areas of most need. Often little ground-truth poverty data is available at the granular level (e.g., village) where aid interventions take place. This research explores remote sensing techniques to measure poverty and target aid in data-sparse contexts. Our study of Myanmar examines i) the performance of different methods of poverty estimation and ii) the extent to which poverty and other development characteristics explain community aid distribution. This study draws from the following sources of data: georeferenced community-driven development projects (n=12,504), daytime and nighttime satellite imagery, the Demographic and Health Survey (DHS), and conflict data. We first compare the accuracy of four poverty measures in predicting ground-truth survey data. Using the best poverty estimate from this first step, we investigate the association between village characteristics and aid per capita per village. Our results show that daytime features perform best in predicting poverty, as compared to analysis of RGB color distributions, Kriging, and nighttime-based measures. In our best model, we use a Convolutional Neural Network, pre-trained on ImageNet, to extract features from the satellite images. A model over these features is then trained on the DHS wealth data to predict the DHS wealth index/poverty for villages receiving aid. The linear and non-linear estimators indicate that development assistance flows to low-asset villages, but only marginally. Aid is more likely to be disbursed to villages that are less populous and farther away from fatal conflicts. Our study concludes that the nuances captured in satellite-based models can be used to target aid to impoverished communities.
Location: Via Zoom
Committee:
Start Date: 04 Nov 2020;
Start Time: 03:30PM - 05:00PM
Title: Urban-scale Human Mobility Data Synthesis via Learning-based models with Privacy Awareness

Bio:
Speaker:
Abstract: Real-time human mobility data obtained from urban infrastructure (e.g., cellular networks) have the potential to unleash the full power of social and collaborative computing systems and applications, from crowdsourcing to ridesharing, mobile networking, transportation planning, emergency response, and pandemic mitigation. However, most city-wide human mobility data are proprietary and thus cannot be accessed by the research community (e.g., researchers and practitioners) unless released by data owners (e.g., government or industry). Recently, in the context of Data Science for Social Good, some data owners have become willing to share their mobility data with the public to unlock their value, but significant privacy and security concerns remain in the way. To overcome these issues, we present a privacy-aware framework to generate synthetic yet realistic mobility data through augmented Generative Adversarial Nets (GANs), based on real-world human mobility data from a cellular network. We plug in two auxiliary modules, i.e., a privacy classifier and a utility classifier, to apply targeted perturbation, which balances the trade-off between sensitive and non-sensitive mobility features. As a case study, we implement our framework to generate urban-scale cellular connection records based on large-scale real-world data from Shenzhen cellular networks, and we empirically evaluate its performance in both utility and privacy aspects. The quantitative results show that, at the same level of privacy risk, our model outperforms the baseline models by a 33% gain in temporal utility and up to a 49% gain in spatiotemporal utility.
Location: Remote via Webex
Committee:

Prof. Desheng Zhang (Advisor)

Prof. Jie Gao

Prof. Yongfeng Zhang

Prof. Eric Allender

Start Date: 06 Nov 2020;
Start Time: 02:00PM -
Title: Developing and Employing Situational Awareness in Autonomous Robotic Systems

Bio:

Philip Dames is an Assistant Professor of Mechanical Engineering at Temple University, where he directs the Temple Robotics and Artificial Intelligence Lab (TRAIL). Prior to joining Temple, he was a Postdoctoral Researcher in Electrical and Systems Engineering at the University of Pennsylvania. He received his PhD in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania in 2015 and his BS and MS degrees in Mechanical Engineering from Northwestern University in 2010.


Speaker:
Abstract: Robotic systems must possess sufficient situational awareness in order to successfully operate in complex and dynamic real-world environments, meaning they must be able to perceive objects in their surroundings, comprehend their meaning, and predict the future state of the environment. In this talk, I will first describe how multi-target tracking (MTT) algorithms can provide mobile robots with this awareness, including our recent results that extend classical MTT approaches to include semantic object labels. Next, I will discuss two key applications of MTT to mobile robotics. The first problem is autonomous navigation through crowded, dynamic environments. To solve this, we develop a novel neural network-based controller that takes as its input the target tracks from an MTT, unlike previous approaches which only rely on raw sensor data. The second problem is distributed target search and tracking. To solve this, we develop a distributed MTT framework that allows robots to estimate, in real time, the relative importance of each portion of the environment, along with dynamic tessellation schemes that account for uncertainty in the pose of each robot, provide collision avoidance, and automatically balance task assignment in a heterogeneous team.
Location: Via Zoom
Committee:
Start Date: 10 Nov 2020;
Start Time: 10:30AM -
Title: Bayesian Deep Learning: A Probabilistic Framework to Unify Deep Learning and Graphical Models

Bio:

Hao Wang is currently an Assistant Professor in the Department of Computer Science at Rutgers University. Previously he was a Postdoctoral Associate at the Computer Science & Artificial Intelligence Lab (CSAIL) of MIT, working with Dina Katabi and Tommi Jaakkola. He received his PhD degree from the Hong Kong University of Science and Technology, as the sole recipient of the School of Engineering PhD Research Excellence Award in 2017. He has been a visiting researcher in the Machine Learning Department of Carnegie Mellon University. His research focuses on statistical machine learning, deep learning, and data mining, with broad applications to recommender systems, healthcare, user profiling, social network analysis, text mining, etc. In 2015, he was awarded the Microsoft Fellowship in Asia and the Baidu Research Fellowship for his innovation in Bayesian deep learning and its applications to data mining and social network analysis.


Speaker:
Abstract: While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning, and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep learning models. In terms of higher-level inference, however, probabilistic graphical models, with their ability to expressively describe properties of variables and various probabilistic relations among variables, are still more powerful and flexible. To achieve integrated intelligence that involves both perception and inference, we have been exploring a research direction, which we call Bayesian deep learning, to tightly integrate deep learning and Bayesian models within a principled probabilistic framework. In this talk, I will present the proposed unified framework and some of our recent work on Bayesian deep learning with various applications including recommendation, social network analysis, healthcare, and representation learning.
Location: Via Zoom Recording
Committee:
Start Date: 17 Nov 2020;
Start Time: 02:30PM - 04:00PM
Title: Multimodal Content Understanding via Structural Reasoning

Bio:
Speaker:
Abstract: Vision-language tasks require a model to understand multimodal input contents and then carry out a specific goal (such as answer selection, text generation, or span prediction), which poses challenging multimodal representation learning and reasoning scenarios. Many existing works on vision-language tasks simply represent multimodal inputs as a sequence of embeddings (objects in an image or tokens in a sentence). However, such a representation ignores intrinsic structures such as relations or hierarchies of the multimodal contents. We propose to leverage these structural cues to build graph representations for multimodal contents and further reason over them through multimodal graph attention networks. We present experiments on two vision-language tasks to show the effectiveness of incorporating structural representation and reasoning for understanding multimodal contents: 1) The first task is audio-visual scene-aware dialog, which requires an agent to engage in a question-and-answer dialog with a human about the audio-visual content. 2) The second task is document visual question answering, which requires understanding the layout of document images to locate the correct answer span for given questions.
Location: Remote via Webex
Committee:

Prof. Yongfeng Zhang (Advisor), Prof. Hao Wang, Prof. Sungjin Ahn, Prof. Dong Deng

Start Date: 24 Nov 2020;
Start Time: 10:00AM - 12:00PM
Title: Cross-Domain User Embedding - Solving Cold-Start Recommendation with Edge Computing

Bio:
Speaker:
Abstract: When a user starts exploring within a recommendation system looking for items from new areas, cross-domain recommendation techniques can help by transferring abundant knowledge from the user's "acquainted" domains. While this is usually achieved by sharing information between service providers on the cloud, we argue that it can be approached more naturally, efficiently, and effectively through learning on edge devices such as smartphones and laptops. In this work, we formalize the cross-domain recommendation problem under the mobile computing environment as compared to that in a centralized environment and identify two challenges to the community: unavailable direct transfer and heterogeneous domain-specific user representations. We then propose to learn and maintain cross-domain embeddings on each user's personal device. The optimization follows a variational inference framework that maximizes the mutual information between cross-domain embeddings and domain-specific user information, and an empirical study on real-world datasets exhibits its effectiveness on recommendation tasks and its superiority over domain-pairwise transfer models. We demonstrate that the resulting system has good scalability and allows flexible plug-in of domain-specific encoders and decoders.
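One common way to maximize mutual information between paired embeddings is an InfoNCE-style contrastive lower bound; the NumPy sketch below illustrates that general idea and is not necessarily the estimator this thesis uses:

```python
import numpy as np

def info_nce(anchor, positives, temperature=0.1):
    """InfoNCE-style loss: minimizing it maximizes a lower bound on the
    mutual information between paired embeddings. Row i of `anchor` and
    row i of `positives` embed the same user in two domains; the other
    rows in the batch act as negatives."""
    a = anchor / np.linalg.norm(anchor, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_probs).mean()              # lower loss = higher MI bound
```

When the two domains' embeddings of the same user agree, the diagonal dominates the softmax and the loss approaches zero.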
Location: Remote via Zoom
Committee:

Prof. Amelie Marian (Advisor)

Prof. Dimitris Metaxas

Prof. Yongfeng Zhang

Prof. Yipeng Huang

Start Date: 01 Dec 2020;
Start Time: 10:30AM -
Title: Explainable AI: From Human to Nature

Bio:

Yongfeng Zhang is an Assistant Professor in the Department of Computer Science at Rutgers University. His research interests are in Machine Learning and Data Mining, Information Retrieval and Recommender Systems, Economics of Data Science, and Explainable AI. He is a Siebel Scholar of the class of 2015, and a 2018 Best Professor Awardee by Rutgers CSGSS for Teaching and Mentoring. Recently, he has been working on various approaches to explainable AI, including but not limited to neural logical reasoning, knowledge graph reasoning, causal machine learning, and explainable graph neural networks, as well as their application in human-centered and nature-oriented tasks. His research is generously supported by funds from Rutgers, NSF, Google, Adobe, and NVIDIA. He has served as a PC or SPC member for top-tier computer science conferences, as well as an Associate Editor for the ACM Transactions on Information Systems.


Speaker:
Abstract: AI has become an important part of many research areas and disciplines, spanning from human-centered tasks such as search, recommendation, dialog systems, and social networks, to nature-oriented tasks such as drug discovery, bioscience, and physics. However, how to understand and interpret the decisions or results produced by AI remains a significant problem, which greatly influences the trust between humans and AI. In this talk, we will introduce our recent work on Explainable AI from both technical and application perspectives. From the technical perspective, we will introduce our work on explainable machine learning models for Explainable AI, including neural logic and neural-symbolic reasoning, causal and counterfactual reasoning, knowledge graph reasoning, explainable graph neural networks, and techniques for generating natural language explanations. From the application perspective, we will introduce our efforts on both human-centered and nature-oriented tasks, including search engines, recommender systems, dialog systems, molecular classification, and biodiversity conservation based on our Explainable AI models.
Location:
Committee:
Start Date: 08 Dec 2020;
Start Time: 10:30AM -
Title: Unsupervising Vision via Object-Centric World Models

Bio:

Sungjin Ahn is an Assistant Professor of Computer Science at Rutgers University and directs the Rutgers Machine Learning (RUML) lab. He is also affiliated with the Rutgers Center for Cognitive Science. He is interested in and has expertise in developing methods for probabilistic learning, deep learning, and their intersection. He is particularly interested in developing an AI agent that can discover the underlying structure and representations of the world in an unsupervised manner and through interactions with the environment. He received his Ph.D. from the University of California, Irvine, advised by Max Welling, and did a postdoc with Yoshua Bengio at Mila. He joined Rutgers University in Fall 2018. He co-organized the ICML 2020 Workshop on Object-Oriented Learning and received the best paper award at ICML 2012.


Speaker:
Abstract: Objects and their interactions are the foundational structure of the world and play a central role in our perception, reasoning, and control of the world. Incorporating such structural knowledge is thus expected to resolve various limitations of current deep learning systems in reasoning, causality, modularity, and systematic generalization. However, current deep learning systems are limited in providing such structures: they either rely extensively on human annotations or use unsupervised representations with minimal, uninterpretable structure. In this talk, I present recent advances in object-centric latent variable models. I first argue that this class of models provides a probabilistic modeling framework to learn interpretable, structured, and adaptable representations, as well as the compositional imagination of multi-object scenes, in an unsupervised manner. I also present the benefits of combining symbolic and distributed representations in these models, and an approach to learning three-dimensional scene representations in an object-centric manner. I conclude that human-like AI agents should understand the causal structure of the world, and that object-centric representations can be the foundation of building such world models.
Location: Via Webex recording
Committee:
Start Date: 10 Dec 2020;
Start Time: 12:00PM - 02:00PM
Title: Computational Aspects of the Triangle Algorithm for Convex Hull Membership - a Universal Geometric Problem

Bio:
Speaker:
Abstract: The Convex Hull Membership (CHM) problem asks whether a point p in m-dimensional Euclidean space lies in the convex hull of a compact subset S of that space. For S finite, CHM is a basic problem in Computational Geometry and Linear Programming. It also serves as a query problem in many applications, e.g., Topic Modeling and Non-negative Matrix Factorization. The Triangle Algorithm (TA), invented by Dr. Kalantari, is a novel algorithm for CHM, first developed for the case where S is finite and subsequently for the more general case where S is any compact subset. The TA either computes an approximate solution or proves p lies outside conv(S) by computing a separating hyperplane. The number of iterations of the TA to solve CHM is independent of the dimension m, and thus it can serve as an efficient oracle in large-scale and high-dimensional cases of CHM. In this dissertation we mainly investigate the computational aspects of solving three problems via the Triangle Algorithm, as stated below: 1) Irredundancy problem: We investigate the problem of computing all, or a good approximation of, the vertices of the convex hull of a set of n points in dimension m, known as the irredundancy problem. This problem becomes challenging as m grows, especially for classical algorithms such as Gift Wrapping and QuickHull, due to their exponential running times in terms of the dimension. The All Vertex Triangle Algorithm (AVTA) is proposed to tackle the efficiency issue for the irredundancy problem and its variations. The AVTA leverages the TA as a membership oracle and progressively recovers the vertices of the convex hull of S, potentially after Johnson-Lindenstrauss projection pre-processing. We apply AVTA to design new practical algorithms for two popular machine learning problems: topic modeling and non-negative matrix factorization.
Our experimental results show its robustness and efficiency in solving the irredundancy problem and its variations in real-world applications. 2) Spherical-CHM: We investigate Spherical-CHM as a special case of CHM where the n points of S lie on a sphere centered at p. Spherical-CHM is shown to be equivalent to the general case in the sense that general CHM is reducible to Spherical-CHM. Based on the structure of Spherical-CHM, we propose a novel version of the TA, called the Spherical Triangle Algorithm (Spherical-TA), which converts CHM into Spherical-CHM and then applies the TA. Under mild assumptions, it is provably faster than the TA, and empirically we observe better efficiency as well. We list several applications of the Spherical-TA and empirically show its efficiency as an oracle for CHM queries in solving various problems. 3) Solving linear systems: We consider solving a linear system Ax=b, where A is an m by n real or complex matrix of arbitrary rank. We describe a novel geometric algorithm for computing an approximate solution based on Kalantari's Triangle Algorithm for general CHM, where the underlying set S is a compact subset of a Euclidean space. The corresponding Triangle Algorithm is applicable to linear systems of arbitrary dimension, requiring no assumptions on the matrix A. When A is invertible, the algorithm approximates the unique solution. When A is not invertible but Ax=b is solvable, it approximates the solution with the smallest norm; and when Ax=b has no solution, it approximates the solution minimizing ||Ax - b|| with the smallest norm and computes a certificate proving unsolvability. Empirically, we compare the algorithm with two popular linear system solvers, BiCGSTAB and GMRES with an ILU preconditioner. We consider the case where A is a square matrix and observe superior efficiency of our proposed algorithm in approximating the solution or proving unsolvability.
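For readers unfamiliar with the TA, the basic iteration for finite S can be sketched as follows. This is a simplified reading of the published algorithm; the greedy pivot choice and stopping rule here are illustrative, not the dissertation's exact implementation:

```python
import numpy as np

def triangle_algorithm(p, S, eps=1e-4, max_iter=200_000):
    """Simplified Triangle Algorithm for convex hull membership.

    p: (m,) query point; S: (n, m) array of points.
    Returns ("in", x) with x in conv(S) and ||x - p|| <= eps, or
    ("out", x) where x is a witness: every v in S is strictly closer to x
    than to p, so the orthogonal bisector of segment [x, p] is a
    hyperplane separating p from conv(S).
    """
    x = S[0].astype(float).copy()        # start at an arbitrary point of S
    for _ in range(max_iter):
        if np.linalg.norm(x - p) <= eps:
            return "in", x
        # A pivot is any v in S at least as close to p as to x.
        d_xv = np.linalg.norm(S - x, axis=1)
        d_pv = np.linalg.norm(S - p, axis=1)
        gaps = d_xv - d_pv
        if gaps.max() < 0:
            return "out", x              # no pivot: x certifies p is outside
        v = S[np.argmax(gaps)]           # greedy pivot choice
        # Move x to the point on segment [x, v] nearest to p.
        t = np.dot(p - x, v - x) / np.dot(v - x, v - x)
        x = x + np.clip(t, 0.0, 1.0) * (v - x)
    return "unresolved", x               # iteration budget exhausted
```

Each move strictly shrinks the gap ||x - p||, and the dimension m never enters the iteration count, which is what makes the TA usable as a high-dimensional membership oracle.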
Location: Remote via Webex
Committee:

Prof. Bahman Kalantari (Chair)

Prof. Sepehr Assadi

Prof. Abdeslam Boularias

Prof. Min Xu (Outside Member, Statistics Department, Rutgers)

Start Date: 11 Dec 2020;
Start Time: 05:30PM - 07:00PM
Title: AUTOMATED MACHINE LEARNING FOR SUPERVISED AND UNSUPERVISED MODELS WITH ARTIFICIAL NEURAL NETWORKS

Bio:
Speaker:
Abstract: Artificial Neural Networks (ANNs) are powerful machine learning tools for finding and applying patterns for intelligent decision making. These tools can be combined with automation to select a few results among many trials. Since ANNs are used for both supervised and unsupervised learning, automation can lead to more trusted learning methods across many fields and to exploring possibilities that are considered impossible with current technology. In this thesis, I first introduce a new form of ANN architecture which is used exclusively for automated robot navigation. By doing so, I provide a high-level overview of both computational neuroscience and the potential of automation. Next, I introduce Greedy AutoAugment to automate the learning of state-of-the-art neural networks for both big and small datasets. I also create an efficient model to evaluate clustering in unsupervised learning. The model is further expanded to introduce unsupervised learning for deep subspace clustering. In the end, I provide a discussion and the future research plan for automating ANNs in machine learning applications.
Location: Remote via Webex
Committee:

Prof. Dimitris Metaxas, Advisor

Prof. Ahmed Elgammal

Prof. Abdeslam Boularias

Prof. Wei Vivian Li (External member)

Start Date: 15 Dec 2020;
Start Time: 10:00AM -
Title: Algorithms for Massive Graphs

Bio:

Aaron Bernstein is an assistant professor at Rutgers University working on graph algorithms. He completed his PhD at Columbia University with Cliff Stein as his advisor.


Speaker:
Abstract: Many real-world graphs are too large to be processed by standard algorithms, as examining the entire input would be too slow, and the graph cannot fit in the memory of a single machine. In recent years, much research on graph algorithms has focused on different models for effectively processing massive graphs. In this talk, I will present my own recent work in this area. I will focus on new approaches to matching and load balancing that lead to new results in many different models. I will also discuss how more efficient data structures for basic graph primitives such as shortest paths can be used to speed up existing algorithms.
Location: Via Webex recording
Committee:
Start Date: 16 Dec 2020;
Start Time: 11:00AM - 01:00PM
Title: Performance Profilers and Debugging Tools for OpenMP Applications

Bio:
Speaker:
Abstract: OpenMP is a popular application programming interface (API) used to write shared-memory parallel programs. It supports a wide range of parallel constructs that allow expressing different types of parallelism, including fork-join and task-based parallelism. Using OpenMP, developers can incrementally parallelize a program by adding parallelism to it until their performance goals are met. In this dissertation, we address the problem of assisting developers in meeting the two primary goals of writing parallel programs in OpenMP: performance and correctness. First, writing OpenMP programs that achieve scalable performance is challenging. An OpenMP program that achieves reasonable speedup on a low-core-count system may not achieve scalable speedup when run on a system with a larger number of cores. Traditional profilers report program regions where significant serial work is performed. In a parallel program, optimizing such regions may not improve performance since it may not improve parallelism. To address this problem, we introduce OMP-Adviser, a parallelism-centric performance analysis tool for OpenMP programs with what-if analyses. We propose a novel OpenMP series-parallel graph (OSPG) that precisely captures the series-parallel relations between different fragments of the program's execution. The OSPG, along with fine-grained measurements, constitutes OMP-Adviser's performance model. OMP-Adviser identifies serialization bottlenecks by measuring the inherent parallelism in the program and its OpenMP constructs. Its what-if analysis technique enables developers to estimate the increase in parallelism in user-specified code regions before designing concrete optimizations, and assists them in identifying the regions that must be optimized first for the program to achieve scalable speedup.
While a lack of inherent parallelism is a sufficient condition for a program not to have scalable speedup, too much parallelism can lead to excessive runtime and scheduling overheads, resulting in lower performance. We address this issue by extending OMP-Adviser to measure tasking overheads. By attributing this additional information to different OpenMP constructs in the program, OMP-Adviser can identify tasking cut-offs that achieve the right balance between a program's parallelism and its scheduling overheads for a given input. Further, we design a differential analysis technique that enables OMP-Adviser to identify program regions experiencing scalability bottlenecks caused by secondary effects of execution. Second, writing correct parallel programs is challenging due to the possibility of bugs such as deadlocks, livelocks, and data races that do not manifest when writing a serial program. A data race occurs when two parallel fragments of the program access the same memory location and at least one of the accesses is a write. Data races are a common cause of bugs in OpenMP applications. Manually identifying and reproducing data races is challenging due to the exponential number of possible instruction and thread interleavings in parallel applications. We introduce OMP-Racer, a dynamic data race detector for OpenMP. To detect apparent data races, OMP-Racer constructs the program's OSPG to encode the logical series-parallel relations between different fragments of the program. By capturing these logical series-parallel relations, OMP-Racer can detect data races in other thread interleavings for a given input. Compared to state-of-the-art OpenMP data race detectors, OMP-Racer can correctly identify races in a larger subset of OpenMP programs that use task dependencies and locks.
Our results from testing the OMP-Adviser and OMP-Racer prototypes on 40 OpenMP applications and benchmarks indicate their respective effectiveness in performance analysis and data race detection. Furthermore, they demonstrate the usefulness of our proposed OSPG data structure in enabling the creation of different types of analysis tools for OpenMP applications.
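The series-parallel view of race detection can be illustrated with a toy checker. Unlike OMP-Racer, which derives the logical series-parallel relation from its OSPG, this sketch simply takes that relation as a given predicate; the record format is hypothetical:

```python
from itertools import combinations

def find_races(accesses, parallel):
    """Report apparent data races: two accesses race if they touch the
    same address, at least one is a write, they hold no common lock, and
    they are logically parallel (may run concurrently in some schedule).

    accesses: list of (access_id, address, is_write, locks_held frozenset)
    parallel: predicate (id_a, id_b) -> bool from a series-parallel relation.
    """
    races = []
    for a, b in combinations(accesses, 2):
        same_addr = a[1] == b[1]
        some_write = a[2] or b[2]
        no_common_lock = not (a[3] & b[3])
        if same_addr and some_write and no_common_lock and parallel(a[0], b[0]):
            races.append((a[0], b[0]))
    return races
```

Because the check is phrased over the logical relation rather than one observed interleaving, it flags races even in schedules that did not occur in the monitored run, which mirrors the "apparent data race" guarantee described above.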
Location: Remote via Zoom
Committee:

Prof. Santosh Nagarakatte (Chair)

Prof. Badri Nath

Prof. Srinivas Narayana

Prof. Martha Kim (Columbia University)

Start Date: 16 Dec 2020;
Start Time: 01:00PM - 03:00PM
Title: Integrating Accelerators

Bio:
Speaker:
Abstract: Accelerators have emerged as a popular way to improve both the performance and energy efficiency of computation. The level at which an accelerator is integrated with the rest of the system varies -- from tightly integrated vector floating-point units to standalone GPU and FPGA add-in boards. In this work, we consider loosely coupled accelerators -- compute units that run programs in their own contexts and address spaces. However, this independence also makes loosely coupled accelerators more challenging to program, and it is more difficult for applications to benefit from their improved performance and energy efficiency. This thesis focuses on some of the challenges in improving the programmability of accelerators and their integration with the rest of the system. We examine the issues that arise in such heterogeneous systems in three major areas: data access, system services, and acceleration of high-level languages. First, we examine challenges with data access. To avoid expensive data marshaling overhead, accelerators often support a unified virtual address space (also called unified virtual memory). This feature allows the operating system to synchronize CPU and accelerator address spaces. However, designing such a system requires several trade-offs to accommodate the complexities of maintaining the mirrored layout while matching accelerator-specific data access patterns. This work investigates integrated GPUs as a case study of loosely coupled accelerators and identifies several opportunities for improvement in designing device-side address translation hardware to provide a unified virtual address space. The second major area we study is access to system services from accelerator programs. While accelerators often work as memory-to-memory devices, there is an increasing amount of evidence in favour of providing them with direct access to the network or permanent storage.
This work discusses the suitability of existing operating system interfaces (POSIX) and their semantics for inclusion in GPU programs. We consider the differences between the CPU and GPU execution models and the suitability of CPU system calls from both a semantics and a performance point of view. Finally, we study the mapping of high-level dynamic languages to accelerators. High-level languages, like Python, are increasingly popular with designers of scientific applications, offering a large selection of support libraries. However, Python's ease of use and excellent programmability come at a performance cost. We examine a specific case of cognitive modeling workloads written in Python and propose a path to efficient execution on accelerators. We demonstrate that it is often possible to extract and optimize core computational kernels using standard compiler techniques. Extracting such kernels offers multiple benefits: it improves performance, eliminates dynamic language features for more efficient mapping to accelerators, and offers opportunities for exploiting compiler-based analyses to provide direct user feedback. All demonstrated systems were implemented, and all data collected, on real systems without any use of system simulators or hardware emulation.
Location: Remote via Webex
Committee:

Prof. Abhishek Bhattacharjee (Advisor)

Prof. Thu Nguyen

Prof. Ulrich Kremer

Olivier Giroux (nVidia)

Start Date: 18 Dec 2020;
Start Time: 10:00AM - 12:00PM
Title: Ubiquitous Precise Tracking: From Activity Detection over Indoor Tracking, to Outdoor Vehicle Positioning

Bio:
Speaker:
Abstract: Context awareness and tracking have changed our daily activities and style of living over the last three decades. Many applications have fueled research and industry efforts to establish accurate, energy-efficient, yet scalable and easy-to-deploy tracking and sensing systems. One of these applications is indoor and outdoor energy saving. For example, precise room occupancy estimation and activity sensing enable better control of indoor amenities such as light, heating, and air conditioning. These indoor applications can be extended to outdoor use for smart cities, e.g., automatically dimming the lighting of empty streets. However, current tracking systems still cannot meet the aforementioned stringent requirements, and as a result there are no real-world deployments of such applications. For example, conventional wireless tracking relies on WiFi received signal strength, and more recently on channel state information (CSI), offering decimeter-level tracking accuracy. However, these tracking systems require either extensive fingerprint collection (wardriving), knowledge of anchor locations, and/or expensive hardware, which prevents wide deployment of such systems. Therefore, these applications, with their sensing requirements, still demand more accurate and scalable solutions. This thesis focuses on developing wireless tracking solutions targeting submeter accuracy indoors and meter-level accuracy outdoors by leveraging unconventional wireless signals, including visible light and WiFi Fine Time Measurements (FTM). These tracking algorithms can adaptively learn and simultaneously map the environment/anchors while tracking users. The goal of this research is to propose tracking and context-aware sensing systems that can tune their parameters and map the environment through crowd-sourcing, without the need for offline training.
In particular, the proposed solutions include: (i) EyeLight, a device-free sensing system based on visible light that enables accurate indoor tracking and provides occupancy estimation and room activity recognition services. This system integrates photosensors with light bulbs, easing its deployment compared to existing systems that require deploying photosensors on the floor; (ii) an open platform for experimenting with WiFi fine time measurements and a general, repeatable, and accurate measurement framework for evaluating time-based ranging systems; (iii) Wi-Go, a system that simultaneously tracks vehicles and maps WiFi access point positions by fusing WiFi FTMs, GPS, and odometry information. We believe these three systems enable energy-efficient, continuous, precise, and easy-to-deploy indoor and outdoor tracking and context-aware sensing solutions.
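At its core, WiFi FTM ranging reduces to timestamp arithmetic over a four-timestamp exchange (the example values below are hypothetical; the processing beyond this formula, such as fusion with GPS and odometry, is where the systems above do their work):

```python
def ftm_distance(t1, t2, t3, t4, c=299_792_458.0):
    """One FTM exchange: the responder transmits at t1, the initiator
    receives at t2, replies at t3, and the responder receives the reply
    at t4 (all timestamps in seconds). The initiator's turnaround time
    (t3 - t2) is subtracted out, leaving the over-the-air round trip."""
    rtt = (t4 - t1) - (t3 - t2)
    return c * rtt / 2.0  # one-way distance in meters
```

Because the turnaround is measured and removed, the two clocks only need to be internally consistent, not synchronized with each other.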
Location: Remote via Zoom
Committee:

Prof. Marco Gruteser (Advisor)

Prof. Dimitris Metaxas

Prof. Badri Nath

Prof. Richard Martin

Prof. Romit Roy Choudhury (UIUC)

Start Date: 18 Dec 2020;
Start Time: 01:00PM - 03:00PM
Title: Deep Learning based NAS Score and Fibrosis Stage Prediction from CT data

Bio:
Speaker:
Abstract: Non-Alcoholic Fatty Liver Disease (NAFLD) is becoming increasingly prevalent in the world population. Without diagnosis at the right time, NAFLD can lead to non-alcoholic steatohepatitis (NASH) and subsequent liver damage. The diagnosis and treatment of NAFLD depend on the NAFLD activity score (NAS) and the liver fibrosis stage, which are usually evaluated from liver biopsies by pathologists. We propose a novel method to automatically predict the NAS score and fibrosis stage from CT data, which is non-invasive and inexpensive to obtain compared with liver biopsy. We also present a method to combine information from CT and H&E-stained pathology data to improve the performance of NAS score and fibrosis stage prediction when both types of data are available. However, the availability of a large annotated dataset cannot always be ensured, and there can be domain shifts when using transfer learning. So, we propose a self-supervised learning method to address both problems. As NAFLD causes changes in the liver texture, we also propose to use texture-encoded inputs to improve the performance of the model.
Location: Remote via Webex
Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Vladimir Pavlovic

Prof. Sungjin Ahn

Prof. Aaron Bernstein

Start Date: 22 Dec 2020;
Start Time: 12:00PM - 02:00PM
Title: Graph-Representation Learning for Human-Centered Analysis of Building Layouts

Bio:
Speaker:
Abstract: Floorplans are a standard representation of building layouts. Computer-Aided Design (CAD) applications and existing Building Information Modeling tools rely on simple floorplan representations which are not amenable to automation (e.g., generative design) and do not account for how people inhabit and occupy the space -- two key challenges which must be addressed for intelligent human-aware building design. This thesis addresses these challenges by exploring the use of graph representation learning techniques to implicitly encode the latent state of floorplan configurations, which is more amenable to automation. Specifically, we use graphs as an intermediate representation of floorplans. Rooms are nodes, and edges indicate a connection between adjacent rooms, either through a door or a passageway. The graphs are annotated with a variety of attributes which characterize the semantic, geometric, and dynamic properties of the floorplan with respect to human-centered criteria. To address the variation in the dimensionality of graphs, we utilize an intermediate sequential representation (generated by random walks) which allows us to encode the graphical structure in a fixed-dimensional representation. We propose the use of RNN-based variational autoencoder architectures to embed attributed floorplans. We enhance graph-based representations of floorplans with human occupancy and behavior attributes which are extracted by statically analysing the floorplan geometry and by running human behavior simulations on large datasets of real and procedurally generated synthetic floorplans. We explore the potential of these representations on a variety of tasks, including finding semantically similar floorplans, floorplan optimization, and generative design. Our approach and techniques are extensively evaluated through a series of quantitative experiments and user studies with expert architects to validate our findings.
The qualitative, quantitative and user-study evaluations show that our embedding framework produces meaningful and accurate vector representations for floorplans. Our models and associated datasets have been made publicly available to encourage adoption and spark future research in the burgeoning research area of intelligent human-aware building design.
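The random-walk step described in the abstract can be illustrated with a small Python sketch. The five-room floorplan, room names, and walk parameters below are invented for illustration and are not from the thesis; the point is only how a variable-size room-adjacency graph becomes fixed-length room sequences suitable for a sequence model:

```python
import random

# Hypothetical floorplan graph: rooms are nodes, edges are doors/passageways.
floorplan = {
    "entry":   ["hall"],
    "hall":    ["entry", "kitchen", "bedroom"],
    "kitchen": ["hall"],
    "bedroom": ["hall", "bath"],
    "bath":    ["bedroom"],
}

def random_walks(graph, walk_len=5, num_walks=4, seed=0):
    """Turn a variable-size graph into fixed-length room sequences."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        node = rng.choice(sorted(graph))   # random start room
        walk = [node]
        while len(walk) < walk_len:
            node = rng.choice(graph[node])  # step to an adjacent room
            walk.append(node)
        walks.append(walk)
    return walks

for w in random_walks(floorplan):
    print(w)
```

Each walk has the same length regardless of the graph's size, which is what makes an RNN-based encoder applicable.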
Location: Remote via Webex
Committee:

Prof. Mubbasir Kapadia (chair)

Prof. Mridul Aanjaneya

Prof. Gerard de Melo

Prof. M. Brandon Haworth

Start Date: 11 Jan 2021;
Start Time: 10:30AM - 11:30AM
Title: Overview of NSF CISE: Vision and Investments in Research, Education, Workforce and Infrastructure

Bio:

Margaret Martonosi is the US National Science Foundation's (NSF) Assistant Director for Computer and Information Science and Engineering (CISE). With an annual budget of more than $1B, the CISE directorate at NSF has the mission to uphold the Nation's leadership in scientific discovery and engineering innovation through its support of fundamental research and education in computer and information science and engineering, as well as transformative advances in research cyberinfrastructure. While at NSF, Dr. Martonosi is on leave from and retains her tenure at Princeton University, where she is the Hugh Trumbull Adams '35 Professor of Computer Science. Dr. Martonosi's research interests are in computer architecture and hardware-software interface issues in both classical and quantum computing systems. Dr. Martonosi is an elected member of the American Academy of Arts and Sciences, and a Fellow of the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE).


Speaker:
Abstract: The National Science Foundation (NSF) supports a majority of US academic research in the Computer and Information Science and Engineering (CISE) topic areas. Since February 2020, Dr. Margaret Martonosi has served as NSF CISE AD, stewarding the CISE directorate's $1B annual budget on behalf of research, education, workforce, and infrastructure funding in CISE topic areas and for science as a whole. Martonosi's talk will present an overview of CISE investments, including both technical visions for the field and thoughts on priorities for advancing that vision. Of particular importance are the people issues: Who will we engage in advancing CISE into the future? Martonosi will discuss CISE and NSF investments in broadening participation in computing, from K-12 to senior career stages. Please join us for this highly interactive session, and please bring your input and questions!
Location: Via Zoom
Committee:
Start Date: 04 Feb 2021;
Start Time: 10:30PM - 11:45PM
Title: Recent Lower Bounds in Algebraic Complexity Theory

Bio:

Ben Lee Volk is a postdoctoral researcher at UT Austin. Previously, he was a postdoctoral scholar at Caltech. He obtained his Ph.D. from Tel Aviv University and his M.Sc. from the Technion, both under the supervision of Amir Shpilka. His research interests include complexity theory, and in particular the study of algebraic models of computation, as well as neighboring areas such as pseudorandomness, coding theory and combinatorics.


Speaker:
Abstract: Algebraic Complexity Theory studies the complexity of solving algebraic computational tasks using algebraic models of computation. One major problem in this area is to prove lower bounds on the number of arithmetic operations required for computing explicit polynomials. This natural mathematical problem is the algebraic analog of the P vs. NP problem. It also has tight connections to other classical mathematical areas and to fundamental questions in complexity theory. In this talk I will provide background and then present some recent progress in this line of study, particularly in the area of proving lower bounds for computing linear transformations.
Location: Via Zoom Recording
Committee:
Start Date: 08 Feb 2021;
Start Time: 10:30AM -
Title: Communication Complexity, Quantum Computing and Optimization: New Connections and Insights

Bio:

Makrand Sinha is currently a postdoctoral researcher at CWI in Amsterdam hosted by Nikhil Bansal, Ronald de Wolf and Monique Laurent. Prior to that, he obtained his Ph.D. in Computer Science from the University of Washington, Seattle in 2018, under the guidance of Anup Rao. His research interests lie in quantum computing, communication complexity and optimization, and in particular in making connections between these areas.


Speaker:
Abstract: How much information flows through a system? This fundamental question is at the heart of communication complexity. Techniques from this field have turned out to be immensely powerful and fairly universal tools to understand the power of different kinds of algorithms. In this talk, I will describe new methods that I have developed to analyze communication which offer fresh insights into quantum computing and optimization. Using these techniques, I will answer a question of Aaronson and Ambainis regarding the maximal advantage that quantum algorithms can have over classical algorithms in the "black-box" model, and resolve a conjecture due to Rothvoss about optimal linear programs for approximately solving the matching problem. Looking forward, I also propose new directions to explore more connections among these areas, with the intention of answering important questions regarding whether quantum speedups need structure, as well as questions regarding the power of more general optimization approaches, such as semidefinite programming.
Location: Via Zoom
Committee:
Start Date: 09 Feb 2021;
Start Time: 10:30AM -
Title: New Arenas in Hardness of Approximation

Bio:

Karthik is currently a Postdoctoral Fellow at the Courant Institute of Mathematical Sciences, New York University, hosted by Subhash Khot. He received his Ph.D. in 2019 from Weizmann Institute of Science where he was advised by Irit Dinur. Prior to that, he obtained his M.S. from Ecole Normale Superieure in Lyon. His main research interests are in the intersection of complexity theory and combinatorial geometry.


Speaker:
Abstract: In this talk, I will introduce the mathematically rich area of hardness of approximation through recent inapproximability results for selected geometric problems whose difficulty ranges from being in P (e.g., the closest pair problem) to being NP-hard (e.g., the k-means clustering problem). The closest pair problem is a fundamental problem in computational geometry: given a set of n points in high dimensions (say log n dimensions) as input, we are required to find the closest pair of points. This problem can be solved in a straightforward manner in roughly O(n^2) time (by computing all pairwise distances), but can it be solved faster, in say O(n^1.9) time? If not, is there at least a fast algorithm that finds a pair of points that are close to being the nearest? An important task in unsupervised learning is k-means clustering. In this problem, given a set of n points and an integer k, we are asked to find a partition of the point-set into k clusters that minimizes the k-means objective. This problem is NP-hard and thus believed not to admit any polynomial (in n) time algorithm. But is there an algorithm running in polynomial time that can find k clusters whose k-means objective is only a factor of 2 away from an optimal clustering solution? In this talk, I will present recent advances on the above long-standing questions and briefly sketch the landscape of the complexity of geometric problems and the important open questions that need to be addressed therein. I will also highlight the symbiosis in the ideas needed to answer the aforementioned questions on the closest pair and clustering problems, which might a priori seem very different.
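For intuition, the straightforward O(n^2) baseline the abstract refers to can be sketched in a few lines of Python (a generic illustration, not code from the talk):

```python
import itertools
import math

def closest_pair(points):
    """Naive O(n^2) closest pair: examine every pairwise distance."""
    best, best_d = None, math.inf
    for p, q in itertools.combinations(points, 2):
        d = math.dist(p, q)  # Euclidean distance (Python 3.8+)
        if d < best_d:
            best_d, best = d, (p, q)
    return best, best_d

pts = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0), (6.0, 7.0)]
pair, d = closest_pair(pts)
print(pair, d)  # (0,0) and (1,1) are closest, at distance sqrt(2)
```

The open question in the talk is whether anything substantially faster than this all-pairs scan is possible in high dimensions, even approximately.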
Location: Via Zoom Recording
Committee:
Start Date: 11 Feb 2021;
Start Time: 10:30AM -
Title: Algorithms and Barriers for Fast Matrix Multiplication

Bio:

Josh Alman is a Michael O. Rabin postdoctoral fellow in theoretical computer science at Harvard University. He earned his Ph.D. in computer science from MIT in 2019, where he was advised by Virginia Vassilevska Williams and Ryan Williams. He works on both algorithm design and complexity theory, and much of his work combines tools and insights from both areas. He's particularly interested in developing algebraic tools like algorithms for matrix multiplication and Fourier transforms, and applying them to problems throughout computer science. His awards include the European Association of TCS Distinguished Dissertation Award, the George M. Sprowls Award for outstanding Ph.D. theses in computer science at MIT, and best student paper awards at CCC 2019 and FOCS 2019.


Speaker:
Abstract: Matrix multiplication is one of the most basic algebraic operations. Since Strassen's surprising breakthrough algorithm from 1969, which showed that matrices can be multiplied faster than the most straightforward algorithm, algorithmic problems from nearly every area of computer science have been sped up by clever reductions to matrix multiplication. It is popularly conjectured that two n × n matrices can be multiplied in roughly O(n^2) time, but we don't yet have the algorithmic techniques needed to achieve this. In this talk, I'll give an overview of my work on this important problem, in the broad context of my research on combining ideas from algorithm design and complexity theory to help understand problems throughout computer science. First, I'll give an overview of known applications of matrix multiplication, including some new applications to nearest neighbor search and processing data streams. Second, I'll talk about the fastest known algorithm for matrix multiplication, which runs in time O(n^2.37286) and which I recently designed with Virginia Vassilevska Williams. Third, I'll describe new barrier results which help to explain why we've been stuck for so long on this problem, and describe what kinds of insights are needed for further improvements.
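Strassen's 1969 breakthrough mentioned in the abstract can be shown concretely at the 2x2 base case: seven multiplications instead of the obvious eight, which, applied recursively to matrix blocks, yields an O(n^log2(7)) ≈ O(n^2.81) algorithm. This is an illustrative sketch of the classical result only, not the speaker's O(n^2.37286) algorithm:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products (instead of 8).
    Applied recursively to n/2 x n/2 blocks, this gives O(n^2.81) time."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

All subsequent improvements, including the O(n^2.37286) bound discussed in the talk, come from far more elaborate recursive constructions in the same spirit.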
Location: Via Zoom Recording
Committee:
Start Date: 18 Feb 2021;
Start Time: 10:30AM -
Title: Efficient Algorithms and Hardness for Fundamental Problems in Data Science

Bio:

Peng Zhang is a Postdoctoral Associate in Computer Science at Yale University, under the supervision of Daniel Spielman. She obtained her Ph.D. from Georgia Tech, advised by Richard Peng. Her research lies broadly in designing and understanding the limits of fast algorithms. She has worked on structured linear equations and linear programs, and on discrepancy theory and its application in the design of randomized controlled trials. Zhang's work received the Best Student Paper Award at FOCS 2017 and the Georgia Tech College of Computing Dissertation Award in 2019.


Speaker:
Abstract: Two important components of data science are data and computation. Sometimes, data is cheap but computation is expensive. Sometimes, data can be expensive (for example, when data are collected from experiments with human subjects). In this talk, I will discuss efficient algorithms and hardness results addressing fundamental problems in computation and data. In the first part of the talk, I will present hardness results and fast algorithms for solving structured linear equations and linear programs, which are central tools in data analysis. Solving arbitrary linear equations and linear programs can be time-consuming. However, many linear equations and linear programs arising commonly in practice have additional structures that enable faster solvers. One example is Laplacian linear equations, which can be solved in nearly linear time (Spielman-Teng, 2004). I will show that a slight generalization of Laplacian linear equations is as hard to solve as arbitrary linear equations. But when we know additional geometric structures about this slightly more general class of linear equations, we can design asymptotically faster solvers. Then, I will briefly discuss hardness results and fast algorithms for positive linear programs, another example showing that structures enable faster solvers. In the second part of the talk, I will present an algorithm that improves the design of randomized controlled trials, which are widely used to test the effectiveness of interventions. In a trial, we randomly partition all subjects into a treatment group and a control group. We want the two groups to be similar in covariates — features that we know about the subjects. By exploiting ideas from discrepancy theory, we obtain random partitions with a nearly optimal tradeoff between the gain we have when covariates are correlated with treatment outcomes and the loss we suffer when they are not.
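The covariate-balance objective in trial design can be illustrated with a simple rerandomization baseline: sample many random 50/50 splits and keep the most balanced one. This is only a sketch of the objective the talk optimizes; it is not the discrepancy-theory algorithm presented there, and all names and parameters below are invented:

```python
import random

def rerandomize(covariates, tries=200, seed=0):
    """Pick, among `tries` random 50/50 splits, the one whose treatment and
    control covariate means differ least (summed over covariate dimensions)."""
    rng = random.Random(seed)
    n, d = len(covariates), len(covariates[0])
    idx = list(range(n))
    best, best_imb = None, float("inf")
    for _ in range(tries):
        rng.shuffle(idx)
        treat, control = idx[: n // 2], idx[n // 2 :]
        imb = sum(
            abs(sum(covariates[i][j] for i in treat) / len(treat)
                - sum(covariates[i][j] for i in control) / len(control))
            for j in range(d)
        )
        if imb < best_imb:
            best_imb, best = imb, (sorted(treat), sorted(control))
    return best, best_imb
```

The discrepancy-theoretic approach in the talk achieves a principled tradeoff between this kind of balance and the robustness that pure randomization provides, rather than just searching over random splits.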
Location: Via Zoom Recording
Committee:
Start Date: 19 Feb 2021;
Start Time: 10:00AM - 11:00AM
Title: Throwing a Sofa Through the Window

Bio:

Dan Halperin is a professor of Computer Science at Tel Aviv University. His main field of research is Computational Geometry and Its Applications. A major focus of his work has been in research and development of robust geometric algorithms, principally as part of the CGAL project and library. The application areas he is interested in include robotics, automated manufacturing, algorithmic motion planning, and 3D printing. Halperin is an IEEE Fellow and an ACM Fellow. http://acg.cs.tau.ac.il/danhalperin


Speaker:
Abstract: Planning motion for robots and other artifacts toward desired goal positions while avoiding obstacles on the way becomes harder when the environment is tight or densely cluttered. Indeed, prevalent motion-planning techniques often fail in such settings. The talk centers on recently developed efficient algorithms to cope with motion in tight quarters. We study several variants of the problem of moving a convex polytope in three dimensions through a rectangular (and sometimes more general) window. Specifically, we study variants in which the motion is restricted to translations only, discuss situations in which such a motion can be reduced to sliding (translation in a fixed direction), and present efficient algorithms for those variants. We show cases where sliding is insufficient but purely translational motion works, or where purely translational motion is insufficient and rotation must be included. Finally, we explore the general setup, where we want to plan a general motion (with all six degrees of freedom) for the polytope through the window, and present an efficient algorithm for this problem, with running time close to O(n^4), where n is the number of edges of the polytope. (Joint work with Micha Sharir and Itay Yehuda.) As time permits, I will present additional recent results for motion in tight settings in assembly planning, fixture design, and casting and molding.
Location: Via Webex
Committee:
Start Date: 22 Feb 2021;
Start Time: 10:30AM - 11:45AM
Title: Mechanism Design for Social Good

Bio:

Kira Goldner is a postdoctoral researcher in the Computer Science Department and at the Data Science Institute at Columbia University, hosted by Tim Roughgarden, and supported by NSF Math Sciences and Data Science Institute fellowships.  Kira uses her background in the foundations of mechanism design to address societal problems, e.g., in healthcare, climate change, and privacy.  She has also worked on core problems concerning revenue maximization, simplicity, and robustness.  As part of this agenda, Kira co-founded Mechanism Design for Social Good (MD4SG), an interdisciplinary initiative working to improve access to opportunity for historically disadvantaged communities.  She received her PhD in computer science and engineering from the University of Washington under the advisement of Anna Karlin, and was supported by a Microsoft Research PhD Fellowship and a Google Anita Borg Scholarship.  She has received many awards for her work, including the EC 2019 Best Paper with a Student Lead Author Award and the EC 2020 Best Presentation by a Student or Postdoctoral Researcher Award.


Speaker:
Abstract: Society is run by algorithms, and in many cases, these algorithms interact with autonomous agents who have a stake in the outcome. These participants may behave strategically in an attempt to "game the system," resulting in unexpected or suboptimal outcomes. In order to accurately predict an algorithm's outcome and quality, we must design it to be robust to strategic manipulation. This is the subject of algorithmic mechanism design, which borrows ideas from game theory and economics to design robust algorithms. In this talk, I will show how results from the theoretical foundations of algorithmic mechanism design can be used to solve problems of societal concern. I will focus on applications in carbon license allocations, health insurance markets, and online labor markets.
Location: Via Zoom
Committee:
Start Date: 22 Feb 2021;
Start Time: 02:30PM - 04:00PM
Title: Human-Centered Intelligent Sensing Systems

Bio:

Jorge Ortiz is an Assistant Professor at Rutgers University, where he directs the Cyber-Physical Intelligence Lab (CyPhy-Lab) and where he is also a member of the Wireless Information Network Laboratory (WINLAB). His work focuses on building and studying sensing systems that learn about human behavior, from human feedback, and with humans to improve system objectives and enhance people's lives. These include a broad range of sensing systems: smart objects, smart built environments, and, more broadly, smart cities. Prior to joining Rutgers in 2018, he was a Research Staff Member at IBM Research working on machine learning and the Internet of Things. In the five years he was at IBM, he attained 12 patents, published in multiple top academic conferences, journals, and books, and was awarded ‘Best Poster’ at IEEE/ACM IPSN '08, two ‘Best Paper Runner-ups’ at Buildsys '15, ‘Best Paper’ at ICISSP '18, and ‘Best Paper Runner-up’ at IoTDI '19. At IBM he led teams to commercialize two major research projects and bring them to market. Dr. Ortiz also has extensive industry experience, including several years at Oracle Corporation, and has worked at and led multiple startups. He is currently serving as co-TPC chair for Buildsys 2020. Dr. Ortiz attained his Ph.D. in Computer Science from UC Berkeley in 2013, an M.S. in Computer Science from Berkeley in 2010, and a B.S. in Computer Science from MIT in 2003.


Speaker:
Abstract: This talk is about the design of systems and algorithms for sensing systems that interact directly with humans. I will discuss our work in the Cyber-Physical Intelligence Lab (CyPhy-Lab) at Rutgers University, where we study sensing systems that learn about human behavior, from human feedback, and with humans to improve system objectives and enhance people's lives. I will describe three ongoing projects that explore these themes more precisely. First, I will describe PillSense, a smart pillbox system for medication adherence. This system helps us learn about how humans take their medication and how physical design is tightly coupled to the system's ability to identify users effectively. We describe two pillbox designs and associated algorithms to address the challenges posed. Then, I will discuss our ongoing project Maestro, a system that learns from humans to fill the semantic gap between sensor measurements and their interpretation, in order to facilitate the construction of smart ambient-sensing applications in buildings. Finally, I will discuss project Paz, a system that attempts to learn when to interact with humans as we work to facilitate agent-human collaboration to both attain system objectives (i.e., efficiency) and enhance human productivity, comfort, and entertainment.
Location: Via Zoom
Committee:
Start Date: 23 Feb 2021;
Start Time: 10:30AM -
Title: Proof Complexity and Applications

Bio:

Marc Vinyals is a visiting fellow at the Technion, hosted by Yuval Filmus. He works on computational complexity, and proof complexity in particular. His research interests include other areas of computational complexity such as circuit complexity, communication complexity, query complexity, and pebble games, as well as the theory of satisfiability solving. Previously he was a graduate student at the KTH Royal Institute of Technology, supervised by Jakob Nordström, and a visiting fellow at the Tata Institute of Fundamental Research, hosted by Arkadev Chattopadhyay.


Speaker:
Abstract: Even though the origins of proof complexity are intimately tied to the P vs NP question, nowadays a significant part of the appeal of research in this area is that new results readily apply to large classes of algorithms. In this talk we will discuss how recent developments in proof complexity are impacting the design of satisfiability solvers, how fine-grained mathematical models can help us smooth over discrepancies between theory and practice, and how questions arising from implementation details end up as meaningful mathematical problems with connections to other topics in computational complexity.
Location: Via Zoom
Committee:
Start Date: 24 Feb 2021;
Start Time: 02:00PM - 04:00PM
Title: GreenDRL: Managing Green Data Centers Using Deep Reinforcement Learning

Bio:
Speaker:
Abstract: Managing data centers to maximize efficiency and sustainability is a complex and challenging problem. In this work, we explore the use of deep reinforcement learning (RL) to manage “green” data centers, bringing a robust approach to optimizing management systems for specific workload and datacenter characteristics. Specifically, we consider data centers that are partially powered by on-site generation of renewable energy and partially cooled by “free-cooling.” We design and evaluate GreenDRL, a hybrid system that combines a deep RL agent and a simple dispatcher to schedule a workload while jointly managing server power consumption and the cooling system to minimize cost. Our design addresses several important challenges, including scalability, robustness, and effective learning in an environment comprising an enormous state/action space and multiple stochastic processes. Evaluation results show that GreenDRL is able to learn important principles such as delaying deferrable jobs to leverage variable generation of renewable (solar) energy, and avoiding the use of power-intensive cooling settings even at the expense of leaving some renewable energy unused. Simulation of a realistic (small) green data center running a workload containing a fraction of jobs deferrable by up to 12 hours shows that GreenDRL can reduce grid electricity consumption by 26–57% compared to a baseline system, depending on outside temperatures and the availability of renewable energy. Overall, our work shows that deep RL is a promising technique for building efficient management systems for green data centers.
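The "delay deferrable jobs to leverage solar energy" principle that GreenDRL learns can be illustrated with a toy greedy scheduler. This is a hand-written sketch of the learned behavior only, not the RL system from the talk; the job format, function names, and units are all invented for illustration:

```python
def schedule_deferrable(jobs, solar):
    """Shift each deferrable job (arrival_hour, deadline_hour, energy_kwh) to
    the hour within its window that has the most unused solar energy, drawing
    the remainder from the grid. `solar` maps hour -> available solar kWh."""
    solar = dict(solar)  # remaining solar per hour (mutated as jobs are placed)
    grid_kwh = 0.0
    plan = {}
    for j, (arrival, deadline, energy) in enumerate(jobs):
        # Greedily pick the hour in [arrival, deadline] with the most solar left.
        hour = max(range(arrival, deadline + 1), key=lambda h: solar.get(h, 0.0))
        used_solar = min(energy, solar.get(hour, 0.0))
        solar[hour] = solar.get(hour, 0.0) - used_solar
        grid_kwh += energy - used_solar
        plan[j] = hour
    return plan, grid_kwh

# A job arriving at 2am, deferrable until 2pm, lands on the noon solar peak:
solar_profile = {h: (5.0 if h == 12 else 0.0) for h in range(24)}
print(schedule_deferrable([(2, 14, 3.0)], solar_profile))
```

GreenDRL learns this kind of policy (and the cooling tradeoffs) from experience rather than from a hand-coded rule like the one above.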
Location: Remote via Zoom
Committee:

Thu D. Nguyen (Advisor)

Ulrich Kremer

He Zhu

Karl Stratos

Start Date: 25 Feb 2021;
Start Time: 02:00PM - 04:00PM
Title: Automatic, Efficient, and Robust Deep Learning

Bio:
Speaker:
Abstract: Deep learning enables automatically discovering useful, multistage, task-specific features from high-dimensional raw data. Instead of relying on domain expertise to hand-engineer features, it uses a general learning procedure that is readily applicable to many domains such as image analysis and natural language processing. Deep learning has made significant advances after decades of development, during which dataset sizes, model sizes, and benchmark accuracies have also dramatically increased. However, these three increasing trends pose corresponding challenges regarding data efficiency, model efficiency, and generalization robustness. To address these challenges, we research solutions from three perspectives: automatic data augmentation, efficient architecture design, and robust feature normalization. (i) Chapters 2 to 4 propose a series of automatic data augmentation methods to replace the hand-crafted rules that define the augmentation sampling distributions, magnitude ranges, and functions. Experiments show the automatic augmentation methods can be applied to diverse tasks and effectively improve their performance without using extra training data. (ii) Chapter 5 introduces the quantized coupled U-Nets architecture to boost the efficiency of stacked U-Nets, with broad applications to location-sensitive tasks. U-Net pairs are coupled together through shortcut connections that facilitate feature reuse across U-Nets and reduce redundant network weights. Quantizing weights, features, and gradients to low-bit representations further makes coupled U-Nets more lightweight, accelerating both training and testing. (iii) Chapter 6 presents two feature normalization techniques, SelfNorm and CrossNorm, to promote deep networks' robustness. SelfNorm utilizes attention to highlight vital feature statistics and suppress trivial ones, whereas CrossNorm augments feature statistics by randomly exchanging statistics between feature maps in training.
SelfNorm and CrossNorm can reduce deep networks' sensitivity and bias to feature statistics and improve the robustness to out-of-distribution data, which usually results in unforeseen feature statistics. Overall, the proposed automatic data augmentation, efficient U-Net design, and robust feature normalization shed light on new perspectives for automatic, efficient, and robust deep learning.
Location: Remote via Zoom
Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Hao Wang

Prof. Sungjin Ahn

Prof. Xiaolei Huang (External member)

Start Date: 05 Mar 2021;
Start Time: 10:00AM - 11:00AM
Title: Ethics Washing in AI

Bio:

Moshe Vardi is a Professor of Computer Science at Rice University and directs the Ken Kennedy Institute for Information Technology. He also holds the titles of University Professor, the Karen Ostrum George Professor in Computational Engineering, and Distinguished Service Professor. Dr. Vardi received his Ph.D. from the Hebrew University of Jerusalem in 1981. His interests focus on applications of logic to computer science. He is an expert in model checking, constraint satisfaction, and database theory. Vardi is the recipient of multiple awards, including 3 IBM Outstanding Innovation Awards, co-winner of the 2000 Gödel Prize, co-winner of the 2005 ACM Paris Kanellakis Theory and Practice Award, co-winner of the LICS 2006 Test-of-Time Award, the 2008 and 2017 ACM Presidential Awards, the 2008 Blaise Pascal Medal in computational science from the European Academy of Sciences, and others. He holds honorary doctorates from 8 universities. He is a Guggenheim Fellow, a Fellow of ACM, AAAS and AAAI, and a member of the US National Academy of Engineering, the National Academy of Sciences, the European Academy of Sciences, and the Academia Europaea. Prof. Vardi is the president of the International Federation of Computational Logicians. He is Senior Editor of Communications of the ACM, after serving as its Editor-in-Chief for a decade.


Speaker:
Abstract: Over the past decade Artificial Intelligence, in general, and Machine Learning, in particular, have made impressive advancements, in image recognition, game playing, natural-language understanding and more. But there were also several instances where we saw the harm that these technologies can cause when they are deployed too hastily. A Tesla crashed on Autopilot, killing the driver; a self-driving Uber crashed, killing a pedestrian; and commercial face-recognition systems performed terribly in audits on dark-skinned people. In response to that, there has been much recent talk of AI ethics. Many organizations produced AI-ethics guidelines and companies publicize their newly established responsible-AI teams. But talk is cheap. “Ethics washing” — also called “ethics theater” — is the practice of fabricating or exaggerating a company’s interest in equitable AI systems that work for everyone. An example is when a company promotes “AI for good” initiatives with one hand, while selling surveillance tech to governments and corporate customers with the other. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology’s impact on society. Technology is driving the future, but who is doing the steering?
Location: Via Webex
Committee:
Start Date: 09 Mar 2021;
Start Time: 10:00AM - 12:00PM
Title: Shape Modeling of Data with Complex Geometry and Topology

Bio:
Speaker:
Abstract: With the advancement and prevalence of various sensors over the past years, it is increasingly easy to produce and access abundant data with different modalities and properties. These data typically are noisy, and feature complex geometrical and topological structures. Abstracting away the application domains reveals a common thread that such data can be effectively processed from the perspective of shape modeling. Depending on the contexts, shape modeling can be performed at different granularities, from individual data samples to the whole dataset. In this dissertation, we explore the benefits and challenges of shape modeling for such complex data, and offer novel solutions to achieve effective and scalable data processing over different applications. In the first part of this dissertation, we connect data analysis with explicit shape modeling by considering the underlying topological features within data. Based on the theory of algebraic topology, we show that low-dimensional topological structures offer compact global shape information for data analysis, and present provable algorithms to identify these structures. Furthermore, we demonstrate that such topological information not only is rich in individual samples, but also can be discovered from the dataset as a whole. Empirically we show the benefits of harnessing data topology via two specific applications, i.e., cardiac trabeculae restoration from 3D CT images and learning with noisy labels. As the second part of this dissertation, we explore the data-driven paradigms to implicitly model shapes for large-scale data processing. Such implicit strategies are built upon deep neural networks, and typically lead to more robust data descriptions than the explicit ones at the cost of higher sample complexity. We illustrate the machinery behind implicit shape modeling by considering one concrete data modality, i.e., 3D point clouds, which broadly serve as the standard outputs of various sensors. 
Through developing specialized network architectures, we are able to capture the shape features of point clouds efficiently and effectively, thereby benefiting point cloud-based applications, such as 3D scene segmentation and autonomous driving.
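As a minimal example of the low-dimensional topological structures the abstract refers to, the sketch below counts the 0-dimensional features (connected components) of a point cloud at a fixed scale using union-find. This toy is only illustrative; the dissertation's provable algorithms target richer structures than components, and the function and parameters here are invented:

```python
import math

def connected_components(points, radius):
    """Connect points closer than `radius` and count the resulting components
    with union-find: the 0-dimensional topology of the cloud at this scale."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= radius:
                parent[find(i)] = find(j)  # union the two clusters
    return len({find(i) for i in range(len(points))})

print(connected_components([(0.0, 0.0), (0.5, 0.0), (10.0, 10.0)], 1.0))
```

Tracking how such features appear and disappear as the scale varies is the basic idea behind the persistent-topology machinery used in the thesis.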
Location: Remote via Webex
Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Konstantinos Michmizos

Prof. Desheng Zhang

Prof. Mingchen Gao (External member; University at Buffalo, SUNY)

Start Date: 09 Mar 2021;
Start Time: 03:00PM - 05:00PM
Title: Deep Learning-based Histopathology Image Analysis for Cancer Diagnosis and Treatment

Bio:
Speaker:
Abstract: Histopathology plays a vital role in cancer diagnosis, prognosis, and treatment decisions. The whole slide imaging technique, which captures the entire slide as a digital image (whole slide image, WSI), allows pathologists to view slides digitally, as opposed to traditionally viewing them under a microscope. With the development of computational power and image analysis algorithms, computational methods have been developed for the quantitative and objective analysis of histopathology images, which can reduce intensive labor and improve efficiency for pathologists compared with manual examination. In this dissertation, we focus on deep learning-based solutions in histopathology image analysis for cancer diagnosis and treatment: specifically, nuclei segmentation for cancer diagnosis, and gene mutation prediction for cancer treatment. Nuclei segmentation is a critical step in the automatic analysis of histopathology images. We focus on two directions to tackle the problems in the deep learning-based nuclei segmentation task. One is annotation-efficient algorithms. As fully supervised learning of deep neural networks requires a large amount of training data, a weakly supervised nuclei segmentation framework based on a portion of nuclear locations is proposed to alleviate the annotation effort of pathologists. This method achieves performance comparable to the fully supervised methods and about a 60× speed-up (10% points) in annotation time. The other direction is to improve instance segmentation performance. The networks and cross-entropy loss in current deep learning-based segmentation methods originate from image classification tasks and have drawbacks for segmentation. Therefore, we propose a full resolution convolutional neural network (FullNet) and a variance constrained cross entropy (varCE) loss to improve the fully supervised segmentation performance.
Beyond the cell-level heterogeneity that is routinely used for cancer diagnosis, it remains unclear for many cancers how tissue structures in histopathology slides are related to genomic features such as gene alterations and expression patterns. We develop a deep learning model to predict genetic mutations and biological pathway activities directly from histopathology slides in breast cancer. The weight maps of tumor tiles are visualized to understand the decision-making process of the deep learning models. Our results provide new insights into the associations between pathological image features, molecular outcomes, and targeted therapies for breast cancer patients.
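The abstract names a variance-constrained cross-entropy (varCE) loss without giving its form. One plausible reading, sketched below purely as an assumption, adds a penalty that pushes predicted probabilities within each nucleus instance toward agreement; the function name and the exact penalty are illustrative, not the thesis's definition.

```python
import numpy as np

def varce_loss(probs, labels, instance_ids, lam=1.0):
    """Hedged sketch of a variance-constrained cross-entropy loss.

    probs: (N, C) predicted class probabilities per pixel
    labels: (N,) ground-truth class index per pixel
    instance_ids: (N,) nucleus instance id per pixel (0 = background)
    """
    eps = 1e-12
    # standard pixel-wise cross entropy
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
    # variance constraint: predictions within one instance should agree
    var_terms = []
    for k in np.unique(instance_ids):
        if k == 0:
            continue  # skip background
        inst = probs[instance_ids == k]
        var_terms.append(np.mean(np.var(inst, axis=0)))
    var = np.mean(var_terms) if var_terms else 0.0
    return ce + lam * var
```

When all pixels of an instance predict the same distribution, the variance term vanishes and the loss reduces to plain cross entropy.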
Location: Remote via Zoom
Committee:

Dimitris Metaxas (Advisor), Rutgers CS

Konstantinos Michmizos, Rutgers CS

Hao Wang, Rutgers CS

Mei Chen (Outside member), Microsoft

Start Date: 12 Mar 2021;
Start Time: 11:30AM -
Title: Rutgers University Department of Computer Science Open House

Bio:

General Session

    webex link: https://rutgers.webex.com/rutgers/j.php?MTID=m43c830298cb6c8323edd67ac08892949

    pass: open

[11.30am] Chair's Welcome Speech (Prof. Matthew Stone)

{youtube}https://www.youtube.com/watch?v=ifSrc-6jo34&ab_channel=ComBraLab{/youtube}
 

[11.40am] GPD's Presentation of the PhD program (Prof. Ulrich Kremer)

{youtube}https://www.youtube.com/watch?v=_QbGDwRfUqU&ab_channel=ComBraLab{/youtube}

[12.00pm] PhD students' Pitch Talks

{youtube}https://www.youtube.com/watch?v=_8RT_iapQQ4&ab_channel=ComBraLab{/youtube}

  •     David Domingo: Accelerating storage recovery (advisor: Prof. Sudarsun Kannan)
  •     Neelesh Kumar: Machine Learning for Motor Learning: Brain-Informed and Interpretable Deep Learning of Movement (advisor: Prof. Konstantinos Michmizos)
  •     Guido Tagliavini Ponce: Optimal Balls-and-Bins Games and Virtual Address Translation (advisor: Prof. Martin Farach-Colton)
  •     Harsha Srimath Tirumala: One way Functions and the Minimum Circuit Size problem (MCSP) (advisor: Distinguished Prof. Eric Allender)
  •     Honglu Zhou: Multi-Hop Transformer For Spatiotemporal Reasoning (advisor: Prof. Mubbasir Kapadia)

[12.30pm] Break out sessions

  •     Theory session - Chair: Prof. Sepehr Assadi

        webex link: https://rutgers.webex.com/rutgers/j.php?MTID=m0bbb36d9b9a5379a93cf40be681d879b

        pass: theory

  •     Systems session - Chair: Prof. Yipeng Huang

        webex link: https://rutgers.webex.com/rutgers/j.php?MTID=m98039c2b93977c3edc57bf836c41c974

        pass: systems

  •     AI session - Chair: Prof. Hao Wang

        webex link: https://rutgers.webex.com/rutgers/j.php?MTID=m9b09e9bc9a91a17b527f6c8f707ff18c

        pass: arti

[1.00pm] The CS Department (Prof. Matthew Stone)

{youtube}https://www.youtube.com/watch?v=3agiEzvlgkU&ab_channel=ComBraLab{/youtube}


Speaker:
Abstract:
Location: Join this event via Webex - details on how to join are listed above
Committee:
Start Date: 12 Mar 2021;
Start Time: 01:00PM - 03:00PM
Title: Instance Segmentation for Biological Image Analysis

Bio:
Speaker:
Abstract: Image-based instance segmentation is a task that differentiates and classifies objects at the pixel level. This task is of great significance for many biological applications (e.g., the study of neural cell interactions), which require precise segmentation of biological objects. However, it is challenging to capture fine-grained object details in biological images and separate the attached or overlapping objects due to their weak boundaries. This thesis presents a series of instance segmentation methods for biological images that address these challenges. We focus on three representative biological subjects, including neural cells, plants, and cell nuclei, to illustrate the image-based challenges and our methods' effectiveness. In particular, neural cells feature irregular shapes with tiny and slender protrusions; plant leaves typically contain thin stalks and overlapping regions; cell nuclei are usually distributed as clusters and have various shapes. In the first chapter, we present an overview of existing instance segmentation methods and their applications in biological image analysis. In the following chapters, we introduce a series of instance segmentation methods that distinguish objects from a global view and classify object pixels within bounding boxes. We study two kinds of object detectors: anchor-based and keypoint-based detectors. We explore strategies to remove undesired neighboring objects within a region of interest (ROI). On this basis, we further enhance the segmentation quality around the uncertain areas (e.g., boundary) via an auxiliary point-wise feature refinement module. Through extensive experiments, we show the superiority of our methods by comparing them to state-of-the-art approaches.
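The point-wise feature refinement module is described only at a high level. As a hedged sketch of the general idea (selecting uncertain pixels, typically near boundaries, for re-classification, in the spirit of PointRend-style refinement rather than the thesis's exact module), one might pick the pixels whose foreground probability is closest to 0.5:

```python
import numpy as np

def uncertain_points(prob_map, k):
    """Return coordinates of the k pixels whose foreground probability is
    closest to 0.5, i.e. the most uncertain ones, for later refinement."""
    uncertainty = -np.abs(prob_map.ravel() - 0.5)  # larger = more uncertain
    idx = np.argsort(uncertainty)[::-1][:k]        # top-k uncertain pixels
    return np.unravel_index(idx, prob_map.shape)
```

A refinement head would then re-predict labels only at these coordinates, which concentrates capacity on weak object boundaries.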
Location: Remote via Webex
Committee:

Prof. Dimitris N. Metaxas (Advisor)

Prof. Konstantinos Michmizos

Prof. Sungjin Ahn

Prof. Huang, Sharon Xiaolei (Penn State)

Start Date: 16 Mar 2021;
Start Time: 05:00PM - 07:00PM
Title: Improving Inference and Generation Process of Generative Adversarial Networks

Bio:
Speaker:
Abstract: Generative Adversarial Networks (GANs) have achieved tremendous success in a broad range of computer vision applications. This thesis considers two directions closely related to GAN-based image synthesis: (i) GAN-based inference learning and (ii) GAN-based video synthesis. Both face many challenges in generation and inference: in GAN-based inference learning, application-driven methods usually outperform approaches with elegant theories, while in GAN-based video synthesis, generation quality is far behind that of contemporary image generators. To tackle these challenges, we conduct extensive studies on improving the inference and generation processes. First, we identify the "incomplete" representation issue in the existing single-pathway framework and propose a two-pathway approach to address it. Self-supervised learning is also employed to make use of both labeled and unlabeled data. Second, as a theoretical extension of the previous work, we analyze three issues that degrade both generation and inference in GAN-based inference learning approaches. To address all three issues in a unified framework, we take the single-pathway approach as a baseline and propose two strategies that solve all of them. Third, we present a framework that leverages contemporary image generators to render high-resolution videos. We frame the video synthesis problem as discovering a trajectory in the latent space of a pre-trained and fixed image generator. Furthermore, we introduce a new task, cross-domain video synthesis, which allows for generating moving objects for which the desired video data is not available. Extensive experiments on various datasets demonstrate the advantages of our methods over existing techniques.
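The abstract frames video synthesis as discovering a trajectory in the latent space of a fixed, pre-trained image generator. A minimal illustration of that framing, with a plain linear path standing in for the learned trajectory and a stub generator (both are assumptions, not the thesis's method):

```python
import numpy as np

def latent_trajectory(z0, z1, num_frames):
    """Linear path between two latent codes; the thesis instead *discovers*
    a trajectory, but any path yields a frame sequence the same way."""
    ts = np.linspace(0.0, 1.0, num_frames)[:, None]
    return (1 - ts) * z0[None] + ts * z1[None]

def render_video(generator, z0, z1, num_frames=8):
    """Decode each latent code along the path with the fixed generator."""
    return [generator(z) for z in latent_trajectory(z0, z1, num_frames)]
```

The key property is that the generator is never retrained: video quality inherits directly from the image generator, which is what makes high-resolution output feasible.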
Location: Remote via Webex
Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Yongfeng Zhang

Prof. Hao Wang

Dr. Han Zhang (Google Brain)

Start Date: 18 Mar 2021;
Start Time: 02:00PM - 04:00PM
Title: Integrated Reconstruction, Segmentation and Functional Analysis for MRI Data

Bio:
Speaker:
Abstract: Magnetic Resonance Imaging (MRI) plays an important role in many clinical applications due to its low radiation and high-contrast imaging. The traditional sequential pipeline for analyzing MRI data includes three steps: reconstruction, segmentation, and functional analysis. For dynamic MRI, e.g., dynamic cardiac MRI, a motion estimation step is also included. These tasks are highly entangled and share important information. However, current approaches are either computationally inefficient or require substantial human effort to design complicated neural architectures. Moreover, limited work has explored the relationships between the different tasks or utilized the coherence information between them. In this thesis, we propose 1) more effective and efficient deep learning-based MRI reconstruction methods; 2) integrated approaches for joint reconstruction and segmentation as well as joint reconstruction and motion estimation; 3) thickness and thickening estimation methods for the left ventricle wall for the diagnosis and characterization of different cardiac diseases. The proposed methods are extensively validated on four different medical applications: 2D cardiac and brain MRI reconstruction, 2D cardiac MRI reconstruction and segmentation, dynamic 3D cardiac MRI reconstruction and motion estimation, and dynamic 3D cardiac MRI thickness and thickening estimation. Our approaches are evaluated on various datasets and outperform current state-of-the-art methods in all these studies.
Location: Remote via Webex
Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Konstantinos Michmizos

Prof. Desheng Zhang

Dr. Zhen Qian (Tencent America)

Start Date: 23 Mar 2021;
Start Time: 10:00AM - 12:00PM
Title: The integration of heterogeneous systems for vehicular sensing

Bio:
Speaker:
Abstract: With the emergence of connected vehicles and, in the future, autonomous vehicles, vehicles are being equipped with rich sensing and communication devices, while various traffic sensors are being deployed on the road. Based on real-time data from these sensors, numerous driver-assistance applications have been presented to improve urban efficiency, e.g., navigation, HD live maps, and traffic accident recovery. Thus, it is essential to improve vehicular sensing for urban residents. Vehicular sensing can be achieved by modeling mobility patterns of vehicular data from multiple systems, e.g., (1) mobile systems, including devices equipped in taxis, buses, trucks, and personal vehicles; and (2) stationary systems, including traffic cameras, electronic toll stations, and roadside units. However, most existing works study vehicular sensing on a single system, which may not comprehensively represent the mobility patterns of residents for the whole city, leading to a biased understanding.
In this dissertation, we study the integration of multiple vehicular sensing systems and design a vehicular sensing framework that enables integrated vehicular sensing from three perspectives: (1) Integration of mobile systems: we present a real-time sensing task scheduling system, RISC, which schedules sensing tasks for commercial vehicles with constrained sensing resources based on unified modeling of vehicles' mobility patterns; (2) Integration of stationary systems: we design a type-aware travel-speed prediction method for road segments, mDrive, by utilizing sparse spatio-temporal data of multiple types of vehicles from traffic camera systems; (3) Integration of hybrid systems: we provide a privacy risk prediction approach based on transfer learning, TransRisk, to predict the privacy risk of a traffic camera system from its small-scale short-term data together with the knowledge learned from large-scale long-term data of existing sensing systems. We implement these three modules of our vehicular sensing framework using real-world data from two Chinese cities, Shenzhen and Suzhou. Our results provide insights into how integrating multiple sensing systems improves the performance of vehicular sensing from different aspects.
Location: Remote via Zoom
Committee:

Prof. Desheng Zhang (Advisor)

Prof. Zheng Zhang

Prof. Yongfeng Zhang

Prof. Ruilin Liu (external member)

Start Date: 24 Mar 2021;
Start Time: 12:00PM - 02:00PM
Title: Towards Visual Learning with Attention Mechanism

Bio:
Speaker:
Abstract: Tremendous interest in deep learning has emerged in the computer vision research community. Established deep convolutional neural networks (CNNs) have achieved astonishing results in various vision tasks, yet problems remain to be addressed. First of all, CNN models are perceived as "black boxes", with a lack of understanding of their internal function. Recently, the class-wise activation map was proposed to show a visual explanation of model attention, but it still lacks a way to use that explanation to guide the learning process. Additionally, the success of deep learning relies on supervised training of models on large-scale data, which requires humans to create massive annotations. In this dissertation, we argue that attention mechanisms can play significant roles in dealing with the challenges mentioned above. First, although class-wise attention mechanisms provide good localization for an individual class of interest when interpreting CNNs, these techniques produce attention maps with substantially overlapping responses among different classes, leading to visual confusion and the need for discriminative attention. We address this problem by means of a new framework that makes class-discriminative attention a principled part of the learning process. Second, to get rid of human annotations, we introduce Co-Attention as weak supervision to generate positive/negative training samples and a Contrastive Attention module to enhance the feature representations such that the comparative contrast between features of the positive and negative samples is maximized. Third, we adopt attention in feature space to bridge different vision tasks in a unified learning framework. Extensive experiments on vision benchmarks show the effectiveness of our approaches in terms of improved image classification and segmentation accuracy.
Regarding applications, our proposed algorithms are applied to the unsupervised detection of highlighted segments in videos, joint face detection and landmark localization, and reasoning about human facial behaviors in deception. Additionally, two new benchmarks are collected to support related studies and facilitate research in the same direction.
Location: Remote via Zoom
Committee:

Prof. Dimitris N. Metaxas (Advisor)

Prof. Konstantinos Michmizos

Prof. Georgios Moustakides

Prof. John Yiannis Aloimonos (external member)

Start Date: 25 Mar 2021;
Start Time: 03:00PM - 05:00PM
Title: Scalable, Physics-driven Model Identification for Robust Robotic Manipulation

Bio:
Speaker:
Abstract: Manipulating all types of objects is still challenging in robotics. Successful manipulation planning requires accurate geometric and mechanical models of objects as a precondition. In warehouses and factories, manipulated objects are typically known in advance, with their CAD models obtained from full 3D scans along with other mechanical properties, such as mass and friction, in the workspace. However, recent research efforts in grasping and manipulation focus instead on tasks where object models are unavailable. For example, the robot cannot obtain full 3D scans and mechanical properties of unknown objects inside a drawer or in a pile of clutter. Furthermore, in such circumstances, nonprehensile manipulation of objects is preferred over the traditional pick-and-place approach because an object may not be easily grasped by the robot, due to the design of the end-effector, the size of the object, or the obstacles surrounding the manipulated object. Recently, combined pushing and grasping approaches have shown success where traditional grasp planners fail, and they work well under uncertainty. First, I propose probabilistic methods to infer geometric and mechanical models of unknown objects by leveraging non-prehensile robot actions. The proposed methods utilize physics simulations of all the physical interactions between the objects and between the robot and the manipulated objects. Non-prehensile manipulation of unknown objects, such as pushing and sliding, together with physics reasoning helps infer the shape and mechanical properties of the objects by replaying hypothesized object models in simulation. The identified probabilistic models then enable probabilistic manipulation planning. I propose probabilistic manipulation planning methods that use the identified object models.
The proposed planning methods achieve successful pre-grasp sliding manipulation by selecting a stable goal configuration and finding the most robust action considering all possible hypothesized models. Finally, I propose nonprehensile manipulation of multiple objects that allows the robot to rearrange the objects much more efficiently. The nested nonprehensile manipulation actions reduce the length of the end-effector's trajectories by manipulating multiple objects simultaneously. The nested rearrangement search algorithm provides an efficient way to explore the object interactions in the combinatorially large configuration space of multiple objects.
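The abstract describes replaying hypothesized object models in simulation to identify probabilistic models. One common way to realize this idea, offered here only as a hedged sketch (the exponential likelihood and the temperature `beta` are assumptions, not the thesis's formulation), is a Bayesian reweighting of model hypotheses by their simulation-to-observation error:

```python
import math

def update_beliefs(priors, sim_errors, beta=5.0):
    """Reweight hypothesized object models: hypotheses whose simulated
    replay matches the observed motion (low error) gain posterior mass.

    priors: prior probability per hypothesis
    sim_errors: discrepancy between simulated and observed motion
    """
    likes = [p * math.exp(-beta * e) for p, e in zip(priors, sim_errors)]
    z = sum(likes)
    return [l / z for l in likes]
```

Repeating this update after each push or slide concentrates belief on the shape and mechanical parameters that best explain how the object actually moved.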
Location: Remote via Zoom
Committee:

Professor Abdeslam Boularias (Advisor)

Professor Jingjin Yu

Professor Mridul Aanjaneya

Professor Tucker Hermans (external member)

Start Date: 26 Mar 2021;
Start Time: 03:00PM - 05:00PM
Title: Scaling data sharing among vehicles to tackle long-tail traffic situations

Bio:
Speaker:
Abstract: Developing robust driver assistance and automated driving technologies requires an understanding of not just common highway and city traffic situations but also the long tail of unusual or rare traffic events (e.g., objects on the roadway or pedestrians crossing a highway). Due to the rareness, variety, and complexity of long-tail traffic conditions, it is widely recognized that ensuring dependability under these situations remains a key challenge for automated vehicles. Specifically, the challenges of tackling long-tail traffic situations are threefold. (i) Due to the rareness of long-tail traffic events, understanding them requires accumulating data collection on the order of billions of miles. Since most existing efforts to collect driving data build on a small fleet of highly instrumented vehicles continuously operated with test drivers, it is challenging to acquire road data on such a large scale. (ii) Among the large amount of data needed, it is hard to automatically identify the unusual situations that are challenging to address and could lead to potential accidents. Although there exists a large body of work on abnormal driving event detection, it focuses on detecting specific, known situations and cannot detect previously unknown unusual road events that are missing from the current set of test cases for automated vehicles. (iii) The complexity of diverse traffic situations makes it hard for a single vehicle to sense its surroundings comprehensively. Although vehicle-to-vehicle (V2V) communications provide a channel for point cloud data sharing, it is challenging to align point clouds from two vehicles with state-of-the-art techniques due to localization errors, visual obstructions, and viewpoint differences.
To address these challenges, this thesis focuses on scaling data sharing among vehicles, which enables low-cost road data acquisition, identification of unusual driving events, and accurate vehicle-to-vehicle point cloud alignment. In particular, the proposed solution includes: (1) A lightweight sensing and driving data logging system that can derive internal driver inputs (i.e., steering wheel angles, driving speed, and acceleration) and external perceptions of road environments (i.e., road conditions and front-view video) using a smartphone and an IMU mounted in a vehicle, enabling road data acquisition and sharing in heterogeneous driving scenarios across different vehicle models and drivers. (2) An automatic unusual driving event identification system, which can detect unusual situations through a two-pronged approach involving inertial monitoring of driver reactions and an autoencoder-based technique for detecting unusual video scenes. (3) A two-phase point cloud registration mechanism to fuse point clouds from two different vehicles, which focuses on key objects in the scene where the point clouds are most similar and infers the transformation from those.
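The second phase of the registration mechanism must produce a rigid transformation between the two vehicles' point clouds. Given matched points on the shared key objects, the standard closed-form step is the Kabsch/Procrustes fit; the sketch below shows that generic step only, not the thesis's object-matching or refinement details.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: least-squares rotation R and translation t with dst ≈ R·src + t.

    src, dst: (N, 3) arrays of corresponding points.
    """
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

In a V2V setting, restricting the fit to mutually visible key objects is what makes it robust to localization errors and viewpoint differences.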
Location: Remote via Zoom
Committee:

Prof. Marco Gruteser (Advisor)

Prof. Richard Martin

Prof. Desheng Zhang

Dr. Fan Bai (External Member, General Motors)

Start Date: 29 Mar 2021;
Start Time: 01:00PM - 03:00PM
Title: Computational Methods to Understand the Association between Emojis and Emotions

Bio:
Speaker:
Abstract: Emojis have become ubiquitous in digital communication due to their visual appeal as well as their ability to vividly express human emotion, among other factors. They are also heavily used in customer surveys and feedback forms. Hence, there is a need for methods and resources that shed light on their meaning and communicative role. In this work, we seek to explore the connection between emojis and emotions by employing new resources and methodologies. First, we compile a unique corpus of ~20.8 million emoji-centric tweets, such that we can capture rich emoji semantics using a comparably small dataset. We then train a model to generate interpretable word vectors and show how domain-specific emoji embeddings give better emotion prediction than vanilla embeddings like GloVe and Word2Vec. Second, we conduct annotation experiments for a set of 150 popular emojis. This yields 1,200 emoji-emotion pairs of human ratings of association with the 8 basic human emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. This gold standard of emoji-emotion scores is the first of its kind. We additionally conduct experiments to assess to what extent such associations can be inferred from existing data. Our experiments show that this succeeds when high-quality word-level information is available. Lastly, we consider a set of popular NLP tools and perform additional experiments on texts containing emojis to analyze how these tools operate on emojis. Keywords: EmoTag, Emoji--Emotion, Emoji--Embedding, Twitter--Emoji, Emotion--Lexicons
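To illustrate how emoji-emotion associations can be estimated from co-occurrence with emotion-lexicon words, here is a toy sketch; the two-entry lexicon below is hypothetical and far smaller than the emotion lexicons the work relies on, and the scoring is a simple co-occurrence rate rather than the thesis's methodology.

```python
from collections import Counter

# Toy lexicon; real work uses full emotion lexicons over 8 Plutchik emotions.
LEXICON = {"joy": {"happy", "great"}, "anger": {"angry", "hate"}}

def emoji_emotion_scores(tweets, emoji):
    """Fraction of tweets containing `emoji` whose words hit each emotion."""
    hits, n = Counter(), 0
    for text in tweets:
        if emoji not in text:
            continue
        n += 1
        words = set(text.lower().split())
        for emotion, vocab in LEXICON.items():
            if words & vocab:
                hits[emotion] += 1
    return {e: hits[e] / n for e in LEXICON} if n else {}
```

Even this crude rate separates emojis by affect on a handful of tweets; the interesting part, as the abstract notes, is whether such inferred associations match human gold-standard ratings.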
Location: Remote via Zoom
Committee:

Prof. Gerard de Melo (Advisor)
Prof. Matthew Stone
Prof. Yongfeng Zhang
Daniel Preotiuc-Pietro (Bloomberg)
Vita G. Markman (LinkedIn)

Start Date: 01 Apr 2021;
Start Time: 12:30PM - 02:30PM
Title: Neural Methods for Document Understanding

Bio:
Speaker:
Abstract: Document recognition involves both structured and unstructured data. This dissertation studies and proposes solutions to problems in both domains. Structured data often comes in tabular format. Many existing applications take structured tables as input, such as spreadsheets, HTML, Excel, and CSV. However, a plethora of circumstances can lead to images of tables instead (scanning invoices, sharing user manuals, downloading read-only publications, etc.). In such scenarios, we do not have access to the table structure. To be able to use the aforementioned applications in these scenarios, we focus on the problem of structure recognition by graph modeling. We investigate ways of inferring the hidden structures from images of tables. By extracting table fields in the form of an undirected graph G and finding the corresponding line graph L(G), also known as the edge-to-vertex dual graph, we are able to model the structure-inference problem as vertex classification using a Graph Convolutional Network (GCN) based model. For unstructured data, we focus on text classification in three settings: English, multilingual, and Arabic. We performed a comparative study of models for text classification tasks using ULMFiT, BERT, and XLM across the spectrum of languages to understand their advantages and limitations.
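The structure-inference step hinges on the line graph L(G): each edge of G becomes a vertex of L(G), and two such vertices are adjacent exactly when the underlying edges share an endpoint; a GCN then classifies these vertices. A minimal stdlib construction of L(G) as an adjacency map, for illustration only:

```python
def line_graph(edges):
    """Edge-to-vertex dual: edges of G become vertices of L(G), adjacent
    iff the original edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]   # canonical undirected form
    adj = {e: set() for e in edges}
    for i, e in enumerate(edges):
        for f in edges[i + 1:]:
            if set(e) & set(f):                 # shared endpoint in G
                adj[e].add(f)
                adj[f].add(e)
    return adj
```

In the table setting, each L(G) vertex corresponds to a relationship between two table fields, which is exactly the unit the classifier must label (e.g., same row, same column, or unrelated).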
Location: Remote via Zoom
Committee:

Prof. Ahmed Elgammal (Advisor)

Prof. Abdeslam Boularias

Prof. Gerard De Melo

Prof. Malihe Alikhani (external member)

Start Date: 02 Apr 2021;
Start Time: 03:00PM - 05:00PM
Title: Graph Streaming Lower Bounds via a Streaming XOR Lemma

Bio:
Speaker:
Abstract: We study space-pass tradeoffs in graph streaming algorithms for parameter estimation and property testing problems, such as estimating the size of maximum matchings and maximum cuts, the weight of minimum spanning trees, or testing whether a graph is connected or cycle-free versus being far from these properties. We develop a new lower bound technique showing that for many problems of interest, including all of the above, obtaining a (1 + ε)-approximation requires either near-linear space or Ω(1/ε) passes, even on highly restricted families of graphs such as bounded-degree planar graphs. For several of these problems, this bound matches those of existing algorithms and is thus (asymptotically) optimal. One key ingredient of our proofs is a simple streaming XOR lemma, a generic hardness amplification result, that we prove: informally speaking, if a p-pass s-space streaming algorithm can only solve a decision problem with advantage δ > 0 over random guessing, then it cannot solve the XOR of L independent copies of the problem with advantage much better than δ^L. This result may be of independent interest and useful for other streaming lower bounds as well. In this talk, I will give a brief overview of our results and techniques. Based on joint work with Prof. Sepehr Assadi.
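To build intuition for the δ^L-style decay, consider the much simpler idealized case where each copy is guessed independently with advantage δ: the classical piling-up computation gives XOR advantage 2^(L-1)·δ^L, which a quick Monte Carlo check confirms. (This toy model ignores the space and pass structure that the streaming lemma actually handles; it only illustrates why advantages shrink multiplicatively.)

```python
import random

def xor_advantage(delta, L, trials=200000, seed=0):
    """Monte Carlo advantage of guessing the XOR of L bits when each bit is
    guessed correctly, independently, with probability 1/2 + delta."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # XOR guess is right iff an even number of per-copy guesses are wrong
        wrong = sum(rng.random() >= 0.5 + delta for _ in range(L))
        correct += (wrong % 2 == 0)
    return correct / trials - 0.5
```

For δ = 0.25 and L = 2, the predicted advantage is 2 · 0.25² = 0.125, already half the per-copy advantage.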
Location: Remote via Zoom
Committee:

Prof. Swastik Kopparty (advisor)

Prof. Sepehr Assadi

Prof. Eric Allender

Prof. Sudarsun Kannan

Start Date: 09 Apr 2021;
Start Time: 10:30AM - 12:30PM
Title: Neural Graph Reasoning for Explainable Decision-Making

Bio:
Speaker:
Abstract: Modern decision-making systems are primarily accuracy-driven, seeking to learn good representations and provide accurate predictions via deep learning techniques. However, due to the black-box nature of deep neural networks, explainability is largely ignored, even though it plays a pivotal role in practical applications such as user modeling, digital marketing, and e-commerce platforms. Explanations can be leveraged not only to assist model developers in understanding and debugging the working mechanism of the decision-making process, but also to facilitate better engagement and trustworthiness for the end users who consume the results delivered by the systems. Meanwhile, explanations are supposed to be both consistent with the decision-making process and understandable to human beings, whereas most existing explainable approaches possess only one of these properties. In this thesis, we concentrate on how to generate and evaluate faithful and comprehensible explanations via external heterogeneous graphs in various practical scenarios. Meaningful and versatile graph structures (e.g., knowledge graphs) are shown to be effective in improving model performance and, more importantly, make it possible for an intelligent decision-making system to conduct explicit reasoning over graphs to generate predictions. The benefit is that the resulting graph paths can be directly regarded as explanations for the predictions, because the traceable facts along the paths reflect the decision-making process and can also be easily understood by humans. We propose several neural graph reasoning approaches to generate such path-based explainable results, marrying the power of deep neural models with the interpretability of graph structures. The covered techniques range from deep reinforcement learning and imitation learning to neural-symbolic reasoning and neural logic reasoning.
The proposed models are extensively evaluated on real-world benchmarks across different application scenarios such as recommendations and column annotations. The experimental results demonstrate both superior performance gains and better explainability of these methods.
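The core claim that graph paths double as explanations can be pictured with a toy knowledge-graph search: a short path from a user to a recommended item is itself the human-readable rationale. The hypothetical graph and plain BFS below are stand-ins; the thesis discovers such paths with reinforcement learning and neural-symbolic reasoning rather than by enumeration.

```python
from collections import deque

def explain_path(graph, start, goal, max_len=3):
    """BFS for a short path in a knowledge graph; the returned node
    sequence serves as the explanation for recommending `goal`."""
    q, seen = deque([[start]]), {start}
    while q:
        path = q.popleft()
        if path[-1] == goal:
            return path
        if len(path) > max_len:
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                q.append(path + [nxt])
    return None
```

A path like user → brand → item reads directly as "recommended because you interacted with this brand", which is the faithfulness-plus-comprehensibility property the abstract argues for.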
Location: Remote via Zoom
Committee:

Prof. Shan Muthukrishnan (Advisor)
Prof. Yongfeng Zhang
Prof. Gerard de Melo
Dr. Lihong Li (Amazon)

Start Date: 16 Apr 2021;
Start Time: 02:00PM - 03:30PM
Title: Choreographic coordination: a programming model for in-situ workflows

Bio:
Speaker:
Abstract: As scientific applications strive towards increasingly realistic modeling of complex phenomena, they are integrating multiple models and simulations into complex, coupled scientific workflows. As a result, ensuring that existing codes can be combined and recombined correctly and flexibly as part of these workflows is essential. In this presentation, I will discuss challenges in current methods for creating such compositions and propose a novel choreographic approach for creating in-situ scientific workflows, i.e., workflows that execute on a single extreme-scale system. Specifically, this approach provides a domain-specific abstraction that enables a programmer to instrument existing simulation codes so that they can be used as building blocks in defining complex workflows in a service-oriented manner. Using this approach, developers specify a workflow-level shared specification of data objects over common or partitioned data domains. This permits dependency-based execution to be specified at the workflow level, distinct from the independent operation of the component simulations. I will describe the features and benefits of this approach from the perspectives of application design and systems optimization research.
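The dependency-based execution described here can be pictured, at its simplest, as topological scheduling over the workflow-level specification (a minimal sketch with hypothetical task names; real in-situ workflows add data movement, partitioned domains, and long-running services):

```python
def run_workflow(tasks, deps):
    """Execute tasks in dependency order (Kahn-style topological run).

    tasks: name -> callable; deps: name -> set of prerequisite task names.
    """
    done, order = set(), []
    pending = dict(deps)
    while pending:
        ready = [t for t, d in pending.items() if d <= done]
        if not ready:
            raise ValueError("cyclic dependency")
        for t in sorted(ready):   # deterministic order for the sketch
            tasks[t]()
            done.add(t)
            order.append(t)
            del pending[t]
    return order
```

The point of the choreographic approach is that each component simulation runs independently; only the data-object dependencies, not the control flow, are specified at the workflow level.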
Location: Remote via Zoom
Committee:

Professor Manish Parashar (advisor)

Professor Zheng Zhang

Professor Sudarsun Kannan

Professor Eric Allender

Start Date: 20 Apr 2021;
Start Time: 02:30PM - 04:00PM
Title: Explainable Graph Attention Networks

Bio:
Speaker:
Abstract: Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs, such as Graph Neural Networks (GNNs), on various data mining and machine learning tasks. However, most deep learning models on graphs cannot easily explain their predictions and are thus often labelled as "black boxes". For example, the Graph Attention Network (GAT) is a frequently used GNN architecture that adopts an attention mechanism to carefully select, from the neighborhood, the nodes for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not, and how the selected neighbors contribute to the final classification result. In this work, we present a new graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. Though in previous work accuracy and explainability were mostly tackled as two different problem spaces by distinctly different models, we show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show an increase in both accuracy and explainability.
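For readers unfamiliar with the underlying GAT machinery, the attention coefficients that XGAT seeks to make explainable are computed per neighbor roughly as follows (a single-head numpy sketch of the standard GAT layer of Veličković et al.; the weights W and a below are placeholders, not a trained model, and XGAT's own selection strategy is not reproduced):

```python
import numpy as np

def gat_layer(X, adj, W, a, slope=0.2):
    """Single-head graph-attention layer: score each neighbor j of node i
    with e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), softmax over the
    neighborhood, then aggregate. `alpha` is what explanations inspect.

    X: (N, F) node features; adj: (N, N) 0/1 adjacency with self-loops;
    W: (F, F') projection; a: (2F',) attention vector.
    """
    H = X @ W
    F2 = H.shape[1]
    s = (H @ a[:F2])[:, None] + (H @ a[F2:])[None, :]  # raw pairwise scores
    e = np.where(s > 0, s, slope * s)                  # LeakyReLU
    e = np.where(adj > 0, e, -np.inf)                  # restrict to neighbors
    e = e - e.max(axis=1, keepdims=True)               # numerical stability
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)   # per-node softmax
    return alpha, alpha @ H
```

Each row of `alpha` sums to one over the node's neighborhood, so the coefficients can be read as a distribution over neighbors; the difficulty the abstract raises is that these learned weights are not, by themselves, faithful explanations.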
Location: Remote via Zoom
Committee:

Professor Yongfeng Zhang

Professor Santosh Nagarakatte

Professor Hao Wang

Professor Sudarsun Kannan

Start Date: 22 Apr 2021;
Start Time: 01:30PM - 03:00PM
Title: Explaining Ranking Functions

Bio:
Speaker:
Abstract: As the general public realizes the significant societal impacts of the widespread use of algorithms in decision-making, there has been a push towards explainability and transparency in decision processes and results, as well as demands to justify the fairness of the processes. I propose transparent participation metrics to clarify the ranking process, by assessing the contribution of each parameter used in the ranking function to the creation of the final ranked outcome, using information about the ranking functions themselves, as well as observations of the underlying distributions of the parameter values involved in the ranking. To evaluate the outcome of the ranking process, I propose diversity and disparity metrics to measure how similar the selected objects are to each other, and to the underlying data distribution. In addition, I have been investigating the repercussions of errors and changes of resources occurring post-decision on the outcome of allocation mechanisms, in order to enable fair redistribution of resources after the original decision conditions have been altered. My research seeks to enable non-experts to actively participate in all stages of the algorithmic decision process, from design to decision to error mitigation.
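For a concrete picture of a participation metric, consider a hypothetical linear ranking function score = Σ w_k·x_k: each parameter's participation can be read off as its weighted term's share of the total score. This is a toy sketch only; the proposed metrics also draw on the underlying parameter distributions, which this ignores.

```python
def participation(weights, values):
    """Share of each weighted parameter in a linear ranking score."""
    terms = [w * v for w, v in zip(weights, values)]
    total = sum(terms)
    return [t / total for t in terms]
```

Such shares let a candidate see which parameter actually drove their final rank, rather than just the published weights.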
Location: Remote via Zoom
Committee:

Prof. Amelie Marian

Prof. Yongfeng Zhang

Prof. David Pennock

Prof. Jingjin Yu

Start Date: 23 Apr 2021;
Start Time: 10:30AM - 12:30PM
Title: Towards Zero-Shot Learning: Object Recognition with Semantic Descriptions

Bio:
Speaker:
Abstract: Human beings have the remarkable ability to recognize novel visual objects based only on a description in terms of semantic concepts. Deep learning, however, often requires a large number of labeled training samples to achieve superior performance. It is laborious, expensive, and difficult to collect sufficient labeled data for each object, because the distribution of data among categories in the real world is long-tailed. Therefore, there is great demand for research on how to recognize novel visual objects when no visual data has been seen. Zero-shot learning aims to expand the knowledge learned from seen objects, of which we have training samples, to unseen objects, of which we have no training data. To mimic the human ability to recognize novel objects, zero-shot learning leverages semantic descriptions of classes to bridge the gap between seen and unseen classes. This thesis concerns zero-shot learning from three aspects: the semantic descriptions of classes, the discriminative visual representation of images, and the algorithm design. We first present a visual-semantic embedding-based method that embeds visual data and semantic data in a shared embedding space. Images are encoded by a part-based CNN that detects bird parts and learns part-specific representations. A sparsity regularizer forces the model to connect text terms to their relevant parts and suppress connections to non-visual text terms, without any part-text annotations. We then propose a novel algorithm that leverages generative models to synthesize the visual data of unseen categories based on semantic descriptions, converting zero-shot learning into a conventional object recognition problem. Finally, we shift our focus to learning better discriminative visual representations of images at the part level for zero-shot learning, in a weakly supervised manner. Experimental results show that our models significantly improve the performance of zero-shot learning in different settings.
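A minimal sketch of the shared-embedding idea: project an image into the semantic space and score it against each class's attribute vector. The classes, attributes, and projection matrix below are invented for illustration; the thesis's actual model uses a part-based CNN rather than a fixed linear map.

```python
import numpy as np

def predict_zero_shot(img_feat, W, class_semantics):
    """Score an image against each class's semantic description.

    img_feat: (d_img,) visual feature of the test image
    W: (d_img, d_sem) learned visual-to-semantic projection
    class_semantics: dict mapping class name -> (d_sem,) attribute vector
    """
    z = img_feat @ W  # project the image into the semantic space
    scores = {c: float(z @ s) / (np.linalg.norm(z) * np.linalg.norm(s))
              for c, s in class_semantics.items()}
    return max(scores, key=scores.get), scores

# Unseen classes described only by 3 made-up semantic attributes
classes = {
    "cardinal": np.array([1.0, 0.0, 1.0]),  # e.g. red, not aquatic, crested
    "duck":     np.array([0.0, 1.0, 0.0]),
}
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
img = np.array([0.9, 0.1, 0.8, 0.3])  # a "cardinal-like" visual feature
label, scores = predict_zero_shot(img, W, classes)
print(label)  # cardinal
```

No cardinal images were needed at training time: the attribute vector alone bridges the gap.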
Location: Remote via Zoom
Committee:

Prof. Ahmed Elgammal (advisor and chair)

Prof. Yongfeng Zhang

Prof. Gerard de Melo


Prof. Haibin Ling (external member, Stony Brook University)

Start Date: 23 Apr 2021;
Start Time: 02:00PM - 03:30PM
Title: In Defense of 2D Keypoint Annotations for 3D Animal Reconstruction

Bio:
Speaker:
Abstract: In this talk, we address the problem of learning-based 3D reconstruction of animals from a single image. The main focus of recent work lies in eliminating the reliance of relevant approaches on keypoint annotations, using mostly masks for supervision. For some scenarios, e.g., when it is hard to establish correspondences across instances, this is a reasonable goal. However, we argue that 2D keypoints can get us a long way when it comes to 3D reconstruction, and it is important to embrace their significance. More importantly, we observe that current methods for 2D keypoint localization have excellent generalization properties. As a result, even when trained on a small initial training set, the 2D keypoint detector can automatically annotate unlabelled images with robust 2D keypoint detections. We utilize these self-annotated images for training, greatly improving the reconstruction performance of our models.
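The self-annotation loop described above can be sketched schematically; the fit, predict, and confidence-filter callables stand in for a real keypoint detector and are placeholders, not the authors' implementation.

```python
def self_annotate(train_x, train_y, unlabeled, fit, predict, confident):
    """Sketch of the self-annotation loop: train a 2D keypoint detector on
    a small labeled set, pseudo-label unlabeled images with its confident
    detections, and retrain on the enlarged set."""
    model = fit(train_x, train_y)
    pseudo = [(x, predict(model, x)) for x in unlabeled]
    kept = [(x, y) for x, y in pseudo if confident(y)]
    xs = list(train_x) + [x for x, _ in kept]
    ys = list(train_y) + [y for _, y in kept]
    return fit(xs, ys)  # retrain on labeled + self-annotated data

# Degenerate toy "detector" just to exercise the loop: the model is the
# mean label, predictions return that mean, and everything is 'confident'.
fit = lambda xs, ys: sum(ys) / len(ys)
predict = lambda model, x: model
model = self_annotate([1, 2], [10.0, 20.0], [3, 4], fit, predict,
                      confident=lambda y: True)
print(model)  # 15.0
```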
Location: Remote via Zoom
Committee:

Prof. Dimitris Metaxas (advisor)

Prof. Sungjin Ahn

Prof. Karl Stratos

Prof. Yipeng Huang 

Start Date: 23 Apr 2021;
Start Time: 02:30PM - 04:00PM
Title: User-oriented Fairness in Recommendations

Bio:
Speaker:
Abstract: Recommender systems are having an increasingly critical impact on individuals and society, since a growing number of users rely on them for information seeking and decision making. Therefore, it is crucial to address potential unfairness problems in recommendations. We study unfairness problems in recommendations from the user perspective. First, we conduct experiments to show that current recommender systems behave unfairly between groups of users with different activity levels. Specifically, the active users, who account for only a small proportion of the data, enjoy much higher recommendation quality than the inactive users. Such bias can also affect the overall performance, since the inactive users are the majority. To solve this problem, we provide a re-ranking approach that mitigates this unfairness by adding constraints over evaluation metrics. Furthermore, considering that users' demands for fairness are personalized in many scenarios, we consider achieving personalized fair recommendations that satisfy users' personalized fairness requirements through adversarial learning.
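The group-level unfairness that the re-ranking approach constrains can be sketched as a simple gap metric; the per-user quality numbers below are made up for illustration.

```python
import numpy as np

def group_unfairness(quality, is_active):
    """Absolute gap in mean recommendation quality between user groups.

    quality: per-user metric values (e.g. NDCG@k), shape (n_users,)
    is_active: boolean mask marking the 'active' user group
    """
    quality = np.asarray(quality, dtype=float)
    is_active = np.asarray(is_active, dtype=bool)
    return abs(quality[is_active].mean() - quality[~is_active].mean())

# Toy data: the active minority enjoys much higher quality
quality   = [0.80, 0.85, 0.30, 0.35, 0.25, 0.40]
is_active = [True, True, False, False, False, False]
gap = group_unfairness(quality, is_active)
print(round(gap, 3))  # 0.5
```

A re-ranking method in this spirit would perturb each user's list while keeping this gap below a chosen threshold.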
Location: Remote via Zoom
Committee:

Prof. Yongfeng Zhang (Advisor) 

Prof. Desheng Zhang 

Prof. Hao Wang 

Prof. Shiqing Ma 

Start Date: 30 Apr 2021;
Start Time: 01:00PM - 03:00PM
Title: Neural Methods for Entity-Centric Knowledge Extraction and Reasoning in Natural Language

Bio:
Speaker:
Abstract: Entities are the cornerstone for the dissemination of factual knowledge in natural language. Human verbal and written communication invariably refers to entities, their properties, and their relationships. Moreover, human reasoning often relies on a proper understanding of relationships among various entities. It is widely held that representing entity-centric knowledge in a structured form is most suitable for machines to consume and reason about. Over the past few decades, numerous methodological advances have been made in extracting entity-centric knowledge from unstructured and semi-structured sources, representing entity-centric knowledge as graph-structured data known as Knowledge Graphs, and using these knowledge graphs in various knowledge-intensive natural language processing tasks. Despite these advances, machines are yet to achieve human-level ability to extract and reason with factual knowledge and use it for various knowledge-intensive tasks. This dissertation proposes novel neural methods to narrow this gap. In particular, for factual knowledge extraction, it proposes efficient and effective methods for the tasks of Entity Linking and Relation Extraction. For knowledge-based logical reasoning, an explainable link prediction method for emerging entities in knowledge graphs is proposed. Furthermore, as a representative of knowledge-intensive natural language processing tasks, this dissertation studies the problem of entity summarization to retrieve relevant facts and generate fact-allegiant textual descriptions of entities.
Location: Remote via Zoom
Committee:

Prof. Gerard de Melo (advisor and chair)

Prof. Matthew Stone

Prof. Karl Stratos

Prof. Sameer Singh (external member, University of California Irvine)

Start Date: 07 May 2021;
Start Time: 09:30AM - 11:00AM
Title: One way functions and a conditional variant of MKTP

Bio:
Speaker:
Abstract: One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some NP-complete problem. In our work, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows: 1. First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that McKTP is NP-complete under polynomial-time randomized reductions. That is, there are NP-complete problems whose average-case hardness implies the existence of OWFs. 3. Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem.
Location: Remote via Webex
Committee:

Prof. Eric Allender (advisor)

Prof. Michael Saks

Prof. Sepehr Assadi

Prof. Santosh Nagarakatte

Start Date: 17 May 2021;
Start Time: 04:00PM - 06:00PM
Title: Toward Efficient and Optimal Multi-Robot Motion Planning: Provable Guarantees and Effective Heuristics

Bio:
Speaker:
Abstract: The labeled Multi-Robot Motion Planning (MRMP) problem, despite its large number of variations, generally requires routing a set of mobile robots from a start configuration to a goal configuration as quickly as possible, without collisions. In recent years, MRMP has been extensively studied, and significant progress has been made on solving MRMP with great efficiency as well as high solution quality. However, due to the problem's hardness and importance in industrial applications, there is always the need to seek higher performance. This dissertation proposes methods with provable guarantees and effective heuristics to improve current MRMP algorithm development from several perspectives, including scalability and optimality. For MRMP in a continuous workspace, we first present a constant-factor optimal algorithmic framework with polynomial computation time. Then, a second method is developed with similar theoretical guarantees, while at the same time pushing robot density above 50%. For MRMP on a discretized graph structure, we propose two heuristics: a database-driven heuristic for quickly resolving path conflicts, and a space utilization optimization principle that can improve the efficiency and optimality of existing decoupled MRMP algorithms. Both heuristics extend their effectiveness to a lifelong, dynamic MRMP formulation. Finally, we propose a general integer programming framework for path-based optimization problems. The framework is easy to adapt to different path optimization formulations and can generate optimal solutions efficiently.
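One basic subproblem in graph-based MRMP, detecting conflicts between two robots' paths, can be sketched as follows; this is a generic illustration of vertex and swap conflicts, not the dissertation's database-driven heuristic itself.

```python
def paths_conflict(path_a, path_b):
    """Check two synchronized robot paths on a graph for collisions:
    a vertex conflict (same node at the same timestep) or an edge/swap
    conflict (traversing the same edge in opposite directions)."""
    n = min(len(path_a), len(path_b))
    for t in range(n):
        if path_a[t] == path_b[t]:
            return True                      # vertex conflict
        if t + 1 < n and path_a[t] == path_b[t + 1] \
                and path_a[t + 1] == path_b[t]:
            return True                      # edge (swap) conflict
    return False

print(paths_conflict([1, 2, 3], [4, 3, 2]))  # True: swap on edge (2,3)
```

Decoupled planners plan each robot independently and then spend most of their effort resolving exactly these conflicts, which is where such heuristics pay off.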
Location: Remote via Zoom
Committee:

Prof. Jingjin Yu (advisor and chair)

Prof. Kostas Bekris

Prof. Abdeslam Boularias

Prof. Sven Koenig (external member, University of Southern California)

Start Date: 24 May 2021;
Start Time: 10:00AM - 12:00PM
Title: Geometric and Spectral Limitations in Generative Adversarial Networks

Bio:
Speaker:
Abstract: Generative Adversarial Networks (GANs) have become one of the most successful and popular generative models in recent years, with a wide variety of applications, such as image and audio manipulation and synthesis, style transfer, and semi-supervised learning, to name a few. The main advantage of GANs over their classical counterparts stems from the use of Deep Neural Networks (DNNs) in both the sampling process (generator) and the energy evaluation process (discriminator), which can exploit the ongoing revolution in the availability of data and computation power to effectively discover complex patterns. Yet, with this exceptional power comes an exceptional limitation: the black-box behavior associated with DNNs, which not only places the profound promise of GANs under a shadow of mistrust, but also greatly slows down the efforts to improve the efficiency of these models. As such, studying GANs' limitations and biases is critical for advancing their performance and usability in practice. The primary focus of this dissertation is to study two such limitations, namely a geometric limitation in generating disconnected manifolds, and a spectral limitation in learning distributions carried by high-frequency components. We investigate these limitations both empirically and theoretically, unveil their causes and consequences, and propose and evaluate solutions for overcoming their adverse effects.
Location: Remote via Zoom
Committee:

Dr. Ahmed Elgammal (advisor)

Dr. Abdeslam Boularias

Dr. Casimir Kulikowski

Dr. Dimitris Samaras (outside member)

Start Date: 27 May 2021;
Start Time: 10:00AM - 11:30AM
Title: Can Document Statistics Disambiguate Grounded Language: The Case of PP attachment

Bio:
Speaker:
Abstract: Advancements in the coverage and efficacy of pre-trained language models have paved the way for future research in natural language processing by making it much easier to generalize from small hand-annotated datasets. In cases that require input from situational context, however, pre-trained models may not reliably learn to disambiguate grounded language. Using the case of PP attachment in image captions, I assess the learning potential of pre-trained statistical language models toward resolving ambiguity in image captions. My results show that pre-trained models may lack the context required to resolve the ambiguity in many cases, but traditional NLP approaches can be combined with statistical models for grounded learning.
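A classical statistics-based baseline for PP attachment (in the spirit of Hindle and Rooth's lexical association method) can be sketched as follows; the co-occurrence counts are toy numbers, not from a real corpus.

```python
from collections import Counter

def attach_pp(verb, noun, prep, verb_prep_counts, noun_prep_counts):
    """Decide PP attachment by lexical association: compare how often the
    preposition co-occurs with the verb vs. the head noun in a corpus,
    and attach to whichever association is stronger."""
    v = verb_prep_counts[(verb, prep)]
    n = noun_prep_counts[(noun, prep)]
    return "verb" if v > n else "noun"

# "saw the man with the telescope": does "with" attach to "saw" or "man"?
verb_prep = Counter({("saw", "with"): 50, ("ate", "with"): 80})
noun_prep = Counter({("man", "with"): 5, ("pizza", "with"): 2})
print(attach_pp("saw", "man", "with", verb_prep, noun_prep))  # verb
```

For captions, such document statistics can be combined with grounding signals from the image when the counts alone are uninformative.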
Location: Remote via Webex
Committee:

Prof. Matthew Stone (advisor)

Prof. Martin Farach-Colton

Prof. Karl Stratos

Prof. Yongfeng Zhang

Start Date: 04 Jun 2021;
Start Time: 08:00AM - 05:00PM
Title: Spatial-Temporal Prediction in Cyber-Physical Systems

Bio:
Speaker:
Abstract: Spatial-temporal predictions are essential for Cyber-Physical Systems (CPS) such as taxi and bus networks to satisfy users' demand. In order to manage these CPS resources efficiently, it is vital to predict users' demand ahead of time for future possible actions such as rescheduling. However, current methods, while working well for short-term prediction, do not predict unexpected increases in user demand sufficiently far in advance (say, 2 hours) and are thus inefficient for resultant applications such as bike re-balancing. To address this issue, we leverage the fact that the latest traffic demand of public transportation (e.g., subways), another example of a large-scale CPS, can reflect future bike traffic demand. Specifically, we model downstream traffic, i.e., bikes, and upstream traffic, i.e., subways, as spatial-temporal network series, and then design a multi-modal 3D deep capsule network framework called BikeCap. The key novelty of BikeCap is our first attempt to apply the capsule network in the temporal domain, which benefits the prediction of long-term bike demand. BikeCap has three components: (1) a pyramid-based convolutional layer in historical capsules to capture the spatial-temporal correlations along the directions of traffic propagation; (2) a spatial-temporal routing technique to actively learn the uncertain multi-modal spatial-temporal correlations between historical capsules and future capsules; (3) a 3D deconvolution-based decoder to construct future bike demand considering the multi-dimensional correlations of traffic patterns among neighborhoods. Through experiments based on data from 30,000 bikes and 7 subway lines collected in Shenzhen, China, we show that BikeCap outperforms several state-of-the-art methods. We also conduct an ablation study to show the impact of BikeCap's different design components and the multi-modal concept.
Location: Remote via Zoom
Committee:

Prof. Desheng Zhang

Prof. Yongfeng Zhang

Prof. Hao Wang

Prof. Jie Gao

Start Date: 16 Jun 2021;
Start Time: 01:00PM - 03:00PM
Title: Vertex Cover in The Sliding Window Model

Bio:
Speaker:
Abstract: We study the problem of finding a minimum vertex cover of a graph in the sliding window model. The sliding window model is similar to the original streaming model, where the edges of a graph whose vertices are known arrive sequentially in a stream. However, in the sliding window model, only the most recent w elements (edges, in the case of graphs) of the stream are considered in computing the output. We are only allowed to use o(w) space, and hence all of the recent w elements of the stream cannot be stored; when a new element arrives, the oldest element leaves the window. This implicit deletion makes the sliding window model harder than the original streaming model. In this work, we observe that the (3+ε)-approximation algorithm (for ε ∈ (0,1)) for maximum matching in the sliding window model yields a (6+ε)-approximation for the minimum vertex cover problem in the sliding window model. We give a counterexample to the existing analysis of a (4+ε)-approximation algorithm for the minimum vertex cover problem in the sliding window model and provide a correct analysis for the same algorithm. Finally, we give a (3+ε)-approximation algorithm for the minimum vertex cover problem in the sliding window model using Õ(n) space, where Õ hides 1/ε and log n factors and n is the number of vertices in the graph.
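The classical fact underlying the matching-to-cover reduction, that the endpoints of any maximal matching form a 2-approximate vertex cover, can be sketched as follows; the sliding-window algorithms in the talk refine this idea under the constraint that old edges expire from the window.

```python
def greedy_maximal_matching(edges):
    """Greedily build a maximal matching from a stream of edges."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def vertex_cover_from_matching(edges):
    """Endpoints of a maximal matching give a 2-approximate vertex cover:
    every edge must touch a matched vertex, or the matching would not be
    maximal, and the cover has at most twice the optimum's size."""
    cover = set()
    for u, v in greedy_maximal_matching(edges):
        cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
cover = vertex_cover_from_matching(edges)
# every edge has at least one endpoint in the cover
print(all(u in cover or v in cover for u, v in edges))  # True
```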
Location: Remote via Zoom
Committee:

Prof. Sepehr Assadi

Prof. Aaron Bernstein

Prof. James Abello

Start Date: 13 Jul 2021;
Start Time: 10:30AM - 12:30PM
Title: Human Behavior-Driven Cyber-Physical Systems for On-demand Gig Delivery

Bio:
Speaker:
Abstract: In this thesis, I will introduce my work on the foundations and applications of Human Behavior-Driven Cyber-Physical Systems, with a concrete use case of on-demand delivery as part of the gig economy. The key challenge for on-demand gig delivery is to obtain real-time gig worker status in a cost-efficient manner, to enable timely delivery and efficient order scheduling. However, existing sensing approaches in both industry and academia fail to achieve a practical balance between reliability and scalability, due to cost issues and an uncertain operational environment. I design a human behavior-driven system based on transfer learning to infer accurate worker status from worker reporting behavior. Then, I will talk about a smartphone-based approach to scale it up to a nationwide system. This system has been deployed and operated on a real-world industry platform via our collaboration with the Alibaba on-demand delivery platform Eleme. Based on this deployment, I will provide some results on how gig workers and the proposed system interact with and improve each other's behavior. Finally, I will talk about some opportunities and challenges for my future work.
Location: Remote via Zoom
Committee:

Prof. Desheng Zhang (chair)

Prof. Jie Gao

Prof. Hao Wang

Dr. Lu Han (outside member, Google)

Start Date: 12 Aug 2021;
Start Time: 10:00AM - 12:00PM
Title: Multimodal Story Comprehension: Datasets, Tasks and Neural Models

Bio:
Speaker:
Abstract: Storytelling is a uniquely human skill that plays a central role in how we learn about and experience the world. Stories play a crucial part in the mental development of humans, as they help us encode a wide range of shared knowledge, including common-sense physics, cause and effect, human psychology, and morality. We postulate that machine intelligence requires comparable skills, particularly when interacting with people. Much of the current research on understanding the visual and textual world operates only at a superficial, factual level, using data that aligns atomic, one-to-one descriptive factual text with an image. This dissertation aims to close the gap between current research and what we hope AI systems could do, by developing novel datasets, tasks, and neural methods for true multimodal story creation and comprehension. We start by showing that existing datasets and generic vision-language tasks do not demand the understanding of factors such as coherence and causality that are important for story comprehension. We propose Story Illustration, the automatic illustration of an input story with a sequence of images, as a measure of story comprehension, imitating how humans visualize when they read or comprehend a story. We develop an end-to-end trained neural module with explicit entity-grid based coherence modelling that is able to illustrate a story with a clear understanding of coherence and coreference resolution. We then extend Story Illustration to a more generalized Many-to-Many Story Illustration formulation and create a new dataset and a novel machine-translation approach to story illustration. Our model is shown to generalize well and achieve high scores in creating coherent illustrations by virtue of its explicit causal decoding. In our work, we observe that generation is a much closer imitation of the human visualization process than retrieval.
Moreover, existing datasets primarily capture the perceptive process associated with comprehension rather than the creative process associated with storytelling. We create AESOP, a new dataset that captures the creative process associated with visual storytelling. Using AESOP, we propose foundational storytelling tasks that are generative variants of story cloze tests, to better measure the creative and causal reasoning ability required for visual storytelling. We develop a generalized story completion framework that models multimodal stories as the co-evolution of visual and textual concepts. Our dataset and model treat images as compositions of objects and related concepts, making it possible to plug them into different scenarios to generate new illustrations. An AI system that can comprehend a story can essentially comprehend any kind of data. We hope that this dissertation, along with all the publicly released resources such as data and code, drives further research towards building complex and intelligent systems that can create and comprehend multimodal stories.
Location: Remote via Zoom
Committee:

Mubbasir Kapadia

Gerard De Melo

Dimitris Metaxas

Nasrin Mostafazadeh (External)

Start Date: 18 Aug 2021;
Start Time: 01:00PM - 03:00PM
Title: Novel Polynomial Approximation Methods for Generating Correctly Rounded Elementary Functions

Bio:
Speaker:
Abstract: All endeavors in science use math libraries to approximate elementary functions. Unfortunately, mainstream math libraries for floating point (FP) do not produce correctly rounded results for all inputs. In addition, given the importance of FP performance in numerous domains, several new variants of FP and alternatives to it have been proposed, which do not yet have correctly rounded math libraries. This dissertation proposes the RLibm approach, a collection of novel techniques for generating polynomial approximations that produce the correctly rounded results of an elementary function f(x). Existing approaches generate polynomials that approximate the real value of f(x) using the minimax approach. In contrast, the RLibm approach makes a case for generating polynomials that approximate the correctly rounded result of f(x) (i.e., the exact value of f(x) computed in reals and then rounded to the target representation). This approach provides more freedom in generating efficient polynomials that produce correctly rounded results for all inputs. Additionally, this dissertation makes the following contributions. First, we show that the problem of generating polynomials that produce correctly rounded results can be structured as a linear programming problem. Second, the RLibm approach accounts for numerical errors that occur from range reduction by automatically adjusting the amount of freedom available to generate polynomials; the resulting polynomial produces the correctly rounded results for all inputs. Third, we propose a set of techniques to develop correctly rounded elementary functions for 32-bit types. Specifically, we generate efficient piecewise polynomials using counterexample-guided polynomial generation and bit-pattern based domain splitting. Finally, we extend the RLibm approach to generate a single polynomial approximation that produces the correctly rounded results for multiple rounding modes and multiple precision configurations.
To generate correctly rounded elementary functions for an n-bit type, our key idea is to generate a polynomial approximation for an (n+2)-bit representation using the round-to-odd mode. We provide a formal proof that the resulting polynomial will produce the correctly rounded results for all standard rounding modes and for multiple representations with k bits, where k is less than or equal to n. Using the RLibm approach, we have developed several implementations of elementary functions for various representations and rounding modes. Our elementary functions produce the correctly rounded results for all inputs in the target representation. These functions are faster than state-of-the-art math libraries that have been optimized for decades. Our prototype is the first 32-bit float library that produces the correctly rounded results under all standard rounding modes of the IEEE-754 standard for all inputs. Additionally, we provide the first correctly rounded posit32 library.
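The interval-constraint formulation behind the LP step can be illustrated with a toy stand-in: sample inputs, an interval around each correctly rounded result, and a polynomial required to land inside every interval. This sketch fits the interval midpoints by least squares and then checks the constraints, rather than solving the actual linear program; the function, interval widths, and degree are arbitrary choices for illustration.

```python
import numpy as np

def fit_polynomial_to_intervals(xs, lo, hi, degree):
    """Toy stand-in for an interval-constrained polynomial search: find
    coefficients c with lo[i] <= P(xs[i]) <= hi[i] for every sample.

    A real implementation would solve a linear program over these
    interval constraints; here we least-squares fit the midpoints and
    then verify that the constraints actually hold.
    """
    V = np.vander(xs, degree + 1, increasing=True)  # Vandermonde matrix
    mid = (lo + hi) / 2.0
    c, *_ = np.linalg.lstsq(V, mid, rcond=None)
    p = V @ c
    ok = bool(np.all((p >= lo) & (p <= hi)))
    return c, ok

# Intervals around exp(x) at sample inputs (toy widths, not real
# rounding intervals)
xs = np.linspace(-0.5, 0.5, 9)
true = np.exp(xs)
lo, hi = true - 1e-4, true + 1e-4
c, ok = fit_polynomial_to_intervals(xs, lo, hi, degree=4)
print(ok)  # the degree-4 fit lands inside every interval
```

In the RLibm setting, the intervals are the rounding intervals of the correctly rounded result, which is what gives the LP more freedom than minimax fitting to the real value.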
Location: Remote via Zoom
Committee:

Prof. Santosh Nagarakatte (chair)

Prof. Ulrich Kremer

Prof. Richard Martin

Prof. Zachary Tatlock (external committee)