# Events Feed

## Recent Lower Bounds in Algebraic Complexity Theory

**Date:** 04 Feb 2021, 10:30 PM – 11:45 PM
**Speaker:** Ben Lee Volk

**Bio:** Ben Lee Volk is a postdoctoral researcher at UT Austin. Previously, he was a postdoctoral scholar at Caltech. He obtained his Ph.D. from Tel Aviv University and his M.Sc. from the Technion, both under the supervision of Amir Shpilka. His research interests include complexity theory, in particular the study of algebraic models of computation, as well as neighboring areas such as pseudorandomness, coding theory, and combinatorics.

**Abstract:** Algebraic complexity theory studies the complexity of solving algebraic computational tasks using algebraic models of computation. One major problem in this area is to prove lower bounds on the number of arithmetic operations required to compute explicit polynomials. This natural mathematical problem is the algebraic analog of the P vs. NP problem. It also has tight connections to other classical mathematical areas and to fundamental questions in complexity theory. In this talk I will provide background and then present some recent progress in this line of study, particularly in the area of proving lower bounds for computing linear transformations.

**Location:** Via Zoom (recording available)

## Communication Complexity, Quantum Computing and Optimization: New Connections and Insights

**Date:** 08 Feb 2021, 10:30 AM
**Speaker:** Makrand Sinha

**Bio:** Makrand Sinha is currently a postdoctoral researcher at CWI in Amsterdam, hosted by Nikhil Bansal, Ronald de Wolf, and Monique Laurent. Prior to that, he obtained his Ph.D. in Computer Science from the University of Washington, Seattle in 2018, under the guidance of Anup Rao. His research interests lie in quantum computing, communication complexity, and optimization, and in particular in making connections between these areas.

**Abstract:** How much information flows through a system? This fundamental question is at the heart of communication complexity. Techniques from this field have turned out to be immensely powerful and fairly universal tools for understanding the power of different kinds of algorithms. In this talk, I will describe new methods that I have developed to analyze communication, which offer fresh insights into quantum computing and optimization. Using these techniques, I will answer a question of Aaronson and Ambainis regarding the maximal advantage that quantum algorithms can have over classical algorithms in the "black-box" model, and resolve a conjecture due to Rothvoss about optimal linear programs for approximately solving the matching problem. Looking forward, I also propose new directions to explore further connections among these areas, with the aim of answering important questions about whether quantum speedups need structure, and about the power of more general optimization approaches such as semidefinite programming.

**Location:** Via Zoom

## New Arenas in Hardness of Approximation

**Date:** 09 Feb 2021, 10:30 AM
**Speaker:** Karthik

**Bio:** Karthik is currently a Postdoctoral Fellow at the Courant Institute of Mathematical Sciences, New York University, hosted by Subhash Khot. He received his Ph.D. in 2019 from the Weizmann Institute of Science, where he was advised by Irit Dinur. Prior to that, he obtained his M.S. from Ecole Normale Superieure in Lyon. His main research interests are at the intersection of complexity theory and combinatorial geometry.

**Abstract:** In this talk, I will introduce the mathematically rich area of hardness of approximation through recent inapproximability results for selected geometric problems, whose difficulty ranges from being in P (e.g., the closest pair problem) to being NP-hard (e.g., the k-means clustering problem). The closest pair problem is a fundamental problem in computational geometry: given a set of n points in high dimensions (say log n dimensions) as input, we are required to find the closest pair of points.
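For concreteness, the brute-force solution to the closest pair problem simply checks all pairs of points. The following is an illustrative Python sketch (not part of the talk; the function name is my own):

```python
import math
from itertools import combinations

def closest_pair_bruteforce(points):
    """Find the closest pair by checking all n*(n-1)/2 distances: O(n^2) time."""
    best_pair, best_dist = None, math.inf
    for p, q in combinations(points, 2):
        d = math.dist(p, q)  # Euclidean distance between p and q
        if d < best_dist:
            best_pair, best_dist = (p, q), d
    return best_pair, best_dist
```

Whether this quadratic baseline can be beaten in high dimensions, even approximately, is the kind of fine-grained question the talk addresses.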
This problem can be solved in a straightforward manner in roughly O(n^2) time (by computing all pairwise distances), but can it be solved faster, say in O(n^1.9) time? If not, is there at least a fast algorithm that finds a pair of points that are close to being the nearest? An important task in unsupervised learning is k-means clustering. In this problem, given a set of n points and an integer k, we are asked to find a partition of the point set into k clusters that minimizes the k-means objective. This problem is NP-hard and thus believed not to admit any polynomial (in n) time algorithm. But is there a polynomial-time algorithm that can find k clusters whose k-means objective is only a factor of 2 away from an optimal clustering solution? In this talk, I will present recent advances on the above long-standing questions and briefly sketch the landscape of the complexity of geometric problems and the important open questions that remain. I will also highlight the symbiosis in the ideas needed to answer the aforementioned questions on the closest pair and clustering problems, which might seem a priori very different.

**Location:** Via Zoom (recording available)

## Algorithms and Barriers for Fast Matrix Multiplication

**Date:** 11 Feb 2021, 10:30 AM
**Speaker:** Josh Alman

**Bio:** Josh Alman is a Michael O. Rabin postdoctoral fellow in theoretical computer science at Harvard University. He earned his Ph.D. in computer science from MIT in 2019, where he was advised by Virginia Vassilevska Williams and Ryan Williams. He works on both algorithm design and complexity theory, and much of his work combines tools and insights from both areas. He is particularly interested in developing algebraic tools, such as algorithms for matrix multiplication and Fourier transforms, and applying them to problems throughout computer science. His awards include the European Association of TCS Distinguished Dissertation Award, the George M. Sprowls Award for outstanding Ph.D. theses in computer science at MIT, and best student paper awards at CCC 2019 and FOCS 2019.

**Abstract:** Matrix multiplication is one of the most basic algebraic operations. Since Strassen's surprising breakthrough algorithm from 1969, which showed that matrices can be multiplied faster than the most straightforward algorithm, algorithmic problems from nearly every area of computer science have been sped up by clever reductions to matrix multiplication. It is popularly conjectured that two n × n matrices can be multiplied in roughly O(n^2) time, but we don't yet have the algorithmic techniques needed to achieve this. In this talk, I'll give an overview of my work on this important problem, in the broad context of my research on combining ideas from algorithm design and complexity theory to help understand problems throughout computer science. First, I'll give an overview of known applications of matrix multiplication, including some new applications to nearest neighbor search and processing data streams. Second, I'll talk about the fastest known algorithm for matrix multiplication, which runs in O(n^2.37286) time and which I recently designed with Virginia Vassilevska Williams. Third, I'll describe new barrier results which help explain why we've been stuck on this problem for so long, and describe what kinds of insights are needed for further improvements.

**Location:** Via Zoom (recording available)

## Efficient Algorithms and Hardness for Fundamental Problems in Data Science

**Date:** 18 Feb 2021, 10:30 AM
**Speaker:** Peng Zhang

**Bio:** Peng Zhang is a Postdoctoral Associate in Computer Science at Yale University, under the supervision of Daniel Spielman. She obtained her Ph.D. from Georgia Tech, advised by Richard Peng. Her research lies broadly in designing and understanding the limits of fast algorithms. She has worked on structured linear equations and linear programs, and on discrepancy theory and its application in the design of randomized controlled trials. Her work received the best student paper award at FOCS 2017 and the Georgia Tech College of Computing Dissertation Award in 2019.

**Abstract:** Two important components of data science are data and computation. Sometimes, data is cheap but computation is expensive. Sometimes, data can be expensive (for example, when data are collected from experiments with human subjects). In this talk, I will discuss efficient algorithms and hardness results addressing fundamental problems in computation and data. In the first part of the talk, I will present hardness results and fast algorithms for solving structured linear equations and linear programs, which are central tools in data analysis. Solving arbitrary linear equations and linear programs can be time-consuming. However, many linear equations and linear programs arising in practice have additional structure that enables faster solvers. One example is Laplacian linear equations, which can be solved in nearly linear time (Spielman-Teng, 2004). I will show that a slight generalization of Laplacian linear equations is as hard to solve as arbitrary linear equations. But when we know additional geometric structure about this slightly more general class of linear equations, we can design asymptotically faster solvers. Then, I will briefly discuss hardness results and fast algorithms for positive linear programs, another example showing that structure enables faster solvers. In the second part of the talk, I will present an algorithm that improves the design of randomized controlled trials, which are widely used to test the effectiveness of interventions. In a trial, we randomly partition all subjects into a treatment group and a control group. We want the two groups to be similar in covariates — features that we know about the subjects.
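To make the balance requirement concrete, here is a toy Python sketch (illustrative only, not from the talk; all names are hypothetical) that measures how unbalanced a treatment/control split is on each covariate. Discrepancy-theoretic designs aim to keep these imbalances small while staying random:

```python
import random

def covariate_imbalance(covariates, signs):
    """Per-covariate imbalance |sum_i signs[i] * x_i[j]| of a split,
    where signs[i] = +1 puts subject i in treatment and -1 in control."""
    num_covariates = len(covariates[0])
    return [abs(sum(s * x[j] for s, x in zip(signs, covariates)))
            for j in range(num_covariates)]

def random_split(n, seed=0):
    """Uniformly random assignment; imbalance typically grows like sqrt(n)."""
    rng = random.Random(seed)
    return [rng.choice((+1, -1)) for _ in range(n)]
```

A perfectly balanced split drives every entry to zero; the tradeoff discussed in the abstract is between achieving low imbalance and retaining enough randomness in the assignment.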
By exploiting ideas from discrepancy theory, we obtain random partitions with a nearly optimal tradeoff between the gain we achieve when covariates are correlated with treatment outcomes and the loss we suffer when they are not.

**Location:** Via Zoom (recording available)

## Throwing a Sofa Through the Window

**Date:** 19 Feb 2021, 10:00 AM – 11:00 AM
**Speaker:** Dan Halperin

**Bio:** Dan Halperin is a professor of Computer Science at Tel Aviv University. His main field of research is computational geometry and its applications. A major focus of his work has been the research and development of robust geometric algorithms, principally as part of the CGAL project and library. His application areas of interest include robotics, automated manufacturing, algorithmic motion planning, and 3D printing. Halperin is an IEEE Fellow and an ACM Fellow. http://acg.cs.tau.ac.il/danhalperin

**Abstract:** Planning motion for robots and other artifacts toward desired goal positions while avoiding obstacles along the way becomes harder when the environment is tight or densely cluttered. Indeed, prevalent motion-planning techniques often fail in such settings. The talk centers on recently developed efficient algorithms for coping with motion in tight quarters. We study several variants of the problem of moving a convex polytope in three dimensions through a rectangular (and sometimes more general) window. Specifically, we study variants in which the motion is restricted to translations only, discuss situations in which such a motion can be reduced to sliding (translation in a fixed direction), and present efficient algorithms for those variants. We show cases where sliding is insufficient but purely translational motion works, or where purely translational motion is insufficient and rotation must be included. Finally, we explore the general setup, where we want to plan a general motion (with all six degrees of freedom) for the polytope through the window, and present an efficient algorithm for this problem, with running time close to O(n^4), where n is the number of edges of the polytope. (Joint work with Micha Sharir and Itay Yehuda.) As time permits, I will present additional recent results on motion in tight settings in assembly planning, fixture design, and casting and molding.

**Location:** Via Webex

## Mechanism Design for Social Good

**Date:** 22 Feb 2021, 10:30 AM – 11:45 AM
**Speaker:** Kira Goldner

**Bio:** Kira Goldner is a postdoctoral researcher in the Computer Science Department and at the Data Science Institute at Columbia University, hosted by Tim Roughgarden and supported by NSF Math Sciences and Data Science Institute fellowships. Kira uses her background in the foundations of mechanism design to address societal problems, e.g., in healthcare, climate change, and privacy. She has also worked on core problems concerning revenue maximization, simplicity, and robustness. As part of this agenda, Kira co-founded Mechanism Design for Social Good (MD4SG), an interdisciplinary initiative working to improve access to opportunity for historically disadvantaged communities. She received her PhD in computer science and engineering from the University of Washington under the advisement of Anna Karlin, and was supported by a Microsoft Research PhD Fellowship and a Google Anita Borg Scholarship. She has received many awards for her work, including the EC 2019 Best Paper with a Student Lead Author Award and the EC 2020 Best Presentation by a Student or Postdoctoral Researcher Award.

**Abstract:** Society is run by algorithms, and in many cases these algorithms interact with autonomous agents who have a stake in the outcome. These participants may behave strategically in an attempt to "game the system," resulting in unexpected or suboptimal outcomes.
In order to accurately predict an algorithm's outcome and quality, we must design it to be robust to strategic manipulation. This is the subject of algorithmic mechanism design, which borrows ideas from game theory and economics to design robust algorithms. In this talk, I will show how results from the theoretical foundations of algorithmic mechanism design can be used to solve problems of societal concern. I will focus on applications in carbon license allocations, health insurance markets, and online labor markets.

**Location:** Via Zoom

## Human-Centered Intelligent Sensing Systems

**Date:** 22 Feb 2021, 02:30 PM – 04:00 PM
**Speaker:** Jorge Ortiz

**Bio:** Jorge Ortiz is an Assistant Professor at Rutgers University, where he directs the Cyber-Physical Intelligence Lab (CyPhy-Lab) and is also a member of the Wireless Information Network Laboratory (WINLAB). His work focuses on building and studying sensing systems that learn about human behavior, from human feedback, and with humans to improve system objectives and enhance people's lives. These span a broad range of sensing systems, including smart objects, smart built environments, and, more broadly, smart cities. Prior to joining Rutgers in 2018, he was a Research Staff Member at IBM Research, working on machine learning and the Internet of Things. In his five years at IBM, he attained 12 patents, published in multiple top academic conferences, journals, and books, and was awarded Best Poster at IEEE/ACM IPSN '08, two Best Paper Runner-ups at BuildSys '15, Best Paper at ICISSP '18, and Best Paper Runner-up at IoTDI '19. At IBM he led teams that commercialized two major research projects and brought them to market. Dr. Ortiz also has extensive industry experience, including several years at Oracle Corporation, and has worked at and led multiple startups. He is currently serving as co-TPC chair for BuildSys 2020. Dr. Ortiz attained his Ph.D. in Computer Science from UC Berkeley in 2013, an M.S. in Computer Science from Berkeley in 2010, and a B.S. in Computer Science from MIT in 2003.

**Abstract:** This talk is about the design of systems and algorithms for sensing systems that interact directly with humans. I will discuss our work in the Cyber-Physical Intelligence Lab (CyPhy-Lab) at Rutgers University, where we study sensing systems that learn about human behavior, from human feedback, and with humans to improve system objectives and enhance people's lives. I will describe three ongoing projects that explore these themes. First, I will describe PillSense, a smart pillbox system for medication adherence. This system helps us learn how humans take their medication and how physical design is tightly coupled to the system's ability to identify users effectively. We describe two pillbox designs and associated algorithms to address the challenges posed. Then, I will discuss our ongoing project Maestro, a system that learns from humans to fill the semantic gap between sensor measurements and their interpretation, in order to facilitate the construction of smart ambient-sensing applications in buildings. Finally, I will discuss project Paz, a system that attempts to learn when to interact with humans, as we work to facilitate agent-human collaboration to both attain system objectives (i.e., efficiency) and enhance human productivity, comfort, and entertainment.

**Location:** Via Zoom

## Proof Complexity and Applications

**Date:** 23 Feb 2021, 10:30 AM
**Speaker:** Marc Vinyals

**Bio:** Marc Vinyals is a visiting fellow at the Technion, hosted by Yuval Filmus. He works on computational complexity, and proof complexity in particular. His research interests include other areas of computational complexity, such as circuit complexity, communication complexity, query complexity, and pebble games, as well as the theory of satisfiability solving. Previously he was a graduate student at the KTH Royal Institute of Technology, supervised by Jakob Nordström, and a visiting fellow at the Tata Institute of Fundamental Research, hosted by Arkadev Chattopadhyay.

**Abstract:** Even though the origins of proof complexity are intimately tied to the P vs. NP question, nowadays a significant part of the appeal of research in this area is that new results readily apply to large classes of algorithms. In this talk we will discuss how recent developments in proof complexity are impacting the design of satisfiability solvers, how fine-grained mathematical models can help us smooth over discrepancies between theory and practice, and how questions arising from implementation details end up as meaningful mathematical problems with connections to other topics in computational complexity.

**Location:** Via Zoom

## GreenDRL: Managing Green Data Centers Using Deep Reinforcement Learning

**Date:** 24 Feb 2021, 02:00 PM – 04:00 PM

**Abstract:** Managing data centers to maximize efficiency and sustainability is a complex and challenging problem. In this work, we explore the use of deep reinforcement learning (RL) to manage "green" data centers, bringing a robust approach to optimizing management systems for specific workload and data-center characteristics. Specifically, we consider data centers that are partially powered by on-site generation of renewable energy and partially cooled by "free cooling." We design and evaluate GreenDRL, a hybrid system that combines a deep RL agent and a simple dispatcher to schedule a workload while jointly managing server power consumption and the cooling system to minimize cost. Our design addresses several important challenges, including scalability, robustness, and effective learning in an environment comprising an enormous state/action space and multiple stochastic processes.
Evaluation results show that GreenDRL is able to learn important principles, such as delaying deferrable jobs to leverage variable generation of renewable (solar) energy, and avoiding power-intensive cooling settings even at the expense of leaving some renewable energy unused. Simulation of a realistic (small) green data center running a workload containing a fraction of jobs deferrable by up to 12 hours shows that GreenDRL can reduce grid electricity use by 26–57% compared to a baseline system, depending on outside temperatures and the availability of renewable energy. Overall, our work shows that deep RL is a promising technique for building efficient management systems for green data centers.

**Location:** Remote via Zoom
**Committee:** Thu D. Nguyen (Advisor), Ulrich Kremer, He Zhu, Karl Stratos

## Automatic, Efficient, and Robust Deep Learning

**Date:** 25 Feb 2021, 02:00 PM – 04:00 PM

**Abstract:** Deep learning enables automatically discovering useful, multistage, task-specific features from high-dimensional raw data. Instead of relying on domain expertise to hand-engineer features, it uses a general learning procedure that is readily applicable to many domains, such as image analysis and natural language processing. Deep learning has made significant advances after decades of development, during which dataset sizes, model sizes, and benchmark accuracies have dramatically increased. However, these three trends pose corresponding challenges regarding data efficiency, model efficiency, and generalization robustness. To address these challenges, we develop solutions from three perspectives: automatic data augmentation, efficient architecture design, and robust feature normalization. (i) Chapters 2 through 4 propose a series of automatic data augmentation methods to replace the hand-crafted rules that define the augmentation sampling distributions, magnitude ranges, and functions. Experiments show that the automatic augmentation methods apply to diverse tasks and effectively improve their performance without using extra training data. (ii) Chapter 5 introduces the quantized coupled U-Nets architecture to boost the efficiency of stacked U-Nets, with broad applications to location-sensitive tasks. U-Net pairs are coupled together through shortcut connections that facilitate feature reuse across U-Nets and reduce redundant network weights. Quantizing weights, features, and gradients to low-bit representations makes coupled U-Nets even more lightweight, accelerating both training and testing. (iii) Chapter 6 presents two feature normalization techniques, SelfNorm and CrossNorm, to promote deep networks' robustness. SelfNorm utilizes attention to highlight vital feature statistics and suppress trivial ones, whereas CrossNorm augments feature statistics by randomly exchanging statistics between feature maps during training. SelfNorm and CrossNorm reduce deep networks' sensitivity and bias toward feature statistics and improve robustness to out-of-distribution data, which usually exhibits unforeseen feature statistics. Overall, the proposed automatic data augmentation, efficient U-Net design, and robust feature normalization shed light on new perspectives for automatic, efficient, and robust deep learning.

**Location:** Remote via Zoom
**Committee:** Prof. Dimitris Metaxas (Advisor), Prof. Hao Wang, Prof. Sungjin Ahn, Prof. Xiaolei Huang (External member)

## Ethics Washing in AI

**Date:** 05 Mar 2021, 10:00 AM – 11:00 AM
**Speaker:** Moshe Vardi

**Bio:** Moshe Vardi is a Professor of Computer Science at Rice University and directs the Ken Kennedy Institute for Information Technology. He also holds the titles of University Professor, Karen Ostrum George Professor in Computational Engineering, and Distinguished Service Professor. Dr. Vardi received his Ph.D. from the Hebrew University of Jerusalem in 1981.
His interests focus on applications of logic to computer science. He is an expert in model checking, constraint satisfaction, and database theory. Vardi is the recipient of multiple awards, including three IBM Outstanding Innovation Awards, the 2000 Gödel Prize (co-winner), the 2005 ACM Paris Kanellakis Theory and Practice Award (co-winner), the LICS 2006 Test-of-Time Award (co-winner), the 2008 and 2017 ACM Presidential Awards, the 2008 Blaise Pascal Medal in computational science from the European Academy of Sciences, and others. He holds honorary doctorates from eight universities. He is a Guggenheim Fellow; a Fellow of the ACM, AAAS, and AAAI; and a member of the US National Academy of Engineering, the National Academy of Sciences, the European Academy of Sciences, and the Academia Europaea. Prof. Vardi is the president of the International Federation of Computational Logicians. He is Senior Editor of Communications of the ACM, after serving as its Editor-in-Chief for a decade.

**Abstract:** Over the past decade, Artificial Intelligence in general, and Machine Learning in particular, have made impressive advances in image recognition, game playing, natural-language understanding, and more. But there have also been several instances where we saw the harm that these technologies can cause when they are deployed too hastily. A Tesla crashed on Autopilot, killing the driver; a self-driving Uber crashed, killing a pedestrian; and commercial face-recognition systems performed terribly in audits on dark-skinned people. In response, there has been much recent talk of AI ethics. Many organizations have produced AI-ethics guidelines, and companies publicize their newly established responsible-AI teams. But talk is cheap. "Ethics washing" — also called "ethics theater" — is the practice of fabricating or exaggerating a company's interest in equitable AI systems that work for everyone. An example is when a company promotes "AI for good" initiatives with one hand while selling surveillance tech to governments and corporate customers with the other. I will argue that the ethical lens is too narrow. The real issue is how to deal with technology's impact on society. Technology is driving the future, but who is doing the steering?

**Location:** Via Webex

## Shape Modeling of Data with Complex Geometry and Topology

**Date:** 09 Mar 2021, 10:00 AM – 12:00 PM

**Abstract:** With the advancement and prevalence of various sensors over the past years, it has become increasingly easy to produce and access abundant data with different modalities and properties. These data are typically noisy and feature complex geometric and topological structures. Abstracting away the application domains reveals a common thread: such data can be effectively processed from the perspective of shape modeling. Depending on the context, shape modeling can be performed at different granularities, from individual data samples to the whole dataset. In this dissertation, we explore the benefits and challenges of shape modeling for such complex data, and offer novel solutions to achieve effective and scalable data processing across different applications. In the first part of this dissertation, we connect data analysis with explicit shape modeling by considering the underlying topological features within data. Based on the theory of algebraic topology, we show that low-dimensional topological structures offer compact global shape information for data analysis, and we present provable algorithms to identify these structures. Furthermore, we demonstrate that such topological information not only is rich in individual samples but can also be discovered from the dataset as a whole. Empirically, we show the benefits of harnessing data topology in two specific applications: cardiac trabeculae restoration from 3D CT images and learning with noisy labels.
In the second part of this dissertation, we explore data-driven paradigms that implicitly model shapes for large-scale data processing. Such implicit strategies are built upon deep neural networks and typically lead to more robust data descriptions than explicit ones, at the cost of higher sample complexity. We illustrate the machinery behind implicit shape modeling through one concrete data modality, 3D point clouds, which broadly serve as the standard outputs of various sensors. By developing specialized network architectures, we are able to capture the shape features of point clouds efficiently and effectively, thereby benefiting point-cloud-based applications such as 3D scene segmentation and autonomous driving.

**Location:** Remote via Webex
**Committee:** Prof. Dimitris Metaxas (Advisor), Prof. Konstantinos Michmizos, Prof. Desheng Zhang, Prof. Mingchen Gao (External member; University at Buffalo, SUNY)

## Deep Learning-based Histopathology Image Analysis for Cancer Diagnosis and Treatment

**Date:** 09 Mar 2021, 03:00 PM – 05:00 PM

**Abstract:** Histopathology plays a vital role in cancer diagnosis, prognosis, and treatment decisions. The whole-slide imaging technique, which captures an entire slide as a digital image (a whole-slide image, WSI), allows pathologists to view slides digitally rather than under a microscope as was done traditionally. With the growth of computational power and image analysis algorithms, computational methods have been developed for quantitative and objective analyses of histopathology images, which can reduce intensive labor and improve efficiency for pathologists compared with manual examination. In this dissertation, we focus on deep learning-based solutions in histopathology image analysis for cancer diagnosis and treatment; specifically, nuclei segmentation for cancer diagnosis, and gene mutation prediction for cancer treatment. Nuclei segmentation is a critical step in the automatic analysis of histopathology images. We pursue two directions to tackle the problems in deep learning-based nuclei segmentation. The first is annotation-efficient algorithms. Since fully supervised training of deep neural networks requires a large amount of training data, we propose a weakly supervised nuclei segmentation framework based on a portion of nuclear locations to alleviate the annotation effort of pathologists. This method achieves performance comparable to fully supervised methods with about a 60× speed-up (10% points) in annotation time. The other direction is to improve instance segmentation performance. The networks and cross-entropy loss in current deep learning-based segmentation methods originate from image classification tasks and have drawbacks for segmentation. We therefore propose a full-resolution convolutional neural network (FullNet) and a variance-constrained cross-entropy (varCE) loss to improve fully supervised segmentation performance. Beyond the cell-level heterogeneity routinely used for cancer diagnosis, it remains unclear for many cancers how tissue structures in histopathology slides relate to genomic features such as gene alterations and expression patterns. We develop a deep learning model to predict genetic mutations and biological pathway activities directly from histopathology slides in breast cancer. The weight maps of tumor tiles are visualized to understand the decision-making process of the deep learning models. Our results provide new insights into the associations among pathological image features, molecular outcomes, and targeted therapies for breast cancer patients.
Location: Remote via Zoom Committee: Dimitris Metaxas (Advisor), Rutgers CS Konstantinos Michmizos, Rutgers CS Hao Wang, Rutgers CS Mei Chen (Outside member), Microsoft Start Date: 12 Mar 2021; Start Time: 11:30AM - Title: Rutgers University Department of Computer Science Open House Bio: General Session     pass: open [11.30am] Chair's Welcome Speech (Prof. Matthew Stone) {youtube}https://www.youtube.com/watch?v=ifSrc-6jo34&ab_channel=ComBraLab{/youtube}  [11.40am] GPD's Presentation of the PhD program (Prof. Ulrich Kremer) {youtube}https://www.youtube.com/watch?v=_QbGDwRfUqU&ab_channel=ComBraLab{/youtube} [12.00pm] PhD students' Pitch Talks {youtube}https://www.youtube.com/watch?v=_8RT_iapQQ4&ab_channel=ComBraLab{/youtube}     David Domingo: Accelerating storage recovery (advisor: Prof. Sudarsun Kannan)     Neelesh Kumar: Machine Learning for Motor Learning: Brain-Informed and Interpretable Deep Learning of Movement (advisor: Prof. Konstantinos Michmizos)     Guido Tagliavini Ponce: Optimal Balls-and-Bins Games and Virtual Address Translation (advisor: Prof. Martin Farach-Colton)     Harsha Srimath Tirumala: One way Functions and the Minimum Circuit Size problem (MCSP) (advisor: Distinguished Prof. Eric Allender)     Honglu Zhou: Multi-Hop Transformer For Spatiotemporal Reasoning (advisor: Prof. Mubbasir Kapadia) [12.30pm] Break out sessions     Theory session - Chair: Prof. Sepehr Assadi         pass: theory     Systems session - Chair: Prof. Yipeng Huang         pass: systems     AI session - Chair: Prof. Hao Wang         pass: arti [13.00pm] The CS Department (Prof. 
Matthew Stone) {youtube}https://www.youtube.com/watch?v=3agiEzvlgkU&ab_channel=ComBraLab{/youtube} Speaker: Abstract: Location: Join this event via Webex- Details on how to join are listed above Committee: Start Date: 12 Mar 2021; Start Time: 01:00PM - 03:00PM Title: Instance Segmentation for Biological Image Analysis Bio: Speaker: Abstract: Image-based instance segmentation is a task that differentiates and classifies objects at the pixel level. This task is of great significance for many biological applications (e.g., the study of neural cell interactions), which require precise segmentation of biological objects. However, it is challenging to capture fine-grained object details in biological images and separate the attached or overlapping objects due to their weak boundaries. This thesis presents a series of instance segmentation methods for biological images that address these challenges. We focus on three representative biological subjects, including neural cells, plants, and cell nuclei, to illustrate the image-based challenges and our methods' effectiveness. In particular, neural cells feature irregular shapes with tiny and slender protrusions; plant leaves typically contain thin stalks and overlapping regions; cell nuclei are usually distributed as clusters and have various shapes. In the first chapter, we present an overview of existing instance segmentation methods and their applications in biological image analysis. In the following chapters, we introduce a series of instance segmentation methods that distinguish objects from a global view and classify object pixels within bounding boxes. We study two kinds of object detectors: anchor-based and keypoint-based detectors. We explore strategies to remove undesired neighboring objects within a region of interest (ROI). On this basis, we further enhance the segmentation quality around the uncertain areas (e.g., boundary) via an auxiliary point-wise feature refinement module. 
Through extensive experiments, we show the superiority of our methods by comparing them to state-of-the-art approaches. Location: Remote via Webex Committee: Prof. Dimitris N. Metaxas (Advisor) Prof. Konstantinos Michmizos Prof. Sungjin Ahn Prof. Sharon Xiaolei Huang (Penn State) Start Date: 16 Mar 2021; Start Time: 05:00PM - 07:00PM Title: Improving Inference and Generation Process of Generative Adversarial Networks Bio: Speaker: Abstract: Generative Adversarial Networks (GANs) have achieved tremendous success in a broad range of computer vision applications. This thesis considers two directions that are closely related to GAN-based image synthesis: (i) GAN-based inference learning and (ii) GAN-based video synthesis. Both directions face many challenges in generation and inference: in GAN-based inference learning, application-driven methods usually outperform approaches with elegant theories; in GAN-based video synthesis, generation quality is far behind that of contemporary image generators. To tackle these challenges, we conduct extensive studies on improving the inference and generation processes. First, we identify the “incomplete” representation issue in the existing single-pathway framework and propose a two-pathway approach to address this problem. Self-supervised learning is also employed to make use of both labeled and unlabeled data. Second, as a theoretical extension of the previous work, we analyze three issues that degrade both generation and inference in GAN-based inference learning approaches. To address all three issues in a unified framework, we take the single-pathway approach as a baseline and propose two strategies that solve all of them. Third, we present a framework that leverages contemporary image generators to render high-resolution videos. We frame the video synthesis problem as discovering a trajectory in the latent space of a pre-trained and fixed image generator.
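The latent-trajectory idea — the generator stays frozen and only a path through its latent space is chosen — can be illustrated with a toy stand-in. Everything below, including `toy_generator` and the straight-line trajectory, is purely illustrative and not the thesis's model.

```python
import numpy as np

def toy_generator(z):
    # stand-in for a pre-trained, fixed image generator G: latent -> "image"
    return np.outer(z, z)

def latent_trajectory(z_start, z_end, num_frames):
    """Render a 'video' by walking a straight line between two latents.
    The generator is never updated; only the trajectory is chosen."""
    ts = np.linspace(0.0, 1.0, num_frames)
    return [toy_generator((1 - t) * z_start + t * z_end) for t in ts]
```

Each frame is one generator evaluation along the path, so temporal coherence comes entirely from the smoothness of the trajectory.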
Furthermore, we introduce a new task, which we call cross-domain video synthesis, which allows generating moving objects for which the desired video data is not available. Extensive experiments on various datasets demonstrate the advantages of our methods over existing techniques. Location: Remote via Webex Committee: Prof. Dimitris Metaxas (Advisor) Prof. Yongfeng Zhang Prof. Hao Wang Dr. Han Zhang (Google Brain) Start Date: 18 Mar 2021; Start Time: 02:00PM - 04:00PM Title: Integrated Reconstruction, Segmentation and Functional Analysis for MRI Data Bio: Speaker: Abstract: Magnetic Resonance Imaging (MRI) plays an important role in many clinical applications due to its low radiation and high-contrast imaging. The traditional sequential pipeline for analyzing MRI data includes three steps: reconstruction, segmentation, and functional analysis. For dynamic MRI, e.g., dynamic cardiac MRI, a motion estimation step is also included. These tasks are highly entangled and share important information. However, current approaches are either computationally inefficient or require substantial human effort to design complicated neural architectures. Moreover, limited work has explored the relationships between the different tasks and utilized the coherence information between them. In this thesis, we propose 1) more effective and efficient deep learning-based MRI reconstruction methods; 2) integrated approaches for joint reconstruction and segmentation as well as joint reconstruction and motion estimation; 3) thickness and thickening estimation methods for the left ventricle wall, for the diagnosis and characterization of different cardiac diseases. The proposed methods are extensively validated on four different medical applications: 2D cardiac and brain MRI reconstruction, 2D cardiac MRI reconstruction and segmentation, dynamic 3D cardiac MRI reconstruction and motion estimation, and dynamic 3D cardiac MRI thickness and thickening estimation.
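As background for the reconstruction task: accelerated MRI measures only a subset of k-space, and the classical zero-filled baseline that learned reconstruction methods improve upon is simply an inverse FFT of the undersampled data. A quick NumPy sketch (the acquisition model is deliberately simplified; real scanners measure complex multi-coil data):

```python
import numpy as np

def zero_filled_recon(image, mask):
    """Simulate undersampled MRI acquisition and the zero-filled
    baseline reconstruction: mask k-space, then inverse-FFT back."""
    kspace = np.fft.fft2(image)      # fully sampled k-space
    undersampled = kspace * mask     # keep only the measured entries
    return np.abs(np.fft.ifft2(undersampled))
```

With a full sampling mask the reconstruction is exact; discarding k-space entries introduces the aliasing artifacts that deep reconstruction networks learn to remove.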
Our proposed approaches are evaluated on various datasets and outperform current state-of-the-art methods in all these studies. Location: Remote via Webex Committee: Prof. Dimitris Metaxas (Advisor) Prof. Konstantinos Michmizos Prof. Desheng Zhang Dr. Zhen Qian (Tencent America) Start Date: 23 Mar 2021; Start Time: 10:00AM - 12:00PM Title: The integration of heterogeneous systems for vehicular sensing Bio: Speaker: Abstract: With the emergence of connected vehicles and, in the future, autonomous vehicles, vehicles are being equipped with rich sensing and communication devices, and various traffic sensors are being deployed on roads. Based on real-time data from these sensors, numerous driver-assistance applications have been developed to improve urban efficiency, e.g., navigation, HD live maps, and traffic accident recovery. Thus, it is essential to improve vehicular sensing for urban residents. Vehicular sensing can be achieved by modeling the mobility patterns of vehicular data from multiple systems, e.g., (1) mobile systems, including devices equipped in taxis, buses, trucks, and personal vehicles; and (2) stationary systems, including traffic cameras, electronic toll stations, and roadside units. However, most existing work studies vehicular sensing on a single system, which may not comprehensively represent the mobility patterns of residents across the whole city, leading to a biased understanding.
In this dissertation, we study the integration of multiple vehicular sensing systems and design a vehicular sensing framework that enables integrated vehicular sensing from three perspectives: (1) Integration of mobile systems: we present a real-time sensing task scheduling system, RISC, which schedules sensing tasks for commercial vehicles with constrained sensing resources based on unified modeling of vehicles' mobility patterns; (2) Integration of stationary systems: we design a type-aware travel-speed prediction method for road segments, mDrive, by utilizing sparse spatial-temporal data on multiple types of vehicles from traffic camera systems; (3) Integration of hybrid systems: we provide a privacy risk prediction approach based on transfer learning, TransRisk, to predict the privacy risk of a traffic camera system from its small-scale short-term data, together with knowledge learned from large-scale long-term data of existing sensing systems. We implement these three modules of our vehicular sensing framework using real-world data from two Chinese cities, Shenzhen and Suzhou. Our results provide insights into how integrating multiple sensing systems improves the performance of vehicular sensing from different aspects. Location: Remote via Zoom Committee: Prof. Desheng Zhang (Advisor) Prof. Zheng Zhang Prof. Yongfeng Zhang Prof. Ruilin Liu (external member) Start Date: 24 Mar 2021; Start Time: 12:00PM - 02:00PM Title: Towards Visual Learning with Attention Mechanism Bio: Speaker: Abstract: Tremendous interest in deep learning has emerged in the computer vision research community. The established deep convolutional neural networks (CNNs) have achieved astonishing results in various vision tasks, but there are still problems that need to be addressed. First of all, CNN models are perceived as "black boxes," with a lack of understanding of their internal function.
Recently, class-wise activation maps have been proposed to provide a visual explanation of model attention, but there is still no established way to use that explanation to guide the learning process. Additionally, the success of deep learning relies on supervised training of models on large-scale data, which requires humans to create massive annotations. In this dissertation, we argue that attention mechanisms can play significant roles in dealing with the challenges mentioned above. First, although class-wise attention mechanisms provide good localization for an individual class of interest when it comes to interpreting CNNs, these techniques produce attention maps with substantially overlapping responses among different classes, leading to visual confusion and the need for discriminative attention. We address this problem by means of a new framework that makes class-discriminative attention a principled part of the learning process. Second, to remove the need for human annotations, we introduce Co-Attention as weak supervision to generate positive/negative training samples and a Contrastive Attention module to enhance the feature representations such that the comparative contrast between features of the positive and negative samples is maximized. Third, we adopt attention in feature space to bridge different vision tasks in a unified learning framework. Extensive experiments on vision benchmarks show the effectiveness of our approaches in terms of improved image classification and segmentation accuracy. Regarding applications, our proposed algorithms are applied to the unsupervised detection of highlighted segments in videos, joint face detection and landmark localization, and reasoning about human facial behaviors in deception. Additionally, two new benchmarks are collected to support related studies and facilitate research in the same direction. Location: Remote via Zoom Committee: Prof. Dimitris N. Metaxas (Advisor) Prof. Konstantinos Michmizos Prof.
Georgios Moustakides Prof. John Yiannis Aloimonos (external member) Start Date: 25 Mar 2021; Start Time: 03:00PM - 05:00PM Title: Scalable, Physics-driven Model Identification for Robust Robotic Manipulation Bio: Speaker: Abstract: Manipulating all types of objects is still challenging in robotics. Successful manipulation planning requires accurate geometric and mechanical models of objects as a precondition. In warehouses and factories, manipulated objects are typically known in advance, with their CAD models obtained from full 3D scans along with other mechanical properties, such as mass and friction, in the workspace. However, recent research efforts in grasping and manipulation focus instead on tasks where object models are unavailable. For example, the robot cannot obtain full 3D scans and mechanical properties of unknown objects inside a drawer or in a pile of clutter. Furthermore, in such circumstances, nonprehensile manipulation of objects is preferred over the traditional pick-and-place approach because an object may not be easily grasped by the robot, due to the design of the end-effector, the size of the object, or the obstacles surrounding the manipulated object. Recently, combined pushing and grasping approaches have shown success where traditional grasp planners fail, and they work well under uncertainty. First, I propose probabilistic methods to infer geometric and mechanical models of unknown objects by leveraging non-prehensile robot actions. The proposed methods utilize physics simulations of all the physical interactions between the objects and between the robot and the manipulated objects. Non-prehensile manipulation of unknown objects, such as pushing and sliding, combined with physics reasoning, helps infer the shape and mechanical properties of the objects by replaying hypothesized object models in simulation. The identified probabilistic models then enable probabilistic manipulation planning.
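The inference loop described here — replaying hypothesized object models in simulation and favoring the ones that reproduce the observed motion — resembles Bayesian hypothesis reweighting. A minimal sketch with a made-up one-dimensional "simulator" (the real work uses full physics simulation; `simulate_push` and the Gaussian likelihood are assumptions for illustration):

```python
import numpy as np

def simulate_push(friction, push_force=1.0):
    # toy surrogate for a physics engine: displacement after a push
    return push_force / friction

def reweight_hypotheses(frictions, observed_disp, noise=0.1):
    """Weight each hypothesized friction coefficient by the likelihood
    that its simulated displacement matches the observed one."""
    sims = np.array([simulate_push(f) for f in frictions])
    weights = np.exp(-((sims - observed_disp) ** 2) / (2 * noise ** 2))
    return weights / weights.sum()
```

After a few pushes, the weight distribution concentrates on the hypothesized model that best explains the robot's observations.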
I propose probabilistic manipulation planning methods using the identified object models. The proposed planning methods achieve successful pre-grasp sliding manipulation by selecting a stable goal configuration and finding the most robust action considering all possible hypothesized models. Finally, I propose nonprehensile manipulation of multiple objects that allows the robot to rearrange objects much more efficiently. The nested nonprehensile manipulation actions reduce the length of the end-effector’s trajectories by manipulating multiple objects simultaneously. The nested rearrangement search algorithm provides an efficient way to explore object interactions in the combinatorially large configuration space of multiple objects. Location: Remote via Zoom Committee: Professor Abdeslam Boularias (Advisor) Professor Jingjin Yu Professor Mridul Aanjaneya Professor Tucker Hermans (external member) Start Date: 26 Mar 2021; Start Time: 03:00PM - 05:00PM Title: Scaling data sharing among vehicles to tackle long-tail traffic situations Bio: Speaker: Abstract: Developing robust driver assistance and automated driving technologies requires an understanding of not just common highway and city traffic situations but also the long tail of unusual or rare traffic events (e.g., objects on the roadway, pedestrians crossing highways). Due to the rareness, variety, and complexity of long-tail traffic conditions, it is widely recognized that ensuring dependability under these situations remains a key challenge for automated vehicles. Specifically, the challenges of tackling long-tail traffic situations are threefold. (i) Due to the rareness of long-tail traffic events, understanding these events requires accumulating driving data on the order of billions of miles.
Since most existing efforts to collect driving data build on a small fleet of highly instrumented vehicles that are continuously operated with test drivers, it is challenging to acquire road data on such a large scale. (ii) Among the large amount of data needed, it is hard to automatically identify the unusual situations that are challenging to address and could lead to potential accidents. Although there exists a large body of work on abnormal driving event detection, it focuses on detecting specific, known situations and cannot detect previously unknown unusual road events that are missing from the current set of test cases for automated vehicles. (iii) The complexity of diverse traffic situations makes it hard for a single vehicle to sense its surroundings comprehensively. Although vehicle-to-vehicle (V2V) communications provide a channel for point cloud data sharing, it is challenging to align point clouds from two vehicles with state-of-the-art techniques due to localization errors, visual obstructions, and viewpoint differences. To address these challenges, this thesis focuses on scaling data sharing among vehicles, which enables low-cost road data acquisition, identification of unusual driving events, and accurate vehicle-to-vehicle point cloud alignment. In particular, the proposed solution includes: (1) A lightweight sensing and driving data logging system that can derive internal driver inputs (i.e., steering wheel angles, driving speed, and acceleration) and external perceptions of road environments (i.e., road conditions and front-view video) using a smartphone and an IMU mounted in a vehicle, which enables road data acquisition and sharing in heterogeneous driving scenarios across different vehicle models and drivers.
(2) An automatic unusual driving event identification system, which can detect unusual situations through a two-pronged approach involving inertial monitoring of driver reactions and an autoencoder-based technique for detecting unusual video scenes. (3) A two-phase point cloud registration mechanism to fuse point clouds from two different vehicles, which focuses on key objects in the scene where the point clouds are most similar and infers the transformation from those. Location: Remote via Zoom Committee: Prof. Marco Gruteser (Advisor) Prof. Richard Martin Prof. Desheng Zhang Dr. Fan Bai (External Member, from General Motors) Start Date: 29 Mar 2021; Start Time: 01:00PM - 03:00PM Title: Computational Methods to Understand the Association between Emojis and Emotions Bio: Speaker: Abstract: Emojis have become ubiquitous in digital communication due to their visual appeal as well as their ability to vividly express human emotion, among other factors. They are also heavily used in customer surveys and feedback forms. Hence, there is a need for methods and resources that shed light on their meaning and communicative role. In this work, we seek to explore the connection between emojis and emotions by employing new resources and methodologies. First, we compile a unique corpus of ~20.8 million emoji-centric tweets, such that we can capture rich emoji semantics using a comparatively small dataset. We then train a model to generate interpretable word vectors and show how domain-specific emoji embeddings give better emotion prediction than vanilla embeddings such as GloVe and Word2Vec. Second, we conduct annotation experiments for a set of 150 popular emojis. This yields 1,200 emoji-emotion pairs with human ratings of association for the 8 basic human emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. This gold-standard set of emoji-emotion scores is the first of its kind.
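One way an embedding-based emoji-emotion association can be probed is cosine similarity between an emoji's vector and each emotion word's vector. The sketch below uses tiny hand-made vectors; the real work trains embeddings on the ~20.8M-tweet corpus, and `rank_emotions` and the toy vectors are purely illustrative.

```python
import numpy as np

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_emotions(emoji_vec, emotion_vecs):
    """Rank the 8 basic emotions by embedding similarity to an emoji."""
    scores = {e: cosine(emoji_vec, emotion_vecs[e]) for e in EMOTIONS}
    return sorted(scores, key=scores.get, reverse=True)
```

With real embeddings, the top-ranked emotions can then be compared against the human-annotated gold-standard scores.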
We additionally conduct experiments to assess to what extent such associations can be inferred from existing data. Our experiments show that this succeeds when high-quality word-level information is available. Lastly, we consider a set of popular NLP tools and perform additional experiments on texts containing emojis to analyze how these tools operate on emojis. Keywords: EmoTag, Emoji--Emotion, Emoji--Embedding, Twitter--Emoji, Emotion--Lexicons Location: Remote via Zoom Committee: Prof. Gerard de Melo (Advisor) Prof. Matthew Stone Prof. Yongfeng Zhang Daniel Preotiuc-Pietro (Bloomberg) Vita G. Markman (LinkedIn) Start Date: 01 Apr 2021; Start Time: 12:30PM - 02:30PM Title: Neural Methods for Document Understanding Bio: Speaker: Abstract: Document recognition involves both structured and unstructured data. This dissertation studies and proposes solutions to problems in both domains. Structured data often comes in tabular format. Many existing applications take structured tables as input, such as spreadsheets, HTML, Excel, and CSV. However, there are a plethora of circumstances that can lead to generating images of tables (scanning invoices, sharing user manuals, downloading read-only publications, etc.). In such scenarios, we do not have access to the table structure. To be able to use the aforementioned applications in these scenarios, we focus on the problem of structure recognition via graph modeling. We investigate ways of inferring the hidden structure from images of tables. By extracting table fields in the form of an undirected graph G and finding the corresponding line graph L(G), a.k.a. the edge-to-vertex dual graph, we are able to model the structure-inference problem as vertex classification using a Graph Convolutional Network (GCN)-based model. We also focus on text classification in three settings: English, multilingual, and Arabic.
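The edge-to-vertex dual construction L(G) used above is a standard graph transformation and can be sketched in a few lines of pure Python (a sketch of the transformation only, not the GCN classifier built on top of it):

```python
from itertools import combinations

def line_graph(edges):
    """Edge-to-vertex dual L(G): each edge of G becomes a vertex of L(G);
    two such vertices are adjacent iff the original edges share an endpoint."""
    edges = [tuple(sorted(e)) for e in edges]
    dual = []
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2):           # edges share a vertex in G
            dual.append((e1, e2))
    return dual
```

In the table-recognition setting, each vertex of L(G) corresponds to a candidate relationship between two table fields, which is what the GCN then classifies.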
We performed a comparative study with different models for text classification tasks using ULMFiT, BERT, and XLM across the spectrum of languages to understand their advantages and limitations. Location: Remote via Zoom Committee: Prof. Ahmed Elgammal (Advisor) Prof. Abdeslam Boularias Prof. Gerard de Melo Prof. Malihe Alikhani (external member) Start Date: 02 Apr 2021; Start Time: 03:00PM - 05:00PM Title: Graph Streaming Lower Bounds via a Streaming XOR Lemma Bio: Speaker: Abstract: We study space-pass tradeoffs in graph streaming algorithms for parameter estimation and property testing problems, such as estimating the size of maximum matchings and maximum cuts, the weight of minimum spanning trees, or testing whether a graph is connected or cycle-free versus being far from these properties. We develop a new lower bound technique that proves that for many problems of interest, including all of the above, obtaining a (1 + ε)-approximation requires either near-linear space or Ω(1/ε) passes, even on highly restricted families of graphs such as bounded-degree planar graphs. For several of these problems, this bound matches those of existing algorithms and is thus (asymptotically) optimal. One key ingredient of our proofs is a simple streaming XOR lemma, a generic hardness amplification result, that we prove: informally speaking, if a p-pass s-space streaming algorithm can only solve a decision problem with advantage δ > 0 over random guessing, then it cannot solve the XOR of L independent copies of the problem with advantage much better than δ^L. This result may be of independent interest and useful for other streaming lower bounds as well. In this talk, I will give a brief overview of our results and techniques. Based on joint work with Prof. Sepehr Assadi. Location: Remote via Zoom Committee: Prof. Swastik Kopparty (advisor) Prof. Sepehr Assadi Prof. Eric Allender Prof.
Sudarsun Kannan Start Date: 09 Apr 2021; Start Time: 10:30AM - 12:30PM Title: Neural Graph Reasoning for Explainable Decision-Making Bio: Speaker: Abstract: Modern decision-making systems are primarily accuracy-driven, seeking to learn good representations and provide accurate predictions via deep learning techniques. However, due to the black-box nature of deep neural networks, explainability is largely ignored, even though it plays a pivotal role in practical applications such as user modeling, digital marketing, and e-commerce platforms. Explanations can be leveraged not only to help model developers understand and debug the decision-making process, but also to foster better engagement and trustworthiness for the end users who consume the results delivered by these systems. Meanwhile, explanations should be both consistent with the decision-making process and understandable to human beings, while most existing explainable approaches possess only one of these properties. In this thesis, we concentrate on how to generate and evaluate faithful and comprehensible explanations via external heterogeneous graphs in various practical scenarios. Meaningful and versatile graph structures (e.g., knowledge graphs) are shown to be effective in improving model performance and, more importantly, make it possible for an intelligent decision-making system to conduct explicit reasoning over graphs to generate predictions. The benefit is that the resulting graph paths can be directly regarded as explanations of the predictions, because the traceable facts along the paths reflect the decision-making process and can also be easily understood by humans. We propose several neural graph reasoning approaches to generate such path-based explainable results, marrying the power of deep neural models with the interpretability of graph structures.
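The core idea — a path through the knowledge graph doubles as the explanation — can be sketched with breadth-first search over a toy graph. The graph, entities, and relation names below are invented for illustration; the thesis's methods learn which paths to follow rather than doing plain BFS.

```python
from collections import deque

def explain_path(graph, start, goal):
    """Find a path of (relation, entity) hops from a user to a recommended
    item; the sequence of hops itself serves as the explanation."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for relation, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [(relation, neighbor)]))
    return None  # no explanatory path exists
```

A returned path like "purchased → item_a, produced_by → brand, produces → item_b" reads directly as a human-understandable justification for recommending item_b.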
The covered techniques range from deep reinforcement learning and imitation learning to neural-symbolic reasoning and neural logic reasoning. The proposed models are extensively evaluated on real-world benchmarks across different application scenarios, such as recommendation and column annotation. The experimental results demonstrate both superior performance gains and better explainability of these methods. Location: Remote via Zoom Committee: Prof. Shan Muthukrishnan (Advisor) Prof. Yongfeng Zhang Prof. Gerard de Melo Dr. Lihong Li (Amazon) Start Date: 16 Apr 2021; Start Time: 02:00PM - 03:30PM Title: Choreographic coordination: a programming model for in-situ workflows Bio: Speaker: Abstract: As scientific applications strive towards increasingly realistic modeling of complex phenomena, they are integrating multiple models and simulations into complex, coupled scientific workflows. As a result, ensuring that existing codes can be combined and recombined correctly and flexibly as part of these workflows is essential. In this presentation, I will discuss challenges in current methods for creating such compositions and propose a novel choreographic approach for creating in-situ scientific workflows, i.e., workflows that execute on a single extreme-scale system. Specifically, this approach provides a domain-specific abstraction that enables a programmer to instrument existing simulation codes so that they can be used as building blocks in defining complex workflows using a service-oriented approach. Using this approach, developers specify a workflow-level shared specification of data objects over common or partitioned data domains. This permits dependency-based execution to be specified at the workflow level, distinct from the independent operation of the component simulations. I will describe the features and benefits of this approach from the perspective of application design and systems optimization research.
Location: Remote via Zoom Committee: Professor Manish Parashar (advisor) Professor Zheng Zhang Professor Sudarsun Kannan Professor Eric Allender Start Date: 20 Apr 2021; Start Time: 02:30PM - 04:00PM Title: Explainable Graph Attention Networks Bio: Speaker: Abstract: Graphs are an important structure for data storage and computation. Recent years have seen the success of deep learning on graphs, such as Graph Neural Networks (GNNs), on various data mining and machine learning tasks. However, most deep learning models on graphs cannot easily explain their predictions and are thus often labelled “black boxes”. For example, the Graph Attention Network (GAT) is a frequently used GNN architecture, which adopts an attention mechanism to carefully select nodes from the neighborhood for message passing and aggregation. However, it is difficult to explain why certain neighbors are selected while others are not, and how the selected neighbors contribute to the final classification result. In this paper, we present a new graph learning model called Explainable Graph Attention Network (XGAT), which integrates graph attention modeling and explainability. Though in previous work accuracy and explainability were mostly tackled as two different problem spaces and by distinctly different models, we show that in the context of graph attention modeling, we can design a unified neighborhood selection strategy that selects appropriate neighbor nodes for both better accuracy and enhanced explainability. To justify this, we conduct extensive experiments to better understand the behavior of our model under different conditions and show the increase in both accuracy and explainability.
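The neighborhood-attention step that this line of work builds on can be sketched in NumPy: score each neighbor, softmax the scores, and take a weighted sum of neighbor features. This is a generic GAT-style aggregation for illustration, not the paper's exact formulation; `score_vec` plays the role of the learned attention parameters.

```python
import numpy as np

def attention_aggregate(node_feat, neighbor_feats, score_vec):
    """Score each neighbor against the center node, softmax the scores,
    and aggregate neighbor features with the resulting weights."""
    scores = np.array([score_vec @ np.concatenate([node_feat, h])
                       for h in neighbor_feats])
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    aggregated = weights @ np.vstack(neighbor_feats)
    return aggregated, weights   # the weights double as an explanation
```

The returned weights show which neighbors the model attended to, which is precisely the signal an explainable variant such as XGAT seeks to make faithful.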
Location: Remote via Zoom Committee: Professor Yongfeng Zhang Professor Santosh Nagarakatte Professor Hao Wang Professor Sudarsun Kannan Start Date: 22 Apr 2021; Start Time: 01:30PM - 03:00PM Title: Explaining Ranking Functions Bio: Speaker: Abstract: As the general public realizes the significant societal impact of the widespread use of algorithms in decision-making, there has been a push towards explainability and transparency in decision processes and results, as well as demands to justify the fairness of those processes. I propose transparent participation metrics to clarify the ranking process by assessing the contribution of each parameter used in the ranking function to the creation of the final ranked outcome, using information about the ranking functions themselves as well as observations of the underlying distributions of the parameter values involved in the ranking. To evaluate the outcome of the ranking process, I propose diversity and disparity metrics to measure how similar the selected objects are to each other and to the underlying data distribution. In addition, I have been investigating the repercussions of errors and changes of resources occurring post-decision on the outcome of allocation mechanisms, in order to enable fair redistribution of resources after the original decision conditions have been altered. My research seeks to enable non-experts to actively participate in all stages of the algorithmic decision process, from design to decision to error mitigation. Location: Remote via Zoom Committee: Prof. Amelie Marian Prof. Yongfeng Zhang Prof. David Pennock Prof. Jingjin Yu Start Date: 23 Apr 2021; Start Time: 10:30AM - 12:30PM Title: Towards Zero-Shot Learning: Object Recognition with Semantic Descriptions Bio: Speaker: Abstract: Human beings have the remarkable ability to recognize novel visual objects based only on a description in terms of semantic concepts.
Deep learning, however, often requires a large number of labeled training samples to achieve superior performance. It is laborious, expensive, and difficult to collect sufficient labeled data for each object because the distribution of data among categories in the real world is long-tailed. Therefore, there is great demand for research on how to recognize novel visual objects without any visual data being seen. Zero-shot learning aims to extend the knowledge learned from seen objects, for which we have training samples, to unseen objects, for which we have no training data. To mimic the human ability to recognize novel objects, zero-shot learning leverages semantic descriptions of classes to bridge the gap between seen and unseen classes. This thesis addresses zero-shot learning from three aspects: the semantic descriptions of classes, the discriminative visual representation of images, and algorithm design. We first present a visual-semantic embedding-based method that embeds visual data and semantic data into a shared embedding space. Images are encoded by a part-based CNN that detects bird parts and learns part-specific representations. A sparsity regularizer forces the models to connect text terms to their relevant parts and to suppress connections to non-visual text terms, without any part-text annotations. We then propose a novel algorithm that leverages generative models to synthesize visual data of unseen categories based on semantic descriptions, converting zero-shot learning into a conventional object recognition problem. Finally, we shift our focus to learning better discriminative visual representations of images at the part level for zero-shot learning in a weakly supervised manner. Experimental results show that our models significantly improve the performance of zero-shot learning in different settings. Location: Remote via Zoom Committee: Prof. Ahmed Elgammal (advisor and chair) Prof. Yongfeng Zhang Prof. Gerard de Melo Prof.
Haibin Ling (external member, Stony Brook University) Start Date: 23 Apr 2021; Start Time: 02:00PM - 03:30PM Title: In Defense of 2D Keypoint Annotations for 3D Animal Reconstruction Bio: Speaker: Abstract: In this talk, we address the problem of learning-based 3D reconstruction of animals from a single image. The main focus of recent work lies in eliminating the reliance of relevant approaches on keypoint annotations, using mostly masks for supervision. For some scenarios, e.g., when it is hard to establish correspondences across instances, this is a reasonable goal. However, we argue that 2D keypoints can get us a long way when it comes to 3D reconstruction, and it is important to embrace their significance. More importantly, we observe that current methods for 2D keypoint localization have excellent generalization properties. As a result, even when trained on a small initial training set, the 2D keypoint detector can automatically annotate unlabelled images with robust 2D keypoint detections. We utilize these self-annotated images for training, greatly improving the reconstruction performance of our models. Location: Remote via Zoom Committee: Prof. Dimitris Metaxas (advisor) Prof. Sungjin Ahn Prof. Karl Stratos Prof. Yipeng Huang Start Date: 23 Apr 2021; Start Time: 02:30PM - 04:00PM Title: User-oriented Fairness in Recommendations Bio: Speaker: Abstract: Recommender systems are having an increasingly critical impact on individuals and society, as a growing number of users rely on them for information seeking and decision making. Therefore, it is crucial to address potential unfairness problems in recommendations. We study unfairness problems in recommendations from the user perspective. First, we conduct experiments showing that current recommender systems behave unfairly between groups of users with different activity levels.
Specifically, active users, who account for only a small proportion of the data, enjoy much higher recommendation quality than inactive users. Such bias also affects the overall performance, since inactive users are the majority. To solve this problem, we provide a re-ranking approach that mitigates this unfairness by adding constraints over evaluation metrics. Furthermore, considering that users' demands for fairness are personalized in many scenarios, we achieve personalized fair recommendations that satisfy users' individual fairness requirements through adversarial learning. Location: Remote via Zoom Committee: Prof. Yongfeng Zhang (Advisor) Prof. Desheng Zhang Prof. Hao Wang Prof. Shiqing Ma Start Date: 30 Apr 2021; Start Time: 01:00PM - 03:00PM Title: Neural Methods for Entity-Centric Knowledge Extraction and Reasoning in Natural Language Bio: Speaker: Abstract: Entities are the cornerstone for the dissemination of factual knowledge in natural language. Human verbal and written communication invariably refers to entities, their properties, and their relationships. Moreover, human reasoning often relies on a proper understanding of the relationships among various entities. It is widely held that representing entity-centric knowledge in a structured form is most suitable for machines to consume and reason about. Over the past few decades, numerous methodological advances have been made in extracting entity-centric knowledge from unstructured and semi-structured sources, representing entity-centric knowledge as graph-structured data known as Knowledge Graphs, and using these knowledge graphs in various knowledge-intensive natural language processing tasks. Despite these advances, machines are yet to achieve human-level ability to extract and reason with factual knowledge and use it for various knowledge-intensive tasks. This dissertation proposes novel neural methods to narrow this gap.
In particular, for factual knowledge extraction, it proposes efficient and effective methods for the tasks of Entity Linking and Relation Extraction. For knowledge-based logical reasoning, an explainable link prediction method for emerging entities in knowledge graphs is proposed. Furthermore, as a representative of knowledge-intensive natural language processing tasks, this dissertation studies the problem of entity summarization to retrieve relevant facts and generate fact-allegiant textual descriptions of entities. Location: Remote via Zoom Committee: Prof. Gerard de Melo (advisor and chair) Prof. Matthew Stone Prof. Karl Stratos Prof. Sameer Singh (external member, University of California Irvine) Start Date: 07 May 2021; Start Time: 09:30AM - 11:00AM Title: One way functions and a conditional variant of MKTP Bio: Speaker: Abstract: One-way functions (OWFs) are central objects of study in cryptography and computational complexity theory. In a seminal work, Liu and Pass (FOCS 2020) proved that the average-case hardness of computing time-bounded Kolmogorov complexity is equivalent to the existence of OWFs. It remained an open problem to establish such an equivalence for the average-case hardness of some NP-complete problem. In our work, we make progress on this question by studying a conditional variant of the Minimum KT-complexity Problem (MKTP), which we call McKTP, as follows: 1. First, we prove that if McKTP is average-case hard on a polynomial fraction of its instances, then there exist OWFs. 2. Then, we observe that McKTP is NP-complete under polynomial-time randomized reductions. That is, there are NP-complete problems whose average-case hardness implies the existence of OWFs. 3. Finally, we prove that the existence of OWFs implies the nontrivial average-case hardness of McKTP. Thus the existence of OWFs is inextricably linked to the average-case hardness of this NP-complete problem. Location: Remote via Webex Committee: Prof.
Eric Allender (advisor) Prof. Michael Saks Prof. Sepehr Assadi Prof. Santosh Nagarakatte Start Date: 17 May 2021; Start Time: 04:00PM - 06:00PM Title: Toward Efficient and Optimal Multi-Robot Motion Planning: Provable Guarantees and Effective Heuristics Bio: Speaker: Abstract: The labeled Multi-Robot Motion Planning (MRMP) problem, despite its large number of variations, generally requires routing a set of mobile robots from a start configuration to a goal configuration as quickly as possible, without collisions. In recent years, MRMP has been extensively studied, and significant progress has been made on solving MRMP with great efficiency as well as solution quality. However, due to the problem's hardness and its importance in industrial applications, there is always the need to seek higher performance. This dissertation proposes methods with provable guarantees and effective heuristics to advance the state of MRMP algorithm development from several perspectives, including scalability and optimality. For MRMP in a continuous workspace, we first present a constant-factor optimal algorithmic framework with polynomial computation time. Then, a second method is developed with similar theoretical guarantees that at the same time pushes robot density above 50%. For MRMP on a discretized graph structure, we propose two heuristics: a database-driven heuristic for quickly resolving path conflicts, and a space utilization optimization principle that can improve the efficiency and optimality of existing decoupled MRMP algorithms. Both heuristics extend their effectiveness to a lifelong, dynamic MRMP formulation. Finally, we propose a general integer programming framework for path-based optimization problems. The framework is easy to adapt to different path optimization formulations and can generate optimal solutions efficiently. Location: Remote via Zoom Committee: Prof. Jingjin Yu (advisor and chair) Prof. Kostas Bekris Prof. Abdeslam Boularias Prof.
Sven Koenig (external member, University of Southern California) Start Date: 24 May 2021; Start Time: 10:00AM - 12:00PM Title: Geometric and Spectral Limitations in Generative Adversarial Networks Bio: Speaker: Abstract: Generative Adversarial Networks (GANs) have become one of the most successful and popular generative models in recent years, with a wide variety of applications, such as image and audio manipulation and synthesis, style transfer, and semi-supervised learning, to name a few. The main advantage of GANs over their classical counterparts stems from the use of Deep Neural Networks (DNNs) in both the sampling process (generator) and the energy evaluation process (discriminator), which can exploit the ongoing revolution in the availability of data and computation power to effectively discover complex patterns. Yet with this exceptional power comes an exceptional limitation: the black-box behavior associated with DNNs, which not only places the profound promise of GANs under a shadow of mistrust, but also greatly slows down efforts to improve the efficiency of these models. As such, studying the limitations and biases of GANs is critical for advancing their performance and usability in practice. The primary focus of this dissertation is to study two such limitations, namely a geometric limitation in generating disconnected manifolds, and a spectral limitation in learning distributions carried by high-frequency components. We investigate these limitations both empirically and theoretically, unveil their causes and consequences, and propose and evaluate solutions for overcoming their adverse effects. Location: Remote via Zoom Committee: Dr. Ahmed Elgammal (advisor) Dr. Abdeslam Boularias Dr. Casimir Kulikowski Dr.
Dimitris Samaras (outside member) Start Date: 27 May 2021; Start Time: 10:00AM - 11:30AM Title: Can Document Statistics Disambiguate Grounded Language: The Case of PP Attachment Bio: Speaker: Abstract: Advancements in the coverage and efficacy of pre-trained language models have paved the way for future research in natural language processing by making it much easier to generalize from small hand-annotated datasets. In cases that require inputs from situational context, however, pre-trained models may not reliably learn to disambiguate grounded language. Using the case of PP attachment in image captions, I assess the learning potential of pre-trained statistical language models toward resolving ambiguity in image captions. My results show that pre-trained models may lack the context required to resolve the ambiguity in many cases, but traditional NLP approaches can be combined with statistical models for grounded learning. Location: Remote via Webex Committee: Prof. Matthew Stone (advisor) Prof. Martin Farach-Colton Prof. Karl Stratos Prof. Yongfeng Zhang Start Date: 04 Jun 2021; Start Time: 08:00AM - 05:00PM Title: Spatial-Temporal Prediction in Cyber-Physical Systems Bio: Speaker: Abstract: Spatial-temporal predictions are essential for Cyber-Physical Systems (CPS) such as taxi networks and bus networks to satisfy users' demand. In order to manage CPS resources efficiently, it is vital to predict users' demand ahead of time for future actions such as rescheduling. However, current methods, while working well for short-term prediction, do not work well for predicting unexpected increases in user demand sufficiently far in advance (say, 2 hours ahead) and are thus inefficient for resultant applications such as bike re-balancing. To address this issue, we leverage the fact that the latest traffic demand in public transportation, e.g., subways, as an example of a large-scale CPS, can reflect future bike traffic demand.
Specifically, we model downstream traffic, i.e., bikes, and upstream traffic, i.e., subways, as spatial-temporal network series, and then design a multi-modal 3D deep capsule network framework called BikeCap. The key novelty of BikeCap is our first attempt to apply the capsule network in the temporal domain, which benefits long-term bike demand prediction. BikeCap has three components: (1) a pyramid-based convolutional layer in historical capsules to capture the spatial-temporal correlations along the directions of traffic propagation; (2) a spatial-temporal routing technique to actively learn the uncertain multi-modal spatial-temporal correlations between historical capsules and future capsules; (3) a 3D deconvolution-based decoder to construct future bike demand considering the multi-dimensional correlations of traffic patterns among neighborhoods. Through experiments based on data from 30,000 bikes and 7 subway lines collected in Shenzhen, China, the results show that BikeCap outperforms several state-of-the-art methods. We also conduct an ablation study to show the impact of BikeCap's different components and its multi-modal design. Location: Join Zoom Meeting Committee: Prof. Desheng Zhang, Prof. Yongfeng Zhang, Prof. Hao Wang, and Prof. Jie Gao. Start Date: 16 Jun 2021; Start Time: 01:00PM - 03:00PM Title: Vertex Cover in The Sliding Window Model Bio: Speaker: Abstract: We study the problem of finding a minimum vertex cover of a graph in the sliding window model. The sliding window model is similar to the standard streaming model, where the edges of a graph whose vertices are known arrive sequentially in a stream. However, in the sliding window model, only the most recent $w$ elements (edges, in the case of graphs) of the stream are considered in computing the output. We are only allowed to use $o(w)$ space, and hence not all of the recent $w$ elements of the stream can be stored.
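As background for the matching-based algorithms this abstract builds on, recall the classic offline connection between matchings and vertex covers: the endpoints of any maximal matching form a vertex cover of size at most twice the optimum, since every edge touches a matched vertex and any cover must pick at least one endpoint of each matched edge. The sketch below illustrates only this folklore 2-approximation, not the authors' sliding window algorithm:

```python
def greedy_maximal_matching(edges):
    """Greedily build a maximal matching from a sequence of edges."""
    matched = set()
    matching = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def vertex_cover_from_matching(edges):
    """Endpoints of a maximal matching form a 2-approximate vertex cover."""
    cover = set()
    for u, v in greedy_maximal_matching(edges):
        cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
cover = vertex_cover_from_matching(edges)
assert all(u in cover or v in cover for u, v in edges)  # every edge is covered
```

In the sliding window model, the matching must additionally survive the implicit deletion of expiring edges while using only $o(w)$ space, which is the difficulty the work described here addresses.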
Therefore, when a new element arrives, the oldest element goes out of the window. This implicit deletion makes the sliding window model harder than the standard streaming model. In this work, we observe that the $(3+\eps)$-approximation algorithm ($\eps \in (0,1)$) for maximum matching in the sliding window model gives a $(6+\eps)$-approximation for the minimum vertex cover problem in the sliding window model. We give a counterexample to the existing analysis of a $(4+\eps)$-approximation algorithm for the minimum vertex cover problem in the sliding window model and provide a correct analysis for the same algorithm. Finally, we obtain a $(3+\eps)$-approximation algorithm for the minimum vertex cover problem in the sliding window model using $\Tilde{O}(n)$ space, where $\Tilde{O}$ hides $\frac{1}{\eps}$ and $\log n$ factors and $n$ is the number of vertices in the graph. Location: Remote via Zoom Committee: Prof. Sepehr Assadi Prof. Aaron Bernstein Prof. James Abello Start Date: 13 Jul 2021; Start Time: 10:30AM - 12:30PM Title: Human Behavior-Driven Cyber-Physical Systems for On-demand Gig Delivery Bio: Speaker: Abstract: In this thesis, I will introduce my work on the foundations and applications of Human Behavior-Driven Cyber-Physical Systems, with a concrete use case of on-demand delivery as part of the Gig Economy. The key challenge for on-demand gig delivery is to obtain real-time gig worker status in a cost-efficient manner to enable timely delivery and efficient order scheduling. However, the existing sensing approaches in both industry and academia fall short of achieving a practical balance between reliability and scalability, due to cost issues and an uncertain operational environment. I design a human behavior-driven system based on transfer learning that accurately infers workers' status from worker reporting behavior. Then, I will talk about a smartphone-based approach to scale it up to a nationwide system.
This system has been deployed and operated on a real-world industry platform via our collaboration with the Alibaba On-demand Delivery Platform Eleme. Based on this deployment, I will present results on how gig workers and the proposed system interact with and improve each other's behavior. Finally, I will talk about some opportunities and challenges for my future work. Location: Remote via Zoom Committee: Prof. Desheng Zhang (chair) Prof. Jie Gao Prof. Hao Wang Dr. Lu Han (outside member, Google) Start Date: 12 Aug 2021; Start Time: 10:00AM - 12:00PM Title: Multimodal Story Comprehension: Datasets, Tasks and Neural Models Bio: Speaker: Abstract: Storytelling is a uniquely human skill that plays a central role in how we learn about and experience the world. Stories play a crucial part in the mental development of humans, as they help us encode a wide range of shared knowledge, including common sense physics, cause and effect, human psychology, and morality. We postulate that machine intelligence requires comparable skills, particularly when interacting with people. Much of the current research on understanding the visual and textual world operates only at a superficial, factual level, using data that aligns atomic, one-to-one descriptive factual text with an image. This dissertation aims to close the gap between current research and what we hope AI systems could do by developing novel datasets, tasks and neural methods for true multimodal story creation and comprehension. We start by showing that existing datasets and generic vision-language tasks do not demand the understanding of factors such as coherence and causality that are important for story comprehension. We propose \textit{Story Illustration}, the automatic illustration of an input story with a sequence of images, as a measure of story comprehension, imitating how humans visualize when they read or comprehend a story.
We develop an end-to-end trained neural module with explicit entity-grid based coherence modelling that is able to illustrate a story with a clear understanding of coherence and coreference resolution. We then extend \emph{Story Illustration} to a more generalized \textit{Many-to-Many Story Illustration} formulation and create a new dataset and a novel machine translation approach to story illustration. Our model is shown to generalize well and achieve high scores in creating coherent illustrations by virtue of its explicit causal decoding. In our work, we observe that generation is a much closer imitation of the human visualization process than retrieval. Moreover, the existing datasets primarily capture the perceptive process associated with comprehension rather than the creative process associated with storytelling. We create \textsc{aesop}, a new dataset that captures the creative process associated with visual storytelling. Using \textsc{aesop}, we propose foundational storytelling tasks that are generative variants of story cloze tests, to better measure the creative and causal reasoning ability required for visual storytelling. We develop a generalized story completion framework that models multimodal stories as the co-evolution of visual and textual concepts. Our dataset and model treat images as compositions of objects and related concepts, making it possible to plug them into different scenarios to generate new illustrations. An AI system that can comprehend a story can essentially comprehend any kind of data. We hope that this dissertation, along with all the publicly released resources such as data and code, drives further research towards building complex and intelligent systems that can create and comprehend multimodal stories.
Location: Remote via Zoom Committee: Mubbasir Kapadia Gerard De Melo Dimitris Metaxas Nasrin Mostafazadeh (External) Start Date: 18 Aug 2021; Start Time: 01:00PM - 03:00PM Title: Novel Polynomial Approximation Methods for Generating Correctly Rounded Elementary Functions Bio: Speaker: Abstract: All endeavors in science use math libraries to approximate elementary functions. Unfortunately, mainstream math libraries for floating point (FP) do not produce correctly rounded results for all inputs. In addition, given the importance of FP performance in numerous domains, several new variants of FP and alternatives to it have been proposed, which do not yet have correctly rounded math libraries. This dissertation proposes the RLibm approach, a collection of novel techniques for generating polynomial approximations that produce correctly rounded results for an elementary function f(x). Existing approaches generate polynomials that approximate the real value of f(x) using the minimax approach. In contrast, the RLibm approach makes a case for generating polynomials that approximate the correctly rounded result of f(x) (i.e., the exact value of f(x) computed in reals and rounded to the target representation). This approach provides more freedom in generating efficient polynomials that produce correctly rounded results for all inputs. Additionally, this dissertation makes the following contributions. First, we show that the problem of generating polynomials that produce correctly rounded results can be structured as a linear programming problem. Second, the RLibm approach accounts for numerical errors that occur from range reduction by automatically adjusting the amount of freedom available to generate polynomials. The generated polynomials produce correctly rounded results for all inputs. Third, we propose a set of techniques to develop correctly rounded elementary functions for 32-bit types.
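To make the distinction concrete: rather than requiring the polynomial value P(x) to be within some tiny distance of the real value of f(x), the correctly-rounded criterion only requires that P(x), once rounded to the target representation, equals the correctly rounded result. A minimal sketch of that acceptance test, assuming float32 as the target representation and Python's double as the working precision (the function and values below are illustrative, not RLibm code):

```python
import math
import struct

def to_float32(x):
    """Round a double to the nearest float32 value (round-to-nearest-even)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def produces_correct_result(px, fx):
    """An approximation P(x) is acceptable iff it rounds to the same
    float32 value as the high-precision result f(x)."""
    return to_float32(px) == to_float32(fx)

fx = math.log(1.5)                      # high-precision reference value
assert produces_correct_result(fx, fx)  # the reference itself rounds correctly
# A perturbation vastly larger than a float32 ulp rounds to a different value:
assert not produces_correct_result(fx + 1e-3, fx)
```

The set of real values that round to a given float32 result forms an interval, so requiring P(x) to land in that interval for every input yields linear constraints on the polynomial's coefficients.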
Specifically, we generate efficient piecewise polynomials using counterexample-guided polynomial generation and bit-pattern based domain splitting. Finally, we extend the RLibm approach to generate a single polynomial approximation that produces correctly rounded results for multiple rounding modes and multiple precision configurations. To generate correctly rounded elementary functions for an n-bit type, our key idea is to generate a polynomial approximation for an (n+2)-bit representation using the round-to-odd mode. We provide a formal proof that the resulting polynomial will produce correctly rounded results for all standard rounding modes and for multiple representations with k bits, where k is less than or equal to n. Using the RLibm approach, we have developed several implementations of elementary functions for various representations and rounding modes. Our elementary functions produce correctly rounded results for all inputs in the target representation. These functions are faster than state-of-the-art math libraries that have been optimized for decades. Our prototype is the first 32-bit float library that produces correctly rounded results with all standard rounding modes in the IEEE-754 standard for all inputs. Additionally, we provide the first correctly rounded posit32 library. Location: Remote via Zoom Committee: Prof. Santosh Nagarakatte (chair) Prof. Ulrich Kremer Prof. Richard Martin Prof. Zachary Tatlock (external committee) Start Date: 19 Aug 2021; Start Time: 09:00AM - 10:30AM Title: Scalable graph embedding on GPUs Bio: Speaker: Abstract: Graph embedding techniques have attracted growing interest since they convert graph data into a continuous and low-dimensional space. Effective graph analytics provides users with a deeper understanding of what is behind the data and thus can benefit a variety of machine learning tasks.
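As a toy illustration of one common family of graph embedding pipelines, here is a sketch of the truncated random-walk generation used by DeepWalk/node2vec-style methods: walks act as "sentences" that a skip-gram model then turns into low-dimensional node vectors. The graph and parameters are invented for illustration, and this is not the GPU system described in the abstract:

```python
import random

# Toy adjacency list; a real system would load a graph with billions of edges.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

def random_walks(graph, walks_per_node=2, walk_len=4, seed=0):
    """Generate truncated random walks starting from every node."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for start in graph:
            walk = [start]
            while len(walk) < walk_len:
                # step to a uniformly random neighbor of the current node
                walk.append(rng.choice(graph[walk[-1]]))
            walks.append(walk)
    return walks

walks = random_walks(graph)
assert len(walks) == 2 * len(graph)
# every consecutive pair in a walk is an edge of the graph
assert all(b in graph[a] for w in walks for a, b in zip(w, w[1:]))
```

Walk generation is embarrassingly parallel, which is one reason this stage maps well onto GPUs and hybrid CPU-GPU designs.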
At the scale of current real-world applications, most graph analytics methods suffer from high computation and space costs. Existing methods and systems can process networks with thousands to millions of nodes and edges; however, scaling to networks with billions of nodes and edges remains a challenge. We propose a hybrid CPU-GPU system for large-scale graphs that overcomes the limitations of previous methods. Empirical experiments show the efficiency of our system on a variety of real-world information networks, including language networks, social networks, and citation networks. Location: Remote via Zoom Committee: Prof. Badri Nath (advisor) Prof. Manish Parashar Prof. Srinivas Narayana Ganapathy Prof. Konstantinos Michmizos Start Date: 20 Aug 2021; Start Time: 09:00AM - 11:00AM Title: Data-Driven Cyber-Physical Systems for Socially Informed Mobility Bio: Speaker: Abstract: With ever-increasing concerns about air pollution and energy security, recent years have seen a rapid emergence of innovative mobility systems (e.g., electric vehicles and shared mobility such as carsharing and bikesharing). These mobility systems are believed to be able to create a cleaner environment for us. In addition, the fast development of advanced sensing technologies and communication devices provides us with a good opportunity to collect and accumulate long-term, massive data from these mobility systems, which is essential for extracting domain knowledge and understanding their mobility patterns, charging issues, user behaviors, and the potential roadblocks in their adoption. These understandings and this knowledge, based on real-world data, can benefit different parties including governments, providers, drivers, and passengers/users. For example, we can further enhance current mobility systems and provide better decision-making services after identifying their drawbacks.
Hence, in this dissertation, I propose a data-driven framework to understand innovative mobility systems and physical phenomena through data-driven long-term measurement and prediction based on large-scale data, and then to manage and enhance these mobility systems through socially informed decision making. I will use two examples to illustrate our contributions, which have the potential to pave the way for future shared autonomous mobility systems. Finally, I will talk about some opportunities and challenges for my future work. Location: Remote via Zoom Committee: Prof. Desheng Zhang (chair) Prof. Jie Gao Prof. Yongfeng Zhang Dr. Ruilin Liu (external member) Start Date: 23 Aug 2021; Start Time: 10:00AM - 12:00PM Title: AUTOMATED FEEDBACK GENERATION FOR PROGRAMMING ASSIGNMENTS Bio: Speaker: Abstract: Autograding systems are being increasingly deployed to meet the challenges of teaching programming at scale. Studies show that formative feedback can greatly help novices learn programming. This work extends an autograder, enabling it to provide corrective and formative feedback on programming assignment submissions using a mixed approach. The first methodology starts with the design of a knowledge map, the set of concepts and skills that are necessary to complete an assignment, followed by the design of the assignment and of a comprehensive test suite for identifying logical errors in the submitted code. Test cases are used to test the student submissions and learn classes of common errors. For each assignment, I train a classifier that automatically categorizes errors in a submission based on the outcome of the test suite. The instructor maps the errors to corresponding concepts and skills and writes hints to help students find their misconceptions and mistakes. I apply this methodology to two assignments in the Introduction to Computer Science course and find that the automatic error categorization has a 90% average accuracy.
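A stylized sketch of the error categorization step described in this abstract: represent each submission by its vector of test outcomes and assign the category of the nearest labeled outcome pattern. The outcome vectors, category names, and nearest-neighbour rule below are invented for illustration; the actual classifier and training data are the instructor's:

```python
# 1 = test passed, 0 = test failed; patterns and labels are hypothetical.
LABELED_OUTCOMES = [
    ((1, 1, 0, 0), "off-by-one in loop bound"),
    ((1, 0, 0, 0), "wrong base case"),
    ((0, 0, 0, 0), "function never returns a value"),
]

def categorize(outcome):
    """Assign the category of the closest labeled outcome vector,
    measured by Hamming distance over pass/fail results."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    _, label = min(LABELED_OUTCOMES, key=lambda kv: hamming(kv[0], outcome))
    return label

print(categorize((1, 1, 1, 0)))  # nearest labeled pattern is (1, 1, 0, 0)
```

The predicted category is then mapped to the instructor-authored hint that the student actually sees.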
I report and compare data from two semesters: one in which hints were given for the two assignments and one in which they were not. Results show that the percentage of students who successfully complete the assignments after an initial erroneous submission is three times greater when hints are given. However, on average, even when hints are provided, almost half of the students fail to correct their code so that it passes all the test cases. Using insights about student errors gained from applying the first approach, the second approach was designed to repair student programs and provide feedback on how to correct the program and on associated misconceptions. I start by compiling a set of program repairs that are commonly needed to fix student programs across assignments. The repair set consists of modifications to both the statements and the program structure. These repairs fix errors that point to misconceptions about program structuring and other programming concepts. Each repair is used to modify the behavior of the student program until it matches the expected behavior, measured using a test suite similar to that of the first approach. Additionally, I provide a search procedure for finding a set of repairs that corrects the behavior of a student program. I apply this approach to multiple assignments from the Introduction to Computer Science course and report a higher success rate than previous approaches, repairing as many as 20% of the attempted student programs for specific assignments. Manual analysis of the repaired programs reveals that the repairs do not introduce stylistic issues such as dead code. However, for some assignments this approach is successful for less than half of the attempted student programs.
More research is needed to understand how to provide accurate automated feedback for all student programs and all types of assignments, and how this type of feedback specifically impacts learning and teaching. Location: Remote via Zoom Committee: Thu D. Nguyen (thesis chair/director) He Zhu (co-advisor) Ulrich Kremer Craig Zilles (external member) Start Date: 09 Sep 2021; Start Time: 11:00AM - Title: Cyber-Physical Systems for Smart Cities Bio: Desheng Zhang is an Assistant Professor in the Department of Computer Science at Rutgers University and a Connection Science Fellow at MIT Media Lab. He is broadly interested in Mobile Sensing, Data Science, Ubiquitous Computing, and Cyber-Physical Systems (CPS), with a focus on sensing, prediction, and decision-making for cross-domain mobile systems including cellular networks, Wi-Fi networks, mobile payment, taxis, buses, subways, bikes, personal vehicles, electric vehicles, trucks, and social networks, with applications to Smart Cities, the Gig Economy, and Connected Vehicles. Desheng is the lead PI for 5 NSF projects studying various system properties and applications of CPS, from the equity of sociotechnical CPS to the adaptability of self-evolving CPS, the privacy of mobile CPS, the interoperability of vehicular CPS, and the safety of mobile CPS. His technical contributions have led to more than 100 papers in premium Computer Science venues, e.g., UbiComp, MobiCom, SIGCOMM, NSDI, SenSys, ICDE, KDD, SIGSPATIAL, IPSN, ICCPS, BigData, RTSS, ICDCS. He has been honored with 6 best paper/thesis/poster awards, including the recent Best Paper Award at ICCPS 2021. He is a recipient of the NSF CAREER Award. Details: https://www.cs.rutgers.edu/~dz220/ Speaker: Abstract: In this talk, I will introduce my group's work on the foundations and applications of Cyber-Physical Systems for Smart Cities.
For any city-scale system, obtaining the accurate real-time status of entities of interest (e.g., real-time locations of vehicles, devices, and users) is essential for downstream applications. However, the existing status sensing approaches in both industry and academia have limited scalability to city scale due to cost issues. Using city-scale on-demand gig delivery as a concrete use case, I will introduce how we obtain real-time gig worker status (e.g., locations and routes) in a cost-efficient manner to enable timely delivery and efficient order scheduling. Based on our collaboration with the Alibaba On-demand Delivery Platform, we design, deploy, and evaluate a nationwide sensing system called aBeacon, exploring a hybrid solution of hardware, software, and human participation. aBeacon detects and infers the status of more than 3 million workers in 364 cities in China via a sophisticated tradeoff between performance, cost, and privacy. I will share some lessons learned and insights from aBeacon's evolution from conception to design, deployment, validation, and operation. Finally, I will discuss a few new challenges and open problems in Human Cyber-Physical Systems. Location: Via Zoom Committee: Start Date: 10 Sep 2021; Start Time: 09:30AM - 11:30AM Title: Data-driven Human Behavior Learning and Prediction in Smart Cities Bio: Speaker: Abstract: Over the last decade, enormous amounts of data have been logged and collected with the development of infrastructure in smart cities, such as cellular networks and vehicular networks. Learning human behaviors by leveraging such large-scale data sets has become extremely popular due to the accessibility of data, which further reveals insights regarding user behavior patterns. Such insights are not only rich in business value but also helpful for improving system or service performance, which benefits the users in return.
For example, learning from data traffic logs in cellular networks helps enhance the resilience of cellular systems, e.g., better handling data traffic surges during crowded events, which in turn improves the user experience. Human behaviors can generally be revealed along two dimensions, i.e., offline physical mobility patterns and online data usage patterns, which can be captured by infrastructures such as cellular systems and vehicular systems. However, most existing works focus on learning from a single system, or approach the learning from the aggregate system level, missing the potential benefit of fusing information from heterogeneous systems or of integrating the aggregate system aspect with the individual user aspect. In this dissertation, building upon a comprehensive investigation of collected city-level data sets covering millions of users, we demonstrate human behavior learning and prediction through three concrete examples involving heterogeneous systems and both aggregate and individual aspects: 1) Behavior learning and prediction for traveling time: we investigate the traffic crowdedness level by designing EXIMIUS, a hybrid measurement framework with implicit sensing data from cellular signaling and explicit sensing data from vehicular GPS. We then present a comparative study of the two sensing sources and a data fusion-based crowdedness level predictor. 2) Behavior learning and prediction for cellular data traffic: we present a cellular data traffic prediction framework called CellPred, which combines behavioral modeling at the individual user level with aggregated modeling at the cell tower level. 3) Behavior learning and prediction for mobile WiFi usage: we provide a mobile WiFi usage predictor called MIMU, which addresses mobility regularity and access irregularity in the mobile WiFi setting. We implement and evaluate the three frameworks using real-world data collected from two Chinese cities, Shenzhen and Hefei.
Our data investigation and framework evaluation results provide insights for a broad range of human behavior learning and prediction work, which is beneficial to both the service providers and the involved users. Location: Remote via Zoom Committee: Prof. Desheng Zhang (advisor) Prof. Yongfeng Zhang Prof. Dong Deng Dr. Ruilin Liu (external member) Start Date: 13 Sep 2021; Start Time: 10:00AM - 11:30AM Title: AutoBraid: A Framework for Enabling Efficient Surface Communication in Quantum Computing Bio: Speaker: Abstract: Quantum computers can solve problems that are intractable even for the most powerful state-of-the-art classical computers. However, qubits are fickle and error-prone, so it is necessary to actively correct errors during the execution of a quantum circuit. Quantum error correction (QEC) codes have been proposed to ensure fault-tolerant computing. With QEC, one logical circuit is compiled into an encoded circuit. Most studies on quantum circuit compilation focus on NISQ devices, which have 10-100 qubits and are not fault-tolerant. In this paper, we focus on compilation for fault-tolerant quantum hardware. In particular, we focus on optimizing \emph{communication parallelism} for \emph{surface code} based QEC. Compilation for the surface code involves non-trivial geometric manipulation of a large lattice of entangled physical qubits, and communication between qubits is the major bottleneck. A two-qubit gate in the surface code is implemented as a virtual "pipe" in space-time called a braiding path. Braiding paths must be carefully routed to avoid congestion. We provide a framework for efficiently scheduling braiding paths. We discover that for quantum programs with a local parallelism pattern, our framework guarantees an optimal solution, while the previous greedy-heuristic-based solution cannot. Moreover, we propose an extension of the local parallelism analysis framework to address the communication bottleneck.
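The routing subproblem at the heart of this abstract can be illustrated with a toy sketch (the grid abstraction and all names below are hypothetical simplifications, not the AutoBraid algorithm): each two-qubit braid is routed as a shortest grid path that avoids cells already claimed by earlier braids.

```python
from collections import deque

def route_braid(grid_w, grid_h, src, dst, blocked):
    """BFS shortest path from src to dst on a grid, avoiding blocked cells.

    Toy stand-in for routing one braiding path on the surface-code lattice;
    returns the list of cells, or None if the endpoints cannot be connected.
    """
    if src in blocked or dst in blocked:
        return None
    prev = {src: None}
    q = deque([src])
    while q:
        x, y = q.popleft()
        if (x, y) == dst:
            path, cur = [], (x, y)
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < grid_w and 0 <= ny < grid_h \
                    and nxt not in blocked and nxt not in prev:
                prev[nxt] = (x, y)
                q.append(nxt)
    return None

def schedule_braids(grid_w, grid_h, requests):
    """Greedily route each requested braid in order; cells used by earlier
    braids block later ones. This captures congestion avoidance only in the
    crudest way -- the point of a real scheduler is to order and parallelize
    these routes far better than this naive first-come-first-served pass.
    """
    blocked, routes = set(), []
    for src, dst in requests:
        path = route_braid(grid_w, grid_h, src, dst, blocked - {src, dst})
        routes.append(path)
        if path:
            blocked.update(path)
    return routes
```

A real scheduler must also decide which braids can execute in the same time step; the local-parallelism analysis mentioned in the abstract is precisely about ordering such routes well.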
Our framework achieves orders-of-magnitude improvement after addressing the communication bottleneck. Location: Remote via Zoom Committee: Prof. Zheng Zhang (advisor) Prof. Mario Szegedy Prof. Yipeng Huang Prof. Sungjin Ahn Start Date: 13 Sep 2021; Start Time: 11:00AM - Title: How big is a pointer? Bio: Martin Farach-Colton is a professor of computer science at Rutgers University, where he works on pure and applied algorithms in I/O-efficient storage systems, streaming algorithms, and string matching. He was Founder and CTO at Tokutek, Inc., an enterprise database company, which was acquired by Percona in 2015. He has been a Member of Technical Staff at Bell Labs (1997-98) and was an early employee of Google, Inc. (2000-2002). Farach-Colton received his M.D. from Johns Hopkins and his Ph.D. from the University of Maryland. He is a Sloan and SIAM Fellow. Speaker: Abstract: There is an obvious lower bound of Omega(log n) bits needed to encode a pointer into a memory of size n. But is this really the smallest a pointer can be? We show that in many cases, pointers can be much smaller, which has implications for both the theory of succinct data structures and virtual memory systems. Location: Via Zoom Committee: Start Date: 14 Sep 2021; Start Time: 11:00AM - Title: Networks Capable of Change Bio: Jennifer Rexford is the Gordon Y.S. Wu Professor of Engineering and the Chair of Computer Science at Princeton University. Before joining Princeton in 2005, she worked for eight years at AT&T Labs--Research. Jennifer received her BSE degree in electrical engineering from Princeton University in 1991, and her PhD degree in electrical engineering and computer science from the University of Michigan in 1996. Her research focuses on computer networking. She is co-author of the book "Web Protocols and Practice" (Addison-Wesley, 2001) and co-editor of the book "She's an Engineer? Princeton Alumnae Reflect" (Princeton University, 1993).
Jennifer received the ACM Grace Murray Hopper Award for outstanding young computer professional, the ACM Athena Lecturer Award, the NCWIT Harrold and Notkin Research and Graduate Mentoring Award, the ACM SIGCOMM award for lifetime contributions, and the IEEE Internet Award. She is an ACM Fellow, an IEEE Fellow, and a member of the American Academy of Arts and Sciences, the National Academy of Engineering, and the National Academy of Sciences. Dr. Rexford's personal page Speaker: Abstract: The early designers of the Internet fostered tremendous innovation by leaving much of the network’s functionality to the programmable computers at its periphery. Unfortunately, the *inside* of the network has been much harder to change. Yet, changing the network is important to make the Internet more reliable, secure, performant, and cost-effective. The networking research community has struggled for many years to make networks more programmable. What has worked, what hasn't, and what lessons have we learned along the way? This talk offers my perspective on these questions through a 25-year retrospective of research on programmable networks, focusing on my own research experiences as well as reflections on major trends in the field. The talk advocates a sort of “ambitious pragmatism” that approaches an ambitious long-term goal (a programmable network infrastructure) through smaller, pragmatic steps while keeping an eye on the prize. Location: Via Zoom Committee: Start Date: 16 Sep 2021; Start Time: 09:30AM - 11:30AM Title: Towards Human-centered Recommender Systems Bio: Speaker: Abstract: Recently, there has been extensive interest in developing intelligent human-centered AI (artificial intelligence) systems that support human participation so as to facilitate cooperation between humans and machines. Recommender systems are a particularly pervasive form of AI system that can aid in decision-making in the face of ever-growing amounts of information.
Modern deep-learning-based recommender systems have made great strides in accuracy and effectiveness, but raise a number of important challenges: 1) How can we actively incorporate human participation into the decision-making procedure of recommender systems? 2) How can we ensure that explanations are provided such that users can better understand why particular items are being recommended? 3) How can we alleviate biases in recommender systems? This thesis proposes several novel methods to fill these gaps. In particular, for improved human understanding, it introduces an adversarial semantic learning framework for cross-lingual settings. For human integration, a human-in-the-loop conversational recommender system with external graph structure is introduced. To ensure fair explanations, we mitigate the unfairness within graph-based explainable reasoning in the recommender system. Finally, for human-system cooperation, we present a popularity debiasing framework that integrates user interaction and debiased dialogue state management in a conversational recommender system. We not only extensively evaluate our proposed approaches on multiple real-world recommendation datasets, but also contribute open public datasets to the community. The experimental results demonstrate the effectiveness of the proposed methods in achieving satisfactory prediction accuracy, mitigating bias, and providing users with faithful, understandable explanations. Location: Remote via Zoom Committee: Prof. Gerard de Melo (Advisor) Prof. Yongfeng Zhang Prof. Hao Wang Prof. Qingyao Ai (External member) Start Date: 20 Sep 2021; Start Time: 03:00PM - 04:30PM Title: Learning Transferable Reward for Query Object Localization with Policy Adaptation Bio: Speaker: Abstract: We propose a reinforcement-learning-based approach to the problem of query object localization, where an agent is trained to localize objects of interest specified by a small exemplary set.
We learn a transferable reward signal formulated using the exemplary set via ordinal metric learning. This enables test-time policy adaptation to new environments where reward signals are not readily available, and thus outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows the trained agent to be repurposed for new tasks, such as annotation refinement or selective localization from multiple common objects across a set of images. Experiments on the corrupted MNIST and CU-Birds datasets demonstrate the effectiveness of our approach. Location: Remote via Zoom Committee: Prof. Dimitris Metaxas (advisor) Prof. Hao Wang Prof. Konstantinos Michmizos Prof. Badri Nath Start Date: 12 Oct 2021; Start Time: 11:00AM - Title: Understanding Human Memory as a Constrained Optimization Problem Bio: Dr. Zhang is an Assistant Professor in the Department of Psychology and is also affiliated with the Department of Computer Science and the Rutgers Center for Cognitive Science. Dr. Zhang received her Ph.D. in 2019 from Carnegie Mellon University, jointly in the Machine Learning Department of the School of Computer Science and the Center for the Neural Basis of Cognition. She completed additional postdoctoral training in the Princeton Neuroscience Institute before joining Rutgers. She was a visiting researcher in the Department of Artificial Intelligence at the University of Groningen in 2016 and interned as a research scientist at Facebook Reality Labs in 2018. Speaker: Abstract: How can methods from machine learning be brought to bear on cognitive psychology’s classic questions about mental representations and cognitive processes? The literature on human memory has primarily focused on identifying a set of behavioral patterns that are consistently shown across experiments, but with relatively little concern for why humans demonstrate these patterns in the first place.
In contrast, my research explains human memory behavior by understanding how close it is to optimal behavior, adapting the theory of rational analysis from the psychology literature. This creates an opportunity to draw inspiration from the best examples of intelligent systems in computer science. Humans live a resource-constrained existence. We can only process a fraction of our experiences at one time, and we store a shockingly small subset of these experiences in memory for later use. Properly identifying optimal behavior requires taking into account the limitations and constraints of human cognition. In this talk, I will discuss a few directions I have taken to understand the solutions to different constrained-optimization memory problems, the evidence for them in human empirical data, and how we can leverage the optimal solutions as guidance in developing technology to improve human memory. Location: Via Zoom Committee: Start Date: 18 Oct 2021; Start Time: 12:30PM - 02:00PM Title: Improving the Efficiency of Kinodynamic Planning with Machine Learning Bio: Speaker: Abstract: Kinodynamic motion planning is characterized by the lack of a steering function (i.e., a local planner) for the underlying robotic system. This talk will first survey efforts to improve the efficiency of state-of-the-art, sampling-based kinodynamic planners through data-driven methods. It will then present a proposed direction for improving the path quality and computational efficiency of such planners when applied to vehicular navigation. Given a black-box dynamics model for the vehicle, a reinforcement learning process is trained offline to return a low-cost control that reaches a local goal state (i.e., a waypoint) in the absence of obstacles. Because this process focuses on the system's dynamics rather than any particular environment, it is data-efficient, takes place once per robotic system, and can be reused in different environments.
Then, the proposed sampling-based planner generates online local goal states for the learned controller in an informed manner, biasing the search towards quickly finding a high-quality solution trajectory. The planner also maintains an exploratory behavior for cases where the guidance from the machine learning process is not effective. The results show that the proposed integration of learning and planning can produce higher-quality paths than standard sampling-based kinodynamic planning with random controls, in fewer iterations and less computation time. Location: 1 Spring Street, Room SPR-319 & via Zoom Committee: Prof. Kostas Bekris (Chair) Prof. Abdeslam Boularias Prof. Jingjin Yu Prof. Amélie Marian Start Date: 19 Oct 2021; Start Time: 11:00AM - 12:30PM Title: Fairness and Bias in Algorithmic Decision-Making Bio: Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the National Academy of Sciences and National Academy of Engineering, and the recipient of MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, and the ACM Prize in Computing. Speaker: Abstract: As algorithms trained via machine learning are increasingly used as a component of screening decisions in areas such as hiring, lending, and education, discussion in the public sphere has turned to the question of what it means for algorithmic classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates, and discuss recent research on trade-offs and interventions through the lens of these conditions.
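The trade-offs among fairness conditions can be made concrete with a tiny synthetic example (all numbers invented for illustration; this is a degenerate extreme of the calibration-versus-error-rate tension, not an example from the talk): a scorer that is perfectly calibrated within each group, applied to two groups with different base rates, can produce wildly unequal false-negative rates.

```python
# Two groups with different base rates of the true outcome
# (all numbers are invented for illustration).
groups = {
    "A": {"n": 100, "base_rate": 0.5},  # 50 of 100 are true positives
    "B": {"n": 100, "base_rate": 0.2},  # 20 of 100 are true positives
}

def score(group):
    """A maximally uninformative but perfectly *calibrated* scorer: everyone
    receives their group's base rate as a score, so among people with score
    s, exactly a fraction s are true positives."""
    return groups[group]["base_rate"]

THRESHOLD = 0.3  # flag as positive when score >= 0.3

def false_negative_rate(group):
    """Fraction of the group's true positives the thresholded scorer misses."""
    flagged = score(group) >= THRESHOLD
    # Everyone in a group shares one score, so either every true positive
    # is caught (FNR = 0) or every true positive is missed (FNR = 1).
    return 0.0 if flagged else 1.0
```

Here group A's true positives are all caught while group B's are all missed, even though the scorer is calibrated within both groups; the Kleinberg-Mullainathan-Raghavan result shows that, outside degenerate cases, no classifier can satisfy calibration and equal error rates across groups whose base rates differ.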
We also explore how the complexity of a classification rule interacts with its fairness properties, showing how natural ways of approximating a classifier via a simpler rule can lead to unintended biases in the outcome. The talk is based on joint work with Jens Ludwig, Sendhil Mullainathan, Manish Raghavan, and Cass Sunstein. Location: Via Zoom Committee: Start Date: 19 Oct 2021; Start Time: 02:00PM - 03:30PM Title: Picture-to-Amount (PITA): Predicting Relative Ingredient Amounts from Food Images Bio: Speaker: Abstract: Increased awareness of the impact of food consumption on health and lifestyle has given rise to novel data-driven food analysis systems. Although these systems may recognize the ingredients, a detailed analysis of their amounts in the meal, which is paramount for estimating correct nutrition, is usually ignored. Here, we study the novel and challenging problem of predicting the relative amount of each ingredient from a food image. We propose PITA, a Picture-to-Amount deep learning architecture that solves this problem. More specifically, we predict the ingredient amounts using a domain-driven Wasserstein loss from image-to-recipe cross-modal embeddings learned to align the two views of food data. Experiments on a dataset of recipes collected from the Internet show that the model generates promising results and improves on the baselines for this challenging task. Location: Remote via Zoom Committee: Prof. Vladimir Pavlovic (Chair) Prof. Dimitris Metaxas Prof. Sungjin Ahn Prof. David Pennock Start Date: 21 Oct 2021; Start Time: 11:00AM - Title: How to identify anomalies accurately and privately Bio: Jaideep Vaidya is a Professor in the MSIS Department at Rutgers University. He received the B.E. degree in Computer Engineering from the University of Mumbai, and the M.S. and Ph.D. degrees in Computer Science from Purdue University. His general area of research is in security, privacy, data mining, and data management.
He has published over 190 technical papers in peer-reviewed journals and conference proceedings, and has received several best paper awards from premier conferences in data mining, databases, digital government, security, and informatics. He is an ACM Distinguished Scientist and IEEE Fellow, and is the Editor-in-Chief of the IEEE Transactions on Dependable and Secure Computing. Speaker: Abstract: In the current digital age, data is continually being collected by organizations and governments alike. While the goal is to use this data to derive insight and improve services, the ubiquitous collection and analysis of data creates a threat to privacy. In this talk, we examine the problem of private anomaly identification. Anomaly detection is one of the most fundamental data analysis tasks, and is useful in applications ranging from homeland security to medical informatics to financial fraud. However, many applications of outlier detection, such as detecting suspicious behavior for counter-terrorism or anti-fraud purposes, also raise privacy concerns. We conclusively demonstrate that differential privacy (the de facto model for privacy used today) is inherently incapable of solving this problem. We then present a new notion of privacy, called Sensitive Privacy, that protects the vast majority of records that are or could be normal, while still enabling accurate identification of records that are anomalous. Given the widespread impact of COVID-19, we also present some results from a recent NSF-funded effort to perform privacy-preserving crowdsensing of COVID-19, in the context of hotspot detection. Location: Via Zoom Committee: Start Date: 26 Oct 2021; Start Time: 11:00AM - Title: Tensor-structured dictionary learning: theory and algorithms Bio: Anand D. Sarwate is an Associate Professor in the Department of Electrical and Computer Engineering at Rutgers, the State University of New Jersey. He received a B.S.
degree in Electrical Science and Engineering and a B.S. degree in Mathematics from MIT in 2002, an M.S. in Electrical Engineering from UC Berkeley in 2005, and a Ph.D. in Electrical Engineering from UC Berkeley in 2008. Prof. Sarwate received the NSF CAREER award in 2015, and the Rutgers Board of Governors Research Fellowship for Scholarly Excellence in 2020. His interests are in information theory, machine learning, and signal processing, with applications to distributed systems, privacy and security, and biomedical research. Speaker: Abstract: Existing and emerging sensing technologies produce multidimensional, or tensor, data. The traditional approach to handling such data is to “flatten,” or vectorize, the data. It is well known that this ignores the multidimensional structure, and that this structure can be exploited to improve performance. In this talk we study the problem of dictionary learning for tensor data, where we want to learn an efficient representation with low rank. A naïve approach to this problem would fail due to the large number of parameters to estimate in the dictionary. However, by looking at dictionaries that admit a low-rank factorization, the problem becomes tractable. We characterize how hard the statistical problem of estimating these dictionaries is and provide novel algorithms for learning them. Joint work with Z. Shakeri (EA), M. Ghassemi (JP Morgan Research), and W.U. Bajwa (Rutgers) Location: Via Zoom Committee: Start Date: 03 Nov 2021; Start Time: 11:00AM - 01:00PM Title: Towards Efficient and Reliable Skeleton-Based Human Pose Modeling Bio: Speaker: Abstract: Understanding human behaviors with deep neural networks has been a central task in computer vision due to its wide application in our daily lives. Existing studies have explored various modalities for learning powerful feature representations of human poses, such as RGB frames, optical flows, depth images, and human skeletons.
Among them, skeleton-based pose representation has received increasing attention in recent years thanks to its action-focused nature, compactness, and domain-invariant property. However, prevalent skeleton-based algorithms are typically not only inefficient in network parameters or training data, but also unreliable in human action forecasting problems. In this dissertation, we explore the benefits and challenges of skeleton-based human action modeling and offer novel solutions to achieve efficient and reliable model performance in human action estimation, recognition, and generation tasks. In the first part of this dissertation, we tackle the problem of model as well as data efficiency in human pose understanding. Given the meaningful topological structure carried by human skeletons, we show that capturing the relationships between joints in the skeleton of a human body with graph neural networks leads to an efficient network architecture that outperforms the state of the art while using 90% fewer parameters. Then we present a novel representation learning method to disentangle pose-dependent as well as view-dependent factors from human poses based on mutual information maximization. Empirically, we show that the resulting pose representations can be used in different action recognition scenarios where training data are limited. In the second part of this dissertation, we explore structure-driven paradigms to make long-term predictions of future human actions by explicitly using skeletons as structural conditions. Such hierarchical strategies are built upon multi-stage generative adversarial networks, and typically lead to more robust and reliable predictions than previous appearance-driven ones. To avoid the inherent compounding errors of recursive pixel-level prediction, we first estimate the high-level structure in the input frames and then predict how that structure evolves in the future.
Through developing specialized network architectures, we are able to capture the high-level structure of actions efficiently while preserving temporal coherence, thereby benefiting long-term future forecasting. Location: Via Zoom Committee: Prof. Dimitris N. Metaxas (Advisor) Prof. Mubbasir Kapadia Prof. Hao Wang Prof. Xiaolei Huang (External member, Penn State University) Start Date: 16 Nov 2021; Start Time: 11:00AM - 12:15PM Title: Learning and Using Knowledge for Text Bio: Karl Stratos is an assistant professor in the Computer Science Department at Rutgers University. His research centers on unsupervised representation learning and knowledge-intensive language processing. He completed a PhD in computer science from Columbia University in 2016, where he was advised by Michael Collins and also worked closely with Daniel Hsu. After his PhD, he was a senior research scientist at Bloomberg LP (2016-2017) and a research assistant professor at the Toyota Technological Institute at Chicago (2017-2019). Speaker: Abstract: Two key problems in automatic text understanding are (1) how to learn high-level representations that capture useful knowledge from noisy unlabeled data, and conversely (2) how to use existing knowledge resources to robustly handle unknown facts. In this talk, I will present our recent work along these two thrusts. The first is AMMI, a general framework for learning discrete structured latent variables from noisy signals by adversarially maximizing mutual information (ICML 2020). The second is a theoretical and empirical investigation of the use of "hard" negative examples in noise contrastive estimation (NAACL 2021). The third is EntQA, a new paradigm for entity linking that reduces the task to inverse open-domain question answering and fundamentally solves the dilemma of having to predict mentions without knowing their entities (under review).
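As generic background for the noise-contrastive-estimation thread above (a standard softmax-based NCE sketch, not necessarily the paper's exact formulation): the loss scores one positive example against sampled negatives, and "hard" negatives are simply negatives the current model scores close to the positive.

```python
import math

def nce_softmax_loss(pos_score, neg_scores):
    """Negative log-probability of the positive under a softmax over the
    positive's score and the sampled negatives' scores (InfoNCE-style)."""
    scores = [pos_score] + list(neg_scores)
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[0] / sum(exps))

# Negatives scored far below the positive contribute little to the loss;
# hard negatives scored close to the positive yield a much larger loss,
# and hence a stronger training signal.
easy = nce_softmax_loss(5.0, [0.0, 0.1, -1.0])
hard = nce_softmax_loss(5.0, [4.9, 4.8, 4.7])
```

The gap between `easy` and `hard` is the intuition for why hard-negative mining matters: uninformative negatives leave almost nothing for the model to learn from.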
We achieve new state-of-the-art results in document hashing, zero-shot entity retrieval, and entity linking on numerous datasets. Location: Via Zoom Committee: Start Date: 23 Nov 2021; Start Time: 11:00AM - Title: Rethinking Modern Storage and Memory Management Bio: Sudarsun Kannan is an assistant professor at Rutgers University, where he leads the Rutgers Systems Lab. His research group works at the intersection of hardware and software, building operating systems and system software for next-generation memory and storage technologies. Results from his work have appeared at premier operating systems and architecture venues. Sudarsun's work has also resulted in patents related to nonvolatile memory and resource management. Before joining Rutgers, he was a postdoctoral research associate at Wisconsin-Madison and graduated with an M.S. and Ph.D. from Georgia Tech. Speaker: Abstract: The last decade has seen rapid hardware innovation to develop ultra-fast and heterogeneous storage and memory technologies critical for accelerating data-intensive and intelligent applications. Unfortunately, today's monolithic operating systems (OS), the backbone for managing storage and memory heterogeneity, continue to be an Achilles heel and fail to exploit these hardware innovations. In the first part of the talk, I will discuss our approach to designing scalable storage solutions through a cross-layered disaggregation of software layers across the host and the storage hardware (OSDI '20). I will also briefly outline techniques to improve storage reliability and accelerate data recovery after failures (FAST '21). In the second part of the talk, I will focus on the need for new OS principles and abstractions for managing memory heterogeneity and their resulting impact on applications (ASPLOS '21). Finally, I will conclude the talk by outlining unsolved challenges and prospective future directions.
Location: Via Zoom Committee: Start Date: 30 Nov 2021; Start Time: 11:00AM - 12:30PM Title: Processing Massive Datasets via Sublinear Algorithms: New Challenges and Opportunities Bio: Sepehr Assadi is an Assistant Professor of Computer Science at Rutgers University. His primary research interests are in the theoretical foundations of processing massive datasets, and in particular streaming and sublinear algorithms and lower bounds for massive graph problems. He received a Ph.D. in Computer Science from the University of Pennsylvania in 2018 and spent a year as a postdoctoral researcher at Princeton University before joining Rutgers. Sepehr is a recipient of the NSF CAREER award, a Google Research Scholar award, the EATCS Distinguished Dissertation Award, the ACM-EATCS Principles of Distributed Computing Dissertation Award, the Rubinoff Dissertation Award, and several best paper awards at theoretical computer science conferences including SODA, SPAA, and DISC. Speaker: Abstract: With the emergence of massive datasets across various application domains, there is a rapidly growing need to design algorithms that can process these massive inputs efficiently. The challenges of data processing at this scale are inherently different from those of traditional algorithm design. For instance, the sheer size of these datasets no longer allows one to assume fast random access to the input, or even to store the entire input in one place to begin with. In this talk, I will give an overview of my research on developing algorithms that follow the rigorous standards of traditional algorithm design while explicitly accounting for the practical challenges of processing massive inputs. As an illustrative example, I will focus on my research on algorithms for graph coloring problems over massive graphs. Graph coloring is one of the most fundamental problems in graph theory, with a wide range of applications in computer science.
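For context, the classical offline baseline is simple (a textbook greedy sketch, not the speaker's sublinear algorithms): any graph with maximum degree Δ can be properly colored with at most Δ + 1 colors in one pass over the vertices, provided the whole graph fits in memory. The regime the talk targets is precisely when it does not.

```python
def greedy_coloring(adj):
    """Assign each vertex the smallest color not used by an already-colored
    neighbor; this never needs more than max_degree + 1 colors.

    adj: dict mapping each vertex to an iterable of its neighbors.
    """
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 5-cycle has maximum degree 2, so at most 3 colors are ever used.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
coloring = greedy_coloring(cycle5)
```

Streaming and sublinear models forbid exactly what this sketch relies on: the dict of all edges and the set of all previously assigned colors.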
I will describe my earlier work obtaining the first set of efficient algorithms for coloring massive graphs across multiple computational models in a unified way. I will then talk about my recent work exploring the inherent limitations of graph coloring on massive graphs, as well as surprising connections established in this line of work to problems beyond graph coloring, such as correlation clustering. Location: Via Zoom Committee: Start Date: 07 Dec 2021; Start Time: 11:00AM - 12:30PM Title: Leveraging kernel extensions for safe and efficient networking Bio: Srinivas Narayana is an Assistant Professor in the Department of Computer Science at Rutgers University. His research goal is to enable developers to implement novel and flexible packet-processing applications with guarantees of safety and high performance. To achieve this goal, he applies compiler and formal methods technology in domain-specific ways to network software and hardware. Srinivas received his M.A./Ph.D. in Computer Science from Princeton University in 2016 and a B.Tech from the Indian Institute of Technology Madras in 2010. He completed a post-doc at the Massachusetts Institute of Technology in 2018. Srinivas's research has been recognized with the best paper award at the 2017 ACM SIGCOMM conference, a Facebook research award, and grants from the National Science Foundation and the Network Programming Institute. Speaker: Abstract: Extended Berkeley Packet Filter (eBPF) is a mechanism that emerged recently in Linux to extend the functionality of the operating system kernel. eBPF allows users to load their code into the kernel and execute it at specific points of attachment within the kernel---for example, the network device driver. To mitigate security risks from running untrusted user code, the kernel implements software fault isolation through static program analysis, encapsulated in an in-kernel component called the verifier.
Due to its trifecta of flexibility, safety, and performance in a familiar Linux environment, eBPF code is already widely deployed on production systems. However, programming networking applications in eBPF presents several new challenges. Optimizing compilers need to incorporate safety into their optimizations. Developers must wrestle with the arcane rules of the verifier to prove the safety of their code. The verifier's static analysis algorithms have contained critical bugs, with disastrous security consequences. This talk will cover my recent work (SIGCOMM '21, CGO '22) that addresses some of these challenges by combining networking with formal methods. Joint work with Qiongwen Xu, Harishankar Vishwanathan, Michael Wong, Tanvi Wagle, Anirudh Sivaraman, Matan Shachnai, and Santosh Nagarakatte. Location: Via Zoom Committee: Start Date: 08 Dec 2021; Start Time: 09:00AM - Title: An integrated platform for joint simulation of occupant-building interactions Bio: Speaker: Abstract: Several approaches exist for simulating building properties (e.g. temperature, noise) and human occupancy (e.g. movement, actions) in an isolated fashion, providing limited ability to represent how environmental features affect human behavior and vice versa. To systematically model building-occupant interactions, several requirements must be met, including the modelling of (a) interdependent multi-domain phenomena ranging from temperature and sound changes to human movement, (b) high-level occupant planning and low-level steering behaviors, (c) environmental and occupancy phenomena that unfold at different time scales, and (d) multiple strategies to represent occupancy using established models. In this work, we propose an integrated platform that satisfies the aforementioned requirements, thus enabling the joint simulation of building-occupant interactions.
To this end, we combine the benefits of a model-independent, discrete-event, general-purpose framework with an established crowd simulator. Our platform provides insights on a building’s performance while accounting for alternative design features and modelling strategies. Location: Via Zoom Committee: Prof. Mubbasir Kapadia Prof. Vladimir Pavlovic Prof. Mridul Aanjaneya Prof. Eric Allender Start Date: 08 Dec 2021; Start Time: 09:00AM - 10:00AM Title: Unsupervised Learning of Structured Representation of the World Bio: Speaker: Abstract: Intelligent agents would like to build representations of the world that are well suited for planning actions and achieving goals in the physical world. Because environments are diverse and always changing, providing supervision to train representation learning systems makes them hard to scale and ultimately unviable. In this work, we explore unsupervised representation learning methods, based on variational auto-encoders, that do not simply return a monolithic compression of the input observation but rather provide a significantly richer representation in the following respects: (i) our encoder, taking only a few observations of a 3D scene as input, produces a representation that contains information about the full 3D scene. (ii) It organizes the representation as a set of object vectors and decomposes these vectors further into 'what' and 'where', thus opening the possibility for symbolic processing downstream. (iii) As the agent observes a dynamic environment, our representation method, using past observations and the learned knowledge of the dynamics, infers the state of invisible objects and unseen viewpoints of the scene. (iv) Lastly, our representation provides, not just a point estimate of the inferred environment state, but rather a belief state, and thus embraces the randomness of the physical world. In achieving these characteristics, we develop novel models that advance the state of the art.
To do this, we build on existing approaches for neural modeling of stochastic processes, recurrent state-space modeling, sequential Monte Carlo, and unsupervised object-centric representation learning. Location: Via Zoom Committee: Hao Wang Karl Stratos Mario Szegedy (assigned by the department) Sungjin Ahn (Advisor) Start Date: 08 Dec 2021; Start Time: 10:00AM - 12:00PM Title: Scenario Generalization and its Estimation in Data-driven Decentralized Crowd Modeling Bio: Speaker: Abstract: In the context of crowd modeling, we propose the notion of scenario generalization, which is a macroscopic view of the performance of a decentralized crowd model. Based on this notion, we first aim to answer the question of how a training paradigm and a training domain (source) affect the scenario generalization of an imitation learning model when applied to a different test domain (target). We evaluate the exact scenario generalizations of models built on combinations of imitation learning paradigms and source domains. Our empirical results suggest that (i) Behavior Cloning (BC) is better than Generative Adversarial Imitation Learning (GAIL), and (ii) training samples in a source domain with diverse agent-agent and agent-obstacle interactions are beneficial for reducing collisions when generalizing to new scenarios. Secondly, we note that although the exact evaluation of scenario generalization is accurate, it requires training and evaluation on large datasets, coupled with complex model selection and parameter tuning. To circumvent this challenge, we propose an information-theoretically inspired approach that estimates scenario generalization without training by characterizing both the source and the target domains. It estimates the Interaction Score (IS), which captures the task-level inter-agent interaction difficulty of the target scenario domain. 
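The flavor of such a training-free estimate can be sketched in a few lines. The two formulas below, an entropy-based source diversity term and a proximity-based target difficulty term, are illustrative stand-ins only, not the actual IS/DQ definitions from the work:

```python
import math
from collections import Counter

# Hypothetical sketch: score the source domain by its diversity and the
# target domain by its interaction difficulty, without training any model.
def diversity_quantification(interaction_types):
    """Shannon entropy of the source's interaction-type histogram."""
    counts = Counter(interaction_types)
    n = len(interaction_types)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def interaction_score(pairwise_distances, radius=2.0):
    """Fraction of agent pairs close enough to interact in the target."""
    close = sum(1 for d in pairwise_distances if d < radius)
    return close / len(pairwise_distances)

source = ["agent-agent", "agent-obstacle", "agent-agent", "agent-goal"]
target = [0.5, 1.8, 3.2, 4.0, 1.1]   # illustrative pairwise distances
combined = interaction_score(target) * diversity_quantification(source)
```

The appeal of any such score is that it is computed directly from scenario statistics, so candidate source-target pairs can be ranked before any training run.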
When augmented with Diversity Quantification (DQ) on the source, the combined ISDQ score offers a means of estimating the source-to-target generalization of potential models. Various experiments verify the efficacy of ISDQ in estimating scenario generalization, compared with the exact scenario generalizations of models trained with imitation learning paradigms (BC, GAIL) and a reinforcement learning paradigm (proximal policy optimization, PPO). Thus, it would enable rapid selection of the best source-target domain pair among multiple possible choices prior to training and testing of the actual crowd model. Location: Via Zoom Committee: Prof. Petros Faloutsos (York University) Prof. Yongfeng Zhang Prof. Mubbasir Kapadia Prof. Vladimir Pavlovic (chair) Start Date: 08 Dec 2021; Start Time: 04:00PM - 06:00PM Title: Learning of Networks Dynamics: Mobility, Diffusion, and Evolution Bio: Speaker: Abstract: Within network science and network theory, traditional social network analysis is mainly based on static networks. Network dynamics analysis takes interactions of social features and temporal information into account, and is an emerging scientific field. Unlike traditional social network analysis, it considers larger, dynamic, multi-mode, multiplex networks that may contain varying levels of uncertainty. Due to the heterogeneity of networks, agent-based modeling and other forms of simulation are often used to explore how networks evolve and adapt, as well as the impact of interventions on those networks. In this thesis, we discuss two aspects of this topic. In the first aspect, we consider mobility properties in social networks, i.e., human trajectories. A human trajectory is a sequence of spatial-temporal data from an individual. Mining human trajectories can help us improve many real-world applications. However, the increasing size of human trajectory datasets poses greater challenges to analysis. 
At the same time, we still lack a comprehensive understanding of the relationships among human trajectories, such as their commonality and individuality. Thus, we focus on statistical tools to measure the similarity between human trajectories. Unlike traditional geometric similarity measures, such as the Hausdorff distance and the Fréchet distance, we propose three novel partial similarity measures that are better suited to human trajectories. They can also reduce storage requirements and save computation time. Based on these similarity measures, we design a more advanced unsupervised clustering algorithm, integrated with the conformal prediction framework. It outperforms classical clustering algorithms on varied trajectory datasets. In the second aspect, we investigate two problems concerning social interactions in social networks. For the first problem, previous work mainly focused on the speed of information spreading in static social networks, which can be regarded as a diffusion phenomenon. However, once mobility is taken into consideration, social networks become dynamic and interactions between individuals unfold over time. The heterogeneity of mobility plays an important role in the diffusion process. We utilize a human trajectory dataset to simulate the physical interactions among individuals and observe this diffusion process. We also investigate the impact of targeted interventions on these networks. The second problem concerns the process of opinion evolution in social networks. We propose a co-evolution model, coupling opinion dynamics and social tie dynamics, to investigate the community structure and structural balance of the final state, with rigorous theoretical analysis. Simulations are also used to validate the community formation process. Location: Via Zoom Committee: Prof. Jie Gao (advisor, chair) Prof. 
Hao Wang Prof. Peng Zhang Prof. Feng Luo Prof. Joseph S.B. Mitchell (external member, Stony Brook University) Start Date: 10 Dec 2021; Start Time: 01:00PM - Title: Toward Structured Plan Exploration for Multi-Object Rearrangement Bio: Speaker: Abstract: With practical applications in both industrial automation and home automation, object rearrangement is a topic of interest in the broader area of task and motion planning. In this problem, a robot arm moves objects one at a time from current positions to specified goal positions. To avoid collisions, some objects need to be moved before others or even be temporarily displaced to open space. Therefore, to compute feasible rearrangement plans, one needs to not only determine the ordering of object manipulations but also allocate clearings for temporary object displacements. In this talk, we investigate this problem with different objectives. We first present structural properties of the problem, which include upper and lower bounds on these objectives. Second, we introduce fast algorithms for object rearrangement in different setups. Extensive experiments show that our algorithms outperform state-of-the-art methods in both computation time and solution quality under various practical scenarios. Location: Via Zoom Committee: Jingjin Yu Kostas Bekris Abdeslam Boularias Aaron Bernstein Start Date: 14 Dec 2021; Start Time: 03:00PM - Title: Scalable machine learning algorithms for multi-armed bandits and correlation clustering Bio: Speaker: Abstract: Massive datasets appear frequently in modern machine learning applications. This situation poses a pressing challenge: designing provably efficient large-scale learning algorithms. One of the most promising directions to tackle this challenge is to explore sublinear algorithms for machine learning, i.e., algorithms that can work with resources substantially smaller than the size of the input they operate on. 
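The single-arm-memory idea from the bandits result can be caricatured in a few lines: stream over the arms, keep only a running champion, and compare each arriving arm against it by sampling. The fixed per-arm sample budget below is a simplification (the actual algorithm uses an adaptive sampling schedule to achieve the optimal sample complexity), and all names are illustrative:

```python
import random

# Sketch of a single-pass, single-arm-memory best-arm identification scheme.
# `pull(i)` returns one reward sample from arm i.
def streaming_best_arm(pull, n_arms, samples_per_arm=200):
    champion = None                      # (index, empirical mean): the only arm we store
    for i in range(n_arms):
        est = sum(pull(i) for _ in range(samples_per_arm)) / samples_per_arm
        if champion is None or est > champion[1]:
            champion = (i, est)
    return champion[0]

# Example with Bernoulli arms; the means are illustrative.
means = [0.3, 0.5, 0.9, 0.4]
rng = random.Random(0)
best = streaming_best_arm(lambda i: rng.random() < means[i], len(means))
```

The point of the space analysis is that the memory footprint above is independent of the number of arms, which is exactly the surprise in the stated result.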
In this talk, I will discuss sublinear machine learning algorithms for two well-known problems, namely multi-armed bandits (MABs) and correlation clustering. For the multi-armed bandits problem, I will discuss space efficiency for best-arm identification under the streaming model. The optimal sample complexity to identify the best arm has been known for almost two decades. However, not much was known about the space complexity of these algorithms. To study the space complexity, we introduced the streaming multi-armed bandits model. We proved that, perhaps surprisingly, there exists an algorithm that achieves the asymptotically optimal sample complexity while using only the memory of a single arm. More recently, we extended these results by showing several lower bounds that characterize the assumptions necessary for instance-sensitive sample complexity in streaming MABs. For the correlation clustering problem, I will discuss recent results on sublinear time and space algorithms. This, in particular, includes algorithms that can solve the problem quadratically faster than even reading the input -- assuming standard query access to the data -- or, using a single pass over the data stream, with quadratically smaller space than needed to store the input. Location: Via Zoom Committee: Prof. Sepehr Assadi Prof. Jie Gao Prof. Peng Zhang Prof. Mubbasir Kapadia Start Date: 16 Dec 2021; Start Time: 09:00AM - Title: Predicting Long-Term Crowd Flow in Built Environments Bio: Speaker: Abstract: Predicting the concerted movements of crowds in built environments, whether at the scale of a house or a campus with several buildings, is a key requirement for crowd and disaster management, architectural design, and urban planning. It is particularly important to study this movement with large crowds and environments over long time frames to understand the influence of the environment on the crowd, e.g. in terms of congestion. 
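The fixed-size representation idea from the crowd-flow abstract can be sketched by rasterizing agent positions onto a fixed-resolution occupancy grid, regardless of the environment's physical dimensions. The grid size and normalization below are illustrative choices, not the work's actual encoding:

```python
import numpy as np

# Sketch: map agent positions from an arbitrarily sized environment onto a
# fixed grid x grid density map suitable as CNN input.
def rasterize(positions, env_w, env_h, grid=32):
    img = np.zeros((grid, grid), dtype=np.float32)
    for x, y in positions:
        cx = min(int(x / env_w * grid), grid - 1)
        cy = min(int(y / env_h * grid), grid - 1)
        img[cy, cx] += 1.0
    return img / max(len(positions), 1)   # normalize to a density map

img = rasterize([(1.0, 1.0), (99.0, 49.0)], env_w=100.0, env_h=50.0)
```

Because every scenario becomes the same tensor shape, one convolutional architecture can be trained across environments of widely varying spatial extent.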
However, this comes at a prohibitive logistical cost when studying real humans and a prohibitive computational cost when studying simulations, neither of which meets the demands of practitioners. We therefore propose the first framework for instantaneously forecasting the full movement of a human/agent crowd using solely the initial state of the crowd and environment. Since the spatial dimensions of environments can vary widely, we have developed fixed-size crowd scenario representations to effectively direct initial state information into convolutional neural network architectures. Experimental results indicate that after training models exclusively on synthetic data, the models generalize to never-before-seen real environments and crowds. Location: Via Zoom Committee: Prof. Mubbasir Kapadia (advisor) Prof. Vladimir Pavlovic Prof. Dimitris Metaxas Prof. David Pennock Start Date: 20 Dec 2021; Start Time: 12:00PM - 02:00PM Title: A GPU Binary Analysis Framework for Memory Performance and Safety Bio: Speaker: Abstract: General-Purpose Graphics Processing Units (GPUs) have attained popularity for their massive concurrency. But given their relative infancy, as well as the prevalence of closed-source and proprietary technology, GPU software has not undergone the degree of optimization that CPU software has. In this work, we focus on NVIDIA's GPUs and the CUDA framework, but our techniques may be generalized to other GPU platforms. We develop a compiler which targets the low-level SASS assembly, rather than the high-level source code or the intermediate-level PTX assembly. This allows for a degree of tuning and modification not possible at higher levels. We reconstruct program information such as the control-flow graph and call graph, thereby permitting data-flow analysis. Thanks to extensive reverse engineering, our compiler retains compatibility with numerous versions of the CUDA framework and several generations of NVIDIA GPUs. 
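Reconstructing a control-flow graph from a linear instruction stream follows the classic leader-based basic-block algorithm; here is a sketch over a toy ISA (the instruction encoding is invented for illustration and is not SASS):

```python
# Toy ISA: each instruction is a tuple; ("bra", t) is a conditional branch to
# instruction index t, ("exit",) ends the program, anything else falls through.
# Basic blocks start at "leaders": the entry point, every branch target, and
# the instruction after a branch.
def build_cfg(instrs):
    leaders = {0}
    for i, instr in enumerate(instrs):
        if instr[0] == "bra":
            leaders.add(instr[1])            # branch target starts a block
            if i + 1 < len(instrs):
                leaders.add(i + 1)           # fall-through starts a block
    starts = sorted(leaders)
    ends = starts[1:] + [len(instrs)]
    blocks = {s: instrs[s:e] for s, e in zip(starts, ends)}
    edges = set()
    for s, e in zip(starts, ends):
        last = instrs[e - 1]
        if last[0] == "bra":
            edges.add((s, last[1]))          # taken edge
        if last[0] != "exit" and e < len(instrs):
            edges.add((s, e))                # fall-through edge
    return blocks, edges

prog = [("mov", 0), ("bra", 3), ("add", 0), ("exit",)]
blocks, edges = build_cfg(prog)
```

Once blocks and edges are recovered, standard data-flow analyses (liveness, reaching definitions) can run over the graph, which is what enables the register and memory optimizations described in the abstract.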
We are able to improve memory performance with optimizations not available in the proprietary compiler. We perform memory allocation across the multiple types of available on-chip memory, making full use of available resources including registers, scratchpad memory, and the L1 data cache. This further allows us to tune occupancy, the number of threads allowed to be simultaneously active. We perform static and dynamic occupancy tuning in order to find an effective balance between concurrency and resource contention. Our compiler can also be applied toward improving memory safety. We use it to implement dynamic taint tracking, a technique previously used on CPUs to identify sensitive data as it spreads through memory. By analyzing and modifying the low-level assembly, we minimize tracking overhead, track memory resources not visible to the programmer, and erase sensitive data before it has an opportunity to leak. We evaluate our compiler across a number of benchmarks on NVIDIA devices of the Fermi, Kepler, Maxwell, and Pascal architectures. We demonstrate that our resource allocation and occupancy tuning provide significant improvements in both speed and energy usage. We additionally show that our GPU-specific optimizations for taint tracking can enormously reduce its overhead. Location: Via Zoom Committee: Professor Zheng Zhang (advisor, chair) Professor Ulrich Kremer Professor Manish Parashar Professor Chen Ding (external member, University of Rochester) Start Date: 21 Dec 2021; Start Time: 09:00AM - 10:30AM Title: ABCinML: Anticipatory Bias Correction in Machine Learning Bio: Speaker: Abstract: Static models (i.e., train once, deploy forever) of machine learning (ML) rarely work in practical settings. Besides fluctuations in accuracy over time, they are likely to suffer from biases based on poor representations or past injustices encoded in human judgements. Thus, multiple researchers have begun to explore ways to maintain algorithmic fairness over time. 
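The core of dynamic taint tracking, independent of any GPU specifics, can be sketched with a tiny register-level interpreter: every register carries a taint bit, and any operation reading a tainted source taints its destination. The mini-ISA and program below are purely illustrative:

```python
# Toy dynamic taint tracking: propagate a taint bit alongside each register.
def run_with_taint(program, tainted_inputs):
    regs, taint = {}, {}
    for op, dst, *srcs in program:
        if op == "load_input":
            regs[dst] = 1                        # dummy input value
            taint[dst] = srcs[0] in tainted_inputs
        elif op == "movi":                       # load immediate: untainted
            regs[dst] = srcs[0]
            taint[dst] = False
        elif op == "add":
            regs[dst] = regs[srcs[0]] + regs[srcs[1]]
            taint[dst] = taint[srcs[0]] or taint[srcs[1]]  # taint propagates
    return taint

prog = [("load_input", "r0", "password"),
        ("movi", "r1", 42),
        ("add", "r2", "r0", "r1"),
        ("add", "r3", "r1", "r1")]
taint = run_with_taint(prog, tainted_inputs={"password"})
```

In the compiler described above, the same propagation logic is instead injected as extra instructions into the binary itself, which is why minimizing per-instruction overhead matters so much.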
One line of work focuses on "dynamic learning", i.e., retraining after each batch, and the other on "robust learning", which tries to make algorithms robust across all possible future challenges. Robust learning often yields (overly) conservative models, while dynamic learning tries to reduce biases soon after they have occurred. We propose an anticipatory dynamic learning approach that corrects the algorithm to prevent bias before it occurs. Specifically, we make use of anticipations regarding the relative distributions of population subgroups (e.g., the relative ratio of male and female applicants) in the next cycle to identify the right parameters for an importance weighting fairness approach. Results from experiments over multiple real-world datasets suggest that this approach holds promise for anticipatory bias correction. Location: Via Zoom Committee: Vivek K. Singh (Research advisor) David M. Pennock Amélie Marian Srinivas Narayana Start Date: 21 Dec 2021; Start Time: 11:00AM - 01:00PM Title: Natural Language Understanding through Emotions and Stylistic Elements Bio: Speaker: Abstract: In the context of text understanding, computational methods are used to study how humans utilize stylistic elements (visual and rhetorical) in addition to language to express emotions and opinions. In this dissertation, I first explore to what extent distributional semantic models can capture the intensity of emotions in lexical items. Using human judgment as the gold standard, I employ language models for word-level emotion intensity prediction and present a technique to attain an emotion database for more fine-grained and more accurate word-level emotion predictions. The results indicate that: 1) the sentiment scores of words can improve distributional models for emotion classification, and 2) language models perform better in practice compared to state-of-the-art lexicons. 
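The anticipatory importance-weighting step can be sketched as follows. The subgroup labels, shares, and the exact weighting formula are illustrative (the actual method tunes such weights within a fairness framework):

```python
# Sketch: weight each training example by
#   (anticipated share of its subgroup next cycle) / (share in current batch)
# so the model effectively trains against the expected future distribution.
def anticipatory_weights(groups, anticipated_share):
    n = len(groups)
    current_share = {g: groups.count(g) / n for g in set(groups)}
    return [anticipated_share[g] / current_share[g] for g in groups]

groups = ["male", "male", "male", "female"]          # 75% / 25% batch
w = anticipatory_weights(groups, {"male": 0.5, "female": 0.5})
```

Here the under-represented subgroup gets up-weighted (and the over-represented one down-weighted) before the anticipated shift actually arrives, which is the "prevent bias before it occurs" idea.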
Next, I present a statistical analysis of the role of emojis in text as a visual element that conveys emotion. I show the strong connection between emojis and emotions, as well as the syntactic role of emojis in text. The empirical results illustrate that emojis are used mainly to intensify the emotion in a sentence; however, in some cases, they replace a word or phrase in the sentence or signal contrast in a sarcastic context. The last part of the dissertation is an analysis of stylistic and rhetorical elements in standard and poetic Persian text. I present a corpus of Persian literary text focusing mainly on poetry, covering the 7th to the 21st century, annotated for century and style, with additional partial annotation of rhetorical figures. I present several computational experiments to analyze poetic styles, authors, and periods, as well as context shifts over time. Location: Via Zoom Committee: Dr. Gerard de Melo (advisor) Dr. Matthew Stone Dr. Yongfeng Zhang Dr. Smaranda Muresan (outside member) Start Date: 23 Dec 2021; Start Time: 09:00AM - Title: Generative Scene Graph Networks Bio: Speaker: Abstract: Human perception excels at building compositional hierarchies of parts and objects from unlabeled scenes, which helps systematic generalization. Yet most work on generative scene modeling either ignores the part-whole relationship or assumes access to predefined part labels. In this work, we propose Generative Scene Graph Networks (GSGNs), the first deep generative model that learns to discover the primitive parts and infer the part-whole relationship jointly from multi-object scenes, without supervision and in an end-to-end trainable way. We formulate GSGN as a variational autoencoder in which the latent representation is a tree-structured probabilistic scene graph. The leaf nodes in the latent tree correspond to primitive parts, and the edges represent the symbolic pose variables required for recursively composing the parts into whole objects and then the full scene. 
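The tree-structured pose composition can be sketched in 2D homogeneous coordinates: a part's world pose is the product of the edge transforms along its root path. The 2D affine setting and the `Node`/`translate` helpers are illustrative simplifications, not GSGN's actual parameterization:

```python
import numpy as np

# Sketch of a scene-graph tree: leaves are primitive parts, and each edge
# carries a pose transform from a node's frame to its parent's frame.
class Node:
    def __init__(self, pose, children=()):
        self.pose = pose            # 3x3 homogeneous transform to parent
        self.children = list(children)

def leaf_world_poses(node, parent=np.eye(3)):
    world = parent @ node.pose      # compose along the root path
    if not node.children:
        return [world]
    return [p for c in node.children for p in leaf_world_poses(c, world)]

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

# One object at (2, 0) built from two parts at local offsets (0, 1) and (1, 0).
scene = Node(np.eye(3), [Node(translate(2, 0), [Node(translate(0, 1)),
                                                Node(translate(1, 0))])])
poses = leaf_world_poses(scene)
```

Editing a single internal pose variable moves a whole subtree of parts rigidly, which is exactly what makes manual configuration of the latent graph behave like a graphics engine.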
This allows novel objects and scenes to be generated both by sampling from the prior and by manual configuration of the pose variables, as one would do with a graphics engine. We evaluate GSGN on datasets of scenes containing multiple compositional objects, including a challenging Compositional CLEVR dataset that we have developed. We show that GSGN is able to infer the latent scene graph, generalize out of the training regime, and improve data efficiency in downstream tasks. Location: Via Zoom Committee: Prof. Sungjin Ahn (Advisor) Prof. Karl Stratos Prof. Hao Wang Prof. Dong Deng Start Date: 13 Jan 2022; Start Time: 11:00AM - Title: Rethinking Curriculum Design for a Scalable CS Education Bio: Prof. Andy Gunawardena is an Associate Teaching Professor of Computer Science at Rutgers University. Prior to joining Rutgers in 2018, he served as an Associate Teaching Professor of Computer Science at Carnegie Mellon University (CMU) from 1998-2013 and as a Lecturer in Computer Science at Princeton University from 2013-2018. A dedicated CS educator and an evangelist for technology in education, he has been a PI and Co-PI of multiple grants from the National Science Foundation, Hewlett Packard, Microsoft, the Gates Foundation, and the Qatar Foundation to research, build, and deploy technological infrastructures to support student learning and measure their impact. He co-led CMU’s multi-million-dollar measuring learning initiative in 2010 and is a co-founder of the Classroom Salon and cubits.ai platforms, which have been widely used by instructors and students worldwide. He holds 3 patents (CMU, Princeton) and has co-authored 2 textbooks in Computational Linear Algebra published by Springer-Verlag (1998) and Cengage (2003). Speaker: Abstract: The number of students taking CS courses has increased dramatically in recent times. At the same time, the availability of human resources to manage CS courses has decreased. 
Traditional teaching methods have not changed, and students' overall educational experience seems to have diminished over time and continues to do so. Today, instructors hardly understand the individual needs of the students they are responsible for. There are also significant disruptive pressures in higher education due to the prevalence of online content and opportunities. Attracting and retaining women and under-represented groups continues to be a challenge. Students demand that instructors teach what is relevant for them to get jobs in industry and/or to pursue higher education opportunities. Addressing these issues requires significant effort in course innovation and long-term dedication. In this talk, we will address the efforts to revamp 3 introductory CS courses at Rutgers (CS 111, CS 112, CS 205) as well as the challenges in designing, implementing, and delivering the popular CS 439 Introduction to Data Science course at Rutgers. We will argue that it is important to rethink what we teach, how we teach it, and how we measure student outcomes. We will also argue for the need to centralize the management of large-enrollment courses and the need to create more data-driven measurement instruments to help scale student learning in large courses. Finally, we discuss some initial efforts to increase outreach into NJ community colleges (CCs) and provide better transfer opportunities for a diverse group of students from CCs. Location: Via Zoom Committee:

## Upcoming Events

- 23 Dec 2021; 09:00AM - Generative Scene Graph Networks
- 13 Jan 2022; 11:00AM - Rethinking Curriculum Design for a Scalable CS Education