PhD Defense

Automatic, Efficient, and Robust Deep Learning

Thursday, February 25, 2021, 02:00pm - 04:00pm

Speaker: Zhiqiang Tang

Location: Remote via Zoom

Committee:

Prof. Dimitris Metaxas (Advisor)

Prof. Hao Wang

Prof. Sungjin Ahn

Prof. Xiaolei Huang (External member)

Event Type: PhD Defense

Abstract: Deep learning automatically discovers useful, multistage, task-specific features from high-dimensional raw data. Instead of relying on domain expertise to hand-engineer features, it uses a general learning procedure that is readily applicable to many domains, such as image analysis and natural language processing. After decades of development, deep learning has made significant advances, during which dataset sizes, model sizes, and benchmark accuracies have all increased dramatically. However, these three trends pose corresponding challenges in data efficiency, model efficiency, and generalization robustness. To address these challenges, we develop solutions from three perspectives: automatic data augmentation, efficient architecture design, and robust feature normalization.

(i) Chapters 2 to 4 propose a series of automatic data augmentation methods that replace the hand-crafted rules defining augmentation sampling distributions, magnitude ranges, and functions. Experiments show that these automatic augmentation methods apply to diverse tasks and effectively improve performance without using extra training data.

(ii) Chapter 5 introduces the quantized coupled U-Nets architecture, which boosts the efficiency of stacked U-Nets and applies broadly to location-sensitive tasks. U-Net pairs are coupled through shortcut connections that facilitate feature reuse across U-Nets and reduce redundant network weights. Quantizing weights, features, and gradients to low-bit representations makes coupled U-Nets even more lightweight, accelerating both training and testing.

(iii) Chapter 6 presents two feature normalization techniques, SelfNorm and CrossNorm, that promote the robustness of deep networks. SelfNorm uses attention to highlight vital feature statistics and suppress trivial ones, whereas CrossNorm augments feature statistics by randomly exchanging statistics between feature maps during training. Together they reduce deep networks' sensitivity and bias toward feature statistics and improve robustness to out-of-distribution data, which typically exhibits unforeseen feature statistics.

Overall, the proposed automatic data augmentation, efficient U-Net design, and robust feature normalization shed light on new perspectives for automatic, efficient, and robust deep learning.
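To make the CrossNorm idea concrete, below is a minimal PyTorch sketch, assuming the exchanged statistics are channel-wise means and standard deviations of convolutional feature maps; the function name crossnorm and these particular choices are illustrative assumptions, not the dissertation's exact formulation.

import torch

def crossnorm(x1, x2, eps=1e-5):
    # Swap channel-wise feature statistics (here: mean and standard
    # deviation, an assumption for this sketch) between two feature
    # maps of shape (N, C, H, W) to augment statistics during training.
    m1 = x1.mean(dim=(2, 3), keepdim=True)
    s1 = x1.std(dim=(2, 3), keepdim=True) + eps
    m2 = x2.mean(dim=(2, 3), keepdim=True)
    s2 = x2.std(dim=(2, 3), keepdim=True) + eps
    # Normalize each map with its own statistics, then re-scale and
    # re-shift with the other map's statistics.
    x1_aug = (x1 - m1) / s1 * s2 + m2
    x2_aug = (x2 - m2) / s2 * s1 + m1
    return x1_aug, x2_aug

# Example: two random feature maps keep their shapes after the exchange.
a = torch.randn(4, 16, 32, 32)
b = torch.randn(4, 16, 32, 32)
a_aug, b_aug = crossnorm(a, b)
print(a_aug.shape, b_aug.shape)  # torch.Size([4, 16, 32, 32]) twice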

Join Zoom Meeting
https://rutgers.zoom.us/j/96053883518?pwd=ZzZVMkRzOUxjSU15dFZKazFDUDRyQT09