
PhD Defense

Biologically Inspired Spiking Neural Networks for Energy-Efficient Robot Learning and Control



Thursday, May 26, 2022, 03:00pm


Join Zoom Meeting


Meeting ID: 985 0708 3847
Password: 878248
One tap mobile
+13126266799,,98507083847# US (Chicago)
+16465588656,,98507083847# US (New York)

Join By Phone
+1 312 626 6799 US (Chicago)
+1 646 558 8656 US (New York)
+1 301 715 8592 US (Washington DC)
+1 346 248 7799 US (Houston)
+1 669 900 9128 US (San Jose)
+1 253 215 8782 US (Tacoma)
Meeting ID: 985 0708 3847
Find your local number: https://rutgers.zoom.us/u/ayD4cGv3x

If you have any questions, please contact the Office of Information Technology Help Desk: https://it.rutgers.edu/help-support/

Speaker: Guangzhi Tang

Location: Via Zoom


Committee:

Konstantinos Michmizos (Advisor)

Vladimir Pavlovic

Abdeslam Boularias

James Bradley Aimone (Sandia National Lab)

Event Type: PhD Defense

Abstract: Energy-efficient learning and control are becoming increasingly crucial for robots that solve complex real-world tasks with limited onboard resources. Although deep neural networks (DNNs) have been successfully applied to robotics, their high energy consumption limits their use in low-power edge applications. Biologically inspired spiking neural networks (SNNs), facilitated by advances in neuromorphic processors, have started to deliver energy-efficient, massively parallel, and low-latency solutions to robotics. In this defense, I will present our energy-efficient neuromorphic solutions to robot navigation, control, and learning, using SNNs on Intel's Loihi neuromorphic processor. First, I will present a biologically constrained SNN that mimics the brain's spatial system and solves the unidimensional SLAM problem while consuming only 1% of the energy of a conventional filter-based approach. When extended to 2D environments by adding biologically realistic hippocampal neurons, the SNN formed cognitive maps in real time and helped us study neuronal interconnectivity and cognitive functions. Next, I will show how the neuromorphic approach can be extended to high-level cognitive functions such as learning control policies. Specifically, I will present a reinforcement co-learning framework that jointly trains a spiking actor network (SAN) with a deep critic network using backpropagation to learn optimal policies for both mapless navigation and high-dimensional continuous control. Compared with state-of-the-art DNN approaches, our method consumes up to 140 times less energy during inference while achieving a higher success rate on mapless navigation, and it matches DNN performance on high-dimensional continuous control when using the population-coded spiking actor network (PopSAN).
Lastly, I will present how these energy gains can be further extended to training through the development of a biologically plausible gradient-based learning framework on the neuromorphic processor. The learning method is functionally equivalent to spatiotemporal backpropagation but relies solely on spike-based communication, local information processing, and rapid online computation, the main neuromorphic principles that mimic the brain. Overall, our work pushes the frontier of SNN applications to energy-efficient robotic control and learning, and thus paves the way for a biologically inspired alternative for autonomous robots running on energy-efficient neuromorphic processors.


Rutgers University School of Arts and Sciences

Contact: Konstantinos Michmizos