The ability to orient in an unknown, fast-changing environment is an unmet challenge for robots but a seamlessly solved problem for the primate brain. This thesis describes the first steps in developing a neuro-inspired "bottom-up" model of the brain's navigation system that enables a mobile robot to localize itself, map its surroundings and plan its trajectory. Our model employs neural spikes to encode and process information in real time. Despite a multitude of Nobel Prize-winning studies that have revealed neurons specializing in self-navigation, such as place, grid, border and head direction cells, their interconnectivity remains elusive. Any model employing these neurons must therefore extrapolate considerably to fill the gaps in our knowledge.
The main challenge was to design a real-time spiking neural network that can compensate for hardware limitations as well as its own intrinsic imperfections, and that works under real conditions. To design the first component of our model, the head direction cell layer, we employed mechanisms based on self-organizing and self-sustaining neural activity, or attractor dynamics, resembling those originally proposed in Hebb's cell assembly theory. The information to be maintained and updated was a continuous variable, or continuous attractor, in which a 1D continuum of cell assemblies represented head direction. In theory, our network should give rise to a self-sustained hill of excitation, the attractor. In practice, due to non-ideal speed sensors and the intrinsic spike variability of the network, it was impossible to sustain a correct approximation of the head direction using this scheme alone. To correct this, we introduced a spike-based Bayesian inference layer of leaky integrate-and-fire neurons that combined feedforward (vision) and recursive (kinesthetic) inputs. We show how such a layer can approximate the posterior probability of the preferred state, encoded in the spiking probability, by summing the logarithms of the simulated dendritic currents, a reasonable approximation of nonlinear dendritic activity. We show that our model accurately estimated head direction, and we then extend it to include a dynamic network of border cells that learns to map the observed environment through simulated synaptic plasticity.
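The combination scheme described above can be illustrated in simplified, rate-based form: a 1D ring of head direction cells maintains a log-probability estimate that is shifted by the kinesthetic (turn-rate) input and then summed with the log of a bump-shaped visual likelihood, so that adding log currents approximates multiplying posterior factors. This is a minimal sketch, not the thesis's spiking implementation; the cell count, von Mises tuning, and all parameter values are illustrative assumptions.

```python
import math
import random

N = 64  # number of head-direction cells tiling the 1D ring (assumed)
angles = [2 * math.pi * i / N for i in range(N)]

def von_mises(center, kappa=8.0):
    """Bump-shaped visual likelihood over the ring, normalized to sum to 1."""
    p = [math.exp(kappa * math.cos(a - center)) for a in angles]
    s = sum(p)
    return [x / s for x in p]

def bayes_update(log_prior, heading_obs, turn_rate, dt=0.1):
    """One update step: rotate the recursive (kinesthetic) estimate by the
    turn, then add the log of the feedforward (visual) likelihood.
    Summing logs approximates multiplying the posterior factors."""
    shift = round(turn_rate * dt / (2 * math.pi) * N) % N
    log_pred = log_prior[-shift:] + log_prior[:-shift] if shift else log_prior[:]
    like = von_mises(heading_obs)
    log_post = [lp + math.log(l) for lp, l in zip(log_pred, like)]
    m = max(log_post)                      # renormalize for numerical stability
    post = [math.exp(x - m) for x in log_post]
    s = sum(post)
    return [math.log(x / s) for x in post]

random.seed(0)
log_p = [math.log(1.0 / N)] * N            # uniform prior: heading unknown
for _ in range(20):
    obs = math.pi / 2 + random.gauss(0, 0.1)  # noisy visual cue near 90 degrees
    log_p = bayes_update(log_p, obs, turn_rate=0.0)

estimate = angles[log_p.index(max(log_p))]  # peak of the activity hill
```

After a few noisy observations, the peak of the ring's activity settles near the true heading, mirroring how the Bayesian layer stabilizes the attractor against sensor noise.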
By solving the localization problem and creating a cognitive map of the surroundings, this thesis paves the way for tackling robot planning by imitating the brain's structure, its principles and its performance.