LIDA - Spatial Memory and Navigation Ability in a Physically Embodied Cognitive Architecture

Project Leader:
Robert Trappl

Principal Investigator:
Tamas Madl

Paolo Petta

Project Description:

Current computational cognitive models of spatial memory account for only a few specific cognitive processes, rather than integrating them within a cognitive architecture. Most existing computationally implemented cognitive architectures lack the ability to use spatial information for planning or navigation in real-world environments. Furthermore, no cognitively plausible model of spatial cognition has yet been integrated with a comprehensive, implemented computational cognitive architecture that models both spatial cognition and a wide range of other human cognitive mechanisms while functioning in the physical world.

The aim of this project is to develop a computational cognitive model of spatial memory and navigation based on the LIDA (Learning Intelligent Distribution Agent) cognitive architecture, integrated with the other high-level cognitive processes accounted for by LIDA, and physically embodied on a humanoid PR2 robot with the aid of the CRAM (Cognitive Robot Abstract Machine) control system.

The LIDA cognitive architecture will be extended with a conceptual and computational hierarchical spatial memory model, inspired by the neural basis of spatial cognition in brains. This memory module, which represents the environment on multiple hierarchical grids, will be added to LIDA and integrated with other modules such as working memory, attention, and action selection, in order to facilitate navigation and planning based on spatial maps.

The resulting architecture will be physically embodied on a humanoid robot (PR2) to strengthen the cognitive plausibility of the spatial model by comparing its behavior with that of humans performing simple spatial and navigational tasks. This will be accomplished by developing an interface between LIDA, which implements the high-level cognitive processes, and the low-level CRAM control system, which implements hardware control, visual object recognition, and motor execution.

Apart from hypothesis and plausibility verification, this embodied model will also provide a biologically inspired robotic mapping approach that does not require expensive sensors, scales well to large environments thanks to its hierarchical structure, and is integrated with important general high-level cognitive functions such as planning, non-routine problem solving, and attention.
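The multi-resolution representation described above can be illustrated with a minimal sketch. Note that the class and parameter names below are hypothetical and greatly simplified; they are not LIDA's actual spatial memory module, only an illustration of storing the same observation at several grid resolutions so that coarse levels can support large-scale planning while fine levels support local navigation.

```python
class HierarchicalGridMap:
    """Toy multi-resolution occupancy map: one sparse grid per level,
    each level coarser than the one below. A hypothetical sketch, not
    the project's actual implementation."""

    def __init__(self, world_size=64.0, levels=3, base_cells=64):
        self.world_size = world_size
        # Level 0 is finest; each higher level halves the cell count.
        self.cells = [max(1, base_cells >> k) for k in range(levels)]
        self.grids = [{} for _ in range(levels)]

    def _cell(self, level, x, y):
        """Map a world coordinate to a grid cell index at this level."""
        cell_size = self.world_size / self.cells[level]
        return (int(x // cell_size), int(y // cell_size))

    def observe(self, x, y):
        """Record an obstacle observation at every resolution level."""
        for level, grid in enumerate(self.grids):
            grid[self._cell(level, x, y)] = True

    def occupied(self, level, x, y):
        """Query occupancy at a chosen resolution level."""
        return self.grids[level].get(self._cell(level, x, y), False)


m = HierarchicalGridMap()
m.observe(10.0, 20.0)
print(m.occupied(0, 10.0, 20.0))  # fine level: True
print(m.occupied(2, 10.0, 20.0))  # coarse level: True
print(m.occupied(0, 40.0, 40.0))  # unobserved location: False
```

A planner can first search over the coarse level's few cells and then refine the resulting route at finer levels, which is one reason hierarchical maps scale well to large environments.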

The model's ability to navigate in the physical world, and its cognitive plausibility, will be verified in a series of experiments. These include testing the robot's ability to navigate known routes, novel routes, and multi-goal routes, and comparing planning efficiency, planning time, map accuracy, map learning time, and other metrics with data from human subjects.

Demo Video of Atlas Robot Learning Cognitive Map

This video shows the perception, and some of the internal representations, of a simulated humanoid Atlas robot (Boston Dynamics) exploring an environment modelled after the spatial layout of a human participant's home town. The top left inset panel shows the raw visual input; the second panel, a heatmap of the closest recognized navigationally relevant points reconstructed from the stereo camera; the third panel, the recognized relevant objects (the road being followed, with occasionally recognized key landmarks flashing in red); and the large bottom panel, the learned cognitive map, with brighter areas representing the vicinity of recognized key landmarks.
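The "brighter near landmarks" rendering described above can be sketched as a simple landmark-centered heatmap. This is a hypothetical illustration only (the function name, grid size, and Gaussian falloff are assumptions, not the project's actual visualization code): each recognized landmark contributes a Gaussian bump of brightness to the map grid.

```python
import math

def cognitive_map(landmarks, size=20, sigma=2.0):
    """Toy 'cognitive map': a size x size grid whose cells are brighter
    the closer they are to a recognized landmark (Gaussian falloff).
    Hypothetical sketch of the visualization described above."""
    grid = [[0.0] * size for _ in range(size)]
    for ly, lx in landmarks:  # landmark positions as (row, col)
        for y in range(size):
            for x in range(size):
                d2 = (x - lx) ** 2 + (y - ly) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return grid

g = cognitive_map([(5, 5), (15, 12)])
# A cell at a landmark is brighter than a cell far from all landmarks:
print(g[5][5] > g[0][19])  # True
```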

More details, and a comparison of the robot's performance at learning and recalling spatial information with human performance, are available in the following publication: (Madl et al., 2016).

Related Open-Source Software:
  • Cognitive-Map-Structure-Experiment (3D virtual reality-based experimentation platform in the browser, written in JavaScript)
  • semisup-learn (a semi-supervised learning framework inspired by a model of spatial memory structure)
  • ROS-road-line-junction-extraction (a sub-model for perceiving and following roads based on stereo camera input, for those interested in robotics)
  • python-LS-SLAM and pySeqSLAM (implementations of various types of simultaneous localization and mapping for robotics)
  • HTSP (biologically inspired solver for the traveling salesman problem for logistics problems)
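A biologically inspired, hierarchical approach to the traveling salesman problem, such as the HTSP solver listed above, typically groups nearby goals and plans over the groups before planning within them. The sketch below is not the HTSP algorithm itself, just a minimal cluster-first heuristic (all function names are hypothetical) showing the general idea: order the clusters by their centroids with a greedy nearest-neighbor tour, then visit the points inside each cluster.

```python
import math

def dist(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_tour(points, start):
    """Greedy nearest-neighbor tour over `points`, beginning at `start`."""
    tour = [start]
    remaining = [p for p in points if p != start]
    while remaining:
        nxt = min(remaining, key=lambda p: dist(tour[-1], p))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def hierarchical_tour(clusters):
    """Cluster-first heuristic: order clusters by centroid, then solve
    each cluster locally. A simplified stand-in for a hierarchical
    TSP solver, not the actual HTSP algorithm."""
    centroids = [(sum(x for x, _ in c) / len(c),
                  sum(y for _, y in c) / len(c)) for c in clusters]
    order = nearest_neighbor_tour(centroids, centroids[0])
    tour = []
    for cen in order:
        cluster = clusters[centroids.index(cen)]
        # Enter each cluster at the point closest to where we left off.
        entry = cluster[0] if not tour else min(
            cluster, key=lambda p: dist(tour[-1], p))
        tour.extend(nearest_neighbor_tour(cluster, entry))
    return tour

clusters = [[(0, 0), (1, 0)], [(10, 10), (11, 10)]]
print(hierarchical_tour(clusters))
# [(0, 0), (1, 0), (10, 10), (11, 10)]
```

Solving over clusters first keeps the search space small at each level, which mirrors the scaling benefit of hierarchical spatial representations discussed in the project description.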

Funded by the Austrian Science Fund (FWF), 2013-2017.