STRANDS aims to enable a robot to achieve robust and intelligent behaviour in human environments through adaptation to, and exploitation of, long-term experience. Our approach is based on understanding 3D space and how it changes over time, from milliseconds to months. We will develop novel approaches to extract quantitative and qualitative spatio-temporal structure from sensor data gathered during months of autonomous operation. Extracted structure will include recurring geometric primitives, objects, people, and models of activity. We will also develop control mechanisms which exploit these structures to yield adaptive behaviour in highly demanding, real-world security and care scenarios.

The spatio-temporal dynamics presented by such scenarios (e.g. humans moving, furniture changing position, objects (re-)appearing) are largely treated as anomalous readings by state-of-the-art robots. Errors introduced by these readings accumulate over the lifetime of such systems, preventing many of them from running for more than a few hours. By autonomously modelling spatio-temporal dynamics, our robots will be able to run for significantly longer than current systems (at least 120 days by the end of the project). Long runtimes provide previously unattainable opportunities for a robot to learn about its world. Our systems will seize these opportunities, advancing long-term mapping, life-long learning about objects, person tracking, human activity recognition, and self-motivated behaviour generation.

We will integrate our advances into complete cognitive systems to be deployed and evaluated at two end-user sites. The tasks these systems will perform are impossible without long-term adaptation to spatio-temporal dynamics, yet they are tasks demanded by early adopters of cognitive robots. We will measure our progress by benchmarking these systems against detailed user requirements and a range of objective criteria including measures of system runtime and autonomous behaviour.

Work Packages

WP1 will develop novel algorithms to create and maintain 3D quantitative maps over long periods of time. (UOL)
WP2 will provide the robot with life-long acquisition of object knowledge, incrementally extending its expertise over time. (TUW)
WP3 will provide our robots with reliable person detection and tracking at multiple height levels during indoor activities. (RWTH)
WP4 will develop methods that allow the STRANDS robots to capture and learn the qualitative and functional aspects of space. (KTH)
WP5 will exploit the models generated in WP4 to allow our robots to learn about the activities that happen in the world they inhabit. (UNIVLEEDS)
WP6 will create more robust and efficient robot control behaviours by exploiting the long-term experience gathered by STRANDS systems. (BHAM)
WP7 will provide a detailed exploration of the care and security scenarios in which the STRANDS robots will be validated. (AAF)
WP8 will integrate and validate our research in a robot capable of performing useful tasks in real-world care and security settings. (BHAM)

This website is maintained by the Automation and Control Institute (ACIN), TU Vienna.