Laboratory for Animate Technologies

We are developing multidisciplinary technologies to create interactive animated systems that will define the next generation of human-computer interaction.

The Laboratory for Animate Technologies is creating ‘live’ computational models of the face and brain by combining bioengineering, computational and theoretical neuroscience, artificial intelligence and interactive computer graphics research.

Imagine a machine that can laugh and cry, learn and dream, and express its inner responses to how it perceives you to feel. It can express itself in a natural manner, and it also lets you visualise the mental imagery emerging in its mind.

If I had my time again, I’d want to spend it in this lab.

Alvy Ray Smith, co-founder of Pixar (on his visit to the Laboratory for Animate Technologies)

Research questions

We have highly realistic facial animation technologies for producing interactive avatars based on neural network models of the human brain. These are already world-leading, but:

  • Can we enhance them much further?
  • Can we make progress by linking to neuroscience research at the Centre for Brain Research in Auckland, and to the whole-body physiological modelling emerging from the ABI's leadership of the Physiome Project?

Our avatars are already being used in Australia to help people with disabilities access government services.

Further progress will open up a wide range of applications in healthcare, robotics and human-machine interfaces, and will create substantial business opportunities for New Zealand companies.

Our research

Applications of the technology span both pure research and commercial applied research, including:

  • Exploration and embodiment of theories of brain function and behaviour through computational modelling.
  • New, more natural ways of interacting with technology through expressive human-machine communication.

Neurobehavioural animation

We believe the best way to simulate biological behaviour is through biological models.

We model the brain processes which give rise to behaviour and social learning and use these to animate lifelike models of the face that can interact with you.
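
As a rough illustration of this idea, the sketch below drives a single facial animation parameter (a "smile" blendshape weight) from a leaky-integrator neural unit. It is a minimal sketch in the spirit of the approach: the model, constants and names are illustrative stand-ins, not the lab's actual neural models.

```python
import numpy as np

def simulate_smile(stimulus, dt=0.01, tau=0.3, gain=2.0):
    """Integrate a 'positive affect' unit and map it to a blendshape weight."""
    v = 0.0                                  # membrane-like state of the affect unit
    weights = []
    for s in stimulus:
        v += dt / tau * (-v + gain * s)      # leaky integration of the input
        # Squash activity into [0, 1] for use as a 'smile' blendshape weight.
        weights.append(1.0 / (1.0 + np.exp(-4.0 * (v - 0.5))))
    return np.array(weights)

# A brief positive stimulus (e.g. the caregiver smiles) produces a smooth,
# delayed smile response rather than an instantaneous switch.
stimulus = np.concatenate([np.zeros(50), np.ones(100), np.zeros(150)])
print(simulate_smile(stimulus).round(2))
```

The point of routing the expression through a neural state, rather than switching it directly, is that the face inherits the temporal dynamics of the model: responses build up, linger and decay the way biological behaviour does.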

BabyX

BabyX is an interactive animated virtual infant prototype: a computer-generated psychobiological simulation under development in the Laboratory for Animate Technologies. It is an experimental vehicle incorporating computational models of the basic neural systems involved in interactive behaviour and learning.

These models are embodied in advanced 3D computer graphics models of the face and upper body of an infant. The system analyses video and audio input in real time and reacts to the behaviour of a caregiver or another child according to its behavioural models.
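
The sketch below is a toy version of such a sense-and-react loop, assuming OpenCV and a webcam. Frame differencing stands in for the perceptual analysis and a leaky "arousal" variable stands in for the behavioural model; all of it is a hypothetical simplification, not what BabyX actually does.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                    # default webcam
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read from the camera")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
arousal = 0.0

for _ in range(300):                         # roughly 10 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Mean absolute frame difference as a crude stand-in for perception.
    motion = float(np.mean(cv2.absdiff(gray, prev))) / 255.0
    prev = gray
    # Leaky accumulation: 'arousal' rises with movement, decays without it.
    arousal = 0.95 * arousal + 0.5 * motion
    state = "alert" if arousal > 0.3 else "calm"
    print(f"motion={motion:.3f} arousal={arousal:.2f} -> {state}")

cap.release()
```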

BabyX embodies many of the technologies we work on and is under continuous development, including its neural models, its sensing systems and the realism of its real-time computer graphics.

Interactive modelling and simulation of biologically based neural networks

We create interactive models of neural systems and neuroanatomy. This lets us visualise the internal processes of the computational simulations that give rise to behaviour.
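
A minimal example of the kind of model involved: a handful of leaky integrate-and-fire neurons with random excitatory coupling. This is a textbook model, not one of the lab's; it is shown only to make concrete the internal state (voltages, spikes) that an interactive visualisation would expose.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, tau, v_th, v_reset = 8, 1e-3, 20e-3, 1.0, 0.0
w = rng.uniform(0.0, 0.3, size=(n, n))      # random excitatory weights
np.fill_diagonal(w, 0.0)                    # no self-connections

v = np.zeros(n)
for step in range(1000):                    # 1 s of simulated time
    i_ext = rng.uniform(0.0, 2.0, size=n)   # noisy external drive
    spikes = v >= v_th
    v[spikes] = v_reset                     # reset neurons that fired
    # Leaky integration: dv/dt = (-v + external + recurrent input) / tau
    v += dt / tau * (-v + i_ext + w @ spikes.astype(float))
    if spikes.any():
        print(f"t={step * dt:.3f}s spikes from neurons {np.flatnonzero(spikes)}")
```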

The Auckland Face Simulator

The Auckland Face Simulator is being developed to cost-effectively create extremely realistic, precisely controllable models of the human face and its expressive dynamics for psychology research.
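
One common way to make expression "precisely controllable" is a linear blendshape rig, sketched below with a toy four-vertex mesh. Whether the simulator uses this representation internally is our assumption, made purely for illustration.

```python
import numpy as np

# The displayed face is the neutral mesh plus a weighted sum of
# expression offsets: face = neutral + sum_i w_i * delta_i.
neutral = np.zeros((4, 3))                        # vertex positions (x, y, z)
smile_delta = np.array([[0, 0.2, 0], [0, 0.2, 0], [0, 0, 0], [0, 0, 0]])
brow_delta  = np.array([[0, 0, 0], [0, 0, 0], [0, 0.1, 0], [0, 0.1, 0]])
deltas = np.stack([smile_delta, brow_delta])      # (n_shapes, n_verts, 3)

def pose(weights):
    """Blend expression offsets into a posed mesh."""
    return neutral + np.tensordot(weights, deltas, axes=1)

# Ramp the smile weight from 0 to 1 for a precisely controlled dynamic.
for w in np.linspace(0.0, 1.0, 5):
    print(f"smile={w:.2f}", pose([w, 0.0])[0])    # track one mouth vertex
```

Because every expression is a known weight vector, a stimulus for a psychology experiment can be specified and reproduced exactly, which is hard to achieve with human actors.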

Face simulation and biomechanics

We are developing the technology to simulate faces both inside and out: how they move, how they look, and even their underlying anatomical structure.
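
As a heavily simplified illustration of simulating how a face moves, the snippet below integrates a single skin node as a damped spring pulled by a muscle force. Real facial biomechanics (for example, finite-element soft-tissue models) is far richer; every constant here is invented.

```python
def skin_node(force, steps=100, dt=0.01, k=40.0, c=4.0, m=0.05):
    """Integrate m*x'' = f - k*x - c*x' with semi-implicit Euler."""
    x, vel = 0.0, 0.0
    for t in range(steps):
        acc = (force(t * dt) - k * x - c * vel) / m
        vel += dt * acc          # update velocity first (semi-implicit)
        x += dt * vel            # then position
    return x

# A constant 1 N muscle pull settles near the static solution x = f/k.
print(skin_node(lambda t: 1.0))  # ~0.025
```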

Brain language

We have been working on a visual modelling methodology for the construction, visualisation and animation of neural systems called Brain Language (BL), a novel simulation environment for neural models.

The methodology allows animations and real-time visualisations to be created from biologically based neural network models, so that simulation effects can be viewed in an interactive context. Such a visual environment is not only suitable for visualising a simulation; it is also ideal for model development.
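
BL itself is the lab's own environment and its syntax is not public, so the sketch below only illustrates the general idea of watching a model run interactively: a simple excitatory-inhibitory rate model is animated with matplotlib as it is simulated, rather than plotted after the fact. All names and constants are ours.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

dt, tau = 0.01, 0.1
e, i = 0.1, 0.0          # excitatory and inhibitory activity
history = []

def step(_frame):
    global e, i
    for _ in range(5):   # a few model steps per displayed frame
        de = (-e + np.tanh(1.6 * e - 1.0 * i + 0.5)) / tau
        di = (-i + np.tanh(1.2 * e)) / tau
        e, i = e + dt * de, i + dt * di
        history.append(e)
    line.set_data(range(len(history)), history)
    ax.relim(); ax.autoscale_view()
    return (line,)

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set(xlabel="step", ylabel="excitatory activity")
anim = FuncAnimation(fig, step, frames=200, interval=30, blit=False)
plt.show()
```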

Facial tracking and analysis

We are developing computer-vision systems to track and analyse facial expressions, together with state-of-the-art algorithms for estimating individual facial muscle activation.

Applications range from real-time expression recognition to microdynamic interaction analysis for psychology research.
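
A minimal front end for such a system, assuming OpenCV and a webcam, is sketched below: classical Haar-cascade face detection feeds a placeholder estimate_aus stage. The placeholder is hypothetical; a real muscle-activation (action-unit) estimator would be a trained model, and the lab's own algorithms are not shown here.

```python
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def estimate_aus(face_img):
    # Hypothetical placeholder: a real system would run a learned
    # regressor over the face crop to estimate action-unit intensities.
    return {"AU12_lip_corner_puller": 0.0}

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        print(estimate_aus(gray[y:y + h, x:x + w]))
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:      # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```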

Members

Primary contact

Mark Sagar

Academic

Siobhan Kennedy-Constantini
Mark Sagar
Gonzalo Talou

International links

  • Japan: SONY
  • US: IBM

Discover more augmented & virtual reality groups

  • Augmented Human Laboratory - The lab explores ways of creating enabling human-computer interfaces as natural extensions of our body, mind and behaviour.

  • Virtual Brain Project - This project is developing a comprehensive framework for modelling the brain using computational methods built on bioengineering principles.