CAMIL Computational Audio-Motor Integration through Learning

Static and dynamic binaural recordings of a wide-spectrum emitter with annotated motor states

The CAMIL dataset is a unique set of audio recordings made with a realistic dummy head equipped with a binaural pair of microphones and mounted on a pan-tilt robot. The dataset was gathered to investigate audio-motor contingencies from a computational point of view and to experiment with new auditory models and techniques for Computational Auditory Scene Analysis. Version 0.1 of the dataset was built in November 2010, and version 1.1 in April 2012. All recording sessions were held at INRIA Grenoble Rhône-Alpes and led by Antoine Deleforge. A fully automated protocol for the University of Coimbra's audiovisual robot head POPEYE was designed to gather a very large number of binaural sounds from different motor states, with or without head movements. Recordings were made in the presence of a static loudspeaker emitting sounds with different properties (random-spectrum sounds, white noise, speech, music, etc.). Each recording is annotated with the corresponding ground-truth motor coordinates of the robot. The experiments were entirely unsupervised and lasted 70 and 48 hours, respectively.
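The pairing of each binaural recording with its ground-truth motor coordinates can be sketched as a simple data structure. This is a minimal, hypothetical illustration only: the field names, angle units, and layout below are assumptions for clarity, not the dataset's actual file format.

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class BinauralRecording:
    """One CAMIL-style sample: a binaural signal plus the robot's motor state.

    Field names and units are hypothetical; the actual dataset layout may
    differ. This only illustrates the recording/annotation pairing.
    """
    left: Sequence[float]   # left-ear channel samples
    right: Sequence[float]  # right-ear channel samples
    pan_deg: float          # ground-truth pan angle of the robot head
    tilt_deg: float         # ground-truth tilt angle of the robot head


# A toy sample: two short synthetic channels annotated with a motor state.
rec = BinauralRecording(
    left=[0.0, 0.1, 0.2],
    right=[0.0, 0.05, 0.1],
    pan_deg=-15.0,
    tilt_deg=5.0,
)

# Both ear channels are recorded simultaneously, so they have equal length.
assert len(rec.left) == len(rec.right)
print(rec.pan_deg, rec.tilt_deg)
```

Keeping the motor state attached to every recording in this way is what makes supervised or unsupervised learning of audio-motor contingencies possible directly from the data.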

The CAMIL dataset is freely accessible for scientific research purposes and for non-commercial applications.

These two videos show how training and test data were recorded for the CAMIL dataset: