The RAVEL Corpora

HUMAVIPS Project (FP7-ICT-2009-247525)

Action recognition

This category is motivated by the task of recognizing actions performed by a single person; it consists of a single scenario. Twelve actors, eight male and four female, each perform a set of nine actions alone and in front of the robot; the mixed cast limits gender bias. Each actor repeats the set of actions six times, each time in a different, random order, which produces varied co-articulation effects between consecutive actions. The set of actions is: (i) stand still, (ii) walk, (iii) turn around, (iv) clap, (v) talk on the phone, (vi) drink, (vii) check watch, (viii) scratch head and (ix) cross arms.
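The structure above (12 actors, 6 repetitions each, 9 possible action labels) can be sketched in code. The identifier scheme below is an assumption for illustration only, not the corpus's actual file naming:

```python
# Sketch of the action-recognition scenario layout of the RAVEL corpus:
# 12 actors, 6 repetitions per actor, 9 action labels.
# The "actorNN_repR" naming is hypothetical, not the corpus's own scheme.

ACTIONS = [
    "stand_still", "walk", "turn_around", "clap", "talk_on_phone",
    "drink", "check_watch", "scratch_head", "cross_arms",
]

def sequence_ids(num_actors=12, num_repetitions=6):
    """Enumerate one identifier per recorded sequence (actor x repetition)."""
    return [
        f"actor{a:02d}_rep{r}"
        for a in range(1, num_actors + 1)
        for r in range(1, num_repetitions + 1)
    ]

ids = sequence_ids()
print(len(ACTIONS), len(ids))  # 9 action labels, 72 recorded sequences
```

Each of the 72 sequences contains all nine actions in some random order, so a per-sequence annotation would map each identifier to an ordering of `ACTIONS`.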

Background clutter

Since the RAVEL data set aims to be useful for benchmarking methods that operate in populated spaces, these scenarios were collected with two levels of background clutter. The first level corresponds to a controlled setting in which no other actors are in the scene and both outdoor and indoor acoustic noise is kept very low. Under the second level, additional actors were allowed to walk around, always behind the main actor; these extra actors occasionally talked to each other, and the amount of outdoor noise was not limited.

Download

Below you will find the preview video as well as the download folder for each of the characters. In the download folder you may find:

You may also download the calibration data.

Sequences

Character 1

Data download

Character 2

Data download

Character 3

Data download

Character 4

Data download

Character 5

Data download

Character 6

Data download

Character 7

Data download

Character 8

Data download

Character 9

Data download

Character 10

Data download

Character 11

Data download

Character 12

Data download


Creative Commons License
The RAVEL corpora are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.