DIPLECS Autonomous Driving Datasets (2015)
(c) Nicolas Pugeault ([email protected]), 2015.
Description
This page contains three datasets recording steering information in different cars and environments. They were recorded during the course of the DIPLECS project (www.diplecs.eu) and used in references [1-4].
Datasets
Surrey (CVSSP, University of Surrey)
This dataset contains the data used in articles [2] and [4].
The dataset was recorded by placing a HD camera in a car driving around the Surrey countryside. The dataset contains about 30 minutes of driving.
The video is 1920x1080 in colour, encoded using H.264 codec.
Steering is estimated by tracking markers on the steering wheel. The car's speed is estimated by applying OCR to the car's speedometer (the accuracy of this method is not guaranteed).
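As a rough illustration of the marker-tracking idea (a minimal sketch, not the authors' actual implementation; the function names and coordinate conventions are assumptions), the steering-wheel rotation between two frames can be recovered from the image positions of a tracked marker relative to the wheel centre:

```python
import math

def marker_angle(center, marker):
    """Angle (degrees) of the line from the wheel centre to a tracked
    marker, measured in image coordinates."""
    dx = marker[0] - center[0]
    dy = marker[1] - center[1]
    return math.degrees(math.atan2(dy, dx))

def steering_delta(center, marker_prev, marker_now):
    """Signed steering-wheel rotation between two frames, i.e. the
    change in marker angle, wrapped to (-180, 180] degrees."""
    d = marker_angle(center, marker_now) - marker_angle(center, marker_prev)
    return (d + 180.0) % 360.0 - 180.0

# Example: a marker moving a quarter turn anticlockwise in image coordinates.
print(steering_delta((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)))  # 90.0
```

Accumulating these per-frame deltas gives the steering-wheel angle over time; the wrapping step keeps the estimate correct when the marker crosses the ±180° boundary.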
Illustration of the recording setup (from [4]).
Downloads
Sweden (Autoliv)
This dataset contains the data used in articles [1] and [4].
It was recorded by Autoliv Inc. (www.autoliv.com), and is provided here with permission.
The video contains the driver's view through the windscreen for approximately 3 hours of driving in the vicinity of Stockholm. The video is 900x244 and greyscale.
All frames are labelled according to driving context and driver's actions (see [2] for details).
Example image from the dataset.
GPS track of the dataset (parts in green were used for training in [4] and parts in red for testing) - figure reproduced from [4], image captured from Google Earth (c).
Downloads
Indoor (Linköping University)
This dataset contains the data used in articles [2], [3] and [4].
It was recorded by Liam Ellis and Nicolas Pugeault at Linköping University, using a remote-controlled car with a rotating camera (see reference [2] for a description), driving around two tracks: O-shape and P-shape.
For each track, the data consists of several runs in each direction around the track.
The remote controlled car used for the recordings.
O-Shape track.
P-Shape track.
Downloads
License
The copyright of the DIPLECS dataset is owned by the Centre for Vision, Speech and Signal Processing, University of Surrey, UK. Permission is hereby granted to use the DIPLECS dataset for academic purposes only, provided that publications related to its use reference it as follows:
N. Pugeault, R. Bowden, "How much of driving is pre-attentive?", In IEEE Transactions on Vehicular Technologies, vol. 64, no. 12, pp. 1-15, 2015.
REFERENCES
[1] N. Pugeault, R. Bowden, "Learning pre-attentive driving behaviour from holistic visual features", In ECCV 2010, Part VI, LNCS 6316, pp. 154-167, 2010.
[2] N. Pugeault, R. Bowden, "Driving me Around the Bend: Learning to Drive from Visual Gist", In 1st IEEE Workshop on Challenges and Opportunities in Robotic Perception, in conjunction with ICCV'2011, 2011.
[3] L. Ellis, N. Pugeault, K. Ofjall, J. Hedborg, R. Bowden, M. Felsberg, "Autonomous navigation and sign detector learning", In IEEE Workshop on Robot Vision (WORV'2013), Winter Vision Meetings, 2013.
[4] N. Pugeault, R. Bowden, "How much of driving is pre-attentive?", In IEEE Transactions on Vehicular Technologies, vol. 64, no. 12, pp. 1-15, 2015.