Interaction of Autonomous and Manually Controlled Vehicles
"Developing basic knowledge for future technological development."
The Interaction of Autonomous and Manually-Controlled Vehicles (IAMCV) Dataset contains over 14 hours of data acquired through vehicle sensors on highways and in intersections, roundabouts, and urban environments. The data is currently being processed and will soon be published.
FWF Project Details
Project number: P 34485
Project lead: Univ. Prof. Dr. Cristina Olaverri-Monreal
Decision board: 08.03.2021
IAMCV Dataset
Methodology
The IAMCV dataset was recorded during daytime in June 2022, under varying natural light conditions (sunny, cloudy, or overcast). Since the dataset aims to capture interactions between the recording vehicle and other vehicles, trucks, motorcycles, and vulnerable road users, the chosen locations focused on intersections, roundabouts, country roads, and highways.
To study intersections in urban areas, three locations were selected inside the city of Aachen in Germany. In the suburban areas around the city, two more locations were chosen for roundabouts and two more for country roads. Finally, the highway locations were selected along the A3 Federal Motorway between Passau and Cologne.
The collected data was anticipated to include various driving patterns and trajectories, depending on the specific scenarios and traffic conditions. In the locations selected for intersection and roundabout analysis, the vehicle traversed the same reference intersection or roundabout several times, so the path followed resembles loops around the reference point. These driving patterns are particularly useful for loop-closure purposes when testing mapping or localization algorithms.
Data Format
The IAMCV dataset comprises different types of data:
- Point clouds: The point clouds from the three LIDARs were stored in the PCD file format. All the original fields provided by the LIDARs were also stored in the file:
- x,y,z: coordinates of the target in the sensor frame of reference.
- range: distance from the sensor to the target in mm.
- ambience: also called near-IR; the ambient infrared level sensed by the receiver when the transmitter is not illuminating that area.
- intensity: level of the signal in the receiver.
- reflectivity: reflectivity level of the target.
- ring: the LIDAR layer (ring) index.
- Images: Similarly, the images from the three cameras were stored in the PNG file format. The images are stored without rectification, and the intrinsic camera parameters are provided. Before publication, the images were anonymized by blurring faces and license plates.
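As a minimal sketch of how the per-point fields listed above could be read back, the following Python parses an ASCII PCD stream into per-point dictionaries keyed by field name. The sample PCD content and its values are illustrative only, not taken from the dataset, and real PCD files may use binary encoding, which this sketch does not handle.

```python
# Minimal sketch: parse an ASCII PCD stream into per-point dicts.
# Field names follow the dataset description; sample values are made up.
import io

def parse_pcd(stream):
    """Return (field_names, points) from an ASCII PCD stream."""
    fields, points = [], []
    reading_data = False
    for line in stream:
        line = line.strip()
        if reading_data:
            # Each data line holds one point, values in FIELDS order.
            values = [float(v) for v in line.split()]
            points.append(dict(zip(fields, values)))
        elif line.startswith("FIELDS"):
            fields = line.split()[1:]
        elif line.startswith("DATA"):
            if line.split()[1] != "ascii":
                raise ValueError("this sketch only handles DATA ascii")
            reading_data = True
    return fields, points

# Illustrative in-memory PCD with a subset of the documented fields.
sample = io.StringIO(
    "VERSION .7\n"
    "FIELDS x y z range intensity reflectivity ring\n"
    "SIZE 4 4 4 4 4 4 4\n"
    "TYPE F F F F F F F\n"
    "COUNT 1 1 1 1 1 1 1\n"
    "WIDTH 2\nHEIGHT 1\nPOINTS 2\n"
    "DATA ascii\n"
    "1.0 2.0 0.5 2291.0 120.0 34.0 3.0\n"
    "0.8 1.9 0.4 2105.0 98.0 30.0 4.0\n"
)
fields, points = parse_pcd(sample)
print(fields)               # field names declared in the header
print(points[0]["range"])   # range of the first point, in mm
```

In practice a point-cloud library would typically be used instead; the hand-rolled parser here only illustrates how the documented fields map onto the PCD layout.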
Regarding the naming, files from cameras and LIDARs were named using the format "nn_sensor_tttttttttt.tttt.ext" as follows:
- nn: stands for the number of the recording. Each recording has a unique two-digit identifier.
- sensor: indicates whether the sensor is a camera or a LIDAR and its position, e.g., "camera_c" stands for the camera located in the center of the vehicle, and "lidar_l" stands for the LIDAR located on the left side of the vehicle.
- tttttttttt.tttt: the timestamp of the frame.
- ext: file extension .pcd or .png
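The naming scheme above can be parsed mechanically; a hedged Python sketch follows. The example file name, the assumption that the position suffix is one of l/c/r, and the timestamp's four decimal places are inferred from the description, not confirmed by the dataset itself.

```python
# Sketch: parse dataset file names of the form "nn_sensor_tttttttttt.tttt.ext".
# The example name below is illustrative, not an actual dataset file.
import re

FILENAME_RE = re.compile(
    r"^(?P<recording>\d{2})_"                 # two-digit recording identifier
    r"(?P<sensor>(?:camera|lidar)_[lcr])_"    # sensor type and position (assumed l/c/r)
    r"(?P<timestamp>\d+\.\d{4})"              # timestamp (assumed four decimals)
    r"\.(?P<ext>pcd|png)$"                    # file extension
)

def parse_name(name):
    """Split a dataset file name into its documented components."""
    m = FILENAME_RE.match(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    d = m.groupdict()
    d["timestamp"] = float(d["timestamp"])
    return d

info = parse_name("01_camera_c_1655200000.1234.png")
print(info["recording"], info["sensor"], info["ext"])  # → 01 camera_c png
```

Such a parser makes it easy to group frames by recording or sensor, or to align camera and LIDAR frames by timestamp when iterating over the dataset.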
Download dataset (the data is currently being uploaded to the server)
The dataset will be available at IEEE Dataport.