Perception, Localization, Control and Communications.
Object Detection based on Deep Learning
Building on our work published in a range of forums, we have developed computer vision algorithms that provide reliable detection of objects in the environment using Deep Learning technologies. AMPL's work has achieved strong results on public benchmarks, as well as in industrial applications, always with the aim of providing reliable detection in real time.
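As an illustration of the kind of post-processing that real-time object detectors rely on, here is a minimal greedy non-maximum suppression sketch in plain Python. This is a generic textbook version, not AMPL's actual pipeline; the box format and threshold are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)          # highest-scoring remaining box
        keep.append(best)
        # drop every box that overlaps the kept one too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

The greedy pass keeps the strongest detection and discards near-duplicates, which is what lets a detector emit one box per object.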
Furthermore, our work on estimating the pose of obstacles on the road from images (pedestrians, vehicles, etc.) is a reference in the field.
We are also working on achieving a similar level of performance based on LiDAR and data fusion. If you want to know more about this, please contact us here.
Another of our approaches applies Semantic Segmentation to driving environments, also running in real time on our autonomous platforms.
3D based detection
Building on the Lab's experience in perception and Deep Learning, we proposed a point-cloud-based algorithm for pedestrian, vehicle and cyclist detection. The algorithm, called BirdNet, was successfully presented at the IEEE Intelligent Transportation Systems Conference 2018, held in Maui, Hawaii, USA.
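BirdNet-style detectors first project the LiDAR point cloud onto a bird's-eye-view grid before running a 2D detector on it. A minimal sketch of that projection step follows; the cell size, ranges and chosen channels (density and maximum height) are illustrative, not the published BirdNet configuration.

```python
def bev_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.5):
    """Project LiDAR points (x, y, z) onto a bird's-eye-view grid,
    keeping a per-cell point count (density) and maximum height."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    density = [[0] * ny for _ in range(nx)]
    height = [[0.0] * ny for _ in range(nx)]   # 0.0 for empty cells
    for x, y, z in points:
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue                           # point outside the grid
        i = int((x - x_range[0]) / cell)
        j = int((y - y_range[0]) / cell)
        density[i][j] += 1
        height[i][j] = max(height[i][j], z)
    return density, height
```

The resulting grids can be stacked as image channels, which is what makes image-style convolutional detectors applicable to point clouds.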
Do you want to know more about our 3D and data-fusion-based detection? Contact us!
Tracking & Data Fusion
One of our main research lines has always been Data Fusion. We use different Data Fusion algorithms in our research to solve many different problems, including detection and tracking. In this regard, our publications on the topic have allowed us to refine our tracking algorithms to provide accurate and reliable detection and motion estimation.
The video on the top right shows the performance of one of our algorithms on the KITTI dataset (only pedestrians, cars and cyclists within the camera field of view are tracked). The video on the bottom right shows the backward tracking algorithm, designed for offline applications, which provides enhanced performance.
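Trackers of this kind typically fuse a motion model with noisy detections. As a hedged illustration of the principle (not the lab's actual tracker, which fuses multiple sensors), here is a 1-D constant-velocity Kalman filter; the noise values are arbitrary and the process-noise model is simplified.

```python
class KalmanCV1D:
    """1-D constant-velocity Kalman filter with position-only measurements."""
    def __init__(self, x0=0.0, v0=0.0, p=1.0, q=0.01, r=0.25):
        self.x, self.v = x0, v0                  # state: position, velocity
        self.pxx, self.pxv, self.pvv = p, 0.0, p # covariance entries
        self.q, self.r = q, r                    # process / measurement noise

    def predict(self, dt):
        """Propagate state and covariance through the motion model."""
        self.x += self.v * dt
        self.pxx += dt * (2 * self.pxv + dt * self.pvv) + self.q
        self.pxv += dt * self.pvv
        self.pvv += self.q

    def update(self, z):
        """Correct the prediction with a position measurement z."""
        s = self.pxx + self.r                    # innovation covariance
        kx, kv = self.pxx / s, self.pxv / s      # Kalman gains
        y = z - self.x                           # innovation
        self.x += kx * y
        self.v += kv * y
        pxx, pxv, pvv = self.pxx, self.pxv, self.pvv
        self.pxx = (1 - kx) * pxx
        self.pxv = (1 - kx) * pxv
        self.pvv = pvv - kv * pxv
```

Feeding it detections of a target moving at constant speed makes the velocity estimate converge, which is what enables movement estimation on top of frame-by-frame detection.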
Localization and Mapping
Localization is one of the key factors in providing reliable, trustworthy autonomous driving in all kinds of environments. AMPL has wide experience with different localization algorithms, as well as with SLAM (Simultaneous Localization and Mapping).
Algorithms such as visual odometry, map matching, data fusion for localization, and LiDAR SLAM are some examples of the solutions AMPL has offered over the years.
Localizing in indoor environments, where GNSS technologies are completely unusable, is a complex task. Our technology, using exclusively 3D LiDAR information generated by one or more sensors simultaneously, provides continuous, precise, real-time localization in any environment. To do so, our algorithms fuse 3D LiDAR-based odometry with a scan-matching algorithm that uses a pre-computed map, built with our SLAM method, to locate the vehicle within the map and, therefore, within the environment.
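To make the scan-matching idea concrete, here is a deliberately simplified sketch: score candidate poses (e.g. perturbations around the LiDAR odometry prediction) by how many scan points fall on occupied cells of a pre-computed occupancy grid, and keep the best one. Real scan matchers are far more sophisticated; the grid representation, cell size and scoring are assumptions for illustration.

```python
import math

def match_score(scan, pose, grid, cell=0.2):
    """Fraction of scan points that land on occupied cells of a
    pre-computed occupancy grid after transforming them by `pose`.
    scan: (x, y) points in the sensor frame; pose: (tx, ty, theta);
    grid: set of occupied (i, j) cell indices."""
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    hits = 0
    for x, y in scan:
        wx, wy = tx + c * x - s * y, ty + s * x + c * y  # to map frame
        if (int(wx // cell), int(wy // cell)) in grid:
            hits += 1
    return hits / len(scan)

def best_pose(scan, grid, candidates, cell=0.2):
    """Pick the candidate pose that best explains the scan."""
    return max(candidates, key=lambda p: match_score(scan, p, grid, cell))
```

The odometry prediction seeds the candidate set, and the map keeps the pose from drifting, which is the essence of fusing odometry with map-based scan matching.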
For outdoor applications such as autonomous driving, we fuse our LiDAR-based localization solution with GNSS information using our data fusion algorithms, getting the best of both technologies: GNSS data improves localization accuracy in places with fewer map features and prevents the kidnapped-robot problem, while laser information improves accuracy in places where the map has more features and GNSS has higher uncertainty. This allows AMPL's data fusion localization to be used in scenarios that are especially difficult for GNSS, such as urban canyons.
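The complementarity described above can be sketched with the simplest possible fusion rule: an inverse-variance weighted average, where whichever source currently reports lower uncertainty dominates the combined estimate. This is a one-dimensional textbook sketch, not AMPL's actual fusion algorithm.

```python
def fuse(p_lidar, var_lidar, p_gnss, var_gnss):
    """Inverse-variance weighted fusion of two 1-D position estimates.
    The lower-variance source dominates: GNSS takes over where the map
    has few features, LiDAR takes over where GNSS is uncertain
    (e.g. urban canyons)."""
    w_l = 1.0 / var_lidar
    w_g = 1.0 / var_gnss
    p = (w_l * p_lidar + w_g * p_gnss) / (w_l + w_g)
    var = 1.0 / (w_l + w_g)   # fused estimate is never worse than either input
    return p, var
```

Note that the fused variance is always smaller than both inputs, which is why fusing the two sensors helps even where both are usable.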
3D Traffic Sign Classification
We have developed an algorithm to detect, classify and locate traffic signs in 3D across different scenarios, based on Convolutional Neural Networks (CNNs) using YOLOv3. The algorithm provides the bounding box of every traffic sign in an RGB image in real time. The 3D localization phase uses stereo vision: the position difference of every pixel between the two cameras is computed to generate a disparity map, which is then combined with the bounding boxes from the detector to estimate the depth of every sign.
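The depth estimation step follows directly from stereo geometry: Z = f·B/d, where f is the focal length in pixels, B the stereo baseline and d the disparity. A minimal sketch of combining a bounding box with a disparity map, using the median disparity inside the box for robustness; the data layout and function name are illustrative, not the published implementation.

```python
def sign_depth(disparity, box, focal_px, baseline_m):
    """Estimate a traffic sign's depth (metres) from a disparity map
    and its detector bounding box, via the median in-box disparity.
    disparity: 2-D list of per-pixel disparities in pixels (row-major);
    box: (x1, y1, x2, y2) in pixel coordinates."""
    x1, y1, x2, y2 = box
    values = sorted(
        disparity[r][c]
        for r in range(y1, y2)
        for c in range(x1, x2)
        if disparity[r][c] > 0          # skip invalid / unmatched pixels
    )
    d = values[len(values) // 2]        # median disparity in the box
    return focal_px * baseline_m / d    # Z = f * B / d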
Within the scope of its different projects related to automated driving, AMPL has developed different communication schemes: V2V (vehicle to vehicle), V2I (vehicle to infrastructure) and V2P (vehicle to pedestrian). AMPL's autonomous vehicles have the following capabilities:
- V2V. The vehicles are able to communicate and negotiate a pick-up action, as well as to report their status, regardless of the network used: 3G, 4G or WiFi.
- V2I. Different approaches are being developed here. One example is information sent from the infrastructure to the car about road occupancy; another is a web server able to send pick-up requests to the vehicles.
- V2P. Several demonstrations were created to test the viability of communication with an approaching vehicle (see video).
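Because all three schemes run over ordinary IP networks (3G, 4G or WiFi), the messages themselves can be network-agnostic. A minimal sketch of a serialized vehicle status message follows; the field names and message types are illustrative assumptions, not the lab's actual wire format.

```python
import json
import time

def make_status_msg(vehicle_id, lat, lon, speed_mps, heading_deg):
    """Serialize a V2V status message as JSON (illustrative schema)."""
    return json.dumps({
        "type": "STATUS",
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),     # sender's wall-clock time
        "lat": lat,
        "lon": lon,
        "speed_mps": speed_mps,
        "heading_deg": heading_deg,
    })

def parse_msg(raw):
    """Decode and minimally validate an incoming message."""
    msg = json.loads(raw)
    if msg.get("type") not in {"STATUS", "PICKUP_REQUEST", "PICKUP_ACK"}:
        raise ValueError("unknown message type")
    return msg
```

Keeping the payload independent of the transport is what lets the same message travel over 3G, 4G or WiFi unchanged.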
Beyond the above, IoT approaches are currently being tested to provide all kinds of information based on these emerging technologies.
Control and Cooperation of Autonomous Vehicles
Autonomous control is one of the keys to autonomous driving. Several solutions have been designed and implemented on the AMPL platforms: robust low-level control (throttle, brake and steering) and high-level control (path computation and obstacle avoidance) based on advanced intelligent control systems.
Among the control techniques developed are:
- Drive-by-wire adaptation of a golf cart (iCab Project)
- Path Planning and Path Following techniques.
- Vehicle Platooning.
- Multi-robot task allocation for autonomous vehicle fleets.
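As one concrete example of the path-following techniques listed above, here is a minimal pure-pursuit steering sketch: the controller aims at the first path point beyond a lookahead distance and converts the resulting curvature into a steering angle via the bicycle model. This is a generic textbook formulation, not necessarily the controller running on the AMPL platforms; the lookahead and geometry conventions are assumptions.

```python
import math

def pure_pursuit_steer(pose, path, lookahead, wheelbase):
    """Pure-pursuit steering for a bicycle-model vehicle.
    pose: (x, y, heading_rad); path: ordered list of (x, y) waypoints;
    returns the steering angle in radians (positive = left)."""
    x, y, th = pose
    # first path point at least `lookahead` metres away (else the last one)
    target = path[-1]
    for px, py in path:
        if math.hypot(px - x, py - y) >= lookahead:
            target = (px, py)
            break
    # express the target in the vehicle frame; only the lateral offset matters
    dx, dy = target[0] - x, target[1] - y
    lateral = -math.sin(th) * dx + math.cos(th) * dy
    ld = math.hypot(dx, dy)
    curvature = 2.0 * lateral / (ld * ld)       # pure-pursuit arc curvature
    return math.atan(wheelbase * curvature)     # bicycle-model steering angle
```

A target straight ahead yields zero steering, and a target to the left yields a positive (left) steering command, which is the sanity check usually applied to such a controller.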
Multiple Sensor Calibration
In a field where more than one sensor is required, calibration methods are very important. In this regard, AMPL has developed automatic calibration techniques that are used worldwide and published as open source.
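The core of many extrinsic calibration methods is estimating the rigid transform that maps corresponding points seen by one sensor onto the same points seen by another. As a hedged illustration (a closed-form 2-D least-squares alignment, not AMPL's published method), given matched point pairs:

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form least-squares 2-D rigid transform (theta, tx, ty)
    mapping each `src` point onto its corresponding `dst` point:
    dst_i ~ R(theta) * src_i + t."""
    n = len(src)
    sx = sum(p[0] for p in src) / n          # source centroid
    sy = sum(p[1] for p in src) / n
    dx_ = sum(p[0] for p in dst) / n         # destination centroid
    dy_ = sum(p[1] for p in dst) / n
    # accumulate dot and cross terms of the centred point sets
    a = b = 0.0
    for (x1, y1), (x2, y2) in zip(src, dst):
        u, v = x1 - sx, y1 - sy
        p, q = x2 - dx_, y2 - dy_
        a += u * p + v * q
        b += u * q - v * p
    theta = math.atan2(b, a)                 # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = dx_ - (c * sx - s * sy)             # translation that aligns centroids
    ty = dy_ - (s * sx + c * sy)
    return theta, tx, ty
```

Given matched landmarks seen by two sensors, this recovers the rotation and translation between their frames exactly when the correspondences are noise-free, and in the least-squares sense otherwise.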