Advanced Driver Assistance Systems (ADAS) are an important topic in Intelligent Transportation Systems (ITS). Such systems use cameras or other fused sensors to gather information about the road scene in front of the vehicle and to help prevent dangerous situations, providing a warning mechanism that protects drivers from accidents caused by fatigue or sudden distraction. For this reason, we have developed a series of computer-vision techniques, including lane recognition and obstacle detection (such as pedestrian and vehicle detection).
The motivation for this system is that many car accidents are caused by collisions with on-road obstacles. We have therefore developed an on-road obstacle detection system based on a convolutional neural network (CNN). The system uses a proposal-free framework, which makes it fast and efficient, and this kind of detector is widely applicable to Intelligent Transportation Systems and to surveillance applications.
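The key property of a proposal-free detector is that detections are read directly off a dense network output grid, with no separate region-proposal stage. As a minimal sketch of that decoding step (the grid layout, cell size, and offset encoding here are illustrative assumptions, not the system's actual head design):

```python
import numpy as np

def decode_grid(scores, boxes, cell_size=32, threshold=0.5):
    """Decode a proposal-free detection grid (hypothetical YOLO-style head).

    scores: (H, W) objectness map produced by the network.
    boxes:  (H, W, 4) per-cell (dx, dy, w, h) in pixels, where (dx, dy)
            offsets the box centre from the cell centre.
    Returns a list of (cx, cy, w, h, score) tuples. Every cell is decoded
    directly; no region proposals are generated or scored separately.
    """
    detections = []
    H, W = scores.shape
    for i in range(H):
        for j in range(W):
            if scores[i, j] >= threshold:
                dx, dy, w, h = boxes[i, j]
                cx = (j + 0.5) * cell_size + dx  # cell centre plus offset
                cy = (i + 0.5) * cell_size + dy
                detections.append((cx, cy, float(w), float(h), float(scores[i, j])))
    return detections
```

In a full system the decoded boxes would still go through non-maximum suppression; the sketch stops at the grid-to-box step that distinguishes proposal-free detectors from two-stage ones.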
A Deep Learning Based Semantic Segmentation Approach for Car Steering on Urban Roads
In vision-based autonomous driving systems, perception and control are two critical problems to be solved. We propose an end-to-end CNN architecture with semantic perception to solve the vision-based control problem in autonomous driving. In the first stage, a CNN module generates a semantic segmentation from the input image. In the second stage, another CNN module exploits this semantic perception to predict steering controls. We show that semantic segmentation can enhance the performance of a vision-based autonomous driving system.
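To make the two-stage idea concrete, the sketch below stands in for stage two with a simple geometric rule: steer toward the horizontal centroid of the drivable-road pixels in the stage-one label map. The label convention and the rule itself are illustrative assumptions; in the proposed architecture this mapping is learned by a second CNN rather than hand-coded.

```python
import numpy as np

def steering_from_segmentation(seg, road_label=1):
    """Map a semantic segmentation to a steering command (stage-two stand-in).

    seg: (H, W) integer label map from a first-stage segmentation network,
         with road_label marking the drivable surface (assumed convention).
    Returns a value in [-1, 1]: negative steers left, positive steers right,
    proportional to the road region's horizontal offset from image centre.
    """
    H, W = seg.shape
    cols = np.where(seg == road_label)[1]  # column indices of road pixels
    if cols.size == 0:
        return 0.0  # no drivable surface detected: hold straight
    centre = (W - 1) / 2.0
    offset = cols.mean() - centre
    return float(np.clip(offset / centre, -1.0, 1.0))
```

The point of the example is the data flow, an image becomes a label map and the label map becomes a control signal, which is exactly the decomposition the two-stage architecture learns end to end.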
This research focuses on developing a reliable and general action-recognition (AR) system; the central difficulty is adapting to the varied appearances and action styles of different users. Many future applications rely on this technique, such as real-time Human-Computer Interaction (HCI) systems, because identifying the motion pattern behind an input signal is essential. The work can also be extended to further AR techniques and enhance the convenience of everyday life.
This research now focuses on hand pose estimation and human-computer interaction; we propose a method that provides a natural way to interact with the computer and the virtual environment. The difficulties include hand pose variation, self-occlusion, and broken (noisy) depth images. Our application pipeline is as follows: first, a depth camera mounted on the helmet captures a depth image; next, a deep learning model estimates the 3D coordinates of the hand joints; finally, the joint coordinates are projected into the virtual environment, where the user interacts with virtual objects.
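The final step of the pipeline, placing an estimated joint into the 3D scene, can be sketched with the standard pinhole back-projection from a depth pixel to camera coordinates. The intrinsic parameters (fx, fy, cx, cy) below are placeholders for whatever calibration the helmet-mounted depth camera provides:

```python
import numpy as np

def depth_to_camera_coords(u, v, depth, fx, fy, cx, cy):
    """Back-project a depth-image pixel to 3D camera coordinates.

    A hand joint estimated at pixel (u, v) with depth value `depth`
    (in metres) maps, under the pinhole model with focal lengths
    (fx, fy) and principal point (cx, cy), to a point (X, Y, Z) in the
    camera frame, ready to be transformed into the virtual environment.
    """
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])
```

A further rigid transform (from camera frame to the virtual world frame) would then place the joint relative to the rendered scene; that transform depends on the helmet tracking setup and is omitted here.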