
Design And Implementation Of In-Vehicle Gesture Control System

Posted on: 2019-07-27
Degree: Master
Type: Thesis
Country: China
Candidate: Z L Lu
Full Text: PDF
GTID: 2392330590492409
Subject: Software engineering
Abstract/Summary:
As a new means of human-machine interface (HMI), gesture control systems are receiving more and more attention from the market. Controlling the head unit in a car with traditional buttons or a touch screen can distract the driver and compromise safety. With gesture control the driver does not need to look at the buttons or the screen, allowing a more natural HMI experience. Current in-vehicle gesture control systems on the market use a depth camera and an ASIC that runs the image algorithm. The disadvantages of this approach are the high cost of the depth camera and the extra cost of the ASIC, so it can only be applied to high-end car models. There is also strong demand for gesture recognition in the mainstream market. If we use an economical camera without depth sensing and run the algorithm on the head unit, the overall cost of the system can be reduced dramatically. Economical cameras are readily available on the market; the problem is that the computing power of the SOCs in different head units varies widely. A SOC with more computing power can use a more complex algorithm to support more gestures and achieve better recognition results, while a SOC with less computing power needs a simpler algorithm that performs some basic functions. Different algorithms are therefore needed to run gesture functions on different SOC platforms.

This paper focuses on gesture recognition algorithms for such embedded systems. Considering the cost of the system and the limited computing power of different SOCs, we decided to use a low-cost infrared camera. Applying traditional morphological algorithms, machine learning and deep learning, this paper proposes three approaches for different SOC platforms.

The first method uses morphological algorithms and requires minimal computing power. It produces the gesture result based on the position of the palm and the number of fingers. This method has some difficulty handling background noise, which we overcome by adjusting the power of the infrared LED to obtain adequate segmentation.

The second method uses an LBP-based detector and a CNN classification network, and achieves better detection performance than a previous study [1]. We train an SVM classifier on LBP features for hand detection, then train a small convolutional neural network to classify the hand pose. A sliding window detects the position of the hand, and the hand image is fed into the classification network to obtain the pose class. Since only single-hand operation needs to be supported, when the SVM detector produces several candidate regions we assign priorities to them and run the classification network only on the high-priority regions. This improves the speed of the system and makes the number of classification-network runs per frame adjustable, which helps meet the real-time performance requirement. The second method is much faster than the third because the computing power needed by the SVM and the low-resolution CNN is far less than that needed by a deep neural network; it achieves real-time performance even without a GPU. A minimal sketch of this pipeline is given below.
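As an illustration only (not code from the thesis), the following Python sketch shows how such a pipeline can be wired together. It assumes scikit-image's local_binary_pattern for the LBP features, an already-trained sklearn-style SVM passed in as `svm`, and a hypothetical `classify_pose` callable standing in for the small CNN; the window size, stride, and priority rule are illustrative assumptions.

```python
# Sketch of the second approach: an LBP-feature SVM proposes hand candidates
# via a sliding window, and a small CNN classifier (passed in as
# `classify_pose`) labels only the highest-priority candidates per frame.
# Model training, window size, stride and priority rule are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(patch, points=8, radius=1):
    """Uniform LBP histogram used as the SVM feature vector."""
    lbp = local_binary_pattern(patch, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2,
                           range=(0, points + 2), density=True)
    return hist

def detect_hand_pose(gray_frame, svm, classify_pose,
                     window=64, stride=16, max_regions=3):
    """Slide a window over the frame, score each patch with the LBP+SVM
    detector, then run the CNN pose classifier only on the top-scoring
    regions (single-hand use case)."""
    candidates = []
    h, w = gray_frame.shape
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = gray_frame[y:y + window, x:x + window]
            score = svm.decision_function([lbp_histogram(patch)])[0]
            if score > 0:                      # SVM says "hand-like"
                candidates.append((score, x, y))
    # Prioritize candidates (here simply by detector score) and classify only
    # the first few, so the per-frame cost stays bounded on a weak SOC.
    candidates.sort(reverse=True)
    for score, x, y in candidates[:max_regions]:
        crop = gray_frame[y:y + window, x:x + window]
        pose = classify_pose(crop)             # small CNN, e.g. palm / fist
        if pose is not None:
            return pose, (x, y, window, window)
    return None, None
```

Capping the number of classified regions per frame is what keeps the classification cost predictable on a weaker SOC.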
The third method applies a recent object-detection neural network, YOLOv2, to detect and classify the hand within a single network. Its parameters can be adjusted to strike a balance between speed and accuracy. With a GPU, this method achieves much better accuracy than the previous two; YOLOv2 handles background noise very well and achieves the lowest false-positive rate. This method works best when sufficient computing power is available. A sketch of this single-pass detection is given below.
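As an illustration (not the thesis code), a minimal sketch using OpenCV's DNN module shows the single-pass detect-and-classify flow. The cfg/weights file names and gesture class list are placeholders for a gesture-specific YOLOv2 model, and the input resolution is one of the parameters that trades speed against accuracy.

```python
# Sketch of the third approach: a YOLOv2 network detects and classifies the
# hand in one forward pass via OpenCV's DNN module. File names and class
# labels below are placeholders, not the thesis's actual model.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("hand_yolov2.cfg", "hand_yolov2.weights")
GESTURE_CLASSES = ["palm", "fist", "point", "swipe"]   # illustrative labels

def detect_gesture(frame, input_size=416, conf_threshold=0.5):
    """Run one YOLOv2 forward pass and return the best gesture box, if any."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (input_size, input_size),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    best = None
    for output in outputs:
        for det in output:            # [cx, cy, bw, bh, obj, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_threshold:
                cx, cy = det[0] * w, det[1] * h
                bw, bh = det[2] * w, det[3] * h
                box = (int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh))
                if best is None or confidence > best[1]:
                    best = (GESTURE_CLASSES[class_id], confidence, box)
    return best
```

Lowering input_size speeds up inference at some cost in accuracy, which is the speed/accuracy trade-off mentioned above.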
Keywords/Search Tags:Gesture Recognition, Deep learning, Machine Learning, Human Machine Interface, In-Vehicle HMI