
The Research Of Vision Based Human Computer Interface For Wearable Devices

Posted on: 2017-05-31
Degree: Master
Type: Thesis
Country: China
Candidate: D Huang
Full Text: PDF
GTID: 2308330485984939
Subject: Control Science and Engineering
Abstract/Summary:
Vision-based wearable HCI is an interdisciplinary technology drawing on computer vision, wearable computing, cognitive science, and related fields. Its goal is to design and build natural, intuitive interaction among people, devices, and the environment. This thesis studies vision-based interaction technology for wearable devices in three parts: hand detection from first-person vision (FPV), banknote recognition for the visually impaired, and terrain recognition for an exoskeleton robot.

Hand detection is the most fundamental computer vision task for a wearable device and plays an important role in wearable HCI: only stable, fast, and robust hand detection can provide a reliable and effective interaction experience. Under FPV conditions, dramatic illumination changes make the appearance of the hand unstable. To address this, we propose a novel method that models global illumination through supervised learning with optimal features. First, the training images are clustered into scene categories by k-means, using HSV histograms as features. Then, for each scene, a scene-specific classifier is trained on optimal color, texture, and spatial features. The method was tested on the public CMU-EDSH dataset, achieving an F-measure of 0.80.

Using the hand position as prior knowledge, we then propose a sequential framework for handheld-object recognition. The hand position lets us locate the target quickly, which sharply reduces the computation required during recognition. We adopt banknote recognition to evaluate the framework. Plain ORB matching alone did not give satisfactory results, so we improved it to an I2C match, which better tolerates illumination variation and the physical inconsistency of the bills. Combining the sequential framework with the I2C match achieves mAP = 62.4% for banknote recognition in natural scenes.

The last part of this thesis is terrain classification for an exoskeleton robot, which can be used for robot navigation. The algorithm is based on the BoVW (bag of visual words) framework: a visual dictionary is learned by clustering local features of the training samples, and each test image is then represented as a vector generated from that dictionary. Finally, we built an exoskeleton-robot terrain dataset containing 6 typical terrain types to verify the algorithm, obtaining mAP = 92.8%.
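The scene-clustering step of the hand-detection method (k-means over HSV histograms) can be sketched as follows. This is a minimal numpy-only illustration: the histogram binning, the feature layout, and the farthest-point k-means initialization are assumptions for the sketch, not the thesis's actual implementation.

```python
import numpy as np

def hsv_histogram(hsv_image, bins=8):
    """Turn an HSV image into a normalized per-channel histogram feature."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(hsv_image[..., c], bins=bins, range=(0, 256))
        feats.append(hist)
    feat = np.concatenate(feats).astype(float)
    return feat / feat.sum()

def kmeans(X, k, iters=20):
    """Minimal k-means (farthest-point init + Lloyd iterations).

    Returns (centers, labels); labels[i] is the scene category of sample i.
    """
    # Farthest-point initialization: deterministic and robust for a sketch.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        dist = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = X[mask].mean(axis=0)
    return centers, labels
```

In the thesis's pipeline, each resulting scene cluster would then get its own classifier trained on color, texture, and spatial features; the sketch above covers only the clustering stage.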
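The "hand position as prior knowledge" idea in the handheld-object framework amounts to restricting recognition to a region around the detected hand instead of scanning the whole frame. A minimal sketch is below; the function name, the enlargement factor, and the upward shift of the search box are hypothetical choices for illustration, not the thesis's actual scheme.

```python
import numpy as np

def object_roi(frame, hand_box, scale=1.5):
    """Crop an enlarged region around a detected hand bounding box.

    hand_box is (x, y, w, h). The crop is shifted upward on the assumption
    (illustrative only) that a handheld object such as a banknote extends
    above the grasping hand. Recognition then runs on the crop, which is
    much smaller than the full frame.
    """
    x, y, w, h = hand_box
    H, W = frame.shape[:2]
    cx = x + w / 2
    nw, nh = int(w * scale), int(h * scale)
    x0 = max(0, int(cx - nw / 2))
    y0 = max(0, int(y - nh / 2))   # shift the box upward past the hand
    x1 = min(W, x0 + nw)
    y1 = min(H, y0 + nh)
    return frame[y0:y1, x0:x1]
```

For a 100x100 frame and a 20x20 hand box, the crop covers roughly a tenth of the pixels, which is the source of the computational saving the abstract refers to.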
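The BoVW pipeline used for terrain classification can likewise be sketched in a few lines: cluster local descriptors into a visual dictionary, then encode each image as a histogram of nearest codewords. This numpy-only toy uses 2-D descriptors and a small vocabulary; the thesis's actual local features, dictionary size, and clustering details are assumptions here.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20):
    """Learn a visual dictionary by clustering local descriptors
    (farthest-point init + Lloyd iterations, standing in for the
    thesis's clustering step)."""
    centers = [descriptors[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(descriptors - c, axis=1) for c in centers],
                   axis=0)
        centers.append(descriptors[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.linalg.norm(
            descriptors[:, None] - centers[None], axis=2).argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = descriptors[mask].mean(axis=0)
    return centers

def bovw_encode(descriptors, vocabulary):
    """Represent one image as a normalized histogram of nearest codewords."""
    labels = np.linalg.norm(
        descriptors[:, None] - vocabulary[None], axis=2).argmin(axis=1)
    hist = np.bincount(labels, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

The resulting histogram is the fixed-length vector the abstract describes; a standard classifier trained on such vectors would then assign one of the 6 terrain classes.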
Keywords/Search Tags: HCI, wearable devices, hand detection, banknote recognition, terrain recognition