
Spatial Layout Understanding From Single Catadioptric Omnidirectional Image

Posted on: 2009-08-31    Degree: Master    Type: Thesis
Country: China    Candidate: G Cheng    Full Text: PDF
GTID: 2178360278956697    Subject: Control Science and Engineering
Abstract/Summary:
Image-based spatial layout understanding, which aims to perceive the position and size of each object in an image, is an active topic in computer vision. By taking over part of the work of scene perception, a computer can relieve people of tedious tasks, and the technique is finding applications in fields such as industrial design, robot navigation, medical measurement, and entertainment. Catadioptric omnidirectional imaging, meanwhile, captures the full surroundings in a single shot. Exploiting this advantage, our research proposes a spatial layout understanding method for a single image of an outdoor building scene, and experiments show that this single-image technique is feasible.

This thesis addresses the following key points:

(1) With the help of machine learning, we partition a catadioptric omnidirectional image into four classes: sky, ground, vertical building, and tree-like plant. The spatial layout understanding problem thereby reduces to understanding the vertical objects (see the first sketch after this abstract).

(2) The problem then comes down chiefly to analyzing the boundaries of the vertical objects. Our solution combines several methods: dynamic programming over a cumulative function of building feature segments separates each building's left and right boundary lines; least-squares line fitting, converging or extending projections toward vanishing points, and approximate circular estimation distinguish the top and bottom lines; each rectangle-like building face is thus obtained.

(3) For building-feature-segment detection we use a windowed Hough transform, which keeps the detected segments homogeneous (see the windowed-Hough sketch below).

(4) By tracing the optical paths we prove a special property of the omnidirectional image: the horizon depends only on the catadioptric imaging equipment, not on the scene. This principle makes building-face boundary extraction simpler and more effective.

(5) By deriving the conversions among omnidirectional image coordinates, real ground position, and object height, we give solutions for computing a vertical object's position and height (see the geometric sketch below).

(6) Based on the above spatial layout understanding results, we construct a multi-view scene walkthrough system; this visualization system is used to verify the validity of our research.
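The four-class partition in point (1) does not specify the classifier or feature set used in the thesis. The following is a minimal sketch, assuming hand-crafted per-patch colour statistics and a scikit-learn SVM (both assumptions), purely to illustrate the kind of pipeline involved.

```python
# Minimal sketch of a 4-class patch classifier (sky / ground / building / plant).
# The features, patch size, and SVM choice are assumptions for illustration;
# the thesis does not specify its exact classifier or feature set.
import numpy as np
import cv2
from sklearn.svm import SVC

CLASSES = ["sky", "ground", "building", "plant"]
PATCH = 16  # assumed patch size in pixels

def patch_features(patch_bgr):
    """Mean and std of each HSV channel: a simple 6-D colour descriptor."""
    hsv = cv2.cvtColor(patch_bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

def image_patches(img):
    """Yield (row, col, patch) tiles covering the omnidirectional image."""
    h, w = img.shape[:2]
    for r in range(0, h - PATCH + 1, PATCH):
        for c in range(0, w - PATCH + 1, PATCH):
            yield r, c, img[r:r + PATCH, c:c + PATCH]

def train(labelled):
    """labelled: list of (patch_bgr, class_index) pairs from annotated images."""
    X = np.array([patch_features(p) for p, _ in labelled])
    y = np.array([lbl for _, lbl in labelled])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, y)
    return clf

def label_image(clf, img):
    """Return a per-patch label map for one omnidirectional image."""
    h, w = img.shape[:2]
    labels = np.zeros((h // PATCH, w // PATCH), dtype=np.int32)
    for r, c, patch in image_patches(img):
        labels[r // PATCH, c // PATCH] = clf.predict([patch_features(patch)])[0]
    return labels
```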
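Point (3) names a windowed Hough transform for detecting building feature segments; the thesis does not give its parameters. Below is a minimal sketch using OpenCV's probabilistic Hough transform inside fixed-size windows, with the window size and thresholds as assumed values, so that each window contributes short, locally homogeneous segments.

```python
# Sketch of a "windowed" Hough transform: run the probabilistic Hough
# transform inside fixed-size windows so the detected segments stay short
# and locally homogeneous. Window size and thresholds are assumptions.
import cv2
import numpy as np

WIN = 64  # assumed window size in pixels

def windowed_hough_segments(gray):
    """Return line segments (x1, y1, x2, y2) in full-image coordinates."""
    edges = cv2.Canny(gray, 50, 150)
    segments = []
    h, w = edges.shape
    for y0 in range(0, h, WIN):
        for x0 in range(0, w, WIN):
            window = edges[y0:y0 + WIN, x0:x0 + WIN]
            lines = cv2.HoughLinesP(window, rho=1, theta=np.pi / 180,
                                    threshold=20, minLineLength=15, maxLineGap=3)
            if lines is None:
                continue
            for x1, y1, x2, y2 in lines[:, 0]:
                # shift window coordinates back to image coordinates
                segments.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0))
    return segments
```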
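Point (5) derives the mapping between omnidirectional image coordinates, ground position, and object height. The thesis's exact mirror model is not reproduced here; the sketch below assumes a calibrated single-viewpoint catadioptric rig mounted with its axis vertical at a known height, and a hypothetical calibration function radius_to_elevation() that maps image radius to ray elevation. Under those assumptions the ground position and height follow from elementary trigonometry.

```python
# Geometric sketch: recover a vertical object's ground position and height
# from a single calibrated catadioptric image. Assumes a single effective
# viewpoint at known height RIG_HEIGHT above the ground, a vertical mirror
# axis, and a hypothetical calibration radius_to_elevation() mapping image
# radius (pixels from the image centre) to the ray's elevation angle in
# radians (negative = looking down toward the ground).
import math

RIG_HEIGHT = 1.5  # assumed viewpoint height above the ground, in metres

def radius_to_elevation(r_pixels):
    """Placeholder for the mirror/camera calibration; rig-specific."""
    raise NotImplementedError("supply the calibrated mapping for the actual rig")

def pixel_to_polar(x, y, cx, cy):
    """Image pixel -> (radius, azimuth) about the image centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    return math.hypot(dx, dy), math.atan2(dy, dx)

def ground_position(x_bottom, y_bottom, cx, cy):
    """Ground point of an object's bottom edge: (distance, azimuth)."""
    r, azimuth = pixel_to_polar(x_bottom, y_bottom, cx, cy)
    elevation = radius_to_elevation(r)          # below the horizon, so negative
    distance = RIG_HEIGHT / math.tan(-elevation)
    return distance, azimuth

def object_height(x_top, y_top, distance, cx, cy):
    """Height of a vertical object whose base is 'distance' metres away."""
    r, _ = pixel_to_polar(x_top, y_top, cx, cy)
    elevation = radius_to_elevation(r)          # above the horizon, so positive
    return RIG_HEIGHT + distance * math.tan(elevation)
```

The azimuth read from the image gives the object's direction directly, which is consistent with point (4): the horizon circle, and hence the boundary between "looking up" and "looking down", is fixed by the rig rather than the scene.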
Keywords/Search Tags: catadioptric omnidirectional image, spatial layout understanding, machine learning, building-feature-segment, Hough transform