Object category recognition through visual-semantic context networks

Posted on: 2015-03-18
Degree: Ph.D
Type: Thesis
University: Rutgers The State University of New Jersey - New Brunswick
Candidate: Chakraborty, Ishani
Full Text: PDF
GTID: 2478390020452555
Subject: Computer Science
Abstract/Summary:
Understanding and interacting with one's environment requires parsing the image of the environment and recognizing a wide range of objects within it. Despite wide variations in viewpoint, occlusion and background clutter, humans achieve this task effortlessly and almost instantaneously. In this thesis, we explore computational algorithms that teach computers to recognize objects in natural scenes. Inspired by findings in human cognition, our algorithms are based on the notion that visual inference involves not only recognizing individual objects in isolation but also exploiting the rich visual and semantic associations between the object categories that form complex scenes.

We view artificial object recognition as a fusion of information from two interconnected representations. The first is the inter-image representation, in which an image location is visually associated with previously learned object categories, based on appearance models, to find the most likely interpretations. The second is the intra-image representation, in which the objects in an image are semantically associated with each other to find the most meaningful spatial and structural arrangements. The two representations are interconnected in that the visual process proposes object candidates to the semantic process, while the semantic process verifies and corrects the visual process's hypotheses.

The primary goal of this thesis is to develop computational models for visual recognition that characterize these visual and semantic associations, and their inter-dependencies, to resolve object identities. To do so, we model object associations in contextual spaces. Unlike traditional approaches to object recognition that use context as a post-processing filter to discard inconsistent object labels, we stratify scene generation into a Bayesian hierarchy and simultaneously learn semantic and visual context models for objects in scenes. The semantic and visual contexts among objects are represented through latent variables in this hierarchy: the intra-image associations within a scene are modeled as semantic context, while the inter-image relations due to appearance similarities between object categories are modeled as visual context. To combine the complementary information derived from the two spaces, object labels are inferred by context switching; labels activated by appearance matches constrain the semantic search, while semantic coherence, in turn, constrains object identities. We demonstrate how this novel context network for modeling associations between objects leads to highly accurate object detection and scene understanding in natural images, especially when training data is impoverished and negative exemplars are not easily available.
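To make the context-switching idea concrete, the following is a minimal illustrative sketch in Python, not the actual model developed in the thesis: the arrays `appearance` (per-region visual match scores) and `compat` (pairwise category co-occurrence compatibilities), and the alternating re-weighting rule, are simplifying assumptions introduced here for illustration.

    import numpy as np

    def context_switch_inference(appearance, compat, n_iters=10, alpha=0.5):
        """Alternate between visual evidence and semantic context.

        appearance : (n_objects, n_categories) non-negative visual match
                     scores; a hypothetical stand-in for the appearance models.
        compat     : (n_categories, n_categories) non-negative semantic
                     co-occurrence compatibilities; also hypothetical.
        """
        eps = 1e-12
        # Visual pass seeds the beliefs: appearance matches propose labels.
        beliefs = appearance / (appearance.sum(axis=1, keepdims=True) + eps)
        for _ in range(n_iters):
            # Semantic pass: support for each object's label from the current
            # label beliefs of all *other* objects (leave-one-out sum).
            context = beliefs @ compat                               # (n_obj, n_cat)
            context = context.sum(axis=0, keepdims=True) - context  # exclude self
            semantic = context / (context.sum(axis=1, keepdims=True) + eps)
            # Visual pass: semantic coherence re-weights appearance hypotheses.
            beliefs = (appearance ** alpha) * (semantic ** (1.0 - alpha))
            beliefs /= beliefs.sum(axis=1, keepdims=True) + eps
        return beliefs.argmax(axis=1)

    # Toy usage: two regions, three categories ("car", "road", "cow").
    appearance = np.array([[0.6, 0.3, 0.1],    # region 0 looks most like a car
                           [0.2, 0.3, 0.5]])   # region 1 is visually ambiguous
    compat = np.array([[1.0, 0.9, 0.1],        # cars co-occur with roads, not cows
                       [0.9, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
    print(context_switch_inference(appearance, compat))  # [0 1]: car + road

In this toy example the second region is visually closest to "cow", but its semantic compatibility with the confidently detected car pulls its label to "road", mirroring how the semantic process verifies and corrects the visual process's hypotheses.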
Keywords/Search Tags: Object, Visual, Semantic, Context, Image, Recognition