
Scalable inference techniques for Markov logic

Posted on: 2016-10-27
Degree: Ph.D
Type: Dissertation
University: The University of Texas at Dallas
Candidate: Venugopal, Deepak
Subject: Computer Science
Abstract/Summary:
In this dissertation, we focus on Markov logic networks (MLNs), an advanced modeling language that combines first-order logic, the cornerstone of traditional Artificial Intelligence (AI), with probabilistic graphical models, the cornerstone of modern AI. MLNs are routinely used in a wide variety of application domains, including natural language processing and computer vision, and are preferred over propositional representations because, unlike the latter, they yield compact, interpretable models that can be easily modified and tuned. Unfortunately, even though the MLN representation is compact and efficient, inference in MLNs is notoriously difficult, and despite great progress, several inference tasks in complex real-world MLNs remain beyond the reach of existing technology. In this dissertation, we greatly advance the state of the art in MLN inference, enabling it to solve much harder and larger problems than existing approaches can. We develop several domain-independent principles, techniques, and algorithms for fast, scalable, and accurate inference that fully exploit both probabilistic and logical structure.

This dissertation makes the following five contributions. First, we propose two approaches that respectively address two fundamental problems with Gibbs sampling, a popular approximate inference algorithm: it does not converge in the presence of determinism, and it exhibits poor accuracy when the MLN contains a large number of strongly correlated variables. Second, we lift sampling-based approximate inference algorithms to the first-order level, enabling them to take full advantage of symmetries and relational structure in MLNs. Third, we develop novel approaches for exploiting approximate symmetries. These approaches help scale up inference to large, complex MLNs that are not amenable to conventional lifting techniques, which exploit only exact symmetries. Fourth, we propose a new, efficient algorithm for solving a major bottleneck in all inference algorithms for MLNs: counting the number of true groundings of each formula. We demonstrate empirically that our new counting approach yields orders-of-magnitude improvements in both the speed and quality of inference. Finally, we demonstrate the power and promise of our approaches on biomedical event extraction, a challenging real-world information extraction task, on which our system achieved state-of-the-art results.
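The fourth contribution concerns counting the number of true groundings of each first-order formula, which the abstract identifies as a major bottleneck for all MLN inference algorithms. The following minimal sketch is not taken from the dissertation; it uses an invented predicate set (Smokes, Friends) and a toy four-constant domain to show the naive brute-force counter that such work aims to improve on, whose cost grows as |domain|^k for a formula with k logical variables.

# A minimal, illustrative sketch (assumed example, not the dissertation's
# algorithm): brute-force counting of true groundings of the MLN formula
#   Smokes(x) ^ Friends(x, y) => Smokes(y)
# under one fixed truth assignment to the ground atoms.
from itertools import product

domain = ["Ana", "Bob", "Carl", "Dee"]  # hypothetical constants

# Hypothetical world: the sets of ground atoms that are true.
smokes = {"Ana", "Bob"}
friends = {("Ana", "Bob"), ("Bob", "Carl"), ("Carl", "Dee")}

def formula_true(x, y):
    """Evaluate the ground instance Smokes(x) ^ Friends(x,y) => Smokes(y)."""
    antecedent = (x in smokes) and ((x, y) in friends)
    return (not antecedent) or (y in smokes)

# Naive counting enumerates every grounding: |domain|^2 here, and
# |domain|^k for a formula with k logical variables, which is why
# grounding counts become a bottleneck as domains grow.
num_true = sum(formula_true(x, y) for x, y in product(domain, repeat=2))
print(num_true)  # 15 of the 16 groundings are satisfied in this toy world

In MLNs, this count determines a formula's weighted contribution to a world's log-probability, so every inference step that scores or samples a world depends on it; speeding it up therefore accelerates inference as a whole, which is the role the dissertation's counting algorithm plays.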
Keywords/Search Tags: Inference, Markov, Logic, MLNs, Techniques, MLN, Approaches, Scalable