We target a 3D generative model for general natural scenes, which are typically unique and intricate. The lack of sufficient training data, together with the difficulty of devising ad hoc designs for widely varying scene characteristics, renders existing setups intractable. The main contribution of this dissertation is a 3D natural scene generation method that can produce diverse, high-quality 3D natural scenes from only a single sample. Inspired by classical patch-based image models, we advocate synthesizing 3D scenes at the patch level, given a single example. To our knowledge, our method is the first 3D generative model that can generate general 3D natural scenes from a single example, with both realistic geometry and visual appearance, in large quantities and varieties. Derived from Plenoxels, a dedicated exemplar pyramid is constructed via coarse-to-fine training, and the Plenoxels features are further transformed into better-defined and more compact features. Specifically, patch matching and blending operate in tandem at each scale to synthesize an intermediate value-based scene, which is eventually converted to a coordinate-based counterpart at the end. Last, an exact-to-approximate patch nearest-neighbor module is devised to address the computational cost of running patch-based algorithms on voxels. At the core of this work lie important algorithmic designs with respect to the scene representation and the generative patch nearest-neighbor module, which address the unique challenges of lifting classical 2D patch-based frameworks to 3D generation. We validate the efficacy of our method on random scene generation with an array of general natural scenes, and show its superiority by comparing it to baseline methods. The importance of each design choice is also validated. Extensive experiments further demonstrate the versatility of our method in several 3D modeling applications, all implemented in a unified framework.
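To make the multi-scale matching-and-blending loop concrete, the following is a minimal NumPy sketch of coarse-to-fine patch synthesis on a 3D feature grid. The grid shapes, the helper names (extract_patches, nearest_patches, synthesize_scale, coarse_to_fine), the noise injection, and the nearest-neighbor upsampling are illustrative assumptions, not the dissertation's actual implementation; the approximate stage of the nearest-neighbor module and the final value-to-coordinate conversion are omitted for brevity.

```python
# A sketch of coarse-to-fine patch matching and blending on a voxel
# feature grid; helper names and shapes are hypothetical stand-ins.
import numpy as np

def extract_patches(grid, size):
    """Collect all overlapping size^3 patches from a (D, H, W, C) grid."""
    D, H, W, _ = grid.shape
    patches, coords = [], []
    for z in range(D - size + 1):
        for y in range(H - size + 1):
            for x in range(W - size + 1):
                patches.append(grid[z:z+size, y:y+size, x:x+size].ravel())
                coords.append((z, y, x))
    return np.stack(patches), coords

def nearest_patches(query, keys):
    """Exact L2 nearest neighbors (the 'exact' stage; an approximate
    index would stand in for this at finer scales)."""
    d = ((query ** 2).sum(1)[:, None]
         - 2.0 * query @ keys.T
         + (keys ** 2).sum(1)[None, :])
    return d.argmin(axis=1)

def synthesize_scale(synth, exemplar, size=3, iters=4):
    """One scale: replace each synthesized patch with its nearest
    exemplar patch and average the overlaps (value-based blending)."""
    ex_patches, _ = extract_patches(exemplar, size)
    for _ in range(iters):
        q_patches, q_coords = extract_patches(synth, size)
        nn = nearest_patches(q_patches, ex_patches)
        acc = np.zeros_like(synth)
        wts = np.zeros(synth.shape[:3] + (1,))
        for idx, (z, y, x) in zip(nn, q_coords):
            acc[z:z+size, y:y+size, x:x+size] += \
                ex_patches[idx].reshape(size, size, size, -1)
            wts[z:z+size, y:y+size, x:x+size] += 1.0
        synth = acc / wts  # blend overlapping matched patches
    return synth

def coarse_to_fine(pyramid, noise_std=0.5, seed=0):
    """Inject noise at the coarsest level, then alternate matching/
    blending with 2x upsampling up the exemplar pyramid."""
    rng = np.random.default_rng(seed)
    synth = pyramid[0] + noise_std * rng.standard_normal(pyramid[0].shape)
    for lvl, exemplar in enumerate(pyramid):
        synth = synthesize_scale(synth, exemplar)
        if lvl + 1 < len(pyramid):
            for ax in range(3):  # nearest-neighbor upsample to next level
                synth = np.repeat(synth, 2, axis=ax)
            nxt = pyramid[lvl + 1].shape
            synth = synth[:nxt[0], :nxt[1], :nxt[2]]
    return synth

if __name__ == "__main__":
    # Toy two-level pyramid standing in for the transformed features.
    fine = np.random.default_rng(1).random((16, 16, 16, 4))
    coarse = fine[::2, ::2, ::2]
    print(coarse_to_fine([coarse, fine]).shape)  # (16, 16, 16, 4)
```

In the actual method, as noted above, the value-based result at the finest scale is ultimately converted to a coordinate-based counterpart, so the synthesized scene indexes into the exemplar rather than storing averaged features.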