With the development of fields such as augmented reality, industrial manufacturing, and smart cities, the demand for 3D models of real scenes is growing rapidly. However, limited by scene conditions and capture equipment, the collected data are often incomplete or of low precision, so it is difficult to create the intended 3D model directly with data-driven modeling methods. Interaction-driven modeling methods, in turn, usually require cumbersome and elaborate interactions and are poorly suited to creating models of objects in real scenes. Although some methods combining data and interaction have been proposed to model real-world objects, many challenges remain. On the one hand, modeling methods based on a single image must manually define the meaning of and constraints on user strokes, which limits their generalization ability. On the other hand, these methods usually require users to draw tedious, detailed 2D interactions; when working with multi-view images or point cloud data in particular, frequent viewpoint switching makes the operation cumbersome and unfriendly to novice users. Motivated by these problems, this paper explores the rapid creation of high-quality 3D models of real-world objects by combining intelligent interpretation of user interactions with real-world data, under the premise of a small number of simple interactions. The main contributions of this paper are as follows:

1. We propose a simple and effective method to reconstruct a high-fidelity 3D tree model from a single image. The method abstracts the tree in the image into a simple 2D shape representation, so the user only needs to roughly outline the shape of the tree on the image rather than draw its complex branch structure. The method applies deep learning to learn the mapping from 2D shape to 3D shape from a library of 3D tree models, yielding more natural and reliable depth information. A complete 3D tree model is then generated under the constraint of the predicted 3D shape, using a rule-based self-organizing plant modeling method. Furthermore, the method can create 3D tree models with different levels of detail by editing the shape semantics represented by the strokes. Finally, we experimentally demonstrate the efficiency and effectiveness of this method in recovering 3D tree models from a single image.

2. We propose a method for creating 3D models from low-quality captured data using simple 3D interactions. It optimizes rough 3D strokes against a coarse, incomplete point cloud captured from reality, allowing users to focus on delineating the overall shape without worrying about the precision of the 3D interaction. First, the user draws two simple 3D strokes to roughly construct the initial shape of the target object. Then a neural network infers the stroke shape and model type, the point cloud of the target object is extracted under the constraints of the 3D strokes, and the strokes are optimized against the extracted points to create a more accurate 3D model. Finally, we experimentally verify the effectiveness and ease of use of this method for creating models of indoor man-made objects.
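The growth step of the first method can be illustrated with a minimal sketch. The paper does not specify its rule set, so the code below uses generic space colonization, a common rule-based self-organizing growth scheme, as a stand-in: the predicted 3D crown shape is assumed to be given as a set of attraction points sampled inside it, and branch nodes repeatedly grow toward nearby attraction points until the shape is filled. All function and parameter names here are hypothetical.

```python
import numpy as np

def grow_tree(attraction_pts, root, step=0.15, influence=0.6, kill=0.12, iters=60):
    """Toy space-colonization growth: branch nodes grow toward nearby
    attraction points sampled inside a predicted 3D crown shape.
    Returns the branch nodes and, for each node, its parent index."""
    nodes = [np.asarray(root, float)]
    parents = [-1]
    pts = np.asarray(attraction_pts, float)
    for _ in range(iters):
        if len(pts) == 0:
            break
        node_arr = np.stack(nodes)
        # for each attraction point: its closest branch node, if within reach
        d = np.linalg.norm(pts[:, None, :] - node_arr[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        active = d.min(axis=1) < influence
        grew = False
        n = len(nodes)
        for i in range(n):
            sel = pts[active & (nearest == i)]
            if len(sel) == 0:
                continue
            direction = (sel - nodes[i]).sum(axis=0)
            norm = np.linalg.norm(direction)
            if norm < 1e-9:
                continue
            # grow a new branch segment toward the attracting points
            nodes.append(nodes[i] + step * direction / norm)
            parents.append(i)
            grew = True
        # remove attraction points that a branch node has reached
        node_arr = np.stack(nodes)
        d = np.linalg.norm(pts[:, None, :] - node_arr[None, :, :], axis=2).min(axis=1)
        pts = pts[d > kill]
        if not grew:
            break
    return np.stack(nodes), parents
```

Constraining growth to attraction points sampled inside the inferred 3D shape is what ties the branch structure to the user's rough outline: the user never draws branches, yet the skeleton fills exactly the volume the outline implies.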
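The stroke-optimization step of the second method can likewise be sketched. The actual solver is not detailed here, so the code below substitutes a simple, generic projection-plus-smoothness iteration: each stroke vertex is pulled toward its nearest point in the extracted target cloud, while a Laplacian term keeps the polyline coherent despite noise and holes in the capture. The function name and the two weights are hypothetical.

```python
import numpy as np

def optimize_stroke(stroke, cloud, data_w=0.7, smooth_w=0.3, iters=20):
    """Refine a rough 3D stroke against a noisy point cloud: each vertex
    moves toward its nearest cloud point (data term), and interior
    vertices are pulled toward their neighbors' midpoint (smoothness)."""
    s = np.asarray(stroke, float).copy()
    c = np.asarray(cloud, float)
    for _ in range(iters):
        # data term: nearest cloud point for every stroke vertex
        d = np.linalg.norm(s[:, None, :] - c[None, :, :], axis=2)
        target = c[d.argmin(axis=1)]
        s_new = s + data_w * (target - s)
        # smoothness term: regularize interior vertices toward neighbor midpoints
        mid = 0.5 * (s_new[:-2] + s_new[2:])
        s_new[1:-1] += smooth_w * (mid - s_new[1:-1])
        s = s_new
    return s
```

This division of labor is what makes imprecise input acceptable: the user's stroke only needs to select the right region of the cloud, and the optimization recovers the precise shape from the captured data.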