Parameterized 3D human body reconstruction is an important research topic in computer vision, concerned with modeling the three-dimensional human body. In recent years, advances in 3D vision algorithms and capture devices have drawn extensive attention and research to this problem. Its primary task is to accurately estimate the pose and body shape parameters of a human body from input data, yielding a parameterized human body model.

Despite significant progress in datasets and methods based on monocular images, these approaches still face several formidable challenges. First, monocular images provide no three-dimensional spatial information, so estimating human body parameters from them suffers from depth ambiguity; as a result, the reconstructed 3D human body model cannot be accurately mapped into a physical coordinate system for practical applications. Second, acquiring 3D data is expensive, and annotating ground-truth 3D human body models is complex, which poses considerable challenges for the creation of reconstruction datasets, model training, and method evaluation. Third, because human poses inherently exhibit self-occlusion and self-intersection, most current parameterized reconstruction methods focus only on recovering pose parameters from estimated joints, while disregarding constraints against unrealistic body shapes.

This study explores these issues in parameterized 3D human body reconstruction. Our objective is to overcome the common challenges of depth uncertainty and pose ambiguity in parameterized reconstruction methods, enabling the reconstructed body to meet the requirements of interaction with the real physical world. This will facilitate the development of parameterized
human body reconstruction methods and systems suitable for real-world scenarios such as intelligent healthcare and industrial design.

The main contributions of this paper are as follows:

1. We propose a parameterized human body reconstruction framework that combines a regression network with an optimization method, avoiding the need to acquire large amounts of RGB-D capture data for model training in real-world applications. The framework uses an intermediate representation of the 3D human body as the supervision signal for the body parameters, and refines the initial parameters output by the regression network over multiple iterative stages. Experimental results show average joint and point-to-point reconstruction errors of 33.9 mm and 29.1 mm, respectively, reductions of 31.5% and 57.9% relative to the baseline methods, demonstrating the effectiveness of our RGB-D-based approach for human body reconstruction and pose estimation.

2. We propose a method for estimating 3D human joint positions that incorporates prior knowledge of body shape together with depth information. The estimated 3D joints, along with body surface point clouds carrying part labels, serve as the intermediate representation for parameter optimization. This overcomes the depth ambiguity and inaccurate pose estimation common to monocular reconstruction methods. Tested on synthetic datasets, the RGB-D-based 3D joint estimation achieves an average joint error of 16.8 mm, a 72.2% reduction compared to state-of-the-art monocular methods.

3. A synthesis method for generating RGB-D human body datasets is proposed to address the lack of RGB-D test data. It uses 3D rendering and parameterized human body model transformation to
generate depth maps paired with SMPL parameterized human body models. The synthetic dataset serves as the test data for our experiments and provides a common basis for comparing the reconstruction performance of different methods under unified metrics.

4. We have implemented a parameterized human body reconstruction system based on the RealSense depth camera. The system demonstrates the practical effectiveness of our method on RGB-D data acquired from real capture devices: the reconstructed results are more accurate and plausible than those of the baseline methods, and, using physical scale information, they can be mapped into the world coordinate system, satisfying the requirement that the reconstructed model interact with the real world.
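The second contribution and the final system both rely on combining per-pixel depth with camera scale information to place joints and reconstructions in metric 3D space. As a minimal sketch of the standard pinhole back-projection that underlies this kind of lifting (the function name and intrinsic values are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def backproject_joints(joints_2d, depths, fx, fy, cx, cy):
    """Lift 2D joint detections (u, v) in pixels, with measured depth z
    in meters, into 3D camera-frame coordinates via the pinhole model:
        x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    joints_2d: (N, 2) array of pixel coordinates.
    depths:    (N,) array of depth values at those pixels (meters).
    fx, fy:    focal lengths in pixels; cx, cy: principal point.
    Returns an (N, 3) array of metric 3D points.
    """
    u, v = joints_2d[:, 0], joints_2d[:, 1]
    z = depths
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Example with assumed intrinsics (fx = fy = 500, principal point at
# the image center of a 640x480 sensor):
pts = backproject_joints(
    np.array([[320.0, 240.0], [820.0, 240.0]]),
    np.array([2.0, 1.0]),
    fx=500.0, fy=500.0, cx=320.0, cy=240.0,
)
# A joint at the principal point maps to (0, 0, z); offsets scale
# linearly with depth, which is why metric depth removes the scale
# ambiguity inherent to monocular estimation.
```

Because the result is in metric camera coordinates, a single rigid transform (the camera's extrinsic pose) then maps the reconstruction into the world coordinate system for physical interaction.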