The crane operation process in the intelligent factory requires the abilities of perception, decision making, and execution. The accuracy and robustness of the digital model of the crane operating environment are closely tied to its safe and efficient operation. Unlike the working environments of other equipment, the crane operating environment is a three-dimensional scene: the motion of the trolley changes the crane's position in the horizontal plane, and the lifting mechanism changes the relative height between the cargo and the ground. How to use three-dimensional sensing technology to establish an accurate and reliable digital model of the crane operating environment is therefore a key problem that urgently needs to be solved. To address this problem, this paper carries out the following research work:

(1) The intrinsic parameters of the three-dimensional sensors are calibrated to provide parameter support for the digital modeling of the crane operating environment. Commonly used three-dimensional perception sensors are surveyed, and the requirements of digital modeling of the crane operating environment are analyzed against the performance and characteristics of each sensor; a binocular camera and a solid-state lidar are selected as the three-dimensional perception sensors. To obtain the intrinsic parameters of the binocular camera and the parameters of the binocular camera-lidar joint sensing system, the mathematical models of the binocular camera and of the joint sensing system are established and their parameters are calibrated, providing the data basis for the subsequent binocular stereo matching and fusion of heterogeneous information.

(2) A technical framework for digital modeling of the crane operating environment is established to obtain a three-dimensional map that can be used for navigation and obstacle avoidance. The high running-speed requirement of the digital modeling system and the characteristics of common stereo matching algorithms are analyzed, and after comparison the disparity image is computed with a block matching algorithm. The formation mechanism of reflective flare is traced, and the flare is suppressed with a polarization filter. For the lidar, single-frame laser point clouds are collected through the API interface of its SDK. For positioning the three-dimensional perception sensors, general positioning methods are compared and, given that the intelligent-factory crane operates in a fixed, large-scale scene, a laser ranging sensor is chosen; local point clouds are spliced into a global point cloud, the global point cloud is filtered, and the octree map, with its high compression ratio, is selected as the saved output form of the digital model, reducing the model's storage space and improving its transfer speed (illustrative sketches of the calibration step and of this modeling pipeline follow below).
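To make the calibration step in (1) concrete, the following is a minimal sketch of binocular intrinsic and extrinsic calibration using OpenCV's chessboard routines; the board dimensions, square size, and image paths are placeholders, not the settings used in this work.

```python
# Minimal sketch of binocular (stereo) camera calibration with OpenCV chessboard
# routines. Board size, square size, and image paths are placeholders, not the
# settings used in this work.
import glob
import cv2
import numpy as np

BOARD = (9, 6)      # inner-corner count of the chessboard (assumed)
SQUARE = 0.025      # square edge length in metres (assumed)

# Reference 3-D corner coordinates on the board plane (Z = 0)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left/*.png")), sorted(glob.glob("right/*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, BOARD)
    ok_r, corners_r = cv2.findChessboardCorners(gr, BOARD)
    if ok_l and ok_r:
        obj_pts.append(objp)
        left_pts.append(corners_l)
        right_pts.append(corners_r)

# Intrinsics of each camera, then the stereo extrinsics (R, T) between them
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, gl.shape[::-1], None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, gr.shape[::-1], None, None)
rms, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, gl.shape[::-1],
    flags=cv2.CALIB_FIX_INTRINSIC)
print("stereo reprojection error:", rms)
```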
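The next sketch illustrates the pipeline in (2): block-matching disparity calculation, back-projection to a point cloud, statistical filtering, and storage as an octree map. OpenCV's StereoBM and Open3D's octree stand in for the algorithms compared in the thesis, and the focal length, baseline, and filter parameters are assumed values.

```python
# Sketch of the disparity -> point cloud -> octree pipeline. OpenCV's StereoBM and
# Open3D's octree stand in for the algorithms compared in the thesis; the focal
# length, baseline, and filter settings are assumed values.
import cv2
import numpy as np
import open3d as o3d

FX, BASELINE = 700.0, 0.12   # assumed focal length (px) and stereo baseline (m)

left = cv2.imread("left_rect.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_rect.png", cv2.IMREAD_GRAYSCALE)

# Block matching: disparity is returned as fixed point, scaled by 16
bm = cv2.StereoBM_create(numDisparities=128, blockSize=15)
disp = bm.compute(left, right).astype(np.float32) / 16.0

# Back-project valid disparities to 3-D points (Z = f * B / d)
v, u = np.nonzero(disp > 0)
z = FX * BASELINE / disp[v, u]
x = (u - left.shape[1] / 2.0) * z / FX
y = (v - left.shape[0] / 2.0) * z / FX
pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(np.column_stack((x, y, z))))

# Filter the cloud, then store it in a compact octree map
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
octree = o3d.geometry.Octree(max_depth=8)
octree.convert_from_point_cloud(pcd, size_expand=0.01)
```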
(3) A two-degree-of-freedom attitude control system for the active vision sensor is proposed to handle the coupling between the perception area of the hoisted three-dimensional perception sensor and the operating state of the crane, considering factors such as running speed, viewing angle, and height. The attitude constraints of the three-dimensional perception sensor are analyzed to obtain the factors that influence its angles, and calculation models of the sensor's pitch and roll angles are built. Based on the required attitude changes, a two-degree-of-freedom pan-tilt unit is designed and fabricated, and a point cloud splicing method for the attitude control system of the active vision sensor is established to solve the coordinate transformation across the rotational degrees of freedom (see the sketch after this summary). Combined with digital modeling experiments under obstacles of different sizes and types, the soundness of the system is verified, providing a calculation basis for global point cloud generation in the attitude control system of the active vision sensor.

(4) A digital modeling method based on the spatial information fusion of heterogeneous data is proposed, from the perspectives of data correction and confidence evaluation, to address the limited object recognition ability of a single type of three-dimensional perception sensor and the poor fusion accuracy of heterogeneous sensors. Target points to be corrected are selected from the low-precision point cloud; the K-d tree method is used to search for their neighboring points in the high-precision point cloud and to calculate the corresponding weights, and the depth value of each target point is then modified by combining the weighted depth values of its neighbors, achieving position correction of the low-precision point cloud (see the sketch after this summary). The confidence of each point is determined by normalizing the depth values of the target point and its neighboring points; after all points are assigned to the voxels of the grid map to which they belong, fusion rules integrate the confidence-bearing voxels into a single octree map, providing theoretical support for the fusion method by which the binocular camera and the solid-state lidar obtain high-resolution, high-precision output results.

(5) An information fusion method based on the confidence information of the historical raster map and the real-time raster map is proposed to address the problem that the digital modeling results fail to capture the distribution of obstacles in the field owing to accidental factors such as real-time data loss, noise, and sensor recognition failure. Following a confidence attenuation model based on the forgetting curve, the data retention strength is set according to the operating frequency of the equipment; the confidence retention rate is calculated from the time interval between the formation of the historical data and that of the real-time data, and the storage confidence is obtained from the confidence of the voxels in the historical data. At the voxel level, the historical-data fusion rules fuse the voxels carrying storage confidence in the historical data with the confidence-bearing voxels in the real-time data (see the sketch after this summary), forming a historical-data fusion method for the digital model that accords with operating habits and ensures that the digital modeling results remain usable even when the real-time data contains brief errors.

(6) The "crane operation environment digital modeling experiment platform" is designed and built to carry out the verification experiments on the digital modeling method, the attitude control system of the active vision sensor, the heterogeneous information fusion method, the historical information fusion method, and other related aspects.
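For the point cloud splicing problem in (3), the sketch below shows one way to map a sensor-frame cloud into the crane frame given the two gimbal angles; the axis conventions, rotation order, and mounting offset are assumptions rather than the exact model built in the thesis.

```python
# Sketch of splicing a sensor-frame cloud into the crane frame given the two
# gimbal angles. Axis conventions, rotation order, and the mounting offset are
# assumptions, not the exact model built in the thesis.
import numpy as np

def rot_x(a):
    """Rotation about the x axis (roll)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    """Rotation about the y axis (pitch)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def to_crane_frame(points, pitch, roll, mount_offset):
    """Map an (N, 3) sensor-frame cloud into the crane frame for splicing."""
    R = rot_y(pitch) @ rot_x(roll)       # 2-DOF attitude of the pan-tilt unit
    return points @ R.T + mount_offset   # rotate, then translate by the mount position

# Two frames captured at different attitudes are expressed in one frame and merged
frame_a = np.random.rand(1000, 3)        # stand-in for a captured point cloud
frame_b = np.random.rand(1000, 3)
merged = np.vstack((
    to_crane_frame(frame_a, np.deg2rad(30.0), 0.0, np.array([0.0, 0.0, 12.0])),
    to_crane_frame(frame_b, np.deg2rad(45.0), 0.0, np.array([0.0, 0.0, 12.0])),
))
```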
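For the heterogeneous-data correction in (4), the sketch below performs the K-d tree neighbor search and weighted depth correction with SciPy; the inverse-distance weighting and the blend factor `alpha` are illustrative assumptions about the weighting rule.

```python
# Sketch of correcting the low-precision (stereo) cloud against the high-precision
# (lidar) cloud with a K-d tree neighbour search. The inverse-distance weighting
# and the blend factor alpha are illustrative assumptions about the weighting rule.
import numpy as np
from scipy.spatial import cKDTree

def correct_depth(low_pts, high_pts, k=5, alpha=0.7, eps=1e-6):
    """Blend each low-precision depth (z) with the weighted depths of its
    k nearest high-precision neighbours."""
    tree = cKDTree(high_pts)
    dist, idx = tree.query(low_pts, k=k)
    w = 1.0 / (dist + eps)                       # closer lidar points weigh more
    w /= w.sum(axis=1, keepdims=True)
    z_ref = (high_pts[idx, 2] * w).sum(axis=1)   # weighted neighbour depth
    corrected = low_pts.copy()
    corrected[:, 2] = alpha * z_ref + (1.0 - alpha) * low_pts[:, 2]
    return corrected

stereo_cloud = np.random.rand(5000, 3) * 10.0    # stand-in clouds
lidar_cloud = np.random.rand(2000, 3) * 10.0
corrected_cloud = correct_depth(stereo_cloud, lidar_cloud)
```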
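For the historical/real-time fusion in (5), the sketch below applies an exponential (forgetting-curve) retention rate and a per-voxel fusion rule; the exponential form, the retention strength, and the max-based fusion rule are assumptions standing in for the rules developed in the thesis.

```python
# Sketch of forgetting-curve attenuation and per-voxel fusion of historical and
# real-time confidences. The exponential retention form, the retention strength,
# and the max-based fusion rule are assumptions standing in for the thesis's rules.
import numpy as np

def retention_rate(dt_seconds, strength):
    """Ebbinghaus-style retention R = exp(-t / S), with strength S chosen
    from the crane's operating frequency."""
    return float(np.exp(-dt_seconds / strength))

def fuse_voxels(historical, realtime, dt_seconds, strength=3600.0):
    """historical / realtime: dict mapping voxel index -> confidence in [0, 1]."""
    r = retention_rate(dt_seconds, strength)
    fused = {v: c * r for v, c in historical.items()}    # attenuated storage confidence
    for v, c in realtime.items():                        # real-time voxels dominate when present
        fused[v] = max(fused.get(v, 0.0), c)
    return fused

# A voxel dropped from the real-time frame keeps its (attenuated) historical confidence
print(fuse_voxels({(1, 2, 3): 0.9, (4, 5, 6): 0.8},
                  {(1, 2, 3): 0.95}, dt_seconds=1800.0))
```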