With the rapid development of China's aerospace field, image processing in deep space exploration has become a difficult challenge for its further development. In deep space exploration, image detection can be used for human-computer interaction between astronauts and probes, and image segmentation can be used for the preliminary processing of the raw images taken by the probe, both of which are crucial to deep space exploration missions. However, when traditional image detection and segmentation methods are applied in the complex deep space environment, their accuracy struggles to meet the requirements; when faced with large volumes of image data, their efficiency is low and their segmentation accuracy is poor. Based on deep learning methods, this paper provides a feasible scheme for multi-source detection and valley segmentation in deep space exploration.

(1) To address the difficulty of image detection in human-computer interaction between astronauts and lunar robots, this paper constructs a gesture-and-speech dataset, integrates dilated convolution, a serial multi-scale feature extraction block, and Selective Kernel (SK) attention into the VGG16 network, and proposes the gesture detection network DMS-SK. The speech detection network BLSTM-CTC is proposed by incorporating the Connectionist Temporal Classification (CTC) algorithm into a Bidirectional Long Short-Term Memory (BLSTM) network. Combining the two, a gesture and speech fusion detection network, DMS-SK/BLSTM-CTC, is proposed to realize gesture and speech detection in human-computer interaction between astronauts and lunar robots; its accuracy reaches 97.38%.

(2) To address the difficulty of valley segmentation in Mars satellite images, this paper constructs a Mars valley image dataset, proposes a Multi-scale Double Residual (MDR) feature extraction module and a Triple Attention (TA) mechanism, introduces them into Unet, and proposes the Mars valley image segmentation network MDR-Unet-TA to realize segmentation of Mars valley images. The accuracy, F1, and IoU scores reach 98.78%, 95.77%, and 95.12% respectively on the simple test set, and 97.50%, 95.12%, and 93.50% on the complex test set.
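The dilated convolution mentioned above enlarges a filter's receptive field by spacing its kernel taps apart, without adding parameters. As an illustrative sketch only (not the thesis's actual DMS-SK implementation), the idea in 1-D; the function name and values here are placeholders:

```python
# Minimal sketch of dilated convolution: kernel taps are spaced
# `dilation` samples apart, enlarging the receptive field without
# adding parameters. Shown in 1-D for clarity; DMS-SK would use the
# 2-D analogue inside VGG16.

def dilated_conv1d(x, w, dilation=1):
    """Valid-mode 1-D convolution with dilated kernel taps."""
    span = (len(w) - 1) * dilation  # receptive field minus one
    return [
        sum(w[k] * x[i + k * dilation] for k in range(len(w)))
        for i in range(len(x) - span)
    ]

# With dilation=2, a 3-tap kernel covers 5 input samples.
print(dilated_conv1d([1, 2, 3, 4, 5], [1, 1, 1], dilation=2))  # → [9]
```

With dilation=1 this reduces to an ordinary convolution; larger dilation rates let a small kernel aggregate wider context, which is why such layers are attractive for feature extraction backbones.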
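CTC, as used in the BLSTM-CTC speech network, lets a per-frame classifier emit label sequences shorter than the input without frame-level alignment. A minimal sketch of the standard inference-time greedy decode (the blank index and label values below are hypothetical, not taken from the thesis):

```python
# Minimal sketch of CTC greedy decoding: collapse consecutive repeated
# labels, then drop the blank symbol. This is the rule that maps the
# per-frame outputs of a network like BLSTM-CTC to a label sequence.

BLANK = 0  # conventional blank index; the thesis's choice may differ

def ctc_greedy_decode(frame_labels, blank=BLANK):
    """Collapse repeats, then remove blanks."""
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# A blank between two identical labels preserves the repetition.
print(ctc_greedy_decode([1, 1, 0, 2, 0, 3, 3, 0, 3, 4]))  # → [1, 2, 3, 3, 4]
```

Note how the blank separating the two runs of label 3 keeps them as two distinct outputs, which is the role the blank symbol plays in CTC.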
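The segmentation scores reported above (accuracy, F1, IoU) follow from the per-pixel confusion counts. A small sketch of how these metrics are computed for a binary valley mask, using flat 0/1 lists for simplicity rather than image arrays:

```python
# Minimal sketch of the segmentation metrics reported above: pixel
# accuracy, F1, and IoU for a binary mask, from the confusion counts
# (true/false positives and negatives).

def segmentation_metrics(pred, truth):
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    accuracy = (tp + tn) / len(pred)
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return accuracy, f1, iou

print(segmentation_metrics([1, 1, 0, 0], [1, 0, 0, 0]))
```

IoU is always the strictest of the three here, since it excludes true negatives entirely; this is why the IoU column trails accuracy and F1 in the results above.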