With the rapid development of artificial intelligence, deep learning models have become increasingly sophisticated. However, the development of these models is constrained by the underlying chip platforms and deep learning frameworks. At present, GPU chips and deep learning frameworks such as PyTorch are the mainstream technologies abroad, whereas Chinese companies have little say in this field and need to accelerate their own research and development. This paper therefore promotes the development of Chinese chip platforms and deep learning frameworks by migrating the deep learning object detection model Mask R-CNN from the GPU platform and PyTorch framework to the Huawei Ascend platform and MindSpore framework, and by improving the model based on the characteristics of MindSpore. Specifically, the work in this paper consists of the following three parts: (1) A scheme for migrating Mask R-CNN to the Ascend platform and the MindSpore framework is proposed to address the performance bottleneck of Mask R-CNN on the original platform and framework. By migrating the model to MindSpore and resolving problems encountered during migration, such as the loss becoming NaN and the inference accuracy dropping to 0, the migrated model was successfully built, and its performance improved by about 20.1%. (2) A scheme for converting the dynamic tensor shapes of Mask R-CNN to static tensor shapes is proposed to address the low performance of dynamic tensor shapes under the MindSpore framework. The dynamic tensor shapes in Mask R-CNN's data preprocessing, as well as the Boolean tensor slicing used to filter out useless boxes, are converted to static tensor shapes, and the original model is reconstructed accordingly. Experimental results show that the static tensor shapes improve performance by about 26.8%. (3) A scheme is proposed to replace the ResNet50 backbone network and SGD optimizer of Mask R-CNN with a backbone network based on depthwise separable convolution and the Momentum optimizer, respectively, to address the model's low performance under MindSpore automatic differentiation. Experimental results show that these improvements yield a performance gain of about 36.8%. The implementation of these schemes not only helps break the ecological barriers of foreign chip platforms and deep learning frameworks, but also has significant application value for cultivating the ecosystem of the domestic Ascend platform and MindSpore framework.
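
As a brief illustration of the kind of change summarized in part (3), the sketch below shows a depthwise separable convolution block and a Momentum optimizer expressed with the MindSpore Python API. The block layout, channel counts, and hyperparameter values are illustrative assumptions for this sketch, not the exact configuration used in this work.

    # Illustrative sketch only: a depthwise separable convolution block and a
    # Momentum optimizer written with the MindSpore API. Layer layout, channel
    # counts and hyperparameters are assumptions, not this paper's configuration.
    import numpy as np
    import mindspore as ms
    import mindspore.nn as nn
    from mindspore import Tensor


    class DepthwiseSeparableConv(nn.Cell):
        """3x3 depthwise convolution followed by a 1x1 pointwise convolution."""

        def __init__(self, in_channels, out_channels, stride=1):
            super().__init__()
            # Depthwise: one 3x3 filter per input channel (group == in_channels).
            self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                       stride=stride, pad_mode='same',
                                       group=in_channels, has_bias=False)
            self.bn1 = nn.BatchNorm2d(in_channels)
            # Pointwise: 1x1 convolution recombines information across channels.
            self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                       stride=1, pad_mode='same', has_bias=False)
            self.bn2 = nn.BatchNorm2d(out_channels)
            self.relu = nn.ReLU()

        def construct(self, x):
            x = self.relu(self.bn1(self.depthwise(x)))
            x = self.relu(self.bn2(self.pointwise(x)))
            return x


    if __name__ == "__main__":
        block = DepthwiseSeparableConv(in_channels=64, out_channels=128, stride=2)
        x = Tensor(np.ones([1, 64, 56, 56]), ms.float32)
        print(block(x).shape)  # (1, 128, 28, 28)

        # Momentum optimizer over the block's trainable parameters
        # (learning rate and momentum values are placeholders).
        optimizer = nn.Momentum(block.trainable_params(),
                                learning_rate=0.02, momentum=0.9)

In this sketch, the depthwise convolution applies one 3x3 filter per input channel and the 1x1 pointwise convolution then mixes the channels, which is what reduces parameter count and computation relative to a standard 3x3 convolution.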