
Android Portrait Background Blur System Based On MobileNetV2 And DeepLabV3

Posted on: 2021-06-04    Degree: Master    Type: Thesis
Country: China    Candidate: L J Xie    Full Text: PDF
GTID: 2518306050466794    Subject: Master of Engineering
Abstract/Summary:
Nowadays, more and more people use mobile phones to take pictures, and portrait background blur is a popular research direction in mobile-phone photography. The blur effect is usually achieved by combining images from a phone's multiple rear cameras, but the extra cameras add weight to the phone body, raise manufacturing cost, and increase the purchase price for the user. This thesis therefore builds on a single-camera portrait background blur project and implements an Android portrait background blur system with the help of neural network algorithms. A portrait segmentation model is implemented and trained, then ported and deployed to the phone itself with mobile-side model-conversion frameworks, so that users can run the system at any time without a network connection, reducing network latency and protecting user privacy. The specific research contents are as follows:

(1) The DeepLabV3+ network is used as the overall architecture of the portrait segmentation model, with two modifications: the Xception backbone of DeepLabV3+ is replaced by the lightweight MobileNetV2 network, and atrous (dilated) convolution is added to MobileNetV2. After training on a set of 2312 images, the loss converges to 0.115 and the model reaches 0.889 mIoU on a 450-image test set.

(2) Two mobile-side conversion frameworks, MACE and TensorFlow Lite, are used to port the trained portrait segmentation model and deploy it on the Android side. With MACE, the model is compiled into a dynamic library using the Bazel and CMake build tools and is invoked for inference through JNI. With TensorFlow Lite, the model is converted to the tflite format through the TensorFlow interface, placed in the Android-side assets resource folder, and invoked for inference directly through the Android API.

(3) Android Studio is used to build the Android client, and RenderScript is used on the Android side for the subsequent background blur processing. The input picture is passed through the ported portrait segmentation model to obtain a mask picture in which the person region is white and the background is black. A RenderScript script then processes the original picture together with the mask to produce the final portrait picture with a blurred background.

(4) The portrait segmentation model is tested after porting and deployment. In terms of inference quality, the mIoU between the mask predicted by the ported model and the mask predicted by the original model is 0.878-0.943; in terms of time, the ported model's inference takes 198-530 ms; in terms of resource consumption, inference uses 10%-40% of the CPU and 24-53 MB of memory. In an end-to-end test, the portrait background blur system implemented in this thesis works well, and the overall running time is within 3 s. The entire process does not require uploading pictures to the network, which protects user privacy, and the system offers a degree of resistance to decompilation and good practicality.
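The TensorFlow Lite deployment path in (2) and the mask prediction in (3) can be illustrated with a minimal Java sketch, assuming a model file named portrait_segmentation.tflite bundled in the assets folder, a 257x257 RGB input, and a single-channel foreground-probability output; these names and sizes are assumptions for illustration, not details taken from the thesis.

```java
import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.graphics.Bitmap;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import org.tensorflow.lite.Interpreter;

/** Minimal sketch: load a .tflite portrait-segmentation model from assets and predict a mask. */
public class PortraitSegmenter {
    private static final int INPUT_SIZE = 257;   // assumed input resolution
    private final Interpreter interpreter;

    public PortraitSegmenter(Context context) throws IOException {
        interpreter = new Interpreter(loadModel(context, "portrait_segmentation.tflite"));
    }

    // Memory-map the model file from the assets folder, as described in the abstract.
    private static MappedByteBuffer loadModel(Context context, String name) throws IOException {
        AssetFileDescriptor fd = context.getAssets().openFd(name);
        try (FileInputStream in = new FileInputStream(fd.getFileDescriptor())) {
            return in.getChannel().map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }
    }

    /** Returns a mask bitmap: white where a person is predicted, black elsewhere. */
    public Bitmap predictMask(Bitmap source) {
        Bitmap scaled = Bitmap.createScaledBitmap(source, INPUT_SIZE, INPUT_SIZE, true);
        float[][][][] input = new float[1][INPUT_SIZE][INPUT_SIZE][3];
        for (int y = 0; y < INPUT_SIZE; y++) {
            for (int x = 0; x < INPUT_SIZE; x++) {
                int px = scaled.getPixel(x, y);
                input[0][y][x][0] = ((px >> 16) & 0xFF) / 255.0f;   // R
                input[0][y][x][1] = ((px >> 8) & 0xFF) / 255.0f;    // G
                input[0][y][x][2] = (px & 0xFF) / 255.0f;           // B
            }
        }
        // Assumed output layout: per-pixel foreground probability in [0, 1].
        float[][][][] output = new float[1][INPUT_SIZE][INPUT_SIZE][1];
        interpreter.run(input, output);

        Bitmap mask = Bitmap.createBitmap(INPUT_SIZE, INPUT_SIZE, Bitmap.Config.ARGB_8888);
        for (int y = 0; y < INPUT_SIZE; y++) {
            for (int x = 0; x < INPUT_SIZE; x++) {
                mask.setPixel(x, y, output[0][y][x][0] > 0.5f ? 0xFFFFFFFF : 0xFF000000);
            }
        }
        return Bitmap.createScaledBitmap(mask, source.getWidth(), source.getHeight(), true);
    }
}
```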
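For the background blur processing in (3), a possible compositing step is sketched below, under stated assumptions: the whole picture is blurred with RenderScript's built-in ScriptIntrinsicBlur, and the mask decides per pixel whether the sharp or the blurred value is kept. The thesis writes its own RenderScript script for this step; the pixel loop here is plain Java for brevity, and the blur radius is not a value given in the abstract.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Color;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

/** Minimal sketch of the compositing step: sharp person, blurred background. */
public class BackgroundBlur {

    /** Blur the whole picture with RenderScript's built-in Gaussian blur intrinsic. */
    public static Bitmap blur(Context context, Bitmap source, float radius) {
        Bitmap blurred = source.copy(Bitmap.Config.ARGB_8888, true);
        RenderScript rs = RenderScript.create(context);
        Allocation in = Allocation.createFromBitmap(rs, source);
        Allocation out = Allocation.createFromBitmap(rs, blurred);
        ScriptIntrinsicBlur script = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        script.setRadius(radius);   // radius must be in (0, 25]
        script.setInput(in);
        script.forEach(out);
        out.copyTo(blurred);
        rs.destroy();
        return blurred;
    }

    /** Where the mask is white keep the original pixel, otherwise take the blurred one. */
    public static Bitmap composite(Bitmap original, Bitmap blurred, Bitmap mask) {
        int w = original.getWidth();
        int h = original.getHeight();
        Bitmap result = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                boolean person = Color.red(mask.getPixel(x, y)) > 127;
                result.setPixel(x, y, person ? original.getPixel(x, y) : blurred.getPixel(x, y));
            }
        }
        return result;
    }
}
```

A call such as `BackgroundBlur.composite(original, BackgroundBlur.blur(context, original, 20f), mask)` would then produce the final picture; the 20 px radius is only an assumed example value.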
Keywords/Search Tags:Background Blur, People Segmentation, DeepLabV3+, MobileNetV2, Android