
Research On Lightweight Semantic Segmentation Networks For Mobile Devices And Application In The Assistance Of The Visually Impaired

Posted on: 2021-02-26    Degree: Master    Type: Thesis
Country: China    Candidate: C Y Xu    Full Text: PDF
GTID: 2404330632950598    Subject: Engineering
Abstract/Summary:
About 253 million visually impaired people worldwide face many inconveniences in daily life because of their visual impairment. Sensing the environment is particularly difficult for them, which seriously affects their quality of life. Traditional aids such as white canes and guide dogs have many limitations, and the assistive functions currently available, such as navigation, obstacle avoidance, and positioning, are too limited to provide comprehensive scene information (surrounding objects as well as environmental structure such as the ground and walls). These difficulties make the visually impaired eager for a system that helps them perceive their surroundings. In recent years, convolutional neural networks and deep-learning-based image recognition and segmentation have been widely applied to image content understanding and information acquisition. Semantic segmentation classifies an image at the pixel level, extracting rich object and scene information from it, and is therefore well suited to scene perception tasks. However, blind-assistance scene perception has its own application requirements: the environment contains a huge amount of information, and blind users do not need to perceive what is redundant; the network runs on mobile devices, so it must meet real-time requirements and strike a balance between accuracy and speed; moreover, the model should be as small as possible. At the same time, in both indoor and outdoor environments, illumination changes strongly affect the robustness of a semantic segmentation network. In response to these problems and needs, this thesis surveys current research on semantic segmentation, studies lightweight semantic segmentation networks suitable for mobile terminals, and deploys them on mobile devices to support scene perception for the visually impaired.
This thesis first summarizes the state of development of these technologies and their core component: the main technical methods commonly used in real-time semantic segmentation networks. On this basis, a lightweight semantic segmentation network is built on the DeepLabv3+ structure with MobileNet v2 and ShuffleNet v2 as feature extractors, and is improved by adding Atrous Spatial Pyramid Pooling (ASPP) layers and adjusting their number appropriately. For training data, the indoor and outdoor scene dataset ADE20K is first reduced to remove redundant information, and then processed with an illumination-invariance preprocessing method. Using multiple rounds of pre-training at different learning rates, the two models were trained and validated on the processed and unprocessed datasets, respectively, and tested on the day and night images of the Gardens Point Walking dataset to verify their robustness to illumination changes. The two lightweight semantic segmentation networks are compared with existing models in terms of accuracy, real-time performance, model size, and computational cost. The network with MobileNet v2 as the backbone achieves an mIoU of 50.9% and runs at 64 FPS on a 1080Ti; the network with ShuffleNet v2 as the backbone achieves an mIoU of 45.8% at 83 FPS. Finally, the trained model is deployed on an Android smartphone, the lightweight semantic segmentation networks for assisting semantic scene understanding of the visually impaired are summarized, and prospects for improvement and future development are presented.
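To illustrate the ASPP idea mentioned above, here is a minimal NumPy sketch of the mechanism only: parallel atrous (dilated) convolution branches at several rates plus an image-level pooling branch, stacked together. The kernel is a fixed averaging filter and the rates (6, 12, 18) are the common DeepLab defaults, not values taken from the thesis; a real ASPP module uses learned multi-channel convolutions.

```python
import numpy as np

def dilated_conv2d(x, w, rate):
    """'Same'-padded 2-D convolution of a single-channel image x with a
    k x k kernel w, where the kernel taps are spaced `rate` pixels apart."""
    k = w.shape[0]
    pad = rate * (k // 2)
    xp = np.pad(x, pad)  # zero padding keeps the output the same size
    H, W = x.shape
    out = np.zeros((H, W), dtype=np.float64)
    for i in range(k):
        for j in range(k):
            out += w[i, j] * xp[i * rate : i * rate + H, j * rate : j * rate + W]
    return out

def aspp(x, rates=(6, 12, 18)):
    """Toy ASPP: a 1x1-style identity branch, one atrous branch per rate,
    and a global-average-pooling branch, stacked along a new axis."""
    branches = [x]  # stand-in for the learned 1x1 convolution branch
    k = np.ones((3, 3)) / 9.0  # fixed averaging kernel as a stand-in for learned weights
    for r in rates:
        branches.append(dilated_conv2d(x, k, r))
    branches.append(np.full_like(x, x.mean()))  # image-level pooling branch
    return np.stack(branches)  # shape: (num_branches, H, W)

feat = np.arange(25, dtype=np.float64).reshape(5, 5)
out = aspp(feat)
print(out.shape)  # (5, 5, 5): 1x1 branch + 3 atrous rates + pooling branch
```

The larger the rate, the wider the receptive field of that branch, which is why adding and tuning the number of ASPP branches lets the network aggregate context at multiple scales without increasing resolution cost.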
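The abstract does not specify which illumination-invariance preprocessing is used; one widely used option for day/night robustness experiments of this kind is the single-channel log-chromaticity transform of Maddern et al. (2014), sketched below. The parameter alpha depends on the camera's spectral response; 0.48 is a commonly cited default, not a value from the thesis.

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an RGB image (H, W, 3, values in [0, 1]) to a single-channel
    image that is invariant to uniform brightness scaling:
    I = 0.5 + log(G) - alpha * log(B) - (1 - alpha) * log(R)."""
    rgb = np.clip(rgb.astype(np.float64), 1e-6, 1.0)  # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)

# Example: the transform keeps the spatial shape and drops the channel axis.
rng = np.random.default_rng(0)
img = 0.2 + 0.6 * rng.random((4, 4, 3))  # synthetic image, values in (0.2, 0.8)
ii = illumination_invariant(img)
print(ii.shape)  # (4, 4)
```

Because the coefficients of the three log terms sum to zero, multiplying the whole image by a scalar (e.g. a global brightness change between day and night shots) leaves the output unchanged, which is the property such preprocessing exploits.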
Keywords/Search Tags: semantic segmentation, lightweight network, illumination-invariance processing, blind assistance