
Research On Automatic Generation Of High-fidelity Wireframe Based On Hand-drawn UI Sketch

Posted on: 2022-01-28
Degree: Master
Type: Thesis
Country: China
Candidate: Y Fan
Full Text: PDF
GTID: 2518306551470824
Subject: Master of Engineering
Abstract/Summary:
User interface (UI) prototyping is an essential step in user-centered application software development. In the early stages of prototyping, designers rarely know exactly what the interface should look like and often explore with wireframes, the graphical skeleton of a user interface. By level of detail, wireframes are divided into low-fidelity and high-fidelity wireframes. A low-fidelity wireframe is a rough, basic representation, usually drawn on paper, used to experiment with the designer's initial ideas and convey the designer's intent. A high-fidelity wireframe is a more complete representation of the final product and takes more effort to create, usually with a prototyping software tool. High-fidelity wireframes help convey aesthetic features and communicate basic functionality, but creating them is time-consuming, and prototyping tools carry an additional learning cost. This thesis therefore studies how to use deep learning to detect the components in a hand-drawn UI sketch and automatically generate a high-fidelity wireframe from the detection results. Finally, it designs and implements a prototyping-tool plug-in that realizes the automatic conversion from hand-drawn UI sketches to high-fidelity wireframes, saving designers the time needed to create high-fidelity wireframes by hand and improving the efficiency of prototype design.

The most critical step in automatically generating a high-fidelity wireframe is to accurately identify the type and position of each component in the hand-drawn UI sketch, which is an object detection task in a specific domain. Existing research on UI component detection uses either traditional computer vision methods or deep learning methods. The former requires manually extracting features and setting heuristic rules from experience; the process is cumbersome and complicated, and the accuracy is low. The latter is an end-to-end learning model that uses neural networks to extract features automatically, with higher accuracy. This thesis therefore studies deep-learning-based UI component detection and summarizes the problems of existing deep learning methods on hand-drawn UI sketches:
(1) Collecting a hand-drawn sketch data set requires a great deal of manpower and is expensive, so the amount of training data is small. UI component detectors therefore usually start from a model pre-trained on natural images, but the large feature gap between sketches and natural images slows the convergence of the detection network and lowers its accuracy.
(2) The useful pixels in a sketch are sparse, and most of the image is useless background. When a convolutional network extracts features, the sparse stroke information is lost layer by layer during convolution, which makes sketch recognition difficult.
(3) The usage frequency of different UI components varies greatly, so the training set has a class-imbalance problem: the model becomes biased toward components with many samples, and recognition accuracy on components with few samples is low.

To address these problems, this thesis improves the Faster R-CNN object detection algorithm to detect components in hand-drawn UI sketches, and develops a plug-in for the prototyping software tool Adobe XD that helps designers quickly create high-fidelity wireframes from hand-drawn UI sketches. The main work is as follows:
(1) To address the low detection accuracy caused by pre-training on natural images, which differ greatly from sketches, this thesis constructs a large-scale UI sketch component data set and uses it to pre-train the detection network, which speeds up convergence and raises the model's mAP by 3.6%.
(2) To address the loss of sparse stroke information during convolution, this thesis proposes a residual network with a displacement attention mechanism to extract features from interface sketches. It combines channel attention and spatial attention, helping the network learn more useful features while ignoring unimportant information, and raises the model's mAP by 4.6%.
(3) To address class imbalance in the training data, this thesis replaces the multi-class cross-entropy loss in Faster R-CNN with Focal Loss, which makes the network pay more attention to hard-to-classify samples, reduces over-fitting to easy-to-classify samples, and improves detection accuracy on difficult samples.
(4) This thesis develops a plug-in for the popular prototyping tool Adobe XD that automatically generates high-fidelity wireframes from hand-drawn UI sketches, and applies the improved detection model in the plug-in, reducing the time designers spend creating high-fidelity wireframes and improving prototyping efficiency.

In summary, this thesis improves the Faster R-CNN object detection algorithm so that its mAP on hand-drawn UI sketch detection reaches 88.5%, and designs and implements a prototyping-tool plug-in that helps designers automatically generate high-fidelity wireframes from hand-drawn UI sketches.
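The abstract does not specify the internals of the displacement-attention module, only that it combines channel attention and spatial attention. As a rough illustration of that general channel-then-spatial gating idea (the pooling and gating choices below are illustrative assumptions, not the thesis's exact design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    """Squeeze each channel to a scalar by global average pooling,
    then gate the channels with a sigmoid (the learned MLP used in
    real modules is omitted here for brevity).
    x: feature map of shape (C, H, W)."""
    gate = sigmoid(x.mean(axis=(1, 2)))              # (C,), one weight per channel
    return x * gate[:, None, None]

def spatial_attention(x):
    """Pool across channels (mean and max) to describe each spatial
    location, then gate every location with a sigmoid."""
    desc = 0.5 * (x.mean(axis=0) + x.max(axis=0))    # (H, W)
    return x * sigmoid(desc)[None, :, :]

def attention_block(x):
    # Apply channel attention first, then spatial attention, so that
    # uninformative channels and empty background regions are both
    # down-weighted before the next convolution stage.
    return spatial_attention(channel_attention(x))

feat = np.random.default_rng(0).normal(size=(8, 16, 16))
out = attention_block(feat)
```

Because both gates lie in (0, 1), the block can only attenuate features, never amplify them, which matches the stated goal of suppressing unimportant background information in sparse sketches.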
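Point (3) replaces Faster R-CNN's cross-entropy with Focal Loss. A minimal NumPy sketch of the binary form of that loss (the alpha and gamma values below are the commonly used defaults from the original Focal Loss paper, not values taken from this thesis):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    p: predicted probability of the positive class; y: 0/1 label.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    samples so hard samples dominate the gradient."""
    p_t = np.where(y == 1, p, 1.0 - p)               # prob. assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)   # class-balance weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

# An easy, well-classified positive (p_t = 0.9) is down-weighted far
# more than a hard one (p_t = 0.1), which is exactly the behavior used
# here to counter the class imbalance among UI components.
easy = focal_loss(np.array([0.9]), np.array([1]))
hard = focal_loss(np.array([0.1]), np.array([1]))
```

With gamma = 0 and alpha = 0.5 the expression reduces to (half of) ordinary cross-entropy, so the focusing effect is controlled entirely by gamma.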
Keywords/Search Tags:UI sketch, high-fidelity wireframe, target detection, plug-in