The goat (Capra hircus) is one of the oldest domesticated species and an essential resource for meat, milk, and fibre throughout the globe. Nowadays, individual goats are usually identified with tag-based methods. Automatic identification of individual goats is an important step towards individualized care and protection, involving reproduction, health, and improved feeding quality. Moreover, to meet animal-welfare objectives, animals must be properly maintained and kept free from pain, injury, and disease. In precision farming, animal facial expressions can help farmers improve animal health and welfare. This thesis proposes automatic face and facial-landmark extraction from source images, together with face identification and facial action unit classification, using a convolutional neural network (CNN) approach. Because individuals look highly similar and adequate datasets are lacking, this problem is more complex than human face recognition.

Three publicly available datasets were composed for detection, recognition, and pain action unit classification. The detection dataset contains 1680 images with 3078 face, 3326 eye, 2586 nose, and 4771 ear bounding boxes for face and facial-landmark detection. For face identification, 4129 images were collected from 10 individuals, while 2387 ear and 1635 eye images were retrieved from the source images for pain action unit classification. State-of-the-art CNN models were trained on these datasets. The proposed method is divided into three main steps: detection, identity recognition, and pain action unit classification. Given an input image, the first step detects the face and facial landmarks using a YOLO-based method and crops them into fixed regions. The facial landmarks are used to geometrically normalize the face. A custom CNN model then extracts features from the detected region and predicts class probabilities for identity classification. For pain action unit classification, a pretrained CNN is used for feature extraction and a traditional machine learning model is used for parameter learning.

This work reports detection accuracies (average precision) of 93.26%, 83.71%, 92.04%, and 85.55% for face, eye, mouth, and ear respectively, and an individual identification accuracy of 96.4% over 10 individuals. Furthermore, the facial action unit classification accuracy for eyes and ears was 81% and 95%, respectively. I firmly believe that these findings open new opportunities for phenotypic data collection, disease diagnosis, activity monitoring, and, ultimately, rearing goats without any need for tags. All datasets and related outcomes are publicly available (http://dx.doi.org/10.17632/4skwhnrscr).
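To make the three-stage pipeline concrete, the sketch below strings the stages together in the order described above. It is a minimal illustration only: the weight files (face_detector.pt, goat_id_cnn.h5), the ultralytics YOLO wrapper, and the VGG16-plus-SVM pairing for pain action units are assumptions standing in for the exact models used in the thesis.

```python
# Minimal sketch of the detection -> identity -> pain-AU pipeline.
# Model files and the VGG16 + SVM choice are assumptions, not the
# thesis's exact configuration.
import numpy as np
import cv2
from ultralytics import YOLO                       # assumed YOLO wrapper
from tensorflow.keras.models import load_model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# 1) Face and facial-landmark detection (YOLO-based), cropped to fixed regions.
detector = YOLO("face_detector.pt")                # hypothetical weights
def detect_regions(image):
    boxes = detector(image)[0].boxes
    crops = {}
    for cls_id, xyxy in zip(boxes.cls.tolist(), boxes.xyxy.tolist()):
        x1, y1, x2, y2 = map(int, xyxy)
        label = detector.names[int(cls_id)]        # e.g. face / eye / nose / ear
        crops.setdefault(label, []).append(
            cv2.resize(image[y1:y2, x1:x2], (224, 224)))
    return crops

# 2) Identity recognition: a custom CNN predicts class probabilities
#    for the (geometrically normalized) face crop.
id_model = load_model("goat_id_cnn.h5")            # hypothetical custom CNN
def identify(face_crop):
    probs = id_model.predict(face_crop[np.newaxis] / 255.0)[0]
    return int(np.argmax(probs)), float(np.max(probs))

# 3) Pain action units: a pretrained CNN acts as a fixed feature extractor,
#    and a traditional ML classifier (an SVM here) learns on those features.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")
def cnn_features(crops):
    batch = preprocess_input(np.stack(crops).astype("float32"))
    return backbone.predict(batch)

au_classifier = SVC(kernel="linear")
# au_classifier.fit(cnn_features(train_crops), train_labels)
# ear_au = au_classifier.predict(cnn_features(detect_regions(img)["ear"]))
```

The design point this sketch reflects is the split between the identity stage, where a CNN is trained end to end on the 10 known individuals, and the pain action unit stage, where a frozen pretrained CNN supplies features and a conventional classifier is fitted on the comparatively small ear and eye datasets.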