Ear Recognition

Abstract

The human ear has recently attracted researchers' attention due to its stable biometric nature.

Biometric authentication for personal identification is very attractive nowadays. The fixed shape and appearance of the ear make it a more reliable biometric than others such as the face. In this paper, we propose a new method for an automated human ear identification system. Our proposed method consists of four stages. The first stage preprocesses the ear image through contrast enhancement, size normalization and skin detection. In the second stage, edge detection is performed using Deep Contour. In the third stage, features are extracted through CHAINLETS, followed by ear identification (matching) using the Histogram Intersection Distance in the fourth stage. The proposed method is applied to the Unconstrained Ear Recognition Challenge (UERC) ear image database. Experimental results show that our proposed system achieves the best accuracy under the CMC evaluation metric compared with state-of-the-art descriptors.

I. INTRODUCTION

Biometrics deals with the recognition of individuals based on their physiological or behavioural characteristics. Researchers have carried out extensive studies on biometrics such as fingerprint, face, palm print, iris, and gait. The ear, a viable new class of biometrics, has certain advantages over the face and fingerprint [1], the two most common biometrics in both academic research and industrial applications. For example, the ear is rich in features; it is a stable structure that does not change much with age [2], and it does not change its shape with facial expressions. Furthermore, the ear is larger than a fingerprint but smaller than a face, and it can easily be captured from a distance without a fully cooperative subject, although it can sometimes be hidden by hair, a cap, turban, muffler, scarf, or earrings.

[1] Basit, A., Javed, M. Y. and Anjum, M. A., "Efficient iris recognition method for human identification", ENFORMATIKA, vol. 1, pp. 24-26, 2005.
[2] Moreno, B., Sanchez, A., Velez, J. F., "On the Use of Outer Ear Images for Personal Identification in Security Applications", IEEE 33rd Annual International Carnahan Conference on Security Technology, pp. 469-476, 1999.

Biometrics is a fast-evolving technology [1] that aims to identify or verify people based on their physical or behavioural characteristics. The rich structure of the ear makes it unique, preserved over time, and suitable for person recognition. Nevertheless, extracting these structures and representing them as robust features remains a challenging task.
Moreover, ear biometrics form a very promising part of multimodal biometrics, especially when fused with face recognition, since simple imaging devices can acquire both.
There are many advantages of using the ear as a biometric for identifying and verifying people: ear appearance is not affected by facial expressions, make-up, facial hair or glasses. The ear is also easily captured and detected from a long distance, as it has a predictable background (the profile of the head).
This paper presents a new technique for an automated human ear recognition system. The proposed algorithm has four steps. In the first step, preprocessing is applied to the ear image, including cropping, size normalization, contrast enhancement and skin detection. In the second step, edge detection is applied using the Deep Contour method to extract the image boundary used in the next step. In the third step, ear features are extracted using CHAINLETS. In the fourth step, feature matching is performed using the fast Histogram Intersection Distance, which gives good results for template matching.

The main objective is to use dense CHAINLETS feature extraction to cover the whole image. Moreover, CHAINLETS features at different scales, i.e. different block and cell sizes, were extracted simultaneously to consider the chain-code directions over areas of different sizes and therefore represent ear structures of different sizes. As a result, a vector of multi-scale dense CHAINLETS feature values was constructed to describe each ear image. These feature values were used for classification (matching). The matching was tested with three distance functions: Cosine, Histogram Intersection, and Chi-square distance.
This paper develops an automatic ear recognition system (from 2D images) and tests it against the most recent descriptor-based methods proposed in this area. The evaluation is done on a new, fully unconstrained dataset of ear images (UERC) gathered from the web.

Related Work

The most recent state-of-the-art methods for ear recognition are covered in the following surveys:

-Ear Recognition: More Than a Survey

-Ear Biometrics: A Survey of Detection, Feature Extraction and Recognition Methods

-A Survey on Ear Biometrics

II. Methodology

A. Preprocessing
Preprocessing of the ear image is the first stage in our proposed system. In preprocessing, we segment the ear from the rest of the head portion of a person. Size normalization and ear image enhancement are also required by our proposed system before feature extraction. Images with earrings, other artifacts, or occlusion by hair are processed using color segmentation based on the ear color cue, i.e. the region of skin-tone values in the HSV color model. Then, for contrast enhancement of the grayscale image, we use contrast limited adaptive histogram equalization [1].
We follow the same procedure as the paper:
-Automated human identification using ear imaging

However, instead of using closing and opening, we convert the grayscale image to binary by thresholding, producing a black-and-white image. The largest region, considered to be the ear region, is then selected as the generated mask. This mask is applied to the original image by multiplying the two together. Earrings, other artifacts, and occlusion by hair are removed using the color segmentation described above, based on the skin-tone region in the HSV color model.
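As an illustration, the thresholding and largest-region mask selection described above can be sketched as follows. This is a minimal NumPy sketch on a toy array; the threshold value and the 4-connectivity used for region growing are assumptions for the sketch, not parameters taken from our system:

```python
import numpy as np
from collections import deque

def largest_region_mask(gray, thresh=128):
    """Binarize a grayscale image and keep only the largest
    connected foreground region (4-connectivity) as the ear mask."""
    binary = gray >= thresh
    labels = np.zeros(binary.shape, dtype=int)
    h, w = binary.shape
    current, sizes = 0, {}
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1
                q = deque([(sy, sx)])
                labels[sy, sx] = current
                size = 0
                while q:  # breadth-first region growing
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes[current] = size
    if not sizes:
        return np.zeros_like(binary)
    return labels == max(sizes, key=sizes.get)

# Toy image: two bright blobs; the larger one plays the ear region.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:4, 1:4] = 200   # 3x3 blob (kept)
img[5, 5] = 220       # 1-pixel blob (discarded)
mask = largest_region_mask(img)
segmented = img * mask  # apply the mask to the original image
```

In a real pipeline a library routine such as a connected-components labeler would replace the hand-written BFS; the sketch only shows the mask-and-multiply logic.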

[1] K. Zuiderveld, "Contrast Limited Adaptive Histogram Equalization", Graphics Gems IV, San Diego: Academic Press Professional, pp. 474–485, 1994.

B. Edge Detection
We propose to use Deep Contour for edge detection to segment the image from the background, representing the background in black and the edges in white.

Deep Contour: the method of [2] uses Convolutional Neural Networks (CNNs) to regress the contours of an image accurately. Edge patches are first partitioned into hundreds of subclasses, after which the goal becomes predicting whether an input patch belongs to each edge subclass or to the non-edge class. The final binary decision is obtained from the ensemble of classification scores. The method runs on a GPU and improves edge detection performance.

[2] W. Shen, X. Wang, Y. Wang, X. Bai, and Z. Zhang, "DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3982–3991, 2015.

C. Feature Extraction

We propose to use the CHAINLETS descriptor to represent and describe the structure of the ear image. CHAINLETS has been used effectively and efficiently in solving object detection and recognition problems, especially when illumination variations are present.

“Chainlets” builds on the idea that each object edge in an image can be characterized by a chain code representing its direction. To capture this, the binary edge image is divided into small, connected regions (cells), and a chain-code histogram is computed for each cell. Cells are then grouped into larger areas (blocks) to improve robustness: a histogram of chain codes over the whole block is computed after normalizing all the cells within the block.

The methodology of CHAINLETS:

Using the Edge Orientation Histogram (EOH)
– edge detection by Deep Contour
– the same methodology as HOG, but using edge orientation (chain codes) instead of gradients.
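One possible reading of this cell/block scheme can be sketched as follows: Freeman chain-code directions between adjacent edge pixels are histogrammed per cell, and blocks of cells are L2-normalized HOG-style. The cell and block sizes and the simplified chaining rule are illustrative assumptions for the sketch, not the published CHAINLETS definition:

```python
import numpy as np

# 8 Freeman chain-code directions (dy, dx), indices 0..7.
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
        (0, -1), (1, -1), (1, 0), (1, 1)]

def cell_chain_histograms(edges, cell=4):
    """Per cell, histogram the Freeman chain-code directions
    linking each edge pixel to its adjacent edge pixels."""
    h, w = edges.shape
    hy, hx = h // cell, w // cell
    hist = np.zeros((hy, hx, 8))
    for y, x in zip(*np.nonzero(edges)):
        for d, (dy, dx) in enumerate(DIRS):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and edges[ny, nx]:
                cy, cx = y // cell, x // cell
                if cy < hy and cx < hx:
                    hist[cy, cx, d] += 1
    return hist

def block_normalize(hist, block=2, eps=1e-6):
    """Group cells into overlapping blocks and L2-normalize
    each block, as HOG does, then concatenate."""
    hy, hx, _ = hist.shape
    feats = []
    for by in range(hy - block + 1):
        for bx in range(hx - block + 1):
            v = hist[by:by + block, bx:bx + block].ravel()
            feats.append(v / np.sqrt((v ** 2).sum() + eps))
    return np.concatenate(feats) if feats else np.zeros(0)

# Toy binary edge image: one horizontal edge segment.
edges = np.zeros((8, 8), dtype=bool)
edges[3, 1:7] = True
feat = block_normalize(cell_chain_histograms(edges))
```

With an 8×8 image, 4×4 cells and 2×2-cell blocks, the sketch yields a single 32-dimensional, unit-norm block vector; dense multi-scale extraction would repeat this over several cell/block sizes and concatenate.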

D. Classification (Matching)
For matching and classification of the computed feature vectors, we propose two options: i) feature vectors can be matched directly using various distance measures (Cosine similarity, Histogram Intersection, and Chi-square distances), or ii) classifiers may be trained on the training portion of the datasets to classify samples.
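The three distance measures of option (i) are standard and, for histogram-style feature vectors, can be written as:

```python
import numpy as np

def cosine_distance(a, b):
    """1 minus the cosine similarity of two feature vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def histogram_intersection(a, b):
    """Histogram Intersection similarity: sum of element-wise
    minima (equals 1 for identical unit-mass histograms)."""
    return float(np.minimum(a, b).sum())

def chi_square_distance(a, b, eps=1e-10):
    """Chi-square distance commonly used for histogram descriptors;
    eps guards against division by zero in empty bins."""
    return 0.5 * float(np.sum((a - b) ** 2 / (a + b + eps)))

# Two identical normalized histograms as a sanity check.
a = np.array([0.2, 0.3, 0.5])
b = np.array([0.2, 0.3, 0.5])
```

Note that Histogram Intersection is a similarity (larger is more similar) while the other two are distances (smaller is more similar), so ranking directions differ during matching.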

III. Dataset and Experiments
We will test our system on an extended version of the Annotated Web Ears (AWE) dataset, containing a total of 9,500 ear images.
The dataset consists of three parts. Part A, the main dataset of 3,300 ear images, contains various annotations such as the level of occlusion, rotation (yaw, roll and pitch angles), presence of accessories, gender, and so on. These images belong to 330 distinct identities (with 10 images per subject) and are used for the recognition experiments (training and testing). Part B is a set of 804 ear images of 16 subjects (with a variable number of images per subject) used for the recognition experiments (training). Part C is an additional set of 7,700 ear images of 3,360 identities used to test the scalability of the submitted algorithms.
We test 7,442 probe images against 9,500 gallery images and report the experimental results. We follow a similar processing pipeline for all techniques, which: i) rescales the images to a fixed size of 100×100 pixels, ii) corrects for illumination-induced variability by applying histogram equalization to the resized images, iii) subjects the images to the selected feature (or descriptor) extraction techniques, and iv) produces a similarity score for each probe-to-gallery comparison by computing the distance between the corresponding feature vectors.
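The four pipeline steps can be sketched with stand-in implementations: nearest-neighbour rescaling, global histogram equalization, and a plain intensity histogram as a placeholder for step iii (the real pipeline uses the descriptors evaluated in our experiments):

```python
import numpy as np

def resize_nn(img, size=(100, 100)):
    """Step i: nearest-neighbour rescale to a fixed size."""
    h, w = img.shape
    ys = np.arange(size[0]) * h // size[0]
    xs = np.arange(size[1]) * w // size[1]
    return img[ys[:, None], xs]

def hist_equalize(img):
    """Step ii: global histogram equalization of an 8-bit image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[img].astype(np.uint8)

def intensity_histogram(img, bins=16):
    """Step iii: placeholder descriptor (normalized gray histogram)."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def chi_square(a, b, eps=1e-10):
    """Step iv: probe-to-gallery score for histogram descriptors."""
    return 0.5 * float(np.sum((a - b) ** 2 / (a + b + eps)))

# Toy probe image of arbitrary size, pushed through steps i-iii.
rng = np.random.default_rng(0)
probe = rng.integers(0, 256, (120, 90), dtype=np.uint8)
x = hist_equalize(resize_nn(probe))
f = intensity_histogram(x)
```

Each gallery image would pass through the same steps, with step iv applied pairwise to rank gallery entries per probe.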
We implement feature extraction techniques including CHAINLETS as well as LBP, LPQ, BSIF, POEM, HOG, DSIFT, RILPQ and Gabor features, and select the open hyper-parameters through cross-validation. We use the chi-square distance to measure the similarity between the feature vectors of the probe and gallery images for all histogram-based descriptors, and the cosine similarity measure for the Gabor features.

We follow the same experimental protocols as those described in the survey:
Ear Recognition: More Than a Survey
The results are summarized in the following table:
Method      Rank-1 (%)   Rank-5 (%)   AUCMC (%)
CHAINLETS   24.95        32.68        93.65
LBP          8.75        16.69        84.31
POEM        10.70        20.46        89.54
BSIF         7.81        14.79        79.09
Gabor        9.58        20.37        92.47
LPQ          9.00        16.98        83.03
RILPQ        7.73        14.74        84.40
HOG          7.15        12.79        75.44
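The Rank-1, Rank-5 and AUCMC values in the table are all derived from the CMC curve. As an illustration, such a curve can be computed from a probe-gallery distance matrix as follows (the matrix and identity labels are toy data, not from our experiments):

```python
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids):
    """Cumulative Match Characteristic from a probe x gallery
    distance matrix: cmc[k-1] is the fraction of probes whose true
    identity appears among the k closest gallery entries."""
    n_gallery = dist.shape[1]
    ranks = []
    for i, row in enumerate(dist):
        order = np.argsort(row)                 # closest gallery first
        match = gallery_ids[order] == probe_ids[i]
        ranks.append(int(np.argmax(match)))     # 0-based rank of first hit
    ranks = np.array(ranks)
    return np.array([(ranks < k).mean() for k in range(1, n_gallery + 1)])

# Toy example: 3 probes scored against 4 gallery images.
dist = np.array([[0.1, 0.9, 0.8, 0.7],
                 [0.9, 0.2, 0.8, 0.7],
                 [0.9, 0.1, 0.8, 0.2]])
probe_ids = np.array([1, 2, 3])
gallery_ids = np.array([1, 2, 4, 3])
cmc = cmc_curve(dist, probe_ids, gallery_ids)
rank1 = cmc[0]          # fraction matched at rank 1
aucmc = cmc.mean()      # area under the normalized CMC curve
```

Here probes 1 and 2 match at rank 1 while probe 3 matches at rank 2, so rank-1 accuracy is 2/3 and the curve reaches 1.0 by rank 2.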

The table shows that performance on the extended version of the Annotated Web Ears (AWE) dataset is significantly improved by our approach in both the identification and verification experiments. The rank-1 recognition rates of the assessed baseline descriptors range from 7.15% with HOG to 10.70% with POEM, while our CHAINLETS descriptor achieves 24.95%. For the verification experiments, the performance of all evaluated methods is also lower than that of our approach and, considering the entire ROC curve, the CHAINLETS descriptor is again the top performer.

Conclusion

Our experimental results show that using CHAINLETS for ear recognition yields the best results compared with other state-of-the-art descriptors.
