
Student Project: Smart Picture Frame Group 3:

Gigi Ho <[email protected]>

Lars-Erik Riechert <[email protected]>

Supervisors:

Stefan Hillman <[email protected]>

Patrick Ehrenbrink <[email protected]>

Technische Universität Berlin

2 April 2015


WS 2014/15 Student Project: Smart Picture Frame Group 3

Table of Contents

1. Introduction
2. Research
   2.1 Face recognition library
   2.2 Face recognition algorithms
      2.2.1 Eigenface
      2.2.2 Fisherface
      2.2.3 4SF
   2.3 Photo library
   2.4 Further study of OpenBR
      2.4.1 General concepts in OpenBR
         2.4.1.1 Algorithms
         2.4.1.2 Training and Model
         2.4.1.3 Enrolling, Template and Gallery
         2.4.1.4 Comparing, Image Set and Similarity Scores
         2.4.1.5 Evaluation and Mask Matrix
      2.4.2 Applications
         2.4.2.1 HelloWorld example
         2.4.2.2 Face recognition system
3. Evaluation
   3.1 Methods
      3.1.1 Algorithms
      3.1.2 Enrolling and testing dataset
      3.1.3 Training dataset and models
      3.1.4 Running the test
      3.1.5 Controlled experiment
   3.2 Evaluation results
4. Implementation
   4.1 Installing OpenBR
   4.2 Downloading source codes
      4.2.1 Folder structure
   4.3 Python implementation
      4.3.1 Scripts for evaluation
         4.3.1.1 generate_xml_*.py
         4.3.1.2 generate_gal.py
         4.3.1.3 generate_scores.py
         4.3.1.4 generate_csv.py
         4.3.1.5 generate_eer.py
         4.3.1.6 generate_model.bat
      4.3.2 The Smart Picture Frame demonstrator
         4.3.2.1 User manual
         4.3.2.2 Example input and output
         4.3.2.3 Internal variables and file dependencies
5. Conclusion
6. References


1. Introduction

The motivation of this student project is a smart home system in which a smart picture frame is one of the interaction points. Imagine a person entering a room with the smart home system installed: the smart picture frame can give different responses or offer different services according to the face it recognises. The smart picture frame can recognise a person who is already in the database or register the person as a new face. This is the vision of the student project: Smart Picture Frame.

This project serves as the starting point of the Smart Picture Frame system. The goals of the project are to evaluate several state-of-the-art face recognition algorithms and to implement a face recognition demonstrator using the algorithm with the best performance.

2. Research

The research stage aims to build a basic understanding of, and give a brief review of, the various existing face recognition algorithms and photo libraries, and to select the most favourable ones for further research and implementation.

2.1 Face recognition library

The basic criteria for a face recognition library in this project are that the library offers an open-source license, is free to use, and can work offline.

Several existing libraries were found; their properties are summarized in Table 1 below. OpenCV is the most well-known and best-supported of them, while OpenBR is a wrapper library over OpenCV that supports additional functions. Face++ is a commercial library with both online and offline versions for development and is temporarily free to use. CVV has the smallest scale and does not seem well supported, making it the least favourable option. While Face++ seems to offer very promising functions and features, too much remains unknown about its license and offline functionality, so it was not considered further. Comparing OpenBR with OpenCV, OpenBR supports all the common algorithms that OpenCV supports, as the OpenBR framework is itself a wrapper of OpenCV; therefore OpenBR was chosen as the library for further study and investigation.

OpenCV
  Coding language support: C++, C, Python, Java
  Website: opencv.org
  License: BSD
  Algorithms: Eigenface [1]; Fisherface [2]; Local binary pattern histogram [3]
  Purpose: Computer vision
  No. of contributors: 306

CVV
  Coding language support: C
  Website: libcvv.org
  License: BSD
  Algorithms: <Unknown>
  Purpose: Computer vision
  No. of contributors: 3

OpenBR
  Coding language support: C++
  Website: openbiometrics.org
  License: Apache 2.0
  Algorithms: Spectrally sampled structural subspaces (4SF) [4]; others supported by OpenCV
  Purpose: Biometric identification
  No. of contributors: 16

Face++
  Coding language support: Python, Objective-C (iOS), Java (Android), Matlab, PHP, C# (non-official)
  Website: faceplusplus.com
  License: Commercial (temporarily free)
  Algorithms: <Unknown>
  Purpose: Face recognition and analysis
  No. of contributors: <Unknown>

Table 1. Summary of several existing face recognition libraries

2.2 Face recognition algorithms

Eigenface and Fisherface are the more well-known, traditional face recognition algorithms, while 4SF is a more novel one. The 4SF algorithm attracted the greatest interest here as it has been shown to outperform other traditional algorithms [5]; it was therefore chosen to be evaluated against the traditional Eigenface and Fisherface algorithms. The working principles of these algorithms are briefly described below.

2.2.1 Eigenface

The Eigenfaces method [1] takes a holistic approach to face recognition: a facial image is a point in a high-dimensional image space, and a lower-dimensional representation is found in which classification becomes easy. The lower-dimensional subspace is found with Principal Component Analysis (PCA), which identifies the axes of maximum variance. While this kind of transformation is optimal from a reconstruction standpoint, it does not take any class labels into account. Imagine a situation where the variance is generated by an external source, such as lighting: the axes of maximum variance then need not contain any discriminative information at all, and classification becomes impossible.
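The PCA step described above can be sketched in a few lines. The snippet below learns a subspace from flattened face vectors and projects a face into it; this is a minimal NumPy illustration of the idea, not OpenBR's implementation, and the function names and array shapes are our own.

```python
import numpy as np

def train_eigenfaces(images, k):
    """Learn a k-dimensional PCA subspace from flattened face vectors.

    images: (n_samples, n_pixels) array. Returns (mean, components),
    where components has shape (k, n_pixels)."""
    X = np.asarray(images, dtype=float)
    mean = X.mean(axis=0)
    centered = X - mean
    # Eigen-decomposition of the covariance matrix; eigh returns
    # eigenvalues in ascending order, so take the last k eigenvectors.
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    components = eigvecs[:, -k:][:, ::-1].T
    return mean, components

def project(face, mean, components):
    """Project one flattened face into the learned PCA subspace."""
    return components @ (np.asarray(face, dtype=float) - mean)
```

Classification in the subspace then reduces to a nearest-neighbour search, e.g. with the L2 distance, mirroring the Dist(L2) comparison used in the Eigenfaces algorithm description later in this report.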

2.2.2 Fisherface

The Fisherface algorithm [2] applies a class-specific projection using Linear Discriminant Analysis (LDA). The basic idea is to minimize the variance within a class while at the same time maximizing the variance between the classes.
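The objective LDA optimises can be illustrated in one dimension: the ratio of between-class to within-class variance. The toy function below is our own illustration of that criterion, not part of any library.

```python
def fisher_ratio(class_a, class_b):
    """Between-class variance over within-class scatter for a 1-D feature.

    LDA seeks projections that maximise exactly this kind of ratio:
    class means far apart, samples within each class tightly clustered."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    within = (sum((x - mean_a) ** 2 for x in class_a)
              + sum((x - mean_b) ** 2 for x in class_b))
    between = (mean_a - mean_b) ** 2
    return between / within

# A feature that separates the classes well scores higher than one that
# mixes them, regardless of the overall variance.
good = fisher_ratio([1.0, 1.1, 0.9], [5.0, 5.1, 4.9])
bad = fisher_ratio([1.0, 5.0, 3.0], [0.9, 5.1, 3.1])
assert good > bad
```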

2.2.3 4SF

The default face recognition algorithm in OpenBR is based on the Spectrally Sampled Structural Subspace Features (4SF) algorithm [4]. The variant of 4SF used in OpenBR is described here:

1. Detection: OpenBR wraps the OpenCV Viola-Jones object detector and offers frontal face detection with the syntax Cascade(FrontalFace). For eye detection, a custom C++ port of the ASEF eye detector [2] is included in OpenBR as ASEFEyes.


2. Normalization: Faces are registered using the detected eye locations to perform a rigid rotation and scaling via the Affine(...) transform. The face recognition algorithm also applies illumination preprocessing before extracting local binary patterns.

3. Representation: The face recognition algorithm uses both LBP and SIFT descriptors, sampled in a dense grid across the face. Histograms of local binary patterns are extracted. A PCA decomposition retaining 95% of the variance is learned for each local region; descriptors are then projected into their corresponding Eigenspace and normalized to unit L2-norm.

4. Extraction: The next step is weighted spectral sampling, whereby all per-region feature vectors are concatenated and random sampling is performed, weighted by the variance of each dimension. LDA is then applied to each random sample to learn subspace embeddings that improve the discriminability of the facial feature vectors. Lastly, the descriptors are once again concatenated and normalized to unit L1-norm.

5. Matching: OpenBR introduces a novel matching strategy: given an algorithm that compares feature vectors using the L1 distance, the vectors are quantized to 8-bit unsigned integers as the last step in template generation. This quantization reduces the template size four-fold by exploiting the observation that the IEEE floating-point format provides more precision than necessary to represent a feature vector. Together with another improvement that OpenBR applies to metrics that cannot be represented in 8 bits, template comparison is fast enough to allow several million comparisons per CPU thread per second.
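The quantization idea in the matching step can be sketched as follows. The linear scaling chosen here is an illustrative assumption, not OpenBR's actual code; the point is that an L1 comparison survives the move from floats to bytes while the template shrinks four-fold.

```python
def quantize(features, lo=0.0, hi=1.0):
    """Map floats in [lo, hi] to 8-bit unsigned integers (0..255).

    Shrinks a 32-bit float template four-fold; the linear scaling
    here is an illustrative choice, not OpenBR's implementation."""
    scale = 255.0 / (hi - lo)
    return bytes(min(255, max(0, round((x - lo) * scale))) for x in features)

def l1_distance(a, b):
    """L1 (city-block) distance between two quantized templates."""
    return sum(abs(x - y) for x, y in zip(a, b))
```

Because the comparison is now plain integer byte arithmetic, throughput in the range of millions of comparisons per CPU thread per second becomes plausible.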

2.3 Photo library

Photo libraries were needed for the purposes of training, enrolling and testing. There are several criteria for the photo libraries used in this project: a library should be free of charge and contain at least 20 persons. Ideally, it should contain standard, controlled images for training as well as realistic images for testing, where realistic means faces under different angles, illuminations, etc.

The OpenBR installation also includes a step for downloading the datasets that OpenBR uses in its own project. Several of these data sources are available to be downloaded and used, e.g. ATT, BioID, LFW and MEDS. Together with some other data sources found elsewhere, the list below shows the photo libraries available for use; their brief descriptions are given in Table 2.

The BioID Face Database (BioID), https://www.bioid.com/About/BioID-Face-Database

The Georgia Tech face database (GT), http://www.anefian.com/research/face_reco.htm, available for free, 128MB

Color FERET (FERET), http://www.nist.gov/itl/iad/ig/colorferet.cfm

Labeled Faces in the Wild (LFW), http://vis-www.cs.umass.edu/lfw/lfw.tgz, available for free, ~150MB

AT&T Facedatabase (ATT), http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html, download available, 4.5MB

Library   # of persons   # of images    Description
BioID     23             1521           gray-level images with a resolution of 384x286 pixels; each shows the frontal view of a face
GT        50             750 (15x50)    frontal and/or tilted faces with different facial expressions, lighting conditions, scales and orientations
FERET     994            11388          variation of pose, expression, illumination and resolution
LFW       1680           13000          acquired from the web; realistic images
ATT       40             400 (40x10)    varying lighting, facial expressions and facial details; upright, frontal position; black & white; cropped face; no angle variations

Table 2. Brief details of the different photo libraries found

2.4 Further study of OpenBR

OpenBR stands for Open Source Biometric Recognition. It is an open-source framework supporting the development of open algorithms, the evaluation of recognition performance and the deployment of biometric recognition systems. The package comes with a command-line application 'br' which is ready to use on different platforms including Windows, Mac OS and Linux. Alternatively, applications or plug-ins can be developed using the C++ framework. More information about installing and using OpenBR can be found in Section 4.

For the purposes of evaluation and developing a demonstrator, the command-line application 'br' suffices.

br -algorithm FaceRecognition -compare imageA.jpg imageB.jpg

Listing 1: Basic -compare command

The above command is the simplest br command, demonstrating how to compare two images using the default pre-trained OpenBR FaceRecognition model. The output of the command is a numeric similarity score (displayed as the response in the command prompt) showing how similar the two images are: the higher the score, the more similar the images. Using this default FaceRecognition algorithm, the similarity score is normalised to the range 0 to 1.

Besides trying out the command-line application and looking into the C++ source code, most further understanding of OpenBR was gained by reading its publications [6] and project documentation [7], and by using its developer forum [8], which was especially useful.

2.4.1 General concepts in OpenBR

In order to proceed with using OpenBR for evaluation and application development later, some basic concepts of OpenBR are described below.

2.4.1.1 Algorithms

An algorithm describes how an image is processed and how features are extracted from it. In OpenBR, different algorithms can be applied readily, without recompiling, using a predefined grammar to describe the algorithm. Some common methods are readily available, e.g. PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis), where PCA is used in EigenFace and LDA in FisherFace. An example algorithm description for EigenFaces is shown in the line below (Listing 2).

Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+CvtFloat+PCA(0.95):Dist(L2)

Listing 2: Algorithm description syntax of EigenFaces

The above algorithm describes the following process: read the image from disk, convert it to grayscale, run the face detector, run the eye detector on any detected faces, use the eye locations to normalize the face for rotation and scale, convert to floating-point format, and then embed it in a PCA subspace trained on face images, which is essentially what the Eigenfaces algorithm does. More predefined algorithms can be found in the OpenBR source file "openbr/plugins/algorithms.php". The listing below shows the default OpenBR face recognition algorithm, 4SF, which can also be found in "openbr/plugins/algorithms.php".

Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+(Grid(10,10)+SIFTDescriptor(12)+ByRow)/(Blur(1.1)+Gamma(0.2)+DoG(1,2)+ContrastEq(0.1,10)+LBP(1,2)+RectRegions(8,8,6,6)+Hist(59))+PCA(0.95)+Cat+Normalize(L2)+Dup(12)+RndSubspace(0.05,1)+LDA(0.98)+Cat+PCA(0.95)+Normalize(L1)+Quantize:NegativeLogPlusOne(ByteL1)

Listing 3: Algorithm description syntax of 4SF
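To make the grammar easier to read, the toy splitter below breaks an algorithm description into its '+'-chained enrollment stages and the comparison method after the ':'. It only honours top-level separators (parenthesised arguments are kept intact) and is a reading aid written for this report, not OpenBR's actual parser.

```python
def split_algorithm(description):
    """Split an OpenBR-style algorithm string into (stages, distance).

    The text before ':' is the enrollment pipeline, the text after it
    the comparison method; '+' chains the stages. Only top-level '+'
    separators are honoured, so arguments in parentheses stay whole."""
    head, _, distance = description.partition(':')
    stages, current, depth = [], [], 0
    for ch in head:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        if ch == '+' and depth == 0:
            stages.append(''.join(current))
            current = []
        else:
            current.append(ch)
    stages.append(''.join(current))
    return stages, distance
```

Applied to the EigenFaces description in Listing 2, it yields the stages Open, Cvt(Gray), Cascade(FrontalFace), ASEFEyes, Affine(128,128,0.33,0.45), CvtFloat and PCA(0.95), with Dist(L2) as the comparison method.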

An algorithm always needs to be trained into a model before it can be used to compare images and thereby perform face recognition. Training is discussed in the next section.

2.4.1.2 Training and Model

Training is about building a statistical model, generated from an independent set of reference images, which describes how to discriminate between faces, in the hope that the statistical model will generalize well to unseen images [8].

Model training expects training images from many individuals, and for most face recognition algorithms, e.g. 4SF, it also requires labeled data (labeled with identity). OpenBR offers numerous ways to associate images with labels, but the easiest is to create a subfolder for each unique identity within the root image folder, where each subfolder should contain at least two images. Another option is to use a sigset XML file, which is described in Section 2.4.1.4. Training images should also be representative of the kind of images that will be encountered in the target application, and standard enough not to confuse the face detector.
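The subfolder-per-identity convention can be mimicked when preparing data outside of br. The helper below is our own sketch, not an OpenBR API: it walks such a folder layout and collects (path, identity) pairs, skipping identities with fewer than two images.

```python
import os

def collect_labeled_images(root):
    """Walk a root folder laid out as root/<identity>/<image> and return
    (path, identity) pairs, mirroring OpenBR's easiest labeling scheme.

    Identities with fewer than two images are skipped, since each
    identity subfolder should contain at least two images."""
    pairs = []
    for identity in sorted(os.listdir(root)):
        folder = os.path.join(root, identity)
        if not os.path.isdir(folder):
            continue
        images = sorted(os.listdir(folder))
        if len(images) < 2:
            continue
        pairs.extend((os.path.join(folder, img), identity) for img in images)
    return pairs
```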

Below is an example of training using the OpenBR command. The command first defines the algorithm to be used, in this example the EigenFace algorithm described above. It then tells the program to train on a specific dataset, specified by the file path given after '-train', followed by the name (or a file path; otherwise the working directory is used by default) of the model file to be generated. By OpenBR's convention, the model file is generated without any file extension.

br -algorithm 'Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+CvtFloat+PCA(0.95):Dist(L2)' -train ../../../data/BioID/img Eigenfaces

Listing 4: Example of a training command for EigenFaces

The string following '-algorithm' sometimes represents an algorithm description (as in Listing 4) and sometimes a trained model (as in Listing 1). In Listing 1, 'FaceRecognition' is a pre-trained model of 4SF trained on the PCSO database, which is the default face recognition model (or algorithm) used in OpenBR.

2.4.1.3 Enrolling, Template and Gallery

Enrolling is about parameterizing an individual image with respect to a specified model; it can also be understood as feature extraction. Enrolling can be performed on a single image or on multiple images, where multiple images can be represented by a signature set (sigset) XML file or simply a folder path.

A Template is a binary representation of a face (or an image) with its features extracted after enrollment; it is what is used in comparison and evaluation. A TemplateList represents multiple Templates and can be stored on disk as a file called a Gallery, with the file extension '.gal'. A Gallery is a binary file containing serialized (complete) templates and is designed so that it can easily be appended to in the future.

OpenBR also supports enrolling multiple faces in one image into multiple templates. This can be done by setting the global option flag -enrollAll true, which is false by default.


Below is an example command for explicit enrolling. The -enroll command applies Eigenfaces (the previously trained model) to the images under the file path "../data/BioID/img" and stores the result in the .gal file. Often, enrolling is done implicitly; for example, when the -compare command is used to compare two images, the two images are enrolled and transformed into templates behind the scenes.

br -algorithm Eigenfaces -enroll ../data/BioID/img ../data/BioID/target.gal

Listing 5: Example of enrolling a folder of images with EigenFaces

2.4.1.4 Comparing, Image Set and Similarity Scores

In OpenBR, comparisons can be made between two individual images, as in Listing 1, or with collections of images (image sets). In other words, 1:1, 1:n and n:m comparisons are all possible. A single similarity score is the output of a 1:1 comparison, while similarity scoring matrices of dimension 1:n and n:m are the outputs of 1:n and n:m comparisons.
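The shape bookkeeping of an n:m comparison can be sketched with a toy similarity. The negative-L1 similarity below is an illustrative stand-in for br's scores, not OpenBR code: one row per target template, one column per query.

```python
def similarity(a, b):
    """Toy similarity: higher when feature vectors are closer (negative L1)."""
    return -sum(abs(x - y) for x, y in zip(a, b))

def score_matrix(targets, queries):
    """n:m comparison: one row per target template, one column per query,
    mirroring the shape of the similarity scoring matrix br produces."""
    return [[similarity(t, q) for q in queries] for t in targets]
```

A 1:1 comparison is simply the degenerate case of a 1x1 matrix, i.e. a single similarity score.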

As in Listing 6, when comparing a <target_imageset> of n images and a <query_imageset> of m images, the output is a similarity scoring matrix of dimension n:m (provided that -enrollAll is not set and/or each image contains only one face). As the output file parameter is specified as 'scores.mtx', the score matrix is written to the generated file instead of being printed on screen as in Listing 1. Instead of a binary .mtx file, OpenBR can also generate a .csv text file (by specifying the output file as 'scores.csv'), which can be opened with a common text editor.

br -algorithm FaceRecognition -compare <target_imageset> <query_imageset> ../result/scores.mtx

Listing 6: Comparing image datasets

In the compare command in Listing 6, <target_imageset> and <query_imageset> are parameters specifying two sets of images. These image sets can be represented by a folder path, a gallery file or a signature set (sigset) XML file. Unlike a folder path, which only lists the image files, a sigset file also states the ground truth of the target images, i.e. the identity of the person in each specified image. This information, specifically the biometric signatures, together with the generated similarity scoring matrix, is essential for using the evaluation command (discussed further in the next section). More detailed information about the sigset specification standard can be found on page 9 of the MBGC File Overview document [9], while the easiest way to define a sigset file is shown in the example snippet in Listing 7. In other words, a sigset is a br::Gallery-compliant XML file list, identified by the file extension .xml [10]. For applications like face recognition systems, using a gallery file (.gal) as the target image database is advantageous compared with using sigset XML files or a folder path, as the target images are pre-enrolled with their features extracted, saving the time of extracting features in real time.


<?xml version="1.0" encoding="UTF-8"?>
<biometric-signature-set>
  <biometric-signature name="S01">
    <presentation name="S01" modality="face" file-name="s01\04.jpg" file-format="jpeg"/>
  </biometric-signature>
  <biometric-signature name="S01">
    <presentation name="S01" modality="face" file-name="s01\07.jpg" file-format="jpeg"/>
  </biometric-signature>
</biometric-signature-set>

Listing 7. A snippet of an example sigset file
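Since this structure is regular, sigset files are easy to generate programmatically. The sketch below builds the same element layout as the snippet in Listing 7 with Python's standard library; the function name and (name, file) pair input are our own choices, not an OpenBR API.

```python
import xml.etree.ElementTree as ET

def build_sigset(entries):
    """Build a sigset XML tree from (signature_name, file_name) pairs,
    mirroring the element structure of the snippet in Listing 7."""
    root = ET.Element('biometric-signature-set')
    for name, file_name in entries:
        sig = ET.SubElement(root, 'biometric-signature', {'name': name})
        ET.SubElement(sig, 'presentation',
                      {'name': name, 'modality': 'face',
                       'file-name': file_name, 'file-format': 'jpeg'})
    return ET.ElementTree(root)
```

Calling tree.write(path, xml_declaration=True, encoding='UTF-8') on the returned tree then writes a sigset file to disk.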

2.4.1.5 Evaluation and Mask Matrix

OpenBR provides an environment for evaluating biometric algorithms. Some concepts of the Biometric Evaluation Environment (BEE), a NIST standard, are implemented in OpenBR [10].

In the previous section, a similarity scoring matrix file (.mtx) was generated as the output of a -compare command. In order to evaluate such a scoring matrix, the ground truth of whether each comparison is genuine or impostor is necessary. This information can be represented using a mask matrix [10], which is also a binary matrix, of the same dimensions as the similarity matrix, consisting only of 1s and 0s representing genuine and impostor comparisons. Instead of generating the mask matrix separately, the easiest way is to run the -compare command on two sigset files (instead of folder paths), so that the mask matrix is implicitly included in the generated similarity score matrix file, which is then ready for evaluation. In Listing 8, an example -eval command is shown; it generates performance.csv as an output file, which contains useful performance evaluation metrics and other statistics in an OpenBR-predefined format, e.g. false acceptance rate (FAR), false rejection rate (FRR), etc.

br -eval ../result/scores.mtx ../result/performance.csv

Listing 8: Evaluating a similarity matrix (with mask matrix inside)
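How FAR and FRR fall out of a score matrix plus its mask can be shown in a few lines. The helper below is our own illustration of the metrics, operating on flat, parallel lists rather than br's binary matrix format.

```python
def far_frr(scores, mask, threshold):
    """Compute (FAR, FRR) at one decision threshold.

    scores and mask are flat, parallel lists: mask[i] is 1 for a genuine
    comparison and 0 for an impostor one, matching the mask matrix idea.
    FAR: fraction of impostor scores accepted (>= threshold).
    FRR: fraction of genuine scores rejected (< threshold)."""
    genuine = [s for s, m in zip(scores, mask) if m == 1]
    impostor = [s for s, m in zip(scores, mask) if m == 0]
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr
```

Sweeping the threshold trades FAR against FRR, which is exactly the detection-error tradeoff the performance CSV tabulates.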

2.4.2 Applications

Having understood more about OpenBR, we now turn to how it can be used for the actual implementation.

2.4.2.1 HelloWorld example

# 1. Train the Eigenfaces algorithm
br -algorithm 'Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+CvtFloat+PCA(0.95):Dist(L2)' -train ../data/BioID/img Eigenfaces

# 2a. Enroll images using Eigenfaces
br -algorithm Eigenfaces -path ../data/MEDS/img -compare ../data/MEDS/sigset/MEDS_frontal_target.xml ../data/MEDS/sigset/MEDS_frontal_query.xml scores.mtx

# 2b. Alternatively, generate galleries first:
# br -algorithm Eigenfaces -path ../data/MEDS/img -enroll ../data/MEDS/sigset/MEDS_frontal_target.xml target.gal -enroll ../data/MEDS/sigset/MEDS_frontal_query.xml query.gal -compare target.gal query.gal scores.mtx

# 3. Evaluate Eigenfaces accuracy
br -eval scores.mtx performance.csv
echo "Not very accurate right? That's why nobody uses Eigenfaces anymore!"

# 4. Create plots
br -plot performance.csv plots.pdf

Listing 9. br command-line HelloWorld example [7]

The br HelloWorld example demonstrates most of the basic commands one uses when implementing the evaluation logic, namely -train, -enroll, -compare, -eval and -plot. The process is briefly described in the steps below.

1. The Eigenfaces algorithm is trained using the BioID dataset, with the output model file, named "Eigenfaces", generated in the current working directory.

2. The trained model file "Eigenfaces" is then used to compare (with implicit enrollment) two image datasets, specified by sigset XML files (with labeled identities), and to generate a similarity score matrix file (with a mask matrix, i.e. a matrix of the same size consisting of 0s and 1s stating whether each comparison is a genuine or impostor match). Alternatively, in step 2b, the two sigset XML files are first explicitly enrolled to generate two gallery files, which are then used to perform the comparison.

3. The "scores.mtx" file generated in step 2 is then evaluated using the -eval command, and a .csv file is generated containing useful performance evaluation metrics and other statistics, e.g. false acceptance rate (FAR) and false rejection rate (FRR).

4. Using the performance CSV file generated in the previous step, the -plot command can be called to generate a set of plots in a PDF file.

2.4.2.2 Face recognition system

Since OpenBR has no concept of persons or face sets, only individual faces with labeled identities, whenever a comparison is made a query image is always compared individually against another image (or template), rather than against a collection of images of the same person. A simple face recognition system built upon OpenBR can therefore be imagined as follows.


Assuming the optimal face recognition model is already trained and the sigset XML of the target photo library is already prepared, the system can enroll the target photo library using the trained model and generate a target photo gallery file (.gal), which becomes the target photo database to be queried. Whenever there is a new candidate face, the face image is compared against each individual image in the target photo gallery, producing a set of similarity scores. The comparison with the highest similarity score that also passes a certain predefined threshold is classified as a match, i.e. the query image is assigned the identity of the matched face in the target photo library. Alternatively, more complex rules for combining the match scores of each person's set of face images, or other comparison strategies, can be designed to incorporate the concept of a person.
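The "best match above a threshold" rule just described can be sketched as follows. The toy negative-L1 similarity and the gallery-as-dictionary layout are our own illustrative assumptions, standing in for br's similarity scores and gallery files.

```python
def identify(query, gallery, threshold):
    """Return the identity of the best-scoring gallery entry, or None.

    gallery maps identity -> feature vector; the toy negative-L1
    similarity here stands in for br's similarity scores. A match is
    declared only if the best score also passes the threshold."""
    def similarity(a, b):
        return -sum(abs(x - y) for x, y in zip(a, b))
    best_id, best_score = None, float('-inf')
    for identity, features in gallery.items():
        score = similarity(query, features)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id if best_score >= threshold else None
```

Returning None for a below-threshold best match is what lets the system treat the person as a new face rather than forcing a wrong identification.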

3. Evaluation

The core research question in this project is to find the best face recognition algorithm. Beyond this, there are many further research questions of interest when developing a face recognition system; those that interested us most are listed below:

- Which face recognition algorithm gives the best performance?
- How much does the size of the training dataset affect the performance of face recognition?
- Does the face recognition model perform better when images similar to those being tested are used in training, i.e. from the same person or dataset?
- Is it better to enroll more images of a person in the target photo database?

These questions were investigated in this evaluation section.

3.1 Methods

In order to answer the above research questions, a number of experiments were designed to test the impact of different dimensions as listed below:

different algorithms: 4SF, FisherFace and EigenFace different dataset for training algorithm models: GT and Feret

size of dataset: small dataset (<1000, Feret) vs big dataset (~16000, PSCO) having similar images for training and testing (GT): having similar images for

training (GT) vs no similar images in training (Feret/ PSCO) number of enrolled images: 3 vs 10

The equal error rate (EER) was used as the performance metric in these experiments; a lower EER implies better performance.
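The EER is the operating point at which the false acceptance rate equals the false rejection rate. The helper below approximates it by sweeping every observed score as a threshold and taking the point where FAR and FRR are closest; it is a simple illustration written for this report, not the project's actual generate_eer.py script.

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER from genuine and impostor similarity scores.

    Sweeps every observed score as a threshold, computes FAR and FRR
    at each, and returns their mean at the point where they are closest."""
    best_gap, eer = float('inf'), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

Perfectly separable score distributions yield an EER of 0, while heavily overlapping ones push the EER towards 0.5.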

3.1.1 Algorithms

The algorithm description syntaxes used in the experiments are stated below. The 4SF one was taken from the OpenBR source code, the EigenFace one from the HelloWorld example, and FisherFace is simply an LDA version of EigenFace.

EigenFace: Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+CvtFloat+PCA(0.95):Dist(L2)

FisherFace: Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+CvtFloat+LDA(0.95):Dist(L2)

4SF: Open+Cvt(Gray)+Cascade(FrontalFace)+ASEFEyes+Affine(128,128,0.33,0.45)+(Grid(10,10)+SIFTDescriptor(12)+ByRow)/(Blur(1.1)+Gamma(0.2)+DoG(1,2)+ContrastEq(0.1,10)+LBP(1,2)+RectRegions(8,8,6,6)+Hist(59))+PCA(0.95)+Cat+Normalize(L2)+Dup(12)+RndSubspace(0.05,1)+LDA(0.98)+Cat+PCA(0.95)+Normalize(L1)+Quantize:NegativeLogPlusOne(ByteL1)

3.1.2 Enrolling and testing dataset

GT (the Georgia Tech face database) was chosen as the enrolling and testing photo dataset, as it provides a well-structured photo library of 50 persons with 15 photos each, showing frontal and/or tilted faces under controlled illumination and environments. In order to cover the aforementioned experiment dimensions, the usage allocation and selection logic for the 15 photos of each person is described in Table 3. A Python script was written to generate sigset XML files for each of the usage groups below; more implementation details are discussed in Section 4.

Usage group               # of photos per person   Selection logic
Testing image set         5                        random 5 photos from each person
Enrolling image set (10)  10                       the remaining 10 photos
Enrolling image set (3)   3                        random 3 from Enrolling image set (10)
Training                  15                       all photos

Table 3: Photo allocation of the enrolling and testing image sets of GT
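The per-person allocation in Table 3 can be reproduced with a seeded random split. The fragment below is an illustrative sketch of that selection logic, not the project's actual generate_xml_*.py script.

```python
import random

def split_photos(photos, seed=0):
    """Split one person's 15 photos into the Table 3 usage groups:
    5 random photos for testing, the remaining 10 as the enrolling
    set (10), and a random 3 of those 10 as the enrolling set (3)."""
    rng = random.Random(seed)            # seeded, so splits are repeatable
    shuffled = rng.sample(photos, len(photos))
    testing, enroll10 = shuffled[:5], shuffled[5:]
    enroll3 = rng.sample(enroll10, 3)
    return testing, enroll10, enroll3
```

The training group simply uses all 15 photos, so it needs no splitting here.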

3.1.3 Training dataset and models

As one of the testing goals was to find out whether using similar images for training and testing gives better performance, GT, which serves as the enrolling and testing dataset, was also used for training. In the beginning, the whole GT photo library of 750 (15x50) photos was used for training. However, after discovering that using identical images for training and testing gives a dramatic bias, a sort of 'cheating effect', for some algorithms, the usage allocation of the GT library was adjusted to make the testing and training images mutually exclusive. The updated usage allocation of GT is called GTNEW and is shown below; note that in this setting there is no longer an enrolling image set (10) group, which is explained in Section 3.2.

Usage group             | # of photos per person | Selection logic
Testing image set       | 3                      | 3 random photos from each person
Enrolling image set (3) | 3                      | another 3 exclusively random photos
Training                | 9                      | the remaining 9 photos

Table 4: Photo allocation of the enrolling and testing image sets of GTNEW

Feret is the other dataset chosen for training. To make the comparison between Feret and GT fairer, not all images from Feret (more than 10,000) were used but a selected subset of 750.

While GT and Feret are freely available, PSCO is not as easy to acquire; it only comes embedded in the standard pre­trained FaceRecognition model (4SF) from OpenBR. Therefore, in the test plan, PSCO appears only with 4SF (as the default pre­trained FaceRecognition model in OpenBR) and was used solely to compare against other models trained with 4SF, to understand the impact of training dataset size. In contrast, both GT and Feret were used to train all three algorithms, namely 4SF, EigenFace and FisherFace.

To summarise, the three algorithms were trained on three training datasets, namely GT, GTNEW and Feret, and therefore 9 (3x3) models were generated. Together with OpenBR’s default pre­trained FaceRecognition model, 10 different models were evaluated.

3.1.4 Running the test

After defining all the datasets for training, enrolling and testing and generating the corresponding sigset XML files, the 9 models were trained accordingly. With all the test materials ready, the test can be run, which basically consists of the steps below.

1. Compare the enrolling (or target) and testing datasets using one of the trained models.
2. Evaluate the score matrix generated in step 1.
3. Compute the EER based on the performance CSV file generated in step 2.

Each of the above steps has a Python script for the automated process. In steps 1 and 2, the -compare and -eval br commands are called from the scripts, while step 3 is pure Python. The performance CSV file generated by the -eval command gives the false acceptance rate (FAR) and false rejection rate (FRR) at different detection­error tradeoff (DET) operating points; the EER can then be derived from these numbers. The result of step 3 is a CSV file containing the EER of each testing set.
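Step 3 can be sketched as a small pure-Python routine: given parallel FAR and FRR columns from the performance CSV, the EER is approximated at the operating point where the two rates are closest. The function name and the simple nearest-point approximation are our own illustration, not the exact generate_eer.py logic (Section 4.3.1.5):

```python
def equal_error_rate(far, frr):
    """Approximate the EER from sampled FAR/FRR values.
    far and frr are parallel lists taken at the same thresholds;
    the EER lies where the two curves cross."""
    closest = min(zip(far, frr), key=lambda p: abs(p[0] - p[1]))
    return sum(closest) / 2.0   # midpoint of the closest FAR/FRR pair
```

With finer threshold sampling the midpoint converges to the true crossing point of the two curves.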

3.1.5 Controlled experiment

A control experiment was done using the Feret dataset for enrolling and testing instead of GT. Using the models trained earlier (the three algorithms trained with the Feret and GT datasets), 365 photos were selected for enrolling and 364 photos for testing. The GT dataset was used here instead of GTNEW because, in this setting, there was no conflict between the training and testing datasets. This control experiment was expected to yield results similar to the testing sets tested against GT.

3.2 Evaluation results

Figure 1 below shows the resulting EER of all the testing sets, with EER rounded to 2 decimal places. The testing sets cover the three algorithms, i.e. 4SF, EigenFace and FisherFace, each trained with the Feret, GT and GTNEW datasets and tested against the GT dataset with 3 and 10 images enrolled. Most of the keys in the figure are self­explanatory, but some are worth a brief clarification. The tilted x­axis labels denote the training dataset: GT represents the whole GT photo library with 750 photos, while GTNEW represents the selected 450 photos described in Section 3.1.3. The colours of the bars denote the enrolling and probing dataset: GT3 (3 images) and GT10 (10 images) refer to the testing sets with 3 and 10 pictures enrolled respectively. The testing sets with GTNEW were run later than the others and, by design, were only tested against 3 images.

Figure 1. EER results of all testing sets, color­grouped by number of enrolled images

* | * | GT3 vs GT10

For easier reference in the upcoming discussions, the following syntax is introduced to specify a particular testing set: <algorithm> | <training_dataset> | <enrolling_testing_dataset>.

First of all, dramatically small EER values were found for the testing sets that used GT to train the algorithms. These EER values were so unnaturally small that they indicated errors or bias in these testing sets. A reasonable explanation for this ‘excellent’ performance is that the testing photos overlapped with those used in training, which means the face recognition models could ‘remember’ the testing photos from training. The system could then ‘recall’ a face rather than recognise it as an unseen one. This test setting was considered invalid and these results should be discarded. Interestingly, this problem only affected 4SF and FisherFace but not EigenFace, probably because LDA is used in both 4SF and FisherFace. To fix this error, the GTNEW dataset was created, as mentioned in Section 3.1.3, to make sure the training and testing images do not overlap.

Number of enrolled images

Returning to the research question defined earlier, it was generally observed across all testing sets that GT3 and GT10 have very close, almost identical EER values (see Figure 1). In other words, having more photos in the enrolling database does not significantly improve the performance of the face recognition system. To narrow down the results, the upcoming discussion therefore focuses only on GT3 results. As mentioned in Section 3.1.3, this is also why only 3 photos, not 10, were allocated for enrolling in the GTNEW dataset.

Size of training dataset

Figure 2. EER results of GT3 testing sets, color­grouped by different training datasets

* | Feret vs GTNEW vs PSCO | GT3

Referring to the narrowed­down results of only the testing sets with 3 enrolled images (GT3), shown in Figure 2, the testing set 4SF | PSCO | GT3 (the blue bar above 4SF), trained with 16,000+ photos, outperforms 4SF | Feret | GT3 (the green bar above 4SF), trained with 750 photos, by 0.03 in EER. While this single comparison may not be representative enough to conclude that a bigger training dataset is always better, it does support the general understanding in machine learning that a model gets better when it is trained with more data.

Having similar images in training and testing

In contrast with the above argument about training dataset size, GTNEW had a much smaller training dataset of only 450 photos, compared to the 16,000+ photos of PSCO, yet it gave the best performance for every algorithm compared with the Feret­ and PSCO­trained models (see Figure 2). It (orange bar above 4SF) even outperforms the default face recognition model of OpenBR (4SF trained on PSCO, blue bar above 4SF), which has been shown to perform well [5]. This result supports the hypothesis that training a model with images similar to those it will encounter yields better performance.

Different algorithms

Figure 3. EER results of GT3 testing sets, color­grouped by different algorithms

4SF vs EigenFace vs FisherFace | * | GT3

Lastly, comparing the results of the different algorithms in Figure 3, 4SF (green bars) consistently gives the best results, with an EER of 0.18 versus 0.29 for EigenFace and 0.22 for FisherFace on the GTNEW training dataset. FisherFace was better than EigenFace when the model was trained on a to­be­expected dataset (GTNEW); otherwise EigenFace was better.

Controlled experiment

To provide more evidence for the above findings, a control experiment using Feret as the enrolling and testing dataset (instead of GT) was performed; the results are displayed in Figure 4. This set of experiments supports the findings discussed above, as listed below.

A bigger training dataset gives better performance, supported by:
4SF | GT vs PSCO | Feret (750 vs 16,000+ training images), EER: 0.43 vs 0.38

Training a model with to­be­expected images brings a significant improvement in performance, supported by:
4SF | GT vs Feret | Feret, EER: 0.43 vs 0.38
FisherFace | GT vs Feret | Feret, EER: 0.49 vs 0.47

4SF performs best, supported by:
4SF vs EigenFace vs FisherFace | Feret | Feret, EER: 0.38 vs 0.47 vs 0.47
4SF vs EigenFace vs FisherFace | GT | Feret, EER: 0.43 vs 0.47 vs 0.49

Figure 4. Controlled experiment: EER result of using Feret as testing and enrolling datasets

4. Implementation

This section describes in which languages and how the implementation was done, covering both the evaluation and the demonstrator. In general, all OpenBR­supported functions are accessed directly through the OpenBR command­line application ‘br’. A number of Python scripts were developed to automate the evaluation process and to provide the application demonstrator.

4.1 Installing OpenBR

The installation guide for different platforms is well­described on the OpenBR website: http://openbiometrics.org/doxygen/latest/installation.html.

As the pre­compiled releases did not work, OpenBR has to be built from source, following the guide step by step.

Throughout this project, the master branch of the git repository (https://github.com/biometrics/openbr), as of 21 November 2014, was used.

4.2 Downloading source codes

All other source code and files developed during this project can be found in the group git repository at https://dev.qu.tu­berlin.de/git/spf­group­3. This repository does not include any OpenBR sources, which are supposed to be downloaded and installed separately as in the previous step, i.e. Section 4.1.

4.2.1 Folder structure

The root folder contains all the Python scripts (.py) used in this project as well as the evaluation results: eer.csv, the raw data file containing the EER of all test sets, and eer.xls, a summarised and analysed version of the raw data file.

/csv contains all the performance metrics CSV files generated by -eval

/gals contains all the gallery .gal files generated using -enroll when running the test

/models contains all the model files that were trained and evaluated

/pdf [unused] contains the R programs for generating plots of the evaluation results

/plots [unused] contains some plots that were generated as PDFs during the early stage of evaluation

/scores contains all the similarity score matrices (.mtx) generated by the -compare command

/sigset contains the sigset XML file for training the Feret model (with the 750 selected photos)

/xml contains all the sigset XML files for enrolling and training during testing

4.3 Python implementation

4.3.1 Scripts for evaluation

The following Python scripts implement the features needed during evaluation.

4.3.1.1 generate_xml_*.py

These Python scripts generate sigset XML files containing image paths and identifiers for faces, which the evaluation requires. The following variables have to be set at the beginning of each file:

xmlpath: Path to save XML files

gtpath: Path to the folder containing the images

slash: The delimiter needed to describe folder structures (for Windows use "\\", for Linux use "/")

The script generate_xml_gt.py generates multiple XML files for the following purposes (as described in Table 3, GT dataset):


Enrolling, 10 pictures per person: 5 XML files for 5 different enrolling image sets; each set omits 10 of the 50 persons for cross­validation and contains 10 images from each of the selected 40 persons.

Enrolling, 3 pictures per person: 5 XML files for 5 different enrolling image sets, omitting 10 persons per set as above, with 3 images from each of the selected 40 persons.

Testing, 5 pictures per person: 5 XML files for 5 different testing image sets that do not overlap with the enrolling images, with 5 images from each of all 50 persons.

Training: 1 XML file containing all images from the GT­database
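For reference, sigset files follow the BEE XML convention used by OpenBR [9]. A minimal sketch of writing one is shown below; the helper name and the exact attribute layout are assumptions based on the MBGC file overview, not the literal content of generate_xml_gt.py:

```python
import os
import xml.etree.ElementTree as ET

def write_sigset(entries, path):
    """Write a minimal BEE-style sigset XML file.
    entries: list of (subject_id, image_path) pairs; one
    biometric-signature element per image, named after the subject."""
    root = ET.Element("biometric-signature-set")
    for subject, image in entries:
        sig = ET.SubElement(root, "biometric-signature", {"name": subject})
        ET.SubElement(sig, "presentation",
                      {"name": os.path.basename(image), "file-name": image})
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```

The resulting file can then be passed to br commands that accept a sigset as input.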

To overcome the ‘cheating’ problem mentioned in Sections 3.1.3 and 3.2, namely that the evaluation results were heavily influenced by using identical images for both training and enrolling, generate_xml_gt_new.py generates multiple XML files for the following modified purposes (as described in Table 4, GTNEW dataset):

Enrolling: 3 pictures per person, for all persons.
Training: 9 pictures per person, for all persons.
Testing: 3 pictures per person.

Altogether 3 XML files were generated, each containing images exclusive of the others, i.e. no overlapping pictures.

The script generate_xml_feret.py produces 2 XML files, containing 1000 pictures in total from the Feret library, for the following purposes:

The first 3 images per person (if there are 3 images per person) can be used for enrolling.

The next 3 images per person (if there are 6 images per person) can be used for testing.

4.3.1.2 generate_gal.py

The script generate_gal.py creates an OpenBR gallery file for each XML file in the folder xml_path for every model in the folder model_path and saves it in gal_path.
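The per-model, per-sigset loop can be sketched as follows. The br flags shown (-algorithm, -enroll) follow the OpenBR documentation, but the pairing and output-naming scheme here is illustrative rather than the exact generate_gal.py logic:

```python
import os

def build_enroll_commands(xml_dir, model_dir, gal_dir):
    """Build one 'br -enroll' command per (model, sigset) pair.
    The command lists can then be executed with subprocess.run."""
    cmds = []
    for model in sorted(os.listdir(model_dir)):
        for xml in sorted(os.listdir(xml_dir)):
            if not xml.endswith(".xml"):
                continue
            # gallery named after model and sigset, e.g. 4SF_GT_enroll_gt3.gal
            gal = os.path.join(gal_dir, "%s_%s.gal" % (model, xml[:-4]))
            cmds.append(["br", "-algorithm", os.path.join(model_dir, model),
                         "-enroll", os.path.join(xml_dir, xml), gal])
    return cmds
```

Separating command construction from execution makes the batch easy to inspect before the long-running enroll step.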

4.3.1.3 generate_scores.py

The script generate_scores.py compares all testing images, given by .gal files whose filenames contain the phrase “test”, against the corresponding .gal files whose filenames contain the phrase “enroll”. A similarity matrix .mtx file is created for each pair of .gal files for every model in the folder model_path. The binary .mtx files are stored in score_path.
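The test/enroll pairing by filename can be sketched like this; it is a simplified version of the matching described above, and the exact naming scheme is an assumption:

```python
import os

def pair_galleries(gal_dir):
    """Pair each gallery whose name contains 'test' with the
    gallery obtained by substituting 'enroll' for 'test'."""
    gals = sorted(os.listdir(gal_dir))
    pairs = []
    for gal in gals:
        if "test" in gal:
            partner = gal.replace("test", "enroll")
            if partner in gals:
                pairs.append((partner, gal))  # (enroll, test)
    return pairs
```

Each returned pair then corresponds to one br -compare invocation producing one .mtx file.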

4.3.1.4 generate_csv.py

The script generate_csv.py evaluates a similarity matrix file (.mtx) into a list of performance metrics (stored as .csv). In practice, it creates a text CSV file for every .mtx file in the folder score_path and saves it in the folder csv_path. The CSV files contain false acceptance and false rejection rates for each of the provided test cases.

4.3.1.5 generate_eer.py

The script generate_eer.py creates one single CSV file in the working directory, which contains the equal error rates of all performance CSV files created by generate_csv.py.

4.3.1.6 generate_model.bat

The script generate_model.bat trains multiple models using the OpenBR command -train. Model files were created for the EigenFace, 4SF and FisherFace algorithms. All these algorithms are applied to multiple enrolling and testing image sets, each specified by 2 sigset XML files, which contain pictures from different databases (GT and Feret) and different numbers of those pictures (1000, 450, 500).

4.3.2 The Smart Picture Frame demonstrator

The file "main.py" holds a demonstrator for the smart picture frame. It is capable of managing picture databases and searching for new faces by calling the OpenBR command­line application ‘br’. By default the program always uses the 4SF algorithm, which can be trained during use.

4.3.2.1 User manual

The program provides a command­line interface, which reads keyboard input after being called. The program's menu structure is described as follows. A default screen is displayed at the beginning and after finishing an operation, which looks like the listing below:

Current Model: "model" |to TRAIN or change MODEL press m

Current Gallery: "enrolgal" |to ENROLL persons press e

|to SEARCH for a person press s

|to QUIT press q

Listing 10: Default menu of the demonstrator

In all operating modes the user is asked for a single­letter input to navigate through the program, which has to be terminated with "Enter".
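The dispatch behind this menu can be sketched as a small read-eval loop; the handler names are placeholders, not the actual functions in main.py:

```python
def menu_loop(read=input, write=print):
    """Toy sketch of the demonstrator's main loop: read one
    letter, map it to a named action, quit on 'q'."""
    handlers = {"m": "change_model", "e": "enroll", "s": "search"}
    while True:
        choice = read().strip().lower()
        if choice == "q":
            return
        action = handlers.get(choice)
        write(action if action else "unknown option")
```

Injecting read and write as parameters makes the loop testable without a terminal.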

Training/ Changing Model (m):

As shown earlier, it is profitable to have the face recogniser trained on pictures similar to those that will be used later. In this mode it is possible to replace the model file with a pre­trained model or to train a new model. After pressing "m", the path to a trained model file can be entered, or the OpenBR default "FaceRecognition" pre­trained model can be used. When pressing "t", the user is asked to give a name for the new model, which will also be the filename of the model file saved in the working directory. After this the user has to specify a folder with images.

Enrolling (e):

In this mode it is possible to change the picture database, which is used as the target face database to query against. An existing gallery can simply be opened by pressing "g" and specifying the name of a .gal file. These files are generated when enrolling images and persist after closing the program. To generate a new .gal file, the user has to enter "x" or "f" and specify the path to an XML file or a folder containing the images. Afterwards the user is asked to give a name (and path) for the .gal file to be generated. Files containing multiple persons can be used.

Searching for pictures (s):

When entering "s", the user is asked to enter the name of a query image. The program then returns the file names of the 3 most similar images and the corresponding confidence values. When the query image contains multiple persons, there is an output for every face detected.
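Selecting the 3 most similar images from the similarity scores can be sketched as follows; the score container is hypothetical, since main.py reads the values from the result CSV file:

```python
def top_matches(scores, k=3):
    """Return the k gallery images with the highest similarity.
    scores: dict mapping gallery file name -> similarity value."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```

For a query with multiple detected faces, this selection would be applied once per face.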

Quitting (q):

When the letter "q" is pressed, the program stops.

4.3.2.2 Example input and output

The following example shows the input and output when a picture with 5 persons is compared against the enrolled library:


Figure 5: Example output of demonstrator

Figure 6: Test Image with modified aspect ratios


Figure 7: Suggested Images

4.3.2.3 Internal variables and file dependencies

The file main.py makes use of the following variables, which specify files that are required or saved while running the program.

working_dir: the path to the main directory, where all files are saved.

br_path: the path containing the OpenBR binary.

queryxml: file name of an XML file generated when searching for a new face; it contains the coordinates of the faces in a picture.

querygal: file name of a .gal file which contains the faces to be searched for; this file is generated during searching.

enrolgal: default path of the .gal file containing the image database to be searched; can be changed or regenerated when enrolling new pictures.

model: path to the default model file; if no model has been trained, it is suggested to use "FaceRecognition".

result: generated when searching for faces; path to a CSV file containing the similarity measures of all faces in an image against all pictures in the database.

5. Conclusion

In this project, the three algorithms EigenFace, FisherFace and 4SF were evaluated using the OpenBR framework. Two photo libraries, GT and Feret, were used for training and testing. 4SF once again proved to perform best among the evaluated algorithms. It was also found that training an algorithm such as 4SF or FisherFace with images similar to those it will encounter significantly improves performance, and training with more images also helps. Finally, because OpenBR has no concept of a person in its design, having more or fewer images in the enrolled photo database does not make a significant difference to the classification result.

For any face recognition system development, OpenBR is highly recommended as a framework due to its robustness. If it is approximately known what images the system will encounter after release, it is highly advisable to train a custom model using such to­be­expected images. Otherwise, the best model to use is the OpenBR default FaceRecognition model, as it gave fairly good performance. For training a model, a reasonably small number of images is approximately 1000. The functionality of the OpenBR framework was used to build a Smart Picture Frame demonstrator.

6. References

[1] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, Jan. 1991.

[2] P. N. Belhumeur, J. P. Hespanha, and D. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, Jul. 1997.

[3] T. Ahonen, A. Hadid, and M. Pietikäinen, “Face Recognition with Local Binary Patterns,” in Computer Vision - ECCV 2004, T. Pajdla and J. Matas, Eds. Springer Berlin Heidelberg, 2004, pp. 469–481.

[4] B. Klare, “Spectrally sampled structural subspace features (4SF),” Michigan State University Technical Report, MSU-CSE-11-16, 2011.

[5] J. C. Klontz, B. F. Klare, S. Klum, A. K. Jain, and M. J. Burge, “Open source biometric recognition,” in 2013 IEEE Sixth International Conference on Biometrics: Theory, Applications and Systems (BTAS), 2013, pp. 1–8.

[6] “OpenBR Publications”, OpenBR, [online] 2015, http://openbiometrics.org/publications.html (Accessed: 1 March 2015).

[7] “OpenBR Documentation”, OpenBR: Main Page, [online] 2015, http://openbiometrics.org/doxygen/latest/index.html (Accessed: 1 March 2015).

[8] “Open Source Biometric Recognition: Developer Forum”, Google Groups Forum, [online] 2015, https://groups.google.com/forum/#!forum/openbr-dev (Accessed: 1 March 2015).

[9] “MBGC File Overview”, [online] 2015, http://openbiometrics.org/doxygen/latest/MBGC_file_overview.pdf#page=9 (Accessed: 1 March 2015).

[10] “Biometrics Evaluation Environment”, OpenBR Related Pages, [online] 2015, http://openbiometrics.org/doxygen/latest/bee.html (Accessed: 1 March 2015).
