
Distributed Framework for Automatic Facial Mark Detection

Graduate Operating Systems-CSE60641Nisha Srinivas and Tao Xu

Department of Computer Science and Engineeringnsriniva, txu1@nd.edu


Introduction

• What is Biometrics?
– Face, iris, fingerprint, etc.
– Face is a popular biometric
• Non-invasive
– Identical twins have a high degree of facial similarity.
• Fine details on the face, such as facial marks, are used to distinguish between identical twins.
– Automatic facial mark detector: detects facial marks and extracts facial mark features.

[Figure: Different types of biometrics]


Automatic Facial Mark Detector

Pipeline: Convert Images → Face Contour Points → Crop Face Images → Detect Facial Marks

Each image is processed independently of the results from other images.

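The four pipeline stages above can be sketched as chained functions; since no image depends on another, the whole dataset is an embarrassingly parallel map. A minimal illustration (the function names and string outputs are placeholders, not the actual detector's API):

```python
# Hypothetical per-image pipeline; each stage's name mirrors a
# slide-deck stage, but the bodies are stand-ins.

def convert(image_path):
    # Convert the raw image into the detector's working format.
    return f"converted({image_path})"

def contour_points(converted):
    # Locate face contour points in the converted image.
    return f"contours({converted})"

def crop_face(converted, contours):
    # Crop the face region using the contour points.
    return f"face({converted},{contours})"

def detect_marks(face):
    # Detect facial marks and extract facial mark features.
    return f"marks({face})"

def process_image(image_path):
    c = convert(image_path)
    p = contour_points(c)
    f = crop_face(c, p)
    return detect_marks(f)

# Because no image depends on another, images can be processed in any
# order, or in parallel across machines:
results = [process_image(p) for p in ["img0.jpg", "img1.jpg"]]
```

This per-image independence is exactly what lets the distributed framework described later dispatch one image per worker.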

Objective

• Drawbacks of the Automatic Facial Mark Detector
– Slow, due to:
• The size of the dataset
• The size of each image in the dataset
• The long run time of the algorithms
• Sequential execution
• Objective: to design a distributed framework for the automatic facial mark detector
– To improve computation time
– To obtain scalability


Sequential Execution

Execution time: Te = N · tp

where tp = time to execute the facial mark detector for a single image, and N = number of images.


[Figure: Sequential pipeline — each input image passes through Conversion, Contour Points, Cropping, and FM Detection in turn.]

Proposed Approach: Distributed Framework

[Figure: Distributed pipeline — machines 1 through n each run the full Conversion → Contour Points → Cropping → FM Detection pipeline on a different image in parallel.]

Execution time: Te = tp

where tp = time to execute the facial mark detector for a single image.

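The two timing models in the slides (sequential Te = N·tp versus the ideal distributed Te = tp) can be compared directly. A minimal sketch, where tp = 60 s is an assumed per-image time, not a measured figure from the deck:

```python
def sequential_time(n_images, t_p):
    # T_e = N * t_p: images are processed one after another.
    return n_images * t_p

def distributed_time(n_images, n_workers, t_p):
    # With at least as many workers as images, all images run
    # concurrently and T_e = t_p. With fewer workers, images are
    # processed in batches (ceiling division over the worker count).
    return -(-n_images // n_workers) * t_p

N, tp = 206, 60  # N = 206 is the dataset size used in Experiment 1
print(sequential_time(N, tp))        # 12360 s sequentially
print(distributed_time(N, N, tp))    # 60 s: the ideal T_e = t_p case
print(distributed_time(N, 50, tp))   # 300 s: 5 batches over 50 workers
```

The model ignores scheduling and data-transfer overhead, which is why the measured speedups in the experiments fall short of this ideal.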

• Implementation
– A combination of Makeflow, Work Queue, and Condor
• Condor is a distributed environment that makes use of idle resources on remote computers.
• Work Queue is a fault-tolerant framework.
– Master/worker framework
– Manages Condor workers
• Makeflow
– A distributed computing abstraction
– Runs computations on Work Queue
– The computations have dependencies represented by a directed acyclic graph (DAG).

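Makeflow expresses its DAG as Make-style rules ("targets: sources" plus a tab-indented command); since the per-image rules share no files, Makeflow can hand them to Work Queue workers in parallel. A hedged sketch that generates such a workflow file — the image names and the `afmd` detector binary are illustrative placeholders, not the authors' actual file names:

```python
# Generate a Makeflow workflow with one independent rule per image.
images = ["twin000.jpg", "twin001.jpg"]

rules = []
for img in images:
    stem = img.rsplit(".", 1)[0]
    out = f"{stem}.marks"
    # Each rule declares its inputs (the image and the detector
    # executable) and one output, so the DAG has no cross-image edges.
    rules.append(f"{out}: {img} afmd\n\t./afmd {img} > {out}\n")

makeflow_text = "\n".join(rules)
print(makeflow_text)
```

Running `makeflow` on the generated file would then dispatch each rule as an independent task to Work Queue workers submitted to Condor.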

Flow Diagram


Performance Metrics

• We evaluate the performance of the distributed framework by computing the following metrics:
– Total execution time
– Node efficiency
– Scalability
• Weak scaling: the number of jobs is proportional to the number of images in the dataset.
• Strong scaling: the number of jobs is varied while the number of images in the dataset is kept constant.

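The first two metrics can be computed directly from measured run times. A minimal sketch, using hypothetical measurements (the numbers below are not results from the experiments):

```python
def speedup(t_seq, t_dist):
    # Speedup of the distributed run over the sequential baseline.
    return t_seq / t_dist

def node_efficiency(t_seq, t_dist, n_workers):
    # Fraction of ideal linear speedup actually achieved per worker;
    # 1.0 would mean every worker contributed fully.
    return speedup(t_seq, t_dist) / n_workers

# Hypothetical measurements for illustration only:
t_seq, t_dist, workers = 12000.0, 400.0, 50
print(round(speedup(t_seq, t_dist), 1))                   # 30.0
print(round(node_efficiency(t_seq, t_dist, workers), 2))  # 0.6
```

Efficiency below 1.0 reflects scheduling delays, data transfer, and uneven machine speeds in the Condor pool.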

Dataset and System Specifications

• Twin face images were collected at the Twins Days Festival in Twinsburg, Ohio, in August 2009.
• High-resolution images: 4310 rows × 2868 columns
• Total number of images: 800
– Dataset size based on attributes: [206, 200, 250, 144]
• Notre Dame Condor Pool: ~700 cores


Notre Dame Condor Pool

Machine            Arch   OpSys  MachineOwner  MachineGroup  State      LoadAvg  Memory
ccl00.cse.nd.edu   INTEL  LINUX  dthain        ccl           Unclaimed  0.190    1518
ccl01.cse.nd.edu   INTEL  LINUX  dthain        ccl           Unclaimed  0.150    1518


[Figure: Composition of the Notre Dame Condor Pool — machine groups (machines × cores) include ccl 8×1, cclsun 16×2, loco 32×2, sc0 32×2, netscale 16×2 and 1×32, cvrl 32×2, iss 44×2, compbio 1×8, and greenhouse, plus lab clusters Fitzpatrick 130, CSE 170, CHEG 25, EE 10, Nieu 20, and DeBart 10; the groups span MPI/Hadoop/Biometrics, storage research, network research, timeshared collaboration, personal workstations, and batch capacity.]

Makeflow was executed on cvrl.cse.nd.edu (Intel(R) Xeon(R) CPU X7460 @ 2.66GHz).

Experiments


• Experiment 1
– Compare the total execution time of the distributed framework with that of the sequential framework.
– Submit N jobs to Condor while keeping the dataset constant.
– Number of workers for the distributed framework = {10, 50, 100, 150, 200}
– Dataset size = 206
– Executed on the Notre Dame Condor Pool
• Experiment 2
– Evaluate node efficiency.
– Analyze the time taken for a single job to complete on a machine in the Notre Dame Condor Pool.
• Experiment 3
– Evaluate the scalability of the AFMD.
• Weak scaling: the number of jobs is proportional to the number of images in the dataset.
• Strong scaling: the number of jobs is varied while the number of images in the dataset is kept constant.


Experiment 1: Results

[Figure: Time (secs) vs. number of workers for the distributed framework.]

Experiment 2: Results

[Figure: Time (secs) and number of jobs executed per machine, plotted by machine name and number of workers.]

Experiment 3: Weak Scaling

[Figure: Time (secs) vs. number of workers under weak scaling.]

Conclusion

• Designed and implemented a distributed framework for the automatic facial mark detector.
• It was implemented using Makeflow, Work Queue, and Condor.
• The performance of the distributed framework is significantly better than that of the sequential implementation.

