
Computer Science for Fun

Facing up to faces

Issue 13

ISSN 1754-3657 (Print) ISSN 1754-3665 (Online)

How to get a head in robotics

Exploring face space

Pulling a face: truth and online dating


Welcome to cs4fn


The faces issue

Know someone’s face, and you know who they are. Sure, you know what they look like, but their face is also the closest thing you have to a window into their mind. A facial expression can tell you what they think and feel, even when sometimes they would rather keep it hidden. Faces help us socialise, bond with people and even find love.

In this issue you’ll find out how much faces help computer science. You’ll read about the face space in our brains, robotic faces that mimic human emotion and the tricks that faces play on us. There are stories about faces on the Internet, on clocks and in the Earth. All humans are natural face experts – even babies an hour old can recognise a face – but here’s your chance to find out even more.

Fingers feeling focus

People with visual impairments use interfaces to get online that don’t rely on sight. For example, instead of reading text printed on a screen, screen readers can turn text on a webpage into speech. That works OK, but sometimes the methods for images can be a bit indirect, like relying on wordy descriptions of what’s in a picture. But now researchers in the USA have come up with a way for people with visual impairment to get more out of images – they have a method for automatically turning photos of faces into raised pictures that people can feel with their fingers.

The computer program concerned, TactileFace, first needs a portrait photo. Once it detects where in the photo the face is, it sets about redrawing the various features. The tough bit is that people can’t feel as much detail with their fingers as they can see with eyes, so the raised portraits need to be simpler than the original photos. To do this the computer needs to know which important lines to keep, and which details it can discard. Luckily, it’s been trained to know the big landmarks of a face, like the eyes, nose and mouth. It can then detect line edges in the rest of the face, ending up with a face that’s got a reasonable amount of detail without being too much to handle.

Finally, the portrait goes to a printer, where it’s printed on to special paper that swells when it’s exposed to the heat of the print head. A person with a visual impairment can then feel the face to find out what a person looks like. The researchers tested their system with a range of people, and it works well – folk were good at locating facial features, could figure out the pose the picture was taken in, and could even pick out two different pictures of the same person from a group. Feels like they’ve found a good solution!



Changing the face of the Earth

We tend to think of a jigsaw these days as being a gentle pastime – or a psychotic serial killer in the Saw movies. But being able to put the pieces together in one particular jigsaw has helped us understand the planet we live on and how it will look in the future.

Put it together

Way back in the 1500s a Flemish map-maker called Abraham Ortelius noticed something strange about the shape of the world. If you took the shapes of the continents and moved them around, they all seemed to fit together. For example, South America kinda fits into the bend in the west side of Africa, just like jigsaw pieces. It wasn’t till the 1900s, when meteorologist Alfred Wegener added some significant extra layers to the puzzle, that scientists began to realise that all the land on the Earth had once been joined in a giant supercontinent called Pangaea. Areas of ancient rock and mineral belts matched across the continents, the scars and debris caused by ancient icy glaciation were shared, and the fossil sequences were the same. These tell-tale fingerprints all helped to support the idea that today’s separate continents had once been joined together.

As if!

At first, as is always the way when new ideas come along, scientists didn’t believe it. However as time passed, and more information became available, the idea of continental drift became accepted. Studies in paleomagnetism, the study of the magnetic properties of rock, showed either the continents had moved or there were two North Poles. Since, well, there’s only one North Pole, it looked like maybe the continents really had moved. It wasn’t an easy ride, though. Most scientists at the time worked in the northern hemisphere, where the evidence was less obvious. More importantly no one agreed on how the continents had drifted apart in the first place.

Sceptics argued that continents simply didn’t move – you just had to look at them. See, nothing moves! It was only when the clever idea of plate tectonics came around that the final piece was put into the jigsaw. Plate tectonics was the discovery that we live on a load of floating plates – rocky rafts that drift around on the hot, slowly churning rock of the Earth’s mantle. These rafts move when molten rock from below squeezes up through the cracks between them, pushing them apart.

Coming soon to a planet near you

That’s the past, but what about the future? Continental drift happens, but at such a slow pace that humans never see it – even over generations. However it is possible to see the shape of the world to come. Computer scientists have created programs that mimic the way the plates will move over millions of years. Our future will definitely have a different shape as our continents drift, but predicted changes in sea level will also delete sections of familiar countries. The scientists may be showing us the possible faces of our future, but some of what it will look like is up to us.


Enter face space

We’ll try and give you an idea of what face space is like. Start by imagining a room. On one side of your room is an open door and on the other is a window. Starting in the middle, as you move towards the window things get lighter, and as you move towards the door you get more cooling air wafting in from outside. At any point in between the two you have a specific combination of light and breeze. Now let’s teleport the room to face space, so that on one side of this new ‘room’ is a really narrow nose, on the other side is a really broad nose. Now take any nose – how about yours if you’re not using it at the moment? You can place your nose at a specific position between the two extreme noses so that it’s just the right distance from both. Right in the middle would be the average nose, created by combining all the noses you know. Your nose might be wider than average or narrower than average, and you could let people know exactly what your nose looks like by giving them the average nose measurements and an indication of how much broader or narrower your nose was from this. Knowing the average and how far you are from it in each direction are the signposts in face space.

Your face looks like...

There are all sorts of ways to describe a face, like nose width, eye separation and so on. You can build up a complicated face space by deciding what the important features are, measuring lots of them, finding the average and then putting the bits making up individual faces at different positions relative to the average for those features. Eventually, researchers began to see evidence that there might actually be an average face stored in our brains too. We tend to recognise distinctive faces more easily. They are further from the average so we get a stronger signal. An average face shape doesn’t seem to carry any particular information about a person’s identity – it’s what’s different from the average that lets us identify people. By using a series of cartoons of varying levels of extremeness researchers showed that certain cells in the brain became more active the more different the face presented is from the average face. Somewhere deep inside our brain, it seems, we have learned an average face to compare other faces against.

Computers – our GPS in face space

To create this face space in a computer we first take lots of pictures of faces. We can align the features like the eyes and nose by changing the size and the orientation of the images. A picture of a face is just a massive number of pixels, where each pixel is the colour value needed at that particular point in the image – for example the shiny white of the teeth. With hundreds of face pictures all aligned, it’s easy to calculate the average face. Just add the values for all the images together, pixel by pixel, and divide the final value for each pixel by the number of images used.
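To make that concrete, here is a minimal sketch of the averaging step in Python (our illustration, not the researchers' own code), assuming you already have a folder of aligned, equal-sized, greyscale face photos. The folder name is made up.

import numpy as np
from pathlib import Path
from PIL import Image

def average_face(folder):
    # Load every photo as greyscale, in floating point so the running sum stays exact.
    faces = [np.asarray(Image.open(p).convert("L"), dtype=np.float64)
             for p in sorted(Path(folder).glob("*.png"))]
    stack = np.stack(faces)        # shape: (number of faces, height, width)
    return stack.mean(axis=0)      # add pixel by pixel, divide by the number of images

# mean_face = average_face("aligned_faces")   # 'aligned_faces' is a hypothetical folder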

But there is more information we can extract. We know about the average so let’s take that away from our information and look at what’s left. We can use a method called ‘principal components analysis’ to find a set of images that we can add to the average face to let us recreate any of the original faces. In effect we convert our huge combined face into a series of independent images, which then become the staging posts in face space, starting from the average face at the centre. So any face has a place in the space, a certain distance and direction away from the central average face. As we move out into the space, we combine the different component images to recreate the face from our original set that lies at our current position. In essence, we can determine positions in face space much like GPS can on the Earth.
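Again as a sketch rather than the real system, principal components analysis is available off the shelf, for example in scikit-learn. Here X is assumed to hold one flattened, aligned face image per row, and the number of components is an arbitrary choice:

import numpy as np
from sklearn.decomposition import PCA

def build_face_space(X, n_components=50):
    # Learns the average face plus a set of component images (the staging posts).
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(X)   # each row: one face's position in face space
    return pca, coords

# Rebuilding a face = start from the average and mix in the weighted components.
# pca, coords = build_face_space(X)
# rebuilt_first_face = pca.inverse_transform(coords[:1])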

Girl face vs boy face

That’s not all we can do. We can turn boys into girls! We can take a load of female faces, find the average and the staging post faces, and we can do the same with male faces. For any particular male face we can calculate its distance and direction from the average male face at the centre. Now, imagine a line on a map joining the starting place to the finish position. We can take the line we’ve drawn in the male face space and draw the same one in the female face space. Anchor its starting point at the average female face and see what female face it ends at. This will be the closest female equivalent of the male face. By moving in face space we can change the gender of faces.

Enter the antiface

We can also change the direction of our line in face space so it points backwards. If you follow this line all the way through the centre, then continue for the same distance out the other side, you’ll end up at the antiface: the face that is essentially the opposite of the original face. People rate these antifaces as being more dissimilar from the original than randomly created faces. It seems our brain knows its way around the face space we create in a computer, which might mean we keep a face space in our heads too.
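Both the gender swap and the antiface boil down to simple vector arithmetic on face-space positions. A rough sketch, where the variable names are placeholders for coordinates produced by something like the PCA sketch above:

import numpy as np

def closest_female_equivalent(male_face, male_coords, female_coords):
    # The 'line on the map': how this face differs from the average male face...
    offset = male_face - male_coords.mean(axis=0)
    # ...drawn again, starting from the average female face.
    return female_coords.mean(axis=0) + offset

def antiface(face, all_coords):
    average = all_coords.mean(axis=0)
    # Same distance from the average, but in exactly the opposite direction.
    return average - (face - average)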


Go exploring

Space is the stuff all around us – up, down, back, front, left, right. Wherever we go, we all move through space. Now imagine instead that the space you inhabit isn’t filled with space but is filled with faces. Welcome to the strange world of face space.


Facing forward

You can probably see that recognising faces is a very complex job for our brains to do. Fortunately, it looks as though we have particular parts of our brains that specialise in helping us perceive faces. By building computer software that, as with face spaces, mimics the way our brains understand faces, we can start to explore and test our understanding of them. Then these discoveries can help us build new technologies, like robots that can recognise and express emotions (see pages 6 and 18). That should give us something to smile about in the future.


How to get a head in robotics

If humans are ever to get to like and live with robots we need to understand each other. One of the ways that people let others know how they are feeling is through the expressions on their faces. A smile or a frown on someone’s face tells us something about how they are feeling and how they are likely to react. Some scientists think it might be possible for robots to express feelings this way too, but understanding how a robot can usefully express its ‘emotions’ (what its internal computer program is processing and planning to do next) is still in its infancy. A group of researchers in Poland, at Wroclaw University of Technology, have come up with a clever new design for a robot head that could help a computer show its feelings. It’s inspired by the Teenage Mutant Ninja Turtles cartoon and movie series.

The real Teenage Mutant Ninja Turtle

Their turtle-inspired robotic head, called EMYS (which stands for EMotive headY System), is cleverly also the name of a European pond turtle, Emys orbicularis. Taking his inspiration from cartoons, the project’s principal ‘head’ designer Jan Kedzierski created a mechanical marvel that can convey a whole range of different emotions by tilting a pair of movable discs, one of which contains highly flexible eyes and eyebrows.

Eye see

The lower disc imitates the movements of the human lower jaw, while the upper disc can mimic raising the eyebrows and wrinkling the forehead. There are eyelids and eyebrows linked to each eye.


Have a look at your face in the mirror, then try pulling some expressions like sadness and anger. In particular look at what these do to your eyes. In the robot, as in humans, the eyelids can move to cover the eye. This helps in the expression of emotions like sadness or anger, as your mirror experiment probably showed.

Pop eye

But then things get freaky and fun. Following the best traditions of cartoons, when EMYS is ‘surprised’ the robot’s eyes can shoot out to a distance of more than 10 centimetres! This well-known ‘eyes out on stalks’ cartoon technique, which deliberately over-exaggerates how people’s eyes widen and stare when they are startled, is something we instinctively understand even though our eyes don’t really do this. It makes use of the fact that cartoons take the real world to extremes, and audiences understand and are entertained by this sort of comical exaggeration. In fact it’s been shown that people are faster at recognising cartoons of people than recognising the un-exaggerated original.

High tech head builder

The mechanical internals of EMYS consist of lightweight aluminium, while the covering external elements, such as the eyes and discs, are made of lightweight plastic using 3D rapid prototyping technology. This technology allows a design on the computer to be ‘printed’ in plastic in three dimensions. The design in the computer is first converted into a stack of thin slices. Each slice of the design, from the bottom up, individually oozes out of a printer and on to the slice underneath, so layer-by-layer the design in the computer becomes a plastic reality, ready for use.

Facing the future

A ‘gesture generator’ computer program controls the way the head behaves. Expressions like ‘sad’ and ‘surprised’ are broken down into a series of simple commands to the high-speed motors, moving the various lightweight parts of the face. In this way EMYS can behave in an amazingly fluid way – its eyes can ‘blink’, its neck can turn to follow a person’s face or look around. EMYS can even shake or nod its head.
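The article doesn’t describe EMYS’s actual software, but a gesture generator of this kind can be sketched as a lookup table that breaks each expression into timed motor commands. Every motor name, position and timing below is invented for illustration:

import time

# Hypothetical commands: (motor, target position from 0.0 to 1.0, seconds to move).
EXPRESSIONS = {
    "sad":       [("eyelids", 0.6, 0.8), ("eyebrows", 0.2, 0.8), ("lower_disc", 0.3, 1.0)],
    "surprised": [("eyelids", 0.0, 0.2), ("eyebrows", 1.0, 0.2), ("eye_stalks", 1.0, 0.3)],
}

def send_to_motor(motor, position, seconds):
    # Stand-in for a real motor controller: here we just print and wait.
    print(f"move {motor} to {position} over {seconds}s")
    time.sleep(seconds)

def show_expression(name):
    for motor, position, seconds in EXPRESSIONS[name]:
        send_to_motor(motor, position, seconds)

# show_expression("surprised")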

EMYS is being used on the Polish group’s social robot FLASH (FLexible Autonomous Social Helper) and also with other robot bodies as part of the LIREC project (www.lirec.eu). This big project explores the question of how robot companions could interact with humans, and helps find ways for robots to usefully show their ‘emotions’.

Do try this at home

In this issue, there is a chance for you to program an EMYS-like robot. Follow the instructions on the Emotion Machine in the centre of the magazine and build your own EMYS. By selecting a series of different commands in the Emotion Engine boxes, the expression on EMYS’s face will change. How many different expressions can you create? What are the instructions you need to send to the face for a particular expression? What emotion do you think that expression looks like – how would you name it? What would you expect the robot to be ‘feeling’ if it pulled that face?

Go further

Why not draw your own sliders, with different eye shapes, mouth shapes and so on. Explore and experiment! That’s what computer scientists do.


Pulling a face

How close is your online self to your real self? When you tell people about yourself online, on social networking sites like Facebook, you’ve got the luxury of being able to edit your life a bit. You can choose the profile picture you look best in, and you can write status updates that make you seem as interesting, funny and exciting as possible. Wanting to present yourself in the best light is pretty natural, and all of us do it at least a little bit. Even we here at cs4fn aren’t immune. For example, right now we’re tempted to look clever by pointing out that there’s a line from Shakespeare about this – “God has given you one face, and you make yourself another”. But when does this impulse go too far? Is there a line where a little self-editing becomes more like lying?

Do you match up?

One place where this question becomes important is on online dating sites. On the one hand, everyone is there to get dates, so it’s important to make yourself look as good as possible. On the other hand, if you do find a date you’re going to meet each other in person. That means any big differences between your profile and the real-life you will be rumbled. There are good reasons to lie, and other good reasons to tell the truth.

When researchers find a situation like this – where there are big incentives to do two completely opposite things – they know they’ve found a gold mine. Jeffrey Hancock and Catalina Toma, two researchers from Cornell University in the USA, wanted to see what people eventually choose to do, so they decided to find out whether people’s online dating profile pictures matched their real-life appearance. To do it they needed to get some people round to their place. Well, their lab anyway.

Here’s what the researchers did: they found some people with dating profiles online, who all agreed to visit a psychology lab in New York City. When the test subjects got there they were shown their own profile from a dating website, and asked to say how accurate they thought their own profile picture was. Then they had a second picture taken in the lab, in the same pose as their online photo. Later on, independent judges, who had never met the volunteers, compared the two photos and rated the accuracy of the online version. What they found was: people are tricky.

Rate my date

When the volunteers were asked to rate the accuracy of their own profile photos, they tended to say the photos were very accurate.

The independent judges didn’t agree. The average score was closer to ‘barely accurate’. Almost a third of the pictures were found to be downright inaccurate. There were differences between the genders, too: women tended to have more inaccurate pictures than men (possibly because they face more social pressure about their appearance), and the judges tended to see different inaccuracies in men’s and women’s photos. The differences that made judges think women’s photos were inaccurate were related to their hair, their weight and (oddly) their teeth, but judges took notice when men’s online photos made them look younger or less bald.

So when faced with a dilemma between looking more attractive and being more true-to-life, it seems people are more likely to choose attractiveness. But when does good presentation become fakeness? For that matter, when does being true-to-life become, well, just sloppy? The more you think about this, the more you realise just how fine a line it can be, and it seems harder to judge the people who try and make themselves look hotter online than they are in real life. On the other hand, this experiment seems to say that even if the line is fine, it does exist, and you can be caught out if you try and push the fib too far.


Turn the page for some special cut-out experiments!


The robot dog illusion

Build this robot dog's head and watch it pop into 3D life! This activity is based on an optical illusion where the brain is fooled into thinking that a hollow face is actually popping out towards you. It's also inspired by the LIREC project (www.lirec.eu), which is trying to discover how to make robots as friendly as a real-life pooch.

Putting it together

Punch out the cartoon. By the end of these steps, you'll have made a face that bends inwards, rather than pointing out like a normal face. Even though the dog's face is hollow, you can make your brain see it coming out towards you!

Make a valley fold on the blue, purple and yellow lines, and a mountain fold on the orange and red lines.

Do a valley fold on all the tabs.

Bring the green lines together so they meet. Tape or glue the tabs so they wrap round the back of the cartoon, joining the pieces together.

Grab the neck where it says 'hold here'. Hold the face upright in front of you, shut one eye, and slowly tilt the face from side to side. Eventually the dog’s nose should pop out at you. It may take some time – try different angles or switching which eye you’ve closed.

How it works

This illusion works because we don't normally see faces that bend inwards. When our brains try to make sense of the information, we end up seeing a normal 3D face. In fact, our brains treat lots of concave objects the same way.


www.cs4fn.org/faces/


A valley fold looks like this: [diagram]. A mountain fold looks like this: [diagram].


The Emotion Machine

Try programming this robot face!

Punch out along the dotted lines, so you've got the emotion machine and face with some empty slots in them, plus three strips to slide through the slots. Weave the strips through the slots for the eyebrows, eyes and mouth. Get programming! Moving the strips to different letters will give you different expressions.

Things to try

What combination of letters makes the robot look happy? Can you come up with different expressions for mild happiness and utter joy? How about a series of combinations to make the robot look happy, surprised, then sad?

Create your own new emotions by drawing different eyebrows, eyes and mouths. What expressions would the robot need to do if it were going to be someone's friend?

Our robot face is based on a real design used in the LIREC research project. To see the real one in action go to www.lirec.eu.

[Cut-out artwork: three slider strips (1, 2, 3) for the eyebrows, eyes and mouth, each marked with letter positions A to E.]

www.cs4fn.org/faces/


Can you trust a smile?

How can you tell if someone looks trustworthy? Could it have anything to do with their facial expression? Some new research suggests that people are less likely to trust someone if their smile looks fake. Of course, that seems like common sense – you’d never think to yourself ‘wow, what a phoney’ and then decide to trust someone anyway. But we’re talking about very subtle clues here. The kind of thing that might only produce a bit of a gut feeling, or you might never be conscious of at all.

To do this experiment, researchers at Cardiff University told volunteers to pick someone to play a trust game with. The scientists told the volunteers to make their choice based on a short video of each person smiling – but they didn’t know the scientists could control certain aspects of each smile, and could make some smiles look more genuine than others.

Here’s where things get very subtle: it turns out that genuine smiles take longer to build up to, and come back from, than fake smiles. For a real smile, your facial muscles begin to move, eventually forming the grin we’re all familiar with, after which your face gradually returns to normal. When people try to fake a smile, their face tends to snap into the grin almost immediately and hold it there for an unnaturally long time. Suddenly the smile disappears and their face goes back to normal. In short, fakers concentrate just on making the ‘smiley face’, but the fake emotion snaps on and off like a switch. A real smile washes over the face like a wave.

The researchers used computer graphics to control the timing of each stage of a smile. In some videos, the smile built up and diminished the way real smiles tend to; in others it snapped into and out of existence in a more fake way. Amazingly, the differences amounted to less than a second for many of the stages. Could the volunteers tell the difference with such a difficult clue? It seems they could. The volunteers were less likely to pick the people with less authentic smiles. They didn’t always spot the phoney, and some fake smilers still ended up being trusted enough to play the game. Still, it suggests that we as humans have evolved some pretty sophisticated ways of catching cheaters if all it can take to give you away is a smile that’s too sudden. It’s a lesson to trust your gut feelings: those feelings are often a sign that your brain is adding together a lot of tiny but important clues.
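We don’t have the researchers’ animation code, but the timing idea is easy to sketch: a smile-intensity curve sampled over time, where the only thing that changes between ‘genuine’ and ‘fake’ is how long the build-up and fade-out take. The timings below are made up for illustration:

import numpy as np

def smile_curve(onset, hold, offset, fps=25):
    # Smile intensity from 0 (neutral) to 1 (full grin), one value per video frame.
    rise = np.linspace(0.0, 1.0, int(onset * fps))
    apex = np.ones(int(hold * fps))
    fall = np.linspace(1.0, 0.0, int(offset * fps))
    return np.concatenate([rise, apex, fall])

genuine = smile_curve(onset=0.5, hold=1.0, offset=0.7)   # washes over the face like a wave
fake = smile_curve(onset=0.1, hold=2.0, offset=0.1)      # snaps on, holds too long, snaps off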

Robot street smarts?

In the future, artificial intelligence might have to learn this trick too. Imagine a robot interacting with a human – the robot would need to be able to pick up the clues if the human was trying to trick them! We wouldn’t want robots falling for confidence tricksters, now would we?


Something special in the brain

On the face of it faces are easy. We all recognise friends and family, and we can tell how they are feeling by the look on their face. But if we look inside our brains we find that the whole process of understanding faces, which scientists call face perception, is far from easy. It’s also proving hard to work out whether our brains treat faces as a special case or not.

Are there places for faces?

Our brains seem to work particularly hard at recognising and processing faces. That’s perhaps not

surprising as we are social animals. We live in groups and faces are the way we often signal our intentions – if I look angry avoid annoying me – or identify who we are interacting with. Inside our skulls, at certain spots in our brains, are millions of neurons that seem to respond to faces and do the work for us. These areas, landmarks on the wiggles on the surface of our brain, occur mostly on the right hand side. But how do we know these places take care of face perception? Well, we can use scanners that measure the amount of blood going to particular brain areas when doing particular things. When we look at faces certain areas light up on the scans, flooding with more blood as they crack on with the face processing business. Some experiments also suggest that the face processing activity differs between men and women: in men the greatest activity happens on the right side of the brain and for women it’s the left. Why? That’s yet to be discovered.

Faces vs things

While some think that those special parts of the brain only process faces, others argue that a face is just another sort of ‘thing’ to process. The reason, they say, for the increase of brain activity for faces in particular is that faces are hard. Most faces on the whole are similar to one another, and it’s far more difficult to tell the difference between your mates’ faces than, say, the difference between their houses. So it would be no surprise that it takes more effort, so more blood to the brain, and thus those areas would light up in a brain scan. Some experiments have, in fact, shown that the same brain areas are active when we try to tell the difference between types of cars or types of birds, so perhaps faces aren’t that special after all?

Baby look at that

Using an experimental method called ‘preferential looking’ researchers have shown that babies spend more time looking at faces than at other things. The experiment is simple: on the right side of a screen is a face, and on the left side,

all the parts of a face in a jumbled order. Video recordings show that babies (even some less than an hour old!) tend to spend longer looking at the face than the jumbled parts. Both pictures contain the same information, just presented in a different way. That suggests faces do have a special place in our brains after all!

So has the notion of special brain processing for faces had its chips? The jury is still out.


That illusionary smile

Do illusions tell us anything about how our brain processes faces?

In everyday life we encounter faces that are mostly in the right order: eyes above nose, nose above mouth. Researchers call this a configuration, and our brains seem to understand that a face is there even if it’s fairly low in details. A good example is the smilies (aka emoticons) used in text messages. There is very little information in :-) or :-( but we still see a happy or sad face. That’s because the configuration (eyes, nose, mouth) still looks like a face even if it’s on its side. But what happens when you turn a face upside down? Our brains still recognise it as a face, but we can do strange things to it and our brains won’t notice. The Thatcher Illusion, named after the famous UK prime minister, is a

classic example. Two upside-down faces are put next to one another. They both look like normal faces, but when you turn them both the right way up, you see that one picture has had the eyes and mouth turned upside down. It now looks grotesque and obvious, but you didn’t notice this when the pictures were inverted. Why? Because your brain has problems with inverted faces. You don’t see many of them in daily life, so all it can manage to do is recognise all the parts of a face in roughly the right upside down order. The fine detail – the fact that the eyes and mouth are the wrong way – is lost.

Facing up to the potato

Another illusion with faces comes about when you take a look inside a plastic Halloween mask. They are made by

stretching plastic over a mould of a person’s face. Looking at the mask from the front it’s clearly a face protruding out at you. Now take a look behind. It starts out looking hollow, but if you look at it for a bit, perhaps closing one eye, it starts to look like a face peeking out at you too. Researchers have argued that this is because we don’t normally experience faces that curve inwards, only faces that point outwards, so given the same information our brains will see a hollow face as a normal face. Perhaps this is a special face thing? Well, perhaps not. You can take any shape – for example, a potato – and make a plaster cast of it. Looking into this cast it’s hollow, but your brain sees the potato shape bulging out, just like the real thing.


Build your own hollow face! See the robot dog illusion in the centre of the magazine.


The digital age

Whether you’re young or old (or somewhere in between), computer science can tell you something about faces. Take a look at these stories on faces and age.

A glimpse of the future

Predicting what a person will look like when they’re older is tricky. At the moment it often depends on the imagination of a single person, whether it’s a sketch artist helping to solve a missing person case, or a makeup artist trying to make someone look older. A PhD computer science student in Canada has designed a program to automatically age people’s appearances, by applying rules for how faces change as we get older.

The student, Khoa Luu, actually needed to come up with two sets of rules. In the beginning of our life, when we’re still growing and developing, our faces change their actual structure: they get longer and wider. But once we’ve finished growing, for the rest of our life the changes are all in the soft tissues around the face. Wrinkles appear, and the facial muscles aren’t quite so toned.

Having a set of general rules for how everybody ages is a good start. But a prediction will be more accurate if you can use images from your subject’s family too. It’s a more sophisticated take on the old rule of thumb that says you can tell what someone will look like when they’re older by looking at their parents. So Khoa put together a database of photographs of family members, all from different stages of their lives. It doesn’t even have to take information from your parents. For example, by comparing how your siblings looked years ago with how they look now, the computer program can get clues about how you will look as you get older.

In the future Khoa’s system could help police when searching for a missing person or a fugitive, or it could even help cosmetics companies design anti-ageing products. Computer science is all about the future, but who knew it could predict your future face too?


You’ll never forget a face

When you’re young you might wonder what there could possibly be to look forward to at age 30. Boredom and decrepitude? We’ll be honest: maybe. But you can at least count on being at the peak of your ability to recognise faces. Researchers at Harvard and Dartmouth colleges in the USA have found that while most of our mental abilities peak in our early twenties, the ability to recognise and remember faces takes another decade to mature.

The researchers used an internet-based face memory test, completed by 44,000 volunteers. For most of the mental tasks, like remembering names, people around the age of 23 or 24 scored the highest. That is pretty consistent with other findings that show people are generally at their brainy best in their early twenties. Things were different when volunteers got to the face recognition test, in which volunteers tried to recognise and remember a lot of computer-generated faces. There, people’s abilities got much better between 10 and 20 years old, then continued to get better bit-by-bit throughout the next decade. The group who scored highest on this bit of the test were between 30 and 34 years old. After that, scores gradually declined. People who were 65 years old were about as good at recognising faces as 16 year-olds.

Researchers aren’t really sure why face recognition is at its best almost ten years after our other mental abilities. It could be that faces are so complex that our brain needs some extra time to practise. So if you’ve yet to hit the big three-oh, settle in and wait – your best face time is yet to come.

Turn back the clock

Now that you know how computer scientists can make someone look older (see ‘A glimpse of the future’), how about making someone look younger? Turns out psychologists can help us there. Some researchers at Jena University in Germany have found an easy way to make yourself look younger than your years – just hang out with a bunch of older people.

Their experiment hinges on the idea that we average out the faces we see, and judge other faces based on that average. The researchers got a group of people to concentrate on a series of pictures of elderly people on a computer screen. Then, when asked to guess the age of a middle-aged person, the volunteers guessed substantially younger than the person’s real age. The effect was even more pronounced when the volunteers were asked to guess the age of someone of the opposite sex.

For anyone who’s ever wished they looked older rather than younger, there’s good news: this effect goes the other way too. After looking at pictures of younger people, the volunteers thought the middle-aged person looked older than they really were. There’s no way to tell how long the effect lasts, but the researchers helpfully point out that the age of the people next to you has an important effect on the age you’re perceived as. So if you’re a celebrity looking for an entourage the scientific advice is clear: make sure they’re the right age to make you look your best.

The tests the researchers used are all available to try at www.testmybrain.org. You can try tests on your memory, your ability to read emotions, and even whether your taste in beauty is the same as other people’s! Fun to try, and you’ll be helping science too.


Your future check mate

Robots that play games with you need to be able to tell how you feel. When you’re playing with a friend they know what’s on your mind, no matter if you’re winning or losing. The expressions we make on our faces are a part of the social glue we call friendship. So if we want to build robots that can be our friends, they need to be able to read our faces.


Tick-tock, a magic clock

In this quick magical effect you can predict the future and also read your friend’s mind at the same time. All you need is a pack of playing cards and you too can do magic with a clock face. Find out how in the magazine+ section of our website at www.cs4fn.org, if you have the time. As a bonus, find out how this trick can save time for nurses!

The main features

People’s faces are interesting. Over the inflexible bone of our skull is stretched a layer of skin and muscles that make us look like the people we are. We can wrinkle our noses, we can flex an eyebrow or two, we can smile or frown. Our faces are wonderfully flexible. But this amazing flexibility is a real problem for computers. When is a smile a smile? What did that narrowing of the eyes really mean? To work with us, computers need to be able to give meaning to the subtle shapes we create with our faces. Researchers in the UK, Portugal and Poland working on a joint research project called LIREC have had to think of new ways to understand the expressions on faces. That’s because they are developing a robot companion to help school kids learn to play chess.

Your checkmate and its secrets

The body of the robot is called the iCat. Shaped like a cartoon cat, it can make sounds, and move its mouth and eyes to create expressions that the other human player can understand. But how does the robot know what face to pull? It needs to understand how you are feeling about the game. To do this the robot uses lots of different clues: it can detect how often you crouch over the board, which tells the robot how interested you are in what’s happening. Perhaps most importantly, though, the iCat has learned to recognise the expressions on children’s faces, which is particularly tough for a computer.

Kids vs adults

Adults in social situations tend to make fairly strong expressions with their faces. It’s often clear when an adult means to smile, but kids’ expressions tend to be more subtle. So the researchers recorded hours of video at a local kids’ chess club and went through it by hand, deciding when a smile or a frown was present. Then they noted the situations where these expressions occurred. Armed with the videos, they were able to train up a computer program to recognise how the various parts of the face moved when the kids were feeling in a particular mood.

Check your learning by example

Here’s how to train a computer to recognise a smile. The researchers gave the computer around 500 of the videos, and programmed it to automatically recognise where a face was and notice how the important parts of the face moved between a neutral expression and a smile. Each time someone in the videos smiled, it was up to the computer to notice. If the computer got it right, great! If it missed the smile, the program used this error to correct itself so it got it right in future. Just as though it were in school, it learned by example and by having a teacher telling it when it was right and wrong. After its schooling, the computer was able to recognise new smiles it hadn’t seen before at about 90% accuracy – it had learned what the smile on a young child’s face looks like.
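The article only outlines the training loop, but the learn-from-mistakes idea looks roughly like this perceptron-style sketch. The features (how far parts of the face have moved from a neutral frame) and the labels (a human’s ‘smile’/‘no smile’ decisions) are assumptions for illustration, not the LIREC team’s actual setup:

import numpy as np

def train_smile_detector(features, labels, passes=20, rate=0.1):
    # features: one row of face-movement measurements per video frame.
    # labels: 1 where the human labeller marked a smile, 0 otherwise.
    rng = np.random.default_rng(0)
    weights = rng.normal(scale=0.01, size=features.shape[1])
    bias = 0.0
    for _ in range(passes):
        for x, target in zip(features, labels):
            guess = 1 if x @ weights + bias > 0 else 0
            error = target - guess       # the 'teacher' saying right or wrong
            # Only a wrong guess changes anything: the error nudges the detector.
            weights += rate * error * x
            bias += rate * error
    return weights, bias

def looks_like_a_smile(x, weights, bias):
    return x @ weights + bias > 0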

This face reading ability was then combined with information on the state of the game, giving the robot the ability to ‘understand’ what the human player was feeling. It could constantly check the face of its human companion and make sure that it was playing at an appropriate level to help them improve their game.

Using this technology, based on our understanding of human faces, the companion robot became a checking mate rather than just getting checkmate!

Playing for real

How can the iCat computer play on a real chess set when most play on a screen? Partly it’s because the chess set contains a secret: there is a small microchip in each piece that lets the robot know how it’s being moved on the board. The robot also has a state-of-the-art chess program installed, so it can play an easy or a more challenging game as it sees fit. It then just tells you what move it wants to make.


Back (page) to front

In this edition we have explored the science behind backwards and upside down faces, but it’s not just faces that go funny when we look at them in different ways.

You can da Vinci code

For years, the Renaissance scientist and artist Leonardo da Vinci kept his journals secret using a simple code: he wrote all his letters and numbers backward. You can train your brain to learn his secret code too. All it needs is paper, a mirror and practice. Start by writing out all the letters of the alphabet and the numbers in your best handwriting. Then take a mirror and use it to reflect what you have written. Copy all the letters as seen in the reflection on to another sheet of paper, and when that’s done put the mirror away. Now take the sheet with your mirror copies and start to duplicate these on another sheet of paper. You will probably find that the capital Q, lower-case b, d, j, k and q, and the numbers 2, 3, 6 and 9 seem to be the most difficult, but keep at it. After a bit of practice in duplicating the shapes, get the mirror out again. This time you’re going to try and draw those letters and numbers so that they look right in the mirror. Get a bit of paper and stick it to the wall so you can see it in the mirror. Now swing your arms and body and write the letters and numbers on the paper so that they look the right way round in the mirror. Keep practising. After a while your brain will learn the body and hand movements (called motor memory) for mirror writing in the same way you learned to write the first time round. You can then code like da Vinci.

Motto: backward or front, writing is important

Sounds backwards?

It’s been shown that when people listen to recordings of piano music played backwards they believe the sounds are played on an organ. This is because when played backwards, piano tones start quietly and grow in volume. This type of sound, we have learned, is the way an organ sounds. Our brains’ preconceptions are fooled and so we make a mistake. Playing music backwards was something of a fashion in days gone by. In the fifteenth century it was fashionable to create palindromic canons – pieces of music containing two melodies. One melody was the notes of a tune played forward and the other, called the counterpoint, was the same notes but in the reverse order. Beethoven, Bach and Haydn all got into this mixing of musical symmetry. Mozart once helpfully composed a canon in which the second melody was the same as the first, but backward and upside down. That meant it could be read by the players from opposite sides of the music sheet. What a thoughtful guy, that Mozart.

Motto: music is just a pattern however you look at it

Upside down weather

The sun heats the land and the land heats the air above it, so as you go up things usually get cooler. However from time to time things get a little upside down. This is called a temperature inversion: a layer of warmer air sits on top of the denser, colder air below and acts like a lid. When it does, smog from car fumes can be trapped near the ground, leading to an increase in health problems. If the trapped air contains a lot of moisture, an inversion can lead to violent thunderstorms when the humid air finally bursts up through the lid. Understanding how our weather works in order to make the daily weather forecast needs complex computer programs to calculate future events. Being able to look at the weather from lots of different angles in a computer helps us plan for a rainy day.

Motto: Whether the weather be cold, or whether the weather be hot, we’ll weather the weather, whatever the weather, whether we like it or not!

For a full list of our university partners see www.cs4fn.org

cs4fn is supported by industry including Google, Microsoft and ARM.

Any section of this publication is available upon request in accessible formats (large print, audio, etc.). For further information and assistance, please contact: Diversity Specialist, [email protected], 020 7882 5585