
Dolphin Cognition

Psych 1090

Lecture 12

Now, we’ve talked a lot about birds, nonhuman primates, even

insects…

but barely discussed cetaceans,

which are also large-brained, long-lived creatures

and for which lots of data exist

So we’re going to do a brief survey of these creatures;

we could take many lectures and barely get through the data

We’ll start with some of the two-way communication studies and end with what looks to be mirror

self-recognition

Dolphins have always held a fascination for humans, even from

the time of the ancient Greeks

but it was really around the 1960s that researchers started studying

them scientifically

First looking at their natural whistles, learning that they had

sonar

And only a bit later was there the attempt to engage them in direct

communication studies with humans

Lilly was one of the first to engage in such work;

He trained dolphins to match the number of, and trains of, sound bursts, as well as interburst silences and

latencies

He pretty much used standard operant techniques and fish

rewards

His subjects were also able to

differentiate between one human’s voice and other humans’ comments or

corrections

And he managed to train the

dolphins to produce a few human-like utterances

Unfortunately, Lilly became known more for LSD

experimentation and total deprivation studies

But his work set the stage for a number of other intriguing studies

People like Karen Pryor trained dolphins to understand certain

signals

Not just the typical stuff of marine mammal shows, but signals such

as “invent”

For which the dolphins would do some entirely new behavior or

combination of behaviors

Other researchers examined how dolphins might transfer information

to each other

How, for example, they could do particular jumps exactly in tandem

or transfer information about what was in their tank to another dolphin

in an adjoining tank

In the early 70s, the person who became most influential in dolphin

research was Louis Herman

Who looked at all the work being done with signing chimps and

computer-mediated communication between apes and

humans

And decided to replicate the work

with dolphins

He worked in two modalities…hand signals and artificial whistles

His system was quite similar to the one later adapted by Ron

Schusterman, which we saw with Rocky, the sea lion

Basically, the animals learned to respond to a series of signals,

either auditory or gestural

And thus could be examined on how well they comprehended this

artificial language

So the dolphin would be given a set of signals that would mean

“flipper touch the ball” (i.e., not the frisbee or the ring)

And it would swim to the ball in the pool, ignoring the ring and the

frisbee

and carry out the appropriate action

The animals were trained via operant techniques, with food

rewards…

Once they got the hang of the system, they began to learn new

symbols much more rapidly

Often simply by the use of mutual exclusivity, or “exclusion”

So if they were to learn the novel label “buoy”,

They would watch experimenters throw already known

objects/labels such as a frisbee and a ring into the tank,

And then be told to “fetch buoy”

They knew what a frisbee was and what a ring was…

So, by exclusion, they would fetch the buoy…

And thus make the connection between the novel sign and the

novel object
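Just to make that logic concrete, here is a minimal sketch in Python (with made-up object and label names, not Herman’s actual materials) of how exclusion can map a novel label onto the one object no known label accounts for:

# Sketch of learning-by-exclusion: a novel label gets mapped to whatever
# object is NOT already accounted for by known labels.
# Object/label names are hypothetical placeholders.

known_lexicon = {"frisbee": "frisbee_object", "ring": "ring_object"}

def fetch_by_exclusion(requested_label, objects_in_tank, lexicon):
    """Return the object to fetch, learning a new mapping by exclusion if needed."""
    if requested_label in lexicon:
        return lexicon[requested_label]
    accounted_for = set(lexicon.values())          # objects with known labels
    novel = [obj for obj in objects_in_tank if obj not in accounted_for]
    if len(novel) == 1:
        lexicon[requested_label] = novel[0]        # novel label -> novel object
        return novel[0]
    return None                                    # ambiguous; no inference made

tank = ["frisbee_object", "ring_object", "buoy_object"]
print(fetch_by_exclusion("buoy", tank, known_lexicon))  # -> buoy_object
print(known_lexicon)                                    # new pairing is now stored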

When Herman moved into production, life got much more

difficult

Mainly because he separated out production from comprehension

And didn’t use referential rewards,

first looking at just vocal reproduction

What he did was to get a good baseline of the vocalizations

already in his dolphins’ repertoire…

A critically important first step

Then he trained the dolphins so that they understood that they had to replicate whatever came

next

Nevertheless, it took over 1000 trials to get the dolphins to

replicate the sounds

And they ran into even more trouble when they tried to get the dolphins to associate these

sounds,

learned in the total absence of a

referent,

with specific objects

What they did was to give the dolphin the sound along with the

object at first

They hoped the dolphin would make the association and produce the sound in the presence of the

object

But the dolphin wasn’t getting the “replicate” signal, so it didn’t act

So then they tried giving the dolphin the replicate signal along

with the signal and the object

Which worked fine, except that the dolphin wasn’t really ‘labeling’ the

object

Just replicating as taught

So they tried fading out the signal they wanted the dolphin

to use as the label

So it heard “replicate this,” then the label signal at a lower volume, and saw the object

Well, the dolphin was smart and knew it was supposed to replicate

exactly what it heard…

So, as you might imagine, it began to produce the imitated

signal in a softer and softer manner….

which was exactly the opposite of what Herman wanted

The humans finally figured out a kind of alternation system that

worked

But the training was painfully slow

Nevertheless, the animals did learn to use these signals with striking accuracy, if extremely

slowly

This work is in rather stark contrast to the paper we read by

Reiss and McCowan..

Here the dolphins had a lot of control over what they wanted to

do

And they were actually given the choice of whether they wanted food, a toy, or interaction with

their trainers

By using an underwater keyboard system

Initially, Reiss would touch the

key

The dolphins heard a signal

And then something

appeared in the tank

The keyboard was then presented to the animals during

specific sessions

But they did not receive any specific training other than the demonstration of how it worked
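The contingency itself is simple enough to sketch; the keys, whistles, and outcomes below are placeholder names, not the actual stimuli from the study:

# Sketch of the underwater-keyboard contingency: pressing a key plays a
# distinct whistle and then delivers the associated item or activity.
# Key symbols, whistle names, and outcomes are illustrative placeholders.

KEYBOARD = {
    "key_1": {"whistle": "whistle_1", "outcome": "ball"},
    "key_2": {"whistle": "whistle_2", "outcome": "ring"},
    "key_3": {"whistle": "whistle_3", "outcome": "rub_from_trainer"},
}

def press(key):
    entry = KEYBOARD[key]
    print(f"play {entry['whistle']}")     # computer-generated whistle
    print(f"deliver {entry['outcome']}")  # object or interaction enters the tank

press("key_1")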

The young males were extremely curious and explored it

Note that at least in this experiment, the older females

were not much interested

Which could be an important point

Herman’s subjects were older females

Too, Reiss’ young males often monopolized the keyboard

So, when we look at the results, we have to keep all that in mind…

We don’t know what would have happened had the experiment been done with just the adult

females

Or with females that weren’t actively involved with their calves

Or what Herman might have been able to do if he had had

young males

And, although my gut feeling is

that the training techniques were really the issue

That’s a hypothesis that needs testing very carefully

In any case, Reiss’ data were strikingly different from

Herman’s….

The dolphins started to reproduce the vocal signals after fewer than

20 exposures

And the productions were often made separately from the sample

The dolphins would produce the whistles BEFORE they hit the

appropriate key

Or when they were playing with the appropriate object in the absence

of the keyboard

Suggesting that they had easily made the connection between

whistle and object

The dolphins also remembered these whistles over a two year

period

And upon reintroduction of the keyboard, started using the

whistles immediately

Suggesting a strong association, at least, between the whistles and

the system

Of particular interest were the whistle combinations….

When the dolphins, while playing with both rings and balls, took the “ring” whistle + the “ball” whistle

to produce, in context, a combined whistle

[spectrograms shown here: note the frequency levels; the ball element sits up near 16 kHz, so it isn’t just another ring blip]

And there was an 82% correlation between the

combination and combination play

Suggesting that this was a real association

But not one that was trained; it was a spontaneous combination

and spontaneous use
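As an illustration of how that kind of whistle/context association could be quantified from coded sessions (this is not how Reiss and McCowan computed their 82% figure, and the counts are hypothetical placeholders):

# Sketch: quantifying the association between producing the combination
# whistle and being engaged in ring+ball play, using a phi coefficient
# on a 2x2 co-occurrence table. All counts are hypothetical placeholders.
import math

a, b = 41, 9    # combo whistle during combo play / during other play
c, d = 12, 88   # no combo whistle during combo play / during other play

phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.2f}")  # values near 1.0 indicate a strong association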

In fact, none of this was trained…

If nothing else, the data suggest something about the flexibility of vocal learning in

the species

Even if, like birds, this flexibility is more pronounced in youth

And, because there was no selective reinforcement of

particular aspects of the vocal behavior

We can see what is likely salient to the dolphins….

Frequency modulation, duration, harmonic structure

And, of course, it is fascinating to see parallels with respect to primacy and recency effects

The dolphins learned the beginnings and ends first…

Much like humans and the birds that we’ve studied

Now, Reiss and McCowan are very careful not to claim full

referential communication here…

And rightfully so, given that the animals were not tested directly

on whether they could label

Nor on how well they might be able to transfer to really novel

balls or rings

And, of course, the study stopped (for lack of funding) well before the researchers could look at

more sophisticated sign combinations

For that, we have to look at some more studies by Herman

Here he stuck to the gestural communication trained with Ake

Basically, Ake had learned what Herman called a syntax

and what in a series of back-and-forth in-print arguments

Schusterman called rule-governed strings

which actually is a very important distinction…

Because syntax implies true correspondence with human

behavior

And the dolphins did not have

anything nearly as complex… e.g., no passives or past tense, etc.

But the dolphin did have to follow some fairly complex rules in deconstructing the gestural

strings she was given

So, for example, “hoop ball in” means

“put the ball inside the hoop”…

That is, indirect object - direct object - verb, in a recency pattern so that what is acted

on is closest to the verb

This pattern was chosen specifically to link the verb and

the object to be acted on

It also meant that the dolphin had to distinguish things like “hoop ball fetch” from “ball

hoop fetch”

But in many cases there could be

no real reversal…you can’t put the hoop in the ball

But, instead of looking at that non-reversal issue as a

drawback,

Herman decided to use it as an advantage

to see what exactly his dolphin did or did not comprehend

and what types of errors she might make when given impossible

directions to carry out

Interestingly, when Schusterman did a similar study on a sea lion,

the sea lion pretty much did what it could with what it was given…

Often, tho’ not always, reinterpreting the commands so

as to do something related

It isn’t clear, of course, if the dolphins, who, as we will see, often

balked at an anomalous command

should be considered smarter than the sea lions…

The latter maybe should be given credit for their innovative

attempts to deal with something odd

Ake also had some modifiers, such as left and right

And the order there was also important, so that the modifier modified the term directly to its

right…

“pipe left ball fetch”
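Putting the ordering rule and the modifier rule together, here is a minimal sketch of a comprehension parser for strings like these; the vocabulary and the glosses are simplified illustrations, not Herman’s actual coding scheme:

# Sketch of the gestural-string rules as described here:
#   [indirect object] [direct object] VERB  -> act on the direct object,
#   and a modifier (left/right) applies to the term directly to its right.
# Vocabulary and output glosses are illustrative placeholders.

OBJECTS = {"hoop", "ball", "pipe", "frisbee", "surfboard", "basket"}
MODIFIERS = {"left", "right"}
RELATIONAL_VERBS = {"in", "fetch"}              # need a destination object
NONRELATIONAL_VERBS = {"over", "tail-touch", "through"}

def parse(signs):
    """Return a rough gloss of a gestural string, or None if it breaks the rules."""
    tokens = signs.split()
    if not tokens or tokens[-1] not in RELATIONAL_VERBS | NONRELATIONAL_VERBS:
        return None                             # the verb must come last
    verb = tokens.pop()
    args, pending_mod = [], None
    for t in tokens:
        if t in MODIFIERS:
            pending_mod = t                     # hold it for the next object
        elif t in OBJECTS:
            args.append(f"{pending_mod} {t}" if pending_mod else t)
            pending_mod = None
        else:
            return None                         # unknown sign (lexical anomaly)
    if verb in RELATIONAL_VERBS and len(args) == 2:
        dest, theme = args                      # item next to the verb is acted on
        return (f"take the {theme} to the {dest}" if verb == "fetch"
                else f"put the {theme} in the {dest}")
    if verb in NONRELATIONAL_VERBS and len(args) == 1:
        return f"do '{verb}' to the {args[0]}"
    return None                                 # wrong number of arguments

print(parse("hoop ball in"))          # put the ball in the hoop
print(parse("pipe left ball fetch"))  # take the left ball to the pipe
print(parse("over surfboard"))        # None: verb not last (syntactic anomaly)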

So, a dolphin being given anomalous phrases could have

quite a number of different things ‘go wrong’…

Syntactic anomalies could be something like the verb in the

wrong place…

“over surfboard” or “ring over surfboard”….

These are the kinds of errors a tired human trainer might

make, reverting to English in this case

Of course, the use of the term ‘syntax’ really needs to be

considered just a short-hand

It isn’t syntax like yours or mine

Now, Herman states that something like “surfboard basket tail-touch” is a syntactic error…

Because it uses a nonrelational verb in place of a relational one

But I would argue that this example is a semantic error

Because it hinges on the meaning of the verb

Not on its placement in the string of commands

Other semantic anomalies are things like “window through”…

like the above, a physical impossibility

Finally, Herman gave his animal lexical anomalies….

Something nonsensical was inserted into the string

The question was how would the dolphin respond to all these odd

requests?

The goal was to see if anything could be learned

about her information processing,

assuming that there might be patterns to the way she dealt

with these anomalies….

Would she just balk or be inventive?

Initial studies did show some patterns to her responses…

Thus, for example, she would totally reject a string with a

nonsense gesture in place of a verb

i.e., “DUH, what do you want me to do??”

And if given something like “frisbee over pipe”

She followed a similar rule, one that could be interpreted as “strings

always start with verbs,

so ignore anything until you get to a verb”

Note that it was interpretations such as this that made

Schusterman argue for rule-governed behavior rather than

syntax

Because one rule could

successfully be used in a number of different situations

Generally not true for syntax

And for a semantic anomaly, like ‘window through’,

She tended to carry out the verb on whatever was possible….

Again suggesting a strong primacy for the verb position

Now the difficult thing about Ake’s strings

was that she had to store the whole thing before she could

know what to do

And, for stuff like left-right, had to remember the order very

carefully

An odd bit of this was the pause to videotape only after an

anomaly;

Ake would have been able to use that information as a clue that

something was, indeed, weird about the previous trial

Why not videotape everything and analyze only what’s needed?

And, yeah, in 1993, video was expensive and difficult and

cameras were large…

But that kind of intrusiveness was therefore even more

telling..

So that is clearly one thing to think about in evaluating the

results

Too, when you look at the TTR, the MTTR and the TMTR normal

trials

That is, bring hoop to ball, or a right hoop to a ball or a hoop to

a left ball….

The success rates are not so good

And most errors were on the indirect object,

suggesting that Ake had a lot of trouble remembering the stuff at

the beginning of the string

Particularly if there were two of them and one wasn’t assigned L/R

In fact, although even the 40% is truly statistically above

chance

because you have to factor in choice of verb and choice of

direct object…

In reality that’s a bit wonky, because she almost never

goofed on the verb

And also was pretty good w/ the direct object (next to the verb)

but could mess up on the indirect object almost half the

time….

Of course, she had 11 choices, so again, statistically ok…

but clearly indicating memory issues
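For what it’s worth, here is the kind of chance calculation being alluded to: with 11 possible items for the indirect object, a roughly 40% hit rate is well above the 1-in-11 chance level. A sketch (the trial count is a placeholder, not the actual N):

# Sketch: is ~40% correct on the indirect object above chance when there
# are 11 possible items to pick from? The trial count is a placeholder.
from scipy.stats import binomtest

n_trials = 50                 # hypothetical number of scored trials
n_correct = 20                # 40% of them correct
p_chance = 1 / 11             # 11 possible indirect-object choices

result = binomtest(n_correct, n_trials, p_chance, alternative="greater")
print(f"chance = {p_chance:.3f}, observed = {n_correct / n_trials:.2f}, "
      f"p = {result.pvalue:.4g}")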

Memory might also explain the success when a stationary item

is involved….

Even if moved, most of these were always in the tank…

Whereas the moveable items, at least for all studies considered, might or might not be present

Thus, for example, it might be easier to code something

that is usually around

Even if its position varied across sessions, it stayed in

place, unlike the other items, for a given session

So, keeping this bit of info in mind, let’s look at the

anomalous strings

If she were asked to move

something truly immovable…or likely immovable, like Phoenix…

She mostly balked…

Tho’ half the time w/ Phoenix she substituted something else…

And when given two verbs, she again rejected…seemed to respond with a “what AM I

supposed to do???”

The hardest were stationary to transportable

and she kinda made things up herself there

If given choices that were anomalous in a ‘logical’ way,

Like a stationary thing put in the midst of an otherwise OK string

She just ignored the weird part most of the time

Less so if it were at the end of too many items…

And she dumped stationary items when they didn’t make sense in an action sentence

that already had a transportable thing to be

acted on

She clearly knew what was impossible and rejected it

Or she did pieces that made sense

And she did seem to be very eager to reject strings that initially looked to require a

relational situation

But then turned out to have a simple action verb

Herman argues for that rejection based on the processing of the

set

Giving an expectation of a specific type of verb

But sometimes, of course, she just used the verb on something in the string…

or something convenient…

It’s a bit difficult to analyze the data,

Given how badly the dolphin did on the TMTR and MTTR trials…

If I were doing a serious review, I’d crunch through and see if

there were statistically significant differences in

accuracy

Because that is exactly what is critical…

Not just what the dolphin did on the anomalous strings,

But how the errors compared to the errors on the normal strings

with only 40% correct

And they never did that comparison
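If someone did want to crunch through it, one reasonable way (not anything Herman reported) would be a simple contingency-table test on correct/incorrect counts for normal versus anomalous strings; the counts here are placeholders:

# Sketch: comparing accuracy on normal vs. anomalous strings with a
# Fisher's exact test on a 2x2 table. Counts are hypothetical placeholders.
from scipy.stats import fisher_exact

#                  correct  incorrect
normal_trials    = [24, 36]      # roughly 40% correct, as on the normal strings
anomalous_trials = [30, 30]      # placeholder counts for the anomalous strings

odds_ratio, p_value = fisher_exact([normal_trials, anomalous_trials])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")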

However, it is clear that the dolphin knew something about

the strings

But whether the errors really related to memory or to a strong verb

recency effect

is not at all clear…

A separate issue is how the dolphin’s sonar system

interacts with its visual system

particularly in terms of dealing with object identification

What does it “see” when it uses its sonar?

Does sonar give it an overall global perception?

Or, like the many blind men examining the elephant

does sonar give the animal a handle on some particular,

distinctive feature?

Early studies by Herman and his lab suggested that dolphins could examine an object via

sonar

and then, in a two-choice test,

correctly identify the object via vision, and vice versa,

though the studies couldn’t explain how it was done

The argument of using raw acoustic cues plus associative

learning wouldn’t hold

because Herman used first-trial data

But the objects used were distinctive enough that

information about single aspects could have been

sufficient

And the dolphin could, with only two distinctive choices

have actually not even looked closely at the 2nd choice,

Just choosing it only if the first choice wasn’t really correct

In contrast, if the sample had, say, a distinctive bump

and now the two choices both had that distinctive bump and

that’s all that the dolphin checked out…

then, whether it vacillated between the two or chose as soon as it located the bump…

The results would be at chance,

even if one of the choices resembled the sample a little

bit more overall

If, however, the dolphin went for overall shape, performance would be consistently high
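That logic is easy to simulate: if the dolphin checks only a local feature both alternatives share, it is effectively guessing between the two, so accuracy sits near 50%; if it compares overall shape, it should be near ceiling. A sketch, with purely illustrative object representations:

# Sketch: why matching on a shared local feature predicts chance performance
# in a two-alternative test, while matching on global shape predicts high
# accuracy. Object representations are illustrative placeholders.
import random

def trial(strategy):
    sample  = {"bump": 1, "shape": "A"}
    correct = {"bump": 1, "shape": "A"}   # true global match
    foil    = {"bump": 1, "shape": "B"}   # shares the bump, different shape
    choices = [correct, foil]
    random.shuffle(choices)
    if strategy == "local":               # check only the bump
        picked = next(c for c in choices if c["bump"] == sample["bump"])
    else:                                 # compare overall shape
        picked = next(c for c in choices if c["shape"] == sample["shape"])
    return picked is correct

for strategy in ("local", "global"):
    accuracy = sum(trial(strategy) for _ in range(10_000)) / 10_000
    print(f"{strategy:6s} matching: {accuracy:.2%} correct")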

So another set of experiments was done specifically to check

exactly to what the dolphin attended…

Did it care about a local feature such as a bump

or a global feature such as symmetry?

And if it had three instead of two choices

Then we could see if the animal actually paid some attention to anything other than that first

choice

Or if it at least searched in an ordered type of way

In the first experiment, they gave the dolphin three

alternative choices

All of which differed in global shape but did have local

matching features

They could then test just how the

subject reacted

In the next experiment, they added a ‘none of the above’

choice

Which, if the animal were working on global features, it

should generally choose

Of course, that choice could also be used if the animal were simply too confused to choose anything

else

The tasks were various combinations of sonar-vision

Although it would have been interesting, too, just to do a sonar

or a vision task to see if these differed by themselves

Maybe something is special about knowing you need to switch

modes?

First the dolphin was trained with baseline objects…things like flower pots and gratings…

She would get to examine them by sonar

Then choose between two visual items and then three

items

And was gradually introduced to some training PVC items as

well

so she wouldn’t freak when she was given such different

things during testing

And, again, went from a 2 item to a 3 item choice of

response

Now she was given a bunch of new samples mixed with old

baselines

And sometimes the choices were all mostly new stuff or

sometimes baselines mixed in

So that there were full or partial transfer tests

She was right over 90% on all types of tests, so the sonar-vision

order was reversed…

Now she saw the object and had to choose using sonar…

First with baseline and two choices, then moving up to

complexity as before

Again, she was accurate over 90% of the time, even though

the samples and the choices had some overlapping features…

Such data suggested that she was looking at the object

globally, not locally…

And her search strategy was pretty regular, too….

She pretty much checked the right, stopped if it was a

match;

If not, went on to C, stopping if it was a match and going on if

it wasn’t….

And then hitting L if that’s what was left
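Written out as a procedure (positions and object names are just placeholders), the reported strategy amounts to:

# Sketch of the reported search strategy: check Right first and stop on a
# match; otherwise check Center; otherwise just take Left by elimination.
# Object names are illustrative placeholders.

def choose(sample, alternatives):
    """alternatives: dict mapping 'right'/'center'/'left' to the object there."""
    if alternatives["right"] == sample:       # check R first, stop if it matches
        return "right"
    if alternatives["center"] == sample:      # otherwise move on to C
        return "center"
    return "left"                             # otherwise L is what's left

layout = {"right": "grating", "center": "flowerpot", "left": "pvc_tee"}
print(choose("flowerpot", layout))            # -> center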

Note that this is somewhat different from pigeons who will

focus on local cues….

Remember, if given a large letter (say, a big T) composed of small H’s,

they would match it to the H’s, not to the T

But what would have happened had none of the objects been an

exact match?

So she was first given objects as samples and taught to use a

“NA” paddle if no exact match was there

But, of course, this kinda set the stage for whatever came next

She seemed to need a bit more training on going from

vision to sonar this time, getting only one out of four

trials correct

But eventually understood that she had to have an exact match

Which, again, predisposed her choices for later

She was then given some samples that had only local

features in common with the possible choices

Now, if she were using global cues, she should hit the NA

paddle on those trials

to say that nothing really matched

She was still pretty good, but her accuracy dropped to about 75% on the “missing objects”

And, interestingly, her errors were to objects that shared at least half of the features of the

sample

Suggesting that she could be

confused, at least a bit

But there was a preponderance of global

matches or NAs

Which could have been an effect of being rewarded for “If

it’s not exact, hit NA” type training

It also seemed a bit odd that the NA paddle was always to the left, even if to the FAR left…

Because if the dolphin just kept going when she didn’t find a match right away,

Then the NA paddle was just the last thing on her list, no matter

what

Even if it wasn’t an object in a box

In a sense, it didn’t matter, because it meant that the

dolphin was searching for that exact match

But what would it have done w/ an empty box in the last

position? Or yet another bum choice?

Would it have dithered?

It does seem as tho’ the dolphin did choose in terms of

overall shape

But I’m not convinced that the latter experiments made the data

any more convincing

For the latter part, her responses seemed to be a product of the training

What is extremely interesting is that the dolphin does transfer from sonar to visual and back

Those data tell us a lot about how the sonar capacity must work…

Ecologically, the global features are critically important

So the data do not seem all that surprising;

And some data for humans who have been trained to

use sonar-like canes suggest that, over time,

they learn to integrate the information to ‘see’ via the sonar

So, if all these data aren’t enough to impress you that something is going on in the

dolphin brain

there are now the studies on mirror self-recognition,

which suggest some level of self-awareness

MSR (mirror self recognition) is a rare trait in animals

Although part of the problem is that testing nonhumans is really

tricky

because the standard test is very difficult to administer

In the standard test, you put the subject in front of the

mirror

And first watch for self-exploratory behavior of bits of the body that

can’t be seen otherwise…

Like looking inside one’s mouth

In the second part of the task

You surreptitiously mark the individual with a sticker

or some water paint or such

And then see, if subsequently shown the mirror, if the subject

investigates the mark on its own body

Problems arise because nonhumans often don’t have

hands

or, like birds, can see most of their bodies without mirrors

or, like elephants and birds, don’t seem to care if they’ve got gunk

on their bodies

And, of course, there’s a big question as to whether having

MSR

really does mean that an animal has self-awareness

An old paper by Robert Epstein basically trained pigeons to

react appropriately

So surface behavior matched

But the birds probably had no idea about what they were

doing, at least in that experiment

So what about other animals?

So far, apes succeed and monkeys fail; parrots are a toss

up

Reiss and Marino studied two dolphins that had been in pools with reflective glass walls for some part of each

year

Thus the first part of the usual

study was not possible….

The animals would have been habituated to seeing themselves

Thus they were unlikely to react by exploring bits of their bodies

they couldn’t otherwise see

when now given a mirror

Of course, this situation also meant that they probably had a pretty good idea of what they

looked like

So anything that changed could, theoretically, be noticed

immediately

To control for just handling the animals, mark sessions were balanced with sham sessions

The animals felt something happen, but nothing was left on

them

Marks were made on various different

body parts

So the test could be repeated

The researchers coded for different types of behavior

Self-directed

Non-directed

Ambiguous

Social

Self-directed included anything that could be mark- or sham-

directed,

such that the animal positioned himself at the mirror and then

produced orienting or repetitive motions

So that the area of interest was visible in the mirrored surface

Exploratory behavior is self-directed, but not with respect to

the mark or sham-mark

These involve viewing other areas not generally able to be

seen without the mirror

And that occurred at the mirror

Social behavior is what occurs when a dolphin meets

another dolphin….familiar or stranger…

either aggressive or affiliative

but clearly to “another” creature

Suggesting that the mirror image wasn’t interpreted as being of oneself

Reiss and Marino did a number of different markings

Of particular interest were those sham markings done before and

after the real marks

because the dolphins treated these differently from each other

In all cases, the animals spent more time in front of the

mirrored surfaces when marked than when sham-marked
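The key measure is easy to tabulate; a sketch of the comparison with placeholder numbers (not Reiss and Marino’s actual durations):

# Sketch: tallying time spent at the mirrored surface by session type.
# Durations (seconds per session) are hypothetical placeholders.
from statistics import mean

sessions = {
    "marked": [182, 240, 205, 198],
    "sham":   [64, 80, 59, 71],
}

for condition, durations in sessions.items():
    print(f"{condition:7s} mean time at mirror: {mean(durations):.0f} s")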

Interestingly, during the early sham-marks

One animal checked itself out once and then ignored the situation

But after it had been marked and had explored the markings

When then sham-marked it went to look for a mark

So he checked himself out, looked for a mark, and if none

was there, went away

Essentially reacting as if to say, “I felt something, that often

means a mark….”

“I’d better go check it out and see what it’s like”….

Then “Oh, it’s not there, so fuggetaboudit”

Which also shows quite a bit about some level of self-

awareness

And, the dolphin chose the most reflective surface,

Using a black-backed wall if the mirror wasn’t present

[videos shown in lecture]

I asked Reiss about the initial marking, because if the early sham-marked animals didn’t spend time near the mirror,

why would they suddenly start when really marked?

Sham-marked animals skimmed by at first, didn’t see anything

and left

Marked animals skimmed by, saw something and the first

time they did,

Became hooked and kept looking

Should have been clearer in the article as written

So, dolphins, like apes, do show MSR

But, unlike apes, don’t care about marks on another animal

and what that means for full consciousness is unclear

but does show self-awareness

What is also of interest are the ‘contingency checking’ (CC)

behavior patterns from the earlier study

Pan, unlike the two dolphins who did show MSR, had mirrors only for

a few days

And did more and more CC as time went on…

Possibly the long exposure of the two MSR dolphins provided

time enough so that

CC could develop into MSR

That does not happen with monkeys, who habituate and then

ignore the mirror and fail MSR

But possibly dolphins are quite different

And maybe dolphins don’t get into too much social behavior

because their sonar says it’s not another critter

There’s lots still to be learned