
Page 1: LIN6932: Topics in Computational Linguistics


LIN6932: Topics in Computational Linguistics

Hana Filip
Lecture 5: N-grams

Page 2: LIN6932: Topics in Computational Linguistics


Outline

Last bit on tagging:
– Tagging Foreign Languages
– Error Analysis

N-grams

Page 3: LIN6932: Topics in Computational Linguistics


Tagging in other languages

Idea:
– First do morphological parsing
– Get all possible parses
– Treat each parse for a word as a “POS tag”
– Use a tagger to disambiguate

Page 4: LIN6932: Topics in Computational Linguistics


Error Analysis

Look at a confusion matrix (contingency table)

See what errors are causing problems:
– Noun (NN) vs Proper Noun (NNP) vs Adj (JJ)
– Adverb (RB) vs Particle (RP) vs Prep (IN)
– Preterite (VBD) vs Participle (VBN) vs Adjective (JJ)

E.g. 4.4% of the total errors were caused by mistagging VBD as VBN

ERROR ANALYSIS IS ESSENTIAL!!!
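A confusion matrix like this can be tallied directly from paired gold/predicted tag sequences. A minimal Python sketch (the tag sequences below are made up for illustration, not taken from the slides):

```python
from collections import Counter

def confusion_matrix(gold_tags, predicted_tags):
    """Count (gold, predicted) pairs; off-diagonal cells are the tagging errors."""
    return Counter(zip(gold_tags, predicted_tags))

# Hypothetical tagger output over five tokens
gold = ["VBD", "NN", "VBD", "JJ",  "VBN"]
pred = ["VBN", "NN", "VBD", "NNP", "VBN"]

cm = confusion_matrix(gold, pred)
print(cm[("VBD", "VBN")])  # how often a VBD was mistagged as VBN -> 1
```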

Page 5: LIN6932: Topics in Computational Linguistics


How many words?

I do uh main- mainly business data processing

Fragments (main-)
Filled pauses (uh)

Are cat and cats the same word?

Some terminology:

Lemma: a set of lexical forms having the same stem, major part of speech, and rough word sense
– Cat and cats = same lemma

Wordform: the full inflected surface form
– Cat and cats = different wordforms

Page 6: LIN6932: Topics in Computational Linguistics


How many words?

they picnicked by the pool then lay back on the grass and looked at the stars

16 tokens, 14 types

SWBD (Switchboard Corpus): ~20,000 wordform types, 2.4 million wordform tokens

Brown et al. (1992) large corpus: 583 million wordform tokens, 293,181 wordform types
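The token/type arithmetic for the example sentence can be checked in a couple of lines (a minimal sketch, not part of the original slides):

```python
sentence = "they picnicked by the pool then lay back on the grass and looked at the stars"
tokens = sentence.split()
print(len(tokens), len(set(tokens)))  # 16 tokens, 14 types ("the" occurs three times)
```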

Page 7: LIN6932: Topics in Computational Linguistics


Language Modeling

The noisy channel model expects P(W): the probability of the sentence
The model that computes P(W) is called the language model
A better term for this would be “The Grammar”
But “Language model” or LM is standard

“Noise in the data”: the data does not give enough information, is incorrect, or comes from a nondeterministic domain

Page 8: LIN6932: Topics in Computational Linguistics


Computing P(W)

How to compute this joint probability:

P("the", "other", "day", "I", "was", "walking", "along", "and", "saw", "a", "lizard")

Intuition: let’s rely on the Chain Rule of Probability

Page 9: LIN6932: Topics in Computational Linguistics


The Chain Rule

Recall the definition of conditional probability:

P(A|B) = P(A ∧ B) / P(B)

Rewriting:

P(A ∧ B) = P(A|B) P(B)

Equivalently: P(B|A) = P(A|B) P(B) / P(A)

More generally:
P(A,B,C,D) = P(A) P(B|A) P(C|A,B) P(D|A,B,C)

In general:
P(x_1, x_2, x_3, …, x_n) = P(x_1) P(x_2|x_1) P(x_3|x_1,x_2) … P(x_n|x_1,…,x_{n−1})

Page 10: LIN6932: Topics in Computational Linguistics


Conditional Probability (Bayes' Rule)

P(A|B) = P(B|A) P(A) / P(B)

conditional/posterior probability = (LIKELIHOOD multiplied by PRIOR) divided by NORMALIZING CONSTANT

We can drop the denominator: it does not change for each tag sequence; we are looking for the best tag sequence for the same observation, for the same fixed set of words

Page 11: LIN6932: Topics in Computational Linguistics


The Chain Rule applied to the joint probability of the words in a sentence

P(“the big red dog was”)=

P(the)*P(big|the)*P(red|the big)*P(dog|the big red)*P(was|the big red dog)

Page 12: LIN6932: Topics in Computational Linguistics


Unfortunately

Chomsky dictum: “Language is creative”

We’ll never be able to get enough data to compute the statistics for those long prefixes

P(lizard|the,other,day,I,was,walking,along,and,saw,a)

Page 13: LIN6932: Topics in Computational Linguistics


Markov Assumption

Make the simplifying assumption:
P(lizard|the,other,day,I,was,walking,along,and,saw,a) = P(lizard|a)

Or maybe:
P(lizard|the,other,day,I,was,walking,along,and,saw,a) = P(lizard|saw,a)

Page 14: LIN6932: Topics in Computational Linguistics


Markov Assumption

So, for each component in the product, replace it with the approximation (assuming a prefix of N):

P(w_n | w_1^{n−1}) ≈ P(w_n | w_{n−N+1}^{n−1})

Bigram version:

P(w_n | w_1^{n−1}) ≈ P(w_n | w_{n−1})
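In code, the Markov assumption amounts to nothing more than truncating the conditioning history to the last N−1 words. A minimal sketch (illustrative only, not from the slides):

```python
def markov_history(tokens, n):
    """Keep only the last n-1 words of the history, per the Markov assumption."""
    return tuple(tokens[-(n - 1):]) if n > 1 else ()

context = "the other day I was walking along and saw a".split()
print(markov_history(context, 2))  # ('a',)       -> approximate P(lizard | a)
print(markov_history(context, 3))  # ('saw', 'a') -> approximate P(lizard | saw, a)
```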

Page 15: LIN6932: Topics in Computational Linguistics


Estimating bigram probabilities

The Maximum Likelihood Estimate

P(w_i | w_{i−1}) = count(w_{i−1}, w_i) / count(w_{i−1})

i.e., writing c for count:

P(w_i | w_{i−1}) = c(w_{i−1}, w_i) / c(w_{i−1})

Page 16: LIN6932: Topics in Computational Linguistics


An example

<s> I am Sam </s>
<s> Sam I am </s>
<s> I do not like green eggs and ham </s>

This is the Maximum Likelihood Estimate, because it is the one which maximizes P(Training set|Model)
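These estimates can be computed directly from the three training sentences above. A minimal Python sketch (not from the original slides; the values in the comments follow from the counts in this toy corpus):

```python
from collections import defaultdict

corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

bigram_counts = defaultdict(int)
history_counts = defaultdict(int)

for sentence in corpus:
    tokens = sentence.split()
    for w1, w2 in zip(tokens, tokens[1:]):
        bigram_counts[(w1, w2)] += 1
        history_counts[w1] += 1

def p_mle(w, prev):
    """Maximum likelihood estimate: c(prev, w) / c(prev)."""
    return bigram_counts[(prev, w)] / history_counts[prev]

print(p_mle("I", "<s>"))    # 2/3: "I" follows <s> in two of the three sentences
print(p_mle("Sam", "<s>"))  # 1/3
print(p_mle("am", "I"))     # 2/3
```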

Page 17: LIN6932: Topics in Computational Linguistics


More examples: Berkeley Restaurant Project sentences

can you tell me about any good cantonese restaurants close by
mid priced thai food is what i’m looking for
tell me about chez panisse
can you give me a listing of the kinds of food that are available
i’m looking for a good place to eat breakfast
when is caffe venezia open during the day

Page 18: LIN6932: Topics in Computational Linguistics


Raw bigram counts

Out of 9222 sentences

Page 19: LIN6932: Topics in Computational Linguistics


Raw bigram probabilities

Normalize by unigrams:

Result:
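A sketch of what “normalize by unigrams” means in code, using NumPy and placeholder counts (the numbers below are illustrative, not the actual Berkeley Restaurant Project table, but they are chosen so the resulting ratios match the probabilities quoted on the “What kinds of knowledge?” slide):

```python
import numpy as np

words = ["i", "want", "to", "eat"]   # rows = history word, columns = next word
bigram_counts = np.array([
    [5, 827,   0,   9],
    [2,   0, 608,   1],
    [2,   0,   4, 686],
    [0,   0,   2,   0],
], dtype=float)
unigram_counts = np.array([2533, 927, 2417, 746], dtype=float)  # c(w_{i-1}) per row

# Divide each row by the unigram count of its history word.
bigram_probs = bigram_counts / unigram_counts[:, None]
print(bigram_probs.round(2))  # e.g. P(want|i) ~ .33, P(to|want) ~ .66, P(eat|to) ~ .28
```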

Page 20: LIN6932: Topics in Computational Linguistics


Bigram estimates of sentence probabilities

P(<s> I want english food </s>)
= P(i|<s>) × P(want|i) × P(english|want) × P(food|english) × P(</s>|food)
= .000031
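In practice the product is usually computed as a sum of log probabilities to avoid underflow. A minimal sketch (the .25 and .0011 values are the ones quoted on the next slide; the remaining probabilities are placeholders chosen so the product reproduces the .000031 above):

```python
import math

def sentence_logprob(tokens, bigram_prob):
    """Sum of log bigram probabilities; exp() of the result is P(sentence)."""
    logp = 0.0
    for w1, w2 in zip(tokens, tokens[1:]):
        p = bigram_prob.get((w1, w2), 0.0)
        if p == 0.0:
            return float("-inf")  # one unseen bigram zeroes out the whole product
        logp += math.log(p)
    return logp

bigram_prob = {
    ("<s>", "i"): 0.25, ("i", "want"): 0.33, ("want", "english"): 0.0011,
    ("english", "food"): 0.5, ("food", "</s>"): 0.68,
}
tokens = "<s> i want english food </s>".split()
print(math.exp(sentence_logprob(tokens, bigram_prob)))  # ~0.000031
```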

Page 21: LIN6932: Topics in Computational Linguistics


What kinds of knowledge?

P(english|want) = .0011
P(chinese|want) = .0065
P(to|want) = .66
P(eat|to) = .28
P(food|to) = 0
P(want|spend) = 0
P(i|<s>) = .25

Page 22: LIN6932: Topics in Computational Linguistics


The Shannon Visualization Method

Generate random sentences:
– Choose a random bigram (<s>, w) according to its probability
– Now choose a random bigram (w, x) according to its probability
– And so on until we choose </s>
– Then string the words together

<s> I
I want
want to
to eat
eat Chinese
Chinese food
food </s>

→ I want to eat Chinese food
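A minimal sketch of this generation procedure (assuming bigram probabilities are stored as a dict from history word to a {next word: probability} dict; the toy distribution is rigged to reproduce the walk above and is not from the slides):

```python
import random

def generate(bigram_prob, max_len=20):
    """Repeatedly sample the next word from P(. | previous word) until </s>."""
    words, prev = [], "<s>"
    while len(words) < max_len:
        nxt, probs = zip(*bigram_prob[prev].items())
        word = random.choices(nxt, weights=probs)[0]
        if word == "</s>":
            break
        words.append(word)
        prev = word
    return " ".join(words)

bigram_prob = {
    "<s>": {"I": 1.0}, "I": {"want": 1.0}, "want": {"to": 1.0},
    "to": {"eat": 1.0}, "eat": {"Chinese": 1.0},
    "Chinese": {"food": 1.0}, "food": {"</s>": 1.0},
}
print(generate(bigram_prob))  # I want to eat Chinese food
```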


Page 24: LIN6932: Topics in Computational Linguistics


Shakespeare as corpus

N = 884,647 tokens, V = 29,066 types
Shakespeare produced 300,000 bigram types out of V² = 844 million possible bigrams, so 99.96% of the possible bigrams were never seen (have zero entries in the table)
Quadrigrams are worse: what comes out looks like Shakespeare because it is Shakespeare
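The 99.96% figure follows directly from the numbers above (a quick arithmetic check, not from the slides):

```python
V = 29_066
possible_bigrams = V ** 2                 # ~844.8 million
seen_bigram_types = 300_000
print(1 - seen_bigram_types / possible_bigrams)  # ~0.9996, i.e. 99.96% never seen
```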

Page 25: LIN6932: Topics in Computational Linguistics


The Wall Street Journal is not Shakespeare (no offense)

Page 26: LIN6932: Topics in Computational Linguistics


Lesson 1: the perils of overfitting

N-grams only work well for word prediction if the test corpus looks like the training corpus

In real life, it often doesn’t. We need to train robust models, adapt to the test set, etc.

Page 27: LIN6932: Topics in Computational Linguistics


Lesson 2: zeros or not?

Zipf’s Law:
– A small number of events occur with high frequency
– A large number of events occur with low frequency
– You can quickly collect statistics on the high-frequency events
– You might have to wait an arbitrarily long time to get valid statistics on low-frequency events

Result:
– Our estimates are sparse! We have no counts at all for the vast bulk of things we want to estimate!
– Some of the zeros in the table are really zeros, but others are simply low-frequency events you haven’t seen yet. After all, ANYTHING CAN HAPPEN!
– How to address this?

Answer: estimate the likelihood of unseen N-grams!

Slide adapted from Bonnie Dorr and Julia Hirschberg

Page 28: LIN6932: Topics in Computational Linguistics


Smoothing is like Robin Hood: steal from the rich and give to the poor (in probability mass)

Slide from Rion Snow

Page 29: LIN6932: Topics in Computational Linguistics


Add-one smoothing

Also called Laplace smoothing
Just add one to all the counts!
Very simple

MLE estimate:
P(w_i | w_{i−1}) = c(w_{i−1}, w_i) / c(w_{i−1})

Laplace estimate (V = vocabulary size):
P_Laplace(w_i | w_{i−1}) = (c(w_{i−1}, w_i) + 1) / (c(w_{i−1}) + V)

Reconstructed counts:
c*(w_{i−1}, w_i) = (c(w_{i−1}, w_i) + 1) × c(w_{i−1}) / (c(w_{i−1}) + V)
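A minimal sketch of these three quantities for bigrams, reusing defaultdict counters like those in the earlier MLE sketch plus a vocabulary size V (these are the standard add-one formulas; the code itself is not from the slides):

```python
def p_mle(w, prev, bigram_counts, history_counts):
    """Unsmoothed estimate: c(prev, w) / c(prev)."""
    return bigram_counts[(prev, w)] / history_counts[prev]

def p_laplace(w, prev, bigram_counts, history_counts, V):
    """Add-one estimate: (c(prev, w) + 1) / (c(prev) + V)."""
    return (bigram_counts[(prev, w)] + 1) / (history_counts[prev] + V)

def reconstructed_count(w, prev, bigram_counts, history_counts, V):
    """c* = (c(prev, w) + 1) * c(prev) / (c(prev) + V); the discount is d = c*/c."""
    c_prev = history_counts[prev]
    return (bigram_counts[(prev, w)] + 1) * c_prev / (c_prev + V)
```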

Page 30: LIN6932: Topics in Computational Linguistics


Add-one smoothed bigram counts

Page 31: LIN6932: Topics in Computational Linguistics


Add-one bigrams

Page 32: LIN6932: Topics in Computational Linguistics


Reconstituted counts

Page 33: LIN6932: Topics in Computational Linguistics


Note big change to counts

C(want to) went from 608 to 238!
P(to|want) from .66 to .26!
Discount d = c*/c
d for “chinese food” = .10!!! A 10× reduction
So in general, add-one is a blunt instrument
Could use a more fine-grained method (add-k)

But add-one smoothing is not used for N-grams, as we have much better methods. Despite its flaws, it is still used to smooth other probabilistic models in NLP, especially:
– for pilot studies
– in domains where the number of zeros isn’t so huge

Page 34: LIN6932: Topics in Computational Linguistics


Summary

Last bit on tagging:
– Tagging Foreign Languages
– Error Analysis

N-grams