
Models for Norm Diffusion in Social Networks

Giuseppe Persiano

Dipartimento di Informatica “Renato M. Capocelli”, Università di Salerno

Joint work with: Vincenzo Auletta, Diodato Ferraioli, Paolo Penna, Francesco Pasquale

October 2012 – ICS – Salerno

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 1 / 38

Modeling Human Behaviour

1 very difficult problem

2 not even close to a theory


Game Theory

Big assumption: each person has a utility function and tries to maximize it.

Milder way of saying it: let us model humans when they try to maximize their utility.


Norm diffusion in a social network

Objective

derive a mathematical model based on Game Theory for norm diffusion in a social network.

Scenario

how two competing norms fight in a social network

new behaviour: formal vs. casual wear

new technology: Windows vs. Mac

new technology: Android vs. iOS

new paradigm: open source software vs. proprietary software


Norm diffusion in a social network

Utility

Every individual

tries to adopt the “better” norm

wants to do what others in his/her social neighborhood are doing


Why do I want to do that?

Why is CS interested in this?

intellectual curiosity

CS is constructing new social networks

if we understand how people behave we can make predictions

or we could influence them

Social networks give us a huge amount of information to be processed efficiently.

That’s what CS does!


Game Theory...in one slide

G = ([n], S, U)

[n] = {1, . . . , n} players;

S = (S_1, . . . , S_n); S_i = actions for player i;

U = (u_1, . . . , u_n); u_i : S_1 × · · · × S_n → R utility functions

Solution concept

x = (x_1, . . . , x_n) ∈ S_1 × · · · × S_n is a pure Nash equilibrium if for every i ∈ [n] and for every y ∈ S_i

u_i(x_{−i}, y) ≤ u_i(x)

µ = (µ_1, . . . , µ_n), with µ_i a probability distribution over S_i, is a mixed Nash equilibrium if for every i ∈ [n] and for every other probability distribution σ over S_i

E_{(µ_{−i}, σ)}[u_i] ≤ E_µ[u_i]

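The pure-equilibrium condition above can be checked by brute force on any small finite game. A minimal sketch; the Chicken game used on the later slides is assumed here as the running example:

```python
from itertools import product

def pure_nash_equilibria(strategies, u):
    """strategies: one action list per player; u(i, profile) -> payoff of player i."""
    equilibria = []
    for profile in product(*strategies):
        stable = True
        for i, actions in enumerate(strategies):
            for y in actions:
                deviation = profile[:i] + (y,) + profile[i + 1:]
                if u(i, deviation) > u(i, profile):  # profitable unilateral deviation
                    stable = False
                    break
            if not stable:
                break
        if stable:
            equilibria.append(profile)
    return equilibria

# Chicken game: S = STOP, P = PASS
payoffs = {('S', 'S'): (0, 0), ('S', 'P'): (0, 1),
           ('P', 'S'): (1, 0), ('P', 'P'): (-1, -1)}
u = lambda i, prof: payoffs[prof][i]
print(pure_nash_equilibria([['S', 'P'], ['S', 'P']], u))  # [('S', 'P'), ('P', 'S')]
```

The two asymmetric profiles survive: in each, neither player can gain by deviating unilaterally.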

Nash equilibrium

Nash equilibria are the equilibrium states for the Best Response Dynamics:

no player can increase his utility by changing his current action

the state of the system does not change, no matter which player is chosen for action update


Best Response Dynamics: An example

Chicken Game

If both STOP then neither wins;

If both PASS then both lose;

If A PASSes and B STOPs, A wins and B does not

     S        P
S  (0, 0)   (0, 1)
P  (1, 0)  (−1, −1)

     S        P
S  (0, 0)   (0, 1)
P  (1, 0)  (−1, −1)

initial state (S, S);

player one is chosen ⇒ (P, S);

player two is chosen ⇒ (P, S);

player one is chosen ⇒ (P, S);

.....

initial state (S, S);

player two is chosen ⇒ (S, P);

player one is chosen ⇒ (S, P);

player two is chosen ⇒ (S, P);

.....

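The traces above can be reproduced mechanically. A minimal sketch of the Best Response Dynamics on the Chicken game, using the first update schedule from the slide:

```python
# Best Response Dynamics on the Chicken game: the chosen player switches to the
# action maximising their payoff against the opponent's current action.
payoffs = {('S', 'S'): (0, 0), ('S', 'P'): (0, 1),
           ('P', 'S'): (1, 0), ('P', 'P'): (-1, -1)}

def best_response(i, state):
    def payoff(a):
        profile = (a, state[1]) if i == 0 else (state[0], a)
        return payoffs[profile][i]
    return max(['S', 'P'], key=payoff)

state = ('S', 'S')
for player in [0, 1, 0]:  # player one, player two, player one — as in the trace
    a = best_response(player, state)
    state = (a, state[1]) if player == 0 else (state[0], a)
print(state)  # (P, S): the dynamics has reached a Nash equilibrium and stays there
```

Once (P, S) is reached, no chosen player changes action, which is exactly the equilibrium-state characterization of the previous slide.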

So, we are done...

1 we have a significant dynamics

2 with very well studied induced equilibrium states

Why choose a different dynamics?


Experiment

Write down a number between 1 and 100.

Your number should be as close as possible to half of the average

of all numbers we write.


Experiment: The standard game-theoretic way

Numbers are at most 100, so the average will be at most 100, and half of the average will be at most 50

I will not write a number larger than 50

If none writes a number larger than 50, then the average will be at most 50, and half of the average will be at most 25

I will not write a number larger than 25

If none writes a number larger than 25, . . .

. . .

Prediction: Everyone writes 1!

Do you believe that prediction?

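The chain of deductions above is an iterated halving of the upper bound on any rational guess; a toy sketch:

```python
# Iterated dominance in the half-the-average game: each round of reasoning
# halves the largest number a rational player would write (floored at 1).
bound = 100.0
rounds = 0
while bound > 1:
    bound = max(1.0, bound / 2)
    rounds += 1
print(rounds, bound)  # 7 1.0 — the bound collapses to 1 after a handful of rounds
```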

Motivation

STOC poster session at FCRC’11: Half of the average

12.2

Standard game-theoretic assumption

Rationality is common knowledge

This is too strong an assumption in several cases

Limited knowledge

Limited computational power

Limited rationality


Experiment: A different point of view

X_j = number chosen by player j

Example of limited rationality assumption

P(X_j = k) ∼ e^{−βk} for k = 1, . . . , 100

where β > 0 is the rationality level

X_j independent or “socially” correlated

Prediction: Half of the average is

X = (1/(2n)) Σ_{j=1}^{n} X_j

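The prediction above can be simulated directly. A hedged sketch; the value of β is an arbitrary choice for the demo, not a figure from the talk:

```python
import math
import random

def sample_guess(beta, rng):
    # P(X_j = k) ∝ e^{-beta·k} over k = 1, ..., 100
    weights = [math.exp(-beta * k) for k in range(1, 101)]
    return rng.choices(range(1, 101), weights=weights)[0]

rng = random.Random(0)
beta = 0.05  # rationality level; an assumed value for illustration
guesses = [sample_guess(beta, rng) for _ in range(10_000)]
print(sum(guesses) / (2 * len(guesses)))  # half of the average guess
```

With mildly rational players (small β) the prediction lands near 10, far from both the Nash prediction of 1 and the uniform-random value of 25 — in the same range as the 12.2 observed at FCRC’11.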

Assumptions

1 Agents have complete knowledge of the state of the system;

2 Agents have complete knowledge of the utility associated with each state;

3 Agents have the computational power to optimize;

There could be more than one equilibrium: which one are we using to describe the system?

It might take too much to converge to a Nash equilibrium: how do we describe the system while it converges?


The Logit dynamics – Blume [GEB ’93]

Logit dynamics with β ≥ 0

At every time step

1 Select one player i ∈ [n] uniformly at random;

2 Update the strategy of player i according to the following probability distribution: for every y ∈ A_i

σ_i(y | x) = e^{β u_i(x_{−i}, y)} / T_i(x)

where x ∈ A_1 × · · · × A_n is the current state and

T_i(x) = Σ_{z ∈ A_i} e^{β u_i(x_{−i}, z)} is the normalizing factor.


β: level of rationality

β = 0 ⇒ players act at random;

β > 0 ⇒ players biased toward actions promising higher utility;

β → ∞ ⇒ players play best response;

The probability σ_i(y | x) does not depend on the action a_i currently adopted by player i, so we may write σ_i(y | x_{−i});

This process defines a Markov chain


Ising model

Ferromagnetic particles with two states: up or down. Arranged in a 3D lattice.

The state of each particle is biased by the state of the neighbors:

up with probability ∼ e^{β n_up}

down with probability ∼ e^{β n_down}


Ising model

β is the inverse temperature

large β, low temperature, low energy:

particles are constrained by the magnetic field;

small β, high temperature, high energy:

particles have enough energy to ignore the magnetic field;


Markov chains...in one slide

(X_t : t ∈ N) on state space Ω with transition matrix P.

Irreducible: for every x, y ∈ Ω, ∃ t ∈ N : P^t(x, y) > 0;

Aperiodic: for every x, gcd{t ≥ 1 : P^t(x, x) > 0} = 1;

Stationary distribution: π ∈ ∆(Ω), πP = π.

Irreducible + Aperiodic = Ergodic ⇒ π is unique and P^t(x, ·) → π

Total variation distance: for µ, ν ∈ ∆(Ω)

‖µ − ν‖ = max_{A ⊆ Ω} |µ(A) − ν(A)| = (1/2) Σ_{x ∈ Ω} |µ(x) − ν(x)|

Mixing Time

t_mix = min{t ∈ N : ‖P^t(x, ·) − π‖ ≤ 1/4 for all x ∈ Ω}

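These definitions can be checked numerically on a toy two-state chain; the transition matrix below is an assumption for the demo:

```python
# A 2-state ergodic chain: irreducible and aperiodic, so P^t(x,·) -> π.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = (2 / 3, 1 / 3)  # solves πP = π: 0.1·π0 = 0.2·π1

def step(mu):
    # one application of the transition matrix to a distribution µ
    return tuple(sum(mu[x] * P[x][y] for x in range(2)) for y in range(2))

def tv(mu, nu):
    # total variation distance = half the L1 distance
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

mu = (1.0, 0.0)  # start concentrated on state 0
t = 0
while tv(mu, pi) > 0.25:  # the 1/4 threshold from the mixing-time definition
    mu = step(mu)
    t += 1
print(t)  # 1: from this start the chain is within 1/4 of π after one step
```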

The Markov chain of the Logit dynamics

Finite Markov chain M over the set of states x = (x_1, . . . , x_n) ∈ A_1 × · · · × A_n

with transition probability x ⇒ y

P(x, y) = (1/n) Σ_{i=1}^{n} (e^{β u_i(x_{−i}, y_i)} / T_i(x)) · I{y_j = x_j for every j ≠ i}

M is ergodic

unique stationary distribution π s.t. π · P = π

the chain converges to π: from any starting state, if we apply P sufficiently many times, x appears with probability close to π(x).


Our equilibrium state

the stationary distribution of (the Markov chain of) the Logit dynamics

1 β models the noise in the players’ knowledge of the system (current state and payoffs);

As β → ∞, knowledge is more and more accurate and Logit tends to Best Response;

2 By ergodicity, there is exactly one equilibrium state.

there could be more than one Nash equilibrium

3 How long does it take to converge to the stationary distribution?

needs more study....


Plan of attack

Logit Dynamics defines an ergodic Markov chain

What is the stationary distribution π?

How long does it take to get close to the stationary distribution?


Potential games

Potential games

Difference of utility of adjacent states depends only on the state and not on the player.

The Potential function

There exists a potential function Φ that assigns a potential to each state (a_1, . . . , a_n) such that

Φ(a_1, . . . , b_i, . . . , a_n) − Φ(a_1, . . . , a_i, . . . , a_n) = u_i(a_1, . . . , b_i, . . . , a_n) − u_i(a_1, . . . , a_i, . . . , a_n)

for all a_1, . . . , a_n, b_i.

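The defining identity above can be verified exhaustively for a small game. A sketch using the two-player pure coordination payoffs (a = b = 1, c = d = 0, the special case used on a later slide):

```python
from itertools import product

# Pure coordination game: both players earn 1 iff they agree.
def u(i, s):
    return 1.0 if s[0] == s[1] else 0.0

# Candidate potential: Φ = 1 on agreement, 0 otherwise.
def phi(s):
    return 1.0 if s[0] == s[1] else 0.0

# Check Φ(..b_i..) − Φ(..a_i..) = u_i(..b_i..) − u_i(..a_i..) for every
# state, every player, and every unilateral deviation.
is_potential = all(
    phi(s[:i] + (b,) + s[i + 1:]) - phi(s) == u(i, s[:i] + (b,) + s[i + 1:]) - u(i, s)
    for s in product((0, 1), repeat=2)
    for i in range(2)
    for b in (0, 1)
)
print(is_potential)  # True: Φ certifies this is a potential game
```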

Potential games

Very large and well studied class of games: includes

congestion games

coordination games

anti-coordination games

routing games

...

Pure Nash equilibria correspond to local maxima of the potential function.

Theorem (Monderer–Shapley)

A potential game has at least one pure Nash equilibrium.

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 26 / 38

Logit Dynamics of a Potential Game

Theorem (Blume’s result – Informal)

For sufficiently large β, if we let the Markov chain run for sufficiently long, then with probability close to one it will be in a maximum-potential Nash equilibrium.

The chain is reversible, with the Gibbs measure as stationary distribution:

π(x) = e^{βΦ(x)} / Z

where Z = Σ_y e^{βΦ(y)} is the partition function


Two-Player Coordination Games

     0        1
0  (a, a)   (c, d)
1  (d, c)   (b, b)

Assumptions

a > d and b > c: players prefer to coordinate

Nash equilibria

(0, 0) and (1, 1).

Harsanyi–Selten

if a − d > b − c, strategy 0 is risk dominant


Coordination games on a Network

Vertices are players

Edges are 2-player coordination games


Spread of a new social norm [H. Peyton Young]

Two Competing Social Norms

Old: Messenger New: Facebook

     N        O
N  (a, a)   (c, d)
O  (d, c)   (b, b)

New norm is better: a − d > b − c

Want to adopt the same norm as neighbors: a > d and b > c

Coordination games on a Network

          0 (up)    1 (down)
0 (up)    (1, 1)    (0, 0)
1 (down)  (0, 0)    (1, 1)

0 ⇒ up, 1 ⇒ down, a = b = 1, c = d = 0

The Ising model is a special case of a Coordination game on a network

Coordination games on a Network

Graph G = (V, E)

Potential Φ_G:

    Φ_G(x_1, …, x_n) = Σ_{(u,v) ∈ E} Φ(x_u, x_v)

Stationary distribution π_G:

    π_G(x_1, …, x_n) ∝ e^{β Φ_G(x_1, …, x_n)}

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 32 / 38
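A brute-force illustration of these two definitions (my sketch, using the a = b = 1, c = d = 0 edge potential and a triangle graph as an assumed toy example): Φ_G sums the edge potentials, and π_G weights each profile by e^{βΦ_G}; the consensus profiles maximize the potential and therefore get the highest stationary probability.

```python
import itertools
import math

# Edge potential of the coordination game with a = b = 1, c = d = 0:
# Phi(x_u, x_v) = 1 if the endpoints agree, 0 otherwise.
def edge_potential(xu, xv):
    return 1 if xu == xv else 0

# Global potential Phi_G: sum of edge potentials over all edges.
def potential(profile, edges):
    return sum(edge_potential(profile[u], profile[v]) for u, v in edges)

# Gibbs (stationary) distribution pi_G(x) proportional to
# exp(beta * Phi_G(x)), computed by enumerating all 2^n profiles.
def gibbs(edges, n, beta):
    weights = {x: math.exp(beta * potential(x, edges))
               for x in itertools.product([0, 1], repeat=n)}
    Z = sum(weights.values())
    return {x: w / Z for x, w in weights.items()}

# Triangle graph: the two consensus profiles maximize the potential.
pi = gibbs([(0, 1), (1, 2), (0, 2)], n=3, beta=1.0)
assert max(pi, key=pi.get) in {(0, 0, 0), (1, 1, 1)}
```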


Dynamics with β ≥ 0

At every time step:

1. Select player i ∈ [n] with probability p_i;
2. Update the strategy of player i according to the following probability distribution: for every y ∈ A_i,

       σ_i(y | x) = e^{β u_i(x_{−i}, y)} / T_i(x)

   where x ∈ A_1 × ⋯ × A_n is the current state and T_i(x) = Σ_{z ∈ A_i} e^{β u_i(x_{−i}, z)} is the normalizing factor.

Same stationary distribution and mixing time, provided no p_i is too small.

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 33 / 38
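One step of this update rule can be sketched as follows (my sketch, not from the talk; it assumes binary strategies A_i = {0, 1}, the coordination utility with a = b = 1, c = d = 0, and uniform selection probabilities p_i as a special case):

```python
import math
import random

# Coordination utility of player i playing y: the number of neighbors
# currently playing the same strategy (a = b = 1, c = d = 0).
def utility(i, y, profile, neighbors):
    return sum(1 for j in neighbors[i] if profile[j] == y)

# One step of the logit dynamics: select a player (uniformly here),
# then resample her strategy with probability proportional to
# exp(beta * utility), normalized by T_i(x).
def logit_step(profile, neighbors, beta, rng=random):
    i = rng.randrange(len(profile))
    weights = [math.exp(beta * utility(i, y, profile, neighbors))
               for y in (0, 1)]
    T = sum(weights)  # normalizing factor T_i(x)
    new = profile[:]
    new[i] = 0 if rng.random() < weights[0] / T else 1
    return new

# Example on a path 0 - 1 - 2: with large beta, an updating player
# almost surely conforms to the majority of her neighbors.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
state = logit_step([0, 1, 0], neighbors, beta=10.0)
```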


All logit

All-logit with β ≥ 0

At every time step:

1. Every player i updates her strategy according to the following probability distribution: for every y ∈ A_i,

       σ_i(y | x) = e^{β u_i(x_{−i}, y)} / T_i(x)

   where x ∈ A_1 × ⋯ × A_n is the current state and T_i(x) = Σ_{z ∈ A_i} e^{β u_i(x_{−i}, z)} is the normalizing factor.

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 34 / 38

1-logit vs all-logit

Reversibility
  - The Markov chain of the 1-logit is reversible if and only if the game is a potential game;
  - The Markov chain of the all-logit is reversible if and only if the game is a sum of 2-player potential games.

Ranking (difference between the number of 0s and 1s; expected magnetization of the Ising model; number of adopters in the Social Norm game)
  - In the Gibbs measure, the ranking agrees with the potential φ.
  - In the stationary distribution of the all-logit, it does not agree with the potential, even in the 2-player coordination game.

        0         1
0   (a, a)    (0, 0)
1   (0, 0)    (b, b)

a = b: preserved by the all-logit on any network.
a > b: preserved by the all-logit, at least in the 2-player game.

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 35 / 38
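The all-logit update and the ranking statistic above can be sketched as follows (my sketch, not from the talk; same assumed binary coordination utility as before): every player resamples simultaneously, each applying the logit rule against the current profile x rather than the others' new choices.

```python
import math
import random

# All-logit: every player updates simultaneously, each resampling
# against the *current* profile x (not the others' new strategies).
def all_logit_step(profile, neighbors, beta, rng=random):
    def utility(i, y):
        return sum(1 for j in neighbors[i] if profile[j] == y)
    new = []
    for i in range(len(profile)):
        w0 = math.exp(beta * utility(i, 0))
        w1 = math.exp(beta * utility(i, 1))
        new.append(0 if rng.random() < w0 / (w0 + w1) else 1)
    return new

# The "ranking" statistic from the slide: difference between the
# number of 0s and 1s (the magnetization, in Ising terms).
def magnetization(profile):
    return profile.count(0) - profile.count(1)

# Example on a path 0 - 1 - 2.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
state = all_logit_step([0, 1, 0], neighbors, beta=2.0)
```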


Conclusions

What I understood

  - this is difficult stuff
  - we can't predict the behaviour of individuals
  - Game Theory can be used, but it needs to take into account limited rationality and limited knowledge
  - it seems that humans in a large network behave like particles in a gas or magnets in a magnetic field

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 36 / 38


Thank you!

Giuseppe Persiano (DIA) October 2012 – ICS – Salerno 37 / 38
