4 Mechanism Design
DESCRIPTION
Mechanism Design slides from the ISI game theory workshop, June 2014
TRANSCRIPT
Multiple agents contribute resources (usually money) towards a shared outcome
Each individual agent’s contribution is private knowledge
◦ proportional to the agent’s utility of the shared outcome
◦ usually submitted to a central location
The shared outcome should be the “best possible point” for every agent
Problem: How can each agent be incentivized to submit its true contribution?
Public construction projects
◦ e.g.: building a bench in a park
Design of voting procedures
◦ to prevent gaming of the election
Contracts among parties that will come to have private information
◦ e.g.: shared key, shared secret
Distributed computation
◦ e.g.: a p2p network, where each node should report the actual number and names of files it shares
Each agent’s contribution is called its preference or type
The shared outcome is assumed to be discrete
◦ the outcome is chosen from a set of alternatives
The decision rule for determining the shared outcome is called the collective choice
I: set of agents
X: set of possible alternatives from which the collective choice will be made
Each agent privately observes its preferences beforehand
◦ How do we model this?
Each agent i privately observes a value θi that determines its preferences; θi is called the agent’s type
Θi is the set of agent i’s possible types
Each agent is a utility maximizer; ui(x, θi) denotes the utility to agent i of alternative x when its type is θi
The relation ≥i gives agent i’s preference over alternatives from X when its type is θi
◦ e.g.: ui(x, θi) ≥ ui(x’, θi) is denoted x ≥i x’
Remember: θi is private information to agent i
◦ this is a Bayesian game with incomplete information
θ = (θ1, …, θI) is a profile of the agents’ types
The probability of each θ ∈ Θ = Θ1 × … × ΘI is assumed to be common knowledge
◦ denoted by φ(·)
Each Θi is common knowledge; ui(·, θi) is common knowledge; θi is not common knowledge
Each agent reveals a type. How do we determine the collective choice? Via a social choice function
◦ f: Θ1 × … × ΘI → X
◦ i.e., f(θ1, …, θI) = x ∈ X
The s.c.f. f: Θ1 × … × ΘI → X is ex-post efficient if for no profile θ is there an x ∈ X such that ui(x, θi) ≥ ui(f(θ), θi) for every i, and ui(x, θi) > ui(f(θ), θi) for some i
◦ In other words, an s.c.f. is ex-post efficient if, given a type profile of all agents, no alternative Pareto-dominates the chosen outcome: no x makes every agent at least as well off and some agent strictly better off
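For finite sets of alternatives and type profiles, this definition can be checked by brute force. A minimal sketch (the function name and the toy utility in the test are illustrative, not from the slides):

```python
def is_ex_post_efficient(f, alternatives, type_profiles, u):
    """Brute-force ex-post efficiency check on finite data.

    f              -- dict: type profile (tuple) -> chosen alternative
    alternatives   -- list of alternatives X
    type_profiles  -- list of type profiles theta
    u              -- u(i, x, theta_i): utility of agent i for x at type theta_i
    """
    for theta in type_profiles:
        chosen = f[theta]
        for x in alternatives:
            weakly_better = all(u(i, x, ti) >= u(i, chosen, ti)
                                for i, ti in enumerate(theta))
            strictly_better = any(u(i, x, ti) > u(i, chosen, ti)
                                  for i, ti in enumerate(theta))
            if weakly_better and strictly_better:
                return False  # some x Pareto-dominates f(theta)
    return True
```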
Recap: agents submit their private contributions to a central location, which implements the s.c.f. to choose the shared outcome that should be the “best possible point” for every agent. Problem: How can each agent be incentivized to submit its true contribution?
Problem: Allocation of an indivisible item to one of I agents
Set of alternatives:
X = {(y1, y2, …, yI, t1, t2, …, tI): yi ∈ {0,1} and ti ∈ ℝ for all i, Σi yi = 1 and Σi ti ≤ 0}
◦ yi = 1 if agent i gets the good
◦ yi = 0 otherwise
◦ ti: money given to agent i
Utility function: ui(x, θi) = θi yi + (mi + ti)
mi: initial endowment of agent i
Θi: set of possible valuations of the good, Θi = [θi^l, θi^u], a subset of ℝ
Let θ = (θ1, θ2, …, θI) be a profile of types for the different agents
Recall: the s.c.f. f is ex-post efficient if for no profile θ is there an x ∈ X such that ui(x, θi) ≥ ui(f(θ), θi) for every i, and ui(x, θi) > ui(f(θ), θi) for some i
The s.c.f. f(θ) = (y1(θ), y2(θ), …, yI(θ), t1(θ), t2(θ), …, tI(θ)) is ex-post efficient if it
◦ always allocates the item to the agent that has the highest valuation:
yi(θ)(θi − max{θ1, θ2, …, θI}) = 0 for all i
◦ does not waste money:
Σi ti(θ) = 0
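One such ex-post efficient s.c.f. can be sketched directly: give the item to a highest-valuation agent and make no transfers at all, so Σi ti(θ) = 0 holds trivially. An illustrative sketch, not code from the slides:

```python
def efficient_allocation(theta):
    """Allocate one indivisible item ex-post efficiently.

    theta -- list of valuations, one per agent
    Returns (y, t): a 0/1 allocation vector and a transfer vector.
    """
    n = len(theta)
    winner = max(range(n), key=lambda i: theta[i])  # lowest index wins ties
    y = [1 if i == winner else 0 for i in range(n)]
    t = [0.0] * n                                   # no money changes hands
    # the two efficiency conditions from the slides:
    assert all(y[i] * (theta[i] - max(theta)) == 0 for i in range(n))
    assert sum(t) == 0
    return y, t
```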
Two specific instances of this problem:
◦ Bilateral Trading
◦ Auctions
Bilateral trading: I = 2
◦ Agent 1: owner/seller
◦ Agent 2: buyer
Case 1: θ2^l > θ1^u
◦ certain to be gains from trade for any realization of θ1 and θ2
Case 2: θ2^u < θ1^l
◦ certain to be no gains from trade for any realization of θ1 and θ2
Case 3: θ2^l < θ1^u and θ2^u > θ1^l
◦ gains depend on the actual realization of θ1 and θ2
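The three cases reduce to a simple interval comparison. A small sketch (function and label names are assumed, not from the slides):

```python
def gains_from_trade(seller_interval, buyer_interval):
    """Classify a bilateral-trade instance by the agents' type intervals.

    seller_interval -- (low, high) bounds of the seller's valuation (agent 1)
    buyer_interval  -- (low, high) bounds of the buyer's valuation (agent 2)
    """
    l1, u1 = seller_interval
    l2, u2 = buyer_interval
    if l2 > u1:
        return "certain gains"        # buyer always values the good more
    if u2 < l1:
        return "certain no gains"     # seller always values the good more
    return "depends on realization"   # the intervals overlap
```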
Auctions: I + 1 agents
◦ Special agent 0 is the seller/auctioneer
◦ Agents 1..I are buyers/bidders
Simple case
◦ I = 2
◦ Θi = [0, 1], with θi uniformly distributed – common knowledge among agents
s.c.f.: f(θ) = (y0(θ), y1(θ), y2(θ), t0(θ), t1(θ), t2(θ))
y1(θ) = 1 if θ1 ≥ θ2, = 0 if θ1 < θ2
y2(θ) = 1 if θ2 > θ1, = 0 if θ2 ≤ θ1
y0(θ) = 0
t1(θ) = −θ1 y1(θ)
t2(θ) = −θ2 y2(θ)
t0(θ) = −(t1(θ) + t2(θ))
◦ Seller gives the good to the buyer with the highest valuation (buyer 1 in case of a tie)
◦ This buyer pays (since ti(θ) ≤ 0) money equal to its own valuation
◦ The other buyer makes no payment
This s.c.f. is ex-post efficient: the good always goes to the buyer with the highest valuation, and all payments go to the seller, so no money is wasted.
Is this s.c.f. truth revealing?
◦ If buyer 2 announces his true value, will buyer 1 find it optimal to do the same?
Buyer 1’s problem
◦ For each actual type θ1, choose a revealed type θ’1 that solves:
max over θ’1 of (θ1 − θ’1) · Prob(θ2 ≤ θ’1)
◦ Recall Θi = [0, 1] with θ2 uniform, therefore Prob(θ2 ≤ θ’1) = θ’1 and the problem becomes:
max over θ’1 of (θ1 − θ’1) · θ’1
◦ Use calculus to solve and get: θ’1 = θ1/2
◦ Truthful revelation is not optimal for buyer 1
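The calculus result θ’1 = θ1/2 can be confirmed numerically by maximizing (θ1 − θ’1) · θ’1 over a grid of reports. An illustrative check, not from the slides:

```python
def best_report(theta1, grid_points=10001):
    """Maximize buyer 1's expected payoff (theta1 - r) * r over reports
    r in [0, 1]; r is also P(theta2 <= r) for uniform theta2."""
    best_r, best_val = 0.0, float("-inf")
    for k in range(grid_points):
        r = k / (grid_points - 1)
        val = (theta1 - r) * r
        if val > best_val:
            best_r, best_val = r, val
    return best_r
```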
Similar reasoning applies to buyer 2, so the s.c.f. above (from slide 9), in which the winning buyer pays its own announced valuation, is not truth revealing.
Now consider the s.c.f. with the transfers swapped:
y1(θ) = 1 if θ1 ≥ θ2, = 0 if θ1 < θ2
y2(θ) = 1 if θ2 > θ1, = 0 if θ2 ≤ θ1
y0(θ) = 0
t1(θ) = −θ2 y1(θ)
t2(θ) = −θ1 y2(θ)
t0(θ) = −(t1(θ) + t2(θ))
◦ Seller gives the good to the buyer with the highest valuation (buyer 1 in case of a tie)
◦ This buyer pays money equal to the valuation of the other buyer: the second-highest valuation
◦ The other buyer makes no payment
Reasoning by buyer 1:
◦ Case 1: buyer 2 announces θ’2 ≤ θ1
When buyer 1 truthfully reveals his valuation θ1, he gets utility θ1 − θ’2
For any other revelation, buyer 1 gets equal utility (when his revelation ≥ θ’2) or zero utility (when his revelation < θ’2)
◦ Case 2: buyer 2 announces θ’2 > θ1
When buyer 1 truthfully reveals his valuation θ1, he gets utility 0 because he does not get the item
For any revelation that gets him the item, buyer 1 gets negative utility θ1 − θ’2 < 0
Truthful revelation is therefore a weakly dominant strategy for buyer 1
Similar reasoning applies to buyer 2, so this s.c.f. (slide 15) is truth revealing
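The case analysis above can be verified exhaustively on a grid: no bid ever beats announcing one's true value, against any announcement by the other buyer. A sketch with assumed helper names, tie broken in this buyer's favour as for buyer 1 in the slides:

```python
def payoff(value, bid, other_bid):
    """Buyer's payoff in a two-buyer second-price auction; ties go to
    this buyer, who then pays the other buyer's announcement."""
    return value - other_bid if bid >= other_bid else 0.0

def truthful_is_weakly_dominant(step=0.1, n=11):
    """Check weak dominance of truthful bidding on a value/bid grid."""
    grid = [i * step for i in range(n)]
    for value in grid:
        for other in grid:
            truthful = payoff(value, value, other)
            for bid in grid:
                # a strictly profitable deviation would refute dominance
                if payoff(value, bid, other) > truthful + 1e-12:
                    return False
    return True
```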
Problem with the last s.c.f.:
◦ agents directly reveal their types (to the auctioneer), which are private information
Reality:
◦ some indirect mechanism is used by each agent to reveal its private type information
◦ e.g.: a function that maps or transforms the private type information into a real numeric value
Buyer i submits a sealed bid bi > 0
◦ the bids get opened by the auctioneer when the auction is over
The buyer with the highest bid wins (gets the item)
The bid is now an indirect revelation of the buyer’s type: bi(θi) = αi θi, where αi > 0 and θi ∈ [0, 1]
Consider the problem from buyer 1’s point of view:
◦ buyer 2 bids b2(θ2) = α2 θ2
◦ buyer 1’s problem is
max over b1 of (θ1 − b1) · Prob(b2(θ2) ≤ b1)
◦ buyer 2’s maximum bid is α2 (when θ2 = 1)
◦ buyer 1 should never bid more than α2
◦ θ2 is uniformly distributed in [0, 1], and b2(θ2) ≤ b1 if and only if θ2 ≤ b1/α2
◦ buyer 1’s problem becomes
max over b1 of (θ1 − b1) · b1/α2
Solution:
b1(θ1) = ½θ1 if ½θ1 ≤ α2
= α2 if ½θ1 > α2
By symmetry, for buyer 2:
b2(θ2) = ½θ2 if ½θ2 ≤ α1
= α1 if ½θ2 > α1
bi(θi) = ½θi constitutes a Bayesian Nash equilibrium with α1 = α2 = ½ (each buyer’s bid is then a best response to the other’s)
The first-price sealed-bid auction therefore implements an ex-post efficient s.c.f. with a Bayesian Nash equilibrium (although it is not truth revealing)
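That b1 = θ1/2 is a best response when buyer 2 plays b2(θ2) = θ2/2 can be checked numerically by maximizing buyer 1's expected payoff (θ1 − b) · Prob(θ2/2 ≤ b) over a grid of bids. An illustrative check with assumed function names:

```python
def expected_payoff(theta1, b):
    """Buyer 1's expected payoff in the first-price auction when buyer 2
    bids theta2 / 2 and theta2 is uniform on [0, 1]."""
    win_prob = min(2.0 * b, 1.0)   # P(theta2 / 2 <= b)
    return (theta1 - b) * win_prob

def best_bid(theta1, grid_points=10001):
    """Grid-search buyer 1's optimal bid in [0, 1]."""
    return max((k / (grid_points - 1) for k in range(grid_points)),
               key=lambda b: expected_payoff(theta1, b))
```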
In a second-price sealed-bid auction, the strategy bi(θi) = θi is weakly dominant for each buyer
The second-price sealed-bid auction implements an ex-post efficient s.c.f. that is truth revealing
A mechanism Γ = (S1, S2, …, SI, g(·)) is a collection of I strategy sets (S1, S2, …, SI) and an outcome function g: S1 × S2 × … × SI → X
Recall: θ = (θ1, …, θI) is a profile of the agents’ types; the probability of each θ ∈ Θ = Θ1 × … × ΘI is common knowledge, denoted φ(·); each Θi and ui(·, θi) are common knowledge, but θi is not. The social choice function is
◦ f: Θ1 × … × ΘI → X
◦ i.e., f(θ1, …, θI) = x ∈ X
Combining the information from the last slide, we know each parameter of the game:
[I, {Si}, {ui(·)}, Θ1 × … × ΘI, φ(·)]
This is a Bayesian Nash game
A mechanism Γ = (S1, S2, …, SI, g(·)) implements a social choice function f(·) if there is an equilibrium strategy profile (s1*(·), s2*(·), …, sI*(·)) of the game induced by Γ such that g(s1*(θ1), s2*(θ2), …, sI*(θI)) = f(θ1, …, θI) for all (θ1, …, θI) ∈ Θ
The definition on the last slide is weak
◦ what happens when there are multiple equilibria?
◦ it assumes agents play the equilibrium that the mechanism designer wants (which makes g(·) and f(·) map to the same outcome)
How do we select a social choice function?
Exhaustive enumeration of all s.c.f.s for the problem domain
◦ look at each possible mechanism and determine a corresponding s.c.f. to use for that mechanism
◦ infeasible
Use the revelation principle
◦ ask each agent i to reveal its type: θ’i (recall the true type of agent i is θi)
◦ calculate f(θ’1, θ’2, …, θ’I) = x ∈ X as the chosen alternative
◦ called a direct revelation mechanism
A direct revelation mechanism is a mechanism in which Si = Θi for all i and g(θ) = f(θ) for all θ ∈ Θ
Do we need to consider all direct revelation mechanisms?
◦ No
◦ Only consider ones where truth-telling is an optimal strategy for each agent
The s.c.f. f(·) is truthfully implementable (or incentive compatible) if the direct revelation mechanism Γ = (Θ1, …, ΘI, f(·)) has an equilibrium (s1*(·), s2*(·), …, sI*(·)) in which si*(θi) = θi for all θi ∈ Θi and all i = 1, 2, …, I; that is, if truth telling by each agent i constitutes an equilibrium of Γ = (Θ1, …, ΘI, f(·))
For a Bayesian Nash game, at equilibrium:
E_θ−i[ui(g(si(θi), s−i(θ−i)), θi) | θi] ≥ E_θ−i[ui(g(si’, s−i(θ−i)), θi) | θi] for all si’ ∈ Si
The strategy profile s*(·) = (s1*(·), s2*(·), …, sI*(·)) is a dominant strategy equilibrium of mechanism Γ = (S1, S2, …, SI, g(·)) if for all i and all θi ∈ Θi
ui(g(si*(θi), s−i), θi) ≥ ui(g(si’, s−i), θi)
for all si’ ∈ Si and all s−i ∈ S−i
A mechanism Γ = (S1, S2, …, SI, g(·)) implements a social choice function f(·) in dominant strategies if there is a dominant strategy equilibrium (s1*(·), s2*(·), …, sI*(·)) of Γ such that g(s*(θ)) = f(θ) for all θ ∈ Θ
Dominant strategy play is useful because players do not need to know their opponents’ strategies in order to play a dominant strategy
The s.c.f. f(·) is truthfully implementable in dominant strategies (or strategyproof, or dominant strategy incentive compatible, or straightforward) if si*(θi) = θi for all θi ∈ Θi and i = 1, 2, …, I is a dominant strategy equilibrium of the direct revelation mechanism Γ = (Θ1, …, ΘI, f(·)). That is, for all i and all θi ∈ Θi,
ui(f(θi, θ−i), θi) ≥ ui(f(θi’, θ−i), θi)
for all θi’ ∈ Θi and all θ−i ∈ Θ−i
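For finite type sets, this condition can be checked by brute force over all profiles and unilateral misreports. A sketch (the auction-style test data in the usage is illustrative, not from the slides):

```python
from itertools import product

def is_strategyproof(f, type_sets, u):
    """Check dominant-strategy incentive compatibility on finite data.

    f         -- dict: full reported type profile (tuple) -> outcome
    type_sets -- list of finite type sets, one per agent
    u         -- u(i, outcome, theta_i): agent i's utility at true type theta_i
    """
    n = len(type_sets)
    for theta in product(*type_sets):          # true profile
        for i in range(n):
            for report in type_sets[i]:        # unilateral misreport by i
                deviated = theta[:i] + (report,) + theta[i + 1:]
                if u(i, f[deviated], theta[i]) > u(i, f[theta], theta[i]):
                    return False               # profitable misreport found
    return True
```

As a sanity check, a two-agent second-price rule passes this test while the corresponding first-price rule fails it.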
Suppose that there exists a mechanism Γ = (S1, S2, …, SI, g(·)) that implements a social choice function f(·) in dominant strategies. Then f(·) is truthfully implementable in dominant strategies
Quasi-linear utilities; the Vickrey-Clarke-Groves (VCG) mechanism
Scenario (discussed in past lectures): agent i participates in a group-decision setting by choosing its optimal type to reveal, given its utility function and set of possible types
◦ Problem: design an incentive compatible (truth-revealing) mechanism
What if agent i chooses not to participate at all?
◦ the designed mechanism might fail to be incentive compatible if some agents don’t participate
K = {0, 1} (set of outcomes: build the project or not)
Two agents: 1, 2
For each agent i, Θi = {θl, θu}
◦ θu > θl > 0
Project cost c, with 2θl < c < θl + θu, so building is efficient exactly when at least one agent has the high valuation
Objective: design an ex-post efficient s.c.f. with
k*(θ1, θ2) = 1 if either θ1 or θ2 = θu
and k*(θ1, θ2) = 0 if θ1 = θ2 = θl
◦ this can be designed using the VCG mechanism
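Under the cost bounds 2θl < c < θl + θu, the rule k* coincides with the efficiency test θ1 + θ2 ≥ c, which can be checked with assumed concrete numbers (θl = 1, θu = 3, c = 2.5 are illustrative choices, not from the slides):

```python
def k_star(theta1, theta2, theta_u):
    """Efficient decision rule: build iff some agent has the high value."""
    return 1 if theta_u in (theta1, theta2) else 0

# assumed numbers satisfying 2*theta_l < c < theta_l + theta_u
theta_l, theta_u, c = 1.0, 3.0, 2.5
for t1 in (theta_l, theta_u):
    for t2 in (theta_l, theta_u):
        # k* builds exactly when total valuation covers the cost
        assert k_star(t1, t2, theta_u) == (1 if t1 + t2 >= c else 0)
```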
Now add a constraint: an agent can withdraw at any time
◦ it then does not need to pay money for the project
◦ and does not get the benefits of the project
Problem: design an ex-post efficient s.c.f. that achieves the desired outcomes under this voluntary participation constraint
If agent 1 can withdraw at any time, then to ensure his participation
t1(θl, θu) ≥ −θl
◦ whenever agent 1’s valuation of the project is θl, he pays no more than θl towards the project’s cost
What should agent 1’s transfer be if both agents have valuation θu?
If truth-telling is to be a dominant strategy, t1(θu, θu) must satisfy:
θu k*(θu, θu) + t1(θu, θu) ≥ θu k*(θl, θu) + t1(θl, θu)
Substituting k*(θu, θu) = k*(θl, θu) = 1 we get:
θu + t1(θu, θu) ≥ θu + t1(θl, θu)
or
t1(θu, θu) ≥ t1(θl, θu)
But we saw that to prevent agent 1 from withdrawing at any time we must have
t1(θl, θu) ≥ −θl
and since t1(θu, θu) ≥ t1(θl, θu), we must also have
t1(θu, θu) ≥ −θl
◦ agent 1’s payment must not exceed θl, no matter what his valuation is
By symmetry, we can also show for agent 2:
t2(θu, θu) ≥ −θl
The total amount of money transferred (payments to build the project) is therefore bounded:
t1(θu, θu) + t2(θu, θu) ≥ −2θl
But for the project to be self-funded,
t1(θu, θu) + t2(θu, θu) ≤ −c
Recall the project cost constraint 2θl < c, which means
t1(θu, θu) + t2(θu, θu) ≤ −c < −2θl
(a contradiction with the bound under the first bullet)
Conclusion: we cannot design an ex-post efficient s.c.f. (for this example, at least) when the agents can withdraw from the mechanism at any time
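The contradiction can be made concrete with assumed numbers θl = 1, θu = 3, c = 2.5 (any values with 2θl < c exhibit it):

```python
# assumed example values satisfying the cost constraint 2*theta_l < c
theta_l, theta_u, c = 1.0, 3.0, 2.5

# participation plus incentive compatibility bound each agent's payment
# at (theta_u, theta_u) by theta_l, so at most 2*theta_l is collected:
max_total_collected = 2 * theta_l   # from t1 + t2 >= -2*theta_l

# self-funding the project requires collecting at least c:
required = c                        # from t1 + t2 <= -c

# the project cannot be funded: the most we can collect falls short
assert required > max_total_collected
```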
Recap: with the constraint that an agent can withdraw at any time (paying nothing and receiving no benefits), designing an ex-post efficient s.c.f. that achieves the desired outcomes under the voluntary participation constraint cannot be done
When can agent i withdraw? In three stages of the process:
◦ ex-post: after each agent has announced its type (publicly) and an outcome has been chosen
◦ interim: when each agent has determined the best type to reveal (based on its utility), but has not yet announced it
◦ ex-ante: before each agent has determined what type it should reveal
What can we do to prevent an agent from withdrawing at each of these three stages?
Let agent i receive a utility of u’i(θi) by withdrawing when its type is θi
To prevent ex-post withdrawal: ui(f(θi, θ−i), θi) ≥ u’i(θi)
Let Ui(θi | f) = E_θ−i[ui(f(θi, θ−i), θi) | θi] denote agent i’s interim expected utility
To prevent interim withdrawal: Ui(θi | f) ≥ u’i(θi)
Let Ui(f) = E_θi[Ui(θi | f)] = E_θ[ui(f(θi, θ−i), θi)] denote agent i’s ex-ante expected utility
To prevent ex-ante withdrawal: Ui(f) ≥ E_θi[u’i(θi)]
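For a finite example with uniform independent types, the three constraints can be computed directly for one agent. The sketch below (names and test data are illustrative) also shows the implications run one way only: an s.c.f. can satisfy the interim constraint while violating the ex-post one, because a bad realization is averaged out at the interim stage.

```python
def pc_levels(f, my_types, other_types, u, u_out):
    """Evaluate the three participation constraints for agent 0, with
    uniform independent types over finite sets.

    f     -- dict: (theta_0, theta_1) -> outcome
    u     -- u(outcome, theta_0): agent 0's utility
    u_out -- u_out(theta_0): agent 0's utility from withdrawing
    """
    ex_post = all(u(f[(t0, t1)], t0) >= u_out(t0)
                  for t0 in my_types for t1 in other_types)
    interim = all(sum(u(f[(t0, t1)], t0) for t1 in other_types)
                  / len(other_types) >= u_out(t0)
                  for t0 in my_types)
    total = sum(u(f[(t0, t1)], t0)
                for t0 in my_types for t1 in other_types)
    ex_ante = (total / (len(my_types) * len(other_types))
               >= sum(u_out(t0) for t0 in my_types) / len(my_types))
    return ex_post, interim, ex_ante
```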
An s.c.f. satisfying the ex-post p.c. satisfies the interim p.c., which in turn satisfies the ex-ante p.c.
Successfully implementable s.c.f.s should satisfy incentive compatibility (truth revelation) as well as the participation constraints