Lecture 6 (based on lectures of Prof. Prat)

The Revenue Equivalence Theorem

Direct selling mechanism:

- Probability assignment functions: the chance that $i$ gets the object given a vector of reported values,
$$p_1(v_1,\ldots,v_N),\ \ldots,\ p_N(v_1,\ldots,v_N), \qquad \text{such that } \sum_i p_i \le 1$$
(the auctioneer could keep the object).
- Cost functions: the cost paid by bidder $i$ given a vector of reported values (he may pay even if he does not get the object):
$$c_1(v_1,\ldots,v_N),\ \ldots,\ c_N(v_1,\ldots,v_N).$$
The cost could be negative, i.e. the auctioneer pays the bidder.

The equilibria we have considered in the four formats have corresponding equilibria in direct selling mechanisms.

1. First-Price, Dutch:
$$p_i(v_1,\ldots,v_N) = \begin{cases} 1 & \text{if } v_i > v_j \text{ for all } j \neq i \\ 0 & \text{otherwise} \end{cases} \qquad c_i(v_1,\ldots,v_N) = \begin{cases} \hat b(v_i) & \text{if } v_i > v_j \text{ for all } j \neq i \\ 0 & \text{otherwise.} \end{cases}$$
Easy to check: truth telling is an equilibrium of this direct mechanism.

2. Second-Price, English: same assignment function as First-Price and
$$c_i(v_1,\ldots,v_N) = \begin{cases} \hat b(v^{\text{second}}) & \text{if } v_i > v_j \text{ for all } j \neq i \\ 0 & \text{otherwise.} \end{cases}$$

For $r_i \in [0,1]$, define
$$\bar p_i(r_i) = \int_0^1 \!\!\cdots\! \int_0^1 p_i(r_i, v_{-i}) f_{-i}(v_{-i})\, dv_{-i}, \qquad \bar c_i(r_i) = \int_0^1 \!\!\cdots\! \int_0^1 c_i(r_i, v_{-i}) f_{-i}(v_{-i})\, dv_{-i}.$$
Then
$$u_i(r_i, v_i) = \bar p_i(r_i)\, v_i - \bar c_i(r_i).$$

A direct mechanism is incentive compatible if
$$u_i(v_i, v_i) \ge u_i(r_i, v_i) \quad \forall i,\ \forall r_i,\ \forall v_i.$$

Proposition 8. A direct mechanism is incentive-compatible if and only if
1. $\bar p_i(v_i)$ is non-decreasing in $v_i$;
2. $\bar c_i$ can be written as $\bar c_i(v_i) = \bar c_i(0) + \bar p_i(v_i)\, v_i - \int_0^{v_i} \bar p_i(x)\, dx$.

Sketch of proof. For (1), take $v' < v''$ and suppose that $\bar p_i(v') > \bar p_i(v'')$. Incentive compatibility implies
$$u_i(v', v') \ge u_i(v'', v'), \qquad u_i(v'', v'') \ge u_i(v', v'').$$
Sum the two inequalities:
$$u_i(v', v') + u_i(v'', v'') \ge u_i(v'', v') + u_i(v', v'').$$
That is,
$$\bar p_i(v')\, v' + \bar p_i(v'')\, v'' \ge \bar p_i(v')\, v'' + \bar p_i(v'')\, v',$$
which re-writes as
$$\left(\bar p_i(v'') - \bar p_i(v')\right)\left(v'' - v'\right) \ge 0,$$
a contradiction because $v' < v''$ and $\bar p_i(v') > \bar p_i(v'')$.

For (2), note that incentive compatibility implies the first-order condition
$$\left.\frac{d}{dr} u_i(r, v)\right|_{r=v} = 0 \quad \forall v.$$
We have
$$\frac{d}{dr} u_i(r, v) = \bar p_i'(r)\, v - \bar c_i'(r).$$
Hence $\bar c_i'(v) = \bar p_i'(v)\, v$ for all $v$. Integrating both sides,
$$\bar c_i(v) - \bar c_i(0) = \underbrace{\bar p_i(v)\, v}_{\text{expected benefit}} - \underbrace{\int_0^{v} \bar p_i(x)\, dx}_{\text{rent}},$$
which corresponds to (2).

To interpret (2), go back to mechanism design and think of the downward local IC constraints:
$$\hat t_i = \hat t_{i-1} + u_1(\hat x_i, \theta_i) - u_1(\hat x_{i-1}, \theta_i).$$
Hence, for $i = 1, \ldots, n$,
$$\hat t_i = u_1(\hat x_i, \theta_i) - u_1(\hat x_{i-1}, \theta_i) + u_1(\hat x_{i-1}, \theta_{i-1}) - u_1(\hat x_{i-2}, \theta_{i-1}) + \cdots = u_1(\hat x_i, \theta_i) - \sum_{k=1}^{i-1} \left[ u_1(\hat x_{i-k}, \theta_{i-k+1}) - u_1(\hat x_{i-k}, \theta_{i-k}) \right].$$

Theorem 9 (Revenue Equivalence). If two incentive-compatible direct selling mechanisms have the same probability assignment functions and every bidder with valuation zero is indifferent between the two mechanisms, then the two mechanisms generate the same expected revenue.

Proof. As the $v$'s are independent, the expected revenue can be written as
$$R = \sum_{i=1}^N \int_0^1 \bar c_i(v_i) f_i(v_i)\, dv_i = \sum_{i=1}^N \int_0^1 \left[ \bar c_i(0) + \bar p_i(v_i)\, v_i - \int_0^{v_i} \bar p_i(x)\, dx \right] f_i(v_i)\, dv_i = \sum_{i=1}^N \bar c_i(0) + \sum_{i=1}^N \int_0^1 \left[ \bar p_i(v_i)\, v_i - \int_0^{v_i} \bar p_i(x)\, dx \right] f_i(v_i)\, dv_i.$$
The revenue depends only on the $\bar c_i(0)$ and the $\bar p_i$, not on the full cost functions $c_i(v_i)$. We can now add new formats to the list. See exercise.

Efficiency

The four formats have the same $\bar p$: they allocate the object to the bidder with the highest valuation. Efficient.

Trade-off between efficiency and revenue maximization:
- The auctioneer can increase expected revenue by setting a reserve price.
- A reserve price mis-allocates the object with positive probability.
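Theorem 9 is easy to check by simulation. The sketch below (illustrative, not from the lecture) compares expected revenue in a first-price and a second-price auction with i.i.d. uniform valuations; the bid function $\hat b(v) = \frac{N-1}{N}v$ used here is the standard symmetric equilibrium for the uniform case.

```python
# Monte Carlo check of revenue equivalence (illustrative sketch, not from the lecture).
# Assumes N bidders with i.i.d. valuations v ~ U[0,1]; in that case the symmetric
# first-price equilibrium bid is b(v) = (N-1)/N * v, a standard textbook result.
import numpy as np

rng = np.random.default_rng(0)
N, trials = 4, 200_000
v = rng.uniform(size=(trials, N))                      # valuations, one row per auction

rev_second = np.sort(v, axis=1)[:, -2].mean()          # winner pays second-highest value
rev_first = ((N - 1) / N * v.max(axis=1)).mean()       # winner pays own bid b(v_max)

print(f"second-price revenue ≈ {rev_second:.4f}")
print(f"first-price revenue  ≈ {rev_first:.4f}")
# Both estimates should be close to (N-1)/(N+1) = 0.6 for N = 4.
```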
Reserve Price in a Second-Price Auction

The auctioneer uses a second-price auction with reserve price $r \ge 0$. If all the bids are below $r$, the auctioneer keeps the object (for which she has zero utility).

Proposition 10. The optimal reserve price is strictly greater than zero.

As before, $h(v)$ is the density function of the second-highest element $v^{\text{second}}$ of $\{v_1,\ldots,v_N\}$. The expected revenue is
$$R_2 = \int_r^1 \hat b(v)\, h(v)\, dv + r\, \Pr\left(v^{\text{second}} < r,\ v^{\text{first}} > r\right) = N(N-1) \int_r^1 v\, (F(v))^{N-2} f(v) \left(1 - F(v)\right) dv + r N (F(r))^{N-1} \left(1 - F(r)\right).$$

Take derivatives:
$$\frac{dR_2}{dr} = -N(N-1)\, r\, (F(r))^{N-2} f(r)\left(1 - F(r)\right) + N(N-1)\, r\, (F(r))^{N-2} f(r)\left(1 - F(r)\right) - r N (F(r))^{N-1} f(r) + N (F(r))^{N-1} \left(1 - F(r)\right),$$
and therefore
$$\frac{dR_2}{dr} = (F(r))^{N-1}\left[ N\left(1 - F(r)\right) - N r f(r) \right].$$
Then
$$\lim_{r \to 0^+} \frac{dR_2/dr}{(F(r))^{N-1}} = N > 0.$$
For $r$ small enough, $dR_2/dr$ is positive and a higher $r$ increases the auctioneer's expected revenue.

Common Values

So far, we have assumed that valuations are independently distributed. But think of auctions for:
- oil fields,
- new issues of securities,
- spectrum (UMTS),
- any object which could be re-sold (paintings, cars, etc.).

Values are then interdependent.

Let us look at the most extreme case: the value is the same for every player (but still stochastic),
$$v_1 = \cdots = v_n = v,$$
and $v$ has density $f$ and CDF $F$ on $[0,1]$. Buyer $i$ observes a signal $y_i$ with distribution $g(y_i \mid v)$. Assume that the $y$'s are independent across buyers conditional on $v$.

Restrict attention to second-price auctions. Is
$$b_i(y_i) = E[v \mid y_i] \tag{4}$$
an equilibrium of the game?

No. A buyer who bids $E[v \mid y_i]$ is paying too much on average. To see this, suppose everybody bids according to the naive strategy in (4). If $i$ wins, it means that
$$E[v \mid y_i] = \max\left(E[v \mid y_1], \ldots, E[v \mid y_n]\right),$$
equivalent to
$$y_i = \max(y_1, \ldots, y_n).$$
But then
$$E[v \mid y_1, \ldots, y_n] < E[v \mid y_i].$$
If $i$ had known what the others know, he would have bid less. This is the winner's curse.

In equilibrium, rational bidders are not subject to the winner's curse because they do not use a naive strategy. The equilibrium strategy is the sophisticated bid function
$$\tilde b_i(y_i) = E\left[v \,\middle|\, y_i,\ y_i = \max_{j \neq i} y_j\right],$$
i.e. a buyer conditions his bid on the event that his bid is equal to the second-highest bid.

Are bidders rational? Experimental evidence (Kagel and Levin 1986): both naive and strategic bidding.

Information Provision and Revenue Maximization

Should the auctioneer allow bidders to get more information about the object for sale? Example: provide an independent expert report. Suppose the cost of information provision is zero.

Milgrom-Weber (Econometrica 1982):

Theorem 11. In symmetric environments, if the auctioneer uses a first- or second-price auction, the best reporting policy is full disclosure.

In our example, suppose the auctioneer chooses between (1) letting bidders know only $y_i$, or (2) providing them with perfect information (they learn $v$). With (2), the bid is simply $b_i = v$ and each buyer gets zero expected payoff.

Which Format?

With common values, the Revenue Equivalence Theorem does not hold. It is still true that First = Dutch, but Milgrom-Weber prove:
- English > Second Price. Intuition: the sequential format provides more information.
- Second Price > First Price (if bidders are risk-neutral). Intuition: reduced winner's curse.
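The winner's-curse logic that drives these rankings can be illustrated with a quick simulation (an illustrative sketch, not from the lecture). The normal common-value specification below is an assumption chosen only because it makes $E[v \mid y_i]$ available in closed form; the point is that, conditional on winning, the naive estimate overshoots the true value on average.

```python
# Winner's curse sketch (illustrative; the normal specification is an assumption,
# chosen because the posterior mean E[v | y_i] has a simple closed form).
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, trials = 5, 1.0, 200_000

v = rng.normal(size=trials)                                            # common value, prior N(0,1)
y = v[:, None] + rng.normal(scale=np.sqrt(sigma2), size=(trials, n))   # private signals

naive_bid = y / (1 + sigma2)          # E[v | y_i] under the normal prior
winner_bid = naive_bid.max(axis=1)    # the naive winner's own estimate

print(f"avg (winner's naive estimate - true v): {np.mean(winner_bid - v):+.3f}")
# The average is strictly positive: conditional on having the highest signal,
# the naive estimate E[v | y_i] overstates v -- the winner's curse.
```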
Lecture 3: Mechanism Design

Very general setup: several players with private information, one player who can commit.
1. The principal offers a mechanism (aka contract or incentive scheme).
2. Each agent accepts/rejects the mechanism.
3. The agents play according to the mechanism.

Many stages ⟹ not a static Bayesian game, but...

Revelation Principle: focus without loss of generality on mechanisms such that:
- all agents accept;
- all agents simultaneously and truthfully reveal their types.

Main point: incomplete information + possibility to reject ⟹ inefficiency.

Principal: benevolent government or profit maximizer.

$I + 1$ players; $i = 0$ is the principal.
- $\theta = (\theta_1, \ldots, \theta_I) \in \Theta$
- $y \in Y$: allocation (decided by the principal)
- $y_0 \in Y$: default allocation
- $u_i(y, \theta)$: utility of $i$

Mechanism:
- $M_i$: message space of each agent;
- $y^M: M \to Y$: allocation function.

Timing:
1. The principal announces $(M, y^M)$.
2. For all $i$, agent $i$ accepts/rejects.
3. If everybody accepts, agent $i$ chooses $\mu_i(\theta_i) \in M_i$.
4. The allocation is $y = y^M\left(\{\mu_i(\theta_i)\}_{i \in I}\right)$.

Example: First-Price Auction
- Agents: bidders with symmetrically distributed valuations $\{\theta_i\}_{i \in I}$.
- Principal: auctioneer.
- Allocation: $Y = \left(i^*, \{t_i\}_{i \in I}\right)$.
- Default allocation: $y_0 = \left(i^* = 0, \{t_i\}_{i \in I} = 0\right)$.
- Message space = bids: $M_i = [0, \infty)$.
- Allocation function:
$$y^M(m): \quad i^* = \arg\max_i m_i, \qquad t_i = \begin{cases} m_i & \text{if } i = i^* \\ 0 & \text{if } i \neq i^*. \end{cases}$$

Bayesian equilibrium (see previous lecture):
$$\mu^*_i(\theta_i) = \frac{1}{(F(\theta_i))^{N-1}} \int_0^{\theta_i} x\, d\left((F(x))^{N-1}\right).$$

Revelation Principle

Direct mechanism: $M_i = \Theta_i$. Agent $i$ announces $\hat\theta_i$. $y: \Theta \to Y$.

Truthtelling: $\hat\theta_i = \theta_i$.

Step 1. Given the equilibrium of a mechanism in which some agents reject, there exists a mechanism which has an equivalent equilibrium but in which all agents accept.

Step 2. Given the equilibrium of a mechanism in which all agents accept, there exists a direct mechanism which has an equivalent equilibrium and in which all agents reveal their types truthfully.

Take $M$, $y^M$, $\mu^*$ as given. Construct $y: \Theta \to Y$ by
$$y(\hat\theta) = y^M\left(\mu^*(\hat\theta)\right) \quad \forall \hat\theta \in \Theta.$$
Then,
$$E_{\theta_{-i}}\left[u_i(y(\theta), \theta_i) \mid \theta_i\right] = E_{\theta_{-i}}\left[u_i\left(y^M(\mu^*_1(\theta_1), \ldots, \mu^*_I(\theta_I)), \theta_i\right) \mid \theta_i\right] \ge \max_{m_i \in M_i} E_{\theta_{-i}}\left[u_i\left(y^M(\mu^*_{-i}(\theta_{-i}), m_i), \theta_i\right) \mid \theta_i\right] \ge \max_{\hat\theta_i \in \Theta_i} E_{\theta_{-i}}\left[u_i\left(y(\hat\theta_i, \theta_{-i}), \theta_i\right) \mid \theta_i\right].$$

Caveat: the Revelation Principle does not say that the set of equilibria of the original mechanism is equal to the set of equilibria of the direct mechanism. Dominant strategies...

Example (continued): First-Price Auction

Find the direct mechanism and truthful equilibrium corresponding to the classical first-price auction (an indirect mechanism).

Message space = types = valuations: $M_i = \Theta_i = [0, 1]$.

Allocation function: use $y(\hat\theta) = y^M\left(\mu^*(\hat\theta)\right)$. Since $\mu^*$ is increasing,
$$i^* = \arg\max_i \mu^*_i(\hat\theta_i) = \arg\max_i \hat\theta_i, \qquad t_i = \begin{cases} \dfrac{1}{(F(\hat\theta_i))^{N-1}} \displaystyle\int_0^{\hat\theta_i} x\, d\left((F(x))^{N-1}\right) & \text{if } i = i^* \\ 0 & \text{if } i \neq i^*. \end{cases}$$

The equilibrium of this direct mechanism is truthful: each player $i$ selects message $\hat\theta_i = \theta_i$. For every vector of types $\theta$, the allocation is the same as in the first-price auction.
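For a concrete case, the transfer above can be evaluated numerically. The sketch below (illustrative; the uniform distribution on $[0,1]$ is an assumption) computes $\mu^*(\theta) = \frac{1}{F(\theta)^{N-1}}\int_0^\theta x\, d\left(F(x)^{N-1}\right)$ and checks it against the familiar closed form $\frac{N-1}{N}\theta$ for the uniform case.

```python
# Equilibrium bid of the first-price auction: mu(theta) = expected highest rival value
# conditional on winning. Illustrative sketch; F uniform on [0,1] is an assumption.
import numpy as np

def mu(theta, N, grid_size=10_000):
    """Evaluate (1/F(theta)^(N-1)) * int_0^theta x d(F(x)^(N-1)) with F(x) = x."""
    if theta == 0:
        return 0.0
    x = np.linspace(0.0, theta, grid_size)
    G = x ** (N - 1)                                        # F(x)^(N-1) for the uniform CDF
    integral = np.sum(0.5 * (x[1:] + x[:-1]) * np.diff(G))  # Stieltjes sum of x dG
    return integral / theta ** (N - 1)

N = 4
for theta in (0.25, 0.5, 0.9):
    print(f"theta={theta:.2f}  bid={mu(theta, N):.4f}  closed form={(N - 1) / N * theta:.4f}")
```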
Usually, $y = (x, t)$:
- $x \in X$: decision,
- $t \in [0, \infty)^I$: transfer.

Leading applications:
1. Price discrimination (Mussa-Rosen 1978): $x$ = quantity or quality, $\theta$ = willingness to pay, $t$ = price.
2. Regulation (Baron-Myerson 1982): $x$ = cost, $\theta$ = technology, $t$ = income.
3. Income tax (Mirrlees 1971): $x$ = income, $\theta$ = ability, $t$ = tax.
4. Public good (Groves 1973): $x$ = public decision, $\theta$ = preference, $t$ = contribution.
5. Auction (Vickrey 1961): $x$ = winner, $\theta$ = willingness to pay, $t$ = price.
6. Bargaining (Myerson-Satterthwaite 1983): $x$ = trade, $\theta$ = preference, $t$ = price.

Mechanism Design with One Agent

First an example: price discrimination with two types. Then the general case.

Example. Seller: marginal cost $c$. Sells $x$ to a buyer at price $t$.
$$u_0 = t - cx, \qquad u_1 = \theta V(x) - t,$$
with $V' > 0$, $V'' < 0$, $V(0) = 0$; $\theta \in \{\theta_L, \theta_H\}$ with $\theta_L < \theta_H$ and $\Pr[\theta_H] = p$.

If the seller knew $\theta$:
$$t^*(x, \theta) = \theta V(x^*(\theta)), \qquad x^*(\theta) = \arg\max_x\ \theta V(x) - cx.$$
Let $\left(x^*_L, x^*_H, t^*_L = \theta_L V(x^*_L), t^*_H = \theta_H V(x^*_H)\right)$ be the full-information solution.

If the seller does not know $\theta$ and she offers the full-information contract, the agent lies. When $\theta = \theta_H$:
- reporting $\hat\theta = \theta_H$ yields $0$;
- reporting $\hat\theta = \theta_L$ yields $\theta_H V(x^*_L) - \theta_L V(x^*_L) > 0$.

The seller should select $(x_L, x_H, t_L, t_H)$ such that:
- an agent with $\theta_L$ accepts: $\theta_L V(x_L) - t_L \ge 0$; (IR$_L$)
- an agent with $\theta_H$ accepts: $\theta_H V(x_H) - t_H \ge 0$; (IR$_H$)
- an agent with $\theta_L$ reports $\hat\theta = \theta_L$: $\theta_L V(x_L) - t_L \ge \theta_L V(x_H) - t_H$; (IC$_L$)
- an agent with $\theta_H$ reports $\hat\theta = \theta_H$: $\theta_H V(x_H) - t_H \ge \theta_H V(x_L) - t_L$. (IC$_H$)

Step 1. If $x_L > x_H$, the constraints cannot be satisfied. Sum IC$_L$ and IC$_H$:
$$(\theta_H - \theta_L)\left(V(x_H) - V(x_L)\right) \ge 0.$$

Step 2. IC$_H$ binding ⟹ IC$_L$:
$$\theta_L\left(V(x_H) - V(x_L)\right) \le \theta_H\left(V(x_H) - V(x_L)\right) = t_H - t_L.$$

Step 3. IC$_H$ and IR$_L$ ⟹ IR$_H$:
$$\theta_H V(x_H) - t_H \underset{\text{IC}_H}{\ge} \theta_H V(x_L) - t_L \ge \theta_L V(x_L) - t_L \underset{\text{IR}_L}{\ge} 0.$$

Step 4. In the optimal contract IC$_H$ and IR$_L$ are binding (and IC$_L$ and IR$_H$ are satisfied). If IC$_H$ were not binding, increase $t_H$. If IR$_L$ were not binding, increase $t_L$.

Step 5. The optimal contract solves
$$\max_{t,x}\ (1-p)(t_L - c x_L) + p (t_H - c x_H)$$
subject to IC$_H$ and IR$_L$ binding:
$$t_L = \theta_L V(x_L), \qquad t_H = \theta_L V(x_L) + \theta_H V(x_H) - \theta_H V(x_L).$$
The problem is separable and rewrites as
$$\max_{x_L}\ (1-p)\left(\theta_L V(x_L) - c x_L\right) - p(\theta_H - \theta_L) V(x_L)\ +\ \max_{x_H}\ p\left(\theta_H V(x_H) - c x_H\right).$$
Compare with the efficient program:
$$\max_{x_L}\ (1-p)\left(\theta_L V(x_L) - c x_L\right) + \max_{x_H}\ p\left(\theta_H V(x_H) - c x_H\right).$$
Hence $x_L < x^*_L$ and $x_H = x^*_H$.

Lessons: (1) IC$_H$ and IR$_L$ binding; (2) no distortion at the top; (3) rationing at the bottom.
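A quick numerical sketch of this two-type problem (not from the lecture; the functional form $V(x) = \sqrt{x}$ and the parameter values are assumptions for illustration) shows the lessons directly: $x_H = x^*_H$ and $x_L < x^*_L$.

```python
# Two-type screening example (sketch; V(x)=sqrt(x), theta_L=1, theta_H=2, p=0.4, c=1 are
# illustrative assumptions, not values from the lecture).
import numpy as np

theta_L, theta_H, p, c = 1.0, 2.0, 0.4, 1.0
V = np.sqrt
x = np.linspace(0.0, 2.0, 200_001)

# Efficient (full-information) quantities: max_x theta*V(x) - c*x for each type.
x_eff_L = x[np.argmax(theta_L * V(x) - c * x)]
x_eff_H = x[np.argmax(theta_H * V(x) - c * x)]

# Second-best quantities from the separable problem in Step 5
# (IC_H and IR_L binding substituted into the objective).
x_L = x[np.argmax((1 - p) * (theta_L * V(x) - c * x) - p * (theta_H - theta_L) * V(x))]
x_H = x[np.argmax(p * (theta_H * V(x) - c * x))]

t_L = theta_L * V(x_L)
t_H = t_L + theta_H * (V(x_H) - V(x_L))

print(f"x_L = {x_L:.4f}  (efficient {x_eff_L:.4f})   x_H = {x_H:.4f}  (efficient {x_eff_H:.4f})")
print(f"t_L = {t_L:.4f}   t_H = {t_H:.4f}")
# Output shows x_H = x*_H (no distortion at the top) and x_L < x*_L (rationing at the bottom).
```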
Mechanism Design with One Agent: General Case

Let $x \in X \subset \mathbb{R}$, where $X$ is a finite set. Let $\theta \in \Theta = \{\theta_1, \ldots, \theta_n\}$, with $\theta_1 < \cdots < \theta_n$ and $p_i = \Pr[\theta = \theta_i]$. The allocation is $y = (x, t)$: the agent's payoff is $u_1(x, \theta) - t$ and the principal's payoff is $u_0(x, \theta) + t$.

A function $f: Y \times Z \to \mathbb{R}$ (with $Y, Z \subset \mathbb{R}$) is (strictly) supermodular if, for all $y'' > y'$ and $z'' > z'$,
$$f(y'', z'') - f(y', z'') \ge (>)\ f(y'', z') - f(y', z').$$
If $f$ is twice differentiable, then it is supermodular if and only if $\partial^2 f / \partial y\, \partial z \ge 0$ everywhere.

1. Implementability

A vector of quantities $x = (x_1, \ldots, x_n)$ is implementable if there exists a $t = (t_1, \ldots, t_n)$ such that the IC constraints are satisfied.

We assume:
(A1) $u_1$ is strictly supermodular in $x$ and $\theta$.
This assumption means that the marginal willingness to pay for $x$ is increasing in the buyer's type.

Proposition 12. Under A1, $x$ is implementable if and only if it is nondecreasing, that is, $x_0 \le x_1 \le \cdots \le x_n$.

Proof. Step 1 ("only if" part): Suppose $x_i < x_k$ for some $i > k$. Summing IC$_{i,k}$ and IC$_{k,i}$ yields
$$u_1(x_k, \theta_k) - u_1(x_i, \theta_k) \ge u_1(x_k, \theta_i) - u_1(x_i, \theta_i),$$
which contradicts supermodularity.

Step 2: Suppose $x_0 \le x_1 \le \cdots \le x_n$. If, for $i = 1, \ldots, n$, every IC$_{i,i-1}$ (local downward incentive-compatibility constraint) holds as an equality, then all ICs are satisfied. This is shown in two steps.

First, if every IC$_{i,i-1}$ holds as an equality, then, for every $k < i$, IC$_{k,i}$ is satisfied. To see this:
$$t_i - t_k = (t_i - t_{i-1}) + (t_{i-1} - t_{i-2}) + \cdots + (t_{k+1} - t_k)$$
$$= \left(u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i)\right) + \left(u_1(x_{i-1}, \theta_{i-1}) - u_1(x_{i-2}, \theta_{i-1})\right) + \cdots + \left(u_1(x_{k+1}, \theta_{k+1}) - u_1(x_k, \theta_{k+1})\right)$$
$$\ge \left(u_1(x_i, \theta_k) - u_1(x_{i-1}, \theta_k)\right) + \left(u_1(x_{i-1}, \theta_k) - u_1(x_{i-2}, \theta_k)\right) + \cdots + \left(u_1(x_{k+1}, \theta_k) - u_1(x_k, \theta_k)\right)$$
$$= u_1(x_i, \theta_k) - u_1(x_k, \theta_k),$$
where the second equality is due to the IC$_{i,i-1}$'s and the inequality is supermodularity.

Second, if every IC$_{i,i-1}$ holds as an equality, then, for every $k < i$, IC$_{i,k}$ is satisfied. This is because:
$$t_i - t_k = (t_i - t_{i-1}) + \cdots + (t_{k+1} - t_k)$$
$$= \left(u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i)\right) + \left(u_1(x_{i-1}, \theta_{i-1}) - u_1(x_{i-2}, \theta_{i-1})\right) + \cdots + \left(u_1(x_{k+1}, \theta_{k+1}) - u_1(x_k, \theta_{k+1})\right)$$
$$\le \left(u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i)\right) + \left(u_1(x_{i-1}, \theta_i) - u_1(x_{i-2}, \theta_i)\right) + \cdots + \left(u_1(x_{k+1}, \theta_i) - u_1(x_k, \theta_i)\right)$$
$$= u_1(x_i, \theta_i) - u_1(x_k, \theta_i).$$

Step 3 ("if" part): Suppose $x_0 \le x_1 \le \cdots \le x_n$ and construct a $t$ such that every IC$_{i,i-1}$ holds as an equality. Let $t_0 = 0$ and, for $i = 1, \ldots, n$,
$$t_i = t_{i-1} + u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i).$$
This $t$ satisfies all local downward ICs as equalities by construction and, by Step 2, all ICs. ∎

2. Profit Maximization

Having characterized the set of quantity vectors that can be implemented, we move to profit maximization.

Proposition 13. Under A1, an optimal contract that implements $x$ takes the form
$$t_i(x) = t_{i-1}(x) + u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i) \quad \text{for } i = 1, \ldots, n.$$

Proof. Fix $t$ and suppose the condition in the proposition is not satisfied. Start with the lowest $i$ for which $t_i < t_{i-1} + u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i)$ (the opposite inequality would violate IC$_{i,i-1}$), let
$$\tilde t_i = t_{i-1} + u_1(x_i, \theta_i) - u_1(x_{i-1}, \theta_i),$$
and, for all $j > i$, let $\tilde t_j = t_j + \tilde t_i - t_i$. Repeat this procedure until all IC$_{i,i-1}$ hold as equalities. The new $t$ is implementable by Proposition 12 and, as $t$ is higher and $x$ is unchanged, yields a strictly higher profit. ∎
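The constructive step in Propositions 12-13 is easy to mechanize. The sketch below (illustrative assumptions: $u_1(x,\theta) = \theta\sqrt{x}$, three types, an arbitrary nondecreasing quantity schedule) builds the transfers from the local downward IC constraints and then brute-force checks every IC constraint.

```python
# Build transfers from local downward ICs (Props. 12-13) and verify all IC constraints.
# Illustrative sketch: u1(x, theta) = theta * sqrt(x) and the type/quantity grids are assumptions.
import math
import itertools

def u1(x, theta):
    return theta * math.sqrt(x)

theta = [0.0, 1.0, 1.5, 2.0]      # theta_0 is a dummy "outside option" type
x     = [0.0, 0.2, 0.7, 1.0]      # a nondecreasing quantity schedule, x_0 = 0

# t_i = t_{i-1} + u1(x_i, theta_i) - u1(x_{i-1}, theta_i), with t_0 = 0
t = [0.0]
for i in range(1, len(x)):
    t.append(t[-1] + u1(x[i], theta[i]) - u1(x[i - 1], theta[i]))

# Check every IC_{i,k}: type i must prefer (x_i, t_i) to (x_k, t_k); k = 0 is the IR constraint.
ok = all(
    u1(x[i], theta[i]) - t[i] >= u1(x[k], theta[i]) - t[k] - 1e-12
    for i, k in itertools.product(range(1, len(x)), range(len(x)))
)
print("transfers:", [round(ti, 4) for ti in t])
print("all IC constraints satisfied:", ok)
```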
With Propositions 12 and 13, the principal's problem rewrites as
$$\max_x \sum_i p_i \left(u_0(x_i, \theta_i) + t_i(x)\right) \quad \text{subject to } 0 \le x_1 \le \cdots \le x_n.$$
The "relaxed" version of this problem is
$$\max_x \sum_i p_i \left(u_0(x_i, \theta_i) + t_i(x)\right) \quad \text{subject to } x \text{ nonnegative.}$$
The advantage of the relaxed problem is that both the objective function and the constraint are separable in $i$: the problem can be split into $n$ subproblems that can be solved separately.

In general, the solution to the relaxed problem can be different from the solution to the full problem. However, we can provide a condition under which the relaxed problem always has a solution that is nondecreasing in $i$. As that solution also solves the full problem, we can then focus without loss of generality on the relaxed problem. Let $P_i = \sum_{j=i}^n p_j$ and define, for every $x$ and $i$,
$$\phi(x, \theta_i) = u_0(x, \theta_i) + \frac{P_i}{p_i} u_1(x, \theta_i) - \frac{P_{i+1}}{p_i} u_1(x, \theta_{i+1}).$$

Proposition 14. Under A1, if $\phi$ is supermodular in $x$ and $\theta_i$, the relaxed problem has a solution that satisfies $0 \le x_1 \le \cdots \le x_n$.

Proof. First, notice that
$$\sum_i p_i \left(u_0(x_i, \theta_i) + t_i(x)\right) = \left(p_1 u_0(x_1,\theta_1) + p_1 u_1(x_1,\theta_1) - p_1 u_1(x_0,\theta_1)\right) + \left(p_2 u_0(x_2,\theta_2) + p_2 u_1(x_1,\theta_1) - p_2 u_1(x_0,\theta_1) + p_2 u_1(x_2,\theta_2) - p_2 u_1(x_1,\theta_2)\right) + \cdots$$
$$= -\sum_{j=1}^n p_j u_1(x_0,\theta_1) + p_1 u_0(x_1,\theta_1) + \sum_{j=1}^n p_j u_1(x_1,\theta_1) - \sum_{j=2}^n p_j u_1(x_1,\theta_2) + p_2 u_0(x_2,\theta_2) + \sum_{j=2}^n p_j u_1(x_2,\theta_2) - \sum_{j=3}^n p_j u_1(x_2,\theta_3) + \cdots$$
$$= -\sum_{j=1}^n p_j u_1(x_0,\theta_1) + \sum_{i=1}^n \left[p_i u_0(x_i,\theta_i) + P_i u_1(x_i,\theta_i) - P_{i+1} u_1(x_i,\theta_{i+1})\right] = -\sum_{j=1}^n p_j u_1(x_0,\theta_1) + \sum_i p_i \phi(x_i, \theta_i).$$

Claim: Suppose that $Y$ is a finite subset of the real line and that $f: Y \times \mathbb{R} \to \mathbb{R}$ is supermodular. Then $Y^*(z) \equiv \arg\max_{y} f(y, z)$ is nondecreasing in $z$ (given two finite real sets $Y''$ and $Y'$, we say that $Y'' \ge Y'$ if $\max Y'' \ge \max Y'$ and $\min Y'' \ge \min Y'$).

Proof of the claim: Suppose that $z'' > z'$ but $\max Y^*(z'') < \max Y^*(z')$. As $Y^*(z'')$ and $Y^*(z')$ are sets of maximizers,
$$f\left(\max Y^*(z''), z''\right) \ge f\left(\max Y^*(z'), z''\right) \quad \text{and} \quad f\left(\max Y^*(z'), z'\right) \ge f\left(\max Y^*(z''), z'\right),$$
and supermodularity (applied to $\max Y^*(z') > \max Y^*(z'')$ and $z'' > z'$) implies
$$f\left(\max Y^*(z'), z''\right) - f\left(\max Y^*(z''), z''\right) \ge f\left(\max Y^*(z'), z'\right) - f\left(\max Y^*(z''), z'\right) \ge 0.$$
Combined with the first maximizer inequality, it must be that $f\left(\max Y^*(z'), z''\right) = f\left(\max Y^*(z''), z''\right)$, i.e. $\max Y^*(z') \in Y^*(z'')$: a contradiction because we had assumed $\max Y^*(z'') < \max Y^*(z')$. The proof for $\min Y^*(z)$ is analogous.

The problem $\max_x \sum_i p_i \left(u_0(x_i,\theta_i) + t_i(x)\right)$ therefore rewrites as
$$\max_x \sum_i p_i \phi(x_i, \theta_i) = \sum_i p_i \max_{x_i} \phi(x_i, \theta_i).$$
By the claim, this problem has a nondecreasing solution: $x_1 \le \cdots \le x_n$. Then the solution of the relaxed problem
$$\max_x \sum_i p_i \phi(x_i, \theta_i) \quad \text{subject to } x \text{ nonnegative}$$
is also nondecreasing in $i$, which means that the monotonicity constraint of the full problem is not binding. A solution to the relaxed problem is a solution to the full problem. ∎

Sufficient conditions for $\phi$ to be supermodular can be provided. Suppose that:

(A2) $p_i / P_{i+1}$ is nondecreasing in $i$ for $i = 1, \ldots, n-1$;
(A3) $u_0$ is supermodular in $x$ and $\theta$;
(A4) $u_1(x, \theta_i) - u_1(x, \theta_{i+1})$ is supermodular in $x$ and $i$.

A2 requires that the hazard rate be nondecreasing in the type. A3 is the same as A1 but for the principal's utility. If $u_1$ is three times differentiable, A4 is equivalent to $\partial^3 u_1 / \partial \theta^2 \partial x \le 0$ everywhere.

Proposition 15. If A1 through A4 are satisfied, then $\phi$ is supermodular in $x$ and $\theta_i$.

Proof. The following two results are useful.

Claim 1: if $f(y,z)$ and $g(y,z)$ are supermodular functions, then $f(y,z) + g(y,z)$ is supermodular.

Claim 2: if $f(y,z)$ is supermodular in $y$ and $z$ and nonincreasing in $y$, and $g(z)$ is nonnegative and nonincreasing in $z$, then $f(y,z)\, g(z)$ is supermodular in $y$ and $z$.

Proof of Claim 2: As $f$ is nonincreasing in $y$ and $g$ is nonincreasing in $z$,
$$\left(f(y'', z'') - f(y', z'')\right)\left(g(z'') - g(z')\right) \ge 0.$$
Hence,
$$\left(f(y'', z'') - f(y', z'')\right) g(z'') \ge \left(f(y'', z'') - f(y', z'')\right) g(z') \ge \left(f(y'', z') - f(y', z')\right) g(z'),$$
where the second inequality is due to the fact that $f$ is supermodular and $g$ is nonnegative.

Rewrite:
$$\phi(x, \theta_i) = u_0(x, \theta_i) + \frac{P_{i+1}}{p_i}\left(u_1(x, \theta_i) - u_1(x, \theta_{i+1})\right) + u_1(x, \theta_i).$$
The first addend is supermodular in $x$ and $\theta_i$ (because $\theta_i$ is increasing in $i$, by A3). The second addend is the product of $P_{i+1}/p_i$, which is nonnegative and nonincreasing in $i$ (because of A2), and $u_1(x, \theta_i) - u_1(x, \theta_{i+1})$, which is nonincreasing in $x$ (by A1) and supermodular in $x$ and $i$ (by A4); by Claim 2 the product is supermodular. The third addend is supermodular by A1. By Claim 1, $\phi(x, \theta_i)$ is supermodular. ∎

The results presented so far are summarized by:

Proposition 16. Under A1 through A4, the principal's problem has a solution that satisfies, for $i = 1, \ldots, n$,
$$\hat x_i \in \arg\max_{x \ge 0}\ p_i u_0(x, \theta_i) + P_i u_1(x, \theta_i) - P_{i+1} u_1(x, \theta_{i+1}),$$
$$\hat t_i = \hat t_{i-1} + u_1(\hat x_i, \theta_i) - u_1(\hat x_{i-1}, \theta_i),$$
with $\hat x_0 = 0$ and $\hat t_0 = 0$.

In practice, one starts by computing $(\hat x, \hat t)$. If $(\hat x, \hat t)$ is unique, then it is a solution of the principal's problem. If there are multiple $(\hat x, \hat t)$, then there is at least one such that $\hat x$ is nondecreasing in $i$, and that is a solution to the principal's problem.
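Proposition 16 gives a recipe that is easy to run numerically: solve $n$ one-dimensional problems and chain the transfers. The sketch below is illustrative; $u_0(x) = -cx$ (the seller's cost, as in the two-type example), $u_1(x,\theta) = \theta\sqrt{x}$, and the discrete type distribution are assumptions chosen to satisfy A1-A4, not values from the lecture.

```python
# Prop. 16 recipe: n separate one-dimensional maximizations, then chained transfers.
# Illustrative sketch: u0(x) = -c*x, u1(x, theta) = theta*sqrt(x), and the type
# distribution below are assumptions (they satisfy A1-A4).
import numpy as np

theta = np.array([1.0, 1.5, 2.0])
p     = np.array([0.5, 0.25, 0.25])
P      = p[::-1].cumsum()[::-1]            # P_i = sum_{j >= i} p_j
P_next = np.append(P[1:], 0.0)             # P_{i+1}, with P_{n+1} = 0
theta_ext = np.append(theta, theta[-1])    # dummy theta_{n+1}; irrelevant since P_{n+1} = 0
c = 1.0

u0 = lambda x: -c * x
u1 = lambda x, th: th * np.sqrt(x)

grid = np.linspace(0.0, 4.0, 40_001)
x_hat = np.zeros(len(theta))
for i in range(len(theta)):
    virtual = p[i] * u0(grid) + P[i] * u1(grid, theta[i]) - P_next[i] * u1(grid, theta_ext[i + 1])
    x_hat[i] = grid[np.argmax(virtual)]

# Transfers chained from the local downward ICs, with x_hat_0 = 0, t_hat_0 = 0.
t_hat, x_prev = [], 0.0
for i in range(len(theta)):
    prev_t = t_hat[-1] if t_hat else 0.0
    t_hat.append(prev_t + u1(x_hat[i], theta[i]) - u1(x_prev, theta[i]))
    x_prev = x_hat[i]

print("x_hat:", np.round(x_hat, 4))
print("t_hat:", np.round(t_hat, 4))
# x_hat comes out nondecreasing, with no distortion for the top type.
```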
Mechanism Design with Multiple Agents

- Implementability
  - Bayesian vs. dominant
  - Groves mechanisms
  - Myerson-Satterthwaite
- Correlated types

Implementability

Assumptions:
(B1) $\theta_i$, $\theta_j$ independent; $\theta_i \sim P_i$ with strictly positive and differentiable density $p_i$.
(B2) Private values: $u_i(x, t_i, \theta_i)$ (and not $u_i(x, t_i, \theta)$), except possibly for the principal.
(B3) Quasilinear preferences. Agent: $u_i(x, t_i, \theta_i) = v_i(x, \theta_i) + t_i$. Principal: $u_0(x, t_0, \theta_0) = v_0(x, \theta_0) - \sum_{i=1}^I t_i$, or $u_0(x, t_0, \theta_0) = \sum_{i=0}^I v_i(x, \theta_i)$.

The allocation $x^*(\theta)$ is efficient if $x^*(\theta) \in \arg\max_x \sum_{i=0}^I v_i(x, \theta_i)$.

Bayesian/Dominant Implementation

Bayesian: it is a Bayesian equilibrium for the agents to play according to the principal's wishes.
Dominant: it is a dominant strategy for each agent to play according to the principal's wishes, independently of what the others do.

Two Revelation Principles:

(IC) For all $i$, $\theta_i$, $\hat\theta_i$:
$$E_{\theta_{-i}}\left[u_i\left(y(\theta_i, \theta_{-i}), \theta_i\right)\right] \ge E_{\theta_{-i}}\left[u_i\left(y(\hat\theta_i, \theta_{-i}), \theta_i\right)\right].$$

(DIC) For all $i$, $\theta_i$, $\hat\theta_i$, $\hat\theta_{-i}$:
$$u_i\left(y(\theta_i, \hat\theta_{-i}), \theta_i\right) \ge u_i\left(y(\hat\theta_i, \hat\theta_{-i}), \theta_i\right).$$

Obviously, (DIC) implies (IC). Mookherjee-Reichelstein identify conditions under which imposing (DIC) instead of (IC) involves no welfare loss.

Groves Mechanism

Any efficient $x^*$ can be implemented in dominant strategies.

Idea: externality payments — $i$'s transfer equals the other agents' payoff.

Suppose $x^*(\theta) \in \arg\max_x \sum_{i=0}^I v_i(x, \theta_i)$. We want to implement $x^*$. Let
$$t_i(\hat\theta) = \sum_{j \neq i} v_j\left(x^*(\hat\theta_i, \hat\theta_{-i}), \hat\theta_j\right) + \tau_i(\hat\theta_{-i}),$$
where $\tau_i$ is an arbitrary function.

Suppose (DIC) is not satisfied, i.e. there exist $\theta_i$, $\hat\theta_{-i}$, $\hat\theta_i \neq \theta_i$ such that
$$v_i\left(x^*(\hat\theta_i, \hat\theta_{-i}), \theta_i\right) + t_i(\hat\theta_i, \hat\theta_{-i}) > v_i\left(x^*(\theta_i, \hat\theta_{-i}), \theta_i\right) + t_i(\theta_i, \hat\theta_{-i}).$$
Then,
$$v_i\left(x^*(\hat\theta_i, \hat\theta_{-i}), \theta_i\right) + \sum_{j \neq i} v_j\left(x^*(\hat\theta_i, \hat\theta_{-i}), \hat\theta_j\right) > v_i\left(x^*(\theta_i, \hat\theta_{-i}), \theta_i\right) + \sum_{j \neq i} v_j\left(x^*(\theta_i, \hat\theta_{-i}), \hat\theta_j\right).$$
Let $\tilde\theta = (\theta_i, \hat\theta_{-i})$ and note that $x^*(\tilde\theta) \in \arg\max_x \sum_{i=0}^I v_i(x, \tilde\theta_i)$. Then, for all $x$,
$$v_i\left(x^*(\tilde\theta), \theta_i\right) + \sum_{j \neq i} v_j\left(x^*(\tilde\theta), \hat\theta_j\right) \ge v_i(x, \theta_i) + \sum_{j \neq i} v_j\left(x, \hat\theta_j\right).$$
So, taking $x = x^*(\hat\theta_i, \hat\theta_{-i})$,
$$v_i\left(x^*(\theta_i, \hat\theta_{-i}), \theta_i\right) + \sum_{j \neq i} v_j\left(x^*(\theta_i, \hat\theta_{-i}), \hat\theta_j\right) \ge v_i\left(x^*(\hat\theta_i, \hat\theta_{-i}), \theta_i\right) + \sum_{j \neq i} v_j\left(x^*(\hat\theta_i, \hat\theta_{-i}), \hat\theta_j\right).$$
Contradiction. ∎

Example 2: Second-Price Auction

A principal must allocate a good among $n$ agents. Agent $i$'s valuation for the good is $\theta_i$. Valuations are independently distributed among buyers.

Allocation: $\left(i^*, \{t_i\}_{i \in I}\right)$.

Agents' payoff:
$$v_i(x, t_i, \theta_i) = \begin{cases} \theta_i + t_i & \text{if } i = i^* \\ t_i & \text{if } i \neq i^*. \end{cases}$$
Principal's payoff: $v_0(x, t, \theta) = -\sum_{i=1}^I t_i$.

Find the Groves mechanism:

1. Efficient allocation:
$$\sum_{i=0}^I v_i(x, t_i, \theta_i) = \theta_{i^*}.$$
Maximize surplus by selecting $i^* = \arg\max_i \theta_i$.

2. Externality transfers:
$$\sum_{j \neq i} v_j\left(x^*(\hat\theta_i, \hat\theta_{-i}), \hat\theta_j\right) = \begin{cases} 0 & \text{if } i = i^* \\ \max_{j \neq i} \hat\theta_j & \text{if } i \neq i^*. \end{cases}$$
The transfers take the form
$$t_i(\hat\theta_i, \hat\theta_{-i}) = \begin{cases} 0 + \tau_i(\hat\theta_{-i}) & \text{if } \hat\theta_i = \max_j \hat\theta_j \\ \max_{j \neq i} \hat\theta_j + \tau_i(\hat\theta_{-i}) & \text{if } \hat\theta_i \neq \max_j \hat\theta_j. \end{cases}$$

3. If we fix $\tau_i(\hat\theta_{-i}) = -\max_{j \neq i} \hat\theta_j$, we obtain
$$t_i(\hat\theta_i, \hat\theta_{-i}) = \begin{cases} -\max_{j \neq i} \hat\theta_j & \text{if } \hat\theta_i > \max_{j \neq i} \hat\theta_j \\ 0 & \text{if } \hat\theta_i < \max_{j \neq i} \hat\theta_j, \end{cases}$$
which is the standard second-price auction.

Inefficiency with Budget Balance

Myerson-Satterthwaite (1983).

Coase Theorem: with complete contracts, two parties must achieve an efficient allocation. With asymmetric information this is not true.

Example: two-agent trading game, $x \in \{0, 1\}$.
- Seller (agent 1): $\theta_1^L = -5$, $\theta_1^H = 0$.
- Buyer (agent 2): $\theta_2^L = 1$, $\theta_2^H = 6$.
- Each type has probability $p_H = p_L = \tfrac12$.

Suppose a benevolent mechanism designer maximizes surplus (and hence efficiency). Then $x^*(\theta_1, \theta_2)$ is:

                   $\theta_2^L = 1$    $\theta_2^H = 6$
  $\theta_1^L = -5$     $x = 0$           $x = 1$
  $\theta_1^H = 0$      $x = 1$           $x = 1$

Without budget balance, we can use a Groves mechanism:
$$t_1(\theta_1, \theta_2) = x^*(\theta_1, \theta_2)\, \theta_2 + \tau_1(\theta_2), \qquad t_2(\theta_1, \theta_2) = x^*(\theta_1, \theta_2)\, \theta_1 + \tau_2(\theta_1),$$
where $t_1$ ($t_2$) is the monetary transfer to the seller (buyer). So,
$$t_1(\theta_1, \theta_2) = \begin{cases} 0 + \tau_1^L & \text{if } \theta_1 = -5,\ \theta_2 = 1 \\ 6 + \tau_1^H & \text{if } \theta_1 = -5,\ \theta_2 = 6 \\ 1 + \tau_1^L & \text{if } \theta_1 = 0,\ \theta_2 = 1 \\ 6 + \tau_1^H & \text{if } \theta_1 = 0,\ \theta_2 = 6 \end{cases} \qquad t_2(\theta_1, \theta_2) = \begin{cases} 0 + \tau_2^L & \text{if } \theta_1 = -5,\ \theta_2 = 1 \\ -5 + \tau_2^L & \text{if } \theta_1 = -5,\ \theta_2 = 6 \\ 0 + \tau_2^H & \text{if } \theta_1 = 0,\ \theta_2 = 1 \\ 0 + \tau_2^H & \text{if } \theta_1 = 0,\ \theta_2 = 6 \end{cases}$$
Total payments are
$$t_1 + t_2 = \begin{cases} 0 + \tau_1^L + \tau_2^L & \text{if } \theta_1 = -5,\ \theta_2 = 1 \\ 1 + \tau_1^H + \tau_2^L & \text{if } \theta_1 = -5,\ \theta_2 = 6 \\ 1 + \tau_1^L + \tau_2^H & \text{if } \theta_1 = 0,\ \theta_2 = 1 \\ 6 + \tau_1^H + \tau_2^H & \text{if } \theta_1 = 0,\ \theta_2 = 6 \end{cases}$$
⟹ there does not exist a Groves mechanism that satisfies budget balance (adding the first and last budget-balance equations and subtracting the middle two eliminates the $\tau$'s and leaves $0 + 6 - 1 - 1 = 4 \neq 0$).
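The inconsistency can also be checked mechanically: budget balance requires $t_1 + t_2 = 0$ in all four states, a linear system in $(\tau_1^L, \tau_1^H, \tau_2^L, \tau_2^H)$. A small sketch (illustrative; least squares is used only to test solvability):

```python
# Budget balance in the trading example: can tau's be chosen so that t1 + t2 = 0 in every state?
# The system is  base(state) + tau1(theta2) + tau2(theta1) = 0  for the four states.
import numpy as np

# Unknowns ordered as [tau1_L, tau1_H, tau2_L, tau2_H]  (tau1 depends on theta2, tau2 on theta1).
A = np.array([
    [1, 0, 1, 0],   # state (theta1=-5, theta2=1): 0 + tau1_L + tau2_L
    [0, 1, 1, 0],   # state (theta1=-5, theta2=6): 1 + tau1_H + tau2_L
    [1, 0, 0, 1],   # state (theta1= 0, theta2=1): 1 + tau1_L + tau2_H
    [0, 1, 0, 1],   # state (theta1= 0, theta2=6): 6 + tau1_H + tau2_H
], dtype=float)
base = np.array([0.0, 1.0, 1.0, 6.0])

tau, *_ = np.linalg.lstsq(A, -base, rcond=None)
residual = A @ tau + base
print("best tau:", np.round(tau, 3))
print("budget imbalance per state:", np.round(residual, 3))
# The residual cannot be driven to zero: no Groves mechanism balances the budget here.
```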
Is there any mechanism that implements $x^*$ and satisfies BB?

Define the interim expected trade probabilities
$$\bar x_1^H = E_{\theta_2}\left[x^*(\theta_1^H, \theta_2)\right] = 1, \quad \bar x_1^L = E_{\theta_2}\left[x^*(\theta_1^L, \theta_2)\right] = \tfrac12, \quad \bar x_2^H = E_{\theta_1}\left[x^*(\theta_1, \theta_2^H)\right] = 1, \quad \bar x_2^L = E_{\theta_1}\left[x^*(\theta_1, \theta_2^L)\right] = \tfrac12,$$
and the interim expected transfers $\bar t_i^L$, $\bar t_i^H$ analogously. Show that IR$_1^L$, IR$_2^L$, IC$_1^H$, IC$_2^H$, and BB are inconsistent.

IR$_1^L$: $\theta_1^L \bar x_1^L + \bar t_1^L \ge 0 \Rightarrow -5 \cdot \tfrac12 + \bar t_1^L \ge 0 \Rightarrow \bar t_1^L \ge 2.5$ (min price for the seller with a high-quality good).

IR$_2^L$: $\theta_2^L \bar x_2^L + \bar t_2^L \ge 0 \Rightarrow 1 \cdot \tfrac12 + \bar t_2^L \ge 0 \Rightarrow \bar t_2^L \ge -0.5$ (max price for the buyer with low demand).

IC$_1^H$: $\theta_1^H \bar x_1^H + \bar t_1^H \ge \theta_1^H \bar x_1^L + \bar t_1^L \Rightarrow 0 \cdot 1 + \bar t_1^H \ge 0 \cdot \tfrac12 + \bar t_1^L \Rightarrow \bar t_1^H \ge \bar t_1^L$ (min price for the seller with a low-quality good).

IC$_2^H$: $\theta_2^H \bar x_2^H + \bar t_2^H \ge \theta_2^H \bar x_2^L + \bar t_2^L \Rightarrow 6 \cdot 1 + \bar t_2^H \ge 6 \cdot \tfrac12 + \bar t_2^L \Rightarrow \bar t_2^H \ge \bar t_2^L - 3$ (max price for the buyer with high demand).

By BB, let $t(\theta_1, \theta_2) = t_1(\theta_1, \theta_2) = -t_2(\theta_1, \theta_2)$. Then,
$$\bar t_1^L = \tfrac12\left(t(-5,1) + t(-5,6)\right), \quad \bar t_1^H = \tfrac12\left(t(0,1) + t(0,6)\right), \quad \bar t_2^L = -\tfrac12\left(t(-5,1) + t(0,1)\right), \quad \bar t_2^H = -\tfrac12\left(t(-5,6) + t(0,6)\right).$$

Rewrite:
- IR$_1^L$: $t(-5,1) + t(-5,6) \ge 5$
- IR$_2^L$: $t(-5,1) + t(0,1) \le 1$
- IR$_1^L$ + IR$_2^L$: $t(-5,6) - t(0,1) \ge 4$

and
- IC$_1^H$: $t(0,1) + t(0,6) \ge t(-5,1) + t(-5,6)$
- IC$_2^H$: $t(-5,6) + t(0,6) \le t(-5,1) + t(0,1) + 6$
- IC$_1^H$ + IC$_2^H$: $2\left(t(-5,6) - t(0,1)\right) \le 6$, i.e. $t(-5,6) - t(0,1) \le 3$

Put together: $t(-5,6) - t(0,1) \ge 4$ and $t(-5,6) - t(0,1) \le 3$ ⟹ contradiction.

The example generalizes:

Theorem 17 (Myerson-Satterthwaite). Suppose that $\theta_1$ and $\theta_2$ have differentiable, strictly positive densities on $[\theta_1^L, \theta_1^H]$ and $[\theta_2^L, \theta_2^H]$, and that both $x^* = 1$ and $x^* = 0$ have positive probability. Then there is no mechanism that satisfies IC, IR, and BB.

Correlated Types

Cremer-McLean (1985), risk neutrality.

So far we have assumed that the $\theta$'s are independent across agents. Suppose instead they are perfectly correlated: $\theta_1 = \theta_2$. Then there is a "shoot-the-liar" mechanism: impose a large fine on the agents whenever $\hat\theta_1 \neq \hat\theta_2$. There exists a Bayesian equilibrium that induces full revelation at no cost for the principal.

Imperfect correlation? Take $p(\theta_i \mid \theta_{-i})$: $\theta_{-i}$ is informative about $\theta_i$.

Full rank: suppose the conditional distributions $\{p(\cdot \mid \theta_i)\}_{\theta_i \in \Theta_i}$ are linearly independent.

Cremer-McLean: under risk neutrality and full rank, the principal can implement any $x$ without leaving any rent to the agents.

Example: reconsider the auction with two types. Suppose $\theta_i \in \{L, H\}$ with equal probability, and with
$$\rho \equiv \Pr\left(\theta_i = H \mid \theta_j = H\right) = \Pr\left(\theta_i = L \mid \theta_j = L\right) \in \left(\tfrac12, 1\right).$$

The principal offers the following mechanism. First there is a bet: agent $i = 1, 2$ reports $\hat\theta_i$; he receives a payment of $M$ if $\hat\theta_i = \hat\theta_j$ and he pays $\frac{\rho}{1-\rho} M$ if $\hat\theta_i \neq \hat\theta_j$. Next, the principal sells the good to the agent with the higher $\hat\theta_i$ at price $\hat\theta_i$.

If both agents tell the truth, the bet has an expected value of zero:
$$\Pr(\theta_i = \theta_j)\, M - \Pr(\theta_i \neq \theta_j)\, \frac{\rho}{1-\rho} M = \rho M - (1-\rho)\frac{\rho}{1-\rho} M = 0.$$

If one agent lies, the bet has a negative expected value:
$$\Pr(\theta_i \neq \theta_j)\, M - \Pr(\theta_i = \theta_j)\, \frac{\rho}{1-\rho} M = (1-\rho) M - \rho\, \frac{\rho}{1-\rho} M = \frac{1 - 2\rho}{1-\rho}\, M < 0.$$

By choosing an appropriately large $M$, the principal makes sure that the agent never lies. Note that the participation constraint is satisfied. This does not work if $\rho = \tfrac12$.

Drawbacks of this approach?
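A two-line check of the bet's expected values (an illustrative sketch; the specific $\rho$ and $M$ are assumptions):

```python
# Expected value of the Cremer-McLean side bet, truthful vs. lying report.
# rho = Pr(types match); the bet pays +M on matching reports and -rho/(1-rho)*M otherwise.
rho, M = 0.7, 10.0
truthful = rho * M - (1 - rho) * (rho / (1 - rho)) * M
lying    = (1 - rho) * M - rho * (rho / (1 - rho)) * M
print(f"truthful: {truthful:+.3f}   lying: {lying:+.3f}")
# truthful: +0.000; lying is strictly negative whenever rho > 1/2, and scaling M up
# makes the expected loss from lying arbitrarily large.
```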
Lecture 4: Dynamic Games of Incomplete Information

Combine subgame perfection and Bayesian equilibrium.

Main challenge: the action a player chooses can signal her type to players who move afterwards. The player realizes the signalling component and factors it into her choice. The following players realize that she realizes, and so on.

Road Map
- The basic signalling game.
- Definition of perfect Bayesian equilibrium.
- Examples:
  1. Reputation game — distinction between separating, semi-separating, and pooling; method for checking existence of PBE.
  2. Spence's job signalling — multiple equilibria; the Intuitive Criterion.
  3. Money burning — single crossing condition; advertising.
- Generalization.

The Basic Signalling Game

Player 1: the sender, has private information $\theta \in \Theta$. Player 2: the receiver, no private information. Prior distribution $p$ on $\Theta$.
1. The sender observes $\theta$ and selects $a_1 \in A_1$;
2. The receiver observes $a_1$ and selects $a_2 \in A_2$.
Payoffs: $u_1(a_1, a_2, \theta)$ and $u_2(a_1, a_2, \theta)$.

Perfect Bayesian Equilibrium

Mixed strategy for player 1 (distribution over $A_1$ given $\theta$): $\sigma_1(\cdot \mid \theta)$.
Belief for player 2 (distribution over $\Theta$ given $a_1$): $\mu(\cdot \mid a_1)$.
Mixed strategy for player 2 (distribution over $A_2$ given $a_1$): $\sigma_2(\cdot \mid a_1)$.

A perfect Bayesian equilibrium (PBE) is a strategy profile $(\sigma_1^*, \sigma_2^*)$ and a belief $\mu^*$ such that:

1. Player 1's strategy is optimal. For any type, the action that the sender plays maximizes her expected payoff given the receiver's equilibrium play:
$$\forall \theta,\ \forall a_1 \text{ such that } \sigma_1^*(a_1 \mid \theta) > 0:\quad a_1 \in \arg\max_{a_1'} \sum_{a_2} \sigma_2^*(a_2 \mid a_1')\, u_1(a_1', a_2, \theta).$$

2. Player 2's belief is consistent. For every action that the sender plays with positive probability, the receiver derives his belief on the sender's type using Bayes' theorem:
$$\forall a_1 \text{ such that } \sum_{\theta' \in \Theta} p(\theta')\, \sigma_1^*(a_1 \mid \theta') > 0:\quad \mu^*(\theta \mid a_1) = \frac{p(\theta)\, \sigma_1^*(a_1 \mid \theta)}{\sum_{\theta' \in \Theta} p(\theta')\, \sigma_1^*(a_1 \mid \theta')}.$$

3. Player 2's strategy is optimal. For every action the sender could play, the receiver's action is optimal given his belief:
$$\forall a_1,\ \forall a_2^*(a_1) \text{ such that } \sigma_2^*(a_2^*(a_1) \mid a_1) > 0:\quad a_2^*(a_1) \in \arg\max_{a_2} \sum_\theta \mu^*(\theta \mid a_1)\, u_2(a_1, a_2, \theta).$$

Out-of-equilibrium beliefs: the consistency requirement applies only to actions that are played with positive probability in equilibrium. Suppose that in a certain equilibrium action $a_1$ is played with probability zero. Then no restriction is imposed on the belief $\mu^*(\cdot \mid a_1)$.

Finding PBEs is a form of art. Practice, practice, practice.

1. Reputation Game

Kreps-Wilson (1982), Milgrom-Roberts (1982).

Two firms: incumbent ($i = 1$), challenger ($i = 2$). Two periods.
1. The incumbent observes $\theta \in \{\text{sane}, \text{crazy}\}$, where $\Pr(\text{sane}) = p$. She chooses $a_1 \in \{\text{prey}, \text{accommodate}\}$.
2. The challenger chooses $a_2 \in \{\text{stay}, \text{exit}\}$.

Payoffs are:

  $\theta$ = sane:                      $\theta$ = crazy:
                 stay       exit                      stay       exit
  prey          P1, P2     M1, 0       prey          K1, K2     K1, 0
  accommodate   D1, D2     M1, 0       accommodate   D1, D2     M1, 0

with
$$P_1 < D_1 < M_1 < K_1, \qquad K_2 < 0 < P_2 < D_2.$$
Also assume that the proportion of crazy types is not too high:
$$p P_2 \ge -(1-p) K_2. \tag{5}$$

What is the set of PBE of this game?

The crazy type has a dominant strategy: he should always prey, $a_1(\text{crazy}) = \text{prey}$. What does the sane type do? Let $\sigma = \Pr\left(a_1(\text{sane}) = \text{prey}\right)$.

This is the reduced form of a game of entry deterrence. The incumbent monopolist faces a challenger. Some crazy monopolists love to fight on price, but the sane majority would rather accommodate entry than engage in a long-term fight. The challenger enters the market without knowing what type the monopolist is. The incumbent can engage in a price battle (prey) or collude (accommodate). Some sane monopolists may want to pretend to be crazy. The challenger can stay or leave. If the challenger stays, a crazy incumbent keeps fighting while a sane incumbent colludes. More generally: this is a theory of threats.

Three Classes of Equilibria

1. Separating equilibrium: $\sigma = 0$. A sane incumbent and a crazy incumbent always take different actions. In equilibrium the challenger knows the incumbent's type.
2. Semi-separating equilibrium: $\sigma \in (0, 1)$. A sane type and a crazy type sometimes choose different actions, but not always.
3. Pooling equilibrium: $\sigma = 1$. The two types always choose the same action.

Separating Equilibrium

Proposition 18. There exists no separating equilibrium.

Strategy of proof:
1. Assume there exists a separating equilibrium (the sane incumbent accommodates: $\sigma = 0$).
2. Determine the challenger's belief, then determine the challenger's optimal strategy.
3. Determine the incumbent's optimal strategy.
4. Show that the incumbent's optimal strategy is not to accommodate.

If $\sigma = 0$, the challenger's belief is, for all $a_1$ played with positive probability,
$$\mu^*(\theta \mid a_1) = \frac{p(\theta)\, \sigma_1^*(a_1 \mid \theta)}{\sum_{\theta'} p(\theta')\, \sigma_1^*(a_1 \mid \theta')}.$$
That is,
$$\mu^*(\text{sane} \mid \text{prey}) = 0, \qquad \mu^*(\text{sane} \mid \text{accommodate}) = 1.$$
There are no out-of-equilibrium beliefs.

The challenger's optimal strategy: for all $a_1$ and all $a_2^*(a_1)$ such that $\sigma_2^*(a_2^*(a_1) \mid a_1) > 0$,
$$a_2^*(a_1) \in \arg\max_{a_2} \sum_\theta \mu^*(\theta \mid a_1)\, u_2(a_1, a_2, \theta).$$
Suppose $a_1 = \text{prey}$.
The expected payoff is
$$\sum_\theta \mu^*(\theta \mid \text{prey})\, u_2(\text{prey}, a_2, \theta) = u_2(\text{prey}, a_2, \text{crazy}) = \begin{cases} K_2 & \text{if } a_2 = \text{stay}, \\ 0 & \text{if } a_2 = \text{exit}. \end{cases}$$
The optimal strategy is to exit.

Suppose $a_1 = \text{accommodate}$. The expected payoff is
$$\sum_\theta \mu^*(\theta \mid \text{accommodate})\, u_2(\text{accommodate}, a_2, \theta) = u_2(\text{accommodate}, a_2, \text{sane}) = \begin{cases} D_2 & \text{if } a_2 = \text{stay}, \\ 0 & \text{if } a_2 = \text{exit}. \end{cases}$$
The optimal strategy is to stay.

To recap, the challenger's optimal strategy is
$$a_2^*(a_1) = \begin{cases} \text{stay} & \text{if } a_1 = \text{accommodate}, \\ \text{exit} & \text{if } a_1 = \text{prey}. \end{cases}$$

The incumbent's optimal strategy: for all $\theta$ and all $a_1$ such that $\sigma_1^*(a_1 \mid \theta) > 0$,
$$a_1 \in \arg\max_{a_1'} \sum_{a_2} \sigma_2^*(a_2 \mid a_1')\, u_1(a_1', a_2, \theta).$$
Given the challenger's strategy (which we saw above), the incumbent's expected payoff is
$$\sum_{a_2} \sigma_2^*(a_2 \mid a_1)\, u_1(a_1, a_2, \theta) = \begin{cases} u_1(\text{accommodate}, \text{stay}, \theta) & \text{if } a_1 = \text{accommodate}, \\ u_1(\text{prey}, \text{exit}, \theta) & \text{if } a_1 = \text{prey}. \end{cases}$$
We already know that preying is optimal when crazy. Suppose $\theta = \text{sane}$. The incumbent gets
$$u_1(\text{accommodate}, \text{stay}, \text{sane}) = D_1 \text{ if } a_1 = \text{accommodate}, \qquad u_1(\text{prey}, \text{exit}, \text{sane}) = M_1 \text{ if } a_1 = \text{prey}.$$
As $M_1 > D_1$, a sane incumbent's optimal strategy is to prey. We have a contradiction, because we assumed that a sane incumbent accommodates. There exists no separating equilibrium. ∎

Semi-separating Equilibrium

Proposition 19. There exists a unique semi-separating equilibrium. A sane incumbent preys with probability $\sigma = \frac{-(1-p) K_2}{p P_2}$. If the incumbent accommodates, the challenger stays. If the incumbent preys, the challenger stays with probability $\frac{M_1 - D_1}{M_1 - P_1}$.

Strategy of proof: same as for the separating equilibrium, but instead of a contradiction we get an equilibrium.

Suppose $\sigma \in (0, 1)$.

The challenger's belief is
$$\mu^*(\text{sane} \mid \text{prey}) = \frac{p\sigma}{p\sigma + 1 - p}, \qquad \mu^*(\text{sane} \mid \text{accommodate}) = 1.$$
There are no out-of-equilibrium beliefs.

The challenger's optimal strategy: the expected payoff is
$$\sum_\theta \mu^*(\theta \mid a_1)\, u_2(a_1, a_2, \theta) = \mu^*(\text{sane} \mid a_1)\, u_2(a_1, a_2, \text{sane}) + \left(1 - \mu^*(\text{sane} \mid a_1)\right) u_2(a_1, a_2, \text{crazy}).$$
Suppose $a_1 = \text{accommodate}$. If $a_2 = \text{stay}$, the expected payoff is $D_2$; if $a_2 = \text{exit}$, the payoff is $0$. Therefore the challenger should stay.

Suppose $a_1 = \text{prey}$. If $a_2 = \text{stay}$, the expected payoff is
$$\mu^*(\text{sane} \mid \text{prey})\, P_2 + \left(1 - \mu^*(\text{sane} \mid \text{prey})\right) K_2 = \frac{p\sigma}{p\sigma + 1 - p}\, P_2 + \frac{1 - p}{p\sigma + 1 - p}\, K_2.$$
If $a_2 = \text{exit}$, the expected payoff is $0$. Therefore the challenger stays if and only if
$$p\, \sigma\, P_2 \ge -(1-p) K_2.$$
That is, the challenger
- exits if $\sigma < \frac{-(1-p)K_2}{p P_2}$,
- is indifferent if $\sigma = \frac{-(1-p)K_2}{p P_2}$,
- stays if $\sigma > \frac{-(1-p)K_2}{p P_2}$.

Note that $\frac{-(1-p)K_2}{p P_2} \in (0, 1)$ because of (5).

The incumbent's optimal strategy: in order for $\sigma \in (0, 1)$, the incumbent's expected payoff must be the same when he preys and when he accommodates.
- If the incumbent accommodates, the challenger stays and the payoff is $D_1$.
- If the incumbent preys, let $\rho = \Pr\left(a_2(\text{prey}) = \text{stay}\right)$. The incumbent's expected payoff is $\rho P_1 + (1 - \rho) M_1$.
The payoffs are equal if
$$\rho = \frac{M_1 - D_1}{M_1 - P_1} \in (0, 1).$$
But if $\rho \in (0, 1)$, it must be that
$$\sigma = \frac{-(1-p) K_2}{p P_2}.$$

To recap: in a semi-separating equilibrium, it must be that
$$\sigma = \frac{-(1-p) K_2}{p P_2}, \qquad \rho = \frac{M_1 - D_1}{M_1 - P_1}.$$

For instance, suppose that $p = \tfrac34$, $K_2 = -1$, $P_2 = 1$, $M_1 = 4$, $D_1 = 2$, and $P_1 = 1$. Then,
$$\sigma = \frac13, \qquad \frac{p\sigma}{p\sigma + 1 - p} = \frac12, \qquad \rho = \frac23.$$
A sane incumbent preys with probability $\tfrac13$. If the incumbent preys, the challenger believes that she is sane with probability $\tfrac12$. If the incumbent preys, the challenger stays with probability $\tfrac23$.
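The indifference conditions behind these numbers are easy to verify mechanically; a minimal check (an illustrative sketch using the parameter values from the example above):

```python
# Check the semi-separating equilibrium of the reputation game for the example parameters.
p, K2, P2 = 0.75, -1.0, 1.0          # prior on sane, challenger payoffs
M1, D1, P1 = 4.0, 2.0, 1.0           # incumbent payoffs

sigma = -(1 - p) * K2 / (p * P2)     # prob. a sane incumbent preys
mu = p * sigma / (p * sigma + 1 - p) # belief Pr(sane | prey)
rho = (M1 - D1) / (M1 - P1)          # prob. the challenger stays after prey

print(f"sigma = {sigma:.3f}, mu(sane|prey) = {mu:.3f}, rho = {rho:.3f}")

# Challenger indifferent after prey: staying must give the same payoff as exiting (0).
print("challenger stay payoff after prey:", mu * P2 + (1 - mu) * K2)
# Sane incumbent indifferent: accommodating (D1) vs. preying (rho*P1 + (1-rho)*M1).
print("accommodate:", D1, " prey:", rho * P1 + (1 - rho) * M1)
```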
Pooling Equilibrium

Proposition 20. There exists no pooling equilibrium.

Strategy of proof:
1. Assume there exists a pooling equilibrium (the sane incumbent always preys: $\sigma = 1$).
2. Determine the challenger's belief (watch out for out-of-equilibrium beliefs).
3. Determine the challenger's optimal strategy for every possible out-of-equilibrium belief.
4. Show that the incumbent's optimal strategy is to accommodate.

Challenger's belief: if $\sigma = 1$, the only action that the incumbent plays in equilibrium is $a_1 = \text{prey}$. Belief consistency requires
$$\mu^*(\theta \mid \text{prey}) = \frac{p(\theta)\, \sigma_1^*(\text{prey} \mid \theta)}{\sum_{\theta'} p(\theta')\, \sigma_1^*(\text{prey} \mid \theta')} = p(\theta).$$
As both types play the same action, the belief equals the prior. Instead, the belief $\mu^*(\cdot \mid \text{accommodate})$ is out of equilibrium because $a_1 = \text{accommodate}$ is never played; belief consistency imposes no requirement on it. Let
$$\nu = \mu^*(\text{sane} \mid \text{accommodate}) \in [0, 1].$$
As we want to show that there exists no pooling equilibrium, we have to examine all possible values of $\nu$.

Challenger's strategy: if $a_1 = \text{prey}$, the challenger gets
$$p P_2 + (1-p) K_2 \text{ if } a_2 = \text{stay}, \qquad 0 \text{ if } a_2 = \text{exit}.$$
The challenger stays because, by (5), $p P_2 + (1-p) K_2 \ge 0$.

If $a_1 = \text{accommodate}$, the challenger's payoff from staying is $D_2$, independent of whether $\theta = \text{sane}$ or $\theta = \text{crazy}$. The challenger stays. The payoff does not depend on the out-of-equilibrium belief $\nu$. Lucky case. The challenger's optimal strategy is to always stay.

Incumbent's strategy: the incumbent knows that the challenger stays. If $\theta = \text{sane}$, the payoff is $P_1$ if $a_1 = \text{prey}$ and $D_1$ if $a_1 = \text{accommodate}$. As $D_1 > P_1$, a sane incumbent accommodates. This is a contradiction because we had assumed that a sane incumbent always preys. ∎

To Recap

Proposition 21. The reputation game has a unique perfect Bayesian equilibrium. It is a semi-separating equilibrium in which a sane incumbent preys with probability $\sigma = \frac{-(1-p)K_2}{p P_2}$.

2. Spence's Job Signalling Game

Spence (1974).

Players: a worker, a large number of firms.
1. The worker observes her type $\theta \in \{\theta_L, \theta_H\}$, where $\theta_L < \theta_H$ and $\Pr(\theta = \theta_H) = \lambda$, and she chooses a level of education $a \in [0, \infty)$ at cost $\frac{a}{\theta}$.
2. The firms observe $a$ and offer the worker a wage $w$.
3. The worker selects one of the firms and generates a product of value $\theta$.

The payoff of the firm who hires the worker is $\theta - w$. The payoff of the worker is $w - \frac{a}{\theta}$.

Idea: education is a sorting device (extreme case).

Reduced form of the game (assume firms have symmetric beliefs):
1. The worker observes her type $\theta \in \{\theta_L, \theta_H\}$, where $\theta_L < \theta_H$, and she chooses a level of education $a \in [0, \infty)$ at cost $\frac{a}{\theta}$.
2. The worker receives a wage
$$w = \mu(\theta_H \mid a)\, \theta_H + \mu(\theta_L \mid a)\, \theta_L,$$
where $\mu(\cdot \mid a)$ is the firms' belief on the worker's type.

Denote the firms' belief as $\hat\mu(a) = \mu(\theta_H \mid a)$. The wage is then
$$w = \theta_L + \hat\mu(a)\left(\theta_H - \theta_L\right).$$

Separating Equilibria

Suppose $a^*(\theta_L) \neq a^*(\theta_H)$.

Belief/wage on the equilibrium path:
$$\hat\mu\left(a^*(\theta_L)\right) = 0, \qquad \hat\mu\left(a^*(\theta_H)\right) = 1.$$
Belief/wage off the equilibrium path:
$$\hat\mu(a) \in [0, 1] \quad \forall a \notin \{a^*(\theta_L), a^*(\theta_H)\}.$$

1. The bad worker should not want to deviate:
$$\theta_L - \frac{a^*(\theta_L)}{\theta_L} \ge \theta_L + \hat\mu(a)(\theta_H - \theta_L) - \frac{a}{\theta_L} \quad \forall a.$$
This implies that $a^*(\theta_L) = 0$: if not, the worker could get a higher payoff by playing $a = 0$. With $a^*(\theta_L) = 0$, the condition above re-writes as
$$\hat\mu(a) \le \frac{a}{\theta_L(\theta_H - \theta_L)} \quad \forall a.$$

2. The bad worker should not want to deviate to $a^*(\theta_H)$:
$$\theta_L \ge \theta_H - \frac{a^*(\theta_H)}{\theta_L}. \tag{6}$$

3. The good worker should not want to deviate to $a^*(\theta_L) = 0$:
$$\theta_H - \frac{a^*(\theta_H)}{\theta_H} \ge \theta_L. \tag{7}$$

Conditions (6) and (7) together imply
$$\theta_L(\theta_H - \theta_L) \le a^*(\theta_H) \le \theta_H(\theta_H - \theta_L).$$

4. The good worker should not want to deviate to any other $a$:
$$\theta_H - \frac{a^*(\theta_H)}{\theta_H} \ge \theta_L + \hat\mu(a)(\theta_H - \theta_L) - \frac{a}{\theta_H} \quad \forall a.$$
The condition rewrites as
$$\hat\mu(a) \le 1 - \frac{a^*(\theta_H) - a}{\theta_H(\theta_H - \theta_L)} \quad \forall a.$$
Example: Let L = 1 and H = 2. The condition on a ( H ) becomes 1 138 a ( H) 2. a L) ! For every a 2 [1; 2], there exists a separating equilibrium in which a ( L) = 0; a ( H ) = a; ^ (a ) = ( 0 1 139 if a < a if a a. Pooling Equilibria Suppose a ( L) = a ( H ) = a. Belief/wage on the equilibrium path: ^ (a ) = : Belief/wage o the equilibrium path: ^ (a) 2 [0; 1] 8a 6= a: 1. The bad worker should not want to deviate: a a ^ (a ) ( H 8a ( H L) L) L L 2. In particular, the bad worker should not want to deviate to a = 0: a ^ (0) ( H ( H L) : L) L This imposes constraint a ( ^ (0)) ( H 140 L) L: 3. The good worker should not want to deviate: ( H a L) ^ (a ) ( H H L) a H 8a 4. In particular, the good worker should not want to deviate to a = 0, which imposes constraint a ( ^ (0)) ( H L) H : It is easy to check that the conditions in 1 through 4 are satis ed if ^ (a ) = a ( 0 if a < a if a a. ( H L) L: which determines a continuum of pooling equilibria. 141 Intuitive Criterion As we saw in the last example, there may be multiple perfect Bayesian equilibria. Can we select among equilibria? Intuitive Criterion: Cho and Kreps 1987 Idea: some out-of-equilibrium beliefs are unreasonable. Suppose there are only two types: 2 f 1; 2g Take a PBE and consider out-of-equilibrium action a ^1 If: 1. Type 1 gets a strictly higher utility by deviating to a ^1 if the receiver has belief ( = 1ja ^1 ) = 1; 142 2. Type 2 does not get a strictly higher utility by deviating to a ^1 for any belief ( = 1ja ^1) that the receiver may hold: Then, we say that the PBE fails the Intuitive Criterion. Intuition: the Good Type tells the receiver: \I am going to make a deviation that cannot possibly be pro table for a Bad Type. Therefore, when I deviate, you must believe I am a Good Type." 143 Applying the Intuitive Criterion to Spence: Proposition 22 The only PBE that survives the Intuitive Criterion is the separating equilibrium with a ( H) = L ( H L). Separating Equilibria Take a separating equilibrium with L ( H L) < a ( H) H( H L) and consider a deviation to a ^ = L( H L). 1. Type H gets a strictly higher utility by deviating to a ^ if the receiver has belief ^ (^ a) = 1, because H L( H L) H 144 > H a ( H) H : 2. Type 2 does not get a strictly higher utility by deviating to a ^1 for any belief ^ (^ a) 2 [0; 1] because a) ( H L + ^ (^ L L) L( H L) L 8 ^ (^ a) 2 [0; 1] : The only separating equilibrium that survives the Intuitive Criterion is the one in which a ( H) = L ( H 145 L) : Pooling Equilibria Take a pooling equilibrium with a ( L) = a ( H ) = a ( H L) H : and consider a deviation to a ^ = a + (1 )( H L) L: 1. Type H gets a strictly higher utility by deviating to a ^ if the receiver has belief ^ (^ a) = 1, because a ^ )( H (1 L) L = H H H > = (1 H H )( H L H + (1 ) L L) L a a H : H 2. Type 2 does not get a strictly higher utility by deviating to a ^1 for any belief ^ (^ a) 2 [0; 1] because a ^ a = H + (1 : ) L H L L All pooling equilibria fail the Intuitive Criterion. 146 a H Spence Signalling in Practice Two stories about education: Productivity increase. Sorting device. Very di erent policy implications. How can you tell them apart empirically? Weiss (1995), Altonji (1995), Altonji and Pierret (2001) 147 Generalization of PBE We focused on signalling games: only two players; in each period, only one player moves. In other cases, PBE may be too weak. 
Sequential equilibrium (Kreps-Wilson 1982) 162 Lecture 5: Moral Hazard Mas Colell, Whinston, Green Principal-agent: hidden type or hidden action De nitions : observed pro t, 2 [ L; H ]. e: agent's action (e ort), e 2 feL; eH g. Conditional pro t density: f ( je) with f > 0 for all ;e First-order stochastic dominance: F ( jeH ) F ( jeL) for all with a strict inequality for some integration by parts): , implying (by E ( jeH ) > E ( jeL) : 163 Agent maximizes v (w) and g (eH ) > g (eL). g (e) with v 0 > 0, v 00 Principal maximizes w Reservation value u 164 0, Observable E ort Contract speci es e and w ( ) max e;w Z H ( w ( )) f ( je) d L subject to Z H L v (w ( )) f ( je) d g (e) (IR) u: Step 1 Given e, what is the cheapest compensation scheme that implements it? min w Z H L w ( ) f ( je) d subject to Z H L v (w ( )) f ( je) d g (e) u: Lagrangian Z L = + H L Z w ( ) f ( je) d H L v (w ( )) f ( je) d 165 g (e) u ! First-order condition f ( je) = v 0 (w ( )) f ( je) 8 which rewrites as 1 = : 0 v (w ( )) 8 If v 00 > 0 (risk aversion), v 0 (w ( )) is a constant. Therefore, w ( ) = we such that v (we ) g (e) = u. If v 00 = 0 (risk neutrality), any w that satis es IR works. Step 2 Choose the optimal e max e Z H f ( je) d L v 1 (u + g (e)) With a risk-neutral agent (v (w) = w), max e Z H L f ( je) d 166 g (e) Unobservable E ort: Risk-Neutral Agent The optimal contract involves \selling the store" w( ) = w: Z f ( je) d The agent solves e = arg max e H L g (e): The principal sets Z H L f ( je ) d g (e ) w = u: Surplus maximization and zero rent. Proposition 26 With a risk neutral agent and unobservable e ort, the optimal contract results in the same e ort and the same utility as when e ort is observed. 167 Unobservable E ort: Risk-Averse Agent Cost of implementing e min w Z H L w ( ) f ( je) d subject to Z H L v (w ( )) f ( je) d e 2 arg max e~ Z H L g (e) v (w ( )) f ( je~) d u: (IR) g (~ e) (IC) If e = eL, IC is easy to satisfy. Set a constant wage w that satis es IR as an equality. IC is satis ed too because g (eH ) > g (eL): weL = v 1 (u + g (eL)) : If e = eH , IC becomes Z H Z L H L v (w ( )) f ( jeH ) d g (eH ) v (w ( )) f ( jeL) d g (eL) 168 Lagrangian: Z L = + L Z Z L H L H L Foc: for all w ( ) f ( jeH ) d H Z + @L = @w H v (w ( )) f ( jeH ) d g (eH ) v (w ( )) f ( jeH ) d g (eH ) u ! ! g (eL) : v (w ( )) f ( jeL) d , f ( jeH ) + v 0 (w ( )) f ( jeH ) + v 0 (w ( )) f ( jeH ) v 0 (w ( )) f ( jeL) which rewrites as 1 = 0 v (w ( )) + Lemma 27 If e = eH , Proof: Suppose v 0 > 0. = 1 f ( jeL) f ( jeH ) > 0 and ! (FOC) > 0. = 0. Contradiction because 169 Suppose = 0. 9 ~ such that 8 2 ~ we have that f ( jeL) ~ > 1. Then, for 2 f ( je ) H 1 =0+ 0 v (w ( )) ( )<0 Suppose = 0. w ( ) is constant =) the agent chooses e = eL. 170 Interpreting FOC 1 = 0 v (w ( )) + 1 f ( jeL) f ( jeH ) ! Let w ^ be such that v 0(1w) = . ^ f ( jeL) < 1 =) w ( ) > w ^ f ( jeH ) f ( jeL) > 1 =) w ( ) < w ^ f ( jeH ) f ( jeL) : likelihood ratio (but beware of a statistical f ( jeH ) interpretation). The wage w need not be increasing in . Su cient condition: monotone likelihood ratio property (Milgrom 1982). [FIGURES] f ( jeL) decreasing in f ( jeH ) 171 E ort Choice The cost of implementing eL is the same as under observable e ort. The cost of implementing eH is higher because of variance and risk aversion. Summary With unobservable e ort: - the contract to implement eL is at. - the contract to implement eH is given by FOC and has a higher expected payment than under observable e. - non-observability may cause a welfare loss. 
172 Additional Signal Suppose that the principal can observe and an additional signal y . Should compensation depend on y as well? Holmstrom (1979): yes, unless tistic for y : is a su cient sta- ~ (yj ; e) = h(yj ) h To see this, note that statistical su ciency implies: ~ (yj ; e)f ( je) = h(yj )f ( je) f~ ( ; yje) = h Hence, the rst-order condition with the additional signal is: 1 = + 0 v (w ( )) 1 f~ ( ; yjeL) f~ ( ; yjeH ) ! = + 1 f ( jeL) f ( jeH ) If the condition above is satis ed, the foc is identical to the foc with only. 173
