# Yale Open Course: Game Theory Summary

[ -- June 27, 2020 -- ]

Yale Open Course: Game Theory

These are notes and a summary for Yale's ECON 159 (Game Theory) course. The course can be read alongside these two textbooks:

Strategy: An Introduction to Game Theory (Joel Watson, 3rd Edition)

Strategies and Games (Dutta)

# Class01

  1. Don't play a strictly dominated strategy.

  2. Put yourself in other people's shoes and try to figure out what they will do.

  3. Rational choices can lead to bad outcomes.

  4. You can't get what you want, till you know what you want. (books: Strategies and Games (Dutta), Strategy (Joel Watson), Thinking Strategically)

# Class02

  1. Game ingredients: players (i, j); strategies (player i's strategy: si; the set of player i's possible strategies: Si; a particular play of the game: s; the choices of all players except i: s-i); payoffs (player i's payoff: Ui(s))

  2. Def1: player i's strategy s'i is strictly dominated by player i's strategy si if Ui(si, s-i) > Ui(s'i, s-i) for all s-i

  3. Def2: player i's strategy s'i is weakly dominated by her strategy si if Ui(si, s-i) >= Ui(s'i, s-i) for all s-i, with strict inequality for some s-i
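The two definitions above can be checked mechanically. A minimal Python sketch; the payoff table is an illustrative Prisoner's Dilemma, not from the lecture:

```python
# Strict and weak dominance checks for the row player. U[(s, t)] is the
# row player's payoff when she plays s and the opponent plays t.
# The payoff numbers are an illustrative Prisoner's Dilemma.

def strictly_dominates(U, s, s_prime, opp_strategies):
    """Def1: U(s, t) > U(s_prime, t) for every opponent strategy t."""
    return all(U[(s, t)] > U[(s_prime, t)] for t in opp_strategies)

def weakly_dominates(U, s, s_prime, opp_strategies):
    """Def2: at least as good everywhere, strictly better somewhere."""
    return (all(U[(s, t)] >= U[(s_prime, t)] for t in opp_strategies) and
            any(U[(s, t)] > U[(s_prime, t)] for t in opp_strategies))

U = {("C", "C"): 2, ("C", "D"): 0,
     ("D", "C"): 3, ("D", "D"): 1}
print(strictly_dominates(U, "D", "C", ["C", "D"]))  # Defect dominates Cooperate
```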

# Class03

  1. Median Voter Theorem: candidates crowd the center. Caveats:

    1. voters are not evenly distributed in the real world

    2. many voters will not vote (abstention)

    3. positions may not be believed (candidates cannot commit to a policy)

    4. primaries

    5. higher dimensions

  2. Best responses

# Class04

  1. Penalty kick game

    1. Def1: Player i's strategy Si^ is a BR to the strategy S-i of all the other players if Ui(Si^, S-i) >= Ui(Si', S-i) for all Si' in Si, i.e. Si^ solves max Ui(Si, S-i)

    2. Def2: Player i's strategy is a BR to the belief p about the other players' choices if EUi(Si^,p) >= EUi(Si',p) for all Si' in Si or Si^ solves max EUi(Si,p)

    3. Example EUi(L,p)=p(l)Ui(L,l)+p(r)Ui(L,r)

  2. Partnership game(Nash Equilibrium)
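Def2 above says a best response to a belief maximizes expected payoff. A small sketch of that computation; the payoff numbers are illustrative stand-ins for a penalty-kick table, not the lecture's:

```python
# EUi(s, p) is the belief-weighted average payoff of pure strategy s;
# the best response maximizes it. Payoffs are illustrative (a kicker's
# scoring chances), not the numbers used in the lecture.

def expected_payoff(U, s, belief):
    """EUi(s, p) = sum over opponent moves t of p(t) * Ui(s, t)."""
    return sum(prob * U[(s, t)] for t, prob in belief.items())

def best_response_to_belief(U, strategies, belief):
    """Pure strategy maximizing expected payoff against belief p."""
    return max(strategies, key=lambda s: expected_payoff(U, s, belief))

U = {("L", "l"): 4, ("L", "r"): 9,     # kicker shoots Left ...
     ("R", "l"): 9, ("R", "r"): 6}     # ... or Right; keeper dives l or r
belief = {"l": 0.5, "r": 0.5}          # belief p about the keeper
print(expected_payoff(U, "L", belief),
      best_response_to_belief(U, ["L", "R"], belief))
```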

# Class05

  1. NE motivation:

    1. No regrets. No individual can do strictly better by deviating, holding everyone else's actions fixed.

    2. Self-fulfilling beliefs. Each player plays a best response to what the others actually play.

  2. Relating NE to dominance: no strictly dominated strategy could ever be played in a NE.

  3. Coordination game: people can coordinate without a contract because the persuader is not trying to get you to play a strictly dominated strategy. A NE can be a self-enforcing agreement.
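The "no regrets" motivation gives a direct test for pure-strategy NE: check every cell for profitable deviations. A sketch with an illustrative coordination game:

```python
# "No regrets" test for pure-strategy NE: a cell (r, c) is an equilibrium
# iff r is a best response to c and c is a best response to r.
# The payoff tables are an illustrative coordination game.

def pure_nash(rows, cols, U1, U2):
    """All pure-strategy profiles from which no player wants to deviate."""
    eqs = []
    for r in rows:
        for c in cols:
            row_ok = all(U1[(r, c)] >= U1[(r2, c)] for r2 in rows)
            col_ok = all(U2[(r, c)] >= U2[(r, c2)] for c2 in cols)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

# Both players simply want to match:
U1 = {("A", "A"): 1, ("A", "B"): 0, ("B", "A"): 0, ("B", "B"): 1}
U2 = dict(U1)
print(pure_nash(["A", "B"], ["A", "B"], U1, U2))  # the two matching profiles
```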

# Class06

  1. Strategic complements: the more the other person does, the more you want to do.

  2. Battle of the sexes: Different people disagree about where you'd like to coordinate.

  3. Cournot Duopoly: quantity: perfect competition > Cournot total > monopoly; price: perfect competition < Cournot < monopoly
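The quantity and price rankings above follow from the standard linear-demand formulas. A sketch assuming inverse demand P = a - bQ and constant marginal cost c (all numbers illustrative):

```python
# Linear Cournot benchmark: inverse demand P = a - b*Q, constant marginal
# cost c. Standard textbook formulas; the inputs below are illustrative.

def outcomes(a, b, c):
    q_mono = (a - c) / (2 * b)         # monopoly quantity
    q_cournot = 2 * (a - c) / (3 * b)  # total quantity of the two Cournot firms
    q_comp = (a - c) / b               # perfect competition (price = MC)
    price = lambda q: a - b * q
    return {"monopoly": (q_mono, price(q_mono)),
            "cournot": (q_cournot, price(q_cournot)),
            "competition": (q_comp, price(q_comp))}

res = outcomes(a=12, b=1, c=0)
print(res)  # quantities: comp > Cournot > mono; prices run the other way
```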

# Class07

  1. Bertrand Competition: the outcome is like perfect competition even though there are only 2 firms. The same setting as Cournot but with a different strategy set led to a different outcome.

  2. Differentiated Products.

  3. Candidate-Voter Model.

# Class08

  1. Candidate-voter model:

    1. There can be lots of NE. (not all of them put the candidates at the center)

    2. Entry can lead to a more distant candidate winning.

    3. If the candidates are too extreme, someone in the center will enter.

    4. Guess and check is an effective method.

  2. Location model:

    1. Segregation does not imply that there's a preference for segregation.

    2. Randomization can be a policy that people agree to (society-wide mixing).

    3. Individual randomization is another NE. (a first look at mixed strategies)

  3. Some games have no NE in pure strategies.

# Class09

  1. Mixed Strategies:

    1. Def1: A mixed strategy Pi is a randomization over player i's pure strategies. Pi(Si) is the probability that Pi assigns to the pure strategy Si. (Pi(Si) could be 0; Pi(Si) could be 1)

      1-1) Payoff: The expected payoff of the mixed strategy Pi is the weighted average of the expected payoffs of each of the pure strategies in the mix.

      1-2) Lesson1: If a mixed strategy is a BR, then each of the pure strategies in the mix must itself be a BR. In particular, each must yield the same expected payoff.

    2. Def2: A mixed strategy profile (P1*, P2*, ..., Pn*) is a mixed strategy NE if for each Player i, Pi* is a BR to P-i*.

      2-1) Lesson2: If Pi*(Si) > 0, then the pure strategy Si is also a BR to P-i*.

  2. Tennis Game.
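Lesson 1 above gives the standard recipe for a mixed NE in a 2x2 game: each player mixes so that the opponent's two pure strategies yield equal expected payoff. A sketch using matching pennies as the illustrative example:

```python
# Row's equilibrium mix in a 2x2 game is pinned down by making Column
# indifferent between Column's two pure strategies. Strategies are
# indexed 0 and 1; the payoff table is matching pennies, purely illustrative.

def row_mix(u2):
    """Probability p that Row puts on strategy 0 so that Column's two pure
    strategies earn equal expected payoff. u2[(r, c)] is Column's payoff."""
    a, b = u2[(0, 0)], u2[(1, 0)]   # Column's payoffs when Column plays 0
    c, d = u2[(0, 1)], u2[(1, 1)]   # Column's payoffs when Column plays 1
    # Solve p*a + (1-p)*b = p*c + (1-p)*d for p.
    return (d - b) / ((a - b) - (c - d))

# Matching pennies: Column gets +1 on a mismatch, -1 on a match.
u2 = {(0, 0): -1, (0, 1): 1, (1, 0): 1, (1, 1): -1}
print(row_mix(u2))  # 0.5: Row flips a fair coin to keep Column indifferent
```

Note how this matches the Class10 lesson below: the function reads only Column's payoffs, yet it returns Row's equilibrium mix.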

# Class10

  1. We only ever have to check for strictly profitable pure-strategy deviations.

  2. Three different ways to think about randomization in equilibrium or out of equilibrium.

    1. it is genuine randomization.

    2. it could be something about people's beliefs.

    3. it could be telling us something about the proportion of people in society doing each thing.

  3. If I change the column player's payoffs, it changes the row player's equilibrium mix; If I change the row player's payoffs, it changes the column player's equilibrium mix.

# Class11

  1. Evolution and Game Theory:

    1. Nature can suck;

    2. If a strategy is strictly dominated then it is not evolutionarily stable.

  2. If (S,S) is not a NE, then S is not evolutionarily stable; conversely, if S is evolutionarily stable then (S,S) is a Nash Equilibrium.

  3. Maynard Smith's biological Def: In a symmetric two-player game, the pure strategy S^ is ES (in pure strategies) if there exists an ε-bar > 0 such that (1-ε)U(S^,S^) + εU(S^,S') > (1-ε)U(S',S^) + εU(S',S') for all possible deviations S' and for all mutation sizes ε less than ε-bar.

  4. Def2: A strategy S^ is ES (in pure strategies) if

    1. (S^,S^) is a symmetric NE, i.e. U(S^,S^) ≧ U(S',S^) for all S'

    2. if U(S^,S^)=U(S',S^) then U(S^,S') > U(S',S')

# Class12

  1. Evolution of social conventions: we can have multiple evolutionarily stable conventions, and these need not be equally good.

  2. Def:In a two player symmetric game, A strategy P^ is ES (in mixed strategy) if

    1. (P^,P^) is a symmetric NE

    2. if (P^,P^) is not a strict NE, i.e. U(P^,P^) = U(P',P^) for some P', then U(P^,P') > U(P',P')

  3. Hawk-Dove: If V < C then the ES mix has a fraction V/C of Hawks

    1. as V↑, more Hawks in the ESS; as C↑, more Doves in the ESS

    2. payoffs: (1-V/C)(V/2)

    3. identification: we can infer the ratio V/C from data.
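The Hawk-Dove claims above can be verified numerically: at the mix V/C, Hawk and Dove earn the same expected payoff, which equals (1 - V/C)(V/2). A sketch with illustrative V and C:

```python
# Hawk-Dove with prize V and fight cost C (V < C). Fractions keep the
# arithmetic exact; the particular V and C are illustrative.

from fractions import Fraction

def payoff(me, other, V, C):
    """One-encounter payoff to `me` ('H' or 'D') against `other`."""
    if me == "H" and other == "H":
        return Fraction(V - C, 2)   # fight: split the prize, pay the cost
    if me == "H" and other == "D":
        return Fraction(V)          # hawk takes the whole prize
    if me == "D" and other == "H":
        return Fraction(0)          # dove concedes
    return Fraction(V, 2)           # two doves share peacefully

V, C = 2, 6
p = Fraction(V, C)                  # ESS share of Hawks
eu_hawk = p * payoff("H", "H", V, C) + (1 - p) * payoff("H", "D", V, C)
eu_dove = p * payoff("D", "H", V, C) + (1 - p) * payoff("D", "D", V, C)
print(p, eu_hawk, eu_dove)          # Hawk and Dove do equally well at the mix
```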

# Class13

  1. Sequential move game: Player II knows Player I's choice before she chooses, and Player I knows that this will be the case. Backward induction is "look forward and work back".

  2. Moral hazard: we kept the size of the project, i.e. the size of the loan, small to reduce the temptation to cheat.

  3. Incentive design: a smaller share of a larger pie can be bigger than a large share of a small pie. Two common contract forms: piece rates and sharecropping.

  4. Commitment strategy:

Example: collateral works by lowering your payoffs if you do not repay, but it leaves you better off because it changes the choices of others in a way that helps you.

Commitment means having fewer options, yet it changes the behavior of others. It is crucial that the other side knows about the commitment.

# Class14

  1. commitment: sunk costs can help.

  2. spy: First, whether a game is simultaneous or sequential is not really about timing per se; it is about information: who knows what, and who knows who is going to know what. Second, having more information can hurt you. The key is that the other players knew you had, or were going to have, more information, because that can lead them to take actions that hurt you.

  3. first-mover advantage: there are games with first-mover advantages, but there are also games with second-mover advantages. For example, in the game of Nim it depends on whether the two piles hold the same number of objects: if the piles are equal you want to move second, and if they are unequal you want to move first.
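The Nim claim above (equal piles favor the second mover, unequal piles the first) can be checked by brute-force backward induction on small piles. A sketch assuming the normal-play rule that taking the last object wins:

```python
# mover_wins(a, b): can the player to move force a win from piles (a, b)?
# A move removes one or more objects from a single pile; taking the last
# object wins, so the player facing (0, 0) has already lost.

from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(a, b):
    if a == 0 and b == 0:
        return False
    moves = [(a - k, b) for k in range(1, a + 1)] + \
            [(a, b - k) for k in range(1, b + 1)]
    # Win iff some move leaves the opponent in a losing position.
    return any(not mover_wins(x, y) for x, y in moves)

# Unequal piles: first-mover advantage. Equal piles: second-mover advantage
# (equalize the piles once, then mirror every move).
print(mover_wins(3, 5), mover_wins(4, 4))  # True False
```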

# Class15

  1. Zermelo Theorem:

[Conditions]

1) 2 players

2) perfect information

3) a finite number of nodes

4) three possible outcomes: W1, L1, T

[Result]

- Either 1 can force a win

- Or 1 can force a tie

- Or 2 can force a loss on 1
  2. Proof idea (induction on the length of the game): suppose the claim is true for all games of this type of length ≤ N; we claim it is therefore true for games of length N+1.

  3. Def: A game of perfect information is one in which, at each node in the game, the player whose turn it is to move knows which node she is at (which implicitly means she must know how she got there).

  4. Def: A pure strategy for Player 1 in a game of perfect information is a complete plan of action: it specifies which action Player 1 will take at each of Player 1's decision nodes.
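Zermelo-style reasoning is just backward induction on the tree. A minimal sketch on an illustrative entry game; the tree and payoffs are made up for the example:

```python
# A node is either a payoff tuple (u0, u1) or (player, {action: subtree}).
# solve() returns the backward-induction payoffs and the chosen path.

def solve(node):
    """Backward induction: at each node pick the action maximizing the
    mover's payoff, given optimal play in every subtree."""
    if not isinstance(node[1], dict):    # leaf: a payoff tuple
        return node, []
    player, actions = node
    best = None
    for action, child in actions.items():
        value, path = solve(child)
        if best is None or value[player] > best[0][player]:
            best = (value, [action] + path)
    return best

# Illustrative entry game: player 0 enters or stays out; after entry,
# player 1 fights or accommodates.
tree = (0, {"Out": (1, 3),
            "In": (1, {"Fight": (0, 0), "Accommodate": (2, 2)})})
print(solve(tree))  # player 1 accommodates, so player 0 enters
```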

# Class16

  1. Chain Store Paradox: if there is a 1% chance that the monopolist is crazy, then he can deter entry by fighting, i.e. by seeming crazy. The idea is that you might want to behave as if you were someone else in order to deter other people's actions.

  2. Duel:

Pi[d] is Player i's probability of hitting if i shoots at distance d.

[FACT A]: Assuming no one has thrown yet, if Player i knows at distance d that j will not shoot tomorrow at distance d-1 then i should not shoot today.

[FACT B]: Assuming no one has thrown yet, if Player i knows at distance d that j will shoot tomorrow at distance d-1, then i should shoot today iff i's probability of hitting at d is at least j's probability of missing tomorrow: Pi[d] ≥ 1 - Pj[d-1].

=> d* is the first distance at which Pi[d*] + Pj[d*-1] ≥ 1

[Claim]: The first shot should occur at d*.
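FACTS A and B together say: step inward while the hit probabilities sum to less than 1, and shoot at the first distance where they reach 1. A sketch with illustrative linear hit-probability functions:

```python
# Walk in from the starting distance; at distance d the player whose turn
# it is shoots iff P_mover(d) + P_other(d-1) >= 1 (FACT B), otherwise steps
# closer (FACT A). The hit probabilities below are illustrative.

def first_shot_distance(p_hit, start_d):
    """Return (d*, shooter): the first distance and player index at which
    the mover's hit chance plus the other's next-step hit chance reach 1."""
    mover = 0
    for d in range(start_d, 0, -1):
        other = 1 - mover
        if p_hit[mover](d) + p_hit[other](d - 1) >= 1:
            return d, mover
        mover = other            # no shot: step in, other player's turn
    return 0, mover              # point blank: the mover hits for sure

p = [lambda d: max(0.0, 1 - 0.10 * d),   # player 0's hit probability
     lambda d: max(0.0, 1 - 0.15 * d)]   # player 1's hit probability
print(first_shot_distance(p, start_d=10))
```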

# Class17

  1. Ultimatums:

Two Players 1 and 2,

Player 1 makes a "take it or leave it" offer to Player 2;

Player 2 can accept the offer, giving payoffs (S, 1-S), or reject it, giving (0, 0).

  2. Bargaining:

[Stage1] Player 1 makes an offer to Player 2; if Player 2 accepts, payoffs are (S, 1-S);

[Stage2] If Player 2 rejects, Player 2 makes an offer to Player 1; discounting gives (δS, 1-δS)

...

If Player 1 offers Player 2 at least δ times what Player 2 would get tomorrow, Player 2 accepts;

If Player 1 offers less, Player 2 rejects.

=> S=(1-δ^n)/(1+δ)

1-S=(δ+δ^n)/(1+δ)

So

as n→∞, S→1/(1+δ) and 1-S→δ/(1+δ);

and as δ→1 as well, S→1/2 and 1-S→1/2.

  3. Conclusion:

Alternating offer bargaining

(1) Even split: 

    • potentially can bargain forever
    
    • δ→1 no discounting or rapid offers
    
    • same discount factor δ1=δ2

(2) first offer is accepted: 

    • no haggling

    • the value of the pie and the value of time is assumed to be known
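The shares above come from backward induction: at every stage the proposer offers the responder exactly δ times the responder's continuation value. A sketch that reproduces S = (1-δ^n)/(1+δ) for even n and the 1/(1+δ) limit:

```python
# Backward induction for n-round alternating offers with discount factor
# delta: the last proposer keeps everything, and each earlier proposer
# offers the responder exactly delta times the responder's continuation.

def proposer_share(n, delta):
    """Share of the round-1 proposer in an n-round alternating-offer game."""
    share = 1.0                    # final round: proposer takes it all
    for _ in range(n - 1):
        share = 1 - delta * share  # leave the responder delta * continuation
    return share

delta = 0.9
for n in (1, 2, 10, 200):
    print(n, proposer_share(n, delta))
# n = 1 is the ultimatum game; as n grows the shares approach
# 1/(1+delta) and delta/(1+delta): near 50/50 when delta is close to 1.
```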

# Class18

  1. DefN: An information set of Player i is a collection of Player i's nodes among which i cannot distinguish.

  2. DefN: Perfect Information is a setting where all information sets in the tree contain just one node. Imperfect information is not perfect information.

  3. DefN: A pure strategy of Player i is a complete plan of action: it specifies what Player i will do at each of i's information sets.

  4. DefN: A sub-game is a part of a game that looks like a game within the tree.

[conditions]:

1) It starts from a single node.

2) It comprises all successors to that node.

3) It does not break up any information sets.

  5. DefN: A NE (S1*, S2*, ..., SN*) is a sub-game perfect equilibrium (SPE) if it induces a Nash equilibrium in every sub-game of the game.

# Class19

  1. Don't Screw Up: The only sub-game perfect equilibrium is the backward induction prediction.

  2. Matchmaker Game: in the sub-game, there is a mixed Nash Equilibrium.

  3. Strategic Investment: when you're analyzing a game like Cournot, the first thing you want to do is look at what would happen if you did invest and solve out the new Nash Equilibrium in that sub-game. Then you roll the value of that sub-game back into the strategic investment decision of whether or not to invest. You need to take into account strategic effects: how the other side's behavior changes.

# Class20

  1. Wars of attrition:

Two players; each period, each chooses whether to fight or to quit. The game ends as soon as someone quits. If the other player quits first, the winner gets a prize V. In each period in which both fight, each player pays a cost C. If both quit at once, they get 0.

  2. Two-period version: the two pure-strategy Nash Equilibria in the game are (Fight, Quit) and (Quit, Fight). The mixed-strategy equilibrium has both players mix, fighting with probability V/[V+C].

  3. Continuation payoffs: if they mix in the game in the future, then the continuation value is (0,0).
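The mixing probability V/[V+C] above comes from indifference between quitting (payoff 0) and fighting, using the continuation value (0,0). A sketch with illustrative V and C:

```python
# Indifference condition for the mixed war of attrition: quitting pays 0,
# so the fighting probability p must satisfy (1 - p) * V + p * (-C) = 0,
# using the continuation value (0, 0) after mutual fighting.
# V and C values are illustrative.

from fractions import Fraction

V, C = 4, 6
p = Fraction(V, V + C)              # probability each player fights
eu_fight = (1 - p) * V + p * (-C)   # expected payoff of fighting
print(p, eu_fight)                  # fighting is exactly as good as quitting
```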

# Class21

  1. Repeated interaction: in ongoing relationships, the promise of future rewards and the threat of future punishments may sometimes provide incentives for good behavior today. The lesson: for this to work, it helps to have a future.

  2. The temptation to cheat or defect today ≤ δ[the value of reward - the value of the punishment tomorrow].

  3. If a stage game has more than one Nash Equilibrium in it, then we may be able to use the prospect of playing different equilibria tomorrow to provide incentives as rewards and punishments for cooperation today. So there may be a problem of renegotiation.

  4. Grim Trigger Strategy: play Cooperation if no one has played Defect and play Defect otherwise.
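The inequality in point 2 pins down the discount factor needed for the grim trigger to sustain cooperation: comparing cooperating forever with defecting once and being punished forever gives δ ≥ (T-C)/(T-P). A sketch with illustrative Prisoner's Dilemma payoffs (the letters T, C, P are this sketch's labels, not the lecture's):

```python
# Grim trigger threshold. Let C be the per-period cooperation payoff,
# T the one-shot temptation, P the mutual-defection payoff. Cooperating
# forever beats a one-shot deviation iff
#   C / (1 - d) >= T + d * P / (1 - d),  i.e.  d >= (T - C) / (T - P).

def critical_delta(T, C, P):
    """Smallest discount factor sustaining cooperation under grim trigger."""
    return (T - C) / (T - P)

print(critical_delta(T=3, C=2, P=1))  # classic payoffs: need delta >= 0.5
```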

# Class22

  1. We can get Cooperation in the Prisoner's Dilemma using the Grim trigger as a sub-game perfect equilibrium. For an ongoing relationship to provide incentives for good behavior today, it helps for there to be a high probability that the relationship will continue, which is the weight you put on the future. The more weight I put on the future, the easier it is for the future to give me incentives to behave well today, the easier it is for those to overcome the temptations to cheat today.

  2. One-period punishment: play Cooperation to start, then play Cooperation if either (Cooperation, Cooperation) or (Defect, Defect) was played last, and play Defect otherwise, i.e. if either (Cooperation, Defect) or (Defect, Cooperation) was played last. Trade-off: a shorter punishment needs more weight δ on the future.

  3. Repeated Moral Hazard: Even a small probability of the relationship continuing drastically reduces the wage premium. To get good behavior in these continuing relationships, there has to be some reward tomorrow. That reward needs to be higher, if the weight you put on tomorrow, if the probability of continuing tomorrow, is lower. The less likely tomorrow is to occur the bigger that reward has to be tomorrow.

# Class23

  1. Asymmetric information and signaling: the lack of a signal can be informative; silence can speak volumes. It mattered that the information was verifiable.

  2. Not verifiable: costly signaling. The main example is education, due to Michael Spence, who won the Nobel Prize in large part for this model. The costs are the pain of the work: mental effort, pain and suffering.

  3. Separating Equilibrium: The types manage to separate and get identified.

    1. a good signal has to be differentially costly across types.

    2. if you lower the standards, you get qualification inflation.

    3. this is a rather pessimistic model of education, because there is no learning in it: education in this model is socially wasteful, increases inequality, and actually hurts the poor.

# Class24

  1. Common Value: Value of good is the same for all.

  2. Private Value: Value of good is different for all and my value is irrelevant to you.

  3. Auction:

    1. First-price Sealed Bid Auction

    2. Second-price Sealed Bid Auction (Vickrey Auction)

    3. Ascending Open Auction

    4. Descending Open Auction (Dutch Auction)

Auction 1 is really the same as Auction 4; Auction 2 is not the same as Auction 3, but they are very closely related.
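In the second-price sealed-bid auction (Auction 2), bidding your true value is weakly dominant; this can be checked by brute force over a grid of bids. A sketch with an illustrative value and integer bids:

```python
# Second-price (Vickrey) auction against a single rival bid.
# The strictly higher bid wins and pays the rival's (second) bid;
# ties are broken against us, which is harmless for the dominance check.

def vickrey_payoff(my_bid, my_value, rival_bid):
    """My payoff: value minus the rival's bid if I win, else 0."""
    if my_bid > rival_bid:
        return my_value - rival_bid
    return 0

value = 7  # my true value (illustrative)
for rival_bid in range(0, 13):
    truthful = vickrey_payoff(value, value, rival_bid)
    # No alternative bid ever does strictly better than bidding my value:
    assert all(vickrey_payoff(b, value, rival_bid) <= truthful
               for b in range(0, 13))
print("bidding your true value is weakly dominant on this grid")
```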