
Self-Confirming Equilibrium and Model Uncertainty

🕒 Published at: 2 years ago

INFO

💡 This paper extends the traditional SCE model to incorporate ambiguity aversion, and finds that ambiguity aversion enlarges the set of SCEs. The intuition is that aversion to payoffs with unknown distributions discourages "experimentation" with alternative strategies, and thus makes decision-makers stick to their current path.

Setting

A randomly matched society. The paper uses a large-population (Nash's mass-action) scenario: for each role $i$ there is a large population of individuals who play the game $G$ recurrently, with players drawn at random and matched to play $G$ many times.

Learning issue: misbeliefs in SCE and partial identification. Many underlying distributions can explain the empirical frequencies players observe (i.e., the patterns learned from evidence). Players use accumulated evidence as a dataset to evaluate the outcome distribution associated with each choice. The available evidence has intrinsic limitations: a player may observe only the terminal node of the sequential game, and often only his own payoff rather than others'.

Ambiguity in opponents' strategies. Players cannot infer the distribution of strategy profiles adopted by their opponents, because it cannot be recovered from the long-run frequency statistics; this is the fundamental inference problem.

Objective and Subjective uncertainty

For a state space $S$, the agent posits a set $\Sigma \subseteq \Delta(S)$ of possible probabilistic models, where each model $\sigma \in \Sigma$ specifies objective probabilities. Given an action $a$ from the action space, the player maximizes the von Neumann-Morgenstern utility $U(a, \sigma)$ and is uncertain about the true model $\sigma$ (i.e., the distribution of strategies in the population of opponents).

Utility under ambiguity. The paper does not use the maxmin utility function $\max_a \min_{\sigma \in \Sigma} U(a, \sigma)$, but the more general KMM (smooth ambiguity) model, which accommodates both maxmin and subjective expected utility as special cases. ---Klibanoff, Marinacci, and Mukerji (2005)

Smooth SCE

Agents in each role best respond to beliefs consistent with their database, choosing actions with the highest smooth ambiguity value; the database is the one generated under the true data generating process corresponding to the actual strategy distribution.

An Example

Player 1 chooses between an outside option $O$ and two Matching Pennies games, MP1 and MP2.

  • Subgame MP2 has higher maxmin stakes than MP1 (maxmin value $2 > 1.5$)
    • Calculating the maxmin value: in MP2, the opponent distribution that minimizes player 1's expected payoff is $\frac{1}{2}h_2 + \frac{1}{2}t_2$, against which the highest attainable payoff (the maxmin value) is 2. Similarly, the maxmin value in MP1 is 1.5.
  • There is only one Bayesian SCE: player 1 always chooses MP2 (the subgame with the higher maxmin value) and one half of the players in role 2 play $h_2$
  • Subjective probabilities of player 1: assign probability $\bar{\mu}_k$ to the opponent's choice $h_k$, for $k \in \{1, 2\}$
    • subjective value: $\max\{\bar{\mu}_1 + 1,\ 2 - \bar{\mu}_1\} \geq 1.5$ in MP1 and $\max\{4\bar{\mu}_2,\ 4(1 - \bar{\mu}_2)\} \geq 2$ in MP2
    • However, the value of $O$ is only $1 + \epsilon$, which is lower than the value of either game
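
The maxmin calculations in the bullets above can be checked numerically. A minimal sketch, assuming (as the quoted values suggest) that MP1 pays 2 on a match and 1 on a mismatch, while MP2 pays 4 and 0; `maxmin_value` scans a grid of row mixtures:

```python
def maxmin_value(payoff):
    """Maxmin value of the row player of a 2x2 game over mixed strategies.

    For each mixture weight p on the first row, the guaranteed payoff is the
    minimum over columns of the expected payoff; return the best guarantee.
    """
    best = float("-inf")
    for i in range(1001):
        p = i / 1000
        guaranteed = min(p * payoff[0][c] + (1 - p) * payoff[1][c]
                         for c in range(2))
        best = max(best, guaranteed)
    return best

MP1 = [[2, 1], [1, 2]]   # rows: H1/T1, columns: h1/t1
MP2 = [[4, 0], [0, 4]]   # rows: H2/T2, columns: h2/t2

print(maxmin_value(MP1))  # 1.5, achieved by mixing 1/2-1/2
print(maxmin_value(MP2))  # 2.0, capped by the opponent mix (1/2)h2 + (1/2)t2
```

The 1/2-1/2 opponent mix is exactly the minimizing distribution described above: against it, every action in MP2 yields 2 and every action in MP1 yields 1.5.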

What Subgames are reachable?

  • For an ambiguity-neutral agent, neither MP1 nor $O$ can occur in a Bayesian SCE
    • In the recurrent game, even agents currently in $O$ or MP1 can learn and switch to MP2
  • Ambiguity aversion makes $O$ or MP1 reachable through a status quo bias
    • For agents already in MP1 with moderate ambiguity aversion, aversion to the unknown payoff distribution in MP2 penalizes its evaluation, so they stay in MP1
    • High ambiguity aversion makes the outside option $O$ possible as well, since agents in $O$ do not want to bear the ambiguity of either game. In the extreme case, the agent evaluates MP1 at its lowest payoff 1 and MP2 at its lowest payoff 0, both of which are less than $1 + \epsilon$
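
The status quo bias can be illustrated with the smooth criterion. A minimal sketch, assuming the CARA-style parametrization $\phi(u) = -e^{-\lambda u}$ and, as an illustrative belief, two extreme models of the role-2 population (all play $h_2$ vs. all play $t_2$); the MP2 stakes 4/0 and the MP1 value 1.5 follow the example above:

```python
import math

def smooth_value(payoffs, weights, lam):
    """KMM smooth ambiguity value with phi(u) = -exp(-lam * u).

    payoffs: expected payoff of the act under each posited model
    weights: prior probability of each model
    lam:     ambiguity-aversion coefficient (-phi''/phi' = lam)
    """
    inner = sum(w * -math.exp(-lam * u) for u, w in zip(payoffs, weights))
    return -math.log(-inner) / lam   # apply phi^{-1}

# An agent settled in MP1 knows the opponent mix there, so MP1 is
# unambiguous with value 1.5; playing H2 in MP2 pays 4 under one
# entertained model and 0 under the other.
value_MP1 = 1.5
mp2_payoffs, mp2_weights = [4.0, 0.0], [0.5, 0.5]

for lam in [0.01, 0.5, 2.0]:
    v = smooth_value(mp2_payoffs, mp2_weights, lam)
    verdict = "switch to MP2" if v > value_MP1 else "stay in MP1"
    print(f"lambda={lam}: MP2 smooth value {v:.3f} -> {verdict}")
```

For small $\lambda$ the MP2 value approaches the SEU value 2 and the agent would experiment; for larger $\lambda$ it falls below 1.5 and the agent stays in MP1, which is the status quo bias described above.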

https://s2.loli.net/2023/05/07/YRTtxBSs3KjLPu8.png

Recurrent Games and Self-Confirming Equilibrium

Games with Feedback

A game form with feedback has the form:

$$G = (I, (S_i, M_i, F_i)_{i \in I})$$
  • $I = \{1, 2, \dots, n\}$ is the set of player roles
  • $S_i$ is the finite set of strategies of $i \in I$; let $S = \prod_{i \in I} S_i$ and $S_{-i} = \prod_{j \neq i} S_j$ denote the set of all strategy profiles and the set of player $i$'s opponents' strategy profiles
  • $M_i$ is a set of messages that player $i$ may receive at the end of the game
  • Feedback function $F_i : S \to M_i$. If the game is dynamic, a player's feedback is a function of the terminal node $\zeta(s) \in Z$ reached under strategy profile $s$; in this case $F_i(s) = f_i(\zeta(s))$, where $f_i : Z \to M_i$ is the extensive-form feedback function for player $i$

Example 1: Three cases about the forms of Feedback

  1. $F_i(s) = \zeta(s)$: each player observes the terminal node reached under the realized strategy profile; that is, $f_i$ is the identity on $Z$
  2. $F_i(s) = g(\zeta(s))$: each player observes everybody's material consequences at the terminal node; that is, $f_i$ is the consequence function $g$
  3. $F_i(s) = g_i(\zeta(s))$: each player observes his own material consequences at the terminal node; that is, $f_i$ is the $i$-th projection of $g$

Upon receiving message $m_i$, the player infers that the strategy profile played by his opponents must belong to the set

$$\{s_{-i} \in S_{-i} : F_i(s_i, s_{-i}) = m_i\} = F_{i,s_i}^{-1}(m_i)$$

Collecting these cells over all messages gives the ex post information partition:

$$\mathcal{F}_{s_i} = \{F_{i,s_i}^{-1}(m_i) : m_i \in M_i\}$$

Example 2: in the game above, player 1 observes only his monetary payoff

The ex post information partitions are:

$$\mathcal{F}_O = \{S_2\}$$

$$\mathcal{F}_{H_1} = \mathcal{F}_{T_1} = \{\{h_1.h_2,\ h_1.t_2\},\ \{t_1.h_2,\ t_1.t_2\}\}$$

$$\mathcal{F}_{H_2} = \mathcal{F}_{T_2} = \{\{h_1.h_2,\ t_1.h_2\},\ \{h_1.t_2,\ t_1.t_2\}\}$$

Note. A game form with feedback $(I, (S_i, M_i, F_i)_{i \in I})$ satisfies own-strategy independence of feedback if the ex post information partition $\mathcal{F}_{s_i}$ is independent of $s_i$ for every $i \in I$. This property is very strong and is violated in many interesting cases.
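
The partitions of Example 2 can be computed mechanically. A minimal sketch, assuming feedback is player 1's own monetary payoff with the reconstructed stakes MP1: 2/1, MP2: 4/0, and $O$: $1+\epsilon$ (the value of `EPS` below is an arbitrary choice); the `partition` helper simply groups opponent strategies by the message they generate:

```python
EPS = 0.1  # assumed value of epsilon; any value in (0, 0.5) works
S2 = ["h1.h2", "h1.t2", "t1.h2", "t1.t2"]  # role-2 strategies: one choice per subgame

def payoff1(s1, s2):
    """Player 1's monetary payoff (the feedback message in this example)."""
    mp1, mp2 = s2.split(".")
    if s1 == "O":
        return 1 + EPS
    if s1 == "H1":
        return 2 if mp1 == "h1" else 1
    if s1 == "T1":
        return 1 if mp1 == "h1" else 2
    if s1 == "H2":
        return 4 if mp2 == "h2" else 0
    if s1 == "T2":
        return 0 if mp2 == "h2" else 4

def partition(s1):
    """Ex post information partition: cells F^{-1}(m) of opponent
    strategies producing the same message m under own strategy s1."""
    cells = {}
    for s2 in S2:
        cells.setdefault(payoff1(s1, s2), []).append(s2)
    return sorted(cells.values())

for s1 in ["O", "H1", "T1", "H2", "T2"]:
    print(s1, partition(s1))
```

Running it reproduces the three partitions displayed above, and since they differ across player 1's strategies, own-strategy independence of feedback indeed fails in this example.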

Players' Preferences

Feedback is assumed to reveal (at least) own payoff: whenever $F_i(s_i, s_{-i}) = F_i(s_i, s_{-i}')$, we have:

$$U_i(s_i, s_{-i}) = U_i(s_i, s_{-i}')$$

Call a game with feedback the tuple $G = (I, (S_i, M_i, F_i, U_i)_{i \in I})$

Ambiguity Aversion (KMM smooth ambiguity criterion)

Ambiguity attitudes in population $i$ are represented by a strictly increasing and continuous function $\phi_i : U_i \to \mathbb{R}$, where

$$U_i = \left[\min_{s \in S} U_i(s),\ \max_{s \in S} U_i(s)\right]$$

The KMM smooth ambiguity criterion is

$$V_i^{\phi_i}(s_i, \mu_i) = \phi_i^{-1}\left(\int_{\operatorname{supp} \mu_i} \phi_i(U_i(s_i, \sigma_{-i}))\, \mu_i(d\sigma_{-i})\right)$$
  • player $i$ is uncertain about the true distribution $\sigma_{-i} \in \Delta(S_{-i})$ of strategies in the population of potential opponents
  • measure of ambiguity aversion: $-\phi_i''/\phi_i'$

von Neumann-Morgenstern Utility in KMM

$$U_i(s_i, \sigma_{-i}) = \sum_{s_{-i} \in S_{-i}} U_i(s_i, s_{-i})\, \sigma_{-i}(s_{-i})$$

Standard Bayesian SEU in KMM ($\phi_i$ the identity)

$$V_i^{id}(s_i, \mu_i) = \int_{\operatorname{supp} \mu_i} U_i(s_i, \sigma_{-i})\, \mu_i(d\sigma_{-i})$$

Robust (maxmin) criterion of Gilboa and Schmeidler

$$V_i^{\infty}(s_i, \mu_i) = \min_{\sigma_{-i} \in \operatorname{supp} \mu_i} U_i(s_i, \sigma_{-i})$$
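
The three criteria can be compared side by side on a single act with a finite-support belief. A minimal sketch; the payoff and belief numbers are illustrative (they mirror the MP2 example), and $\phi(u) = -e^{-\lambda u}$ is an assumed parametrization of ambiguity aversion:

```python
import math

# One act evaluated under a two-model belief mu: U lists the expected
# payoff U_i(s_i, sigma_{-i}) under each model in supp(mu), mu the weights.
U  = [4.0, 0.0]   # e.g. playing H2 against the models "all h2" / "all t2"
mu = [0.5, 0.5]

def seu(U, mu):
    """Bayesian SEU: phi is the identity."""
    return sum(u * m for u, m in zip(U, mu))

def smooth(U, mu, lam):
    """KMM criterion with phi(u) = -exp(-lam * u)."""
    inner = sum(m * -math.exp(-lam * u) for u, m in zip(U, mu))
    return -math.log(-inner) / lam

def maxmin(U, mu):
    """Gilboa-Schmeidler: worst model in supp(mu)."""
    return min(u for u, m in zip(U, mu) if m > 0)

print(seu(U, mu), smooth(U, mu, 1.0), maxmin(U, mu))
```

For any $\lambda > 0$ the smooth value lies between the maxmin and SEU values, approaching SEU as $\lambda \to 0$ and maxmin as $\lambda \to \infty$, which is why the KMM model nests both criteria.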