Program equilibrium

From Wikipedia, the free encyclopedia

Program equilibrium is a game-theoretic solution concept for a scenario in which players submit computer programs to play the game on their behalf and the programs can read each other's source code. The term was introduced by Moshe Tennenholtz in 2004.[1] The same setting had previously been studied by R. Preston McAfee,[2] J. V. Howard[3] and Ariel Rubinstein.[4]

Setting and definition

The program equilibrium literature considers the following setting. Take a normal-form game as the base game. For simplicity, consider a two-player game in which A_1 and A_2 are the sets of available strategies and u_1, u_2 : A_1 × A_2 → ℝ are the players' utility functions. From this we construct a new (normal-form) program game in which each player i chooses a computer program p_i. The payoff (utility) for the players is then determined as follows. Each player i's program p_i is run with the other program as input and outputs a strategy a_i for Player i. For convenience one also often imagines that programs can access their own source code.[nb 1] Finally, the utilities for the players are given by u_i(a_1, a_2) for i = 1, 2, i.e., by applying the utility functions of the base game to the chosen strategies.

One must further deal with the possibility that one of the programs does not halt. One way to do so is to restrict both players' sets of available programs so as to exclude non-halting programs.[1][5]
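As an illustrative sketch of the construction above, a program game can be evaluated by running each submitted program on the other. The Python representation below, including the specific Prisoner's Dilemma payoff values, is an assumed implementation choice, not something fixed by the definition:

```python
# Base game: Prisoner's Dilemma with assumed standard payoff values.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def play_program_game(p1, p2):
    """Run each program on the other program and score with the base game."""
    a1 = p1(p2)  # Player 1's program reads Player 2's program
    a2 = p2(p1)  # and vice versa
    return PAYOFFS[(a1, a2)]

# Two trivially halting example programs:
def always_defect(opponent_program):
    return "D"

def always_cooperate(opponent_program):
    return "C"
```

Restricting submissions to programs like these, which halt on every input, sidesteps the non-halting issue mentioned above.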

A program equilibrium is a pair of programs (p_1, p_2) that constitutes a Nash equilibrium of the program game. In other words, (p_1, p_2) is a program equilibrium if neither player i can deviate to an alternative program p_i′ such that their utility in the resulting profile is higher than in (p_1, p_2).

Instead of programs, some authors have the players submit other kinds of objects, such as logical formulas specifying what action to play depending on an encoding of the logical formula submitted by the opponent.[6][7]

Different mechanisms for achieving cooperative program equilibrium in the Prisoner's Dilemma

Various authors have proposed ways to achieve cooperative program equilibrium in the Prisoner's Dilemma.

Cooperation based on syntactic comparison

Multiple authors have independently proposed the following program for the Prisoner's Dilemma:[1][3][2]

algorithm CliqueBot(opponent_program):
    if opponent_program == this_program then
        return Cooperate
    else
        return Defect

If both players submit this program, then the if-clause resolves to true in the execution of both programs. As a result, both programs cooperate. Moreover, (CliqueBot, CliqueBot) is an equilibrium. If either player deviates to some other program p′ that is different from CliqueBot, then the opponent will defect. Therefore, deviating to p′ can at best result in the payoff of mutual defection, which is worse than the payoff of mutual cooperation.

This approach has been criticized for being fragile.[5] If the players fail to coordinate on the exact source code they submit (for example, if one player adds an extra space character), both programs will defect. The development of the techniques below is in part motivated by this fragility issue.
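A minimal executable sketch of this syntactic comparison follows (in Python; representing submitted programs as source strings that read both sources and set an `action` variable is a hypothetical implementation choice):

```python
# CliqueBot as a source string: cooperate iff the opponent's source
# is byte-for-byte identical to our own.
CLIQUE_BOT = """\
if opponent_source == my_source:
    action = "Cooperate"
else:
    action = "Defect"
"""

def run(program_source, opponent_source):
    # Execute a submitted program; it reads both sources and sets `action`.
    env = {"my_source": program_source, "opponent_source": opponent_source}
    exec(program_source, env)
    return env["action"]

mutual = run(CLIQUE_BOT, CLIQUE_BOT)         # identical sources: cooperation
fragile = run(CLIQUE_BOT, CLIQUE_BOT + " ")  # one extra space: defection
```

The second call illustrates the fragility criticism: a single extra character is enough to destroy cooperation.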

Proof-based cooperation

Another approach is based on letting each player's program try to prove something about the opponent's program or about how the two programs relate.[6][8][9][10] One example of such a program is the following:

algorithm FairBot(opponent_program):
    if there is a proof that opponent_program(this_program) = Cooperate then
        return Cooperate
    else
        return Defect

Using Löb's theorem it can be shown that when both players submit this program, they cooperate against each other.[8][9][10] Moreover, if one player were to instead submit a program that defects against the above program, then (assuming the proof system used is consistent) the if-condition would resolve to false and the above program would defect. Therefore, (FairBot, FairBot) is a program equilibrium as well.
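The Löb argument can be summarized as follows, where □ abbreviates provability in the fixed proof system (a sketch of the standard reasoning, not a full formalization):

```latex
\text{Let } C \text{ denote the sentence ``FairBot cooperates with FairBot''.} \\
\text{By FairBot's construction:}\quad \vdash C \leftrightarrow \Box C \\
\text{hence in particular}\quad \vdash \Box C \rightarrow C \\
\text{L\"ob's theorem:}\quad \text{if } \vdash \Box C \rightarrow C \text{, then } \vdash C \\
\text{therefore}\quad \vdash C \text{, i.e., mutual cooperation is provable, and both programs cooperate.}
```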

Cooperating with ε-grounded simulation

Another proposed program is the following:[5][11]

algorithm GroundedFairBot(opponent_program):
    With probability ε:
        return Cooperate
    return opponent_program(this_program)

Here ε is a small positive number.

If both players submit this program, then they terminate almost surely and cooperate. The expected number of nested simulation steps before termination is 1/ε, the mean of a geometric distribution. Moreover, if both players submit this program, neither can profitably deviate, assuming ε is sufficiently small, because a deviating program that defects against GroundedFairBot with probability p is defected against with probability (1 − ε)p.
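A runnable sketch (in Python; representing submitted programs as functions that receive both the opponent's program and their own is a hypothetical implementation choice, and the value of ε is chosen for illustration):

```python
import random

EPSILON = 0.1  # assumed grounding probability

def grounded_fair_bot(opponent_program, my_program):
    # With probability EPSILON, cooperate unconditionally ("grounding").
    if random.random() < EPSILON:
        return "Cooperate"
    # Otherwise, simulate the opponent playing against this program.
    return opponent_program(my_program, opponent_program)

# When both players submit GroundedFairBot, each level of nested
# simulation ends with probability EPSILON, so the recursion terminates
# almost surely -- and every terminating branch returns Cooperate.
outcome = grounded_fair_bot(grounded_fair_bot, grounded_fair_bot)
```

Note that the cooperative outcome is deterministic even though the depth of the simulation chain is random.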

Folk theorem

The following theorem characterizes which payoffs can be achieved in program equilibrium.

The theorem uses the following terminology: A pair of payoffs (v_1, v_2) is called feasible if there is a pair of (potentially mixed) strategies (σ_1, σ_2) such that u_i(σ_1, σ_2) = v_i for both players i. That is, a pair of payoffs is called feasible if it is achieved by some strategy profile. A payoff v_i is called individually rational if it is better than that player's minimax payoff; that is, if v_i > min_{σ_−i} max_{σ_i} u_i(σ_i, σ_−i), where the minimum is over all mixed strategies σ_−i for Player −i.[nb 2]

Theorem (folk theorem for program equilibrium):[4][1] Let G be a base game and let (v_1, v_2) be a pair of real-valued payoffs. Then the following two claims are equivalent:

  • The payoffs (v_1, v_2) are feasible and individually rational.
  • There is a program equilibrium of the program game on G that achieves the payoffs (v_1, v_2).

The result is referred to as a folk theorem in reference to the so-called folk theorems for repeated games, which impose the same conditions on equilibrium payoffs.
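As a worked example, consider the Prisoner's Dilemma with the standard (assumed) payoff ordering T > R > P > S, say T = 5, R = 3, P = 1, S = 0. Since Defect strictly dominates, the harshest punishment available against a player is Defect, so each player's minimax payoff is P:

```latex
v_i^{\min} \;=\; \min_{\sigma_{-i}} \max_{\sigma_i} u_i(\sigma_i, \sigma_{-i}) \;=\; P \;=\; 1
```

The folk theorem then says that a feasible payoff pair arises in some program equilibrium exactly when both entries exceed 1; in particular, mutual cooperation with payoffs (3, 3) is achievable, consistent with the constructions above.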


Notes

  1. ^ It is not necessary for programs in the program game to be given access to their own source code. By the diagonalization lemma, one can use quining to enable programs to refer to their source code.[2][3][4]
  2. ^ Equivalently (by von Neumann's minimax theorem), v_i > max_{σ_i} min_{σ_−i} u_i(σ_i, σ_−i), where the maximum is over all mixed strategies σ_i for Player i.

References

  1. ^ a b c d Tennenholtz, M. (November 2004). "Program equilibrium". Games and Economic Behavior. 49 (2). Elsevier: 363–373. doi:10.1016/j.geb.2004.02.002. ISSN 0899-8256.
  2. ^ a b c McAfee, R. P. (May 1984). Effective Computability in Economic Decisions (PDF) (Technical report). University of Western Ontario.
  3. ^ a b c Howard, J. V. (May 1988). "Cooperation in the Prisoner's Dilemma". Theory and Decision. 24 (3). Kluwer Academic Publishers: 203–213. doi:10.1007/BF00148954. S2CID 121119727.
  4. ^ a b c Rubinstein, A. (1998). "Ch. 10.4". Modeling Bounded Rationality. MIT Press. ISBN 978-0262681001.
  5. ^ a b c Oesterheld, C. (February 2019). "Robust Program Equilibrium". Theory and Decision. 86. Springer: 143–159. doi:10.1007/s11238-018-9679-3. S2CID 255103752.
  6. ^ a b van der Hoek, W.; Witteveen, C.; Wooldridge, M. (2013). "Program equilibrium—a program reasoning approach". International Journal of Game Theory. 42 (3). Springer: 639–671. CiteSeerX 10.1.1.228.6517. doi:10.1007/s00182-011-0314-6. S2CID 253720520.
  7. ^ Peters, Michael; Szentes, Balázs (January 2012). "Definable and Contractible Contracts" (PDF). Econometrica. 80 (1). The Econometric Society: 363–411. doi:10.3982/ECTA8375.
  8. ^ a b Barasz, M.; Christiano, P.; Fallenstein, B.; Herreshoff, M.; LaVictoire, P.; Yudkowsky, E. (2014). "Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium via Provability Logic". arXiv:1401.5577 [cs.GT].
  9. ^ a b Critch, A. (2019). "A Parametric, Resource-Bounded Generalization of Löb's Theorem, and a Robust Cooperation Criterion for Open-Source Game Theory". Journal of Symbolic Logic. 84 (4). Cambridge University Press: 1368–1381. doi:10.1017/jsl.2017.42. S2CID 133348715.
  10. ^ a b Critch, A.; Dennis, M.; Russell, S. (2022). "Cooperative and uncooperative institution designs: Surprises and problems in open-source game theory". arXiv:2208.07006 [cs.GT].
  11. ^ DiGiovanni, A.; Clifton, J. (2023). "Commitment games with conditional information disclosure". Proceedings of the AAAI Conference on Artificial Intelligence. arXiv:2204.03484.