Abstract
This work proposes an extension of the well-known Eisert–Wilkens–Lewenstein scheme for playing a twice repeated \(2\times 2\) game using a single quantum system with ten maximally entangled qubits. The proposed scheme is then applied to the Prisoner’s Dilemma game. Rational strategy profiles are examined in the presence of limited awareness of the players. In particular, the paper considers two cases of a classical player against a quantum player game: the first case when the classical player does not know that his opponent is a quantum one and the second case, when the classical player is aware of it. To this end, the notion of unawareness was used, and the extended Nash equilibria were determined.
1 Introduction
In recent years, the field of quantum computing has developed significantly. One of its related aspects is quantum game theory, which merges ideas from quantum information [1] and game theory [2] to open up new opportunities for finding optimal strategies in many games. The concept of a quantum strategy was first mentioned in [3], where a simple extensive-form game called PQ Penny Flip was introduced. The paper showed that one player could always win if he was allowed to use quantum strategies against an opponent restricted to classical ones. Next, Eisert, Wilkens, and Lewenstein proposed a quantum scheme for the Prisoner’s Dilemma game based on entanglement [4]. Their solution leads to a Nash equilibrium that is a Pareto-optimal payoff point.
Since then, many other examples of quantum games have been proposed. A good overview of quantum game theory can be found in [5]. One of the latest trends is to study quantum repeated games [6, 7]. In particular, the quantum repeated Prisoner’s Dilemma [8, 9] was investigated. In [8], the idea was to classically repeat the Prisoner’s Dilemma with strategy sets extended to include some special unitary strategies. That enabled one to study conditional strategies similar to those defined in the classical repeated Prisoner’s Dilemma, for example, the “tit for tat” or Pavlov strategies.
We present a different approach, taking advantage of the fact that a repeated game is a particular case of an extensive-form game. A twice repeated \(2\times 2\) game is an extensive game with five information sets for each of the two players. Instead of using a classically repeated scheme based on two entangled qubits [8], we consider a twice repeated game as a single quantum system which requires ten maximally entangled qubits. Our scheme uses the quantum framework introduced in [10] and recently generalized in [11], according to which choosing an action in an information set is identified with applying a unitary operation to a qubit.
In this paper, we examine one of the most interesting cases in quantum game theory: the problem in which one of the players has access to the full range of unitary strategies, whereas the other player can only choose from unitary operators that correspond to the classical strategies. Additionally, we examine the quantum game in terms of players’ limited awareness of the available strategies. We use the concept of games with unawareness [12,13,14] to check to what extent two different factors, access to quantum strategies and game perception, affect the result of the game.
2 Preliminaries
In what follows, we give a brief review of the basic concepts of games with unawareness. The reader who is not familiar with this topic is encouraged to see [12]. Introductory examples and application of the notion of games with unawareness to quantum games can be found in [15, 16].
2.1 Strategic game with unawareness
A strategic form game with unawareness is defined as a family of strategic form games. The family specifies how each player perceives the game, how she perceives the other players’ perceptions of the game, and so on. To be more precise, let \(G = (N, (S_{i})_{i\in N}, (u_{i})_{i\in N})\) be a strategic form game. This is the game played by the players, which is also called the modeler’s game. Each player may have a restricted view of the game, i.e., she may not be aware of the full description of G. Hence, \(G_{\text {v}} = (N_{\text {v}}, ((S_{i})_{\text {v}})_{i\in N_{\text {v}}}, ((u_{i})_{\text {v}})_{i\in N_{\text {v}}})\) denotes player \({\text {v}}\)’s view of the game for \({\text {v}} \in N\). That is, player \({\text {v}}\) views the set of players, the sets of players’ strategies, and the payoff functions as \(N_{{\text {v}}}\), \((S_{i})_{{\text {v}}}\) and \((u_{i})_{{\text {v}}}\), respectively. In general, each player also considers how each of the other players views the game. Formally, with a finite sequence of players \(v = (i_{1}, \dots , i_{n})\) there is associated a game \(G_{v} = (N_{v},((S_{i})_{v})_{i\in N_{v}}, ((u_{i})_{v})_{i\in N_{v}})\). This is the game that player \(i_{1}\) considers that player \(i_{2}\) considers that ... player \(i_{n}\) is considering. A sequence v is called a view. The empty sequence \(v = \emptyset \) is assumed to be the modeler’s view, i.e., \(G_{\emptyset } = G\). We denote a strategy profile \(\prod _{i\in N_{v}}s_{i}\) in \(G_{v}\), where \(s_{i} \in (S_{i})_{v}\), by \((s)_{v}\). The concatenation of a view \({\bar{v}} = (i_{1}, \dots , i_{n})\) followed by a view \({\tilde{v}} = (j_{1}, \dots , j_{m})\) is defined to be \(v = {\bar{v}}{^{\hat{\,}}}{\tilde{v}} = (i_{1}, \dots , i_{n}, j_{1}, \dots , j_{m})\). The set of all potential views is \(V = \bigcup ^{\infty }_{n=0}N^{(n)}\), where \(N^{(n)} = \prod ^{n}_{j=1}N\) and \(N^{(0)} = \emptyset \).
Definition 1
A collection \(\{G_{v}\}_{v\in {\mathcal {V}}}\) where \({\mathcal {V}} \subset V\) is a collection of finite sequences of players is called a strategic-form game with unawareness and the collection of views \({\mathcal {V}}\) is called its set of relevant views if the following properties are satisfied:
1. For every \(v \in {\mathcal {V}}\),
$$\begin{aligned} v{^{\hat{\,}}}{\text {v}} \in {\mathcal {V}} ~\text {if and only if}~{\text {v}} \in N_{v}. \end{aligned}$$(1)
2. For every \(v{^{\hat{\,}}}{\tilde{v}} \in {\mathcal {V}}\),
$$\begin{aligned} v \in {\mathcal {V}}, \quad \emptyset \ne N_{v{^{\hat{\,}}}{\tilde{v}}} \subset N_{v}, \quad \emptyset \ne (S_{i})_{v{^{\hat{\,}}}{\tilde{v}}} \subset (S_{i})_{v} ~\text {for all}~i \in N_{v{^{\hat{\,}}}{\tilde{v}}}. \end{aligned}$$(2)
3. If \({v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}} \in {\mathcal {V}}}\), then
$$\begin{aligned} {v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}} \in {\mathcal {V}} ~\text {and}~ G_{v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}}} = G_{v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}}}.} \end{aligned}$$(3)
4. For every strategy profile \((s)_{v{^{\hat{\,}}}{\tilde{v}}} = \{s_{j}\}_{j\in N_{v{^{\hat{\,}}}{\tilde{v}}}}\), there exists a completion to a strategy profile \((s)_{v} = \{s_{j}, s_{k}\}_{j\in N_{v{^{\hat{\,}}}{\tilde{v}}}, k\in N_{v}{\setminus } N_{v{^{\hat{\,}}}{\tilde{v}}}}\) such that
$$\begin{aligned} (u_{i})_{{v{^{\hat{\,}}}{\tilde{v}}}}((s)_{v{^{\hat{\,}}}{\tilde{v}}}) = (u_{i})_{v}((s)_{v}). \end{aligned}$$(4)
2.2 Extended Nash equilibrium
A basic solution concept for predicting players’ behavior is a Nash equilibrium [17].
Definition 2
A strategy profile \(s^* = (s_{1}, s_{2}, \dots , s_{n})\) is a Nash equilibrium if for each player \(i\in \{1, \dots , n\}\) and each strategy \(s_{i}\) of player i
$$\begin{aligned} u_{i}(s^*) \ge u_{i}(s_{i}, s^*_{-i}), \end{aligned}$$(5)
where \(s^*_{-i} {:}{=}(s_{j})_{j \ne i}\).
In order to define a Nash-type equilibrium for a strategic-form game with unawareness, we need to redefine the notion of a strategy profile.
Definition 3
Let \(\{G_{v}\}_{v\in {\mathcal {V}}}\) be a strategic-form game with unawareness. An extended strategy profile (ESP) in this game is a collection of (pure or mixed) strategy profiles \(\{(\sigma )_{v}\}_{v\in {\mathcal {V}}}\) where \((\sigma )_{v}\) is a strategy profile in the game \(G_{v}\) such that for every \(v{^{\hat{\,}}}{\text {v}} \in {\mathcal {V}}\) holds
$$\begin{aligned} (\sigma _{{\text {v}}})_{v} = (\sigma _{{\text {v}}})_{v{^{\hat{\,}}}{\text {v}}}. \end{aligned}$$(6)
To illustrate (6), let us take the game \(G_{12}\)—the game that player 1 thinks that player 2 is considering. If player 1 assumes that player 2 plays strategy \((\sigma _{2})_{12}\) in the game \(G_{12}\), she must assume the same strategy in the game \(G_{1}\) that she considers, i.e., \((\sigma _{2})_{1} = (\sigma _{2})_{12}\).
The next step is to extend rationalizability from strategic-form games to games with unawareness.
Definition 4
An ESP \(\{(\sigma )_{v}\}_{v\in {\mathcal {V}}}\) in a game with unawareness is called extended rationalizable if for every \(v{^{\hat{\,}}}{\text {v}} \in {\mathcal {V}}\) strategy \((\sigma _{{\text {v}}})_{v}\) is a best reply to \((\sigma _{-{\text {v}}})_{v{^{\hat{\,}}}{\text {v}}}\) in the game \(G_{v{^{\hat{\,}}}{\text {v}}}\).
Consider a strategic-form game with unawareness \(\{G_{v}\}_{v\in {\mathcal {V}}}\). For every relevant view \(v \in {\mathcal {V}}\), the relevant views as seen from v are defined to be \({\mathcal {V}}^{v} = \{{\tilde{v}} \in {\mathcal {V}}:v{^{\hat{\,}}}{\tilde{v}} \in {\mathcal {V}}\}\). Then, the game with unawareness as seen from v is defined by \(\{G_{v{^{\hat{\,}}}{\tilde{v}}}\}_{{\tilde{v}} \in {\mathcal {V}}^{v}}\). We are now in a position to define the Nash equilibrium in the strategic-form games with unawareness.
Definition 5
An ESP \(\{(\sigma )_{v}\}_{v\in {\mathcal {V}}}\) in a game with unawareness is called an extended Nash equilibrium (ENE) if it is rationalizable and for all \(v, {\bar{v}} \in {\mathcal {V}}\) such that \(\{G_{v{^{\hat{\,}}}{\tilde{v}}}\}_{{\tilde{v}} \in {\mathcal {V}}^{v}} = \{G_{{\bar{v}}{^{\hat{\,}}}{\tilde{v}}}\}_{{\tilde{v}} \in {\mathcal {V}}^{{\bar{v}}}}\) we have that \((\sigma )_{v} = (\sigma )_{{\bar{v}}}\).
The first part of the definition (rationalizability) is similar to the standard Nash equilibrium, where it is required that each strategy in the equilibrium is a best reply to the other strategies of that profile. For example, according to Definition 4, player 2’s strategy \((\sigma _{2})_{1}\) in the game of player 1 has to be a best reply to player 1’s strategy \((\sigma _{1})_{12}\) in the game \(G_{12}\). On the other hand, in contrast to the concept of Nash equilibrium, \((\sigma _{1})_{12}\) does not have to be a best reply to \((\sigma _{2})_{1}\) but to strategy \((\sigma _{2})_{121}\).
The following proposition shows that the notion of extended Nash equilibrium coincides with the standard one for strategic-form games when all views share the same perception of the game.
Proposition 1
Let G be a strategic-form game and \(\{G_{v}\}_{v \in {\mathcal {V}}}\) a strategic-form game with unawareness such that for some \(v\in {\mathcal {V}}\), we have \(G_{v{^{\hat{\,}}}{\bar{v}}} = G\) for every \({\bar{v}}\) such that \(v{^{\hat{\,}}}{\bar{v}} \in {\mathcal {V}}\). Let \(\sigma \) be a strategy profile in G. Then,
1. \(\sigma \) is rationalizable for G if and only if \((\sigma )_{v} = \sigma \) is part of an extended rationalizable profile in \(\{G_{v}\}_{v\in {\mathcal {V}}}\).
2. \(\sigma \) is a Nash equilibrium for G if and only if \((\sigma )_{v} = \sigma \) is part of an ENE for \(\{G_{v}\}_{v\in {\mathcal {V}}}\), and this ENE also satisfies \((\sigma )_{v} = (\sigma )_{v{^{\hat{\,}}}{\bar{v}}}\).
Remark 1
We see from (3) and (6) that for every \({v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}} \in {\mathcal {V}}}\) a normal-form game \({G_{v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}}}}\) and a strategy profile \({(\sigma )_{v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}}}}\) determine the games and profiles of the form \({G_{v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\dots }{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}}}}\) and \({(\sigma )_{v{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\dots }{^{\hat{\,}}}{\text {v}}{^{\hat{\,}}}{\bar{v}}}}\), respectively; for example, \(G_{121}\) determines \(G_{122\dots 21}\). Hence, in general, a game with unawareness \(\{G_{v}\}_{v\in {\mathcal {V}}}\) and an extended strategy profile \(\{(\sigma )_{v}\}_{v\in {\mathcal {V}}}\) are defined by \(\{G_{v}\}_{v\in {\mathcal {N}} \cup \{\emptyset \}}\) and \(\{(\sigma )_{v}\}_{v\in {\mathcal {N}} \cup \{\emptyset \}}\), where
$$\begin{aligned} {\mathcal {N}} = \left\{ (i_{1}, \dots , i_{n}) \in {\mathcal {V}}:i_{k} \ne i_{k+1}~\text {for}~k = 1, \dots , n-1\right\} . \end{aligned}$$(7)
Then, we get \(\{G_{v}\}_{v\in {\mathcal {V}}}\) from \(\{G_{v}\}_{v\in {\mathcal {N}} \cup \{\emptyset \}}\) by setting \(G_{{\tilde{v}}} = G_{v}\) for \(v=(i_{1},\dots , i_{n})\in {\mathcal {N}}\) and \({\tilde{v}} = (i_{1}, \dots , i_{k}, i_{k}, i_{k+1}, \dots , i_{n}) \in {\mathcal {V}}\). For this reason, we often restrict ourselves to \({\mathcal {N}} \cup \{\emptyset \}\) throughout the paper.
3 Twice repeated \(2\times 2\) game
The concept of a finitely repeated game assumes playing a normal-form game (a stage of the repeated game) for a fixed number of times (see, for example, [18]). The players are informed about the results of consecutive stages. Let us consider a \(2\times 2\) bimatrix game
$$\begin{aligned} \begin{pmatrix} (a_{00}, b_{00}) &{} (a_{01}, b_{01})\\ (a_{10}, b_{10}) &{} (a_{11}, b_{11}) \end{pmatrix}. \end{aligned}$$(8)
The two-stage \(2\times 2\) bimatrix game can easily be depicted as an extensive-form game (see Fig. 1). The first stage of the twice repeated \(2\times 2\) game is the part of the game where the players specify an action C or D at the information sets 1.1 and 2.1. Once the players choose their actions, the result of the first stage is announced. Since they have knowledge about the results of the first stage, they can choose different actions at the second stage depending on the previous result. Hence, the next four game trees from Fig. 1 are required to describe the repeated game. Each player has five information sets at which they specify their own actions; player 1’s information sets are denoted by 1.1, 1.2, 1.3, 1.4 and 1.5, player 2’s information sets are 2.1, 2.2, 2.3, 2.4 and 2.5. Note that player 2’s information sets consist of two nodes connected by dotted lines. This is intended to show player 2’s lack of knowledge about the previous move of player 1. Recall that a player’s strategy is a function that assigns to each information set of that player an action available at that information set. In our example, this means that each player’s strategy specifies an action at the first stage and four actions at the second stage. For example, the strategy (C, C, D, D, C) of a player in the game given in Fig. 1 says that the player chooses action C at the first stage and, depending on one of the four possible results of the first stage, chooses actions C, D, D, C, respectively.
If player 1 plays that strategy whereas player 2 chooses, for example, (D, C, D, C, C), then the resulting strategy vector determines the unique path from the node 1.1 that intersects the nodes 2.1, 1.2 and 2.3 and gives the payoff outcome \((a_{01} + a_{10}, b_{01} + b_{10})\).
The players can also choose their own actions in a random way, i.e., according to some probability distribution determined by themselves. Such strategies are called behavioral strategies (see, for example, [2]).
Definition 6
A behavior strategy of a player in an extensive-form game is a function mapping each of his information sets to a probability distribution over the set of possible actions at that information set.
For example, in the case of the game given by Fig. 1, player 1’s and player 2’s behavioral strategies are determined by quintuples \((p_{1}, p_{2}, p_{3}, p_{4}, p_{5})\) and \((q_{1}, q_{2}, q_{3}, q_{4}, q_{5})\), respectively, in which \(p_{i}\) and \(q_{i}\) are the probabilities of choosing their first strategy at information set i. The payoff outcome resulting from the players’ general behavioral strategies is
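Since (9) is just a weighted sum over the sixteen action paths of the game tree, the expected payoffs can be computed by direct enumeration. Below is a minimal Python sketch; the function name is ours, and the assumed indexing of the second-stage information sets, with the first-stage outcome (C,C) leading to sets 1.2/2.2, (C,D) to 1.3/2.3, (D,C) to 1.4/2.4 and (D,D) to 1.5/2.5, is consistent with the proof of Proposition 2:

```python
import itertools

def repeated_payoff(p, q, a, b):
    """Expected payoffs of the twice repeated 2x2 game.

    p, q -- quintuples of probabilities of choosing the first action (C)
            at player 1's / player 2's information sets 1-5.
    a, b -- 2x2 stage-game payoff matrices (a_ij) and (b_ij).
    Assumed indexing: first-stage outcome (C,C) leads to the information
    sets 1.2/2.2, (C,D) to 1.3/2.3, (D,C) to 1.4/2.4, (D,D) to 1.5/2.5.
    """
    u1 = u2 = 0.0
    # s, t: first-stage actions of players 1 and 2 (0 = C, 1 = D)
    for s, t in itertools.product((0, 1), repeat=2):
        w1 = (p[0] if s == 0 else 1 - p[0]) * (q[0] if t == 0 else 1 - q[0])
        k = 1 + 2 * s + t  # index of the second-stage information set
        # s2, t2: second-stage actions
        for s2, t2 in itertools.product((0, 1), repeat=2):
            w2 = (p[k] if s2 == 0 else 1 - p[k]) * (q[k] if t2 == 0 else 1 - q[k])
            u1 += w1 * w2 * (a[s][t] + a[s2][t2])
            u2 += w1 * w2 * (b[s][t] + b[s2][t2])
    return u1, u2

# Prisoner's Dilemma stage game with sample values (R, S, T, P) = (3, 0, 5, 1)
a = [[3, 0], [5, 1]]
b = [[3, 5], [0, 1]]
print(repeated_payoff([1] * 5, [1] * 5, a, b))  # both always cooperate -> (6.0, 6.0)
```

With all probabilities equal to 1 (both players always choosing C), the routine returns \((2a_{00}, 2b_{00})\), matching the first term of (9).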
4 Construction of a twice repeated \(2\times 2\) quantum game
We propose a scheme of playing a twice repeated \(2\times 2\) game. It is based on the protocol introduced in [10], where a quantum approach to general finite extensive-form games was considered. A two-stage \(2\times 2\) game is an example of an extensive game with ten information sets. According to the idea presented in [10], we associate choosing an action at an information set with a unitary operation performed on a qubit. As a result, each player specifies a unitary action on each of five qubits. To be more specific, let us consider a \(2\times 2\) bimatrix game (8). We define a triple
where
- \({\mathcal {H}}\) is a Hilbert space \(\left( \mathbb {C}^2\right) ^{\otimes 10}\),
- \({{\mathsf {S}}}{{\mathsf {U}}}(2)\) is the special unitary group of degree 2. The commonly used parameterization for \(U\in {{\mathsf {S}}}{{\mathsf {U}}}(2)\) is given by
$$\begin{aligned} \begin{pmatrix} \text {e}^{\text {i}\alpha }\cos {\frac{\theta }{2}} &{} \text {i}\text {e}^{\text {i}\beta }\sin {\frac{\theta }{2}}\\ \text {i}\text {e}^{-\text {i}\beta }\sin {\frac{\theta }{2}} &{} \text {e}^{-\text {i}\alpha }\cos {\frac{\theta }{2}} \end{pmatrix}, \quad \theta \in [0,\pi ],~\alpha , \beta \in [0, 2\pi ),\end{aligned}$$(11)
- \(|\Psi _{\text {f}}\rangle \) is the final state determined by a strategy \(\bigotimes ^5_{i=1}U_{i}(\theta _{i}, \alpha _{i}, \beta _{i}) \in {{\mathsf {S}}}{{\mathsf {U}}}(2)^{\otimes 5}\) of player 1 and a strategy \(\bigotimes ^{10}_{j=6}U_{j}(\theta _{j}, \alpha _{j}, \beta _{j}) \in {{\mathsf {S}}}{{\mathsf {U}}}(2)^{\otimes 5}\) of player 2 according to the following formula:
$$\begin{aligned} |\Psi _{\text {f}}\rangle = J^{\dag }\left( \bigotimes ^{10}_{i=1}U_{i}(\theta _{i}, \alpha _{i}, \beta _{i})\right) J|0\rangle ^{\otimes 10}, \quad J=\frac{1}{\sqrt{2}}\left( \mathbb {1}^{\otimes 10} + i\sigma ^{\otimes 10}_{x}\right) , \end{aligned}$$(12)
- the payoff vector function \((u_{1}, u_{2})\) is given by
$$\begin{aligned} (u_{1}, u_{2})\left( \bigotimes ^{10}_{i=1}U_{i}(\theta _{i}, \alpha _{i}, \beta _{i}) \right) = {\text {tr}}\left( X|\Psi _{\text {f}}\rangle \langle \Psi _{\text {f}}|\right) , \end{aligned}$$(13)
where
$$\begin{aligned} X= & {} (2a_{00}, 2b_{00})|00\rangle \langle 00|\otimes \mathbb {1}^{\otimes 3}\otimes |00\rangle \langle 00| \otimes \mathbb {1}^{\otimes 3}\nonumber \\&+\, (a_{00} + a_{01}, b_{00} + b_{01})|00\rangle \langle 00|\otimes \mathbb {1}^{\otimes 3}\otimes |01\rangle \langle 01|\otimes \mathbb {1}^{\otimes 3}\nonumber \\&+\, (a_{00} + a_{10}, b_{00} + b_{10})|01\rangle \langle 01|\otimes \mathbb {1}^{\otimes 3}\otimes |00\rangle \langle 00|\otimes \mathbb {1}^{\otimes 3} \nonumber \\&+\, (a_{00}+a_{11}, b_{00}+b_{11})|01\rangle \langle 01|\otimes \mathbb {1}^{\otimes 3} \otimes |01\rangle \langle 01|\otimes \mathbb {1}^{\otimes 3}\nonumber \\&+\, (a_{01} + a_{00}, b_{01} + b_{00})|0\rangle \langle 0| \otimes \mathbb {1} \otimes |0\rangle \langle 0|\otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1|\otimes \mathbb {1} \otimes |0\rangle \langle 0| \otimes \mathbb {1}^{\otimes 2}\nonumber \\&+\, (2a_{01}, 2b_{01})|0\rangle \langle 0| \otimes \mathbb {1} \otimes |0\rangle \langle 0|\otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1|\otimes \mathbb {1} \otimes |1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 2} \nonumber \\&+\, (a_{01}+a_{10}, b_{01} + b_{10})|0\rangle \langle 0| \otimes \mathbb {1} \otimes |1\rangle \langle 1|\otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1|\otimes \mathbb {1} \otimes |0\rangle \langle 0| \otimes \mathbb {1}^{\otimes 2} \nonumber \\&+\, (a_{01}+a_{11}, b_{01}+b_{11})|0\rangle \langle 0| \otimes \mathbb {1} \otimes |1\rangle \langle 1|\otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1|\otimes \mathbb {1} \otimes |1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 2} \nonumber \\&+\, (a_{10}+a_{00}, b_{10} + b_{00})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 2} \otimes |0\rangle \langle 0|\otimes \mathbb {1} \otimes |0\rangle \langle 0| \otimes \mathbb {1}^{\otimes 2} \otimes |0\rangle \langle 0| \otimes \mathbb {1} \nonumber \\&+\, (a_{10}+a_{01}, b_{10} + b_{01})|1\rangle \langle 
1| \otimes \mathbb {1}^{\otimes 2} \otimes |0\rangle \langle 0|\otimes \mathbb {1} \otimes |0\rangle \langle 0| \otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1| \otimes \mathbb {1} \nonumber \\&+\, (2a_{10}, 2b_{10})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1|\otimes \mathbb {1} \otimes |0\rangle \langle 0| \otimes \mathbb {1}^{\otimes 2} \otimes |0\rangle \langle 0| \otimes \mathbb {1} \nonumber \\&+\, (a_{10}+a_{11}, b_{10} + b_{11})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 2}\otimes |1\rangle \langle 1| \otimes \mathbb {1} \otimes |0\rangle \langle 0| \otimes \mathbb {1}^{\otimes 2} \otimes |1\rangle \langle 1|\otimes \mathbb {1} \nonumber \\&+\, (a_{11}+a_{00}, b_{11} + b_{00})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3}\otimes |0\rangle \langle 0| \otimes |1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3} \otimes |0\rangle \langle 0|\nonumber \\&+\, (a_{11}+a_{01}, b_{11} + b_{01})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3}\otimes |0\rangle \langle 0| \otimes |1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3} \otimes |1\rangle \langle 1|\nonumber \\&+\, (a_{11}+a_{10}, b_{11} + b_{10})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3}\otimes |1\rangle \langle 1| \otimes |1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3} \otimes |0\rangle \langle 0| \nonumber \\&+\, (2a_{11}, 2b_{11})|1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3}\otimes |1\rangle \langle 1| \otimes |1\rangle \langle 1| \otimes \mathbb {1}^{\otimes 3} \otimes |1\rangle \langle 1|. \end{aligned}$$(14)
The construction (14) of the operator X results from the following reasoning. First, note that the information sets 1.1, ..., 1.5 of player 1 are associated with the first five qubits, and the information sets 2.1, ..., 2.5 of player 2 are associated with the other five qubits. Now, consider, for example, the outcome \((2a_{00}, 2b_{00})\). In the classical case, that payoff outcome is obtained if the players choose their first strategies at the information sets 1.1, 2.1, 1.2 and 2.2. These information sets are assigned to the first, sixth, second and seventh qubit, respectively. Therefore, the state 0 measured on those qubits results in the outcome \((2a_{00}, 2b_{00})\) in the quantum game. In a similar way, we can justify the other terms of (14).
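The parameterization (11) and the operators representing the classical actions can be checked directly. A short sketch (the function name `U` is ours; the identification of C with \(\mathbb {1}\) and D with \(\text {i}\sigma _{x}\) follows the restriction used in Proposition 2 and Section 5):

```python
import numpy as np

def U(theta, alpha, beta):
    """The SU(2) matrix of the parameterization (11)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[np.exp(1j * alpha) * c, 1j * np.exp(1j * beta) * s],
                     [1j * np.exp(-1j * beta) * s, np.exp(-1j * alpha) * c]])

# the classical actions: C corresponds to the identity, D to i*sigma_x
C, D = U(0, 0, 0), U(np.pi, 0, 0)
assert np.allclose(C, np.eye(2))
assert np.allclose(D, 1j * np.array([[0, 1], [1, 0]]))

# any parameter choice yields a unitary matrix with determinant 1
M = U(1.2, 0.4, 2.1)
assert np.allclose(M @ M.conj().T, np.eye(2))
assert np.isclose(np.linalg.det(M), 1)
```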
The scheme defined by (10)–(14) is an extension of the classical way of playing the game. As in the case of the standard Eisert–Wilkens–Lewenstein scheme, the model \(\Gamma _{QQ}\) determines the game equivalent to the classical one by restricting the strategy sets of the players.
Proposition 2
The game determined by
is outcome-equivalent to the two-stage bimatrix \(2\times 2\) game.
Proof
Let us first consider the outcome \((2a_{00}, 2b_{00})\). Denote by P the projection of (14) corresponding to that outcome,
If player 1 and 2 choose \(\bigotimes ^5_{i=1}U_{i}(\theta _{i}, 0,0)\) and \(\bigotimes ^{10}_{i=6}U_{i}(\theta _{i}, 0,0)\), respectively, the final state becomes
and the probability of obtaining \((2a_{00}, 2b_{00})\) is
So, by substituting
the right-hand side of (18) multiplied by \((2a_{00}, 2b_{00})\) is equal to the first term of (9). Similarly, the outcome \((a_{10}+a_{11}, b_{10} + b_{11})\) is associated with the projection
In this case,
Substituting
we obtain \((1-p_{1})q_{1}(1-p_{4})(1-q_{4})\). In general, a strategy profile in the form
results in the outcome (9). \(\square \)
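Proposition 2 can also be verified numerically: with strategies of the restricted form \(U_{i}(\theta _{i}, 0, 0)\), the final state (12) assigns classical product probabilities to the measurement outcomes. The sketch below checks this for the projection associated with \((2a_{00}, 2b_{00})\); the convention that qubit k is the k-th tensor factor (most significant bit first) is our assumption:

```python
import numpy as np
from functools import reduce

def U(theta, alpha, beta):
    """The SU(2) matrix of the parameterization (11)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[np.exp(1j * alpha) * c, 1j * np.exp(1j * beta) * s],
                     [1j * np.exp(-1j * beta) * s, np.exp(-1j * alpha) * c]])

n = 10
sx = np.array([[0, 1], [1, 0]])
J = (np.eye(2 ** n) + 1j * reduce(np.kron, [sx] * n)) / np.sqrt(2)  # gate J of (12)

rng = np.random.default_rng(7)
thetas = rng.uniform(0, np.pi, n)
# strategies of the restricted form U(theta_i, 0, 0), as in Proposition 2
Us = reduce(np.kron, [U(t, 0, 0) for t in thetas])

psi0 = np.zeros(2 ** n, dtype=complex)
psi0[0] = 1.0                                   # |0>^(x 10)
psi_f = J.conj().T @ (Us @ (J @ psi0))          # final state (12)

# probability of measuring qubits 1, 2, 6, 7 in state |0> -- the event
# associated with the outcome (2a00, 2b00) in (14); qubit k is the k-th
# tensor factor, most significant bit first
probs = np.abs(psi_f) ** 2
prob = sum(probs[x] for x in range(2 ** n)
           if all((x >> (n - k)) & 1 == 0 for k in (1, 2, 6, 7)))

# classical value p1*p2*q1*q2 with p_i = cos^2(theta_i / 2)
expected = np.prod([np.cos(thetas[k - 1] / 2) ** 2 for k in (1, 2, 6, 7)])
assert np.isclose(prob, expected)
```

Since all matrix products here act on a state vector of length \(2^{10}\), the check runs in a fraction of a second.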
5 Twice-repeated quantum Prisoner’s Dilemma with unawareness
The Prisoner’s Dilemma is one of the most interesting problems in game theory. It shows how the individual rationality of the players can lead them to an inefficient result. Let us consider a general form of the Prisoner’s Dilemma
$$\begin{aligned} \begin{pmatrix} (R, R) &{} (S, T)\\ (T, S) &{} (P, P) \end{pmatrix}, \end{aligned}$$(24)
where \(T>R>P>S\). The payoff profile (R, R) of (24) is more beneficial to both players than (P, P). However, each player obtains a higher payoff by choosing D instead of C (in other words, the strategy C is strictly dominated by D). As a result, the rational strategy profile is (D, D), and it implies the payoff P for each player. A similar scenario occurs in the case of the finitely repeated Prisoner’s Dilemma. By induction, it can be shown that playing the action D at each stage of the finitely repeated Prisoner’s Dilemma constitutes the unique Nash equilibrium.
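The dominance argument above can be confirmed mechanically. A small sketch with sample payoffs satisfying \(T>R>P>S\):

```python
import itertools

T, R, P, S = 5, 3, 1, 0  # sample payoffs with T > R > P > S
u1 = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
u2 = {('C', 'C'): R, ('C', 'D'): T, ('D', 'C'): S, ('D', 'D'): P}

# D strictly dominates C for player 1 (the game is symmetric)
assert all(u1[('D', y)] > u1[('C', y)] for y in 'CD')

# brute-force search for pure Nash equilibria of the stage game
nash = [(x, y) for x, y in itertools.product('CD', repeat=2)
        if u1[(x, y)] == max(u1[(z, y)] for z in 'CD')
        and u2[(x, y)] == max(u2[(x, z)] for z in 'CD')]
assert nash == [('D', 'D')]
```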
We assume that the modeler’s game \(G_{\emptyset }\) (the game that is actually played by the players) is defined by (10). Player 1, being aware of all the unitary strategies, also views the quantum game, i.e., \(G_{1} = \Gamma _{QQ}\). Next, we assume that player 2 perceives the game to be the classical one. In other words, player 2 views the game of the form
We then assume that player 1 finds that player 2 is considering \(\Gamma _{CC}\), and higher-order views \(v\in \{21, 121, 212, \dots \}\) are associated with \(\Gamma _{CC}\). We thus obtain a game with unawareness \(\{\Gamma _{v}\}_{v\in {\mathcal {V}}_{0}}\) defined as follows:
In what follows, we determine the players’ rational strategies by applying the notion of extended Nash equilibrium. First, we need to formulate a lemma that specifies player 1’s best reply to the Nash equilibrium strategy of the classical twice repeated Prisoner’s Dilemma. Recall that the action D corresponds to \(\text {i}\sigma _{x}\) in the quantum scheme (10). This implies that \((\text {i}\sigma _{x})^{\otimes 5}\) is the counterpart of the unique Nash equilibrium (D, D, D, D, D) in the classical game. The following result forms part of the extended Nash equilibrium.
Lemma 1
Player 1’s best reply to \((\text {i}\sigma _{x})^{\otimes 5}\) in the set \({{\mathsf {S}}}{{\mathsf {U}}}(2)^{\otimes 5}\) is of the form
where \(\theta _{i} \in [0,\pi /2]\), \(\sum _{i}\gamma _{i} = k\pi /2\), \(k\in 2\mathbb {Z}+1\).
The complete proof of Lemma 1 is given in “Appendix A.” Here, we derive the result of playing the strategy profile \(\tau ^*\otimes (\text {i}\sigma _{x})^{\otimes 5}\). Player 1’s payoff resulting from playing the strategy (26) against \((\text {i}\sigma _{x})^{\otimes 5}\) is
Thus, player 1 obtains the maximal payoff 2T by choosing \(\sum ^5_{i=1}\gamma _{i} = k\pi /2\), \(k\in 2\mathbb {Z}+1\).
Remark 2
It is worth noting that the strategy (26) turns out to be a nontrivial extension of the quantum player’s best reply to strategy \(\text {i}\sigma _{x}\) in the one-stage Prisoner’s Dilemma. Recall that according to [4, 19], the Eisert–Wilkens–Lewenstein approach to game (24) is defined by the final state
$$\begin{aligned} |\Psi _{1}\rangle = J^{\dag }\left( U_{1}(\theta _{1}, \alpha _{1}, \beta _{1})\otimes U_{2}(\theta _{2}, \alpha _{2}, \beta _{2})\right) J|00\rangle , \quad J = \frac{1}{\sqrt{2}}\left( \mathbb {1}^{\otimes 2} + \text {i}\sigma ^{\otimes 2}_{x}\right) , \end{aligned}$$(28)
and the measurement operator
$$\begin{aligned} Y = (R, R)|00\rangle \langle 00| + (S, T)|01\rangle \langle 01| + (T, S)|10\rangle \langle 10| + (P, P)|11\rangle \langle 11|. \end{aligned}$$(29)
In the case \(U_{2} = \text {i}\sigma _{x}\),
player 1’s payoff \(u_{1}(U_{1}(\theta _{1}, \alpha _{1}, \beta _{1})\otimes \text {i}\sigma _{x}) = {\text {tr}}\left( Y|\Psi _{1}\rangle \langle \Psi _{1}|\right) = T\) if \(\theta _{1} = 0\) and \(\alpha _{1} \in \{\pi /2, 3\pi /2\}\). Thus, the set of player 1’s best replies to \(\text {i}\sigma _{x}\) is
$$\begin{aligned} \left\{ U_{1}(0, \alpha _{1}, \beta _{1}):\alpha _{1} \in \{\pi /2, 3\pi /2\},~\beta _{1} \in [0, 2\pi )\right\} . \end{aligned}$$(30)
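The one-stage computation of Remark 2 can be reproduced numerically. A sketch of the standard EWL game (the assignment of the payoff pairs (R,R), (S,T), (T,S), (P,P) to the basis states follows the Prisoner's Dilemma bimatrix; concrete payoff values are sample choices with \(T>R>P>S\)):

```python
import numpy as np

def U(theta, alpha, beta):
    """The SU(2) matrix of the parameterization (11)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[np.exp(1j * alpha) * c, 1j * np.exp(1j * beta) * s],
                     [1j * np.exp(-1j * beta) * s, np.exp(-1j * alpha) * c]])

T, R, P, S = 5, 3, 1, 0  # sample payoffs with T > R > P > S
sx = np.array([[0, 1], [1, 0]])
J = (np.eye(4) + 1j * np.kron(sx, sx)) / np.sqrt(2)

def ewl_payoffs(U1, U2):
    """Payoffs of the one-stage EWL Prisoner's Dilemma."""
    psi = J.conj().T @ np.kron(U1, U2) @ J @ np.array([1, 0, 0, 0])
    p = np.abs(psi) ** 2            # probabilities of |00>, |01>, |10>, |11>
    return p @ np.array([R, S, T, P]), p @ np.array([R, T, S, P])

# the best reply of Remark 2: theta1 = 0, alpha1 = pi/2, against i*sigma_x
u1, u2 = ewl_payoffs(U(0, np.pi / 2, 0), 1j * sx)
assert np.isclose(u1, T) and np.isclose(u2, S)
```

The final state in this case is \(\text {i}|10\rangle \), so the payoff pair is (T, S), confirming that the quantum player extracts the maximal payoff T.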
Proposition 3
Let \(\{\Gamma _{v}\}_{v\in {\mathcal {V}}_{0}}\) be a game with unawareness defined by (25). Then, all extended Nash equilibria \(\{(\sigma )_{v}\}\) of \(\{\Gamma _{v}\}_{v\in {\mathcal {V}}_{0}}\) are of the form:
Proof
Since \(\Gamma _{v} = \Gamma _{CC}\) for \(v\in {\mathcal {V}}_{0}{\setminus } \{\emptyset , 1\}\), it follows that \((\sigma )_{v}\) is a Nash equilibrium in \(\Gamma _{CC}\). We know from classical game theory that the unique Nash equilibrium in the twice repeated Prisoner’s Dilemma is (D, D, D, D, D). In terms of the EWL scheme, that profile can be written as \((i\sigma _{x})^{\otimes 5}\). Therefore,
In order to prove that \((\sigma )_{1} = (\sigma _{1}, \sigma _{2})_{1} = \left( \tau ^*, (i\sigma _{x})^{\otimes 5}\right) \), we first note from the definition of extended strategy profile that
According to Definition 4, player 1’s strategy \((\sigma _{1})_{1}\) has to be a best reply to \((\sigma _{2})_{1} = (i\sigma _{x})^{\otimes 5}\) in the game \(\Gamma _{1} = \Gamma _{QQ}\). Since player 1 has access to all the unitary actions, by Lemma 1, his best reply to \((i\sigma _{x})^{\otimes 5}\) is \((\sigma _{1})_{1} = \tau ^*\) given by (26). Finally, (6) implies that
\(\square \)
6 Higher-order unawareness
In the previous section, we considered a typical case in which one of the players is aware of quantum strategies, whereas the other player views the classical game. Then, we showed that the quantum player obtains the best possible payoff resulting from playing an extended Nash equilibrium. An interesting question that arises here is whether the strategic position of the classical player can be improved by increasing his awareness of the game. Let us consider the case where player 1 views the quantum game. In addition, player 2 is aware that player 1 uses quantum strategies (\(\Gamma '_{2} = \Gamma _{QC}\)), and he knows that player 1 views the quantum strategies (\(\Gamma '_{21} = \Gamma _{QC}\)). The formal way of describing the problem is twofold. Player 1 can perceive the game with quantum strategies for both players (\(\Gamma '_{1} = \Gamma _{QQ}\)), or he may think that he is the only one who has access to all the unitary strategies (\(\Gamma ''_{1} = \Gamma _{QC}\)). As long as player 1 finds that player 2 is considering the classical game \(\Gamma _{CC}\) (i.e., \(\Gamma '_{12} = \Gamma ''_{12} = \Gamma _{CC}\)), both ways describe the same problem. Formally, the case in which the classical player is aware that player 1 uses quantum strategies is given by the collections of games \(\{\Gamma '_{v}\}\) or \(\{\Gamma ''_{v}\}\), where
In order to determine the reasonable outcome of (37), we need to find player 2’s best reply to \(\tau ^*\).
Lemma 2
Player 2’s best reply to \(\tau ^*\) in the set \(\{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 5}\) is of the form
Proof
Since player 2’s payoff function is linear in each pure strategy of \(\{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 5}\) when player 1’s strategy is fixed, no mixed best reply can lead to a higher payoff. It is therefore sufficient to compare the expected payoffs of player 2 that correspond to strategy profiles from \(\tau ^*\otimes \{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 5}\). We obtain the following four different outcomes:

| Strategy profile | Player 2’s payoff |
|---|---|
| \(\tau ^* \otimes \left( \mathbb {1}\otimes \{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 3}\otimes \mathbb {1}\right) \) | \((P+T)\sin ^2{(\theta _{5}/2)} + 2P\cos ^2{(\theta _{5}/2)}\) |
| \(\tau ^* \otimes \left( \mathbb {1}\otimes \{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 3}\otimes \text {i}\sigma _{x} \right) \) | \((P+R)\sin ^2{(\theta _{5}/2)} + (P+S)\cos ^2{(\theta _{5}/2)}\) |
| \(\tau ^* \otimes \left( \text {i}\sigma _{x} \otimes \{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 2} \otimes \mathbb {1}\otimes \{\mathbb {1}, \text {i}\sigma _{x}\} \right) \) | \(S+P\) |
| \(\tau ^*\otimes \left( \text {i}\sigma _{x} \otimes \{\mathbb {1}, \text {i}\sigma _{x}\}^{\otimes 2}\otimes \text {i}\sigma _{x} \otimes \{\mathbb {1}, \text {i}\sigma _{x}\}\right) \) | 2S |
From the fact that \(T>R>P>S\), we see that player 2’s best reply is given by (38) for every \(\theta _{5} \in [0,\pi /2]\). \(\square \)
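The comparison of the four rows can be confirmed numerically for sample payoffs with \(T>R>P>S\), sweeping \(\theta _{5}\) over \([0, \pi /2]\):

```python
import numpy as np

T, R, P, S = 5, 3, 1, 0  # sample payoffs with T > R > P > S

for th5 in np.linspace(0, np.pi / 2, 51):
    s2 = np.sin(th5 / 2) ** 2
    c2 = np.cos(th5 / 2) ** 2
    rows = [
        (P + T) * s2 + 2 * P * c2,    # (1, *, *, *, 1): the reply (38)
        (P + R) * s2 + (P + S) * c2,  # (1, *, *, *, i*sigma_x)
        S + P,                        # (i*sigma_x, *, *, 1, *)
        2 * S,                        # (i*sigma_x, *, *, i*sigma_x, *)
    ]
    assert rows[0] == max(rows)       # the first row is always maximal
```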
Lemma 2 enables us to determine all the extended Nash equilibria in \(\{\Gamma _{v}\}\) defined by (37).
Proposition 4
Let \(\{\Gamma _{v}\}_{v\in {\mathcal {V}}_{0}}\) be a game with unawareness defined by (37). Then, all extended Nash equilibria \(\{(\sigma )_{v}\}\) of \(\{\Gamma _{v}\}_{v\in {\mathcal {V}}_{0}}\) are of the form:
Proof
The proof proceeds along the same lines as the proof of Proposition 3. Without loss of generality, we can assume that \(\{\Gamma _{v}\} = \{\Gamma ''_{v}\}\) according to (37). Arguments similar to those in the proof of Proposition 3 show that
and
Since \((\sigma _{{\text {v}}})_{v} = (\sigma _{{\text {v}}})_{v{^{\hat{\,}}}{\text {v}}}\) (see Definition 3),
Now, \((\sigma _{2})_{2}\) is a best reply to \((\sigma _{1})_{2}\) in the game \(\Gamma ''_{2}\). By Lemma 2, \((\sigma _{1})_{2} = \tau ^*_{2}\). Consequently,
\(\square \)
According to (39), the result of the game is \((\sigma )_{2} = (\sigma )_{\emptyset } = (\tau ^*, \tau ^*_{2})\). It corresponds to the following payoffs: \((P+S)\sin ^2(\theta _{5}/2) + 2P\cos ^2(\theta _{5}/2)\) for player 1 and \((P+T)\sin ^2(\theta _{5}/2) + 2P\cos ^2(\theta _{5}/2)\) for player 2. Since player 1 has no preferred value of the parameter \(\theta _{5}\) in \(\tau ^*\), the difference between player 2’s and player 1’s payoffs can take any value of \((T-S)\sin ^2(\theta _{5}/2)\). If we assume that the parameter \(\theta _{5}\) is uniformly distributed over \([0,\pi ]\), then, on average, player 2 gets
more than player 1.
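The average payoff gap can be checked numerically. The sketch below is illustrative only: it assumes the standard Prisoner's Dilemma payoffs \(T=5\), \(R=3\), \(P=1\), \(S=0\) (any values with \(T>R>P>S\) give the same qualitative picture).

```python
import math
import random

# Illustrative Prisoner's Dilemma payoffs (assumption: any T > R > P > S works)
T, R, P, S = 5, 3, 1, 0

def payoff_gap(theta5):
    """Player 2's payoff minus player 1's payoff at the equilibrium profile."""
    p1 = (P + S) * math.sin(theta5 / 2) ** 2 + 2 * P * math.cos(theta5 / 2) ** 2
    p2 = (P + T) * math.sin(theta5 / 2) ** 2 + 2 * P * math.cos(theta5 / 2) ** 2
    return p2 - p1  # equals (T - S) * sin^2(theta5 / 2)

# Monte Carlo average of the gap for theta5 uniform on [0, pi]
random.seed(0)
n = 200_000
avg = sum(payoff_gap(random.uniform(0.0, math.pi)) for _ in range(n)) / n

# Analytically, the mean of sin^2(theta5/2) over [0, pi] is 1/2,
# so the average gap is (T - S)/2 = 2.5 for these payoffs.
print(avg, (T - S) / 2)
```

The Monte Carlo estimate agrees with the closed-form value \((T-S)/2\) to within sampling error.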
7 Summary and conclusions
In this paper, we proposed a new scheme for a twice repeated quantum game, based on the fact that such a game is a particular case of an extensive-form game. We analyzed the scheme for the twice repeated Prisoner's Dilemma, focusing on situations where the players have different perceptions of the game, described by the formalism of games with unawareness [12].
In particular, we determined the extended Nash equilibrium for the case where one player has access to the full range of quantum strategies while the other perceives the game as a classical one. We found the best replies of the quantum player to the classical equilibrium strategy. This result extends the corresponding one for the one-stage version of the game and similarly allows the quantum player to obtain the best possible outcome.
We also discussed higher-order unawareness, where we slightly enlarged the game perception of the classical player, so that he knows that his opponent is actually a quantum player, while the quantum player is unaware of that knowledge. We showed that this situation improves the strategic position of the classical player. As a result of playing the extended Nash equilibrium, the difference between the classical and the quantum player's payoffs is always nonnegative, and strictly positive as long as the parameter \(\theta _{5} \ne 0\) in player 1's equilibrium action \(\tau ^*\). Therefore, the average payoff of the classical player is greater than that of the quantum player.
Our results showed that the proposed scheme is a nontrivial generalization of the well-known EWL scheme, and it can easily be extended to any repeated \(2\times 2\) quantum game. Additionally, it should be possible to implement the scheme on existing quantum hardware such as IBM Q or Rigetti machines. Research based on the proposed scheme is also promising for the coming era of the quantum internet, as indicated by emerging quantum network simulators such as SimulaQron, which can be used to simulate two players playing over a quantum network.
References
Wilde, M.: Quantum Information Theory. Cambridge University Press, Cambridge (2013)
Maschler, M., Solan, E., Zamir, S.: Game Theory. Cambridge University Press, Cambridge (2013)
Meyer, D.A.: Quantum strategies. Phys. Rev. Lett. 82, 1052–1055 (1999). https://doi.org/10.1103/PhysRevLett.82.1052
Eisert, J., Wilkens, M., Lewenstein, M.: Quantum games and quantum strategies. Phys. Rev. Lett. 83, 3077–3080 (1999). https://doi.org/10.1103/PhysRevLett.83.3077
Khan, F.S., Solmeyer, N., Balu, R., Humble, T.S.: Quantum games: a review of the history, current state, and interpretation. Quantum Inf. Process. 17(11), 309 (2018). https://doi.org/10.1007/s11128-018-2082-8
Iqbal, A., Toor, A.H.: Quantum repeated games. Phys. Lett. A 300, 541 (2002). https://doi.org/10.1016/S0375-9601(02)00893-9
Yang, Z., Zhang, X.: Quantum repeated games with continuous-variable strategies. Phys. Lett. A 383, 2874 (2019). https://doi.org/10.1016/j.physleta.2019.06.030
Giannakis, K., Theocharopoulou, G., Papalitsas, C., Fanarioti, S., Andronikos, T.: Quantum conditional strategies and automata for Prisoners’ Dilemmata under the EWL scheme. Appl. Sci. 9, 2635 (2019). https://doi.org/10.3390/app9132635
Frąckiewicz, P.: Quantum repeated games revisited. J. Phys. A Math. Theor. 45, 085307 (2012). https://doi.org/10.1088/1751-8113/45/8/085307
Frąckiewicz, P.: Quantum information approach to normal representation of extensive games. Int. J. Quantum Inf. 10, 1250048 (2012). https://doi.org/10.1142/S0219749912500487
Chen, J., Jian, J., Hong, S.: Quantum repeated pricing game. Quantum Inf. Process. (2020). https://doi.org/10.1007/s11128-019-2538-5
Feinberg, Y.: Games with unawareness. Research Papers 2122, Stanford University, Graduate School of Business, August 2012. https://ideas.repec.org/p/ecl/stabus/2122.html
Feinberg, Y.: Subjective reasoning—games with unawareness. Technical Report Research Paper Series 1875, Stanford University, Graduate School of Business (2004)
Feinberg, Y.: Games with incomplete awareness. Technical Report Research Paper Series 1894, Stanford University, Graduate School of Business (2005)
Frąckiewicz, P.: Quantum games with unawareness. Entropy 20, 555 (2018). https://doi.org/10.3390/e20080555
Frąckiewicz, P.: Quantum Penny Flip game with unawareness. Quantum Inf. Process. 18(1), 15 (2018). https://doi.org/10.1007/s11128-018-2111-7
Nash, J.: Non-cooperative games. Ann. Math. 54, 286 (1951)
Peters, H.: Game Theory. A Multi-Leveled Approach. Springer, Berlin (2008)
Eisert, J., Wilkens, M.: Quantum games. J. Mod. Opt. 47, 2543–2556 (2000)
Funding
The research presented in this paper has been partially supported by the funds of Polish Ministry of Science and Higher Education assigned to AGH University of Science and Technology in Kraków, and Pomeranian University in Slupsk.
Appendix: Proof of Lemma 1
Proof
First, note that a unitary matrix \(U \in {{\mathsf {S}}}{{\mathsf {U}}}(2)\) can be written as a product of the rotation matrices \(U_{z}\) and \(U_{x}\) in the following way:
The probability that the quantum player wins the highest payoff is given by
where \(\varvec{\theta }=(\theta _2, \theta _3,\theta _5)\), \(\varvec{\gamma }=(\gamma _1,\gamma _2,\gamma _3,\gamma _4,\gamma _5)\) and \(\varvec{\delta }=(\delta _1,\delta _2,\delta _3,\delta _4,\delta _5)\). Functions in (46) are defined as
and
where
For each \(\theta _1, \theta _4 \in [0, 2\pi ]\) we have \(f_1(\theta _1, \theta _4) \in [0, 1]\), and \(f_2(\varvec{\theta },\varvec{\gamma },\varvec{\delta }) \in [0, 1]\) for all \(\varvec{\theta } \in [0, 2\pi ]^3\), \(\varvec{\gamma } \in [0, 2\pi ]^5\), \(\varvec{\delta } \in [0,2\pi ]^5\). Hence, if \(f_1(\theta _1, \theta _4)f_2(\varvec{\theta },\varvec{\gamma },\varvec{\delta })=1\), then
To obtain \(f_1(\theta _1, \theta _4)=1\), we need \(\cos ^2\left( \frac{\theta _1}{2}\right) =1\) and \(\cos ^2\left( \frac{\theta _4}{2}\right) =1\). This implies \(U_x(\theta )=I\) in representation (45), and therefore the strategy elements for qubits 1 and 4 have the form
To see which conditions need to be fulfilled to obtain \(f_2(\varvec{\theta },\varvec{\gamma },\varvec{\delta })=1\), first note that
Then, we can write
where \(s_{ggg}(\varvec{\delta },\varvec{\gamma }) \in [0, 1]\) for \(ggg \in \{ccc, ccs, csc, \ldots , sss\}\), \(\sum f_{ggg}(\varvec{\theta })=1\) and \(f_{ggg}(\varvec{\theta }) \in [0,1]\) for each \(\varvec{\theta }\).
If we introduce:
then we can rewrite (48) as:
To obtain \(f_2(\varvec{\theta },\varvec{\gamma },\varvec{\delta })=1\), we must have \(s_{ggg}=1\) for each \(ggg \in \{ccc, ccs, csc, \ldots , sss\}\) for which \(f_{ggg}>0\). The list of these conditions for \(i,j,k \in \{2,3,5\}\) is shown in detail in the following table:
Let us consider several cases.
1.
If \(f_{ggg}>0\) for each \(ggg \in \{ccc, ccs, csc, \ldots , sss\}\), then all conditions listed in (57) apply. From the first condition, \(\alpha _{sum}=\sum _{i=1}^{5}\alpha _{i}=\sum _{i=1}^{5}(\delta _{i}+\gamma _{i})=k\frac{\pi }{2}\), where k is odd. Moreover, from all the conditions, for every \(w \in \{2,3,5\}\) we have \(\gamma _w=n_w\frac{\pi }{2}\), where \(n_w \in \{0, 1, 2, 3, 4\}\). If \(n_w\) is odd, then \(U_z(\gamma _w)=I\) and
$$\begin{aligned} U_w=U_x(\theta _w)U_z(\delta _w)=U_x(\theta _w)U_z(\delta _w+\gamma _w)= U_x(\theta _w)U_z(\alpha _w). \end{aligned}$$(58)
If \(n_w\) is even:
$$\begin{aligned} U_w&= U_z(\gamma _w)U_x(\theta _w)U_z(\delta _w) =U_z\left( n\frac{\pi }{2}\right) U_x(\theta _w)U_z(\delta _w) \nonumber \\&=U_x(-\theta _w)U_z\left( \delta _w+n\frac{\pi }{2}\right) =U_x(-\theta _w)U_z(\delta _w+\gamma _w)=U_x(-\theta _w)U_z(\alpha _w). \end{aligned}$$(59)
2.
Suppose there exists at least one \(w \in \{2,3,5\}\) for which \(\cos ^2(\theta _w)=0\) or \(\sin ^2(\theta _w)=0\). If \(\cos ^2(\theta _{w})=0\), then
$$\begin{aligned} U_{w}&=U_{z}(\gamma _w)U_{x}(\pi )U_{z}(\delta _w)\nonumber \\&=U_{z}(\gamma _w)i\sigma _{x}U_{z}(\delta _w) = i\sigma _{x}U_z{(\delta _w-\gamma _w)}=i\sigma _{x}U_z{(\beta _w)}. \end{aligned}$$(60)
If \(\sin ^2(\theta _{w})=0\), then
$$\begin{aligned} U_{w}&=U_{z}(\gamma _w)U_{x}(0)U_{z}(\delta _w) \nonumber \\ {}&=U_{z}(\gamma _w)U_{z}(\delta _w)= U_z{(\delta _w+\gamma _w)}=U_z{(\alpha _w)}. \end{aligned}$$(61)
Next, we consider three subcases: exactly one, exactly two, and exactly three \(w \in \{2,3,5\}\) such that \(\cos ^2(\theta _w)=0\) or \(\sin ^2(\theta _w)=0\).
(a)
There exists exactly one \(w \in \{2,3,5\}\) for which \(\cos ^2(\theta _w)=0\) or \(\sin ^2(\theta _w)=0\). If \(\cos ^2(\theta _{w})=0\) then, according to (57), the following conditions for \(v,r \in \{2,3,5\}\) and \(v \ne w\), \(r \ne w\) apply:
$$\begin{aligned}&\sin ^2(\alpha _{sum}-2\gamma _w)=1,\nonumber \\&\sin ^2(\alpha _{sum}-2\gamma _w-2\gamma _v)=1, \nonumber \\&\sin ^2(\alpha _{sum}-2\gamma _w-2\gamma _r)=1, \nonumber \\&\sin ^2(\alpha _{sum}-2\gamma _w-2\gamma _v-2\gamma _r)=1. \end{aligned}$$(62)
Next, from the first condition of (62) we have \(\alpha _{sum}-2\gamma _w=\beta _w+\sum ^5_{p=1,p\ne w}\alpha _p=k\frac{\pi }{2} \), where k is even, and from all the conditions of (62): \(\gamma _v=n\frac{\pi }{2}, \gamma _r=n\frac{\pi }{2}, n \in \{0, 1, 2, 3, 4\}\). It follows that the actual strategy elements for qubits w, v and r are \(U_w=i\sigma _{x}U_z{(\beta _w)}\), \(U_v=U_x(\theta _v)U_z(\alpha _v)\) or \(U_v=U_x(-\theta _v)U_z(\alpha _v)\), and \(U_r=U_x(\theta _r)U_z(\alpha _r)\) or \(U_r=U_x(-\theta _r)U_z(\alpha _r)\). If \(\sin ^2(\theta _{w})=0\) then, according to (57), the following conditions for \(v,r \in \{2,3,5\}\) with \(v \ne w\), \(r \ne w\) apply:
$$\begin{aligned}&\sin ^2(\alpha _{sum})=1,\nonumber \\&\sin ^2(\alpha _{sum}-2\gamma _v)=1,\nonumber \\&\sin ^2(\alpha _{sum}-2\gamma _r)=1,\nonumber \\&\sin ^2(\alpha _{sum}-2\gamma _v-2\gamma _r)=1. \end{aligned}$$(63)
Next, from the first condition of (63) we have \(\alpha _{sum}=\sum ^5_{p=1}\alpha _p=k\frac{\pi }{2} \), where k is even, and from all the conditions of (63): \(\gamma _w=n\frac{\pi }{2}\), \(\gamma _v=n\frac{\pi }{2}\), \(\gamma _r=n\frac{\pi }{2}\), \(n \in \{0, 1, 2, 3, 4\}\). It follows that the actual strategy elements for qubits w, v and r are \(U_w=U_z{(\alpha _w)}\), \(U_v=U_x(\theta _v)U_z(\alpha _v)\) or \(U_v=U_x(-\theta _v)U_z(\alpha _v)\), and \(U_r=U_x(\theta _r)U_z(\alpha _r)\) or \(U_r=U_x(-\theta _r)U_z(\alpha _r)\).
(b)
There exists exactly one \(w \in \{2,3,5\}\) such that \(\cos ^2(\theta _w) \ne 0\) and \(\sin ^2(\theta _w) \ne 0\). The reasoning is similar to the previous subcase, so we present only the final results. There are three possibilities for \(v,r \in \{2,3,5\}\). If \(\cos ^2(\theta _{v})=0\) and \(\cos ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=U_x(\theta _w)U_z(\alpha _w)\) or \(U_w=U_x(-\theta _w)U_z(\alpha _w)\), \(U_v=i\sigma _x U_z{(\beta _v)}\), \(U_r=i\sigma _x U_z{(\beta _r)}\), with \(\alpha _{sum}-2\gamma _v-2\gamma _r=\sum ^5_{p=1; p \ne v; p \ne r}\alpha _p+\beta _v+\beta _r=k\frac{\pi }{2}\), where k is even. If \(\cos ^2(\theta _{v})=0\) and \(\sin ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=U_x(\theta _w)U_z(\alpha _w)\) or \(U_w=U_x(-\theta _w)U_z(\alpha _w)\), \(U_v=i\sigma _x U_z{(\beta _v)}\), \(U_r=U_z{(\alpha _r)}\), with \(\alpha _{sum}-2\gamma _v=\sum ^5_{p=1; p \ne v}\alpha _p+\beta _v=k\frac{\pi }{2}\), where k is even. If \(\sin ^2(\theta _{v})=0\) and \(\sin ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=U_x(\theta _w)U_z(\alpha _w)\) or \(U_w=U_x(-\theta _w)U_z(\alpha _w)\), \(U_v= U_z{(\alpha _v)}\), \(U_r=U_z{(\alpha _r)}\), with \(\alpha _{sum}=\sum ^5_{p=1}\alpha _p=k\frac{\pi }{2}\), where k is even.
(c)
For all \(w \in \{2,3,5\}\), \(\cos ^2(\theta _w) = 0\) or \(\sin ^2(\theta _w) = 0\). Again, the reasoning is similar to the previous subcases, so we present only the final results. There are four possibilities for \(w, v, r \in \{2,3,5\}\). If \(\cos ^2(\theta _{w})=0\), \(\cos ^2(\theta _{v})=0\), \(\cos ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=i\sigma _{x}U_z{(\beta _w)}\), \(U_v=i\sigma _{x}U_z{(\beta _v)}\), \(U_r=i\sigma _{x}U_z{(\beta _r)}\), with \(\alpha _{sum}-2\gamma _w-2\gamma _v-2\gamma _r=\alpha _1+\alpha _4+\beta _w+\beta _v+\beta _r=k\frac{\pi }{2} \), where k is even. If \(\sin ^2(\theta _{w})=0\), \(\cos ^2(\theta _{v})=0\), \(\cos ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=U_z{(\alpha _w)}\), \(U_v=i\sigma _{x}U_z{(\beta _v)}\), \(U_r=i\sigma _{x}U_z{(\beta _r)}\), with \( \alpha _{sum}-2\gamma _v-2\gamma _r=\sum ^5_{p=1; p\ne v; p \ne r}\alpha _p+\beta _v+\beta _r=k\frac{\pi }{2}\), where k is even. If \(\sin ^2(\theta _{w})=0\), \(\sin ^2(\theta _{v})=0\), \(\cos ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=U_z{(\alpha _w)}\), \(U_v=U_z{(\alpha _v)}\), \(U_r=i\sigma _{x}U_z{(\beta _r)}\), with \( \alpha _{sum}-2\gamma _r=\sum ^5_{p=1; p \ne r}\alpha _p+\beta _r=k\frac{\pi }{2}\), where k is even. If \(\sin ^2(\theta _{w})=0\), \(\sin ^2(\theta _{v})=0\), \(\sin ^2(\theta _{r})=0\), then the actual strategy elements are \(U_w=U_z{(\alpha _w)}\), \(U_v=U_z{(\alpha _v)}\), \(U_r=U_z{(\alpha _r)}\), with \( \alpha _{sum}=\sum ^5_{p=1}\alpha _p=k\frac{\pi }{2}\), where k is even. \(\square \)
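The rotation-matrix identities used throughout the proof can be sanity-checked numerically. The sketch below is only illustrative: it assumes one common convention for \(U_z\) and \(U_x\) (half-angle \(z\)-phases and \(U_x(\theta )=\cos (\theta /2)I+\text {i}\sin (\theta /2)\sigma _x\)), since the paper's exact definitions in (45) are not reproduced in this excerpt.

```python
import numpy as np

# Rotation matrices under an assumed convention (the paper's Eq. (45) may differ).
def Uz(a):
    return np.diag([np.exp(1j * a / 2), np.exp(-1j * a / 2)])

def Ux(t):
    return np.array([[np.cos(t / 2), 1j * np.sin(t / 2)],
                     [1j * np.sin(t / 2), np.cos(t / 2)]])

sx = np.array([[0, 1], [1, 0]], dtype=complex)
rng = np.random.default_rng(7)
theta, gamma, delta = rng.uniform(0, 2 * np.pi, 3)

# The ZXZ product Uz(gamma) Ux(theta) Uz(delta) is a special unitary matrix.
U = Uz(gamma) @ Ux(theta) @ Uz(delta)
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(np.linalg.det(U), 1)

# The mechanism behind Eq. (59): pulling a z-rotation by pi through Ux
# flips the sign of the x-rotation angle.
assert np.allclose(Uz(np.pi) @ Ux(theta), Ux(-theta) @ Uz(np.pi))

# Eq. (60): Uz(gamma) Ux(pi) Uz(delta) = i * sigma_x * Uz(delta - gamma),
# since Ux(pi) = i * sigma_x in this convention.
assert np.allclose(Uz(gamma) @ Ux(np.pi) @ Uz(delta),
                   1j * sx @ Uz(delta - gamma))
print("identities verified")
```

Under this convention, all three checks pass for random angles, matching the algebraic steps in the proof.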
Cite this article
Rycerz, K., Frąckiewicz, P. A quantum approach to twice-repeated \(2\times 2\) game. Quantum Inf Process 19, 269 (2020). https://doi.org/10.1007/s11128-020-02743-0