A Unified Approach to Dynamic Decision Problems with Asymmetric Information 
Part II: Strategic Agents
Abstract
We study a general class of dynamic games with asymmetric information where agents’ beliefs are strategy-dependent, i.e. signaling occurs. We show that the notion of sufficient information, introduced in the companion paper [2], can be used to effectively compress the agents’ information in a mutually consistent manner that is sufficient for decision-making purposes. We present instances of dynamic games with asymmetric information where we can characterize a time-invariant information state for each agent. Based on the notion of sufficient information, we define a class of equilibria for dynamic games called Sufficient Information Based Perfect Bayesian Equilibrium (SIB-PBE). Utilizing the notion of SIB-PBE, we provide a sequential decomposition of dynamic games with asymmetric information over time; this decomposition leads to a dynamic program that determines SIB-PBEs of dynamic games. Furthermore, we provide conditions under which we can guarantee the existence of SIB-PBEs.
H. Tavafoghi is with the Department of Mechanical Engineering at the University of California, Berkeley (email: ). Y. Ouyang is with Preferred Networks America, Inc. (email: ). D. Teneketzis is with the Department of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor (email: )
This work was supported in part by NSF grants CNS-1238962 and CCF-1111061, ARO MURI grant W911NF-13-1-0421, and ARO grant W911NF-17-1-0232.
I Introduction
We study a general class of stochastic dynamic games with asymmetric information. We consider a setting where the underlying system has Markovian dynamics controlled by the agents’ joint actions at every time. The instantaneous utility of each agent depends on the agents’ joint actions and the system state. At every time, each agent makes a private noisy observation that depends on the current system state and the agents’ past actions. Therefore, at every time agents have asymmetric and imperfect information about the history of the game. Furthermore, the information that an agent possesses about the history of the game at each time instant depends on the other agents’ past actions and strategies; this phenomenon is known as signaling among the agents. Moreover, at each time, each agent’s strategy depends on his information about the current system state and the other agents’ strategies. Therefore, the agents’ decisions and information are coupled and interdependent over time.
There are three main challenges in the study of dynamic games with asymmetric information. First, since the agents’ decisions and information are interdependent and coupled over time, we need to determine the agents’ strategies simultaneously for all times. Second, the agents’ strategy domains grow over time as they acquire more information. Third, in contrast to dynamic teams where agents coordinate their strategies, in dynamic games each agent’s strategy is his private information as he chooses it individually so as to maximize his utility. Therefore, in dynamic games each agent needs to form a belief about other agents’ strategies as well as about the game history.
In this paper, we propose a general approach for the study of dynamic games with asymmetric information that addresses the above-stated challenges. We build our approach on the notion of sufficient information, introduced in the companion paper [2], and define a class of sufficient information based assessments, where strategic agents compress their information in a mutually consistent manner that is sufficient for decision-making purposes. Accordingly, we propose the notion of Sufficient Information Based Perfect Bayesian Equilibrium (SIB-PBE) for dynamic games that characterizes a set of equilibrium outcomes. Using the notion of SIB-PBE, we provide a sequential decomposition of the game over time, and formulate a dynamic program that enables us to compute the set of SIB-PBEs via backward induction. We discover specific instances of dynamic games where we can determine a set of information states for the agents that have time-invariant domains. We determine conditions that guarantee the existence of SIB-PBEs. We discuss the relation between the class of SIB-PBEs and PBEs in dynamic games, and argue that the class of SIB-PBEs provides a set of equilibria that is simpler and more robust than general PBEs while remaining consistent with the agents’ rationality.
The notion of SIB-PBE we introduce in this paper provides a generalization/extension of Markov Perfect Equilibrium (MPE) to dynamic games with asymmetric information. The authors in [3] introduce the notion of Markov Perfect Equilibrium, which characterizes a subset of Subgame Perfect Equilibria (SPE) for dynamic games with symmetric information, and provide a sequential decomposition of the game over time. Moreover, our results, along with those in the companion paper [2], provide a unified approach to the study of dynamic decision problems with asymmetric information and strategic or non-strategic agents.
I-A Related Literature
Dynamic games with asymmetric information have been investigated extensively in the literature in the context of repeated games; see [4, 5, 6, 7] and the references therein. The key feature of these games is the absence of a dynamic system. Moreover, the works on repeated games study primarily their asymptotic properties when the horizon is infinite and agents are sufficiently patient (i.e. the discount factor is close to one). In repeated games, agents play a stage (static) game repeatedly over time. As a result, the decision-making problem that each agent faces is very simple. The main objective of this strand of literature is to explore situations where agents can form self-enforcing punishment/reward mechanisms so as to create additional equilibria that improve upon the payoffs agents can get by simply playing an equilibrium of the stage game over time. Recent works (see [8, 9, 10]) adopt approaches similar to those used in repeated games to study infinite-horizon dynamic games with asymmetric information when there is an underlying dynamic Markovian system. Under certain conditions on the system dynamics and information structure, the authors of [8, 9, 10] characterize a set of asymptotic equilibria when the agents are sufficiently patient.
The problem we study in this paper is different from the ones in [4, 5, 6, 7, 8, 9, 10] in two aspects. First, we consider a class of dynamic games where the underlying system has general Markovian dynamics and a general information structure, and we do not restrict attention to asymptotic behavior when the horizon is infinite and the agents are sufficiently patient. Second, we study situations where the decision problem that each agent faces, in the absence of strategic interactions with other agents, is a Partially Observed Markov Decision Process (POMDP), which is a complex problem to solve by itself. Therefore, reaching (and computing) a set of equilibrium strategies, which take into account the strategic interactions among the agents, is a very challenging task. As a result, it is not very plausible for the agents to seek an equilibrium that is generated by the formation of self-enforcing punishment/reward mechanisms similar to those used in infinitely repeated games (see Section VII for more discussion). We believe that our results provide new insight into the behavior of strategic agents in complex and dynamic environments, and complement the existing results in the repeated-games literature with its simple and (mostly) static environments.
The works in [11, 12, 13, 14] consider dynamic zero-sum games with asymmetric information. The authors of [12, 11] study zero-sum games with Markovian dynamics and lack of information on one side (i.e. one informed player and one uninformed player). The authors of [13, 14] study zero-sum games with Markovian dynamics and lack of information on both sides. The problem that we study in this paper is different from the ones in [11, 12, 13, 14] in three aspects. First, we study a general class of dynamic games that includes dynamic zero-sum games with asymmetric information as a special case. Second, we consider general Markovian dynamics for the underlying system, whereas the authors of [12, 11, 13, 14] consider specific Markovian dynamics where each agent perfectly observes a local state that evolves independently of the other local states conditioned on the agents’ observable actions. Third, we consider a general information structure that allows us to capture scenarios with unobservable actions and imperfect observations that are not captured in [12, 11, 13, 14].
The problems investigated in [15, 16, 17, 18, 19, 20] are the most closely related to our problem. The authors of [15, 16] study a class of dynamic games where the agents’ common information based (CIB) belief (defined in [15]) is independent of their strategies; that is, there is no signaling among them. This property allows them to apply ideas from the common information approach developed in [21, 22], and to define an equivalent dynamic game with symmetric information among fictitious agents. Consequently, they characterize a class of equilibria for dynamic games called Common Information Based Markov Perfect Equilibrium. Our results are different from those in [15, 16] in two aspects. First, we consider a general class of dynamic games where the agents’ CIB beliefs are strategy-dependent; thus, signaling is present. Second, the approach proposed in [15, 16] requires the agents to keep track of all of their private information over time. We propose an approach to effectively compress the agents’ private information, and consequently, reduce the number of variables on which the agents need to form CIB beliefs.
The authors of [17, 18, 19, 20] study a class of dynamic games with asymmetric information where signaling occurs. When the horizon is finite, the authors of [17, 18] introduce the notion of Common Information Based Perfect Bayesian Equilibrium (CIB-PBE), and provide a sequential decomposition of the game over time. The authors of [19, 20] extend the results of [17, 18] to finite-horizon Linear-Quadratic-Gaussian (LQG) dynamic games and infinite-horizon dynamic games, respectively. The class of dynamic games studied in [17, 18, 19, 20] satisfies the following assumptions: (i) the agents’ actions are observable; (ii) each agent has a perfect observation of his own local state/type; (iii) conditioned on the agents’ actions, the local states evolve independently.
We relax assumptions (i)-(iii) of [17, 18, 19, 20], and study a general class of dynamic games with asymmetric information, hidden actions, imperfect observations, and controlled and coupled dynamics. As a result, each agent needs to form a belief about the other agents’ past actions and private (imperfect) observations. Moreover, in contrast to [17, 18, 19, 20], an agent’s, say agent $i$’s, belief about the system state and the other agents’ private information is his own private information and is different from the CIB belief. In this paper, we extend the methodology developed in [17, 18] for dynamic games, and generalize the notion of CIB-PBE. Furthermore, we propose an approach to effectively compress the agents’ private information and obtain the results of [17, 18, 19, 20] as special cases.
I-B Contribution
We develop a general methodology for the study and analysis of dynamic games with asymmetric information, where the information structure is non-classical; that is, signaling occurs. We propose an approach to characterize a set of information states that effectively compress the agents’ private and common information in a mutually consistent manner. We characterize a subclass of Perfect Bayesian Equilibria, called SIB-PBE, and provide a sequential decomposition of these games over time. This decomposition provides a backward induction algorithm to determine the set of SIB-PBEs. We discover special instances of dynamic games where we can identify a set of information states with time-invariant domains. We provide conditions that guarantee the existence of SIB-PBEs in dynamic games with asymmetric information. We show that the methodology developed in this paper generalizes the existing results on dynamic games with non-classical information structure.
I-C Organization
The rest of the paper is organized as follows. In Section II, we describe our model. In Section III, we discuss the main issues that arise in the study of dynamic games with asymmetric information. We provide the formal definition of Perfect Bayesian Equilibrium in Section IV. In Section V, we describe the sufficient information approach to dynamic games with asymmetric information and introduce the notions of Sufficient Information Based (SIB) assessment and SIB-PBE. In Section VI, we present our main results and provide a sequential decomposition of dynamic games over time. We discuss our results in Section VII, and compare the notion of SIB-PBE with other equilibrium concepts. In Section VIII, we determine conditions that guarantee the existence of SIB-PBEs in dynamic games with asymmetric information. We conclude in Section IX. The proofs of all the theorems and lemmas appear in the Appendix.
Remark 1.
Section I-D on notation and Section V-A on the definition of sufficient private information are similar to the ones appearing in the companion paper [23]; moreover, the model presented in Section II with strategic agents is similar to that of the companion paper [23] with non-strategic agents. All these sections are included in this paper for ease of reading and to make the paper self-contained.
I-D Notation
Random variables are denoted by upper case letters and their realizations by the corresponding lower case letters. In general, subscripts are used as time indices while superscripts are used to index agents. For $t_1 \le t_2$, $X_{t_1:t_2}$ (resp. $g_{t_1:t_2}$) is the short-hand notation for the random variables $(X_{t_1}, \ldots, X_{t_2})$ (resp. functions $(g_{t_1}, \ldots, g_{t_2})$). When we consider a sequence of random variables (resp. functions) for all time, we drop the subscript and use $X$ to denote $X_{1:T}$ (resp. $g$ to denote $g_{1:T}$). For random variables $X_t^i$, $i \in \mathcal{N}$ (resp. functions $g_t^i$, $i \in \mathcal{N}$), we use $X_t := (X_t^i)_{i \in \mathcal{N}}$ (resp. $g_t := (g_t^i)_{i \in \mathcal{N}}$) to denote the vector of the set of random variables (resp. functions) at $t$, and $X_t^{-i} := (X_t^j)_{j \neq i}$ (resp. $g_t^{-i}$) to denote all entries except the $i$th.
II Model
1) System dynamics: There are $N$ strategic agents who live in a dynamic Markovian world over horizon $\mathcal{T} := \{1, 2, \ldots, T\}$, $T < \infty$. Let $X_t \in \mathcal{X}_t$ denote the state of the world at $t \in \mathcal{T}$. At time $t$, each agent, indexed by $i \in \mathcal{N} := \{1, 2, \ldots, N\}$, chooses an action $A_t^i \in \mathcal{A}_t^i$, where $\mathcal{A}_t^i$ denotes the set of actions available to him at $t$. Given the collective action profile $A_t := (A_t^1, \ldots, A_t^N)$, the state of the world evolves according to the following stochastic dynamic equation,
$X_{t+1} = f_t(X_t, A_t, W_t^x)$,   (1)
where $\{W_t^x, t \in \mathcal{T}\}$ is a sequence of independent random variables. The initial state $X_1$ is a random variable with a probability distribution that has full support.
At every time $t \in \mathcal{T}$, before taking an action, agent $i$ receives a noisy private observation $Y_t^i$ of the current state of the world $X_t$ and the action profile $A_{t-1}$, given by
$Y_t^i = O_t^i(X_t, A_{t-1}, W_t^i)$,   (2)
where $\{W_t^i, t \in \mathcal{T}\}$, $i \in \mathcal{N}$, are sequences of independent random variables. Moreover, at every $t \in \mathcal{T}$, all agents receive a common observation $Z_t$ of the current state of the world $X_t$ and the action profile $A_{t-1}$, given by
$Z_t = O_t^c(X_t, A_{t-1}, W_t^c)$,   (3)
where $\{W_t^c, t \in \mathcal{T}\}$ is a sequence of independent random variables. We note that the agents’ actions are commonly observable at $t$ if the common observation $Z_t$ includes $A_{t-1}$. We assume that the random variables $X_1$, $\{W_t^x\}$, $\{W_t^i\}$, $i \in \mathcal{N}$, and $\{W_t^c\}$ are mutually independent.
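To make the primitives concrete, the following toy simulation instantiates dynamics of the form (1)-(3) for a hypothetical two-agent game with binary states and actions. The functions f, private_obs, and common_obs, the noise probabilities, and the stand-in (uniformly random) strategies are all illustrative choices, not part of the model above.

```python
import random

random.seed(0)

# Hypothetical two-agent finite game illustrating (1)-(3): the state
# evolves under the joint action, each agent receives a noisy private
# observation, and all agents share a common observation.
STATES = [0, 1]
ACTIONS = [0, 1]          # same action set for both agents, for simplicity
T = 5

def f(x, a, w):
    # (1): next state driven by current state, joint action, and noise w
    return (x + a[0] + a[1] + w) % len(STATES)

def private_obs(i, x, a_prev, noisy):
    # (2): agent i's observation of the current state; noise flips it
    return x if not noisy else 1 - x

def common_obs(x, a_prev):
    # (3): here the common observation is the previous joint action,
    # i.e. an observable-actions information structure
    return a_prev

x = random.choice(STATES)            # initial state with full-support prior
a_prev = (0, 0)
for t in range(1, T + 1):
    y = [private_obs(i, x, a_prev, random.random() < 0.2)
         for i in (0, 1)]            # private observation noises independent
    z = common_obs(x, a_prev)
    a = tuple(random.choice(ACTIONS) for _ in (0, 1))  # stand-in strategies
    print(t, x, y, z, a)
    x = f(x, a, random.choice([0, 1]))
    a_prev = a
```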
2) Information structure: Let $H_t$ denote the aggregate information of all agents at time $t$. Assuming that agents have perfect recall, we have $H_t = \{Z_{1:t}, Y_{1:t}, A_{1:t-1}\}$; i.e., $H_t$ denotes the set of all agents’ past observations and actions. The set of all possible realizations of the agents’ aggregate information at $t$ is denoted by $\mathcal{H}_t$.
At time $t$, the aggregate information $H_t$ is not fully known to all agents. Let $P_t^i$ denote agent $i$’s private information about $H_t$, and let $C_t$ denote the agents’ common information about $H_t$, where $\mathcal{P}_t^i$ and $\mathcal{C}_t$ denote the sets of all possible realizations of agent $i$’s private information and the agents’ common information at time $t$, respectively. In Section II-A, we discuss several instances of information structures that can be captured as special cases of our model.
3) Strategies and Utilities: Let $H_t^i := \{P_t^i, C_t\}$ denote the information available to agent $i$ at $t$, where $\mathcal{H}_t^i$ denotes the set of all possible realizations of agent $i$’s information at $t$. Agent $i$’s behavioral strategy $g^i := (g_1^i, \ldots, g_T^i)$ is defined as a sequence of mappings $g_t^i : \mathcal{H}_t^i \to \Delta(\mathcal{A}_t^i)$, $t \in \mathcal{T}$, that determine agent $i$’s (randomized) action for every realization $h_t^i$ of his history at $t$.
Agent $i$’s instantaneous utility at $t$ depends on the system state $X_t$ and the collective action profile $A_t$, and is given by $u_t^i(X_t, A_t)$. Agent $i$ chooses his strategy $g^i$ so as to maximize his total (expected) utility over horizon $\mathcal{T}$, given by
$U^i := \sum_{t=1}^{T} u_t^i(X_t, A_t)$.   (4)
To avoid measuretheoretic technical difficulties and for clarity and convenience of exposition, we assume that all the random variables take values in finite sets.
Assumption 1.
(Finite game) The sets $\mathcal{X}_t$, $\mathcal{A}_t^i$, $\mathcal{Y}_t^i$, and $\mathcal{Z}_t$, for all $i \in \mathcal{N}$ and $t \in \mathcal{T}$, are finite.
Moreover, we assume that, given any sequence of actions up to time $t$, every possible realization of the system state at $t+1$ has a strictly positive probability of realization.
Assumption 2.
(Strictly positive transition matrix) For all $t \in \mathcal{T}$, $x_t \in \mathcal{X}_t$, $x_{t+1} \in \mathcal{X}_{t+1}$, and $a_t \in \mathcal{A}_t$, we have $\mathbb{P}(X_{t+1} = x_{t+1} \mid X_t = x_t, A_t = a_t) > 0$.
Furthermore, we assume that for any sequence of actions , all possible realizations of private observations have positive probability. That is, no agent can infer perfectly another agent’s action based only on his private observations.
Assumption 3.
(Imperfect private monitoring) For all $t \in \mathcal{T}$, $i \in \mathcal{N}$, $x_t \in \mathcal{X}_t$, $a_{t-1} \in \mathcal{A}_{t-1}$, and $y_t^i \in \mathcal{Y}_t^i$, we have $\mathbb{P}(Y_t^i = y_t^i \mid X_t = x_t, A_{t-1} = a_{t-1}) > 0$.
Remark 2.
We can relax Assumptions 2 and 3 under certain conditions and obtain results similar to those appearing in this paper; for instance, when the agents’ actions are observable we can relax Assumptions 2 and 3. Broadly, the crucial assumption that underlies our results is that every deviation that can be detected by some agent at any time $t$ must also be detectable by all agents at the same time based only on the common information $C_t$. Due to space limitations we do not include the discussion of Assumptions 2 and 3 and the extension of our results when we relax them; we refer the interested reader to [24].
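On a concrete specification, Assumptions 2 and 3 amount to full-support conditions on the induced transition and observation kernels. The sketch below, using purely hypothetical toy kernels (in the model these would be induced by $f_t$, $O_t^i$, and the noise distributions), shows how one might verify them numerically.

```python
# Illustrative check of Assumptions 2 and 3 for a toy specification.
# P_trans[a][x][x'] = P(X_{t+1} = x' | X_t = x, A_t = a)
P_trans = {
    (0, 0): [[0.7, 0.3], [0.4, 0.6]],
    (0, 1): [[0.5, 0.5], [0.2, 0.8]],
    (1, 0): [[0.6, 0.4], [0.3, 0.7]],
    (1, 1): [[0.1, 0.9], [0.8, 0.2]],
}
# P_obs[i][a][x][y] = P(Y_t^i = y | X_t = x, A_{t-1} = a)
P_obs = {
    0: {a: [[0.8, 0.2], [0.2, 0.8]] for a in P_trans},
    1: {a: [[0.9, 0.1], [0.3, 0.7]] for a in P_trans},
}

def strictly_positive_transitions(P):
    # Assumption 2: every state-to-state transition probability is > 0
    return all(p > 0 for a in P for row in P[a] for p in row)

def imperfect_private_monitoring(P):
    # Assumption 3: every private observation has positive probability,
    # so no agent can pinpoint another agent's action from Y_t^i alone
    return all(p > 0 for i in P for a in P[i] for row in P[i][a] for p in row)

print(strictly_positive_transitions(P_trans))   # True for this toy kernel
print(imperfect_private_monitoring(P_obs))      # True for this toy kernel
```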
II-A Special Cases
We discuss several instances of dynamic games with asymmetric information that are special cases of the general model described above.
1) Nested information structure: Consider a two-player game with one informed player and one uninformed player and general Markovian dynamics. At every time $t$, the informed player makes a private perfect observation of the state $X_t$. The uninformed player does not have any observation of the state $X_t$. Both the informed and uninformed players observe each other’s actions. Therefore, for all $t$, the informed player’s private information consists of his state observations, the uninformed player has no private information, and the common information consists of the agents’ past actions. The above nested information structure corresponds to the dynamic games considered in [11, 25, 26], where in [25, 26] the state is static.
2) Independent dynamics with observable actions: Consider an $N$-player game where the state consists of local states, one for each agent; each agent perfectly observes his own local state, the agents’ actions are commonly observable, and, conditioned on the agents’ actions, the local states evolve independently of one another. This information structure corresponds to that of the dynamic games studied in [17, 18, 19, 20] (cf. assumptions (i)-(iii) in Section I-A).
3) Perfectly controlled dynamics with hidden actions: Consider an $N$-player game where the state evolution is perfectly controlled, i.e. the next state is determined by the agents’ current actions, and the agents’ actions are their own private information (hidden actions).
III Appraisals and Assessments
In this section we provide an overview of the notions of appraisals, assessments, and an equilibrium solution concept for dynamic games with asymmetric information. We argue that an equilibrium solution concept must consist of a pair of a strategy profile and a belief system (to be defined below), and discuss the importance of off-equilibrium path beliefs in dynamic games. (We refer the interested reader to the papers by Battigalli [27], Myerson and Reny [28], and Watson [29] for more discussion.)
In a dynamic game with asymmetric information agents have private information about the evolution of the game, and they do not observe the complete history $H_t$ of the game. Therefore, at every time $t$, each agent, say agent $i$, needs to form (i) an appraisal about the current state of the system and the other agents’ information (appraisal about the history), and (ii) an appraisal about how other agents will play in the future, so as to evaluate the performance of his strategy choices (appraisal about the future). Given the other agents’ strategies $g^{-i}$, agent $i$ can utilize his own information $H_t^i$ at $t$, along with (i) the other agents’ past strategies $g_{1:t-1}^{-i}$ and (ii) the other agents’ future strategies $g_{t:T}^{-i}$, to form these appraisals about the history and future of the game, respectively.
In contrast to dynamic teams, where agents have a common objective and coordinate their strategies, in dynamic games each agent has his own objective and chooses his strategy so as to maximize that objective. Thus, unlike dynamic teams, in dynamic games agent $i$’s strategy $g^i$ is his private information and is not known to the other agents. Therefore, in dynamic games, each agent needs to form a prediction about the other agents’ strategies. We denote this prediction by $g^*$ to distinguish it from the strategy profile $g$ that is actually being played by the agents. Following Nash’s idea, we assume that agents share a common prediction $g^*$ about the actual strategy profile $g$. We would like to emphasize that the prediction $g^*$ does not necessarily coincide with the actual strategy profile $g$. As we point out later, one requirement of an equilibrium is that for every agent $i$, the prediction $g^{*i}$ must be an optimal strategy for him given the other agents’ predicted strategies $g^{*-i}$.
Since an agent’s actual strategy, say agent $i$’s strategy $g^i$, is his own private information, it is possible that $g^i$ is different from the prediction $g^{*i}$. Below we discuss the implications of an agent’s deviation from the predicted strategy profile $g^*$. For that matter, we first consider an agent who may want to deviate from $g^*$, and then we consider an agent who faces such a deviation and his response to it.
In dynamic games, when agent $i$ chooses his strategy $g^i$, he needs to know how other agents will play for any choice of $g^i$, which can be different from the prediction $g^{*i}$. Therefore, the prediction $g^*$ has to be defined at all possible information realizations (i.e. information sets) of every agent: those that have positive probability under $g^*$ as well as those that have zero probability under $g^*$. (This is not an issue in dynamic teams, since agents coordinate in advance their choice of strategy profile $g$, and no agent has an incentive to (privately) deviate from it. Hence, the agents’ strategy profile needs to be defined only on information sets of positive probability under $g$.) Using the prediction $g^{*-i}$, any agent, say agent $i$, can form an appraisal about the future of the game for any strategy choice $g^i$, and evaluate the performance of $g^i$.
By the same rationale, when agent $i$ chooses $g^i$ he needs to determine his strategy for all of his information sets, even those that have zero probability under $g^*$. This is because it is possible that some agent $j$ may deviate from $g^*$ and play a strategy $g^j$ that is different from the prediction $g^{*j}$. Agent $i$ must foresee these possible deviations by other agents and determine his response to them.
To determine his optimal strategy at any information set, agent $i$ needs first to form an appraisal about the history of the game at $t$ as well as an appraisal about the future of the game using the strategy prediction $g^*$. For an information set $h_t^i$ that is compatible with the prediction $g^{*-i}$ given his strategy $g^i$ at $t$ (i.e. $h_t^i$ has positive probability of being realized under $(g^i, g^{*-i})$), agent $i$ can use Bayes’ rule to derive the appraisal about the history of the game at $t$. However, for an information set $h_t^i$ that has zero probability under the prediction $g^{*-i}$ given $g^i$, agent $i$ cannot rely on the prediction $g^{*-i}$ any more and use Bayes’ rule to form his appraisal about the history of the game at $t$. The realization of history $h_t^i$ tells agent $i$ that his original prediction is not (completely) correct; thus, he needs to revise his original prediction and to form a revised appraisal about the history of the game at $t$. Therefore, agent $i$ must determine how to form/revise his appraisal about the history of the game for every realization $h_t^i$, $t \in \mathcal{T}$, that has zero probability under $(g^i, g^{*-i})$. We note that upon reaching an information set of measure zero, agent $i$ only revises his prediction about the other agents’ past strategies, but does not change his prediction about their future strategies. This is because at equilibrium, the prediction $g^{*-i}$ specifies a set of strategies for the other agents that are optimal in the continuation game that takes place after the realization of the information set of zero probability under $g^*$. (In dynamic teams, agents only need to determine their optimal strategy for information sets that have positive probability under $g$. As a result, a collective choice of strategy $g$ is optimal at every information set with positive probability if and only if it maximizes the (expected) utility of the team from $t = 1$ up to $T$. However, in dynamic games agents need to determine their strategies for all information sets, irrespective of whether they have zero or positive probability under $g^*$.)
Therefore, if a choice of strategy $g^i$ maximizes agent $i$’s (expected) utility from $t = 1$ to $T$, this does not imply that it is also optimal at all information sets that have zero probability under $(g^i, g^{*-i})$. Consequently, a choice of agent $i$’s strategy must be optimal for all continuation games that follow a realization of an information set, irrespective of whether that realization has zero or positive probability.
We describe below how one can formalize the above issues that we need to consider in the study of dynamic games with asymmetric information. Following the game-theory literature [30], the agents’ appraisals about the history and future of the game can be captured by an assessment that all agents commonly hold about the game. We define an assessment as a pair $(g^*, \mu)$ of mappings, where $g^* := (g_t^{*i}, i \in \mathcal{N}, t \in \mathcal{T})$ denotes a prediction about the agents’ strategies, and $\mu := (\mu_t^i, i \in \mathcal{N}, t \in \mathcal{T})$ denotes a belief system, where $\mu_t^i$ specifies, for every realization $h_t^i \in \mathcal{H}_t^i$, agent $i$’s belief about the system state and the other agents’ private information.
Using the definition of an assessment, we can extend the idea of Nash equilibrium to dynamic games with asymmetric information. An equilibrium of the dynamic game is defined as a common assessment $(g^*, \mu)$ among the agents that satisfies the following conditions under the assumption that the agents are rational. (i) Agent $i$ chooses his strategy so as to maximize his total expected utility (4) in all continuation games given the assessment $(g^*, \mu)$ about the game. Therefore, the prediction $g^{*i}$ that other agents hold about agent $i$’s strategy must be a maximizer of agent $i$’s total expected utility under the assessment $(g^*, \mu)$. (ii) For all $t$, agent $i$’s, $i \in \mathcal{N}$, belief $\mu_t^i$ at an information set $h_t^i$ that has positive probability of realization under $g^*$ must be equal to the probability distribution of the system state and the other agents’ private information conditioned on the realization $h_t^i$ (determined via Bayes’ rule), assuming that agents play according to $g^*$. When $h_t^i$ has zero probability under the assessment $(g^*, \mu)$, the belief $\mu_t^i$ cannot be determined via Bayes’ rule and must be revised. The revised belief must satisfy a certain set of “reasonable” conditions so as to be compatible with agent $i$’s rationality. Various sets of conditions have been proposed in the literature (see [30, 31]) to capture the notion of “reasonable” beliefs that are compatible with the agents’ rationality. Different sets of conditions for off-equilibrium beliefs result in the different equilibrium concepts that have been proposed for dynamic games with asymmetric information.
In this paper, we consider Perfect Bayesian Equilibrium (PBE) as the equilibrium solution concept. In the next section we provide the formal definition of PBE.
IV Perfect Bayesian Equilibrium
The formal definition of Perfect Bayesian Equilibrium (PBE) for dynamic games in extensive form can be found in [31]. In this paper we use a state-space representation for dynamic games instead of an extensive game form representation; therefore, we need to adapt the definition of PBE to this representation. A PBE is defined as an assessment $(g^*, \mu)$ that satisfies the sequential rationality and consistency conditions. The sequential rationality condition requires that for all $i \in \mathcal{N}$, the prediction $g^{*i}$ is optimal for agent $i$ given the assessment $(g^*, \mu)$. The consistency condition requires that for all $i \in \mathcal{N}$, $t \in \mathcal{T}$, and $h_t^i \in \mathcal{H}_t^i$, agent $i$’s belief $\mu_t^i$ must be compatible with the prediction $g^*$. We formally define these conditions below.
Let $\mathbb{P}^{g_{t:T}}_{\mu_t^i}$ denote the probability measure induced by the stochastic process that starts at time $t$ with an initial condition distributed according to agent $i$’s belief $\mu_t^i(h_t^i)$ and evolves under the continuation strategy profile $g_{t:T}$; let $\mathbb{E}^{g_{t:T}}_{\mu_t^i}$ denote the corresponding expectation.
Definition 1 (Sequential rationality).
We say that an assessment $(g^*, \mu)$ is sequentially rational if for all $i \in \mathcal{N}$, $t \in \mathcal{T}$, and $h_t^i \in \mathcal{H}_t^i$, the strategy prediction $g^{*i}$ is a solution to
$\sup_{g_{t:T}^i} \; \mathbb{E}^{(g_{t:T}^i, g_{t:T}^{*-i})}_{\mu_t^i}\!\left[ \sum_{\tau=t}^{T} u_\tau^i(X_\tau, A_\tau) \,\middle|\, h_t^i \right].$   (5)
The sequential rationality condition (5) requires that, given the assessment $(g^*, \mu)$, the prediction $g^{*i}$ of agent $i$’s strategy is an optimal strategy for him in all continuation games after a history realization $h_t^i$, irrespective of whether $h_t^i$ has positive or zero probability under $g^*$. That is, the common prediction $g^{*i}$ about agent $i$’s strategy must be an optimal strategy choice for him, since it is common knowledge that he is a rational agent. We note that the sequential rationality condition defined above is more restrictive than the optimality condition for Bayesian Nash Equilibrium (BNE), which only requires (5) to hold at $t = 1$. By the sequential rationality condition, we require the optimality of the prediction $g^*$ even along off-equilibrium paths, and thus we rule out the possibility of non-credible threats (see [30] for more discussion).
The sequential rationality condition results in a set of constraints that the strategy prediction $g^*$ must satisfy given a belief system $\mu$. As we argued in Section III, the belief system $\mu$ must also be compatible with the strategy prediction $g^*$. The following consistency condition captures such compatibility between the belief system $\mu$ and the prediction $g^*$.
Definition 2 (Consistency).
We say that an assessment $(g^*, \mu)$ is consistent if:
(i) for all $i \in \mathcal{N}$, $t \in \mathcal{T}$, $h_t^i \in \mathcal{H}_t^i$, and $(x_t, p_t^{-i})$ such that $h_t^i$ has positive probability under $g^*$, the belief $\mu_t^i$ must satisfy Bayes’ rule, i.e.
$\mu_t^i(h_t^i)(x_t, p_t^{-i}) = \mathbb{P}^{g^*}\!\left(x_t, p_t^{-i} \,\middle|\, h_t^i\right);$   (6)
(ii) for all $i \in \mathcal{N}$, $t \in \mathcal{T}$, $h_t^i \in \mathcal{H}_t^i$, and $(x_t, p_t^{-i})$ such that $h_t^i$ has zero probability under $g^*$, we have
$\mu_t^i(h_t^i)(x_t, p_t^{-i}) > 0$   (7)
only if there exists an open-loop strategy under which the realization $(x_t, p_t^{-i})$, together with $h_t^i$, has positive probability.
The above consistency condition places a restriction on the belief system $\mu$ so that it is compatible with the strategy prediction $g^*$. For information sets along equilibrium paths, i.e. realizations $h_t^i$ that have positive probability under $g^*$, the belief $\mu_t^i$ must be updated according to (6) via Bayes’ rule, since agent $i$’s observations are consistent with the prediction $g^*$. For information sets along off-equilibrium paths, i.e. realizations $h_t^i$ that have zero probability under $g^*$, agent $i$ needs to revise his belief about the strategies of the other agents, as the realization of $h_t^i$ indicates that some agent has deviated from the prediction $g^*$. As pointed out before, the revised belief must be “reasonable”. Definition 2 provides a set of such “reasonable” conditions, captured by (6) and (7), that we discuss further below.
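The dichotomy captured by (6) and (7) can be sketched as a belief-update rule: along on-path realizations the belief is obtained by conditioning, while at a zero-probability observation Bayes' rule is undefined and the belief must be revised to one supported only on feasible realizations. The kernels, the feasibility table, and the uniform revision rule below are illustrative assumptions, not the paper's construction; any revised belief with the right support would satisfy part (ii).

```python
def consistent_belief(prior, lik_pred, feasible, y):
    # prior[x]: belief over the hidden state before observing y
    # lik_pred[x][y]: observation likelihood induced by the prediction g*
    # feasible[x][y]: True iff (x, y) is reachable under SOME open-loop
    #                 strategy, i.e. the support restriction in part (ii)
    joint = {x: prior[x] * lik_pred[x].get(y, 0.0) for x in prior}
    total = sum(joint.values())
    if total > 0:
        # on-path realization: condition via Bayes' rule, cf. (6)
        return {x: p / total for x, p in joint.items()}
    # off-path realization: Bayes' rule is undefined; revise the belief.
    # Here we pick the uniform belief over the states feasible given y,
    # which respects the support condition (7).
    support = [x for x in prior if feasible[x][y]]
    return {x: (1.0 / len(support) if x in support else 0.0) for x in prior}

prior = {0: 0.5, 1: 0.5}
lik_pred = {0: {"a": 1.0}, 1: {"a": 1.0}}   # prediction: only "a" occurs
feasible = {0: {"a": True, "b": True}, 1: {"a": True, "b": False}}

on_path = consistent_belief(prior, lik_pred, feasible, "a")
off_path = consistent_belief(prior, lik_pred, feasible, "b")  # zero-prob obs
print(on_path)    # Bayes' rule keeps the prior: {0: 0.5, 1: 0.5}
print(off_path)   # revised belief supported only on the feasible state 0
```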
First, consider an information set along an off-equilibrium path such that
Next, consider an information set along an off-equilibrium path such that, at $t$, $h_t^i$ has zero probability of realization under the prediction $g^*$. In this case, the realization of $h_t^i$ indicates that agents $-i$ have deviated from the prediction $g^{*-i}$, and this deviation has not been detected by agent $i$ before. Therefore, agent $i$ needs to form a new belief about agents $-i$’s private information and the state $X_t$ by revising $\mu_t^i$. Part (ii) of the consistency condition concerns such belief revisions and requires that the support of agent $i$’s revised belief include only the states and private information that are feasible under the system and information dynamics (1) and (2), that is, those that are reachable under some open-loop control strategy.
Remark 3.
We can now provide the formal definition of PBE for the dynamic game of Section II.
Definition 3.
An assessment $(g^*, \mu)$ is called a PBE if it satisfies the sequential rationality and consistency conditions.
The definition of Perfect Bayesian Equilibrium provides a general formalization of outcomes that are rationalizable (i.e. consistent with the agents’ rationality) under some strategy profile and belief system. However, as we argue further in Section VII, there are computational and philosophical reasons that motivate us to define a subclass of PBEs that provides a simpler and more tractable approach to characterizing the outcomes of dynamic games with asymmetric information.
There are two major challenges in computing a PBE $(g^*, \mu)$. First, there is an inter-temporal coupling between the agents’ strategy prediction $g^*$ and the belief system $\mu$. According to the consistency requirement, the belief system $\mu$ has to satisfy a set of conditions given a strategy prediction $g^*$. On the other hand, by sequential rationality, a strategy prediction $g^*$ must satisfy a set of optimality conditions given the belief system $\mu$. Therefore, there is a circular dependency between the strategy prediction $g^*$ and the belief system $\mu$ over time. For instance, by sequential rationality, agent $i$’s strategy at time $t$ depends on the agents’ future strategies, and on the agents’ past strategies indirectly through the consistency condition for $\mu_t^i$. As a result, one needs to determine the strategy prediction $g^*$ and the belief system $\mu$ simultaneously for the whole time horizon so as to satisfy the sequential rationality and consistency conditions, and thus cannot sequentially decompose the computation of PBEs over time. Second, the agents’ information $H_t^i$, $i \in \mathcal{N}$, has a domain that grows over time. Hence, the agents’ strategies have growing domains over time, and this feature further complicates the computation of PBEs of dynamic games with asymmetric information.
The definition of PBE requires an agent to keep track of all observations he acquires over time and to form beliefs about the private information of all other agents. As we show next, agents do not need to keep track of all of their past observations to reach an equilibrium. They can take into account fewer variables for decision making and ignore the part of their information that is not relevant to the continuation game. As we argue in Section VII, the class of simpler strategies proposed in this paper characterizes a more plausible prediction of the outcome of the interaction among agents when the underlying system is highly dynamic and there is considerable information asymmetry among them.
V The Sufficient Information Approach
We characterize a class of PBEs that utilize strategy choices that are simpler than general behavioral strategies, since they require agents to keep track of only a compressed version of their information over time. We proceed as follows. In Section VA we provide sufficient conditions characterizing the subset of private information an agent needs to keep track of over time for decision-making purposes. In Section VB, we introduce the sufficient information based belief as a compressed version of the agents’ common information that is sufficient for decision-making purposes. Based on these compressions of the agents’ private and common information, we introduce the notions of sufficient information based assessments and Sufficient Information Based Perfect Bayesian Equilibrium (SIBPBE) in Sections VD and VE, respectively.
VA Sufficient Private Information
The key ideas for compressing an agent’s private information appear in Definition 4 below; we refer the interested reader to the companion paper [2] for a discussion of the rationale behind Definition 4.
Definition 4 (Sufficient private information).
We say , , , is sufficient private information for the agents if,

it can be updated recursively as
(8) 
for any strategy profile and for all realizations of positive probability,
(9) where

for every strategy profile of the form and , ;
(10) for all realizations of positive probability where

given an arbitrary strategy profile of the form
(11) for all realizations of positive probability where
We note that the conditions of Definition 4 are written in terms of the strategy prediction profile for dynamic games. This is because, as we discussed before, the agents’ actual strategy profile is their private information. Therefore, each agent , , evaluates the sufficiency of a compression of his private information using the strategy prediction he holds about the other agents.
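The recursive-update requirement of Definition 4 can be illustrated with a familiar example. The sketch below (our toy illustration under an assumed two-state hidden-Markov model, not the paper's general dynamics; the matrices are hypothetical) shows a belief over a local state being updated from only its previous value and the newest observation, never the full history:

```python
# Toy illustration (assumed two-state hidden-Markov model): a belief over
# the local state is one statistic that satisfies a recursive update of the
# form  b_{t+1} = f(b_t, new observation),  as Definition 4 requires of
# sufficient private information.

T = [[0.9, 0.1], [0.2, 0.8]]   # T[x][xn]: P(next state xn | state x)  (assumed)
O = [[0.7, 0.2], [0.3, 0.8]]   # O[y][xn]: P(observation y | next state xn)  (assumed)

def update(belief, obs):
    # prediction step: propagate the belief through the state dynamics
    predicted = [sum(belief[x] * T[x][xn] for x in range(2)) for xn in range(2)]
    # correction step: reweight by the likelihood of the new observation
    unnorm = [O[obs][xn] * predicted[xn] for xn in range(2)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

b = [0.5, 0.5]
for y in [0, 1, 1]:
    b = update(b, y)   # only (b, y) are needed; past observations are discarded
```

The point of the example is only the update's signature: the compressed statistic at the next time is a function of the current statistic and the new data, so its domain does not grow with the horizon.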
VB Sufficient Common Information
Based on the characterization of sufficient private information, we present a statistic (compressed version) of the common information that agents need to keep track of over time for decision-making purposes.
Consider the sufficient private information , . Define to be the set of all possible realizations of , and . Let
VC Special Cases
We consider the special classes described in Section II and identify the sufficient information and SIB belief for each of them.
1) Nested information structure: The uninformed agent (agent ) has no private information, . Thus, . For the informed agent (agent ) consider . Consequently, we can set . Note that , thus, the uninformed agent’s belief about is the same as SIB belief .
2) Independent dynamics with observable actions: Consider . Note that , have independent dynamics given the collective action that is commonly observable by all agents. Therefore, agent ’s belief about , , is the same as SIB belief, .
3) Perfectly controlled dynamics with hidden actions: Since agent , , perfectly controls over time , we set and .
VD Sufficient Information based Assessment
As we discussed in Section IV, to form a prediction about the outcome of the game we need to determine an assessment that is sequentially rational and consistent. We show below that, using the sufficient private information and the sufficient common information (the SIB belief) , we can form a sufficient information based assessment about the game. We prove that such a sufficient information based assessment is rich enough to capture a subset of PBEs.
Consider a class of strategies that utilize the information given by for agent at time . We call the mapping
In Section IV, we defined a consistency condition between strategy prediction and a belief system . Below, we provide an analogous consistency condition between a SIB strategy prediction and a SIB belief system .
Definition 5.
A pair of a SIB strategy prediction profile and belief system satisfies the consistency condition if

for all , ,^5
^5 For , is given by the conditional probability at as .
(12) 
for all , ,
only if there exists an openloop strategy such that
(13) 
for all , ,
if there exists an openloop strategy
(14)
Parts (i) and (ii) of Definition 5 follow from rationales similar to those of their analogues in Definition 2, and require a SIB belief system to satisfy a set of constraints with respect to a SIB strategy profile that are similar to those for an assessment . Definition 5 imposes an additional condition, described by part (iii). By (14), we require a SIB belief system consistent with the SIB strategy profile to assign a positive probability to every realization
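The two regimes of the consistency condition can be sketched in code. The following is our illustration only (the finite state set, the uniform revision rule, and all names are hypothetical; the condition itself constrains only the support of the revised belief, not its exact values): on a positive-probability realization the belief follows Bayes' rule given the predicted strategy, while on a zero-probability realization it is revised with support restricted to feasible states.

```python
# Illustrative sketch (all structures hypothetical): belief updating in the
# spirit of the consistency condition.  On a positive-probability
# ("on-equilibrium") observation, apply Bayes' rule; on a zero-probability
# ("off-equilibrium") observation, revise the belief so that its support
# contains only states reachable under some open-loop strategy.

def bayes_or_revise(belief, likelihood, reachable):
    """belief: dict state -> prob; likelihood: dict state -> P(obs | state),
    induced by the predicted strategy; reachable: set of feasible states."""
    unnorm = {x: belief[x] * likelihood.get(x, 0.0) for x in belief}
    z = sum(unnorm.values())
    if z > 0:
        # on-equilibrium path: standard Bayesian update
        return {x: p / z for x, p in unnorm.items()}
    # off-equilibrium path: revise; here we pick the uniform distribution on
    # the reachable states (one arbitrary choice with the required support)
    feas = {x for x in belief if x in reachable}
    return {x: (1.0 / len(feas) if x in feas else 0.0) for x in belief}
```

For example, if the predicted strategy assigns the observed signal zero likelihood in every state, the update above falls back to a belief supported only on the states the system dynamics could actually have reached.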