Convergence-Optimal Quantizer Design of Distributed Contraction-based Iterative Algorithms with Quantized Message Passing
Abstract
In this paper, we study the convergence behavior of distributed iterative algorithms with quantized message passing. We first introduce general iterative function evaluation algorithms for solving fixed point problems distributively. We then analyze the convergence of the distributed algorithms, e.g., the Jacobi scheme and the Gauss-Seidel scheme, under quantized message passing. Based on the closed-form convergence performance derived, we propose two quantizer designs, namely the time-invariant convergence-optimal quantizer (TICOQ) and the time-varying convergence-optimal quantizer (TVCOQ), to minimize the effect of the quantization error on the convergence. We also study the tradeoff between the convergence error and the message passing overhead for both the TICOQ and the TVCOQ. As an example, we apply the TICOQ and TVCOQ designs to the iterative waterfilling algorithm of the MIMO interference game.
I Introduction
Distributed algorithm design and analysis is an important topic with applications in many areas such as deterministic network utility maximization (NUM) for wireless networks and noncooperative games. For example, in [1, 2], the authors derived various distributed algorithms for a generic deterministic NUM problem using decomposition techniques, which can be classified into primal decomposition and dual decomposition methods. In [3], the authors investigated a distributed power control algorithm for an interference channel using noncooperative game theory and derived an iterative waterfilling algorithm to approach the Nash equilibrium (NE). The interference game problem was extended to an iterative waterfilling algorithm for a wideband interference game with time/frequency offset in [4] and an iterative precoder optimization algorithm for a MIMO interference game in [5, 6]. In these works, the authors established a unified convergence proof of the iterative waterfilling algorithms for the SISO frequency-selective interference game and the MIMO interference game using a contraction mapping approach. Under this framework, the iterative best response updates (such as the iterative power waterfilling as well as the iterative precoder design) can be regarded as iterative function evaluations w.r.t. a certain contraction mapping, and the convergence property can be easily established using fixed point theory [7, 8]. In all these examples, the iterative function evaluation algorithms involve explicit message passing between nodes in the wireless network during the iteration process. Furthermore, these existing results have assumed perfect message passing during the iterations.
In practice, explicit message passing during the iterations of distributed algorithms requires explicit signaling in wireless networks. As such, the message passing cannot be perfect and, in many cases, the messages to be passed have to be quantized. As a result, it is very important and interesting to study the impact of quantized message passing on the convergence of distributed algorithms. Existing studies on distributed algorithms under quantized message passing can be classified into two categories, namely distributed quantized average consensus algorithms [9, 10, 11, 12, 13, 14] and distributed quantized incremental subgradient algorithms [15, 16, 17, 18]. For the distributed quantized average consensus algorithms, existing works considered the algorithm convergence performance under quantized message passing for the uniform quantizer [9, 10, 12, 13, 14] and the logarithmic quantizer [11] with fixed quantization rate. In [12, 14], the authors also considered quantization interval optimization (for average consensus algorithms) based on the uniform fixed-rate quantization structure. Similarly, for the second category of quantized incremental subgradient algorithms, the authors in [15, 16, 17, 18] considered the convergence performance of fixed-rate uniform quantization. In this paper, we are interested in the convergence behavior of distributed iterative algorithms for solving general fixed point problems under quantized message passing. The above works on quantized message passing cannot be applied to our case for the following reasons. First of all, the algorithm dynamics of the existing works (linear dynamics for average consensus algorithms and step-size-based dynamics for incremental subgradient algorithms) are very different from the contraction-based iterative algorithms we are interested in (for solving fixed point problems).
Secondly, the above works have imposed the simplifying constraints of uniform, fixed-rate quantizer designs, and it is not known whether a more general quantizer design or an adaptive quantization rate could further improve the convergence performance of the iterative algorithms. There are a few technical challenges regarding the study of the convergence behavior of distributed contraction-based iterative function evaluations.

Convergence Analysis and Performance Tradeoff under Quantized Message Passing: In the literature, the convergence of distributed iterative function evaluation algorithms under quantized message passing has not been considered. A general model of quantized message passing and the way the quantization error affects the convergence have not been fully studied. Furthermore, it is also interesting to study the tradeoff between the convergence error and the message passing overhead.

Quantizer Design based on the Convergence Performance: Given the convergence analysis results, optimizing the quantizer to minimize the effect of the quantization error on the convergence is a difficult problem. In general, quantizers are designed w.r.t. a certain distortion measure such as the mean square error [19, 20]. However, it is not clear which distortion measure should be used to design the quantizer in order to optimize the convergence performance of the iterative algorithms considered here. Furthermore, the convergence performance depends heavily on the quantizer structure as well as the quantization rate, and hence a low-complexity solution to the nonlinear integer quantizer optimization problem is of great importance.
In this paper, we shall attempt to shed some light on these questions. We shall first introduce a general iterative function evaluation algorithm with distributed message passing for solving fixed point problems. We shall then analyze the convergence of the distributed algorithms, e.g., the Jacobi scheme and the Gauss-Seidel scheme, under quantized message passing. Based on the analysis, we shall propose two rate-adaptive quantizer designs, namely the time-invariant convergence-optimal quantizer (TICOQ) and the time-varying convergence-optimal quantizer (TVCOQ), to minimize the effect of the quantization error on the convergence. We shall also develop efficient algorithms to solve the nonlinear integer programming problem associated with the quantizer optimization. As an illustrative example, we shall apply the TICOQ and TVCOQ designs to the iterative waterfilling algorithm of the MIMO interference game [5, 6].
We first list the important notations of this paper in Table I.

TABLE I: List of important notations

dimension of the vector of state variables
element index of the vector
number of nodes/blocks
node/block index
total number of iterations
iteration index
component quantizer of a node (general)
system quantizer (general)
superscript denoting a scalar quantizer (SQ)
superscript denoting a vector quantizer (VQ)
component quantizer of a node (SQ)
system quantizer (SQ)
quantization index vector (SQ)
quantization rate vector (SQ)
component quantizer of a node (VQ)
system quantizer (VQ)
quantization index vector (VQ)
quantization rate vector (VQ)
set of nonnegative real numbers
set of nonnegative integers
II Iterative Function Evaluations
In this section, we shall introduce the basic iterative function evaluation algorithm to solve fixed point problems as well as its parallel and distributed implementations. We shall then review the convergence property under perfect message passing in the iteration process. We shall also illustrate the application of the framework using the MIMO interference game in [5, 6] as an example.
II-A A General Framework of Iterative Function Evaluation Algorithms
In algorithm design for wireless systems, many iterative algorithms can be described by the following dynamic update equation [7]:
$x(t+1) = T(x(t)),$   (1)
where $x(t) \in \mathcal{X}$ is the vector of state variables of the system at (discrete) time $t$ and $T$ is a mapping from a subset $\mathcal{X} \subseteq \mathbb{R}^n$ into itself. An iterative algorithm with dynamics described by (1) is called an iterative function evaluation algorithm, which is widely used to solve fixed point problems [8, 7]. Specifically, any vector $x^* \in \mathcal{X}$ satisfying $T(x^*) = x^*$ is called a fixed point of $T$. If the sequence $\{x(t)\}$ converges to some $x^* \in \mathcal{X}$ and $T$ is continuous at $x^*$, then $x^*$ is a fixed point of $T$ [7]. Therefore, the iteration in (1) can be viewed as an algorithm for finding such a fixed point of $T$. We shall first review a few properties below related to the convergence of (1). Specifically, $T$ is called a contraction mapping if it satisfies the property defined as follows:
Definition 1 (Contraction Mapping)
Let $T$ be a mapping from a subset $\mathcal{X} \subseteq \mathbb{R}^n$ into itself satisfying $\|T(x) - T(y)\| \le \alpha \|x - y\|$ for all $x, y \in \mathcal{X}$, where $\|\cdot\|$ is some norm and $\alpha \in [0, 1)$ is a constant scalar. Then the mapping $T$ is called a contraction mapping and the scalar $\alpha$ is called the modulus of $T$. \QED
Remark 1
(Comparison with Step-Size-Based Incremental Subgradient Algorithms) The incremental subgradient algorithms [21] can be described as $x(t+1) = x(t) - \gamma(t)\, g(x(t))$, where $\{\gamma(t)\}$ is the step-size sequence and $g(x(t))$ is a subgradient of the objective function at $x(t)$ in a minimization problem. Such step-size-based update algorithms and their associated convergence dynamics are quite different from the iterative function evaluation algorithm considered in (1). \QED
If $T$ is a contraction mapping, then the iterative update in (1) is called a contracting iteration. The convergence of (1) is summarized as follows (the proof can be found in [7]):
Theorem 1 (Convergence of Contracting Iterations)
Suppose that $T : \mathcal{X} \to \mathcal{X}$ is a contraction mapping with modulus $\alpha \in [0, 1)$ and that $\mathcal{X}$ is closed. We have:
(1) (Existence and Uniqueness of Fixed Points) The mapping $T$ has a unique fixed point $x^* \in \mathcal{X}$.
(2) (Geometric Convergence) For any initial vector $x(0) \in \mathcal{X}$, the sequence $\{x(t)\}$ generated by (1) converges to $x^*$ geometrically. In particular, $\|x(t) - x^*\| \le \alpha^t \|x(0) - x^*\|$ for all $t \ge 0$. \QED
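As a quick numerical illustration of Theorem 1, the sketch below uses a hypothetical affine map $T(x) = Ax + b$ (the matrix $A$ and offset $b$ are arbitrary illustrative values with $\|A\|_2 < 1$) and checks the geometric convergence bound at every iteration:

```python
import numpy as np

# Hypothetical affine contraction T(x) = A x + b; ||A||_2 < 1 is its modulus.
A = np.array([[0.4, 0.1],
              [0.2, 0.3]])
b = np.array([1.0, 2.0])
T = lambda x: A @ x + b

alpha = np.linalg.norm(A, 2)                 # modulus alpha ~ 0.51 < 1
x_star = np.linalg.solve(np.eye(2) - A, b)   # unique fixed point x* = T(x*)

x = np.zeros(2)
err0 = np.linalg.norm(x - x_star)
for t in range(1, 31):
    x = T(x)
    # geometric convergence: ||x(t) - x*|| <= alpha^t ||x(0) - x*||
    assert np.linalg.norm(x - x_star) <= alpha**t * err0 + 1e-9
print(np.round(x, 6))                        # close to x* = [2.25, 3.5]
```

After 30 iterations the residual is on the order of $\alpha^{30}\|x(0)-x^*\|$, i.e., far below the printed precision.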
In the above discussion, $\|\cdot\|$ can be any well-defined norm. There are many useful norms in the literature, but the commonly used ones can be classified into two groups, namely the weighted maximum norm and the $\ell_p$ norm ($p \ge 1$). They are elaborated below:

Weighted maximum norm:
$\|x\|_\infty^w = \max_{1 \le i \le n} \frac{|x_i|}{w_i}, \quad w_i > 0.$   (2)
Note that for $w = (1, \ldots, 1)$, this reduces to the maximum norm $\|x\|_\infty$, which can also be obtained from the $\ell_p$ norm by taking the limit $p \to \infty$.

$\ell_p$ norm ($p \ge 1$):
$\|x\|_p = \Big( \sum_{i=1}^{n} |x_i|^p \Big)^{1/p}.$   (3)
Note that for $p = 1$ we get the taxicab norm and for $p = 2$ we get the Euclidean norm.
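The two norm families above can be written out directly; the vector $x$ and weights $w$ below are arbitrary illustrative values:

```python
import numpy as np

# Weighted maximum norm (2) and l_p norm (3) for illustrative x and weights w.
x = np.array([3.0, -4.0, 1.0])
w = np.array([1.0, 2.0, 0.5])

w_max_norm = np.max(np.abs(x) / w)           # max(3/1, 4/2, 1/0.5) = 3
l1_norm = np.sum(np.abs(x))                  # p = 1: taxicab norm = 8
l2_norm = np.sqrt(np.sum(np.abs(x) ** 2))    # p = 2: Euclidean norm = sqrt(26)

assert w_max_norm == 3.0
assert l1_norm == 8.0
assert abs(l2_norm - np.sqrt(26.0)) < 1e-12
print(w_max_norm, l1_norm, round(float(l2_norm), 4))
```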
II-B Parallel and Distributed Implementation of Contracting Iterations
In practice, large scale computation often involves a number of processors or communication nodes jointly executing a computational task. As a result, parallel and distributed implementation is of prime importance. Moreover, information acquisition and control often reside at geographically distributed nodes, for which distributed computation is preferable. In this part, we shall discuss the efficient parallel and distributed computation of the contracting iteration in (1).
To perform efficient parallel and distributed implementations with $K$ processors, the set $\mathcal{X}$ is partitioned into a Cartesian product of lower dimensional sets, based on computational complexity considerations or on local information extraction and control requirements. Mathematically, it can be expressed as $\mathcal{X} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_K$, where $\mathcal{X}_k \subseteq \mathbb{R}^{n_k}$ and $\sum_{k=1}^{K} n_k = n$. Let $\mathcal{N}_k \subseteq \{1, \ldots, n\}$ be the index set of the $k$-th component set ($k = 1, \ldots, K$). Any vector $x \in \mathcal{X}$ is decomposed as $x = (x_1, \ldots, x_K)$ with $k$-th block component $x_k \in \mathcal{X}_k$, and the mapping $T$ is decomposed as $T = (T_1, \ldots, T_K)$ with $k$-th block component $T_k : \mathcal{X} \to \mathcal{X}_k$.
When the set $\mathcal{X}$ is a Cartesian product of lower dimensional sets $\{\mathcal{X}_k\}$, block-parallelization with $K$ processors can be implemented by assigning each processor to update a different block component. The most common updating strategies based on the block mappings $\{T_k\}$ are:

Jacobi Scheme: All block components are updated simultaneously, i.e.,
$x_k(t+1) = T_k(x(t)), \quad k = 1, \ldots, K.$   (4)

Gauss-Seidel Scheme: All block components are updated sequentially, one after the other, i.e.,
$x_k(t+1) = T_k\big(x_1(t+1), \ldots, x_{k-1}(t+1), x_k(t), \ldots, x_K(t)\big), \quad k = 1, \ldots, K,$   (5)
where
$S_k(x) = T_k\big(S_1(x), \ldots, S_{k-1}(x), x_k, \ldots, x_K\big), \quad k = 1, \ldots, K,$   (6)
is the $k$-th block component of the Gauss-Seidel mapping $S$, i.e., $S = (S_1, \ldots, S_K)$.
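The difference between the two update orders can be sketched with scalar blocks and a hypothetical affine block mapping (entries chosen so that the mapping is a contraction in the maximum norm); both schemes reach the same fixed point:

```python
import numpy as np

# Hypothetical affine block mapping with scalar blocks:
# T_k(x) = A[k] . x + b[k]; zero diagonal and small off-diagonal entries
# make T a contraction in the maximum norm (row sums of |A| are < 1).
A = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
b = np.array([1.0, 1.0, 1.0])
x_star = np.linalg.solve(np.eye(3) - A, b)   # common fixed point

def jacobi_step(x):
    # (4): all block components updated simultaneously from x(t)
    return A @ x + b

def gauss_seidel_step(x):
    # (5): blocks updated sequentially, each using the freshest values
    x = x.copy()
    for k in range(len(x)):
        x[k] = A[k] @ x + b[k]
    return x

xj = np.zeros(3)
xg = np.zeros(3)
for _ in range(60):
    xj = jacobi_step(xj)
    xg = gauss_seidel_step(xg)
assert np.allclose(xj, x_star) and np.allclose(xg, x_star)
print(np.round(x_star, 6))
```

The Jacobi step uses only values from the previous iteration, so the $K$ block updates can run in parallel; the Gauss-Seidel step reuses the freshest block values within the same sweep.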
Both the Jacobi scheme and the Gauss-Seidel scheme belong to synchronous update schemes^1. Specifically, the Jacobi scheme assumes the network is synchronized, while the Gauss-Seidel scheme assumes the network provides a (Hamiltonian) cyclic route [7].
^1 Due to the page limit, we shall illustrate the design for the synchronous updates in (4) and (5). However, the scheme can easily be extended to deal with totally asynchronous updates [7], which will be further illustrated later in footnote 5.
The general weighted block-maximum norm on $\mathbb{R}^n$, which is usually associated with the block partition of the vector $x = (x_1, \ldots, x_K)$, is defined as [7]:
$\|x\|_{\mathrm{block}}^w = \max_{1 \le k \le K} \frac{\|x_k\|_k}{w_k},$   (7)
where $w = (w_1, \ldots, w_K)$ with $w_k > 0$ is the weight vector and $\|\cdot\|_k$ is the norm for the $k$-th block component^2, which can be any given norm on $\mathbb{R}^{n_k}$, such as the weighted maximum norm and the $\ell_p$ norm ($p \ge 1$) defined in (2) and (3). The mapping $T$ is called a block-contraction with modulus $\alpha$ if it is a contraction under the above weighted block-maximum norm with modulus $\alpha$. The convergence of the Jacobi scheme and the Gauss-Seidel scheme based on block-contractions is summarized in the following theorem [7]:
^2 Since, in general, the norms of the block components may differ, the subscript $k$ is used in $\|\cdot\|_k$.
Theorem 2
(Convergence of Jacobi Scheme and Gauss-Seidel Scheme) If $T$ is a block-contraction, then the Gauss-Seidel mapping $S$ is also a block-contraction with the same modulus as $T$. Furthermore, if $\mathcal{X}$ is closed, then the sequences generated by both the Jacobi scheme in (4) and the Gauss-Seidel scheme in (5) based on the mapping $T$ converge to the unique fixed point $x^*$ of $T$ geometrically. \QED
II-C Application Example: MIMO Interference Game
The contracting iteration in (1) is very useful for solving fixed point problems, which are closely related to distributed resource optimization problems in wireless systems [3, 22, 5, 6]. For example, finding the Nash Equilibrium (NE) of a game is a fixed point problem. In this subsection, we shall illustrate the application of contracting iterations using the MIMO interference game [5, 6] as an example.
Consider a system with $K$ noncooperative transmitter-receiver pairs communicating simultaneously over a MIMO channel with $n_T$ transmit antennas and $n_R$ receive antennas [5, 6]. The received signal of the $k$-th receiver is given by:
$y_k = H_{kk} s_k + \sum_{j \ne k} H_{kj} s_j + n_k,$   (8)
where $s_k$ and $y_k$ are the vector transmitted by the $k$-th transmitter and the vector received by the $k$-th receiver, respectively, $H_{kk}$ is the direct channel of the $k$-th link, $H_{kj}$ is the cross channel from the $j$-th transmitter to the $k$-th receiver, and $n_k$ is a zero-mean circularly symmetric complex Gaussian noise vector with covariance matrix $R_{n_k}$. For each transmitter $k$, the total average transmit power is given by
$\mathbb{E}\big[\|s_k\|^2\big] = \mathrm{Tr}(Q_k) \le P_k,$   (9)
where $\mathrm{Tr}(\cdot)$ denotes the trace operator, $Q_k$ is the covariance matrix of the transmitted vector $s_k$ and $P_k$ is the maximum average transmit power. The maximum throughput of link $k$ for a given set of users' covariance matrices $Q_1, \ldots, Q_K$ is as follows:
$R_k(Q_k, Q_{-k}) = \log \det\!\big( I + H_{kk}^H R_{-k}^{-1}(Q_{-k}) H_{kk} Q_k \big),$   (10)
where $R_{-k}(Q_{-k}) = R_{n_k} + \sum_{j \ne k} H_{kj} Q_j H_{kj}^H$ is the covariance matrix of the noise plus the multiuser interference (MUI) observed by user $k$, and $Q_{-k} = (Q_j)_{j \ne k}$ collects the covariance matrices of all users except user $k$.
In the MIMO interference game [5, 6], each player $k$ competes against the others by choosing his transmit covariance matrix $Q_k$ (i.e., his strategy) that maximizes his own maximum throughput in (10), subject to the transmit power constraint in (9). The mathematical structure is as follows:
$\max_{Q_k \in \mathcal{S}_k} \; R_k(Q_k, Q_{-k}), \quad \forall k,$   (11)
where $\mathcal{S}_k = \{Q : Q \succeq 0, \ \mathrm{Tr}(Q) \le P_k\}$ is the admissible strategy set of user $k$, and $Q \succeq 0$ denotes that $Q$ is a positive semidefinite matrix. Given $Q_{-k}$, the solution to the noncooperative game (11) is the well-known waterfilling solution $Q_k^* = \mathrm{WF}_k(Q_{-k})$, where the waterfilling operator can be equivalently written as [5]
$\mathrm{WF}_k(Q_{-k}) = \Big[ -\big( H_{kk}^H R_{-k}^{-1}(Q_{-k}) H_{kk} \big)^{-1} \Big]_{\mathcal{S}_k},$   (12)
where $[\cdot]_{\mathcal{S}_k}$ denotes the matrix projection w.r.t. the Frobenius norm^3 onto the set $\mathcal{S}_k$. The NE of the MIMO Gaussian interference game is the fixed point solution of the waterfilling mapping $\mathrm{WF} = (\mathrm{WF}_1, \ldots, \mathrm{WF}_K)$, i.e., $Q^* = \mathrm{WF}(Q^*)$, where $Q = (Q_1, \ldots, Q_K)$.
^3 If we arrange the elements of a matrix as a vector, then the Frobenius norm of the matrix is equivalent to the $\ell_2$ norm of the vector.
In [5], it is shown that, under some mild conditions, the mapping $\mathrm{WF}$ is a block-contraction^4. Therefore, the NE can be achieved by the following contracting iteration:
$Q(t+1) = \mathrm{WF}(Q(t)),$   (13)
where $Q(t) = (Q_1(t), \ldots, Q_K(t))$. It can be easily seen that the waterfilling algorithm for the MIMO interference game in (13) is a special case of the contracting iterations in (1). In our general model, $x$ in (1) corresponds to $Q$ in (13); the block-contraction mapping $T$ in (1) corresponds to $\mathrm{WF}$ in (13); the $k$-th block component $x_k$ corresponds to the covariance matrix $Q_k$; and the $k$-th block component mapping $T_k$ corresponds to $\mathrm{WF}_k$.
^4 After rearranging the elements of the covariance matrices as vectors, the block-contraction w.r.t. the Frobenius-norm-based block norm is equivalent to a block-contraction w.r.t. the norm defined in (7) with each $\|\cdot\|_k$ being the $\ell_2$ norm.
For the parallel and distributed implementation, we can partition the variable space as $\mathcal{S}_1 \times \cdots \times \mathcal{S}_K$, where each component space corresponds to the covariance matrix of one link. In each iteration, the receiver of the $k$-th link locally measures the PSD of the interference received from the transmitters of the other links, i.e., $R_{-k}(Q_{-k})$, computes the covariance matrix $Q_k$ of the $k$-th link, and transmits the computational result to the associated transmitter. There are two distributed iterative waterfilling algorithms (IWFA) based on this waterfilling block-contraction, namely the simultaneous IWFA and the sequential IWFA, which are described as follows:

Simultaneous IWFA: An example of the Jacobi scheme, given by
$Q_k(t+1) = \mathrm{WF}_k\big(Q_{-k}(t)\big), \quad \forall k.$

Sequential IWFA: An example of the Gauss-Seidel scheme, given by
$Q_k(t+1) = \mathrm{WF}_k\big(Q_1(t+1), \ldots, Q_{k-1}(t+1), Q_{k+1}(t), \ldots, Q_K(t)\big), \quad \forall k.$
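As a toy illustration of the simultaneous IWFA, the sketch below runs Jacobi-style waterfilling best responses for two users over parallel scalar sub-channels (a diagonal-channel special case, not the full MIMO projection in (12)); all channel gains, the noise level, and the power budget are hypothetical values, with weak cross gains so that the best-response mapping behaves as a contraction:

```python
import numpy as np

# Two-user simultaneous IWFA over N parallel sub-channels (toy diagonal case).
# All gains/powers are hypothetical; cross gains carry a 0.1 factor (weak MUI)
# so the Jacobi-style best-response updates settle.
N = 4
rng = np.random.default_rng(0)
h_dd = rng.uniform(0.5, 1.5, (2, N))          # |direct channel|^2
h_cc = 0.1 * rng.uniform(0.5, 1.5, (2, N))    # |cross channel|^2 (weak MUI)
noise, P_max = 0.1, 1.0

def waterfill(inv_snr, P):
    """Classical waterfilling: p_i = max(0, mu - inv_snr_i), sum(p) = P."""
    lo, hi = 0.0, float(np.max(inv_snr)) + P
    for _ in range(60):                        # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_snr, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum(mu - inv_snr, 0.0)

p = np.zeros((2, N))
for _ in range(100):                           # simultaneous (Jacobi) updates
    p = np.stack([
        waterfill((noise + h_cc[k] * p[1 - k]) / h_dd[k], P_max)
        for k in (0, 1)
    ])
assert np.allclose(p.sum(axis=1), P_max, atol=1e-6)   # both power budgets met
print(np.round(p, 4))
```

The sequential IWFA would instead update user 0 and then user 1 within the same sweep, reusing the freshly computed power allocation.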
III Contracting Iterations under Quantized Message Passing
In this section, we shall study the impact of the quantized message passing on the contracting iterations. We shall first introduce a general quantized message passing model, followed by some general results regarding the convergence behavior under quantized message passing.
III-A General Model of Quantized Message Passing
We assume there are $K$ processing nodes geographically distributed in the wireless system. Fig. 1 illustrates an example of a $K$-pair MIMO interference game with quantized message passing. The system quantizer can be characterized by the tuple $\mathcal{Q} = (\mathcal{Q}_1, \ldots, \mathcal{Q}_K)$, where $\mathcal{Q}_k$ is the component quantizer (which can be a scalar or vector quantizer) of the $k$-th node. $\mathcal{Q}_k$ can be further denoted by the tuple $(\mathcal{E}_k, \mathcal{D}_k)$, where $\mathcal{E}_k$ is an encoder and $\mathcal{D}_k$ is a decoder with an associated index set and quantization rate. The reproduction codebook is the set of all possible quantized outputs of $\mathcal{Q}_k$ [19]. The quantization rule is completely specified by $(\mathcal{E}_k, \mathcal{D}_k)$; specifically, the quantized value of an input $z$ is given by $\mathcal{D}_k(\mathcal{E}_k(z))$. Each node $k$ updates the $k$-th block component of the $n$-dimensional state vector, i.e., computes $T_k(\hat{x}(t))$. The encoder of $\mathcal{Q}_k$ accepts the input $T_k(\hat{x}(t))$ and produces a quantization index, which the node broadcasts. In other words, the message passing involves only the quantization indices instead of the actual controls. Upon receiving a quantization index, the decoder of $\mathcal{Q}_k$ produces the quantized value. Therefore, the contracting iteration update dynamics of (1) with quantized message passing can be modified as:
$\hat{x}(t+1) = \mathcal{Q}\big(T(\hat{x}(t))\big) = T(\hat{x}(t)) + e(t),$   (14)
where $e(t)$ is the quantization error vector at time $t$. The quantizer design fundamentally affects the convergence property of the iterative update algorithm via the quantization error process $\{e(t)\}$. In general, the update of each block component is based on the latest overall vector $\hat{x}(t)$. Thus, the decoders of the whole system quantizer are needed at each node. On the other hand, the $k$-th node only requires the encoder of the corresponding component quantizer $\mathcal{Q}_k$.
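A minimal sketch of the quantized dynamics in (14), using a hypothetical affine contraction and a uniform mid-tread quantizer applied element-wise to the passed messages; the iterates settle near, but in general not exactly at, the fixed point:

```python
import numpy as np

# Quantized dynamics (14): x(t+1) = Q(T(x(t))) = T(x(t)) + e(t), where Q is a
# uniform mid-tread quantizer and T a hypothetical affine contraction with
# infinity-norm modulus 0.5, so the error floor is at most (step/2)/(1 - 0.5).
A = np.array([[0.2, 0.3],
              [0.3, 0.2]])
b = np.array([1.0, 2.0])
T = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(2) - A, b)

def Q(x, step):
    # element-wise uniform quantization of the passed message
    return step * np.round(x / step)

step = 0.01
x = np.zeros(2)
for _ in range(200):
    x = Q(T(x), step)

# the iterate settles near, but not exactly at, the fixed point
assert np.linalg.norm(x - x_star, np.inf) <= (step / 2) / (1 - 0.5) + 1e-9
print(np.round(x, 4), np.round(x_star, 4))
```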
Consider the application example in Section II-C under quantized message passing. The system quantizer can be applied to the MIMO interference game with $K$ noncooperative transmitter-receiver pairs as illustrated in Fig. 1. Specifically, for the $k$-th link, the encoder $\mathcal{E}_k$ is placed at the receiver and the decoder $\mathcal{D}_k$ is placed at the transmitter. The MIMO interference game under quantized message passing is illustrated in the following example:
Example 1
(MIMO Interference Game under Quantized Message Passing) In the $t$-th iteration, the receiver of the $k$-th link locally measures the PSD of the interference received from the transmitters of the other links and computes the waterfilling update $\mathrm{WF}_k(\hat{Q}_{-k}(t))$. The encoder $\mathcal{E}_k$ at the $k$-th receiver encodes this update and passes the quantization index to the $k$-th transmitter. The decoder $\mathcal{D}_k$ at the $k$-th transmitter produces the quantized value. The contracting iterative update dynamics of (13) for the MIMO interference game under quantized message passing is given by:
$\hat{Q}(t+1) = \mathcal{Q}\big(\mathrm{WF}(\hat{Q}(t))\big) = \mathrm{WF}(\hat{Q}(t)) + e(t).$   (15)
III-B Convergence Property under Quantized Message Passing
Under the quantized message passing, the convergence of the contracting iterations is summarized in the following lemma:
Lemma 1
(Convergence of Contracting Iterations under Quantized Message Passing) Suppose that $T$ is a contraction mapping with modulus $\alpha \in [0, 1)$ and fixed point $x^*$, and that $\mathcal{X}$ is closed. For any initial vector $\hat{x}(0) \in \mathcal{X}$, the sequence $\{\hat{x}(t)\}$ generated by (14) satisfies:
(a) $\|\hat{x}(t) - x^*\| \le \alpha^t \|\hat{x}(0) - x^*\| + \Delta(t)$, where $\Delta(t) = \sum_{j=0}^{t-1} \alpha^{t-1-j} \|e(j)\|$ is the accumulated error up to time $t$ induced by the quantized message passing.
(b) For each $t$, if there exists a scalar $D$ such that $\|e(j)\| \le D$ for all $j < t$, then $\|\hat{x}(t) - x^*\| \le \alpha^t \|\hat{x}(0) - x^*\| + D \frac{1 - \alpha^t}{1 - \alpha}$.
(c) If $\|e(t)\| \le D$ for all $t$, then $\limsup_{t \to \infty} \|\hat{x}(t) - x^*\| \le \frac{D}{1 - \alpha}$, i.e., the trajectory eventually stays within the limiting error bound $\frac{D}{1 - \alpha}$ of the fixed point. Furthermore, define the stationary set as the set of points that are invariant under the quantized update in (14). A sufficient condition and a necessary condition for the convergence of $\{\hat{x}(t)\}$ can be stated in terms of this stationary set and the quantization error.
Please refer to Appendix A for the proof.
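The flavour of Lemma 1 can be checked numerically: for a hypothetical affine contraction with an artificial bounded disturbance playing the role of the quantization error, the iterates respect the geometric-plus-accumulated-error bound at every step (the matrix, offset, and error bound below are illustrative values):

```python
import numpy as np

# Bounded disturbance e(t) (||e||_inf <= D) standing in for quantization error;
# A, b, D are hypothetical values. alpha is the induced infinity norm of A.
rng = np.random.default_rng(1)
A = np.array([[0.3, 0.2],
              [0.1, 0.4]])
b = np.array([1.0, 2.0])
alpha = np.abs(A).sum(axis=1).max()          # induced infinity norm = 0.5
x_star = np.linalg.solve(np.eye(2) - A, b)
D = 0.05

x = np.zeros(2)
err0 = np.linalg.norm(x - x_star, np.inf)
for t in range(1, 101):
    e = rng.uniform(-D, D, 2)                # ||e(t)||_inf <= D
    x = A @ x + b + e                        # disturbed update, as in (14)
    # bound: alpha^t ||x(0) - x*|| + D (1 - alpha^t)/(1 - alpha)
    bound = alpha**t * err0 + D * (1 - alpha**t) / (1 - alpha)
    assert np.linalg.norm(x - x_star, np.inf) <= bound + 1e-9
print(round(float(np.linalg.norm(x - x_star, np.inf)), 4), D / (1 - alpha))
```

The final residual stays below the limiting error bound $D/(1-\alpha) = 0.1$, as the lemma's part (c) suggests.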
Note that, in the above lemma, the norm $\|\cdot\|$ can be any general norm. In the following, we shall focus on characterizing the convergence behavior of the distributed Jacobi and Gauss-Seidel schemes under quantized message passing, with the underlying contraction mapping defined w.r.t. the weighted block-maximum norm [7, 5, 6]. Under quantized message passing, the algorithm dynamics of the two commonly used parallel and distributed schemes can be described as follows:

Jacobi Scheme under Quantized Message Passing:
$\hat{x}_k(t+1) = \mathcal{Q}_k\big(T_k(\hat{x}(t))\big) = T_k(\hat{x}(t)) + e_k(t), \quad \forall k.$   (16)

Gauss-Seidel Scheme under Quantized Message Passing:
$\hat{x}_k(t+1) = \mathcal{Q}_k\big(T_k(\hat{x}_1(t+1), \ldots, \hat{x}_{k-1}(t+1), \hat{x}_k(t), \ldots, \hat{x}_K(t))\big), \quad \forall k,$   (17)
where $e_k(t)$ denotes the quantization error of the $k$-th block at time $t$, and the corresponding quantized Gauss-Seidel mapping has $k$-th block component
$\hat{S}_k(x) = \mathcal{Q}_k\big(T_k(\hat{S}_1(x), \ldots, \hat{S}_{k-1}(x), x_k, \ldots, x_K)\big).$   (18)
Applying the results of Lemma 1, the convergence property of the distributed Jacobi and GaussSeidel schemes in (16) and (17) can be summarized in the following lemma.
Lemma 2
(Convergence of Jacobi Scheme and Gauss-Seidel Scheme under Quantized Message Passing) Suppose that $T$ is a block-contraction mapping w.r.t. the weighted block-maximum norm $\|\cdot\|_{\mathrm{block}}^w$ with modulus $\alpha \in [0, 1)$ and fixed point $x^*$, and that $\mathcal{X}$ is closed. For every initial vector $\hat{x}(0) \in \mathcal{X}$, the sequences generated by the Jacobi scheme and the Gauss-Seidel scheme under quantized message passing in (16) and (17) satisfy^5:
(a) $\|\hat{x}(t) - x^*\|_{\mathrm{block}}^w \le \alpha^t \|\hat{x}(0) - x^*\|_{\mathrm{block}}^w + \Delta(t)$, where the accumulated error $\Delta(t)$ takes one form for the Jacobi scheme and another for the Gauss-Seidel scheme.
(b) If the condition in (b) of Lemma 1 holds w.r.t. $\|\cdot\|_{\mathrm{block}}^w$, then the corresponding uniform error bound of Lemma 1 (b) holds, with scheme-specific error terms for the Jacobi scheme and for the Gauss-Seidel scheme.
(c) If the condition in (c) of Lemma 1 holds w.r.t. $\|\cdot\|_{\mathrm{block}}^w$, then the limiting error bound of Lemma 1 (c) holds with the corresponding scheme-specific constants for the Jacobi scheme and for the Gauss-Seidel scheme^6. Furthermore, define the stationary set analogously to Lemma 1. The sufficient condition and the necessary condition for convergence are the same as those in Lemma 1.
^5 Our analysis can be extended to the totally asynchronous scheme, in which case the results of Lemma 2 become analogous bounds in parts (a)-(c). Part (c) can be proved by the Asynchronous Convergence Theorem (Proposition 2.1 in Chapter 6 of [7]), similarly to the proof of Theorem 12 in [6]. The proof is omitted here due to the page limit. Since the error bound of the totally asynchronous scheme is similar to those of the Jacobi and Gauss-Seidel schemes, the quantizer designs developed later can also be applied to the asynchronous case.
^6 Compared with the Jacobi scheme, the Gauss-Seidel scheme and the totally asynchronous scheme have extra error terms.
Please refer to Appendix B for the proof.
Remark 2
As a result of Lemma 1 and Lemma 2, quantized message passing affects the convergence property of the contracting iterative algorithm in a fundamental way. In particular, from Lemma 2, the Jacobi and Gauss-Seidel distributed iterative algorithms may not converge precisely to the fixed point $x^*$ under quantized message passing, due to the accumulated quantization error term.
IV Time Invariant Convergence-Optimal Quantizer Design
In this section, we shall define a Time Invariant Quantizer (TIQ) and then formulate the Time Invariant Convergence-Optimal Quantizer (TICOQ) design problem. We shall consider the TICOQ design for the scalar quantizer (SQ) and vector quantizer (VQ) cases separately. Specifically, the component quantizer of the $k$-th node can be a group of scalar quantizers or a single vector quantizer. In the SQ case, each element of the block vector $T_k(\hat{x}(t))$ is quantized by a coordinate scalar quantizer separately, whereas in the VQ case, the input to the vector quantizer is the whole block vector $T_k(\hat{x}(t))$.
Definition 2 (Time Invariant Quantizer (TIQ))
A Time Invariant Quantizer (TIQ) is a quantizer whose encoder and decoder are time invariant mappings. \QED
The system scalar TIQ is denoted as $\mathcal{Q}^s$. Let $r = (r_1, \ldots, r_n) \in \mathbb{Z}_+^n$ be the quantization rate vector of the system scalar TIQ, where $r_i$ is the quantization rate (number of bits) of the $i$-th coordinate scalar quantizer ($1 \le i \le n$). The sum quantization rate of the system scalar TIQ is given by $\sum_{i=1}^{n} r_i$. Similarly, the system vector TIQ is denoted as $\mathcal{Q}^v$. Let $R = (R_1, \ldots, R_K) \in \mathbb{Z}_+^K$ be the quantization rate vector of the system vector TIQ, where $R_k$ is the quantization rate (number of bits) of the $k$-th component vector quantizer ($1 \le k \le K$). The sum quantization rate of the system vector TIQ is given by $\sum_{k=1}^{K} R_k$.
Using Lemma 2 (c), the limiting error bound of the algorithm trajectory is determined by the worst-case quantization error of the system TIQ (with the corresponding scheme-specific constant for the Jacobi scheme or for the Gauss-Seidel scheme). Therefore, the TICOQ design, which minimizes this limiting error bound under the sum quantization rate constraint, is equivalent to the following:
Problem 1 (TICOQ Design Problem)
minimize the worst-case quantization error of the system TIQ   (19)
s.t. $\sum_{i=1}^{n} r_i \le \bar{R}$ (SQ case),   (20)
or $\sum_{k=1}^{K} R_k \le \bar{R}$ (VQ case),   (21)
where the optimization variable is the system TIQ $\mathcal{Q}^s$ (SQ case) or $\mathcal{Q}^v$ (VQ case), and $\bar{R}$ is the total quantization rate budget.
Remark 3 (Interpretation of Problem 1)
Note that the optimization variable in Problem 1 is the system TIQ $\mathcal{Q}^s$ or $\mathcal{Q}^v$. The objective function obviously depends on the choice of the system TIQ. Furthermore, the constraint (20) or (21) on the quantization rate vector is also an effective constraint on the optimization domain of $\mathcal{Q}^s$ or $\mathcal{Q}^v$, respectively, because the rate determines the cardinality of the index set of the encoder and decoder. The Lagrangian function of Problem 1 is the objective augmented by $\lambda \sum_{i=1}^{n} r_i$ (SQ case) or $\lambda \sum_{k=1}^{K} R_k$ (VQ case), where $\lambda \ge 0$ is the Lagrange multiplier (LM) corresponding to the constraint (20) or (21). Hence, Problem 1 can also be interpreted as optimizing the tradeoff between the convergence performance and the communication overhead. The LM $\lambda$ can be regarded as the per-iteration cost sensitivity.
Remark 4 (Robust Consideration in Problem 1)
The optimization objective in (19) corresponds to a worst case error of the algorithm trajectory. In other words, the TICOQ design seeks the optimal TIQ that minimizes the worst case error. In fact, the algorithm trajectory is a random process induced by the uncertainty in the initial point $\hat{x}(0)$. In general, we do not have knowledge of the distribution of the trajectory due to the uncertainty in $\hat{x}(0)$. Hence, the solution to Problem 1 (optimizing the worst case error) offers some robustness w.r.t. the choice of $\hat{x}(0)$. \QED
In the following, we shall discuss the scalar and vector TICOQ design based on Problem 1 separately.
IV-A Time Invariant Convergence-Optimal Scalar Quantizer
We first have a lemma on the structure of the optimizing quantizer in the scalar TICOQ design in Problem 1.
Lemma 3 (Structure of the Scalar TICOQ)
For the scalar TICOQ design in Problem 1, there is no loss of optimality in restricting each coordinate scalar quantizer to be a uniform quantizer. \QED
Please refer to Appendix C for the proof.
While the optimization variable in Problem 1 (SQ case) is the system scalar TIQ $\mathcal{Q}^s$, using Lemma 3 we can restrict the optimization domain of each coordinate scalar quantizer ($1 \le i \le n$) to uniform quantizers without loss of optimality. Thus, the worst case error of the $i$-th coordinate is determined by the length of its quantization interval and its rate $r_i$, and the remaining optimization variable is reduced from the quantizer $\mathcal{Q}^s$ to the rate vector $r$. The scalar TICOQ design in Problem 1 w.r.t. $r$ is a Nonlinear Integer Programming (NLIP) problem, which is in general difficult to solve; verifying the optimality of a solution requires enumerating all feasible solutions in most cases. In the following, we shall derive the optimal solution of the scalar TICOQ design in Problem 1 w.r.t. the weighted block-maximum norm defined by (7), considering separately the cases in which each component norm is the weighted maximum norm and the $\ell_p$ norm.
Theorem 3 (Solution for Weighted Maximum Norm)
Given a weighted block-maximum norm defined by (7) (parameterized by the block weight vector) with each component norm being the weighted maximum norm defined by (2) (parameterized by the coordinate weight vector), let the bit-allocation quantities be defined in terms of the indicator function and a constant related to the LM of the constraint (20), chosen to satisfy the constraint with equality. The optimal integer solution of Problem 1 for the SQ case is given by^8:
(22)
The optimal value of Problem 1 under continuous relaxation is obtained accordingly.
^8 We arrange the real sequence in decreasing order, where the $m$-th entry represents the $m$-th largest term of the sequence.
Please refer to Appendix D for the Proof.
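The bit-allocation flavour of Theorem 3 can be sketched with a simple greedy rule (a hedged illustration, not the closed-form solution in (22)): if the worst-case error of coordinate $i$ scales like $c_i 2^{-r_i}$ for a uniform quantizer, then allocating each bit to the currently worst coordinate halves the largest term, which is optimal for the min-max objective under a sum rate constraint. The weights $c$ and budget $R$ below are hypothetical:

```python
import numpy as np

# Greedy min-max bit allocation: coordinate i has worst-case error
# c_i * 2**(-r_i); c (weighted interval lengths) and R (rate budget) are
# hypothetical. Each bit halves the largest term, so greedy is min-max optimal.
c = np.array([4.0, 1.0, 2.0, 0.5])
R = 10

r = np.zeros(len(c), dtype=int)
for _ in range(R):
    worst = int(np.argmax(c * 2.0 ** (-r)))  # coordinate with largest error
    r[worst] += 1                            # give it one more bit

err = float(np.max(c * 2.0 ** (-r)))
assert r.sum() == R
print(r, err)                                # -> [4 2 3 1] 0.25
```

Coordinates with larger weighted interval lengths receive more bits, mirroring the threshold structure of the closed-form allocation in Theorem 3.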