Yiqiang Q. Zhao, W. John Braun, Wei Li†
Department of Mathematics and Statistics, University of Winnipeg
Winnipeg, MB, Canada R3B 2E9
September 2, 2004
Abstract: In this paper, we consider approximations to countable-state Markov chains and provide a simple, direct proof of the convergence of certain probabilistic quantities when a northwest corner or a banded matrix approximation to the original transition probability matrix is used.
1 Introduction
Consider a Markov chain with a countable state space and stochastic matrix P. In this paper, we provide simple proofs of the convergence of certain measures associated with P when P is approximated by a northwest corner or by a banded matrix. Our treatment is unified in the sense that these probabilistic quantities are well defined for both ergodic and nonergodic Markov chains, and all results are valid for both approximation methods. Our proofs are simple in the sense that they depend on only one theorem from analysis: Weierstrass' M-test. Our results include the convergence of the stationary probability distribution when the Markov chain is ergodic.

This work was directly motivated by the need to compute stationary probabilities for infinite-state Markov chains, although its applications need not be limited to this setting. Computationally, when we solve for the stationary distribution of a countable-state Markov chain, the transition probability matrix must be either: i) truncated in some way to a finite matrix; or ii) banded in some way so that the computer implementation is finite. The second method is highly recommended for preserving the properties of structured transition probability matrices.

Two questions naturally arise: i) in which ways can we truncate or band the transition matrix? and ii) for a selected truncation or banded restriction, does the resulting solution approximate the original probability distribution?