Northwest Corner and Banded Matrix Approximations to a Countable Markov Chain∗

Yiqiang Q. Zhao, W. John Braun, Wei Li†
Department of Mathematics and Statistics
University of Winnipeg
Winnipeg, MB, Canada R3B 2E9

September 2, 2004

Abstract: In this paper, we consider approximations to countable state Markov chains and provide a simple, direct proof for the convergence of certain probabilistic quantities when one uses a northwest corner or a banded matrix approximation to the original probability transition matrix.

1 Introduction

Consider a Markov chain with a countable state space and stochastic matrix P. In this paper, we provide simple proofs of convergence when a northwest corner or a banded matrix is used to approximate certain measures associated with P. Our treatment is unified in the sense that the probabilistic quantities considered are well defined for both ergodic and nonergodic Markov chains, and all results are valid for both approximation methods. Our proofs are simple in the sense that they depend on only one theorem from analysis: Weierstrass’ M-test. Our results include the convergence of stationary probability distributions when the Markov chain is ergodic.

This work was directly motivated by the need to compute stationary probabilities for infinite-state Markov chains, but its applications need not be limited to that setting. Computationally, when we solve for the stationary distribution of a countable-state Markov chain, the transition probability matrix has to be either i) truncated in some way to a finite matrix, or ii) banded in some way so that the computer implementation is finite. The second method is highly recommended for preserving the properties of structured probability transition matrices. Two questions naturally arise here: i) in which ways can we truncate or band the transition matrix? and ii) for a selected truncation or banded restriction, does the computed solution approximate the original probability distribution?
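The first of these approaches can be sketched numerically. The example below is a hypothetical illustration, not taken from the paper: it builds the transition matrix of a simple birth–death chain on the nonnegative integers, takes its n × n northwest corner, augments the diagonal so every row again sums to one, and solves for the stationary vector of the finite chain. For an ergodic chain (up-probability p less than down-probability q), the stationary law is geometric, and the truncated solution approaches it as n grows; the function name and the choice of diagonal augmentation are illustrative assumptions.

```python
import numpy as np

def northwest_corner_stationary(p, q, n):
    """Approximate the stationary distribution of a birth-death chain
    (move up with prob. p, down with prob. q, stay otherwise) from the
    n x n northwest corner of its transition matrix, with the diagonal
    augmented so each row sums to one."""
    P = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            P[i, i + 1] = p          # birth
        if i > 0:
            P[i, i - 1] = q          # death
        P[i, i] = 1.0 - P[i].sum()   # augmentation: restore row sum to 1
    # Stationary vector: solve pi (P - I) = 0 together with sum(pi) = 1,
    # as an overdetermined least-squares system.
    A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# For p < q the infinite chain is ergodic with geometric stationary law
# pi_k = (1 - rho) * rho**k, rho = p/q; the truncation converges to it.
pi = northwest_corner_stationary(p=0.3, q=0.5, n=50)
rho = 0.3 / 0.5
exact = (1 - rho) * rho ** np.arange(50)
print(np.abs(pi - exact).max())  # small for moderate n
```

Because a birth–death chain is reversible, the augmented finite chain has the same geometric shape renormalized over the retained states, which is why the agreement here is so close; for chains without this structure, the convergence questions raised above are exactly what the paper addresses.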
This work has been



