Undarmaa Bayarsaikhan and Lu Jin∗
…under certain assumptions. In addition, this research also shows a monotonicity property of the action-changing threshold.

1 Introduction
Many systems deteriorate over time due to age and operation, and are subject to stochastic breakdowns. Deterioration increases a system's operating costs, and breakdowns can sometimes cause serious damage to society. Condition monitoring of a deteriorating system, which can prevent breakdowns before they occur, has therefore received extensive attention, and a great deal of research has been devoted to optimal decision-making problems for stochastically deteriorating systems. These problems are usually formulated as a Markov decision process (MDP). Derman [3] provided a sufficient condition for the optimality of the control limit policy for a keep/replace problem.
In an MDP model, a system deteriorates according to a stationary transition law. In real situations, however, systems also deteriorate with age, so the transition probabilities should differ for different ages of the system. Such systems are called non-stationary deteriorating systems. Few researchers have considered this influence of age in their models. Abeygunawardane, Jirutitijaroen and Xu [1] formulated the decision-making problem for aging systems as an MDP and proposed a solution procedure to obtain an adaptive decision. Chhatwal, Alagoz and Burnside [2] investigated the structural properties of the control limit policy and the monotonicity of the control limit for a breast biopsy decision-making problem using an MDP; in [2], some additional conditions are necessary to derive the control limit policy.
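To make the idea of a non-stationary (age-dependent) deteriorating system concrete, the following is a minimal sketch, not the model analyzed in this paper: finite-horizon backward induction with a keep/replace action set, where the transition matrix depends on the system's age t. All costs, state counts, and the form of the degradation probability are illustrative assumptions.

```python
# Hypothetical sketch of an age-dependent (non-stationary) MDP with
# actions "keep" (0) and "replace" (1). Costs and probabilities are
# illustrative assumptions, not the paper's model.
import numpy as np

N = 4          # deterioration states 0 (good) .. 3 (failed)
T = 10         # planning horizon (ages 0 .. T-1)
c_keep = np.array([0.0, 2.0, 5.0, 20.0])  # state-dependent operating cost
c_replace = 8.0                            # replacement cost

def P_keep(t):
    """Age-dependent transition matrix: older systems degrade faster."""
    p = min(0.2 + 0.05 * t, 0.9)  # per-step chance of degrading one state
    P = np.zeros((N, N))
    for i in range(N - 1):
        P[i, i] = 1 - p
        P[i, i + 1] = p
    P[N - 1, N - 1] = 1.0  # failed state is absorbing under "keep"
    return P

V = np.zeros(N)                        # terminal value
policy = np.zeros((T, N), dtype=int)   # optimal action per (age, state)
for t in reversed(range(T)):
    Q_keep = c_keep + P_keep(t) @ V    # cost-to-go if we keep
    Q_replace = c_replace + V[0]       # replace: pay cost, restart in state 0
    policy[t] = (Q_replace < Q_keep).astype(int)
    V = np.minimum(Q_keep, Q_replace)

# Control-limit check: at each age, once "replace" becomes optimal it
# should remain optimal for all worse (higher-indexed) states.
is_control_limit = all(np.all(np.diff(row) >= 0) for row in policy)
print(policy)
print("control limit structure:", is_control_limit)
```

Under these assumptions the policy comes out monotone in the state index at every age, i.e., it has the control-limit structure, and the age-dependence of `P_keep` is what makes the threshold vary with t.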
This research investigates the structural properties of the optimal policy for a general class of non-stationary deteriorating systems analytically, using an MDP. These structural properties establish the existence of an
References
[1] Abeygunawardane, S. K., Jirutitijaroen, P. and Xu, H. (2013): "Adaptive Maintenance Policies for Aging Devices Using a Markov Decision Process", IEEE Transactions on Power Systems, vol.28, 3194-3203.
[2] Chhatwal, J., Alagoz, O. and Burnside, E. S. (2010): "Optimal Breast Biopsy Decision-Making Based on Mammographic Features and Demographic Factors", Operations Research, vol.58, 1577-1591.
[3] Derman, C. (1963): "On Optimal Replacement Rules when Changes of State are Markovian", in Bellman, R. (ed.), Mathematical Optimization Techniques, University of California Press.
[4] Marshall, A. W. and Olkin, I. (1979): Inequalities: Theory of Majorization and Its Applications, Academic Press.
[5] Ohnishi, M., Kawai, H. and Mine, H. (1986): "An Optimal Inspection and Replacement Policy under Incomplete State Information", European Journal of Operational Research, vol.27, 117-128.
[6] Puterman, M. L. (1994): Markov Decision Processes: Discrete Stochastic Dynamic Programming, Wiley.
[7] Smallwood, R. D. and Sondik, E. J. (1973): "The Optimal Control of Partially Observable Markov Processes over a Finite Horizon", Operations Research, vol.21, 1071-1088.