Lecture Note 9: Introduction to Stochastic Processes
This Version: October 5, 2013
These notes are based on S. Ross, Introduction to Probability Models, Academic Press, and J.
Hamilton, Time Series Analysis, Princeton University Press.
Definition 1 A stochastic process {X(t), t ∈ T} is a collection of random variables: for each t ∈ T,
X(t) is a random variable.
The set T is called an index set, and t ∈ T is often interpreted as time. So X(t) would be the
(random) state of a process at time t. Sometimes for simplicity we write X_t = X(t).
Note that each X_t is a random variable, so really we should be writing X_t(ω). Each X_t is a function from the sample space Ω to a subset of R. In applications we are often interested in modelling the evolution of some variable over time, so it is reasonable that the range of X_t is the same across time. In that case we call the range of X_t the state space.
If the index set T is a countable set, we call the stochastic process a discrete-time process. If the index set is an interval of the real line, we call the process a continuous-time process.
Although t is often used to indicate time, it can be used in other ways as well. For example, when modelling spatial phenomena (e.g. geographical concentrations of pollution), we might use a two-dimensional index t corresponding to longitude and latitude.
Example 1: Consider flipping a coin repeatedly. The sample space Ω would contain every possible infinite sequence of Hs and Ts. We could define the index set as T = {1, 2, 3, . . .} and X_1 = 1 if the first toss is heads and 0 otherwise, X_2 = 1 if the second toss is heads and 0 otherwise, and so on. This defines a stochastic process {X_t, t ∈ T}, where X_s is independent of X_r for s ≠ r.
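As a quick sketch, we can simulate a finite stretch of this process. The fair-coin probability of 1/2, the sample length, and the function name `simulate_X` are assumptions made for illustration, not part of the formal definition:

```python
import random

random.seed(42)  # fix the seed so the realization is reproducible

def simulate_X(n, p_heads=0.5):
    """Simulate X_1, ..., X_n: X_t = 1 if toss t is heads, 0 otherwise.

    Each toss is drawn independently, matching the independence of
    X_s and X_r for s != r in Example 1.
    """
    return [1 if random.random() < p_heads else 0 for _ in range(n)]

X = simulate_X(10)
print(X)  # one realization: a list of ten 0s and 1s
```

Each call to `simulate_X` produces one realization of the first n coordinates of the process, i.e. the values X_1(ω), ..., X_n(ω) for one sampled ω.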
Next, we could define a new stochastic process {Y_t, t ∈ T}, where Y_t is the total number of heads up to that point in time: Y_t = Σ_{i=1}^{t} X_i.
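Continuing the sketch above, the partial-sum process Y_t can be computed from a realization of {X_t} with a running sum; the particular realization below is a hypothetical example chosen for illustration:

```python
from itertools import accumulate

# One hypothetical realization of X_1, ..., X_6 (1 = heads, 0 = tails).
X = [1, 0, 1, 1, 0, 1]

# Y_t = X_1 + ... + X_t: the running count of heads up to time t.
Y = list(accumulate(X))
print(Y)  # → [1, 1, 2, 3, 3, 4]
```

Note that Y is nondecreasing by construction: knowing Y_t constrains Y_{t+1} to be Y_t or Y_t + 1, in contrast to the independent coordinates X_t.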
Now there is a very distinct dependence