The Math of Reliability: stochastic processes
We need "time", change is the norm not the exception
Take a collection of random variables and organize them in some way, and you get a stochastic process or a random field.
We need to explore more than just basic statistics and discuss how sequences of random variables change over time and space. (This is another introductory post, so I can build on these concepts later.)
What does organizing mean? It means assigning each variable an index from a set. If the index set consists of natural or real numbers (or can be mapped to them), we call the collection a stochastic process. If the index set is higher-dimensional, it's called a random field.
The state space is the shared space from which all these random variables draw their values.
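A toy sketch in Python (my own illustration, not standard notation) of the same state space organized by two different index sets:

```python
import random

# Index set = time steps (one-dimensional) -> a stochastic process.
# State space = real numbers, shared by every variable.
process = {t: random.gauss(0.0, 1.0) for t in range(5)}

# Index set = 2-D grid coordinates -> a random field.
field = {(x, y): random.gauss(0.0, 1.0) for x in range(3) for y in range(3)}

print(process)  # one sampled value per time index
print(field)    # one sampled value per grid point
```

Strictly speaking, building these dictionaries samples one realization of each object; the process itself is the whole indexed family of random variables.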
When we consider the product of the index set and the state space, several typical combinations come up:
discrete-time, discrete-state (e.g., a simple random walk)
discrete-time, continuous-state (e.g., a daily sample of a continuous metric)
continuous-time, discrete-state (e.g., a Poisson counting process)
continuous-time, continuous-state (e.g., a Wiener process)
(note: this list isn't complete! there are more options!)
The index set doesn't have to be time.
When we do talk about time, we should really be thinking about times, in the plural. There are multiple domains of time, not just one. A lot of confusion and mistakes happen when different concepts of time are mixed without careful consideration.
Random variables aren't limited to single numbers. They can be vectors, matrices, functions, and much more. All they require is a way to say which sets of outcomes can be measured; in mathematical terms, that collection of sets is called a σ-algebra. A measurable space is defined by a pair (X, Σ), where X is a set and Σ is a σ-algebra on it. For example, with X = {0, 1}, the power set Σ = {∅, {0}, {1}, {0, 1}} is a σ-algebra.
The outcome of a stochastic process is a function from the index set to the state space. It goes by several names: a realization, a sample function, or, when the index is time, a trajectory or sample path. The difference between two random variables (for example, two steps in the same sample path) is called an increment.
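Here is a minimal NumPy sketch (an illustration of the terminology, not any particular model): draw one sample path and look at its increments.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# One realization: ten independent standard-normal steps,
# accumulated into a trajectory / sample path.
steps = rng.standard_normal(10)
path = np.cumsum(steps)

# Increments: differences between consecutive points of the path.
increments = np.diff(path)

# Differencing the path recovers the steps (except the first one).
assert np.allclose(increments, steps[1:])
```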
In my previous article, I talked about Bernoulli random variables. A Bernoulli process is a sequence of independent and identically distributed (i.i.d.) random variables with state space {0, 1}, where the success probability p stays constant over time. A Bernoulli process operates in discrete time and has discrete states.
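A Bernoulli process is easy to simulate; a minimal sketch (the probability and length below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

p = 0.3        # constant success probability over time
n_steps = 20   # discrete time index 0..19

# Independent Bernoulli(p) draws from the state space {0, 1}.
bernoulli = rng.binomial(n=1, p=p, size=n_steps)
print(bernoulli)  # e.g. [0 1 0 0 1 ...]
```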
A random walk is a sum of random variables or vectors. Many ideas can be thought of as sums: you just need an associative way to combine things and a starting point (a zero or identity element); this structure is called a Monoid. If you've heard about Haskell, you might know about Monads, which apply similar concepts in programming. Monads can represent many things: from simple optional values and lists (like a series of changes over time), to changes in state, ongoing processes, and even parallel operations. Random walks are processes that operate in discrete time.
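To make the monoid idea concrete, here's a sketch: a walk is a fold of an associative operation (addition) over the steps, starting from the identity element (zero). Python's itertools.accumulate expresses exactly that fold.

```python
import random
from itertools import accumulate
from operator import add

# Steps of a walk: +1 or -1, chosen uniformly at random.
steps = [random.choice((-1, 1)) for _ in range(10)]

# The monoid: integers with an associative combine (add)
# and an identity element (0). The walk is the running fold.
walk = list(accumulate(steps, add, initial=0))
print(walk)  # e.g. [0, 1, 0, -1, 0, ...]
```

Swap add for any other associative operation with an identity and the same fold still works; that's the monoid abstraction doing its job.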
Random walks are typically described as the sum of i.i.d. random variables or vectors. However, we shouldn't restrict ourselves to fixed parameters. In reality, especially in incident response, transient behavior often arises from parameters or conditions that change over time, as in the non-stationary variant sketched below.
The simplest form, the simple random walk, is stationary, discrete-time, and discrete-state: it is based on Bernoulli trials mapped to increments of {-1, 1}, and the resulting state space is the set of all integers.
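A sketch of both flavors: the stationary simple random walk, and a non-stationary variant where the success probability drifts over time (the drifting p_t is a made-up illustration of the transient behavior mentioned above).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_steps = 1000

# Stationary simple random walk: Bernoulli(1/2) trials mapped to {-1, +1}.
trials = rng.binomial(n=1, p=0.5, size=n_steps)
walk = np.cumsum(2 * trials - 1)

# Non-stationary variant: the parameter changes over time
# (a hypothetical error rate that degrades during an incident).
p_t = np.linspace(0.5, 0.8, n_steps)
drifting_walk = np.cumsum(2 * rng.binomial(n=1, p=p_t) - 1)
```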
A Wiener process is the continuous counterpart of the simple random walk: it operates in continuous time and has stationary, independent increments that follow a normal distribution. This concept is widely used in areas like quantitative finance (e.g., the Black-Scholes model) and physics (e.g., Brownian motion).
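A standard way to simulate a Wiener process on a grid (a sketch; the horizon and step count are arbitrary): sum independent normal increments whose variance equals the step size.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

T, n = 1.0, 1000   # time horizon and number of grid points
dt = T / n

# Independent N(0, dt) increments, accumulated into a path with W(0) = 0.
increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n)
W = np.concatenate([[0.0], np.cumsum(increments)])
```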
A Poisson process counts the random number of events up to some time. If the rate of events remains constant over time, it is called a homogeneous Poisson process. When the rate changes over time, the process is called nonhomogeneous.
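For the homogeneous case, inter-arrival times are exponentially distributed, which gives a simple simulation sketch (the rate and window below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

rate = 4.0   # expected number of events per unit time
T = 10.0     # observation window

# Inter-arrival times of a homogeneous Poisson process are Exponential(rate).
# Oversample generously, then keep only the arrivals inside the window.
gaps = rng.exponential(scale=1.0 / rate, size=int(rate * T * 3))
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals <= T]

# N(t): number of events up to and including time t.
def N(t):
    return int(np.searchsorted(arrivals, t, side="right"))

print(len(arrivals), N(5.0))  # events in [0, T], events up to t = 5
```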