In practice, basic assumptions about exponential distributions of times between failures and repair times sometimes do not hold. In this case, more complicated mathematical techniques such as semi-Markov processes and embedded Markov chains may be applied. Corresponding issues are also considered in this chapter. A stochastic or random process is, essentially, a set of random variables where the variables are ordered in a given sequence.
For example, the daily maximum temperatures at a weather station form a sequence of random variables, and this ordered sequence can be considered as a stochastic process. Another example is the sequence formed by the continuously changing number of people waiting in line at the ticket window of a railway station. More formally, the sequence of random variables in a process can be denoted by X(t), where t is the index of the process.
The values assumed by the random variable X(t) are called states, and the set of all possible values forms the state space of the process. In this book, we mainly deal with stochastic processes where t represents time. A random variable X can be considered as a rule for assigning to every outcome ζ of an experiment a value X(ζ).
A stochastic process is a rule for assigning to every outcome ζ a function X(t, ζ). Thus, a stochastic process is a family of time functions depending on the parameter ζ or, equivalently, a function of t and ζ. The domain of ζ is the set of all possible experimental outcomes, and the domain of t is a set of non-negative real numbers. For example, the instantaneous speed of a car during its trip from point A to point B is a stochastic process.
The speed on each trip can be considered as an experimental outcome ζi, and each trip has its own speed function X(t, ζi) that characterizes the instantaneous speed of that trip as a function of time. This function will differ from the corresponding functions of other trips because of the influence of many random factors such as wind, road conditions, etc. This is illustrated in Figure 2. It should be noticed that the cut of this stochastic process at time instant t1 represents a random variable with mean Vm.
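The picture of several trips as realizations of one process can be sketched with a short simulation; the random-walk speed model, the mean speed value, and the disturbance magnitude below are illustrative assumptions, not taken from the text.

```python
import random

# Illustrative model (an assumption, not from the text): each trip's
# speed starts at a nominal mean Vm and is perturbed at every step by
# random factors (wind, road conditions, etc.).

VM = 80.0          # assumed mean speed Vm, e.g. in km/h
TIME_STEPS = 100   # discretization of the trip duration

def simulate_trip(seed):
    """One realization X(t, zeta_i) of the speed process: one trip."""
    rng = random.Random(seed)
    speed, path = VM, []
    for _ in range(TIME_STEPS):
        speed = max(0.0, speed + rng.gauss(0.0, 2.0))  # speed stays >= 0
        path.append(speed)
    return path

# Three trips -> three different sample functions of the same process.
trips = [simulate_trip(seed) for seed in (1, 2, 3)]

# The "cut" at a fixed instant t1 is a random variable across trips.
t1 = 50
cut = [trip[t1] for trip in trips]
print(cut)
```

Plotting the three paths together would show them fluctuating around the mean speed, and the vertical cut at t1 would give three different values of the random variable X(t1).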
In real-world systems many parameters, such as temperature, voltage, frequency, etc., change randomly in time and can be treated as stochastic processes.
The time parameter may be discrete or continuous. A discrete time index may take a finite or an infinite number of values; a continuous time index obviously takes an infinite number of values. The values taken by the random variables constitute the state space, which, in its turn, may also be discrete or continuous.
Therefore, stochastic processes may be classified into four categories according to whether their state space and time parameter are continuous or discrete. If the state space of a stochastic process is discrete, then it is called a discrete-state process, often referred to as a chain. A stochastic process X(t, ζ) can be interpreted in four ways. It is a family of functions X(t, ζ) if t and ζ are both variables. It is a single time function (a sample realization of the given process) if t is a variable and ζ is fixed.
It is a random variable equal to the state of the given process at time t if t is fixed and ζ is variable.
It is a number if both t and ζ are fixed. One can use the notation X(t) to represent a stochastic process, omitting, as in the case of random variables, its dependence on ζ. For a fixed number x1, the probability of the event X(t1) ≤ x1 gives the CDF of the random variable X(t1), denoted by F(x1; t1) = Pr{X(t1) ≤ x1}. Given two time instants t1 and t2, X(t1) and X(t2) are two random variables in the same probability space. Their joint distribution is known as the second-order distribution of the process and is given by F(x1, x2; t1, t2) = Pr{X(t1) ≤ x1, X(t2) ≤ x2}. More generally, the nth-order joint distribution F(x1, …, xn; t1, …, tn) = Pr{X(t1) ≤ x1, …, X(tn) ≤ xn}, for all n, represents a complete description of a stochastic process. In practice, obtaining such a complete description of a stochastic process is a very difficult task.
Fortunately, in practice many stochastic processes permit a simpler description. The simplest form of the joint distribution corresponds to a family of independent random variables; then the joint distribution is given by the product of the individual distributions. Definition 2. A stochastic process X(t) is called an independent process if its random variables X(t1), …, X(tn) are mutually independent for any choice of time instants t1, …, tn. The assumption of an independent process considerably simplifies analysis, but it is often unwarranted, and we are forced to consider some kind of dependence. The simplest and a very important type of dependence is the first-order dependence, or Markov dependence: the conditional distribution of X(tn), given the values of X(t1), …, X(tn-1), depends only on X(tn-1), the most recent known value of the process. This is a general definition, which applies to Markov processes with a continuous-state space.
When MSS reliability is studied, discrete-state Markov processes, or Markov chains, are mostly involved. In the next sections we will study both discrete-time and continuous-time Markov chains.
In other words, the state probabilities at a future instant, given the present state of the process, do not depend on the states occupied in the past. Therefore, this process is also called memoryless. In many cases the conditional distribution (2.) does not depend on the time origin; such a Markov process is said to be homogeneous. In addition, we consider here two important stochastic processes that will be used in the future: point and renewal processes. A point process is a set of random points ti on the time axis. With each point process one can associate a stochastic process X(t) equal to the number of points ti in the interval (0, t].
In reliability theory point processes are widely used to describe the appearance of events in time, e.g., failures. An example of a point process is the so-called Poisson process. The Poisson process is usually introduced using Poisson points: the number of points N(t1, t2) in an interval (t1, t2) has a Poisson distribution, Pr{N(t1, t2) = k} = e^(-λ(t2-t1)) [λ(t2-t1)]^k / k!, where λ is the rate of the process. If the intervals (t1, t2) and (t3, t4) are not overlapping, then the random variables N(t1, t2) and N(t3, t4) are independent.
The Poisson process plays a special role in reliability analysis, comparable to the role of the normal distribution in probability theory. Many real physical situations can be successfully described with the help of Poisson processes.
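These properties can be sketched with a short simulation: Poisson points are generated by summing independent exponential inter-arrival times (a standard construction), and points are then counted in intervals. The rate and horizon values below are illustrative assumptions.

```python
import random

# Sketch: Poisson points on the time axis, built by summing independent
# exponential inter-arrival times with rate lam (a standard construction).
# The numeric values of lam and horizon are illustrative assumptions.

def poisson_points(lam, horizon, seed=0):
    """Return the Poisson points falling in (0, horizon]."""
    rng = random.Random(seed)
    points, t = [], 0.0
    while True:
        t += rng.expovariate(lam)   # exponential gap with mean 1/lam
        if t > horizon:
            return points
        points.append(t)

def count(points, a, b):
    """N(a, b): the number of points in the interval (a, b]."""
    return sum(1 for t in points if a < t <= b)

pts = poisson_points(lam=2.0, horizon=1000.0, seed=42)
# E[N(0, horizon)] = lam * horizon = 2000; the simulated count is close.
print(len(pts), count(pts, 0.0, 10.0))
```

Counting points in two non-overlapping intervals of this simulation would give (approximately independent) Poisson-distributed counts with means proportional to the interval lengths.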
A well-known type of point process is the so-called renewal process. This process can be described as a sequence of events, the intervals between which are independent and identically distributed random variables. In reliability theory, this kind of mathematical model is used to describe the flow of failures in time. Consider the sequence of time instants t1, t2, …, where tn = y1 + y2 + … + yn and the yi are independent and identically distributed random variables. This sequence is called a renewal process. An example is the life history of items that are replaced as soon as they fail.
In this case, yi is the total time the ith item is in operation and ti is the time of its failure. A generalization of this type of process is the so-called alternating renewal process. This process consists of two types of independent and identically distributed random variables alternating with each other in turn.
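The alternating renewal model just described can be sketched as a simulation that also estimates long-run availability. The Weibull up-time and exponential repair-time distributions, and all numeric parameters, are illustrative assumptions, not prescribed by the text.

```python
import random

# Sketch of an alternating renewal process for a repairable item:
# i.i.d. up-times alternate with i.i.d. repair (down) times. The
# distributions may be arbitrary; a Weibull up-time (deliberately
# non-exponential) and an exponential repair time are assumed here
# purely for illustration.

def simulate_alternating(horizon, up_time, down_time, seed=0):
    """Return the total up time accumulated over (0, horizon]."""
    rng = random.Random(seed)
    t, up_total = 0.0, 0.0
    while t < horizon:
        u = up_time(rng)                  # operating period
        up_total += min(u, horizon - t)   # truncate at the horizon
        t += u
        if t >= horizon:
            break
        t += down_time(rng)               # idle (repair) period
    return up_total

horizon = 100000.0
up = simulate_alternating(
    horizon,
    up_time=lambda rng: rng.weibullvariate(100.0, 1.5),  # mean ~ 90.3
    down_time=lambda rng: rng.expovariate(1.0 / 5.0),    # mean repair 5
    seed=1,
)
# Long-run availability estimate: fraction of time spent operating,
# which approaches E[up-time] / (E[up-time] + E[down-time]).
print(up / horizon)
```

With these assumed parameters the estimate is close to 90.3 / (90.3 + 5) ≈ 0.95, illustrating how alternating periods of operation and idle time determine availability.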
This type of process is convenient for the description of repairable systems. For such systems, periods of successful operation alternate with periods of idle time. As was described above, a Markov process is a stochastic process whose dynamic behavior is such that the probability distribution for its future development depends only on the present state and not on how the process arrived at that state (Trivedi). When the state space, S, is discrete (finite or countably infinite), then the Markov process is known as a Markov chain.
If the parameter space, T (recall that we usually consider time as the parameter), is discrete too, then we have a discrete-time Markov chain. Let Xn denote the state of the chain at time step n; then X0 is the initial state of the system at time step 0. By using these designations, in analogy with (2.) the Markov property for a discrete-time chain can be written as Pr{Xn = j | Xn-1 = i, Xn-2 = in-2, …, X0 = i0} = Pr{Xn = j | Xn-1 = i}. As in the case of a general Markov process, this equation states that the evolution of the chain beyond step n depends on the past only through the present state.
We designate the probability that at step n the chain will be in state j as pj(n). Thus, we can write pj(n) = Pr{Xn = j}.
We also define the probability pij(m, n) that the chain makes a transition to state j at step n if at step m it was in state i. This is a conditional probability, and we can write pij(m, n) = Pr{Xn = j | Xm = i}. The conditional probability pij(m, n) is known as the transition probability function of the Markov chain. Here we will only consider homogeneous Markov chains, those in which pij(m, n) depends only on the difference n - m.
For such chains, the simpler notation pij(n) = Pr{Xm+n = j | Xm = i}, valid for any m, can be used. In words, pij(n) is the probability that a homogeneous Markov chain will move from state i to state j in exactly n steps.
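For a homogeneous chain the n-step probabilities pij(n) can be obtained as the entries of the nth power of the one-step transition probability matrix. A minimal sketch, using a hypothetical two-state matrix not taken from the text:

```python
# Sketch: n-step transition probabilities of a homogeneous discrete-time
# Markov chain as entries of P^n, where P is the one-step transition
# matrix. The 2x2 chain below (states 0 and 1) is an illustrative
# assumption, not an example from the text.

def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(p, n):
    """Compute P^n, whose (i, j) entry is p_ij(n)."""
    result = [[float(i == j) for j in range(len(p))]
              for i in range(len(p))]           # identity matrix = P^0
    for _ in range(n):
        result = mat_mul(result, p)
    return result

P = [[0.9, 0.1],   # one-step transition probabilities from state 0
     [0.6, 0.4]]   # one-step transition probabilities from state 1

P5 = n_step(P, 5)
# Each row of P^n still sums to 1: a stochastic matrix stays stochastic.
print(P5[0][0], sum(P5[0]))
```

State probabilities evolve the same way: the row vector of probabilities pj(n) equals the initial distribution multiplied by P^n.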