

Managing Transportation Infrastructure with Markov and Semi-Markov Models

Mr. Jay Prakash1, Dr. Umesh Sharma2

1Research Scholar, Department of Mathematics, Sanskriti University, Mathura (U.P.), India

2Associate Professor, Department of Mathematics, Sanskriti University, Mathura (U.P.), India

DOI: https://doi.org/10.51244/IJRSI.2025.12050075

Received: 22 May 2025; Accepted: 26 May 2025; Published: 09 June 2025

ABSTRACT

Effective management of transportation infrastructure requires accurate modeling of system dynamics and deterioration processes. This study explores the application of Markov and Semi-Markov models as decision-support tools for the maintenance and rehabilitation of transportation assets, such as roads, bridges, and transit systems. Markov models are employed to represent the probabilistic transitions of infrastructure condition states over discrete time intervals, enabling planners to estimate long-term performance and optimize maintenance policies. Semi-Markov models extend this framework by incorporating variable sojourn times, allowing for more realistic modeling of time-dependent deterioration and maintenance effects. By comparing the predictive capabilities and computational performance of both models, this research highlights their respective advantages and suitability for different infrastructure management scenarios. The findings support the integration of stochastic modeling approaches into infrastructure asset management systems, leading to improved decision-making, cost-efficiency, and service reliability. For many years, pavement and bridge management systems have included Markov models. Semi-Markov models have been used in Bridge Management Systems in more recent years. According to research, this stochastic technique can be used to predict future network level conditions and to develop preservation models for transportation infrastructure if there is sufficient data to create semi-Markov models for that infrastructure. These methods can be used in numerous contexts and are not just restricted to transportation infrastructure.

Keywords: Markov Models, Semi-Markov Models, Transportation Infrastructure, Stochastic Modeling.

INTRODUCTION

The management of transportation infrastructure is a critical component of national and regional development, directly impacting economic growth, public safety, and quality of life. As infrastructure systems such as highways, bridges, and public transit networks age, they are subject to continuous deterioration due to traffic loads, environmental conditions, and material aging. Maintaining these assets in a serviceable condition over their life cycle presents a significant challenge for transportation agencies, particularly under constraints of limited budgets and growing demand.

To address these challenges, data-driven and probabilistic methods have gained increasing attention for supporting infrastructure asset management decisions. Among these, Markov and Semi-Markov models have emerged as powerful tools for modeling the deterioration and maintenance of infrastructure systems. These stochastic models offer a structured framework for predicting future condition states, evaluating maintenance strategies, and optimizing resource allocation.

Markov models are widely used due to their mathematical simplicity and ease of integration with existing asset management systems. They model the condition of infrastructure as a set of discrete states, with transitions between states governed by probabilities that depend solely on the current state. However, a key limitation of traditional Markov models is the assumption of constant time intervals between transitions, which may not accurately reflect real-world deterioration processes. To overcome this limitation, Semi-Markov models have been introduced as a more flexible alternative. By incorporating variable sojourn times—the time an asset remains in a particular state—Semi-Markov models provide a more realistic representation of deterioration behavior and maintenance timing. This added complexity allows for better modeling of infrastructure systems that exhibit non-exponential transition patterns and varying maintenance effects.

This paper explores the application of both Markov and Semi-Markov models in managing transportation infrastructure, comparing their capabilities, limitations, and implications for policy and practice. The goal is to demonstrate how these models can be effectively integrated into infrastructure asset management frameworks to support long-term planning, reduce life-cycle costs, and enhance the reliability and safety of transportation networks.

Markov Chain Model

The Markov chain model is one of the most widely used stochastic processes for modeling the deterioration of transportation infrastructure. It provides a mathematical framework for predicting the evolution of an asset’s condition over time based on probabilistic transitions between discrete condition states. The fundamental assumption of a Markov chain is that the future state of the system depends only on its current state and not on the sequence of past states—this is known as the Markov property.

If {Xn} describes a process that is in state i at time n and has a fixed probability Pi,j of being in state j after a transition, then

\[
P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P_{i,j} \tag{1}
\]

for all states i0, i1, …, in−1, i, j and all n ≥ 0.

Examine the spectrum of crack indices linked to flexible (asphalt) road pavement in Table 1, where each condition state has been allocated a crack index. Transition probabilities for a Markov chain model can be produced using the following formula, which was employed by Wang et al.

\[
P_{i,j}(a_k) = \frac{m_{i,j}(a_k)}{m_i(a_k)} \tag{2}
\]

for i, j = 10, 9, 8, 7, 6, 5 and 4, where

ak = rehabilitation action k; in this case the ‘do-nothing’ action, i.e. k = 1;

Pi,j(ak) = transition probability from state i to j after action k is taken;

mi,j(ak) = total number of miles of pavement for which the state prior to action k was i and the state after action k was j;

mi(ak) = total number of miles of pavement for which the state prior to action k was i.
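As a minimal illustration of Eq. (2), the following sketch (Python) estimates ‘do-nothing’ transition probabilities from mileage counts; the counts used here are hypothetical and serve only to show the computation.

```python
# m[i][j]: miles observed in state i before the 'do-nothing' period and in state j after it.
# These counts are purely illustrative, not data from the study.
m = {10: {10: 181.0, 9: 14.4, 8: 3.4, 7: 1.2},
     9:  {9: 73.7,  8: 15.7, 7: 9.0, 6: 1.6},
     8:  {8: 66.0,  7: 27.4, 6: 4.2, 5: 1.4, 4: 1.0}}

def transition_probabilities(m):
    """P_{i,j}(a_k) = m_{i,j}(a_k) / m_i(a_k), as in Eq. (2)."""
    P = {}
    for i, row in m.items():
        total = sum(row.values())            # m_i(a_k): total miles starting in state i
        P[i] = {j: miles / total for j, miles in row.items()}
    return P

print(transition_probabilities(m)[10])       # {10: 0.905, 9: 0.072, 8: 0.017, 7: 0.006}
```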

Table 1. The range of crack indices and corresponding condition states.

Crack Index (CRK) range    Condition state
9.5 ≤ CRK ≤ 10             10
8.5 ≤ CRK < 9.5            9
7.5 ≤ CRK < 8.5            8
6.5 ≤ CRK < 7.5            7
5.5 ≤ CRK < 6.5            6
4.5 ≤ CRK < 5.5            5
CRK < 4.5                  4

Semi-Markov model       

This section demonstrates a Semi-Markov model for flexible (asphalt) road pavement. Consider a stochastic process with states 0, 1, 2, …, such that whenever it enters state i, i ≥ 0: (1) it will enter the next state j with probability Pij, i, j ≥ 0, and (2) given that the next state is j, the sojourn time from i to j has distribution Fij. For a semi-Markov process, the sojourn times may follow a specific distribution, such as the Weibull distribution, and the method of Maximum Likelihood Estimation (MLE) can be used to estimate the parameters of that distribution. Before discussing how the semi-Markov process can be applied to model deterioration, it is useful to define the basic concepts of the MLE method, one of the best-known statistical methods for deriving estimators.

Maximum likelihood estimation

If X1, …, Xn is an independent and identically distributed sample from a population with probability density function (pdf) or probability mass function (pmf) f(x | θ1, …, θk), the likelihood function is defined as follows.

\[
L(\theta \mid X) = L(\theta_1, \ldots, \theta_k \mid x_1, \ldots, x_n) = \prod_{i=1}^{n} f(x_i \mid \theta_1, \ldots, \theta_k) \tag{3}
\]

Assuming a Weibull distribution for the sojourn time, the Weibull pdf, as defined by Billington and Allan, and by Tobias and Trindade, is

\[
f(t) = \frac{\beta}{\alpha} \left( \frac{t}{\alpha} \right)^{\beta - 1} e^{-\left( \frac{t}{\alpha} \right)^{\beta}} \tag{4}
\]

where t is the time, in years, that each one-tenth-of-a-mile unit of pavement segment spends in one condition state before changing to another, and α and β are the corresponding scale and shape parameters. If η = 1/α, then

\[
f(t) = \beta \eta (\eta t)^{\beta - 1} e^{-(\eta t)^\beta} \tag{5}
\]

Using Eq. 3 it follows that the likelihood is:

\[
L(t_1, \ldots, t_n, \eta, \beta) = (\beta \eta^\beta)^n e^{-\eta^\beta (t_1^\beta + \cdots + t_n^\beta)} \prod_{i=1}^n t_i^{\beta – 1} \quad \text{…………(6)}
\]

Once the log-likelihood has been differentiated and set to zero, the MLEs of the parameters β and η are

\[
\hat{\beta} = \left[ \frac{\sum_{i=1}^n t_i^{\hat{\beta}} \ln(t_i)}{\sum_{i=1}^n t_i^{\hat{\beta}}} - \frac{1}{n} \sum_{i=1}^n \ln(t_i) \right]^{-1} \tag{7}
\]

And

\[
\hat{\eta} = \left( \frac{n}{\sum_{i=1}^n t_i^{\hat{\beta}}} \right)^{\frac{1}{\hat{\beta}}} \tag{8}
\]

If there are n units of pavement in a particular condition state and k units have transitioned to a lower condition state, with complete individual sojourn times t1 < t2 < … < tk, the incomplete (right-censored) sojourn times T1, …, Tn−k of the remaining n − k units should also be taken into account in the estimation. For the Weibull distribution, if the incomplete sojourn times T1, …, Tn−k have been observed in addition to the complete sojourn times t1, …, tk, the likelihood function can be expressed as follows:

\[
L = \prod_{i=1}^k f(t_i \mid \theta) \cdot \prod_{j=1}^{n-k} \left( 1 - F(T_j \mid \theta) \right) \tag{9}
\]

where i indexes the completed sojourn times, j indexes the incomplete sojourn times, and θ can be a vector.

Maximizing this likelihood produces the MLEs of the parameters β and η:

\[
\hat{\beta} = \left[ \frac{\sum_{i=1}^k t_i^{\hat{\beta}} \ln(t_i) + \sum_{j=1}^{n-k} T_j^{\hat{\beta}} \ln(T_j)}{\sum_{i=1}^k t_i^{\hat{\beta}} + \sum_{j=1}^{n-k} T_j^{\hat{\beta}}} - \frac{1}{k} \sum_{i=1}^k \ln(t_i) \right]^{-1} \tag{11}
\]

And

\[
\hat{\eta} = \left( \frac{k}{\sum_{i=1}^k t_i^{\hat{\beta}} + \sum_{j=1}^{n-k} T_j^{\hat{\beta}}} \right)^{\frac{1}{\hat{\beta}}} \tag{12}
\]
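Eqs. (11) and (12) can be solved numerically, since β̂ appears on both sides of Eq. (11). The sketch below (Python) uses a bracketing root finder for the shape parameter and then evaluates Eq. (12); the data and the bracketing interval are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import brentq

def weibull_mle_censored(t, T=()):
    """MLE of the Weibull shape (beta) and eta = 1/alpha from Eqs. (11)-(12).
    t: complete sojourn times; T: right-censored sojourn times (may be empty)."""
    t, T = np.asarray(t, float), np.asarray(T, float)

    def score(beta):
        # Eq. (11) rearranged: A(beta)/B(beta) - mean(ln t) - 1/beta = 0
        a = np.sum(t**beta * np.log(t)) + np.sum(T**beta * np.log(T))
        b = np.sum(t**beta) + np.sum(T**beta)
        return a / b - np.mean(np.log(t)) - 1.0 / beta

    beta = brentq(score, 0.05, 50.0)                                      # shape parameter
    eta = (t.size / (np.sum(t**beta) + np.sum(T**beta))) ** (1.0 / beta)  # Eq. (12)
    return beta, eta

# Illustrative data: five complete and two right-censored sojourn times (years).
beta_hat, eta_hat = weibull_mle_censored(t=[3, 4, 5, 6, 8], T=[7, 9])
alpha_hat = 1.0 / eta_hat                                                 # scale, since eta = 1/alpha
```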

Semi-Markov Kernel

The semi-Markov kernel in the form given in Eq. (13) can be used to illustrate how the semi-Markov model can be generated from knowledge of the sojourn periods in a specific condition state prior to transitioning. Ibe [19] defines the one-step transition probability Qi,j(t) of the semi-Markov process as:

\[
Q_{i,j}(t) = P\big[X_{n+1} = j, \; G_n \leq t \mid X_n = i \big], \quad t \geq 0 \tag{13}
\]

where Qi,j(t) is the conditional probability that, given that the process is currently in state i and the waiting time in state i is no more than t, the process will be in state j next. The amount of time the process stays in i before moving on to j is denoted by Gn. Additionally, it follows that

\[
Q_{i,j}(t) = p_{i,j} \, H_{i,j}(t) \tag{14}
\]

where pi,j is defined as the transition probability of the embedded Markov chain, and

\[
H_{i,j}(t) = P\big[G_n \leq t \mid X_n = i, X_{n+1} = j \big] \tag{15}
\]

Semi-Markov process

Given that a continuous-time semi-Markov process enters state i at time zero, Howard [20] offered the following method to calculate the probability that it will be in state j at time n.

\[
\varnothing_{ij}(n) = \delta_{ij} \, {>}w_i(n) + \sum_{k=1}^N P_{ik} \int_0^n h_{ik}(m) \, \varnothing_{kj}(n - m) \, dm, \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, N; \; n = 0, 1, 2, \ldots \tag{16}
\]

 

\[
\delta_{ij} =
\begin{cases}
1 & \text{if } i = j, \\
0 & \text{if } i \neq j.
\end{cases}
\tag{17}
\]

The quantity on the left of Eq. (16) is known as the interval transition probability from state i to state j in the interval (0, n): it is the probability that a continuous-time semi-Markov process will be in state j at time n given that it entered state i at time n = 0. The term >wi(n) in the first part of Eq. (16) is the probability that the process remains in its initial state i beyond time n without making a transition.

The second term represents the probability of the sequence of events in which, at time m, the process moves from state i to a state k, and then in the remaining period n − m it moves from state k to state j. To account for all possible outcomes, the probability is computed by summing over all times (m) of the initial transition between 1 and n and over all states (k) to which the first transition may have occurred. Here pik is the probability of transitioning from i to k, and hik(m) is the probability distribution of the sojourn time from i to k evaluated at time m. In matrix form, Eq. (16) becomes

\[
\varnothing(n) = W(n) + \int_0^n [P_x H(m)] \, \varnothing(n – m) \, dm, \quad n = 0, 1, 2, \ldots \quad (18)
\]

In addition, let

\[
C(m) = [P_x H(m)] \tag{19}
\]

where C(m) is defined as the core matrix, with components cij(m) = pij hij(m): hij(m) is the probability distribution of the sojourn time in state i prior to moving to j, evaluated at time m, and pij is the transition probability of the embedded Markov chain.
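For illustration only, the recursion of Eqs. (16)–(19) can be evaluated in discrete time, with the integral replaced by a yearly convolution sum. The sketch below assumes row-oriented matrices (entry [i, k] is the probability of moving from i to k) and illustrative data structures; it is not the formulation ultimately adopted below, which instead chains yearly matrices as in Eq. (22).

```python
import numpy as np

def interval_transition_probs(p, h, w_exceed, n_years):
    """Discrete-time evaluation of Eqs. (16)-(19).
    p[i, k]        : transition probabilities of the embedded Markov chain
    h[m][i, k]     : sojourn-time pmf for year m = 1, ..., n_years (index 0 unused)
    w_exceed[n][i] : probability that the sojourn in state i exceeds n years ('>w_i(n)')
    Returns a list phi with phi[n][i, j] = P(state j at year n | entered state i at year 0)."""
    N = p.shape[0]
    phi = [np.eye(N)]                              # phi(0): the process is where it started
    for n in range(1, n_years + 1):
        out = np.diag(w_exceed[n])                 # first term of Eq. (16): never left state i
        for m in range(1, n + 1):
            out += (p * h[m]) @ phi[n - m]         # C(m) = p .* h(m), Eq. (19), then convolve
        phi.append(out)
    return phi
```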

Consequently, the interval transition matrix for ‘do-nothing’ actions that represents a single transition at time m can be written as in Eq. (20):

\[
\varnothing^{(0,m)}(m) =
\begin{bmatrix}
1 - \sum_{j=4}^9 p_{10,j} H_{10,j}(m) & 0 & 0 & 0 & 0 & 0 \\
\vdots & \ddots & 0 & 0 & 0 & 0 \\
\vdots & \vdots & \ddots & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & 0 \\
p_{10,5} H_{10,5}(m) & p_{9,5} H_{9,5}(m) & p_{8,5} H_{8,5}(m) & p_{7,5} H_{7,5}(m) & p_{6,5} H_{6,5}(m) & 1 - p_{5,4} H_{5,4}(m)
\end{bmatrix}
\tag{20}
\]

where Hij is the cumulative distribution of the sojourn time between condition states i and j at time m, and pij is the probability for the semi-Markov process’s embedded Markov chain.

Another method is proposed to model the overall transition over time since it is evident that, when calculating the overall transition probabilities for an interval (0, n), a number of permutations must be taken into account as the number of years, m, rises. The ‘do-nothing’ scenarios, in which no action is performed on the infrastructure, are the main emphasis when estimating asset deterioration. Therefore, it is reasonable to suppose that the semi-Markov process only proceeds in one direction and that, after the infrastructure leaves a condition state, that condition state is never revisited. There is also a chance that certain infrastructure, like the pavement portions in this instance, will ‘skip’ condition states in a single transition; but since there is very little chance of ‘skipping’ two (2) or more condition states, this need not be taken into account. Assuming that only one condition state may be ‘skipped’ in a single transition, the transition probability (for the embedded Markov chain), pij, of ‘skipping’ a condition state is equal to one minus the transition probability (for the embedded Markov chain), pik, of moving on to the next state k. Consequently,

\[
p_{ij} = 1 - p_{ik}, \quad i = 10, 9, 8, 7, 6; \quad j = i - 2; \quad k = i - 1; \quad \min(j) = 4 \tag{21}
\]

This is because, since deterioration is being taken into consideration, the overall chance of eventually departing a specific condition state—which is not a terminal state—to a lower condition state is 1.

The period between the pavement segment’s initial entry into condition state i and its initial entry into condition state j is also assumed to be the sojourn time in condition state i prior to shifting to condition state j. The transition diagram in Figure 1 illustrates the potential ‘single step’ transitions used in developing the semi-Markov model.

Figure 1. Transition diagram that represents the possible ‘single step’ transitions between condition states.

In Figure 1, hij(m) is the probability density function of the sojourn time between condition states i and j at time m, and pij is the transition probability of the embedded Markov chain from one state to the other. The idea that only one transition can occur in a year is the basis for the term ‘single step’ transition. For the transition labeled p10,9 h10,9(m), the pavement segment spends some time in condition state 10 before moving into condition state 9; similarly, the segment may spend some time in condition state 10 before moving into condition state 8 without passing through condition state 9.

It is assumed that only one transition occurs in a year. Rather than calculating the interval transition probabilities for the interval (0, n) directly, the conditional transition probabilities for each yearly interval, m = 1, 2, …, n, are calculated and multiplied by one another to estimate the transition probabilities for the interval (0, n).

\[
\varnothing^{(0,n)}(n) = \varnothing^{(0,1)} \cdot \varnothing^{(1,2)} \cdots \varnothing^{(n-1,n)} \tag{22}
\]

In this case, m = 1, 2, …, n, and each factor is a one-year ‘single’ transition probability matrix for the period m − 1 to m (i.e., the mth interval). Following Eq. (20), the interval transition probability matrix for the first year, if the condition states can decrease by one (1) or two (2) states, is:

\[
\varnothing(m) =
\begin{bmatrix}
1 - \sum_{j=8}^9 p_{10,j} H_{10,j}(m) & 0 & 0 & 0 & 0 & 0 \\
p_{10,9} H_{10,9}(m) & \ddots & 0 & 0 & 0 & 0 \\
p_{10,8} H_{10,8}(m) & \cdots & \ddots & 0 & 0 & 0 \\
0 & \cdots & \cdots & \ddots & 0 & 0 \\
0 & 0 & \cdots & \cdots & \ddots & 0 \\
0 & 0 & 0 & \cdots & \cdots & 1 \\
\end{bmatrix}
\tag{23}
\]

with m = 1. However, a different formulation is employed to calculate the corresponding transition probability matrices for the following intervals, assuming that the sojourn duration is left-truncated at the beginning of each period.

Service life analysis and left truncation

Reliability theory, also known as survival analysis, can be used to statistically analyze the service lives of transportation infrastructures. Sometimes the start time at the point of selection is not t = 0 (i.e., not at ‘birth’), but rather a value t = t0 > 0; in such cases, individuals are selected at t0 and tracked prospectively until the event or censoring occurs. This means that each subject’s lifetime or censoring time, Ti, is larger than t0, and that Ti is left-truncated at t0. When the same idea is applied to a transportation infrastructure’s service life, then

\[
F_{(T \mid T > t_0)}(t) =
\begin{cases}
0, & \text{if } t \leq t_0; \\
\dfrac{F_T(t) - F_T(t_0)}{1 - F_T(t_0)}, & \text{if } t > t_0.
\end{cases}
\tag{24}
\]

To determine the probabilities associated with the sojourn time for the interval (1, 2], we take t0 = 1 and 1 < t ≤ 2 in Eq. (24). This means that only sojourn times greater than t = 1 are being considered, and the cumulative distribution of the sojourn time in the interval can be considered left-truncated. It can therefore be described as:

\[
H_{i,j_{(T \mid T > 1)}}(t) = \frac{H_{i,j_T}(t) - H_{i,j_T}(1)}{1 - H_{i,j_T}(1)}, \quad 1 < t \leq 2 \tag{25}
\]

Consequently, the cumulative distribution of the sojourn time for the interval (m − 1, m] can be expressed as follows:

\[
H_{i,j_{(T \mid T > m-1)}}(t) = \frac{H_{i,j_T}(t) - H_{i,j_T}(m-1)}{1 - H_{i,j_T}(m-1)}, \quad m-1 < t \leq m \tag{26}
\]

When t = m, the cumulative distribution of the sojourn time becomes

\[
H_{i,j_{(T \mid T > m-1)}}(m) = \frac{H_{i,j_T}(m) - H_{i,j_T}(m-1)}{1 - H_{i,j_T}(m-1)} \tag{27}
\]

Consequently, for the interval (m – 1, m), the transition probability is

\[
\varnothing^{(m-1,m)}(m) =
\begin{bmatrix}
1 - \sum_{j=8}^9 p_{10,j} H_{10,j_{(T \mid T > m-1)}}(m) & 0 & 0 & 0 & 0 & 0 \\
p_{10,9} H_{10,9_{(T \mid T > m-1)}}(m) & \ddots & 0 & 0 & 0 & 0 \\
\vdots & \ddots & \ddots & 0 & 0 & 0 \\
0 & \cdots & \cdots & \ddots & 0 & 0 \\
0 & 0 & \cdots & \cdots & \ddots & 0 \\
0 & 0 & 0 & \cdots & \cdots & 1 \\
\end{bmatrix}
\tag{28}
\]
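A sketch of how the yearly matrices of Eqs. (23) and (28) and the interval matrix of Eq. (22) might be assembled: each off-diagonal entry combines an embedded-chain probability pij with the left-truncated Weibull probability of Eq. (27), and the yearly matrices are then multiplied together. The column-oriented layout follows Eqs. (20), (23) and (28), so the matrix for a later year multiplies on the left; state ordering, data structures, and parameter values are illustrative assumptions.

```python
import numpy as np

def weibull_cdf(t, alpha, beta):
    """Weibull cumulative distribution, H(t) = 1 - exp(-(t/alpha)^beta)."""
    return 1.0 - np.exp(-(t / alpha) ** beta)

def truncated_year_prob(m, alpha, beta):
    """Eq. (27): P(sojourn ends by year m | it lasted beyond year m - 1)."""
    return (weibull_cdf(m, alpha, beta) - weibull_cdf(m - 1, alpha, beta)) / (1.0 - weibull_cdf(m - 1, alpha, beta))

def yearly_matrix(m, p, weibull):
    """One-year matrix for year m in the column-oriented layout of Eqs. (23)/(28).
    p[(i, j)]: embedded-chain probability of i -> j; weibull[(i, j)]: (alpha, beta)."""
    states = sorted({i for i, _ in p}, reverse=True) + [4]    # e.g. [10, 9, ..., 5, 4]
    idx = {s: n for n, s in enumerate(states)}
    phi = np.eye(len(states))
    for (i, j), pij in p.items():
        q = pij * truncated_year_prob(m, *weibull[(i, j)])
        phi[idx[j], idx[i]] += q                              # mass moving i -> j this year
        phi[idx[i], idx[i]] -= q                              # removed from the diagonal
    return phi

def interval_matrix(n, p, weibull):
    """Eq. (22): chain the one-year matrices; with column-oriented matrices,
    the matrix for a later year multiplies on the left."""
    mats = [yearly_matrix(m, p, weibull) for m in range(1, n + 1)]
    phi = np.eye(mats[0].shape[0])
    for M in mats:
        phi = M @ phi
    return phi

# Illustrative inputs: p = {(10, 9): 0.71, (10, 8): 0.29, (9, 8): 0.75, ...},
#                      weibull = {(10, 9): (9.0, 2.0), (10, 8): (9.0, 2.0), ...}
```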

Black et al. utilized Equation (27) to calculate the transition probabilities for yearly transitions in degradation models based on one-step states, assuming that the embedded Markov chain’s transition probability was 1. Given that it has “survived” until the beginning of the period, it follows that the probability derived from Eq. (27) can be used to characterize the likelihood related to the sojourn duration until the end of the period.

Sojourn times of flexible pavement condition states

The sojourn time distribution for each one-tenth unit of a mile of pavement segment is then examined in order to create the semi-Markov model. The number of miles for a given segment can be rounded to the nearest one-tenth of a mile. A schematic of the pavement segment’s division into tenth of a mile sub-sections is shown in Figure 2. The infrastructure data can be arranged and analyzed with the aid of an algorithm to estimate the parameters of the sojourn time distributions in each condition state. The steps are described in the following.

  • Each segment’s length is calculated to the closest tenth of a mile.
  • In order to mimic ‘do-nothing’ activities on the pavement segments over time, a series of decreasing CRKs is extracted for each segment, showing the yearly decreases in CRK after the year in which the segment was initially constructed or overlaid.
  • As previously mentioned, Table 1 is used to allocate the pavement segments to condition states throughout time.

All of the pavement segments that are tracked eventually either reach the current condition state from a higher or equal condition state, or they are simply becoming “new.” The sojourn period of a unit of pavement segment in the present condition state is practically known if it leaves a given condition state at a known time and enters a lower condition state. However, the sojourn time of a pavement segment unit is not precisely known and is regarded as right-censored if the pavement is in a specific condition state and the pavement tracking ended because the condition state either increased or the “study” ended. Pavement segments’ condition states are shown changing over time in Figure 3, which also provides instances of the total and censored amount of time spent in each condition state.
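The data-organization steps described above can be sketched as follows: for one pavement unit observed once per year under ‘do-nothing’ conditions, each run in a condition state either ends with a drop (a complete sojourn time) or ends when the state improves or tracking stops (a right-censored time). The function and data layout are illustrative.

```python
def sojourn_times(states_by_year):
    """Return (complete, censored) lists of (state, next_state_or_None, years_in_state)
    for one pavement unit observed once per year under 'do-nothing' conditions."""
    complete, censored = [], []
    run_state, run_len = states_by_year[0], 1
    for s in states_by_year[1:]:
        if s == run_state:
            run_len += 1
        elif s < run_state:                      # deterioration: sojourn fully observed
            complete.append((run_state, s, run_len))
            run_state, run_len = s, 1
        else:                                    # improvement (e.g. overlay): censor and restart
            censored.append((run_state, None, run_len))
            run_state, run_len = s, 1
    censored.append((run_state, None, run_len))  # tracking ended: last sojourn is censored
    return complete, censored

# Example modeled on segment 3 of Figure 3: six years in state 10, seven in 9, one in 8, six in 7.
comp, cens = sojourn_times([10]*6 + [9]*7 + [8]*1 + [7]*6)
```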

The distribution of the sojourn time in condition state i prior to transitioning to condition state j, Hi,j(t), can be ascertained from the complete and censored durations derived from the data, where j is either i − 1 or i − 2, i = 10, 9, 8, 7, 6, 5, and min(j) = 4.

Figure 3. An example of the change in the condition states of pavement segments over time.

For each condition state i, the embedded-chain probability pi,j can be calculated as the proportion of the total quantity of infrastructure that left condition state i (to any state other than itself) that moved to condition state j. Segment 3 spends six years in condition state 10, drops one state, then spends seven years in condition state 9, and then drops one more state to condition state 8, as shown in Figure 3. After spending a year in condition state 8, segment 3 moves on to condition state 7, where it remains for at least six years. In Figure 3, segment 4 is observed to have spent ten years in condition state 10 before experiencing a two-state decline to condition state 8; this is followed by a sequence of one-state declines, from which the corresponding sojourn periods can also be deduced.

It is possible to calculate the maximum likelihood estimates of the Weibull distribution’s scale (α) and shape (β) parameters, which are used to characterize the sojourn time distributions. Based on the Weibull parameters (α and β values) found for the sojourn periods between condition states, an algorithm may be developed to find the series of transition probability matrices. This algorithm uses the MLEs of the α and β values as inputs. After that, the transition probabilities based on Eqs. (23) and (28) can be calculated and utilized to model the survival curves and the anticipated degradation over time.

Examples of transition probabilities associated with the Markov chain and semi-Markov models

Transition probabilities in a Markov chain model

As demonstrated in Eq. (29), a transition probability matrix is an example of a set of transition probabilities for a Markov chain model, where Πij is the transition probability matrix and the condition states i and j label the columns and rows, respectively.

\[
\prod_{ij} =
\begin{bmatrix}
0.905 & 0 & 0 & 0 & 0 & 0 \\
0.072 & 0.737 & 0 & 0 & 0 & 0 \\
0.017 & 0.157 & 0.660 & 0 & 0 & 0 \\
0.006 & 0.090 & 0.274 & 0.707 & 0 & 0 \\
0 & 0.016 & 0.042 & 0.188 & 0.724 & 0 \\
0 & 0 & 0.014 & 0.086 & 0.112 & 0.582 \\
0 & 0 & 0.010 & 0.019 & 0.164 & 0.418 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\tag{29}
\]
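As a brief illustration of how a matrix such as Eq. (29) is used to predict network-level condition, the sketch below projects an initial condition distribution forward year by year. The small three-state matrix is written column-wise (each column sums to one), in keeping with the layout of Eq. (29), but its values are illustrative and are not taken from the study.

```python
import numpy as np

# Illustrative 3-state, column-stochastic 'do-nothing' matrix (columns sum to 1).
P = np.array([[0.90, 0.00, 0.00],
              [0.08, 0.75, 0.00],
              [0.02, 0.25, 1.00]])

x = np.array([1.0, 0.0, 0.0])        # all mileage starts in the best state
for year in range(1, 11):
    x = P @ x                        # condition distribution after `year` years
print(x)                             # fraction of mileage in each state after 10 years
```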

Transition probabilities in a semi-Markov model

Examples of the transition probabilities of the embedded Markov chain of a semi-Markov process are shown for each transition in Table 2.

Table 2. Transition probabilities of the embedded Markov chain of the semi-Markov process.

Transitions i to j    Transition probability
10 to 9 0.707
9 to 8 0.752
8 to 7 0.645
7 to 6 0.468
6 to 5 0.214
5 to 4 1.000
10 to 8 0.293
9 to 7 0.248
8 to 6 0.355
7 to 5 0.532
6 to 4 0.786

The frequency distributions of the observed sojourn time, comprising both the uncensored and right censored sojourn times, for each unit mile prior to transition are displayed in Figures 4–7.

As anticipated, Figures 4 and 5 show that more pavement units fully changed from condition state 10 to 9 than from condition state 10 to 8. Figures 4 and 5 show the same total pavement length that is right-censored. This is due to the uncertainty surrounding whether each pavement unit would have gone from condition state 10 to 9 or from 10 to 8.

Table 3 displays instances of the Weibull distribution parameters based on Maximum Likelihood Estimation (MLE), along with the corresponding uncertainties.

Figure 4. Frequency of sojourn times in condition state 10 before transitioning to 9.

Figure 5. Frequency of sojourn times in condition state 10 before transitioning to 8.

Figure 6. Frequency of sojourn times in condition state 9 before transitioning to 8.

Figure 7. Frequency of sojourn times in condition state 9 before transitioning to 7.

Year 1

\[
\varnothing_{ij}(1) =
\begin{bmatrix}
0.991 & 0 & 0 & 0 & 0 & 0 \\
0.008 & 0.822 & 0 & 0 & 0 & 0 \\
0.001 & 0.078 & 0.795 & 0 & 0 & 0 \\
0 & 0.100 & 0.170 & 0.814 & 0 & 0 \\
0 & 0 & 0.035 & 0.123 & 0.886 & 0 \\
0 & 0 & 0 & 0.086 & 0.059 & 0.911 \\
0 & 0 & 0 & 0.063 & 0.056 & 0.089 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\tag{30}
\]

 

Year 2

\[
\varnothing_{ij}(2) =
\begin{bmatrix}
0.970 & 0 & 0 & 0 & 0 & 0 \\
0.028 & 0.716 & 0 & 0 & 0 & 0 \\
0.002 & 0.150 & 0.690 & 0 & 0 & 0 \\
0 & 0.134 & 0.249 & 0.749 & 0 & 0 \\
0 & 0 & 0.061 & 0.166 & 0.773 & 0 \\
0 & 0 & 0 & 0.085 & 0.107 & 0.744 \\
0 & 0 & 0 & 0 & 0.120 & 0.256 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\tag{31}
\]

Year 3

\[
\varnothing_{ij}(3) =
\begin{bmatrix}
0.944 & 0 & 0 & 0 & 0 & 0 \\
0.049 & 0.652 & 0 & 0 & 0 & 0 \\
0.007 & 0.197 & 0.633 & 0 & 0 & 0 \\
0 & 0.151 & 0.290 & 0.717 & 0 & 0 \\
0 & 0 & 0.077 & 0.188 & 0.695 & 0 \\
0 & 0 & 0 & 0.095 & 0.138 & 0.602 \\
0 & 0 & 0 & 0 & 0.167 & 0.398 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\tag{32}
\]

Year 4

\[
\varnothing_{ij}(4) =
\begin{bmatrix}
0.915 & 0 & 0 & 0 & 0 & 0 \\
0.071 & 0.603 & 0 & 0 & 0 & 0 \\
0.014 & 0.234 & 0.591 & 0 & 0 & 0 \\
0 & 0.163 & 0.319 & 0.694 & 0 & 0 \\
0 & 0 & 0.090 & 0.203 & 0.631 & 0 \\
0 & 0 & 0 & \cdots & \cdots & \cdots
\end{bmatrix}
\tag{33}
\]

Year 5

\[
\varnothing_{ij}(5) =
\begin{bmatrix}
0.883 & 0 & 0 & 0 & 0 & 0 \\
0.093 & 0.562 & 0 & 0 & 0 & 0 \\
0.024 & 0.265 & 0.557 & 0 & 0 & 0 \\
0 & 0.173 & 0.343 & 0.676 & 0 & 0 \\
0 & 0 & 0.100 & 0.215 & 0.577 & 0 \\
0 & 0 & 0 & 0.109 & 0.183 & 0.388 \\
0 & 0 & 0 & 0 & 0.240 & 0.612 \\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\tag{34}
\]

Stochastic preservation model

The approach can be applied to the management of transportation infrastructure, where a semi-Markov decision process (SMDP) serves as the foundation for the decision model. One example of a bridge element with several condition states to which the preservation model might be applied in a Bridge Management System is the ‘Bare Concrete Deck.’ Researchers in the field of infrastructure management, especially as it pertains to pavement and rail infrastructures, have applied the adaptive preservation approach. Because transition probabilities are revised over time, Madanat et al. also characterize the Pontis Bridge Management System’s methodology as adaptive. The application of the SMDP serves as the foundation for the methods described here.

A bridge element’s maintenance activities can include a variety of tasks with different time frames. Once more, the sojourn time—in this example, the amount of time needed to finish maintenance tasks—can be estimated using the Weibull distribution. It is expected that the sojourn periods for maintenance actions are independent of the bridge element’s present and future condition states, in contrast to the do-nothing action. It is also reasonable to presume that rehabilitation efforts primarily include replacing the relevant bridge component, which has a more predictable timeline than maintenance efforts. It is possible to interpret the time required to complete improvement (maintenance and rehabilitation) work as beginning when the issue with the bridge element was initially identified, including the time needed to put the necessary work out to bid if needed, and then the time needed to actually complete the work.

Network level optimization

The ability to determine the minimum-cost long-term policy for each bridge element is one of the primary objectives of a Bridge Management System (BMS). This policy is based on the steady-state concept and consists of a set of recommended actions that minimize the long-term Maintenance, Repair and Rehabilitation (MR&R) cost requirements while keeping the bridge element out of risk of failure. If the minimum-cost long-term policy can be determined, it represents the most cost-efficient set of actions for the bridge element; if any of the actions are delayed, the result is higher long-term expenses, and if more improvement actions than the recommended actions are carried out, the result is also higher long-term costs. Bridge components are expected to provide continuous transit connectivity over extended periods of time, so it is crucial to have an optimal policy that can be sustained for a very long time. In a BMS, the following three (3) things normally occur each year: (1) bridge elements deteriorate when no improvement actions are taken, also known as ‘do-nothing’ actions; (2) some bridge elements undergo improvement actions, which incur associated costs; and (3) the improvement actions improve network conditions overall.

As a result of the MR&R actions, elements will move in and out of condition states at the network level for any given condition state. In order for a steady state to be achieved throughout the bridge network or a subset of it, the total number of bridge elements entering a specific condition state must match the total number of similar bridge elements exiting that condition state. This makes the policy viable in the long term and ensures that the distribution of a bridge element among its condition states stays consistent from year to year within the group. The best policy is the one that meets the steady-state criteria and, as a result, minimizes the transportation agency’s operating expenses each year.

Since a Markov chain model assumes that the sojourn time in one state before transitioning to another follows an exponential distribution in continuous time, it is more restrictive than a semi-Markov process, where the sojourn times can be assumed to follow a different distribution, such as the Weibull distribution. Nevertheless, the Pontis Bridge Management System has used the Markov Decision Process (MDP) to determine the optimal policy, in which there is a means to update the transition probabilities over time or as needed.

\[
f(t) = \frac{\beta}{\alpha} \left(\frac{t}{\alpha}\right)^{\beta - 1} e^{-\left(\frac{t}{\alpha}\right)^\beta} \tag{35}
\]

The scale and shape parameters are represented by α and β, respectively. When the shape parameter is equal to 1, the resulting distribution becomes exponential and the rate of deterioration is constant. The rate of deterioration increases with time if the shape parameter is greater than 1, and decreases with time if the shape parameter is less than 1; the latter is generally not anticipated for transportation-related infrastructure.

The deterioration model, which is a component of the preservation model for bridge elements, is similar to the one described above in that it uses the Maximum Likelihood Estimation of the parameters of the Weibull distribution to describe the sojourn time in one condition state before transitioning to a lower condition state. The sojourn time distribution for maintenance works can be determined in a similar manner.

Discount coefficient

We will examine continuous-time discounting with a rate s > 0, for which the present value of one unit received t time units in the future is e^{−st}. In the case of discounting over a 1-year period, we set t = 1, which gives e^{−s} = d, where d denotes the corresponding discrete-time discount rate used in an MDP. For instance, d = 0.9 in the MDP corresponds to s = −ln 0.9 = 0.105 in the SMDP, which represents the relevant discount factor derived from continuous-time discounting.

The Laplace transforms (s-transform)

In the context of the SMDP model, when accounting for continuous-time discounting, it is necessary to calculate the Laplace transform (s-transform) of the distribution of sojourn times between states. If fX(x) is the pdf of a continuous random variable X that takes only non-negative values (so that fX(x) = 0 for x < 0), the Laplace transform is defined as

\[
M_X(s) = E[e^{-sX}] = \int_0^\infty e^{-s x} \, f_X(x) \, dx \tag{36}
\]

A fundamental characteristic of the Laplace transform is that its evaluation at s=0 yields a value of 1.

\[
M_X(s)\bigg|_{s=0} = \int_0^\infty f_X(x) \, dx = 1 \tag{37}
\]

The Weibull distribution’s Laplace transform can be complex, as demonstrated in Eq. (38).

\[
E\left[e^{-tX}\right] = \frac{1}{\lambda^k t^k} \cdot \frac{p^k \sqrt{\frac{q}{p}}}{(\sqrt{2\pi})^{q+p-2}} \,
G_{p,q}^{\,q,p} \left(
\begin{matrix}
\frac{1-k}{p}, \frac{2-k}{p}, \ldots, \frac{p-k}{p} \\
0, \frac{1}{q}, \ldots, \frac{q-1}{q}
\end{matrix}
\,\middle|\, \frac{p^p}{(q \lambda^k t^k)^q}
\right)
\tag{38}
\]

Where G is known as the Meijer G-function.

Consequently, it might need to be addressed through numerical methods. Numerical solutions are derived by inserting the scale and shape (Weibull) parameter estimates, along with the continuous-time discount rate s, into the expression specified in Eq. (39)

\[
M_X(s) = E[e^{-sX}] = \int_0^\infty e^{-s x} \cdot \frac{\beta}{\alpha} \left(\frac{x}{\alpha}\right)^{\beta - 1} e^{-\left(\frac{x}{\alpha}\right)^\beta} \, dx \tag{39}
\]
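The numerical evaluation in Eq. (39) can be carried out with standard adaptive quadrature; the sketch below uses scipy, and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def weibull_laplace(s, alpha, beta):
    """Numerically evaluate M_X(s) = E[exp(-sX)] for a Weibull(alpha, beta) sojourn time, Eq. (39)."""
    integrand = lambda x: np.exp(-s * x) * (beta / alpha) * (x / alpha) ** (beta - 1) * np.exp(-(x / alpha) ** beta)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

# Example: discount rate s = -ln(0.9) ~ 0.105 (see 'Discount coefficient'), alpha = 8 years, beta = 2.
M = weibull_laplace(s=0.105, alpha=8.0, beta=2.0)
assert abs(weibull_laplace(0.0, 8.0, 2.0) - 1.0) < 1e-6   # Eq. (37): the transform equals 1 at s = 0
```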

In the situation where the bridge element is in the terminal state, it can be mathematically assumed that the bridge element remains in that state for 1 year before ‘transitioning’ back into the terminal state. More generally, when a certain transition consistently requires a fixed time t, the equivalent of the Laplace transform is given by Eq. (40):

\[
M_X(s) = E[e^{-sX}] = e^{-st} \tag{40}
\]

Semi-Markov decision process with discounting

The following formulation gives the calculation of present values according to the SMDP:

\[
v_i(a, \lambda) = r_i(a, \lambda) + \sum_{j=1}^N p_{ij}(a) \, M_{ij}^H(a, \lambda) \, v_j(a, \lambda) \tag{41}
\]

where Mij^H(a, λ) represents the Laplace transform (s-transform) of the sojourn-time distribution Hij under action a, and the transition probability matrix, Pi,j(a), is defined as

\[
P_{i,j}(a) =
\begin{bmatrix}
p_{1,1}(a) & p_{2,1}(a) & \cdots & p_{N-1,1}(a) & p_{N,1}(a) \\
p_{1,2}(a) & p_{2,2}(a) & \cdots & p_{N-1,2}(a) & p_{N,2}(a) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\vdots & \vdots & \vdots & \vdots & \vdots \\
p_{1,N}(a) & p_{2,N}(a) & \cdots & p_{N-1,N}(a) & p_{N,N}(a)
\end{bmatrix}
\tag{43}
\]

If we let

\[
q_{ij}(a, \lambda) = p_{ij}(a) \, M_{ij}^H(a, \lambda)
\tag{44}
\]

We obtain

\[
v_i(a, \lambda) = r_i(a, \lambda) + \sum_{j=1}^N q_{ij}(a, \lambda) \, v_j(a, \lambda), \quad i = 1, \ldots, N \tag{45}
\]

According to [19], the long-term value r_i(a, λ) is given by

\[
r_i(a, \lambda) = B_i(a) + \sum_{j=1}^N p_{ij} \int_0^\infty \int_0^\tau e^{-\lambda x} \, b_{ij}(x, a) \, f_{H_{ij}}(\tau, a) \, dx \, d\tau \tag{46}
\]

Where Bi (a) is the immediate cost and is determined by

\[
B_i(a) = \sum_{j=1}^N p_{ij}(a) \, B_{ij}(a) \quad \text{for } i = 1, \ldots, N \quad \text{where } B_{ij}(a) < \infty
\tag{47}
\]

From a mathematical perspective, the second term in Eq. (46) signifies the sum of costs that accrue at the rate b_ij per unit time until the transition to state j takes place. In the SMDP model, it is assumed that only ‘immediate’ costs are incurred during a transition from one condition state to another; it can thus be assumed that the second term in Eq. (46) is zero. When expressed in matrix form, Eq. (45) becomes

\[
V(a, \lambda) =
\begin{bmatrix}
v_1(a, \lambda) \\
v_2(a, \lambda) \\
\vdots \\
v_N(a, \lambda)
\end{bmatrix}
\tag{48}
\]

 

\[
R(a, \lambda) =
\begin{bmatrix}
r_1(a, \lambda) \\
r_2(a, \lambda) \\
\vdots \\
r_N(a, \lambda)
\end{bmatrix}
\tag{49}
\]

 

\[
Q(a, \lambda) =
\begin{bmatrix}
q_{1,1}(a,\lambda) & q_{1,2}(a,\lambda) & \cdots & q_{1,N}(a,\lambda) \\
q_{2,1}(a,\lambda) & q_{2,2}(a,\lambda) & \cdots & q_{2,N}(a,\lambda) \\
\vdots & \vdots & \ddots & \vdots \\
q_{N,1}(a,\lambda) & q_{N,2}(a,\lambda) & \cdots & q_{N,N}(a,\lambda)
\end{bmatrix}
\tag{50}
\]

Based on Eq.(45), considering steady state conditions,

\[
V(a, \lambda) = R(a, \lambda) + Q(a, \lambda) \cdot V(a, \lambda) \tag{51}
\]

Then

\[
V(a, \lambda) = [I - Q(a, \lambda)]^{-1} \cdot R(a, \lambda) \tag{52}
\]

Consequently, there exists a minimum long-term cost, derived from an optimal policy. Ibe [19] defines the minimum long-term cost as follows:

\[
v_i^* = \min_{(a, \lambda)} \left\{ r_i(a, \lambda) + \sum_{j=1}^N q_{ij}(a, \lambda) \, v_j^* \right\}, \quad i = 1, \ldots, N \tag{53}
\]
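A sketch of Eqs. (52) and (53): for a fixed action, the long-term discounted cost is V = [I − Q]^{-1} R, and a value-iteration loop over the available actions approximates the minimum long-term cost of Eq. (53). The matrix shapes, dictionaries, and convergence settings are illustrative assumptions.

```python
import numpy as np

def policy_cost(Q, R):
    """Eq. (52): long-term discounted cost of one fixed action, V = (I - Q)^(-1) R."""
    return np.linalg.solve(np.eye(Q.shape[0]) - Q, R)

def optimal_costs(Q_by_action, R_by_action, iters=1000, tol=1e-10):
    """Eq. (53): value iteration over actions. Q_by_action[a] holds the discounted kernel
    q_ij(a, lambda) (rows indexed by current state i); R_by_action[a] the cost vector r_i(a, lambda)."""
    n = next(iter(Q_by_action.values())).shape[0]
    v = np.zeros(n)
    for _ in range(iters):
        candidates = np.array([R_by_action[a] + Q_by_action[a] @ v for a in Q_by_action])
        v_new = candidates.min(axis=0)           # cheapest action in each state
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```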

Do-nothing action (action ‘d’)

As previously stated, for ‘do-nothing’ actions (ad), it can be assumed that the sojourn time in a condition state before transition follows a Weibull distribution, except when sojourning in the terminal state. With the help of Eq. (39), the numerical solution for the Laplace transform of the Weibull distribution can be found for all scenarios apart from failure. Let

\[
M_{ij}^H(a_d, \lambda) = \mathcal{L} \left\{ f(\infty, a_d \mid \alpha_{ij}, \beta_{ij}) \right\}, \quad i,j = 1, \ldots, NS \tag{54}
\]

Here, NS denotes the count of states excluding the terminal state. The terminal state is characterized as an absorption condition state or the moment when the bridge element has reached the conclusion of its useful life. To put it differently, when evaluating the ‘do-nothing’ action on a bridge element, once it reaches the terminal state, it will persist in that state without rehabilitation. Thus, it follows that

\[
M_{ij}^H(a_d, \lambda) = e^{-s} = e^{-\lambda}, \quad \text{where } s = \lambda; \quad i = j = F \tag{55}
\]

Where F represents the terminal state. From Eqs. (44), (50) and (54),

\[
Q(a_d, \lambda) =
\begin{bmatrix}
p_{1,1} \cdot \mathcal{L}\{f(\infty, a_d \mid \alpha_{1,1}, \beta_{1,1})\} & \cdots & p_{1,F} \cdot \mathcal{L}\{f(\infty, a_d \mid \alpha_{1,F}, \beta_{1,F})\} \\
\vdots & \ddots & \vdots \\
p_{F,1} \cdot \mathcal{L}\{f(\infty, a_d \mid \alpha_{F,1}, \beta_{F,1})\} & \cdots & p_{F,F} \cdot \mathcal{L}\{f(\infty, a_d \mid \alpha_{F,F}, \beta_{F,F})\}
\end{bmatrix}
+
\begin{bmatrix}
0 \\
\vdots \\
0 \\
e^{-\lambda}
\end{bmatrix}
\tag{56}
\]

Where ad represents ‘do-nothing’ action.

It can be assumed that the deterioration of bridge elements occurs in a one-step process, with the bridge element able to drop by only one condition state at any given moment. To put it differently, a bridge element in a specific condition state will ultimately move to the next lower condition state when the only actions taken on that element are ‘do-nothing’ actions. It can also be assumed that for ‘do-nothing’ actions, once the condition of the bridge element exits a condition state, that state will not be revisited. For the ‘do-nothing’ action, the transition probability pij of the embedded Markov chain of the semi-Markov process is thus assumed to be 1 between the current and next condition states, as well as for the terminal state. Thus, for a bridge component that possesses five (5) condition states in addition to the terminal state,

\[
P_{(i,j)}(a_d) =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1
\end{bmatrix}
\tag{57}
\]

And so Eq. (56) can be simplified to:

\[
Q(a_d, \lambda) =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0 & 0 \\
\mathcal{L}\{f(\infty, a_d \mid \alpha_{1,2}, \beta_{1,2})\} & 0 & 0 & 0 & 0 & 0 \\
0 & \mathcal{L}\{f(\infty, a_d \mid \alpha_{2,3}, \beta_{2,3})\} & 0 & 0 & 0 & 0 \\
0 & 0 & \ddots & 0 & 0 & 0 \\
0 & 0 & 0 & \ddots & 0 & 0 \\
0 & 0 & 0 & 0 & \mathcal{L}\{f(\infty, a_d \mid \alpha_{5,F}, \beta_{5,F})\} & e^{-\lambda}
\end{bmatrix}
\tag{58}
\]
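A sketch of assembling the discounted kernel Q(a_d, λ) of Eq. (58): each sub-diagonal entry is the numerically evaluated Laplace transform of the corresponding Weibull sojourn-time pdf (Eq. (39)), and the terminal state contributes e^{−λ} (Eq. (55)). The Weibull parameter values and the six-state layout are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

def weibull_laplace(s, alpha, beta):
    """Numerical Laplace transform of a Weibull(alpha, beta) pdf, as in Eq. (39)."""
    f = lambda x: np.exp(-s * x) * (beta / alpha) * (x / alpha) ** (beta - 1) * np.exp(-(x / alpha) ** beta)
    return quad(f, 0.0, np.inf)[0]

# Illustrative Weibull parameters for the one-step drops 1->2, 2->3, ..., 5->F (terminal).
alpha_beta = [(9.0, 2.1), (8.0, 1.9), (7.0, 1.8), (6.0, 1.7), (5.0, 1.6)]
lam = 0.105                                      # continuous-time discount rate

Q_d = np.zeros((6, 6))                           # five condition states plus the terminal state F
for i, (a, b) in enumerate(alpha_beta):
    Q_d[i + 1, i] = weibull_laplace(lam, a, b)   # p_ij = 1 for the single-step drop (Eq. 57)
Q_d[5, 5] = np.exp(-lam)                         # terminal state: fixed 1-year sojourn (Eq. 55)
```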

Maintenance action (action ‘m’)

For the maintenance action (am), the sojourn time distribution may also follow a Weibull distribution. Every time a maintenance action is performed, it can impact the condition of each bridge element in the network differently. To rephrase, the same maintenance action can lead to three possible outcomes for the condition state of a bridge element: it may improve, remain unchanged, or decline at a slower rate than it would have without the action being taken. In order to estimate the transition probability of the embedded Markov chain as a result of the maintenance action, one can observe the changes occurring in a sample of bridge elements that underwent the maintenance action. The expected transition probability matrix between pairs of observations can be determined using the method of least squares with matrix computations. Consider Eq. (59):

\[
\begin{pmatrix}
x_1 & \cdots & x_F \\
\vdots & \ddots & \vdots \\
x_1^* & \cdots & x_F^*
\end{pmatrix}
\cdot
\begin{pmatrix}
p_{1,1}(a_m) & \cdots & p_{1,F}(a_m) \\
\vdots & \ddots & \vdots \\
p_{F,1}(a_m) & \cdots & p_{F,F}(a_m)
\end{pmatrix}
=
\begin{pmatrix}
y_1 & \cdots & y_F \\
\vdots & \ddots & \vdots \\
y_1^* & \cdots & y_F^*
\end{pmatrix}
\tag{59}
\]

In Eq. (59), each xi represents the proportion of the bridge element in condition state i prior to the maintenance action am, and each yi represents the proportion of the same bridge element in condition state i following the maintenance action am, for i = 1, 2, …, NS, F. Each row of the corresponding matrices containing the xi and yi values corresponds to a distinct inspection record for a specific bridge element. When the xi's and yi's are known for a sample of identical bridge elements, the transition probability matrix for action am can be calculated through matrix computations via the least-squares method, following Eq. (60). The nearest pair of inspection dates preceding and succeeding a maintenance action can be utilized to capture the pairs of records that possess the respective condition states.

\[
P_{i,j}(a_m) = \bigl(X^T X\bigr)^{-1} X^T Y
\tag{60}
\]
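A sketch of the least-squares estimate around Eqs. (59) and (60): the before-action proportions are stacked into X, the after-action proportions into Y, and the least-squares solution of X P = Y is taken, with a crude clipping and renormalization step so that each row remains a valid probability distribution. The data are purely illustrative.

```python
import numpy as np

def maintenance_tpm(X, Y):
    """Least-squares estimate of P in X P = Y, i.e. P = (X^T X)^(-1) X^T Y (Eq. (60)).
    X, Y: (records x states) proportions before / after the maintenance action."""
    P, *_ = np.linalg.lstsq(X, Y, rcond=None)
    P = np.clip(P, 0.0, None)                       # crude repair of small negative estimates
    return P / P.sum(axis=1, keepdims=True)         # renormalize each row to sum to 1

# Illustrative before/after proportions for three inspection-record pairs and three states.
X = np.array([[0.70, 0.20, 0.10],
              [0.50, 0.30, 0.20],
              [0.60, 0.30, 0.10]])
Y = np.array([[0.80, 0.15, 0.05],
              [0.65, 0.25, 0.10],
              [0.75, 0.20, 0.05]])
P_hat = maintenance_tpm(X, Y)
```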

The resulting transition matrix does not account for uncertainties regarding the duration of maintenance work and the distribution of sojourn time in each condition state. It can also be assumed that the sojourn time distribution for the maintenance action follows a Weibull distribution. The numerical solution for the Laplace of the Weibull distribution of sojourn times, regardless of the current and subsequent condition states, can be determined for NS states (excluding the terminal state) using Eq. (39). As the solutions for NS states are numerical and the scale (α) and shape (β) estimates are not specific to any state, the Laplace of the Weibull distribution for each i, j is represented by:

\[
M_{ij}^H (a_m, \lambda) = L \big\{ f(\infty, a_m \mid \alpha, \beta) \big\} \quad \text{for the condition state } 1, \ldots, NS
\tag{61}
\]

where NS denotes the count of states, excluding the terminal state. To find the corresponding term for the terminal state, Eq. (55) can be applied. For an element with five (5) condition states (plus a terminal state),

\[
Q(a_m, \lambda) =
\begin{pmatrix}
p_{1,1} \cdot L\{ f(\infty, a_m \mid \alpha, \beta) \} & & & & & 0 \\
\vdots & \ddots & & & & \vdots \\
\vdots & & \ddots & & & \vdots \\
\vdots & & & \ddots & & \vdots \\
p_{1,1} \cdot L\{ f(\infty, a_m \mid \alpha, \beta) \} & & & & p_{1,1} \cdot L\{ f(\infty, a_m \mid \alpha, \beta) \} & e^{-\lambda}
\end{pmatrix}
\tag{62}
\]

Rehabilitation action (action ‘r’)

It is possible to ascertain the duration of specific rehabilitation actions if there is adequate data on these rehabilitation projects. If this information is unavailable, the duration of the rehabilitation action can be estimated by calculating the time between the two (2) closest inspections, one before the start and one after the completion of the rehabilitation action. In this case, the latter approach was assumed, giving a sojourn time of two (2) years. It can be assumed that the transition probability of the embedded Markov chain for the rehabilitation action is fixed, with the condition state of the bridge element returning to 1. As such, the transition probability for the embedded Markov chain of the semi-Markov process, pij, for the rehabilitation action is

\[
P_{i,j}(a_r) =
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0
\end{pmatrix}
\tag{63}
\]

Based on  Eqs. (40), (44) and (50),

\[
Q(a_r, \lambda) =
\begin{pmatrix}
e^{-2\lambda} & 0 & 0 & 0 \\
e^{-2\lambda} & 0 & 0 & 0 \\
e^{-2\lambda} & 0 & 0 & 0 \\
e^{-2\lambda} & 0 & 0 & 0 \\
e^{-2\lambda} & 0 & 0 & 0 \\
e^{-2\lambda} & 0 & 0 & 0
\end{pmatrix}
\tag{64}
\]

CONCLUSIONS

This paper presented viable methods that show how condition data for transportation infrastructure can be utilized to create Markov and semi-Markov deterioration models, which represent network-level performance in the absence of maintenance actions. Modeling deterioration with semi-Markov processes allows for a less restrictive approach than Markov chain deterioration models, as it relaxes the assumption regarding the distribution of sojourn times in condition states for ‘do-nothing’ actions. A preservation model utilizing Semi-Markov Decision Processes for transportation infrastructure was also introduced. The preservation model aims to establish the minimum long-term costs associated with maintaining a transportation infrastructure within a group or network of comparable infrastructures. The primary ‘inputs’ for the preservation model are: (a) the scale and shape parameter estimates of the Weibull distribution that characterizes ‘do-nothing’ actions, and (b) the transition probabilities of the embedded Markov chain along with the scale and shape parameter estimates of the Weibull distribution for maintenance actions. When sufficient data are available, employing a Semi-Markov Decision Process is beneficial for modeling transportation infrastructure preservation.

REFERENCES

  1. Wang CP. Pavement network optimization and analysis [PhD dissertation]. Tempe, AZ, USA: Arizona State University; 1992
  2. Wang KCP, Zaniewski J, Way G. Probabilistic behavior of pavements. Journal of Transportation Engineering, American Society of Civil Engineers (ASCE). 1994;120(3):358-375
  3. Nasseri S, Gunaratne M, Yang J, Nazef A. Application of improved crack prediction methodology in Florida's highway network. Transportation Research Record: Journal of the Transportation Research Board. 2009; 2093:67-7
  4. Golabi K, Thompson PD, Hyman WA.Pontis Version 2.0 Technical Manual, A Network Optimization System for Bridge Improvements and Maintenance. Washington DC: Federal Highway Administration; December 1993
  5. Micevski T, Kuczera G, Coombes P. Markov model for storm water pipe deterioration. Journal of Infrastructure Systems, American Society of Civil Engineers (ASCE). 2002;8(2):49-56
  6. Baik H-S, Seok HJ, Abraham DM. Estimating transition probabilities in markov chain-based deterioration models for management of wastewater systems. Journal of Water Resources Planning and Management, American Society of Civil Engineers (ASCE). 2006; 132(1):15-24
  7. Yang J, Gunaratne M, Jian John L, Dietrich B. Use of recurrent markov chains for modeling the crack performance of flexible pavements. Journal of Transportation Engineering, American Society of Civil Engineers (ASCE). 2005;131(11):861-872
  8. Yang J, Lu JJ, Gunaratne M, Dietrich B. Modeling crack deterioration of flexible pavements: Comparison of recurrent Markov chains and artificial neural networks. Transportation Research Record: Journal of the Transportation Research Board. 1974;18–25:2009
  9. Ng S-K, Moses F. Bridge deterioration modeling using semi-markov theory. A. A. Balkema Uitgevers B.V. Structural Safety and Reliability. 1998;1:113-120
  10. Sobanjo JO. State transition probabilities in bridge deterioration based on weibull sojourn times. Structure and Infrastructure Engineering: Maintenance, Management, Life-Cycle Design and Performance. 2011;7(10):747-764
  11. Black M, Brint AT, Brailsford JR. Comparing probabilistic methods for the asset management of distributed items. Journal of Infrastructure Systems, American Society of Civil Engineers (ASCE). 2005;11(2):102-109
  12. Black M, Brint AT, Brailsford JR. A semi-markov approach for modelling asset deterioration. Journal of the Operational Research Society. 2005;56: 1241-1249
  13. Thomas O. Stochastic preservation model for transportation infrastructure [PhD dissertation]. Tallahassee, FL, USA: Florida State University; 2011
  14. Ross SM. Stochastic Processes. 2nd ed. USA: John Wiley and Sons, Inc.; 1996
  15. Casella G, Berger RL. Statistical Inference. 2nd ed. Duxbury, Pacific Grove, CA, USA: Cenage Learning; 2001
  16. Billington R, Allan RN. Reliability Evaluation of Engineering Systems: Concepts and Techniques. London: Pitman Books Limited; 1983
  17. Tobias PA, Trindade DC. Applied Reliability. 2nd ed. Florida: Chapman and Hall/CRC Press; 1995
  18. Birolini A. Reliability Engineering, Theory and Practice. 5th ed. Berlin, Heidelberg: Springer, Verlag; 2007
  19. Ibe OC. Markov Processes for Stochastic Modeling. Massachusetts: Elsevier Academic Press; 2009
  20. Howard RA. Dynamic probabilistic systems. In: Volume II: Semi-Markov and Decision Processes. Canada: John Wiley and Sons Inc.; 1971
  21. Cleves MA, Gould WW, Guitierrez RG. An Introduction to Survival Analysis Using Stata. Revised ed. 4905 Lakeway Drive, College Station, Texas 77845: Stata Press; 2001
  22. Castillo E, Hadi AS, Balakrishnan N, Sarabia JM. Extreme Value and Related Models with Applications in Engineering and Science. New Jersey: John Wiley and Sons, Inc.; 2005
  23. Lee ET. Statistical Methods for Survival Data Analysis. 2nd ed. USA: John Wiley and Sons, Inc.; 1992
  24. Durango PL, Madanat SM. Optimal maintenance and repair policies in infrastructure management under uncertain facility deterioration rates: An adaptive approach. Transportation Research Part A. 2002;36:763-778
  25. Guillaumot VM, Durango-Cohen PL, Madanat SM. Adaptive optimization of infrastructure maintenance and inspection decisions under performance model uncertainty. Journal of Infrastructure Systems, American Society of Civil Engineers (ASCE). 2003; 9(4):133-139
  26. Gonzalez J, Romera JCR, Perez JM. Optimal railway infrastructure maintenance and repair policies to manage under uncertainty with adaptive control. UC3MWorking Papers. Statistics and Econometrics. 2006;5:1-15
  27. Madanat SM, Park S, Kuhn K. Adaptive optimization of infrastructure maintenance and inspection decisions under performance model uncertainty. Journal of Infrastructure Systems, American Society of Civil Engineers (ASCE). 2006;12(3):192-198
  28. Thompson PD, Harrison FD. Pontis Version 2.0 User’s Manual, A Network Optimization System for Bridge Improvements and Maintenance. Washington DC: Federal Highway Administration; December 1993
  29. Evans M, Hastings N, Peacock B. Statistical Distributions. 3rd ed. New York: John Wiley and Sons; 2000
  30. Puterman ML. Markov Decision Processes: Discrete Stochastic Dynamic Programming. New Jersey: John Wiley and Sons; 2005
  31. Sagias NC, Karagiannidis GK. Gaussian class multivariate weibull distributions: Theory and applications in fading channels. Institute of Electrical and Electronics Engineers. Transactions on Information Theory. 2005;51(10): 3608-3619
  32. Thompson PD, Johnson MB. Markovian bridge deterioration: Developing models from historical data. Structure and Infrastructure Engineering. 2005;1:85-91
