
Journal articles on the topic 'Absorbing Markov chain'

Consult the top 50 journal articles for your research on the topic 'Absorbing Markov chain.'


1

Martínez, Servet. "Entropy of killed-resurrected stationary Markov chains." Journal of Applied Probability 58, no. 1 (2021): 177–96. http://dx.doi.org/10.1017/jpr.2020.81.

Abstract:
We consider a strictly substochastic matrix or a stochastic matrix with absorbing states. By using quasi-stationary distributions we show that there is an associated canonical Markov chain that is built from the resurrected chain, the absorbing states, and the hitting times, together with a random walk on the absorbing states, which is necessary for achieving time stationarity. Based upon the 2-stringing representation of the resurrected chain, we supply a stationary representation of the killed and the absorbed chains. The entropies of these representations have a clear meaning when o…
2

Yu, Pyung Il. "An instructive example of absorbing Markov chain." International Journal of Mathematical Education in Science and Technology 27, no. 3 (1996): 397–403. http://dx.doi.org/10.1080/0020739960270310.

3

Kim, Yongseok, Soyoung Park, and Jeonghee Chi. "Absorbing Markov chain-based roadside units deployment." Contemporary Engineering Sciences 9 (2016): 579–86. http://dx.doi.org/10.12988/ces.2016.6444.

4

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains." Advances in Applied Probability 26, no. 3 (1994): 728–55. http://dx.doi.org/10.1017/s0001867800026525.

Abstract:
We consider an absorbing semi-Markov chain for which each time absorption occurs there is a resetting of the chain according to some initial (replacement) distribution. The new process is a semi-Markov replacement chain and we study its properties in terms of those of the imbedded Markov replacement chain. A time-dependent version of the model is also defined and analysed asymptotically for two types of environmental behaviour, i.e. either convergent or cyclic. The results contribute to the control theory of semi-Markov chains and extend in a natural manner a wide variety of applied probabilit…
5

Gerontidis, Ioannis I. "Semi-Markov Replacement Chains." Advances in Applied Probability 26, no. 3 (1994): 728–55. http://dx.doi.org/10.2307/1427818.

6

Boumi, Shahab, and Adan Ernesto Vela. "Improving Graduation Rate Estimates Using Regularly Updating Multi-Level Absorbing Markov Chains." Education Sciences 10, no. 12 (2020): 377. http://dx.doi.org/10.3390/educsci10120377.

Abstract:
American universities use a procedure based on a rolling six-year graduation rate to calculate statistics regarding their students’ final educational outcomes (graduating or not graduating). As an alternative to the six-year graduation rate method, many studies have applied absorbing Markov chains for estimating graduation rates. In both cases, a frequentist approach is used. For the standard six-year graduation rate method, the frequentist approach corresponds to counting the number of students who finished their program within six years and dividing by the number of students who entered that…
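Estimating a graduation rate with an absorbing Markov chain, as the abstract above describes, comes down to computing the probability of ending in the "graduated" absorbing state. A minimal Python sketch; the year-level states and transition probabilities below are invented for illustration and are not the multi-level model of the cited paper:

```python
# Toy absorbing Markov chain for student progression (illustrative numbers,
# not the model of the cited paper).
# States: 0 = freshman, 1 = sophomore, 2 = graduated, 3 = dropped out.
# States 2 and 3 are absorbing (they transition only to themselves).
P = [[0.10, 0.75, 0.00, 0.15],
     [0.00, 0.15, 0.70, 0.15],
     [0.00, 0.00, 1.00, 0.00],
     [0.00, 0.00, 0.00, 1.00]]

def state_distribution(P, start, steps):
    """Push a point mass at `start` through the chain for `steps` transitions."""
    dist = [0.0] * len(P)
    dist[start] = 1.0
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# After many steps essentially all mass sits on the absorbing states, so the
# mass on state 2 approximates the graduation probability for a freshman.
d = state_distribution(P, start=0, steps=200)
print("estimated graduation rate:", round(d[2], 4))
```

Solving the linear system B = (I − Q)⁻¹R gives the same absorption probabilities exactly; the iteration above is simply the shortest way to sketch the idea.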
7

Teslenko, Victor I., and Oleksiy L. Kapitanchuk. "Multimodal dynamics of nonhomogeneous absorbing Markov chains evolving at stochastic transition rates." International Journal of Modern Physics B 34, no. 11 (2020): 2050105. http://dx.doi.org/10.1142/s0217979220501052.

Abstract:
The Tokuyama–Mori projection operator method for a reduced time-convolutionless description of a local temporal behavior of an open quantum system interacting with the weakly dissipative and fluctuating pervasive environment is applied to a Markov chain subject to random transition probabilities. The solution to the problem of the multimodal dynamics of a two-stage absorbing Markov chain with the fluctuating forward rate constant augmented by a symmetric dichotomous stochastic process is found exactly and compared with that of the problem for the same Markov chain with the fluctuating backward…
8

Minh, Do L., and R. Bhaskar. "Analyzing linear recursive projects as an absorbing chain." Journal of Applied Mathematics and Decision Sciences 2006 (February 23, 2006): 1–6. http://dx.doi.org/10.1155/jamds/2006/84735.

Abstract:
Hardie (2001) used general Markov chains to study the linear “recursive projects” in which some activities may be revisited after a later activity is completed. In this paper, we propose that it is better to treat the project as an absorbing chain. This allows us to calculate the expected value and the variance of the project duration.
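The expected project duration mentioned in this abstract is the classical fundamental-matrix computation for absorbing chains: with Q the transient-to-transient block, the expected steps to absorption solve (I − Q)t = 1. A minimal Python sketch; the two-activity project and its rework probabilities are invented for illustration, not taken from the paper:

```python
# Expected time to absorption via the fundamental matrix N = (I - Q)^(-1).
# The two-activity "project" below is illustrative, not from the cited paper.

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination (A is a small dense matrix)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Transient-to-transient block Q: activity 0 is repeated with prob 0.2,
# otherwise moves on to activity 1; activity 1 sends the project back to
# activity 0 (rework) with prob 0.3, else the project is absorbed in "done".
Q = [[0.2, 0.8],
     [0.3, 0.0]]

# Expected steps to absorption t solves (I - Q) t = 1 (vector of ones).
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)]
             for i in range(2)]
t = solve(I_minus_Q, [1.0, 1.0])
print("expected duration from each activity:", t)
```

The variance of the duration comes from the same fundamental matrix, via the standard formula (2N − I)t − t∘t, where t∘t squares t entrywise.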
9

Udom, Akaninyene Udo. "An absorbing Markov chain in labour system planning." International Journal of Operational Research 18, no. 2 (2013): 188. http://dx.doi.org/10.1504/ijor.2013.056106.

10

Chae, Kyung C., and Tae S. Kim. "Reversed absorbing Markov chain: A simple path approach." Operations Research Letters 16, no. 1 (1994): 41–46. http://dx.doi.org/10.1016/0167-6377(94)90020-5.

11

Igboanugo, A. C. "Application of Markov Chain Model to Corporate Manpower Planning: A Nigeria Local Government Hub Example." Advanced Materials Research 824 (September 2013): 514–26. http://dx.doi.org/10.4028/www.scientific.net/amr.824.514.

Abstract:
A corporate manpower planning study, seeking to gain insight into, and hence, attempt to unwrap the wider meanings of a long-run manpower practice inherent in a set of data obtained from one of the 774 Local Government Organizations in Nigeria, was conducted. The data, which spanned a period of twenty years, relate to six states: recruitment, staff stock, training, interdiction, wastage, and retirement and, in particular, were found to possess Markov properties, especially stochastic regularity, and therefore had an absorbing Markov chain model fitted to the set. Our results suggest that staff…
12

Kim, Woo-sung, Hyunsang Eom, and Youngsung Kwon. "Optimal Design of Photovoltaic Connected Energy Storage System Using Markov Chain Models." Sustainability 13, no. 7 (2021): 3837. http://dx.doi.org/10.3390/su13073837.

Abstract:
This study improves an approach for a Markov chain-based photovoltaic-coupled energy storage model in order to serve a more reliable and sustainable power supply system. In this paper, two Markov chain models are proposed: an Embedded Markov chain and an Absorbing Markov chain. The equilibrium probabilities of the Embedded Markov chain completely characterize the system behavior at a certain point in time. Thus, the model can be used to calculate important measurements to evaluate the system, such as the average availability or the probability that the battery is fully discharged. Also, Absorbing Markov chai…
13

Csenki, Attila. "The joint distribution of sojourn times in finite Markov processes." Advances in Applied Probability 24, no. 1 (1992): 141–60. http://dx.doi.org/10.1017/s0001867800024204.

Abstract:
Rubino and Sericola (1989c) derived expressions for the mth sojourn time distribution associated with a subset of the state space of a homogeneous irreducible Markov chain for both the discrete- and continuous-parameter cases. In the present paper, it is shown that a suitable probabilistic reasoning using absorbing Markov chains can be used to obtain respectively the probability mass function and the cumulative distribution function of the joint distribution of the first m sojourn times. A concise derivation of the continuous-time result is achieved by deducing it from the discrete-time formul…
14

Csenki, Attila. "The joint distribution of sojourn times in finite Markov processes." Advances in Applied Probability 24, no. 1 (1992): 141–60. http://dx.doi.org/10.2307/1427733.

15

Chen, Pei-De, and R. L. Tweedie. "Orthogonal measures and absorbing sets for Markov chains." Mathematical Proceedings of the Cambridge Philosophical Society 121, no. 1 (1997): 101–13. http://dx.doi.org/10.1017/s0305004196008985.

Abstract:
For a general state space Markov chain on a space (X, ℬ(X)), the existence of a Doeblin decomposition, implying the state space can be written as a countable union of absorbing ‘recurrent’ sets and a transient set, is known to be a consequence of several different conditions all implying in some way that there is not an uncountable collection of absorbing sets. These include: (ℳ) there exists a finite measure which gives positive mass to each absorbing subset of X; (𝒢) there exists no uncountable collection of points (xα) such that the measures Kθ(xα, ·) ≔ (1−θ)ΣPn(xα, ·)θ…
16

Lorek, Paweł. "Antiduality and Möbius monotonicity: generalized coupon collector problem." ESAIM: Probability and Statistics 23 (2019): 739–69. http://dx.doi.org/10.1051/ps/2019004.

Abstract:
For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal, in distribution, to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on some partial ordering of the state space. We use a theory of strong stationary duality developed recently for Möbius monotone Markov chains. We give several sharp antidual chains for the Markov chain corresponding to a generalized coupon collector problem. As a consequence – utilizing known results on the limiting distr…
17

Zhang, Lihe, Jianwu Ai, Bowen Jiang, Huchuan Lu, and Xiukui Li. "Saliency Detection via Absorbing Markov Chain With Learnt Transition Probability." IEEE Transactions on Image Processing 27, no. 2 (2018): 987–98. http://dx.doi.org/10.1109/tip.2017.2766787.

18

Zhang, Wenjie, Qingyu Xiong, Weiren Shi, and Shuhan Chen. "Region saliency detection via multi-feature on absorbing Markov chain." Visual Computer 32, no. 3 (2015): 275–87. http://dx.doi.org/10.1007/s00371-015-1065-3.

19

Gast, Nicolas, and Bruno Gaujal. "Computing absorbing times via fluid approximations." Advances in Applied Probability 49, no. 3 (2017): 768–90. http://dx.doi.org/10.1017/apr.2017.21.

Abstract:
In this paper we compute the absorbing time Tn of an n-dimensional discrete-time Markov chain comprising n components, each with an absorbing state and evolving in mutual exclusion. We show that the random absorbing time Tn is well approximated by a deterministic time tn that is the first time when a fluid approximation of the chain approaches the absorbing state at a distance 1/n. We provide an asymptotic expansion of tn that uses the spectral decomposition of the kernel of the chain as well as the asymptotic distribution of Tn, relying on extreme values theory. We show the applicability of this…
20

Coolen-Schrijner, Pauline, Andrew Hart, and Phil Pollett. "Quasistationarity of continuous-time Markov chains with positive drift." Journal of the Australian Mathematical Society. Series B. Applied Mathematics 41, no. 4 (2000): 423–41. http://dx.doi.org/10.1017/s0334270000011735.

Abstract:
We shall study continuous-time Markov chains on the nonnegative integers which are both irreducible and transient, and which exhibit discernible stationarity before drift to infinity “sets in”. We will show how this ‘quasi’ stationary behaviour can be modelled using a limiting conditional distribution: specifically, the limiting state probabilities conditional on not having left 0 for the last time. By way of a dual chain, obtained by killing the original process on last exit from 0, we invoke the theory of quasistationarity for absorbing Markov chains. We prove that the conditioned st…
21

Kostoska, Olivera, Viktor Stojkoski, and Ljupco Kocarev. "On the Structure of the World Economy: An Absorbing Markov Chain Approach." Entropy 22, no. 4 (2020): 482. http://dx.doi.org/10.3390/e22040482.

Abstract:
The expansion of global production networks has raised many important questions about the interdependence among countries and how future changes in the world economy are likely to affect the countries’ positioning in global value chains. We are approaching the structure and lengths of value chains from a completely different perspective than has been available so far. By assigning a random endogenous variable to a network linkage representing the number of intermediate sales/purchases before absorption (final use or value added), the discrete-time absorbing Markov chains proposed here shed new…
22

Wu, Jiajia, Guangliang Han, Peixun Liu, Hang Yang, Huiyuan Luo, and Qingqing Li. "Saliency Detection with Bilateral Absorbing Markov Chain Guided by Depth Information." Sensors 21, no. 3 (2021): 838. http://dx.doi.org/10.3390/s21030838.

Abstract:
The effectiveness of depth information in saliency detection has been fully proved. However, it is still worth exploring how to utilize the depth information more efficiently. Erroneous depth information may cause detection failure, while non-salient objects may be closer to the camera, which also leads to erroneous emphasis on non-salient regions. Moreover, most of the existing RGB-D saliency detection models have poor robustness when the salient object touches the image boundaries. To mitigate these problems, we propose a multi-stage saliency detection model with the bilateral absorbing Mar…
23

Teslenko, V. I. "Fourth-Order Differential Equation for a Two-Stage Absorbing Markov Chain with a Stochastic Forward Transition Probability." Ukrainian Journal of Physics 62, no. 4 (2017): 349–61. http://dx.doi.org/10.15407/ujpe62.04.0349.

24

Kim, Gak-Gyu, Seungwon Baek, and Bong-Kyu Yoon. "A Reliability Redundancy Optimization Problem with Continuous Time Absorbing Markov Chain." Journal of Korean Institute of Industrial Engineers 39, no. 4 (2013): 290–97. http://dx.doi.org/10.7232/jkiie.2013.39.4.290.

25

Huang, Wen-Tso, and Cheng-Chang Lu. "An enhanced absorbing Markov chain model for predicting TAIEX Index Futures." Communications in Statistics - Theory and Methods 47, no. 1 (2017): 133–46. http://dx.doi.org/10.1080/03610926.2017.1300281.

26

Liu, Xiuping, Pingping Tao, Junjie Cao, He Chen, and Changqing Zou. "Mesh saliency detection via double absorbing Markov chain in feature space." Visual Computer 32, no. 9 (2015): 1121–32. http://dx.doi.org/10.1007/s00371-015-1184-x.

27

Jianye, Zhang, Zeng Qinshun, Song Yiyang, and Li Cunbin. "Information Security Risk Assessment of Smart Grid Based on Absorbing Markov Chain and SPA." International Journal of Emerging Electric Power Systems 15, no. 6 (2014): 527–32. http://dx.doi.org/10.1515/ijeeps-2014-0123.

Abstract:
To assess and prevent smart grid information security risks more effectively, this paper provides a quantitative risk-index calculation method based on an absorbing Markov chain, overcoming the deficiencies of earlier studies, in which links between system components were not taken into consideration and evaluation was mostly static. The method avoids the shortcomings of traditional Expert Score, with its significant subjective factors, and also considers the links between information system components, which makes the risk index system closer to reality. Then, a smart grid information securi…
28

Patra, Pradipta, and U. Dinesh Kumar. "Analysing performance-based contract for manufacturing systems using absorbing state Markov chain." International Journal of Advanced Operations Management 9, no. 1 (2017): 37. http://dx.doi.org/10.1504/ijaom.2017.085631.

29

Patra, Pradipta, and U. Dinesh Kumar. "Analysing performance-based contract for manufacturing systems using absorbing state Markov chain." International Journal of Advanced Operations Management 9, no. 1 (2017): 37. http://dx.doi.org/10.1504/ijaom.2017.10006423.

30

Pillai, V. Madhusudanan, and M. P. Chandrasekharan. "An absorbing Markov chain model for production systems with rework and scrapping." Computers & Industrial Engineering 55, no. 3 (2008): 695–706. http://dx.doi.org/10.1016/j.cie.2008.02.009.

31

Brezavšček, Alenka, Mirjana Pejić Bach, and Alenka Baggia. "Markov Analysis of Students’ Performance and Academic Progress in Higher Education." Organizacija 50, no. 2 (2017): 83–95. http://dx.doi.org/10.1515/orga-2017-0006.

Abstract:
Background: The students’ progression towards completing their higher education degrees possesses stochastic characteristics and can therefore be modelled as an absorbing Markov chain. Such an application would have high practical value and offer great opportunities for implementation in practice. Objectives: The aim of the paper is to develop a stochastic model for estimation and continuous monitoring of various quality and effectiveness indicators of a given higher education study programme. Method: The study programme is modelled by a finite Markov chain with five transient and two…
32

Bobrowski, A. "Quasi-stationary distributions of a pair of Markov chains related to time evolution of a DNA locus." Advances in Applied Probability 36, no. 1 (2004): 57–77. http://dx.doi.org/10.1017/s0001867800012878.

Abstract:
We consider a pair of Markov chains representing statistics of the Fisher-Wright-Moran model with mutations and drift. The chains have an absorbing state at 0 and are related by the fact that some random time τ ago they were identical, evolving as a single Markov chain with values in {0,1,…}; from that time on they began to evolve independently, conditional on a state at the time of split, according to the same transition probabilities. The distribution of τ is a function of the deterministic effective population size 2N(·). We study the impact of demographic history on the shape of the quasi-station…
33

Bobrowski, A. "Quasi-stationary distributions of a pair of Markov chains related to time evolution of a DNA locus." Advances in Applied Probability 36, no. 1 (2004): 57–77. http://dx.doi.org/10.1239/aap/1077134464.

34

Fu, James C., and Tung-Lung Wu. "Linear and Nonlinear Boundary Crossing Probabilities for Brownian Motion and Related Processes." Journal of Applied Probability 47, no. 4 (2010): 1058–71. http://dx.doi.org/10.1017/s0021900200007361.

Abstract:
We propose a new method to obtain the boundary crossing probabilities or the first passage time distribution for linear and nonlinear boundaries for Brownian motion. The method also covers certain classes of stochastic processes associated with Brownian motion. The basic idea of the method is based on being able to construct a finite Markov chain, and the boundary crossing probability of Brownian motion is cast as the limiting probability of the finite Markov chain entering a set of absorbing states induced by the boundaries. Error bounds are obtained. Numerical results for various types of bo…
35

Fu, James C., and Tung-Lung Wu. "Linear and Nonlinear Boundary Crossing Probabilities for Brownian Motion and Related Processes." Journal of Applied Probability 47, no. 4 (2010): 1058–71. http://dx.doi.org/10.1239/jap/1294170519.

36

Walde, Getinet Seifu. "Triple absorbing Markov chain model to study the flow of higher education students." Journal of Physics: Conference Series 1176 (March 2019): 042066. http://dx.doi.org/10.1088/1742-6596/1176/4/042066.

37

Sejal, D., V. Rashmi, K. R. Venugopal, S. S. Iyengar, and L. M. Patnaik. "Image recommendation based on keyword relevance using absorbing Markov chain and image features." International Journal of Multimedia Information Retrieval 5, no. 3 (2016): 185–99. http://dx.doi.org/10.1007/s13735-016-0104-9.

38

Chong, Siang Yew, Peter Tiňo, Jun He, and Xin Yao. "A New Framework for Analysis of Coevolutionary Systems—Directed Graph Representation and Random Walks." Evolutionary Computation 27, no. 2 (2019): 195–228. http://dx.doi.org/10.1162/evco_a_00218.

Abstract:
Studying coevolutionary systems in the context of simplified models (i.e., games with pairwise interactions between coevolving solutions modeled as self plays) remains an open challenge since the rich underlying structures associated with pairwise-comparison-based fitness measures are often not taken fully into account. Although cyclic dynamics have been demonstrated in several contexts (such as intransitivity in coevolutionary problems), there is no complete characterization of cycle structures and their effects on coevolutionary search. We develop a new framework to address this issue. At th…
39

Lorek, Paweł. "Siegmund Duality for Markov Chains on Partially Ordered State Spaces." Probability in the Engineering and Informational Sciences 32, no. 4 (2017): 495–521. http://dx.doi.org/10.1017/s0269964817000341.

Abstract:
For a Markov chain on a finite partially ordered state space, we show that its Siegmund dual exists if and only if the chain is Möbius monotone. This is an extension of Siegmund's result for totally ordered state spaces, in which case the existence of the dual is equivalent to the usual stochastic monotonicity. Exploiting the relation between the stationary distribution of an ergodic chain and the absorption probabilities of its Siegmund dual, we present three applications: calculating the absorption probabilities of a chain with two absorbing states knowing the stationary distribution of the…
40

Gamboa, Maria, and Maria Lopez-Herrero. "On the Number of Periodic Inspections During Outbreaks of Discrete-Time Stochastic SIS Epidemic Models." Mathematics 6, no. 8 (2018): 128. http://dx.doi.org/10.3390/math6080128.

Abstract:
This paper deals with an infective process of type SIS, taking place in a closed population of moderate size that is inspected periodically. Our aim is to study the number of inspections that find the epidemic process still in progress. As the underlying mathematical model involves a discrete-time Markov chain (DTMC) with a single absorbing state, the number of inspections in an outbreak is a first-passage time into this absorbing state. Cumulative probabilities are numerically determined from a recursive algorithm and expected values come from explicit expressions.
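The recursive algorithm for cumulative probabilities that this abstract mentions can be sketched by iterating the one-step transition law and reading off the mass absorbed at state 0 (no infectives) after each inspection. The chain below is a toy SIS-type birth-death chain with invented rates, not the model of the paper:

```python
# Toy discrete-time SIS chain with absorbing state 0 (epidemic extinct).
# Population cap and rates are illustrative, not from the cited paper.
N = 3  # at most 3 infectives

def transition(i):
    """One-step transition probabilities from i infectives."""
    if i == 0:
        return {0: 1.0}  # absorbing: outbreak is over
    up = 0.3 if i < N else 0.0   # one new infection
    down = 0.4                   # one recovery
    return {i + 1: up, i - 1: down, i: 1.0 - up - down}

# cdf[k-1] = P(T <= k): probability the outbreak is extinct by inspection k,
# assuming one inspection per time step and one initial infective.
dist = {1: 1.0}
cdf = []
for k in range(1, 21):
    new = {}
    for i, p in dist.items():
        for j, q in transition(i).items():
            new[j] = new.get(j, 0.0) + p * q
    dist = new
    cdf.append(dist.get(0, 0.0))

print("P(extinct by inspection k), k=1..5:", [round(x, 4) for x in cdf[:5]])
```

The number of inspections that still find the epidemic in progress is then governed by the complementary probabilities 1 − P(T ≤ k).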
41

Tang, Wei, Zhijian Wang, Jiyou Zhai, and Zhangjing Yang. "Salient object detection via two-stage absorbing Markov chain based on background and foreground." Journal of Visual Communication and Image Representation 71 (August 2020): 102727. http://dx.doi.org/10.1016/j.jvcir.2019.102727.

42

Moid, Azfar, and Abraham Fapojuwo. "Three-dimensional absorbing Markov chain model for video streaming over IEEE 802.11 wireless networks." IEEE Transactions on Consumer Electronics 54, no. 4 (2008): 1672–80. http://dx.doi.org/10.1109/tce.2008.4711219.

43

Qurashi, Mohammedelameen Eissa, Amal Fudlalla Ahmed Tag Eldeen, and Nassr Al-Maflehi. "Absorbing Markov chain for measuring performance of intensive care unit at Soba hospital university." Applied Mathematical Sciences 13, no. 22 (2019): 1077–89. http://dx.doi.org/10.12988/ams.2019.99127.

44

Liang, Chao, Guang Cheng, Devin L. Wixon, and Teri C. Balser. "An Absorbing Markov Chain approach to understanding the microbial role in soil carbon stabilization." Biogeochemistry 106, no. 3 (2010): 303–9. http://dx.doi.org/10.1007/s10533-010-9525-3.

45

Kijima, Masaaki. "On passage and conditional passage times for Markov chains in continuous time." Journal of Applied Probability 25, no. 2 (1988): 279–90. http://dx.doi.org/10.2307/3214436.

Abstract:
Let X(t) be a temporally homogeneous irreducible Markov chain in continuous time defined on . For k < i < j, let H = {k + 1, ···, j − 1} and let kTij (jTik) be the upward (downward) conditional first-passage time of X(t) from i to j (k) given no visit to . These conditional passage times are studied through first-passage times of a modified chain HX(t) constructed by making the set of states absorbing. It will be shown that the densities of kTij and jTik for any birth-death process are unimodal and the modes kmij (jmik) of the unimodal densities are non-increasing (non-decreasing) with re…
46

Kijima, Masaaki. "On passage and conditional passage times for Markov chains in continuous time." Journal of Applied Probability 25, no. 2 (1988): 279–90. http://dx.doi.org/10.1017/s0021900200040924.

47

Darlington, S. J., and P. K. Pollett. "Quasistationarity in continuous-time Markov chains where absorption is not certain." Journal of Applied Probability 37, no. 2 (2000): 598–600. http://dx.doi.org/10.1017/s002190020001576x.

Abstract:
In a recent paper [4] it was shown that, for an absorbing Markov chain where absorption is not guaranteed, the state probabilities at time t conditional on non-absorption by t generally depend on t. Conditions were derived under which there can be no initial distribution such that the conditional state probabilities are stationary. The purpose of this note is to show that these conditions can be relaxed completely: we prove, once and for all, that there are no circumstances under which a quasistationary distribution can admit a stationary conditional interpretation.
48

Darlington, S. J., and P. K. Pollett. "Quasistationarity in continuous-time Markov chains where absorption is not certain." Journal of Applied Probability 37, no. 2 (2000): 598–600. http://dx.doi.org/10.1239/jap/1014842561.

49

Duan, Qihong, Zhiping Chen, and Dengfu Zhao. "An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains." Mathematical Problems in Engineering 2010 (2010): 1–16. http://dx.doi.org/10.1155/2010/242567.

Abstract:
In many applications, the failure rate function may present a bathtub shape curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient…
50

Takayama, Jun-ichi, and Tomomi Sugiyama. "A Study on O-D Estimation Model by Observed Link Flows Using Absorbing Markov Chain." Doboku Gakkai Ronbunshu, no. 569 (1997): 75–84. http://dx.doi.org/10.2208/jscej.1997.569_75.
