A selection of scientific literature on the topic "Convergence of Markov processes"

Cite a source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, reports, and other scholarly sources on the topic "Convergence of Markov processes".

Next to each work in the bibliography, an "Add to bibliography" option is available. Use it, and the bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Convergence of Markov processes"

1

Abakuks, A., S. N. Ethier, and T. G. Kurtz. "Markov Processes: Characterization and Convergence." Biometrics 43, no. 2 (1987): 484. http://dx.doi.org/10.2307/2531839.

2

Perkins, Edwin, S. N. Ethier, and T. G. Kurtz. "Markov Processes, Characterization and Convergence." Journal of the Royal Statistical Society. Series A (Statistics in Society) 151, no. 2 (1988): 367. http://dx.doi.org/10.2307/2982773.

3

Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 3 (2012): 729–48. http://dx.doi.org/10.1239/aap/1346955262.

Abstract:
A classical result about Markov jump processes states that a certain class of dynamical systems given by ordinary differential equations are obtained as the limit of a sequence of scaled Markov jump processes. This approach fails if the scaling cannot be carried out equally across all entities. In the present paper we present a convergence theorem for such an unequal scaling. In contrast to an equal scaling the limit process is not purely deterministic but still possesses randomness. We show that these processes constitute a rich subclass of piecewise-deterministic processes. Such processes ap…
4

Franz, Uwe, Volkmar Liebscher, and Stefan Zeiser. "Piecewise-Deterministic Markov Processes as Limits of Markov Jump Processes." Advances in Applied Probability 44, no. 03 (2012): 729–48. http://dx.doi.org/10.1017/s0001867800005851.

Abstract:
A classical result about Markov jump processes states that a certain class of dynamical systems given by ordinary differential equations are obtained as the limit of a sequence of scaled Markov jump processes. This approach fails if the scaling cannot be carried out equally across all entities. In the present paper we present a convergence theorem for such an unequal scaling. In contrast to an equal scaling the limit process is not purely deterministic but still possesses randomness. We show that these processes constitute a rich subclass of piecewise-deterministic processes. Such processes ap…
5

Swishchuk, Anatoliy, and M. Shafiqul Islam. "Diffusion Approximations of the Geometric Markov Renewal Processes and Option Price Formulas." International Journal of Stochastic Analysis 2010 (December 19, 2010): 1–21. http://dx.doi.org/10.1155/2010/347105.

Abstract:
We consider the geometric Markov renewal processes as a model for a security market and study these processes in a diffusion approximation scheme. Weak convergence analysis and rates of convergence of ergodic geometric Markov renewal processes in the diffusion scheme are presented. We present European call option pricing formulas in the case of ergodic, double-averaged, and merged diffusion geometric Markov renewal processes.
6

Aldous, David J. "Book Review: Markov processes: Characterization and convergence." Bulletin of the American Mathematical Society 16, no. 2 (1987): 315–19. http://dx.doi.org/10.1090/s0273-0979-1987-15533-9.

7

Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes." Advances in Applied Probability 20, no. 1 (1988): 33–58. http://dx.doi.org/10.2307/1427269.

Abstract:
We present a method of approximating Markov jump processes which was used by Fuhrmann [7] in a special case. We generalize the method and prove weak convergence results under mild assumptions. In addition we obtain bounds on the rates of convergence of the probabilities at arbitrary fixed times. The technique is demonstrated using a state-dependent branching process as an example.
8

Crank, Keith N., and Prem S. Puri. "A method of approximating Markov jump processes." Advances in Applied Probability 20, no. 01 (1988): 33–58. http://dx.doi.org/10.1017/s0001867800017936.

Abstract:
We present a method of approximating Markov jump processes which was used by Fuhrmann [7] in a special case. We generalize the method and prove weak convergence results under mild assumptions. In addition we obtain bounds on the rates of convergence of the probabilities at arbitrary fixed times. The technique is demonstrated using a state-dependent branching process as an example.
9

Deng, Chang-Song, René L. Schilling, and Yan-Hong Song. "Subgeometric rates of convergence for Markov processes under subordination." Advances in Applied Probability 49, no. 1 (2017): 162–81. http://dx.doi.org/10.1017/apr.2016.83.

Abstract:
We are interested in the rate of convergence of a subordinate Markov process to its invariant measure. Given a subordinator and the corresponding Bernstein function (Laplace exponent), we characterize the convergence rate of the subordinate Markov process; the key ingredients are the rate of convergence of the original process and the (inverse of the) Bernstein function. At a technical level, the crucial point is to bound three types of moment (subexponential, algebraic, and logarithmic) for subordinators as time t tends to ∞. We also discuss some concrete models and we show that subo…
10

Lam, Hoang-Chuong. "Weak Convergence for Markov Processes on Discrete State Spaces." Markov Processes And Related Fields, no. 2024 № 4 (30) (February 8, 2025): 587–98. https://doi.org/10.61102/1024-2953-mprf.2024.30.4.004.

Abstract:
This study investigates the weak convergence of Markov processes on discrete state spaces under the assumption that the transition intensities converge to a constant. Additionally, the research determines the limits of higher-order moments of the Markov process, which are utilized to prove the existence of limit theorems.
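The entry above concerns convergence of Markov processes on discrete state spaces. For a finite, ergodic chain, convergence of the law at time n to the stationary distribution can be illustrated numerically; the following is a minimal sketch with an invented two-state transition matrix (the numbers are not from the cited paper):

```python
import numpy as np

# A small ergodic transition matrix (hypothetical, for illustration only).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

def tv_distance(mu, nu):
    """Total variation distance between two probability vectors."""
    return 0.5 * np.abs(mu - nu).sum()

# For an ergodic chain, the distance of the time-n law to pi decays geometrically.
mu = np.array([1.0, 0.0])          # start deterministically in state 0
distances = []
for n in range(20):
    distances.append(tv_distance(mu, pi))
    mu = mu @ P                    # one step of the chain: mu_{n+1} = mu_n P

assert distances[-1] < 1e-3        # essentially converged after 20 steps
```

Here the decay rate is governed by the second-largest eigenvalue of P (0.7 for this matrix), so the total variation distance shrinks by that factor at each step.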

Dissertations on the topic "Convergence of Markov processes"

1

Hahn, Léo. "Interacting run-and-tumble particles as piecewise deterministic Markov processes : invariant distribution and convergence." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2024. http://www.theses.fr/2024UCFA0084.

Abstract:
This thesis studies the long-time behavior of run-and-tumble particles (RTPs), a model for bacteria in out-of-equilibrium statistical physics, using piecewise deterministic Markov processes (PDMPs). The motivation is to improve the particle-level understanding of active phenomena, in particular motility-induced phase separation (MIPS). The invariant measure for two RTPs with jamming on a 1D torus is determined for general tumble and jamming mechanisms, revealing two out-of-equilibrium universality classes. Moreover, the dependence of the …
2

Pötzelberger, Klaus. "On the Approximation of finite Markov-exchangeable processes by mixtures of Markov Processes." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1991. http://epub.wu.ac.at/526/1/document.pdf.

Abstract:
We give an upper bound for the norm distance of (0,1)-valued Markov-exchangeable random variables to mixtures of distributions of Markov processes. A Markov-exchangeable random variable has a distribution that depends only on the starting value and the number of transitions 0-0, 0-1, 1-0 and 1-1. We show that if, for increasing length of variables, the norm distance to mixtures of Markov processes goes to 0, the rate of this convergence may be arbitrarily slow. (author's abstract) Series: Forschungsberichte / Institut für Statistik
3

Drozdenko, Myroslav. "Weak Convergence of First-Rare-Event Times for Semi-Markov Processes." Doctoral thesis, Västerås : Mälardalen University, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-394.

4

Yuen, Wai Kong. "Application of geometric bounds to convergence rates of Markov chains and Markov processes on R^n." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/NQ58619.pdf.

5

Kaijser, Thomas. "Convergence in distribution for filtering processes associated to Hidden Markov Models with densities." Linköpings universitet, Matematik och tillämpad matematik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92590.

Abstract:
A Hidden Markov Model generates two basic stochastic processes: a Markov chain, which is hidden, and an observation sequence. The filtering process of a Hidden Markov Model is, roughly speaking, the sequence of conditional distributions of the hidden Markov chain that is obtained as new observations are received. It is well known that the filtering process itself is also a Markov chain. A classical theoretical problem is to find conditions which imply that the distributions of the filtering process converge towards a unique limit measure. This problem goes back to a paper of D. Blackwell f…
6

Lachaud, Béatrice. "Détection de la convergence de processus de Markov." Phd thesis, Université René Descartes - Paris V, 2005. http://tel.archives-ouvertes.fr/tel-00010473.

Abstract:
Our work concerns the cutoff phenomenon for n-samples of Markov processes, with the aim of applying it to detecting the convergence of parallelized algorithms. First, the sampled process is an Ornstein-Uhlenbeck process. We exhibit the cutoff phenomenon for the n-sample and then relate it to the convergence in distribution of the hitting time of a fixed level by the mean process. Second, we treat the general case where the sampled process converges at exponential rate to its stationary distribution. …
7

Fisher, Diana. "Convergence analysis of MCMC method in the study of genetic linkage with missing data." Huntington, WV : [Marshall University Libraries], 2005. http://www.marshall.edu/etd/descript.asp?ref=568.

8

Wang, Xinyu. "Sur la convergence sous-exponentielle de processus de Markov." Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2012. http://tel.archives-ouvertes.fr/tel-00840858.

Abstract:
My doctoral thesis focuses mainly on the long-time behavior of Markov processes, functional inequalities, and related techniques. More specifically, I present explicit subexponential convergence rates of Markov processes via two approaches: the Meyn-Tweedie method and (weak) hypocoercivity. The document is divided into three parts. In the first part, I present some important results and related background. First, an overview of my research field is given. Exponential (or sub…
9

Bouguet, Florian. "Étude quantitative de processus de Markov déterministes par morceaux issus de la modélisation." Thesis, Rennes 1, 2016. http://www.theses.fr/2016REN1S040/document.

Abstract:
The purpose of this thesis is to study a certain class of Markov processes, called piecewise deterministic, which have a great many applications in modeling. More precisely, we are interested in their long-time behavior and their rate of convergence to equilibrium when they admit a stationary probability measure. One of the main threads of this manuscript is to obtain fine quantitative bounds on this rate, derived mainly via coupling methods. Connections are regularly made with other areas of mathematics …
10

Chotard, Alexandre. "Markov chain Analysis of Evolution Strategies." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112230/document.

Abstract:
This thesis contains proofs of convergence or divergence of optimization algorithms called evolution strategies (ESs), as well as the development of mathematical tools enabling these proofs. ESs are stochastic "black-box" optimization algorithms, i.e. ones in which the information about the function being optimized is limited to the values it assigns to points. In particular, the gradient of the function is unknown. Proofs of convergence or divergence of these algorithms can be obtained via the analysis of Markov chains underlying these algorithms.

Books on the topic "Convergence of Markov processes"

1

Kurtz, Thomas G., ed. Markov processes: Characterization and convergence. Wiley, 1986.

2

Roberts, Gareth O. Convergence of slice sampler Markov chains. University of Toronto, 1997.

3

Baxter, John Robert. Rates of convergence for everywhere-positive Markov chains. University of Toronto, Dept. of Statistics, 1994.

4

Roberts, Gareth O. Quantitative bounds for convergence rates of continuous time Markov processes. University of Toronto, Dept. of Statistics, 1996.

5

Yuen, Wai Kong. Applications of Cheeger's constant to the convergence rate of Markov chains on Rn. University of Toronto, Dept. of Statistics, 1997.

6

Roberts, Gareth O. On convergence rates of Gibbs samplers for uniform distributions. University of Toronto, 1997.

7

Cowles, Mary Kathryn. Possible biases induced by MCMC convergence diagnostics. University of Toronto, Dept. of Statistics, 1997.

8

Cowles, Mary Kathryn. A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. University of Toronto, Dept. of Statistics, 1996.

9

Wirsching, Günther J. The dynamical system generated by the 3n + 1 function. Springer, 1998.

10

Petrone, Sonia. A note on convergence rates of Gibbs sampling for nonparametric mixtures. University of Toronto, Dept. of Statistics, 1998.


Book chapters on the topic "Convergence of Markov processes"

1

Zhang, Hanjun, Qixiang Mei, Xiang Lin, and Zhenting Hou. "Convergence Property of Standard Transition Functions." In Markov Processes and Controlled Markov Chains. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_4.

2

Altman, Eitan. "Convergence of discounted constrained MDPs." In Constrained Markov Decision Processes. Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-17.

3

Altman, Eitan. "Convergence as the horizon tends to infinity." In Constrained Markov Decision Processes. Routledge, 2021. http://dx.doi.org/10.1201/9781315140223-18.

4

Kersting, G., and F. C. Klebaner. "Explosions in Markov Processes and Submartingale Convergence." In Athens Conference on Applied Probability and Time Series Analysis. Springer New York, 1996. http://dx.doi.org/10.1007/978-1-4612-0749-8_9.

5

Cai, Yuzhi. "How Rates of Convergence for Gibbs Fields Depend on the Interaction and the Kind of Scanning Used." In Markov Processes and Controlled Markov Chains. Springer US, 2002. http://dx.doi.org/10.1007/978-1-4613-0265-0_31.

6

Bernou, Armand. "On Subexponential Convergence to Equilibrium of Markov Processes." In Lecture Notes in Mathematics. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96409-2_5.

7

Pop-Stojanovic, Z. R. "Convergence in Energy and the Sector Condition for Markov Processes." In Seminar on Stochastic Processes, 1984. Birkhäuser Boston, 1986. http://dx.doi.org/10.1007/978-1-4684-6745-1_10.

8

Feng, Jin, and Thomas Kurtz. "Large deviations for Markov processes and nonlinear semigroup convergence." In Mathematical Surveys and Monographs. American Mathematical Society, 2006. http://dx.doi.org/10.1090/surv/131/05.

9

Chatterjee, Krishnendu, Mahdi JafariRaviz, Raimundo Saona, and Jakub Svoboda. "Value Iteration with Guessing for Markov Chains and Markov Decision Processes." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-90653-4_11.

Abstract:
Two standard models for probabilistic systems are Markov chains (MCs) and Markov decision processes (MDPs). Classic objectives for such probabilistic models for control and planning problems are reachability and stochastic shortest path. The widely studied algorithmic approach for these problems is the Value Iteration (VI) algorithm, which iteratively applies local updates called Bellman updates. There are many practical approaches for VI in the literature, but they all require exponentially many Bellman updates for MCs in the worst case. A preprocessing step is an algorithm that is dis…
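The abstract above refers to Value Iteration, which repeatedly applies the Bellman optimality update V(s) ← max_a [R(a,s) + γ Σ_s' P(a,s,s') V(s')]. A minimal sketch on a hypothetical 3-state, 2-action MDP (all transition probabilities and rewards are invented for illustration):

```python
import numpy as np

# P[a][s][s'] = transition probability, R[a][s] = immediate reward (illustrative).
P = np.array([
    [[0.8, 0.2, 0.0], [0.0, 0.5, 0.5], [0.0, 0.0, 1.0]],  # action 0
    [[0.1, 0.9, 0.0], [0.3, 0.0, 0.7], [0.0, 0.0, 1.0]],  # action 1
])
R = np.array([
    [1.0, 0.0, 0.0],   # rewards under action 0
    [0.5, 2.0, 0.0],   # rewards under action 1
])
gamma = 0.9            # discount factor; gamma < 1 makes the update a contraction

# Value Iteration: apply Bellman updates until the values stop changing.
V = np.zeros(3)
for _ in range(1000):
    Q = R + gamma * P @ V          # Q[a, s]: value of taking action a in state s
    V_new = Q.max(axis=0)          # Bellman optimality update
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)          # greedy policy w.r.t. the converged values
```

Because the Bellman operator is a γ-contraction in the sup-norm, the loop converges geometrically; state 2 is absorbing with zero reward, so its value stays 0.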
10

Negoro, Akira, and Masaaki Tsuchiya. "Convergence and uniqueness theorems for Markov processes associated with Lévy operators." In Lecture Notes in Mathematics. Springer Berlin Heidelberg, 1988. http://dx.doi.org/10.1007/bfb0078492.


Conference papers on the topic "Convergence of Markov processes"

1

Saldi, Naci, Sina Sanjari, and Serdar Yüksel. "Quantum Markov Decision Processes." In 2024 IEEE 63rd Conference on Decision and Control (CDC). IEEE, 2024. https://doi.org/10.1109/cdc56724.2024.10886823.

2

Majeed, Sultan Javed, and Marcus Hutter. "On Q-learning Convergence for Non-Markov Decision Processes." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/353.

Abstract:
Temporal-difference (TD) learning is an attractive, computationally efficient framework for model-free reinforcement learning. Q-learning is one of the most widely used TD learning techniques, enabling an agent to learn the optimal action-value function, i.e. the Q-value function. Contrary to its widespread use, Q-learning has only been proven to converge on Markov Decision Processes (MDPs) and Q-uniform abstractions of finite-state MDPs. On the other hand, most real-world problems are inherently non-Markovian: the full true state of the environment is not revealed by recent observations. In th…
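The entry above discusses convergence of Q-learning, whose core is the TD update Q(s,a) ← Q(s,a) + α [r + γ max_a' Q(s',a') − Q(s,a)]. A minimal tabular sketch on an invented two-state MDP (environment, rates, and step counts are illustrative assumptions, not from the cited paper):

```python
import random

# Hypothetical 2-state MDP: in state 0, action 1 moves to state 1 with reward 1;
# every other (state, action) pair returns to state 0 with reward 0.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

gamma, alpha, epsilon = 0.9, 0.1, 0.2
Q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}

random.seed(0)
state = 0
for _ in range(20000):
    # epsilon-greedy action selection: explore with probability epsilon
    if random.random() < epsilon:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: Q[(state, a)])
    next_state, reward = step(state, action)
    # TD update: move Q(s,a) toward the one-step bootstrap target
    target = reward + gamma * max(Q[(next_state, a)] for a in (0, 1))
    Q[(state, action)] += alpha * (target - Q[(state, action)])
    state = next_state

# After learning, action 1 should dominate in state 0.
assert Q[(0, 1)] > Q[(0, 0)]
```

The convergence guarantee discussed in the paper applies to exactly this tabular setting on an MDP; the paper's contribution is extending such analysis beyond the Markovian case.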
3

Amiri, Mohsen, and Sindri Magnússon. "On the Convergence of TD-Learning on Markov Reward Processes with Hidden States." In 2024 European Control Conference (ECC). IEEE, 2024. http://dx.doi.org/10.23919/ecc64448.2024.10591108.

4

Shi, Chongyang, Yuheng Bu, and Jie Fu. "Information-Theoretic Opacity-Enforcement in Markov Decision Processes." In Thirty-Third International Joint Conference on Artificial Intelligence {IJCAI-24}. International Joint Conferences on Artificial Intelligence Organization, 2024. http://dx.doi.org/10.24963/ijcai.2024/749.

Abstract:
The paper studies information-theoretic opacity, an information-flow privacy property, in a setting involving two agents: a planning agent who controls a stochastic system and an observer who partially observes the system states. The goal of the observer is to infer some secret, represented by a random variable, from its partial observations, while the goal of the planning agent is to make the secret maximally opaque to the observer while achieving a satisfactory total return. Modeling the stochastic system using a Markov decision process, two classes of opacity properties are considered: Las…
5

Ding, Dongsheng, Kaiqing Zhang, Tamer Basar, and Mihailo R. Jovanovic. "Convergence and optimality of policy gradient primal-dual method for constrained Markov decision processes." In 2022 American Control Conference (ACC). IEEE, 2022. http://dx.doi.org/10.23919/acc53348.2022.9867805.

6

Ferreira Salvador, Paulo J., and Rui J. M. T. Valadas. "Framework based on Markov modulated Poisson processes for modeling traffic with long-range dependence." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434317.

7

Takagi, Hideaki, Muneo Kitajima, Tetsuo Yamamoto, and Yongbing Zhang. "Search process evaluation for a hierarchical menu system by Markov chains." In ITCom 2001: International Symposium on the Convergence of IT and Communications, edited by Robert D. van der Mei and Frank Huebner-Szabo de Bucs. SPIE, 2001. http://dx.doi.org/10.1117/12.434312.

8

Hongbin Liang, Lin X. Cai, Hangguan Shan, Xuemin Shen, and Daiyuan Peng. "Adaptive resource allocation for media services based on semi-Markov decision process." In 2010 International Conference on Information and Communication Technology Convergence (ICTC). IEEE, 2010. http://dx.doi.org/10.1109/ictc.2010.5674663.

9

Tayeb, Shahab, Miresmaeil Mirnabibaboli, and Shahram Latifi. "Load Balancing in WSNs using a Novel Markov Decision Process Based Routing Algorithm." In 2016 6th International Conference on IT Convergence and Security (ICITCS). IEEE, 2016. http://dx.doi.org/10.1109/icitcs.2016.7740350.

10

Chanron, Vincent, and Kemper Lewis. "A Study of Convergence in Decentralized Design." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dac-48782.

Abstract:
The decomposition and coordination of decisions in the design of complex engineering systems is a great challenge. Companies who design these systems routinely allocate design responsibility for the various subsystems and components to different people, teams, or even suppliers. The mechanisms behind this network of decentralized design decisions create difficult management and coordination issues. However, developing efficient design processes is paramount, especially given market pressures and customer expectations. Standard techniques for modeling and solving decentralized design problems typic…

Reports of organizations on the topic "Convergence of Markov processes"

1

Adler, Robert J., Stamatis Gambanis, and Gennady Samorodnitsky. On Stable Markov Processes. Defense Technical Information Center, 1987. http://dx.doi.org/10.21236/ada192892.

2

Athreya, Krishna B., Hani Doss, and Jayaram Sethuraman. A Proof of Convergence of the Markov Chain Simulation Method. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada255456.

3

Abdel-Hameed, M. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes Replacement Policies. Defense Technical Information Center, 1985. http://dx.doi.org/10.21236/ada174646.

4

Newell, Alan. Markovian Shock Models, Deterioration Processes, Stratified Markov Processes and Replacement Policies. Defense Technical Information Center, 1986. http://dx.doi.org/10.21236/ada174995.

5

Cinlar, E. Markov Processes Applied to Control, Reliability and Replacement. Defense Technical Information Center, 1989. http://dx.doi.org/10.21236/ada208634.

6

Rohlicek, J. R., and A. S. Willsky. Structural Decomposition of Multiple Time Scale Markov Processes,. Defense Technical Information Center, 1987. http://dx.doi.org/10.21236/ada189739.

7

Serfozo, Richard F. Poisson Functionals of Markov Processes and Queueing Networks. Defense Technical Information Center, 1987. http://dx.doi.org/10.21236/ada191217.

8

Serfozo, R. F. Poisson Functionals of Markov Processes and Queueing Networks,. Defense Technical Information Center, 1987. http://dx.doi.org/10.21236/ada194289.

9

Draper, Bruce A., and J. Ross Beveridge. Learning to Populate Geospatial Databases via Markov Processes. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada374536.

10

Sethuraman, Jayaram. Easily Verifiable Conditions for the Convergence of the Markov Chain Monte Carlo Method. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada308874.
