Dissertations on the topic "Probabilities"

To view other types of publications on this topic, follow the link: Probabilities.

Explore the top 50 dissertations for research on the topic "Probabilities".

Next to each work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the selected work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the relevant details are available in the metadata.

Browse dissertations across a variety of disciplines and compile your bibliography correctly.

1

Piesse, Andrea Robyn. "Coherent predictive probabilities." Thesis, University of Canterbury. Mathematics and Statistics, 1996. http://hdl.handle.net/10092/8049.

Abstract:
The main aim of this thesis is to study ways of predicting the outcome of a vector of category counts from a particular group, in the presence of like data from other groups regarded exchangeably with this one and with each other. The situation is formulated using the subjectivist framework and strategies for estimating these predictive probabilities are presented and analysed with regard to their coherency. The range of estimation procedures considered covers naive, empirical Bayes and hierarchical Bayesian methods. Surprisingly, it turns out that some of these strategies must be asserted with zero probability of being used, in order for them to be coherent. A theory is developed which proves to be very useful in determining whether or not this is the case for a given collection of predictive probabilities. The conclusion is that truly Bayesian inference may lie behind all of the coherent strategies discovered, even when they are proposed under the guise of some other motivation.
2

Beebee, Helen. "Causes and probabilities." Thesis, King's College London (University of London), 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.244728.

3

Das, Sreejith. "Class conditional voting probabilities." Thesis, Birkbeck (University of London), 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497794.

4

Yu, Xiaofeng. "Prediction Intervals for Class Probabilities." The University of Waikato, 2007. http://hdl.handle.net/10289/2436.

Abstract:
Prediction intervals for class probabilities are of interest in machine learning because they can quantify the uncertainty about the class probability estimate for a test instance. The idea is that all likely class probability values of the test instance are included, with a pre-specified confidence level, in the calculated prediction interval. This thesis proposes a probabilistic model for calculating such prediction intervals. Given the unobservability of class probabilities, a Bayesian approach is employed to derive a complete distribution of the class probability of a test instance based on a set of class observations of training instances in the neighbourhood of the test instance. A random decision tree ensemble learning algorithm is also proposed, whose prediction output constitutes the neighbourhood that is used by the Bayesian model to produce a PI for the test instance. The Bayesian model, which is used in conjunction with the ensemble learning algorithm and the standard nearest-neighbour classifier, is evaluated on artificial datasets and modified real datasets.
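To give a concrete sense of the kind of interval described above, here is a minimal sketch that computes a Bayesian credible interval for a class probability from the class counts observed in a test instance's neighbourhood. It assumes a uniform Beta(1, 1) prior and treats the counts as binomial observations; it illustrates only the general idea, not the random decision tree ensemble model developed in the thesis.

# Credible interval for a class probability from neighbourhood class counts.
# Hypothetical illustration: uniform Beta(1, 1) prior, binomial likelihood.
from scipy.stats import beta

def class_probability_interval(positives, negatives, confidence=0.95):
    """Central credible interval for P(class = positive)."""
    a = 1 + positives          # posterior Beta parameters under the uniform prior
    b = 1 + negatives
    lower = beta.ppf((1 - confidence) / 2, a, b)
    upper = beta.ppf(1 - (1 - confidence) / 2, a, b)
    return lower, upper

# Example: 7 of the 10 neighbourhood observations belong to the positive class.
print(class_probability_interval(7, 3))    # roughly (0.39, 0.89)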
5

Roy, Kirk Andrew. "Laplace transforms, probabilities and queues." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ31000.pdf.

6

Peña-Castillo, Maria de Lourdes. "Probabilities and simulations in poker." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0018/MQ47080.pdf.

7

老瑞欣 and Sui-yan Victor Lo. "Statistical modelling of gambling probabilities." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1992. http://hub.hku.hk/bib/B3123270X.

8

Cooper, Iain E. "Surface reactions and sticking probabilities." Thesis, University of Nottingham, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.403696.

9

Van, Jaarsveldt Cole. "Modelling probabilities of corporate default." Master's thesis, Faculty of Commerce, 2019. http://hdl.handle.net/11427/31331.

Abstract:
This dissertation follows, scrupulously, the probability of default model used by the National University of Singapore Risk Management Institute (NUS-RMI). Any deviations or omissions are noted, with reasons related to the scope of this study on modelling probabilities of corporate default of South African firms. Using our model, we simulate defaults and subsequently infer parameters using classical frequentist likelihood estimation and one-world-view pseudo-likelihood estimation. We improve the initial estimates from our pseudo-likelihood estimation by using Sequential Monte Carlo techniques and pseudo-Bayesian inference. With these techniques, we significantly improve upon our original parameter estimates. The increase in accuracy is most significant when using few samples, which mimics real-world data availability.
10

Lo, Sui-yan Victor. "Statistical modelling of gambling probabilities /." [Hong Kong] : University of Hong Kong, 1992. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13205389.

11

Tallur, Gayatri. "Uncertain data integration with probabilities." Thesis, The University of North Carolina at Greensboro, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=1551297.

Abstract:

Real world applications that deal with information extraction, such as business intelligence software or sensor data management, must often process data provided with varying degrees of uncertainty. Uncertainty can result from multiple or inconsistent sources, as well as approximate schema mappings. Modeling, managing and integrating uncertain data from multiple sources has been an active area of research in recent years. In particular, data integration systems free the user from the tedious tasks of finding relevant data sources, interacting with each source in isolation using its corresponding interface and combining data from multiple sources by providing a uniform query interface to gain access to the integrated information.

Previous work has integrated uncertain data using representation models such as the possible worlds and probabilistic relations. We extend this work by determining the probabilities of possible worlds of an extended probabilistic relation. We also present an algorithm to determine when a given extended probabilistic relation can be obtained by the integration of two probabilistic relations and give the decomposed pairs of probabilistic relations.

12

Zuccon, Guido. "Document ranking with quantum probabilities." Thesis, University of Glasgow, 2012. https://eprints.qut.edu.au/69287/1/zuccon2012b.pdf.

Abstract:
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insights about approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the ranking recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as is the case in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research. These include: (1) investigating estimations and approximations of quantum interference in qPRP; (2) exploiting complex numbers for the representation of documents and queries; and (3) applying the concepts underlying qPRP to tasks other than document ranking.
13

Zuccon, Guido. "Document ranking with quantum probabilities." Thesis, University of Glasgow, 2012. http://theses.gla.ac.uk/3463/.

Abstract:
In this thesis we investigate the use of quantum probability theory for ranking documents. Quantum probability theory is used to estimate the probability of relevance of a document given a user's query. We posit that quantum probability theory can lead to a better estimation of the probability of a document being relevant to a user's query than the common approach, i.e. the Probability Ranking Principle (PRP), which is based upon Kolmogorovian probability theory. Following our hypothesis, we formulate an analogy between the document retrieval scenario and a physical scenario, that of the double slit experiment. Through the analogy, we propose a novel ranking approach, the quantum probability ranking principle (qPRP). Key to our proposal is the presence of quantum interference. Mathematically, this is the statistical deviation between empirical observations and expected values predicted by the Kolmogorovian rule of additivity of probabilities of disjoint events in configurations such as that of the double slit experiment. We propose an interpretation of quantum interference in the document ranking scenario, and examine how quantum interference can be effectively estimated for document retrieval. To validate our proposal and to gain more insights about approaches for document ranking, we (1) analyse PRP, qPRP and other ranking approaches, exposing the assumptions underlying their ranking criteria and formulating the conditions for the optimality of the two ranking principles, (2) empirically compare three ranking principles (i.e. PRP, interactive PRP, and qPRP) and two state-of-the-art ranking strategies in two retrieval scenarios, those of ad-hoc retrieval and diversity retrieval, (3) analytically contrast the ranking criteria of the examined approaches, exposing similarities and differences, and (4) study the ranking behaviours of approaches alternative to PRP in terms of the kinematics they impose on relevant documents, i.e. by considering the extent and direction of the movements of relevant documents across the ranking recorded when comparing PRP against its alternatives. Our findings show that the effectiveness of the examined ranking approaches strongly depends upon the evaluation context. In the traditional evaluation context of ad-hoc retrieval, PRP is empirically shown to be better than or comparable to alternative ranking approaches. However, when we turn to evaluation contexts that account for interdependent document relevance (i.e. when the relevance of a document is assessed also with respect to other retrieved documents, as is the case in the diversity retrieval scenario), the use of quantum probability theory, and thus of qPRP, is shown to improve retrieval and ranking effectiveness over the traditional PRP and alternative ranking strategies, such as Maximal Marginal Relevance, Portfolio theory, and Interactive PRP. This work represents a significant step forward regarding the use of quantum theory in information retrieval. It demonstrates that the application of quantum theory to problems within information retrieval can lead to improvements both in modelling power and retrieval effectiveness, allowing the construction of models that capture the complexity of information retrieval situations. Furthermore, the thesis opens up a number of lines for future research. These include: (1) investigating estimations and approximations of quantum interference in qPRP; (2) exploiting complex numbers for the representation of documents and queries; and (3) applying the concepts underlying qPRP to tasks other than document ranking.
14

Deyoe, Kelly Joseph 1957. "TYPE OF EVIDENCE AS A BASIS FOR COMBINING SUBJECTIVE PROBABILITIES." Thesis, The University of Arizona, 1987. http://hdl.handle.net/10150/276454.

Abstract:
Two models for aggregating subjective probabilities are presented. One employs a multiplicative rule and the other a weighted average. The choice of a model is based on the type of evidence upon which the subjective probabilities were estimated. An experiment was developed to determine if people are sensitive to this difference in the type of evidence when combining subjective probabilities. Two other variables tested were the tense of the event and the experience of the subject with the use of probabilities. The type of evidence presented had an effect on the combination rule employed, whereas tense of the event did not. The naive and expert subjects approached the problems differently. An order effect due to the presentation order of the evidence within a problem was found. A momentum tendency, which may explain the order effect, was present in the expert subjects. Further research on combining subjective probabilities is indicated.
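For orientation, the two families of combination rules mentioned above can be illustrated with a short sketch. The linear (weighted-average) pool and the renormalized multiplicative rule below are generic textbook forms; the exact models and parameters tested in the thesis are not reproduced here.

# Two generic ways to combine subjective probabilities of the same event.
def weighted_average(probs, weights):
    """Linear opinion pool: a convex combination of the individual estimates."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def multiplicative(probs):
    """Multiplicative rule, renormalized against the complementary event."""
    p_event, p_complement = 1.0, 1.0
    for p in probs:
        p_event *= p
        p_complement *= 1 - p
    return p_event / (p_event + p_complement)

estimates = [0.6, 0.7]
print(weighted_average(estimates, [1, 1]))   # 0.65
print(multiplicative(estimates))             # 0.42 / (0.42 + 0.12) ~= 0.78

Note how the multiplicative rule pushes the combined estimate beyond either individual estimate when the sources agree, whereas the weighted average always stays between them.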
15

Gaier, Johanna, Peter Grandits, and Walter Schachermayer. "Asymptotic ruin probabilities and optimal investment." SFB Adaptive Information Systems and Modelling in Economics and Management Science, WU Vienna University of Economics and Business, 2002. http://epub.wu.ac.at/1260/1/document.pdf.

Abstract:
We study the infinite time ruin probability for an insurance company in the classical Cramér-Lundberg model with finite exponential moments. The additional non-classical feature is that the company is also allowed to invest in some stock market, modeled by geometric Brownian motion. We obtain an exact analogue of the classical estimate for the ruin probability without investment, i.e., an exponential inequality. The exponent is larger than the one obtained without investment, the classical Lundberg adjustment coefficient, and thus one gets a sharper bound on the ruin probability. A surprising result is that the trading strategy yielding the optimal asymptotic decay of the ruin probability simply consists in holding a fixed quantity (which can be explicitly calculated) in the risky asset, independent of the current reserve. This result is in apparent contradiction to the common belief that 'rich' companies should invest more in risky assets than 'poor' ones. The reason for this seemingly paradoxical result is that the minimization of the ruin probability is an extremely conservative optimization criterion, especially for 'rich' companies. (author's abstract)
Series: Working Papers SFB "Adaptive Information Systems and Modelling in Economics and Management Science"
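For context on the "exponential inequality" the abstract refers to, the classical Lundberg bound in the Cramér-Lundberg model without investment can be stated as follows; this is a standard textbook result given here only for orientation, and the thesis derives an analogue with a strictly larger exponent, hence a sharper bound, when investment in a risky asset is allowed:

\psi(u) \;\le\; e^{-R u}, \qquad u \ge 0,

where \psi(u) is the ruin probability for initial reserve u and the adjustment coefficient R > 0 is the positive root of \lambda\,(M_X(R) - 1) = c\,R, with claim intensity \lambda, premium rate c, and claim-size moment generating function M_X.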
16

Listgarten, Jennifer. "Exploring Qualitative Probabilities for image understanding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape3/PQDD_0011/MQ53388.pdf.

17

Bordianu, Gheorghita. "Learning influence probabilities in social networks." Thesis, McGill University, 2013. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=114597.

Abstract:
Social network analysis is an important cross-disciplinary area of research, with applications in fields such as biology, epidemiology, marketing and even politics. Influence maximization is the problem of finding the set of seed nodes in an information diffusion process that guarantees maximum spread of influence in a social network, given its structure. Most approaches to this problem make two assumptions. First, the global structure of the network is known. Second, influence probabilities between any two nodes are known beforehand, which is rarely the case in practical settings. In this thesis we propose a different approach to the problem of learning those influence probabilities from past data, using only the local structure of the social network. The method is grounded in unsupervised machine learning techniques and is based on a form of hierarchical clustering, allowing us to distinguish between influential and influenceable nodes. Finally, we provide empirical results using real data extracted from Facebook.
18

Wilson, Nigel John. "Inner-shell photoionization and transition probabilities." Thesis, Queen's University Belfast, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.314150.

19

Lowe, Gavin. "Probabilities and priorities in timed CSP." Thesis, University of Oxford, 1993. http://ora.ox.ac.uk/objects/uuid:cfec28d9-aa50-46f3-a664-eb5fbe97b261.

20

Sletmo, Patrik. "Introducing probabilities within grey-box fuzzing." Thesis, Linköpings universitet, Databas och informationsteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161893.

Abstract:
Over the recent years, the software industry has faced a steady increase in the number of exposed and exploited software vulnerabilities. With more software and devices being connected to the internet every day, the need for proactive security measures has never been more important. One promising new technology for making software more secure is fuzz testing. This automated testing technique is based around generating a large number of test cases with the intention of revealing dangerous bugs and vulnerabilities. In this thesis work, a new direction within grey-box fuzz testing is evaluated against previous work. The presented approach uses sampled probability data in order to guide the fuzz testing towards program states that are expected to be easy to reach and beneficial for the discovery of software vulnerabilities. Evaluation of the design shows that the suggested approach provides no obvious advantage over existing solutions, but also indicates that the performance advantage could be dependent on the structure of the system under test. However, analysis of the design itself highlights several design decisions that could benefit from more extensive research. While the design proposed in this thesis work is insufficient for replacing current state of the art fuzz testing software, it provides a solid foundation for future research within the field. With the many insights gained from the design and implementation work, this thesis work aims to both inspire others and showcase the challenges of creating a probability-based approach to grey-box fuzz testing.
21

Truran, J. M. "The teaching and learning of probability, with special reference to South Australian schools from 1959-1994." Title page, contents and abstract only, 2001. http://web4.library.adelaide.edu.au/theses/09PH/09pht872.pdf.

22

Koether, Paul. "GARCH-like models with dynamic crash-probabilities." [S.l.] : [s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=976610248.

23

Feng, Yi. "Dynamic Fuzzy Logic Control of Genetic Algorithm Probabilities." Thesis, Högskolan Dalarna, Datateknik, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:du-3286.

Abstract:
Genetic algorithms are commonly used to solve combinatorial optimization problems. The implementation evolves using genetic operators (crossover, mutation, selection, etc.). However, genetic algorithms, like some other methods, have parameters (population size, probabilities of crossover and mutation) which need to be tuned or chosen. In this paper, our project is based on an existing hybrid genetic algorithm working on the multiprocessor scheduling problem. We propose a hybrid Fuzzy-Genetic Algorithm (FLGA) approach to solve the multiprocessor scheduling problem. The algorithm consists in adding a fuzzy logic controller to control and tune dynamically different parameters (probabilities of crossover and mutation), in an attempt to improve the algorithm performance. For this purpose, we design a fuzzy logic controller based on fuzzy rules to control the probabilities of crossover and mutation. Compared with the Standard Genetic Algorithm (SGA), the results clearly demonstrate that the FLGA method performs significantly better.
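As a rough illustration of the control idea described above, the sketch below adapts the crossover and mutation probabilities from a simple population-diversity measure. It uses crisp threshold rules as a stand-in for a genuine fuzzy inference system, and the thresholds and scaling factors are invented for the example; the controller in the thesis is built from proper fuzzy rules and membership functions.

# Toy adaptive control of GA operator probabilities (crisp stand-in for a fuzzy controller).
def adapt_probabilities(fitnesses, p_cross, p_mut):
    """Raise mutation / lower crossover when the population looks converged
    (maximization with positive fitness values assumed)."""
    mean_f = sum(fitnesses) / len(fitnesses)
    best_f = max(fitnesses)
    diversity = 1.0 - mean_f / best_f if best_f > 0 else 0.0   # 0 means fully converged
    if diversity < 0.05:       # nearly converged: explore more
        p_mut = min(0.5, p_mut * 1.5)
        p_cross = max(0.5, p_cross * 0.9)
    elif diversity > 0.3:      # still diverse: exploit more
        p_mut = max(0.01, p_mut * 0.9)
        p_cross = min(0.95, p_cross * 1.05)
    return p_cross, p_mut

print(adapt_probabilities([9.8, 9.9, 10.0], 0.8, 0.05))   # converged population -> mutation goes up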
24

Peterson, Kristofer A. "Numerical simulation investigations in weapon delivery probabilities." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA483491.

Abstract:
Thesis (M.S. in Mechanical Engineering)--Naval Postgraduate School, June 2008.
Thesis Advisor(s): Driels, Morris. "June 2008." Description based on title screen as viewed on August 26, 2008. Includes bibliographical references (p. 67). Also available in print.
25

Coutinho, Cristina Fonseca. "Sovereign default probabilities within the european crisis." Master's thesis, Instituto Superior de Economia e Gestão, 2012. http://hdl.handle.net/10400.5/4955.

Abstract:
Master's degree in Financial Mathematics
In this thesis we assess the real default probabilities of three groups of European sovereigns - peripheral, central and safe haven - in order to get a forward-looking measure of the market sentiment about their default, as well as their evolution within the current European crisis. We follow Moody's CDS-implied EDF Credit Measures and Fair-Value Spreads methodology by extracting risk-neutral probabilities of default, assumed to be Weibull distributed, from CDS spreads and converting them into real probabilities of default, using an adaptation of the Merton model to remove the risk premium. We use CDS spreads data from 2008 to 2011 and country-dependent market prices of risk as a proxy for the risk premium, based on the equity benchmark indices of each country. The obtained real default probabilities proved to be a suitable indicator for predicting defaults according to the credit events. They have increased severely since 2009/2010, in particular for the peripheral economies - Greece, Ireland and Portugal. Greece's 1-year probability of default reached 55% at the end of 2011 and a default took place in March 2012. These three countries had to request a bailout from the EU/IMF authorities, Greece and Ireland in 2010 and Portugal in April 2011. Spain and Italy, the central economies, have been a concern for investors, which is reflected in their real probabilities of default that increased substantially during the second half of 2011. The safe haven sovereigns - Germany and France - were also not immune to the economic slowdown in the Eurozone and their GDP started to shrink; however, the rise in their default probabilities was more limited.
26

Merkel, Benjamin E. "Probabilities of Consecutive Events in Coin Flipping." University of Cincinnati / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1307442290.

27

Mu, Xiaoyu. "Ruin probabilities with dependent forces of interest." [Johnson City, Tenn. : East Tennessee State University], 2003. https://dc.etsu.edu/etd/796.

Abstract:
Thesis (M.S.)--East Tennessee State University, 2003.
Title from electronic submission form. ETSU ETD database URN: etd-0713103-233105. Includes bibliographical references. Also available via Internet at the UMI web site.
28

Bissey, Nancy R. "Probabilistic reasoning based on age of students and context of questions /." free to MU campus, to others for purchase, 1996. http://wwwlib.umi.com/cr/mo/fullcit?p9737862.

29

Ambrose, Charles R. "Strehl ratio probabilities for phase-only adaptive optic." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA362881.

Abstract:
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, March 1999.
Thesis advisor(s): Donald L. Walters, David L. Fried. "March 1999". Includes bibliographical references (p. 69). Also available online.
30

Sinn, Mathieu. "Estimation of ordinal pattern probabilities in stochastic processes." Lübeck Zentrale Hochschulbibliothek Lübeck, 2010. http://d-nb.info/1002256178/34.

31

Bertacchi, D., and F. Zucca. "Uniform Asymptotic Estimates of Transition Probabilities on Combs." ESI preprints, 2001. ftp://ftp.esi.ac.at/pub/Preprints/esi1003.ps.

32

Bruns, Morgan Chase. "Propagation of Imprecise Probabilities through Black Box Models." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10553.

Abstract:
From the decision-based design perspective, decision making is the critical element of the design process. All practical decision making occurs under some degree of uncertainty. Subjective expected utility theory is a well-established method for decision making under uncertainty; however, it assumes that the DM can express his or her beliefs as precise probability distributions. For many reasons, both practical and theoretical, it can be beneficial to relax this assumption of precision. One possible means for avoiding this assumption is the use of imprecise probabilities. Imprecise probabilities are more expressive of uncertainty than precise probabilities, but they are also more computationally cumbersome. Probability Bounds Analysis (PBA) is a compromise between the expressivity of imprecise probabilities and the computational ease of modeling beliefs with precise probabilities. In order for PBA to be implemented in engineering design, it is necessary to develop appropriate computational methods for propagating probability boxes (p-boxes) through black box engineering models. This thesis examines the range of applicability of current methods for p-box propagation and proposes three alternative methods. These methods are applied towards the solution of three successively complex numerical examples.
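To make the underlying problem concrete, here is a toy sketch of bounding a black-box model's output when an input distribution is only imprecisely specified: the rate of an exponential input is assumed to lie somewhere in [0.8, 1.2], and the quantity of interest is evaluated over a grid of admissible rates to obtain an envelope. This illustrates only the general idea of propagating imprecise probabilistic inputs; it is not one of the p-box propagation methods examined in the thesis, and the model and interval are invented for the example.

# Toy bounding of a black-box response under an imprecisely known input distribution.
import random

def black_box(x):                        # stand-in for an expensive engineering model
    return x ** 2 + 0.5 * x

def mean_response(rate, n=20000, seed=0):
    rng = random.Random(seed)            # common random numbers across candidate rates
    return sum(black_box(rng.expovariate(rate)) for _ in range(n)) / n

candidate_rates = [0.8 + 0.05 * i for i in range(9)]   # grid over the interval [0.8, 1.2]
values = [mean_response(r) for r in candidate_rates]
print("bounds on the expected response:", min(values), max(values))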
33

Sun, Guohong. "Risk premiums and their applications in ruin probabilities." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape7/PQDD_0002/MQ41784.pdf.

34

Zhu, Yuhong. "COMPUTING CALL BLOCKING PROBABILITIES IN WAVELENGTH ROUTING NETWORKS." NCSU, 1999. http://www.lib.ncsu.edu/theses/available/etd-19990322-203342.

Abstract:

We study a class of circuit-switched wavelength routing networks with fixed or alternate routing, with or without converters, and with various wavelength allocation policies. We first construct an exact Markov process and an approximate Markov process which has a closed-form solution for a single path. We also develop an iterative decomposition algorithm to analyze long paths with or without wavelength converters effectively. Based on this algorithm, we then present an iterative path decomposition algorithm to evaluate the blocking performance of mesh topology networks with fixed and alternate routing accurately and efficiently. The decomposition approach can naturally capture the correlation of both link loads and link blocking events, giving accurate results for a wide range of loads and network topologies. Our model also allows non-uniform traffic, i.e., call request arrival rates that can vary with the source-destination pair, and it can be used when the location of converters is fixed but arbitrary. Our algorithm represents a simple and computationally efficient solution to the difficult problem of computing call blocking probabilities in wavelength routing networks. Finally, we show through numerical and simulation results that the blocking probabilities for random wavelength allocation and the circuit-switched case provide upper and lower bounds on the blocking probabilities for the two wavelength allocation policies that are most likely to be used in practice, namely most-used and first-fit allocation. Furthermore, we demonstrate that using these two policies has an effect on call blocking probabilities that is equivalent to employing converters at a number of nodes in the network.

35

Ekenberg, Love. "A unified framework for indeterminate probabilities and utilities /." Stockholm : Matematiska institutionen, Stockholms universitet, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-358.

36

Hirani, Pranav. "Dynamic models of credit ratings and default probabilities." Diss., Columbia, Mo. : University of Missouri-Columbia, 2007. http://hdl.handle.net/10355/5998.

Abstract:
Thesis (M.A.)--University of Missouri-Columbia, 2007.
The entire dissertation/thesis text is included in the research.pdf file; the official abstract appears in the short.pdf file (which also appears in the research.pdf); a non-technical general description, or public abstract, appears in the public.pdf file. Title from title screen of research.pdf file (viewed on April 17, 2008) Includes bibliographical references.
37

Blais, Éric. "Computing probabilities for common substrings in random strings." Thesis, McGill University, 2006. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=99324.

Abstract:
The Common Substring in Random Strings (CSRS) problem is defined as follows: what is the probability that a set of r random strings of length n generated by a random process P contain a common substring of length k? In this thesis, we investigate the CSRS problem and introduce two new methods for computing approximate solutions to the CSRS problem in the cases where the random strings are generated by a Bernoulli or Markov process. We also present generalizations to the methods to compute the probability of finding a common substring among only q of the r random strings, and to allow mismatches in the substring occurrences. We show through simulation experiments that the two new methods introduced in this thesis provide a substantial improvement in accuracy over previous methods when there are r > 2 random strings in the problem instance.
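Because the problem statement above is fully explicit, a direct Monte Carlo estimate is easy to set up and can serve as a baseline for the approximation methods the thesis develops. The sketch below estimates the CSRS probability for independent uniform strings over a binary alphabet (a Bernoulli text model); the parameter values are arbitrary.

# Monte Carlo estimate of P(r random strings of length n share a common substring of length k).
import random

def has_common_substring(strings, k):
    common = None
    for s in strings:
        subs = {s[i:i + k] for i in range(len(s) - k + 1)}
        common = subs if common is None else common & subs
        if not common:
            return False
    return True

def csrs_probability(r=3, n=50, k=5, alphabet="01", trials=2000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        strings = ["".join(rng.choice(alphabet) for _ in range(n)) for _ in range(r)]
        hits += has_common_substring(strings, k)
    return hits / trials

print(csrs_probability())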
38

Ferreira, Poblete Alberto Julio. "On the modelling and analysis of conception probabilities." Thesis, Imperial College London, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271120.

39

Hu, Y. F. "Intermodulation interference probabilities in cellular mobile radio systems." Thesis, University of Bradford, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.380394.

40

Knizel, Alisa. "Random tilings : gap probabilities, local and global asymptotics." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112900.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mathematics, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 119-125).
In the thesis we explore and develop two different approaches to the study of random tiling models. First, we consider tilings of a hexagon by rhombi, viewed as 3D random stepped surfaces with a measure proportional to q^volume. Such a model is closely related to q-Hahn orthogonal polynomial ensembles, and we use this connection to obtain results about the local behavior of this model. In terms of the q-Hahn orthogonal polynomial ensemble, our goal is to show that the one-interval gap probability function can be expressed through a solution of the asymmetric q-Painlevé V equation. The case of the q-Hahn ensemble we consider is the most general case of the orthogonal polynomial ensembles that have been studied in this context. Our approach is based on the analysis of q-connections on P^1 with a particular singularity structure. It requires a new derivation of a q-difference equation of Sakai's hierarchy [75] of type A_2^(1). We also calculate its Lax pair. Following [7], we introduce the notion of the τ-function of a q-connection and its isomonodromy transformations. We show that the gap probability function of the q-Hahn ensemble can be viewed as the τ-function for an associated q-connection and its isomonodromy transformations. Second, in collaboration with Alexey Bufetov, we consider asymptotics of a domino tiling model on a class of domains which we call rectangular Aztec diamonds. We prove the Law of Large Numbers for the corresponding height functions and provide explicit formulas for the limit. For a special class of examples, an explicit parametrization of the frozen boundary is given. It turns out to be an algebraic curve with very special properties. Moreover, we establish the convergence of the fluctuations of the height functions to the Gaussian Free Field in appropriate coordinates. Our main tool is a recently developed moment method for discrete particle systems.
by Alisa Knizel.
Ph. D.
41

Mackay, Francisco J. "Calculating failure probabilities of passive systems during transients." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/41308.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Nuclear Science and Engineering, 2007.
Includes bibliographical references (p. 45-46).
A time-dependent reliability evaluation of a two-loop passive Decay Heat Removal (DHR) system was performed as part of the iterative design process for a helium-cooled fast reactor. The system was modeled using RELAP5-3D. The uncertainties in input parameters were assessed and were propagated through the model using Latin Hypercube Sampling. An important finding was the discovery that the smaller pressure loss through the DHR heat exchanger than through the core would make the flow bypass the core through one DHR loop, if two loops operated in parallel. This finding is a warning against modeling only one lumped DHR loop and assuming that n of them will remove n times the decay power. Sensitivity analyses revealed that there are values of some input parameters for which failures are very unlikely. The calculated conditional (i.e., given the LOCA) failure probability was deemed to be too high, leading to the identification of several design changes to improve system reliability. This study is an example of the kinds of insights that can be obtained by including a reliability assessment in the design process. It is different from the usual use of PSA in design, which compares different system configurations, because it focuses on the thermal-hydraulic performance of a safety function.
by Francisco J. Mackay.
S.M.
42

Cannon, Joseph E. "Approximation of Marginal Probabilities While Learning Bayesian Networks." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/444.

Abstract:
Computation of marginal probabilities in Bayesian Belief Networks is central to many probabilistic reasoning systems and automatic decision making systems. The process of belief updating in Bayesian Belief Networks (BBN) is a well-known computationally hard problem that has recently been approximated by several deterministic algorithms and by various randomized approximation algorithms. Although the deterministic algorithms usually provide probability bounds, they have exponential runtimes. Some of the randomized schemes have a polynomial runtime, but do not exploit the causal independence in BBNs to reduce the complexity of the problem. This dissertation presents a computationally efficient and deterministic approximation scheme for this NP-hard problem that recovers approximate posterior probabilities given a large multiply connected BBN. The scheme presented utilizes recent work in belief updating for BBNs by Santos and Shimony (1998) and Bloemeke (1998). The scheme employs the Independence-based (IB) assignments proposed by Santos and Shimony to reduce the graph connectivity and the number of variables in the BBN by exploiting causal independence. It recovers the desired posterior probabilities by means of Netica™, a commercially available application for Belief Networks and Influence Diagrams.
43

Piffer, Michele. "An analysis of leverage ratios and default probabilities." Thesis, London School of Economics and Political Science (University of London), 2014. http://etheses.lse.ac.uk/989/.

Abstract:
The thesis consists of three independent chapters. In Chapter 1 (page 7) - Counter-cyclical defaults in “costly state verification” models - I argue that a pro-cyclical risk-free rate can solve the problem of pro-cyclical defaults in “costly state verification” models. Using a partial equilibrium framework, I compute numerically the coefficient of a Taylor rule that delivers pro-cyclical output, pro-cyclical capital and counter-cyclical defaults. This parametrization is consistent with the empirical evidence on Taylor rules. In Chapter 2 (page 67) - Monetary Policy, Leverage, and Default - I use the Bernanke, Gertler and Gilchrist (1999) model to study the effect of monetary policy on the probability that firms default on loans. I argue that a monetary expansion affects defaults through two opposing partial equilibrium effects. It increases defaults because it leads firms to take on more debt and leverage up net worth, and it decreases defaults because the cost of borrowing decreases and aggregate demand shifts out, increasing firms’ profits and net worth. I argue that the leverage effect could explain the empirical partial equilibrium finding by Jimenez et al. (2008) that defaults on new loans increase after a monetary expansion. I then argue that this effect does not hold in general equilibrium due to an increase in firms’ profits. In the full model, defaults decrease after a monetary expansion, although the effect equals only few basis points. In Chapter 3 (page 131) - Monetary Policy and Defaults in the US - I study empirically the effect of an unexpected monetary expansion on the delinquency rate on US business loans, residential mortgages and consumer credit. I consider several identification strategies and use Granger-causality tests to assess the exogeneity of the time series of monetary shocks to the Fed’s forecast of defaults on loans. I then compute impulse responses using a variant of the Local Projection method by Jorda (2005). I find that the delinquency rates do not respond to monetary shocks in a statistically significant way. The paper suggests that the risk-taking incentives analyzed in partial equilibrium by several existing contributions might not be strong enough to prevail and increase aggregate defaults, although they could explain why defaults do not significantly decrease.
44

Humphreys, Natalia A. "A central limit theorem for complex-valued probabilities /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488187049540163.

45

Cerroni, Simone. "Subjective Probabilities in Choice Experiments' Design: Three Essays." Doctoral thesis, Università degli studi di Trento, 2013. https://hdl.handle.net/11572/368095.

Abstract:
This dissertation which consists of three essays investigates the influence of subjective probabilities on decision making processes under conditions of risk. In particular, it examines whether subjects adjust new risk information on their prior subjective estimates, and, to what extent this adjustment affects their choices. In the first essay, by using an artefactual field experiment, I examine the potential correlation between incentive compatibility and validity of subjective probabilities elicited via the Exchangeability Method, an innovative elicitation mechanism which consists of several chained questions. Here, validity is investigated using de Finetti’s notion of coherence under which subjective probabilities are coherent if and only if they obey all axioms and theorems of probability theory. Experimental results suggest that subjects provided with monetary incentives and randomized questions more likely express valid subjective probabilities than others because they are not aware of the chaining which undermines the incentive compatibility of the Exchangeability Method. In the second essay, by using the same experimental data, I show that valid subjective probabilities do not significantly diverge from invalid ones, indicative of little effect of internal validity on the actual magnitude of subjective probabilities. In the third essay, by using a field Choice Experiment, I investigate to what extent subjects adjust risk information given in the status quo alternative on their subjective probability estimates. An innovative two-stage approach that incorporates subjective probabilities into Choice Experiments’ design is developed to investigate this phenomenon, known as the scenario adjustment. In the first stage, subjective probabilities that given outcomes will occur are elicited using the Exchangeability Method. In the second stage, two treatment groups are designed: in the first group, each subject is presented with a status quo alternative which incorporates her/his subjective probabilities, and, hence, no adjustment is required; in the second group, each subject faces a status quo alternative where the presented risk is not consistent with her/his probability estimates, and, hence, a mental adjustment to the scenario might take place. By comparing willingness to pay across the treatment groups, my results suggest that, when subjects are provided with SQ alternatives in which the risk is lower than the perceived one, the mental adjustment takes place, but, when subjects are provided with SQ alternatives in which the risk is higher than their own estimates, these subjects appear to make irrational choices.
46

Cerroni, Simone. "Subjective Probabilities in Choice Experiments' Design: Three Essays." Doctoral thesis, University of Trento, 2013. http://eprints-phd.biblio.unitn.it/871/1/Dissertation_Cerroni.pdf.

Abstract:
This dissertation which consists of three essays investigates the influence of subjective probabilities on decision making processes under conditions of risk. In particular, it examines whether subjects adjust new risk information on their prior subjective estimates, and, to what extent this adjustment affects their choices. In the first essay, by using an artefactual field experiment, I examine the potential correlation between incentive compatibility and validity of subjective probabilities elicited via the Exchangeability Method, an innovative elicitation mechanism which consists of several chained questions. Here, validity is investigated using de Finetti’s notion of coherence under which subjective probabilities are coherent if and only if they obey all axioms and theorems of probability theory. Experimental results suggest that subjects provided with monetary incentives and randomized questions more likely express valid subjective probabilities than others because they are not aware of the chaining which undermines the incentive compatibility of the Exchangeability Method. In the second essay, by using the same experimental data, I show that valid subjective probabilities do not significantly diverge from invalid ones, indicative of little effect of internal validity on the actual magnitude of subjective probabilities. In the third essay, by using a field Choice Experiment, I investigate to what extent subjects adjust risk information given in the status quo alternative on their subjective probability estimates. An innovative two-stage approach that incorporates subjective probabilities into Choice Experiments’ design is developed to investigate this phenomenon, known as the scenario adjustment. In the first stage, subjective probabilities that given outcomes will occur are elicited using the Exchangeability Method. In the second stage, two treatment groups are designed: in the first group, each subject is presented with a status quo alternative which incorporates her/his subjective probabilities, and, hence, no adjustment is required; in the second group, each subject faces a status quo alternative where the presented risk is not consistent with her/his probability estimates, and, hence, a mental adjustment to the scenario might take place. By comparing willingness to pay across the treatment groups, my results suggest that, when subjects are provided with SQ alternatives in which the risk is lower than the perceived one, the mental adjustment takes place, but, when subjects are provided with SQ alternatives in which the risk is higher than their own estimates, these subjects appear to make irrational choices.
47

Vignudelli, Valeria <1985&gt. "Behavioral Equivalences for Higher-Order Languages with Probabilities." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/7968/7/Vignudelli_Valeria_tesi.pdf.

Abstract:
Higher-order languages, whose paradigmatic example is the lambda-calculus, are languages with powerful operators that are capable of manipulating and exchanging programs themselves. This thesis studies behavioral equivalences for programs with higher-order and probabilistic features. Behavioral equivalence is formalized as a contextual, or testing, equivalence, and two main lines of research are pursued in the thesis. The first part of the thesis focuses on contextual equivalence as a way of investigating the expressiveness of different languages. The discriminating powers offered by higher-order concurrent languages (Higher-Order pi-calculi) are compared with those offered by higher-order sequential languages (à la lambda-calculus) and by first-order concurrent languages (à la CCS). The comparison is carried out by examining the contextual equivalences induced by the languages on two classes of first-order processes, namely nondeterministic and probabilistic processes. As a result, the spectrum of the discriminating powers of several varieties of higher-order and first-order languages is obtained, both in a nondeterministic and in a probabilistic setting. The second part of the thesis is devoted to proof techniques for contextual equivalence in probabilistic lambda-calculi. Bisimulation-based proof techniques are studied, with particular focus on deriving bisimulations that are fully abstract for contextual equivalence (i.e., coincide with it). As a first result, full abstraction of applicative bisimilarity and similarity are proved for a call-by-value probabilistic lambda-calculus with a parallel disjunction operator. Applicative bisimulations are however known not to scale to richer languages. Hence, more robust notions of bisimulations for probabilistic calculi are considered, in the form of environmental bisimulations. Environmental bisimulations are defined for pure call-by-name and call-by-value probabilistic lambda-calculi, and for a (call-by-value) probabilistic lambda-calculus extended with references (i.e., a store). In each case, full abstraction results are derived.
48

Williams, Tasha Lyn. "A comparison of selection procedures for the best mean from a set of normal populations." Thesis, Georgia Institute of Technology, 1995. http://hdl.handle.net/1853/23420.

49

Pekerten, Uygar. "Yield curve modelling via two parameter process." Supervisor: Hayri Körezliğlu. Ankara: METU, 2005. http://etd.lib.metu.edu.tr/upload/12605905/index.pdf.

50

Le Cousin, Jean-Maxime. "Asymptotique des feux rares dans le modèle des feux de forêts." Thesis, Paris Est, 2015. http://www.theses.fr/2015PESC1018/document.

Abstract:
The aim of this work is to study two differents forest-fire processes defined on Z. In Chapter 2, we study the so-called one dimensional forest-fire process with non instantaeous propagation. In this model, each site has three possible states: ’vacant’, ’occupied’ or ’burning’. Vacant sites become occupied at rate 1. At each site, ignition (by lightning) occurs at rate λ. When a site is ignited, a fire starts and propagates to neighbors at rate π. We study the asymptotic behavior of this process as λ → 0 and π → ∞. We show that there are three possible classes of scaling limits, according to the regime in which λ → 0 and π → ∞. In Chapter 3, we study formally and briefly the so-called one dimensional forest-fire processes in random media. Here, each site has only two possible states: ’vacant’ or occupied’. Consider a parameter λ > 0, a probability distribution ν on (0 ,∞) as well as (κi)i∈Z an i.i.d. sequence of random variables with law ν. A vacant site i becomes occupied at rate κi. At each site, ignition (by lightning) occurs at rate λ. When a site is ignited, the fire destroys the corresponding component of occupied sites. We study the asymptotic behavior of this process as λ → 0. Under some quite reasonable assumptions on the law ν, we hope that the process converges, with a correct normalization, to a limit forest fire model. We expect that there are three possible classes of scaling limits
