
Dissertations / Theses on the topic 'Non-random modes'

Consult the top 50 dissertations / theses for your research on the topic 'Non-random modes.'


1

Marques-da-Silva, Antonio Hermes. "Gradient test under non-parametric random effects models." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12645/.

Abstract:
The gradient test proposed by Terrell (2002) is an alternative to the likelihood ratio, Wald and Rao tests. The gradient statistic is the inner product of two vectors: the gradient of the likelihood under the null hypothesis (hence the name) and the difference between the estimate under the alternative hypothesis and the estimate under the null hypothesis. The gradient statistic is therefore computationally less expensive than the Wald and Rao statistics, as its formula requires no matrix operations. Under some regularity conditions, the gradient statistic has a χ² distribution under the null hypothesis. The generalised linear model (GLM) introduced by Nelder & Wedderburn (1972) is one of the most important classes of statistical models. It incorporates classical regression modelling and analysis of variance for both continuous and categorical response variables under the exponential family. The random effects model extends the standard GLM to situations where the model does not adequately describe the variability in the data (overdispersion) (Aitkin, 1996a). We propose a new unified notation for GLMs with random effects and the gradient statistic formula for testing fixed-effects parameters in these models. We also develop the Fisher information formulae used to obtain the Rao and Wald statistics. Our main interest in this thesis is to investigate the finite-sample performance of the gradient test in generalised linear models with random effects. To this end we propose an extensive simulation experiment to study the type I error and the local power of the gradient test, using the methodology developed by Peers (1971) and Hayakawa (1975). We also compare the local power of the gradient test with that of the likelihood ratio, Wald and Rao tests.
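In standard notation (not the thesis' own), with U the score vector, I the Fisher information, and θ̂, θ̃ the unrestricted and null-restricted maximum-likelihood estimates, the three statistics compare as follows; only the gradient statistic avoids matrix inversion and multiplication:

```latex
% Rao (score), Wald and gradient statistics for a composite null hypothesis;
% all three are asymptotically chi-squared under H0.  A standard sketch, not
% the thesis' exact notation.
\[
  S_R = U(\tilde\theta)^{\top} I(\tilde\theta)^{-1} U(\tilde\theta),
  \qquad
  S_W = (\hat\theta - \tilde\theta)^{\top} I(\hat\theta)\,(\hat\theta - \tilde\theta),
  \qquad
  S_T = U(\tilde\theta)^{\top} (\hat\theta - \tilde\theta).
\]
```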
2

Häggström, Lundevaller Erling. "Tests of random effects in linear and non-linear models." Doctoral thesis, Umeå universitet, Statistik, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-15.

3

Häggström, Lundevaller Erling. "Tests of random effects in linear and non-linear models /." Umeå : Department of Statistics, University of Umeå, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-15.

4

Shiber, Dan Yariv-Chaim. "Tracial and non-tracial random matrix models in free probability." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1781954221&sid=11&Fmt=2&clientId=48051&RQT=309&VName=PQD.

5

山本, 俊行, and Toshiyuki YAMAMOTO. "非補償型意思決定方略を表現するためのデータマイニング手法の適用に関する分析 [An analysis of the application of data mining techniques for representing non-compensatory decision-making strategies]." 土木学会 [Japan Society of Civil Engineers], 2004. http://hdl.handle.net/2237/8619.

6

Abd, El-Fadeel Salah Ibrahim. "A mathematical model for calculating non-detection probability of a random tour target." Thesis, Monterey California. Naval Postgraduate School, 1985. http://hdl.handle.net/10945/21424.

Abstract:
The primary objective of this thesis was to build a mathematical model predicting the probability that a target moving according to a two-dimensional random tour avoids detection (i.e., survives) up to some specified time, t. The model assumes a stationary searcher with a 'cookie-cutter' sensor located at the center of the search area. A Monte-Carlo simulation computer program was used to generate the non-detection probabilities, and its output was used to construct the required mathematical model. The model predicts, and simulation supports, that as the mean segment length of the random tour becomes small with respect to the square root of the area size, the probability of non-detection approaches that previously obtained for a diffusing target. In the opposite extreme, the probability of non-detection approaches the general form of Koopman's random search formula. Keywords: Diffusion; RATSIM Computer program; FORTRAN; RATSIM (Random Tour Simulation). (Author)
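The thesis' RATSIM program is not reproduced here, but the experiment is easy to sketch. The following minimal Monte-Carlo estimate assumes exponentially distributed leg lengths, a toroidal (wrap-around) search area and a stationary cookie-cutter sensor at the origin; these are modelling choices of this sketch, not necessarily RATSIM's:

```python
import numpy as np

rng = np.random.default_rng(1)

def survival_prob(t_end=50.0, side=100.0, speed=5.0, mean_leg=20.0,
                  sensor_r=5.0, dt=0.1, n_runs=1000):
    """Monte Carlo estimate of P(no detection by t_end) for a random-tour
    target and a stationary cookie-cutter searcher at the origin."""
    survived = 0
    for _ in range(n_runs):
        # Start the target uniformly in the square, outside the sensor disc.
        while True:
            pos = rng.uniform(-side / 2, side / 2, size=2)
            if np.hypot(pos[0], pos[1]) > sensor_r:
                break
        heading = rng.uniform(0.0, 2.0 * np.pi)
        leg_left = rng.exponential(mean_leg)
        detected, t = False, 0.0
        while t < t_end:
            pos += speed * dt * np.array([np.cos(heading), np.sin(heading)])
            pos = (pos + side / 2) % side - side / 2   # toroidal boundary
            leg_left -= speed * dt
            if leg_left <= 0.0:                        # begin a new random leg
                heading = rng.uniform(0.0, 2.0 * np.pi)
                leg_left = rng.exponential(mean_leg)
            if np.hypot(pos[0], pos[1]) <= sensor_r:   # cookie-cutter detection
                detected = True
                break
            t += dt
        survived += not detected
    return survived / n_runs

print(survival_prob())  # compare against Koopman's random-search formula
```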
7

Phillips, Michael James. "A random matrix model for two-colour QCD at non-zero quark density." Thesis, Brunel University, 2011. http://bura.brunel.ac.uk/handle/2438/5084.

Abstract:
We solve a random matrix ensemble called the chiral Ginibre orthogonal ensemble, or chGinOE. This non-Hermitian ensemble has applications to modelling particular low-energy limits of two-colour quantum chromodynamics (QCD). In particular, the matrices model the Dirac operator for quarks in the presence of a gluon gauge field of fixed topology, with an arbitrary number of flavours of virtual quarks and a non-zero quark chemical potential. We derive the joint probability density function (JPDF) of eigenvalues of this ensemble for finite matrix size N, which we then write in factorised form. We then present two different methods for determining the correlation functions, resulting in compact expressions involving Pfaffians containing the associated kernel. We determine the microscopic large-N limits at strong and weak non-Hermiticity (required for physical applications) for both the real and complex eigenvalue densities. Various other properties of the ensemble are also investigated, including the skew-orthogonal polynomials and the fraction of eigenvalues that are real. A number of the techniques that we develop have more general applicability within random matrix theory, some of which we also explore in this thesis.
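The chGinOE itself is defined in the thesis; as a rough illustration, the sketch below samples a real two-matrix chiral non-Hermitian Dirac matrix with chemical potential mu (the block structure and normalisation here are assumptions of this sketch) and measures the fraction of real eigenvalues mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def dirac_spectrum(n, mu, samples=100):
    """Eigenvalues of a schematic real (beta = 1) chiral random matrix with
    chemical potential mu.  Block form D = [[0, A + mu*B], [-(A - mu*B).T, 0]]
    is a common two-matrix convention, assumed here; the thesis fixes its own."""
    eigs = []
    for _ in range(samples):
        A = rng.standard_normal((n, n)) / np.sqrt(n)
        B = rng.standard_normal((n, n)) / np.sqrt(n)
        Z = np.zeros((n, n))
        D = np.block([[Z, A + mu * B], [-(A - mu * B).T, Z]])
        eigs.append(np.linalg.eigvals(D))
    return np.concatenate(eigs)

spec = dirac_spectrum(40, mu=0.3)
# Real matrices give conjugate-pair spectra; count the (numerically) real ones.
print("fraction of real eigenvalues:", np.mean(np.abs(spec.imag) < 1e-8))
```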
8

Devamitta, Perera Muditha Virangika. "Robustness of normal theory inference when random effects are not normally distributed." Kansas State University, 2011. http://hdl.handle.net/2097/8786.

Abstract:
Master of Science, Department of Statistics, Paul I. Nelson
The variance of a response in a one-way random effects model can be expressed as the sum of the variability among and within treatment levels. Conventional methods of statistical analysis for these models are based on the assumption that both sources of variation are normally distributed. Since this assumption is not always satisfied and can be difficult to check, it is important to explore the performance of normal-based inference when normality does not hold. This report uses simulation to explore and assess the robustness of the F-test for the presence of an among-treatment variance component, and of the normal-theory confidence interval for the intra-class correlation coefficient, under several non-normal distributions. It was found that the power function of the F-test is robust for moderately heavy-tailed random error distributions. But for very heavy-tailed random error distributions, power is relatively low, even for a large number of treatments. Coverage rates of the confidence interval for the intra-class correlation coefficient are far from nominal for very heavy-tailed, non-normal random effect distributions.
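A minimal version of the simulation described is easy to write down; here both the random effects and the errors are drawn from a Student-t distribution as one example of a heavy-tailed alternative (the report's actual distributions and design sizes may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def f_test_rejection_rate(k=20, n=5, sigma_a=0.0, df=3, reps=2000, alpha=0.05):
    """Empirical rejection rate of the one-way ANOVA F-test for
    H0: sigma_a^2 = 0 when effects and errors are heavy-tailed (Student-t).
    With sigma_a = 0 this estimates the type I error; sigma_a > 0 gives power."""
    crit = stats.f.ppf(1 - alpha, k - 1, k * (n - 1))
    rej = 0
    for _ in range(reps):
        a = sigma_a * rng.standard_t(df, size=k)      # random treatment effects
        e = rng.standard_t(df, size=(k, n))           # within-treatment errors
        y = a[:, None] + e
        msa = n * np.var(y.mean(axis=1), ddof=1)      # among-treatment mean square
        mse = np.mean(np.var(y, axis=1, ddof=1))      # within-treatment mean square
        rej += (msa / mse) > crit
    return rej / reps

print("type I error under t(3) errors:", f_test_rejection_rate())
```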
9

Toroczkai, Zoltan. "Analytic Results for Hopping Models with Excluded Volume Constraint." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30481.

Abstract:
Part I: The Theory of Brownian Vacancy Driven Walk. We analyze the lattice walk performed by a tagged member of an infinite 'sea' of particles filling a d-dimensional lattice, in the presence of a single vacancy. The vacancy is allowed to be occupied with probability 1/2d by any of its 2d nearest neighbors, so that it executes a Brownian walk. Particle-particle exchange is forbidden; the only interaction between them is hard-core exclusion. Thus the tagged particle, differing from the others only by its tag, moves only when it exchanges places with the hole. In this sense, it is a random walk "driven" by the Brownian vacancy. The probability distributions for its displacement and for the number of steps taken, after n steps of the vacancy, are derived. Neither is a Gaussian! We also show that the only nontrivial dimension where the walk is recurrent is d=2. As an application, we compute the expected energy shift caused by a Brownian vacancy in a model for an extremely anisotropic binary alloy. In the last chapter we present a Monte-Carlo study and a mean-field analysis of interface erosion caused by mobile vacancies. Part II: One-Dimensional Periodic Hopping Models with Broken Translational Invariance. Case of a Mobile Directional Impurity. We study a random walk on a one-dimensional periodic lattice with arbitrary hopping rates. Further, the lattice contains a single mobile, directional impurity (defect bond), across which the rate is fixed at another arbitrary value. Due to the defect, translational invariance is broken, even if all other rates are identical. The structure of the Master equations leads naturally to the introduction of a new entity, associated with the walker-impurity pair, which we call the quasi-walker. An analytic solution for the distributions in the steady-state limit is obtained. The velocities and diffusion constants for both the random walker and the impurity are given, being simply related to those of the quasi-particle through physically meaningful equations. As an application, we extend the Duke-Rubinstein reptation model of gel electrophoresis to include polymers with impurities and give the exact distribution of the steady state.
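A minimal sketch of the Part I mechanism, in two dimensions (lattice size, starting geometry and step count are arbitrary choices of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)

def tagged_walk(n_vacancy_steps=100_000, L=101):
    """Displacement of a tagged particle on an L x L fully occupied periodic
    lattice containing one Brownian vacancy.  Hard-core exclusion: the tagged
    particle moves only when it swaps places with the vacancy."""
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    vac = np.array([0, 0])
    tag = np.array([1, 0])            # tagged particle starts next to the hole
    disp = np.zeros(2)                # net tagged-particle displacement
    for _ in range(n_vacancy_steps):
        step = moves[rng.integers(4)] # vacancy swaps with a random neighbour
        new = (vac + step) % L
        if np.array_equal(new, tag):  # the neighbour is the tagged particle
            tag = vac.copy()          # tagged particle hops into the old hole
            disp -= step
        vac = new
    return disp

# Averaging disp over many runs recovers the (non-Gaussian) displacement statistics.
print(tagged_walk())
```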
10

Xu, Xiaosong. "Benefit transfer with multiple sources of heterogeneity in non-market valuation random utility models." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq23093.pdf.

11

Mardoukhi, Yousof [Verfasser], and Ralf [Akademischer Betreuer] Metzler. "Random environments and the percolation model : non-dissipative fluctuations of random walk process on finite size clusters / Yousof Mardoukhi ; Betreuer: Ralf Metzler." Potsdam : Universität Potsdam, 2020. http://d-nb.info/1219580082/34.

12

Baruah, Monita. "Analysis of some batch arrival queueing systems with balking, reneging, random breakdowns, fluctuating modes of service and Bernoulli scheduled server vacations." Thesis, Brunel University, 2017. http://bura.brunel.ac.uk/handle/2438/14524.

Abstract:
The purpose of this research is to investigate and analyse some batch arrival queueing systems with a Bernoulli scheduled vacation process and a single server providing service. The study aims to explore and extend the work done on vacation and unreliable queues with a combination of assumptions such as balking and re-service, reneging during vacations, time-homogeneous random breakdowns and fluctuating modes of service. We study the steady-state properties, and also the transient behaviour, of such queueing systems. Due to vacations, arriving units already in the system may abandon it without receiving any service (reneging). Customers may decide not to join the queue when the server is in either the working or the vacation state (balking). We study this phenomenon in the framework of two models: a single server with two types of parallel services, and two stages of service. The model is further extended with re-service offered instantaneously. Units joining the queue but leaving without service in the absence of the server, especially during vacations, are quite a natural phenomenon. We study this reneging behaviour in a queueing process with a single server in the context of Markovian and non-Markovian service time distributions. Arrivals are in batches, while each customer can take the decision to renege independently. The non-Markovian model is further extended by considering service times following a Gamma distribution and arrivals due to a Geometric distribution. Closed-form solutions are derived in all cases. Among other causes of service interruption, one prime cause is breakdowns. We consider breakdowns occurring in both the idle and working states of the server. For this queueing system both the transient and steady-state analyses are investigated. Applying the supplementary variable technique, we obtain the probability generating function of queue size at a random epoch for the different states of the system, and also derive performance measures such as the probability of the server's idle time, the utilization factor, the mean queue length and the mean waiting time. The effect of the parameters on some of the main performance measures is illustrated by numerical examples to validate the analytical results obtained in the study. The Mathematica 10 software has been used to produce the numerical results and to present the effects of some performance measures through plots and graphs.
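The thesis works analytically via probability generating functions; as a rough cross-check, one can simulate a stripped-down version of such a system. The sketch below keeps only batch arrivals and Bernoulli scheduled vacations (no balking, reneging, breakdowns or fluctuating service modes), with all rates invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def sim_queue(lam=0.4, mu=2.0, batch_max=3, p_vac=0.5, theta=1.0, t_end=2e4):
    """Time-average queue length for a toy M[X]/M/1 queue with Bernoulli
    scheduled vacations: batches of uniform size 1..batch_max arrive at rate
    lam; after each service completion the server takes an exp(theta)
    vacation with probability p_vac."""
    t, n, on_vacation, area = 0.0, 0, False, 0.0
    while t < t_end:
        rates, events = [lam], ["arrival"]            # batch arrivals always on
        if n > 0 and not on_vacation:
            rates.append(mu); events.append("service")
        if on_vacation:
            rates.append(theta); events.append("return")
        total = sum(rates)
        dt = rng.exponential(1.0 / total)             # time to next event
        area += n * dt
        t += dt
        ev = rng.choice(events, p=np.array(rates) / total)
        if ev == "arrival":
            n += rng.integers(1, batch_max + 1)       # random batch size
        elif ev == "service":
            n -= 1
            if rng.random() < p_vac:                  # Bernoulli vacation
                on_vacation = True
        else:                                         # vacation ends
            on_vacation = False
    return area / t

print("mean queue length:", sim_queue())
```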
13

Kim, Nungsoo. "Extraction of the second-order nonlinear response from model test data in random seas and comparison of the Gaussian and non-Gaussian models." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3183.

Abstract:
This study presents the results of an extraction of the 2nd-order nonlinear responses from model test data. Emphasis is given to the effects of the assumptions made for Gaussian and non-Gaussian input on the estimation of the 2nd-order response, employing the quadratic Volterra model. The effects of sea severity and data length on the estimation of the response are investigated at the same time. The data sets used in this study are surge forces on a fixed barge, the surge motion of a compliant mini TLP (Tension Leg Platform), and surge forces on a fixed, truncated column. Sea states range from rough seas (Hs=3m) to high seas (Hs=9m) for the barge case, very rough seas (Hs=3.9m) for the mini TLP, and phenomenal seas (Hs=15m) for the truncated column. After the estimation of the response functions, the outputs are reconstructed and the 2nd-order nonlinear responses are extracted with the full QTF distributed over the entire bifrequency domain. The reconstituted time series are compared with the experiment in both the time and frequency domains. For the effects of data length on the estimation of the response functions, 3-, 15-, and 40-hour records were investigated for the barge, but only 3-hour records were used for the mini TLP and the fixed, truncated column due to the lack of longer data. The effects of sea severity on the estimation of the response functions are found in both methods. The non-Gaussian estimation method is more affected by data length than the Gaussian method.
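Schematically, the quadratic Volterra model underlying the extraction can be written in the frequency domain as follows (a textbook form; the thesis fixes its own conventions):

```latex
% Second-order Volterra response to a sum of wave components A_k e^{i w_k t}:
% H_1 is the linear transfer function and H_2^± the sum- and difference-
% frequency quadratic transfer functions (QTFs) over the bifrequency domain.
\[
  y(t) = \Re \sum_k A_k H_1(\omega_k)\, e^{i\omega_k t}
       + \Re \sum_{j,k} \Bigl[ A_j A_k\, H_2^{+}(\omega_j,\omega_k)\,
             e^{i(\omega_j+\omega_k)t}
       + A_j A_k^{*}\, H_2^{-}(\omega_j,\omega_k)\,
             e^{i(\omega_j-\omega_k)t} \Bigr].
\]
```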
14

Christou, Christos [Verfasser], and Andreas [Gutachter] Schadschneider. "Non-Equilibrium Stochastic Models: Random Average Process and Diffusion with Resetting / Christos Christou ; Gutachter: Andreas Schadschneider." Köln : Universitäts- und Stadtbibliothek Köln, 2018. http://d-nb.info/1186251751/34.

15

Chen, Yin. "Quasi-Monte Carlo methods in generalized linear mixed model with correlated and non-normal random effects." Thesis, University of Manchester, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.516829.

16

Jurczak, Kamil [Verfasser], Angelika [Akademischer Betreuer] Rohde, and Holger [Akademischer Betreuer] Dette. "Spectral analysis and minimax estimation in non-standard random matrix models / Kamil Jurczak. Gutachter: Angelika Rohde ; Holger Dette." Bochum : Ruhr-Universität Bochum, 2016. http://d-nb.info/1095884220/34.

17

Poitevin, Caroline Myriam. "Non-random inter-specific encounters between Amazon understory forest birds : what are theyand how do they change." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/150626.

Abstract:
Inter-specific associations of birds are complex social phenomena, frequently detected and often stable over time and space. So far, the social structure of these associations has been largely deduced from subjective assessments in the field or by counting the number of inter-specific encounters at the whole-group level, without considering changes to individual pairwise interactions. Here, we look for evidence of non-random association between pairs of bird species, delimit groups of more strongly associated species and examine differences in social structure between old-growth and secondary forest habitat. We used records of bird species detections from mist-netting captures and from acoustic recordings to identify pairwise associations detected more frequently than expected under a null distribution, and compared the strength of these associations between old-growth and secondary Amazonian tropical forest. We also used the pairwise association strengths to visualize the social network structure and its changes between habitat types. We found many strongly positive interactions between species, but no evidence of repulsion. Network analyses revealed several modules of species that broadly agree with the subjective groupings described in the ornithological literature. Furthermore, both network structure and association strength changed drastically with habitat disturbance, with the formation of a few new associations but a general trend towards the breaking of associations between species. Our results show that social grouping in birds is real and may be strongly affected by habitat degradation, suggesting that the stability of the associations is threatened by anthropogenic disturbance.
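A minimal sketch of the kind of pairwise null-model test described above (the thesis' actual null model and association-strength measure may differ):

```python
import numpy as np

rng = np.random.default_rng(4)

def pairwise_association(detections, n_perm=5000):
    """Null-model test for co-detection of one species pair.  `detections` is
    a binary species x sample matrix (e.g. mist-net sessions or recordings).
    Returns the observed co-detection count and a permutation p-value under
    independent (random) occurrence."""
    a, b = detections[0], detections[1]
    observed = int(np.sum(a * b))
    # Shuffle one species' detections across samples to break any association.
    null = np.array([np.sum(rng.permutation(a) * b) for _ in range(n_perm)])
    # One-sided p-value: how often random placement co-occurs at least as much.
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p

# Toy example: two species detected across 40 samples.
det = rng.integers(0, 2, size=(2, 40))
print(pairwise_association(det))
```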
18

Heinken, Thilo, and Eckart Winkler. "Non-random dispersal by ants : long-term field data versus model predictions of population spread of a forest herb." Universität Potsdam, 2009. http://opus.kobv.de/ubp/volltexte/2010/4648/.

Abstract:
Myrmecochory, i.e. dispersal of seeds by ants towards and around their nests, plays an important role in temperate forests. Yet hardly any study has examined plant population spread over several years and the underlying joint contribution of a hierarchy of dispersal modes and plant demography. We used a seed-sowing approach with three replicates to examine colonization patterns of Melampyrum pratense, an annual myrmecochorous herb, in a mixed Scots pine forest in northeastern Germany. Using a spatially explicit individual-based (SEIB) model, population patterns over 4 years were explained by short-distance transport of seeds by small ant species with high nest densities, resulting in random spread. However, plant distributions in the field after another 4 years clearly deviated from model predictions. The mean annual spread rate increased from 0.9 m to 5.1 m per year, with a clear inhomogeneous component. Obviously, after a lag-phase of several years, non-random seed dispersal by large red wood ants (Formica rufa) was determining the species' spread, resulting in stratified dispersal due to interactions with different-sized ant species. Hypotheses on stratified dispersal, on dispersal lag, and on non-random dispersal were verified using an extended SEIB model, by comparing model outputs with field patterns (individual numbers, population areas, and maximum distances). Dispersal towards red wood ant nests, together with seed loss during transport and redistribution around nests, were essential features of the model extension. The observed lag-phase in the initiation of non-random, medium-distance transport was probably due to a change in ant behaviour towards a new food source of increasing importance, a meaningful example of a lag-phase in local plant species invasion. The results demonstrate that field studies should check model predictions wherever possible. Future research will show whether or not the M. pratense-ant system is representative of migration patterns of similar animal dispersal systems after range edges have been crossed by long-distance dispersal events.
19

Xu, Weibin. "Attribute Non Attendance in a Revealed Preference Study." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/82935.

Abstract:
This dissertation investigates attribute non-attendance (ANA) in an urban random utility model (RUM). Using the RUM, it also investigates local residents' willingness to pay (WTP) to improve the condition of the riparian vegetation in southern Sydney, Australia. To elicit self-reported ANA, the survey asks respondents either how important they consider the attributes of a public green space or how frequently they use those attributes when they visit one, and this information is used to estimate stated-ANA and inferred-ANA models. Stated-ANA model results show that ANA does impact the WTP estimates for most of the site attributes, but people in the non-attendance group do not necessarily have zero or lower WTP for these attributes. However, the stated-model results do show that self-reported ANA statements from 'importance questions' for some attributes, such as riparian vegetation, are more consistent with the estimated ANA. This finding suggests that the elicitation method affects the accuracy of self-reported ANA. We also find that the consistency between respondents' self-reported ANA and the ANA estimated from inferred-ANA models largely depends on the particular attribute, but it can be concluded that when respondents say they 'never' used a site attribute or see it as 'unimportant' or 'somewhat unimportant', it is very likely that they truly ignored that attribute in their decision-making process. Our study also finds that respondents are willing to pay to improve the condition of the riparian vegetation, and that WTP increases if the channel is less modified and there is more vegetation. For example, an average respondent with 19 trips annually is willing to pay $58 to improve the riparian vegetation condition from the lowest level to the highest level. Another interesting finding is that those who considered riparian vegetation in their decision-making processes differentiated between riparian vegetation conditions more than those who did not. The importance-sample results also show that ignoring ANA in model estimation will under-estimate the annual value of the vegetation improvement for a person with 20 trips by $2.85.
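In a linear-in-parameters RUM, the WTP figures quoted above arise as ratios of estimated coefficients; schematically (standard conditional-logit practice, not necessarily the dissertation's exact specification):

```latex
% Conditional-logit utility of site j with attribute vector x_j and travel
% cost c_j; marginal willingness to pay for attribute k is a coefficient
% ratio (lambda is the marginal utility of money).
\[
  U_{ij} = \beta^{\top} x_j - \lambda\, c_j + \varepsilon_{ij},
  \qquad
  \mathrm{WTP}_k = \frac{\beta_k}{\lambda}.
\]
```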
20

Kolstoe, Sonja. "Essays on the Recreational Value of Avian Biodiversity." Thesis, University of Oregon, 2016. http://hdl.handle.net/1794/20494.

Abstract:
This dissertation uses a convenience sample of members of eBird, a large citizen science project maintained by the Cornell University Laboratory of Ornithology, to explore the value of avian biodiversity to bird watchers. Panel data (i.e. longitudinal data) are highly desirable for preference estimation. Fortuitously, the diaries of birding excursions by eBird members provide a rich source of spatial data on trips taken, over time by the same individuals, to a variety of birding destinations. Origin and destination data can be combined with exogenous species prevalence information. These combined data sources permit estimation of utility-theoretic choice models that allow derivation of the marginal utilities of avian biodiversity measures as well as the marginal utility of net income (i.e. consumption of other goods and services). Ratios of these marginal utilities yield marginal willingness to pay (MWTP) estimates for numbers of bird species (or numbers of species of different types, in richer specifications). MWTP estimates for levels of other attributes of birding destinations are also derived (e.g. ecosystem type, management regime, seasonal variations, a time trend). The chapters are organized as follows: Chapter 2 is a stand-alone paper that demonstrates the feasibility of a travel-cost based random utility model with the eBird data. This chapter focuses on measuring the total number of bird species at each birding hotspot in Washington and Oregon states. It does not differentiate among types of birders beyond using their recent birding activities in an analysis of habit formation or variety-seeking behavior; beyond past behavior, a representative consumer is postulated. Chapter 3 starts from the basic specifications identified in Chapter 2 and explores heterogeneous preferences among consumers as well as their preferences for species richness and for different categories of birds. This chapter explores whether different types of birds are relatively more attractive to different types of birders (for example, by gender, by age, or by neighborhood characteristics and educational attainment). Chapter 4 extends the work in Chapter 3 to explore how changing site attributes in the face of climate change affects birder welfare. This dissertation includes previously unpublished co-authored material.
21

Ospina, Arango Juan David. "Predictive models for side effects following radiotherapy for prostate cancer." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S046/document.

Abstract:
External beam radiotherapy (EBRT) is one of the cornerstones of prostate cancer treatment. The objectives of radiotherapy are, firstly, to deliver a high dose of radiation to the tumor (prostate and seminal vesicles) in order to achieve maximal local control and, secondly, to spare the neighboring organs (mainly the rectum and the bladder) to avoid normal tissue complications. Normal tissue complication probability (NTCP) models are then needed to assess the feasibility of the treatment and inform the patient about the risk of side effects, to derive dose-volume constraints and to compare different treatments. In the context of EBRT, the objectives of this thesis were to find predictors of bladder and rectal complications following treatment; to develop new NTCP models that allow for the integration of both dosimetric and patient parameters; to compare the predictive capabilities of these new models to the classic NTCP models; and to develop new methodologies to identify dose patterns correlated with normal tissue complications following EBRT for prostate cancer. A large cohort of patients treated by conformal EBRT for prostate cancer under several prospective French clinical trials was used for the study. In a first step, the incidence of the main genitourinary and gastrointestinal symptoms was described. With another classical approach, namely logistic regression, some predictors of genitourinary and gastrointestinal complications were identified. The logistic regression models were then represented graphically as nomograms, a graphical tool that enables clinicians to rapidly assess the complication risks associated with a treatment and to inform patients. This information can be used by patients and clinicians to select a treatment among several options (e.g. EBRT or radical prostatectomy). In a second step, we proposed the use of random forests, a machine-learning technique, to predict the risk of complications following EBRT for prostate cancer. The superiority of the random forest NTCP, assessed by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve, was established. In a third step, the 3D dose distribution was studied. A 2D population value decomposition (PVD) technique was extended to a tensorial framework to be applied to 3D volume image analysis. Using this tensorial PVD, a population analysis was carried out to find a pattern of dose possibly correlated with a normal tissue complication following EBRT. Also in the context of 3D image population analysis, a spatio-temporal nonparametric mixed-effects model was developed. This model was applied to find an anatomical region where the dose could be correlated with a normal tissue complication following EBRT.
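For orientation, the classic Lyman-Kutcher-Burman (LKB) model against which the random-forest NTCP was compared reduces the dose-volume histogram {(v_i, D_i)} to a generalised equivalent uniform dose and maps it through a probit curve (standard formulation; TD50, m and n are organ-specific fitted parameters):

```latex
% LKB NTCP: gEUD from the DVH, then a probit link with organ-specific
% parameters TD50 (tolerance dose), m (slope) and n (volume effect);
% Phi is the standard normal distribution function.
\[
  \mathrm{gEUD} = \Bigl(\sum_i v_i\, D_i^{1/n}\Bigr)^{\!n},
  \qquad
  \mathrm{NTCP} = \Phi\!\left(\frac{\mathrm{gEUD} - TD_{50}}{m\, TD_{50}}\right).
\]
```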
22

Pascalie, Romain. "Tenseurs aléatoires et modèle de Sachdev-Ye-Kitaev." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0099.

Abstract:
This thesis treats different aspects of random tensors. In the first part, we study the formulation of random tensors as a quantum field theory called tensor field theory (TFT). In particular we derive the Schwinger-Dyson equations for a tensor field theory with a U(N)-invariant melonic quartic interaction, at any tensor rank. The correlation functions are classified by boundary graphs and we use the Ward-Takahashi identity to derive the complete tower of exact, analytic Schwinger-Dyson equations for correlation functions with connected boundary graphs. We then analyse the large-N limit of the Schwinger-Dyson equations for rank-3 tensors. We find the appropriate scalings in powers of N for the various terms present in the action. This enables us to solve the closed Schwinger-Dyson equation for the 2-point function of a TFT with only one quartic melonic interaction, in terms of Lambert's W-function, using a perturbative expansion and Lagrange-Bürmann resummation. Higher-point functions are then obtained recursively. In the second part of the thesis, we study the Sachdev-Ye-Kitaev (SYK) model, which is closely related to tensor models. The SYK model is a quantum mechanical model of N fermions which interact q at a time and whose coupling constant is a tensor averaged over a Gaussian distribution. We study the effect of a non-Gaussian average over the random couplings in a complex version of the SYK model. Using a Polchinski-like equation and random tensor Gaussian universality, we show that this non-Gaussian averaging leads, at leading order in N, to a modification of the variance of the Gaussian distribution of couplings. We then derive the form of the effective action to all orders and perform an explicit computation of the modification of the variance in the case of a quartic perturbation. In the third part of the thesis, we analyse an application of random tensors to non-linear resonant systems. Focusing on a typical model similar to the SYK model but with bosons instead of fermions, we perform a Gaussian averaging both for the tensor coupling between modes and for the initial conditions. In the limit where the initial configuration has many modes excited, we compute the variance of the Sobolev norms to characterise how representative the averaged model is of this class of resonant systems.
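For orientation, the Majorana SYK Hamiltonian with q-fermion interactions and Gaussian-averaged tensor couplings reads (standard conventions; the thesis works with a complex-fermion variant and with non-Gaussian deformations of the coupling distribution):

```latex
% SYK model: N Majorana fermions psi_i; the couplings J are drawn from a
% Gaussian whose N-scaling makes the large-N (melonic) limit well defined.
\[
  H = i^{q/2} \!\!\sum_{1 \le i_1 < \cdots < i_q \le N} \!\! J_{i_1 \cdots i_q}\,
      \psi_{i_1} \cdots \psi_{i_q},
  \qquad
  \bigl\langle J_{i_1 \cdots i_q}^2 \bigr\rangle = \frac{(q-1)!\, J^2}{N^{q-1}}.
\]
```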
23

Derflinger, Gerhard, Wolfgang Hörmann, Josef Leydold, and Halis Sak. "Efficient Numerical Inversion for Financial Simulations." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2009. http://epub.wu.ac.at/830/1/document.pdf.

Abstract:
Generating samples from generalized hyperbolic distributions and non-central chi-square distributions by inversion has become an important task for the simulation of recent models in finance in the framework of (quasi-) Monte Carlo. However, their distribution functions are quite expensive to evaluate and thus numerical methods like root finding algorithms are extremely slow. In this paper we demonstrate how our new method based on Newton interpolation and Gauss-Lobatto quadrature can be utilized for financial applications. Its fast marginal generation times make it competitive, even for situations where the parameters are not always constant.
Series: Research Report Series / Department of Statistics and Mathematics
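The paper's algorithm (Newton interpolation over Gauss-Lobatto nodes with controlled error) is not reproduced here, but the basic economics of inversion by interpolation can be sketched: pay for the expensive quantile evaluations once during setup, then sample with cheap interpolant lookups. The target distribution, grid size and linear interpolant below are simplifications of this sketch:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Setup: tabulate the expensive quantile function once.
dist = stats.ncx2(df=4, nc=2.5)              # non-central chi-square target
u_grid = np.linspace(1e-6, 1 - 1e-6, 2048)   # costly ppf calls happen only here
q_grid = dist.ppf(u_grid)

def sample(n):
    """Inversion sampling via a precomputed interpolant of the quantile
    function; the marginal cost per variate is a table lookup.  Works with
    pseudo-random or quasi-Monte Carlo uniforms alike."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=n)
    return np.interp(u, u_grid, q_grid)

x = sample(100_000)
print(x.mean(), dist.mean())                 # rough sanity check
```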
24

Bruna, Escuer Pere. "Microstructural characterization and modelling in primary crystallization." Doctoral thesis, Universitat Politècnica de Catalunya, 2007. http://hdl.handle.net/10803/6588.

Abstract:
The aim of this thesis is to study the kinetics of primary crystallization in metallic glasses by means of phase-field simulations. A primary crystallization is a solid-solid phase transformation where the crystallized phase (transformed or secondary phase) has a chemical composition different from that of the precursor phase (untransformed or primary phase).
Experimental data from calorimetric studies of primary crystallization are usually analysed in the framework of the KJMA model (Kolmogorov, Johnson & Mehl, Avrami). This model yields the temporal evolution of the transformed fraction on the basis of three main assumptions:
- a random distribution of the nuclei of the secondary phase;
- isotropic growth of these nuclei;
- growth halted only by direct collisions (hard impingement).

In the crystallization of metallic glasses, a slowing down of the kinetics with respect to the behavior calculated with KJMA kinetics has been observed. This delay is explained in the literature by the fact that in this kind of transformation, which is diffusion-controlled, the interaction between crystals is not direct but occurs through the concentration profiles (soft impingement); moreover, the evolution of these profiles changes the concentration of the amorphous matrix, stabilizing it, so that the nucleation of new crystals becomes non-random. Several authors have proposed modifications of the KJMA model to try to overcome these limitations, based either on geometrical considerations or on mean-field approaches. However, none of these models is able to explain the kinetics observed in primary crystallizations. The aim of this work has been the realistic simulation of the kinetics of primary crystallization, to find an explanation for the differences between the experimental data and the available theoretical models.
In order to describe the process of primary crystallization realistically, the nucleation and growth of the secondary phase must be studied while the diffusion equation is solved in the primary phase. In this work a phase-field model has been used for the simulations, which allows this system to be studied by introducing a new variable, coupled to the concentration field, that takes two different values in the transformed and untransformed phases. With this kind of model, different nucleation protocols can also be introduced, so the effects of nucleation on the kinetics can be studied independently. Two- and three-dimensional simulations of primary crystallization have therefore been performed with several degrees of final transformed fraction. The simulation results have been compared with the KJMA model and, unexpectedly, good agreement between the simulations and the KJMA model has been obtained. As the KJMA model does not reproduce the experimental behavior satisfactorily, it can be deduced that neither soft impingement nor non-random nucleation is responsible for the slowing down observed in the kinetics of primary crystallization.
In order to find a physically convincing explanation of the observed experimental behavior, the theoretical study of primary crystallization has been extended to include the effects of the compositional changes that take place in the matrix as the transformation proceeds. This fact, despite being known in the literature, has been systematically ignored in the development of kinetic models. In particular, it has become clear that changes in the chemical composition of the primary phase must radically affect the viscosity, which varies strongly near the glass transition, and must produce changes in the atomic transport properties. This has been modeled through the assumption of a composition-dependent diffusion coefficient, on the basis of a modified Stokes-Einstein relation between viscosity and diffusion coefficient. Phase-field simulations with a diffusion coefficient of this type yield slower kinetics and show excellent agreement with the kinetics experimentally observed in primary crystallization of metallic glasses. Thus, phase-field simulations confirm that the kinetics of primary crystallization is fundamentally controlled by the changes in the atomic transport properties, while soft impingement and non-random nucleation effects, although present, are secondary.
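The KJMA (Avrami) kinetics referred to throughout has the closed form below (a standard result; K collects the nucleation and growth rates, and the exponent n encodes the nucleation mode and growth dimensionality):

```latex
% KJMA transformed fraction under random nucleation and isotropic growth:
% X_ext is the "extended" fraction computed as if growing regions never
% impinged; the exponential corrects for hard impingement.
\[
  X(t) = 1 - \exp\bigl(-X_{\mathrm{ext}}(t)\bigr)
       = 1 - \exp\bigl(-K t^{\,n}\bigr).
\]
```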
25

Love, Randi. "Testing the constructs of the resiliency model in a non-random sample of college students: Effects on quantity and frequency of alcohol use and consequences experienced due to alcohol use /." The Ohio State University, 1997. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487946776022279.

26

Wang, Chao. "Exploiting non-redundant local patterns and probabilistic models for analyzing structured and semi-structured data." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1199284713.

27

Channarond, Antoine. "Recherche de structure dans un graphe aléatoire : modèles à espace latent." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112338/document.

Abstract:
This thesis addresses the clustering of the nodes of a graph, in the framework of random models with latent variables. To each node i is allocated an unobserved (latent) variable Zi, and the probability of nodes i and j being connected depends conditionally on Zi and Zj. Unlike Erdos-Renyi's model, connections are not independent identically distributed; the latent variables rule the connection distribution of the nodes. These models are thus heterogeneous and their structure is fully described by the latent variables and their distribution. Hence we aim at inferring them from the graph, which is the only observed data. In both original works of this thesis, we propose consistent inference methods with a computational cost no more than linear with respect to the number of nodes or edges, so that large graphs can be processed in a reasonable time. They are both based on a study of the distribution of the degrees, which are normalized in a convenient way for the model. The first work deals with the Stochastic Blockmodel. We show the consistency of an unsupervised classification algorithm using concentration inequalities. We deduce from it a parametric estimation method, a model selection method for the number of latent classes, and a clustering test (testing whether there is one cluster or more), which are all proved to be consistent. In the second work, the latent variables are positions in the ℝd space, having a density f. The connection probability depends on the distance between the node positions. The clusters are defined as connected components of some level set of f. The goal is to estimate the number of such clusters from the observed graph only. We estimate the density at the latent positions of the nodes with their degree, which allows us to establish a link between clusters and connected components of some subgraphs of the observed graph, obtained by removing low degree nodes. In particular, we thus derive an estimator of the cluster number and we also show its consistency in some sense.
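To illustrate the degree-based idea behind the first work, a minimal sketch follows: in a two-class Stochastic Blockmodel the normalized degrees concentrate around class-dependent means in the dense regime, so even a crude one-dimensional split of the degrees recovers most labels. The model parameters below are hypothetical and the split is a simplification of the algorithm analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.random(n) < 0.5                      # latent classes
P = np.array([[0.10, 0.04], [0.04, 0.06]])   # connection probabilities
probs = P[z.astype(int)][:, z.astype(int)]
A = rng.random((n, n)) < probs
A = np.triu(A, 1); A = A + A.T               # symmetric, no self-loops

deg = A.sum(axis=1) / (n - 1)                # normalized degrees
threshold = deg.mean()                       # crude 1-D split of the degree histogram
z_hat = deg > threshold

acc = max(np.mean(z_hat == z), np.mean(z_hat != z))  # labels known up to swap
print(f"fraction correctly classified: {acc:.3f}")
```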
APA, Harvard, Vancouver, ISO, and other styles
28

Triampo, Wannapong. "Non-Equilibrium Disordering Processes In binary Systems Due to an Active Agent." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/26738.

Full text
Abstract:
In this thesis, we study the kinetic disordering of systems interacting with an agent or a walker. Our studies divide naturally into two classes: for the first, the dynamics of the walker conserves the total magnetization of the system; for the second, it does not. These distinct dynamics are investigated in Parts I and II, respectively. In Part I, we investigate the disordering of an initially phase-segregated binary alloy due to a highly mobile vacancy which exchanges with the alloy atoms. This dynamics clearly conserves the total magnetization. We distinguish three versions of dynamic rules for the vacancy motion, namely a pure random walk, an "active" walk and a biased walk. For the random walk case, we review and reproduce earlier work by Z. Toroczkai et al. [TKSZ], which will serve as our baseline. To test the robustness of these findings and to make our model more accessible to experimental studies, we investigated the effects of finite temperatures ("active" walks) as well as external fields (biased walks). To monitor the disordering process, we define a suitable disorder parameter, namely the number of broken bonds, which we study as a function of time, system size and vacancy number. Using Monte Carlo simulations and a coarse-grained field theory, we observe that the disordering process exhibits three well separated temporal regimes. We show that the later stages exhibit dynamic scaling, characterized by a set of exponents and scaling functions. For the random and the biased case, these exponents and scaling functions are computed analytically in excellent agreement with the simulation results. The exponents are remarkably universal. We conclude this part with some comments on the early stage, the interfacial roughness and other related features. In Part II, we introduce a model of binary data corruption induced by a Brownian agent or random walker. Here, the magnetization is not conserved, being related to the density of corrupted bits ρ. Using both continuum theory and computer simulations, we study the average density of corrupted bits, and the associated density-density correlation function, as well as several other related quantities. In the second half, we extend our investigations in three main directions which allow us to make closer contact with real binary systems. These are i) a detailed analysis of two dimensions, ii) the case of competing agents, and iii) the cases of asymmetric and quenched random couplings. Our analytic results are in good agreement with simulation results. The remarkable finding of this study is the robustness of the phenomenological model which provides us with the tool, continuum theory, to understand the nature of such a simple model.
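A toy version of the conserved dynamics of Part I, assuming a single pure-random-walk vacancy on an initially phase-segregated ±1 lattice and using the number of broken bonds as the disorder parameter; lattice size and times are arbitrary, and letting the vacancy simply exchange spins is a simplification of the model studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 64
spins = np.ones((L, L), dtype=int)
spins[:, L // 2:] = -1                     # phase-segregated initial state
vx, vy = L // 2, L // 2                    # vacancy position

def broken_bonds(s):
    """Number of nearest-neighbor pairs with unequal spins (periodic boundaries)."""
    return int((s != np.roll(s, 1, 0)).sum() + (s != np.roll(s, 1, 1)).sum())

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for t in range(1, 200001):
    dx, dy = moves[rng.integers(4)]
    nx, ny = (vx + dx) % L, (vy + dy) % L
    spins[vx, vy], spins[nx, ny] = spins[nx, ny], spins[vx, vy]  # vacancy-atom exchange
    vx, vy = nx, ny
    if t % 50000 == 0:
        print(f"t = {t}: broken bonds = {broken_bonds(spins)}")
```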
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
29

Iskander, D. R. "The Generalised Bessel function K distribution and its application to the detection of signals in the presence of non-Gaussian interference." Thesis, Queensland University of Technology, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
30

Segalas, Corentin. "Inférence dans les modèles à changement de pente aléatoire : application au déclin cognitif pré-démence." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0298/document.

Full text
Abstract:
Le but de ce travail a été de proposer des méthodes d'inférence pour décrire l'histoire naturelle de la phase pré-diagnostic de la démence. Durant celle-ci, qui dure une quinzaine d'années, les trajectoires de déclin cognitif sont non linéaires et hétérogènes entre les sujets. Pour ces raisons, nous avons choisi un modèle à changement de pente aléatoire pour les décrire. Une première partie de ce travail a consisté à proposer une procédure de test pour l'existence d'un changement de pente aléatoire. En effet, dans certaines sous-populations, le déclin cognitif semble lisse et la question de l'existence même d'un changement de pente se pose. Cette question présente un défi méthodologique en raison de la non-identifiabilité de certains paramètres sous l'hypothèse nulle rendant les tests standards inutiles. Nous avons proposé un supremum score test pour répondre à cette question. Une seconde partie du travail concernait l'ordre temporel du temps de changement entre plusieurs marqueurs. La démence est une maladie multidimensionnelle et plusieurs dimensions de la cognition sont affectées. Des schémas hypothétiques existent pour décrire l'histoire naturelle de la démence mais n'ont pas été éprouvés sur données réelles. Comparer le temps de changement de différents marqueurs mesurant différentes fonctions cognitives permet d'éclairer ces hypothèses. Dans cet esprit, nous proposons un modèle bivarié à changement de pente aléatoire permettant de comparer les temps de changement de deux marqueurs, potentiellement non gaussiens. Les méthodes proposées ont été évaluées sur simulations et appliquées sur des données issues de deux cohortes françaises. Enfin, nous discutons les limites de ces deux modèles qui se concentrent sur une accélération tardive du déclin cognitif précédant le diagnostic de démence et nous proposons un modèle alternatif qui estime plutôt une date de décrochage entre cas et non-cas
The aim of this work was to propose inferential methods to describe the natural history of the pre-diagnosis phase of dementia. During this phase, which can last around fifteen years, the cognitive decline trajectories are nonlinear and heterogeneous between subjects. Because of this heterogeneity and nonlinearity, we chose a random changepoint mixed model to describe these trajectories. A first part of this work was to propose a testing procedure to assess the existence of a random changepoint. Indeed, in some subpopulations, the cognitive decline seems smooth and the question of the existence of a changepoint itself arises. This question is methodologically challenging because of identifiability issues on some parameters under the null hypothesis that make standard tests useless. We proposed a supremum score test to answer this question. A second part of this work was the comparison of the temporal order of the changepoints of different markers. Dementia is a multidimensional disease where different dimensions of cognition are affected. Hypothetical cascade models exist for describing this natural history but have not been evaluated on real data. Comparing change over time of different markers measuring different cognitive functions gives precious insight into these hypotheses. In this spirit, we propose a bivariate random changepoint model allowing proper comparison of the times of change of two cognitive markers, potentially non-Gaussian. The proposed methodologies were evaluated in simulation studies and applied to real data from two French cohorts. Finally, we discussed the limitations of the two models we used, which focus on the late acceleration of the cognitive decline before dementia diagnosis, and we proposed an alternative model that instead estimates the time of differentiation between cases and non-cases.
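As a sketch of the kind of trajectory a random changepoint mixed model describes, the following simulation assumes a subject-specific random changepoint with a smooth transition between two slopes; this parameterization and all values are illustrative, not necessarily those of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def trajectory(t, b0, b1, b2, tau, gamma=1.0):
    """Slope b1 before and b1 + b2 after a changepoint tau,
    blended smoothly over a transition scale gamma."""
    return b0 + b1 * t + b2 * (t - tau) * (1 + np.tanh((t - tau) / gamma)) / 2

t = np.linspace(0, 15, 8)                   # visit times (years)
for subject in range(3):
    tau = rng.normal(10.0, 1.5)             # random changepoint
    b0 = rng.normal(30.0, 2.0)              # random intercept
    y = trajectory(t, b0, -0.1, -1.5, tau) + rng.normal(0, 0.5, t.size)
    print(f"subject {subject}: tau = {tau:5.2f}, scores = {np.round(y, 1)}")
```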
APA, Harvard, Vancouver, ISO, and other styles
31

Montoya, Noguera Silvana. "Evaluation et réduction des risques sismiques liés à la liquéfaction : modélisation numérique de leurs effets dans l’ISS." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC023/document.

Full text
Abstract:
La liquéfaction des sols qui est déclenchée par des mouvements sismiques forts peut modifier la réponse d’un site. Ceci occasionne des dégâts importants dans les structures comme a été mis en évidence lors des tremblements de terre récents tels que celui de Christchurch, Nouvelle-Zélande et du Tohoku, Japon. L’évaluation du risque sismique des structures nécessite une modélisation robuste du comportement non linéaire de sols et de la prise en compte de l’interaction sol-structure (ISS). En général, le risque sismique est décrit comme la convolution entre l’aléa et la vulnérabilité du système. Cette thèse se pose comme une contribution à l’étude, via une modélisation numérique, de l’apparition de la liquéfaction et à l’utilisation des méthodes pour réduire les dommages induits.A cet effet, la méthode des éléments finis(FEM) dans le domaine temporel est utilisée comme outil numérique. Le modèle principal est composé d’un bâtiment fondé sur un sable liquéfiable. Comme la première étape de l’analyse du risque sismique, la première partie de cette thèse est consacrée à la caractérisation du comportement du sol et à sa modélisation.Une attention particulière est donnée à la sensibilité du modèle à des paramètres numériques. En suite, le modèle est validé pour le cas d’une propagation des ondes 1D avec les mesures issus du benchmark international PRENOLIN sur un site japonais. D’après la comparaison, le modèle arrive à prédire les enregistrements dans un test en aveugle.La deuxième partie, concerne la prise en compte dans la modélisation numérique du couplage de la surpression interstitielle (Δpw)et de la déformation du sol. Les effets favorables ou défavorables de ce type de modélisation ont été évalués sur le mouvement en surface du sol lors de la propagation des ondes et aussi sur le tassement et la performance sismique de deux structures.Cette partie contient des éléments d’un article publié dans Acta Geotechnica (Montoya-Noguera and Lopez-Caballero, 2016). Il a été trouvé que l’applicabilité du modèle dépend à la fois du niveau de liquéfaction et des effets d’ISS.Dans la dernière partie, une méthode est proposée pour modéliser la variabilité spatiale ajoutée au dépôt de sol dû à l’utilisation des techniques pour diminuer le degré de liquéfaction. Cette variabilité ajoutée peut différer considérablement de la variabilité inhérente ou naturelle. Dans cette thèse, elle sera modélisée par un champ aléatoire binaire.Pour évaluer l’efficience du mélange, la performance du système a été étudiée pour différents niveaux d’efficacité, c’est-à-dire,différentes fractions spatiales en allant de non traitées jusqu’à entièrement traitées. Tout d’abord le modèle binaire a été testé sur un cas simple, tel que la capacité portante d’une fondation superficielle sur un sol cohérent.Après, il a été utilisé dans le modèle de la structure sur le sol liquéfiable. Ce dernier cas,en partie, a été publié dans la revue GeoRisk (Montoya-Noguera and Lopez-Caballero,2015). En raison de l’interaction entre les deux types de sols du mélange, une importante variabilité est mise en évidence dans la réponse de la structure. En outre, des théories classiques et avancées d’homogénéisation ont été utilisées pour prédire la relation entre l’efficience moyenne et l’efficacité. En raison du comportement non linéaire du sol, les théories traditionnelles ne parviennent pas à prédire la réponse alors que certaines théories avancées qui comprennent la théorie de la percolation peuvent fournir une bonne estimation. 
En ce qui concerne l’effet de la variabilité spatiale ajoutée sur la diminution du tassement de la structure, différents séismes ont été testés et la réponse globale semble dépendre de leur rapport de PHV et PHA
Strong ground motions can trigger soil liquefaction that will alter the propagating signal and induce ground failure. Important damage in structures and lifelines has been evidenced after recent earthquakes such as Christchurch, New Zealand and Tohoku, Japan in 2011. Accurate prediction of the structures’ seismic risk requires a careful modeling of the nonlinear behavior of soil-structure interaction (SSI) systems. In general, seismic risk analysis is described as the convolution between the natural hazard and the vulnerability of the system. This thesis arises as a contribution to the numerical modeling of liquefaction evaluation and mitigation. For this purpose, the finite element method (FEM) in the time domain is used as the numerical tool. The main numerical model consists of a reinforced concrete building with a shallow rigid foundation standing on saturated cohesionless soil. As the initial step in the seismic risk analysis, the first part of the thesis is devoted to the characterization of the soil behavior and its constitutive modeling. Later on, some results of the model’s validation with a real site for the 1D wave propagation in dry conditions are presented. These are issued from the participation in the international benchmark PRENOLIN and concern the PARI site Sendai in Japan. Even though very few laboratory and in-situ data were available, the model agrees well with the recordings for the blind prediction. The second part concerns the numerical modeling of coupling excess pore pressure (Δpw) and soil deformation. The effects were evaluated on the ground motion and on the structure’s settlement and performance. This part contains material from an article published in Acta Geotechnica (Montoya-Noguera and Lopez-Caballero, 2015). The applicability of the models was found to depend on both the liquefaction level and the SSI effects. In the last part, an innovative method is proposed to model spatial variability added to the deposit due to soil improvement techniques used to strengthen soft soils and mitigate liquefaction. Innovative treatment processes such as bentonite permeations and biogrouting, among others, have recently emerged. However, there remain some uncertainties concerning the degree of spatial variability introduced in the design and its effect on the system’s performance. This added variability can differ significantly from the inherent or natural variability; thus, in this thesis, it is modeled by coupling FEM with a binary random field. The efficiency in improving the soil behavior was analyzed in relation to the effectiveness of the method, measured by the amount of soil changed. Two cases were studied: the bearing capacity of a shallow foundation under cohesive soil and the liquefaction-induced settlement of a structure under cohesionless loose soil. The latter, in part, contains material published in the GeoRisk journal (Montoya-Noguera and Lopez-Caballero, 2015). Due to the interaction between the two soils, an important variability is evidenced in the response. Additionally, traditional and advanced homogenization theories were used to predict the relation between the average efficiency and effectiveness. Because of the nonlinear soil behavior, the traditional theories fail to predict the response while some advanced theories which include the percolation theory may provide a good estimate.
Concerning the effect of added spatial variability on soil liquefaction, different input motions were tested and the response of the whole system was found to depend on the ratio of PHV to PHA of the input motion.
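One simple way to realize the binary random field described above, assuming the treated/untreated pattern is obtained by thresholding a smoothed Gaussian field at a target spatial fraction; grid size, correlation length and fraction are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
nz, nx = 20, 80                    # finite-element-like grid of soil cells
corr_len = 3.0                     # correlation length in cells (assumed)
fraction_treated = 0.4             # effectiveness: spatial fraction improved

field = gaussian_filter(rng.standard_normal((nz, nx)), sigma=corr_len)
cutoff = np.quantile(field, 1.0 - fraction_treated)
treated = field > cutoff           # binary random field: True = improved soil

print(f"treated fraction: {treated.mean():.2f}")
# Each cell would then receive 'treated' or 'untreated' soil properties in the FEM model.
```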
APA, Harvard, Vancouver, ISO, and other styles
32

Yeh, Chia-Yu. "THREE ECONOMETRIC APPLICATIONS OF NON-MARKET VALUATION." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1037827614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Baczyk, Maxime. "Influence du champ aléatoire et des interactions à longue portée sur le comportement critique du modèle d'Ising : une approche par le groupe de renormalisation non perturbatif." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066105/document.

Full text
Abstract:
Nous étudions l’influence du champ magnétique aléatoire et des interactions à longue portée sur le comportement critique du modèle d’Ising ; notre approche est basée sur une version non perturbative et fonctionnelle du groupe de renormalisation. Les concepts du groupe de renormalisation non perturbatif sont tout d’abord introduits, puis illustrés dans le cadre simple d’une théorie classique d’un champ scalaire. Nous discutons ensuite les propriétés critiques de cette dernière en présence d’un champ magnétique aléatoire gelé qui traduit le désordre dans le système. Celui-ci est distribué comme un bruit blanc gaussien dans l’espace. Nous insistons principalement sur la propriété de réduction dimensionnelle qui prédit un comportement critique identique pour le modèle en champ aléatoire à d dimensions et le modèle pur (c’est à dire sans champ aléatoire) en dimension d − 2. Bien que cette propriété soit démontrée à tous les ordres par la théorie de perturbation, on montre que celle-ci est brisée en dessous d’une dimension critique dDR = 5.13. La réduction dimensionnelle et sa brisure sont alors reliées aux caractéristiques d’échelle des grandes avalanches intervenant dans le système à température nulle. Nous considérons, dans un second temps, une généralisation du modèle d’Ising dans laquelle l’interaction ferromagnétique décroit désormais à longue portée comme r^−(d+σ) avec σ > 0 (d désigne toujours la dimension de l’espace). Dans un tel système, il est possible de travailler en dimension fixée (incluant la dimension d = 1) et de varier l’exposant σ afin de parcourir une gamme de comportements critiques similaire à celle obtenue entre les dimensions critiques inférieure et supérieure de la version à courte portée du modèle. Nous avons caractérisé la transition de phase dans le plan (σ, d), et notamment calculé les exposants critiques en fonction du paramètre σ pour les dimensions physiquement intéressantes d = 1, 2 et 3. Finalement, on s’intéresse aussi à la théorie en présence d’un champ magnétique aléatoire dont les corrélations décroissent à grande distance comme r^−d+ρ avec ρ > −d. Dans le cas particulier où ρ = 2 − σ, on montre que la propriété de réduction dimensionnelle est vérifiée lorsque σ est suffisamment petit, mais brisée à grand σ (en dimension inférieure à dDR). En particulier, concernant le modèle tridimensionnel, nos résultats prédisent une brisure de réduction dimensionnelle lorsque σ > σDR = 0.71.
We study the influence of the presence of a random magnetic field and of long-ranged interactions on the critical behavior of the Ising model. Our approach is based on a nonperturbative and functional version of the renormalization group. The bases of the nonperturbative renormalization group are introduced first and then illustrated in the simple case of the classical scalar field theory. We next discuss the critical properties of the latter in the presence of a random magnetic field, which is associated with frozen disorder in the system. The distribution of the random field in space is taken as that of a gaussian white noise. We focus on the property of dimensional reduction that predicts identical critical behavior for the random-field model in dimension $d$ and the pure model, i.e. in the absence of random field, in dimension d-2. Although this property is found at all orders of the perturbation theory, it is violated below a critical dimension $d_{DR} \approx 5.13$. We show that the dimensional reduction and its breakdown are related to the large-scale properties of the avalanches that are present in the system at zero temperature. We next consider a generalization of the Ising model in which the ferromagnetic interaction varies at large distance like $r^{-(d+\sigma)}$ with $\sigma > 0$ ($d$ being the spatial dimension). In this system, it is possible to obtain a range of critical behavior similar to that encountered in the short-ranged version of the model between the lower and the upper critical dimensions by varying the exponent $\sigma$ while keeping the dimension $d$ fixed (including the case $d=1$). We have characterized the phase transition of this long-ranged model in the plane $(\sigma,d)$ and computed the critical exponents as a function of the parameter $\sigma$ for the physically interesting dimensions, $d=1,2$ and $3$. Finally, we have also studied the long-ranged random-field Ising model when the correlations of the random magnetic field decrease at large distance as $r^{-d+\rho}$ with $\rho > -d$. In the special case where $\rho=2-\sigma$, we have shown that the dimensional-reduction property is satisfied when $\sigma$ is small enough but breaks down above a critical value (when the spatial dimension $d$ is less than $d_{DR}$). In particular, for $d=3$, we predict a breakdown of dimensional reduction for $\sigma > \sigma_{DR}\approx 0.71$.
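As a toy illustration of how the exponent σ enters the long-ranged model, the sketch below runs a few Metropolis sweeps of a 1D Ising chain with couplings decaying as r^−(1+σ); size, temperature and sweep count are arbitrary and no critical analysis is attempted.

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma, T = 200, 0.5, 2.0
r = np.minimum(np.arange(N), N - np.arange(N))        # periodic distances
J_row = np.where(r > 0, r.astype(float) ** -(1 + sigma), 0.0)
J = np.array([np.roll(J_row, i) for i in range(N)])   # J[i, j] ~ |i-j|^-(1+sigma)

s = rng.choice([-1, 1], size=N)
for sweep in range(200):
    for _ in range(N):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)                  # energy cost of flipping s[i]
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]
print(f"magnetization per spin after 200 sweeps: {s.mean():+.3f}")
```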
APA, Harvard, Vancouver, ISO, and other styles
34

Genbäck, Minna. "Uncertainty intervals and sensitivity analysis for missing data." Doctoral thesis, Umeå universitet, Statistik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-127121.

Full text
Abstract:
In this thesis we develop methods for dealing with missing data in a univariate response variable when estimating regression parameters. Missing outcome data is a problem in a number of applications, one of which is follow-up studies. In follow-up studies data is collected on two (or more) occasions, and it is common that only some of the initial participants return at the second occasion. This is the case in Paper II, where we investigate predictors of decline in self-reported health in older populations in Sweden, the Netherlands and Italy. In that study, around 50% of the study participants drop out. It is common that researchers rely on the assumption that the missingness is independent of the outcome given some observed covariates. This assumption is called data missing at random (MAR) or an ignorable missingness mechanism. However, MAR cannot be tested from the data, and if it does not hold, the estimators based on this assumption are biased. In the study of Paper II, we suspect that some of the individuals drop out due to bad health. If this is the case, the data is not MAR. One alternative to MAR, which we pursue, is to incorporate the uncertainty due to missing data by reporting interval estimates instead of point estimates and uncertainty intervals instead of confidence intervals. An uncertainty interval is the analog of a confidence interval but wider due to a relaxation of assumptions on the missing data. These intervals can be used to visualize the consequences that deviations from MAR have on the conclusions of the study. That is, they can be used to perform a sensitivity analysis of MAR. The thesis covers different types of linear regression. In Papers I and III we have a continuous outcome, in Paper II a binary outcome, and in Paper IV we allow for mixed effects with a continuous outcome. In Paper III we estimate the effect of a treatment, which can be seen as an example of missing outcome data.
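A schematic of the uncertainty-interval idea in the simplest possible setting, assuming the analyst bounds the unobserved dropout mean within a plausible range and reports the union of the resulting limits; the bounds and the crude combination below are illustrative, not the estimators derived in the thesis.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(70, 10, size=500)             # health score of completers
n_missing = 500                              # dropouts with unobserved outcomes

lo_assumed, hi_assumed = 55.0, 70.0          # analyst's bounds on the dropout mean
n = y.size + n_missing
se = y.std(ddof=1) / np.sqrt(y.size)         # (ignores uncertainty in the bounds)

lower = (y.size * (y.mean() - 1.96 * se) + n_missing * lo_assumed) / n
upper = (y.size * (y.mean() + 1.96 * se) + n_missing * hi_assumed) / n
print(f"uncertainty interval for the overall mean: ({lower:.1f}, {upper:.1f})")
```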
APA, Harvard, Vancouver, ISO, and other styles
35

Rossiter, Angelina Jane. "Solubility and crystal growth of sodium nitrate from mixed alcohol – water solvents." Thesis, Curtin University, 2009. http://hdl.handle.net/20.500.11937/505.

Full text
Abstract:
Due to the ductile nature of the sodium nitrate crystal, which deforms plastically under high levels of strain, most of the crystal growth studies in aqueous solution have focussed on the influence of tensile strain, supersaturation and dislocation, using x-ray surface topography to characterise the dislocation structure of the crystal. Most of the crystal growth studies have also focussed on growth from the melt, since single crystals of sodium nitrate find application in optical pumping experiments, are a potential substitute for calcite in the preparation of polarising prisms and are interesting for the study of plastic properties because two types of plastic deformation, glide and twinning, take place in these crystals at room temperature. Its crystal habit is also difficult to modify, and many researchers have used dyes to investigate their effect. Sodium nitrate is also a highly soluble substance, with 96g of sodium nitrate dissolving in 100g of water at 30.0°C, making aqueous solutions of this salt and its supersaturation extremely unstable. Literature on its solubility in organic solvents, such as methanol, ethanol and isopropanol, is quite outdated and limited to specific conditions. This study involved the determination of the solubility of sodium nitrate in aqueous methanol, ethanol and isopropanol solutions at different temperatures and weight percents of the organic solvents. Splitting into two liquid phases was observed when using isopropanol; however, this phase separation does not occur at low and high mass fractions of alcohol, as at lower concentrations of one solvent the two solvents are miscible. In the presence of methanol and ethanol, by contrast, the solubility of sodium nitrate in water was significantly reduced, with the solubility decreasing with increasing molecular weight of the alcohol. The experimental data for methanol and ethanol were used for the determination of the ion-specific non-random two-liquid (NRTL) parameters by correlating with the modified extended NRTL model. For both methanol and ethanol the model was found to correlate the data satisfactorily at low to moderate concentrations of alcohol. However, as the concentration of alcohol rises, the model prediction was found to be less satisfactory, probably due to the interaction parameters of NRTL between alcohol and the ions not being able to represent the low solubility of electrolytes. The growth rates of individual faces of sodium nitrate crystals, grown in situ in a batch cell and observed with an optical microscope, were measured at different temperatures (20.0, 30.0 and 40.0°C) and relative supersaturations (0.02, 0.04, 0.06, 0.08 and 0.1) to determine the kinetics of growth for homogeneous nucleation. A combined growth order of 1 and an activation energy of 23,580 J/mol were obtained, indicating that crystal growth in these sets of experiments was diffusion controlled. Crystal growth rates were also obtained for sodium nitrate crystal seeds grown at 20.0°C at supersaturations of 0.02 and 0.04, in a modified growth cell where the saturated solution was circulated at a flow rate of 4 mL/min. The crystal growth rates obtained were much lower in comparison to the growth rates obtained by homogeneous nucleation.
In both sets of experiments size-independent growth was observed. The surface morphology of the crystals was also observed by optical microscopy, scanning electron microscopy (SEM) and atomic force microscopy (AFM) for the crystals grown by homogeneous nucleation, to elucidate the mechanism of growth. Liquid inclusions were observed by optical microscopy for crystals that were grown at high temperatures and for a long duration. SEM revealed the presence of pitting on the crystal surface due to the high solubility of sodium nitrate, while AFM images showed the presence of growth hillocks, which suggests that crystal growth is surface integration controlled. However, the presence of growth hillocks could have been caused by the formation of some nuclei and surface artefacts when the crystal was taken out from solution. In seeded crystal growth experiments the solute was observed by optical microscopy to deposit onto the crystal surface. The effect of solvent composition on the growth rate and habit modification of sodium nitrate was also investigated with aqueous solutions of methanol and ethanol. Crystal growth rates of sodium nitrate crystals grown in situ in a batch cell by homogeneous nucleation in aqueous ethanol at 30.0°C at 20, 50 and 90 weight percent of ethanol, and of crystal seeds grown at 20.4°C at supersaturations of 0.02 and 0.04 at 30, 50 and 90 weight percent of ethanol in a modified growth cell, were measured. It was found that growth rates decrease with increasing amounts of ethanol and that the habit of the crystal remains unchanged. The growth rate was also observed to be much lower than the growth rates obtained from pure aqueous solution. For crystals grown by homogeneous nucleation it was observed that with increasing supersaturation, decreasing weight percent of ethanol and increasing crystal size, the number of liquid inclusions observed on the crystal surfaces increased, whereas for seeded crystal growth solute was observed to deposit onto the crystal surface mainly at low alcohol weight percents. Sodium nitrate crystals grown in aqueous methanol were also observed to behave similarly to crystals grown in ethanol, with lower growth rates obtained. In all cases size-independent growth was observed. The influence of the additives DOWFAX 3B2 and amaranth on the habit modification of sodium nitrate was also investigated for crystals grown by homogeneous nucleation at 20.0°C at a supersaturation of 0.04. Both additives were observed to be effective in changing the crystal habit of sodium nitrate, with the appearance of triangular truncations or octahedral facets at the corners of the sodium nitrate crystal due to the additive being adsorbed onto the crystal surface. The influence of the additives on the crystal habit modification can be explained by the presence of the anionic polar group, the sulphonate group. The growth ratio value for DOWFAX 3B2 was also found to decrease with increasing additive concentration. It is believed that the results of this thesis provide up-to-date data on the solubility of sodium nitrate in aqueous ethanol, and for temperatures and weight percents that have not been reported before in the literature for the aqueous methanol system. The work reported on crystal growth studies by homogeneous nucleation and using crystal seeds, and on the effect of solvent and DOWFAX 3B2 on crystal growth rates and habit modification, is also new and has not been reported in the literature before.
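For orientation, the textbook binary NRTL activity-coefficient equations that underlie such solubility correlations are sketched below; the thesis fits an ion-specific, modified extended NRTL, so the two-parameter form and the values used here are only an illustrative starting point.

```python
import numpy as np

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Textbook binary NRTL activity coefficients (gamma1, gamma2)."""
    x2 = 1.0 - x1
    G12, G21 = np.exp(-alpha * tau12), np.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return np.exp(ln_g1), np.exp(ln_g2)

# Illustrative parameters only; fitted ion-specific values are what the thesis estimates.
for x1 in (0.1, 0.5, 0.9):
    g1, g2 = nrtl_binary(x1, tau12=1.2, tau21=0.8)
    print(f"x1 = {x1:.1f}: gamma1 = {g1:.3f}, gamma2 = {g2:.3f}")
```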
APA, Harvard, Vancouver, ISO, and other styles
36

Gulikers, Lennart. "Sur deux problèmes d’apprentissage automatique : la détection de communautés et l’appariement adaptatif." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE062/document.

Full text
Abstract:
Dans cette thèse, nous étudions deux problèmes d'apprentissage automatique : (I) la détection des communautés et (II) l'appariement adaptatif. I) Il est bien connu que beaucoup de réseaux ont une structure en communautés. La détection de ces communautés nous aide à comprendre et exploiter des réseaux de tout genre. Cette thèse considère principalement la détection des communautés par des méthodes spectrales utilisant des vecteurs propres associés à des matrices choisies avec soin. Nous faisons une analyse de leur performance sur des graphes artificiels. Au lieu du modèle classique connu sous le nom de « Stochastic Block Model » (dans lequel les degrés sont homogènes) nous considérons un modèle où les degrés sont plus variables : le « Degree-Corrected Stochastic Block Model » (DC-SBM). Dans ce modèle les degrés de tous les nœuds sont pondérés - ce qui permet de générer des suites de degrés hétérogènes. Nous étudions ce modèle dans deux régimes: le régime dense et le régime « épars », ou « dilué ». Dans le régime dense, nous prouvons qu'un algorithme basé sur une matrice d'adjacence normalisée réussit à classifier correctement tous les nœuds sauf une fraction négligeable. Dans le régime épars il existe un seuil en termes de paramètres du modèle en dessous duquel n'importe quel algorithme échoue par manque d'information. En revanche, nous prouvons qu'un algorithme utilisant la matrice « non-backtracking » réussit jusqu'au seuil - cette méthode est donc très robuste. Pour montrer cela nous caractérisons le spectre des graphes qui sont générés selon un DC-SBM dans son régime épars. Nous concluons cette partie par des tests sur des réseaux sociaux. II) Les marchés d'intermédiation en ligne tels que des plateformes de Question-Réponse et des plateformes de recrutement nécessitent un appariement basé sur une information incomplète des deux parties. Nous développons un modèle de système d'appariement entre tâches et serveurs représentant le comportement de telles plateformes. Pour ce modèle nous donnons une condition nécessaire et suffisante pour que le système puisse gérer un certain flux de tâches. Nous introduisons également une politique de « back-pressure » sous laquelle le débit gérable par le système est maximal. Nous prouvons que cette politique atteint un débit strictement plus grand qu'une politique naturelle « gloutonne ». Nous concluons en validant nos résultats théoriques avec des simulations entrainées par des données de la plateforme Stack-Overflow.
In this thesis, we study two problems of machine learning: (I) community detection and (II) adaptive matching. I) It is well-known that many networks exhibit a community structure. Finding those communities helps us understand and exploit general networks. In this thesis we focus on community detection using so-called spectral methods based on the eigenvectors of carefully chosen matrices. We analyse their performance on artificially generated benchmark graphs. Instead of the classical Stochastic Block Model (which does not allow for much degree-heterogeneity), we consider a Degree-Corrected Stochastic Block Model (DC-SBM) with weighted vertices, that is able to generate a wide class of degree sequences. We consider this model in both a dense and sparse regime. In the dense regime, we show that an algorithm based on a suitably normalized adjacency matrix correctly classifies all but a vanishing fraction of the nodes. In the sparse regime, we show that the availability of only a small amount of information entails the existence of an information-theoretic threshold below which no algorithm performs better than random guess. On the positive side, we show that an algorithm based on the non-backtracking matrix works all the way down to the detectability threshold in the sparse regime, showing the robustness of the algorithm. This follows after a precise characterization of the non-backtracking spectrum of sparse DC-SBM's. We further perform tests on well-known real networks. II) Online two-sided matching markets such as Q&A forums and online labour platforms critically rely on the ability to propose adequate matches based on imperfect knowledge of the two parties to be matched. We develop a model of a task / server matching system for (efficient) platform operation in the presence of such uncertainty. For this model, we give a necessary and sufficient condition for an incoming stream of tasks to be manageable by the system. We further identify a so-called back-pressure policy under which the throughput that the system can handle is optimized. We show that this policy achieves strictly larger throughput than a natural greedy policy. Finally, we validate our model and confirm our theoretical findings with experiments based on user-contributed content on an online platform
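A small sketch of the non-backtracking matrix central to the sparse-regime result: it is indexed by directed edges, with B[(u→v),(v→w)] = 1 whenever w ≠ u. The graph below is arbitrary.

```python
import numpy as np

# Undirected edges of a small arbitrary graph
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
darts = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]  # directed edges

m = len(darts)
B = np.zeros((m, m))
for i, (u, v) in enumerate(darts):
    for j, (x, y) in enumerate(darts):
        if x == v and y != u:          # follow (u->v) by (v->w) without backtracking
            B[i, j] = 1.0

eigvals = np.linalg.eigvals(B)
print("largest |eigenvalue|:", round(float(np.abs(eigvals).max()), 3))
```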
APA, Harvard, Vancouver, ISO, and other styles
37

Yan, Huijie. "Challenges of China’s sustainability : integrating energy, environment and health policies." Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1092.

Full text
Abstract:
Dans le but de faire face aux défis interdépendants en termes d’épuisement des ressources énergétiques, de dégradation environnementale et des préoccupations de santé publique dans le contexte chinois en réponse au développement durable, nous nous concentrons sur l'étude des politiques en matière d’énergie, d’environnement et de santé en Chine. Dans le chapitre 1, nous donnons un aperçu des politiques chinoises en matière d’énergie, d’environnement et de santé au cours des 20 dernières années afin de connaître les orientations politiques futures auxquelles le gouvernement n'a pas donné une attention suffisante. Dans les trois chapitres suivants, nous proposons une série d'études empiriques afin de tirer quelques implications politiques utiles. Dans le chapitre 2, nous étudions l'impact de l'urbanisation, de l'adaptation de la structure industrielle, du prix de l'énergie et de l'exportation sur les intensités énergétiques agrégés et désagrégés des provinces. Dans le chapitre 3, nous étudions les facteurs qui expliquent la transition énergétique vers des combustibles propres des ménages ruraux. Dans le chapitre 4, nous examinons les effets conjoints des risques environnementaux, du revenu individuel, des politiques de santé sur l'état de santé des adultes chinois. En particulier, nos résultats empiriques suggèrent d’intégrer le développement urbain dans la stratégie d'économies d'énergie; de considérer des substitutions/complémentarités complexes parmi les sources d'énergie et entre l'énergie et l’alimentation pour les ménages ruraux; d’aligner les politiques environnementales, énergétiques et alimentaires avec les politiques de santé
With the purpose of coping with the intertwined challenges of energy depletion, environmental degradation and public health concerns in the specific Chinese context in response to sustainable development, we focus on investigating China’s energy, environment and health policies. In chapter 1, we provide an overview of China’s energy, environment and health policies over the past 20 years in order to identify future policy directions to which the government has not given sufficient attention. In the following three chapters, we provide a series of empirical studies so as to derive some useful policy implications. In chapter 2, we investigate the impact of urbanization, industrial structure adjustment, energy price and export on provincial aggregate and disaggregate energy intensities. In chapter 3, we study the factors explaining the switches from dirty to clean fuel sources in rural households. In chapter 4, we examine the joint effects of environmental hazards, individual income and health policies on the health status of Chinese adults. Our empirical findings particularly suggest integrating urban development into the strategy of energy saving; considering the complex substitutions/complementarities among energy sources and between energy and food for rural households; and aligning the environment, energy and food policies with health policies.
APA, Harvard, Vancouver, ISO, and other styles
38

Jagelka, Tomáš. "Preferences, Ability, and Personality : Understanding Decision-making Under Risk and Delay." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX028/document.

Full text
Abstract:
Les préférences, les aptitudes et la personnalité prédisent un large éventail de réalisations économiques. Je les mets en correspondance dans un cadre structurel de prise de décision en utilisant des données expérimentales uniques collectées sur plus de 1 200 personnes prenant chacune plus de 100 décisions à enjeu financier.J’estime conjointement les distributions des préférences pour le risque et le temps dans la population, leur stabilité au niveau individuel et la tendance des gens à faire des erreurs. J’utilise le modèle à préférences aléatoires (RPM) dont il a été récemment démontré que ses propriétés théoriques sont supérieures à celles des modèles précédemment employés. Je montre que le RPM a une forte validité interne. Les cinq paramètres structurels estimés dominent un large éventail de variables démographiques et socio-économiques lorsqu'il s'agit d'expliquer des choix individuels observés.Je démontre l’importance économique et économétrique de l’utilisation des chocs aux préférences et de l’incorporation du paramètre dit de « la main tremblante ». Les erreurs et l’instabilité des préférences sont liées à des capacités différentes. Je propose un indice de rationalité qui les condense en un indicateur unique prédictif des pertes de bien-être.J'utilise un modèle à facteurs pour extraire la capacité cognitive et les « Big Five » traits de la personnalité à partir de nombreuses mesures. Ils expliquent jusqu’à 50% de la variation des préférences des gens et de leur capacité à faire des choix rationnels. La conscienciosité explique à elle seule 45% et 10% de la variation transversale du taux d'actualisation et de l'aversion au risque, ainsi que 20% de la variation de leur stabilité individuelle. En outre, l'aversion au risque est liée à l'extraversion et les erreurs dépendent des capacités cognitives, de l’effort, et des paramètres des tâches. Les préférences sont stables pour l'individu médian. Néanmoins, une partie de la population a une certaine instabilité des préférences qui est indicative d’une connaissance de soi imparfaite.Ces résultats ont des implications à la fois pour la spécification des modèles économiques de forme réduite et structurels, et aussi pour l’explication des inégalités et de la transmission intergénérationnelle du statut socio-économique
Preferences, ability, and personality predict a wide range of economic outcomes. I establish a mapping between them in a structural framework of decision-making under risk and delay using unique experimental data with information on over 100 incentivized choice tasks for each of more than 1,200 individuals. I jointly estimate population distributions of risk and time preferences, complete with their individual-level stability, and of people’s propensity to make mistakes. I am the first to do so using the Random Preference Model (RPM), which has recently been shown to have desirable theoretical properties over previously used frameworks. I show that the RPM has high internal validity. The five estimated structural parameters largely dominate a wide range of demographic and socio-economic variables when it comes to explaining observed individual choices between risky lotteries and time-separated payments. I demonstrate the economic and econometric significance of appending shocks directly to preferences and of incorporating the trembling hand parameter - their necessary complement in this framework. Mistakes and preference instability are not only separately identified but they are also linked to different cognitive and non-cognitive skills. I propose a Rationality Index which condenses them into a single indicator predictive of welfare loss. I use a factor model to extract cognitive ability and Big Five personality traits from noisy measures. They explain up to 50% of the variation in both average preferences and individuals’ capacity to make consistent rational choices. Conscientiousness explains 45% and 10% of the cross-sectional variation in discount rates and risk aversion, respectively, as well as 20% of the variation in their individual-level stability. Furthermore, risk aversion is related to extraversion, and mistakes are a function of cognitive ability, task design, and effort. Preferences are stable for the median individual. Nevertheless, a part of the population exhibits some degree of preference instability consistent with imperfect self-knowledge. These results have implications both for specifying reduced-form and structural economic models, and for explaining inequality and the inter-generational transmission of socioeconomic status.
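An illustrative simulation of the two noise channels the abstract separates, assuming CRRA utility: a random-preference shock perturbs the risk-aversion coefficient at each task, while an independent trembling-hand probability flips the choice; lotteries and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

def crra(x, r):
    """CRRA utility of a positive payoff x with relative risk aversion r."""
    return np.log(x) if abs(r - 1) < 1e-9 else x ** (1 - r) / (1 - r)

def eu(lottery, r):
    return sum(p * crra(x, r) for p, x in lottery)

safe  = [(1.0, 30.0)]
risky = [(0.5, 60.0), (0.5, 10.0)]

r_bar, shock_sd, tremble = 0.6, 0.2, 0.05    # mean preference, RPM shock, tremble prob
choices = []
for _ in range(10000):
    r = r_bar + rng.normal(0.0, shock_sd)    # random preference draw for this task
    pick_risky = eu(risky, r) > eu(safe, r)
    if rng.random() < tremble:               # trembling hand: uniform mistake
        pick_risky = not pick_risky
    choices.append(pick_risky)
print(f"share choosing the risky lottery: {np.mean(choices):.3f}")
```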
APA, Harvard, Vancouver, ISO, and other styles
39

Nguyen, Ngoc Bien. "Adaptation via des inéqualités d'oracle dans le modèle de regression avec design aléatoire." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4716/document.

Full text
Abstract:
À partir des observations Z(n) = {(Xi, Yi), i = 1, ..., n} satisfaisant Yi = f(Xi) + ζi, nous voulons reconstruire la fonction f. Nous évaluons la qualité d'estimation par deux critères : le risque Ls et le risque uniforme. Dans ces deux cas, les hypothèses imposées sur la distribution du bruit ζi seront de moment borné et de type sous-gaussien, respectivement. En proposant une collection d'estimateurs à noyau, nous construisons une procédure, initiée par Goldenshluger et Lepski, pour choisir l'estimateur dans cette collection, sans aucune condition sur f. Nous prouvons ensuite que cet estimateur satisfait une inégalité d'oracle, qui nous permet d'obtenir les estimations minimax et minimax adaptatives sur les classes de Hölder anisotropes.
From the observations Z(n) = {(Xi, Yi), i = 1, ..., n} satisfying Yi = f(Xi) + ζi, we would like to approximate the function f. This problem will be considered for two cases of loss function, the Ls-risk and the uniform risk, where the condition imposed on the distribution of the noise ζi is of bounded moment and of sub-gaussian type, respectively. From a proposed family of kernel estimators, we construct a procedure, initiated by Goldenshluger and Lepski, to choose a final estimator in this family, without any assumption imposed on f. Then, we show that this estimator satisfies an oracle inequality, which implies minimax and minimax adaptive estimation over the anisotropic Hölder classes.
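A sketch of the kernel-estimator family such a procedure selects from, here a Nadaraya-Watson estimator over a grid of bandwidths; the actual Goldenshluger-Lepski selection compares pairs of estimators against majorants, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
X = rng.random(n)                                  # random design
Y = np.sin(2 * np.pi * X) + 0.3 * rng.standard_normal(n)

def nw_estimator(x, h):
    """Nadaraya-Watson estimate at points x with Gaussian kernel, bandwidth h."""
    w = np.exp(-0.5 * ((x[:, None] - X[None, :]) / h) ** 2)
    return (w * Y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(0.05, 0.95, 200)
truth = np.sin(2 * np.pi * grid)
for h in (0.01, 0.05, 0.2):                        # family indexed by bandwidth
    err = np.abs(nw_estimator(grid, h) - truth).max()
    print(f"h = {h:4.2f}: sup-norm error = {err:.3f}")
```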
APA, Harvard, Vancouver, ISO, and other styles
40

Cardozo, Sandra Vergara. "Função da probabilidade da seleção do recurso (RSPF) na seleção de habitat usando modelos de escolha discreta." Universidade de São Paulo, 2009. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-11032009-143806/.

Full text
Abstract:
Em ecologia, o comportamento dos animais é freqüentemente estudado para entender melhor suas preferências por diferentes tipos de alimento e habitat. O presente trabalho está relacionado a este tópico, dividindo-se em três capítulos. O primeiro capítulo refere-se à estimação da função da probabilidade da seleção de recurso (RSPF) comparado com um modelo de escolha discreta (DCM) com uma escolha, usando as estatísticas qui-quadrado para obter as estimativas. As melhores estimativas foram obtidas pelo método DCM com uma escolha. No entanto, os animais não fazem a sua seleção baseados apenas em uma escolha. Com RSPF, as estimativas de máxima verossimilhança, usadas pela regressão logística ainda não atingiram os objetivos, já que os animais têm mais de uma escolha. R e o software Minitab e a linguagem de programação Fortran foram usados para obter os resultados deste capítulo. No segundo capítulo discutimos mais a verossimilhança do primeiro capítulo. Uma nova verossimilhança para a RSPF é apresentada, a qual considera as unidades usadas e não usadas, e métodos de bootstrapping paramétrico e não paramétrico são usados para estudar o viés e a variância dos estimadores dos parâmetros, usando o programa FORTRAN para obter os resultados. No terceiro capítulo, a nova verossimilhança apresentada no capítulo 2 é usada com um modelo de escolha discreta, para resolver parte do problema apresentado no primeiro capítulo. A estrutura de encaixe é proposta para modelar a seleção de habitat de 28 corujas manchadas (Strix occidentalis), assim como uma generalização do modelo logit encaixado, usando a maximização da utilidade aleatória e a RSPF aleatória. Métodos de otimização numérica, e o sistema computacional SAS, são usados para estimar os parâmetros de estrutura de encaixe.
In ecology, the behavior of animals is often studied to better understand their preferences for different types of habitat and food. The present work is concerned with this topic. It is divided into three chapters. The first concerns the estimation of a resource selection probability function (RSPF) compared with a discrete choice model (DCM), using chi-squared statistics to obtain estimates. The best estimates were obtained by the DCM method. Nevertheless, animals do not make their selection based on a single choice. With RSPF, the maximum likelihood estimates used with the logistic regression still did not reach the objectives, since the animals have more than one choice. R and Minitab software and the FORTRAN programming language were used for the computations in this chapter. The second chapter discusses further the likelihood presented in the first chapter. A new likelihood for a RSPF is presented, which takes into account the units used and not used, and parametric and non-parametric bootstrapping are employed to study the bias and variance of parameter estimators, using a FORTRAN program for the calculations. In the third chapter, the new likelihood presented in chapter 2 is used with a discrete choice model to resolve a part of the problem presented in the first chapter. A nested structure is proposed for modelling habitat selection by 28 spotted owls (Strix occidentalis), as well as a generalized nested logit model using random utility maximization and a random RSPF. Numerical optimization methods and the SAS system were employed to estimate the nested structural parameters.
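A sketch of the RSPF estimation step, assuming used/available resource units with covariates and a logistic form for the selection probability; the data are simulated, not the owl data of the thesis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 1000
X = rng.standard_normal((n, 2))            # habitat covariates of available units
true_beta = np.array([1.5, -1.0])
p_select = 1 / (1 + np.exp(-(X @ true_beta - 0.5)))
used = rng.random(n) < p_select            # True = unit used by the animal

model = LogisticRegression().fit(X, used)
print("estimated coefficients:", model.coef_.round(2),
      "intercept:", model.intercept_.round(2))
# The fitted probability surface is the estimated resource selection probability function.
```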
APA, Harvard, Vancouver, ISO, and other styles
41

Lawson, Brodie Alexander James. "Cell migration and proliferation on homogeneous and non-homogeneous domains : modelling on the scale of individuals and populations." Thesis, Queensland University of Technology, 2013. https://eprints.qut.edu.au/61066/1/Brodie_Lawson_Thesis.pdf.

Full text
Abstract:
Cell migration is a behaviour critical to many key biological effects, including wound healing, cancerous cell invasion and morphogenesis, the development of an organism from an embryo. However, given that each of these situations is distinctly different and cells are extremely complicated biological objects, interest lies in more basic experiments which seek to remove conflating factors and present a less complex environment within which cell migration can be experimentally examined. These include in vitro studies like the scratch assay or circle migration assay, and ex vivo studies like the colonisation of the hindgut by neural crest cells. The reduced complexity of these experiments also makes them much more enticing as problems to model mathematically, as is done here. The primary goal of the mathematical models used in this thesis is to shed light on which cellular behaviours work to generate the travelling waves of invasion observed in these experiments, and to explore how variations in these behaviours can potentially predict differences in this invasive pattern which are experimentally observed when cell types or chemical environment are changed. Relevant literature has already identified the difficulty of distinguishing between these behaviours when using traditional mathematical biology techniques operating on a macroscopic scale, and so here a sophisticated individual-cell-level model, an extension of the Cellular Potts Model (CPM), has been constructed and used to model a scratch assay experiment. This model includes a novel mechanism for dealing with cell proliferation that allows the differing properties of quiescent and proliferative cells to be implemented in their behaviour. This model is considered both for its predictive power and is used to make comparisons with the travelling waves which result from more traditional macroscopic simulations. These comparisons demonstrate a surprising amount of agreement between the two modelling frameworks, and suggest further novel modifications to the CPM that would allow it to better model cell migration. Considerations of the model’s behaviour are used to argue that the dominant effect governing cell migration (random motility or signal-driven taxis) likely depends on the sort of invasion demonstrated by cells, as easily seen by microscopic photography. Additionally, a scratch assay simulated on a non-homogeneous domain consisting of a ‘fast’ and a ‘slow’ region is also used to further differentiate between these different potential cell motility behaviours. A heterogeneous domain is a novel situation which has not been considered mathematically in this context, nor has it been constructed experimentally to the best of the candidate’s knowledge. Thus this problem serves as a thought experiment used to test the conclusions arising from the simulations on homogeneous domains, and to suggest what might be observed should this non-homogeneous assay situation be experimentally realised. Non-intuitive cell invasion patterns are predicted for diffusely-invading cells which respond to a cell-consumed signal or nutrient, contrasted with rather expected behaviour in the case of random-motility-driven invasion. The potential experimental observation of these behaviours is demonstrated by the individual-cell-level model used in this thesis, which does agree with the PDE model in predicting these unexpected invasion patterns.
In the interest of examining such a case of a non-homogeneous domain experimentally, some brief suggestion is made as to how this could be achieved.
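A much-simplified individual-based sketch of a scratch assay, assuming volume-excluding agents on a one-dimensional lattice with random motility and proliferation; the thesis's Cellular Potts framework is far richer, and the rates below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(9)
L, p_move, p_prolif = 200, 1.0, 0.02
occ = np.zeros(L, dtype=bool)
occ[:40] = True                              # scratched: cells only on the left

for step in range(400):
    for i in rng.permutation(np.flatnonzero(occ)):
        j = i + rng.choice([-1, 1])          # target site for motility/proliferation
        if 0 <= j < L and not occ[j]:
            if rng.random() < p_prolif:      # proliferate into the empty site
                occ[j] = True
            elif rng.random() < p_move:      # otherwise attempt a move
                occ[i], occ[j] = False, True

front = np.flatnonzero(occ).max()
print(f"front position after 400 steps: site {front} of {L}")
```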
APA, Harvard, Vancouver, ISO, and other styles
42

Rangasamy, Jothi Ramalingam. "Cryptographic techniques for managing computational effort." Thesis, Queensland University of Technology, 2012. https://eprints.qut.edu.au/61007/1/Jothi_Rangasamy_Thesis.pdf.

Full text
Abstract:
Availability has become a primary goal of information security and is as significant as other goals, in particular, confidentiality and integrity. Maintaining availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse limited computational resources of a service provider and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Socket Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions are related to capturing a specific property that enables secure outsourcing of cryptographic tasks in partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavors. We show that some generic transformations and concrete constructions enjoy this property and then present a new public-key encryption (PKE) scheme having this property and proof of security under the standard assumptions. Finally, we combine puzzles with PKE schemes for enabling delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
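For intuition, a toy repeated-squaring puzzle in the spirit of modular-exponentiation puzzles (the classic Rivest-style time-lock construction, not the scheme proposed in the thesis): solving takes t sequential squarings, while a verifier holding the factorization of n shortcuts the exponent via Euler's theorem. The primes are tiny toys; real deployments use large random primes.

```python
import random

p, q = 1000003, 1000033         # toy primes
n, phi = p * q, (p - 1) * (q - 1)
t = 100000                      # puzzle difficulty: number of sequential squarings
x = random.randrange(2, n)      # gcd(x, n) = 1 with overwhelming probability

# Solver: t sequential modular squarings, computing x^(2^t) mod n.
y = x
for _ in range(t):
    y = y * y % n

# Verifier with the trapdoor (factorization): reduce the exponent mod phi(n).
e = pow(2, t, phi)
assert y == pow(x, e, n)        # cheap check versus the solver's sequential work
print("puzzle solution verified:", y)
```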
APA, Harvard, Vancouver, ISO, and other styles
43

Cabrol, Sébastien. "Les crises économiques et financières et les facteurs favorisant leur occurrence." Thesis, Paris 9, 2013. http://www.theses.fr/2013PA090019.

Full text
Abstract:
The aim of this thesis is to analyze, from an empirical point of view, both the different varieties of economic and financial crises (typological analysis) and the characteristics of the contexts which could be associated with a likely occurrence of such events, for a sample of 21 advanced countries since 1981, with particular attention to the 2007/2008 subprime turmoil. Consequently, we analyze both the years in which a crisis occurs and the years preceding such events (leading-context analysis, forecasting). This study contributes to the empirical literature by focusing exclusively on crises in advanced economies over the last 30 years, by considering several theoretical types of crises and by taking into account a large number of both economic and financial explanatory variables. As part of this research, we also analyze stylized facts related to the subprime turmoil and our ability to foresee crises from an epistemological perspective. Our empirical results are based on the use of binary classification trees through the CART (Classification And Regression Trees) methodology. This nonparametric and nonlinear statistical technique can manage large data sets and is suitable for identifying threshold effects and complex interactions among variables. Furthermore, the methodology characterizes crises (or contexts preceding a crisis) by several distinct sets of independent variables. We thus identify as leading indicators of economic and financial crises: the variation and volatility of both gold prices and nominal exchange rates, as well as the current account balance (as % of GDP) and the change in the openness ratio. Regarding the typological analysis, we identify two main empirical varieties of crises. First, we highlight "global type" crises characterized by a slowdown in US economic activity (stressing the role and influence of the USA in global economic conditions) and low GDP growth in the countries affected by the turmoil. Second, we find that country-specific high levels of both inflation and exchange rate volatility can be considered evidence of "idiosyncratic type" crises.
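As a rough illustration of the CART methodology described above, the sketch below fits a binary classification tree on synthetic data (scikit-learn's DecisionTreeClassifier implements an optimised CART); the variable names echo the leading indicators listed in the abstract, but the data-generating rule and thresholds are invented for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
cols = ["gold_change", "gold_vol", "ca_balance_gdp", "openness_change", "fx_vol"]
X = rng.normal(size=(500, len(cols)))
# Toy rule: large current-account deficits plus volatile FX raise crisis odds.
y = ((X[:, 2] < -0.5) & (X[:, 4] > 0.3)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
print(export_text(tree, feature_names=cols))   # threshold effects, CART-style
```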
APA, Harvard, Vancouver, ISO, and other styles
44

Tournu, Erik. "Modélisation stochastique du comportement dynamique non linéaire d'un ailetage de turbine : application à une poutre avec contact oblique." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL118N.

Full text
Abstract:
Stochastic dynamic analysis methods are studied for a test specimen with oblique contact. Their purpose is to determine the statistical characteristics of the response of the nonlinear structure subjected to random excitations. The specimen represents a turbine blade whose shrouds are in contact with those of the neighbouring blades. It is modelled using Euler-Bernoulli beam theory for the blade and the smooth Bouc-Wen model for the contact nonlinearity. The results of this behavioural model compare successfully with those of a laboratory mock-up realising the beam with oblique contacts, excited by random processes. The statistical characteristics of the response process of the behavioural model under random excitation are determined by the Monte Carlo method, based on numerical simulation of the processes from their spectral densities, and by the Gaussian stochastic linearisation method based on the Lyapunov equation. The results of the two methods agree and reveal three regimes of behaviour: for low excitations there is sticking; for moderate excitations there is sliding with a constant limiting tangential force; and for high excitations there is sliding with an increasing limiting tangential force.
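A minimal sketch of the kind of Monte Carlo simulation described above: a single-degree-of-freedom oscillator with Bouc-Wen hysteresis driven by Gaussian white noise. All parameter values are illustrative assumptions, not those identified for the blade mock-up.

```python
import numpy as np

# x'' + 2*zeta*w*x' + w^2*(alpha*x + (1-alpha)*z) = f(t), Bouc-Wen z.
w, zeta, alpha = 2 * np.pi, 0.02, 0.5        # frequency, damping, rigidity ratio
A, beta, gamma, nbw = 1.0, 0.5, 0.5, 1.0     # Bouc-Wen shape parameters
dt, steps = 1e-3, 50000
rng = np.random.default_rng(1)

x = v = z = 0.0
xs = np.empty(steps)
for k in range(steps):
    f = rng.normal() / np.sqrt(dt)           # discretised Gaussian white noise
    a = f - 2 * zeta * w * v - w**2 * (alpha * x + (1 - alpha) * z)
    zdot = A * v - beta * abs(v) * abs(z)**(nbw - 1) * z - gamma * v * abs(z)**nbw
    x, v, z = x + dt * v, v + dt * a, z + dt * zdot
    xs[k] = x

print("stationary response std ~", xs[steps // 2:].std())
```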
APA, Harvard, Vancouver, ISO, and other styles
45

Driss, Khodja Kouider. "Etude expérimentale et modélisation de la fonction diélectrique des milieux inhomogènes 2D et 3D." Rouen, 1989. http://www.theses.fr/1989ROUES018.

Full text
Abstract:
To account for the real structure of inhomogeneous media, we propose a 2D and 3D effective medium theory based on a real-space renormalisation procedure. These models use the Kadanoff block technique. The approach was first applied to computer-generated square lattices whose sites are randomly occupied at a given concentration. In a second step, we applied the procedure to real images, namely digitised and binarised transmission electron micrographs of 2D and 3D systems. An interpretation of the similarities and differences between theory and experiment is also presented. The results obtained on simulated lattices capture both the dielectric anomaly and optical percolation. Our approach is thus one of the few theories that model the metal–non-metal transition. We show that the effective dielectric function obeys scaling laws with critical exponents S3D and S2D for polarisation and T3D and T2D for conduction, values close to theoretical estimates. The dielectric functions obtained by applying these models to digitised and binarised transmission electron micrographs are in good agreement with most experimental results.
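To illustrate the Kadanoff block technique mentioned above, the following toy sketch coarse-grains a randomly occupied square lattice with a 2x2 majority rule; it shows only the renormalisation step, not the dielectric-function calculation of the thesis.

```python
import numpy as np

def rg_step(lattice):
    """One 2x2 Kadanoff-block step: a block is 'occupied' if 3+ of its
    4 sites are occupied (strict majority; ties broken downwards)."""
    a = lattice
    blocks = a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2]
    return (blocks >= 3).astype(int)

rng = np.random.default_rng(2)
for p in (0.4, 0.6, 0.8):
    lat = (rng.random((256, 256)) < p).astype(int)   # site occupation prob. p
    for _ in range(4):                               # four successive RG steps
        lat = rg_step(lat)
    print(f"p = {p}: occupied fraction after 4 RG steps = {lat.mean():.2f}")
```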
APA, Harvard, Vancouver, ISO, and other styles
46

Todeschini, Adrien. "Probabilistic and Bayesian nonparametric approaches for recommender systems and networks." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0237/document.

Full text
Abstract:
We propose two novel approaches for recommender systems and networks. In the first part, we first give an overview of recommender systems and concentrate on low-rank approaches for matrix completion. Building on a probabilistic approach, we propose novel penalty functions on the singular values of the low-rank matrix. By exploiting a mixture model representation of this penalty, we show that a suitably chosen set of latent variables enables us to derive an expectation-maximization algorithm to obtain a maximum a posteriori estimate of the completed low-rank matrix. The resulting algorithm is an iterative soft-thresholding algorithm which iteratively adapts the shrinkage coefficients associated with the singular values. The algorithm is simple to implement and can scale to large matrices. We provide numerical comparisons between our approach and recent alternatives showing the interest of the proposed approach for low-rank matrix completion. In the second part, we first introduce some background on Bayesian nonparametrics and in particular on completely random measures (CRMs) and their multivariate extension, the compound CRMs. We then propose a novel statistical model for sparse networks with overlapping community structure. The model is based on representing the graph as an exchangeable point process, and naturally generalizes existing probabilistic models with overlapping block structure to the sparse regime. Our construction builds on vectors of CRMs and has interpretable parameters, each node being assigned a vector representing its level of affiliation to some latent communities. We develop methods for simulating this class of random graphs and for performing posterior inference. We show that the proposed approach can recover interpretable structure from two real-world networks and can handle graphs with thousands of nodes and tens of thousands of edges.
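A minimal sketch of the iterative soft-thresholded SVD idea for matrix completion described above; unlike the thesis's EM scheme, which adapts the shrinkage per singular value, this toy uses a single fixed shrinkage level lam.

```python
import numpy as np

def soft_impute(M, mask, lam=1.0, iters=100):
    """Iteratively refill unobserved entries, then soft-threshold the
    singular values of the filled matrix."""
    X = np.zeros_like(M)
    for _ in range(iters):
        filled = np.where(mask, M, X)             # keep observed entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)              # soft-threshold singular values
        X = (U * s) @ Vt
    return X

rng = np.random.default_rng(3)
M = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 40))   # true rank-5 matrix
mask = rng.random(M.shape) < 0.5                           # 50% of entries observed
Xhat = soft_impute(M, mask)
err = np.linalg.norm((Xhat - M)[~mask]) / np.linalg.norm(M[~mask])
print(f"relative error on unobserved entries: {err:.3f}")
```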
APA, Harvard, Vancouver, ISO, and other styles
47

Shih, Cheng-Yuan, and 施正遠. "A Non-uniform Distribution Random Walk Model for GPRS/UMTS Mobile Communication System." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/19243024622198680112.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Communications Engineering
93 (ROC academic year)
GPRS/UMTS is a well-developed mobile communication network. It delivers information to the correct location using records in a database maintained for location tracking. A mobile station (MS) must therefore perform a location update procedure after moving across a location boundary; this procedure refreshes the MS's location record in the system database. However, location updates are expensive. In cellular network mobility management, cell clusters are used to reduce the operation cost by lowering the number of location updates. Under the cell cluster mechanism, however, the system does not know which cell within the cluster the MS is located in, so it must broadcast the paging signal to all cells in the cluster whenever a paging signal is sent to the MS. As a result of this broadcasting, the paging cost increases. Past research on reducing operation cost in GPRS/UMTS networks has mostly been based on a uniform-distribution random walk model. In reality, MS movement has direction. For this reason, we propose a method for Markov-process path searching, called the Markov Look-up Table for Path Searching (MLTPS). MLTPS computes accurate probabilities under a non-uniform-distribution random walk model and predicts the probable path of the MS, thereby reducing the paging cost.
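The paging-cost idea can be illustrated with a toy Markov mobility model: given a cell-to-cell transition matrix (invented here), the system pages cells in decreasing order of predicted location probability instead of broadcasting. This sketches the general principle only, not the MLTPS algorithm itself.

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],      # illustrative 3-cell transition matrix
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
start = np.array([1.0, 0.0, 0.0])   # MS last seen in cell 0
steps_since_update = 2

prob = start @ np.linalg.matrix_power(P, steps_since_update)
order = np.argsort(prob)[::-1]      # page the most probable cells first
expected = float(np.sum(np.arange(1, len(order) + 1) * prob[order]))
print("paging order:", order.tolist())
print(f"expected cells paged: {expected:.2f} (broadcast would page {len(prob)})")
```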
APA, Harvard, Vancouver, ISO, and other styles
48

Muscolino, G., and Alessandro Palmeri. "Maximum response statistics of MDoF linear structures excited by non-stationary random processes." 2004. http://hdl.handle.net/10454/4034.

Full text
Abstract:
The paper deals with the problem of predicting the maximum response statistics of Multi-Degree-of-Freedom (MDoF) linear structures subjected to non-stationary non-white noises. The extension of two different censored closures of Gumbel type, originally proposed by the authors for the response of Single-Degree-of-Freedom oscillators, is presented. The improvement associated with introducing into the closure a consistent censorship factor, accounting for the response bandwidth, is pointed out. Simple and effective step-by-step procedures are formulated and described in detail. Numerical applications on a realistic 25-storey moment-resisting frame, along with comparisons with classical approximations and Monte Carlo simulations, are also included.
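A toy Monte Carlo analogue of the comparisons reported above: simulate peak responses of a linear oscillator under uniformly modulated white noise and fit a Gumbel model to the maxima. The oscillator and modulation parameters are assumptions.

```python
import numpy as np
from scipy import stats

w, zeta = 2 * np.pi, 0.05          # oscillator frequency (rad/s) and damping
dt, steps, runs = 5e-3, 4000, 200
rng = np.random.default_rng(4)
t = np.arange(steps) * dt
env = np.minimum(t / 2.0, 1.0)     # simple ramp-and-hold modulation

peaks = np.empty(runs)
for r in range(runs):
    x = v = xmax = 0.0
    for k in range(steps):
        f = env[k] * rng.normal() / np.sqrt(dt)    # modulated white noise
        v += dt * (f - 2 * zeta * w * v - w * w * x)
        x += dt * v                                # semi-implicit Euler step
        xmax = max(xmax, abs(x))
    peaks[r] = xmax

loc, scale = stats.gumbel_r.fit(peaks)
p95 = loc - scale * np.log(-np.log(0.95))
print(f"Gumbel fit: loc = {loc:.2f}, scale = {scale:.2f}, 95% fractile = {p95:.2f}")
```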
APA, Harvard, Vancouver, ISO, and other styles
49

Kao, Shih-Che, and 高士哲. "Development of Non-stationary Wind Speed Model and Surface Pressure Random Field Model and Their Applications in Reliability Analyses." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/30989931841735426768.

Full text
Abstract:
Doctoral dissertation
National Taiwan University of Science and Technology
Department of Civil and Construction Engineering
101 (ROC academic year)
Probability information on the uncertain parameters in the limit-state function is required before reliability can be evaluated. The uncertainties considered in evaluating structural reliability under wind loading can arise from the wind loading or from structural properties, such as surface pressure, the strong-wind mean speed, the natural frequency and the damping ratio, where surface pressure varies randomly in space and time. Because of the pronounced uncertainty of wind loading, the main part of this work is the development of a Hidden Markov Chain (HMC) model and a random field model to characterise, respectively, the variation of wind speed in time and the variation of surface pressure in space and time. The uncertainties of the wind loading are quantified using these two models and then used in evaluating structural reliability. Wind speed time series are generally non-stationary, so this research proposes an HMC to analyse field wind speed data and to simulate synthetic wind speed data. Recorded wind speed data are used to demonstrate the implementation of the proposed approach, and the statistical properties of the simulated data are shown to be very similar to those of the field data. Long-term synthetic wind speed data are generated from the HMC and then used to estimate the uncertainty of the strong-wind mean speed. Because the variation of surface pressure in space and time is random, this research proposes a random field model to simulate the uncertainty of surface pressure. Recorded surface pressure data are used to establish an analytical random field model for across-wind integrated pressure data. The across-wind integrated pressure is characterised as a Gaussian random field that is non-homogeneous in space and weakly stationary in time. A functional form of the space-time cross-spectral density of the random field is determined by analysing the integrated pressure data, and the parameters in the function are estimated by fitting those data. If a building is modelled as a MDOF system, the random field must be discretised. The local average method is used to discretise the random field, and the analytical cross-spectral density function of the spatial averages is derived. To show that developing the random field model is necessary, an analytical solution is derived for the reliability of the tip acceleration of a cantilever beam, represented as a linear continuous system, subjected to across-wind integrated pressure modelled as a random field. To demonstrate the application of the two models in reliability analyses, the reliability of the interstorey drift ratio of a building, represented as a MDOF system subjected to across-wind integrated pressure modelled as a random field, is estimated while accounting for uncertainties in the wind loading parameters and structural properties. Because the across-wind integrated pressure is a function of time, the limit-state function can be described as a conditional probability using the fast integration method, and the overall probability is estimated by Monte Carlo simulation. The numerical results are as follows:
1. If the across-wind integrated pressure is uncertain while the strong-wind mean speed, natural frequency and damping ratio are fixed at their mean values, the segment size for the high-mode generalised across-wind force spectra must be smaller than that for the low-mode spectra to achieve the same accuracy for both.
2. Under the same conditions, the segment size for the conditional probability at a high threshold must be smaller than that at a low threshold to achieve the same accuracy for both.
3. For this example, the probability computed while neglecting the uncertainty of the strong-wind mean speed is larger than that computed with all parameter uncertainties for thresholds of 0.005, 0.006 and 0.007, and smaller for a threshold of 0.008.
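A minimal hidden-Markov-chain sketch of the synthetic wind speed generation described above: hidden regimes evolve as a Markov chain and each emits Weibull-distributed speeds. The transition matrix and Weibull parameters are illustrative, not fitted to field data.

```python
import numpy as np

P = np.array([[0.90, 0.09, 0.01],   # calm -> calm/breezy/stormy
              [0.10, 0.85, 0.05],
              [0.02, 0.18, 0.80]])
shape = np.array([2.0, 2.2, 2.5])   # Weibull shape per regime
scale = np.array([3.0, 7.0, 14.0])  # Weibull scale (m/s) per regime

rng = np.random.default_rng(5)
n, state = 8760, 0                  # one "year" of hourly speeds
speeds = np.empty(n)
for k in range(n):
    state = rng.choice(3, p=P[state])              # hidden regime transition
    speeds[k] = scale[state] * rng.weibull(shape[state])   # emitted speed

print(f"mean = {speeds.mean():.2f} m/s, max = {speeds.max():.2f} m/s")
```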
APA, Harvard, Vancouver, ISO, and other styles
50

Chang, Kai-Yun, and 張凱雲. "An Evaluation of Various One-Sided Tolerance Limits for One-Way Random Effects Model under Non-Normal Distributions." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/29354116938492538627.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mathematics
104 (ROC academic year)
This paper presents the results of a computer simulation study of the estimated coverage rates, and the averages and standard deviations, of seven one-sided tolerance limits under the one-way random effects model when data are generated from non-normal distributions. The procedures are from Mee and Owen (1983), Vangel (1992), Krishnamoorthy and Mathew (2004), Harris and Chen (2006a) and Chen and Harris (2006b). The simulation results indicate that the coverage rates of all the tolerance limits are mostly below the nominal confidence level of 0.95, suggesting that tolerance limits derived under the one-way random effects model with a normality assumption may not suit non-normally distributed data.
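The coverage experiment can be illustrated in simplified form: the sketch below computes the normal-theory one-sided tolerance factor via the non-central t distribution and checks its coverage on skewed gamma data, an i.i.d. simplification of the paper's one-way random-effects setting.

```python
import numpy as np
from scipy import stats

n, p, conf, sims = 20, 0.90, 0.95, 5000
# Exact normal-theory factor: k = t'_{conf, n-1}(z_p * sqrt(n)) / sqrt(n).
k = stats.nct.ppf(conf, df=n - 1, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)

shape_g = 2.0                                   # right-skewed population
true_quantile = stats.gamma.ppf(p, shape_g)     # the p-quantile to cover
rng = np.random.default_rng(6)
cover = 0
for _ in range(sims):
    x = rng.gamma(shape_g, size=n)
    if x.mean() + k * x.std(ddof=1) >= true_quantile:   # limit covers quantile?
        cover += 1

print(f"estimated coverage: {cover / sims:.3f} (nominal {conf})")
```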
APA, Harvard, Vancouver, ISO, and other styles