Follow this link to see other types of publications on this topic: Distribution généralisée de Pareto.

Theses on the topic "Distribution généralisée de Pareto"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles.

Consult the top 50 theses for your research on the topic "Distribution généralisée de Pareto."

Next to each source in the reference list you will find an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organize your bibliography correctly.

1

Maynadier, Jérôme. "Approches statistiques et fiabilités en dynamique des structures". Toulouse, INSA, 2003. http://www.theses.fr/2003ISAT0017.

Full text
Abstract
Improving the reliability of the cyclic-symmetry structures of turbomachines requires an accurate estimate of the extreme vibration levels reached by these components. The response amplitudes of cyclic-symmetry structures vary significantly with small structural perturbations known as mistuning. In general, mistunings are random parameters, and their effect on vibration amplitudes is still estimated from the experience of each engine manufacturer; at present it is verified experimentally by installing strain gauges on parts. To keep pace with technological change, however, numerical approaches are necessary. In structural dynamics, the classical way to estimate the probability of reaching a given vibration amplitude is the Monte Carlo method, which is efficient for large probabilities but extremely expensive in computing time for small ones. Since the most critical vibration amplitudes correspond precisely to small probabilities, the probabilistic approaches FORM and SORM are considered first. We then develop an original method called the "separated variables method". Finally, a statistical approach based on extreme value models is retained to estimate the distribution of the largest amplitudes from a limited number of simulations: the generalized Pareto distribution, which models the probability of exceeding a threshold. After these approaches are validated on academic examples, the most efficient ones are applied to a cyclic-symmetry structure represented by a reduced-order model. This type of simplified model represents most of the configurations encountered in operation.
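The threshold-exceedance model named in this abstract is standard enough to sketch. The snippet below is a minimal illustration under assumed inputs (a synthetic amplitude record, an arbitrary 95% threshold), not the author's actual pipeline: it fits a generalized Pareto distribution to the excesses with SciPy and converts the fit into an exceedance probability for a higher level.

```python
# Minimal peaks-over-threshold sketch (illustrative only: threshold choice, data, and
# diagnostics are placeholders, not the thesis's actual pipeline).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
amplitudes = rng.gumbel(loc=1.0, scale=0.3, size=5000)   # stand-in for simulated response amplitudes

u = np.quantile(amplitudes, 0.95)                        # high threshold (placeholder choice)
excesses = amplitudes[amplitudes > u] - u

# Fit the generalized Pareto distribution to the threshold excesses (location fixed at 0).
shape, _, scale = stats.genpareto.fit(excesses, floc=0)

# Exceedance probability of a level y > u:  P(X > y) = P(X > u) * (1 - H(y - u)).
p_u = np.mean(amplitudes > u)
y = u + 1.0
p_exceed = p_u * stats.genpareto.sf(y - u, shape, loc=0, scale=scale)
print(f"shape = {shape:.3f}, scale = {scale:.3f}, P(X > {y:.2f}) ≈ {p_exceed:.2e}")
```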
2

Fillon, Blandine. "Développement d'un outil statistique pour évaluer les charges maximales subies par l'isolation d'une cuve de méthanier au cours de sa période d'exploitation". Thesis, Poitiers, 2014. http://www.theses.fr/2014POIT2337/document.

Full text
Abstract
This thesis focuses on statistical tools for assessing the maxima of sloshing loads in LNG tanks. Depending on the ship's characteristics, its cargo, and the sailing conditions, a hydrodynamic sloshing phenomenon is observed inside the tanks. The loads applied to the tank structure are determined from impact-pressure measurements performed on a scale-model test rig, and the pressure maxima per impact extracted from these measurements are studied. A test run is equivalent to 5 hours at full scale, which is not sufficient to determine pressure maxima associated with long return periods (40 years), so a probabilistic model is needed to extrapolate them. The usual model is a Weibull distribution. Since it is the extreme values of the samples that are of interest, fits are also performed with the generalized extreme value and generalized Pareto distributions via the block-maximum and peaks-over-threshold methods. The originality of the work lies in the use of an alternative measurement system, more suitable for capturing pressure maxima, and of 480 hours of measurements available under the same test conditions. This provides a reference distribution for the pressure maxima and allows the relevance of the selected models to be assessed. Particular attention is paid to evaluating the quality of the fits with statistical tests and to quantifying the uncertainty of the resulting estimates. The methodology has been implemented in a software package called Stat_R, which makes handling and processing the results easier.
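Since the abstract contrasts Weibull fits with generalized extreme value (GEV) and generalized Pareto fits, a block-maximum sketch may help fix ideas. Everything below (the synthetic pressure record, block size, and return period) is a hypothetical stand-in for the model-test data described above.

```python
# Block-maxima / GEV sketch with a return-level estimate (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pressure = rng.weibull(1.5, size=500 * 200) * 10.0       # placeholder impact-pressure record

block_maxima = pressure.reshape(500, 200).max(axis=1)    # 500 blocks of 200 impacts each

# Note: SciPy's genextreme uses shape c = -xi relative to the usual GEV convention.
c, loc, scale = stats.genextreme.fit(block_maxima)

m = 320                                                  # made-up number of blocks in the target return period
return_level = stats.genextreme.ppf(1.0 - 1.0 / m, c, loc=loc, scale=scale)
print(f"GEV fit: c = {c:.3f}, loc = {loc:.2f}, scale = {scale:.2f}; {m}-block return level ≈ {return_level:.2f}")
```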
3

Tencaliec, Patricia. "Developments in statistics applied to hydrometeorology : imputation of streamflow data and semiparametric precipitation modeling". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM006/document.

Full text
Abstract
Precipitation and streamflow are the two most important hydrometeorological variables for the analysis of river watersheds. They provide fundamental information for integrated water resources management, such as drinking-water supply, hydropower, flood and drought forecasting, and irrigation systems. This thesis addresses two distinct problems. The first originates in the study of streamflow records. To characterize the overall behaviour of a watershed properly, long time series spanning several decades are needed; however, missing data in these series represent a loss of information and reliability and can lead to erroneous interpretation of their statistical characteristics. The method proposed for streamflow imputation is based on dynamic regression models (DRMs), specifically a multiple linear regression coupled with ARIMA modelling of the residuals. Unlike earlier studies that address either the inclusion of multiple explanatory variables or the modelling of residuals from a simple linear regression, DRMs take both aspects into account. The method is applied to reconstruct daily streamflow at eight stations in the Durance watershed (France) over a period of 107 years, without using additional explanatory variables. The results are compared with those of a complex approach based on analogs coupled with a hydrological model and with a nearest-neighbour approach; in the majority of cases the DRMs perform better when reconstructing missing blocks of various lengths, in some cases of up to 20 years.

The second problem concerns the statistical modelling of precipitation amounts. Research in this area is very active because the distribution of precipitation is heavy-tailed and, at the start of this thesis, no satisfactory method existed for modelling the whole range of precipitation. Recently, a new class of parametric distributions, the extended generalized Pareto distribution (EGPD), was developed for this purpose. It performs better than previously proposed distributions but lacks flexibility in modelling the bulk of the distribution. To improve this flexibility, two new models relying on semiparametric methods are developed. The first estimator transforms the data with the EGPD cumulative distribution function and then estimates the density of the transformed data with a nonparametric kernel estimator. Its results are compared with those of the parametric EGPD on several simulations and on two precipitation series from the south-east of France; the proposed method behaves better than the EGPD, the mean integrated absolute error (MIAE) of the density being almost twice as small in every case. The second model is a semiparametric EGPD based on Bernstein polynomials, more precisely a sparse mixture of beta densities. Again, the results are compared with those of the parametric EGPD on simulated and real datasets; as before, the MIAE of the density is considerably reduced, and this effect becomes even more marked as the sample size increases.
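A rough sketch of the transformed-kernel idea follows. It assumes the simplest parametric EGPD family, F(x) = H(x; σ, ξ)^κ with H the generalized Pareto distribution function, which is one common choice in the EGPD literature; the thesis's exact family, parameter estimation, and boundary corrections may differ, and all values below are placeholders.

```python
# Transformed-kernel density sketch for positive precipitation amounts (illustrative only).
# Assumed EGPD family: F(x) = H(x; sigma, xi) ** kappa, with H the generalized Pareto cdf.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
rain = stats.genpareto.rvs(0.2, scale=5.0, size=2000, random_state=rng)   # placeholder intensities

kappa, sigma, xi = 1.8, 5.0, 0.2        # placeholder EGPD parameters (estimated in practice)

def egpd_cdf(x):
    return stats.genpareto.cdf(x, xi, scale=sigma) ** kappa

def egpd_pdf(x):
    h = stats.genpareto.pdf(x, xi, scale=sigma)
    return kappa * stats.genpareto.cdf(x, xi, scale=sigma) ** (kappa - 1.0) * h

# 1) transform the data to (0, 1) with the EGPD cdf, 2) kernel-estimate the density of the
#    transformed sample, 3) back-transform: f_hat(x) = k_hat(F(x)) * f(x). Boundary correction omitted.
kde = stats.gaussian_kde(egpd_cdf(rain))

def density_hat(x):
    return kde(egpd_cdf(x)) * egpd_pdf(x)

print(density_hat(np.array([1.0, 5.0, 20.0, 50.0])))
```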
4

Leblanc, Jean-Philippe. "Distribution hyperbolique généralisée et applications financières". Mémoire, Université de Sherbrooke, 2003. http://savoirs.usherbrooke.ca/handle/11143/2360.

Full text
Abstract
This master's thesis presents the generalized hyperbolic distribution and four of its subclasses. The scope of the analysis of this little-known distribution is defined through the limits of its parameters and the distributions derived from them. The second chapter gathers the theory needed for the applications of the third chapter. The financial applications developed in the thesis are generalizations of value at risk, the term structure of interest rates, stochastic volatility, and option pricing by numerical integration and by fixed-point approximation. To facilitate statistical experimentation, an algorithm is also presented for generating random variables following a generalized hyperbolic law defined by its five parameters. The last chapter addresses the estimation and empirical use of the generalized hyperbolic distribution as an analysis tool for financial phenomena.
5

Diamoutene, Abdoulaye. "Contribution de la Théorie des Valeurs Extrêmes à la gestion et à la santé des systèmes". Thesis, Toulouse, INPT, 2018. http://www.theses.fr/2018INPT0139/document.

Full text
Abstract
The operation of a system can in general be affected by an unforeseen incident. When such an incident has serious consequences for both the integrity of the system and the quality of its products, it falls within the scope of so-called extreme events. Researchers are therefore increasingly interested in modelling extreme events for studies such as system reliability and the prediction of the various risks that can hinder the proper functioning of a system. This thesis takes place in that perspective. We use Extreme Value Theory (EVT) and extreme order statistics as decision-support tools for modelling and managing risk in machining and aviation. Specifically, we model the surface roughness of machined parts and the reliability of the associated cutting tool with extreme order statistics, and we use the peaks-over-threshold (POT) approach to predict potential victims in American General Aviation (AGA) following extreme accidents. In addition, systems subject to environmental factors or covariates are most often modelled with proportional hazard models based on the hazard function. In such models the baseline hazard is typically of Weibull type, which is monotonic; yet the analysis of systems such as the cutting tool in industry shows that a system may perform poorly in one phase and improve in the next. Modifications of the Weibull distribution have therefore been proposed to obtain non-monotonic baseline hazards, in particular hazards that first increase and then decrease. Despite these modifications, accounting for extreme operating conditions and the overestimation of risk remain problematic. Starting from the standard Gumbel distribution, we therefore propose an increasing then decreasing baseline hazard function that accounts for extreme operating conditions, and we establish the corresponding mathematical proofs, together with an application example from industry. The thesis is organised in four chapters, plus a general introduction and conclusion. The first chapter recalls basic notions of extreme value theory. The second focuses on the basic concepts of survival analysis, particularly reliability analysis, and proposes an increasing-decreasing hazard function in the proportional hazards model. The third chapter deals with the use of extreme order statistics in machining, notably the detection of defective parts in batches, the reliability of the cutting tool, and the modelling of the best roughness surfaces. The last chapter addresses the prediction of potential victims in American General Aviation from historical data using the peaks-over-threshold approach.
6

Lin, Der-Chen. "Parameter Estimation for Generalized Pareto Distribution". DigitalCommons@USU, 1988. https://digitalcommons.usu.edu/etd/6974.

Full text
Abstract
The generalized Pareto distribution was introduced by Pickands (1975). Three methods of estimating its parameters were compared by Hosking and Wallis (1987): maximum likelihood, the method of moments, and probability-weighted moments. An alternative estimation method for the generalized Pareto distribution, based on least-squares regression of expected order statistics (REOS), is developed and evaluated in this thesis. A Monte Carlo comparison is made between this method and the estimators considered by Hosking and Wallis (1987), and it is shown to be generally superior to maximum likelihood, the method of moments, and probability-weighted moments.
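For orientation, the closed-form method-of-moments estimators mentioned above are easy to state: with sample mean x̄ and variance s², ξ̂ = (1 − x̄²/s²)/2 and σ̂ = x̄(1 − ξ̂). The sketch below compares them with SciPy's maximum-likelihood fit on simulated data; it does not implement the REOS estimator developed in the thesis.

```python
# Method-of-moments vs. maximum-likelihood estimation for the GPD (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
xi_true, sigma_true = 0.1, 2.0
x = stats.genpareto.rvs(xi_true, scale=sigma_true, size=1000, random_state=rng)

# Method of moments (valid for xi < 1/2): xi = (1 - mean^2 / var) / 2, sigma = mean * (1 - xi).
m, v = x.mean(), x.var(ddof=1)
xi_mom = 0.5 * (1.0 - m * m / v)
sigma_mom = m * (1.0 - xi_mom)

# Maximum likelihood via SciPy, with the location fixed at zero.
xi_mle, _, sigma_mle = stats.genpareto.fit(x, floc=0)

print(f"MOM: xi = {xi_mom:.3f}, sigma = {sigma_mom:.3f}")
print(f"MLE: xi = {xi_mle:.3f}, sigma = {sigma_mle:.3f}")
```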
7

Anabila, Moses A. "Skew Pareto distributions". abstract and full text PDF (free order & download UNR users only), 2008. http://0-gateway.proquest.com.innopac.library.unr.edu/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1453191.

Full text
8

Savulytė, Vaida. "Dvimačių Pareto dydžių maksimumų asimptotinė analizė". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2007. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2007~D_20070816_142229-68037.

Full text
Abstract
The aim of this work is to construct a two-dimensional distribution from given one-dimensional (marginal) distributions, to carry out an asymptotic analysis of its maxima, and to study the speed of convergence. The bivariate distribution is constructed in two cases, with independent and with dependent vector components, and the more detailed analysis of the convergence rate is carried out for the dependent case. The Pareto distribution, popular in financial models in recent years, was chosen for the study. In the first part, the bivariate distribution is constructed, its main characteristics are computed, and their existence is examined for all parameter values; random variables whose distributions are the marginals of the constructed distribution function are also generated to support the results experimentally. In the second part, the asymptotic analysis is performed: bivariate maxima are defined and their limit distribution is derived. It is proved that the limit distribution is the same whether the vector components are independent or dependent, which means that although the components are dependent, the maxima are asymptotically independent. An approximate estimate of the convergence rate is then defined and analysed numerically, together with the corresponding errors, to determine the conditions under which they are smallest. The errors turn out to be smaller than the approximate estimate; and although the approximate estimate is smaller in the independent case, the errors behave the other way around, being smaller when the components are dependent. The numerical study of the constructed distribution was carried out in MathCAD, and the computational analysis of the convergence-rate estimates in Matlab, where a user program was written to plot the estimate and the errors.
9

Kondlo, Lwando Orbet. "Estimation of Pareto distribution functions from samples contaminated by measurement errors". Thesis, University of the Western Cape, 2010. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_6141_1297831463.

Full text
Abstract
The intention is to draw more specific connections between certain deconvolution methods and to demonstrate the application of the statistical theory of estimation in the presence of measurement error. A parametric methodology for deconvolution when the underlying distribution is of the Pareto form is developed. Maximum likelihood estimation (MLE) of the parameters of the convolved distributions is considered. Standard errors of the estimated parameters are calculated from the inverse Fisher information matrix and by a jackknife method. Probability-probability (P-P) plots and Kolmogorov-Smirnov (K-S) goodness-of-fit tests are used to evaluate the fit of the posited distribution, and a bootstrapping method is used to calculate the critical values of the K-S test statistic, which are not otherwise available.
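The bootstrap step described above can be sketched as follows. To keep the example short it fits a plain Pareto model rather than the convolved Pareto-plus-measurement-error model of the thesis, so it only illustrates why simulated critical values are needed when parameters are estimated from the data.

```python
# Parametric-bootstrap critical value for a K-S test of a fitted Pareto (simplified:
# no measurement-error convolution, unlike the thesis).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = stats.pareto.rvs(2.5, size=300, random_state=rng)     # placeholder sample

b_hat, loc_hat, scale_hat = stats.pareto.fit(data, floc=0)
d_obs = stats.kstest(data, "pareto", args=(b_hat, loc_hat, scale_hat)).statistic

# Standard K-S tables do not apply when parameters are estimated from the data,
# so simulate the null distribution of the statistic instead.
d_boot = []
for _ in range(500):
    sim = stats.pareto.rvs(b_hat, loc=loc_hat, scale=scale_hat, size=data.size, random_state=rng)
    b_s, loc_s, scale_s = stats.pareto.fit(sim, floc=0)
    d_boot.append(stats.kstest(sim, "pareto", args=(b_s, loc_s, scale_s)).statistic)

crit_95 = np.quantile(d_boot, 0.95)
print(f"observed D = {d_obs:.4f}, bootstrap 95% critical value = {crit_95:.4f}")
```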

10

Sellami, Sami. "Comportements hydrodynamiques d'un modèle non gradient : l'exclusion simple généralisée". Rouen, 1998. http://www.theses.fr/1998ROUES083.

Full text
Abstract
This thesis consists of two parts. In the first, we study the equilibrium density fluctuation field of a one-dimensional reversible nongradient model. We first establish the Boltzmann-Gibbs principle for the generalized exclusion process. This principle, first introduced by Brox and Rost, is the essential step that then allows us to show that the process converges in law to a generalized Ornstein-Uhlenbeck process, following the Holley-Stroock theory. In the second part, carried out in collaboration with C. Landim and M. Mourragui, we consider a nongradient particle system whose macroscopic behaviour is described by a nonlinear parabolic equation on a d-dimensional box with boundary conditions. Assuming that the diffusion coefficients are Lipschitz, we prove that the rescaled density field converges to the unique weak solution of the parabolic equation.
11

Pelletier, François. "Modélisation des rendements financiers à l'aide de la distribution de Laplace asymétrique généralisée". Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30557/30557.pdf.

Full text
Abstract
Classical models in finance are based on assumptions that are not always empirically verifiable. The objective of this master's thesis is to present the generalized asymmetric Laplace distribution as an interesting alternative to them. To this end, its various properties are used to develop parametric estimation, approximation, and testing methods, and some principles of derivative pricing are worked out. Finally, a numerical example is presented to illustrate these concepts.
12

Juozulynaitė, Gintarė. "Pareto atsitiktinių dydžių geometrinis maks stabilumas". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2010. http://vddb.laba.lt/obj/LT-eLABa-0001:E.02~2010~D_20100830_094813-81556.

Full text
Abstract
This work analyses the geometric max stability of univariate and bivariate Pareto random variables. It is proved that the univariate Pareto distribution is geometrically max stable when alpha = 1, but not when alpha ≠ 1. Using the criterion of geometric max stability for bivariate Pareto random variables, it is proved that the bivariate Pareto distribution function is not geometrically max stable when the vector components are independent (for alpha = 1, beta = 1 as well as for alpha ≠ 1, beta ≠ 1), nor when the components are dependent (again for alpha = 1, beta = 1 and for alpha ≠ 1, beta ≠ 1). The study of bivariate Pareto distributions produced an unexpected result: the bivariate Pareto distribution function is not geometrically max stable when alpha = 1, beta = 1, even though the univariate marginal Pareto distribution functions are geometrically max stable in that case.
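As a hedged illustration of what geometric max stability means in the alpha = 1 case, the simulation below assumes the Pareto type II (Lomax) form F(x) = 1 − 1/(1 + x); the thesis's exact parameterization may differ. If the property holds, the maximum of a Geometric(p) number of such variables, rescaled by p, follows the same distribution again.

```python
# Simulation check of geometric max stability for Pareto II (Lomax) with alpha = 1
# (assumed parameterization, F(x) = 1 - 1/(1 + x); illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
p, reps = 0.05, 5000

n = rng.geometric(p, size=reps)                              # N_i ~ Geometric(p), i = 1..reps
samples = stats.lomax.rvs(1.0, size=n.sum(), random_state=rng)
groups = np.split(samples, np.cumsum(n)[:-1])                # one group of size N_i per replication
scaled_max = p * np.array([g.max() for g in groups])

# If geometric max stability holds, p * max(X_1, ..., X_N) is again Lomax(1).
print(stats.kstest(scaled_max, "lomax", args=(1.0,)))
```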
13

Kamanina, Polina. "On the Invariance of Size Distribution of Establishments". Thesis, Internationella Handelshögskolan, Högskolan i Jönköping, IHH, Nationalekonomi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-18532.

Full text
Abstract
The thesis examines the establishment size distribution over time and across groups of regions, using data on Swedish establishments during the period 1994-2009. The size distribution of establishments is highly skewed and approximates the Pareto distribution. The shape of the size distribution is invariant over time and across groups of regions. The distribution of the total number of establishments and the incumbent distribution are found to arise from the same distribution. Moreover, the invariance of the establishment size distribution is largely determined by the invariance of the distributions of incumbents, entries, and exits. Larger establishments have better chances of survival and a higher probability of remaining in their current size group than smaller ones, whereas higher growth probabilities attach to smaller establishments.
14

Hernandez, Javiera I. "Does the Pareto Distribution of Hurricane Damage Inherit its Fat Tail from a Zipf Distribution of Assets at Hazard?" FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1488.

Full text
Abstract
Tropical cyclones are a continuing threat to life and property. Willoughby (2012) found that a Pareto (power-law) cumulative distribution fitted to the most damaging 10% of US hurricane seasons described their impacts well. Here, we find that damage follows a Pareto distribution because the assets at hazard follow a Zipf distribution, which can be thought of as a Pareto distribution with exponent 1. The Z-CAT model is an idealized hurricane catastrophe model that represents a coastline where populated places with Zipf-distributed assets are randomly scattered and damaged by virtual hurricanes whose sizes and intensities are generated through a Monte Carlo process. The results produce realistic Pareto exponents. The ability of the Z-CAT model to simulate different climate scenarios allowed testing of sensitivities to maximum potential intensity, landfall rates, and building structure vulnerability. The Z-CAT model results demonstrate that a statistically significant difference in damage is found only when changes in the parameters create a doubling of damage.
15

Chamberlain, Lauren. "The Power Law Distribution of Agricultural Land Size". DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7400.

Full text
Abstract
This paper demonstrates that the distribution of county-level agricultural land size in the United States is best described by a power-law distribution, a distribution with extremely heavy tails. This indicates that the majority of farmland lies in the upper tail; our analysis shows that the top 5% of agricultural counties accounted for about 25% of agricultural land between 1997 and 2012. The power-law distribution of farm size has important implications for the design of more efficient regional and national agricultural policies, since counties close to the mean account for little of the cumulative distribution of total agricultural land. It also has consequences for more efficient management and government oversight, as a disruption in one of the counties containing a large amount of farmland (due to natural disasters, for instance) could have nationwide consequences for agricultural production and prices. In particular, policy makers and government agencies can monitor about 25% of total agricultural land by overseeing just 5% of counties.
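Purely as a back-of-the-envelope consistency check, and not a result from the thesis: if the upper tail were exactly Pareto with exponent α > 1, the share of total land held by the top fraction q would be q^(1 − 1/α), and inverting that relation for the reported "top 5% ≈ 25%" figure gives a rough implied exponent.

```python
# Implied Pareto exponent from a reported top-share figure (illustrative arithmetic only).
import math

q, share = 0.05, 0.25   # top 5% of counties holding roughly 25% of agricultural land
alpha = 1.0 / (1.0 - math.log(share) / math.log(q))
print(f"implied tail exponent alpha ≈ {alpha:.2f}")
print(f"check: top {q:.0%} share under that exponent ≈ {q ** (1 - 1 / alpha):.1%}")
```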
16

Höchstötter, Markus. "The pareto stable distribution as a hypothesis for returns of stocks listed in the DAX /". Hamburg : Kovač, 2006. http://www.verlagdrkovac.de/3-8300-2491-6.htm.

Full text
17

Ozdem, Mehmet. "Video Distribution Over Ip Networks". Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608187/index.pdf.

Full text
Abstract
As applications like IPTV and VoD (video on demand) gain popularity, it becomes more important to study the behaviour of video signals in Internet access infrastructures such as ADSL and cable networks. Average delay, average jitter, and packet loss in these networks affect the quality of service, so transmission and access speeds need to be chosen so that these parameters are minimized. This study investigates the behaviour of such IP networks under variable bit rate (VBR) video traffic. The ns-2 simulator is used for this purpose, and both actual and artificially generated signals are applied to the networks under test. VBR traffic is generated synthetically using ON/OFF sources whose ON/OFF times are drawn from exponential or Pareto distributions. Since VBR video shows long-range dependence with a Hurst parameter between 0.5 and 1, this parameter is used as a metric for the accuracy of the synthetic sources. Two topologies were simulated: one similar to ADSL access networks and the other behaving like a cable distribution network. The performance of the networks (delay, jitter, and packet loss) under VBR video traffic and different access speeds was measured, and minimum access speeds required to achieve acceptable-quality video delivery to customers were suggested.
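A minimal sketch of the kind of synthetic source described above: ON and OFF durations drawn from a Pareto distribution with shape 1 < α < 2, for which aggregated traffic is long-range dependent with Hurst parameter H ≈ (3 − α)/2. The slot length, rates, and parameter values are placeholders, and the ns-2 topologies themselves are not reproduced.

```python
# Synthetic ON/OFF source with Pareto-distributed sojourn times (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
alpha, t_min = 1.4, 1.0            # 1 < alpha < 2  =>  H ≈ (3 - alpha) / 2 = 0.8
n_periods = 2000

on_times = stats.pareto.rvs(alpha, scale=t_min, size=n_periods, random_state=rng)
off_times = stats.pareto.rvs(alpha, scale=t_min, size=n_periods, random_state=rng)

# Build a per-slot rate sequence: peak rate while ON, zero while OFF.
peak_rate = 1.0
slots = []
for on, off in zip(on_times, off_times):
    slots.extend([peak_rate] * int(round(on)))
    slots.extend([0.0] * int(round(off)))
rate = np.array(slots)
print(f"{rate.size} slots, mean utilisation ≈ {rate.mean():.2f}")
```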
18

Zha, Yuanyuan, Tian-Chyi J. Yeh, Walter A. Illman, Hironori Onoe, Chin Man W. Mok, Jet-Chau Wen, Shao-Yang Huang y Wenke Wang. "Incorporating geologic information into hydraulic tomography: A general framework based on geostatistical approach". AMER GEOPHYSICAL UNION, 2017. http://hdl.handle.net/10150/624351.

Full text
Abstract
Hydraulic tomography (HT) has become a mature aquifer test technology over the last two decades. It collects nonredundant information on aquifer heterogeneity by sequentially stressing the aquifer at different wells and collecting aquifer responses at other wells during each stress. The collected information is then interpreted by inverse models. Among these models, the geostatistical approaches, built upon the Bayesian framework, first conceptualize the hydraulic properties to be estimated as random fields, characterized by means and covariance functions. They then use the spatial statistics as prior information, together with the aquifer response data, to estimate the spatial distribution of the hydraulic properties at a site. Since the spatial statistics describe the generic spatial structure of the geologic media at the site rather than site-specific features (e.g., known spatial distributions of facies, faults, or paleochannels), the estimates are often not optimal. To improve the estimates, we introduce a general statistical framework that allows the inclusion of site-specific spatial patterns of geologic features, and we test this approach with synthetic numerical experiments. The results show that this approach, using a conditional mean and covariance that reflect site-specific large-scale geologic features, indeed improves the HT estimates. The approach is then applied to HT surveys at a kilometer-scale fractured granite field site with a distinct fault zone. We find that including fault information from outcrops and boreholes in the HT analysis improves the estimated hydraulic properties, which in turn leads to better prediction of flow during a different pumping test at the site.
19

Giesen, Kristian [Verfasser], Jens [Akademischer Betreuer] Südekum y Joachim [Akademischer Betreuer] Prinz. "Zipf's Law for Cities and the Double-Pareto-Lognormal Distribution / Kristian Giesen. Gutachter: Joachim Prinz. Betreuer: Jens Südekum". Duisburg, 2012. http://d-nb.info/1024851893/34.

Full text
20

Borisevič, Jelena. "Apie geometriškai stabiliuosius maksimumo skirstinius". Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040603_162439-78402.

Full text
21

Ochoa, Pizzali Luis Fernando [UNESP]. "Desempenho de redes de distribuição com geradores distribuídos". Universidade Estadual Paulista (UNESP), 2006. http://hdl.handle.net/11449/100362.

Full text
Abstract
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Alban
Fundação de Ensino Pesquisa e Extensão de Ilha Solteira (FEPISA)
In this work, a steady-state analysis is presented that assesses technical impacts such as electrical losses, voltage drop, and short-circuit levels, among others, using time-varying demand and generation data over a specified horizon. The objective is to find a set of distributed generation (DG) arrangements (configurations) that lead to the best performance of the distribution network under analysis, minimizing or maximizing each technical aspect according to the utility's concerns. Given the combinatorial nature of this problem, which requires an optimization tool able to handle multiple objectives, the technical impacts are assessed simultaneously through a methodology based on the Non-dominated Sorting Genetic Algorithm (NSGA). This approach leads to a more realistic and diversified set of solutions for decision making, known as Pareto-optimal solutions.
22

Nikolaou, Christos. "A multi-objective genetic algorithm optimisation using variable speed pumps in water distribution systems". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/6819/.

Full text
Abstract
Because of its practical importance and inherent complexity, the optimisation of networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation typically involves sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, the author analyses two different WDNs (the Anytown and Cabrera networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, a decision-support system generator for multi-objective optimisation, GANetXL, developed by the Centre for Water Systems at the University of Exeter, was used. GANetXL calls the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto fronts for each configuration. The first experiment concerned the Anytown network, a large network whose pump station consists of four fixed-speed parallel pumps. The main intervention was to replace these pumps with variable-speed driven pumps (VSDPs) by installing inverters capable of varying their speed during the day, which achieved substantial energy and cost savings together with a reduction in the number of pump switches. The results are illustrated in detail in chapter 7, with comments and a variety of graphs and configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed pump. The optimisation problem was the same, namely the minimisation of energy consumption and, in parallel, of TNps, and the same optimisation tool (GANetXL) was used. The aim here was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the fixed-speed mode), different tank levels, different pipe diameters, and different emitter coefficients; the large number of results obtained is compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support system generators. The researcher has to be ready to roam among these choices until a satisfactory result shows that a good optimisation point has been reached.
23

Keating, Karen. "Statistical analysis of pyrosequence data". Diss., Kansas State University, 2012. http://hdl.handle.net/2097/14026.

Full text
Abstract
Doctor of Philosophy
Department of Statistics
Gary L. Gadbury
Since their commercial introduction in 2005, DNA sequencing technologies have become widely available and are now cost-effective tools for determining the genetic characteristics of organisms. While the biomedical applications of DNA sequencing are apparent, these technologies have been applied to many other research areas. One such area is community ecology, in which DNA sequence data are used to identify the presence and abundance of microscopic organisms that inhabit an environment. This is currently an active area of research, since it is generally believed that a change in the composition of microscopic species in a geographic area may signal a change in the overall health of the environment. An overview of DNA pyrosequencing, as implemented by the Roche/Life Science 454 platform, is presented and aspects of the process that can introduce variability in data are identified. Four ecological data sets that were generated by the 454 platform are used for illustration. Characteristics of these data include high dimensionality, a large proportion of zeros (usually in excess of 90%), and nonzero values that are strongly right-skewed. A nonparametric method to standardize these data is presented and effects of standardization on outliers and skewness are examined. Traditional statistical methods for analyzing macroscopic species abundance data are discussed, and the applicability of these methods to microscopic species data is examined. One objective that receives focus is the classification of microscopic species as either rare or common species. This is an important distinction since there is much evidence to suggest that the biological and environmental mechanisms that govern common species are distinctly different than the mechanisms that govern rare species. This indicates that the abundance patterns for common and rare species may follow different probability models, and the suitability of the Pareto distribution for rare species is examined. Techniques for classifying macroscopic species are shown to be ill-suited for microscopic species, and an alternative technique is presented. Recognizing that the structure of the data is similar to that of financial applications (such as insurance claims and the distribution of wealth), the Gini index and other statistics based on the Lorenz curve are explored as potential test statistics for distinguishing rare versus common species.
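Because the Gini index and Lorenz-curve statistics are proposed as candidate test statistics, a short sketch of the computation on an abundance vector may be useful; the abundances here are simulated placeholders rather than pyrosequence data.

```python
# Gini index of a species-abundance vector (illustrative only; data are simulated).
import numpy as np

def gini(abundances):
    """Gini index via the sorted-values formula; 0 = perfectly even, values near 1 = highly concentrated."""
    x = np.sort(np.asarray(abundances, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return (2.0 * np.sum(ranks * x)) / (n * x.sum()) - (n + 1.0) / n

rng = np.random.default_rng(7)
abundance = rng.pareto(1.2, size=200)   # heavy-tailed placeholder abundances
print(f"Gini index ≈ {gini(abundance):.3f}")
```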
24

Orellana, Aragón Jorge Alberto. "A lei de Zipf e os efeitos de um tratado de livre comércio : caso da Guatemala". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/16417.

Texto completo
Resumen
Nos últimos 50 anos, registrou-se na América Central um dos processos de integração econômica e regional mais antigos do continente americano. O comércio intra-regional aumentou e dinamizou-se significativamente a partir da formação, em 1960, do Mercado Comum Centro-Americano (MCCA), assim como processos de integração de acordos bilaterais, regionais e multilaterais de livre comércio. A partir desses acordos, surge uma nova perspectiva para estudar os efeitos do comércio internacional, segundo a Nova Geografia Econômica (NGE), a qual tenta explicar como a evolução da distribuição do tamanho das cidades pode ser representada por uma distribuição de Pareto, que deriva numa regularidade empírica chamada Lei de Zipf, que brinda uma explicação de como interagem as forças de aglomeração nos centros urbanos, que favorecem a atividade econômica e o comércio internacional em geral. Esta dissertação procura investigar a maneira como as mudanças na política comercial geraram impacto sobre a ordem no tamanho das cidades e a influência no crescimento econômico da Guatemala. Para esse propósito, foi estimado o coeficiente de Pareto no período compreendido entre 1921-2002, e como um valor agregado na proposta original, foram introduzidas duas não-linearidades na distribuição e uma medida de apoio, como o Índice Hirschman-Herfindahl, para medir o grau da concentração urbana. Por outra parte, foi utilizado um modelo de taxas de variação para medir o impacto de abertura comercial no período de 1960-2002 sobre o crescimento econômico resultante. Portanto, pode-se enfatizar que alterações no tamanho da amostra podem conduzir a diferentes interpretações. Os resultados obtidos apontam um leve crescimento na desigualdade e divergência, apesar do índice de concentração urbana mostrar uma queda gradual desde o ano de 1964, na época MCCA, até o ano de 2002. No caso do período de 1973-2002, pode-se verificar a Lei de Gibrat, que indica ser o crescimento das cidades independente do seu tamanho. Também se verifica a hipótese de que a concentração urbana tem uma relação inversa com a abertura comercial, e que ela está correlacionada de forma positiva com o crescimento econômico no período de 1921- 1964. Com esses resultados, pode-se mostrar o caminho futuro da evolução do crescimento urbano, onde as maiores cidades reduziram o seu crescimento e as médias e pequenas cidades cresceram a um ritmo mais acelerado que os grandes centros, impulsionadas pelo crescimento do comércio internacional.
Over the last 50 years, in Central America was developed one of the oldest processes of economic and regional integration of the American Continent. Since the establishment in 1960 of the Central American Common Market (CACM), intra-regional trade significantly increased under multilateral, bilateral and regional free trade agreements of the integration process. Today, a new perspective exists in the study of the effects of international trade offered by the New Economic Geography (NEG) that seeks to explain the evolution and distribution of the size of the cities that can be represented by Pareto's distribution, derived from a well-known empirical regularity known as the Zipf's Law, which promotes an explanation of how the agglomeration forces in the urban centers interact in favor of economic activity and international trade. This dissertation tries to investigate the way in which the changes in trade policy generate changes in the order of the size in the cities, thus influencing the economic growth of Guatemala. To this purpose Pareto's coefficient was estimated for the period between 1921 and 2002 and it was considered as an aggregated value and therefore the original proposal of two not-linealities were introduced in the distribution as support, as the Hirschman-Herfindahl Index to measure the degree of the urban concentration. On the other hand, a model of variation rates was used during the 1960 and 2002 period to measure the trade impact of the trade opening on the resulting economic growth. Therefore, a model of variation rates was used to measure the impact of the trade opening on the resulting economic growth during the 1960-2002 period. For that reason, it is possible to emphasize the alterations in the size of the sample that can achieve different interpretations. The results obtained point to a slight growth in inequality and divergence, even though the index of urban concentration shows a gradual fall from 1964 during the CACM period up to 2002; which otherwise means that small cities grew at a smaller rate than the larger cities did. In the case of the 1973-2002 period, it is possible to verify Gibrat's Law which indicates that the growth of the cities is independent to its size. Also the hypothesis is verified that the urban concentration has an inverse relation with the trade opening and that the urban concentration is correlated in a positive form with the economic growth during the 1921-1964 period. With these results it is possible to show the future way of the evolution of urban growth where major cities would reduce its growth, and the middle and small cities will grow further at a more accelerated rate than the major cities driven by the growth of international trade.
En los últimos 50 años, se registró en Centro América uno de los procesos de integración económica y regional más antiguos del continente. El comercio intra-regional aumentó y se dinamizó significativamente a partir de la formación, en 1960, del Mercado Común Centroamericano (MCCA), así como a partir de los procesos de integración mediante acuerdos bilaterales, regionales y multilaterales de libre comercio. A partir de esos acuerdos, surge una nueva perspectiva para estudiar los efectos del comercio internacional, la Nueva Geografía Económica (NGE), la cual intenta explicar cómo la evolución de la distribución del tamaño de las ciudades puede ser representada por una distribución de Pareto, que se deriva en una regularidad empírica llamada la Ley de Zipf, que brinda una explicación de cómo interactúan las fuerzas de aglomeración en los centros urbanos y que favorecen a la actividad económica y el comercio internacional en general. Esta disertación busca investigar cómo los cambios en la política comercial generaron un impacto sobre el orden en el tamaño de las ciudades y cómo esto, a su vez, influye en el crecimiento económico de Guatemala. Para ese propósito, fue estimado el coeficiente de Pareto en el período comprendido entre 1921-2002 y, como un valor agregado en la propuesta original, fueron introducidas dos no-linealidades en la distribución y una medida de apoyo, como el Índice Hirschman-Herfindahl, para medir el grado de concentración urbana. Por otra parte, fue utilizado un modelo de tasas de variación para medir el impacto de apertura comercial en el período de 1960-2002 sobre el crecimiento económico resultante. Por lo tanto, se puede enfatizar que alteraciones en el tamaño de la muestra pueden conducir a diferentes interpretaciones. Los resultados obtenidos apuntan a un leve crecimiento en la desigualdad y divergencia, a pesar de que el índice de concentración urbana muestra una caída gradual desde el año de 1964, en la época del MCCA, hasta el año de 2002. En el caso del período de 1973-2002, se puede verificar la Ley de Gibrat, que indica que el crecimiento de las ciudades es independiente de su tamaño. También se verifica la hipótesis de que la concentración urbana tiene una relación inversa con la apertura comercial y que está correlacionada de forma positiva con el crecimiento económico en el período de 1921-1964. Con estos resultados, se puede mostrar el camino futuro de la evolución del crecimiento urbano, donde las mayores ciudades reducirían su crecimiento y las medianas y pequeñas ciudades crecerán a un ritmo más acelerado que los grandes centros, impulsadas por el crecimiento del comercio internacional.
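A minimal sketch, not taken from the dissertation, of what the rank-size (Zipf) regression and the Hirschman-Herfindahl concentration index described above amount to in practice; the city populations are invented for illustration.

```python
import numpy as np

def pareto_zipf_coefficient(populations):
    """Estimate the Pareto (Zipf) coefficient by OLS of log(rank) on log(size).

    A coefficient near 1 is consistent with Zipf's law; movements away from 1
    over time indicate changing urban concentration.
    """
    sizes = np.sort(np.asarray(populations, dtype=float))[::-1]  # descending
    ranks = np.arange(1, len(sizes) + 1)
    # log(rank) = c - alpha * log(size)  =>  estimated slope is -alpha
    slope, intercept = np.polyfit(np.log(sizes), np.log(ranks), 1)
    return -slope

def herfindahl_index(populations):
    """Hirschman-Herfindahl index of urban concentration (sum of squared shares)."""
    p = np.asarray(populations, dtype=float)
    shares = p / p.sum()
    return float(np.sum(shares ** 2))

# Illustrative (made-up) city populations
cities = [3_015_000, 850_000, 620_000, 410_000, 305_000, 210_000, 150_000]
print(pareto_zipf_coefficient(cities), herfindahl_index(cities))
```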
Los estilos APA, Harvard, Vancouver, ISO, etc.
25

Silva, Renato Rodrigues. "A distribuição generalizada de Pareto e mistura de distribuições de Gumbel no estudo da vazão e da velocidade máxima do vento em Piracicaba, SP". Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-18112008-145737/.

Texto completo
Resumen
A teoria dos valores extremos é um tópico da probabilidade que descreve a distribuição assintótica das estatísticas de ordem, tais como máximos ou mínimos, de uma seqüência de variáveis aleatórias que seguem uma função de distribuição F normalmente desconhecida. Descreve, ainda, a distribuição assintótica dos excessos acima de um valor limiar de um ou mais termos dessa seqüência. Dessa forma, as metodologias padrões utilizadas neste contexto consistem no ajuste da distribuição generalizada dos valores extremos a uma série de máximos anuais ou no ajuste da distribuição generalizada de Pareto a uma série de dados composta somente de observações excedentes de um valor limiar. No entanto, segundo Coles et al. (2003), há uma crescente insatisfação com o desempenho destes modelos padrões para predição de eventos extremos causada, possivelmente, por pressuposições não atendidas, como a de independência das observações, ou pelo fato de que os mesmos não sejam recomendados para serem utilizados em algumas situações específicas, como, por exemplo, quando as observações de máximos anuais são compostas por duas ou mais populações independentes de eventos extremos, sendo que a primeira descreve eventos menos freqüentes e de maior magnitude e a segunda descreve eventos mais freqüentes e de menor magnitude. Então, os dois artigos que compõem este trabalho têm como objetivo apresentar alternativas de análise de valores extremos para estas situações em que o ajuste dos modelos padrões não é adequado. No primeiro, foram ajustadas as distribuições generalizada de Pareto e exponencial, caso particular da GP, aos dados de vazão média diária do Posto de Artemis, Piracicaba, SP, Brasil, conjuntamente com a técnica de desagrupamento (declustering), e comparadas as estimativas dos níveis de retorno para períodos de 5, 10, 50 e 100 anos. Conclui-se que as estimativas intervalares dos níveis de retorno obtidas por meio do ajuste da distribuição exponencial são mais precisas do que as obtidas com o ajuste da distribuição generalizada de Pareto. No segundo artigo, por sua vez, foi apresentada uma metodologia para o ajuste da distribuição de Gumbel e de misturas de duas distribuições de Gumbel aos dados de velocidades de ventos mensais de Piracicaba, SP. Selecionou-se a distribuição que melhor se ajustou aos dados por meio de testes de hipóteses bootstrap paramétrico e critérios de seleção AIC e BIC. E concluiu-se que a mistura de duas distribuições de Gumbel é a distribuição que melhor se ajustou aos dados de velocidades máximas de vento dos meses de abril e maio, enquanto que o ajuste da distribuição de Gumbel foi o melhor para os meses de agosto e setembro.
Extreme value theory is a topic in probability that describes the asymptotic distribution of order statistics, such as maxima or minima, of a sequence of random variables following a usually unknown distribution function F. It also describes the asymptotic distribution of the excesses over a threshold of this sequence. The standard methodologies of extreme value analysis are therefore the fitting of the generalized extreme value distribution to annual maximum series or the fitting of the generalized Pareto distribution to partial duration series. However, according to Coles et al. (2003), there is growing dissatisfaction with these standard models for the prediction of extreme events. Possible causes are violated assumptions about the observed data, such as the independence assumption, or the fact that the standard models are not recommended in some specific situations, for example when the annual maxima arise from two or more independent populations, the first describing less frequent events of greater magnitude and the second more frequent events of smaller magnitude. The two articles that make up this work therefore aim to present alternative extreme value analyses for situations in which the standard models are not adequate. In the first article, the generalized Pareto distribution and the exponential distribution, a particular case of the GP, were fitted together with a declustering method to the mean daily flow of the Piracicaba river at the Artemis station, Piracicaba, SP, and the estimates of the 5, 10, 50 and 100 year return levels were compared. We conclude that the interval estimates of the 50 and 100 year return levels obtained by fitting the exponential distribution are more precise than those obtained with the generalized Pareto distribution. In the second article, we propose fitting the Gumbel distribution and a mixture of two Gumbel distributions to maximum wind speed data from Piracicaba, SP. We select the best model using parametric bootstrap hypothesis tests and the AIC and BIC selection criteria. We conclude that the mixture of two Gumbel distributions best fits the maximum wind speed data for the months of April and May, whereas the single Gumbel distribution provides the best fit for August and September.
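A minimal peaks-over-threshold sketch of the kind of generalized Pareto fit and return-level calculation described in this abstract, using SciPy; the function name is ours, declustering is omitted, and the thesis's interval estimates are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

def pot_return_level(daily_series, threshold, years, obs_per_year=365.25):
    """Fit a GPD to excesses over `threshold` and return the T-year return level.

    Minimal peaks-over-threshold sketch (no declustering); in practice
    dependent exceedances would first be grouped into clusters and only
    cluster maxima kept, as done in the thesis.
    """
    x = np.asarray(daily_series, dtype=float)
    excesses = x[x > threshold] - threshold
    # Fit with location fixed at 0 so the fit is on the excesses themselves
    shape, _, scale = genpareto.fit(excesses, floc=0)
    zeta = len(excesses) / len(x)          # exceedance rate P(X > u)
    m = years * obs_per_year               # observations per return period
    if abs(shape) < 1e-6:                  # exponential limit (shape -> 0)
        return threshold + scale * np.log(m * zeta)
    return threshold + (scale / shape) * ((m * zeta) ** shape - 1.0)

# Usage sketch: level_50 = pot_return_level(daily_flow, u, 50)
```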
Los estilos APA, Harvard, Vancouver, ISO, etc.
26

Engberg, Alexander. "An empirical comparison of extreme value modelling procedures for the estimation of high quantiles". Thesis, Uppsala universitet, Statistiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-297063.

Texto completo
Resumen
The peaks over threshold (POT) method provides an attractive framework for estimating the risk of extreme events such as severe storms or large insurance claims. However, the conventional POT procedure, where the threshold excesses are modelled by a generalized Pareto distribution, suffers from small samples and subjective threshold selection. In recent years, two alternative approaches have been proposed in the form of mixture models that estimate the threshold and a folding procedure that generates larger tail samples. In this paper the empirical performances of the conventional POT procedure, the folding procedure and a mixture model are compared by modelling data sets on fire insurance claims and hurricane damage costs. The results show that the folding procedure gives smaller standard errors of the parameter estimates and in some cases more stable quantile estimates than the conventional POT procedure. The mixture model estimates are dependent on the starting values in the numerical maximum likelihood estimation, and are therefore difficult to compare with those from the other procedures. The conclusion is that none of the procedures is overall better than the others but that there are situations where one method may be preferred.
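A minimal sketch of the mean-excess (mean residual life) diagnostic that underlies the subjective threshold choice of the conventional POT procedure discussed above; the folding and mixture-model alternatives studied in the thesis are not shown, and the data file mentioned in the comment is hypothetical.

```python
import numpy as np

def mean_excess(data, thresholds):
    """Empirical mean-excess function e(u) = mean(X - u | X > u).

    In the conventional POT approach a threshold is typically chosen in the
    region where e(u) is roughly linear in u, since a generalized Pareto
    tail implies a linear mean-excess function.
    """
    x = np.asarray(data, dtype=float)
    out = []
    for u in thresholds:
        exc = x[x > u] - u
        out.append(exc.mean() if exc.size else np.nan)
    return np.array(out)

# Example (hypothetical file): claims = np.loadtxt("claims.txt")
# us = np.quantile(claims, np.linspace(0.5, 0.99, 50))
# e_u = mean_excess(claims, us)  # then plot e_u against us
```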
Los estilos APA, Harvard, Vancouver, ISO, etc.
27

Cascão, Fernando Miguel Laires. "Regressão do índice de cauda : uma aplicação empírica". Master's thesis, Instituto Superior de Economia e Gestão, 2018. http://hdl.handle.net/10400.5/16662.

Texto completo
Resumen
Mestrado em Econometria Aplicada e Previsão
No presente trabalho é apresentada uma metodologia de estimação do índice de cauda, que assenta numa regressão exponencial do parâmetro função de variáveis explicativas. O método de estimação é o de Quase Máxima Verosimilhança baseada na função log-verosimilhança de Pareto de tipo I. A metodologia em estudo é aplicada às observações do prémio de risco do mercado acionista. Neste sentido, pretende-se explicar os valores extremos da aba esquerda da distribuição dos dados, com recurso a um conjunto de variáveis estudadas na literatura, no contexto do mercado de ações. Os resultados sugerem que as variáveis mais relevantes para explicar a variável de interesse são regressores que representam situações de crise e incerteza social, política e económica, para cada momento de tempo. Os resultados finais indicam que o prémio de risco tem uma massa de probabilidade considerável associada a valores extremos da série.
In the present work, a tail index estimation methodology is presented in which the parameter is modelled through an exponential regression on explanatory variables. The estimation method is quasi-maximum likelihood based on the Pareto type I log-likelihood. The methodology is applied to observations of the equity market risk premium. The aim is to explain the extreme values in the left tail of the data distribution using a set of variables studied in the stock market literature. The results suggest that the most relevant variables for explaining the variable of interest are regressors that capture, at each point in time, situations of crisis and of social, political and economic uncertainty. The final results indicate that the risk premium has a considerable probability mass associated with extreme values of the series.
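A minimal sketch, under our own assumptions, of a Pareto type I quasi-maximum-likelihood tail-index regression with an exponential link, as described in the abstract; the variable names and the optimizer choice are illustrative, not the author's code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_tail_index_regression(y, X, u):
    """Quasi-ML for a Pareto type I tail regression: alpha_t = exp(x_t' beta).

    y : exceedances of the (positive) loss variable, all > u
    X : matrix of covariates (n x k), e.g. crisis/uncertainty indicators
    u : fixed threshold defining the tail
    Based on the Pareto I density f(y) = alpha * u**alpha * y**(-alpha - 1), y > u.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)

    def neg_loglik(beta):
        alpha = np.exp(X @ beta)
        return -np.sum(np.log(alpha) + alpha * np.log(u) - (alpha + 1.0) * np.log(y))

    beta0 = np.zeros(X.shape[1])
    res = minimize(neg_loglik, beta0, method="BFGS")
    return res.x  # estimated beta; the time-varying tail index is exp(X @ res.x)
```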
info:eu-repo/semantics/publishedVersion
Los estilos APA, Harvard, Vancouver, ISO, etc.
28

Manas, Arnaud. "Essais sur le club de Paris, la loi de Gibrat et l'histoire de la Banque de France". Thesis, Aix-Marseille, 2013. http://www.theses.fr/2013AIXM1136/document.

Texto completo
Resumen
Cette thèse sur travaux est la synthèse de publications réalisées entre 2005 et 2012 ainsi que de papiers de travail. Elle est organisée autour de trois axes : des questions relatives au Club de Paris, des articles au sujet de la loi de Gibrat et des travaux autour de l’Histoire de la Banque de France. Le premier axe comprend deux papiers publiés dans le bulletin de la Banque de France : l’un sur l’évaluation de l’initiative PPTE (Pays pauvres très endettés, mécanismes et éléments d’évaluation, Bulletin N°140, août 2005) et le second sur la modélisation des buybacks de créance au sein du club de Paris. Ce dernier papier a été publié sous deux formes (grand public : Modélisation et analyse des mécanismes du Club de Paris de rachat de créances par prépaiement, avec Laurent Daniel, Bulletin N° 152, août 2006, et recherche : Pricing the implicit contracts in the Paris Club debt buybacks avec Laurent Daniel, working paper, December 2007). Le second axe concerne la validation de la loi de Gibrat, avec la publication de trois articles (French butchers don't do Quantum Physics in Economics Letters, Vol. 103, May 2009, pp. 101-106 ; The Paretian Ratio Distribution - An application to the volatility of GDP in Economics Letters, Vol. 111, May 2011, pp. 180-183 ; The Laplace Illusion in Physica A, Vol. 391, August 2012, pp. 3963–3970). Le dernier axe regroupe des travaux sur l’Histoire de la Banque de France. Certains sont publiés, comme La Caisse de Réserve des Employés de la Banque de France 1800-1950 (Économies et Sociétés, série « Histoire Économique Quantitative », août 2007, n°37, pp. 1365-1383), ou en cours de publication
This dissertation is made up of several papers published between 2005 and 2012 and some working papers. The first part deals with the Paris Club: two papers published in the Bulletin of the Banque de France deal with heavily indebted poor countries and debt buybacks (Pricing the implicit contracts in the Paris Club debt buybacks). The second axis focuses on Gibrat's law (French butchers don't do Quantum Physics in Economics Letters, Vol. 103, May 2009, pp. 101-106; The Paretian Ratio Distribution - An application to the volatility of GDP in Economics Letters, Vol. 111, May 2011, pp. 180-183; The Laplace Illusion in Physica A, Vol. 391, August 2012, pp. 3963–3970). The third axis deals with the history of the Banque de France.
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Bogner, Christina. "Analysis of flow patterns and flow mechanisms in soils". Thesis, Bayreuth Bayreuth Center of Ecology and Environmental Research, 2009. http://d-nb.info/997214058/34.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

SANTOS, Rosilda Sousa. "Estudo sobre algumas famílias de distribuições de probabilidades generalizadas". Universidade Federal de Campina Grande, 2012. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1358.

Texto completo
Resumen
Capes
A proposta desta dissertação está relacionada com o estudo das principais famílias de distribuições de probabilidade generalizadas. Particularmente, estudamos as distribuições Beta Pareto, Beta Exponencial Generalizada, Beta Weibull Modificada, Beta Fréchet e a Kw-G. Para cada uma delas foram obtidas expressões para as funções densidade de probabilidade, funções de distribuição acumulada, funções de taxa de falha, funções geratrizes de momentos, bem como foram obtidos os estimadores dos parâmetros pelo método da máxima verossimilhança. Finalmente, para cada distribuição foram feitas aplicações com dados reais.
The purpose of this dissertation is to study the main families of generalized probability distributions. Particularly we study the distributions Beta Pareto, generalized Beta Exponential, Beta Modified Weibull, Beta Fréchet and Kw-G. For each one of these distributions we obtain expressions for the probability density function, cumulative distribution function, hazard function and moment generating function as well as parameter estimates by the method of maximum likelihood. Finally, we make real data applications for each one of the studied distributions.
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Halimi, Abdelghafour. "Modélisation et traitement statistique d'images de microscopie confocale : application en dermatologie". Phd thesis, Toulouse, INPT, 2017. http://oatao.univ-toulouse.fr/19515/1/HALIMI_Abdleghafour.pdf.

Texto completo
Resumen
Dans cette thèse, nous développons des modèles et des méthodes statistiques pour le traitement d’images de microscopie confocale de la peau dans le but de détecter une maladie de la peau appelée lentigo. Une première contribution consiste à proposer un modèle statistique paramétrique pour représenter la texture dans le domaine des ondelettes. Plus précisément, il s’agit d’une distribution gaussienne généralisée dont on montre que le paramètre d’échelle est caractéristique des tissus sousjacents. La modélisation des données dans le domaine de l’image est un autre sujet traité dans cette thèse. A cette fin, une distribution gamma généralisée est proposée. Notre deuxième contribution consiste alors à développer un estimateur efficace des paramètres de cette loi à l’aide d’une descente de gradient naturel. Finalement, un modèle d’observation de bruit multiplicatif est établi pour expliquer la distribution gamma généralisée des données. Des méthodes d’inférence bayésienne paramétrique sont ensuite développées avec ce modèle pour permettre la classification d’images saines et présentant un lentigo. Les algorithmes développés sont appliqués à des images réelles obtenues d’une étude clinique dermatologique.
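As a rough illustration of the image-domain model mentioned above, the sketch below fits a generalized gamma distribution with SciPy's generic maximum-likelihood routine; the thesis itself derives a natural-gradient estimator and a multiplicative-noise observation model, which are not reproduced here, and the input data are simulated.

```python
import numpy as np
from scipy.stats import gengamma

# Illustrative positive "intensity" data; real inputs would be confocal image pixels
rng = np.random.default_rng(0)
intensities = rng.gamma(shape=2.0, scale=3.0, size=5000)

# Generic maximum-likelihood fit of a generalized gamma (location fixed at 0).
# The thesis instead derives a natural-gradient descent estimator for the parameters.
a, c, loc, scale = gengamma.fit(intensities, floc=0)
print(a, c, scale)
```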
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Ochoa, Pizzali Luis Fernando. "Desempenho de redes de distribuição com geradores distribuídos /". Ilha Solteira : [s.n.], 2006. http://hdl.handle.net/11449/100362.

Texto completo
Resumen
Orientador: Antonio Padilha Feltrin
Banca: Rubén Augusto Romero Lázaro
Banca: Dionízio Paschoareli Júnior
Banca: Gareth Harrison
Banca: Carmen Lucia Tancredo Borges
Resumo: Neste trabalho, é apresentada uma análise em regime permanente que considera a avaliação de impactos técnicos tais como perdas elétricas, queda de tensão e níveis de curto-circuito, entre outros; utilizando dados de demanda e geração variáveis no tempo ao longo de um horizonte determinado. O objetivo é encontrar um conjunto de arranjos da GD (configurações) que levem ao melhor desempenho da rede de distribuição analisada, minimizando ou maximizando cada aspecto técnico segundo o interesse da empresa de distribuição. Dada a natureza combinatória deste problema, que requer uma ferramenta de otimização capaz de manipular múltiplos objetivos, os impactos técnicos serão avaliados simultaneamente utilizando uma metodologia baseada no conceito do Non-dominated Sorting Genetic Algorithm (NSGA), conduzindo a soluções mais reais e diversificadas para a tomada de decisões, conhecidas como soluções ótimas de Pareto.
Abstract: In this work, a steady-state analysis is presented that assesses technical impacts such as losses, voltage drop and short-circuit levels, among others, using time-varying load and generation data within a specified horizon. The objective is to find a set of configurations that lead to the best performance of the distribution network under analysis, minimizing or maximizing each technical aspect according to the utility's concerns. Given the combinatorial nature of this problem, which requires an optimization tool able to handle multiple objectives, technical impacts will be assessed simultaneously through a methodology based on the non-dominated sorting genetic algorithm (NSGA). This approach leads to a more realistic and diversified set of solutions for decision making, known as Pareto-optimal solutions.
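A minimal sketch of the Pareto-dominance test that underlies NSGA-style non-dominated sorting, assuming all objectives are to be minimized; the objective values are invented and this is not the thesis's implementation.

```python
import numpy as np

def dominates(a, b):
    """True if solution `a` Pareto-dominates `b` (all objectives minimized)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(objectives):
    """Indices of non-dominated solutions among the rows of `objectives`.

    Each row holds the objective values (e.g. losses, voltage drop) of one
    candidate distributed-generation configuration.
    """
    pts = np.asarray(objectives, dtype=float)
    front = []
    for i, p in enumerate(pts):
        if not any(dominates(q, p) for j, q in enumerate(pts) if j != i):
            front.append(i)
    return front

# Example with made-up objective vectors: (losses [kW], max voltage drop [%])
configs = [(120, 4.1), (100, 5.0), (95, 4.9), (130, 3.8)]
print(pareto_front(configs))   # indices of the non-dominated configurations
```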
Doutor
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Hitz, Adrien. "Modelling of extremes". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ad32f298-b140-4aae-b50e-931259714085.

Texto completo
Resumen
This work focuses on statistical methods to understand how frequently rare events occur and what the magnitude of extreme values such as large losses is. It lies in a field called extreme value analysis whose scope is to provide support for scientific decision making when extreme observations are of particular importance such as in environmental applications, insurance and finance. In the univariate case, I propose new techniques to model tails of discrete distributions and illustrate them in an application on word frequency and multiple birth data. Suitably rescaled, the limiting tails of some discrete distributions are shown to converge to a discrete generalized Pareto distribution and generalized Zipf distribution respectively. In the multivariate high-dimensional case, I suggest modeling tail dependence between random variables by a graph such that its nodes correspond to the variables and shocks propagate through the edges. Relying on the ideas of graphical models, I prove that if the variables satisfy a new notion called asymptotic conditional independence, then the density of the joint distribution can be simplified and expressed in terms of lower dimensional functions. This generalizes the Hammersley- Clifford theorem and enables us to infer tail distributions from observations in reduced dimension. As an illustration, extreme river flows are modeled by a tree graphical model whose structure appears to recover almost exactly the actual river network. A fundamental concept when studying limiting tail distributions is regular variation. I propose a new notion in the multivariate case called one-component regular variation, of which Karamata's and the representation theorem, two important results in the univariate case, are generalizations. Eventually, I turn my attention to website visit data and fit a censored copula Gaussian graphical model allowing the visualization of users' behavior by a graph.
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Blanchet, Thomas. "Essays on the Distribution of Income and Wealth : Methods, Estimates and Theory". Thesis, Paris, EHESS, 2020. http://www.theses.fr/2020EHES0004.

Texto completo
Resumen
Cette thèse couvre plusieurs sujets sur la répartition des revenus et des richesses. Dans le premier chapitre, nous développons une nouvelle méthode pour exploiter les tabulations de revenu et de richesse, telle que celle publiée par les autorités fiscales. Nous y définissons les courbes de Pareto généralisées comme la courbe des coefficients de Pareto inversés b(p), où b(p) est le rapport entre le revenu moyen ou la richesse au-dessus du rang p et le p-ième quantile Q(p) (c'est-à-dire b(p)=E[X|X>Q(p)]/Q(p)). Nous les utilisons pour caractériser des distributions entières, y compris les endroits comme le sommet où la lois de Pareto est une bonne description, et les endroits plus bas où elles ne le sont pas. Nous développons une méthode pour reconstruire de manière flexible l'ensemble de la distribution sur la base de données tabulées sur le revenu ou le patrimoine, qui produit courbes de Pareto généralisées lisses et réalistes.Dans le deuxième chapitre, nous présentons une nouvelle approche pour combiner les données d'enquête et les tabulations fiscales afin de corriger la sous-représentation des plus riches au sommet. Elle détermine de façon endogène un "point de fusion'' entre les données avant de modifier les poids tout au long de la distribution et de remplacer les nouvelles observations au-delà du support original de l'enquête. Nous fournissons des simulations de la méthode et des applications aux données réelles. Les premières démontrent que notre méthode améliore la précision et la stabilité des estimations de la distribution, par rapport à d'autres méthodes de correction d'enquêtes utilisant des données externes, et même en présence d'hypothèses extrêmes. Les applications empiriques montrent que non seulement les niveaux d'inégalité des revenus peuvent changer, mais aussi les tendances.Dans le troisième chapitre, nous estimons la distribution du revenu national dans 38 pays européens entre 1980 et 2017 en combinant enquêtes, données fiscales et comptes nationaux. Nous développons une méthodologie cohérente combinant des méthodes d'apprentissage statistique, de calage non linéaire des enquêtes et la théorie des valeurs extrêmes afin de produire des estimations de l'inégalité des revenus avant et après impôt, comparables d'un pays à l'autre et conformes aux taux de croissance macroéconomiques. Nous constatons que les inégalités se sont creusées dans une majorité de pays européens, en particulier entre 1980 et 2000. Le 1% les plus riches en Europe a augmenté plus de deux fois plus vite que les 50% les plus pauvres et a capturé 18% de la croissance des revenus régionaux.Dans le quatrième chapitre, je décompose la dynamique de la distribution de la richesse à l'aide d'un modèle stochastique dynamique simple qui sépare les effets de la consommation, du revenu du travail, des taux de rendement, de la croissance, de la démographie et du patrimoine. À partir de deux théorèmes de calcul stochastique, je montre que ce modèle est identifié de manière non paramétrique et qu'il peut être estimé à partir de données en coupes répétées. Je l'estime à l'aide des comptes nationaux distributifs des États-Unis depuis 1962. Je trouve que, de l'augmentation de 15pp. de la part de la richesse détenue par les 1% les plus riches observée depuis 1980, environ 7pp. peut être attribuée à l'inégalité croissante des revenus du travail, 6pp. à la hausse des rendements sur le capital (principalement sous forme de plus-values), et 2pp. à la baisse de la croissance. 
En suivant les paramètres actuels, la part de la richesse des 1% les plus riches atteindrait sa valeur stationnaire d'environ 45% d'ici les années 2040, un niveau similaire à celui du début du XXe siècle. J'utilise ensuite le modèle pour analyser l'effet d'un impôt progressif sur les patrimoines au sommet de la distribution
This thesis covers several topics on the distribution of income and wealth. In the first chapter, we develop a new methodology to exploit tabulations of income and wealth such as the one published by tax authorities. In it, we define generalized Pareto curves as the curve of inverted Pareto coefficients b(p), where b(p) is the ratio between average income or wealth above rank p and the p-th quantile Q(p) (i.e. b(p)=E[X|X>Q(p)]/Q(p)). We use them to characterize entire distributions, including places like the top where power laws are a good description, and places further down where they are not. We develop a method to flexibly recover the entire distribution based on tabulated income or wealth data which produces smooth and realistic shapes of generalized Pareto curves.In the second chapter, we present a new approach to combine survey data with tax tabulations to correct for the underrepresentation of the rich at the top. It endogenously determines a "merging point'' between the datasets before modifying weights along the entire distribution and replacing new observations beyond the survey's original support. We provide simulations of the method and applications to real data. The former demonstrate that our method improves the accuracy and precision of distributional estimates, even under extreme assumptions, and in comparison to other survey correction methods using external data. The empirical applications show that not only can income inequality levels change, but also trends.In the third chapter, we estimate the distribution of national income in thirty-eight European countries between 1980 and 2017 by combining surveys, tax data and national accounts. We develop a unified methodology combining machine learning, nonlinear survey calibration and extreme value theory in order to produce estimates of pre-tax and post-tax income inequality, comparable across countries and consistent with macroeconomic growth rates. We find that inequality has increased in a majority of European countries, especially between 1980 and 2000. The European top 1% grew more than two times faster than the bottom 50% and captured 18% of regional income growth.In the fourth chapter, I decompose the dynamics of the wealth distribution using a simple dynamic stochastic model that separates the effects of consumption, labor income, rates of return, growth, demographics and inheritance. Based on two results of stochastic calculus, I show that this model is nonparametrically identified and can be estimated using only repeated cross-sections of the data. I estimate it using distributional national accounts for the United States since 1962. I find that, out of the 15pp. increase in the top 1% wealth share observed since 1980, about 7pp. can be attributed to rising labor income inequality, 6pp. to rising returns on wealth (mostly in the form of capital gains), and 2pp. to lower growth. Under current parameters, the top 1% wealth share would reach its steady-state value of roughly 45% by the 2040s, a level similar to that of the beginning of the 20th century. I then use the model to analyze the effect of progressive wealth taxation at the top of the distribution
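A minimal sketch of the empirical generalized Pareto curve b(p) = E[X | X > Q(p)] / Q(p) defined in the abstract, computed on simulated incomes; the simulation and the function name are illustrative only, not the authors' estimation procedure for tabulated data.

```python
import numpy as np

def generalized_pareto_curve(sample, ps):
    """Empirical inverted Pareto coefficients b(p) = E[X | X > Q(p)] / Q(p).

    For an exact power law b(p) is constant; deviations from a flat curve
    show where the Pareto approximation breaks down along the distribution.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    out = []
    for p in ps:
        q = np.quantile(x, p)
        top = x[x > q]
        out.append(top.mean() / q if top.size and q > 0 else np.nan)
    return np.array(out)

# Illustrative use with simulated incomes (lognormal body, heavy right tail)
rng = np.random.default_rng(1)
incomes = np.exp(rng.normal(10, 0.8, size=100_000))
ps = np.linspace(0.1, 0.99, 90)
b = generalized_pareto_curve(incomes, ps)
```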
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Park, JinSoo. "Adaptive Asymmetric Slot Allocation for Heterogeneous Traffic in WCDMA/TDD Systems". Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/29630.

Texto completo
Resumen
Even if 3rd and 4th generation wireless systems aim to achieve multimedia services at high speed, it is rather difficult to have full-fledged multimedia services due to insufficient capacity of the systems. There are many technical challenges placed on us in order to realize the real multimedia services. One of those challenges is how efficiently to allocate resources to traffic as the wireless systems evolve. The review of the literature shows that strategic manipulation of traffic can lead to an efficient use of resources in both wire-line and wireless networks. This aspect brings our attention to the role of link layer protocols, which is to orchestrate the transmission of packets in an efficient way using given resources. Therefore, the Media Access Control (MAC) layer plays a very important role in this context. In this research, we investigate technical challenges involving resource control and management in the design of MAC protocols based on the characteristics of traffic, and provide some strategies to solve those challenges. The first and foremost matter in wireless MAC protocol research is to choose the type of multiple access schemes. Each scheme has advantages and disadvantages. We choose Wireless Code Division Multiple Access/Time Division Duplexing (WCDMA/TDD) systems since they are known to be efficient for bursty traffic. Most existing MAC protocols developed for WCDMA/TDD systems are interested in the performance of a unidirectional link, in particular in the uplink, assuming that the number of slots for each link is fixed a priori. That ignores the dynamic aspect of TDD systems. We believe that adaptive dynamic slot allocation can bring further benefits in terms of efficient resource management. Meanwhile, this adaptive slot allocation issue has been dealt with from a completely different angle. Related research works are focused on the adaptive slot allocation to minimize inter-cell interference under multi-cell environments. We believe that these two issues need to be handled together in order to enhance the performance of MAC protocols, and thus embark upon a study on the adaptive dynamic slot allocation for the MAC protocol. This research starts from the examination of key factors that affect the adaptive allocation strategy. Through the review of the literature, we conclude that traffic characterization can be an essential component for this research to achieve efficient resource control and management. So we identify appropriate traffic characteristics and metrics. The volume and burstiness of traffic are chosen as the characteristics for our adaptive dynamic slot allocation. Based on this examination, we propose four major adaptive dynamic slot allocation strategies: (i) a strategy based on the estimation of burstiness of traffic, (ii) a strategy based on the estimation of volume and burstiness of traffic, (iii) a strategy based on the parameter estimation of a distribution of traffic, and (iv) a strategy based on the exploitation of physical layer information. The first method estimates the burstiness in both links and assigns the number of slots for each link according to a ratio of these two estimates. The second method estimates the burstiness and volume of traffic in both links and assigns the number of slots for each link according to a ratio of weighted volumes in each link, where the weights are driven by the estimated burstiness in each link. 
For the estimation of burstiness, we propose a new burstiness measure that is based on a ratio between peak and median volume of traffic. This burstiness measure requires the determination of an observation window, with which the median and the peak are measured. We propose a dynamic method for the selection of the observation window, making use of statistical characteristics of traffic: Autocorrelation Function (ACF) and Partial ACF (PACF). For the third method, we develop several estimators to estimate the parameters of a traffic distribution and suggest two new slot allocation methods based on the estimated parameters. The last method exploits physical layer information as another way of allocating slot to enhance the performance of the system. The performance of our proposed strategies is evaluated in various scenarios. Major simulations are categorized as: simulation on data traffic, simulation on combined voice and data traffic, simulation on real trace data. The performance of each strategy is evaluated in terms of throughput and packet drop ratio. In addition, we consider the frequency of slot changes to assess the performance in terms of control overhead. We expect that this research work will add to the state of the knowledge in the field of link-layer protocol research for WCDMA/TDD systems.
Ph. D.
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Mahadevan, Muralidharan Ananth. "Analysis of Garbage Collector Algorithms in Non-Volatile Memory Devices". The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1365811711.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Maire, Anthony. "Comment sélectionner les zones prioritaires pour la conservation et la restauration des communautés de poissons de rivière ? Applications aux échelles de la France et du Pas-de-Calais". Phd thesis, Toulouse, INPT, 2014. http://oatao.univ-toulouse.fr/13313/1/Maire.pdf.

Texto completo
Resumen
Face à l’érosion globale de la biodiversité des écosystèmes aquatiques continentaux, l’identification des mesures de gestion les plus urgentes à mettre en place est cruciale. En s’appuyant sur une approche innovante et multi-facettes de la diversité, les priorités de conservation pour les assemblages de poissons de rivière ont pu être déterminées à l’échelle de la France. La durabilité de ces priorités de conservation face aux principales composantes des changements globaux a ensuite été évaluée afin d’identifier les zones qui protégeront efficacement la biodiversité actuelle dans le futur. La méthodologie développée a finalement été appliquée au réseau hydrographique du département du Pas-de-Calais dans le but d’identifier précisément les priorités locales de conservation et de restauration. Ces outils pourront par la suite être utilisés comme support d’aide à la décision et adaptés selon les besoins des gestionnaires des milieux aquatiques.
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Holešovský, Jan. "Metody odhadu parametrů rozdělení extrémního typu s aplikacemi". Doctoral thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2016. http://www.nusl.cz/ntk/nusl-240512.

Texto completo
Resumen
The thesis focuses on extreme value theory and its applications. Initially, the extreme value distribution is introduced and its properties are discussed. On this basis, the two models most used for extreme value analysis are described, i.e. the block maxima model and the Pareto-distribution threshold model. The first has the advantage of robustness; however, the threshold model has recently been preferred. Although the threshold choice strongly affects the estimation quality of the model, optimal threshold selection is still one of the unsolved issues of this approach. The thesis therefore focuses on techniques for proper threshold identification, mainly on adaptive methods suitable for use in practice. For this purpose a simulation study was performed and the acquired knowledge was applied to the analysis of precipitation records from the South Moravian region. Furthermore, the thesis also deals with extreme value estimation within a stationary series framework. Usually, an observed time series needs to be separated to obtain approximately independent observations. The use of the advanced theory for stationary series makes it possible to avoid the entire separation procedure. In this context the commonly applied separation techniques turn out to be quite inappropriate in most cases, and the estimates based on the theory of stationary series are obtained with better precision.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

Perrin, Yohann. "Etude de la structure partonique de l'hélium". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00845950.

Texto completo
Resumen
La structure des nucléons et des noyaux a été intensivement étudiée au cours du vingtième siècle au travers de la diffusion élastique d'électrons (mesure des facteurs de forme électromagnétique) et de la diffusion profondément inélastique (mesure des distributions de partons). Le formalisme des distributions généralisées de partons (GPD) a permis d'unifier les facteurs de forme et les distributions de partons. Ce lien procure une source d'information unique sur la dynamique des partons, telle la distribution des forces nucléaires et de moment orbital au sein des hadrons. L'accès expérimental le plus simple aux GPD est la diffusion Compton profondément virtuelle (DVCS), correspondant à l'électroproduction dure d'un photon réel. Tandis que plusieurs expériences se sont déjà focalisées sur la réaction DVCS sur le nucléon, les expériences sur une cible nucléaire s'avèrent plus rares. Cette thèse se concentre sur l'étude du canal DVCS cohérent sur l'hélium 4, avec pour objectif l'extraction des parties réelle et imaginaire du facteur de forme Compton via l'asymétrie de spin du faisceau.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Ho, Zhen Wai Olivier. "Contributions aux algorithmes stochastiques pour le Big Data et à la théorie des valeurs extrèmes multivariés". Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD025/document.

Texto completo
Resumen
La thèse comporte deux parties distinctes. La première partie concerne des modèles pour les extrêmes multivariés. On donne une construction de vecteurs aléatoires multivariés à variations régulières. La construction se base sur une extension multivariée d'un lemme de Breiman établissant la propriété de variation régulière d'un produit $RZ$ de variables aléatoires avec $R$ positive à variation régulière et $Z$ positive suffisamment intégrable. En prenant $\mathbf{Z}$ multivarié et suffisamment intégrable, on montre que $R\mathbf{Z}$ est un vecteur aléatoire à variations régulières et on caractérise sa mesure limite. On montre ensuite que pour $\mathbf{Z}$ de loi bien choisie, on retrouve des modèles stables classiques comme le modèle t-extremal, Hüsler-Reiss, etc. Puis, on étend notre construction pour considérer la notion de variation régulière multivariée non standard. On montre ensuite que le modèle de Pareto (qu'on appelle Hüsler-Reiss Pareto) associé au modèle max-stable Hüsler-Reiss forme une famille exponentielle complète. On donne quelques propriétés du modèle Hüsler-Reiss Pareto puis on propose un algorithme de simulation exacte. On étudie l'inférence par le maximum de vraisemblance. Finalement, on considère une extension du modèle Hüsler-Reiss Pareto utilisant la notion de variation régulière non standard. On étudie l'inférence par le maximum de vraisemblance du modèle généralisé et on propose une méthode d'estimation des paramètres. On donne une étude numérique sur l'estimateur du maximum de vraisemblance pour le modèle Hüsler-Reiss Pareto. Dans la seconde partie, qui concerne l'apprentissage statistique, on commence par donner une borne sur la valeur singulière minimale d'une matrice perturbée par l'ajout d'une colonne. On propose alors un algorithme de sélection de colonne afin d'extraire les caractéristiques de la matrice. On illustre notre algorithme sur des données réelles de séries temporelles où chaque série est prise comme étant une colonne de la matrice. Deuxièmement, on montre que si une matrice $X$ a une propriété d'incohérence alors $X$ possède aussi une version affaiblie de la propriété NSP (null space property). Puis, on s'intéresse au problème de sélection de matrice incohérente. À partir d'une matrice $X \in \mathbb{R}^{n\times p}$ et $\mu>0$, on cherche la plus grande sous-matrice de $X$ avec une cohérence inférieure à $\mu$. Ce problème est formulé comme un programme linéaire avec contrainte quadratique sur $\{0,1\}^p$. Comme ce problème est NP-dur, on considère une relaxation sur la sphère et on obtient une borne sur l'erreur lorsqu'on considère le problème relaxé. Enfin, on analyse l'algorithme de gradient stochastique projeté pour l'analyse en composantes principales online. On montre qu'en espérance, l'algorithme converge vers un vecteur propre maximum et on propose un algorithme pour sélectionner le pas de l'algorithme. On illustre ensuite cet algorithme par une expérience de simulation
This thesis is divided into two parts. The first part studies models for multivariate extremes. We give a method to construct multivariate regularly varying random vectors. The method is based on a multivariate extension of a Breiman lemma that states that a product $RZ$ of a random non-negative regularly varying variable $R$ and a non-negative $Z$ sufficiently integrable is also regularly varying. Replacing $Z$ with a random vector $\mathbf{Z}$, we show that the product $R\mathbf{Z}$ is regularly varying and we give a characterisation of its limit measure. Then, we show that taking specific distributions for $\mathbf{Z}$, we obtain classical max-stable models. We extend our result to non-standard regular variations. Next, we show that the Pareto model associated with the Hüsler-Reiss max-stable model forms a full exponential family. We show some properties of this model and we give an algorithm for exact simulation. We study the properties of the maximum likelihood estimator. Then, we extend our model to non-standard regular variations. To finish the first part, we propose a numerical study of the Hüsler-Reiss Pareto model. In the second part, we start by giving a lower bound on the smallest singular value of a matrix perturbed by appending a column. Then, we give a greedy algorithm for feature selection and we illustrate this algorithm on a time series dataset. Secondly, we show that an incoherent matrix satisfies a weakened version of the NSP property. Thirdly, we study the problem of column selection of $X\in\mathbb{R}^{n\times p}$ given a coherence threshold $\mu$. This means we want the largest submatrix satisfying some coherence property. We formulate the problem as a linear program with a quadratic constraint on $\{0,1\}^p$. Then, we consider a relaxation on the sphere and we bound the relaxation error. Finally, we study the projected stochastic gradient descent for online PCA. We show that in expectation, the algorithm converges to a leading eigenvector and we suggest an algorithm for step-size selection. We illustrate this algorithm with a numerical experiment
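A minimal simulation sketch of the $R\mathbf{Z}$ construction described above, with $R$ Pareto (hence regularly varying) and an illustrative lognormal choice for $\mathbf{Z}$ rather than the specific laws leading to the Hüsler-Reiss model; the Hill-type check at the end is a rough diagnostic of our own, not part of the thesis.

```python
import numpy as np

def simulate_RZ(n, dim, alpha=2.0, seed=0):
    """Simulate n draws of R * Z with R Pareto(alpha) and Z positive lognormal.

    By a multivariate Breiman-type argument, R * Z is multivariate regularly
    varying with index alpha; particular choices of the law of Z recover
    classical max-stable/Pareto models (here Z is just an illustrative choice).
    """
    rng = np.random.default_rng(seed)
    R = rng.pareto(alpha, size=(n, 1)) + 1.0          # Pareto tail, regularly varying
    Z = np.exp(rng.normal(0.0, 0.5, size=(n, dim)))   # positive, all moments finite
    return R * Z

X = simulate_RZ(10_000, dim=3)
# Rough tail-index diagnostic on the norm: Hill estimate over the top k observations
norms = np.sort(np.linalg.norm(X, axis=1))[::-1]
k = 100
hill = 1.0 / np.mean(np.log(norms[:k] / norms[k]))
print(hill)  # should be near alpha = 2
```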
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Nilsson, Mattias. "Tail Estimation for Large Insurance Claims, an Extreme Value Approach". Thesis, Linnaeus University, School of Computer Science, Physics and Mathematics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-7826.

Texto completo
Resumen

In this thesis, extreme value theory is used to estimate the probability that large insurance claims exceed a certain threshold. The expected claim size, given that the claim has exceeded a certain limit, is also estimated. Two different models are used for this purpose. The first model is based on maximum domain of attraction conditions. A Pareto distribution is used in the other model. Different graphical tools are used to check the validity of both models. Länsförsäkring Kronoberg has provided us with insurance data to perform the study. The conclusions drawn are that both models seem to be valid and that the results from both models are essentially equal.


I detta arbete används extremvärdesteori för att uppskatta sannolikheten att stora försäkringsskador överträffar en viss nivå. Även den förväntade storleken på skadan, givet att skadan överstiger ett visst belopp, uppskattas. Två olika modeller används. Den första modellen bygger på antagandet att underliggande slumpvariabler tillhör maximat av en extremvärdesfördelning. I den andra modellen används en Paretofördelning. Olika grafiska verktyg används för att besluta om modellernas giltighet. För att kunna genomföra studien har Länsförsäkring Kronoberg ställt upp med försäkringsdata. Slutsatser som dras är att båda modellerna verkar vara giltiga och att resultaten är likvärdiga.

Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Santos, Jeferino Manuel dos. "Aplicação da Teoria de Valores Extremos à actividade seguradora". Master's thesis, Instituto Superior de Economia e Gestão, 2003. http://hdl.handle.net/10400.5/652.

Texto completo
Resumen
Mestrado em Ciências Actuariais
O objectivo principal deste trabalho é realçar a importância da Teoria de Valores Extremos na actividade seguradora. São apresentados de uma forma sucinta alguns dos principais resultados ligados a esta teoria. São apresentadas algumas estatísticas que possibilitam a simplificação do processo de reconhecimento de dados de cauda pesada. A modelação da cauda é um assunto de particular interesse, são apresentados dois métodos de modelação da cauda, um pelo ajustamento de uma distribuição de Pareto Generalizada, outro pela aplicação de um método semi-paramétrico adaptativo. No fim, os resultados obtidos por cada um dos modelos são integrados como módulo num modelo de solvência.
The main purpose of this dissertation is to highlight the importance of Extreme Value Theory in the insurance sector. A short introduction to the main results of this theory is presented, along with a set of statistics that simplify the recognition of heavy-tailed data. Tail modelling is a subject of particular interest in this dissertation: two approaches are presented, one fitting a Generalized Pareto Distribution, the other modelling the tail by means of a semi-parametric adaptive method. In the last part, the results of these approaches are integrated as a module in a broader solvency model.
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Dalne, Katja. "The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting : In the Light of the Fundamental Review of the Trading Book". Thesis, KTH, Matematisk statistik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206168.

Texto completo
Resumen
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes the use of Expected Shortfall (ES) as risk measure instead of the currently used Value at Risk (VaR), as well as the application of varying liquidity horizons based on the risk levels of the assets involved. A major difficulty of implementing the FRTB lies within the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest by Righi and Ceretta gives an indication of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective does not necessarily remain so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective a risk measure model with a normal copula and a hybrid distribution, with the generalized Pareto distribution in the tails and the empirical distribution in the center, along with GARCH filtration is the most accurate one, while from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7 together with GARCH filtration is the most accurate one for implementation. Thus, when implementing the FRTB, a bank will need to compromise between obtaining a good VaR model, potentially resulting in too conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was carried out at SAS Institute, an American IT company that develops software for, among other things, risk management. Targeted customers are banks and other financial institutions. Investigating the FRTB is a potential advantage for the company when approaching customers that are to implement the regulatory framework in the near future.
Den globala finanskrisen som inleddes år 2007 ledde till flertalet ändringar vad gäller riskreglering för banker. En omfattande förändring som beräknas implementeras år 2019, utgörs av Fundamental Review of the Trading Book (FRTB). Denna föreslår bland annat användande av Expected Shortfall (ES) som riskmått istället för Value at Risk (VaR) som används idag, liksom tillämpandet av varierande likviditetshorisonter beroende på risknivåerna för tillgångarna i fråga. Den huvudsakliga svårigheten med att implementera FRTB ligger i backtestingen av ES. Righi och Ceretta föreslår ett robust ES backtest som baserar sig på Monte Carlo-simulering. Det är flexibelt i den mening att det inte antar någon specifik sannolikhetsfördelning samt att det går att implementera utan att man behöver vänta en hel backtestingperiod. Vid implementation av olika standardbacktest för VaR, liksom backtestet för ES av Righi och Ceretta, fås en uppfattning av vilka riskmåttsmodeller som ger de mest korrekta resultaten från både ett VaR- och ES-backtestingperspektiv. Sammanfattningsvis kan man konstatera att en modell som är acceptabel från ett VaR-backtestingperspektiv inte nödvändigtvis är det från ett ES-backtestingperspektiv och vice versa. I det hela taget har det visat sig att de modeller som är acceptabla ur ett VaR-backtestingperspektiv troligtvis är för konservativa från ett ESbacktestingperspektiv. Om man betraktar de konfidensnivåer som föreslagits i FRTB, kan man ur ett VaR-backtestingperspektiv konstatera att en riskmåttsmodell med normal-copula och en hybridfördelning med generaliserad Pareto-fördelning i svansarna och empirisk fördelning i centrum tillsammans med GARCH-filtrering är den bäst lämpade, medan det från ett ES-backtestingperspektiv är att föredra en riskmåttsmodell med univariat Student t-fördelning med ⱱ ≈ 7 tillsammans med GARCH-filtrering. Detta innebär att när banker ska implementera FRTB kommer de behöva kompromissa mellan att uppnå en bra VaR-modell som potentiellt resulterar i för konservativa ES-estimat och en modell som är mindre bra ur ett VaRperspektiv men som resulterar i rimligare ES-estimat. Examensarbetet genomfördes vid SAS Institute, ett amerikanskt IT-företag som bland annat utvecklar mjukvara för riskhantering. Tänkbara kunder är banker och andra finansinstitut. Denna studie av FRTB innebär en potentiell fördel för företaget vid kontakt med kunder som planerar implementera regelverket inom en snar framtid.
Riskhantering, finansiella tidsserier, Value at Risk, Expected Shortfall, Monte Carlo-simulering, GARCH-modellering, Copulas, hybrida distributioner, generaliserad Pareto-fördelning, extremvärdesteori, Backtesting, likviditetshorisonter, Basels regelverk
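A minimal sketch of the Student's t VaR/ES step referred to above (with ν estimated rather than fixed at ≈ 7); the GARCH filtration and the hybrid GPD-tail model from the thesis are deliberately omitted, and the closed-form ES expression used is the standard textbook one for the t distribution.

```python
import numpy as np
from scipy.stats import t as student_t

def var_es_student_t(returns, alpha=0.975):
    """One-day parametric VaR and ES of the loss distribution under Student's t.

    Sketch only: a full risk model (as in the thesis) would first filter the
    return series with a GARCH model, apply this to the standardized
    residuals and rescale by the forecast volatility.
    """
    losses = -np.asarray(returns, dtype=float)
    df, loc, scale = student_t.fit(losses)       # ML fit of nu, location, scale
    q = student_t.ppf(alpha, df)                 # standardized alpha-quantile
    var = loc + scale * q
    # Closed-form ES of a standardized t, then rescaled to location/scale
    es_std = student_t.pdf(q, df) / (1 - alpha) * (df + q**2) / (df - 1)
    es = loc + scale * es_std
    return var, es

# Example: rng = np.random.default_rng(2); r = rng.standard_t(7, 1000) * 0.01
# print(var_es_student_t(r))
```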
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Saaidia, Noureddine. "Sur les familles des lois de fonction de hasard unimodale : applications en fiabilité et analyse de survie". Thesis, Bordeaux 1, 2013. http://www.theses.fr/2013BOR14794/document.

Texto completo
Resumen
En fiabilité et en analyse de survie, les distributions qui ont une fonction de hasard unimodale ne sont pas nombreuses, qu'on peut citer : Gaussienne inverse, log-normale, log-logistique, de Birnbaum-Saunders, de Weibull exponentielle et de Weibull généralisée. Dans cette thèse, nous développons les tests modifiés du Chi-deux pour ces distributions tout en comparant la distribution Gaussienne inverse avec les autres. Ensuite nous construisons le modèle AFT basé sur la distribution Gaussienne inverse et les systèmes redondants basés sur les distributions de fonction de hasard unimodale
In reliability and survival analysis, there are not many distributions that have a unimodal or $\cap$-shaped hazard rate function; they include the inverse Gaussian, log-normal, log-logistic, Birnbaum-Saunders, exponential Weibull and power generalized Weibull distributions. In this thesis, we develop the modified Chi-squared tests for these distributions, and we give a comparative study between the inverse Gaussian distribution and the other distributions, then we carry out simulations. We also construct the AFT model based on the inverse Gaussian distribution and redundant systems based on distributions having a unimodal hazard rate function
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Winkler, Anderson M. "Widening the applicability of permutation inference". Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:ce166876-0aa3-449e-8496-f28bf189960c.

Texto completo
Resumen
This thesis is divided into three main parts. In the first, we discuss that, although permutation tests can provide exact control of false positives under the reasonable assumption of exchangeability, there are common examples in which global exchangeability does not hold, such as in experiments with repeated measurements or tests in which subjects are related to each other. To allow permutation inference in such cases, we propose an extension of the well known concept of exchangeability blocks, allowing these to be nested in a hierarchical, multi-level definition. This definition allows permutations that retain the original joint distribution unaltered, thus preserving exchangeability. The null hypothesis is tested using only a subset of all otherwise possible permutations. We do not need to explicitly model the degree of dependence between observations; rather the use of such permutation scheme leaves any dependence intact. The strategy is compatible with heteroscedasticity and can be used with permutations, sign flippings, or both combined. In the second part, we exploit properties of test statistics to obtain accelerations irrespective of generic software or hardware improvements. We compare six different approaches using synthetic and real data, assessing the methods in terms of their error rates, power, agreement with a reference result, and the risk of taking a different decision regarding the rejection of the null hypotheses (known as the resampling risk). In the third part, we investigate and compare the different methods for assessment of cortical volume and area from magnetic resonance images using surface-based methods. Using data from young adults born with very low birth weight and coetaneous controls, we show that instead of volume, the permutation-based non-parametric combination (NPC) of thickness and area is a more sensitive option for studying joint effects on these two quantities, giving equal weight to variation in both, and allowing a better characterisation of biological processes that can affect brain morphology.
46

Sadefo Kamdem, Jules. "Méthodes analytiques pour le Risque des Portefeuilles Financiers". PhD thesis, Université de Reims - Champagne Ardenne, 2004. http://tel.archives-ouvertes.fr/tel-00009187.

Texto completo
Resumen
In this thesis, we propose analytical and numerical methods for estimating the VaR or Expected Shortfall of linear and quadratic portfolios when the vector of risk factors follows a convex mixture of elliptical distributions. We also introduce, for the first time, the notion of a "quadratic portfolio" of basic assets (i.e. equities).
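For orientation, the sketch below computes the analytical VaR and Expected Shortfall of a linear portfolio under the simplest member of the elliptical family (the multivariate Gaussian); the weights, means and covariance are made-up illustrative values, and the thesis's mixture-of-elliptical-distributions results are not reproduced here.

```python
# Sketch: analytical Value-at-Risk and Expected Shortfall of a *linear* portfolio
# when risk factors are multivariate Gaussian (the simplest elliptical case).
# Weights, means and covariance below are arbitrary illustrative values.
import numpy as np
from scipy import stats

w = np.array([0.5, 0.3, 0.2])                      # portfolio weights (illustrative)
mu = np.array([0.001, 0.0005, 0.0])                # daily mean returns
cov = np.array([[4e-4, 1e-4, 5e-5],
                [1e-4, 2.5e-4, 2e-5],
                [5e-5, 2e-5, 1e-4]])               # covariance of risk factors

alpha = 0.99
m = w @ mu                                         # portfolio mean return
s = np.sqrt(w @ cov @ w)                           # portfolio standard deviation
z = stats.norm.ppf(alpha)

var = -(m - z * s)                                 # VaR expressed as a positive loss
es = -m + s * stats.norm.pdf(z) / (1 - alpha)      # Gaussian Expected Shortfall
print(f"99% VaR = {var:.4%}, 99% ES = {es:.4%}")
```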
47

Bouquiaux, Christel. "Semiparametric estimation for extreme values". Doctoral thesis, Universite Libre de Bruxelles, 2005. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210910.

Texto completo
Resumen
We apply the asymptotic theory of statistical experiments to problems related to extreme values. Four semiparametric models are considered. The first is the sampling model with a Pareto-type distribution function: the Pareto index is the parameter of interest, while the slowly varying function appearing in the decomposition of the survival function plays the role of a nuisance parameter. We then consider i.i.d. observations with a Weibull-type distribution function. The third model studied is a regression model: we consider independent pairs of observations $(Y_i,X_i)$, where the $X_i$ are i.i.d. with known distribution, and we assume that the conditional distribution function of $Y$ given $X$ is of Pareto type, with a slowly varying function and an index $\gamma$ that depend on $X$. The function $\gamma$ is assumed to have an arbitrary but known form that depends on a parameter $\
Doctorat en sciences, Orientation statistique
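The first model in the abstract above concerns Pareto-type distribution functions indexed by the Pareto (extreme value) index; as a generic illustration, and not the semiparametric method of the thesis, the sketch below applies the standard Hill estimator to a simulated exact Pareto sample.

```python
# Sketch: Hill estimator of the Pareto (tail) index from the k largest
# order statistics. Shown only to illustrate the Pareto-type sampling model;
# it is not the estimation procedure developed in the thesis.
import numpy as np

rng = np.random.default_rng(1)

def hill_estimator(x, k):
    """Hill estimate of gamma based on the k largest observations."""
    xs = np.sort(x)[::-1]                    # descending order statistics
    logs = np.log(xs[:k]) - np.log(xs[k])    # log-spacings above the threshold
    return logs.mean()

gamma_true = 0.5
n = 5000
x = rng.uniform(size=n) ** (-gamma_true)     # exact Pareto sample with index gamma
print(f"Hill estimate with k=200: {hill_estimator(x, 200):.3f} (true {gamma_true})")
```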
48

Brathwaite, Joy Danielle. "Value-informed space systems design and acquisition". Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/43748.

Texto completo
Resumen
Investments in space systems are substantial, indivisible, and irreversible, characteristics that make them high-risk, especially when coupled with an uncertain demand environment. Traditional approaches to system design and acquisition, derived from a performance- or cost-centric mindset, incorporate little information about the spacecraft in relation to its environment and its value to its stakeholders. These traditional approaches, while appropriate in stable environments, are ill-suited for the current, distinctly uncertain and rapidly changing technical, and economic conditions; as such, they have to be revisited and adapted to the present context. This thesis proposes that in uncertain environments, decision-making with respect to space system design and acquisition should be value-based, or at a minimum value-informed. This research advances the value-centric paradigm by providing the theoretical basis, foundational frameworks, and supporting analytical tools for value assessment of priced and unpriced space systems. For priced systems, stochastic models of the market environment and financial models of stakeholder preferences are developed and integrated with a spacecraft-sizing tool to assess the system's net present value. The analytical framework is applied to a case study of a communications satellite, with market, financial, and technical data obtained from the satellite operator, Intelsat. The case study investigates the implications of the value-centric versus the cost-centric design and acquisition choices. Results identify the ways in which value-optimal spacecraft design choices are contingent on both technical and market conditions, and that larger spacecraft for example, which reap economies of scale benefits, as reflected by their decreasing cost-per-transponder, are not always the best (most valuable) choices. Market conditions and technical constraints for which convergence occurs between design choices under a cost-centric and a value-centric approach are identified and discussed. In addition, an innovative approach for characterizing value uncertainty through partial moments, a technique used in finance, is adapted to an engineering context and applied to priced space systems. Partial moments disaggregate uncertainty into upside potential and downside risk, and as such, they provide the decision-maker with additional insights for value-uncertainty management in design and acquisition. For unpriced space systems, this research first posits that their value derives from, and can be assessed through, the value of information they provide. To this effect, a Bayesian framework is created to assess system value in which the system is viewed as an information provider and the stakeholder an information recipient. Information has value to stakeholders as it changes their rational beliefs enabling them to yield higher expected pay-offs. Based on this marginal increase in expected pay-offs, a new metric, Value-of-Design (VoD), is introduced to quantify the unpriced system's value. The Bayesian framework is applied to the case of an Earth Science satellite that provides hurricane information to oil rig operators using nested Monte Carlo modeling and simulation. Probability models of stakeholders' beliefs, and economic models of pay-offs are developed and integrated with a spacecraft payload generation tool. 
The case study investigates the information value generated by each payload, with results pointing to clusters of payload instruments that yielded higher information value, and minimum information thresholds below which it is difficult to justify the acquisition of the system. In addition, an analytical decision tool, probabilistic Pareto fronts, is developed in the Cost-VoD trade space to provide the decision-maker with additional insights into the coupling of a system's probable value generation and its associated cost risk.
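The partial-moments characterisation of value uncertainty mentioned above splits variability about a target into downside risk and upside potential; a minimal sketch on a simulated, purely illustrative NPV distribution follows, with the target value and distribution parameters invented for the example.

```python
# Sketch: disaggregating value uncertainty into downside risk and upside
# potential with partial moments about a target value tau:
#   LPM_n(tau) = E[max(tau - X, 0)^n]   (downside)
#   UPM_n(tau) = E[max(X - tau, 0)^n]   (upside)
# The NPV sample below is simulated from an arbitrary illustrative distribution.
import numpy as np

rng = np.random.default_rng(2)
npv = rng.normal(loc=50.0, scale=80.0, size=100_000)   # simulated NPV outcomes, $M

def lower_partial_moment(x, tau, n=2):
    return np.mean(np.maximum(tau - x, 0.0) ** n)

def upper_partial_moment(x, tau, n=2):
    return np.mean(np.maximum(x - tau, 0.0) ** n)

tau = 0.0                                              # break-even target
downside = np.sqrt(lower_partial_moment(npv, tau))     # semi-deviation below target
upside = np.sqrt(upper_partial_moment(npv, tau))       # semi-deviation above target
print(f"downside semi-deviation = {downside:.1f}, upside semi-deviation = {upside:.1f}")
```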
49

Ibn Taarit, Kaouther. "Contribution à l'identification des systèmes à retards et d'une classe de systèmes hybrides". PhD thesis, Ecole Centrale de Lille, 2010. http://tel.archives-ouvertes.fr/tel-00587336.

Texto completo
Resumen
The work presented in this thesis concerns the identification of time-delay systems and of a certain class of hybrid systems known as "impulsive" systems. In the first part, a fast identification algorithm is proposed for systems with a delayed input. It is based on a non-asymptotic distributional estimation method originally developed for delay-free systems. This technique leads to simple realization schemes involving integrators, multipliers, and piecewise polynomial or exponential functions. To generalize the approach to time-delay systems, three application examples are studied. The second part is devoted to the identification of impulsive systems. Using the distribution-theoretic framework, an identification procedure is developed to annihilate the singular terms of the differential equations representing these systems. As a result, an online estimation of the switching times and of the unknown parameters is obtained, independently of the switching laws. Numerical simulations of a simple pendulum subject to dry friction illustrate the methodology.
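As a generic illustration of input-delay identification, and not the non-asymptotic distributional algorithm developed in the thesis, the sketch below estimates the delay of a simulated first-order system by grid search over candidate delays combined with least squares; all system parameters are invented.

```python
# Sketch: identifying an input delay by grid search plus least squares on a
# first-order model  dy/dt = -a*y(t) + b*u(t - tau).  Generic illustration only.
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.01, 2000
a_true, b_true, delay_true = 2.0, 1.5, 60                   # 60 samples = 0.6 s

u = np.sign(np.sin(2 * np.pi * 0.5 * np.arange(n) * dt))    # square-wave input
y = np.zeros(n)
for k in range(n - 1):                                       # simulate the plant
    u_del = u[k - delay_true] if k >= delay_true else 0.0
    y[k + 1] = y[k] + dt * (-a_true * y[k] + b_true * u_del)
y += 0.001 * rng.normal(size=n)                              # small measurement noise

dydt = np.gradient(y, dt)
best = None
for d in range(0, 120):                                      # candidate delays (samples)
    u_shift = np.concatenate([np.zeros(d), u[:n - d]])
    A = np.column_stack([-y, u_shift])
    theta = np.linalg.lstsq(A, dydt, rcond=None)[0]
    sse = np.sum((A @ theta - dydt) ** 2)
    if best is None or sse < best[0]:
        best = (sse, d, theta)
print(f"estimated delay = {best[1] * dt:.2f} s, a = {best[2][0]:.2f}, b = {best[2][1]:.2f}")
```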
50

Číž, Bronislav. "Progresivita daně z příjmů". Master's thesis, Vysoká škola ekonomická v Praze, 2008. http://www.nusl.cz/ntk/nusl-5231.

Texto completo
Resumen
The diploma thesis focuses on the distribution of non-taxable items, specifically on their impact on the distribution of income and of the tax base across different income groups in the Czech Republic. The aim of the empirical research was to measure the redistributional effects of non-taxable items, in total and individually, using various income inequality metrics.
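A common way to quantify such a redistributional effect is to compare an inequality metric before and after the tax treatment; the sketch below does this with the Gini coefficient on simulated incomes and a simplified deduction rule, both invented for illustration (the thesis's data and tax rules are not reproduced).

```python
# Sketch: measuring a redistributional effect as the drop in the Gini
# coefficient between pre-tax income and income after a deduction-adjusted
# flat tax. The incomes and the tax rule below are purely illustrative.
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative income vector."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1) / n

rng = np.random.default_rng(4)
income = rng.lognormal(mean=10.0, sigma=0.6, size=10_000)   # gross incomes
deduction = np.minimum(income, 10_000.0)                    # non-taxable item (flat cap)
tax = 0.15 * (income - deduction)                           # flat tax on the reduced base
net = income - tax

print(f"Gini pre-tax  = {gini(income):.3f}")
print(f"Gini post-tax = {gini(net):.3f}")
print(f"redistributional effect = {gini(income) - gini(net):.3f}")
```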