Dissertations / Theses on the topic 'Marked point processes'

Consult the top 34 dissertations / theses for your research on the topic 'Marked point processes.'

1

Peng, Man (advisor: Olav Kallenberg). "Palm measure invariance and exchangeability for marked point processes." Auburn, Ala., 2008. http://repo.lib.auburn.edu/EtdRoot/2008/FALL/Mathematics_and_Statistics/Dissertation/Peng_Man_3.pdf.

Full text
2

Jones, Matthew O. "Spatial Service Systems Modelled as Stochastic Integrals of Marked Point Processes." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/7174.

Full text
Abstract:
We characterize the equilibrium behavior of a class of stochastic particle systems, where particles (representing customers, jobs, animals, molecules, etc.) enter a space randomly through time, interact, and eventually leave. The results are useful for analyzing the dynamics of randomly evolving systems, including spatial service systems, species populations, and chemical reactions. Such models with interactions arise in the study of species competition and of systems where customers compete for service (such as wireless networks). The models we develop are space-time measure-valued Markov processes. Specifically, particles enter a space according to a space-time Poisson process and are assigned independent and identically distributed attributes. The attributes may determine their movement in the space, and whenever a new particle arrives, it randomly deletes particles from the system according to their attributes. Our main result establishes that spatial Poisson processes are natural temporal limits for a large class of particle systems. Other results include the probability distributions of the sojourn times of particles in the systems, and the probabilities of the numbers of customers in spatial polling systems without Poisson limits.
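The arrival-and-departure mechanism described in this abstract is easy to prototype. A minimal sketch, assuming a unit-square region, a constant space-time arrival rate, and exponential sojourn times (the thesis treats far more general attribute-dependent dynamics):

```python
import numpy as np

rng = np.random.default_rng(0)

def snapshot(rate=50.0, horizon=10.0, mean_sojourn=1.0):
    """Space-time Poisson arrivals on the unit square with i.i.d. marks;
    return the particles still present at time `horizon`."""
    n = rng.poisson(rate * horizon)            # number of arrivals in [0, horizon]
    t_arrive = rng.uniform(0.0, horizon, n)    # given n, arrival times are uniform
    xy = rng.uniform(0.0, 1.0, (n, 2))         # spatial locations
    marks = rng.exponential(1.0, n)            # i.i.d. attributes (e.g. service demand)
    t_depart = t_arrive + rng.exponential(mean_sojourn, n)
    alive = t_depart > horizon                 # particles that have not yet left
    return xy[alive], marks[alive]

points, marks = snapshot()
print(len(points), "particles present; mean mark", round(marks.mean(), 3))
```

In this M/G/∞-style special case the snapshot is itself a spatial Poisson process, consistent with the Poisson temporal limits established in the thesis.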
3

Yen, Tso-Jung. "Nonparametric Bayesian modelling with marked point processes : theory and methods." Thesis, Imperial College London, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.497933.

Full text
4

Wenzel, Susanne [author]. "High-Level Facade Image Interpretation using Marked Point Processes." Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1110014147/34.

Full text
5

Törnqvist, Gustav. "Modelling insurance claims with spatial point processes : An applied case-control study to improve the use of geographical information in insurance pricing." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108431.

Full text
Abstract:
An important prerequisite for running a successful insurance business is the ability to predict risk. Forecasting the future in as much detail as possible creates competitive advantages in terms of price differentiation. This work uses spatial point processes to propose how a customer's geographical position can be used when developing risk-differentiation tools. For spatial variation in claim frequency, an approach common in spatial epidemiology is presented: a group of policyholders, with and without claims, is considered as a realisation of a multivariate Poisson point process in two dimensions. Claim costs are then included by treating the claims as a realisation of a point process with continuous marks. Demographic and socio-economic information from Swedish agencies is used to describe the spatial variation in relative risk. The insurance data come from the insurance company If Skadeförsäkring AB, where the work was also carried out. The results demonstrate problems with parametric modelling of the intensity of policyholders, which makes it difficult to validate the spatially varying intensity of claim frequency; therefore, different non-parametric estimation approaches are discussed. Furthermore, the selected information shows no tendency to explain the variation in claim costs.
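The case-control viewpoint adopted here, comparing the intensity of policyholders with claims ('cases') against all policyholders ('controls'), can be sketched with nonparametric kernel intensity estimates. The data, bandwidth and risk surface below are illustrative placeholders, not the thesis's choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_intensity(points, grid, bandwidth=0.1):
    """Gaussian kernel estimate of a 2D point-pattern intensity at grid locations."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * bandwidth**2)).sum(axis=1) / (2 * np.pi * bandwidth**2)

controls = rng.uniform(0, 1, (500, 2))                    # all policyholders
has_claim = rng.random(500) < 0.1 + 0.3 * controls[:, 0]  # claim risk grows eastwards
cases = controls[has_claim]

gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])

relative_risk = kernel_intensity(cases, grid) / kernel_intensity(controls, grid)
west, east = gx.ravel() < 0.5, gx.ravel() >= 0.5
print("mean relative risk, west vs east:",
      round(relative_risk[west].mean(), 3), round(relative_risk[east].mean(), 3))
```

The ratio of the two kernel intensities is the classical spatial-epidemiology estimate of relative risk, here reinterpreted as spatial variation in claim frequency.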
6

Kanaan, Mona N. "Cross-spectral analysis for spatial point-lattice processes." Thesis, [n.p.], 2000. http://dart.open.ac.uk/abstracts/page.php?thesisid=94.

Full text
7

Comas, Rodriguez Carlos. "Modelling forest dynamics through the development of spatial and temporal marked point processes." Thesis, University of Strathclyde, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415363.

Full text
8

Zass, Alexander [author], Sylvie Rœlly [academic supervisor], Gilles Blanchard [academic supervisor], Paolo Dai Pra [reviewer], and Suren Poghosyan [reviewer]. "A multifaceted study of marked Gibbs point processes." Potsdam : Universität Potsdam, 2021. http://d-nb.info/1238140548/34.

Full text
9

Malinowski, Alexander [author], Martin Schlather [academic supervisor], and Andrea Krajina [academic supervisor]. "Financial Models of Interaction Based on Marked Point Processes and Gaussian Fields." Göttingen : Niedersächsische Staats- und Universitätsbibliothek Göttingen, 2013. http://d-nb.info/1044047704/34.

Full text
10

Meillier, Céline. "Détection de sources quasi-ponctuelles dans des champs de données massifs." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT070/document.

Full text
Abstract:
Detecting the faintest galaxies in hyperspectral MUSE data is particularly challenging: they have a small spatial extent, a very sparse spectrum containing a single narrow emission line whose position in the spectral range is unknown, and a very low signal-to-noise ratio. These faint galaxies can be regarded as quasi point sources in the three dimensions of the data cube, and few methods in the literature can detect sources in three-dimensional data. We propose a method for detecting a galaxy configuration based on a marked point process in a nonparametric Bayesian framework. A galaxy is modelled by a point (its position in the spatial domain) together with marks (geometrical and spectral features) that turn the point into an object. This approach has the advantage of a mathematical representation close to the physical phenomenon, and it avoids pixel-based approaches, which are penalised by the large dimensions of the data (300 x 300 x 3600 pixels). Detection of the galaxies and estimation of their spatial, spectral and intensity characteristics are carried out in a fully Bayesian framework, leading to a generic and robust algorithm in which all parameters are estimated from the observed data alone and detection of the objects of interest is performed jointly. The dimension of the data and the difficulty of the detection problem motivate a preprocessing step that defines search areas in the cube: multiple-testing approaches are used to build proposition maps for the objects. The Bayesian detection is guided by these pre-detection maps, which define the intensity function of the marked point process, and object proposals are restricted to the pixels selected on these maps; the quality of the detection can be characterised by an error-control criterion. All the processing developed in this thesis was validated on synthetic data and then applied to a real data set acquired by MUSE after its commissioning in 2014.
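The pre-detection stage, multiple testing per spaxel to build a proposition map that then shapes the point-process intensity, can be sketched as a per-pixel max test with a Benjamini-Hochberg threshold. The cube size, noise model and planted source below are toy placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Toy cube: unit Gaussian noise plus one weak emission line at spatial pixel (10, 15).
cube = rng.normal(size=(32, 32, 200))
cube[10, 15, 120:124] += 5.0

tmax = cube.max(axis=2)                        # per-pixel statistic: max over wavelengths
n_chan = cube.shape[2]
pvals = 1.0 - stats.norm.cdf(tmax) ** n_chan   # p-value of the max under pure noise

# Benjamini-Hochberg threshold -> binary proposition map.
alpha = 0.1
flat = np.sort(pvals.ravel())
m = flat.size
passing = np.nonzero(flat <= alpha * np.arange(1, m + 1) / m)[0]
thresh = flat[passing.max()] if passing.size else 0.0
proposal_map = pvals <= thresh
print("pixels proposed:", int(proposal_map.sum()),
      "| source flagged:", bool(proposal_map[10, 15]))
```

Object births in the subsequent marked-point-process sampling would then be proposed only on the flagged pixels, which is what keeps the fully Bayesian detection tractable on a 300 x 300 x 3600 cube.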
11

Zhou, Jia. "Application de l’identification d’objets sur images à l’étude de canopées de peuplements forestiers tropicaux : cas des plantations d'Eucalyptus et des mangroves." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20214/document.

Full text
Abstract:
This PhD work aims at providing information on forest structure through the analysis of canopy properties, as described by the spatial distribution and crown size of dominant trees. Our approach is based on the theory of Marked Point Processes (MPP), which allows tree crowns observed in remote sensing images to be modelled as discs in a two-dimensional space. The potential of MPPs to detect tree crowns automatically is evaluated using very high spatial resolution optical satellite images of both Eucalyptus plantations and mangrove forest. Lidar data and simulated reflectance images are also analysed for the mangrove application. Different adaptations (parameter settings, energy models) of the MPP method are tested and compared through quantitative indices that allow comparison between detection results and tree references derived from the field, photo-interpretation or forest mockups. In the case of mangroves, the crown sizes estimated by detection are consistent with the outputs of the available allometric models. Other results indicate that tree detection by MPPs allows mapping the local density of trees in young Eucalyptus plantations, even when crown size is close to the image's spatial resolution (0.5 m). However, the quality of detection by MPPs decreases as the canopy becomes more complex. This work opens several research directions, both mathematical, such as handling objects with more complex shapes, and thematic, such as providing forest information at scales relevant to the development of remote sensing methods.
12

Burochin, Jean-Pascal. "Segmentation d'images de façades de bâtiments acquises d'un point de vue terrestre." Thesis, Paris Est, 2012. http://www.theses.fr/2012PEST1064/document.

Full text
Abstract:
Facade analysis (detection, understanding and reconstruction) in street-level imagery is currently a very active field of research in photogrammetry and computer vision, due to its many industrial applications. This thesis shows progress made in the generic segmentation of large volumes of such images, containing one or more facade areas (whole or partial). This kind of data is characterized by rich and varied architectural complexity and by problems related to lighting conditions and the camera's point of view. Genericity of the processing is an important issue, and the main constraint is to introduce as few priors as possible. The approaches presented extract the main facade structures based on geometric properties such as alignment and repetitiveness. We propose a hierarchical partition of the image contour edges and a detection of repetitive grid patterns based on marked point processes. In the results, the facade is separated from its neighbouring facades and from its environment (street, sky). Some elements, such as windows, balconies or wall background, are extracted in a coherent way without being recognized. Parameters are set in a single pass and apply to all architectural styles encountered. This problem is upstream of many applications, such as facade separation, increasing the level of detail of 3D city models generated from aerial or satellite imagery, compression, and indexing based on geometric primitives (grouping of structures and the spacing between them).
13

Reype, Christophe. "Modélisation probabiliste et inférence bayésienne pour l’analyse de la dynamique des mélanges de fluides géologiques : détection des structures et estimation des paramètres." Electronic Thesis or Diss., Université de Lorraine, 2022. http://www.theses.fr/2022LORR0235.

Full text
Abstract:
The analysis of hydrogeochemical data aims to improve the understanding of mass transfer in the sub-surface and the Earth's crust. This work focuses on the study of fluid-fluid interactions through fluid mixing systems, and more particularly on detecting the compositions of the mixing sources. The detection is done by means of a point process: the proposed model is unsupervised and applicable to multidimensional data. Physical knowledge of the mixtures and geological knowledge of the data are directly integrated into the probability density of a Gibbs point process, called the HUG model, which distributes point patterns in the data space. The detected sources form the point pattern that maximises the probability density of the HUG model. This probability density is known up to the normalisation constant. Knowledge related to the parameters of the model, acquired either experimentally or by inference methods, is integrated in the form of prior distributions. The configuration of the sources is obtained by a simulated annealing algorithm and Markov chain Monte Carlo (MCMC) methods. The parameters of the model are estimated by approximate Bayesian computation (ABC). First, the model is applied to synthetic data, and then to real data. The parameters of the model are then estimated for a synthetic data set with known sources. Finally, the sensitivity of the model to data uncertainties, parameter choices and algorithm set-up is studied.
14

Meitz, Mika. "Five contributions to econometric theory and the econometrics of ultra-high-frequency data." Doctoral thesis, Stockholm : Economic Research Institute, Stockholm School of Economics [Ekonomiska forskningsinstitutet vid Handelshögskolan i Stockholm] (EFI), 2006. http://www2.hhs.se/EFI/summary/694.htm.

Full text
15

Verdie, Yannick. "Modélisation de scènes urbaines à partir de données aériennes." Thesis, Nice, 2013. http://www.theses.fr/2013NICE4078.

Full text
Abstract:
Analysis and 3D reconstruction of urban scenes from physical measurements is a fundamental problem in computer vision and geometry processing. Over the last decades, an important demand has arisen for automatic methods that generate urban scene representations. This thesis investigates the design of pipelines for solving the complex problem of reconstructing 3D urban elements from either aerial Lidar data or Multi-View Stereo (MVS) meshes. Our approaches generate accurate and compact mesh representations enriched with urban-related semantic labelling. In urban scene reconstruction, two steps are necessary: an identification of the different elements of the scene, and a representation of these elements as 3D meshes. Chapter 2 presents two classification methods which yield a segmentation of the scene into semantic classes of interest. The benefit is twofold: first, this brings awareness of the scene for better understanding; second, different reconstruction strategies are adopted for each type of urban element. Our idea of inserting both semantic and structural information within urban scenes is discussed and validated through experiments. In Chapter 3, a top-down approach to detect 'Vegetation' elements from Lidar data is proposed, using Marked Point Processes and a novel optimization method. In Chapter 4, bottom-up approaches are presented for reconstructing 'Building' elements from Lidar data and from MVS meshes. Experiments on large and complex urban structures illustrate the robustness and scalability of our systems.
16

Rambaldi, Marcello. "Some applications of Hawkes point processes to high frequency finance." Doctoral thesis, Scuola Normale Superiore, 2017. http://hdl.handle.net/11384/85718.

Full text
17

Park, Jee Hyuk. "On the separation of preferences among marked point process wager alternatives." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2757.

Full text
18

Jeong, Seong-Gyun. "Modélisation de structures curvilignes et ses applications en vision par ordinateur." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4086/document.

Full text
Abstract:
In this dissertation, we propose curvilinear structure reconstruction models based on stochastic modelling and a structured ranking-learning system. We assume that the entire line network can be decomposed into a set of line segments with variable lengths and orientations. This assumption enables us to reconstruct arbitrary shapes of curvilinear structure for different types of datasets. We compute curvilinear feature descriptors based on image gradient profiles and morphological profiles. For the stochastic model, we propose prior constraints that define the spatial interaction of line segments. To obtain an optimal configuration corresponding to the latent curvilinear structure, we combine multiple line hypotheses computed by MCMC sampling with different parameter sets. Moreover, we learn a ranking function which predicts the correspondence between a given line segment and the latent curvilinear structures. A novel graph-based method is proposed to infer the underlying curvilinear structure using the output rankings of the line segments. We apply our models to analyse curvilinear structure in static images. Experimental results on a wide range of datasets demonstrate that the proposed curvilinear structure modelling outperforms state-of-the-art techniques.
19

Forbes, Peter G. M. "Quantifying the strength of evidence in forensic fingerprints." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:0915280a-22cc-429d-90dc-77f934d61dde.

Full text
Abstract:
Part I presents a model for fingerprint matching using Bayesian alignment on unlabelled point sets. An efficient Monte Carlo algorithm is developed to calculate the marginal likelihood ratio between the hypothesis that an observed fingerprint and fingermark pair originate from the same finger and the hypothesis that they originate from different fingers. The model achieves good performance on the NIST-FBI fingerprint database of 258 matched fingerprint pairs, though the computed likelihood ratios are implausibly extreme due to oversimplification in our model. Part II moves to a more theoretical study of proper scoring rules. The chapters in this section are designed to be independent of each other. Chapter 9 uses proper scoring rules to calibrate the implausible likelihood ratios computed in Part I. Chapter 10 defines the class of compatible weighted proper scoring rules. Chapter 11 derives new results for the score matching estimator, which can quickly generate point estimates for a parametric model even when the normalization constant of the distribution is intractable. It is used to find an initial value for the iterative maximization procedure in §3.3. Appendix A describes a novel algorithm to efficiently sample from the posterior of a von Mises distribution. It is used within the fingerprint model sampling procedure described in §5.6. Appendix B includes various technical results which would otherwise disrupt the flow of the main dissertation.
20

Martínez, Sosa José. "Optimal exposure strategies in insurance." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/optimal-exposure-strategies-in-insurance(3768eede-a363-475b-bf25-8eff039fe6b7).html.

Full text
Abstract:
Two optimisation problems are considered, in which market exposure is indirectly controlled. The first models the capital of a company and an independent portfolio of new business, each represented by a Cramér-Lundberg process. The company can choose the proportion of new business it takes on and can alter this proportion over time. The objective is to find a strategy that maximises the survival probability. We use a point-process framework to deal with the impact of an adapted strategy on the intensity of the new business. We prove that, for Cramér-Lundberg processes with exponentially distributed claims, it is optimal to choose a threshold-type strategy, in which the company switches between taking on all new business or none, depending on the capital level. For this type of process, which changes both drift and jump measure when crossing the constant threshold, we solve the one- and two-sided exit problems. This optimisation problem is also solved when the capital of the company and the new business are modelled by spectrally positive Lévy processes of bounded variation. Here the one-sided exit problem is solved and we prove optimality of the same type of threshold strategy for any jump distribution. The second problem is a stochastic variation of Taylor's work on underwriting in a competitive market. Taylor maximised discounted future cash flows over a finite time horizon in a discrete-time setting, where the change of exposure from one period to the next has a multiplicative form involving the company's premium and the market average premium. The control is the company's premium strategy over the finite time horizon. Taylor's work opened a rich line of research, some of which we discuss. In contrast with Taylor's model, we consider the market average premium to be a Markov chain instead of a deterministic vector, which allows modelling uncertainty in future market conditions. We also consider an infinite instead of a finite time horizon, which removes the time dependency in Taylor's optimal strategies that gave unrealistic results. Our main result is a formula to calculate explicitly the value function of a specific class of pricing strategies. We further explore concrete examples numerically, and find a mix of optimal strategies: in some examples the company should follow the market, while in other cases it should go against it.
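The threshold strategy proved optimal here is easy to explore by Monte Carlo: below the threshold the company takes on all new business (more premium income, more claims), above it none. A sketch with exponential claims, as in the thesis's main result; every numeric parameter is an illustrative placeholder:

```python
import numpy as np

rng = np.random.default_rng(4)

def survives(x0, b, horizon=100.0, c1=1.5, lam1=1.0, c2=2.7, lam2=2.0):
    """One path of a Cramér-Lundberg process switching regimes at threshold b:
    below b run both books (premium c2, claim rate lam2), above b the base book only."""
    t, x = 0.0, x0
    while t < horizon:
        c, lam = (c2, lam2) if x < b else (c1, lam1)
        w = rng.exponential(1.0 / lam)           # time to next claim in this regime
        if x < b and x + c * w > b:              # upward drift crosses the threshold first:
            t += (b - x) / c                     # restart the (memoryless) clock there
            x = b
            continue
        t += w
        x += c * w - rng.exponential(1.0)        # premiums earned minus one claim
        if x < 0:
            return False                         # ruin
    return True

for b in [0.0, 2.0, 5.0, 1e9]:                   # b = 1e9 ~ always take all new business
    p = np.mean([survives(1.0, b) for _ in range(1000)])
    print(f"threshold {b:>12}: survival prob ~ {p:.3f}")
```

Restarting the clock at the boundary is valid because Poisson interarrival times are memoryless; the exponential claim sizes match the setting of the thesis's optimality result, and the finite horizon stands in for the infinite-horizon survival probability.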
21

Abramowicz, Konrad. "Numerical analysis for random processes and fields and related design problems." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-46156.

Full text
Abstract:
In this thesis, we study numerical analysis for random processes and fields. We investigate the behavior of the approximation accuracy for specific linear methods based on a finite number of observations. Furthermore, we propose techniques for optimizing the performance of the methods for particular classes of random functions. The thesis consists of an introductory survey of the subject and related theory, and four papers (A-D).

In Paper A, we study a Hermite spline approximation of quadratic mean continuous and differentiable random processes with an isolated point singularity. We consider a piecewise polynomial approximation combining two different Hermite interpolation splines for the interval adjacent to the singularity point and for the remaining part. For locally stationary random processes, sequences of sampling designs eliminating asymptotically the effect of the singularity are constructed.

In Paper B, we focus on approximation of quadratic mean continuous real-valued random fields by a multivariate piecewise linear interpolator based on a finite number of observations placed on a hyperrectangular grid. We extend the concept of local stationarity to random fields, and for fields from this class we provide exact asymptotics for the approximation accuracy. Some asymptotic optimization results are also provided.

In Paper C, we investigate numerical approximation of integrals (quadrature) of random functions over the unit hypercube. We study the asymptotics of a stratified Monte Carlo quadrature based on a finite number of randomly chosen observations in strata generated by a hyperrectangular grid. For the locally stationary random fields (introduced in Paper B), we derive exact asymptotic results together with some optimization methods. Moreover, for a certain class of random functions with an isolated singularity, we construct a sequence of designs eliminating the effect of the singularity.

In Paper D, we consider a Monte Carlo pricing method for arithmetic Asian options. An estimator is constructed using a piecewise constant approximation of an underlying asset price process. For a wide class of Lévy market models, we provide upper bounds for the discretization error and the variance of the estimator. We construct an algorithm for accurate simulations with controlled discretization and Monte Carlo errors, and obtain estimates of the option price with a predetermined accuracy at a given confidence level. Additionally, for the Black-Scholes model, we optimize the performance of the estimator by using a suitable variance reduction technique.
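The stratified Monte Carlo quadrature of Paper C has a compact core: one uniform node per cell of a hyperrectangular grid. A sketch on the unit square, with a fixed smooth integrand standing in for a realisation of the random field:

```python
import numpy as np

rng = np.random.default_rng(5)
f = lambda x, y: np.exp(-3.0 * (x**2 + y**2))   # stand-in for one sample path

def stratified_mc(m):
    """One uniform point in each of the m*m grid cells; the average is the quadrature."""
    i, j = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    x = (i + rng.uniform(size=(m, m))) / m
    y = (j + rng.uniform(size=(m, m))) / m
    return f(x, y).mean()                       # cell volume 1/m^2 times the sum

def plain_mc(n):
    x, y = rng.uniform(size=(2, n))
    return f(x, y).mean()

reference = stratified_mc(512)                  # fine grid as a surrogate for the truth
for m in [8, 16, 32]:
    err_strat = abs(stratified_mc(m) - reference)
    err_plain = abs(plain_mc(m * m) - reference)    # same budget of m*m evaluations
    print(f"m={m:2d}: stratified error {err_strat:.2e}, plain MC error {err_plain:.2e}")
```

For a smooth integrand the stratified error decays markedly faster than plain Monte Carlo at the same budget; the paper's contribution is the exact asymptotics of this gain for locally stationary random integrands, plus designs that cope with an isolated singularity.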
22

Lu, Min. "A Study of the Calibration Regression Model with Censored Lifetime Medical Cost." Digital Archive @ GSU, 2006. http://digitalarchive.gsu.edu/math_theses/14.

Full text
Abstract:
Medical costs have received increasing interest recently in biostatistics and public health. Statistical analysis and inference for lifetime medical cost are complicated by the fact that survival times are censored for some study subjects, so their subsequent costs are unknown. Huang (2002) proposed the calibration regression model, a semiparametric regression tool for studying medical cost in relation to covariates. In this thesis, an inference procedure is investigated using the empirical likelihood ratio method. Unadjusted and adjusted empirical likelihood confidence regions are constructed for the regression parameters. We compare the proposed empirical likelihood methods with the normal-approximation-based method. Simulation results show that the proposed empirical likelihood ratio method outperforms the normal-approximation-based method in terms of coverage probability. In particular, the adjusted empirical likelihood performs best and overcomes the under-coverage problem.
23

Laifa, Oumeima. "A joint discriminative-generative approach for tumour angiogenesis assessment in computational pathology." Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS230.

Full text
Abstract:
Angiogenesis is the process through which new blood vessels are formed from pre-existing ones. During tumour angiogenesis, tumour cells secrete growth factors that activate the proliferation and migration of endothelial cells and stimulate overproduction of vascular endothelial growth factor (VEGF). The fundamental role of vascular supply in tumour growth and anti-cancer therapies makes the evaluation of tumour angiogenesis crucial in assessing the effect of anti-angiogenic therapies, a promising class of anti-cancer treatments. In this study, we establish a quantitative and qualitative panel to evaluate tumour blood vessel structures on non-invasive fluorescence images and histopathological slides across the full tumour, in order to identify architectural features and quantitative measurements that are often associated with prediction of therapeutic response. We develop a Markov Random Field (MRF) and watershed framework to segment blood vessel structures and tumour micro-environment components, in order to assess quantitatively the effect of the anti-angiogenic drug Pazopanib on the tumour vasculature and its interaction with the tumour micro-environment. Pazopanib showed a direct effect on the tumour vascular network via the endothelial cells crossing the whole tumour. Our results show a specific relationship between apoptotic neovascularization and nucleus density in murine tumours treated with Pazopanib. Qualitative evaluation of tumour blood vessel structures is then performed on whole-slide images, which are known to be very heterogeneous. We develop a discriminative-generative neural network model, based on both a learning-driven convolutional neural network (CNN) and a rule-based Marked Point Process (MPP) knowledge model, to segment blood vessels in very heterogeneous images using very little annotated data compared with the state of the art. We detail the intuition and design behind the discriminative-generative model and analyse its similarity with Generative Adversarial Networks (GANs). Finally, we evaluate the performance of the proposed model on histopathology slides and synthetic data. The limits of this promising framework, as well as its perspectives, are presented at the end of our study.
24

PAUS, ANNA. "ORGANISATION, COOPERATION AND REDUCTION: A SOCIO-ECONOMIC ANALYSIS OF ILLEGAL MARKET ACTORS FACILITATING IRREGULAR MIGRATION AT EU-INTERNAL TRANSIT POINTS." Doctoral thesis, Università degli Studi di Milano, 2020. http://hdl.handle.net/2434/737858.

Full text
Abstract:
The facilitation of irregular migration by organised criminal groups [OCGs] at EU-internal transit points represents a specific illegal market type. This PhD thesis uses a mixed-methodology approach to study this market with a focus on Italy, one of the main entry and transit countries for irregular migrants aiming to reach Central and Northern Europe, as well as the pulsating heart of intense EU public and political debate around issues of mismanaged, undocumented immigration. While the debate has concentrated on the organised smuggling of irregular migrants via sea routes, less attention has been paid to EU-inland routes. What is known about the latter is mainly restricted to sporadic cases in which smuggling journeys have ended tragically. This has led to the rather uninformed and sensationalist notion that the market for human smuggling is monopolised by highly structured and sophisticated transnational OCGs. However, existing empirical evidence suggests rather that OCGs are weakly tied and fragmented in structure. Considering that these OCGs operate on a highly uncertain market, which lacks institutional control and formal contracts, it becomes not only interesting but vital to understand how they nevertheless execute their business successfully. The purpose of this thesis is to shed light on the organisational structure of OCGs operating on this illegal market type, to elucidate how its decentralised structure influences the market's operation, and to analyse relational mechanisms that induce cooperative rather than opportunistic behaviour by illegal market actors. In doing so, the specificities and parallels of this distinct illegal market actor are compared to human smuggling organisations operating at EU-external borders. On the basis of these results, novel market reduction measures are pointed out, which are context-tailored as well as more generally applicable to countering human smuggling into and within the EU. The study aims to achieve its purpose through a context-specific socio-economic analysis of organised human smuggling at transit points internal to the EU by means of: (i) a critical review of the literature on EU-related human smuggling; (ii) a thematic analysis of secondary sources as well as expert interviews on EU-internal organised human smuggling; and (iii) a social network analysis of a selected, large-scale human smuggling organisation in Northern Italy. Together, these three analyses lead to significant conclusions. OCGs involved in EU-internal human smuggling exhibit a decentralised organisational structure with at most two tiers: resourceful smugglers at the top and precarious individuals at the bottom. These OCGs are constituted not only of foreign actors but also, to a large extent, of European ones. Common ethnicity appears to facilitate cooperation between smugglers, as does the criminal experience of a few. Compared to the increasingly structured OCGs operating at the borders of Europe, the EU-internal human smuggling market appears less organised and less violent and/or life-threatening for migrants. The latter is exhibited by a shift from physical transport to the progressive use of fraudulent documents on the EU-internal human smuggling market, which, however, might indicate increased involvement of resourceful smugglers.
It is argued that such a highly resilient illegal market structure can only be countered through (i) improved targeting of high-tier smugglers and, more importantly, (ii) recruitment-prevention strategies that address the marginalisation and socio-economic precarity of smugglers; these measures notably overlap with the aim of reducing irregular migrants' demand for smuggling services in the first place.
25

Ben, Salah Riadh. "Élaboration d'une méthode tomographique de reconstruction 3D en vélocimétrie par image de particules basée sur les processus ponctuels marqués." Thesis, Poitiers, 2015. http://www.theses.fr/2015POIT2268/document.

Full text
Abstract:
The research work in this thesis fits within the development of optical measurement techniques for fluid mechanics. It is particularly concerned with the reconstruction of 3D particle volumes, from which particle displacements are then inferred. This volumetric measurement technique, called Tomo-PIV, appeared in 2006 and has been the subject of numerous works aiming to enhance the reconstruction, which represents one of the most important steps of this measurement technique. The methods proposed in the literature do not necessarily take into account the particular shape of the objects to be reconstructed, and they are not sufficiently robust to deal with noisy images. To address these challenges, we propose a tomographic reconstruction method, called IOD-PVRMPP, based on marked point processes. Our method solves the problem in a parsimonious way. It facilitates the introduction of prior knowledge and solves the memory problems inherent in voxel-based approaches. The reconstruction of a 3D particle set is obtained by minimizing an energy function, which defines the marked point process. To this end, we use a simulated annealing algorithm based on Reversible Jump Markov Chain Monte Carlo (RJMCMC) methods. To speed up the convergence of the simulated annealing, we develop an initialization method that provides an initial distribution of 3D particles based on the detection of 2D particles located in the projection images. Finally, the method is applied to fluid flows that are either simulated or produced experimentally in a free-surface turbulent channel. The analysis of the results and the comparison of this method with classical ones demonstrate the great interest of such parsimonious approaches.
26

Crăciun, Paula. "Géométrie stochastique pour la détection et le suivi d'objets multiples dans des séquences d'images haute résolution de télédétection." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4095/document.

Full text
Abstract:
In this thesis, we combine tools from probability theory and stochastic geometry to propose new solutions to the problem of detecting and tracking multiple objects in high-resolution remotely sensed image sequences. We create a framework based on spatio-temporal marked point process models to jointly detect and track multiple objects in image sequences, and propose the use of simple parametric shapes to describe the appearance of these objects. We build new, dedicated energy-based models consisting of several terms that take into account both the image evidence and physical constraints such as object dynamics, track persistence and mutual exclusion. We construct a suitable optimization scheme that allows us to find strong local minima of the proposed highly non-convex energy. As the simulation of such models comes with a high computational cost, we turn our attention to recent filter implementations for multiple object tracking, which are known to be less computationally expensive. We propose a hybrid sampler combining the Kalman filter with the standard Reversible Jump MCMC sampler. High-performance computing techniques are also used to increase the computational efficiency of our method. We provide an in-depth analysis of the proposed framework based on standard multiple-object tracking metrics and computational efficiency.
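The Kalman filter half of the hybrid sampler handles linear-Gaussian object dynamics cheaply. A minimal predict/update cycle under an assumed constant-velocity model (the matrices below are illustrative, not the thesis's calibration):

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)        # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)        # only the position is observed
Q, R = 0.01 * np.eye(4), 0.25 * np.eye(2)  # process / measurement noise

def kalman_step(x, P, z):
    """One predict/update cycle for state x = (px, py, vx, vy)."""
    x, P = F @ x, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x = x + K @ (z - H @ x)                        # correct with the innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in ([1.0, 0.9], [2.1, 2.0], [2.9, 3.1]):     # noisy positions of one object
    x, P = kalman_step(x, P, np.array(z))
print("position", x[:2].round(2), "velocity", x[2:].round(2))
```

In the hybrid scheme, moves of this kind propose smooth track updates, while the Reversible Jump MCMC part handles births, deaths and track switches, where the dimension of the configuration changes.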
27

Berberovic, Adnan, and Alexander Eriksson. "A Multi-Factor Stock Market Model with Regime-Switches, Student's T Margins, and Copula Dependencies." Thesis, Linköpings universitet, Produktionsekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-143715.

Full text
Abstract:
Investors constantly seek information that provides an edge over the market. One of the conventional methods is to find factors that can predict asset returns. In this study we improve the Fama-French five-factor model with regime switches, Student's t distributions and copula dependencies. We also add price momentum as a sixth factor and add a one-day lag to the factors. The regime switches are obtained from a Hidden Markov Model with conditional Student's t distributions. For the return process we use factor data as input, Student's t distributed residuals, and Student's t copula dependencies. To fit the copulas, we develop a novel approach based on the Expectation-Maximisation algorithm. The results are promising, as the quantiles for most of the portfolios show a good fit to the theoretical quantiles. Using a sophisticated stochastic programming model, we back-test the predictive power out-of-sample over a 26-year period. Furthermore, we analyse the performance of different factors during different market regimes.
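The regime-switching return generator at the core of this model is simple to sketch: a hidden two-state Markov chain selects the parameters of a conditional Student's t distribution. Transition probabilities and t parameters below are illustrative, not the fitted values:

```python
import numpy as np

rng = np.random.default_rng(7)

P = np.array([[0.95, 0.05],                  # calm regime is highly persistent
              [0.10, 0.90]])                 # turbulent regime is less so
params = [(8.0, 0.0005, 0.008),              # (df, mean, scale) in the calm regime
          (3.0, -0.0010, 0.020)]             # heavier tails, higher vol when turbulent

def simulate(n=2500, s=0):
    states, returns = np.empty(n, dtype=int), np.empty(n)
    for t in range(n):
        s = rng.choice(2, p=P[s])            # hidden Markov transition
        df, mu, sigma = params[s]
        returns[t] = mu + sigma * rng.standard_t(df)
        states[t] = s
    return states, returns

states, rets = simulate()
for s in (0, 1):
    r = rets[states == s]
    print(f"regime {s}: {r.size} days, annualised vol ~ {r.std() * np.sqrt(252):.1%}")
```

In the thesis the states are of course not observed; they are inferred with the usual HMM machinery, and the cross-sectional dependence between portfolios is then layered on through Student's t copulas fitted by an EM-based procedure.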
28

Karyagina, Marina. "Life cycle cost modelling for fault-tolerant CNC architectures." Thesis, Queensland University of Technology, 1998.

Find full text
29

Malinowski, Alexander. "Financial Models of Interaction Based on Marked Point Processes and Gaussian Fields." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-F0EF-6.

Full text
30

Tatara, Anna N. "Rate Estimators for Non-stationary Point Processes." Thesis, 2019.

Find full text
Abstract:
Non-stationary point processes are often used to model systems whose rates vary over time. Estimating the underlying rate functions is important both as input to a discrete-event simulation and for various statistical analyses. We study nonparametric estimators for the marked point process, the infinite-server queueing model, and the transitory queueing model, and conduct statistical inference for these estimators by establishing a number of asymptotic results.

For the marked point process, we consider estimating the offered load to the system over time. With direct observations of the offered load sampled at fixed intervals, we establish asymptotic consistency, rates of convergence, and asymptotic covariance through a Functional Strong Law of Large Numbers, a Functional Central Limit Theorem, and a Law of Iterated Logarithm. We also show that there exists an asymptotically optimal interval width as the sample size approaches infinity.

The infinite-server queueing model is central to many stochastic models. Specifically, the mean number of busy servers can be used as an estimator for the total load faced by a multi-server system with time-varying arrivals, and in many other applications. Through an omniscient estimator based on observing both the arrival times and service requirements for n samples of an infinite-server queue, we show asymptotic consistency and the rate of convergence. Then, we establish the asymptotics for a nonparametric estimator based on observations of the busy servers at fixed intervals.

The transitory queueing model is crucial when studying a transitory system, which arises when the time horizon or population is finite. We assume we observe arrival counts at fixed intervals. We first consider a natural estimator which assumes an underlying nonhomogeneous Poisson process. Although the estimator is asymptotically unbiased, we see that a correction term is required to retrieve an accurate asymptotic covariance. Next, we consider a nonparametric estimator that exploits the maximum likelihood estimator of a multinomial distribution, and we show that this estimator converges appropriately to a Brownian bridge.
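For intuition about estimation from counts at fixed intervals, here is a minimal sketch of a piecewise-constant rate estimator for a nonhomogeneous Poisson process; the sinusoidal rate and the thinning construction are illustrative assumptions, and the bin width plays the role of the interval width whose optimal choice is studied above.

import numpy as np

def piecewise_rate(arrival_times, horizon, width):
    """Nonparametric rate estimate: arrival counts in fixed-width bins,
    scaled to events per unit time."""
    edges = np.arange(0.0, horizon + width, width)
    counts, _ = np.histogram(arrival_times, bins=edges)
    return edges[:-1], counts / width

# Simulate a sinusoidal-rate Poisson process by thinning (assumed example)
rng = np.random.default_rng(2)
T, lam_max = 24.0, 10.0
candidates = np.sort(rng.uniform(0.0, T, rng.poisson(lam_max * T)))
lam = lambda t: 5.0 + 4.0 * np.sin(2.0 * np.pi * t / T)
arrivals = candidates[rng.uniform(0.0, lam_max, candidates.size) < lam(candidates)]
bin_starts, rate_hat = piecewise_rate(arrivals, T, width=1.0)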
APA, Harvard, Vancouver, ISO, and other styles
31

Lu, Kevin Weichao. "Weak Subordination of Multivariate Lévy Processes." PhD thesis, 2018. http://hdl.handle.net/1885/148680.

Full text
Abstract:
Based on the idea of constructing a time-changed process, strong subordination is the operation that evaluates a multivariate Lévy process at a multivariate subordinator. This produces a Lévy process again when the subordinate has independent components or the subordinator has indistinguishable components; otherwise, we prove that it does not in a wide range of cases. A new operation known as weak subordination is introduced, acting on multivariate Lévy processes and multivariate subordinators, to extend this idea in a way that always produces a Lévy process, even when the subordinate has dependent components. We show that weak subordination matches strong subordination in law in the previously mentioned cases where the latter produces a Lévy process. In addition, we give the characteristics of weak subordination, and prove sample path properties, moment formulas and marginal component consistency. We also give distributional representations for weak subordination with ray subordinators, a superposition of independent subordinators, subordinators having independent components and subordinators having monotonic components. The variance generalised gamma convolution class, formed by strongly subordinating Brownian motion with Thorin subordinators, is further extended using weak subordination. For these weak variance generalised gamma convolutions, we derive characteristics, including a formula for their Lévy measure in terms of that of a variance-gamma process, and prove sample path properties. As an example of a weak variance generalised gamma convolution, we construct a weak subordination counterpart to the variance-alpha-gamma process of Semeraro. For these weak variance-alpha-gamma processes, we derive characteristics, show that they are a superposition of independent variance-gamma processes, and compare three calibration methods: method of moments, maximum likelihood and digital moment estimation. As the density function is not known explicitly, for maximum likelihood we derive a Fourier invertibility condition. We show in simulations that maximum likelihood produces a better fit when this condition holds, while digital moment estimation is better when it does not. Also, weak variance-alpha-gamma processes exhibit a wider range of dependence structures and produce a significantly better fit than variance-alpha-gamma processes for the log returns of an S&P500-FTSE100 data set, and digital moment estimation has the best fit in this situation. Lastly, we study the self-decomposability of weak variance generalised gamma convolutions. Specifically, we prove that a driftless Brownian motion gives rise to a self-decomposable process and, when some technical conditions on the underlying Thorin measure are satisfied, that this is also necessary. Our conditions improve and generalise an earlier result of Grigelionis. These conditions are applied to a variety of weakly subordinated processes, including the weak variance-alpha-gamma process, and in the previous fit, a likelihood ratio test fails to reject the self-decomposability of the log returns.
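For orientation, the one-dimensional case of strong subordination (where the result is always a Lévy process) is easy to sketch: evaluating a drifted Brownian motion at a gamma subordinator yields a variance-gamma path. Parameter values below are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(3)

def variance_gamma(n_steps, dt, theta, sigma, nu):
    """Evaluate a drifted Brownian motion at a gamma subordinator: gamma
    increments have mean dt and variance nu * dt (shape dt/nu, scale nu)."""
    dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)
    dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_steps)
    return np.concatenate([[0.0], np.cumsum(dX)])

path = variance_gamma(n_steps=1_000, dt=1.0 / 252, theta=-0.1, sigma=0.2, nu=0.3)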
APA, Harvard, Vancouver, ISO, and other styles
32

Héda, Ivan. "Modely kótovaných bodových procesů" [Models of Marked Point Processes]. Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-346977.

Full text
Abstract:
Title: Models of Marked Point Processes. Author: Ivan Héda. Department: Department of Probability and Mathematical Statistics. Supervisor: doc. RNDr. Zbyněk Pawlas, Ph.D. Abstract: In the first part of the thesis, we present the necessary theoretical basics as well as the definitions of the functional characteristics used to examine marked point patterns. The second part is dedicated to a review of known marking strategies. The core of the thesis lies in the study of intensity-marked point processes. A general formula for the characteristics is proven for this marking strategy, and a general class of models with analytically computable characteristics is introduced. This class generalizes some known models. The theoretical results are used for real data analysis in the last part of the thesis. Keywords: marked point process, marked log-Gaussian Cox process, intensity-marked point process
APA, Harvard, Vancouver, ISO, and other styles
33

Li, Yu. "Remotely Sensed Data Segmentation under a Spatial Statistics Framework." Thesis, 2010. http://hdl.handle.net/10012/4931.

Full text
Abstract:
In remote sensing, segmentation is a procedure of partitioning the domain of a remotely sensed dataset into meaningful regions which correspond to different land use and land cover (LULC) classes or parts of them. So far, remotely sensed data segmentation remains one of the most challenging problems addressed by the remote sensing community, partly because of the availability of remotely sensed data from diverse sensors on various platforms with very high spatial resolution (VHSR). Thus, there is a strong motivation to propose a sophisticated data representation that can capture the significant amount of detail present in a VHSR dataset, and to search for a more powerful scheme suitable for multiple remotely sensed data segmentations. This thesis focuses on the development of a segmentation framework for multiple VHSR remotely sensed datasets. The emphases are on the VHSR data model and the segmentation strategy. Starting with the domain partition of a given remotely sensed dataset, a hierarchical data model characterizing the structures hidden in the dataset locally, regionally and globally is built from three random fields: a Markov random field (MRF), a strictly stationary random field (RF) and a label field. After defining prior probability distributions which capture and characterize general and scene-specific knowledge about model parameters and the contextual structure of accurate segmentations, the Bayesian segmentation framework, which can lead to algorithmic implementations for multiple remotely sensed data, is developed by integrating both the data model and the prior knowledge. To verify the applicability and effectiveness of the proposed segmentation framework, segmentation algorithms for different types of remotely sensed data are designed within it. The first application relates to SAR intensity image processing, including segmentation and dark spot detection by a marked point process. In the second application, algorithms for LiDAR point cloud segmentation and building detection are developed. Finally, texture and colour texture segmentation problems are tackled within the segmentation framework. All applications demonstrate that the proposed data model provides efficient representations for the hierarchical structures hidden in remotely sensed data, and that the developed segmentation framework leads to successful data processing algorithms for multiple data types and tasks such as segmentation and object detection.
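As a minimal illustration of the MRF ingredient alone (a generic Gaussian-likelihood Potts model, not the thesis's hierarchical three-field model), the sketch below segments an image by iterated conditional modes; the class means, the smoothness weight beta and the 4-neighbourhood are assumed placeholder choices.

import numpy as np

def icm_segment(image, means, beta=1.5, sweeps=5):
    """Iterated conditional modes for a Gaussian-likelihood Potts MRF: each
    pixel takes the label minimising its squared data cost plus a penalty
    of beta for every disagreeing 4-neighbour."""
    labels = np.argmin((image[..., None] - np.asarray(means)) ** 2, axis=-1)
    h, w = image.shape
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                best, best_cost = labels[i, j], np.inf
                for k in range(len(means)):
                    cost = (image[i, j] - means[k]) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] != k:
                            cost += beta
                    if cost < best_cost:
                        best, best_cost = k, cost
                labels[i, j] = best
    return labels

rng = np.random.default_rng(5)
truth = np.zeros((32, 32)); truth[:, 16:] = 1.0
noisy = truth + 0.4 * rng.standard_normal(truth.shape)
seg = icm_segment(noisy, means=[0.0, 1.0])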
APA, Harvard, Vancouver, ISO, and other styles
34

Samuel, Richard Abayomi. "Modelling equity risk and extremal dependence: A survey of four African Stock Markets." Diss., 2019. http://hdl.handle.net/11602/1356.

Full text
Abstract:
Department of Statistics
MSc (Statistics)
The ripple effect of a stock market crash due to extremal dependence is a global issue receiving key attention, and it is at the core of all modelling efforts in risk management. Two methods of extreme value theory (EVT) were used in this study to model equity risk and extremal dependence in the tails of stock market indices from four African emerging markets: South Africa, Nigeria, Kenya and Egypt. The first is the "bivariate-threshold-excess model" and the second is the "point process approach". With regard to the univariate analysis, the first finding in the study shows, in descending hierarchy, that volatility with persistence is highest in the South African market, followed by the Egyptian market, then the Nigerian market and lastly the Kenyan equity market. In terms of risk hierarchy, the Egyptian EGX 30 market is the most risk-prone, followed by the South African JSE-ALSI market, then the Nigerian NIGALSH market, and the least risky is the Kenyan NSE 20 market. It is therefore concluded that risk is not a brainchild of volatility in these markets. For the bivariate modelling, the extremal dependence findings indicate that the African continent's regional equity markets present a huge investment platform for investors and traders, and offer tremendous opportunity for portfolio diversification and investment synergies between markets. These synergistic opportunities are due to the markets being asymptotically (extremally) independent or (very) weakly asymptotically dependent and negatively dependent. This outcome is consistent with the findings of Alagidede (2008), who analysed these same markets using co-integration analysis. The bivariate-threshold-excess and point process models are appropriate for modelling the markets' risks. For modelling the extremal dependence, however, given the same marginal threshold quantile, the point process approach has more access to the extreme observations due to its wider sphere of coverage than the bivariate-threshold-excess model.
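For a concrete flavour of the threshold-excess side of EVT (a generic univariate peaks-over-threshold sketch, not the author's bivariate models), excesses over a high quantile are fitted with a generalised Pareto distribution and used to read off a tail quantile; the simulated Student's t losses stand in for real return data.

import numpy as np
from scipy import stats

def fit_tail(losses, q=0.95):
    """Peaks-over-threshold: fit a generalised Pareto distribution to
    excesses over the q-quantile and return a VaR-style tail quantile."""
    u = np.quantile(losses, q)
    excesses = losses[losses > u] - u
    xi, _, scale = stats.genpareto.fit(excesses, floc=0.0)
    def var(p):  # tail quantile at level p > q (standard POT formula)
        n, n_u = losses.size, excesses.size
        return u + scale / xi * (((n / n_u) * (1 - p)) ** (-xi) - 1)
    return u, xi, scale, var

rng = np.random.default_rng(4)
losses = rng.standard_t(df=4, size=5_000)   # heavy-tailed stand-in for returns
u, xi, scale, var = fit_tail(losses)
var99 = var(0.99)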
APA, Harvard, Vancouver, ISO, and other styles