
Dissertations / Theses on the topic 'A priori information'


Consult the top 50 dissertations / theses for your research on the topic 'A priori information.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Reyland, John M. "Towards Wiener system identification with minimum a priori information." Diss., University of Iowa, 2011. https://ir.uiowa.edu/etd/1066.

Full text
Abstract:
The ability to construct accurate mathematical models of real systems is an important part of control systems design. A block-oriented system identification approach models the unknown system as interconnected linear and nonlinear blocks. The subject of this thesis is a particular configuration of these blocks referred to as a Wiener model. The Wiener model studied here is a cascade of a single-input linear block followed by a nonlinear block which then provides one output. We assume that the signal between the linear and nonlinear blocks is always unknown; only the Wiener model input and output can be sampled. This thesis investigates identification of the linear transfer function in a Wiener model. The question examined throughout the thesis is: given some small amount of a priori information on the nonlinear part, what can we determine about the linear part? Examples of minimal a priori information are knowledge of only one point on the nonlinear transfer characteristic, or simply that the transfer characteristic is monotonic over a certain range. Nonlinear blocks with and without memory are discussed. Several algorithms for identifying the linear transfer function of a block-oriented Wiener system are presented and analyzed in detail. The linear blocks identified have both finite and infinite impulse responses (i.e., FIR and IIR). Each algorithm has a carefully defined set of minimal a priori information on the nonlinearity. Also, each approach has a minimally restrictive set of assumptions on the input excitation. The universal applicability of each algorithm is established by providing rigorous proofs of identifiability and, in some cases, convergence. Extensive simulation testing of each algorithm has been performed. Simulation techniques and results are discussed in detail.
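As a rough illustration of this setting (not one of the thesis's own algorithms), the sketch below simulates a Wiener system and estimates the shape of its FIR linear block from input/output samples only, using the classical cross-correlation (Bussgang) argument valid for Gaussian excitation and a memoryless nonlinearity. The filter is recovered only up to an unknown gain, which is exactly where one known point of the nonlinearity, as minimal a priori information, would fix the scale. All names and parameter values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown Wiener system: FIR linear block followed by a memoryless nonlinearity.
h_true = np.array([1.0, 0.6, -0.3, 0.1])        # hidden FIR coefficients
nonlin = lambda v: np.tanh(2.0 * v)             # hidden monotonic nonlinearity

# Only the input u and the output y are observable.
N = 20000
u = rng.standard_normal(N)                       # white Gaussian excitation
v = np.convolve(u, h_true, mode="full")[:N]     # hidden intermediate signal
y = nonlin(v) + 0.01 * rng.standard_normal(N)   # measured output (small noise)

# Bussgang-type estimate: for Gaussian input, E[y(n) u(n-k)] is proportional
# to h[k], so the cross-correlation recovers the FIR shape up to one gain.
M = len(h_true)
h_hat = np.array([np.mean(y[M:] * u[M - k : N - k]) for k in range(M)])

# A single known point of the nonlinearity (a priori information) would fix
# the scale; here we simply normalise both filters for comparison.
print("true  :", h_true / np.linalg.norm(h_true))
print("estim.:", h_hat / np.linalg.norm(h_hat))
```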
APA, Harvard, Vancouver, ISO, and other styles
2

Herold, Christine Ellen [Verfasser]. "INTERSNP : genomweite Interaktionsanalyse mit a-priori Information / Christine Ellen Herold." Bonn : Universitäts- und Landesbibliothek Bonn, 2011. http://d-nb.info/1016151071/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mobley, Paul R. "Use of a priori information to produce more effective, automated chemometrics methods /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/8549.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Yassir, Jedra. "Multi-period portfolio optimization given a priori information on signal dynamics and transactions costs." Thesis, KTH, Optimeringslära och systemteori, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-227264.

Full text
Abstract:
Multi-period portfolio optimization (MPO) has gained a lot of interest in modern portfolio theory due to its consideration of inter-temporal trading effects, especially market impacts and transaction costs, and due to its reliance on return predictability. However, because of the heavy computational demand, portfolio policies based on this approach have been sparsely explored. In that regard, a tractable MPO framework proposed by N. Gârleanu & L. H. Pedersen has been investigated. Using the stochastic control framework, the authors provided a closed-form expression of the optimal policy. Moreover, they used a specific, yet flexible, return predictability model. Excess returns were expressed using a linear factor model, and the predicting factors were modeled as mean-reverting processes. Finally, transaction costs and market impacts were incorporated in the problem formulation as a quadratic function. The elaborated methodology considered that the market return dynamics are governed by fast and slow mean-reverting factors, and that the market transaction costs are not necessarily quadratic. By controlling the exposure to the factors predicting market returns, the aim was to uncover the importance of the mean-reversion speeds in the performance of the constructed trading strategies, under realistic market costs. Additionally, for the sake of comparison, trading strategies based on a single-period mean-variance optimization were considered. The findings suggest an overall superiority in performance for the studied MPO approach even when the market costs are not quadratic. This was accompanied by evidence of better use of the factors' mean-reversion speeds, especially fast-reverting factors, and of robustness in adapting to transaction costs.
APA, Harvard, Vancouver, ISO, and other styles
5

Stefanescu, Radu-Constantin. "Parallel nonlinear registration of medical images with a priori information on anatomy and pathology." Nice, 2005. http://www.theses.fr/2005NICE4090.

Full text
Abstract:
The purpose of this thesis is to provide a nonrigid registration algorithm adapted to atlas-to-subject registration in a clinical environment. The clinical applications addressed are the pre-operative planning of conformal brain radiotherapy and of the deep brain stimulation of Parkinsonian patients. In these applications, the nonrigid registration is used to deform expert segmentations of an anatomical atlas image into a patient's geometry. The proposed algorithm uses a dense displacement field to finely model the transformation, and an intensity-based similarity criterion to estimate the matches between the two images. The invertibility of the recovered transformation is guaranteed thanks to a new and fast regridding method. The regularization is implemented in a two-step viscoelastic-like model. A non-stationary and possibly anisotropic regularization of the displacement field models the space-varying deformability of different structures. A non-stationary regularization of the temporal derivative of the similarity criterion allows informative voxels to be weighted against non-informative ones, and avoids errors due to pathologies in the patient image. The use of a semi-implicit numerical scheme enables short computation times. We also propose a parallel implementation on a cluster of personal computers that further reduces the execution time to only a few minutes. Finally, we use grid-computing methods to couple the heavyweight parallel architecture to a lightweight visualization system.
APA, Harvard, Vancouver, ISO, and other styles
6

Cooke, Jeffrey L. "Techniques and methodologies for intelligent A priori determination and characterisation of information required by decision makers." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Sawchuk, Cynthia. "An investigation of two responsive learning automata, in a network game with no a priori information." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ26982.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Vardoulias, George. "Receiver synchronisation techniques for CDMA mobile radio communications based on the use of a priori information." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/431.

Full text
Abstract:
Receiver synchronisation can be a major problem in a mobile radio environment where the communication channel is subject to rapid changes. Communication in spread spectrum systems is impossible unless the received spreading waveform and receiver-generated replica of the spreading waveform are initially synchronised in both phase and frequency. Phase and frequency synchronisation is usually accomplished by performing a two-dimensional search in the time/frequency ambiguity area. Generally, this process must be accomplished at very low SNRs, as quickly as possible, using the minimum amount of hardware. This thesis looks into techniques for improving spread spectrum receiver synchronisation in terms of the mean acquisition time. In particular, the thesis is focused on receiver structures that provide and/or use a priori information in order to minimise the mean acquisition time. The first part of this work is applicable to synchronisation scenarios involving LEO satellites. In this case, the receiver faces large Doppler shifts and must be able to search a large Doppler ambiguity area in order to locate the correct cell. A method to calculate the Doppler shift probability density function within a satellite spot-beam is proposed. It is shown that depending on the satellite’s velocity and position as well as the position of the centre of the spot-beam, not all Doppler shifts are equally probable to occur. Under well defined conditions, the Doppler pdf within the spot-beam can be approximated by a parabola-shaped function. Several searching strategies, suitable for the given prior information, are analysed. The effects on the mean frequency searching time are evaluated. In the second part of the thesis a novel acquisition technique, based on a fast preliminary search of the ambiguity area, is described. Every cell of the ambiguity area is examined two times. The first search is a fast straight line serial search, the duration of which is a crucial parameter of the system that must be optimised. The output of the first search is then used as a priori information which determines the search strategy of the second and final search. The system is compared with well known active acquisition systems and results in a large improvement in the mean acquisition time. Its performance is evaluated in Gaussian and fading Rayleigh channels.
APA, Harvard, Vancouver, ISO, and other styles
9

Rouault-Pic, Sandrine. "Reconstruction en tomographie locale : introduction d'information à priori basse résolution." Phd thesis, Université Joseph Fourier (Grenoble), 1996. http://tel.archives-ouvertes.fr/tel-00005016.

Full text
Abstract:
One of the current objectives in tomography is to reduce the dose delivered to the patient. New imaging systems, integrating small high-resolution detectors or strongly collimated sources, make it possible to reduce this dose. These devices raise the problem of reconstructing an image from local information. One way to approach the local tomography problem is to introduce a priori information in order to remove the non-uniqueness of the solution. We therefore propose to complement the high-resolution local projections (coming from the systems described above) with complete low-resolution projections, coming for example from a standard CT scan. We assume that the registration of the two data sets has already been performed; this step is not part of our work. We first adapted classical reconstruction methods (ART, regularized conjugate gradient and filtered backprojection) to the local problem by introducing the a priori information into the reconstruction process. We then address wavelet-based reconstruction methods and also propose an adaptation to our problem. In all cases, the dual resolution also appears in the reconstructed image, with a finer resolution in the region of interest. Finally, given the high computational cost of the methods involved, we propose a parallelization of the implemented algorithms.
APA, Harvard, Vancouver, ISO, and other styles
10

Xirouchakis, Michail. "Traffic Load Predictions Using Machine Learning : Scale your Appliances a priori." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254906.

Full text
Abstract:
Layer 4-7 network functions (NF), such as Firewall or NAPT, have traditionally been implemented in specialized hardware with little to no programmability and extensibility. The scientific community has focused on realizing this functionality in software running on commodity servers instead. Despite the many advancements over the years (e.g., network I/O accelerations), software-based NFs are still unable to guarantee some key service-level objectives (e.g., bounded latency) for the customer due to their reactive approach to workload changes. This thesis argues that Machine Learning techniques can be utilized to forecast how traffic patterns change over time. A network orchestrator can then use this information to allocate resources (network, compute, memory) in a timely fashion and more precisely. To this end, we have developed Mantis, a control plane network application which (i) monitors all forwarding devices (e.g., Firewalls) to generate performance-related metrics and (ii) applies predictors (moving average, autoregression, wavelets, etc.) to predict future values for these metrics. Choosing the appropriate forecasting technique for each traffic workload is a challenging task. This is why we developed several different predictors. Moreover, each predictor has several configuration parameters which can all be set by the administrator during runtime. In order to evaluate the predictive capabilities of Mantis, we set up a test-bed consisting of the state-of-the-art network controller Metron [16], a NAPT NF realized in FastClick [6], and two hosts. While the source host was replaying real-world internet traces (provided by CAIDA [33]), our Mantis application was performing predictions in real time, using a rolling window for training. Visual inspection of the results indicates that all our predictors have good accuracy, excluding (i) the beginning of the trace, where models are still being initialized, and (ii) instances of abrupt change. Moreover, applying the discrete wavelet transform before we perform predictions can improve the accuracy further.
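As a loose, hypothetical illustration of the rolling-window forecasting idea described above (not Mantis's actual code), the sketch below fits a least-squares autoregressive predictor over a sliding window of a synthetic traffic metric and issues one-step-ahead forecasts; the window length and AR order are arbitrary choices.

```python
import numpy as np

def ar_forecast(history, order=3):
    """One-step-ahead forecast with a least-squares AR(order) model."""
    h = np.asarray(history, dtype=float)
    # Regression matrix: each row holds `order` consecutive lagged samples.
    X = np.column_stack([h[i : len(h) - order + i] for i in range(order)])
    y = h[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return h[-order:] @ coef            # predicted next value

# Synthetic "traffic metric" (e.g., packets/s) with a slow cycle plus noise.
rng = np.random.default_rng(1)
t = np.arange(2000)
traffic = 1000 + 300 * np.sin(2 * np.pi * t / 500) + 50 * rng.standard_normal(t.size)

window = 200                            # rolling training window
preds = [ar_forecast(traffic[n - window : n]) for n in range(window, len(traffic))]
errors = traffic[window:] - np.array(preds)
print("mean absolute one-step error:", np.mean(np.abs(errors)))
```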
APA, Harvard, Vancouver, ISO, and other styles
11

Faye, Papa Abdoulaye. "Planification et analyse de données spatio-temporelles." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22638/document.

Full text
Abstract:
Spatio-temporal modeling allows the prediction of a regionalized variable at unobserved points of a given field, based on observations of this variable at some points of the field at different times. In this thesis, we proposed an approach which combines numerical and statistical models. By using Bayesian methods, we combined the different sources of information: the spatial information provided by the observations, the temporal information provided by the black-box model, and the prior information on the phenomenon of interest. This approach allowed us to obtain a good prediction of the variable of interest and a good quantification of the uncertainty of this prediction. We also proposed a new method to construct experimental designs by establishing an optimality criterion based on both the uncertainty at each point of the field and the expected value of the phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
12

Bunouf, Pierre. "Lois bayésiennes a priori dans un plan binomial séquentiel." Phd thesis, Université de Rouen, 2006. http://tel.archives-ouvertes.fr/tel-00539868.

Full text
Abstract:
R. de Cristofaro's reformulation of Bayes' theorem makes it possible to integrate information about the experimental design into the prior distribution. By accepting to depart from the likelihood principle and the stopping-rule principle, a new theoretical framework makes it possible to address the issue of sequentiality in Bayesian inference. Considering that the information about the experimental design is contained in the Fisher information, a family of prior distributions is derived from a likelihood directly associated with the sampling scheme. The case of estimating a proportion in the context of successive binomial samplings leads to the Beta-J distribution. A study over several sequential designs establishes that the "corrected Jeffreys prior" compensates for the bias induced on the observed proportion. An application to point estimation shows the link between the parametrizations of the Beta-J and Beta distributions under fixed sampling. The mean and the mode of the resulting posterior distributions exhibit remarkable frequentist properties. Likewise, the corrected Jeffreys interval shows an optimal coverage rate, since the correction compensates for the effect of the stopping rule on the bounds. Finally, a testing procedure, whose errors can be interpreted both as Bayesian probabilities of the hypothesis and as frequentist risks, is built with a stopping and H0-rejection rule based on a threshold value of the Bayes factor. We show how the corrected Jeffreys prior compensates for the ratio of evidences and guarantees the uniqueness of the solutions, including when the null hypothesis is composite.
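For readers unfamiliar with the setting, here is a minimal, hypothetical sketch of sequential Bayesian updating of a binomial proportion under the standard Jeffreys Beta(1/2, 1/2) prior; the thesis's corrected "Beta-J" prior, which additionally accounts for the sampling design and stopping rule, is not reproduced here.

```python
from scipy import stats

# Jeffreys prior for a binomial proportion: Beta(1/2, 1/2).
a, b = 0.5, 0.5

# Sequential binary observations (1 = success), e.g. from a sequential trial.
observations = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]

for i, x in enumerate(observations, start=1):
    a += x            # conjugate update of the Beta posterior
    b += 1 - x
    post = stats.beta(a, b)
    lo, hi = post.ppf(0.025), post.ppf(0.975)
    print(f"n={i:2d}  posterior mean={post.mean():.3f}  95% interval=({lo:.3f}, {hi:.3f})")
```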
APA, Harvard, Vancouver, ISO, and other styles
13

Berisha, Suela. "L'apport des concepts du Web sémantique et normes associées aux échanges inter applicatifs dans un SI d'entreprise ou RECAP (Référentiels et connecteurs a priori)." Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0073.

Full text
Abstract:
This thesis arose in the context of a very large and complex Enterprise Information System (EIS) containing a huge volume of very diverse information circulating within a company made up of a main group and about fifty subsidiaries. Initially, the computing resources of each business unit stored structured data according to local business logic. The descriptions of these sources, intended primarily for computer specialists, were kept in text documents. They were not meant to support the sharing and understanding of the systems by the different types of business actors who provide the content. The same is true of the enterprise repositories, which contain fundamental business knowledge. Thus, locating the relevant information, allowing fast and easy access to the sources that meet the expectations and constraints of business actors, became a central problem of the IS. For the past decade, the company has aimed to address this problem through a strategic governance approach embedded in the IS planning processes. We are involved in bringing the business data repositories under control and in standardizing and simplifying the exchanges between applications. Our goal is to provide a strategic view of knowledge sharing among the different types of IS actors by fostering their collaboration in the context of a business process or a business activity. To achieve this, we propose an approach for building a new layer of the IS based on new functional concepts: the "Reference Enterprise Repository (RER)" and the "a priori connectors". The first is a transverse enterprise repository associated with a given business scope. The second consists of logical connections ensuring interoperability between applications that were not designed to coexist. From a technological point of view, the RER is based on semantic modelling and knowledge representation, while the "a priori connectors" draw on Semantic Web Services integration technologies. The thesis develops skills such as service-oriented middleware architectures, semantic modelling of documents, IS modelling and Information Retrieval (IR) techniques. All these subjects are part of the scientific project of the DRIM (Distribution and Multimedia Information Retrieval) team of the "Data, Knowledge and Services" department at the LIRIS laboratory (Laboratoire d'InfoRmatique en Images et Systèmes d'Informations), CNRS UMR 5205.
APA, Harvard, Vancouver, ISO, and other styles
14

Vautier, Alexandre. "Fouille de données sans information a priori sur la structure de la connaissance : application à l’analyse de journaux d’alarmes réseau." Rennes 1, 2008. ftp://ftp.irisa.fr/techreports/theses/2008/vautier.pdf.

Full text
Abstract:
The aim of this thesis is to propose a data mining framework for discovering knowledge when the user has no a priori information about the knowledge structure. The proposed framework is generic and based on category theory, more precisely on sketches. We propose the concept of relational sketches, which enriches sketches with the concepts of power set and relation. This framework enables the specification of various data types and various data mining algorithms. The execution of data mining algorithms for model extraction is made possible by the unification of the algorithm specifications with the data specification. A generic methodology, based on Kolmogorov complexity, is proposed to evaluate the quality of the models and their ability to summarize the data. The evaluation essentially relies on the covering relation that links the models and the data. The application which motivated this work is the analysis of network alarm logs from France Télécom. The first application focuses on the summarization of unstructured VPN alarms. The second application concerns the analysis of network flows from the internet backbone to detect DDoS attacks.
APA, Harvard, Vancouver, ISO, and other styles
15

Page, Thomas Sebastian [Verfasser], Ming [Akademischer Betreuer] Jiang, and Peter [Akademischer Betreuer] Maass. "Image reconstruction by Mumford-Shah regularization with a priori edge information / Thomas Sebastian Page. Gutachter: Ming Jiang ; Peter Maass. Betreuer: Ming Jiang." Bremen : Staats- und Universitätsbibliothek Bremen, 2015. http://d-nb.info/1072746395/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Sánchez, Escuderos Daniel. "High-resolution algorithms for the reconstruction of the equivalent currents of an antenna by means of modal theory and a priori information." Doctoral thesis, Universitat Politècnica de València, 2010. http://hdl.handle.net/10251/8323.

Full text
Abstract:
The objective of antenna diagnostics is the detection of errors in manufactured antennas. Since this diagnosis is difficult to carry out simply by inspecting field measurements, it is performed using the equivalent currents reconstructed on a surface close to the antenna. This thesis describes the different possibilities for carrying out this reconstruction on a planar surface from spherical near-field measurements. In particular, modal expansion techniques are studied extensively and applied to real situations. The main problem of modal techniques is the limited resolution of the equivalent currents. The reason for this limitation is the small available region of the plane-wave spectrum (whose Fourier transform yields the equivalent currents). This thesis studies this problem, shows several examples and describes the possibilities for improving the resolution. Among these possibilities, we propose the use of an extrapolation technique to estimate the invisible spectrum from the known region (the visible spectrum) and from additional information about the antenna, such as its size. Among the different extrapolation techniques, the most commonly used ones are described and compared. First, the iterative Papoulis-Gerchberg algorithm is applied using the size and shape of the antenna. Then the direct versions of this algorithm are described, namely the row-and-column extrapolation matrix and the generalized extrapolation matrix. Finally, the PDFT transformation is studied and compared with the previous algorithms. All these techniques are applied to real situations with a significant improvement in resolution. The last chapter of this thesis deals with probe calibration procedures, which are especially important in antenna diagnostics. Sánchez Escuderos, D. (2009). High-resolution algorithms for the reconstruction of the equivalent currents of an antenna by means of modal theory and a priori information [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8323
APA, Harvard, Vancouver, ISO, and other styles
17

Bioche, Christèle. "Approximation de lois impropres et applications." Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22626/document.

Full text
Abstract:
Le but de cette thèse est d’étudier l’approximation d’a priori impropres par des suites d’a priori propres. Nous définissons un mode de convergence sur les mesures de Radon strictement positives pour lequel une suite de mesures de probabilité peut admettre une mesure impropre pour limite. Ce mode de convergence, que nous appelons convergence q-vague, est indépendant du modèle statistique. Il permet de comprendre l’origine du paradoxe de Jeffreys-Lindley. Ensuite, nous nous intéressons à l’estimation de la taille d’une population. Nous considérons le modèle du removal sampling. Nous établissons des conditions nécessaires et suffisantes sur un certain type d’a priori pour obtenir des estimateurs a posteriori bien définis. Enfin, nous montrons à l’aide de la convergence q-vague, que l’utilisation d’a priori vagues n’est pas adaptée car les estimateurs obtenus montrent une grande dépendance aux hyperparamètres<br>The purpose of this thesis is to study the approximation of improper priors by proper priors. We define a convergence mode on the positive Radon measures for which a sequence of probability measures could converge to an improper limiting measure. This convergence mode, called q-vague convergence, is independant from the statistical model. It explains the origin of the Jeffreys-Lindley paradox. Then, we focus on the estimation of the size of a population. We consider the removal sampling model. We give necessary and sufficient conditions on the hyperparameters in order to have proper posterior distributions and well define estimate of abundance. In the light of the q-vague convergence, we show that the use of vague priors is not appropriate in removal sampling since the estimates obtained depend crucially on hyperparameters
APA, Harvard, Vancouver, ISO, and other styles
18

Bailloeul, Timothée. "Contours actifs et information à priori pour l'analyse de changements : application à la mise à jour de cartes numériques du bâti urbain à partir d'images optiques de télédétection haute résolution." Phd thesis, Toulouse, INPT, 2005. http://ethesis.inp-toulouse.fr/archive/00000282/.

Full text
Abstract:
This thesis proposes a methodology for analyzing changes between an urban digital map of buildings and sub-meter optical remote sensing data. Our approach is based on the use of specific prior information derived from the buildings symbolized in the map to ease their recognition in a more recent satellite image. This prior knowledge is embedded in a variational model to constrain the shape of active contours intended to achieve map-to-image fine matching. We propose new solutions to improve the robustness, speed and flexibility of the active contours. The fine matching process solves the issue of exogenous variabilities between the map and the image, which are independent of real changes, and increases the reliability of change detection. We illustrate the efficiency of our approach with experiments carried out with a 1:10,000-scale map and Quickbird satellite images of the city of Beijing.
APA, Harvard, Vancouver, ISO, and other styles
19

Bailloeul, Timothée Marthon Philippe Hu Baogang. "Contours actifs et information à priori pour l'analyse de changements application à la mise à jour de cartes numériques du bâti urbain à partir d'images optiques de télédétection haute résolution /." Toulouse : INP Toulouse, 2006. http://ethesis.inp-toulouse.fr/archive/00000282.

Full text
Abstract:
Reproduction of: Doctoral thesis: Signal, image and acoustics: Toulouse, INPT: 2005. Reproduction of: Doctoral thesis: Signal, image and acoustics: Institute of Automation, Chinese Academy of Sciences: 2005. Thesis defended under joint supervision (cotutelle). Title taken from the title screen. Bibliography: 137 references.
APA, Harvard, Vancouver, ISO, and other styles
20

Moura, Fernando Silva de. "Estimação não linear de estado através do unscented Kalman filter na tomografia por impedância elétrica." Universidade de São Paulo, 2013. http://www.teses.usp.br/teses/disponiveis/3/3152/tde-01082013-154423/.

Full text
Abstract:
Electrical impedance tomography estimates the electrical impedance distribution within a region from electrical potential measurements acquired along its boundary while electrical currents are imposed on the same boundary. One of the applications of this technology is lung monitoring of patients in Intensive Care Units. One class of algorithms employed for the estimation is the Kalman filter family, which addresses the estimation problem in a probabilistic framework, looking for the probability density function of the state conditioned on the acquired measurements. In order to use such filters, an evolution model of the system must be employed. This thesis proposes an evolution model of the variation of air in the lungs of patients under artificial ventilation. This model is used with the unscented Kalman filter, a nonlinear extension of the Kalman filter, and is adjusted in parallel with the state estimation in a dual estimation scheme. An image segmentation algorithm is proposed for identifying the lungs in the images. In order to improve the estimates, the approximation error method is employed to mitigate the observation model errors, and prior information is added for the solution of the ill-posed inverse problem. The method is evaluated with numerical simulations and with experimental data from a volunteer. The results show that the proposed method increases the quality of the estimates, allowing the visualization of absolute and dynamic images with a good level of contrast between the tissues and internal organs.
APA, Harvard, Vancouver, ISO, and other styles
21

Gharsalli, Leila. "Approches bayésiennes en tomographie micro-ondes : applications à l'imagerie du cancer du sein." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112048/document.

Full text
Abstract:
This work concerns the problem of microwave tomography for application to biomedical imaging. The aim is to retrieve both the permittivity and the conductivity of an unknown object from measurements of the scattered field that results from its interaction with a known interrogating wave. Such a problem is said to be inverse, as opposed to the associated forward problem that consists in calculating the scattered field while the interrogating wave and the object are known. The resolution of the inverse problem requires the prior construction of the associated forward model. The latter is based on a domain integral representation of the electric field, resulting in two coupled integral equations whose discrete counterparts are obtained by means of the method of moments. Regarding the inverse problem, in addition to the fact that the physical equations involved in the forward modeling make it nonlinear, it is also mathematically ill-posed in the sense of Hadamard, which means that the conditions of existence, uniqueness and stability of the solution are not simultaneously guaranteed. Hence, solving this problem requires its prior regularization, which usually involves the introduction of a priori information on the sought solution. This resolution is done here in a Bayesian probabilistic framework where we introduce a priori knowledge appropriate to the sought object by considering it to be composed of a finite number of homogeneous materials distributed in compact regions. This information is introduced through a "Gauss-Markov-Potts" model. In addition, the Bayesian computation gives the posterior distribution of all the unknowns, knowing the prior and the object. We then proceed to identify the posterior estimators via variational approximation methods and thereby to reconstruct the image of the desired object. The main contributions of this work are methodological and algorithmic. They are illustrated by an application of microwave imaging to breast cancer detection. The latter is in itself a very important and original aspect of the thesis. Indeed, the detection of breast cancer using microwave imaging is a very interesting alternative to X-ray mammography, but it is still at an exploratory stage.
APA, Harvard, Vancouver, ISO, and other styles
22

Asnaashari, Amir. "Imagerie sismique 4D quantitative en milieux complexes par l'inversion 2D de forme d'onde complète." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00932597.

Full text
Abstract:
Time-lapse monitoring is the process of acquiring and analysing multiple surveys repeated at the same location over the same target at different times. It is well suited to seismic exploration when the properties of the target vary over time, as is the case for oil reservoirs. This seismic technique, called 4D because time is integrated into the construction of the images, allows the detection and estimation of subsurface changes that occur as the medium evolves. In industry in particular, monitoring and surveillance can improve our understanding of an oil/gas reservoir or of a CO2 storage site. Analysing 4D seismic data can help to better manage reservoir production programmes. Repeated surveys make it possible to follow the evolution of injected fluid fronts: fluid injection programmes can thus be optimised for enhanced oil recovery. Several methods have been developed for time-lapse imaging using the information carried by seismic waves. In my thesis, I show that full waveform inversion (FWI) can be used for this imaging. This method provides high-resolution quantitative seismic images and is a promising technique for reconstructing small variations of macro-scale physical properties of the subsurface. For a target identified for 4D imaging, several pieces of prior information are often available and can be used to increase the image resolution. I introduced this information through the definition of a prior model in a classical FWI approach, together with the construction of a prior uncertainty model. Two independent reconstructions can be performed and their difference taken: this is the parallel difference. One can also perform a sequential difference, where the inversion of the second (monitor) data set starts from the baseline model rather than from the initial model. Finally, the double-difference approach inverts the differences between the two data sets added to the synthetic data of the reconstructed baseline model. I study which strategy should be adopted to obtain more accurate and more robust velocity changes. In addition, I propose a target-oriented 4D imaging by building a prior uncertainty model from information (when it exists) on the potential location of the expected variations. It is shown that target-oriented 4D inversion prevents the appearance of artefacts outside the target zones, avoiding the contamination of the outer zones that could compromise the reconstruction of the true 4D changes. A sensitivity study of the frequency sampling for this 4D imaging shows that a large number of frequencies must be inverted simultaneously within one inversion cycle. In doing so, the inversion provides a more accurate baseline model than the time-domain approach, as well as a more robust model of the 4D variations with fewer artefacts. However, FWI performed in the time domain appears to be a more attractive approach for 4D imaging. Finally, the regularized 4D inversion approach with a prior model is applied to real repeated seismic data sets provided by TOTAL. This reconstruction of local variations is part of a steam-injection project for enhanced oil recovery: it proves possible to reconstruct fine velocity variations caused by the injected steam.
APA, Harvard, Vancouver, ISO, and other styles
23

Sui, Liqi. "Uncertainty management in parameter identification." Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2330/document.

Full text
Abstract:
In order to obtain more predictive and accurate simulations of mechanical behaviour in practical environments, more and more complex material models have been developed. Nowadays, the characterization of material properties remains a top-priority objective. It requires dedicated identification methods and tests in conditions as close as possible to the real ones. This thesis aims at developing an effective identification methodology to find the material property parameters, taking advantage of all available information. The information used for the identification is theoretical, experimental, and empirical: the theoretical information is linked to the mechanical models, whose uncertainty is epistemic; the experimental information consists in the full-field measurements, whose uncertainty is aleatory; the empirical information is related to the prior information, with epistemic uncertainty as well. The main difficulty is that the available information is not always reliable and its corresponding uncertainty is heterogeneous. This difficulty is overcome by the introduction of the theory of belief functions. By offering a general framework to represent and quantify the heterogeneous uncertainties, the performance of the identification is improved. A strategy based on belief functions is proposed to identify macro and micro elastic properties of multi-structure materials. In this strategy, model and measurement uncertainties are analysed and quantified. This strategy is subsequently developed to take prior information into consideration and to quantify its corresponding uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
24

Pohl, Kilian Maria. "Prior information for brain parcellation." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33925.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 171-184). To better understand brain disease, many neuroscientists study anatomical differences between normal and diseased subjects. Frequently, they analyze medical images to locate brain structures influenced by disease. Many of these structures have weakly visible boundaries, so that standard image analysis algorithms perform poorly. Instead, neuroscientists rely on manual procedures, which are time consuming and increase risks related to inter- and intra-observer reliability [53]. In order to automate this task, we develop an algorithm that robustly segments brain structures. We model the segmentation problem in a Bayesian framework, which is applicable to a variety of problems. This framework employs anatomical prior information in order to simplify the detection process. In this thesis, we experiment with different types of prior information such as spatial priors, shape models, and trees describing hierarchical anatomical relationships. We pose a maximum a posteriori probability estimation problem to find the optimal solution within our framework. From the estimation problem we derive an instance of the Expectation Maximization algorithm, which uses an initial imperfect estimate to converge to a good approximation. The resulting implementation is tested on a variety of studies, ranging from the segmentation of the brain into the three major brain tissue classes, to the parcellation of anatomical structures with weakly visible boundaries such as the thalamus or superior temporal gyrus. In general, our new method performs significantly better than other standard automatic segmentation techniques. The improvement is due primarily to the seamless integration of medical image artifact correction, alignment of the prior information to the subject, detection of the shape of anatomical structures, and representation of the anatomical relationships in a hierarchical tree.
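As a loose illustration of the general idea of combining an intensity likelihood with prior class probabilities inside an EM loop (a toy one-dimensional sketch on assumed synthetic data, not the thesis's algorithm, which additionally handles intensity artifact correction, atlas alignment, shape and hierarchical anatomy), the code below alternates a MAP-style E-step and M-step for a two-class Gaussian model with fixed per-voxel priors.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 1-D "image": two tissue classes with different mean intensities.
n = 5000
true_label = (rng.random(n) < 0.4).astype(int)
intensity = np.where(true_label == 1, 120.0, 80.0) + 10.0 * rng.standard_normal(n)

# Fixed spatial prior: per-voxel probability of class 1 (e.g., from an atlas).
prior1 = np.clip(0.4 + 0.2 * rng.standard_normal(n), 0.05, 0.95)
priors = np.column_stack([1.0 - prior1, prior1])          # shape (n, 2)

# EM for a 2-class Gaussian model with voxel-wise prior probabilities.
mu = np.array([70.0, 130.0])        # crude initial class means
sigma = np.array([20.0, 20.0])
for _ in range(30):
    # E-step: responsibilities proportional to prior * Gaussian likelihood.
    lik = np.exp(-0.5 * ((intensity[:, None] - mu) / sigma) ** 2) / sigma
    resp = priors * lik
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: weighted mean and standard deviation for each class.
    w = resp.sum(axis=0)
    mu = (resp * intensity[:, None]).sum(axis=0) / w
    sigma = np.sqrt((resp * (intensity[:, None] - mu) ** 2).sum(axis=0) / w)

labels = resp.argmax(axis=1)
print("estimated means:", mu, " labelling accuracy:", np.mean(labels == true_label))
```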
APA, Harvard, Vancouver, ISO, and other styles
25

Moreau-Gaudry, Alexandre. "Modélisation géométrique de bifurcations." Phd thesis, Université Joseph Fourier (Grenoble), 2000. http://tel.archives-ouvertes.fr/tel-00006751.

Full text
Abstract:
Bifurcation objects, because their topologies are not homotopic to the classical spherical, cylindrical or toric surfaces, are entities that are difficult to parameterize in a natural way. Situated in the field of modelling and imaging, this thesis first presents possible unambiguous planar parameterizations of this entity, one of which, inspired by physics, allowed the generation of a C1 surface of compatible topology: built as an envelope of superquadrics resting on a deformable skeleton, it is entirely defined by 24 parameters. In a second step, motivated by improving the study of an indirect marker of cardiovascular disease, the leading cause of death in industrialized countries, this surface is deformed from 2.5D ultrasound data of the carotid artery bifurcation: to obtain these data, an acquisition system integrating an optical localizer with active markers was developed and evaluated. Successively enriched with complementary a priori information of different types, this model is then matched to the previously acquired ultrasound data by two distinct methods ("extraction then fitting" and "active contours"). The first results obtained are presented in this work.
APA, Harvard, Vancouver, ISO, and other styles
26

Barde, Julien. "Mutualisation de données et de connaissances pour laGestion Intégrée des Zones Côtières.Application au projet SYSCOLAG." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2005. http://tel.archives-ouvertes.fr/tel-00112661.

Full text
Abstract:
This thesis is part of the multidisciplinary regional research programme Syscolag on integrated coastal zone management (ICZM). It studies the introduction of generic methods to optimise information and knowledge management in ICZM initiatives. Given the heterogeneity of the required information resources, which are distributed among varied actors, and the importance of geographic information in this field, we propose, in response to this problem, a solution based on a metadata service to describe and locate existing information and a semantic repository to integrate and share expert knowledge. These tools are accessible through a Web portal. The first implements the ISO 19115 standard for geographic information metadata; the second relies on an a priori ontology model that structures the inventory of the domain's concepts and expresses the knowledge associated with them. Spatial concepts have geometric properties that allow georeferenced cartographic representations of them, as well as spatial relations standardised according to the work of the Open GIS Consortium. The semantic repository is used to control the values assigned to key elements of the metadata service, in particular the thematic and spatial description elements (with a cartographic interface). The gain in indexing quality improves the localization of information. The repository can be browsed as a terminology base, a semantic network and, for spatial concepts, a cartographic atlas (based on the OGC Web Map Service standard). Such distributed systems are able to interoperate and share the metadata, geographic information, or concepts they host. Keywords: metadata, information sharing, knowledge sharing, a priori ontology, geographic information, integrated coastal zone management, GIS.
APA, Harvard, Vancouver, ISO, and other styles
27

Sunmola, Funlade Tajudeen. "Optimising learning with transferable prior information." Thesis, University of Birmingham, 2013. http://etheses.bham.ac.uk//id/eprint/3983/.

Full text
Abstract:
This thesis addresses the problem of how to incorporate user knowledge about an environment, or information acquired during previous learning in that environment or a similar one, to make future learning more effective. The problem is tackled within the framework of learning from rewards while acting in a Markov Decision Process (MDP). Appropriately incorporating user knowledge and prior experience into learning should lead to better performance during learning (the exploitation-exploration trade-off), and offer a better solution at the end of the learning period. We work in a Bayesian setting and consider two main types of transferable information, namely historical data and constraints involving absolute and relative restrictions on process dynamics. We present new algorithms for reasoning with transition constraints and show how to revise beliefs about the MDP transition matrix using constraints and prior knowledge. We also show how to use the resulting beliefs to control exploration. Finally we demonstrate benefits of historical information via power priors and by using process templates to transfer information from one environment to a second with related local process dynamics. We present results showing that incorporating historical data and constraints on state transitions in uncertain environments, either separately or collectively, can improve learning performance.
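As a minimal illustration of the power-prior idea mentioned above, consider a single success probability (for example, one outcome of an action in an MDP) updated with discounted historical counts; the Beta-Binomial setting, the discount a0 and the counts are illustrative assumptions, not the thesis's algorithms.

```python
from scipy import stats

# Historical environment: 40 successes in 100 trials of an action's outcome.
s_hist, f_hist = 40, 60
# New environment: only a handful of observations so far.
s_new, f_new = 3, 2

a0 = 0.5                      # power-prior discount: how much the history is trusted
alpha0, beta0 = 1.0, 1.0      # initial Beta(1, 1) prior

# Power prior: raise the historical likelihood to a0 before combining with new data.
alpha_post = alpha0 + a0 * s_hist + s_new
beta_post = beta0 + a0 * f_hist + f_new
posterior = stats.beta(alpha_post, beta_post)
print("posterior mean success probability:", round(posterior.mean(), 3))
```

Setting a0 = 0 ignores the historical data entirely, while a0 = 1 treats it as if it were collected in the new environment.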
APA, Harvard, Vancouver, ISO, and other styles
28

Ahmed, Syed Ejaz. "Estimation strategies under uncertain prior information." Dissertation (Mathematics), Carleton University, Ottawa, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
29

Kamary, Kaniav. "Lois a priori non-informatives et la modélisation par mélange." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED022/document.

Full text
Abstract:
One of the major applications of statistics is the validation and comparison of probabilistic models in light of data. This branch of statistics has been developed since its formalization at the end of the 19th century by pioneers like Gosset, Pearson and Fisher. In the special case of the Bayesian approach, the solution to model comparison is the Bayes factor, a ratio of marginal likelihoods, whatever the model being evaluated. This solution is obtained by mathematical reasoning based on a loss function. Despite the frequent use of the Bayes factor and its equivalent, the posterior probability of models, by the Bayesian community, it is problematic in some cases. First, this rule is highly dependent on the prior modelling, even with large datasets, and since the selection of a prior density plays a vital role in Bayesian statistics, one of the difficulties with the traditional handling of Bayesian tests is a discontinuity in the use of improper priors, since they are not justified in most testing situations. The first part of this thesis presents a general review of non-informative priors and their features, and demonstrates the overall stability of posterior distributions by reassessing the examples of [Seaman III 2012]. Besides that, Bayes factors are difficult to calculate except in the simplest cases (conjugate distributions). A branch of computational statistics has therefore emerged to resolve this problem, with solutions borrowing from statistical physics, such as the path sampling method of [Gelman 1998], and from signal processing. The existing solutions are not, however, universal, and a reassessment of these methods followed by the development of alternative methods constitutes part of the thesis. We therefore consider a novel paradigm for Bayesian testing of hypotheses and Bayesian model comparison. The idea is to define an alternative to the traditional construction of posterior probabilities that a given hypothesis is true or that the data originate from a specific model, based on considering the models under comparison as components of a mixture model. By replacing the original testing problem with an estimation version that focuses on the probability weight of a given model within a mixture model, we analyse the sensitivity of the resulting posterior distribution of the weights to various prior modellings of the weights, and stress that a major appeal of this novel perspective is that generic improper priors are acceptable, while not putting convergence in jeopardy. MCMC methods such as the Metropolis-Hastings algorithm and the Gibbs sampler, together with empirical approximations of the probability, are used. From a computational viewpoint, another feature of this easily implemented alternative to the classical Bayesian solution is that the speeds of convergence of the posterior mean of the weight and of the corresponding posterior probability are quite similar. In the last part of the thesis we construct a reference Bayesian analysis of mixtures of Gaussian distributions by creating a new parameterization centred on the mean and variance of those models. This enables us to develop a genuine non-informative prior for Gaussian mixtures with an arbitrary number of components. We demonstrate that the posterior distribution associated with this prior is almost surely proper and provide MCMC implementations that exhibit the expected component exchangeability. The analyses are based on MCMC methods such as the Metropolis-within-Gibbs algorithm, adaptive MCMC and the parallel tempering algorithm. This part of the thesis is followed by the description of an R package named Ultimixt, which implements a generic reference Bayesian analysis of unidimensional mixtures of Gaussian distributions obtained by a location-scale parameterization of the model. This package can be applied to produce a Bayesian analysis of Gaussian mixtures with an arbitrary number of components, with no need to specify the prior distribution.
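A minimal sketch of the mixture-based testing idea described in this abstract, assuming two fully specified competing Gaussian models and a Beta prior on the mixture weight; the candidate densities, the Beta(0.5, 0.5) prior and the random-walk step size are illustrative choices, not the thesis's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=100)          # observed sample

def log_lik_mix(w, x):
    # mixture of the two candidate models: N(0,1) versus N(1,1)
    f1 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
    f2 = np.exp(-0.5 * (x - 1.0)**2) / np.sqrt(2 * np.pi)
    return np.sum(np.log(w * f1 + (1 - w) * f2))

def log_post(w, x, a=0.5, b=0.5):
    if w <= 0 or w >= 1:
        return -np.inf
    return log_lik_mix(w, x) + (a - 1) * np.log(w) + (b - 1) * np.log(1 - w)

# random-walk Metropolis on the mixture weight w
w, samples = 0.5, []
for _ in range(20000):
    prop = w + 0.05 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop, data) - log_post(w, data):
        w = prop
    samples.append(w)

print("posterior mean weight of model 1:", np.mean(samples[5000:]))
```

The posterior mean of the weight then plays the role that the posterior probability of a model plays in the classical Bayes-factor approach.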
APA, Harvard, Vancouver, ISO, and other styles
30

Ren, Shijie. "Using prior information in clinical trial design." Thesis, University of Sheffield, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.555104.

Full text
Abstract:
A current concern in medical research is low productivity in the pharmaceutical industry. Failure rates of Phase III clinical trials are high, and this is very costly in terms of resources and money. Our aim in this thesis is to incorporate prior information in clinical trial design and develop better assessments of the chances of successful clinical trials, so that trial sponsors can improve their success rates. Assurance calculations, which take into account uncertainty about how effective the treatment actually is, provide a more reliable assessment of the probability of a successful trial outcome than power calculations. We develop assurance methods to accommodate survival outcome measures, assuming both parametric and nonparametric models. We also develop prior elicitation procedures for each survival model so that the assurance calculations can be performed more easily and reliably. Prior elicitation is not an easy task, and we may be uncertain about what distribution 'best' represents an expert's beliefs. We demonstrate that robustness of the assurance to different choices of prior distribution can be assessed by treating the elicitation process as a Bayesian inference problem, using a nonparametric Bayesian approach to quantify uncertainty in the expert's density function for the true treatment effect. In this thesis, we also consider a decision-making problem for a single-arm open-label Phase II trial for the PhD sponsor Roche. Based on the Bayesian decision-theoretic approach and assurance calculations, a model is developed to help the trial sponsor find the optimal trial strategies according to their beliefs about the true treatment effect.
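A minimal sketch of an assurance calculation of the kind described above, for a two-arm trial with a normally distributed outcome rather than the survival outcomes treated in the thesis; the prior on the treatment effect, the sample sizes and the significance test are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_arm, sigma, n_sim = 100, 1.0, 20000

# prior belief about the true treatment effect (elicited from experts in practice)
prior_mean, prior_sd = 0.3, 0.2

successes = 0
for _ in range(n_sim):
    delta = rng.normal(prior_mean, prior_sd)        # draw a plausible true effect
    ctrl = rng.normal(0.0, sigma, n_per_arm)        # simulate one trial
    trt = rng.normal(delta, sigma, n_per_arm)
    t, p = stats.ttest_ind(trt, ctrl)
    successes += (p < 0.05) and (t > 0)             # "success" = significant benefit

print("assurance:", successes / n_sim)
```

Unlike a power calculation, which fixes the treatment effect at a single assumed value, the assurance averages the probability of success over the prior for the effect.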
APA, Harvard, Vancouver, ISO, and other styles
31

Ghadermarzy, Navid. "Using prior support information in compressed sensing." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44912.

Full text
Abstract:
Compressed sensing is a data acquisition technique that entails recovering estimates of sparse and compressible signals from n linear measurements, significantly fewer than the signal ambient dimension N. In this thesis we show how we can reduce the required number of measurements even further if we incorporate prior information about the signal into the reconstruction algorithm. Specifically, we study certain weighted nonconvex Lp minimization algorithms and a weighted approximate message passing (AMP) algorithm. In Chapter 1 we describe compressed sensing as a practicable signal acquisition method in applications and introduce the generic sparse approximation problem. Then we review some of the algorithms used in the compressed sensing literature and briefly introduce the method we used to incorporate prior support information into these problems. In Chapter 2 we derive sufficient conditions for stable and robust recovery using weighted Lp minimization and show that these conditions are better than those for recovery by regular Lp and weighted L1. We present extensive numerical experiments, both on synthetic examples and on audio and seismic signals. In Chapter 3 we derive a weighted AMP algorithm which iteratively solves the weighted L1 minimization problem. We also introduce a reweighting scheme for weighted AMP algorithms which enhances the recovery performance of weighted AMP. We also apply these algorithms to synthetic experiments and to real audio signals.
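A minimal sketch of weighted L1 recovery with prior support information, solved here with a simple iterative soft-thresholding loop rather than the weighted Lp or AMP algorithms analysed in the thesis; the weight value on the assumed support, the regularisation constant and all problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, k = 256, 80, 10
x_true = np.zeros(N)
support = rng.choice(N, k, replace=False)
x_true[support] = rng.normal(0, 1, k)

A = rng.normal(0, 1 / np.sqrt(n), (n, N))
y = A @ x_true

# prior support estimate (possibly imperfect); supported entries get a smaller weight
prior_support = support[:8]
w = np.ones(N)
w[prior_support] = 0.3

# weighted ISTA for min ||A x - y||^2 / 2 + lam * sum_i w_i |x_i|
lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(N)
for _ in range(2000):
    g = x - step * A.T @ (A @ x - y)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Down-weighting the entries believed to be in the support penalises them less, which is the mechanism by which prior support information reduces the number of measurements needed.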
APA, Harvard, Vancouver, ISO, and other styles
32

Viggh, Herbert E. M. "Surface Prior Information Reflectance Estimation (SPIRE) algorithms." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/17564.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 393-396). In this thesis we address the problem of estimating changes in surface reflectance in hyperspectral image cubes, under unknown multiplicative and additive illumination noise. Rather than using the Empirical Line Method (ELM) or physics-based approaches, we assumed the presence of a prior reflectance image cube and ensembles of typical multiplicative and additive illumination noise vectors, and developed algorithms which estimate reflectance using this prior information. These algorithms were developed under the additional assumptions that the illumination effects were band limited to lower spatial frequencies and that the differences in the surface reflectance from the prior were small in area relative to the scene, and have defined edges. These new algorithms were named Surface Prior Information Reflectance Estimation (SPIRE) algorithms. Spatial SPIRE algorithms that employ spatial processing were developed for six cases defined by the presence or absence of the additive noise, and by whether the noise signals are spatially uniform or varying. These algorithms use high-pass spatial filtering to remove the noise effects. Spectral SPIRE algorithms that employ spectral processing were developed and use zero-padded Principal Components (PC) filtering to remove the illumination noise. Combined SPIRE algorithms that use both spatial and spectral processing were also developed. A Selective SPIRE technique that chooses between Combined and Spectral SPIRE reflectance estimates was developed; it maximizes estimation performance on both modified and unmodified pixels. The different SPIRE algorithms were tested on HYDICE airborne sensor hyperspectral data, and their reflectance estimates were compared to those from the physics-based ATmospheric REMoval (ATREM) and the Empirical Line Method atmospheric compensation algorithms. SPIRE algorithm performance was found to be nearly identical to the ELM ground-truth based results. SPIRE algorithms performed better than ATREM overall, and significantly better under high clouds and haze. Minimum-distance classification experiments demonstrated SPIRE's superior performance over both ATREM and ELM in cross-image supervised classification applications. The taxonomy of SPIRE algorithms was presented and suggestions were made concerning which SPIRE algorithm is recommended for various applications. By Herbert Erik Mattias Viggh, Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
33

Parsley, M. P. "Simultaneous localisation and mapping with prior information." Thesis, University College London (University of London), 2011. http://discovery.ucl.ac.uk/1318103/.

Full text
Abstract:
This thesis is concerned with Simultaneous Localisation and Mapping (SLAM), a technique by which a platform can estimate its trajectory with greater accuracy than odometry alone, especially when the trajectory incorporates loops. We discuss some of the shortcomings of the "classical" SLAM approach (in particular EKF-SLAM), which assumes that no information is known about the environment a priori. We argue that in general this assumption is needlessly stringent; for most environments, such as cities, some prior information is known. We introduce an initial Bayesian probabilistic framework which considers the world as a hierarchy of structures, and maps (such as those produced by SLAM systems) as consisting of features derived from them. Common underlying structure between features in maps allows one to express and thus exploit geometric relations between them to improve their estimates. We apply the framework to EKF-SLAM for the case of a vehicle equipped with a range-bearing sensor operating in an urban environment, building up a metric map of point features, and using a prior map consisting of line segments representing building footprints. We develop a novel method called the Dual Representation, which allows us to use information from the prior map to not only improve the SLAM estimate, but also reduce the severity of errors associated with the EKF. Using the Dual Representation, we investigate the effect of varying the accuracy of the prior map for the case where the underlying structures, and thus the relations between the SLAM map and prior map, are known. We then generalise to the more realistic case, where there is "clutter" - features in the environment that do not relate to the prior map. This involves forming a hypothesis for whether a pair of features in the SLAM state and prior map were derived from the same structure, and evaluating this based on a geometric likelihood model. Initially we try an incremental Multiple Hypothesis SLAM (MHSLAM) approach to resolve hypotheses, developing a novel method called the Common State Filter (CSF) to reduce the exponential growth in computational complexity inherent in this approach. This allows us to use information from the prior map immediately, thus reducing linearisation and EKF errors. However, we find that MHSLAM is still too inefficient, even with the CSF, so we use a strategy that delays applying relations until we can infer whether they apply; we defer applying information from structure hypotheses until their probability of holding exceeds a threshold. Using this method we investigate the effect of varying degrees of "clutter" on the performance of SLAM.
APA, Harvard, Vancouver, ISO, and other styles
34

VALENTE, Giancarlo. "Separazione cieca di sorgenti in ambienti reali: nuovi algoritmi, applicazioni e implementazioni." Doctoral thesis, La Sapienza, 2006. http://hdl.handle.net/11573/916995.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Zhonggai. "Objective Bayesian Analysis of Kullback-Liebler Divergence of two Multivariate Normal Distributions with Common Covariance Matrix and Star-shape Gaussian Graphical Model." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28121.

Full text
Abstract:
This dissertation consists of four independent but related parts, each in a chapter. The first part is introductory. It serves as the background and offers preparation for the later parts. The second part discusses two multivariate normal populations with a common covariance matrix. The goal of this part is to derive objective/non-informative priors for the parameterizations and use these priors to build up constructive random posteriors of the Kullback-Leibler (KL) divergence of the two multivariate normal populations, which is proportional to the distance between the two means, weighted by the common precision matrix. We use the Cholesky decomposition for re-parameterization of the precision matrix. The KL divergence is a true distance measurement for divergence between the two multivariate normal populations with common covariance matrix. Frequentist properties of the Bayesian procedure using these objective priors are studied through analytical and numerical tools. The third part considers the star-shape Gaussian graphical model, which is a special case of undirected Gaussian graphical models. It is a multivariate normal distribution where the variables are grouped into one "global" variable set and several "local" variable sets. When conditioned on the global variable set, the local variable sets are independent of each other. We adopt the Cholesky decomposition for re-parameterization of the precision matrix and derive Jeffreys' prior, reference prior, and invariant priors for the new parameterizations. The frequentist properties of the Bayesian procedure using these objective priors are also studied. The last part concentrates on the discussion of objective Bayesian analysis for the partial correlation coefficient and its application to multivariate Gaussian models. Ph. D.
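For reference, the divergence discussed above has a simple closed form when the two multivariate normal populations share the covariance matrix $\Sigma$ (equivalently, the precision matrix $\Sigma^{-1}$); this is a standard identity, not a result specific to the dissertation:

```latex
\mathrm{KL}\bigl(\mathcal{N}(\mu_1,\Sigma)\,\|\,\mathcal{N}(\mu_2,\Sigma)\bigr)
  = \tfrac{1}{2}\,(\mu_1-\mu_2)^{\top}\Sigma^{-1}(\mu_1-\mu_2)
```

That is, half the squared Mahalanobis distance between the two means, weighted by the common precision matrix.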
APA, Harvard, Vancouver, ISO, and other styles
36

Liu, Yang. "Application of prior information to discriminative feature learning." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/285558.

Full text
Abstract:
Learning discriminative feature representations has attracted a great deal of attention since it is a critical step to facilitate the subsequent classification, retrieval and recommendation tasks. In this dissertation, besides incorporating prior knowledge about image labels into image classification, as most prevalent feature learning methods currently do, we also explore some other general-purpose priors and verify their effectiveness in discriminant feature learning. As a more powerful representation can be learned by implementing such general priors, our approaches achieve state-of-the-art results on challenging benchmarks. We elaborate on these general-purpose priors and highlight where we have made novel contributions. We apply sparsity and hierarchical priors to the explanatory factors that describe the data, in order to better discover the data structure. More specifically, in the first approach we propose that we only incorporate sparse priors into the feature learning. To this end, we present a support discrimination dictionary learning method, which finds a dictionary under which the feature representations of images from the same class have a common sparse structure while the size of the overlapped signal support of different classes is minimised. Then we incorporate sparse priors and hierarchical priors into a unified framework that is capable of controlling the sparsity of the neuron activation in deep neural networks. Our proposed approach automatically selects the most useful low-level features and effectively combines them into more powerful and discriminative features for our specific image classification problem. We also explore priors on the relationships between multiple factors. When multiple independent factors exist in the image generation process and only some of them are of interest to us, we propose a novel multi-task adversarial network to learn a disentangled feature which is optimized with respect to the factor of interest to us, while being agnostic to the distraction factors. When common factors exist in multiple tasks, leveraging common factors can not only make the learned feature representation more robust, but also enable the model to generalise from very few labelled samples. More specifically, we address the domain adaptation problem and propose the re-weighted adversarial adaptation network to reduce the feature distribution divergence and adapt the classifier from source to target domains.
APA, Harvard, Vancouver, ISO, and other styles
37

Qin, Jing. "Prior Information Guided Image Processing and Compressive Sensing." Case Western Reserve University School of Graduate Studies / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=case1365020074.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ahmed, Iftikhar, and Muhammad Farooq. "Switched Multi-hop Priority Queued Networks-Influence of priority levels on Soft Real-time Performance." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-6054.

Full text
Abstract:
In the last few years, the number of real-time applications has increased. These applications are sensitive and require methods that utilize existing network capacity efficiently to meet performance requirements and achieve the maximum throughput while limiting delay, jitter and packet loss. When the network needs to support highly interactive traffic like packet-switched voice, network congestion is an issue that can lead to various problems. If the level of congestion is high enough, users may not be able to complete their calls, may have existing calls dropped, or may experience a variety of delays that make it difficult to hold a smooth conversation. In this paper, we investigate the effect of priority levels on soft real-time performance. We use priority queues to help us manage congestion, handle the interactive traffic and improve the overall performance of the system. We consider a switched multi-hop network with priority queues. All the switches and end-nodes control the real-time traffic with "Earliest Deadline First" scheduling. The performance of the network is characterized in terms of the average delay, the deadline miss ratio and the throughput. We analyze these parameters with both bursty traffic and evenly distributed traffic. We analyze different priority levels and see how an increase in the number of priority levels improves the performance of the soft real-time system.
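A minimal sketch of the earliest-deadline-first service order used in such switches, assuming each queued packet carries an absolute deadline; the packet names and deadline values are illustrative.

```python
import heapq

class EDFQueue:
    """Serve the queued packet whose deadline is closest (Earliest Deadline First)."""
    def __init__(self):
        self._heap = []

    def enqueue(self, deadline, packet):
        heapq.heappush(self._heap, (deadline, packet))

    def dequeue(self):
        deadline, packet = heapq.heappop(self._heap)
        return packet, deadline

q = EDFQueue()
q.enqueue(12.0, "voice frame")      # tight deadline -> served first
q.enqueue(50.0, "file chunk")
q.enqueue(20.0, "sensor reading")
while True:
    try:
        print(q.dequeue())
    except IndexError:              # heap is empty
        break
```

In a multi-level variant, each priority level would maintain its own such queue, with higher levels served before lower ones.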
APA, Harvard, Vancouver, ISO, and other styles
39

Fronczyk, Kassandra M. "Development of Informative Priors in Microarray Studies." Diss., CLICK HERE for online access, 2007. http://contentdm.lib.byu.edu/ETD/image/etd2031.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Stroeymeyt, Nathalie. "Information gathering prior to emigration in house-hunting ants." Thesis, University of Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.529832.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kempter, Bernhard. "Konfliktbehandlung im policy–basierten Management mittels a priori Modellierung." Diss., lmu, 2004. http://nbn-resolving.de/urn:nbn:de:bvb:19-33473.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Giménez, Febrer Pere Joan. "Matrix completion with prior information in reproducing kernel Hilbert spaces." Doctoral thesis, Universitat Politècnica de Catalunya, 2021. http://hdl.handle.net/10803/671718.

Full text
Abstract:
In matrix completion, the objective is to recover an unknown matrix from a small subset of observed entries. Most successful methods for recovering the unknown entries are based on the assumption that the unknown full matrix has low rank. By having low rank, each of its entries is obtained as a function of a small number of coefficients which can be accurately estimated provided that there are enough available observations. Hence, in low-rank matrix completion the estimate is given by the matrix of minimum rank that fits the observed entries. Besides low rankness, the unknown matrix might exhibit other structural properties which can be leveraged in the recovery process. In a smooth matrix, it can be expected that entries that are close in index distance will have similar values. Similarly, groups of rows or columns can be known to contain similarly valued entries according to certain relational structures. This relational information is conveyed through different means such as covariance matrices or graphs, with the inconvenience that these cannot be derived from the data matrix itself since it is incomplete. Hence, any knowledge of how the matrix entries are related among themselves must be derived from prior information. This thesis deals with matrix completion with prior information, and presents an outlook that generalizes to many situations. In the first part, the columns of the unknown matrix are cast as graph signals with a graph known beforehand. Here, the adjacency matrix of the graph is used to calculate an initial point for a proximal gradient algorithm in order to reduce the iterations needed to converge to a solution. Then, under the assumption that the graph signals are smooth, the graph Laplacian is incorporated into the problem formulation with the aim of enforcing smoothness on the solution. This results in an effective denoising of the observed matrix and reduced error, which is shown through theoretical analysis of the proximal gradient coupled with Laplacian regularization, and numerical tests. The second part of the thesis introduces a framework to exploit prior information through reproducing kernel Hilbert spaces. Since a kernel measures similarity between two points in an input set, it enables the encoding of any prior information such as feature vectors, dictionaries or connectivity on a graph. By associating each column and row of the unknown matrix with an item in a set, and defining a pair of kernels measuring similarity between columns or rows, the missing entries can be extrapolated by means of the kernel functions. A method based on kernel regression is presented, with two additional variants aimed at reducing the computational cost, and an online implementation. These methods prove to be competitive with existing techniques, especially when the number of observations is very small. Furthermore, mean-square error and generalization error analyses are carried out, shedding light on the factors impacting algorithm performance. For the generalization error analysis, the focus is on the transductive case, which measures the ability of an algorithm to transfer knowledge from a set of labelled inputs to an unlabelled set. Here, bounds are derived for the proposed and existing algorithms by means of the transductive Rademacher complexity, and numerical tests confirming the theoretical findings are presented. Finally, the thesis explores the question of how to choose the observed entries of a matrix in order to minimize the recovery error of the full matrix. A passive sampling approach is presented, which entails that no labelled inputs are needed to design the sampling distribution; only the input set and kernel functions are required. The approach is based on building the best Nyström approximation to the kernel matrix by sampling the columns according to their leverage scores, a metric that arises naturally in the theoretical analysis of finding an optimal sampling distribution.
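A minimal sketch of the kernel idea described above: missing entries are extrapolated with a product-kernel (Nadaraya-Watson) regression over row and column feature vectors. The Gaussian kernels, the synthetic feature vectors and the bandwidth are illustrative assumptions, not the estimators developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rows, n_cols = 30, 20
row_feat = rng.normal(size=(n_rows, 2))          # side information for rows
col_feat = rng.normal(size=(n_cols, 2))          # side information for columns

def gauss_kernel(F, bw=1.0):
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw**2))

Kr, Kc = gauss_kernel(row_feat), gauss_kernel(col_feat)

# ground-truth smooth matrix and a sparse observation mask
M = row_feat @ col_feat.T
mask = rng.uniform(size=M.shape) < 0.3

# kernel-smoothed estimate of every entry from the observed ones
W = Kr @ (mask * 1.0) @ Kc                       # normalising weights
M_hat = (Kr @ (mask * M) @ Kc) / np.maximum(W, 1e-12)

err = np.linalg.norm((M_hat - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on missing entries:", err)
```

Here the row and column kernels play the role of the prior information: entries whose rows and columns are similar under the kernels borrow strength from each other.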
APA, Harvard, Vancouver, ISO, and other styles
43

Olsen, Catharina. "Causal inference and prior integration in bioinformatics using information theory." Doctoral thesis, Universite Libre de Bruxelles, 2013. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209401.

Full text
Abstract:
An important problem in bioinformatics is the reconstruction of gene regulatory networks from expression data. The analysis of genomic data stemming from high-throughput technologies such as microarray experiments or RNA-sequencing faces several difficulties. The first major issue is the high variable-to-sample ratio, which is due to a number of factors: a single experiment captures all genes, while the number of experiments is restricted by the experiment's cost, time and patient cohort size. The second problem is that these data sets typically exhibit high amounts of noise. Another important problem in bioinformatics is the question of how the inferred networks' quality can be evaluated. The current best practice is a two-step procedure. In the first step, the highest scoring interactions are compared to known interactions stored in biological databases. The inferred network passes this quality assessment if there is a large overlap with the known interactions. In this case, a second step is carried out in which unknown but high scoring and thus promising new interactions are validated 'by hand' via laboratory experiments. Unfortunately, when integrating prior knowledge in the inference procedure, this validation procedure would be biased by using the same information in both the inference and the validation. Therefore, it would no longer allow an independent validation of the resulting network. The main contribution of this thesis is a complete computational framework that uses experimental knock-down data in a cross-validation scheme to both infer and validate directed networks. Its components are i) a method that integrates genomic data and prior knowledge to infer directed networks, ii) its implementation in an R/Bioconductor package and iii) a web application to retrieve prior knowledge from PubMed abstracts and biological databases. To infer directed networks from genomic data and prior knowledge, we propose a two-step procedure: First, we adapt the pairwise feature selection strategy mRMR to integrate prior knowledge in order to obtain the network's skeleton. Then, for the subsequent orientation phase of the algorithm, we extend a criterion based on interaction information to include prior knowledge. The implementation of this method is available both as part of the prior retrieval tool Predictive Networks and as a stand-alone R/Bioconductor package named predictionet. Furthermore, we propose a fully data-driven quantitative validation of such directed networks using experimental knock-down data: We start by identifying the set of genes that was truly affected by the perturbation experiment. The rationale of our validation procedure is that these truly affected genes should also be part of the perturbed gene's childhood in the inferred network. Consequently, we can compute a performance score. Doctorat en Sciences.
APA, Harvard, Vancouver, ISO, and other styles
44

Johnson, Robert Spencer. "Incorporation of prior information into independent component analysis of FMRI." Thesis, University of Oxford, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.711637.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Poulsen, Rachel Lynn. "XPRIME: A Method Incorporating Expert Prior Information into Motif Exploration." BYU ScholarsArchive, 2009. https://scholarsarchive.byu.edu/etd/2083.

Full text
Abstract:
One of the primary goals of active research in molecular biology is to better understand the process of transcription regulation. An important objective in understanding transcription is identifying transcription factors that directly regulate target genes. Identifying these transcription factors is a key step toward eliminating genetic diseases or disease susceptibilities that are encoded inside deoxyribonucleic acid (DNA). There is much uncertainty and variation associated with transcription factor binding sites, requiring these sites to be represented stochastically. Although typically each transcription factor prefers to bind to a specific DNA word, it can bind to different variations of that DNA word. In order to model these uncertainties, we use a Bayesian approach that allows the binding probabilities associated with the motif to vary. This project presents a new method for motif searching that uses expert prior information to scan DNA sequences for multiple known motif binding sites as well as new motifs. The method uses a mixture model to model the motifs of interest where each motif is represented by a Multinomial distribution, and Dirichlet prior distributions are placed on each motif of interest. Expert prior information is given to search for known motifs and diffuse priors are used to search for new motifs. The posterior distribution of each motif is then sampled using Markov Chain Monte Carlo (MCMC) techniques and Gibbs sampling.
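A minimal sketch of the Dirichlet-Multinomial representation of a motif described above: pseudo-counts from a Dirichlet prior are combined with aligned example sites to give a position weight matrix, which is then used to score candidate windows in a sequence. The example sites, prior counts and motif width are illustrative, and a full sampler such as XPRIME would instead infer the motif positions with MCMC/Gibbs updates.

```python
import numpy as np

BASES = "ACGT"
sites = ["TATAAT", "TATAAA", "TACAAT", "TATGAT"]    # aligned example binding sites
width = len(sites[0])

# Dirichlet prior pseudo-counts encode expert belief about the motif
prior = np.ones((width, 4)) * 0.5

counts = prior.copy()
for s in sites:
    for pos, base in enumerate(s):
        counts[pos, BASES.index(base)] += 1
pwm = counts / counts.sum(axis=1, keepdims=True)    # posterior-mean probabilities

def log_score(window):
    return sum(np.log(pwm[pos, BASES.index(b)]) for pos, b in enumerate(window))

seq = "GGCTATAATGCCGTACAATTT"
scores = [(i, log_score(seq[i:i + width])) for i in range(len(seq) - width + 1)]
best = max(scores, key=lambda t: t[1])
print("best window:", seq[best[0]:best[0] + width], "log-score:", round(best[1], 2))
```

Informative prior counts make the motif model usable even with very few example sites, while diffuse counts correspond to the search for new, unknown motifs.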
APA, Harvard, Vancouver, ISO, and other styles
46

Stewart, Alexander D. "Localisation using the appearance of prior structure." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:4ee889ac-e8e3-4000-ae23-a9d7f84fcd65.

Full text
Abstract:
Accurate and robust localisation is a fundamental aspect of any autonomous mobile robot. However, if such robots are to become widespread, localisation must also be available at low cost. In this thesis, we develop a new approach to localisation using monocular cameras by leveraging a coloured 3D pointcloud prior of the environment, captured previously by a survey vehicle. We make no assumptions about the external conditions during the robot's traversal relative to those experienced by the survey vehicle, nor do we make any assumptions about their relative sensor configurations. Our method uses no extracted image features. Instead, it explicitly optimises for the pose which harmonises the information, in a Shannon sense, about the appearance of the scene from the captured images conditioned on the pose, with that of the prior. We use as our objective the Normalised Information Distance (NID), a true metric for information, and demonstrate as a consequence the robustness of our localisation formulation to illumination changes, occlusions and colourspace transformations. We present how, by construction of the joint distribution of the appearance of the scene from the prior and the live imagery, the gradients of the NID can be computed, and how these can be used to efficiently solve our formulation using quasi-Newton methods. In order to reliably identify any localisation failures, we present a new classifier using the local shape of the NID about the candidate pose and demonstrate the performance gains of the complete system from its use. Finally, we detail the development of a real-time capable implementation of our approach using commodity GPUs and demonstrate that it outperforms a high-grade, commercial GPS-aided INS on 57 km of driving in central Oxford, over a range of different conditions, times of day and year.
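A minimal sketch of the Normalised Information Distance objective used above, computed from a joint histogram of two intensity images; the binning and the synthetic images are illustrative, and a full localiser would evaluate this quantity over candidate poses of the prior pointcloud rendering.

```python
import numpy as np

def nid(a, b, bins=32):
    """Normalised Information Distance between two intensity images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    h_xy = -np.sum(p[nz] * np.log(p[nz]))                  # joint entropy
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    mi = h_x + h_y - h_xy                                   # mutual information
    return (h_xy - mi) / h_xy                               # 0 = identical, 1 = independent

rng = np.random.default_rng(4)
live = rng.uniform(size=(64, 64))
rendered_good = live + 0.05 * rng.normal(size=live.shape)   # prior rendered near the true pose
rendered_bad = rng.uniform(size=(64, 64))                    # prior rendered at a wrong pose
print(nid(live, rendered_good), nid(live, rendered_bad))     # smaller NID = better pose
```

Because the NID is built from the joint distribution of appearance rather than raw intensity differences, it stays small under monotone illumination or colourspace changes, which is the property the thesis exploits.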
APA, Harvard, Vancouver, ISO, and other styles
47

Hotti, Alexandra. "Bayesian insurance pricing using informative prior estimation techniques." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-286312.

Full text
Abstract:
Large, well-established insurance companies build statistical pricing models based on customer claim data. Due to their long experience and large amounts of data, they can predict their future expected claim losses accurately. In contrast, small newly formed insurance start-ups do not have access to such data. Instead, a start-up's initial pricing model parameters can be set by directly estimating the risk premium tariff's parameters in a non-statistical manner. However, this approach results in a pricing model that cannot be adjusted based on new claim data through classical frequentist insurance approaches. This thesis puts forth three Bayesian approaches for including estimates of an existing multiplicative tariff as the expectation of a prior in a Generalized Linear Model (GLM). The similarity between premiums set using the prior estimations and the static pricing model was measured as their relative difference. The results showed that the static tariff could be closely estimated. The estimated priors were then merged with claim data through the likelihood. These posteriors were estimated via the two Markov Chain Monte Carlo approaches, Metropolis and Metropolis-Hastings. All in all, this resulted in three risk premium models that could take advantage of existing pricing knowledge and learn over time as new cases arrived. The results showed that the Bayesian pricing methods significantly reduced the discrepancy between predicted and actual claim costs at an overall portfolio level compared to the static tariff. Nevertheless, this could not be determined at an individual policyholder level.
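A minimal sketch of the general idea of centring a prior on an existing tariff and updating it with claim data, here reduced to a conjugate Gamma-Poisson model for claim frequency rather than the full GLM with Metropolis-Hastings used in the thesis; the tariff frequency, prior strength and exposure figures are illustrative.

```python
# Prior claim frequency taken from the static tariff: 0.08 claims per policy-year,
# with a prior strength equivalent to 500 policy-years of "virtual" exposure.
tariff_freq, prior_exposure = 0.08, 500.0
alpha0, beta0 = tariff_freq * prior_exposure, prior_exposure   # Gamma(alpha, beta) prior

# New portfolio data: observed claims and actual exposure in policy-years.
claims, exposure = 130, 1200.0

alpha_post, beta_post = alpha0 + claims, beta0 + exposure
posterior_mean = alpha_post / beta_post          # blends the tariff prior with observed data
print("posterior claim frequency:", round(posterior_mean, 4))
```

As more claim data accumulate, the posterior moves away from the tariff-based prior toward the observed experience, which is the behaviour the thesis seeks for a start-up's pricing model.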
APA, Harvard, Vancouver, ISO, and other styles
48

Gursoy, Dogan. "Development of a Travelers' Information Search Behavior Model." Diss., Virginia Tech, 2001. http://hdl.handle.net/10919/29970.

Full text
Abstract:
In the dynamic global environment of today, understanding how travelers acquire information is important for marketing management decisions (Srinivasan 1990; Wilkie and Dickson 1985). For destination marketing managers, understanding the information search behavior of travelers is crucial for designing effective marketing communication campaigns because information search represents the primary stage at which marketing can provide information and influence travelers' vacation decisions. Therefore, conceptual and empirical examinations of tourist information search behavior have a long tradition in the tourism marketing literature (Etzel and Wahlers, 1985; Fodness and Murray, 1997, 1998, 1999; Perdue, 1985; Schul and Crompton, 1983; Snepenger and Snepenger 1993; Woodside and Ronkainen, 1980). Even though several studies examined travelers' information search behavior and the factors that are likely to affect it, they all examined travelers' prior product knowledge as a uni-dimensional construct, most often referred to as destination familiarity or previous trip experiences (Woodside and Ronkainen, 1980). However, the consumer behavior literature suggests that prior product knowledge is not a uni-dimensional construct (Alba and Hutchinson). Alba and Hutchinson (1987) propose that prior product knowledge has two major components, familiarity and expertise, and cannot be measured by a single indicator. In addition, in tourism, little research has been done on the factors that are likely to influence travelers' prior product knowledge and, therefore, their information search behavior. The purpose of this study is to examine travelers' information search behavior by studying the effects of travelers' familiarity and expertise on their information search behavior and identifying the factors that are likely to influence travelers' familiarity and expertise and their information search behavior. A travelers' information search behavior model and a measurement instrument to assess the constructs of the model were designed for use in this study. The model proposed that the type of information search (internal and/or external) that is likely to be utilized will be influenced by travelers' familiarity and expertise. In addition, travelers' involvement, learning, prior visits and cost of information search are proposed to influence travelers' familiarity and their information search behavior. Even though a very complex travelers' information search behavior model was proposed, only the effects of travelers' prior product knowledge (familiarity and expertise) on travelers' information search behavior were empirically tested, due to the complex nature of the model. First, the proposed measurement scales were pretested on 224 consumers. After making sure that the proposed measures of each construct were valid and reliable, a survey of 470 consumers of travel/tourism services who reside in Virginia was conducted. Structural Equation Modeling (i.e., LISREL) analysis was performed to test the fit of the model. Results of the study confirmed that travelers' prior product knowledge has two components, familiarity and expertise, and that expertise is a function of familiarity. Both familiarity and expertise affect travelers' information search behavior. While the effect of familiarity on internal search is positive and on external search is negative, the effect of expertise on internal search is negative and on external search is positive.
The study identified a U-shaped relationship between travelers' prior product knowledge and external information search. At early stages of learning (low familiarity), travelers are likely to rely on external information sources to make their vacation decisions. As their prior product knowledge (familiarity) increases, they tend to make their vacation decisions based on what is in their memory; therefore, reliance on external information sources decreases. However, as they learn more (become experts), they realize that they need more detailed information to make their vacation decisions. As a result, they start searching for additional external information to make their vacation decisions. Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Georgiou, Christina Nefeli. "Constructing informative Bayesian priors to improve SLAM map quality." Thesis, University of Sheffield, 2016. http://etheses.whiterose.ac.uk/17167/.

Full text
Abstract:
The problem of Simultaneous Localisation And Mapping (SLAM) has been widely researched and has been of particular interest in recent years, with robots and self-driving cars becoming ubiquitous. SLAM solutions to date have aimed to produce faster, more robust solutions that yield consistent maps by improving the filtering algorithms used, introducing better sensors, more efficient map representations or improved motion estimates. Whilst performing well in simplified scenarios, many of these solutions perform poorly in challenging real-life scenarios. It is therefore important to produce SLAM solutions that can perform well even when using limited computational resources and performing a quick exploration, as in time-critical operations such as Urban Search And Rescue missions. In order to address this problem, this thesis proposes the construction of informative Bayesian priors to improve performance without adding to the computational complexity of the SLAM algorithm. Indoor occupancy grid SLAM is used as a case study to demonstrate this concept, and architectural drawings are used as a source of prior information. The use of prior information to improve the performance of robotic systems has been successful in applications such as visual odometry, self-driving car navigation and object recognition. However, none of these solutions leverage prior information to construct Bayesian priors that can be used in recursive map estimation. This thesis addresses this problem and proposes a novel method to process architectural drawings and floor plans to extract structural information. A study is then conducted to identify optimal prior values of occupancy to assign to extracted walls and empty space. A novel approach is proposed to assess the quality of maps produced using different priors, and a multi-objective optimisation is used to identify Pareto-optimal values. The proposed informative priors are found to perform better than the commonly used non-informative prior, yielding an increase of over 20% in the F2 metric without adding to the computational complexity of the SLAM algorithm.
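A minimal sketch of how an informative occupancy prior extracted from a floor plan might seed a log-odds occupancy grid before SLAM measurement updates are applied; the prior probabilities for wall and free cells, the wall mask and the sensor model are illustrative values, not the Pareto-optimal ones identified in the thesis.

```python
import numpy as np

def log_odds(p):
    return np.log(p / (1.0 - p))

H, W = 50, 80
# cells flagged as walls after processing an architectural drawing (assumed given)
wall_mask = np.zeros((H, W), dtype=bool)
wall_mask[0, :] = wall_mask[-1, :] = wall_mask[:, 0] = wall_mask[:, -1] = True

# informative prior: drawn walls likely occupied, interior likely free
grid = np.full((H, W), log_odds(0.35))           # prior for empty space
grid[wall_mask] = log_odds(0.80)                 # prior for drawn walls

# recursive Bayesian update: add the log-likelihood ratio of an "occupied" reading
p_hit, p_false = 0.9, 0.2                        # sensor model (assumed)
grid[25, 40] += np.log(p_hit / p_false)

prob = 1.0 - 1.0 / (1.0 + np.exp(grid))          # back to probabilities
print(prob[25, 40], prob[0, 0], prob[10, 10])
```

With a non-informative prior every cell would start at log-odds zero (probability 0.5); the informative prior simply shifts the starting point of the same recursive update.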
APA, Harvard, Vancouver, ISO, and other styles
50

Kubes, Milena. "Use of prior knowledge in integration of information from technical materials." Thesis, McGill University, 1988. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=75962.

Full text
Abstract:
This study was designed to examine the ability to use prior knowledge in text comprehension and knowledge integration. The focus of the research was on the effects of different degrees of subjects' theoretical knowledge in the domain of biochemistry on their comprehension of written technical materials describing experimental procedures and results, and on their ability to integrate such new text-derived information with prior theoretical knowledge considered by experts to be relevant to the topic. Effects of cues on the accessibility and use of prior knowledge were also examined. Pre-test questions testing the extent of subjects' prior knowledge of photosynthesis, and a "cue article" specifically designed to prime subjects' relevant prior knowledge of photosynthesis, served as cues in the study. A theoretical model of experts' knowledge was developed from a semantic analysis of expert-produced texts. This "expert model" was used to evaluate the extent of students' theoretical knowledge of photosynthesis, and its accessibility while applying it to the experimental tasks. College students and university graduate students served as subjects in the study, permitting a contrast of groups varying in prior knowledge of and expertise in chemistry. Statistical analyses of data obtained from coding subjects' verbal protocols against text propositions and the expert model revealed that prior knowledge and comprehension contribute significantly to predicting knowledge integration, but they are not sufficient for this process to take place. It appears that qualitative aspects and specific characteristics of subjects' knowledge structure contribute to the process of integration, not simply the amount of accumulated knowledge. There was also evidence that there are specific inferential processes unique to knowledge integration that differentiate it from text comprehension. Cues manifested their effects on performance on comprehension tasks and integrative tasks only through their interactions with other factors. Furthermore, it was found that textual complexity placed specific constraints on students' performance: the application of textual information to the integrative tasks and students' ability to build conceptual frame representations based on text propositions depended on the complexity of the textual material. (Abstract shortened with permission of author.)
APA, Harvard, Vancouver, ISO, and other styles