
Dissertations / Theses on the topic 'A Priori and a Posteriori Knowledge'


Consult the top 50 dissertations / theses for your research on the topic 'A Priori and a Posteriori Knowledge.'


1

Zhou, Hao. "La chute du "triangle d'or" : apriorité, analyticité, nécessité : de l'équivalence à l'indépendance." Thesis, Paris 1, 2020. http://www.theses.fr/2020PA01H204.

Abstract:
The three concepts of apriority, analyticity and necessity, which have long been considered equivalent, constitute what could be called the "golden triangle" or "triangle of equivalence". Yet, the Kantian conception of the synthetic a priori and the Kripkean conceptions of the contingent a priori and the necessary a posteriori represent decisive criticisms against this triangle of equivalence. Inheriting critically these revolutionary thoughts from Kant and Kripke, a new epistemological schema entitled "subject-knowledge-world" is here systematically constructed. This schema renders the golden triangle totally obsolete. The concepts of apriority, analyticity and necessity become independent of each other. This leads to a new space of knowledge categories, resulting from the free intersecting of the three distinctions a priori-a posteriori, analytic-synthetic and necessary-contingent. These knowledge categories, some of which are new, apply to science exclusively and exhaustively.
2

Chan, Hiu Man. "Is there a distinction between a priori and a posteriori." Digital Commons @ Lingnan University, 2014. https://commons.ln.edu.hk/philo_etd/10.

Abstract:
This thesis studies whether there is a tenable distinction between a priori justification and a posteriori justification. My research considers three possible conceptions of a priori: (1) Justification Independent of Experience, (2) Mere Meaning Based Justification and (3) Justification by Rational Insight, and examines whether they can provide a sound and significant distinction between a priori and a posteriori. This thesis contains five chapters. Chapter 1 introduces the background knowledge of the a priori/a posteriori distinction. Chapter 2 analyzes the traditional conception of a priori, i.e. justification independent of experience, and considers whether the distinction based on it is tenable. Five approaches for defining “experience” are examined, but none of them succeed in providing a distinction between a priori and a posteriori. Chapter 3 focuses on the empiricist conception of the a priori, i.e. a priori as mere meaning based justification, and argues that the distinction based on it has a problem of classification. Chapter 4 concerns the rationalist conception of the a priori, i.e. a priori as justification by rational insight, and argues that neither the idea of justification by rational insight itself nor the distinctive features of rational insight could provide a distinction between a priori and a posteriori. Given that none of the current major accounts seem to work, we should not be optimistic about the potential for success in accounting for the distinction between a priori and a posteriori. In the last chapter, I will conclude the thesis and point out the implication of abandoning the a priori/a posteriori distinction: a need to reform our understanding of the nature of different sources of justification and knowledge.
3

Bourgeade, Tom. "Interprétabilité a priori et explicabilité a posteriori dans le traitement automatique des langues." Thesis, Toulouse 3, 2022. http://www.theses.fr/2022TOU30063.

Abstract:
With the advent of Transformer architectures in Natural Language Processing a few years ago, we have observed unprecedented progress in various text classification or generation tasks. However, the explosion in the number of parameters, and the complexity of these state-of-the-art black-box models, is making ever more apparent the now urgent need for transparency in machine learning approaches. The ability to explain, interpret, and understand algorithmic decisions will become paramount as computer models start becoming more and more present in our everyday lives. Using eXplainable AI (XAI) methods, we can for example diagnose dataset biases, spurious correlations which can ultimately taint the training process of models, leading them to learn undesirable shortcuts, which could lead to unfair, incomprehensible, or even risky algorithmic decisions. These failure modes of AI may ultimately erode the trust humans may have otherwise placed in beneficial applications. In this work, we more specifically explore two major aspects of XAI, in the context of Natural Language Processing tasks and models. In the first part, we approach the subject of intrinsic interpretability, which encompasses all methods which are inherently easy to produce explanations for. In particular, we focus on word embedding representations, which are an essential component of practically all NLP architectures, allowing these mathematical models to process human language in a more semantically-rich way. Unfortunately, many of the models which generate these representations produce them in a way which is not interpretable by humans. To address this problem, we experiment with the construction and usage of Interpretable Word Embedding models, which attempt to correct this issue by using constraints which enforce interpretability on these representations. We then make use of these, in a simple but effective novel setup, to attempt to detect lexical correlations, spurious or otherwise, in some popular NLP datasets. In the second part, we explore post-hoc explainability methods, which can target already trained models and attempt to extract various forms of explanations of their decisions. These can range from diagnosing which parts of an input were the most relevant to a particular decision, to generating adversarial examples, which are carefully crafted to help reveal weaknesses in a model. We explore a novel type of approach, in part allowed by the highly-performant but opaque recent Transformer architectures: instead of using a separate method to produce explanations of a model's decisions, we design and fine-tune an architecture which jointly learns to both perform its task, while also producing free-form Natural Language Explanations of its own outputs. We evaluate our approach on a large-scale dataset annotated with human explanations, and qualitatively judge some of our approach's machine-generated explanations.
4

Kroedel, Thomas. "A priori knowledge of modal truths." Thesis, University of Oxford, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.440706.

5

Midelfart, Herman. "Knowledge discovery from cDNA microarrays and a priori knowledge." Doctoral thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for datateknikk og informasjonsvitenskap, 2003. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-912.

Abstract:
Microarray technology has recently attracted a lot of attention. This technology can measure the behavior (i.e., RNA abundance) of thousands of genes simultaneously, while previous methods have only allowed measurements of single genes. By enabling studies on a genome-wide scale, microarray technology is currently revolutionizing biological research and creating a wide range of research opportunities. However, the technology generates a vast amount of data that cannot be handled manually. Computational analysis is thus a prerequisite for the success of this technology, and research and development of computational tools for microarray analysis are of great importance. This thesis develops supervised learning methods based on Rough Set Theory (RST) for analyzing microarray data together with prior knowledge. Two kinds of microarray studies are considered. The first is cancer studies where supervised learning may be used for predicting tumor subtypes and clinical parameters. We introduce a general RST approach for classification of tumor samples analyzed by microarrays. This includes a feature selection method for selecting genes that discriminate significantly between a set of classes. RST classifiers are then learned from the selected genes. The approach is applied to a data set of gastric tumors. Classifiers for six clinical parameters are developed and demonstrate that these parameters can be predicted from the expression profile of gastric tumors. Moreover, the performance of the feature selection method as well as several learning and discretization methods implemented in ROSETTA are examined and compared to the performance of linear and quadratic discrimination analysis. The classifiers are also biologically validated. One of the best classifiers is selected for each clinical parameter, and the connection between the genes used in these classifiers and the parameters is compared to the established knowledge in the biomedical literature. Many of these genes have no previously known connection to gastric cancer and provide interesting targets for further biological research. The second kind of study is prediction of gene function from expression profiles measured with microarrays. A serious problem in this case is that functional classes, which are assigned to genes, are typically organized in an ontology where the classes may be related to each other. One example is the Gene Ontology where the classes form a Directed Acyclic Graph (DAG). Standard learning methods such as RST assume, however, that the classes are unrelated, and cannot deal with this problem directly. This thesis gives a solution by introducing an extended RST framework and two novel algorithms for learning in a DAG. The DAG also constitutes a problem when a classifier is to be evaluated since standard performance measures such as accuracy or AUC do not recognize the structure of the DAG. Therefore, several new performance measures are introduced. The algorithms are first tested on a data set that was created from human fibroblast cells by means of microarrays. They are then applied to artificial data in order to obtain a better understanding of their behavior, and their weaknesses and strengths are identified.
6

El, Alaoui Lakhnati Linda. "Analyse d'erreur a priori et a posteriori pour des méthodes d'éléments finis mixtes non-conformes." Phd thesis, Ecole des Ponts ParisTech, 2005. http://pastel.archives-ouvertes.fr/pastel-00001267.

Abstract:
In this thesis, we are interested in the a priori and a posteriori error analysis of mixed and nonconforming finite element methods. We consider in particular the Darcy equations with highly variable permeability and the convection-diffusion-reaction equations in the convection-dominated regime. We discretize the Darcy equations by a nonconforming mixed finite element method of Petrov-Galerkin type called the box scheme. Residual and hierarchical a posteriori error estimation techniques lead to reliable and optimal a posteriori error estimators, independently of the fluctuations of the permeability. The theoretical results are validated numerically on several test cases exhibiting strong permeability contrasts. Finally, we show how the resulting error indicators can be used to generate adaptive meshes. We discretize the convection-diffusion-reaction equations by nonconforming finite elements. Two stabilization methods are studied: stabilization by subgrid viscosity, leading to a box scheme, and the face penalty method. We show that the two resulting schemes have the same convergence properties as conforming finite element approximations. Thanks to residual-based error estimation techniques, we obtain reliable and optimal a posteriori error estimators. Some of the error indicators are robust in the sense of Verfürth, i.e. the ratio of the constants appearing in the reliability and optimality inequalities blows up at most as the inverse of the Péclet number. The theoretical results are validated numerically, and the resulting a posteriori error indicators are used to generate adaptive meshes for problems exhibiting internal layers.
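For orientation, the reliable and efficient residual estimators mentioned in this abstract take, in their textbook form for a model diffusion problem $-\nabla\cdot(K\nabla u)=f$, the shape of element indicators such as (a generic sketch, not the estimators actually derived in the thesis):

$$\eta_T^2 = h_T^2\,\| f + \nabla\cdot(K\nabla u_h) \|_{0,T}^2 + \sum_{F\subset\partial T} h_F\,\| [\![ K\nabla u_h\cdot n_F ]\!] \|_{0,F}^2,$$

where $u_h$ is the discrete solution, $h_T$ the element diameter, and $[\![\cdot]\!]$ the jump across a face $F$. Reliability and efficiency then mean $\|u-u_h\| \le C_{\mathrm{rel}}\,(\sum_T \eta_T^2)^{1/2}$ and $\eta_T \le C_{\mathrm{eff}}\,\|u-u_h\|_{\omega_T} + \mathrm{osc}(f)$, and robustness in Verfürth's sense, as used above, asks that the ratio $C_{\mathrm{rel}}/C_{\mathrm{eff}}$ grow at most like the inverse of the Péclet number.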
7

Langlois, Xavier. "Adaptation a priori et a posteriori de maillage autour d'une interface dans des problèmes thermiques." Nancy 1, 1993. http://www.theses.fr/1993NAN10178.

Abstract:
The purpose of this work is the mathematical and numerical analysis of a meshing criterion in the neighborhood of the interface between two media for a heat transfer problem. We characterize its domain of validity for various numerical approximation methods and establish the link with the usual adaptive meshing techniques. A first analysis shows that the criterion is effective for certain one-dimensional three-point finite difference schemes in space, and that it translates into an improvement of the consistency error at the interface point. In the framework of the finite element method of degree one, we show that the criterion coincides with the classical one of equidistribution of the L² norm of the interpolation error. On the other hand, for other norms, in particular the energy norm, the analysis leads to different criteria. We then situate the criterion with respect to the a posteriori estimates made in adaptive meshing for estimators equivalent to the energy norm of the error. Numerical simulations illustrate all these aspects in one and two dimensions.
8

El, Alaoui Lakhnati Linda. "Analyse d'erreur a priori et a posteriori pour des méthodes d'éléments finis mixtes non-conformes." Marne-la-vallée, ENPC, 2005. https://pastel.archives-ouvertes.fr/pastel-00001267.

9

Lane, Ashley Alexander. "A critique of a priori moral knowledge." Thesis, Birkbeck (University of London), 2018. http://bbktheses.da.ulcc.ac.uk/368/.

Abstract:
Many ethicists believe that if it is possible to know a true moral proposition, it is always possible to ascertain a priori the normative content of that proposition. I argue that this is wrong; the only way to ascertain the normative content of some moral propositions requires the use of a posteriori information. I examine what I call determinate core moral propositions. I assume that some of these propositions are true and that actual agents are able to know them. Ethicists whom I call core-apriorists believe that it is always possible to ascertain a priori the normative content of such propositions. Core-aposteriorists believe that this is false, and that sometimes a posteriori information must be used to ascertain that normative content. I develop what I call the a posteriori strategy to show that core-apriorists are likely to be wrong, and so core-aposteriorists are correct. The strategy examines the details of particular core-apriorist theories and then shows that the theories have one of two problems: either some of the knowable determinate core moral propositions in the theories are not knowable a priori, or some of the propositions are not determinate, so they cannot perform the epistemological work required of them. Therefore, some knowable determinate core moral propositions are only knowable with the aid of a posteriori information. I apply the strategy to four different core-apriorist theories. The first is Henry Sidgwick's theory of self-evident moral axioms, as recently developed by Katarzyna de Lazari-Radek and Peter Singer. The second is Matthew Kramer's moral realism. I then examine Michael Smith's moral realism, and Frank Jackson and Philip Pettit's moral functionalism. The a posteriori strategy shows that there are serious difficulties with all four theories. I conclude that it provides good evidence that the core-apriorist is mistaken, and that the core-aposteriorist is right.
10

Al-Ghufli, Saeed M. A. O. "A reconsideration of constitutional review in the United Arab Emirates : 'a posteriori' or 'a priori' review?" Thesis, Aberystwyth University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.252170.

11

Vega, Carrizo Ricardo Andrés. "Simplificación de un Modelo de Planificación Minera con Agregación a Priori y a Posteriori para Codelco." Tesis, Universidad de Chile, 2008. http://repositorio.uchile.cl/handle/2250/103096.

Abstract:
This degree project falls within the theoretical framework of mine planning. One of the basic tools for mine planning is the block model, which consists of a discrete representation of a given ore deposit through an ordered sampling of the characteristics of the ground. Extraction planning for these blocks is understood here as determining the year in which each of them will be extracted over a given horizon. For this purpose, a Mixed Integer Programming problem was available which, when executed with a computational tool (in this case CPLEX), delivers the optimal extraction solution. In addition, an 'a priori' aggregation of the blocks into clusters, carried out in the degree project of Ximena Schultz, was available as the initial situation. The objective of this work is to reduce the model further, so that it can be used in Scenario Analysis with Stochastic Processes (which requires executing the model around 3,000 times) and also to create a corporate model for CODELCO. It is also desirable that the solution found have an error no greater than 10% with respect to the original, and in particular below 3% with respect to the aforementioned 'a priori' aggregation. The methodology used to carry out the simplification consisted of three stages. First, there was a stage of fixing binary variables, where many of them were set to 0 whenever it could be determined that taking the value 1 would violate some constraint. Then came a second stage where most of the data feeding the problem were compacted, replacing large matrices of zeros and ones with small files of ordered pairs. The last stage consisted of performing a second ('a posteriori') aggregation, forming groups of clusters in a manner similar to the 'a priori' aggregation, in order to reduce the number of variables using criteria different from the first. Finally, 4 different aggregations were successfully implemented (with a maximum of 2, 3, 4 or 5 clusters per group each), where the best of them shows an 80% reduction in execution time and a 3% error with respect to the model with 'a priori' aggregation. Considering that the 'a priori' aggregation showed a 74% reduction in execution time with a 3.62% error with respect to the original model, the final model achieves a cumulative 95% reduction in execution time with a 6.51% cumulative error, which satisfies the requirement that the error between the final model and the original be below 10%.
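To make the shape of the underlying optimization concrete, here is a minimal block-extraction scheduling program in the spirit of the model described above. All data are toy values invented for illustration, and the open-source PuLP library stands in for the CPLEX setup used in the thesis; the real model's variables and constraints are far richer.

import pulp

blocks = range(6)                              # hypothetical block ids
years = range(3)                               # planning horizon
value = {0: 5, 1: 8, 2: 3, 3: 7, 4: 4, 5: 6}   # toy block values
precede = [(0, 3), (1, 4), (2, 5)]             # (a, b): a must be mined no later than b
capacity = 2                                   # blocks extractable per year
rate = 0.10                                    # discount rate

m = pulp.LpProblem("block_scheduling", pulp.LpMaximize)
# x[b][t] = 1 if block b is extracted in year t
x = pulp.LpVariable.dicts("x", (blocks, years), cat="Binary")

# Objective: discounted value of the extraction schedule
m += pulp.lpSum(value[b] * x[b][t] / (1 + rate) ** t for b in blocks for t in years)

for b in blocks:                               # each block mined at most once
    m += pulp.lpSum(x[b][t] for t in years) <= 1
for t in years:                                # yearly extraction capacity
    m += pulp.lpSum(x[b][t] for b in blocks) <= capacity
for a, b in precede:                           # cumulative precedence: b cannot be mined before a
    for t in years:
        m += pulp.lpSum(x[b][s] for s in range(t + 1)) <= pulp.lpSum(x[a][s] for s in range(t + 1))

m.solve()
for b in blocks:
    for t in years:
        if x[b][t].value() > 0.5:
            print(f"block {b} -> year {t}")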
12

Kai, Li. "Neuroanatomical segmentation in MRI exploiting a priori knowledge /." view abstract or download file of text, 2007. http://proquest.umi.com/pqdweb?did=1400964181&sid=1&Fmt=2&clientId=11238&RQT=309&VName=PQD.

Abstract:
Thesis (Ph. D.)--University of Oregon, 2007. Typescript. Includes vita and abstract. Includes bibliographical references (leaves 148-158). Also available for download via the World Wide Web; free to University of Oregon users.
13

Lynch, Timothy J. "Aquinas, Lonergan, and the a priori." Thesis, Queen's University Belfast, 2001. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.343058.

14

Cozzio-Büeler, Enrico Albert. "The design of neural networks using a priori knowledge /." Zürich, 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=10991.

15

Young, Benedict. "Naturalising the 'a priori' : reliabilism and experience-independent knowledge." Thesis, University of Edinburgh, 2000. http://hdl.handle.net/1842/26064.

Abstract:
The thesis defends the view that the concept of a priori knowledge can be naturalised without sacrificing the core aspects of the traditional conception of apriority. I proceed by arguing for three related claims. The first claim is that the adoption of naturalism in philosophy is not automatically inconsistent with belief in the existence of a priori knowledge. A widespread view to the contrary has come about through the joint influence of Quine and the logical empiricists. I hold that by rejecting a key assumption made by the logical empiricists (the assumption that apriority can be explained only by appeal to the concept of analyticity), we can develop an account of naturalism in philosophy which does not automatically rule out the possibility of a priori knowledge, and which retains Quine's proposals that philosophy be seen as continuous with the enterprise of natural science, and that the theory of knowledge be developed within the conceptual framework of psychology. The first attempt to provide a theory of a priori knowledge within such a framework was made by Philip Kitcher. Kitcher's strategy involves giving an account of the idea of "experience-independence" independently of the theory of knowledge in general (he assumes that an appropriate account of the latter will be reliabilist). Later authors in the tradition Kitcher inaugurated have followed him on this, while criticising him for adopting too strong a notion of experience-independence. The second claim I make is a qualified agreement with this: it is that only a weak notion of experience-independence will give a viable account of a priori knowledge, but that the reasons why this is so have been obscured by Kitcher's segregation of the issues. Strong reasons for adopting a weak notion are provided by consideration of the theory of knowledge, but these same reasons also highlight severe problems for the project of providing a naturalistic theory of knowledge in general. The third claim is that a plausible naturalistic theory of knowledge in general can be given, and that it provides an appropriate framework within which to give an account of minimally experience-independent knowledge.
16

Riaz, Azba. "Une nouvelle formulation Galerkin discontinue pour équations de Maxwell en temps, a priori et a posteriori erreur estimation." Thesis, Cergy-Pontoise, 2016. http://www.theses.fr/2016CERG0790/document.

Abstract:
In the first part of this thesis, we have considered the time-dependent Maxwell's equations in second-order form and constructed a discontinuous Galerkin (DG) formulation. We have established a priori error estimates for this formulation and carried out the numerical analysis to confirm our theoretical results. In the second part of this thesis, we have established a posteriori error estimates of this formulation for both the semi-discrete and the fully discrete case. In the third part of the thesis, we have considered the time-harmonic Maxwell's equations and have developed a mixed discontinuous Galerkin formulation. We showed the well-posedness of this formulation and have established a posteriori error estimates.
17

Chan, Tung 1972. "The complexity and a priori knowledge of learning from examples." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/11464.

18

Ndour, Souleymane. "L’articulation des contrôles a priori et a posteriori en contentieux constitutionnel. L’expérience française à la lumière de droits étrangers." Electronic Thesis or Diss., Reims, 2024. http://www.theses.fr/2024REIMD006.

Abstract:
A legal system that initially has only one mechanism of oversight can, over the course of its evolution, introduce another to complement this model. This especially occurs when it exhibits significant shortcomings. Combining two types of constitutional review of laws, one a priori and the other a posteriori, within the same system is not straightforward, as their coordination is not self-evident. To successfully achieve an effective combination of these reviews, it is necessary to establish mechanisms that promote their harmonization, specifically by better defining the legal authority of the judgments rendered by each type of review. This ensures a balance and effective interaction between them. Thus, the duality of reviews helps to more effectively protect the legal order against violations arising from unconstitutionalities. The purpose of a priori review is to prevent the entry into force of legislative provisions that are contrary to the Constitution. If such provisions escape the constitutional judge's scrutiny, or if a law becomes unconstitutional in practice, the a posteriori review then serves to prevent its continued application. The coexistence of these two types of review is, therefore, an effective means of ensuring better compliance with constitutionality. The conditions for a viable combination of these reviews must be defined by public authorities before the constitutional judge, responsible for their implementation, ensures their effectiveness. The judge plays an important role, as the success or failure of this coordination depends on them. In France, the constitutional judge has facilitated a harmonious coordination of a priori and a posteriori reviews, where they complement each other smoothly, without one overshadowing the other. Conversely, in Spain, the Constitutional Court "sabotaged" the functioning of a priori review, leading to its abolition. Comparatively, the French model of combination stands out as an exception.
19

Delisle, Sylvain. "Text processing without a priori domain knowledge: Semi-automatic linguistic analysis for incremental knowledge acquisition." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6574.

Abstract:
Technical texts are an invaluable source of the domain-specific knowledge which plays a crucial role in advanced knowledge-based systems today. However, acquiring such knowledge has always been a major difficulty in the construction of these systems--this critical obstacle is sometimes referred to as the "knowledge acquisition bottleneck". In order to lessen the burden on the knowledge engineer's shoulders, several approaches have been proposed in the literature. A few of these suggest processing texts pertaining to the domain of interest in order to extract the knowledge they contain and thus facilitate the domain modelling. We herein propose a new approach to knowledge acquisition from texts; this approach is comprised of a new methodology and computational framework for the implementation of a linguistic processor which represents the central component of a system for the acquisition of knowledge from text. The system, named TANKA, is not given the complete domain model beforehand. It is designed to process technical texts in order to incrementally build a knowledge base containing a conceptual model of the domain. TANKA is an intelligent assistant to the knowledge engineer; when it cannot proceed entirely on its own, the user is asked to collaborate. In the process, the system acquires knowledge from text; it can be said to learn about the domain. The originality of the research is due mainly to the fact that we do not assume significant a priori domain-specific (semantic) knowledge: this assumption represents a severe constraint on the natural language processor. The only external elements of knowledge we consider in the proposed framework are "off-the-shelf" publicly available and domain-independent repositories, such as a basic dictionary containing surface syntactic information (i.e. The Collins) and a lexical database (i.e. WordNet). Other components of the proposed framework are general-purpose. The parser (DIPETT) is domain-independent with a large coverage of English: our approach relies on full syntactic analysis. The Case-based semantic analyzer (HAIKU) is semi-automatic: it interacts with the user in order to get his[1] approval of the analysis it has just proposed and negotiates refined elements of the analysis when necessary. The combined processing of DIPETT and HAIKU allows TANKA, the encompassing system[2], to acquire knowledge, based on the conceptual elements produced by HAIKU. The thesis also describes experiments that have been conducted on a Prolog implementation of both of these text analysis components. The approach presented in the thesis is general and in principle portable to any domain in which suitable technical texts are available. The thesis presents theoretical considerations as well as engineering aspects of the many facets of this research work. We also provide a detailed discussion of many future work items that could be added to what has already been accomplished in order to make the framework even more productive. (Abstract shortened by UMI.)
[1] In order to lighten the text, the terms 'he' and 'his' have been used generically to refer equally to persons of either sex. No discrimination is either implied or intended.
[2] DIPETT and HAIKU constitute a conceptual analyzer that can be used independently of TANKA or within a different encompassing system.
20

Boulaajine, Lahcen. "Méthode des éléments finis mixte duale pour les problèmes de l'élasticité et de l'élastodynamique: analyse d'erreur à priori et à posteriori." Phd thesis, Université de Valenciennes et du Hainaut-Cambresis, 2006. http://tel.archives-ouvertes.fr/tel-00136422.

Abstract:
In this work, we study mesh refinement for dual mixed finite element methods for two types of problems: the first concerns the linear elasticity problem and the second the elastodynamic problem. For these two types of problems and in nonsmooth domains, the mixed finite element methods analyzed so far are those concerning "classical" mixed methods. Here, we analyze the dual mixed formulation for both the linear elasticity and the elastodynamic problems. For the elasticity problem, we are first concerned with an a priori error analysis when using an approximation by the stabilized BDM_1 finite element. In order to derive optimal a priori error estimates, we establish mesh refinement rules. We then carry out an a posteriori error analysis on a simply or multiply connected domain. In fact, we establish a reliable and efficient residual error estimator. This estimator is then used in an adaptive algorithm for automatic mesh refinement. For the elastodynamic problem, we carry out an a priori error analysis using the same finite element as for the elasticity problem, with a dual mixed formulation for the discretization of the spatial variables. For the time discretization, we study both the explicit and implicit Newmark schemes. With appropriate mesh refinement rules, we derive optimal error estimates for both numerical schemes.
21

Boulaajine, Lahcen. "Méthode des éléments finis mixte duale pour les problèmes de l'élasticité et de l'élastodynamique : analyse d'erreur a priori et a posteriori." Valenciennes, 2006. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/e616fd07-9eb3-4e76-90f4-bdba4f8aba34.

Abstract:
In this work, we study the refinement of grids for the dual mixed finite element method for two types of problems: the first concerns the linear elasticity problem and the second the linear elastodynamic problem. Here, we analyze the dual mixed formulation for both the linear elasticity and the linear elastodynamic problems. For the elasticity problem, we are concerned firstly with an a priori error analysis when using a finite element approximation by the stabilized BDM element. Then, we make an a posteriori error analysis for the dual mixed finite element method for both a simply and a multiply connected domain. In fact, we establish a residual-based, reliable and efficient error estimator for the dual mixed finite element method. This estimator is then used in an adaptive algorithm for automatic mesh refinement. For the elastodynamic problem, we make an a priori error analysis when using the same finite element as for the elasticity problem, using a dual mixed formulation for the discretization of the spatial variables. By adequate refinement rules on the regular family of triangulations, we derive optimal a priori error estimates for the explicit-in-time and implicit-in-time numerical schemes.
22

Oudin, Fabienne. "Schémas volumes finis pour problèmes elliptiques : analyse a priori et a posteriori par éléments finis mixtes, méthode de décomposition de domaines." Lyon 1, 1995. http://www.theses.fr/1995LYO10303.

Abstract:
In this work, we are interested in the relationships between finite volume methods and mixed finite element methods for the discretization of elliptic problems. The interest is to use a variational-type theoretical framework making it possible to obtain, for a class of finite volume schemes, a priori and a posteriori error bounds. An asymptotically exact a posteriori error estimator is obtained by exploiting the links between finite volume, mixed finite element and nonconforming finite element methods, and an adaptive domain decomposition method is developed for finite volume methods.
23

Melis, Giacomo. "The epistemic defeat of a priori and empirical certainties : a comparison." Thesis, University of Aberdeen, 2014. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?pid=225946.

Abstract:
I explore the traditional contention that a priori epistemic warrants enjoy some sort of higher epistemic security than empirical warrants. By focusing on warrants that might plausibly be called 'basic', and by availing myself of an original taxonomy of epistemic defeaters, I defend a claim in the vicinity of the traditional contention. By discussing some examples, I argue that basic a priori warrants are immune to some sort of empirical defeaters, which I describe in detail. An important by-product of my investigation is a novel theory of epistemic defeaters, according to which only agents able to engage in higher-order epistemic thinking can suffer undermining defeat, while wholly unreflective agents can, in principle, suffer overriding defeat.
24

Kuntjoro, Wahyu. "Expert System for Structural Optimization Exploiting Past Experience and A-priori Knowledge." Thesis, Cranfield University, 1994. http://hdl.handle.net/1826/4534.

Abstract:
The availability of comprehensive Structural Optimization Systems in the market is allowing designers direct access to software tools previously the domain of the specialist. The use of Structural Optimization is particularly troublesome, requiring knowledge of finite element analysis, numerical optimization algorithms, and the overall design environment. The subject of the research is the application of Expert System methodologies to support non-specialists when using a Structural Optimization System. The specific target is to produce an Expert System as an adviser for a working structural optimization system. Three types of knowledge are required to use optimization systems effectively: that relating to setting up the structural optimization problem, which is based on logical deduction; past experience; and run-time and results-interpretation knowledge. A knowledge base based on the above is set up, and reasoning mechanisms incorporating case-based and rule-based reasoning, the theory of certainty, and an object-oriented approach are developed. The Expert System described here concentrates on the optimization formulation aspects. It is able to set up an optimization run for the user and monitor the run-time performance. In this second mode the system is able to decide if an optimization run is likely to converge to a solution and advise the user accordingly. The ideas and Expert System techniques presented in this thesis have been implemented in the development of a prototype system written in C++. The prototype has been extended through the development of a user interface which is based on XView.
25

Haase, Kristine [Verfasser]. "Maritime Augmented Reality with a priori knowledge of sea charts / Kristine Haase." Kiel : Universitätsbibliothek Kiel, 2013. http://d-nb.info/1034073729/34.

26

Paraskevopoulos, Vasileios. "Design of optimal neural network control strategies with minimal a priori knowledge." Thesis, University of Sussex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324189.

27

Christiansen, Jesse G. "Apriority in naturalized epistemology: investigation into a modern defense." Georgia State University, 2007. http://etd.gsu.edu/theses/available/etd-11272007-193136/.

Abstract:
Thesis (M.A.)--Georgia State University, 2007. Title from file title page. George W. Rainbolt, committee chair; Jessica Berry, Steve Jacobson, committee members. Electronic text (43 p.): digital, PDF file. Description based on contents viewed Jan 18, 2008. Includes bibliographical references (p. 43).
28

Ebert, Philip A. "The context principle and implicit definitions : towards an account of our a priori knowledge of arithmetic." Thesis, University of St Andrews, 2005. http://hdl.handle.net/10023/14916.

Abstract:
This thesis is concerned with explaining how a subject can acquire a priori knowledge of arithmetic. Every account for arithmetical, and in general mathematical knowledge faces Benacerraf's well-known challenge, i.e. how to reconcile the truths of mathematics with what can be known by ordinary human thinkers. I suggest four requirements that jointly make up this challenge and discuss and reject four distinct solutions to it. This will motivate a broadly Fregean approach to our knowledge of arithmetic and mathematics in general. Pursuing this strategy appeals to the context principle which, it is proposed, underwrites a form of Platonism and explains how reference to and object-directed thought about abstract entities is, in principle, possible. I discuss this principle and defend it against different criticisms as put forth in recent literature. Moreover, I will offer a general framework for implicit definitions by means of which - without an appeal to a faculty of intuition or purely pragmatic considerations - a priori and non-inferential knowledge of basic mathematical principles can be acquired. In the course of this discussion, I will argue against various types of opposition to this general approach. Also, I will highlight crucial shortcomings in the explanation of how implicit definitions may underwrite a priori knowledge of basic principles in broadly similar conceptions. In the final part, I will offer a general account of how non-inferential mathematical knowledge resulting from implicit definitions is best conceived which avoids these shortcomings.
29

Bhowal, Nabanita. "Kant's notion of synthetic a priori judgement and some later developments on it." Thesis, University of North Bengal, 2019. http://ir.nbu.ac.in/handle/123456789/4042.

30

Meunier, Sébastien. "Analyse d'erreur a posteriori pour les couplages Hydro-Mécaniques et mise en œuvre dans Code Aster." Phd thesis, Ecole des Ponts ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003314.

31

Gning, Lucien Diégane. "Utilisation des mélanges de lois de Poisson et de lois binomiales négatives pour établir des tarifs a priori et a posteriori en assurance non-vie." Paris 6, 2011. http://www.theses.fr/2011PA066304.

32

Bauer, Patrick Marcel [Verfasser]. "Artificial Bandwidth Extension of Telephone Speech Signals Using Phonetic A Priori Knowledge / Patrick Marcel Bauer." Aachen : Shaker, 2017. http://d-nb.info/1138178519/34.

33

Basoukos, Antonios. "Science, practice, and justification : the a priori revisited." Thesis, University of Exeter, 2014. http://hdl.handle.net/10871/17358.

Abstract:
History is descriptive. Epistemology is conceived as normative. It appears, then, that a historical approach to epistemology, like historical epistemology, might not be epistemically normative. In our context here, epistemology is not a systematic theory of knowledge, truth, or justification. In this thesis I approach epistemic justification from the vantage point of the practice of science. Practice is about reasoning. Reasoning, conceived as the human propensity to order perceptions, beliefs, memories, etc., in ways that permit us to have understanding, is not only about thinking. Reasoning has to do with our actions, too: In the ordering of reasoning we take into account the desires of ourselves and others. Reasoning has to do with tinkering with stuff, physical or abstract. Practice is primarily about skills. Practices are not mere groping. They have a form. Performing according to a practice is an activity with a lot of plasticity. The skilled performer retains the form of the practice in many different situations. Finally, practices are not static in time. Practices develop. People try new things, some of which may work out, others not. The technology involved in how to go about doing things in a particular practice changes, and the concepts concerning understanding what one is doing also may change. This is the point where history enters the picture. In this thesis I explore the interactions between history, reasoning, and skills from the viewpoint of a particular type of epistemic justification: a priori justification. An a priori justified proposition is a proposition which is evident independent of experience. Such propositions are self-evident. We will make sense of a priori justification in a context of regarding science as practice, so that we will be able to demonstrate that the latter accommodates the normative character of science.
34

Meidner, Dominik. "Adaptive space-time finite element methods for optimization problems governed by nonlinear parabolic systems." [S.l. : s.n.], 2007. http://nbn-resolving.de/urn:nbn:de:bsz:16-opus-82723.

35

Campbell, Douglas Ian. "A Theory of Consciousness." Diss., The University of Arizona, 2010. http://hdl.handle.net/10150/195372.

Abstract:
It is shown that there is an unconditional requirement on rational beings to adopt "reflexive" beliefs, these being beliefs with a very particular sort of self-referential structure. It is shown that whoever adopts such beliefs will thereby adopt beliefs that imply that a certain proposition, ᴪ, is true. From the fact that there is this unconditional requirement on rational beings to adopt beliefs that imply ᴪ, it is concluded that ᴪ is knowable a priori. ᴪ is a proposition that says, in effect, that one's own point of view is a point in space and time that is the point of view of some being who has reflexive beliefs. It is argued that this information that is contained in ᴪ boils down to the information that one's point of view is located at a point in the world at which there is something that is "conscious" in a certain natural and philosophically interesting sense of that word. In other words, a theory of consciousness is defended according to which an entity is conscious if and only if it has reflexive beliefs.
36

Los, Artem. "Modelling an individual's selection of a partner in a speed-dating experiment using a priori knowledge." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-208668.

Abstract:
Speed dating is a relatively new concept that allows researchers to study various theories related to mate selection. A problem with current research is that it focuses on finding general trends and relationships between the attributes. This report explores the use of machine learning techniques to predict whether an individual will want to meet his partner again after the 4-minute meeting based on their attributes that were known before they met. We will examine whether Random Forest or Extremely Randomized Trees perform better than Support Vector Machines for both limited attributes (describing appearance only) and extended attributes (including answers to some questions about their preferences). It is shown that Random Forests perform better than Support Vector Machines and that extended attributes give better results for both classifiers. Furthermore, it is observed that the more information is known about the individuals, the better a classifier performs. Clubbing preferences of the partner stand out as an important attribute, followed by the same preference for the individual.
37

Princeton, Judith. "Pratiques innovantes d'exploitation des réseaux routiers en lien avec une mobilité durable : une nouvelle approche de l'évaluation." Thesis, Paris Est, 2011. http://www.theses.fr/2011PEST1152/document.

Full text
Abstract:
Traffic management is facing the new issues raised by the concept of sustainable development. The objective is no longer only to guarantee acceptable travel times over the networks. Energy consumption, as well as the associated greenhouse gas and pollutant emissions, must be reduced for a better quality of life for current and future generations. Standards in road safety have also been reinforced and aim at eliminating road deaths. Traffic operators therefore resort to various, often innovative, strategies. Nevertheless, if decision-makers have greater possibilities to implement their programmes, they are also committed to assessing their performance at different stages.
This doctoral thesis analyses the new strategies in motorway network management by identifying their respective domains of application as well as their potential and real impacts. Limitations of existing a priori and a posteriori evaluation methods are highlighted and a new approach is proposed. It associates the three main criteria of sustainable mobility with a single concept: the level of service, which is widely used by network operators. The methodology is validated on several operations. Besides, based on results obtained from the various lane management operations implemented across Europe, the thesis proposes a tool to help choose the appropriate strategy according to the motorway layout and congestion level. The tool is presented in the form of a catalogue of typical cases for the Ile-de-France motorway network. The new evaluation approach proposed in this thesis may be easily integrated into available traffic simulation tools. Hence, the impacts of a traffic management operation on congestion, safety and the environment may be obtained as output from those simulators in the framework of an a priori evaluation. This integration is also possible in traffic management centre systems, for a posteriori evaluations. Finally, the thesis identifies potential subjects for future research. Firstly, accident severity could be considered in the proposed evaluation approach, which for now takes all injury accidents together, due to a lack of data. Likewise, only four managed lane strategies are included in the catalogue, which could be extended to all existing traffic management operations through the same methodology described in the thesis.
APA, Harvard, Vancouver, ISO, and other styles
38

Pollock, William J. "The epistemology of necessity." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/4053.

Full text
Abstract:
The thesis examines the direct reference theory of proper names and natural kind terms as expounded by Saul Kripke, Hilary Putnam and others, and finds that it has not succeeded in replacing some kind of description theory of the reference of such terms - although it does concede that the traditional Fregean theory is not quite correct. It is argued that the direct reference theory is mistaken on several counts. First of all, it is question-begging. Secondly, it is guilty of a 'use/mention' confusion. And thirdly, and most importantly, it fails to deal with the notion of understanding. The notion of understanding is crucial to the present thesis - specifically, what is understood by a proper name or natural kind term. It is concluded that sense (expressed in the form of descriptions) is at least necessary for reference, which makes a significant difference to Kripke's claim that there are necessary a posteriori truths as well as contingent a priori truths. It is also argued that sense could be sufficient for reference, if it is accepted that it is speakers who effect reference. In this sense, sense determines reference. The thesis therefore not only argues against the account of reference given by the direct reference theorists; it also gives an account of how proper names and natural kind terms actually do function in natural language. As far as the epistemology of necessity is concerned, the thesis concludes that Kripke (along with many others) has not succeeded in establishing the existence of the necessary a posteriori or the contingent a priori from the theory of direct reference. Whether such truths can be established by some other means, or in principle, is not the concern of the thesis; although the point is made that, if a certain view of sense is accepted, then questions of necessity and apriority seem inappropriate.
APA, Harvard, Vancouver, ISO, and other styles
39

Serin, Ismail. "The Quiddity Of Knowledge In Kant's." PhD thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605758/index.pdf.

Full text
Abstract:
In this thesis the quiddity of knowledge in Kant's critical philosophy has been investigated within the historical context of the problem. In order to illustrate the origins of the subject-matter of the dissertation, the historical background of Kant's views on the theory of knowledge has been researched too. As a result of this research, it is concluded that Kant did not invent a new philosophical problem, but tried to develop a decisive solution to one of the oldest questions in the history of philosophy, namely "How is synthetic a priori knowledge possible?" The theoretical dimension of Kant's theory of knowledge is reserved for this purpose. The above-mentioned question is new neither for us nor for Kant, but his answer and his philosophical stand have a clearly revolutionary meaning both for us and for him. This thesis claims that his standpoint not only leads to an original epoch for the theory of knowledge, but creates a serious possibility for a new ontology explicating the quiddity of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
40

Paquin-Pelletier, Alexandre. "Du pancanadianisme a priori au pancanadianisme a posteriori: Une exploration de la nature structurante des institutions centrales sur le discours des intellectuels publics pancanadianistes de 1960 a 2007." Thesis, University of Ottawa (Canada), 2009. http://hdl.handle.net/10393/28305.

Full text
Abstract:
Pan-Canadianism has been at the heart of political debates in Canada, particularly since the patriation of the Constitution in 1982. According to a good number of classical definitions, pan-Canadian nationalism is characterized by "centralism" and "anti-Americanism" (see Bashevkin 1991). Moreover, since 1947, citizenship -- and more particularly social citizenship -- has been the cornerstone of official pan-Canadianist discourse (Bourque and Duchastel 1996). These central components of pan-Canadianism nevertheless seem in tension with the institutional climate of recent years. The Canadian Charter of Rights and Freedoms, the reconfiguration of the welfare state and the adoption of North American free trade call into question the foundations of pan-Canadianist discourse, from the nature of citizenship to the very possibility of an anti-American discourse. What, then, have been the effects of these institutional transformations on pan-Canadianist identity discourse? The objective of the thesis is to analyse the structuring character of these changes on the pan-Canadianist discourse of public intellectuals and then to weigh the normative implications of these transformations for internal recognition. The hypothesis is that the transformations of the federal state brought about two main reconfigurations. First, the constitutional reforms of the Trudeau governments favoured the passage from an a priori pan-Canadianism to an a posteriori pan-Canadianism. Second, the neoliberal reforms of Mulroney and Chrétien favoured a transformation within the very structure of a priori and a posteriori pan-Canadianisms. After the literature review and the presentation of the theoretical and methodological framework of the thesis in Chapter 1, the hypothesis is tested in three steps. Chapter 2 defines and situates pan-Canadianism within the "Canadian conversation" (Webber 1994). Rather than being structured around three main narratives (One Canada, Two Canadas, and the Cultural Mosaic), the Canadian conversation reveals a multitude of national narratives organized around two main ideas: that of an a priori foundation (in which contemporary experience is guided by Canadian identity) and that of an a posteriori foundation (in which contemporary experience creates Canadian identity). Chapter 3 traces the evolution of official discourse, first from an Anglo-Saxon Canadianism to an a priori pan-Canadianism after the Second World War, and then to an a posteriori pan-Canadianism under Trudeau. Since Mulroney, a posteriori pan-Canadianism has been deeply called into question and has tended to take on an economic face. Chapter 4 presents the results of the analysis of a corpus of pan-Canadianist texts and observes that the discourse of certain public intellectuals, like the official discourse, moves from an a priori to an a posteriori pan-Canadianism. The analytical grid, however, makes it possible to refine the analysis by observing a second transition in the discourse: the passage from a priori and a posteriori pan-Canadianism to what may be called a priori 2 and a posteriori 2 pan-Canadianisms.
In these pan-Canadianisms, institutions and anti-Americanism as means of identity closure give way to a new means of closure, namely the idea of an "internal complexity", in which the "complex self" henceforth constitutes the main definition of pan-Canadian identity. The thesis attributes these transformations in part to the new institutional context in place. Consequently, it appears that the recognition of internal groups will become increasingly difficult as this internal blurring continues.
APA, Harvard, Vancouver, ISO, and other styles
41

Denaxas, Spiridon Christoforos. "A novel framework for integrating a priori domain knowledge into traditional data analysis in the context of bioinformatics." Thesis, University of Manchester, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.492124.

Full text
Abstract:
Recent advances in experimental technology have given scientists the ability to perform large-scale multidimensional experiments involving large data sets. As a direct implication, the amount of data being generated is rising exponentially. However, in order to fully scrutinize and comprehend the results obtained from traditional data analysis approaches, it has been proven that a priori domain knowledge must be taken into consideration. Infusing existing knowledge into data analysis operations, however, is a non-trivial task which presents a number of challenges. This research is concerned with utilizing a structured ontology representing the individual elements composing such large data sets for assessing the results obtained. More specifically, statistical natural language processing and information retrieval methodologies are used in order to provide a seamless integration of existing domain knowledge in the context of cluster analysis experiments on gene product expression patterns. The aim of this research is to produce a framework for integrating a priori domain knowledge into traditional data analysis approaches. This is done in the context of DNA microarrays and gene expression experiments. The value added by the framework to the existing body of research is twofold. First, the framework provides a figure of merit score for assessing and quantifying the biological relatedness between individual gene products. Second, it proposes a mechanism for evaluating the results of data clustering algorithms from a biological point of view.
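The "figure of merit" idea lends itself to a toy illustration; the cosine-similarity scoring and the GO-style identifiers below are invented for the example, not the framework the thesis actually builds (which uses statistical NLP and information retrieval methods over a structured ontology).

# Toy figure-of-merit: cosine similarity between two gene products'
# bags of ontology annotation terms.
from collections import Counter
from math import sqrt

def figure_of_merit(terms_a, terms_b):
    """Cosine similarity between two bags of ontology term identifiers."""
    a, b = Counter(terms_a), Counter(terms_b)
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical annotations for two gene products (identifiers invented).
gene_1 = ["GO:0006915", "GO:0008219", "GO:0042981"]
gene_2 = ["GO:0006915", "GO:0042981", "GO:0016049"]
print(f"biological relatedness: {figure_of_merit(gene_1, gene_2):.3f}")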
APA, Harvard, Vancouver, ISO, and other styles
42

Abruzzo, Vincent G. "Content and Contrastive Self-Knowledge." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/philosophy_theses/108.

Full text
Abstract:
It is widely believed that we have immediate, introspective access to the content of our own thoughts. This access is assumed to be privileged in a way that our access to the thought content of others is not. It is also widely believed that, in many cases, thought content is individuated according to properties that are external to the thinker's head. I will refer to these theses as privileged access and content externalism, respectively. Though both are widely held to be true, various arguments have been put forth to the effect that they are incompatible. This charge of incompatibilism has been met with a variety of compatibilist responses, each of which has received its own share of criticism. In this thesis, I will argue that a contrastive account of self-knowledge is a novel compatibilist response that shows significant promise.
APA, Harvard, Vancouver, ISO, and other styles
43

Sebyhed, Hugo, and Emma Gunnarsson. "The Impotency of Post Hoc Power." Thesis, Uppsala universitet, Statistiska institutionen, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-433274.

Full text
Abstract:
In this thesis, we hope to dispel some confusion regarding so-called post hoc power, i.e. power computed under the assumption that the estimated sample effect size is equal to the population effect size. Previous research has shown that post hoc power is a function of the p-value, making it redundant as a tool of analysis. We go further, arguing that it should never be reported, since it is a source of confusion and of potentially harmful incentives. We also conduct a Monte Carlo simulation to illustrate our points of view; its results confirm the previous research.
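The redundancy claim can be made concrete in a few lines; this is a minimal sketch using a two-sided one-sample z-test with known variance, not necessarily the test or the simulation design used in the thesis.

# Post hoc power is a deterministic transformation of the p-value:
# plugging the observed effect in as the "true" effect makes the power
# depend on the data only through |z|, i.e. only through p.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, sigma, alpha = 30, 1.0, 0.05
z_crit = norm.ppf(1 - alpha / 2)

for _ in range(5):
    x = rng.normal(0.3, sigma, n)      # arbitrary true effect of 0.3
    z = x.mean() * np.sqrt(n) / sigma  # observed z statistic
    p = 2 * norm.sf(abs(z))            # two-sided p-value
    post_hoc = norm.sf(z_crit - abs(z)) + norm.cdf(-z_crit - abs(z))
    z_from_p = norm.isf(p / 2)         # recover |z| from p alone
    from_p = norm.sf(z_crit - z_from_p) + norm.cdf(-z_crit - z_from_p)
    print(f"p = {p:.4f}   post hoc power = {post_hoc:.4f}   from p alone = {from_p:.4f}")

The two rightmost columns agree exactly on every draw, which is the sense in which post hoc power adds nothing beyond the p-value.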
APA, Harvard, Vancouver, ISO, and other styles
44

Barros, Cardoso da Silva André [author], and A. [academic supervisor] Moreira. "A Priori Knowledge-Based Post-Doppler STAP for Traffic Monitoring with Airborne Radar / André Barros Cardoso da Silva ; Betreuer: A. Moreira." Karlsruhe : KIT-Bibliothek, 2019. http://d-nb.info/1199458635/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kaiser, Julius A., and Fredrick W. Herold. "ANTENNA CONTROL FOR TT&C ANTENNA SYSTEMS." International Foundation for Telemetering, 2000. http://hdl.handle.net/10150/608253.

Full text
Abstract:
International Telemetering Conference Proceedings / October 23-26, 2000 / Town & Country Hotel and Conference Center, San Diego, California
A thinned array sensor system develops error voltages for steering dish antennas from signals arriving over a broad range of angles, thereby eliminating the need for a priori knowledge of signal location.
APA, Harvard, Vancouver, ISO, and other styles
46

Meunier, Sébastien. "Analyse d'erreur a postériori pour les couplages hydro-mécaniques et mise en oeuvre dans code_aster." Marne-la-vallée, ENPC, 2007. http://www.theses.fr/2007ENPC0717.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Lapine, Lewis A. Commander. "Analytical calibration of the airborne photogrammetric system using a priori knowledge of the exposure station obtained from kinematic global positioning system techniques /." The Ohio State University, 1990. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487685204967272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Özel, Ali. "Simulation aux grandes échelles des lits fluidisés circulants gaz-particule." Thesis, Toulouse, INPT, 2011. http://www.theses.fr/2011INPT0090/document.

Full text
Abstract:
Eulerian two-fluid approaches are generally used to simulate gas-solid flows in industrial circulating fluidized beds. Because of the limitation of computational resources, simulations of large vessels are usually performed on too coarse a grid. Coarse-grid simulations cannot resolve the fine flow scales which can play an important role in the dynamic behaviour of the beds. In particular, cancelling out the particle segregation effect of small scales leads to an inadequate modelling of the mean interfacial momentum transfer between phases and, as a secondary effect, of the particulate shear stresses. An appropriate modelling accounting for the influences of unresolved structures therefore has to be proposed for coarse-grid simulations. For this purpose, computational grids are refined to obtain a mesh-independent result, where statistical quantities do not change with further mesh refinement, for a 3-D periodic circulating fluidized bed. The 3-D periodic circulating fluidized bed is a simple academic configuration in which a gas-solid flow of A-type particles is periodically driven along the direction opposite to gravity. The particulate momentum and agitation equations are filtered by volume averaging, and the importance of the additional terms due to the averaging procedure is investigated by budget analyses using the mesh-independent result. Results show that the filtered momentum equations of the phases can be computed in coarse-grid simulations, but the sub-grid drift velocity, due to the sub-grid correlation between the local fluid velocity and the local particle volume fraction, and the particulate sub-grid shear stresses must be taken into account. In this study, we propose functional and structural models for the sub-grid drift velocity, written in terms of the difference between the gas velocity-solid volume fraction correlation and the product of the filtered gas velocity with the filtered solid volume fraction. Particulate sub-grid shear stresses are closed by models proposed for single-phase turbulent flows. The models' predictive capabilities are investigated by a priori tests, and the models are validated, by way of a posteriori tests, against coarse-grid simulations of 3-D periodic circulating and dense fluidized beds and against experimental data from an industrial-scale circulating fluidized bed.
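In standard filtered two-fluid notation, the drift-velocity closure described above can be sketched as follows; the overbar filter and the symbols $\alpha_p$ (solid volume fraction) and $u_g$ (gas velocity) are our notational assumptions, not necessarily the thesis's:

% Sketch: the drift velocity V_d is defined by the correlation minus the
% product of the filtered fields, per the abstract.
\begin{equation}
  \bar{\alpha}_p \, V_d \;=\; \overline{\alpha_p \, u_g} \;-\; \bar{\alpha}_p \, \bar{u}_g
\end{equation}

Dividing through by $\bar{\alpha}_p$ expresses $V_d$ as the difference between the gas velocity seen by the particles and the filtered gas velocity, which is exactly the correlation-minus-product structure stated in the abstract.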
APA, Harvard, Vancouver, ISO, and other styles
49

Cooke, Jeffrey L. "Techniques and methodologies for intelligent A priori determination and characterisation of information required by decision makers." Thesis, Queensland University of Technology, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
50

Mattsson, Nils-Göran. "Den moderata rationalismen : Kommentarer, preciseringar och kritik av några begrepp och teser som framlagts av Laurence Bonjour i dennes In Defense of Pure Reason." Thesis, Linköping University, Department of Religion and Culture, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-4543.

Full text
Abstract:
The paper contains comments, clarifications and criticism, including constructive criticism, of some theses that have been put forward by Laurence Bonjour in his In Defense of Pure Reason.
It presents a concept of experience, dealing with the relation between the cognizer and the object of experience, that bears a great similarity to Bonjour's. Through analysis of his concept of the a priori, it is shown that Bonjour has two concepts of the a priori, a narrow one and a broad one. The narrow one is, in my own words: according to moderate rationalism, a proposition p is a priori justified if and only if you apprehend that p must be true in every possible world. This does not mean that Bonjour does not believe in an epistemological, metaphysical and semantic realm. The broad one does not mention anything about possible worlds.
Casullo, in his A Priori Justification, rejects Bonjour's argument against Quine's coherentism. A defense is put forward using the concept of 'an ideal of science for apparent rational insights', drawing on the concepts of axiomatic system and foundationalism. If we assume that the colour proposition 'nothing can be red all over and green all over at the same time' means that we are, at this very moment, representing a property in the world, then we have a superposition argument for the correctness of the proposition. The ground for this argumentation relies on the identification of colours with superposing electromagnetic waves.
APA, Harvard, Vancouver, ISO, and other styles
