
Dissertations / Theses on the topic 'Evolution of principles'


Consult the top 50 dissertations / theses for your research on the topic 'Evolution of principles.'


1

Kerce, James Clayton. "Geometric problems relating evolution equations and variational principles." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/28739.

2

Kouvaris, Konstantinos. "How evolution learns to evolve : principles of induction in the evolution of adaptive potential." Thesis, University of Southampton, 2018. https://eprints.soton.ac.uk/423467/.

Abstract:
Explaining how organisms can exhibit suitable phenotypic variation and rapidly adapt to novel environmental conditions is a central problem in evolutionary biology. Although such variability is crucial for the survival of a lineage and for its adaptive potential, it remains poorly understood. Recent theory suggests that organisms can evolve designs that help them generate novel features that are more likely to be beneficial; this is possible when the environments to which the organisms are exposed share common regularities. Selection, however, cannot favour phenotypes for fitness benefits that have not yet been realised. Such a capacity would imply that natural selection has a form of foresight, which is inconsistent with existing evolutionary theory, and it is unclear why selection in present environments would favour flexible biological structures that promote beneficial phenotypic variants in future, previously unseen environments. In this thesis, I draw on insights from learning theory to demonstrate how organisms can systematically evolve designs that enhance their evolutionary potential for future adaptation. I investigate how organisms can predispose the production of useful phenotypic variation that helps them cope with environmental variability within and across generations, whether through genetic mutation or environmental induction. I find that such adaptive capacity can arise as an epiphenomenon of past selection towards target optima in different selective environments, without the need for direct or lineage selection. Specifically, I resolve the tension between canalisation of past selected targets and anticipation of future environments by recognising that induction in learning systems merely requires the ability to represent structural regularities of previously seen situations that also hold in yet-unseen ones. In learning systems, such generalisation ability is neither mysterious nor taken for granted.
Understanding the evolution of developmental biases as a form of model learning and adaptive plasticity as task learning can provide valuable insights into the mechanistic nature of the evolution of adaptive potential and the evolutionary conditions promoting it.
3

Shen, Yanfen. "A formal ontology for data mining : principles, design, and evolution." Thèse, Trois-Rivières : Université du Québec à Trois-Rivières, 2007. http://www.uqtr.ca/biblio/notice/resume/30004656R.pdf.

4

Elderfield, James Alexander David. "Using epidemiological principles and mathematical models to understand fungicide resistance evolution." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275061.

Abstract:
The use of agricultural fungicides exerts very strong selection pressures on plant pathogens. This can lead to the spread of fungicide resistance in the pathogen population, which in turn reduces the efficacy of disease control and causes loss of yield. In this thesis, we use mathematical modelling to investigate how the spread of fungicide-resistant pathogen strains can be slowed, using epidemiological models to understand how application strategies can be optimised. A range of fungicide application strategies have been proposed as anti-resistance strategies. Two of the most often considered rely on combining two fungicides with different modes of action: the first sprays the two fungicides at the same time (mixture), the second sprays them alternately at different times (alternation). These strategies have been compared both experimentally and by mathematical modellers for decades, but no firm conclusion as to which is better has been reached, although mixtures have in general been favoured. We use mathematical models of septoria leaf blotch (Zymoseptoria tritici) on winter wheat and powdery mildew (Erysiphe necator) on grapevine to investigate the relative performance of these two strategies. We show that, depending on the exact way in which the strategies are compared and on the exact case, either strategy can be the more effective. However, when aiming to optimise yield in the long term, we show that mixtures are very likely to be the most effective strategy in any given case. The structure of a mathematical model clearly affects its conclusions. As well as investigating the sensitivity of our conclusions to model structure, we use a range of nested models to isolate the mechanisms driving the differential performance of fungicide mixtures and alternation. Although the fine detail of a model's predictions depends on its exact structure, we find a number of conserved patterns. 
In particular, we find no case in which mixtures do not produce the overall largest yield over the time for which the fungicide remains effective. We also investigate the effects of the timing of an individual fungicide spray on its contribution toward resistance development and disease control. A set of so-called "governing principles" for understanding the performance of resistance-management strategies was recently introduced by van den Bosch et al., formalising concepts from earlier literature. These quantify selection rates by examining the difference between the growth rates of fungicide-sensitive and fungicide-resistant pathogen strains. Throughout the thesis, we concentrate on the extent to which these governing principles can explain the relative performance of the resistance-management strategies considered.
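The "governing principles" described in this abstract quantify selection by comparing the growth rates of fungicide-sensitive and fungicide-resistant strains. As a minimal illustration of that idea (all parameter values and names below are invented for illustration, not taken from the thesis), the selection rate can be sketched as the difference between per-capita growth rates under a fungicide that suppresses only the sensitive strain:

```python
# Hedged sketch: selection for fungicide resistance as the difference between
# per-capita growth rates of resistant and sensitive strains. Parameter values
# are illustrative only.

def growth_rate(r_max: float, fungicide_effect: float) -> float:
    """Per-capita growth rate of a strain whose growth is reduced by a
    fungicide with the given fractional effect (0 = no control, 1 = full)."""
    return r_max * (1.0 - fungicide_effect)

r_max = 0.3  # intrinsic per-capita growth rate (per day), illustrative
r_sensitive = growth_rate(r_max, fungicide_effect=0.8)  # well controlled
r_resistant = growth_rate(r_max, fungicide_effect=0.0)  # unaffected

# Selection rate: the rate at which the log-ratio of resistant to sensitive
# densities increases while the fungicide is active.
selection_rate = r_resistant - r_sensitive

# Under exponential growth, the log-odds of resistance shift linearly at this
# rate over a spray interval of t days.
t = 10.0
log_odds_change = selection_rate * t
print(f"selection rate: {selection_rate:.3f} per day")
print(f"log-odds shift over {t:.0f} days: {log_odds_change:.2f}")
```

Under this reading, a strategy (mixture or alternation) that yields a lower selection rate would slow resistance evolution.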
5

Binder, Bernd. "Design principles and control mechanisms of signal transduction networks." Doctoral thesis, [S.l. : s.n.], 2005. http://deposit.ddb.de/cgi-bin/dokserv?idn=975655868.

6

Yesilyurt, Yasar. "Evolution of total quality management principles and their implementation in high schools." Master's thesis, University of Cape Town, 2001. http://hdl.handle.net/11427/5447.

7

Dibartolomeo, Theresa. "The evolution of U.S. generally accepted accounting principles and its current and future status /." Staten Island, N.Y. : [s.n.], 2005. http://library.wagner.edu/theses/business/2005/thesis_bus_2005_dibar_evolu.pdf.

8

Manuse, Jennifer E. "The strategic evolution of systems : principles and framework with applications to space communication networks." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/54603.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2009.
Complex systems in operation are constrained by legacy; in other words, the existing properties, structure, components, protocols, software, people, and so on that are inherited over time. This inheritance heavily influences the type and timing of feasible and available changes to the system, the effectiveness and desirability of their impact while accounting for uncertainty, and the future constraints imposed as a result. This thesis introduces the Strategic Evolution of Systems, a novel framework for evolving complex systems that directly addresses legacy challenges during system operation within the context of space communication networks. The framework (perspective, position, plan and pattern) is based on Mintzberg's "emergent" interpretation of strategy. This thesis also presents several unique ideas, including the concept of option lock-out, or the tendency to lose access to potentially desirable regions of the architectural space when exercising a transition; an energy analogy to model static architecture value; an entropy-based formulation to evaluate the desirability, or dynamic multidimensional value, of an architecture by considering the structural and temporal space of possible transitions; and the application of the entropy-based formulation to define the overall desirability of an architecture as its position, or current situation (favorable or unfavorable) relative to accessible alternatives, in order to identify the most advantageous immediate transition. A key contribution of this thesis is a method to value legacy in a physical non-market-traded system, including a demonstration of its application to a system in which benefits and costs are non-monetary in nature.
Other important contributions include a change-exposure tool, referred to as a Strategic Advantage Map, to visualize the near- and long-term impact of immediate transitions relative to legacy. Here, an architecture's position relative to the legacy system can be thought of as the region of entropy space it occupies (evaluated over time and uncertainty); the more dominant this region of position entropy, the more desirable the architecture. For monetary-based systems, a second change-exposure tool, the "Iceberg Exposure", maps the exposure of net present value for each accessible transition option relative to a neutral no-gain-no-loss line, resulting in a graph resembling an iceberg. The visualization tools allow decision makers to quickly evaluate the impact (risk/opportunity) of change, based on their concept of desirability. Case studies include a historical look at the NASA Deep Space Network for insight into legacy and complex system evolution, a demonstration of the Strategic Evolution of Systems framework for a global commercial satellite communication system, and an illustration of the method extended to non-monetary systems for the deployment of communication assets to support manned exploration of Earth's moon. The satellite system case study introduces an extended market model that evaluates the attainable business segments in a global satellite communications system by integrating estimates of the global distribution of market demand, observed traffic statistics, and calculations of the resulting steady-state network performance.
This thesis will show how to use the framework and principles for evaluating a system's current position, as well as how to update the evaluation as time progresses. The satellite communication case study will provide one example where the methodology enables identification of the optimal transition path over the system's operational life. It will become evident that the choice of horizon time and the use of debiasing factors can have significant influence on the results; future study on properly identifying and constructing these variables is strongly recommended. Finally, the ideas and tools presented in this thesis may be used to compare preferred systems to suggested alternatives in order to justify expenditures or to initiate research and development programs.
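The "Iceberg Exposure" described in this abstract maps each accessible transition option's net present value against a neutral no-gain-no-loss line. A minimal sketch of that mapping, with invented option names, cash flows, and discount rate (none of these numbers come from the thesis):

```python
# Hypothetical sketch: rank transition options by net present value (NPV)
# relative to the NPV = 0 "waterline". All figures are illustrative.

def npv(cash_flows, rate):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cash_flows))

# Invented transition options for an operating system-of-systems.
options = {
    "keep legacy system": [0.0, 10.0, 10.0, 10.0],
    "upgrade in place": [-25.0, 12.0, 14.0, 16.0],
    "replace outright": [-60.0, 5.0, 20.0, 30.0],
}

rate = 0.10  # illustrative discount rate
exposure = {name: npv(cfs, rate) for name, cfs in options.items()}

# Options above the zero line are "above water"; those below form the
# submerged part of the iceberg.
for name, value in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{name:>20}: NPV = {value:+.1f}")
```

Plotting these values against the zero line would give the iceberg-shaped graph the abstract describes.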
9

Nyström, Dag. "The UN mission in Congo and the basic principles of peacekeeping : revolution or evolution?" Thesis, Stockholms universitet, Juridiska institutionen, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-127731.

10

Orji, Peter. "The evolution of a regulatory framework for e-commerce formation : metamorphosis of traditional contract principles." Thesis, University of Reading, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.567593.

Abstract:
This research, entitled The Evolution of a Regulatory Framework for E-commerce: Metamorphosis of Traditional Contract Principles, is set against the background of the general question whether a whole new legal structure is needed for contract formation in the online environment, or whether the existing traditional laws of contract are sufficient once the current provisions are adapted to cyberspace. In the first chapter, the research examines the context of e-contract, laying a foundation for the analysis of the legal framework through which electronic business transactions are conducted. The research covers matters such as the use of the prefix 'e-' as an attempt to translate commerce from its traditional form to its cyber-based equivalent. This chapter also describes the technological infrastructure of the various avenues of e-commerce. Chapter Two provides a functional definition of the law of e-commerce. From the proposal that the virtual world is completely devoid of law to the view that it is too strictly regulated, this chapter examines whether there can be a legal mechanism for governing businesses online, as distinct from the general law of contract, what that mechanism might be, and the efficacy of any such law. In Chapter Three a model of a virtual contract formed by the use of electronic media is examined. This model of contract formation is aided by importing the rules of traditional contract into the virtual shop, where the contract rules are tested for relevance and applicability in the online environment. Chapter Four deals with a crucial feature of many online contracts: 'standard forms'. It answers the question whether anything is significantly different from day-to-day standard-form paper contracts when these contracts are formed and/or executed online. In Chapter Five the concept of a separate legal personality for automated agents is discussed, 
with an analogous review of the creation of personality for other non-human legal persons. Signatures and other authenticating means, key to contract formation though not necessarily ingredients for determining validity, are discussed. In Chapter Six the research explores the relevance and increased use of authentication features such as PINs, biometrics and e-signatures, particularly the legal aspects of electronic signatures (statutory requirements, practical problems with their use, and case-law responses to their use). Finally the work turns to a core issue surrounding complex e-commerce transactions: choosing a forum for the adjudication of disputes. The work, while dealing with key aspects of contract, moves from the traditional contract form to contracts in the virtual environment, questions the applicability of the existing law, and then proposes an approach specific to the uniqueness of the online market.
11

Wang, Zhongyi [Verfasser], and Henrik [Akademischer Betreuer] Kaessmann. "Ribosome profiling reveals principles of translatome and transcriptome evolution in mammalian organs / Zhongyi Wang ; Betreuer: Henrik Kaessmann." Heidelberg : Universitätsbibliothek Heidelberg, 2019. http://d-nb.info/1200636376/34.

12

Ferguson, Michael. "The origin, gestation and evolution of management consultancy within Britain (1869-1965) : the principles, practices and techniques of a new professional grouping." Thesis, Open University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301880.

13

Delandar, Arash Hosseinzadeh. "Modeling defect structure evolution in spent nuclear fuel container materials." Doctoral thesis, KTH, Materialteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-206175.

Abstract:
Materials intended for disposal of spent nuclear fuel require a particular combination of physical and chemical properties. The driving forces and mechanisms underlying the materials' behavior must be scientifically understood in order to enable modeling at the relevant time and length scales. The processes that determine the mechanical behavior of copper canisters and iron inserts, as well as the evolution of their mechanical properties, depend strongly on the properties of various defects in the bulk copper and iron alloys. The first part of the thesis deals with precipitation in the cast iron insert; a nodular cast iron insert will be used as the inner container of the spent nuclear fuel. Precipitation is investigated by computing effective interaction energies for point-defect pairs (solute–solute and vacancy–solute) in bcc iron using first-principles calculations. The main impurities considered in the iron matrix are the 3sp (Si, P, S) and 3d (Cr, Mn, Ni, Cu) solute elements. By computing these interaction energies, the possibility of forming different second-phase particles, such as late blooming phases (LBPs), in the cast iron insert is evaluated. The second part is devoted to the fundamentals of dislocations and their role in the plastic deformation of metals. Deformation of single-crystal copper under high strain rates is simulated with the dislocation dynamics (DD) method to examine the effect of strain rate on mechanical properties and on the development of the dislocation microstructure. Creep deformation of the copper canister at low temperatures is also studied; the copper canister will serve as the outer shell of the waste package in the long-term storage of spent nuclear fuel, providing corrosion protection. A glide rate is derived based on the assumption that at low temperatures it is controlled by the climb rate of jogs on the dislocations. 
Using DD simulations, creep deformation of copper at low temperatures is modeled by taking glide, but not climb, into account. Moreover, the effective stresses acting on dislocations are computed using data extracted from the DD simulations.
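The effective interaction energies mentioned in this abstract are, in standard first-principles practice, obtained from supercell total energies. A common definition (assumed here as an illustration, not taken from the thesis) for the binding energy of a defect pair A–B is:

```latex
% Binding energy of a defect pair A-B from supercell total energies E:
%   E(A), E(B)  -- supercells containing each isolated defect
%   E(A+B)      -- supercell containing the interacting pair
%   E(bulk)     -- defect-free reference supercell
% With this sign convention, E_b > 0 indicates attraction (binding).
E_{b}(A\text{--}B) = \bigl[E(A) + E(B)\bigr] - \bigl[E(A{+}B) + E(\mathrm{bulk})\bigr]
```

A positive binding energy between, say, a vacancy and a solute would favor the clustering that seeds second-phase particles such as the LBPs discussed above.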


14

Lim, John. "Understanding acupuncture : a review of the evolution of the theoretical and philosophical principles governing the development of the art of acupuncture through two millennia." Thesis, University of Cambridge, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.293583.

15

Delepine, Léa. "L'évolution du droit international de l'environnement : entre impulsion et régression." Thesis, Perpignan, 2019. http://www.theses.fr/2019PERP0042.

Abstract:
Environmental protection has entered the collective consciousness and become a priority of contemporary societies, framed by international law. International environmental law is accordingly conceived as evolutionary, so as to enable that protection. The notion of evolution refers to the concept of transformation, which may be progressive and impulsive, or regressive. Although not a legal term, 'impulse' implies action and movement; applied to this evolution, it entails a transformation of international environmental law. That transformation may be impulsive, favouring greater environmental protection, or, on the contrary, regressive, diminishing the legal guarantees of environmental protection, while the notion of involution is closely associated with the idea of a regressive transformation. The purpose of this thesis is to examine the evolution of international environmental law; by retracing it, we observe its evolutionary and impulsive, or involutive and regressive, aspects. To understand and analyse this evolution, we focus on the most significant legal elements and fields of public international law and European Union law, in order to identify the fields carrying standardisation, such as the law of the fight against climate change and the law of biodiversity, as well as the major environmental principles. 
The work is divided into two parts: the first concerns the development and integration of environmental concerns in international law; the second develops the systems of implementation of responsibility and the normative scope of environmental principles, so as to offer a retrospective of international environmental law through an evolutionary interpretation.
16

Le, Dang Huy. "Modélisation simplifiée des processus de laminage." PhD thesis, Université Paris-Est, 2013. http://pastel.archives-ouvertes.fr/pastel-00966940.

Abstract:
The initial objective of this thesis was to propose a new simplified model of rolling allowing fast computation, if possible in real time, so that the model could eventually be integrated into a tool for controlling production machines. The model must not neglect elastic deformations, so that it can eventually be applied to the study of phenomena associated with variation of the strip width or with flatness. It must also be open enough to eventually incorporate a description of the microstructure of the polycrystalline material and to account for the deformation of the rolling cylinders. To this end, we proposed to construct a simplified semi-analytical model of rolling. In this type of model, the gradient of the global transformation can be decomposed multiplicatively into the product of a first local 'plastic' transformation, which maps the initial local neighbourhood into the relaxed configuration, and a second local 'elastic' transformation, which maps the relaxed configuration into the current configuration. This decomposition underlies the thermodynamic analysis of the mechanical evolution when the material undergoes large elastoplastic transformations, an analysis which provides the concepts of internal forces and state variables needed to formulate this behaviour. We presented two approaches allowing the analytical computation of these fields when the history of the transformation is known in the neighbourhood of a particle. We then studied a particular class of elastoplastic evolutions, which we called 'simple radial', and showed that these evolutions obey an energy minimum principle. Finally, we conjectured that this principle could be extended to the steady-state regime, allowing a simplified model of rolling processes to be constructed.
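The multiplicative decomposition described in this abstract is the classical elastic-plastic split of the deformation gradient; the symbols below are the conventional ones, used here because the abstract's own notation was lost in extraction:

```latex
% F maps the initial neighbourhood to the current configuration;
% F^p maps it into the relaxed (stress-free) configuration, and
% F^e maps the relaxed configuration into the current one:
F = F^{e}\, F^{p}
```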
17

Darg, Daniel W. "Pattern recognition in astrophysics and the anthropic principle." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:4cb9e1d5-d9d9-4993-8991-f43882d70016.

Abstract:
The role of the Anthropic Principle in astrophysics and cosmology is examined in two principal parts. The first (minor) part takes a chiefly philosophical perspective and examines the manner in which human cognition features in discussions of cosmic origins. It is shown that the philosophical questions raised by the Anthropic Principle and the 'fine-tuning of life' bear resemblances to problems within the philosophy of mind, and we seek a common origin for this surprising parallel. A form of 'epistemic structural realism' is defended and used to critique the physicalist identity thesis. It is argued that equating 'reality' with mathematical structures, which is the basis of the identity thesis, leads to incoherent conclusions. Similar reasoning is used to critique infinite Multiverse theories. In the second (major) part, we gradually transition into mainstream astrophysics, first presenting a new line of research that explores counterfactual universes using semi-analytic models (SAMs) and offering a preliminary study in which the cosmological constant is varied and the effects on 'advanced civilisations' are examined. The importance of galaxy mergers is highlighted, leading to their study. We first address the pattern-recognition problem of locating mergers using the Galaxy Zoo database and produce the largest homogeneous merger catalogue to date. We examine the mergers' properties and compare them with the SAMs of the Millennium Simulation, finding good general agreement. We then develop the Galaxy Zoo approach with a new visual-interface design and double the size of the catalogue of SDSS mergers in the local Universe.
18

Schmitt, Cédric. "Le principe "un homme, une voix" dans les sociétés coopératives." Thesis, Besançon, 2015. http://www.theses.fr/2015BESA0002/document.

Abstract:
"More and more, cooperative societies are becoming more and more companies and less and less cooperatives": Jacques Mestre, Marie-Eve Pancrazi, Isabelle Arnaud-Grossi, Laure Merland and Nancy Tagliarino-Vignal, Droit commercial / Droit interne et aspect de droit international, 29th edition, n°650, L.G.D.J. Built over centuries in opposition to traditional forms of entrepreneurship, the cooperative society has in recent years followed the opposite path, sliding gradually towards what may be called 'classical' companies: sociétés anonymes, sociétés par actions simplifiées, and sociétés à responsabilité limitée in particular. Nevertheless, the principle of 'one man, one vote', under which each member has one and only one vote whatever his contribution, remains omnipresent as an essential, or at any rate wholly indispensable, component of the originality of cooperative societies. Both in the law of 10 September 1947 establishing the general statute of cooperation and in the most important types of cooperative societies (agricultural cooperatives, cooperative banks, and retailers' cooperatives, among others), the principle of 'one man, one vote' remains the rule on which the distribution of votes in cooperative societies rests, even if it no longer stands alone.
APA, Harvard, Vancouver, ISO, and other styles
19

Lebreton, Arnaud. "Les evolutions du principe de souverainete permanente sur les ressources naturelles." Electronic Thesis or Diss., Angers, 2017. http://www.theses.fr/2017ANGE0088.

Full text
Abstract:
Forged from 1952 onwards, notably under the impetus of certain Latin American states, and reaffirmed by numerous United Nations resolutions, permanent sovereignty over natural resources became, through a slow evolution, a well-established principle of contemporary international law whose customary character was recently confirmed by the International Court of Justice. By progressively narrowing the limits that international law may impose on states with respect to foreign economic interests, its formulation served above all to reveal the complexity of the relationship between sovereignty and the exploitation of resources above and below the ground. Having achieved independence with economic structures inherited from the colonial period and its avatars, the developing countries very quickly observed the gap between the somewhat intangible sovereignty recognised to them and their inability to control a national economy still dominated by foreign companies and former metropolitan powers anxious to protect their supplies of raw materials. Faced with a situation conducive to the perpetuation of relationships of economic dependence, the newly independent states therefore undertook, on the basis of a rereading of the concept of sovereignty, classically defined by its political elements alone, a broad effort to eliminate, first, the legacy of colonial domination and, second, any form of exploitation that stood in the way of the state's real control over all activities relating to the natural resources located on its territory.
This explains the many controversies raised by the interpretation of the modalities for exercising the principle, whose content threatened to entail a revision of the rules of classical international law, notably regarding nationalisation, as well as a challenge to treaties and other concession contracts deemed contrary to equity. Although it has become common to analyse the principle from a strictly historical angle, this study attempts to show that it would be unwise to regard it as obsolete. Permanent sovereignty over natural resources remains a fundamental principle of international law, though not without undergoing changes. Two major tendencies are analysed through a double dialectical relationship: one concerns the relationship between the people and the state with regard to the free disposal of natural resources; the other questions the articulation between permanent sovereignty over natural resources and the demands of interdependence, in both the economic and the environmental spheres.
APA, Harvard, Vancouver, ISO, and other styles
20

Zhao, Zhenghang. "Design Principle on Carbon Nanomaterials Electrocatalysts for Energy Storage and Conversion." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984279/.

Full text
Abstract:
We are facing an energy crisis because of the limited supply of fossil fuels and the pollution caused by burning them. Clean energy technologies, such as fuel cells and metal-air batteries, are studied extensively because of their high efficiency and low pollution. The oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) are essential to energy storage and conversion, and noble metals (e.g. Pt) are needed to catalyze the critical chemical reactions in these devices. Functionalized carbon nanomaterials such as heteroatom-doped and molecule-adsorbed graphene can be used as metal-free catalysts to replace the expensive and scarce platinum-based catalysts for energy storage and conversion. Traditionally, experimental studies on the catalytic performance of carbon nanomaterials have been conducted extensively; however, there is a lack of computational studies to guide the experiments in a rapid search for the best catalysts. In addition, the theoretical mechanisms and rational design principles for ORR and OER also need to be fully understood. In this dissertation, density functional theory calculations are performed to obtain the thermodynamic and electrochemical properties of heteroatom-doped and molecule-adsorbed graphene for ORR and OER. Gibbs free energy, overpotential, charge transfer and edge effects are evaluated. The charge transfer analysis shows that the positive charges on the graphene surface, induced by heteroatoms, hetero-edges and adsorbed organic molecules, play an essential role in improving the electrochemical properties of the carbon nanomaterials. Based on these calculations, design principles are introduced to rationally design and predict the electrochemical properties of doped and molecule-adsorbed graphene as metal-free catalysts for ORR and OER.
An intrinsic descriptor is discovered for the first time, which can be used as a materials parameter for the rational design of metal-free carbon-nanomaterial catalysts for energy storage and conversion. The success of the design principle provides a better understanding of the mechanisms behind ORR and OER and a screening approach for the best catalysts for energy storage and conversion.
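As a concrete illustration of the overpotential bookkeeping such DFT screening studies rely on, the sketch below applies the standard computational-hydrogen-electrode reasoning: at an applied potential U, each proton-electron transfer step's free energy shifts by +eU, and the ORR overpotential is the gap between the equilibrium potential (1.23 V) and the highest potential at which every step remains downhill. The free-energy values are hypothetical placeholders, not results from this dissertation.

```python
# Sketch of ORR overpotential evaluation from a four-step free-energy diagram.
# The numbers are invented for illustration.

EQUILIBRIUM_POTENTIAL = 1.23  # V, for O2 + 4H+ + 4e- -> 2H2O at standard conditions

def orr_overpotential(step_dG):
    """step_dG: free-energy changes (eV) of the four proton-electron transfer
    steps at U = 0 V; each should be negative for a downhill reduction."""
    # The limiting potential is the highest U at which every step is still exergonic.
    limiting_potential = min(-dG for dG in step_dG)
    return EQUILIBRIUM_POTENTIAL - limiting_potential

# Hypothetical free-energy diagram for a doped-graphene site (steps sum to -4.92 eV):
steps = [-1.60, -1.30, -1.20, -0.82]
eta = orr_overpotential(steps)
print(f"limiting potential = {min(-g for g in steps):.2f} V, overpotential = {eta:.2f} V")
```

A lower overpotential means a better candidate, which is what makes a single scalar like this convenient for screening many doping patterns.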
APA, Harvard, Vancouver, ISO, and other styles
21

Grécourt, Gilles. "L'évolution de la notion de violence à l'aune du droit pénal." Thesis, Poitiers, 2012. http://www.theses.fr/2012POIT3008.

Full text
Abstract:
Contrary to the teaching of historians, according to which societies pacify themselves as their manners grow more refined, contemporary society seems plagued by omnipresent violence. Yet neither the scholar nor the layman is truly mistaken, for the notion of violence has a subjective dimension that makes it susceptible to considerable variation across periods and communities. Criminal law accommodates this subjectivity only with difficulty. Faithful to its founding principles, which protect it from arbitrariness, criminal law must define clearly and precisely the behaviour it intends to punish. Yet neither the legislature nor the courts have taken care to define the notion of violence. It nevertheless pervades the Penal Code and is invoked ever more widely in Parliament, as shown by the punishment of road, domestic, urban, school and sporting violence, among others. While it is the office of criminal law to frame the evolutions of society, it must not benevolently accompany their excesses, if only because, in matters of violence, it would inevitably be reproached with that which is, originally, its own.
APA, Harvard, Vancouver, ISO, and other styles
22

Lignières, François. "Evolution et distribution du moment cinetique dans les etoiles pre-sequence-principale de masses intermediaires." Paris 7, 1997. http://www.theses.fr/1997PA077131.

Full text
Abstract:
The chromosphere and corona of solar-type stars are generally attributed to the presence of a convective zone beneath their surface. In this context, the discovery of an extended chromosphere in the atmospheres of a class of young stars with masses between 2 and 5 solar masses (the Herbig Ae/Be stars) was unexpected, since these stars are too hot for a convective zone to develop. In this thesis, I explore in detail the idea that a powerful stellar wind combined with the rapid rotation of Herbig stars could explain the non-radiative heating of these chromospheres. We first show how the loss of angular momentum induced by the wind produces strong angular velocity gradients near the surface. A shear instability then allows an energy transfer between the axisymmetric rotational motion and small-scale turbulent motions. To understand the non-linear evolution of the shear layer, we consider analogous situations encountered in geophysics and perform intensive two- and three-dimensional numerical simulations. When a velocity shear is forced at the surface of a stable atmosphere, a turbulent layer progressively penetrates into the atmosphere by turbulent entrainment. Our numerical results show that self-similar theories can describe this deepening. Applied to the transport of angular momentum in Herbig stars, these results indicate that the turbulent layer formed in the surface layers deepens significantly as the star evolves towards the main sequence. It is therefore possible to envisage heating mechanisms in which this turbulent layer would play a role similar to that of the convective layer of solar-type stars.
APA, Harvard, Vancouver, ISO, and other styles
23

GIOVACCHINI, PIERRE. "Dermato-fibro-sarcome de darier et ferrand : diagnostic, evolution et principes therapeutiques ; a propos de 12 cas." Aix-Marseille 2, 1989. http://www.theses.fr/1989AIX20204.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

AMIARD, VALERIE. "Evolution des indications therapeutiques de la lithiase de la voie biliaire principale a l'heure de la chirurgie coelioscopique." Amiens, 1994. http://www.theses.fr/1994AMIEM102.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kanwal, Jasmeen Kaur. "Word length and the principle of least effort : language as an evolving, efficient code for information transfer." Thesis, University of Edinburgh, 2018. http://hdl.handle.net/1842/33051.

Full text
Abstract:
In 1935 the linguist George Kingsley Zipf made a now classic observation about the relationship between a word's length and its frequency: the more frequent a word is, the shorter it tends to be. He claimed that this 'Law of Abbreviation' is a universal structural property of language. The Law of Abbreviation has since been documented in a wide range of human languages, and extended to animal communication systems and even computer programming languages. Zipf hypothesised that this universal design feature arises as a result of individuals optimising form-meaning mappings under competing pressures to communicate accurately but also efficiently - his famous Principle of Least Effort. In this thesis, I present a novel set of studies which provide direct experimental evidence for this explanatory hypothesis. Using a miniature artificial language learning paradigm, I show in Chapter 2 that language users optimise form-meaning mappings in line with the Law of Abbreviation only when pressures for accuracy and efficiency both operate during a communicative task. These results are robust across different methods of data collection: one version of the experiment was run in the lab, and another was run online, using a novel method I developed which allows participants to partake in dyadic interaction through a web-based interface. In Chapter 3, I address the growing body of work suggesting that a word's predictability in context may be an even stronger determiner of its length than its frequency alone. For instance, Piantadosi et al. (2011) show that shorter words have a lower average surprisal (i.e., tend to appear in more predictive contexts) than longer words, in synchronic corpora across many languages. We hypothesise that the same communicative pressures posited by the Principle of Least Effort, when acting on speakers in situations where context manipulates the information content of words, can give rise to these lexical distributions. 
Adapting the methodology developed in Chapter 2, I show that participants use shorter words in more predictive contexts only when subject to the competing pressures for accurate and efficient communication. In a second experiment, I show that participants are more likely to use shorter words for meanings with a lower average surprisal. These results suggest that communicative pressures acting on individuals during language use can lead to the re-mapping of a lexicon to align with 'Uniform Information Density', the principle that information content ought to be evenly spread across an utterance, such that shorter linguistic units carry less information than longer ones. Over generations, linguistic behaviour such as that observed in the experiments reported here may bring entire lexicons into alignment with the Law of Abbreviation and Uniform Information Density. For this to happen, a diachronic process which leads to permanent lexical change is necessary. However, crucial evidence for this process - decreasing word length as a result of increasing frequency over time - has never before been systematically documented in natural language. In Chapter 4, I conduct the first large-scale diachronic corpus study investigating the relationship between word length and frequency over time, using the Google Books Ngrams corpus and three different word lists covering both English and French. Focusing on words which have both long and short variants (e.g., info/information), I show that the frequency of a word lemma may influence the rate at which the shorter variant gains in popularity. This suggests that the lexicon as a whole may indeed be gradually evolving towards greater efficiency. Taken together, the behavioural and corpus-based evidence presented in this thesis supports the hypothesis that communicative pressures acting on language-users are at least partially responsible for the frequency-length and surprisal-length relationships found universally across lexicons. 
More generally, the approach taken in this thesis promotes a view of language as, among other things, an evolving, efficient code for information transfer.
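The frequency-length relationship the thesis investigates is easy to make concrete. The sketch below builds a small fabricated corpus (not data from the thesis) and checks the Law of Abbreviation's signature: frequent word types tend to be shorter than rare ones.

```python
# Toy illustration of Zipf's Law of Abbreviation on a fabricated corpus.
from collections import Counter

corpus = (
    ["the"] * 50 + ["of"] * 40 + ["and"] * 35 + ["to"] * 30 + ["it"] * 25
    + ["language"] * 4 + ["information"] * 3 + ["communication"] * 2
    + ["predictability"] * 1
)

freqs = Counter(corpus)

def mean_length(words):
    return sum(len(w) for w in words) / len(words)

# Split types at the median frequency and compare average word lengths.
median_freq = sorted(freqs.values())[len(freqs) // 2]
frequent = [w for w, f in freqs.items() if f >= median_freq]
rare = [w for w, f in freqs.items() if f < median_freq]

print(f"mean length, frequent types: {mean_length(frequent):.2f}")
print(f"mean length, rare types:     {mean_length(rare):.2f}")
```

In a real study one would of course use a large corpus and a rank correlation rather than a median split, but the basic measurement is the same.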
APA, Harvard, Vancouver, ISO, and other styles
26

Ero, Comfort Ekhuase. "The evolution of norms in international relations : intervention and the principle of non-intervention in intra-African affairs." Thesis, London School of Economics and Political Science (University of London), 1999. http://etheses.lse.ac.uk/1547/.

Full text
Abstract:
This thesis is about the co-evolution of non-interventionist norms and interventionist practice among African states in the post-colonial era. To understand this co-evolution, this study begins from the year 1957, when the first post-colonial state emerged, and is divided into three phases: the early post-colonial period (1957-1970), the post-independence period (1970-mid 1980), and the post-Cold War period (1990-April 1998). Each phase looks at examples of African involvement in internal disputes to consider how the practice of intervention has evolved alongside the clause of non-intervention in Article 3(2) of the Charter of the Organisation of African Unity (OAU). The cases studied illustrate the view that African leaders, to justify intervening in internal disputes, have often cited two persistent and recurrent themes: "African exclusivity" (often defined as "African solutions for African problems") and "African Unity" (often called "solidarity"). These however are not the only themes that explicate how intervention has evolved in African affairs. There are complex regional political realities and sensitivities and factors such as the problem of regional instability posed by internal disputes, the spread of arms and the overflow of refugees into neighbouring countries that impinge on the thinking of intervention and non-intervention. While there is an apparent contradiction between non-interventionist norms and interventionist practice in the history under investigation, the thesis concludes that instead, it represents a careful and pragmatic balance of coping with short-term contingencies (through intervention) and longer-term security (through strengthening the norm) without undermining the undoubted interest of African leaders to secure non-interventionist norms for Africa.
APA, Harvard, Vancouver, ISO, and other styles
27

Cornudella, Gaya Miquel. "Autotelic Principle : the role of intrinsic motivation in the emergence and development of artificial language." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEE082/document.

Full text
Abstract:
This thesis studies the role of intrinsic motivation in the emergence and development of communicative systems in populations of artificial agents. More specifically, our goal is to explore how populations of agents can use a particular computational motivation system, called the autotelic principle, to regulate their language development and the resulting dynamics at the population level. To achieve this, we first propose a concrete implementation of the autotelic principle. The core of this system is the balance between challenges, tasks to be carried out in order to achieve a goal, and skills, the abilities the system can employ to accomplish those tasks. The relation between the two is not static but is regularly destabilised as new skills are acquired, which allows the system to attempt challenges of increasing complexity. We then test the usefulness of this motivation system in a series of language evolution experiments. In the first set of experiments, a population of artificial agents must develop a language to refer to objects with discrete features. These experiments focus on how unambiguous communicative systems can emerge when the autotelic principle is used to scaffold language development into stages of increasing difficulty. In the second set of experiments, agents must agree on a language to communicate about colour samples. Here we explore how the motivation system can regulate the linguistic complexity of interactions in a continuous domain, and examine the validity of the autotelic principle as a mechanism for regulating several language strategies of similar difficulty simultaneously. In summary, our work shows that the autotelic principle can be used as a general mechanism to autonomously regulate the complexity of the developed language, in both discrete and continuous domains.
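The challenge/skill balancing the abstract describes can be sketched as a simple feedback loop: an agent practises at its current difficulty level, and once its recent success rate is comfortably high, the balance is deliberately destabilised by raising the challenge. The thresholds, success model and learning rule below are invented for illustration; they are not the thesis's actual implementation.

```python
# Minimal sketch of an autotelic-style challenge/skill regulation loop.
import random

def run_agent(n_interactions=2000, seed=42):
    random.seed(seed)
    challenge = 1          # current difficulty level the agent attempts
    skill = 1.0            # scalar proxy for the agent's communicative ability
    successes = []         # sliding window of recent outcomes
    for _ in range(n_interactions):
        # Success is more likely when skill matches or exceeds the challenge.
        p_success = min(1.0, skill / challenge)
        ok = random.random() < p_success
        successes.append(ok)
        if ok:
            skill += 0.01  # practising at a feasible level grows skill
        if len(successes) > 50:
            successes.pop(0)
        # Autotelic regulation: high recent performance destabilises the
        # balance by raising the challenge, restarting the cycle.
        if len(successes) == 50 and sum(successes) / 50 > 0.9:
            challenge += 1
            successes.clear()
    return challenge, skill

final_challenge, final_skill = run_agent()
print(f"final challenge level: {final_challenge}, skill: {final_skill:.2f}")
```

The point of the sketch is the shape of the dynamics: the system alternates between stable practice phases and self-induced destabilisations, so difficulty ratchets up only as competence grows.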
APA, Harvard, Vancouver, ISO, and other styles
28

Librado, Sanz Pablo. "Genómica evolutiva de la regulación transcripcional en las principales familias multigénicas del sistema quimiosensorial de Drosophila." Doctoral thesis, Universitat de Barcelona, 2014. http://hdl.handle.net/10803/145375.

Full text
Abstract:
The chemosensory system is involved in the detection of food, predators and mates, and is thus essential for the survival of organisms and species. In insects, the first steps of chemoperception are mediated by multigene families encoding (i) extracellular proteins, such as the Odorant-Binding Proteins (OBPs) and Chemosensory Proteins (CSPs), and (ii) chemoreceptor proteins, such as the Odorant (ORs), Gustatory (GRs) and Ionotropic Receptors (IRs). Since the fitness of individuals depends on their correct expression, these multigene families are an excellent model for studying adaptive processes at the molecular level. The increasing availability of molecular data gives us the opportunity to understand the role of natural selection in the transcriptional evolution of chemosensory genes. However, the massive and innovative nature of these data has made it essential to develop and implement new population genetics and molecular evolution methods in the powerful bioinformatics tools DnaSPv5, BadiRate and popDrowser. Using these and other tools, we determined that the genes encoding OBPs are not randomly distributed in the Drosophila genome but are organised in gene clusters. The conservation of these clusters is related to the transcriptional amplitude and noise of their members, as well as to their chromatin state ('transcription elongation' and binding of the JIL-1 protein). Among other functions, JIL-1 releases the RNA polymerase paused at the promoter region, producing a burst of transcriptional elongation that increases expression noise. Because fluctuations in OBP levels can alter the behaviour of individuals, this transcriptional noise can generate beneficial phenotypic plasticity, especially in changing external environments.
The architecture of the promoter region can play a key role in pausing RNA polymerase activity. In this regard, we have inferred that the upstream regions of the aIRs and CSPs are subject to strong functional constraints, whereas Darwinian selection has had a greater impact on the upstream regions of the ORs and GRs. Although further evidence is needed, the evolution of the OBP upstream regions may be linked to the differential impact of higher-order chromatin regulatory mechanisms on OBP transcription. In any case, there is no doubt that natural selection (both negative and positive) has contributed significantly to the transcriptional evolution of the major chemosensory multigene families, through mechanisms involving cis-regulatory elements and higher-order chromatin states.
APA, Harvard, Vancouver, ISO, and other styles
29

Allain, Stéphanie. "L'évolution du moment cinétique des étoiles pré-séquence principale de faible masse." Grenoble 1, 1997. http://www.theses.fr/1997GRE10167.

Full text
Abstract:
This thesis presents a study of the rotation of low-mass stars (between 0.5 and 1.2 solar masses) during their pre-main-sequence phase, from T Tauri stars a few million years old, and their main-sequence phase, up to a few billion years. Two complementary approaches were used: observations provide new rotation measurements for these objects, and modelling makes it possible to understand the physical processes at work. The observations focused mainly on the young clusters IC 4665, Alpha Persei and the Pleiades. In these clusters, solar-type stars are at a pivotal age between the pre-main-sequence and main-sequence phases. Although a large percentage of stars rotate at velocities below 10 km s⁻¹, their exact rotation velocities were not known because of instrumental resolution limits. Thanks to the CORAVEL and ELODIE instruments at the OHP, all rotation velocities are now resolved in Alpha Persei and the Pleiades for stars with masses between 0.6 and 1.1 solar masses. The distributions of equatorial velocity as a function of mass were constructed for both clusters and are compared with models. A model of angular momentum evolution was developed that takes pre-main-sequence evolution into account: changes in internal structure, the effect of an accretion disk, angular momentum loss at the surface, and angular momentum transfer between the core and the envelope. The new data place strong constraints on angular momentum transport in stellar interiors. In fast rotators, a very efficient transfer of angular momentum allows the star to maintain quasi-solid rotation throughout its evolution, from the T Tauri phase to the age of the Sun, in agreement with observations of the solar interior.
By contrast, the very existence of a large number of slow rotators requires a decoupling between the core and the envelope, with a characteristic coupling time of 100 million years. The velocity evolution of these very slow rotators at the beginning of the main sequence, during which their velocity varies very little, is also consistent with a very long coupling time.
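The core-envelope decoupling described in this abstract can be illustrated with a minimal two-zone toy model (an illustrative sketch, not the thesis's actual model): each zone carries angular momentum, the envelope is braked by a wind, and the zones exchange angular momentum on a coupling timescale. All numbers, units and the assumed cubic wind law below are made up for illustration.

```python
# Minimal two-zone toy model of core-envelope angular momentum coupling.
# Illustrative only: arbitrary units, and a simple assumed Omega^3 wind
# braking acting on the envelope alone.

def evolve(j_core, j_env, i_core, i_env, tau_c, k_wind, dt, steps):
    """Euler integration of two coupled angular momentum reservoirs."""
    for _ in range(steps):
        w_core = j_core / i_core          # angular velocities
        w_env = j_env / i_env
        # Angular momentum whose transfer would equalise the rotation rates,
        # exchanged on the coupling timescale tau_c.
        dj = (i_core * i_env / (i_core + i_env)) * (w_core - w_env)
        transfer = dj / tau_c * dt
        wind = k_wind * w_env**3 * dt     # wind torque on the envelope only
        j_core -= transfer
        j_env += transfer - wind
    return j_core / i_core, j_env / i_env

# Strong coupling (small tau_c): the star spins down almost as a solid body.
fast = evolve(1.0, 1.0, i_core=0.5, i_env=0.5, tau_c=1.0,
              k_wind=0.01, dt=0.1, steps=5000)
# Weak coupling (large tau_c): the core stays fast while the envelope brakes.
slow = evolve(1.0, 1.0, i_core=0.5, i_env=0.5, tau_c=1000.0,
              k_wind=0.01, dt=0.1, steps=5000)
print(fast, slow)
```

With a short coupling time the two zones stay locked (quasi-solid rotation); with a long one the envelope spins down while the core retains its angular momentum, the qualitative behaviour the abstract invokes to explain the slow rotators.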
APA, Harvard, Vancouver, ISO, and other styles
30

Février, François. "La spécificité des principes du service public. Identité, fonctions et évolutions des « lois » de continuité, d'égalité et d'adaptation constante." Rennes 1, 1999. http://www.theses.fr/1999REN10402.

Full text
Abstract:
At a time when the transformation of public law is undergoing a singular acceleration, when the combined influences of political, economic and legal factors (national and European Community factors in particular) tend towards a greater unity of law, few rules and concepts of domestic public law escape scrutiny of their legitimacy and their effectiveness. In this context, which frames the umpteenth "crisis" of the public service, faced with the generalisation of the principle of competition and the concomitant restriction of public intervention, the question of the specificity of the principles of the public service takes on major importance. Transcending the polysemous character of the public service as well as the diversity of the legal regimes attached to it, the combination of the "laws" of continuity, equality and constant adaptation had until now commonly been understood as forming a trilogy of unifying principles characteristic of the public service, a guarantee of its singularity and its effectiveness, and an essential foundation of its legitimacy. Questioning, investigating and measuring the true specificity of these principles therefore appears decisive, especially in the light of the current movement that extols the virtues of the competitive system, contests the importance and usefulness of public service activities and, in various fields, calls for their return to purely private initiative. Taking the opposite view to this growing contestation, the present study concludes that the specificity of the principles of the public service is as real as it is justified.
A comparative examination of domestic law, which also takes into account the requirements of European Community law, shows that, through the particular aims they reveal and pursue, notably in comparison with private enterprise, these principles prove characteristic not only of the notion and the regime of the public service, but also of the framework in which this mode of intervention is now called upon to evolve.
APA, Harvard, Vancouver, ISO, and other styles
31

Vilches, Karina. "Study of mathematical models of phenotype evolution and motion of cell populations." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066117/document.

Full text
Abstract:
This thesis deals with two partial differential equations that model the biological phenomena of genetic evolution and spatial motion of a cell population. The first problem (Part I, Chapter 1) concerns the phenotypic evolution of a cell population: we prove that the asymptotic limit of the solutions of the proposed partial differential equation is a Dirac mass. To model this phenomenon, we studied a transport equation for the genetic dynamics, including classical elements of mathematical ecology, and added a transport term in the genetic variable x to model natural selection. We introduce a suitable parameter into our model, which has an associated normalised problem. We then derive estimates that give properties of the solutions and allow us to obtain their limit. To do so, we define a sub-solution and a super-solution, which bound the solution of the problem through a maximum principle. The second problem (Part II, Chapter 2) summarises the main results obtained in the study of a system of parabolic partial differential equations inspired by the Keller-Segel equation. The main result is a sharp condition on the initial mass for global existence and blow-up of the solutions of the studied system, obtained using the moment method and Hardy-Littlewood-Sobolev inequalities for systems.
In Chapter 1, we consider a cell population whose individuals live under the same environmental conditions for some fixed period of time and compete among themselves for nutrients. Assuming that offspring have the same trait as their parents, we defined a fitness function that is trait- and density-dependent, and assumed there is a unique best-adapted trait under fixed environmental conditions. We modelled this phenomenon using a transport equation. The main result is obtaining Dirac-mass concentration of the solutions in the asymptotic behaviour, incorporating a parameter that is biologically justified. We applied the classical framework to obtain this result. First, we give the a priori estimates and an existence result for the simplified problem; next we add terms to obtain a more realistic model; then we study an approximate problem, establishing some regularity and properties of the solutions; finally we pass to the limit. We used tools such as BV convergence properties, an Ansatz, sub- and super-solutions, and the maximum principle. Chapter 2 has been published in the following papers (see Part II): - E. ESPEJO, K. VILCHES, C. CONCA (2012), Sharp condition for blow-up and global existence in a two species chemotactic Keller-Segel system in R^2, European J. Appl. Math. - C. CONCA, E. ESPEJO, K. VILCHES (2011), Remarks on the blow-up and global existence for a two species chemotactic Keller-Segel system in R^2, European J. Appl. Math. In this chapter, we give the main results obtained in these two publications. We studied the sharp condition for global existence and blow-up in time of the parabolic PDE system in R^2, inspired by the studies done in the one-species case. We model the movement of two chemotactic populations driven by one chemical substance. The main result extends the result obtained for the classical simplified one-species Keller-Segel model to the multi-species case, using tools adequate for PDE systems.
We used the moment method to prove blow-up, and bounded the entropy to show global existence.
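The moment method invoked in this abstract can be illustrated by the standard second-moment (virial) computation for the classical one-species parabolic-elliptic Keller-Segel model in R²; this is the textbook one-species argument, not the two-species computation of the thesis itself:

```latex
% One-species parabolic-elliptic Keller-Segel model in R^2:
%   \rho_t = \Delta\rho - \nabla\!\cdot(\rho\,\nabla c),
%   -\Delta c = \rho, \qquad M = \int_{\mathbb{R}^2}\rho\,dx .
\frac{d}{dt}\int_{\mathbb{R}^2} |x|^2\,\rho(x,t)\,dx
  \;=\; 4M - \frac{M^2}{2\pi}
  \;=\; \frac{M}{2\pi}\,(8\pi - M).
```

If M > 8π, the second moment would become negative in finite time, which is impossible for a nonnegative density, so the solution blows up; for M < 8π the moment stays positive and global existence can be established via entropy bounds, matching the moment-method and entropy strategy described above.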
APA, Harvard, Vancouver, ISO, and other styles
32

Driguez, Emmanuel. "Évolution de l'activité sérique des aminotransférases au cours des pancréatites aiguës alcooliques, des pancréatites aiguës biliaires et des lithiases de la voie biliaire principale." Amiens, 1990. http://www.theses.fr/1990AMIEM097.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Mendes, Neto Edilberto Batista. "Evolução e distribuição de riqueza da cultura de soja nas principais regiões produtoras no Brasil." Universidade Federal de Uberlândia, 2015. https://repositorio.ufu.br/handle/123456789/12622.

Full text
Abstract:
The Brazilian agribusiness subsector shows a scenario of growth and prominence in the composition of GDP. The cultivation and trade of soybeans have performed strongly in the Brazilian economy, with this trade representing 27% of the country's exports. Soybean farming, in the perception of society, the government and producers, is known to generate a high degree of wealth, understood as its capacity to attract jobs, sources of income and improved quality of services. In addition, instruments were identified in the literature that make it possible to measure the value of this wealth and its distribution. The objective of this work is to understand the process of evolution and distribution of wealth observed in soybean farming in Brazil from 1998 to 2014. To meet these goals, data on wealth were organised, identifying eleven variables over a time horizon of 17 years. The study was limited to five producing municipalities representative of soybean cultivation in the South and Midwest regions. The analysis is descriptive and quantitative and was carried out in two steps: a comparative examination of the evolution of the elements of wealth by means of analysis of variance, followed by bivariate correlation, evaluating the degree of association of the Result variable with the other data. The results showed that the wealth generated, the costs of seeds and pesticides, the remuneration of equity capital and the result have similar characteristics across the major producing regions when average values are evaluated. The relevance of this study lies in its theoretical contribution, highlighting the elements that make up the value added of soybean cultivation.
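The two analysis steps named in this abstract, a one-way ANOVA across regions followed by a bivariate (Pearson) correlation, can be sketched on made-up numbers; the data below are hypothetical, not the thesis's:

```python
# Pure-stdlib sketch of the two analysis steps: a one-way ANOVA F statistic
# comparing a wealth variable across producing regions, and a Pearson
# correlation between two of the variables. Hypothetical data throughout.
import math

def anova_f(groups):
    """One-way ANOVA F statistic: between-group vs within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical "wealth generated" per hectare in three regions.
south = [10.1, 9.8, 10.4, 10.0]
midwest = [10.0, 10.3, 9.9, 10.2]
outlier_region = [13.0, 12.7, 13.2, 12.9]
f = anova_f([south, midwest, outlier_region])

# Hypothetical seed costs vs result: strongly negatively correlated here.
r = pearson_r([1.0, 2.0, 3.0, 4.0], [8.0, 6.5, 4.9, 3.6])
print(f, r)
```

A large F flags a region whose mean differs from the others (here the third, constructed to stand out), while r close to -1 flags a strong inverse association between the two variables.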
Master in Accounting Sciences
APA, Harvard, Vancouver, ISO, and other styles
34

Paglia, Gianluca. "Determination of the structure of γ-alumina using empirical and first principle calculations combined with supporting experiments." Curtin University of Technology, Department of Applied Physics & Department of Applied Chemistry, 2004. http://espace.library.curtin.edu.au:80/R/?func=dbin-jump-full&object_id=14992.

Full text
Abstract:
Aluminas have had some form of chemical and industrial use throughout history. For little over a century corundum (α-Al2O3) has been the most widely used and known of the aluminas. The emerging metastable aluminas, including the γ, δ, η, θ, κ, β, and χ polymorphs, have been growing in importance. In particular, γ-Al2O3 has received wide attention, with established use as a catalyst and catalyst support, and growing application in wear abrasives, structural composites, and as part of burner systems in miniature power supplies. It is also growing in importance as part of the feedstock for aluminium production in order to affect both the adsorption of hydrogen fluoride and the feedstock solubility in the electrolytic solution. However, much ambiguity surrounds the precise structure of γ-Al2O3. Without proper knowledge of the structure, understanding the properties, dynamics and applications will always be less than optimal. The aim of this research was to contribute towards settling this ambiguity. This work was achieved through extensive computer simulations of the structure, based on interatomic potentials with refinements of promising structures using density functional theory (DFT), and a wide range of supporting experiments. In addition to providing a more realistic representation of the structure, this research has also served to advance knowledge of the evolution of the structure with changing temperature and make new insights regarding the location of hydrogen in γ-Al2O3.
Both the molecular modelling and Rietveld refinements of neutron diffraction data showed that the traditional cubic spinel-based structure models, based on Fd-3m space group symmetry, do not accurately describe the defect structure of γ-Al2O3. A more accurate description of the structure was provided using supercells of the cubic and tetragonal unit cells with a significant number of cations on c symmetry positions. These c symmetry based structures exhibited diffraction patterns that were characteristic of γ-Al2O3. The first three chapters of this Thesis provide a review of the literature. Chapter One provides a general introduction, describing the uses and importance of the aluminas and the problems associated with determining the structure of γ-Al2O3. Chapter Two details the research that has been conducted on the structure of γ-Al2O3 historically. Chapter Three describes the major principles behind the computational methods employed in this research. In Chapter Four, the specific experimental and computational techniques used to investigate the structure of γ-Al2O3 are described. All preparation conditions and parameters used are provided. Chapter Five describes the methodology employed in computational and experimental research. The examination of the ~ 1.47 billion spinel-based structural possibilities of γ-Al2O3, described using supercells, and the selection of ~ 122,000 candidates for computer simulation, is detailed. This chapter also contains a case study of the structure of κ-Al2O3, used to investigate the applicability of applying interatomic potentials to solving complex structures, where many possibilities are involved, and to develop a systematic procedure of computational investigation that could be applied to γ-Al2O3. Chapters Six to Nine present and discuss the results from the experimental studies.
Preliminary heating trials, performed to determine the appropriate preparation conditions for obtaining a highly crystalline boehmite precursor and an appropriate calcination procedure for the systematic study of γ-Al2O3, were presented in Chapter Six. Chapter Seven details the investigation of the structure from a single-temperature case. Several known structural models were investigated, including the possibility of a dual-phase model and the inclusion of hydrogen in the structure. It was demonstrated that an accurate structural model cannot be achieved for γ-Al2O3 if the cations are restricted to spinel positions. It was also found that electron diffraction patterns, typical for γ-Al2O3, could be indexed according to the I41/amd space group, which is a maximal subgroup of Fd-3m. Two models were presented which describe the structure more accurately; Cubic-16c, which describes cubic γ-Al2O3 and Tetragonal-8c, which describes tetragonal γ-Al2O3. The latter model was found to be a better description for the γ-Al2O3 samples studied. Chapter Eight describes the evolution of the structure with changing calcination temperature. Tetragonal γ-Al2O3 was found to be present between 450 and 750 °C. The structure showed a reduction in the tetragonal distortion with increasing temperature but at no stage was cubic γ-Al2O3 obtained. Examination of the progress of cation migration indicates the reduction in the tetragonal nature is due to ordering within inter-skeletal oxygen layers of the unit cell, left over from the breakdown of the hydroxide layers of boehmite when the transformation to γ-Al2O3 occurred. Above 750 °C, δ-Al2O3 was not observed, but a new phase was identified and designated γ′-Al2O3.
The structure of this phase was determined to be a triple cell of γ-Al2O3 and is herein described using the P-4m2 space group. Chapter Nine investigates the presence of hydrogen in the structure of γ-Al2O3. It was concluded that γ-Al2O3 derived from highly crystalline boehmite has a relatively well ordered bulk crystalline structure which contains no interstitial hydrogen and that hydrogen-containing species are located at the surface and within amorphous regions, which are located in the vicinity of pores. Expectedly, the specific surface area was found to decrease with increasing calcination temperature. This trend occurred concurrently with an increase in the mean pore and crystallite size and a reduction in the amount of hydrogen-containing species within the structure. It was also demonstrated that γ-Al2O3 derived from highly crystalline boehmite has a significantly higher surface area than expected, attributed to the presence of nano-pores and closed porosity. The results from the computational study are presented and discussed in Chapter Ten. Optimisation of the spinel-based structural models showed that structures with some non-spinel site occupancy were more energetically favourable. However, none of the structural models exhibited a configuration close to those determined from the experimental studies. Nor did any of the theoretical structures yield a diffraction pattern that was characteristic of γ-Al2O3. This discrepancy between the simulated and real structures means that the spinel-based starting structure models are not close enough to the true structure of γ-Al2O3 to facilitate the derivation of its representative configuration.
Large numbers of structures demonstrate migration of cations to c symmetry positions, providing strong evidence that c symmetry positions are inherent in the structure. This supports the Cubic-16c and Tetragonal-8c structure models presented in Chapter Seven and suggests that these models are universal for crystalline γ-Al2O3. Optimisation of c symmetry based structures, with starting configurations based on the experimental findings, resulted in simulated diffraction patterns that were characteristic of γ-Al2O3.
APA, Harvard, Vancouver, ISO, and other styles
35

Broinizi, Marcos Eduardo Bolelli. "Ordenação evolutiva de anúncios em publicidade computacional." Universidade de São Paulo, 2015. http://www.teses.usp.br/teses/disponiveis/45/45134/tde-09112015-104805/.

Full text
Abstract:
Simultaneous optimization of users', advertisers' and publishers' interests has been a formidable challenge in online advertising. More concretely, the ranking of advertising, or more simply ad ranking, plays a central role in this challenge. However, even the best ranking formula or algorithm cannot withstand the ever-changing environment of online advertising for long. In this work, we present a data-driven analysis that shows the importance of combining different aspects of online advertising through an evolutionary approach to ad ranking in order to respond to changes effectively. We evaluated aspects ranging from bid values and previous click performance to user behavior and interests, including the textual similarity between ad and page. In this evaluation, we assessed commercial performance along with the correlation between the different aspects. We then proposed an evolutionary approach for combining these aspects, composed of three parts: a configuration repository to facilitate the deployment and evaluation of ranking experiments; an evolutionary data-based evaluation component; and a genetic programming engine to evolve ad ranking formulae. Our approach was successfully implemented in a real online advertising system that processes more than fourteen billion ad requests per month. According to our results, these aspects complement each other and none of them should be neglected. Moreover, we showed that the evolutionary combination of these aspects not only outperformed each of them individually, but also achieved better overall results than static ad ranking methods.
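The idea of evolving a ranking formula against logged data can be illustrated with a toy example (synthetic data and fitness; this is not the thesis's system): candidates are weight vectors over three assumed features, evolved by mutation and selection against a simulated click signal, and compared with a static bid-only baseline.

```python
# Toy evolution of an ad-ranking formula: candidates are weight vectors over
# three assumed features (bid, historical CTR, ad-page text similarity);
# fitness is the simulated click-through of the top-ranked ad.
import random

random.seed(0)

FEATURES = 3  # bid, ctr, similarity

def make_request():
    """One synthetic ad request: five candidate ads with random features."""
    return [[random.random() for _ in range(FEATURES)] for _ in range(5)]

def click_prob(ad):
    """Hidden 'true' preference: clicks depend mostly on ctr + similarity."""
    bid, ctr, sim = ad
    return 0.1 * bid + 0.5 * ctr + 0.4 * sim

def fitness(weights, requests):
    """Expected clicks when ranking each request's ads by the linear formula."""
    total = 0.0
    for ads in requests:
        best_ad = max(ads, key=lambda ad: sum(w * f for w, f in zip(weights, ad)))
        total += click_prob(best_ad)
    return total / len(requests)

requests = [make_request() for _ in range(300)]

# (mu + lambda)-style evolution of the weight vector.
population = [[random.uniform(-1, 1) for _ in range(FEATURES)] for _ in range(20)]
for generation in range(30):
    population.sort(key=lambda w: fitness(w, requests), reverse=True)
    parents = population[:5]
    children = [[w + random.gauss(0, 0.2) for w in random.choice(parents)]
                for _ in range(15)]
    population = parents + children

best = max(population, key=lambda w: fitness(w, requests))
static = [1.0, 0.0, 0.0]  # rank by bid alone, a "static" baseline
print(fitness(best, requests), fitness(static, requests))
```

A full genetic programming engine evolves formula trees rather than fixed-length weight vectors, but the selection-mutation loop and the data-based fitness evaluation are the same in spirit.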
APA, Harvard, Vancouver, ISO, and other styles
36

Bäcklund, Jimmy Ulf Anti-Krister. "Reciprok egoism, skeptisk empirism och modern fysikalism : Titelförslag på några principer och diskurs kring dessas korrelation." Thesis, Linköpings universitet, Avdelningen för kulturvetenskaper, KVA, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-96930.

Full text
Abstract:
This essay is an ontological and epistemological examination of, among other things, the ethical and philosophy-of-mind implications of a consistently physicalist position. This is contrasted with a transcendentalist position, such as that of T. M. Scanlon, with the sceptical empiricism of David Hume, with reciprocally based moral systems (e.g. J. L. Mackie's self-referential altruism), and with a view of consciousness in line with Galen Strawson's criteria for a realistic physicalism, which in my view solves all so-called psychophysical problems.
This paper contains an ontological and epistemic analysis of the implications of a consistently physicalist view of reality, in polemic contrast with transcendentalist positions such as that of T. M. Scanlon. I follow the lines of a sceptical empiricism that I ascribe to Hume, from which, I argue, consistently follow guidelines such as those set by J. L. Mackie and Galen Strawson on the topics of self-referential altruism and realistic physicalism respectively.
APA, Harvard, Vancouver, ISO, and other styles
37

Poulin, Lucie. "Génotypage à haut niveau de résolution des xanthomonades phytopathogènes à l’aide de marqueurs de type CRISPR et VNTR : de la preuve de principe à l’application." Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20012.

Full text
Abstract:
Food and agriculture safety rely on durable cropping systems. Consequently, phytopathogens pose a serious risk to durable agriculture worldwide. In this context, surveillance of phytopathogens is a mandatory prerequisite in order to understand and predict pathogen distribution, dispersal routes and factors, and to trigger appropriate measures to reduce the pathogens' propagation. The genus Xanthomonas displays a large diversity of host-specific plant-pathogenic species that infect a wide range of plant species, including commercially grown crops. The two studied species, i.e. the rice-pathogenic Xanthomonas oryzae and the cassava-pathogenic Xanthomonas axonopodis pathovar manihotis (Xam), belong to the top ten phytopathogenic bacteria and are thus of major interest. Routine epidemiological surveillance of these bacteria has to be achieved in order to type and link strains at different geographic scales as well as to characterize outbreaks and epidemics early. For this purpose, several high-resolution molecular typing approaches were explored. First, VNTR (Variable Number of Tandem Repeats)-based molecular markers were studied. For X. oryzae, multilocus VNTR analyses (MLVA) were developed: MLVA-25 for the pathovar oryzicola (Xoc) and MLVA-16 for the three known lineages of X. oryzae. A large population study of X. oryzae by MLVA-16 allowed us to characterize clonal complexes, which were likely associated with new epidemics. Also, the description of novel Xoc strains from central and east Africa indicated their probable Asian provenance. For Xam, the exploration of polymorphic VNTR loci in 65 available genome sequences allowed the description of sixteen robust VNTR loci. Among them, five highly polymorphic loci were further used in a population study of Xam in the eastern plains of Colombia. The results provided evidence of a geographical structuring of Xam populations.
Second, CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats)-associated spoligotyping and minisatellite markers were explored for a largely divergent set of Philippine strains of Xanthomonas oryzae pv. oryzae (Xoo). Both approaches were compared to genome-wide SNPs and races. Preliminary studies identified the composition of the CRISPR arrays, which could be useful for a spoligotyping approach. In addition, 18 minisatellite markers revealed a significant correlation with races and could be used for a larger study of Philippine or Asian Xoo populations. In conclusion, novel molecular typing approaches were successfully evaluated, implemented and used to study rice- and cassava-pathogenic bacteria of the genus Xanthomonas.
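The MLVA typing this abstract describes compares strains by their repeat-number profiles across a panel of VNTR loci. A minimal sketch on hypothetical data (profiles, locus count and grouping threshold are all illustrative assumptions, not from the thesis) is:

```python
# Illustrative MLVA comparison: each strain is typed by the number of tandem
# repeats at a set of VNTR loci; distance is the number of differing loci;
# "clonal complexes" here group single-locus variants, a convention borrowed
# from MLVA/MLST practice. Hypothetical data throughout.

def mlva_distance(a, b):
    """Number of VNTR loci at which two repeat-number profiles differ."""
    assert len(a) == len(b)
    return sum(1 for x, y in zip(a, b) if x != y)

def clonal_complexes(profiles, max_diff=1):
    """Union-find grouping of strains whose profiles are single-locus variants."""
    parent = list(range(len(profiles)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(profiles)):
        for j in range(i + 1, len(profiles)):
            if mlva_distance(profiles[i], profiles[j]) <= max_diff:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(profiles)):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Hypothetical 5-locus profiles for six strains.
strains = [
    (3, 7, 2, 4, 9),   # 0
    (3, 7, 2, 4, 8),   # 1: single-locus variant of 0
    (3, 7, 2, 5, 8),   # 2: single-locus variant of 1
    (1, 2, 6, 6, 1),   # 3: unrelated profile
    (1, 2, 6, 6, 2),   # 4: single-locus variant of 3
    (9, 9, 9, 9, 9),   # 5: singleton
]
print(clonal_complexes(strains))
```

Strains 0-2 form one chain of single-locus variants and hence one clonal complex, the kind of grouping the abstract associates with a common epidemic origin.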
APA, Harvard, Vancouver, ISO, and other styles
38

Oliveira, João Guilherme Silva Marcondes de. "Do caráter aberto dos tipos penais: revisão de uma dicotomia." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/2/2136/tde-02082011-112356/.

Full text
Abstract:
Na evolução da teoria do tipo penal, podemos perceber um movimento de intensificação de complexidade, no qual os mais novos e diferentes posicionamentos doutrinários confluem para tornar aquela figura o ponto central do estudo do delito. Dentre as inúmeras classificações dogmáticas que surgiram neste desenvolvimento, nosso trabalho analisa uma em particular: a divisão entre tipos fechados e tipos abertos. Embora criada originalmente por Hans WELZEL para descrever um grupo específico de casos, a noção de tipos abertos ganhou contornos mais amplos, sendo admitida pela doutrina de maneira geral. Todavia, a aceitação dessa categoria científica não se limitou a uma atividade expositiva, servindo para a crítica de modelos jurídicos. Frente ao princípio da legalidade, conquista jurídica de longa data, os estudiosos do Direito Penal apontaram a ilegitimidade dos tipos abertos, por ofensa ao mandato de certeza, um dos quatro aspectos daquele princípio maior. Por outro lado, as conclusões da atual ciência hermenêutica ensinam que não se pode confundir texto legal e norma, e que a compreensão do fenômeno jurídico perpassa uma série de valorações adstritas ao Direito. Neste sentido, as diferenças que, em tese, tornavam específicos os tipos abertos, quando confrontadas com essa nova descoberta, se mostram apenas aparentes. Toda e qualquer norma apresenta um caráter aberto, algo intrínseco à linguagem humana. Logo, não existe tipo fechado. Inobstante, a censura que fora aventada pela doutrina não perde sua razão de ser. Pelo contrário, é necessária sua reformulação, para afirmar que o problema se encontra no grau de intensidade da abertura, na aceitabilidade ou não da indeterminação da conduta humana diante do caso concreto, único instante em que é possível a individualização da norma. Para tanto, é preciso erigir critérios seguros a fim de efetuar o julgamento da legitimidade dos tipos penais. 
Defendemos que os próprios fundamentos do princípio da legalidade a vedação da arbitrariedade e a previsibilidade das condutas servem como critérios de avaliação. Mais ainda, a realização dessa operação somente pode ser feita por meio do controle das decisões judiciais, o que nos leva a um problema de ordem prática e não apenas teórica.
In the evolution of the criminal type theory, we can notice a movement of complexity intensification, in which the newest and most different doctrinal positions join together to make that figure the central point of the crime study. Among the multiple dogmatic classifications that aroused in this development, our task analyses one in particular: the division between closed and open types. Though originally created by Hans WELZEL to describe a specific group of cases, the notion of open types acquired a wider profile, being generally admitted by the doctrine. However, the acceptance of this scientific category has not been limited to an expository activity, serving to the critic of juridical models. Before the principle of the legality, a long-term juridical conquer, the scholars of the Criminal Law pointed to the illegitimacy of the open types, due to the offense of the certainty term, as one of the four aspects of that major principle. Moreover, the conclusions of todays hermeneutic science instruct that one cannot confuse legal text and norm, and that the comprehension of the juridical phenomenon pervades a series of valuations bonded to Law. In this way, the differences that, in thesis, made specific the open types, when confronted with this new finding, prove to be only apparent. All and any rule presents an open feature, an aspect intrinsic to human language. Therefore, there are no closed types. Despite that, the censure that was made by doctrine does not lose its reason. In the opposite, its reformulation is necessary, to affirm that the problem is in the intensity extent of the opening, in the acceptance or not of the human conduct indetermination ahead of a concrete case, the single moment in which it is possible to individualize the rule. Therefore, it is necessary to built firm criteria to perform the judgment of the criminal types legitimacy. 
We maintain that the very foundations of the principle of legality, the prohibition of arbitrariness and the predictability of conduct, serve as evaluation criteria. Furthermore, this operation can only be accomplished through the review of judicial decisions, which leads us to a problem that is practical and not merely theoretical.
APA, Harvard, Vancouver, ISO, and other styles
39

Atalla, El-Awady Attia El-Awady. "Prise en compte de la flexibilité des ressources humaines dans la planification et l’ordonnancement des activités industrielles." Thesis, Toulouse, INPT, 2013. http://www.theses.fr/2013INPT0018/document.

Full text
Abstract:
The growing need for responsiveness in the various industrial sectors, faced with market volatility, raises a strong demand for flexibility in their organisation. This flexibility can be used to improve the robustness of the baseline schedule of a given programme of activities. Since a company's human resources are increasingly regarded as the core of its organisational structures, they represent a renewable and viable source of flexibility. First, this work models the problem of multi-period workforce allocation to industrial activities, considering two dimensions of flexibility: the annualisation of working time, which concerns individual or collective working-time modulation policies, and the versatility of operators, which implies a dynamic view of their skills and the need to predict the evolution of individual performance as a function of successive assignments. The dynamic nature of workforce efficiency was modelled as a function of learning-by-doing and of skill loss during periods of work interruption. Consequently, we place ourselves resolutely in a context where the expected duration of activities is no longer deterministic, but results from the number of workers chosen to perform them and from their levels of experience. The research was then oriented to answer the question: "What kind, or what size, of problem does the project we have to schedule pose?" The different dimensions of the problem are therefore classified and analysed so that they can be assessed and measured. For each dimension, the most relevant assessment method was proposed; the work then consisted in reducing the resulting parameters to principal components by means of a factor analysis.
As a result, the complexity (or simplicity) of the search for a solution (that is, of building a satisfactory schedule for a given problem) can be evaluated. To this end, we developed a software platform, based on genetic algorithms, designed to solve the problem and build the project baseline schedule together with the associated resource allocation. The model was validated, and its parameters were tuned via designed experiments to guarantee the best performance. Moreover, the robustness of this performance was studied by completely solving a sample of four hundred projects, classified according to the number of their tasks. Because of the dynamic aspect of operator efficiency, the present work examines a set of factors that influence the development of their versatility. The results logically conclude that a company seeking flexibility must accept additional costs to develop the versatility of its operators. In order to control these extra costs, the number of operators following a skill-development programme must be optimised, as well as, for each of them, the degree of similarity between the newly developed skills and the initial ones, the number of these complementary skills, and the way the operators' working hours should be distributed over the skill-acquisition period. Finally, this work opens the door to the future consideration of human factors and workforce flexibility in the construction of a baseline schedule.
The growing need for responsiveness among manufacturing companies facing market volatility raises a strong demand for flexibility in their organisation. This flexibility can be used to enhance the robustness of a baseline schedule for a given programme of activities. Since company personnel are increasingly seen as the core of organisational structures, they provide decision-makers with a source of renewable and viable flexibility. First, this work models the problem of multi-period workforce allocation to industrial activities with two degrees of flexibility: the annualisation of working time, which offers opportunities to change schedules both individually and collectively, and the versatility of operators, which induces a dynamic view of their skills and the need to predict changes in individual performance as a result of successive assignments. The dynamic nature of the workforce's experience was modelled as a function of learning-by-doing and of forgetting during periods of work interruption. We firmly set ourselves in a context where the expected durations of activities are no longer deterministic, but result from the number and experience levels of the workers assigned to perform them. The research was then oriented to answer the question "What kind of problem does the project we have to schedule raise?": the different dimensions of the project are therefore inventoried and analysed so that they can be measured. For each of these dimensions, suitable assessment methods have been proposed. Relying on the resulting correlated measures, the research aggregates them through a factor analysis in order to produce the principal components of an instance. Consequently, the complexity or ease of solving a given scheduling problem can be evaluated.
With that in view, we developed a software platform to solve the problem and construct the project baseline schedule with the associated resource allocation. This platform relies on a genetic algorithm. The model has been validated; moreover, its parameters have been tuned for best performance by means of an experimental design procedure. The robustness of its performance was also investigated through the comprehensive solving of four hundred project instances, ranked according to the number of their tasks. Owing to the dynamic aspect of the workforce's experience, this research investigates a set of parameters affecting the development of operator versatility. The results recommend that firms seeking flexibility should accept a degree of extra cost to develop the operators' multi-functionality. In order to control these over-costs, the number of operators who attend a skill-development programme should be optimised, as well as the similarity of the newly developed skills to the principal ones, the number of additional skills an operator may be trained in, and the way the operators' working hours should be distributed over the period of skill acquisition. These are the fields of investigation of the present work, which will, in the end, open the door to considering human factors and workforce flexibility when generating a baseline work programme.
APA, Harvard, Vancouver, ISO, and other styles
40

Schroeder, Florian Alexander Yinkan Nepomuk. "Tensor network states simulations of exciton-phonon quantum dynamics for applications in artificial light-harvesting." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/275988.

Full text
Abstract:
Light-harvesting in nature is known to work differently from conventional man-made solar cells. Recent studies found electronic excitations delocalised over several chromophores, together with a soft, vibrating structural environment, to be key mechanisms that may protect and direct energy transfer, yielding increased harvesting efficiency even under adverse conditions. Unfortunately, testing realistic models of noise-assisted transport at the quantum level is challenging because of the intractable size of the environmental wave function. I developed a powerful tree tensor network states (TTNS) method that finds an optimally compressed explicit representation of the combined electronic and vibrational quantum state. With TTNS it is possible to simulate exciton-phonon quantum dynamics from small molecules to larger complexes, modelled as an open quantum system with multiple bosonic environments. After benchmarking the method on the minimal spin-boson model by reproducing ground-state properties and dynamics that have been reported using other methods, the vibrational quantum state is harnessed to investigate environmental dynamics and its correlation with the spin system. To enable simulations of realistic non-Born-Oppenheimer molecular quantum dynamics, a clustering algorithm and novel entanglement renormalisation tensors are employed to interface TTNS with ab initio density functional theory (DFT). A model of a pentacene dimer containing 252 vibrational normal modes, generated in this way, was simulated with TTNS, reproducing exciton dynamics in agreement with experimental results. Based on the environmental state, the potential energy surfaces underlying the observed singlet fission dynamics were calculated, yielding unprecedented insight into the super-exchange-mediated avoided-crossing mechanism that produces ultrafast, high-yield singlet fission. 
This combination of DFT and TTNS is a step towards large-scale materials exploration that can accurately predict excited-state properties and dynamics. Furthermore, application to biomolecular systems, such as photosynthetic complexes, may give valuable insight into novel environmental engineering principles for the design of artificial light-harvesting systems.
APA, Harvard, Vancouver, ISO, and other styles
41

Lacerda, Fernando Hideo Iochida. "Direito penal mínimo e constituição: o bem jurídico como aquisição evolutiva e a criminalização de seu tempo." Pontifícia Universidade Católica de São Paulo, 2013. https://tede2.pucsp.br/handle/handle/6290.

Full text
Abstract:
Made available in DSpace on 2016-04-26T20:22:18Z (GMT). No. of bitstreams: 1 Fernando Hideo Iochida Lacerda.pdf: 1457772 bytes, checksum: b9aecec24ea84eaf447c16136ce8b004 (MD5) Previous issue date: 2013-10-28
The scope of the present work is to propose boundaries for the criminalisation of our time, starting from a view of the juridical value as an evolutionary acquisition. In this sense, the juridical value corresponds to the structural coupling between criminal law and criminal policy, being a product of the evolutionary differentiation that operated between the legal and political systems. To that end, Niklas Luhmann's systems theory was adopted as a conceptual assumption, together with a view of our time as a risk society, following the notions of Ulrich Beck. Applying these scientific references, this thesis re-discusses the relationship between the Constitution, the juridical value, criminal law, criminal procedure and criminal policy, defending the idea that it is a function of the legislature to identify the juridical value as a basis for creating criminal law, since the whole process of penal intervention is positively limited by constitutional norms. The dissertation deals with the criminalisation of our time: both regarding criminal intervention as a product of politics, analysing the (non-)existence of constitutional warrants binding the production of infra-constitutional rules, from a view of the Constitution as a limit on criminal law whose foundation would be the juridical value, and regarding criminal intervention as an operation of the legal system, considering whether constitutional procedural safeguards may be relaxed to conform to the expectations of the risk society. It is a search for foundations, limits and parameters for the penal system of our time: a minimum criminal law and criminal procedure, informed by constitutional principles
The scope of this work is to propose guidelines for the criminalisation of our time, based on a view of the juridical value as an evolutionary acquisition. In this sense, the criminal juridical value corresponds to the structural coupling between criminal law and criminal policy, a product of the evolutionary differentiation that took place between the legal and political systems. To that end, Niklas Luhmann's systems theory and a view of the risk society as the time in which we live, drawn from notions formulated by Ulrich Beck, are adopted as basic conceptual assumptions. Employing these scientific references, this dissertation re-discusses the relationship between the Constitution, the juridical value, criminal law, criminal procedure and criminal policy, defending the idea that it is the legislator's function to identify the juridical value as the foundation of incriminating criminal norms, the whole process of penal intervention being positively limited by constitutional norms. The dissertation deals with the criminalisation of our time: both at the moment of penal intervention as a political product, analysing the (non-)existence of constitutional mandates that would bind infra-constitutional norm production, from a view of the Constitution as a limit on criminal law whose foundation would be the juridical value, and at the moment of penal intervention as an operation of the legal system, considering the (im)possibility of relativising constitutional procedural guarantees in order to conform to the expectations of the risk society. It is a search for foundations, limits and parameters for the penal system of our time: a minimum criminal law and a guarantee-based criminal procedure, informed by constitutional principles
APA, Harvard, Vancouver, ISO, and other styles
42

Tecklenburg, Gerhard. "Design of body assemblies with distributed tasks under the support of parametric associative design (PAD)." Thesis, University of Hertfordshire, 2011. http://hdl.handle.net/2299/5809.

Full text
Abstract:
This investigation identifies how CAD models of typical automotive body assemblies can be defined to allow continuous optimisation of the number of iterations required for the final design and of the number of variants on the basis of Parametric Associative Design (PAD), and how methodologies for the development of surfaces, parts and assemblies of the automotive body can be represented and structured for multiple re-use in the collaborative environment of the concept phase of a Product Evolution (Formation) Process (PEP). The standardisation of optimised processes and methodologies, and enhanced interaction between all parties involved in product development, could improve product quality and reduce development time and hence cost. The fundamental principles of PAD, the particular methodologies used in automotive body design and the principles of methodical development and design in general are investigated, as is the role which automotive body engineers play throughout the activities of the PEP. The distribution of design work in concept teams of automotive body development and important methodologies for the design of prismatic profile areas are critically analysed. To address the role and distribution of work, 25 group-work projects were carried out in cooperation with the automotive industry, in which large automotive body assemblies were developed. The requirements for distributed design work have been identified and refined. The results of the investigation point towards a file-based, well-structured administration of a concept design, with a zone-based approach. The investigation was extended to the process chain of sections, which are used for the development of surfaces, parts and assemblies. Important methods were developed, optimised and validated with regard to update-safe re-use of 3D zone-based CAD models instead of 2D sections. 
The thesis presents a thorough description of the research undertaken, details the experimental results and provides a comprehensive analysis of them. Finally, it proposes a unique methodology for a zone-based approach with a clearly defined process chain of sections for update-safe re-use of design models.
APA, Harvard, Vancouver, ISO, and other styles
43

Dadi, Slimane. "Qualité des eaux de la Moselle à la prise d'eau du district de l'agglomération nancéienne : analyse des données pour la période 1973-1988." Vandoeuvre-les-Nancy, INPL, 1991. http://www.theses.fr/1991INPL051N.

Full text
Abstract:
This dissertation is devoted to the study of the quality of the raw water of the Moselle river at the Messein intake. This quality is subject to numerous fluctuations. In order to understand the mechanisms that govern it, we present the principal data on the Moselle basin upstream of Messein; we study the spatio-temporal variations of water quality at several strategic points of the basin; and we carry out a statistical analysis of the parameters characterising the quality of the Moselle water at Messein. The exploration of the water-quality data for the Moselle and its two main tributaries (the Moselotte and the Vologne) for the period 1973-1988 enabled us to identify the origin and the transfer dynamics of the dissolved and particulate material carried by these rivers. By studying the variations of concentrations in space and time, we were able, on the one hand, to attribute to most dissolved species a diffuse origin, superficial or deep, linked to the weathering of the soils and rocks of the basin and to anthropogenic pollution, and, on the other hand, to determine the relationships between in-stream concentrations and discharge during high- and low-water periods. Budgets of dissolved and particulate fluxes were established at the outlet of the Moselle basin and of its main tributaries; they make it possible to quantify the inputs of the Moselle at the Messein station, and also to determine the geographic origin of these inputs, notably the respective contributions of the Moselotte, Vologne and upper Moselle basins and of the downstream part of the basin. We were also able to quantify the monthly contributions to the various annual budgets and the nitrate losses in the Moselle, and to identify the zones of sedimentation and of erosion. At Messein, for the period 1973-1988, we retained 45 quality parameters. 
We studied their statistical distribution by fitting them to the probability laws most frequently used in hydrology. We showed that the distribution of most parameters can be considered normal, log-normal or truncated log-normal. Some parameters have a heterogeneous distribution; in this case, the distribution of the natural or logarithmic values is a normal law with two Gaussian components, separated either according to discharge or according to the two hydrological seasons. The application of statistical methods based on correlation analysis or on principal component analysis provided important information on the relationships between the different water parameters at Messein and on the factors that control the chemical composition of the water.
APA, Harvard, Vancouver, ISO, and other styles
44

Tourte, Alain. "Quel traitement pour le sujet autiste ? : exposé et analyse critique des principales approches de l'autisme : les différents moyens mis en oeuvre par le sujet autiste pour compenser sa carence symbolique : développement d'un traitement possible du sujet autiste." Thesis, Strasbourg, 2012. http://www.theses.fr/2012STRAG046.

Full text
Abstract:
This work supports a non-deficit conception of autism. It is concerned with the treatment of the autistic subject, with his support and his subjective evolution. It develops a form of care for autistic persons centred on their specific functioning and their singularity. It makes the hypothesis of a subject at work in autism, who desperately seeks to restrain what invades him, to temper his anxiety and to symbolise his world. Our Lacanian reading of the principal approaches to autism (psychodynamic, behavioural, cognitive) makes it possible to identify the conditions and modalities of treatment that favour re-engaging the autistic subject in the dynamics of language and his opening to others, to knowledge and to the social bond. This evolution passes through the elaboration of an "autistic symptom". We show the essential therapeutic function of the various means of compensation (or "suppletion bases") for the symbolic deficiency employed by the autistic subject, and we specify the function and determining role of the therapist during treatment. Finally, we draw out a differential clinic between autism and psychosis
This work supports a non-deficit approach to autism. It focuses on the treatment, support and subjective evolution of the autistic subject. It develops a form of care centred on the specificity and singularity of this subject. It makes the hypothesis that there is a subject at work in autism, desperately trying to stop what invades him, to moderate his anxiety and to symbolise his world. Our Lacanian reading of the major approaches to autism (psychoanalytical, behaviourist, cognitivist) allows us to develop the conditions and methods of a treatment that helps the autistic subject re-start in the dynamics of language and stimulates his opening to others, to knowledge and to social links. This evolution requires the elaboration of an "autistic symptom". We underline the essential therapeutic function of the various means of compensation (or "suppletion bases") for the symbolic deficiency used by the autistic subject, and we specify the function and determining role of the therapist during treatment. Finally, we define a differential clinical approach between autism and psychosis
APA, Harvard, Vancouver, ISO, and other styles
45

Grobler, Trienko Lups. "Fountain codes and their typical application in wireless standards like edge." Diss., University of Pretoria, 2008. http://hdl.handle.net/2263/25381.

Full text
Abstract:
One of the most important technologies used in modern communication systems is channel coding. Channel coding dates back to a paper published by Shannon in 1948 [1] entitled "A Mathematical Theory of Communication". The basic idea behind channel coding is to send redundant information (parity) together with a message to make the transmission more error-resistant. Different types of codes can be used to generate the required parity, including block, convolutional and concatenated codes. A special subclass of such codes is sparse graph codes. The structure of a sparse graph code can be depicted via a graphical representation, the factor graph, which has sparse connections between its elements. Codes belonging to this subclass include Low-Density Parity-Check (LDPC), Repeat-Accumulate (RA), Turbo and fountain codes. These codes can be decoded using the belief propagation algorithm, an iterative algorithm in which probabilistic information is passed between the nodes of the graph. This dissertation focuses on noisy decoding of fountain codes using belief propagation. Fountain codes were originally developed for erasure channels, but since any factor graph can be decoded using belief propagation, noisy decoding of fountain codes can easily be accomplished. Three fountain codes, namely Tornado, Luby Transform (LT) and Raptor codes, were investigated in this dissertation. The following results were obtained:
  1. The Tornado graph structure is unsuitable for noisy decoding since the code structure protects the first layer of parity instead of the original message bits (a Tornado graph consists of more than one layer).
  2. The successful decoding of systematic LT codes was verified.
  3. A systematic Raptor code was introduced and successfully decoded. The simulation results show that the Raptor graph structure can improve on its constituent codes (a Raptor code consists of more than one code).
Lastly, an LT code was used to replace the convolutional incremental redundancy scheme used by the 2G mobile standard Enhanced Data Rates for GSM Evolution (EDGE). The results show that a fountain incremental redundancy scheme outperforms a convolutional approach if the frame lengths are long enough. On the EDGE platform, the results also showed that the fountain incremental redundancy scheme outperforms the convolutional approach once the second transmission is received. Although EDGE is an older technology, it remains a good platform for testing different incremental redundancy schemes, since it was one of the first platforms to use incremental redundancy.
Dissertation (MEng)--University of Pretoria, 2008.
Electrical, Electronic and Computer Engineering
MEng
unrestricted
APA, Harvard, Vancouver, ISO, and other styles
46

Delhaye, Coralie. "Comparaison des positionnements entre savoirs scientifiques et croyances religieuses à propos des origines du vivant dans les curriculums officiels grec, français et belge." Doctoral thesis, Universite Libre de Bruxelles, 2014. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/209167.

Full text
Abstract:
The research problem studied in this thesis emerges from various reflections, empirical data and observations, all linked to a finding that has important implications for science education: the partial or total rejection of the theory of evolution in science classes taught at school, in the name of creationist beliefs, in modern European societies in which science is authoritative.

The scientific literature dealing with this problem in the context of school education in Europe analyses the conceptions of the actors of school education (teachers and/or pupils) on this subject, notably by studying the link between these conceptions and the representations these same actors have of science, their personal trajectories, their training, and so on. A blind spot in this literature is the scarcity of research on the directives officially addressed to teachers. That is why we chose to examine the content of these directives.

This research has, in the first place, an exploratory aim. It consists in constructing and using a theoretical and methodological instrument that makes it possible, on the one hand, to identify representations of scientific knowledge, of religious belief and/or of their relations (or non-relations) conveyed by prescribed European curricula and, on the other hand, to determine the mechanisms through which these representations could influence, in one way or another, the rejection or acceptance of the theory of evolution in the name of creationist beliefs or, conversely, the rejection or acceptance of creationist beliefs in the name of the theory of evolution. To identify the representations sought, we use the method of thematic content analysis.

Another aim of this study is confirmatory: to confirm the postulate that the nature of the representations identified within the prescribed curricula by means of the above-mentioned instrument can be linked (the nature of this link will be defined in the body of the dissertation, on the basis of the analysis of socio-historical data reported in the literature) to the ways of managing secularism put in place by the educational policies of different European countries: France, Greece and French-speaking Belgium. These countries were selected precisely for their divergent profiles in policies for managing cultural diversity. To demonstrate this link, we carry out a societal comparative analysis.
Doctorate in Psychological and Educational Sciences
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
47

BERRUYER, DESIROTTE NICOLE. "Contribution a l'etude des enveloppes circumstellaires : effet du couplage grains-gaz." Nice, 1987. http://www.theses.fr/1987NICE4157.

Full text
Abstract:
A star's envelope is particularly important at its birth, when its origin is proto-stellar, and in the final phases of its evolution, when the matter ejected by the star forms a cocoon surrounding it. The description of these circumstellar envelopes therefore depends on their constitution and on the radiation emitted by the central stellar object. We specify the observational criteria of an object in formation as a function of its mass; and, for evolved objects, we describe a stellar wind model that takes into account dust grains and radiation pressure.
APA, Harvard, Vancouver, ISO, and other styles
48

Oakshott, Stephen Craig, School of Information, Library & Archives Studies, UNSW. "The Association of Librarians in colleges of advanced education and the committee of Australian university librarians: The evolution of two higher education library groups, 1958-1997." Awarded by: University of New South Wales. School of Information, Library and Archives Studies, 1998. http://handle.unsw.edu.au/1959.4/18238.

Full text
Abstract:
This thesis examines the history of Commonwealth Government higher education policy in Australia between 1958 and 1997 and its impact on the development of two groups of academic librarians: the Association of Librarians in Colleges of Advanced Education (ALCAE) and the Committee of Australian University Librarians (CAUL). Although university librarians had met occasionally since the late 1920s, it was only in 1965 that a more formal organisation, known as CAUL, was established to facilitate the exchange of ideas and information. ALCAE was set up in 1969 and played an important role in helping to develop a concept of library service peculiar to the newly formed College of Advanced Education (CAE) sector. As well as examining the impact of Commonwealth Government higher education policy on ALCAE and CAUL, the thesis also explores the influence of other factors on these two groups, including the range of personalities that comprised them and their relationships with their parent institutions and with other professional groups and organisations. The study focuses on how higher education policy and these other external and internal factors shaped the functions, aspirations and internal dynamics of these two groups, and how this resulted in each group evolving differently. The author argues that, because of the greater attention given to the special educational role of libraries in the CAE curriculum, the group of college librarians had the opportunity to participate in, and have some influence on, the Commonwealth Government statutory bodies responsible for coordinating policy and distributing funding for the CAE sector. 
The link between ALCAE and formal policy-making processes resulted in a more dynamic group than CAUL, with the university librarians being discouraged by their Vice-Chancellors from having contact with university funding bodies because of the desire of the universities to maintain a greater level of control over their affairs and resist interference from government. The circumstances of each group underwent a reversal over time as ALCAE's effectiveness began to diminish as a result of changes to the CAE sector and as member interest was transferred to other groups and organisations. Conversely, CAUL gradually became a more active group during the 1980s and early 1990s as a result of changes to higher education, the efforts of some university librarians, and changes in membership. This study is based principally on primary source material, with the story of ALCAE and CAUL being told through the use of a combination of original documentation (including minutes of meetings and correspondence) and interviews with members of each group and other key figures.
APA, Harvard, Vancouver, ISO, and other styles
49

Bretschneider, Jörg. "Ein wellenbasiertes stochastisches Modell zur Vorhersage der Erdbebenlast." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2007. http://nbn-resolving.de/urn:nbn:de:swb:14-1183717691264-79760.

Full text
Abstract:
Strong earthquakes pose a potentially high risk to urban centres worldwide, a risk that is confronted, among other means, by methods of aseismic structural design. Such design rests on assumptions and thorough knowledge about local seismic ground acceleration; its limits, on the other hand, are set by additional construction costs. The damage caused by recent strong quakes - also in industrialized countries - emphasizes the need for further refinement of the concepts and methods of earthquake-resistant structural design. In this work, a new approach to stochastic seismic load modelling is presented that abandons the usual assumption of a stationary, one-dimensional stochastic process for the ground acceleration. The goal is site- and wave-specific spatial load modelling that, by using information about physical and geotechnical invariants, enables transparent and low-cost aseismic structural design, or at least reduces seismic risk compared with common design methods. These invariants are the law-governed structure of the seismic wave field and the resonance properties of the soil strata at the site. The proposed load model represents the local wave field as a composition of stochastic evolutionary sub-processes on time-variant principal axes, which correspond to wave trains with specific load characteristics. These load characteristics are described in the frequency and time domains as well as spatially by wave-specific shape functions, whose parameters correlate strongly with seismic and geotechnical quantities. The main contributions of the work are newly developed correlation-based estimation procedures for the empirical specification of the model parameters in building practice. The Spectral-Adaptive Principal Correlation Axes (SAPCA) algorithm ensures optimal coverage of the spatial wave trains by transforming the recorded data onto Reference Components. 
At the same time - in combination with a correction algorithm for the strike angle of the principal axis - it delivers concise, associated principal-axis patterns, which are in turn used to reliably identify dominance phases for three generalized wave trains. Within those dominance phases, the wave-specific parameters of the load model are determined. Additionally, an algorithm is presented for identifying Rayleigh waves in single-site acceleration records. The adequacy of the modelling approach and the efficiency of the estimation procedures are verified using strong-motion records from the 1994 Northridge earthquake. The proposed non-stationary modelling approach describes more accurately those load portions of the strong-motion wave field that conventional stochastic load models underestimate. Load portions that were previously omitted or modelled only as a lump sum are made accessible to analysis and modelling for the first time. The stochastic model becomes physically transparent with respect to the most important load-generating effects and is therefore - despite its higher complexity - easier to handle in engineering practice. The principal-axis method (SAPCA) is also suitable for seismological analyses in the near field, e.g., for the analysis of rupture processes and topographic site effects.
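The core idea behind the principal-axis estimation described above - tracking the time-variant dominant axis of three-component ground motion - can be illustrated with a minimal sketch. This is not the thesis's SAPCA algorithm (which is spectral-adaptive and correlation-based); it only shows the simpler underlying operation of estimating one dominant motion axis per time window via eigendecomposition of a sliding-window covariance matrix. The function name and parameters are hypothetical.

```python
import numpy as np

def principal_axes(acc, win):
    """Estimate time-variant principal axes of 3-component ground motion.

    acc : (n, 3) array of acceleration samples (e.g. N, E, Z components)
    win : window length in samples
    Returns one dominant unit axis and its variance per non-overlapping window.
    """
    n = acc.shape[0]
    axes, power = [], []
    for start in range(0, n - win + 1, win):
        seg = acc[start:start + win]
        seg = seg - seg.mean(axis=0)      # remove the local mean
        cov = seg.T @ seg / win           # 3x3 covariance of this window
        w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
        axes.append(v[:, -1])             # eigenvector of largest eigenvalue
        power.append(w[-1])               # variance along that axis
    return np.array(axes), np.array(power)
```

If the motion within a window is strongly polarized (as during the dominance phase of a single wave train), the largest eigenvalue clearly separates from the other two and the corresponding eigenvector tracks the polarization direction; the sign of the eigenvector is arbitrary.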
APA, Harvard, Vancouver, ISO, and other styles
50

Bretschneider, Jörg. "Ein wellenbasiertes stochastisches Modell zur Vorhersage der Erdbebenlast." Doctoral thesis, Technische Universität Dresden, 2006. https://tud.qucosa.de/id/qucosa%3A25000.

Full text
Abstract:
Strong earthquakes pose a potentially high risk to urban centres worldwide, a risk that is confronted, among other means, by methods of aseismic structural design. Such design rests on assumptions and thorough knowledge about local seismic ground acceleration; its limits, on the other hand, are set by additional construction costs. The damage caused by recent strong quakes - also in industrialized countries - emphasizes the need for further refinement of the concepts and methods of earthquake-resistant structural design. In this work, a new approach to stochastic seismic load modelling is presented that abandons the usual assumption of a stationary, one-dimensional stochastic process for the ground acceleration. The goal is site- and wave-specific spatial load modelling that, by using information about physical and geotechnical invariants, enables transparent and low-cost aseismic structural design, or at least reduces seismic risk compared with common design methods. These invariants are the law-governed structure of the seismic wave field and the resonance properties of the soil strata at the site. The proposed load model represents the local wave field as a composition of stochastic evolutionary sub-processes on time-variant principal axes, which correspond to wave trains with specific load characteristics. These load characteristics are described in the frequency and time domains as well as spatially by wave-specific shape functions, whose parameters correlate strongly with seismic and geotechnical quantities. The main contributions of the work are newly developed correlation-based estimation procedures for the empirical specification of the model parameters in building practice. The Spectral-Adaptive Principal Correlation Axes (SAPCA) algorithm ensures optimal coverage of the spatial wave trains by transforming the recorded data onto Reference Components. 
At the same time - in combination with a correction algorithm for the strike angle of the principal axis - it delivers concise, associated principal-axis patterns, which are in turn used to reliably identify dominance phases for three generalized wave trains. Within those dominance phases, the wave-specific parameters of the load model are determined. Additionally, an algorithm is presented for identifying Rayleigh waves in single-site acceleration records. The adequacy of the modelling approach and the efficiency of the estimation procedures are verified using strong-motion records from the 1994 Northridge earthquake. The proposed non-stationary modelling approach describes more accurately those load portions of the strong-motion wave field that conventional stochastic load models underestimate. Load portions that were previously omitted or modelled only as a lump sum are made accessible to analysis and modelling for the first time. The stochastic model becomes physically transparent with respect to the most important load-generating effects and is therefore - despite its higher complexity - easier to handle in engineering practice. The principal-axis method (SAPCA) is also suitable for seismological analyses in the near field, e.g., for the analysis of rupture processes and topographic site effects.
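The "stochastic evolutionary sub-processes" mentioned in the abstract are non-stationary processes whose intensity changes over time. A minimal, generic sketch (not the thesis model; the function name and all parameter values are illustrative assumptions) generates one such sample as the response of a damped oscillator to white noise, shaped by a deterministic rise/decay envelope:

```python
import numpy as np

def evolutionary_sample(n, dt, f0, zeta, t_rise, t_decay, seed=0):
    """One sample of a non-stationary, ground-acceleration-like process:
    white noise filtered through a damped single-degree-of-freedom
    oscillator (a narrow-band, roughly stationary carrier), then shaped
    by a deterministic time envelope to make it evolutionary."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * dt
    w0 = 2.0 * np.pi * f0                 # natural circular frequency
    x = np.zeros(n)                       # oscillator displacement
    v = 0.0                               # oscillator velocity
    noise = rng.standard_normal(n)
    for i in range(1, n):                 # semi-implicit Euler integration
        acc_i = noise[i] - 2.0 * zeta * w0 * v - w0 ** 2 * x[i - 1]
        v += acc_i * dt
        x[i] = x[i - 1] + v * dt
    env = (1.0 - np.exp(-t / t_rise)) * np.exp(-t / t_decay)
    return t, env * x                     # envelope-modulated carrier
```

The envelope makes the variance time-dependent, which is precisely the non-stationarity that stationary load models ignore; the thesis goes further by making the modulation wave-train-specific and spatially directional.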
Des séismes forts sont un gros risque potentiel pour des centres urbains dans le monde entier, qui est, entre autres, confronté par des méthodes de conception aséismique de bâtiments. Ceci est fondé sur des hypothèses et la connaissance profonde au sujet de l'accélération séismique au sol locale. Limites sont placées, de l'autre côté, par des coûts additionnels. Les dommages des séismes forts récents, aussi dans les pays industrialisés, soulignent la nécessité de raffiner plus loin les concepts et les méthodes de conception aséismique de bâtiments. Dans cette oeuvre, une nouvelle approche à la modélisation stochastique de la charge séismique est présentée, qui renonce la présupposition habituelle d'un processus stationnaire et unidimensionnel pour l'accélération de sol. L'objectif est une modélisation spatiale de charge, spécifique d'ondes et de site, qui, par l'utilisation des informations sur des invariantes physiques, permet une mesure de bâtiment asismique transparente et économique, au moins toutefois réduit le risque par rapport aux méthodes de mesures courantes. De tels invariants séismiques et géotechniques sont la structure du champ des ondes séismiques déterminé par les lois de la physique et les qualités de résonance de la stratification de sol locale. Le modèle de charge proposé décrit le champ des ondes au site comme composition des sous-processes évolutionnaires stochastiques sur les axes principales variables dans le temps, qui correspondent aux trains des ondes qu'ont une caractéristique de charge respectivement spécifique. Cette caractéristique de charge est décrit dans le domaine temporel et de fréquence et aussi bien que spatial par les fonctions de forme spécifique d'ondes dont les paramètres corrèlent fortement à des dimensions séismiques et géotechniques. 
Une priorité d'oeuvre sont des nouvelles procédures d'estimation, pour la spécification empirique des paramètres de modèle pour la pratique de construction, qui se basent sur la sur la corrélation de croix de composante. La procédure adaptative spectrale d'estimation d'axes principals de corrélation (SAPCA) assure la saisie optimale des trains des ondes spatiaux par la transformation des enregistrements sur des composantes de référence. En même temps - en relation avec une procédure de correction d'angle égal d'axe prin¬ci¬pal - il livre des concises schémas associés de cours d'axes principals, au moyen de ceux peut être identifié fiable des phases de dominance pour trois trains généralisés des ondes. Dans ces phases de dominance, les paramètres du modèle de charge spécifiques pour chaque train des ondes sont déterminés. En outre, un algorithme est indiqué, pour identi¬fier des ondes de Rayleigh dans un enregistrement individuel de l'accélération de sol. La qualification de l'approche de modèle et l'efficience des procédures d'estimation sont vérifiées au moyen d'enregistrements de tremblement fort du séisme á Northridge 1994. Avec l'approche de modèle non-stationnaire présentée, tels des parts de charge du champ des ondes sismiques forts sont décrites plus précisément qui sont sous-estimées dans les modèles de charge stochastiques habituels. Des parts de charge q'ont été supprimés ou modelées forfaitairement jusqu'ici, sont rendues accessibles à l'analyse et à la modélisation pour la première fois. Le modèle stochastique devient physico-transparent concernant les effets les plus importants, produisants une charge sur le bâtiment, et ainsi - malgré la complexité plus élevée - mieux maniable en pratique d'ingénieur. La méthode d'axes principals adaptative spectrale (SAPCA) convient aussi pour des analyses sismologiques dans la proximité d'epicentre, par exemple à l'analyse des processus de rupture et des effets de site topographiques.
Por todo el mundo, los terremotos fuertes son un alto riesgo potencial para los centros urbanos, que está, entre otros, enfrentado por métodos de diseño estructural antisísmico. Estos métodos son basa en asunciones y conocimiento fundamentado sobre la aceleración de tierra sísmica local; los límites son fijados, en el otro lado, por costes adicionales. Balance de los daños de temblores fuertes recientes - también en países industrializados - acentúe la necesidad del refinamiento adicional de conceptos y de métodos de diseño estructural resistente del terremoto. En este trabajo, una nueva aproximación de modelar estocástico de la carga sísmica se presenta, superando la presuposición generalmente de un proceso estocástico unidimensional y estacionario para la aceleración de tierra. La meta avisada es un modelo de la carga específico del sitio y de las ondas que, con la información sobre las invariantes físicas y geotécnicas, permite las aproximaciones transparentes y económicas, en diseño estructural antisísmico; pero por lo menos reduce el riesgo sísmico en la comparación a los métodos usados de diseño. Esos invariantes son la estructura regular del campo de las ondas sísmicas, así como las características de la resonancia de los estratos del suelo en el sitio local. El modelo propuesto de la carga representa el campo local de las ondas sísmicas como composición de los procesos parciales evolutivos estocásticos sobre las hachas principales variables-temporales, que corresponden a los trenes de las ondas con características específicas de la carga. Esas características de la carga son descritas en el dominio de la frecuencia y del tiempo así como en el dominio espacial por las funciones de la forma, que parámetros son especificas por los trenos generalizados de la onda sísmica y correlacionan fuertemente a las entidades sísmicas y geotécnicas. 
La contribución principal de este trabajo son los procedimientos nuevamente desarrollados de la valoración basados en la correlación, que sirven en el contexto de la especificación empírica de los parámetros de modelo para la práctica de construcción. El algoritmo de las Ejes Mayor de la Correlación Espectral-Adaptante (SAPCA) asegura la recogida óptima de los trenes espaciales de la onda transformando los datos registrados sobre componentes de la referencia. En el mismo tiempo - en la conexión con un algoritmo de la corrección para el ángulo del acimut del eje mayor/principal – SAPCA entrega los patrones asociados concisos en el curso del eje principal, que después se utilizan para identificar confiablemente las fases de la dominación para tres trenes generalizados de la onda. Dentro de esas fases de la dominación de la onda, los parámetros específicos de la onda del modelo de la carga se determinan. Además, un algoritmo se presenta para identificar las ondas de Rayleigh en solos mensuras de la aceleración del sitio. La suficiencia del aproximación que modela y la eficacia de los procedimientos de la valoración se verifican por medio de los datos del terremoto catastrófico a Northridge 1994. La aproximación non-estacionaria que modela propuesto describe con más exactitud las porciones de la carga del campo de la onda del terremoto fuerte subestimado en modelos estocásticos convencionales de la carga. Cargue las porciones que se dejan hacia fuera o modelado global hasta ahora se hace disponible para el análisis y modelar para la primera vez. El modelo estocástico gana la transparencia física con respecto a la carga más importante que genera efectos, y por lo tanto será - a pesar de una complejidad más alta - fácil de dirigir en práctica de la ingeniería. El método principal del eje también será útil para los análisis sismológicos en el campo cercano, p. e., para el análisis de los procesos de la ruptura y de los efectos topográficos del sitio.
APA, Harvard, Vancouver, ISO, and other styles
