Academic literature on the topic 'Analyses par point fixe'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Analyses par point fixe.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Analyses par point fixe"

1

Abdel El-Rawas, Nisrine. "Espaces narratifs: l’expression du temps dans l’image fixe." Hawliyat 14 (October 20, 2018): 141–64. http://dx.doi.org/10.31377/haw.v14i0.141.

Abstract:
We will not stray from the beaten path by asserting that narration through images is a phenomenon as old as humankind, and by reaffirming that the first expression of human civilization was iconic before it was linguistic. The narrative potential of the image seems to us such a truism, perhaps rooted in our childhood memories of delighting in a book's abundant illustrations to grasp the story before reading the text, that we rarely bother to analyze its techniques. Moving beyond this stage of childlike pleasure without suppressing it, our objective becomes far more semiotic than philosophical, psychoanalytic, or historical, although we must refer somewhat to the history of painting; it consists in reviewing the elementary technical devices by which the image, essentially monstrative by nature, tells a story. Through the study of a few images with narrative intent (single images on the one hand, comics on the other), this article attempts to explain how a linear chronological narrative is generated by a tabular representational space.
2

Keller, Brandon, Larry Stevens, Colleen Lui, James Murray, and Matthew Yaggie. "Les effets des mouvements oculaires bilatéraux sur la cohérence EEG lors du rappel d'un souvenir plaisant." Journal of EMDR Practice and Research 10, no. 2 (2016): 11E–28E. http://dx.doi.org/10.1891/1933-3196.10.2.11.

Abstract:
In an investigation of the interhemispheric coherence (IhC) model of eye movement desensitization and reprocessing (EMDR) and of the effects of bilateral eye movements (BEMs), 30 subjects were exposed to a fixed dot, a flashing green/red dot, or alternating BEMs while visualizing a pleasant memory. Electroencephalograms (EEGs) were then recorded during an eyes-closed processing phase. The analyses revealed no significant increase in IhC for the BEM condition, but substantial BEM-related increases in intrahemispheric EEG coherence in the delta and low-beta bands in the right and left frontal areas, respectively, and a trend-level BEM-related increase in low-beta coherence in the right frontal area. LORETA neuroimaging was used to visualize the significant amplitude changes corresponding to the observed coherence effects. We discuss the functional significance of these intrahemispheric coherence effects and suggest extending the IhC model to cortical coherence.
3

Therrien, Aude, and Gérard Duhaime. "Le logement social au Nunavik." Recherches amérindiennes au Québec 47, no. 1 (January 15, 2018): 101–10. http://dx.doi.org/10.7202/1042902ar.

Abstract:
This article examines the participation of regional actors in the development and implementation of the Agreement concerning the implementation of the James Bay and Northern Quebec Agreement with respect to housing in Nunavik. This agreement is politically central to understanding the housing situation in Nunavik, since it sets out the roles of each actor and establishes the funding granted for the construction of social housing. Signed in 2000, the agreement has been renewed twice, in 2005 and in 2010. While its purpose at signing was to meet Nunavik's housing needs, the introduction in 2011 of a dispute-resolution mechanism and the failure to renew the five-year agreement in 2015 are signs that it does not meet the objectives sought by all the signatories. Through a historical analysis of the agreement, the authors therefore seek to shed light on the powers that the various regional actors exercise through this policy and on the disputes that divide them.
4

Oba, Marina K., Guido A. Marañón-Vásquez, Fábio L. Romano, and Christiano Oliveira-Santos. "Additional intraoral radiographs may change the judgment regarding the final position of orthodontic mini-implants." Dental Press Journal of Orthodontics 23, no. 2 (April 2018): 54–61. http://dx.doi.org/10.1590/2177-6709.23.2.054-061.oar.

Abstract:
Objective: This study aimed to assess if additional vertical bitewing (VBW) and/or occlusal (OC) radiographs may change an initial judgment, based only on a periapical radiograph (PAR), about the final position of orthodontic mini-implants (OMI). Methods: Subjective and objective analyses were performed. Radiographic images of 26 OMI were divided into four groups: PAR, PAR+VBW, PAR+OC and ALL (PAR+VBW+OC). For the subjective analysis, five observers were asked to assess if the position of each OMI was favorable to its success, using questionnaires with a four-point scale for responses: 1 = definitely not favorable, 2 = probably not favorable, 3 = probably favorable, or 4 = definitely favorable. Each group containing sets of images was presented to them in four different viewing sessions. The objective evaluation compared horizontal distances between the OMI tip and the root nearest to the device in PAR and VBW. Results: Most observers (3 out of 5) changed their initial PAR-based judgment about OMI position when additional radiographs were analyzed. Differences between groups (i.e., PAR vs. PAR+VBW, PAR vs. PAR+OC, and PAR vs. ALL) were statistically significant for these observers. For those who changed their judgment about OMI position, the confidence level could significantly increase, decrease, or be maintained, not indicating a pattern. There was no agreement for distances between the OMI tip and the root nearest to the device in PAR and VBW. Conclusion: Considering the limitations of the study, it is concluded that additional radiographic images may change the judgment about OMI final position without necessarily increasing the degree of certainty of such judgment.
5

Aventin, Catherine. "L’engagement du spectateur de théâtre de rue. Revivre l’espace urbain." Tangence, no. 108 (May 30, 2016): 95–105. http://dx.doi.org/10.7202/1036456ar.

Abstract:
As an architect, Catherine Aventin studies street arts through a multidisciplinary approach, from a spatial and sensory point of view, by way of architectural and urban ambiances. She works, among other things, on the reception of this type of artistic action, where the stage may be a street, a square, or even an entire city. Her article addresses reception through the different components of the public space at work (physical, social, and sensory) and shows what links can be created between the place of performance, the artistic event, and social practices and representations. She also presents strategies developed by spectators during performances, as well as the changes in how spaces are perceived and represented after these shows. For this, she draws on her surveys and analyses conducted in France (particularly in Grenoble and Calais), based mainly on participant observation, covering performances of different scales (from intimate pieces to those addressing an entire city) and of all types (fixed or itinerant, short or lasting several days).
6

van den Hoven, Adrian. "Les Conférences du Havre sur le roman." Sartre Studies International 24, no. 1 (June 1, 2018): 1–14. http://dx.doi.org/10.3167/ssi.2018.240102.

Abstract:
Full article is in French. English abstract: The five lectures of La Lyre havraise (November 1932–March 1933) constitute an attempt to elucidate the techniques of the modern novel. For this, Jean-Paul Sartre considers the distinction between the novel and the récit introduced by Alain and Fernandez. The lectures consider Les Faux-Monnayeurs (The Counterfeiters) by André Gide; Point Counter Point by Aldous Huxley; Ulysses by James Joyce; The Waves, Mrs. Dalloway, and Orlando by Virginia Woolf; Men of Good Will by Jules Romains; and The 42nd Parallel by John Dos Passos. The analysis prefigures the techniques employed by Sartre in the novels published later in his literary career.
7

Passilly, Bruno, Benjamin Lamboul, and Jean-Michel Roche. "Indentation haute fréquence : vers le contrôle non-destructif des structures." Matériaux & Techniques 105, no. 1 (2017): 110. http://dx.doi.org/10.1051/mattech/2017026.

Abstract:
Nanoindentation is commonly used to determine the local mechanical properties of materials. The material is loaded quasi-statically by applying an indenter to the surface to be analyzed. From the curve of the load applied by the indenter versus the indenter displacement, classical models yield the local Young's modulus at every test point [Oliver & Pharr, AIP Conference Proceedings 7 (1992) 1564-1583; Doerner & Nix, J. Mater. Res. 1 (1986) 601-609; Loubet et al., Vickers indentation curves of elastoplastic materials, in American Society for Testing and Materials STP 889, Microindentation Techniques in Materials Science and Engineering, Blau & Lawn eds, 1986, pp. 72-89]. This test is mostly used on small areas of material (<1 cm2), which must be polished and flat so as not to distort the measurement, but it is not suited to structural parts such as sheets or composite sandwich panels (>1000 cm2). By extension of the CSM (Continuous Stiffness Measurement) method [Asif et al., Rev. Sci. Instrum. 70 (1999) 2408-2413], the indenter can serve as a vibration generator. To this end, the indenter is mounted on a stack of piezoelectric ceramics and applied to the surface under analysis at a fixed load of 1000 mN, and is driven to oscillate at a frequency of 5 kHz with a 10 V supply. The ultrasonic waves thus generated, known as Lamb waves, induce a nanometric displacement of the surface that is detectable by a laser vibrometer. It is then possible to follow the propagation of the wavefront and to detect its interactions with possible defects in the inspected structure [Boro Djordjevic, Quantitative ultrasonic guided wave testing of composites, The 39th Annual Review of Progress, 2013]. The result is a complete map of the surface. The indenter can also be used as a receiver of the generated wave: positioning receiving indenters at several locations on the structure makes it possible to measure the time of flight of the wave between the emitting and receiving indenters. Precise knowledge of the distance between the emission and reception points allows the wave velocities to be measured as a function of the material's anisotropy, which can ultimately be used to recover its elastic constants.
8

Prémont, Marie-Claude. "Les transferts de technologie nord-sud en matière de télécommunications par satellites." Les Cahiers de droit 27, no. 4 (April 12, 2005): 853–89. http://dx.doi.org/10.7202/042773ar.

Abstract:
While the industrialized countries are already involved in the new information age, the developing countries are still trying to achieve some measure of industrialization. Although by themselves the satellite telecommunication systems will not solve all the problems of developing countries, they could nevertheless facilitate the short-circuiting of a number of preliminary steps leading to the new communications era. However, as most of the knowledge concerning these satellite systems — from their design until the final stage of production — is concentrated in industrialized nations, developing countries are left in a vulnerable and dependent position. This article analyses some of the established and evolving legal norms towards the promotion of technological parity between the industrialized and non-industrialized nations; these can be grouped under five specific headings: 1. « Space Law »; 2. « New International Economic Order »; 3. « Right to Communicate »; 4. « Code of Conduct on Technological Transfer »; 5. « New International Law of Survival ». Following these legal considerations, we analyse the types and means of technological transfers taking place between industrialized and non-industrialized countries. In this connection, it is important to distinguish the transfer of specific equipment alone from the transfer of its engineering. We examine these transfers, first as initiatives of government-sponsored agencies and second as transactions taking place on the free international market. Our study makes it evident that while non-industrialized countries have access to satellite communications equipment, the same does not apply to its engineering. Will the new rules of international law be capable of launching a free flow of technological knowledge between the industrialized and non-industrialized countries? On this point, we express our reservations.
9

Libus, Jiří, and Oldřich Mauer. "Forest regeneration under standards of pedunculate oak (Quercus robur L.)." Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis 57, no. 5 (2009): 197–204. http://dx.doi.org/10.11118/actaun200957050197.

Abstract:
The objective of this work was to establish the reasons for the impaired vitality of woody and herbaceous vegetation growing under standards of pedunculate oak (Quercus robur L.). The paper analyzes the influence of insolation, root competition, soil moisture content and the chemical composition of soil on the growth of pedunculate oak seedlings under standards. The analyses included three standards of pedunculate oak aged over 150 years, growing at an altitude of 160 m a.s.l. Controls were plots located at a distance of 20 m from the standards. The conclusions following from the analyses are as follows: The number of seedlings emerged under the standards was lower than the number of seedlings emerged in the open area. Shoot height as well as root collar diameter of seedlings under the standards were lower than in the control seedlings. Total cover of the herbaceous layer and the height of herbs under the standards were lower than in the open area. The impaired vitality of woody and herbaceous vegetation resulted from a great amount of the fine roots of the pedunculate oak standards, withdrawing water down to the wilting point. The amount of photosynthetically active radiation (PAR) under the standards was sufficient for the growth of plants.
10

Srinivasan, P. S., and P. Veeramani. "On best proximity pair theorems and fixed-point theorems." Abstract and Applied Analysis 2003, no. 1 (2003): 33–47. http://dx.doi.org/10.1155/s1085337503209064.

Abstract:
The significance of fixed-point theory stems from the fact that it furnishes a unified approach and constitutes an important tool in solving equations which are not necessarily linear. On the other hand, if the fixed-point equation Tx = x does not possess a solution, it is contemplated to resolve the problem of finding an element x such that x is in proximity to Tx in some sense. Best proximity pair theorems analyze the conditions under which the optimization problem min_{x∈A} d(x, Tx) has a solution. In this paper, we discuss the difference between best approximation theorems and best proximity pair theorems. We also discuss an application of a best proximity pair theorem to the theory of games.
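Since the equation Tx = x is the core construction behind this whole topic, here is a minimal sketch (ours, not from the paper) of the classical fixed-point iteration for a contraction; the map, starting point, and tolerance are illustrative assumptions.

```python
# Minimal sketch of fixed-point iteration for a contraction T(x) = cos(x).
# The map, starting point, and tolerance are illustrative, not from the paper.
import math

def fixed_point(T, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- T(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

x_star = fixed_point(math.cos, x0=1.0)
print(x_star, abs(math.cos(x_star) - x_star))  # x* ~ 0.7390851, tiny residual
```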

Dissertations / Theses on the topic "Analyses par point fixe"

1

Peyre, Thierry. "Evaluation de performances sur le standard IEEE802.16e WiMAX." PhD thesis, Université d'Avignon, 2008. http://tel.archives-ouvertes.fr/tel-00796477.

Abstract:
Recent decades have seen the peak of over-the-air transmission, and the coming ones will undoubtedly see the intense development and deployment of wireless communication systems. Already, it is possible to communicate by radio over short and very short distances (LAN and PAN). People have become familiar with the Bluetooth interfaces (IEEE802.15) present in most communicating devices (laptops, phones, PDAs, etc.). Households now equip themselves easily and cheaply with Wi-Fi interfaces (IEEE802.11) to enjoy nomadic use of their Internet access. The strong growth of the Internet-access market, combined with that of the mobile-phone market, has thus accustomed a broad spectrum of users to wireless communication. This sociological and financial context encourages the arrival of new solutions answering latent needs. Among these, the market highlights the lack of a medium-range communication system (MAN). Ad hoc networks can answer this kind of need, but to date their performance is too low for users' needs and depends too strongly on the density of the nomadic machines. Through its IEEE802.16 standard, the IEEE consortium therefore seeks to provide a complete medium-range (MAN) wireless communication system. Also called WiMAX, this system is based on an architecture composed of a base station (BS) and numerous mobile users (SS). The IEEE802.16 standard defines the characteristics of the physical and MAC layers and describes all the interactions and events that can take place between the base station and the mobile stations. Finally, the standard provides various parameters and variables used by the communication mechanisms. Like any emerging standard, IEEE802.16 does not enjoy a state of the art as developed as that of IEEE802.11, for example, so many studies and ideas remain to be developed. We first give a broad review of the WiMAX standard, and of IEEE802.16e in particular, together with a state of the art of the work dealing with the aspects and perspectives related to our study. We then propose a novel performance model for IEEE802.16e communications. Through this model, we develop a general and exhaustive study of the main communication parameters, making explicit their impact and the influence of their possible evolutions; from this, we assess the relevance of each parameter and propose alternative configurations. In addition, we propose a novel mechanism promoting quality of service (QoS) at the MAC layer, and we develop an original connection-establishment principle that favors access for delay-sensitive communications. In a final part, we determine the capacity of an IEEE802.16 system to handle user arrivals and departures, together with a performance study of a new admission-control algorithm. This admission algorithm pursues several objectives: preventing resource starvation of the lowest-priority traffic, and favoring user admission while maintaining optimal management of the radio resource. Our study leads to a model and a critique of the variations of the parameters associated with this new algorithm.
We then integrate the principle of mobility, whereby users can move within a cell, together with original mechanisms to ensure continuity of service for mobile users.
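The admission-control idea lends itself to a toy illustration; here is a minimal sketch (ours, not the thesis's algorithm) of a CAC policy that reserves headroom so low-priority traffic is not starved. The capacity, demands, and reservation rule are illustrative assumptions.

```python
# Minimal toy sketch of a Connection Admission Control (CAC) policy: keep
# headroom so low-priority flows are not starved by high-priority calls.
CAPACITY = 100          # abstract radio-resource units
RESERVED_LOW = 15       # headroom kept for low-priority flows

def admit(load, demand, priority):
    """Admit a call iff it fits; high-priority calls may not eat into
    the headroom reserved for low-priority traffic."""
    budget = CAPACITY - (RESERVED_LOW if priority == "high" else 0)
    return load + demand <= budget

load = 0
for demand, prio in [(30, "high"), (40, "high"), (20, "low"), (20, "high")]:
    if admit(load, demand, prio):
        load += demand
print(load)  # 90: the last high-priority call is rejected, the low one fits
```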
2

Peyre, Thierry. "Evaluation de performances sur le standard IEEE802.16e WiMAX." Avignon, 2008. https://tel.archives-ouvertes.fr/tel-00796477.

Abstract:
The last decade witnessed the peak of hertzian communications, and the following ones will undoubtedly see the intensive deployment and development of all wireless means of transmission. Thanks to cheaper equipment, people are now accustomed to all sorts of connected objects: laptops, smartphones, pads and, more recently, connected video displays and audio devices. All these devices maintain access to the Internet, even in nomadic use. This economic and sociological context promotes the emergence of new solutions meeting latent needs by offering better performance. Consumer studies particularly highlight the lack of a transmission solution for Metropolitan Area Networks (MAN). Ad hoc wireless solutions could satisfy MAN needs, but their throughput depends too strongly on customer capacity and density over the MAN coverage. Through its IEEE802.16e standard, the IEEE consortium seeks to provide a wireless transmission technology designed specifically for mid-range networks. Known as WiMAX, this system is based on a point-to-multipoint architecture: the standard gathers a Base Station (BS) and Subscriber Stations (SS), defines for both the physical and MAC layers of the OSI model, and proposes a set of default parameters for these two layers. As any emerging standard, IEEE802.16e suffers from a lack of literature (works, studies, and enhancement proposals); more studies are explicitly needed to refine and tune it to better answer the specific issues met in the actual context of transmission. We first present the IEEE802.16e specifications at large and highlight the main related state of the art. Second, we propose an original performance model, the first to take into account all the MAC-layer parameters of the standard. Based on this model, we conduct a general and exhaustive performance study of each communication parameter; this study highlights the importance of each parameter and proposes enhancements according to the type of Quality of Service (QoS). In addition, we introduce a call-engagement mechanism that respects QoS at the MAC layer. In a last part, we evaluate the IEEE802.16e capacity to manage incoming and leaving calls by introducing a new Connection Admission Control (CAC). The CAC algorithm achieves several objectives: it prevents resource starvation for the lowest-priority flows and optimizes radio-resource consumption to facilitate user access. Our study concludes with a new capacity model and algorithm for the CAC; moreover, this last proposal prevents call drops due to user mobility.
3

Deest, Gaël. "Implementation trade-offs for FPGA accelerators." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S102/document.

Abstract:
Hardware acceleration is the use of custom hardware architectures to perform some computations faster or more efficiently than on general-purpose hardware. Accelerators have traditionally been used mostly in resource-constrained environments, such as embedded systems, where resource efficiency is paramount. Over the last fifteen years, with the end of empirical scaling laws, they also made their way to datacenters and High-Performance Computing environments. FPGAs constitute a convenient implementation platform for such accelerators, allowing subtle, application-specific trade-offs between all performance metrics (throughput/latency, area, energy, accuracy, etc.). However, identifying good trade-offs is a challenging task, as the design space is usually extremely large. This thesis proposes design methodologies to address this problem. First, we focus on performance-accuracy trade-offs in the context of floating-point to fixed-point conversion. Using fixed-point arithmetic instead of floating-point is an effective way to reduce hardware resource usage, but comes at a price in numerical accuracy. The validity of a fixed-point implementation can be assessed using either numerical simulations or analytical accuracy models derived from the algorithm. Compared to simulation-based methods, analytical approaches enable more exhaustive design-space exploration and can thus increase the quality of the final architecture. However, they are currently only applicable to limited sets of algorithms. In the first part of this thesis, we extend such techniques to multi-dimensional linear filters, such as image-processing kernels. Our technique is implemented as a source-level analysis using techniques from the polyhedral compilation toolset, and validated against simulations with real-world input. In the second part of this thesis, we focus on iterative stencil computations, a naturally arising pattern found in many scientific and embedded applications. Because of this diversity, there is no single best architecture for stencils: each algorithm has unique computational features (update formula, dependences) and each application has different performance constraints and requirements. To address this problem, we propose a family of hardware accelerators for stencils, featuring carefully chosen design knobs, along with simple performance models to drive the exploration. Our architecture is implemented as an HLS-optimized code-generation flow, and performance is measured with actual execution on the board. We show that these models can be used to identify the most interesting design points for each use case.
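As a rough illustration of the accuracy cost that this thesis quantifies, here is a minimal sketch (ours, not the thesis's tooling) of floating-point to fixed-point quantization with saturation; the format parameters and test signal are illustrative assumptions.

```python
# Minimal sketch: quantize floats to a signed fixed-point format with iwl
# integer bits (including sign) and fwl fractional bits, saturating on
# overflow. Word-lengths and the test signal are illustrative assumptions.
def to_fixed(x, iwl, fwl):
    scale = 1 << fwl
    lo = -(1 << (iwl + fwl - 1))        # most negative integer code
    hi = (1 << (iwl + fwl - 1)) - 1     # most positive integer code
    code = max(lo, min(hi, round(x * scale)))
    return code / scale

signal = [0.1, -1.7, 4.2]
fixed = [to_fixed(v, iwl=3, fwl=5) for v in signal]
print(fixed)   # [0.09375, -1.6875, 3.96875] -- 4.2 saturates (overflow),
               # the other samples carry only small quantization error
```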
4

Asllanaj, Fatmir. "Etude et analyse numérique des transferts de chaleur couplés par rayonnement et conduction dans les milieux semi-transparents : application aux milieux fibreux." Nancy 1, 2001. http://docnum.univ-lorraine.fr/public/SCD_T_2001_0208_ASLLANAJ.pdf.

Abstract:
This work is devoted to the study and numerical analysis of coupled radiative and conductive heat transfer in semi-transparent media. The model consists of a system of two coupled partial differential equations: the integro-differential radiative transfer equation (RTE), whose unknown is the radiance, and a nonlinear heat equation governing the temperature in the medium. In the first chapter of the thesis, we detail the model together with its simplifying assumptions. In the second chapter, we prove the existence and uniqueness of solutions of the coupled system in the steady-state regime. The third chapter is devoted to the numerical solution of the steady-state equations. To solve the RTE, we discretize the angular space along several directions and use a numerical quadrature to approximate the integral in the equation. This yields a first-order linear differential system, which we solve by three different methods. The second equation is solved with a finite-difference scheme combined with a Kirchhoff transformation. The coupling between the two equations is solved by a fixed-point method. In the fourth chapter, we study the convergence of a numerical scheme in the steady-state regime. In the fifth chapter, we present a numerical method for solving the coupled system in the transient regime, on the one hand when temperatures are imposed at the boundaries and, on the other hand, when the medium is subjected to flux conditions. The heat equation is solved in space by the P2 finite-element method. The differential system in time is solved by an implicit Runge-Kutta method suited to stiff equations. The last chapter of this work analyzes the numerical results obtained by applying the simulation to an insulating material made of silica fibers.
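To illustrate the fixed-point coupling strategy mentioned in the abstract, here is a minimal toy sketch (ours, not the thesis's solver) that alternates between a stand-in "radiative source" update and a stand-in "heat solve" until the temperature stops changing; both functions are illustrative scalar stand-ins for the RTE and heat-equation solvers.

```python
# Minimal sketch of a fixed-point coupling loop: alternately update the
# radiative source from T and re-solve for T. Both "solvers" are toy
# scalar stand-ins, not the thesis's RTE/heat solvers.
def radiative_source(T):
    return 0.1 * (1.0 - (T / 400.0) ** 4)   # toy monotone radiative term

def solve_heat(q):
    return 300.0 + 200.0 * q                # toy conduction solve given source

T, tol = 300.0, 1e-9
for _ in range(200):
    T_new = solve_heat(radiative_source(T))
    if abs(T_new - T) < tol:
        break
    T = T_new
print(T)  # converged temperature of the coupled toy model
```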
5

Aimé, Thierry. "Une sémantique par point fixe pour les systèmes de réécriture orthogonaux." Bordeaux 1, 1996. http://www.theses.fr/1996BOR10622.

Abstract:
Rewriting systems used as a programming language raise efficiency problems. We propose a semantics for rewrite rules capable of driving optimizations by specialization. The semantics of a rule is represented by the behavior classes it induces on its instances, behavior understood in terms of reduction sequences. Then, using a narrowing relation constrained by unification problems, we show how to express our semantics in the form of a top-down collecting semantics, ultimately allowing an approximation by abstract interpretation.
6

Gemeiner, Pascale. "Détermination par voie radiométrique de la température de congélation du cuivre, point fixe de l'échelle internationale de température." Paris, CNAM, 1999. http://www.theses.fr/1999CNAM0318.

Abstract:
The International Temperature Scale (ITS-90) is realized above 960 °C by the freezing temperatures of metals (silver, gold, copper). Using Planck's law, measurements have already been made, around 700 nm, by determining the radiance of blackbodies at the freezing temperature of these metals. The purpose of this work is to determine the spectral radiance of a blackbody at the freezing temperature of copper at 1550 nm. A radiance standard source was built and characterized to serve as a reference. Consisting of a laser diode coupled to an integrating sphere, it is monochromatic, Lambertian, compact, and transportable. Its temporal stability was studied and improved, and a radiation map was produced. A germanium photodiode was studied in terms of linearity and of the thermal, spatial, and spectral variations of its sensitivity. Using an electrical-substitution cryogenic radiometer, the national reference for sensitivity, this photodiode was calibrated and then used to measure the radiance of the standard source. This source then served as a reference to determine the spectral radiance of a blackbody at the freezing temperature of copper. The comparison was made with an instrument that fixes a constant geometric étendue and performs spectral selection through a monochromator. The determination of the transfer function does not introduce an uncertainty greater than 0.1 K. The effects of stray light, polarization, and spectral filtering were studied in detail. The temperature obtained, 1357.07 K ± 1.5 K, in agreement with the ITS-90 value (1357.77 K), has an uncertainty greater than the objective we had set at the outset (± 0.1 K). Nevertheless, this work made it possible to analyze all the sources of uncertainty and constitutes an essential step toward reducing the uncertainty of this thermodynamic temperature to 0.1 K.
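For reference, the radiometric comparison rests on Planck's law; below is a minimal sketch (ours, not from the thesis) evaluating the blackbody spectral radiance at 1550 nm and the ITS-90 copper freezing temperature quoted in the abstract.

```python
# Minimal sketch: blackbody spectral radiance via Planck's law at the
# ITS-90 copper freezing point (1357.77 K) and 1550 nm.
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck_radiance(lam, T):
    """Spectral radiance L(lambda, T) in W per m^2 per sr per m."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

print(planck_radiance(1550e-9, 1357.77))  # on the order of 1e10 W/(m^2 sr m)
```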
7

De Sousa Duarte, Marisa Emanuel. "Mesure au coeur d'un réacteur de profils spatiaux et temporels sur les phases liquide et solide par analyses spectroscopiques." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE1128.

Abstract:
This thesis is in the field of heterogeneous catalysis for applications in refining and petrochemistry. As the object of study, we focused on the hydrotreatment catalysts used to remove impurities such as sulfur from crude oil. These hydrotreatment catalysts consist of alumina-supported molybdenum sulfides, generally promoted by nickel or cobalt. The aim of this work was to understand the phenomena occurring during the sulfidation and stabilization (coking, passivation, evolution of the sulfide phase) of the catalysts under the diesel hydrodesulfurization (HDS) reaction, which aims to reduce sulfur content; a better understanding of these phenomena would ease the development of new generations of more efficient catalysts. To date, however, no technique allows the catalyst to be characterized during liquid-phase sulfidation (with or without DMDS) and testing under industrial temperature and pressure conditions. This thesis therefore develops an operando characterization of this type of reaction based on Raman spectroscopy, one of the few laboratory techniques compatible with the constraints above, to allow a spatial and temporal follow-up of the liquid and the solid. A unit built around a transparent cylindrical reactor was designed to follow the solid and liquid phases during catalyst sulfidation and the HDS reaction, and a methodology was developed in parallel to focus and acquire good-quality spectra through the reactor. With this setup, we were able to access for the first time time- and space-resolved profiles of the solid phase under the activation and reaction conditions of diesel hydrodesulfurization (30 bar, 350 °C). Despite a fluorescence signal, probably linked to the radical decomposition of the sulfur precursor (DMDS) between 200 and 260 °C, it was possible to follow over time, at fixed positions, the disappearance of the oxide phase and the evolution of the sulfide phase and of coke; these results notably made it possible to study the impact of the feed on the sulfidation kinetics. Spatial profiles, for example along the reactor, are more challenging and will require methods to compensate for the Raman signal-intensity variations induced by the random positioning of the catalyst grains and by the flow. With respect to the liquid phase, a multivariate approach based on chemometric tools was applied to relate the intrinsic fluorescence emission of many diesels to some of their properties (sulfur and aromatics content, density, etc.). The models were developed from spectra acquired at room temperature and atmospheric pressure, but their satisfactory performance encourages extending the approach to HDS reaction conditions, which remains a perspective of this work.
8

Taftaf, Ala. "Développements du modèle adjoint de la différentiation algorithmique destinés aux applications intensives en calcul." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4001/document.

Abstract:
The adjoint mode of Algorithmic Differentiation (AD) is particularly attractive for computing gradients. However, this mode needs to use the intermediate values of the original simulation in reverse order, at a cost that increases with the length of the simulation. AD research looks for strategies to reduce this cost, for instance by taking advantage of the structure of the given program. In this work, we consider on the one hand the frequent case of fixed-point loops, for which several authors have proposed adapted adjoint strategies. Among these strategies, we select the one introduced by B. Christianson. We specify the selected method further and describe the way we implemented it inside the AD tool Tapenade. Experiments on a medium-size application show a major reduction of the memory needed to store trajectories. On the other hand, we study checkpointing in the case of MPI parallel programs with point-to-point communications. We propose techniques to apply checkpointing to these programs, provide elements of proof of their correctness, and experiment with them on representative codes. This work was carried out within the European project "AboutFlow".
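For intuition about why fixed-point loops admit cheaper adjoints, here is a minimal sketch (ours, not Tapenade's implementation) of the underlying idea: once x* solves x = f(x, theta), the derivative dx*/dtheta can itself be obtained by an adjoint fixed-point iteration instead of storing the whole forward trajectory; the function and constants are illustrative.

```python
# Minimal sketch: gradient of a fixed point x* = f(x*, theta) without
# storing the forward trajectory. f, x0, and theta are illustrative.
def f(x, theta):
    return 0.5 * x + theta  # contraction in x; exact fixed point x* = 2*theta

def solve_fixed_point(theta, x=0.0, n=60):
    for _ in range(n):
        x = f(x, theta)
    return x

theta = 0.3
x_star = solve_fixed_point(theta)          # -> 0.6
# Adjoint iteration: dx*/dtheta = df/dx * dx*/dtheta + df/dtheta, solved by
# the same kind of fixed-point loop (here df/dx = 0.5, df/dtheta = 1).
dfdx, dfdtheta = 0.5, 1.0
g = 0.0
for _ in range(60):
    g = dfdx * g + dfdtheta                # converges to 1 / (1 - 0.5) = 2
print(x_star, g)                           # 0.6, 2.0 (matches x* = 2*theta)
```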
9

Nehmeh, Riham. "Quality Evaluation in Fixed-point Systems with Selective Simulation." Thesis, Rennes, INSA, 2017. http://www.theses.fr/2017ISAR0020/document.

Abstract:
Time-to-market and implementation cost are high-priority considerations in the automation of digital hardware design. Nowadays, digital signal processing applications use fixed-point architectures due to their advantages in terms of implementation cost; thus, floating-point to fixed-point conversion is mandatory. The conversion process consists of two parts corresponding to the determination of the integer part word-length and the fractional part word-length. The refinement of fixed-point systems requires optimizing data word-lengths to prevent overflows and excessive quantization noise while minimizing implementation cost. Applications in the image and signal processing domains are tolerant to errors if their probability or their amplitude is small enough. Numerous research works focus on optimizing the fractional part word-length under an accuracy constraint. Reducing the number of bits for the fractional part word-length leads to a small error compared to the signal amplitude. Perturbation theory can be used to propagate these errors inside the systems, except for unsmooth operations, like decision operations, for which a small error at the input can lead to a high error at the output. Likewise, optimizing the integer part word-length can significantly reduce the cost when the application is tolerant to a low probability of overflow. Overflows lead to errors with high amplitude and thus their occurrence must be limited. For word-length optimization, the challenge is to evaluate efficiently the effect of overflow and unsmooth errors on the application quality metric. The high amplitude of these errors requires using simulation-based approaches to evaluate their effects on the quality. In this thesis, we aim at accelerating the process of quality-metric evaluation. We propose a new framework using selective simulations to accelerate the simulation of overflow and unsmooth error effects. This approach can be applied to any C-based digital signal processing application. Compared to complete fixed-point simulation-based approaches, where all the input samples are processed, the proposed approach simulates the application only when an error occurs. Indeed, overflows and unsmooth errors must be rare events to maintain the system functionality; consequently, selective simulation allows reducing significantly the time required to evaluate the application quality metric. Moreover, we focus on optimizing the integer part, which can significantly decrease the implementation cost when a slight degradation of the application quality is acceptable. Indeed, many applications are tolerant to overflows if the probability of overflow occurrence is low enough. Thus, we exploit the proposed framework in a new integer word-length optimization algorithm. The combination of the optimization algorithm and the selective simulation technique allows decreasing the optimization time significantly.
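To convey the selective-simulation idea, here is a minimal toy sketch (ours, not the thesis's framework): a cheap detector scans all samples, and the costly error model runs only on the rare samples that overflow. The application, fixed-point format, and data are illustrative assumptions.

```python
# Minimal sketch of selective simulation: fully evaluate the error model
# only for the rare samples whose fixed-point value overflows.
IWL = 4                                   # integer word-length (incl. sign)
LIMIT = float(1 << (IWL - 1))             # overflow threshold for |value|

def reference(x):
    return 2.5 * x + 0.1                  # ideal (floating-point) application

def overflows(x):
    return abs(reference(x)) >= LIMIT     # cheap detector for rare events

samples = [0.2, -1.1, 3.6, 0.4, -3.3, 0.9]
quality_loss = 0.0
for x in samples:
    if overflows(x):                      # rare case: run the costly model
        y_fix = max(-LIMIT, min(LIMIT - 2**-10, reference(x)))  # saturation
        quality_loss += (reference(x) - y_fix) ** 2
    # common case: no overflow, fixed-point output tracks the reference
    # closely, so no full simulation is needed for this sample.
print(quality_loss)
```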
10

Petitjean, Julien. "Contributions au traitement spatio-temporel fondé sur un modèle autorégressif vectoriel des interférences pour améliorer la détection de petites cibles lentes dans un environnement de fouillis hétérogène Gaussien et non Gaussien." Thesis, Bordeaux 1, 2010. http://www.theses.fr/2010BOR14157/document.

Abstract:
This dissertation deals with space-time adaptive processing in the radar field. To improve detection performance, this approach consists in maximizing the ratio between the target power and the interference power, i.e., that of the thermal noise and the clutter. Several variants of this algorithm exist; one of them is based on multichannel autoregressive modelling of the interferences. Its main difficulty lies in the estimation of the autoregressive matrices from training data, and this estimation problem guides our research. Our contribution is twofold. On the one hand, when the thermal noise is assumed negligible compared to the non-Gaussian clutter, the autoregressive matrices are estimated with the fixed-point method, making the algorithm robust against a non-Gaussian clutter distribution. On the other hand, we propose a new model of the interferences that separates the clutter and the thermal noise: the clutter is considered as a Gaussian multichannel autoregressive process disturbed by white thermal noise. New techniques for estimating the autoregressive matrices are then developed. The first is a blind block estimation based on errors-in-variables methods, which keeps the estimation of the autoregressive matrices robust at a low target-to-clutter power ratio (< 5 dB). Then, recursive methods are developed, based on Kalman-type approaches, namely the extended Kalman filter and the sigma-point Kalman filters (UKF and CDKF), as well as on the H∞ filter. A comparative study on synthetic and real data, with Gaussian and non-Gaussian clutter, is carried out to show the relevance of the different estimators in terms of probability of detection.
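In the radar literature, the "fixed-point method" for covariance estimation in non-Gaussian clutter usually refers to a Tyler-type iteration; here is a minimal sketch under that assumption (ours, not the dissertation's exact estimator), run on synthetic heavy-tailed data.

```python
# Minimal sketch of the fixed-point (Tyler-type) covariance estimator often
# used for non-Gaussian clutter; we assume this is the estimator meant.
import numpy as np

rng = np.random.default_rng(0)
p, N = 3, 500                             # dimension, number of snapshots
X = rng.standard_normal((N, p)) * rng.exponential(1.0, (N, 1))  # heavy tails

def tyler_fixed_point(X, n_iter=50):
    """Iterate Sigma <- (p/N) * sum_i x_i x_i^T / (x_i^T Sigma^-1 x_i)."""
    N, p = X.shape
    sigma = np.eye(p)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        w = np.einsum('ij,jk,ik->i', X, inv, X)   # quadratic form per row
        sigma = (p / N) * (X / w[:, None]).T @ X
        sigma *= p / np.trace(sigma)              # fix the arbitrary scale
    return sigma

print(tyler_fixed_point(X))
```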
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Analyses par point fixe"

1

Newman, Mark. Dynamical systems on networks. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198805090.003.0017.

Full text
Abstract:
An introduction to the theory of dynamical systems on networks. This chapter starts with a short introduction to classical (non-network) dynamical systems theory, including linear stability analysis, fixed points, and limit cycles. Dynamical systems on networks are introduced, focusing initially on systems with only one variable per node and progressing to multi-variable systems. Linear stability analysis is developed in detail, leading to master stability conditions and the connection between stability and the spectral properties of networks. The chapter ends with a discussion of synchronization phenomena, the stability of limit cycles, and master stability conditions for synchronization.
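Illustrative sketch (not from the cited chapter): the linear stability analysis described above reduces to checking the eigenvalues of the Jacobian at a fixed point, and for node dynamics coupled through an adjacency matrix the stability condition ties directly to the network spectrum. The dynamics below are an invented example with one variable per node.

```python
import numpy as np

# dx_i/dt = -x_i + sum_j A_ij * tanh(x_j) on a 6-node ring network
n = 6
A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)

x_star = np.zeros(n)          # x* = 0 is a fixed point, since tanh(0) = 0

# Jacobian at x*: J_ij = -delta_ij + A_ij * sech^2(x*_j)
J = -np.eye(n) + A / np.cosh(x_star) ** 2

# Linear stability: every eigenvalue of J must have negative real part.
# Here the eigenvalues are -1 + mu with mu in the adjacency spectrum:
# the link between stability and network spectra noted in the abstract.
lam = np.linalg.eigvals(J)
print("max Re(lambda) =", lam.real.max())
print("fixed point stable?", bool(lam.real.max() < 0))
```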
APA, Harvard, Vancouver, ISO, and other styles
2

Peterson, Martin. The Ethics of Technology. Oxford University Press, 2017. http://dx.doi.org/10.1093/acprof:oso/9780190652265.001.0001.

Full text
Abstract:
This book develops an analytic ethics of technology based on a geometric account of moral principles. The author argues that geometric concepts such as points, lines, and planes are useful for clarifying the structure and scope of five moral principles: (1) the cost-benefit principle, (2) the precautionary principle, (3) the sustainability principle, (4) the fairness principle, and (5) the autonomy principle. The geometric approach derives its normative force from the Aristotelian dictum that we should "treat like cases alike." The more similar a pair of cases are, the more reason we have to treat them alike. These similarity relations can be analyzed and represented geometrically. In such a geometric representation, the distance in moral space between cases reflects their degree of similarity. The more similar a pair of cases are from a moral point of view, the shorter the distance between them. To assess to what extent the geometric method is practically useful for analyzing real-world cases, the author has conducted three experimental studies based on data gathered from academic philosophers in the United States and Europe and engineering students at Texas A&M University. The results indicate that experts (philosophers) and laypeople (engineering students) do in fact apply geometrically construed moral principles in roughly, but not exactly, the manner advocates of geometrically construed principles believe they ought to be applied.
APA, Harvard, Vancouver, ISO, and other styles
3

van Craenenbroeck, Jeroen, and Tanja Temmerman, eds. The Oxford Handbook of Ellipsis. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198712398.001.0001.

Full text
Abstract:
This handbook is the first volume to provide a comprehensive, in-depth, and balanced discussion of ellipsis phenomena, whereby a perceived interpretation is fuller than would be expected based solely on the presence of linguistic forms. Natural language abounds in these apparently incomplete expressions, such as I laughed but Ed didn't, in which the final portion of the sentence, the verb 'laugh', remains unpronounced but is still understood. The range of phenomena involved raises general and fundamental questions about the workings of grammar, but also constitutes a treasure trove of fine-grained points of inter- and intralinguistic variation. The volume is divided into four parts. In the first, the authors examine the role that ellipsis plays and how it is analyzed in different theoretical frameworks and linguistic subdisciplines, such as HPSG, construction grammar, inquisitive semantics, and computational linguistics. Chapters in the second part highlight the usefulness of ellipsis as a diagnostic tool for other linguistic phenomena, including movement, islands, and codeswitching, while Part III focuses instead on the types of elliptical constructions found in natural language, such as sluicing, gapping, and null complement anaphora. Finally, the last part of the book contains case studies that investigate elliptical phenomena in a wide variety of languages, including Dutch, Japanese, Persian, and Finnish Sign Language.
APA, Harvard, Vancouver, ISO, and other styles
4

Cloonan, William. Frères Ennemis. Liverpool University Press, 2018. http://dx.doi.org/10.3828/liverpool/9781786941329.001.0001.

Full text
Abstract:
Frères Ennemis focuses on Franco-American tensions as portrayed in works of literature. An Introduction is followed by nine chapters, each centred on a French or American literary text which shows the evolution/devolution of the relations between the two nations at a particular point in time. While the heart of the analysis consists of close textual readings, social, cultural and political contexts are introduced to provide a better understanding of the historical reality influencing the individual novels, a reality to which these novels are also responding. Chapters One through Five, covering a period from the mid-1870s to the end of the Cold War, discuss significant aspects of the often fraught relationship in part from the theoretical perspective of Roland Barthes’ theory of modern myth, described in his Mythologies. Barthes’ theory helps situate Franco-American tensions in a paradigmatic structure, which remains supple enough to allow for shifts and reversals within the paradigm. Subsequent chapters explore new French attitudes toward the powerful, potentially dominant influence of American culture on French life. In these sections I argue that recent French fiction displays more openness to the American experience than has existed in the past, and contrast this overture to the new with the relatively static, even indifferent attitude of American writers toward French literature.
APA, Harvard, Vancouver, ISO, and other styles
5

Nayyar, Deepak, ed. Asian Transformations. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198844938.001.0001.

Full text
Abstract:
Gunnar Myrdal published his magnum opus, Asian Drama: An Inquiry into the Poverty of Nations, in 1968. He was deeply pessimistic about development prospects in Asia. The fifty years since then have witnessed a remarkable social and economic transformation in Asia – even if it has been uneven across countries and unequal between people – that would have been difficult to imagine, let alone predict at the time. This book analyses the fascinating story of economic development in Asia spanning half a century. The study is divided into three parts. The first part sets the stage by discussing the contribution of Gunnar Myrdal, the author, and Asian Drama, the book, to the debate on development then and now, and by providing a long-term historical perspective on Asia in the world. The second part comprises cross-country thematic studies on governments, economic openness, agricultural transformation, industrialization, macroeconomics, poverty and inequality, education and health, employment and unemployment, institutions and nationalisms, analysing processes of change while recognizing the diversity in paths and outcomes. The third part is constituted by country-studies on China, India, Indonesia and Vietnam, and sub-region studies on East Asia, Southeast Asia and South Asia, highlighting turning points in economic performance and analysing factors underlying success or failure. This book, with in-depth studies by eminent economists and social scientists, is the first to examine the phenomenal changes which are transforming economies in Asia and shifting the balance of economic power in the world, while reflecting on the future prospects in Asia over the next twenty-five years. It is a must-read.
APA, Harvard, Vancouver, ISO, and other styles
6

Hogan, Patrick Colm. Style in Narrative. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780197539576.001.0001.

Full text
Abstract:
Style has often been understood both too broadly and too narrowly. In consequence, it has not defined a psychologically coherent area of study. In the opening chapter, Hogan first defines style so as to make possible a systematic theoretical account through cognitive and affective science. This definition stresses that style varies by both scope and level—thus, the range of text or texts that may share a style (from a single passage to an historical period) and the components of a work that might involve a shared style (including story, narration, and verbalization). Hogan illustrates the main points of this chapter by reference to several works, prominently Woolf’s Mrs. Dalloway. Subsequent chapters in the first part focus on under-researched aspects of literary style. The second chapter explores the level of story construction for the scope of an authorial canon, treating Shakespeare. The third turns to verbal narration in a single work, Faulkner’s As I Lay Dying. Part two, on film style, begins with another theoretical chapter. It turns, in chapter five, to the perceptual interface in the genre of “painterly” films, examining works by Rodriguez, Mehta, Rohmer, and Husain. The sixth chapter treats the level of plot in the postwar films of Ozu. The remaining film chapter turns to visual narration in a single work, Lu’s Nanjing! Nanjing! The third part addresses theoretical and interpretive issues bearing on style in graphic fiction, with a focus on Spiegelman’s Maus. An Afterword touches briefly on implications of stylistic analysis for political critique.
APA, Harvard, Vancouver, ISO, and other styles
7

Jackson, Patrick Thaddeus. What is Theory? Oxford University Press, 2018. http://dx.doi.org/10.1093/acrefore/9780190846626.013.361.

Full text
Abstract:
The concept of theory takes part in a conceptual network occupied by some of the most common subjects of European Enlightenment, such as “science” and “reason.” Generally speaking, a theory is a rational type of abstract or generalizing thinking, or the results of such thinking. Theories drive the exercise of finding facts rather than of reaching goals. To formulate a theory, or to “theorize,” is to assert something of a privileged epistemic status, manifested in the traditional scholarly hierarchy between theorists and those who merely labor among the empirical weeds. In so doing, a theory provides a fixed point upon which analysis can be founded and action can be performed. Scholar and author Kenneth W. Thompson describes a nexus of relations between and among three different senses of the word “theory:” normative theory, a “general theory of politics,” and the set of assumptions on the basis of which a given actor is acting. These three types of theory are somehow paralleled by Marysia Zalewski’s triad of theory as “tool,” theory as “critique,” and theory as “everyday practice.” While Thompson’s and Zalewski’s interpretations of theory are each inherently consistent, both signal a different philosophical ontology. Thompson’s viewpoint is dualist, presuming the existence of a mind-independent world to which knowledge refers; while Zalewski’s is more of a monist, rejecting the mind/world dichotomy in favor of a more complex interrelationship between observers and their objects of study.
APA, Harvard, Vancouver, ISO, and other styles
8

Pascoe, Daniel. Last Chance for Life: Clemency in Southeast Asian Death Penalty Cases. Oxford University Press, 2019. http://dx.doi.org/10.1093/oso/9780198809715.001.0001.

Full text
Abstract:
All five contemporary practitioners of the death penalty in the Association of Southeast Asian Nations (ASEAN)—Indonesia, Malaysia, Thailand, Singapore, and Vietnam—have performed executions on a regular basis over the past few decades. Amnesty International currently classifies each of these nations as death penalty ‘retentionists’. However, notwithstanding a common willingness to execute, the number of death sentences passed by courts that are reduced to a term of imprisonment, or where the prisoner is released from custody altogether, through grants of clemency by the executive branch of government varies remarkably among these neighbouring political allies. This book uncovers the patterns which explain why some countries in the region award commutations and pardons far more often than do others in death penalty cases. Over the period under analysis, from 1991 to 2016, the regional outliers were Thailand (with more than 95 per cent of condemned prisoners receiving clemency after exhausting judicial appeals) and Singapore (with less than 1 per cent of condemned prisoners receiving clemency). Malaysia, Indonesia, and Vietnam fall at various points in between these two extremes. This is the first academic study anywhere in the world to compare executive clemency across national borders using empirical methodology, the latter being a systematic collection of clemency data in multiple jurisdictions using archival and ‘elite’ interview sources. Last Chance for Life: Clemency in Southeast Asian Death Penalty Cases will prove an authoritative resource for legal practitioners, criminal justice policymakers, scholars, and activists throughout the ASEAN region and around the world.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Analyses par point fixe"

1

A, Joy Christy, and Umamakeswari A. "Performance Enhancement of Outlier Removal Using Extreme Value Analysis-Based Mahalonobis Distance." In Handling Priority Inversion in Time-Constrained Distributed Databases, 240–52. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2491-6.ch014.

Full text
Abstract:
Outlier detection is a part of data analytics that helps users find discrepancies in operating machines by applying an outlier detection algorithm to data captured at fixed intervals. An outlier is a data point that exhibits properties different from the other points owing to some external or internal forces. Such outliers can be detected by clustering the data points, so optimal clustering is important for outlier detection. A problem that arises quite frequently in statistics is the identification of groups or clusters of data within a population or sample. The most widely used procedure for identifying clusters in a set of observations is k-means with the Euclidean distance, but the Euclidean distance is not very effective for finding anomalies in a multivariate space. This chapter uses the k-means algorithm with the Mahalanobis distance metric to capture the variance structure of the clusters, followed by the application of the extreme value analysis (EVA) algorithm to detect outliers: rare items, events, or observations that deviate suspiciously from the majority of the data.
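Illustrative sketch (not from the cited chapter): cluster with k-means, score each point by the Mahalanobis distance to its own cluster, and flag the extreme tail. A chi-square tail cutoff stands in here for the chapter's EVA step, and the data are synthetic.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = np.vstack([rng.multivariate_normal(m, [[1.0, 0.6], [0.6, 1.0]], 200)
               for m in ([0, 0], [8, 8])])
X = np.vstack([X, [[4.0, -4.0], [12.0, 2.0]]])   # two planted outliers

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Mahalanobis distance of every point to its own cluster, using that
# cluster's covariance: this captures the variance structure that the
# plain Euclidean k-means objective ignores.
d2 = np.empty(len(X))
for c in range(2):
    pts = X[labels == c]
    mu, cov = pts.mean(axis=0), np.cov(pts.T)
    diff = pts - mu
    d2[labels == c] = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)

# Extreme-tail cutoff: for Gaussian clusters the squared Mahalanobis
# distance follows a chi-square law; far-tail points are flagged.
print("flagged:", np.where(d2 > chi2.ppf(0.999, df=X.shape[1]))[0])
```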
APA, Harvard, Vancouver, ISO, and other styles
2

Reynolds, R. "Web Credibility Measurement." In Handbook of Research on Electronic Surveys and Measurements, 296–98. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59140-792-8.ch035.

Full text
Abstract:
Several researchers (e.g., Carter & Greenberg, 1965; Flanagin, & Metzger, 2000; Fogg, 2002; Johnson & Kaye, 2004; Newhagen & Nass, 1989) discuss or mention the concept of media or web credibility. The classic concept of credibility (typically attributed to Aristotle’s Rhetoric) identifies credibility as a multidimensional perception on the part of the receiver that the source of a message has a moral character, practical wisdom, and a concern for the common good. Warnick (2004) points out that the “authorless” nature of the online environment complicates the use of traditional analyses of credibility. The most common set of web credibility scales cited in the research are the Flanagin and Metzger (2000) items. The five Flanagin and Metzger scale items address the believability, accuracy, trustworthiness, bias, and completeness of the information on the web site. Other researchers have added other items such as fairness or depth of information. Flanagin and Metzger used a 7-point response format with anchors for each term (e.g., “Not At All Believable” to “Extremely Believable”). Other researchers have used a 5-point response format.
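Illustrative sketch (not from the cited chapter): how responses on such anchored 7-point scales are typically combined into a composite score. The reverse-scoring of the bias item and the sample numbers are our assumptions, not prescriptions from the chapter.

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = the five
# Flanagin & Metzger items (believability, accuracy, trustworthiness,
# bias, completeness), each on a 1-7 anchored response format.
responses = np.array([
    [6, 5, 6, 2, 5],
    [4, 4, 5, 3, 4],
    [7, 6, 6, 1, 6],
], dtype=float)

# "Bias" is worded negatively, so it is reverse-scored before averaging
# (a common scoring convention, assumed here).
responses[:, 3] = 8 - responses[:, 3]
credibility = responses.mean(axis=1)
print(credibility)   # one composite credibility score per respondent
```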
APA, Harvard, Vancouver, ISO, and other styles
3

AFANASEVA, Larisa. "Analyse de stabilité de modèles de files d’attente basée sur la méthode de synchronisation." In Théorie des files d’attente 2, 7–39. ISTE Group, 2021. http://dx.doi.org/10.51926/iste.9004.ch1.

Full text
Abstract:
This chapter studies stability conditions for a multiserver queueing system with heterogeneous servers and a regenerative input flow X. The main idea is to construct an auxiliary service process Y and to determine common regeneration points for the two processes X and Y. The possibilities of the proposed approach are illustrated with examples. Applications to the capacity analysis of transport systems are also presented.
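Illustrative sketch (not from the cited chapter): the stability question can be probed with a crude simulation. The toy model below uses Markovian arrivals and a fastest-server-first rule, far simpler than the regenerative input flows treated in the chapter; it exhibits the classical requirement that the arrival rate stay below the total service rate.

```python
import numpy as np

rng = np.random.default_rng(2)

def jump_chain(lam, mus, n_events=100_000):
    """Queue length at jump epochs of a heterogeneous multiserver queue.

    Markovian toy model: Poisson arrivals at rate lam, exponential
    servers with rates mus, the fastest free servers kept busy first.
    """
    mus = np.sort(np.asarray(mus))[::-1]
    q, samples = 0, np.empty(n_events, dtype=int)
    for k in range(n_events):
        rate_out = mus[:min(q, len(mus))].sum()
        if rng.random() < lam / (lam + rate_out):
            q += 1                    # arrival wins the exponential race
        else:
            q -= 1                    # a busy server completes service
        samples[k] = q
    return samples

lam, mus = 2.0, [1.0, 0.8, 0.5]
print("rho =", lam / sum(mus))        # rho < 1: stable regime expected
print("mean queue length at jumps:", jump_chain(lam, mus).mean())
```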
APA, Harvard, Vancouver, ISO, and other styles
4

Havran, Maryana, and Roksolana Vysotska Vysotska. "THE HISTORICAL DEVELOPMENT OF THE EXPRESSION OF NEGATION IN FRENCH." In Pedagogical concept and its features, social work and linguology (1st ed.), 81–89. Primedia eLaunch LLC, 2020. http://dx.doi.org/10.36074/pcaifswal.ed-1.07.

Full text
Abstract:
This research paper studies the evolution of the syntactic realization of negation in French. The introduction defines the objective of the research: to trace the development of the use of the words aucun, personne and rien in French negative structures and to present their different practical applications. One of the tasks is to find out whether these constructions could be used with the negative marker pas without inducing a double-negation interpretation. The novelty of the paper lies in tracing the evolution of the expression of French negation through examples taken mostly from literary, legal or epistolary texts. The section on the stages of evolution in French negation gives a detailed analysis of the negative markers ne and pas through the different stages of their development as presented by various scholars. The evolution of negation in French is generally analyzed in three main stages, but the investigation shows that there are other approaches to characterizing it, the so-called four-stage or even five-stage evolution; nevertheless, the final stage is in each case identified by negation expressed with the marker pas alone. The next part of the paper, The expression of negation in Old and Modern French, studies the evolution of the indefinites aucun, personne and rien, which appear in negative constructions in modern French. It is shown that negation was strengthened by adding an indefinite, non-negative post-verbal element representing a small amount, such as pas, point, mie, brin, goutte, aucun, personne, rien, etc. It is concluded that, in the course of their historical development, these words, initially of positive value, have taken on a negative coloration and value in modern French.
APA, Harvard, Vancouver, ISO, and other styles
5

"Landscape Influences on Stream Habitats and Biological Assemblages." In Landscape Influences on Stream Habitats and Biological Assemblages, edited by Catherine M. Riseng, Michael J. Wiley, R. Jan Stevenson, Troy G. Zorn, and Paul W. Seelbach. American Fisheries Society, 2006. http://dx.doi.org/10.47886/9781888569766.ch27.

Full text
Abstract:
Abstract:
We used data sets of differing geographic extents and sampling intensities to examine how data structure affects the outcome of biological assessment. An intensive sampling (n = 97) of the Muskegon River basin provided our example of fine-scale data, while two regional and statewide data sets (n = 276, 310) represented data sets of coarser geographic scales. We constructed significant multiple linear regression models (R² from 21% to 79%) to predict expected fish assemblage metrics (total fish, game fish, intolerant fish, and benthic fish species richness) and to regionally normalize Muskegon basin samples. We then examined the sensitivity of assessments based on each of five data sets with differing geographic extents to landscape stressors (urban and agricultural land use, dam density, and point source discharges). Assessment scores generated from the different data extents were significantly correlated and suggested that the Muskegon basin was generally in good condition. However, using coarser-scale data extents to determine reference conditions resulted in greater sensitivity to land-use stressors (urban and agricultural land use). This was due in part to significant covariance between land use and drainage area in the fine-scale data set. Our results show that the scale of data used to determine reference condition can significantly influence the results of a biological assessment. The training data sets with broader spatial range appeared to produce the most sensitive and accurate catchment assessment. A covariance structure analysis using a data set with broad spatial range suggested that impounded channels and point source discharges have the strongest negative effects on intolerant fish richness in the Muskegon River basin, which provides a focus for conservation, mitigation, and rehabilitation opportunities.
APA, Harvard, Vancouver, ISO, and other styles
6

Chaka, Chaka. "Virtualization and Cloud Computing." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 176–90. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2533-4.ch009.

Full text
Abstract:
This chapter explores the interface between virtualization and cloud computing for global enterprise mobility. It also investigates the potential both virtualization and cloud computing hold for global enterprises. In this context, it argues that the virtualization of computing operations, applications, and services and the consumerization of digital technologies serve as one of the key drivers of cloud computing. Against this backdrop, the chapter first provides an overview of virtualization, consumerization, and cloud computing. Second, it showcases real life instances in which five enterprises leverage virtualization and cloud computing as part of their cloud business solutions. Third, it outlines some of the hollows and pain points characterizing cloud computing. Fourth and last, the chapter briefly presents possible future trends likely to typify cloud computing.
APA, Harvard, Vancouver, ISO, and other styles
7

Coppens, Pieter. "Sufi Qurʾan Commentaries: The Rise of a Genre." In Seeing God in Sufi Qur'an Commentaries, 39–82. Edinburgh University Press, 2018. http://dx.doi.org/10.3366/edinburgh/9781474435055.003.0002.

Full text
Abstract:
This chapter discusses the historical background of the rise of the genre of Sufi commentaries on the Qurʾan in 5th/11th-century Nishapur. It chronologically introduces the five authors that are central to this study: Sulamī, Qushayrī, Daylamī, Maybudī and Rūzbihān al-Baqlī. After highlighting the most important facts from their biographies and placing them within their broader circles of influence, it discusses their works of tafsīr and the hermeneutical practices that they proposed and defended in these works. Based on this analysis it is concluded that it is legitimate to consider these works as part of a ‘genre’ of Sufi tafsīr that takes al-Sulamī’s tafsīr as its collective reference point.
APA, Harvard, Vancouver, ISO, and other styles
8

Polivova, Maria, and Anna Brook. "Detailed Investigation of Spectral Vegetation Indices for Fine Field-Scale Phenotyping." In Vegetation Index and Dynamics [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.96882.

Full text
Abstract:
Spectral vegetation indices (VIs) are a well-known and widely used method for crop state estimation, and they are of great importance for plant state monitoring, especially in agriculture. The main aim is to assess the performance of selected VIs calculated from space-borne multispectral imagery and from point-based field spectroscopy when applied to crop state estimation. The results obtained indicate that space-borne VIs respond to phenology, which makes them an appropriate data source for monitoring crop development and crop water needs and for yield prediction. Field spectrometer VIs were sensitive to pigment concentration and photosynthesis rate; however, the hypersensitivity of field spectral measurements can lead to very high variability in the calculated values. The second part of the study reports on crop state estimated by 17 VIs known to be sensitive to plant drought. An alternative approach proposed in this study for identifying early stress from VIs is principal component analysis (PCA). The results show that PCA identified the degree of similarity between the different states and, together with reference stress states from the control plot, clearly estimated stress in the actual irrigated field, which was hard to detect from VI values alone.
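Illustrative sketch (not from the cited chapter): the two ingredients discussed above are computing a vegetation index from band reflectances (NDVI stands in for the 17 indices studied) and running a PCA over several index values per plot. All reflectance numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical reflectances: one row per plot, red and near-infrared bands
red = np.array([0.08, 0.10, 0.12, 0.25, 0.30])
nir = np.array([0.55, 0.52, 0.48, 0.35, 0.30])
ndvi = (nir - red) / (nir + red)        # a classic vegetation index

# PCA over a small matrix of VI values per plot: project plots onto the
# leading components to compare against reference (control-plot) states,
# as the chapter suggests for early stress identification.
V = np.column_stack([ndvi, nir / red, nir - red])   # toy VI matrix
Vc = V - V.mean(axis=0)                             # center the columns
U, s, Wt = np.linalg.svd(Vc, full_matrices=False)
scores = Vc @ Wt.T                                  # PC scores per plot
print("explained variance ratio:", (s**2 / (s**2).sum()).round(3))
print("PC1 scores:", scores[:, 0].round(3))
```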
APA, Harvard, Vancouver, ISO, and other styles
9

Gomes, Jessica Luciano, and Miriam Gomes Saraiva. "Case Study." In Research Methods in the Social Sciences: An A-Z of key concepts, 38–41. Oxford University Press, 2021. http://dx.doi.org/10.1093/hepl/9780198850298.003.0009.

Full text
Abstract:
This chapter explores the case study, a very common research method in the social sciences. Case studies are important because they examine samples of a larger setting, thereby offering researchers a variety of possibilities: to deepen the analysis of a particular occurrence in the world, to contribute to an existing theoretical framework, and to serve as an instrument of comparative analysis. Although it might sound simple, the research framework for a case study usually has to satisfy a few key points. Case studies can be divided into separate categories: exploratory, descriptive, and explanatory. They are also directly related to the type of research question being posed, drawn from the traditional five types of survey questions: 'who', 'what', 'where', 'how', and 'why'. One can often find case studies in both qualitative and quantitative approaches, focusing either on a case study per se or on the cross-case method.
APA, Harvard, Vancouver, ISO, and other styles
10

Santos, Flávia, Thiago Rodrigues, Henrique Donancio, Iverton Santos, Diana F. Adamatti, Graçaliz P. Dimuro, Glenda Dimuro, and Esteban De Manuel Jerez. "Towards a Multi-Agent-Based Tool for the Analysis of the Social Production and Management Processes in an Urban Ecosystem." In Public Affairs and Administration, 1926–50. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-8358-7.ch099.

Full text
Abstract:
The SJVG-MAS Project addresses, in an interdisciplinary approach, the development of MAS-based tools for simulating the social production and management processes observed in urban ecosystems, adopting as a case study the social vegetable garden project conducted at the San Jerónimo Park (Seville, Spain), headed by the confederation "Ecologistas en Acción." The authors aim at an analysis of the current reality of the SJVG project, allowing discussion of the adopted social management processes, and at investigating how possible changes in the social organization (e.g., roles assumed by the agents in the organization, actions, behaviors, (in)formal interaction/communication protocols, regulation norms), especially from the point of view of the agents' participation in decision-making processes, may transform this reality from the social, environmental and economic points of view, thus contributing to the sustainability of the project. The MAS was conceived as a multi-dimensional BDI-like agent social system, involving the development of five components: the agent population, the system's organization, the system's environment, the set of interactions executed among agents playing organizational roles (e.g., communication protocols for reaching agreements) and the normative policy structure (the internal regulation established by the SJVG community). The aim of this chapter is to discuss the problems faced and to present the solution found for modeling the SJVG social organization using the JaCaMo framework. The chapter shows the integration of the considered dimensions, discussing the adopted methodology, which may be applied in several other contexts.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Analyses par point fixe"

1

Hilaire, Thibault, Daniel Menard, and Olivier Sentieys. "Bit accurate roundoff noise analysis of fixed-point linear controllers." In 2008 IEEE International Conference on Computer-Aided Control Systems (CACSD) part of the Multi-Conference on Systems and Control. IEEE, 2008. http://dx.doi.org/10.1109/cacsd.2008.4627366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Manolescu, Alexandru, Adriana Fota, and Gheorghe Oancea. "Recognizing Algorithm for Digitized Rotational Parts." In ASME 2012 11th Biennial Conference on Engineering Systems Design and Analysis. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/esda2012-82334.

Full text
Abstract:
It is a well-known fact that reverse engineering techniques involve the following steps: scanning the object, pre-processing the cloud of points, processing the cloud of points, redesigning, and manufacturing the part. Difficulties arise in processing the clouds of points resulting from digitization, obtaining the geometrical parameters of the scanned object, and producing the final associated CAD model. This paper presents an algorithm for recognizing the form of a rotational part. The part has been previously scanned and will be redesigned for re-manufacturing. To determine the surfaces of a rotational part, the part is scanned to obtain a cloud of points, which is then cleared of noise points. Starting from the cloud of points, an algorithm is built that automatically determines the part's axis. The axis is then used to generate the required sections. The same tool also facilitates the recognition of simple basic shapes such as cylinders, cones and spheres. The point cloud data are stored in a text file containing the coordinates of all points in the cloud. After running the software on the data file, we obtain the geometrical data necessary for the parametric model. These data can then be exported to a 3D design environment to redesign the digitized part. The paper contains two case studies in which a part was scanned and, following the steps outlined above, its geometrical data were obtained. With the geometrical data, the part can be modelled as a parameterized object.
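Illustrative sketch (not from the cited paper): for an elongated rotational part, the axis direction can be estimated as the leading principal component of the point cloud. The synthetic cylinder below stands in for the scanned data file the paper works from.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a scan: noisy points on a cylinder
# (radius 5, length 40) about an arbitrary axis in space.
axis_true = np.array([1.0, 1.0, 2.0])
axis_true /= np.linalg.norm(axis_true)
Q, _ = np.linalg.qr(np.column_stack([axis_true, [1.0, 0, 0], [0, 1.0, 0]]))
frame = np.vstack([Q[:, 1], Q[:, 2], Q[:, 0]])   # rows: radial1, radial2, axis

t = rng.uniform(0.0, 40.0, 2000)                 # position along the axis
th = rng.uniform(0.0, 2.0 * np.pi, 2000)
local = np.column_stack([5 * np.cos(th), 5 * np.sin(th), t])
cloud = local @ frame + rng.normal(0.0, 0.05, (2000, 3))  # scanner noise

# For an elongated part the axis is the direction of largest variance,
# i.e. the leading principal component of the centred cloud.
centred = cloud - cloud.mean(axis=0)
_, _, Vt = np.linalg.svd(centred, full_matrices=False)
axis_est = Vt[0]
print("|cos| between estimated and true axis:",
      round(abs(float(axis_est @ axis_true)), 4))
```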
APA, Harvard, Vancouver, ISO, and other styles
3

Gao, Bingjun, Bin Liu, Junhua Dong, and JinHua Shi. "Application of Sub-Modeling in the Finite Element Analysis of a Large Fixed Tubesheet Heat Exchanger." In ASME 2017 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/pvp2017-65287.

Full text
Abstract:
Large fixed tubesheet heat exchangers contain an enormous number of tubes, making it difficult to model all the tube-to-tubesheet connections in detail for a finite element (FE) analysis. Alternatively, an equivalent solid plate with variously simplified tube connections is employed in the FE model, but this fails to yield an accurate stress field around the connecting region of the tubes and the tubesheet. Given that the maximum stress of the equivalent solid plate generally occurs adjacent to the solid part of the tubesheet, a novel finite element modeling methodology is proposed in this paper. A two-part process is used: a coarse FE model followed by a more detailed FE model. In the coarse FE model, the equivalent solid plate is employed in the central region of the tubesheet with simplified tube connections, such as equivalent cylinders with multi-point contact, while quasi-detailed tube-to-tubesheet connections are used for the region adjacent to the solid plate, in which the tubes and the tubesheet simply share nodes. Both the tubesheet and the tubes are thus represented in this quasi-detailed region. Although neither the weld nor the contact condition between the tubes and the tubesheet is included, the coarse model is sufficient to yield a believable stress field for determining the maximum stress point of the quasi-detailed region. In the second part, the sub-modeling methodology is applied at the predetermined maximum stress point, with the detailed connecting structure of the tube and the tubesheet included, such as the weld and the contact condition. The proposed modeling methodology helps provide insight into the stress around the connecting region of the tube and the tubesheet for an effective evaluation of the tubesheet and the connection.
APA, Harvard, Vancouver, ISO, and other styles
4

Zhu, W. D., X. K. Song, and N. A. Zheng. "Dynamics Stability of a Translating String With a Sinusoidally Varying Velocity." In ASME 2010 International Mechanical Engineering Congress and Exposition. ASMEDC, 2010. http://dx.doi.org/10.1115/imece2010-38692.

Full text
Abstract:
A new parametric instability phenomenon characterized by infinitely compressed, shock-like waves with a bounded displacement and an unbounded vibratory energy is discovered in a translating string with a constant length and tension and a sinusoidally varying velocity. A novel method based on the wave solutions and the fixed point theory is developed to analyze the instability phenomenon. The phase functions of the wave solutions corresponding to the phases of the sinusoidal part of the translation velocity, when an infinitesimal wave arrives at the left boundary, are established. The period number of a fixed point of a phase function is defined as the number of times that the corresponding infinitesimal wave propagates between the two boundaries before the phase repeats itself. The instability conditions are determined by identifying the regions in a parameter plane where attracting fixed points of the phase functions exist. The period-1 instability regions are analytically obtained, and the period-i (i > 1) instability regions are numerically calculated using bifurcation diagrams. The wave patterns corresponding to different instability regions are determined, and the strength of instability corresponding to different period numbers is analyzed.
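Illustrative sketch (not the paper's actual phase function): the fixed-point machinery can be demonstrated with a generic circle map. The logic of the method is the same as described above: scan the parameter plane, iterate the phase map, and mark regions where an attracting fixed point exists, since the existence of such fixed points is what signals the parametric instability.

```python
import numpy as np

# Hypothetical phase map g(phi) = phi + a + b*sin(phi) (mod 2*pi),
# standing in for the paper's phase functions.
def g(phi, a, b):
    return np.mod(phi + a + b * np.sin(phi), 2 * np.pi)

def has_attracting_fixed_point(a, b, n_seeds=64, n_iter=400, tol=1e-6):
    """Iterate from many seeds; report convergence to a fixed point."""
    phi = np.linspace(0, 2 * np.pi, n_seeds, endpoint=False)
    for _ in range(n_iter):
        phi = g(phi, a, b)
    # a fixed point is reached if phi is (numerically) invariant under g
    return bool(np.any(np.abs(g(phi, a, b) - phi) < tol))

# Scan a small parameter plane, analogous to mapping instability regions
for b in (0.2, 0.6, 1.2):
    row = [has_attracting_fixed_point(a, b) for a in np.linspace(0, 1, 5)]
    print(f"b={b:.1f}:", ["*" if r else "." for r in row])
```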
APA, Harvard, Vancouver, ISO, and other styles
5

Yu, Hongjie, Aobo Xiong, Caifu Qian, Donald Mackenzie, and Xingguo Zhou. "Mechanical Study on Fixed Tubesheet Based on Finite Element Analysis." In ASME 2014 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/pvp2014-28909.

Full text
Abstract:
A mechanical study of a fixed tubesheet welded to the tube bundle and the equipment shell is performed using the finite element method. The effect of the number of tubes on the strength of the tubesheet is studied, and the variation of the distributions and magnitudes of the stress and deflection of the tubesheet with the size of its unpierced part is investigated. The results show that, with the support of the tubes, the tubesheet does not behave as a flat solid plate in terms of its stress and deflection distributions. Specifically, if the tubesheet is partly supported by tubes in the center, the largest stress intensity occurs at a point that depends on the size of the unpierced region, and the maximum deflection occurs near the unpierced region.
APA, Harvard, Vancouver, ISO, and other styles
6

Machado, Maria Margarida, Paulo Flores, and Jorge Ambrósio. "A Lookup Table-Based Approach for Spatial Analysis of Contact Problems." In ASME 2013 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/detc2013-13041.

Full text
Abstract:
The aim of this work is to present an efficient methodology for dealing with general 3D contact problems. The approach comprises three steps: geometrical definition of the 3D surfaces; detection of the candidate contact points; evaluation of the contact forces. The 3D contact surfaces are generated and represented using parametric functions, chosen for their simplicity and their ease in handling freeform shapes. This task is carried out in preprocessing, prior to running the multibody code. The preprocessing procedure can be condensed into four steps: a regular and representative collection of surface points is extracted from the 3D parametric surface; for each point, the tangent vectors along the u and v directions of the parametric surface and the normal vector are computed; the geometrical information for each point is saved in a lookup table, including the parametric point coordinates, the corresponding Cartesian coordinates and the Cartesian components of the normal, tangent and binormal vectors; and the lookup table is rearranged so that the u-v mapping is converted into a 3D matrix form. In the last step, the surface data are saved as a direct access file. Regarding the detection of the contact points, the relative distances between the candidate contact points are computed and used to check whether the bodies are in contact. The actual contact points are selected as those corresponding to the maximum relative indentation. The contact forces are determined as functions of the indentation, the impact velocity and the geometric and material properties of the contacting surfaces. In general, lookup tables are used to reduce computation time in dynamic simulations, at the cost of increased memory needs. Within the proposed approach, the amount of memory used is significantly reduced as a result of a partial upload of the lookup table into memory. A slider-crank mechanism with a cup on top of the slider and a marble ball is used as a demonstrative example, with a contact pair defined between the cup and the ball and the contact forces computed using a dissipative contact model.
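Illustrative sketch (not from the cited paper): a condensed version of the preprocessing and query stages described above, with a sphere patch standing in for a freeform parametric surface. The grid resolution, file name, and nearest-node query are illustrative choices.

```python
import numpy as np

# Preprocessing: sample a parametric surface on a regular (u, v) grid
# and store position plus local frame per node in a 3D array.
def surf(u, v, R=1.0):
    return np.array([R*np.sin(v)*np.cos(u), R*np.sin(v)*np.sin(u), R*np.cos(v)])

nu, nv, h = 60, 30, 1e-6
us = np.linspace(0.0, 2*np.pi, nu)
vs = np.linspace(0.1, np.pi - 0.1, nv)
table = np.zeros((nu, nv, 12))               # x,y,z, t_u, t_v, n per node
for i, u in enumerate(us):
    for j, v in enumerate(vs):
        p = surf(u, v)
        tu = (surf(u + h, v) - p) / h        # tangent along u (finite diff.)
        tv = (surf(u, v + h) - p) / h        # tangent along v
        n = np.cross(tu, tv)                 # normal; sign follows the
        table[i, j] = np.concatenate([       # chosen parametrization
            p, tu/np.linalg.norm(tu), tv/np.linalg.norm(tv),
            n/np.linalg.norm(n)])

np.save("surface_table.npy", table)          # direct-access file stand-in

# Query: nearest grid node to an external point as a crude candidate-
# contact search; indentation is the signed distance along the normal.
q = np.array([0.0, 0.0, 0.9])
d = np.linalg.norm(table[:, :, :3] - q, axis=2)
i, j = np.unravel_index(d.argmin(), d.shape)
p, n = table[i, j, :3], table[i, j, 9:]
print("indentation estimate:", float(n @ (q - p)))
```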
APA, Harvard, Vancouver, ISO, and other styles
7

Figliolini, Giorgio, Pierluigi Rea, and Salvatore Grande. "Higher-Pair Reuleaux-Triangle in Square and its Derived Mechanisms." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71332.

Full text
Abstract:
This paper formulates a mathematical procedure for the motion analysis of mechanisms derived from the higher-pair Reuleaux triangle in a square, where the curve-triangle has rounded vertices. In particular, the trajectory of the center of the modified Reuleaux triangle, the fixed and moving centrodes, and the trajectory of any moving point, with particular reference to the center of a rounded corner of the curve-triangle, are obtained in algebraic form. Exact square trajectories can also be obtained for specific rounded corners.
APA, Harvard, Vancouver, ISO, and other styles
8

Midha, Ashok, Sushrut G. Bapat, Adarsh Mavanthoor, and Vivekananda Chinta. "Analysis of a Fixed-Guided Compliant Beam With an Inflection Point Using the Pseudo-Rigid-Body Model (PRBM) Concept." In ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/detc2012-71400.

Full text
Abstract:
This paper provides an efficient method of analysis for a fixed-guided compliant beam with an inflection point, subjected to beam end load or displacement boundary conditions, or a combination thereof. To enable this, such a beam is modeled as a pair of well-established pseudo-rigid-body models (PRBMs) for fixed-free compliant beam segments. The analysis procedure relies on the properties of inflection in developing the necessary set of static equilibrium equations for solution. The paper further discusses the multiplicity of possible solutions, including displacement configurations, for any two specified beam end boundary conditions, depending on the locations of the effecting force and/or displacement boundary conditions. A unique solution may exist when a third beam end boundary condition is specified; however, this selection is not unconditional. A deflection domain concept is proposed to assist with the selection of the third boundary condition in a more realistic manner.
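For orientation, here is a sketch (not from the cited paper) of the textbook pseudo-rigid-body model of a single fixed-free segment, the building block this paper joins in pairs. The values γ ≈ 0.85 and K_Θ ≈ 2.65 are commonly cited approximate PRBM constants, and the beam dimensions are invented.

```python
import numpy as np

# Pseudo-rigid-body model (PRBM) of a fixed-free compliant segment:
# the flexible beam is replaced by two rigid links joined by a
# torsional spring at the "characteristic pivot".
E = 200e9                                  # Young's modulus (steel), Pa
I = (0.02 * 0.001**3) / 12                 # second moment of a thin strip
L = 0.10                                   # beam length, m
gamma, K_theta = 0.85, 2.65                # common approximate constants

K = gamma * K_theta * E * I / L            # torsional spring constant

def tip_position(theta):
    """Tip coordinates for pseudo-rigid-body angle theta (radians)."""
    x = L * (1 - gamma + gamma * np.cos(theta))
    y = L * gamma * np.sin(theta)
    return x, y

def spring_torque(theta):
    return K * theta                       # restoring torque of the pivot

for th in np.radians([5, 15, 30]):
    x, y = tip_position(th)
    print(f"theta={np.degrees(th):4.0f} deg  tip=({x:.4f}, {y:.4f}) m  "
          f"torque={spring_torque(th):.3e} N*m")
```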
APA, Harvard, Vancouver, ISO, and other styles
9

Comanescu, L., S. Sawh, M. Wei, B. Lee, A. Petrescu, A. Nainer, R. K. Jaitly, A. F. Jean, D. F. Basque, and D. S. Mullin. "Point Lepreau Refurbishment Project Level 2 PSA Applications." In 17th International Conference on Nuclear Engineering. ASMEDC, 2009. http://dx.doi.org/10.1115/icone17-75972.

Full text
Abstract:
A Probabilistic Safety Assessment (PSA) for the Point Lepreau Generating Station has been completed as part of the plant Refurbishment Project. The main objective of this PSA is to provide insights into plant safety design and performance, including the identification of dominant risk contributors, and to assess options for reducing risk. The scope of this assessment covers Level 1 and 2 PSA and includes internal events at full power and shutdown, internal fires and internal floods, as well as a PSA-based seismic margin assessment (SMA) for full power operation. Following the accident sequence quantification for internal events, fire and flood, the results were integrated to provide an overall estimate of the Severe Core Damage Frequency (SCDF) and the Large Release Frequency (LRF) for the refurbished Point Lepreau plant. Importance analysis was performed on the integrated results to identify risk-significant failures, using Fussell-Vesely and Risk Achievement Worth indices, and risk contributors, using Risk Reduction Worth indices. Based on the importance measures, analysis was performed to evaluate the sensitivity of the SCDF and LRF results to the dominant contributors. Uncertainty analysis was also performed to provide qualitative discussion and quantitative measures of the uncertainties in the results of the PSA, namely the frequency of severe core damage or external releases. Based on the results, recommendations were made to improve maintenance, testing and training procedures, as well as housekeeping. Results of the Level 1 and 2 PSA have also been used to determine the safety-important systems and components. This paper discusses the key results and recommendations of the Level 2 PSA as well as the methodology used to determine the safety-important systems.
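Illustrative sketch (not from the cited paper): the importance measures named above have simple point-estimate definitions, shown below with hypothetical numbers; the PSA itself computes them from the full fault-tree model.

```python
# Standard risk importance measures from a point-estimate risk model
# (generic textbook definitions; the paper's PSA model is far larger).
def importance(Q, Q_event_failed, Q_event_perfect):
    """Q: baseline risk (e.g. severe core damage frequency).

    Q_event_failed : risk with the basic event's probability set to 1
    Q_event_perfect: risk with the basic event's probability set to 0
    """
    fv = (Q - Q_event_perfect) / Q         # Fussell-Vesely
    raw = Q_event_failed / Q               # Risk Achievement Worth
    rrw = Q / Q_event_perfect              # Risk Reduction Worth
    return fv, raw, rrw

# Hypothetical numbers for one basic event (per reactor-year):
fv, raw, rrw = importance(Q=2.0e-5, Q_event_failed=3.5e-4,
                          Q_event_perfect=1.6e-5)
print(f"FV={fv:.2f}  RAW={raw:.1f}  RRW={rrw:.2f}")
```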
APA, Harvard, Vancouver, ISO, and other styles
10

Gutkowski, L. J., and Gary L. Kinzel. "Kinematic Transformation Matrices for 3D Surface Contact Joints." In ASME 1992 Design Technical Conferences. American Society of Mechanical Engineers, 1992. http://dx.doi.org/10.1115/detc1992-0346.

Full text
Abstract:
A generalized procedure is presented for the development of a pair matrix that describes kinematic joints formed by contact between three-dimensional surfaces. The pair matrix is useful in the matrix-based kinematic analysis procedure put forth by Sheth and Uicker (1971). Any two surfaces may make up the joint as long as the surfaces can be described parametrically and contact takes place at one point. The corresponding pair matrix is a function of five pair variables.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Analyses par point fixe"

1

Some complex approaches to training micro-cycles formation among cadets-weightlifters taking into account biotypes. Ilyas N. Ibragimov, Zinaida M. Kuznetsova, Ilsiyar Sh. Mutaeva, March 2021. http://dx.doi.org/10.14526/2070-4798-2021-16-1-39-46.

Full text
Abstract:
Training cadet weightlifters is multipurpose at every stage, so it is important to define and plan a rational combination of training means. Distributing such microstructures across the training cycle, over days and months of training, provides an effective distribution of volume, intensity and other load parameters. The structure of cadet weightlifters' training is based on the regularities and principles of sports training, as a condition for increasing physical readiness and working capacity. Any power-oriented sport requires characterizing the components of its microcycle structure. We consider the methodology of organizing training lessons using the example of a training microcycle that takes the bioenergetic profiles of cadet weightlifters into account, and we identify the need to divide the macrocycle into structural components as a condition for the effectiveness of different variants of distributing training effects. Materials and methods. We analyzed the range of training lessons of cadet weightlifters in order to create variants for the step-by-step solution of training tasks according to the kind of training. We analyzed the cadets' training programs, taking into consideration their level of readiness and their bioenergetic profiles, and designed the content of the training work in a microcycle of the preparatory period for cadet weightlifters with different bioenergetic profiles. The main material of the research is the ratio of training-effect volumes within one microcycle, taking the cadets' bioenergetic profiles into account. Cadet weightlifters from the Tyumen Higher Military-Engineering Command College (Military Institute) took part in the research (Tyumen, Russia). Results. We designed the content of the training work for one example microcycle for cadet weightlifters, taking bioenergetic profile into account. The resulting structure of training loads includes the main means of training according to the kind of training. Work is carried out in five regimens, and the effectiveness of the total load within one lesson, and within a week as a whole, is estimated on a point system. Conclusion. The proposed microcycle variant allocates the kinds of training in the stated proportions. Taking bioenergetic profiles into account helps to assess the strong and weak sides of the energy-supply mechanisms of muscle activity. Representatives of the first and second bioenergetic profiles are able to sustain long-term aerobic loads; representatives of the third and fourth biotypes are inclined toward mixed loads; and representatives of the fifth biotype demonstrate a higher degree of anaerobic ability. The technology of planning training means by work regimen, with point-based scoring, helps to increase physical working capacity and recovery processes in the cadets' organisms.
APA, Harvard, Vancouver, ISO, and other styles