Dissertations / Theses on the topic 'Contrôle du débit de fuite'
Consult the top 50 dissertations / theses for your research on the topic 'Contrôle du débit de fuite.'
Azzam, Tarik. "Aérodynamique et contrôle de l'écoulement de jeu dans un ventilateur axial obtenu par rotomoulage." Thesis, Paris, ENSAM, 2018. http://www.theses.fr/2018ENAM0080/document.
Nowadays, the manufacture of turbomachinery is subject to increasingly restrictive regulations. The industrial challenge for researchers is to find optimal solutions that reduce sources of energy loss, instability and noise, in particular the tip clearance flow (leakage flow). Preliminary work has been carried out at Arts et Métiers ParisTech on the rotational molding process used for automobile cooling axial fans. The idea of this work is to exploit the hollow shape induced by the rotational molding process to control the tip clearance flow through rotary steady air injection. To this end, the shroud ring is fitted with injection holes oriented in such a way as to reduce both the leakage flow rate and the torque. The thesis comprises three parts. The first concerns the manufacture of the fan by rotational molding. The second concerns the experimental study carried out on the ISO 5801 test bench, involving the realization of a drive system dedicated to rotary steady air injection, the metrology for performance determination, and the characterization of the near-wake axial velocity. The third part deals with the numerical modeling of the most effective experimental conditions, followed by the extrapolation of the work towards high injection rates. For the latter, it is possible to cancel the leakage flow rate with a considerable gain in torque, thus putting the fan in autorotation.
Sogbossi, Hognon Eric Arnaud. "Etude de l'évolution de la perméabilité du béton en fonction de son endommagement : transposition des résultats de laboratoire à la prédiction des débits de fuite sur site." Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30210/document.
The reactor buildings of nuclear power plants are designed to provide precise containment and sealing properties, both in normal use and in the event of a nuclear accident, in order to prevent the spread of radioelements into the environment. Since these containments are made of concrete, controlling the evaluation of the permeability of concrete and of its evolution under stress would make it possible to evaluate the leakage rates that may occur over time under certain conditions. To date, there are several techniques for measuring permeability, and these techniques lead to different results for the same concrete specimen. The first study we carried out therefore proposed a standardization of the permeability measurement: this standardization resulted in a characteristic permeability of the concrete that is independent of the measurement technique. In parallel with this approach, we also proposed to evaluate the permeability of concrete using observables from non-destructive testing, such as permittivity and electrical resistivity. The results obtained show that it is possible to estimate the permeability of concrete in on-site conditions. The second study concerns the evolution of permeability under stress. In the laboratory, we investigated the permeability of concrete specimens of different sizes under various conditions of drying, thermal stress, mechanical damage and coupled damage. We were able to establish permeability-damage models for each source of damage. The third study concerns the transposition of laboratory results to the site, using a nuclear power plant mock-up of larger dimensions, representative of the actual structure (VeRCoRs, at scale 1/3). All the results of the first two studies were used, and led to computed leak rates and Times to Reach Steady State (TRSS) consistent with the calculation assumptions.
Hammi, Rim. "Contrôle actif de transmission de flux vidéo par régulation du débit." Paris 13, 2002. http://www.theses.fr/2002PA132024.
Poulin, Annie. "TRENTE PIASTRES de récompense : Le contrôle et la fuite d’esclaves à la Nouvelle-Orléans, 1816-1827." Mémoire, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/5407.
Hrarti, Miryem. "Optimisation du contrôle de débit de H.264/AVC basée sur une nouvelle modélisation Débit-Quantification et une allocation sélective de bits." Poitiers, 2011. http://nuxeo.edel.univ-poitiers.fr/nuxeo/site/esupversions/181e500b-af8c-42a0-b134-b1ce33fd4c56.
The explosion of multimedia applications is largely due to the efficiency of the compression techniques used. H.264/AVC, also known as MPEG-4 Part 10, is the newest video coding standard. It is more effective than previous standards (MPEG-1, MPEG-2, MPEG-4, H.26x, etc.) and achieves significant compression gains. As in other standards, rate control is a key element of H.264/AVC, because it regulates the visual quality of the reconstructed sequence while respecting the bandwidth constraints imposed by the transmission channel. In accordance with the specified target bit-rate, the rate control algorithm determines the appropriate quantization parameters. Basically, a first Rate-Quantization function establishes a relationship between the rate and the quantization parameter (QP). A second function, called Distortion-Quantization, estimates the distortion (or quality) of the reconstructed video as a function of the quantization parameter used. Together, these two functions lead to a relationship (usually quadratic) between the quantization parameter, the target number of bits and the statistics of the basic unit (frame, slice, macroblock or set of macroblocks). These functions have been the subject of several studies, yet the models adopted and recommended by the standardization group do not generally offer the best performance and are far from optimal. This thesis falls within this context. Its main objective is to design new techniques that improve the performance of the rate control algorithm and of the H.264/AVC standard. These techniques are based both on a detailed analysis of the major current limitations and on a wide literature review. Our purpose is to provide a more appropriate determination of the quantization parameter and a selective bit allocation that integrates properties of the Human Visual System and enhances the reconstructed video quality.
To determine the quantization parameter accurately, two Rate-Quantization (R-Q) models have been proposed. The first, designed for Intra frames, is non-linear; it determines the optimal initial quantization parameter by exploiting the relationship between the target bit-rate and the complexity of Intra frames. The second is a logarithmic model designed for Inter coding units; it replaces the two models used by the H.264/AVC rate controller and reduces the computational complexity. The frame-layer bit allocation of the H.264/AVC baseline profile remains basic: it assumes that GOPs (Groups Of Pictures) have similar characteristics, and the target number of bits is allocated evenly among coding units regardless of their complexity. For a more accurate bit allocation, a new model has been proposed that includes two complexity measures. The first is a motion ratio determined from the actual bits used to encode the previous frames. The second uses the difference between adjacent frames and the histogram of this difference. Finally, to better control the visual quality of the reconstructed video, a saliency map is included in the bit allocation process. The saliency map, generated by a bottom-up approach, simulates human visual attention. It is used to adjust the quantization parameter at the frame layer; this adjustment assigns more bits to frames containing more salient regions (assumed to be more important than others). At the macroblock layer, the saliency map is exploited to allocate bits efficiently among the macroblocks of the same frame. This bit repartition by "region of interest" improves the visual quality of the frame. Experimental simulations show that the proposed models, when compared with two recent rate control algorithms (JVT-O016 and JM15.0), significantly improve the coding performance in terms of average bit-rate and PSNR.
A more consistent quality, and therefore a smoother quality across frames, is also observed.
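As an illustration, the "usually quadratic" rate-quantization relationship mentioned in this entry is, in the H.264/AVC reference rate controller (e.g. JVT-G012), commonly written as:

```latex
T(j) \;=\; c_1\,\frac{\mathrm{MAD}(j)}{Q_{\mathrm{step}}(j)} \;+\; c_2\,\frac{\mathrm{MAD}(j)}{Q_{\mathrm{step}}^{2}(j)}
```

where $T(j)$ is the texture bit budget of coding unit $j$, $\mathrm{MAD}(j)$ its predicted mean absolute difference (used as a complexity proxy), and $c_1$, $c_2$ are model parameters updated after each coded unit. This is the standard reference model, given here for context; the thesis proposes alternatives to it.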
Bara, Aude. "Contribution à l'étude de l'efficacité des rideaux d'eau face à une fuite d'ammoniac." Aix-Marseille 1, 2000. http://www.theses.fr/2000AIX11051.
Girin, Fanny. "La « sécurité » en fuite : la construction du contrôle à partir des relations entre groupes dans une raffinerie." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0214/document.
The theme of security generally invites one to consider work practices in hazardous industries from the point of view of rules. This thesis shifts the questioning towards the analysis of an activity that is not explicit in the formal organization: the maintenance of facilities. A diffuse collective is formed on this basis; it unites several situated collectives, defined from the organization without being restricted to it. Maintenance consists in catching up with an efficient operation that constantly escapes control, owing to the material deterioration of facilities and to just-in-time constraints. In an atmosphere of urgency, the workers try to avoid accidents and production stoppages, which are intricately linked and always latent. They regulate their cooperation by trying to gain control over the machines and over their own career paths, and thus over the composition of the collectives. In parallel, security procedures relate to a larger bureaucratic apparatus, both elusive and omnipresent. In the name of "security", the latter is supposed to reconcile just-in-time production with accident prevention through a control of the workforce. In practice it intervenes as a benchmark, but mainly as a threat: workers, unable to measure the deviations of reality from the requirements, fear being held liable in case of an accident. Participative actions supposed to improve this apparatus do not allow the uncontrollable nature of the machines to be emphasized. The members of the diffuse collective therefore avoid participating, in order to minimize the hierarchical hold on the social order they have built in-house.
Ruiz, Marta. "Contrôle actif de la perte par transmission d'une plaque par minimisation du débit volumique." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0031/MQ67324.pdf.
Jiang, Shengming. "Techniques de contrôle d'accès à haut débit basées sur l'ATM pour les applications multimédia." Versailles-St Quentin en Yvelines, 1995. http://www.theses.fr/1995VERS0001.
Ruiz, Marta. "Contrôle actif de la perte par transmission d'une plaque par minimisation du débit volumique." Mémoire, Université de Sherbrooke, 2000. http://savoirs.usherbrooke.ca/handle/11143/1121.
Nicoletti, Nathalie. "Contrôle dimensionnel par vision : Une application à des pièces découpées sur presses à grand débit." Besançon, 1988. http://www.theses.fr/1988BESA2036.
Chesneau, Olivier. "Un outil d'aide à la maîtrise des pertes dans les réseaux d'eau potable : La modélisation dynamique de différentes composantes du débit de fuite." Université Louis Pasteur (Strasbourg) (1971-2008), 2006. https://publication-theses.unistra.fr/public/theses_doctorat/2006/CHESNEAU_Olivier_2006.pdf.
The degradation of water network performance due to ageing has to be fought. The high financial stakes involved in keeping the quality of service at an acceptable level call for a controlled and well-motivated management of assets. Leaks, whose effects are particularly detrimental to the resource, are one of the key indicators of infrastructure deterioration. This study presents a tool to forecast the evolution of leaks, considering both their natural growth over time and the operator's interventions to reduce them. Leak flows and repair records on District Metered Areas (DMAs) are used to elaborate this tool and to calibrate a dynamic model, named the "three states model". It rests on the double hypothesis that leaks appear according to a Yule process and that they transform over time through three successive states (background leaks, unreported leaks and manifest bursts). Considering the mean age of the DMAs, the model first reproduces the selected records by decomposing the leak flow into a part due to background leaks and a part linked to unreported leaks. It then allows the evolution of the leak flow to be followed at several time steps when various management scenarios are applied to the DMA. The model formulation makes it possible to simulate the benefits of leak-detection or renewal operations, and can thus guide the choice and scheduling of operator interventions.
Omnès, Nathalie. "Analyse d'outils de contrôle de la qualité de service dans les réseaux de paquets haut débit." Rennes 1, 2001. http://www.theses.fr/2001REN10134.
German, Yolla. "L'imagerie cellulaire à haut débit révèle le contrôle de la synapse immunologique par le cytosquelette d'actine." Thesis, Toulouse 3, 2020. http://www.theses.fr/2020TOU30015/document.
Cytotoxic lymphocytes rely on actin cytoskeleton remodeling to achieve their function. In particular, cytotoxic T lymphocytes and NK cells assemble the immunological synapse (IS), a complex actin-rich structure that allows interaction with target cells, such as infected or tumor cells, and permits the polarized delivery of lytic granules. Although actin cytoskeleton remodeling is known to be a driving force of IS assembly and dynamics, our understanding of the molecular control of the actin remodeling sustaining IS dynamics remains fragmented. This PhD project consisted in developing a high-content imaging approach to define, in an unbiased way, the metrics of the IS of human T and NK lymphocytes, and to characterize the requirements for actin cytoskeleton integrity in organizing the IS architecture. For that purpose, the stimulation and staining of cell lines and primary cells in multiwell plates, and the acquisition of a unique set of >100,000 confocal images with a fully automated high-content imager, were optimized. The images were analyzed with two complementary CellProfiler pipelines to characterize the morphological features associated with different treatments and disease statuses. We first extracted 16 morphological features pertaining to F-actin, LFA-1 or lytic molecules, based on prior knowledge of IS assembly, and included features pertaining to the nucleus. We show that IS assembly in Jurkat and NK-92 cells is characterized by increased F-actin intensity and cell area. For Jurkat cells, we report an increase in LFA-1 intensity and surface area, and for NK-92 cells an increase in lytic granule detection at the IS plane. We then treated NK-92 cells with seven drugs known to affect different aspects of actin dynamics and investigated the associated effects on IS features. We report concentration-dependent effects, not only on F-actin intensity, as expected, but also on lytic granule polarization.
Furthermore, using high-resolution morphological profiling based on >300 features, we show that each drug inflicts distinct alterations of IS morphology. In a next step, we applied our experimental pipeline to primary NK cells isolated from the blood of healthy donors. Distinct morphological features were characterized among the NK cells from different donors, highlighting the sensitivity of our approach, but also revealing an unsuspected variability of immune cell morphologies among donors. We then applied our approach to primary CD8+ T cells from patients with a rare immunodeficiency due to mutations in the gene encoding the actin regulator ARPC1B. ARPC1B deficiency results in decreased F-actin intensity, as well as in defective lytic granule polarization. This prompted us to assess the ability of these cells to kill target cells, which was markedly reduced. These results illustrate how the systematic analysis of the IS might assist the exploration of functional defects of lymphocyte populations in pathological settings. In conclusion, our study reveals that although assembly of the IS can be characterized by a few features, such as F-actin intensity and cell spreading, capturing the fine alterations of that complex structure that arise from cytoskeleton dysregulation requires high-content analysis. The pipeline developed through this project holds promise for the morphological profiling of lymphocytes from primary immunodeficiency patients whose genetic defect has not yet been identified. Moreover, the discriminative power of our high-content approach could be exploited to characterize the response of lymphocytes to various stimuli and to monitor lymphocyte activation in multiple immune-related pathologies and treatment settings.
Khalifé, Hicham. "Techniques de contrôle pour réseaux sans fils multi-sauts." Paris 6, 2008. http://www.theses.fr/2008PA066458.
Fnaiech, Emna-Amira. "Développement d'un outil de simulation du procédé de contrôle non destructif des tubes ferromagnétiques par un capteur à flux de fuite." Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00752882.
Fnaiech, Emna Amira. "Développement d’un outil de simulation du procédé de contrôle non destructif des tubes ferromagnétiques par un capteur à flux de fuite." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112311/document.
The principle of non-destructive testing by magnetic flux leakage (MFL) is to magnetize the part to be inspected with a magnetic field and to detect a flaw thanks to the magnetic field lines that leak out, owing to the strong decrease of the magnetic permeability in the flawed region. In order to improve detection performance, CEA and the Vallourec company are collaborating to develop a numerical model dedicated to the virtual NDT of longitudinal defects in ferromagnetic tubes. The experimental system includes a magnetic circuit rotating at constant speed around the tube to be inspected. The modeling task was started without considering the effects of the rotational speed, so the magnetostatic regime is considered in solving the modeling problem. In the framework of this thesis, we compare a semi-analytical approach based on the formalism of the integral equation method (IEM) with a purely numerical approach using the finite element method (FEM). In the first part of the thesis, the theoretical formalism was established. A first simple discretization scheme was implemented in the linear regime, assuming a constant magnetic permeability. This first numerical model was validated on a simplified MFL configuration extracted and modified from the literature. For better detection, it is desirable to magnetically saturate the piece under test. The ferromagnetic material is then characterized by a B(H) curve. The second part of the thesis was therefore devoted to the implementation of the model in the non-linear regime, which takes this non-linear characteristic into account. Different discretization schemes were studied in order to reduce the number of unknowns and the computational time. The originality of the thesis lies in the use of high-order basis functions (Legendre polynomials) associated with a Galerkin approach for the discretization of the integral equations. The first numerical result was validated on a simplified MFL system.
The first steps of the experimental validation, based on simulated data obtained by FEM, were performed in two stages. The first consists in verifying the distribution of the magnetic field for a ferromagnetic tube without any defect, in the magnetostatic regime. The objective of the second was to compute the response of the flaw and to evaluate the effects of the rotational speed of the magnetic circuit around the tube.
Claudio, Karim. "Mise en place d'un modèle de fuite multi-états en secteur hydraulique partiellement instrumenté." Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0482/document.
The evolution of equipment on drinking water networks has considerably improved their monitoring. Automatic meter reading (AMR) is clearly the technology that has brought the major progress in water management in recent years, both for the operator and for the end-users. This technology has made it possible to pass from annual information on water consumption (obtained through manual meter reading) to infra-daily information. But as efficient as AMR can be, it has one main drawback: its cost. A complete network instrumentation generates capital expenditures that some operators cannot afford. Constituting a sample of meters to equip then makes it possible to estimate the network's total consumption while minimizing the investment. This sample has to be built carefully, so that the inaccuracy of the estimator does not harm the consumption estimate. A precise knowledge of water consumption allows the volumes of water lost on the network to be quantified. But even an exact assessment of losses is not enough to eliminate all the leaks on the network. Indeed, since the water distribution network is buried, and therefore invisible, so are the leaks. A fraction of the leaks are invisible, and even undetectable by current leakage control technologies, and these leaks therefore cannot be repaired. The construction of a multi-state model enables us to decompose the leakage flow according to the different stages in the appearance of a leak: invisible and undetectable, invisible but detectable with leakage control, and finally detectable. This semi-Markovian model takes operational constraints into account, in particular the fact that we have panel data. The decomposition of the leakage flow allows better network monitoring, by targeting and adapting the leakage reduction actions to be set up according to the degradation state of the network.
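The ingredients named in the entry above (leak births following a Yule process, then ageing through three successive states) can be sketched in a toy simulation. This is only an illustrative sketch: the state names follow Chesneau's terminology cited earlier in this list, and all rates are invented, not the calibrated values of either thesis.

```python
import random

# Hypothetical state names and rates, for illustration only.
STATES = ("background", "unreported", "manifest")

def simulate_leaks(birth_rate, transition_rate, horizon, seed=0):
    """Return the number of leaks in each state at time `horizon` (years)."""
    rng = random.Random(seed)
    t, births = 0.0, [0.0]  # start with one leak born at t = 0
    while True:
        # Yule (pure-birth) process: the total birth rate grows
        # proportionally to the current leak population.
        t += rng.expovariate(birth_rate * len(births))
        if t > horizon:
            break
        births.append(t)
    counts = dict.fromkeys(STATES, 0)
    for b in births:
        age = horizon - b
        s1 = rng.expovariate(transition_rate)  # sojourn time in state 1
        s2 = rng.expovariate(transition_rate)  # sojourn time in state 2
        if age < s1:
            counts["background"] += 1
        elif age < s1 + s2:
            counts["unreported"] += 1
        else:
            counts["manifest"] += 1
    return counts

counts = simulate_leaks(birth_rate=0.3, transition_rate=0.5, horizon=10.0)
print(counts)  # every simulated leak falls in exactly one of the three states
```

Running such a simulation under different "management scenarios" (e.g. periodically emptying the detectable state) is the kind of what-if analysis the model described above supports.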
Brochu, Francis. "Amélioration du contrôle de qualité de produits sanguins utilisant la spectrométrie de masse à haut-débit et l'apprentissage automatique." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/29875.
This master's thesis describes work on the treatment and analysis of high-throughput mass spectrometry data. Mass spectrometry is a tried and tested method for measuring the chemical compounds in a sample. Applied to biological samples, it becomes a metabolomic measurement technique: it measures the metabolites contained in a sample, small molecules present in the biological fluid that interact with the individual's metabolism. The project presented here is a partnership with Héma-Québec to design new quality control tests based on mass spectrometry measurements. The application of the LDTD ionisation source makes the acquisition of spectra possible in high throughput, which represents a large benefit in terms of experimental cost and time. Large datasets of mass spectra can thus be obtained in a short period of time, and machine learning can then be applied to these data. Statistical machine learning is used to classify the spectra of blood product samples and to provide statistical guarantees on this classification. The use of sparse, interpretable machine learning algorithms can also lead to the discovery of biomarkers. The work presented in this thesis concerns the design of two methods for the treatment of mass spectra. The first is the correction by virtual lock masses, used to correct any uniform shift of the masses in a spectrum. The second is a new method of peak alignment, used to correct slight measurement errors. In addition, a new kernel method (a method to compare examples mathematically) was designed specifically for application to mass spectra. Finally, classification results on mass spectra acquired with an LDTD ionisation source and by liquid chromatography mass spectrometry are presented.
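The idea behind a lock-mass correction, as summarized above, can be illustrated with a minimal sketch: estimate the uniform m/z shift from reference ("lock mass") peaks of known mass, then subtract it from every peak. The thesis's virtual-lock-mass method is more elaborate; the function name and all numbers below are invented for the example.

```python
from statistics import median

def correct_uniform_shift(peaks, observed_refs, true_refs):
    """Subtract the median shift of the reference peaks from all peaks."""
    shift = median(o - t for o, t in zip(observed_refs, true_refs))
    return [m - shift for m in peaks]

corrected = correct_uniform_shift(
    peaks=[100.05, 250.05, 400.05],
    observed_refs=[150.05, 300.05],  # observed positions of two lock masses
    true_refs=[150.00, 300.00],      # their known true masses
)
print(corrected)  # all peaks shifted down by ~0.05 m/z
```

The median makes the shift estimate robust to a single badly-measured reference peak, which matters when reference peaks are picked automatically.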
Harb, Hassan. "Conception du décodeur NB-LDPC à débit ultra-élevé." Thesis, Lorient, 2018. http://www.theses.fr/2018LORIS504/document.
Non-Binary Low-Density Parity-Check (NB-LDPC) codes constitute an interesting category of error correction codes and are well known to outperform their binary counterparts. However, their non-binary nature makes their decoding process more complex. This PhD thesis proposes new decoding algorithms for NB-LDPC codes, shaping hardware architectures intended to be of low complexity and high throughput. The first contribution of the thesis is to reduce the complexity of the Check Node (CN) by minimizing the number of messages being processed. This is done thanks to a pre-sorting process that sorts the messages entering the CN according to their reliability values; the less reliable messages are omitted, and the hardware dedicated to them can simply be removed. This reliability-based sorting, which enables the processing of only the most reliable messages, yields a large reduction in the hardware complexity of the NB-LDPC decoder. Clearly, this hardware reduction must come with no significant performance degradation. A new hybrid CN architecture (H-CN), combining two state-of-the-art algorithms, the Forward-Backward CN (FB-CN) and the Syndrome-Based CN (SB-CN), has been proposed. This hybrid model effectively exploits the advantages of pre-sorting. The thesis also proposes new methods to perform the Variable Node (VN) processing in the context of a pre-sorting-based architecture. Different implementation examples for NB-LDPC codes defined over GF(64) and GF(256) are presented. For the decoder to run faster, it must become parallel. From this perspective, we have proposed a new efficient parallel decoder architecture for a rate-5/6 NB-LDPC code defined over GF(64). This architecture is characterized by a fully parallel CN architecture receiving all the input messages in only one clock cycle.
The proposed methodology for the parallel implementation of NB-LDPC decoders opens a new vein in the hardware design of ultra-high-throughput decoders. Finally, since NB-LDPC decoders require a sorting function to extract the P minimum values from a list of size Ns, a chapter is dedicated to this problem, in which an original architecture called First-Then-Second-Extrema-Selection (FTSES) is proposed.
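The selection task named above, extracting the P minimum values from a list of size Ns, has a direct software analogue (this sketch is not the FTSES hardware architecture itself, just the function it computes):

```python
import heapq

def p_minima(values, p):
    """Return the p smallest values, in increasing order."""
    # heapq.nsmallest performs a partial selection: it avoids fully
    # sorting the Ns-element list when only p << Ns values are needed.
    return heapq.nsmallest(p, values)

# e.g. P = 4 minima among Ns = 8 reliability values
print(p_minima([7.2, 0.5, 3.1, 9.9, 1.4, 6.0, 2.8, 5.5], 4))
# → [0.5, 1.4, 2.8, 3.1]
```

In the decoder context, `values` would be message reliabilities (e.g. log-likelihood ratios), and the hardware challenge addressed by FTSES is doing this selection at very high clock rates rather than in software.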
Djafa Tchuspa, Steve Moses. "Développement et optimisation d'un modèle numérique 3D pour la simulation d'un système dédié au contrôle non destructif des tubes ferromagnétiques par flux de fuite." Phd thesis, Université Paris Sud - Paris XI, 2013. http://tel.archives-ouvertes.fr/tel-00953405.
Paparone, Julien. "Contrôle de l’émission dans des nanostructures plasmoniques : nanoantennes multimères et plasmons long-range." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1178/document.
The subject of this thesis is the coupling between luminescent nanocrystals and metallic nanostructures. These structures are of interest for a large variety of applications thanks to the appearance of electromagnetic surface waves known as plasmons, whose properties are tailored by the geometry of the structures. In this thesis, two types of geometry are addressed: long-range plasmons and plasmonic nanoantennas. The first study focuses on a geometry in which two propagative surface plasmons are coupled through a thin metal film, creating a new type of plasmon with extended propagation length. By coupling the emission of nanocrystals to such a geometry, the energy repartition among the available de-excitation channels was determined. The vicinity of the metal also proved to increase the spontaneous decay rate by a factor of up to 1.7, and the non-trivial contribution of conventional waveguide modes was demonstrated. The second study investigates the potential of metallic nanoparticles in a pillar geometry as nanoantennas to enhance and redirect spontaneous emission. The structure is composed of a metallic dimer creating a hotspot, on top of which another metallic nanoparticle is placed. FDTD simulations have shown that this kind of geometry can lead to low losses (<10%), a strong enhancement of the emission rate (>×80) and a redirection of the emission, and paves the way to wavelength multiplexing. Moreover, these structures have the advantage of being compatible with modern thin-film fabrication techniques. Preliminary realizations are then presented.
Peigat, Laurent. "Modélisation d'un joint viscoplastique pour la filière hydrogène." Phd thesis, Ecole Nationale Supérieure des Mines de Paris, 2012. http://pastel.archives-ouvertes.fr/pastel-00756297.
Laffay, Paul. "Réduction du bruit propre d'une pale de ventilateur par la mise en place d'une fente de soufflage et de dents de scie sur le bord de fuite du profil." Mémoire, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/7966.
Gallezot, Matthieu. "Simulation numérique du contrôle non-destructif des guides d’ondes enfouis." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0040/document.
Various elements of civil engineering structures are elongated and partially embedded in a solid medium. Guided waves can be used for the non-destructive evaluation (NDE) of such elements. The element is then considered as an open waveguide, in which most waves are attenuated by leakage into the surrounding medium. Furthermore, the problem is difficult to solve numerically because it is unbounded. In a previous thesis, it was shown that the semi-analytical finite element (SAFE) method and perfectly matched layers (PML) can be coupled for the numerical computation of modes. This yields three types of modes: trapped modes, leaky modes and PML modes. Only trapped and leaky modes are useful for the post-processing of dispersion curves; PML modes are not intrinsic to the physics. The main aim of this thesis is to obtain the propagated and diffracted fields from modal superpositions over the numerical modes. First, we show that the three types of modes belong to the modal basis. To guarantee the uniqueness of the solutions, an orthogonality relationship is derived on the section including the PML. The forced response can then be obtained very efficiently with a modal expansion at any point of the waveguide. Modal expansions are also used to build transparent boundaries at the cross-sections of a small finite element domain enclosing a defect, thereby yielding the diffracted field. Throughout this work, we study whether solutions can be obtained with modal expansions over leaky modes only, which reduces the computational cost. Besides, solutions are obtained at high frequencies (which are of interest for NDE) and in three-dimensional waveguides, demonstrating the generality of the methods. The second objective of this thesis is to propose an imaging method to locate defects. The topological imaging method is applied to a waveguide configuration, and the general theoretical framework, based on constrained optimization theory, is recalled.
The image can be quickly computed thanks to the modal formalism. The case of a damaged waveguide is then simulated to assess the influence on image quality of the emitted field characteristics (monomodal, dispersive or multimodal)and of the measurement configuration
Larrieu, Nicolas. "Contrôle de congestion et gestion du trafic à partir de mesures pour l'optimisation de la qualité de service dans l'Internet." Toulouse, INSA, 2005. http://www.theses.fr/2005ISAT0007.
Full text
Internet monitoring has only been used for the research, engineering and design of Internet networks since the beginning of the 2000s, but it is more and more popular and is spreading rapidly. It deals with studying, characterizing, analyzing and modeling the traffic on the different Internet links in order to understand network behaviors when facing traffic which is still largely unknown. In particular, guaranteeing QoS in the Internet is currently one of the most challenging issues. This thesis aims at designing new communication protocols and architectures able to reduce the long-range dependence (LRD) of traffic in order to optimize the use of communication resources, so that new protocol and architectural mechanisms can be perfectly suited to users' needs and traffic constraints. This PhD work thus deals with a new approach for the Internet, aiming at improving traffic management, QoS and, more generally, network services. This approach, called Measurement Based Networking (MBN), is built on the use of active and passive monitoring techniques to evaluate different network parameters in real time and to analyze the traffic in order to react very quickly and accurately to specific events arising in the network (for instance, congestion events). We illustrate the MBN approach in particular by designing a new measurement-based congestion control mechanism (MBCC), which is evaluated through NS-2 simulations. We show how this new mechanism can improve traffic characteristics as well as Internet QoS, despite the complexity and variability of current Internet traffic.
Batchati, Pinalessa. "Étude d'une distribution hydraulique pilotée par P.W.M. et modélisation et contrôle de débit d'une pompe à cylindrée variable pilotée par microprocesseur." Compiègne, 1994. http://www.theses.fr/1994COMPD702.
Full text
Croville, Guillaume. "Séquençage et PCR à haut débit : application à la détection et la caractérisation d'agents pathogènes respiratoires aviaires et au contrôle de pureté microbiologique des vaccins." Thesis, Paris Sciences et Lettres (ComUE), 2017. http://www.theses.fr/2017PSLEP028/document.
Full text
Detection of pathogens is an increasing challenge, since infectious diseases represent major risks for both human and animal health. Globalization of trade and travel, evolution of farming practices and global climatic changes, as well as mass migrations, are impacting the biology of pathogens and their emerging potential. This manuscript describes three approaches, based on three innovative technologies of molecular biology, applied to the detection of pathogens in three different settings: (i) detection of a list of pathogens using real-time quantitative PCR on a microfluidic platform, (ii) unbiased detection of pathogens in a complex matrix, using metagenomics and Illumina (MiSeq) sequencing, and (iii) genotyping of pathogens without isolation or PCR enrichment using the 3rd-generation NGS (Next Generation Sequencing) platform MinION from Oxford Nanopore Technologies. The three studies showed the contribution of these techniques, each with distinctive features suitable for the respective applications. Beyond the application of these techniques to the field of microbial diagnostics, their use for the control of veterinary immunological drugs is a priority of this project. Veterinary vaccines are subject not only to mandatory detection of listed pathogens to be excluded, but also to validation of the genetic identity of vaccine strains. The growing availability and performance of new PCR and sequencing technologies open cutting-edge perspectives in the field of microbial diagnostics and control.
Sarazin, Alexis. "Analyse bioinformatique du contrôle des éléments transposables par les siARN chez Arabidopsis thaliana." Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112258/document.
Full text
Many mechanisms control and limit the proliferation of transposable elements (TEs), which could otherwise threaten the structural and functional integrity of the genome. In plants, RNA interference (RNAi) plays an important role in this control through small RNAs that guide the regulation of the expression of endogenous or exogenous sequences by two types of mechanisms. The first, shared by many eukaryotic organisms, acts at the post-transcriptional level to inhibit the activity of mRNA. The second allows the transcriptional control of TE activity through a mechanism called RNA-directed DNA methylation (RdDM), which involves 24-nt-long siRNAs ("short interfering RNAs") that guide DNA methylation specifically onto TE sequences. Furthermore, siRNAs are also involved in the progressive restoration of DNA methylation after a loss induced by mutation of the DDM1 gene (Decrease in DNA Methylation 1). The aim of this thesis is to take advantage of high-throughput sequencing technologies to characterize these TE control mechanisms by siRNAs in the model plant Arabidopsis thaliana. First, I developed methods and bioinformatics tools to effectively manage the data produced by high-throughput sequencing of small RNA libraries. These tools, combined in a pipeline, are designed to allow the study of the accumulation of siRNAs corresponding to TE sequences or TE families, as well as their global or detailed visualization. These tools were applied to characterize, in a wild-type background, the association between siRNAs and TEs in order to define factors that may explain the observed differences in siRNA abundance. These analyses were performed by taking into account both DNA methylation states and genomic context, and provide a static view of the siRNA control of TEs and of their impact on nearby genes.
Then, analysis of small RNA libraries from mutants of the RNAi pathway was performed to better characterize the impact of DNA methylation loss on siRNA populations and to define the mechanisms involved in the production of the 21-nt siRNAs induced in the ddm1 mutant. These comparative analyses of TE control after loss of DNA methylation allowed us to highlight the production of 24-nt siRNAs independently of the classical RdDM pathway and to propose a model explaining the production of 21-nt siRNAs in the ddm1 mutant. Finally, I sought to clarify the involvement of siRNAs in the restoration of DNA methylation. The changes in DNA methylation induced by the ddm1 mutation were characterized, as well as their transgenerational stability in an epiRIL population. The stability of DNA hypomethylation was studied in relation to high-throughput sequencing of small RNA data from WT, ddm1 and four epiRIL lines, providing a temporal view of the control of TEs by siRNAs. The results highlight the important role of small RNAs in the control of transposable elements, preserving the structural and functional integrity of the genome through a variety of mechanisms depending on TE sequences. This work opens the way to the analysis of siRNA control of TEs based on approaches that connect TEs in networks according to their shared siRNA sequences, which would allow the study of "siRNA connections" between TEs in order to explore, for example, the action in trans of siRNAs in the restoration of DNA methylation defects.
Khelalfa, Raouf. "Contribution à l'étude des écoulements de filtration au voisinage du point critique." Paris 6, 2008. http://www.theses.fr/2008PA066173.
Full text
The assembly of metal parts by flanges creates a leakage path through which a fluid can pass. In this study we aim at understanding the phenomenology of confined flow in the vicinity of the critical point and its influence on the value of the mass flow rate. The problem is limited to viscous stationary flows. The leakage geometry is assimilated to a capillary tube. The wall temperature is fixed at the critical value of the fluid, and a pressure difference is imposed between a supercritical inlet and a subcritical outlet. The phenomenological approach revealed the existence of a transition zone at the crossing of the critical point. In this region of strong expansion, it showed the existence of a coupling between thermal and dynamic effects, and the importance of the thermal convection mechanism, associated with conduction, for heat transport. Finally, it was found that the expansion of the fluid towards the outlet is slowed by a plugging ("stopper") phenomenon.
Alvim, Mário. "Des approches formelles pour le cachement d'information: Une analyse des systèmes interactifs, contrôle de divulgation statistique, et le raffinement des spécifications." Phd thesis, Ecole Polytechnique X, 2011. http://tel.archives-ouvertes.fr/tel-00639948.
Full text
Burel, Catherine. "Le contrôle de la digestion anaérobie des eaux usées : signification des différents paramètres de la méthanisation : choix du débit de biogaz comme indicateur de fonctionnement." Compiègne, 1986. http://www.theses.fr/1986COMPI229.
Full text
Guillier, Romaric. "Méthodologies et Outils pour l'évaluation des Protocoles de Transport dans le contexte des réseaux très haut débit." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2009. http://tel.archives-ouvertes.fr/tel-00529664.
Full text
Attar, Batoul. "Modélisation réaliste en conditions extrêmes des servovalves électrohydrauliques utilisées pour le guidage et la navigation aéronautique et spatiale." Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000184/.
Full text
Electro-hydraulic actuators are widely used in flight controls (e.g., for position control). They include an electro-hydraulic servovalve, which acts as a power amplifier between the electrical domain and the hydraulic one. Simulating the servovalve is an important step in the virtual prototyping cycle, in order to reproduce its behaviour whatever the operating conditions. For this purpose, this work is divided into three parts. The first concerns the development of a model for the hydraulic fluid, whose physical properties change as a function of temperature and pressure; the methodology for developing the model from data provided by standards is presented. The second concerns the development of an advanced model to reproduce the characteristics of the hydraulic (second) stage of the servovalve. It is particularly important to study the characteristics that affect performance, such as the pressure gain, the flow-rate gain and the leakage. A realistic lumped-parameter approach is associated with an original model covering all modes of operation (positive or negative opening, laminar or turbulent flow). The last part deals with an external model of the servovalve dynamics; it provides a model that reproduces the effect of the supply pressure and of the amplitude of the input signal on the dynamic performance of the servovalve. The models are identified and validated for an aeronautic servovalve and a similar industrial servovalve.
Slimani, Hicham. "Protocoles coopératifs pour réseaux sans fil." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10309/1/slimani.pdf.
Full text
Pampouille, Eva. "Analyse haut-débit du déterminisme de défauts musculaires impactant la qualité de la viande chez le poulet." Thesis, Tours, 2019. http://www.theses.fr/2019TOUR4010.
Full text
The poultry industry is facing muscular defects that impair chicken meat quality. Genetic and genomic studies were carried out, in addition to histological measurements, to better understand the etiology of these defects and to contribute to the development of new indicators useful for diagnosis and selection. The studies focused on two complementary genetic models: 1) two divergent chicken lines selected on breast meat ultimate pH, and 2) a line with strong muscular development more severely affected by the defects, which was studied in comparison with a slow-growing strain free from lesions. The thesis helped to describe the metabolic and structural changes observed in cases of severe myopathy. It also led to the identification of the first QTL regions controlling muscular defects in chicken and to the establishment of a set of genes correlated with histological measurements of myopathies that will serve, after validation, as tools for selection and breeding.
Lopez, Pacheco Dino Martín. "Propositions pour une version robuste et inter-operable d'eXplicit Control Protocol dans des réseaux hétérogènes à haut débit." Phd thesis, Ecole normale supérieure de lyon - ENS LYON, 2008. http://tel.archives-ouvertes.fr/tel-00295072.
Full text
We showed that router-assisted protocols that provide senders with an explicit sending rate ("Explicit Rate Notification", ERN) share network resources between users and avoid congestion better than end-to-end protocols (such as TCP-based protocols). However, the lack of interoperability of ERN protocols with non-ERN routers (for example, DropTail routers) and with end-to-end congestion control protocols (such as TCP) prevents their deployment in current networks.
To solve the interoperability problems of ERN protocols with non-ERN routers, we proposed strategies and mechanisms able to detect groups of non-ERN routers between two ERN routers, to estimate the minimum available bandwidth inside these groups, and to create virtual routers that replace each group of non-ERN routers with a single ERN router.
We also proposed an inter-protocol fairness scheme between ERN and end-to-end protocols. With our solution, the ERN routers first estimate the bandwidth needs of the ERN and non-ERN flows, then limit the rate of the non-ERN flows by dropping their packets with a probability that depends on the results of the first step.
The success of ERN protocols relies on feedback information, computed by the routers and returned by the receivers to the senders. We showed that ERN protocols become unstable when feedback information is lost in heterogeneous environments with variable bandwidth. In this context, we proposed a new architecture that improves the robustness of ERN protocols as well as the reactivity of the senders.
All our proposals, applicable to other ERN protocols, were tested and validated on the eXplicit Control Protocol (XCP). We showed that our solutions overcome the interoperability challenges of ERN protocols in a wide range of scenarios and topologies.
In this way, we laid the groundwork for an ERN protocol that can be deployed in heterogeneous networks with a large bandwidth-delay product, where large amounts of data must be transferred, such as grid networks (e.g., GÉANT).
Farah, Elias. "Detection of water leakage using innovative smart water system : application to SunRise Smart City demonstrator." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10103/document.
Full text
This work concerns the use of Smart Water technology for the detection of water leakage. It is part of the SunRise project, which aims at turning the Scientific Campus of the University of Lille into a large-scale demonstrator site of the "Smart and Sustainable City". The campus is representative of a small town of 25,000 inhabitants. This work is also part of the European project SmartWater4Europe, which aims to develop four demonstrators of Smart Water technology. This thesis includes five parts. The first part is a literature review of the technologies used in leakage detection. The second part presents the SunRise Smart City demonstrator, which is used as a basis for this thesis; it details the instrumentation installed on the demo site as well as leak simulation tests used to analyze the efficiency of an acoustic leak-detection system. The third part focuses on the analysis of water consumption at different time scales, concerning both sub-meters and bulk meters; it is conducted using a platform for the aggregation and interpretation of the data, and also presents the major leakage events of 2015. The fourth part concerns leak detection using the water balance calculation based on the top-down and bottom-up approaches. It also presents the Active Leakage Control (ALC) strategy applied to the demo site in order to reduce the level of Non-Revenue Water (NRW). The last part concerns the use of advanced methods for leak detection, with application to the campus data. These methods include the Comparison of Flow Pattern Distribution (CFPD) method, the Minimum Night Flow (MNF) method and two newly developed statistical approaches.
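The Minimum Night Flow (MNF) method mentioned in this abstract exploits the fact that legitimate consumption is lowest at night, so a persistent rise in the minimum night flow points to background leakage. A minimal illustrative sketch of the idea (the function names, the night window and the 20% tolerance are hypothetical choices, not taken from the thesis):

```python
# Sketch of the Minimum Night Flow (MNF) leak indicator: night-time
# consumption is lowest, so a rising MNF suggests background leakage.
# All names and thresholds here are illustrative assumptions.

def minimum_night_flow(hourly_flows, night_hours=(2, 3, 4)):
    """Return the minimum flow observed during the night window.

    hourly_flows: dict mapping hour-of-day (0-23) to flow in m3/h.
    """
    return min(hourly_flows[h] for h in night_hours)

def leak_suspected(baseline_mnf, current_mnf, tolerance=0.20):
    """Flag a leak when the MNF rises more than `tolerance` above baseline."""
    return current_mnf > baseline_mnf * (1.0 + tolerance)

if __name__ == "__main__":
    # One day of synthetic hourly flows (m3/h) for a small district.
    normal = {h: 12.0 for h in range(24)}
    normal.update({2: 1.0, 3: 0.9, 4: 1.1})   # quiet night hours
    leaking = dict(normal)
    leaking.update({2: 2.4, 3: 2.3, 4: 2.5})  # night flow has crept up

    base = minimum_night_flow(normal)
    print(leak_suspected(base, minimum_night_flow(normal)))   # False
    print(leak_suspected(base, minimum_night_flow(leaking)))  # True
```

In practice the baseline would be estimated from a period known to be leak-free, and the comparison made per district metered area rather than for a single meter.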
Fedida, Vincent. "Etude des défauts des machines électriques tournantes par analyse du champ magnétique de fuite : Application au diagnostic de machines de faibles puissances dans un contexte de production en grande série." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT023.
Full text
Diagnosis and identification of faults occurring in electrical machines (mainly single-phase asynchronous motors) by stray flux measurement.
Boussa, Hocine. "Structures en béton soumises à des sollicitations thermomécaniques sévères : évolution des dommages et des perméabilités." Cachan, Ecole normale supérieure, 2000. http://www.theses.fr/2000DENS0004.
Full text
Karray, Mohamed Kadhem. "Evaluation analytique des performanes des réseaux sans-fil par un processus de Markov spatial prenant en compte leur géométrie, leur dynamique et leurs algorithmes de contrôle." Phd thesis, Télécom ParisTech, 2007. http://pastel.archives-ouvertes.fr/pastel-00003009.
Full text
Jaumouillé, Elodie. "Contrôle de l'état hydraulique dans un réseau d'eau potable pour limiter les pertes." Phd thesis, Université Sciences et Technologies - Bordeaux I, 2009. http://tel.archives-ouvertes.fr/tel-00443827.
Full text
Gadari, M’hammed El. "Étude expérimentale et numérique du comportement des joints à lèvre." Thesis, Poitiers, 2013. http://www.theses.fr/2013POIT2304/document.
Full text
For about sixty years, researchers have been interested in understanding and modeling the elastohydrodynamic (EHD) behavior of rotary lip seals. Until now, however, their modeling has not been treated with full accuracy. Even though many studies have been devoted to this problem, several questions have been raised and are still the subject of controversy among researchers, namely the parameters influencing rotary lip seal performance, such as: the texturing of the shaft surface, the law adopted for the mechanical behavior of the lip seal, the approach used to develop the compliance matrix, the importance of assuming a smooth or a rough shaft, and finally the ratio between the contact width and the wavelength of the lip roughness along the circumferential direction. The main goal of this thesis is to answer these questions rigorously by developing and validating a numerical tool for EHD rotary lip seal modeling that takes into account the behavior law of the lip and a compliance matrix rigorously validated for the smooth-shaft case as well as for rough and textured shafts. In addition, an analytical approach is proposed that models the vibratory behavior of the squeeze film, whose nonlinear behavior is taken into account.
Morlot, Thomas. "La gestion dynamique des relations hauteur-débit des stations d'hydrométrie et le calcul des incertitudes associées : un indicateur de gestion, de qualité et de suivi des points de mesure." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENU029/document.
Full text
As a dealer or owner-operator of electricity production structures, EDF is responsible for operating them in safe conditions and for respecting the limits imposed by regulations. The knowledge of water resources is thus one of EDF's main concerns, since the company remains preoccupied with the proper use of its facilities. The knowledge of streamflow is one of its priorities, to better respond to three key issues: plant safety, compliance with regulatory requirements, and optimization of the means of production. To meet these needs, EDF-DTG (Division Technique Générale) operates an observation network that covers climatic parameters, such as air temperature, precipitation and snow, as well as streamflow. The data collected allow real-time monitoring of rivers, as well as hydrological studies and the sizing of structures. Ensuring the quality of the streamflow data is a priority. Up to now it has not been possible to measure the flow of a river continuously, since direct measurements of discharge are time-consuming and expensive. In common cases the flow of a river is instead deduced from continuous measurements of water level. Punctual measurements of discharge, called gaugings, allow the development of a stage-discharge relationship named the rating curve. The permanently installed equipment for measuring levels on a river is called a hydrometric station. The whole process thus constitutes an indirect way of estimating the discharge in rivers, whose associated uncertainties need to be described. Quantification of confidence intervals is, however, not the only problem for the hydrometrician: fast changes in the stage-discharge relationship often make real-time streamflow monitoring quite difficult, while the need for continuous, highly reliable data is obvious.
The historical method of producing the rating curve, based on a construction from a sufficient number of gaugings, chronologically contiguous and well distributed over the widest possible range of discharge, remains poorly adapted to fast or cyclical changes of the stage-discharge relationship. The classical method does not take sufficiently into account the erosion and sedimentation processes, or the seasonal growth of vegetation. Besides, the capacity of management teams to perform gaugings generally remains quite limited. To get the most accurate streamflow data and to improve their reliability, this thesis explores an original dynamic method to compute rating curves based on the historical gaugings of a hydrometric station, while calculating the associated uncertainties. First, a dynamic rating curve assessment is created in order to compute a rating curve for each gauging of a considered hydrometric station. After the tracing, an uncertainty model is built around each computed rating curve. It takes into account the uncertainty of the gaugings, but also the uncertainty in the measurement of the water height, the sensitivity of the stage-discharge relationship and the quality of the tracing. A variographic analysis is used to age the gaugings and the rating curves and to obtain a final confidence interval that increases with time and is updated at each new gauging, since each new gauging gives rise to a new rating curve that is more reliable, because more recent, for the prediction of discharges to come. Chronological series of streamflow data are then obtained homogeneously, with a confidence interval that takes into consideration the aging of the rating curves. By taking into account the variability of the flow conditions and the life of the hydrometric station, the method can answer important questions in the field of hydrometry such as "How many gaugings per year have to be made so as to produce streamflow data with an average uncertainty of X%?" and "When, and in which range of flows, should those gaugings be carried out?"
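A common parameterization of the stage-discharge relationship discussed above is the power-law rating curve Q = a·(h − h0)^b. The sketch below fits such a curve to gaugings by log-linear least squares; it is a minimal illustration of the concept, not the dynamic tracing method of the thesis, and assumes the cease-to-flow stage h0 is known:

```python
# Minimal sketch: fit a power-law rating curve Q = a * (h - h0)**b
# to punctual gaugings via least squares in log-log space.
import math

def fit_rating_curve(gaugings, h0=0.0):
    """Least-squares fit of log Q = log a + b * log(h - h0).

    gaugings: list of (stage h in m, discharge Q in m3/s) pairs.
    Returns (a, b) for the rating curve Q = a * (h - h0)**b.
    """
    xs = [math.log(h - h0) for h, _ in gaugings]
    ys = [math.log(q) for _, q in gaugings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def discharge(a, b, h, h0=0.0):
    """Predicted discharge for a given stage, from the fitted curve."""
    return a * (h - h0) ** b

if __name__ == "__main__":
    # Synthetic, noise-free gaugings generated from Q = 2.5 * h**1.6,
    # so the fit recovers the parameters exactly.
    data = [(h, 2.5 * h ** 1.6) for h in (0.5, 1.0, 1.5, 2.0, 3.0)]
    a, b = fit_rating_curve(data)
    print(round(a, 2), round(b, 2))  # → 2.5 1.6
```

Real gaugings carry measurement uncertainty and the relationship drifts with erosion and vegetation, which is precisely why the thesis re-traces a curve at each new gauging and ages the older ones.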
Tran, Minh Anh. "Insensibilité dans les réseaux de files d'attente et applications au partage de ressources informatiques." Phd thesis, Télécom ParisTech, 2007. http://tel.archives-ouvertes.fr/tel-00196718.
Full text
Li, Ao. "Performances des codes correcteurs d’erreur LDPC appliqués au lien Fronthaul optique haut-débit pour l’architecture C-RAN du réseau 5G : conception et implantation sur FPGA." Thesis, Limoges, 2017. http://www.theses.fr/2017LIMO0110/document.
Full text
Nowadays, the architecture of the mobile network is evolving rapidly to sustain the increase in bit rate between the Central Office (CO, core network) and the various terminals, such as mobile phones, computers and tablets, in order to satisfy users. To address these challenges, the C-RAN (Cloud or Centralized RAN) network is seen as a 5G solution. In the C-RAN context, all BBUs (Base Band Units) are centralized in the CO; only the RRH (Remote Radio Head) remains at the head of the base station (BS). A new segment between BBUs and RRHs appears, called the "fronthaul". It is based on D-RoF (digital radio-over-fiber) transmission and carries the digital radio signal at a high bit rate using the CPRI (Common Public Radio Interface) protocol. Taking CAPEX and OPEX into account, the ANR LAMPION project proposed self-seeded Reflective Semiconductor Optical Amplifier (RSOA) technology in order to make the solution more flexible and to overcome the need for colored transmitters/receivers in the context of WDM-PON (Wavelength Division Multiplexing Passive Optical Network). Nevertheless, it is necessary to add FEC (forward error correction) to the transmission to ensure the quality of service. The objective of this thesis is therefore to find the FEC best suited to the C-RAN context. Our work focused on the use of LDPC codes, chosen after performance comparisons with other types of codes. We specified the parameters (code performance, matrix size, cycles, etc.) required for LDPC codes to obtain the best performance. Hard-decision LDPC algorithms were chosen after considering the trade-off between circuit complexity and performance. Among these hard-decision algorithms, GDBF (gradient descent bit-flipping) was the best solution. Taking a 2-bit ADC in the channel into account led us to propose a variant: the BWGDBF (Balanced Weighted GDBF). Optimizations were also made with respect to the convergence of the algorithm and to latency.
Finally, we implemented our algorithm on a Spartan-6 xc6slx16 FPGA. Several methods were proposed to achieve the latency of 5 μs desired in the C-RAN context. This thesis was supported by the ANR LAMPION project (Lambda-based Access and Metropolitan Passive Optical Networks).
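GDBF and the BWGDBF variant refine the classic hard-decision bit-flipping idea: compute the syndrome, then flip the bits implicated in the most unsatisfied parity checks. The following generic sketch is a plain Gallager-style bit-flipping decoder, not the authors' algorithm, and uses a small Hamming(7,4) parity-check matrix purely for illustration:

```python
# Generic hard-decision bit-flipping decoding sketch (Gallager style).
# Not the BWGDBF algorithm of the thesis; illustrative only.

def bit_flip_decode(H, received, max_iters=20):
    """Decode hard-decision bits against parity-check matrix H.

    H: parity-check matrix as a list of rows of 0/1.
    received: hard-decision channel bits (list of 0/1).
    Returns (decoded bits, True if all parity checks are satisfied).
    """
    bits = list(received)
    n = len(bits)
    for _ in range(max_iters):
        # Indices of the parity checks that currently fail.
        failing = [i for i, row in enumerate(H)
                   if sum(r * b for r, b in zip(row, bits)) % 2 == 1]
        if not failing:
            return bits, True
        # For each code bit, count how many failing checks involve it.
        counts = [sum(H[i][j] for i in failing) for j in range(n)]
        # Flip the bit implicated in the most failing checks.
        bits[counts.index(max(counts))] ^= 1
    return bits, False

if __name__ == "__main__":
    # Hamming(7,4) parity-check matrix (illustrative stand-in for an LDPC H).
    H = [[1, 0, 1, 0, 1, 0, 1],
         [0, 1, 1, 0, 0, 1, 1],
         [0, 0, 0, 1, 1, 1, 1]]
    noisy = [0, 0, 1, 0, 0, 0, 0]     # all-zero codeword with bit 2 flipped
    print(bit_flip_decode(H, noisy))  # → ([0, 0, 0, 0, 0, 0, 0], True)
```

GDBF improves on this by minimizing an objective that also weighs agreement with the channel output, and BWGDBF (per the abstract) adapts the weighting to a 2-bit quantized channel; the flip-selection loop above is the part such hardware decoders parallelize.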
Boineau, Frédéric. "Applications du fluxmètre gazeux à pression constante ; caractérisation métrologique et comparaisons aux méthodes de référence pour les mesures de débit de 4×10⁻¹² mol/s à 4×10⁻⁷ mol/s." Thesis, Paris, CNAM, 2016. http://www.theses.fr/2016CNAM1072/document.
Full text
This dissertation concerns the development and applications of a constant-pressure gas flowmeter, the primary reference instrument used by national metrology laboratories to measure very low gas flows. It guarantees the traceability of low absolute pressures, via the continuous expansion method, and that of helium leaks, both related to applications in the field of vacuum. In addition, we have shown that the constant-pressure flowmeter of the Laboratoire commun de métrologie (LCM) is well suited to micro-flow measurements, a sub-field of flow metering. Besides the key points of the design and of the metrological characterization, this document describes the study of the continuous expansion method and work on comparisons of the constant-pressure gas flowmeter with the reference methods used at LCM, in particular the dynamic gravimetric method.
Rojatkar, Ashish. "Développement d'une méthodologie pour l'évaluation de l'exposition réelle des personnes aux champs électromagnétiques." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC028/document.
Full text
The work presented in this thesis addresses the need to determine the radio-frequency (RF) exposure due to mobile phones under typical usage (real-life) scenarios, and to develop a method to predict and compare mobile phones in terms of their real-life RF exposure. Mobile phones are characterized for their specific absorption rate (SAR) and for their transmit and receive performance, given by the over-the-air (OTA) characterization. Using the SAR and total radiated power (TRP) characterizations, an exposure index referred to as the SAROTA index was previously proposed to predict the real-life exposure due to mobile phones, which also serves as a metric to compare individual phones. In order to determine the real-life RF exposure experimentally, various software-modified phones (SMPs) are used in the study; these phones contain embedded software capable of recording the network parameters. The study is undertaken in the following order: (a) characterization of the available tools and resources for performing the targeted measurements and experiments, (b) identification of the important radio resource parameters and metrics for the targeted measurements, (c) investigation of the actual implementation of the power control mechanism in a live network for various received-signal-level and received-quality environments, (d) investigation of the correlation between the over-the-air performance of mobile phones and the extent of actual power control, and (e) comparison of the actual exposure with the real-life exposure predicted by the SAROTA index. Given the logistical and technical challenges encountered, the experiments were restricted to indoor environments to enable repeatability. During the first phase of the study, the stability of the indoor environment was evaluated.
During the second phase, the influence of the hand phantom on the SAR and TRP of the mobile phones, and the capability of the SAROTA index to predict the exposure, were investigated. Building on the insights from the hand phantom experiments, in the third phase a set of identical software-modified phones was externally modified to alter their TRP performance, and the methodology to determine the real-life exposure and to verify the capability of the SAROTA index to predict the exposure levels was investigated. The experiments demonstrate that the SAROTA index is capable of predicting real-life exposure and of comparing mobile phones.
Lopez-Pacheco, Dino-Martin. "Propositions for a robust and inter-operable eXplicit Control Protocol on heterogeneous high speed networks." Lyon, École normale supérieure (sciences), 2008. http://www.theses.fr/2008ENSL0459.
Full text
Congestion control protocols aim to share network resources fairly between users and to avoid congestion. In this thesis, we have shown that router-assisted protocols providing explicit rate notification (ERN protocols) accomplish those goals better than end-to-end protocols (e.g., TCP-based protocols). However, ERN protocols, like the eXplicit Control Protocol (XCP), are not interoperable with current technologies and thus cannot be gradually deployed in current networks. Our research in this thesis resulted in a set of solutions that allow ERN protocols to be TCP-friendly, robust against losses of router information, and interoperable with non-ERN network equipment in a wide range of scenarios. We thereby provided the basis for an ERN protocol able to be gradually deployed in current heterogeneous high-speed networks, where users frequently move very large amounts of data.
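The fair-share objective that ERN protocols such as XCP approximate is typically max-min fairness, which can be computed by progressive filling: repeatedly divide the unallocated capacity equally among the flows whose demand is not yet satisfied. A minimal single-link sketch of that computation (names illustrative; real ERN routers estimate demands rather than knowing them):

```python
# Progressive-filling sketch of max-min fair rate allocation on one
# bottleneck link. Illustrative only; not XCP's actual control law.

def max_min_fair_share(capacity, demands):
    """Return max-min fair allocations for flows on a single link.

    capacity: bottleneck link capacity (same unit as demands).
    demands:  per-flow demands; allocations are returned in order.
    """
    alloc = [0.0] * len(demands)
    active = set(range(len(demands)))    # flows whose demand is unmet
    remaining = capacity
    while active and remaining > 1e-12:
        share = remaining / len(active)  # equal split of what is left
        for i in list(active):
            give = min(share, demands[i] - alloc[i])
            alloc[i] += give
            remaining -= give
            if demands[i] - alloc[i] <= 1e-12:
                active.discard(i)        # demand satisfied: flow leaves
    return alloc

if __name__ == "__main__":
    # Three flows on a 10-unit link: the small flow keeps its 2 units,
    # and the two large flows split the remaining 8 equally.
    print(max_min_fair_share(10.0, [2.0, 8.0, 10.0]))  # ≈ [2.0, 4.0, 4.0]
```

The interoperability problem the thesis tackles is exactly that this computation assumes every router on the path participates; non-ERN routers provide no such feedback, hence the virtual-router and TCP-friendliness mechanisms described above.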
Samara, Ziyad. "Etude de l’origine du comportement chaotique de la ventilation." Paris 6, 2008. http://www.theses.fr/2008PA066243.
Full text
Although human ventilatory flow resembles a periodic phenomenon, it is not one: it is truly chaotic. This means that the trajectory of the flow is bounded and depends on deterministic processes, but at the same time is also complex, sensitive to initial conditions and unpredictable in the long term. The theory of chaos provides mathematical tools for quantifying these characteristics. However, the physiological significance and clinical interest of ventilatory chaos depend on its source, which is unclear. The goal of this thesis was thus to contribute to its identification. By comparing, in humans, the ventilatory flow recorded at the mouth with a pneumotachometer to that reconstructed by inductive plethysmography, we showed that neither the respiratory route (nose or mouth) nor the recording technique affected the nature of the chaos, even if they changed its complexity; therefore, they cannot be its origin. Inspiratory threshold or resistive loading changed neither the nature nor the features of the chaotic dynamics of the ventilatory flow, even though respiratory compensation of these kinds of load depends on the premotor cortex. The latter is probably not a source of ventilatory chaos either. We finally showed that the neural respiratory output of in vitro isolated brainstems of post-metamorphic tadpoles was always chaotic, and that stimulation of this output by CO2 increased the intensity of the chaos. Chaos was rarely found in pre-metamorphic preparations, suggesting ontogenetic changes. In conclusion, the intrinsic properties of the automatic ventilatory command, located in the brainstem, could be a sufficient source of ventilatory chaos.