
Dissertations / Theses on the topic 'The PERT technique'



Consult the top 50 dissertations / theses for your research on the topic 'The PERT technique.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Turner, Lyle Robert. "Production structure models and applications within a Statistical Activity Cost Theory (SACT) Framework." Thesis, Queensland University of Technology, 2007. https://eprints.qut.edu.au/16310/1/Lyle_Turner_Thesis.pdf.

Full text
Abstract:
Statistical Activity Cost Theory (SACT) is an axiomatic and statistical theory of basic accounting measurement practice. The aim of the SACT analysis, among others, is to determine the statistical nature of both the physical production system of an accounting entity and its related costs, which can then be examined and applied to various decision-making problems. A central proposition of SACT is that the physical system of the entity, and the costs related to this system, are separate structures which can be modelled as such. To date, however, minimal progress has been made in describing production process structures within the SACT framework, nor have any advances been made in applying common statistical techniques to such an analysis. This thesis therefore extends the basic theory that has already been developed, presenting a novel method for representing and examining the physical processes that make up an entity's production system. It also examines the costing of these physical models, such that transactional data can be examined and related back to the underlying production processes. The thesis concludes with an example of such an application in a case study. The analysis developed in this thesis has been applied in a larger project which aims to produce generic modelling and decision tools, based upon SACT, to support return and risk management.
2

Turner, Lyle Robert. "Production structure models and applications within a Statistical Activity Cost Theory (SACT) Framework." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16310/.

Full text
Abstract:
Statistical Activity Cost Theory (SACT) is an axiomatic and statistical theory of basic accounting measurement practice. The aim of the SACT analysis, among others, is to determine the statistical nature of both the physical production system of an accounting entity and its related costs, which can then be examined and applied to various decision-making problems. A central proposition of SACT is that the physical system of the entity, and the costs related to this system, are separate structures which can be modelled as such. To date, however, minimal progress has been made in describing production process structures within the SACT framework, nor have any advances been made in applying common statistical techniques to such an analysis. This thesis therefore extends the basic theory that has already been developed, presenting a novel method for representing and examining the physical processes that make up an entity's production system. It also examines the costing of these physical models, such that transactional data can be examined and related back to the underlying production processes. The thesis concludes with an example of such an application in a case study. The analysis developed in this thesis has been applied in a larger project which aims to produce generic modelling and decision tools, based upon SACT, to support return and risk management.
3

Beaudoin, Vincent. "Développement de nouvelles techniques de compression de données sans perte." Thesis, Université Laval, 2009. http://www.theses.ulaval.ca/2009/25945/25945.pdf.

Full text
Abstract:
The objective of this thesis is to introduce the reader to general-purpose lossless data compression and to present two new techniques that we developed and implemented as contributions to the field. The first technique, bit recycling, aims to reduce the size of compressed files by exploiting the fact that many data compression techniques can produce several different compressed files from the same original document. This multiplicity of possible encodings for a single compressed file creates redundancy. We show that this redundancy can be used to reduce the size of compressed files. The second technique is a method based on enumerating the substrings of the file to be compressed. It is inspired by the PPM (prediction by partial matching) family of methods. We show how the method operates on a file to be compressed and analyse the results obtained empirically.
4

Hamide, Mahmoud. "Schedule and Cost Performance Analysis and Prediction in Louisiana DOTD." ScholarWorks@UNO, 2017. http://scholarworks.uno.edu/td/2311.

Full text
Abstract:
Many construction projects in the United States face the risk of cost overruns and schedule delays, and the State of Louisiana is no exception. Overruns are ultimately passed on to taxpayers and may force the state to take on fewer projects than it normally would. Many researchers have studied the causes of cost overruns and delays, leading private firms to develop project management tools and best practices to mitigate this risk. In this research, I study the historical trend in 2,912 publicly funded projects in the State of Louisiana. The study reveals the overall state-level accuracy of cost and schedule forecasting, and a forecasting formula based on those historical projects is developed to assist estimators at the parish level in predicting cost and schedule performance. The State of Louisiana runs many projects that deal with the transportation system (roadways, bridges, drainage, traffic signs, traffic signals, lighting, etc.). This dissertation analyses the time and cost of LADOTD projects: whether projects finish on time, early, or late, and whether completed projects come in over, under, or exactly at the bid amount. The intention is to produce schedule and cost estimates accurate enough to predict on-time completion and the final cost of a project (excluding weather conditions, extra work, and other unexpected problems that may arise over the life of the project).
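
The dissertation's own forecasting formula is not reproduced in this abstract. For orientation on this page's topic, the sketch below shows the classic PERT three-point estimate that schedule-risk work of this kind conventionally builds on; the function and the sample durations are illustrative assumptions, not figures from the thesis.

```python
def pert_estimate(optimistic: float, most_likely: float, pessimistic: float):
    """Classic PERT (beta-distribution) three-point estimate of an activity duration."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6  # conventional PERT spread
    return expected, std_dev

# Hypothetical activity: 10 days best case, 14 days most likely, 24 days worst case.
mean, sd = pert_estimate(10, 14, 24)
print(f"expected duration = {mean:.1f} days, std dev = {sd:.1f} days")
# expected duration = 15.0 days, std dev = 2.3 days
```
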
5

Machavaram, Venkata Rajanikanth. "Micro-machining techniques for the fabrication of fibre Fabry-Perot sensors." Thesis, Cranfield University, 2006. http://dspace.lib.cranfield.ac.uk/handle/1826/1210.

Full text
Abstract:
Fabry-Perot optical fibre sensors have been used extensively for measuring a variety of parameters such as strain, temperature, pressure and vibration. Conventional extrinsic fibre Fabry-Perot sensors are associated with problems such as the calibration of the gauge length of each individual sensor, their relatively large size compared to the diameter of the optical fibre, and a manual manufacturing method that leads to poor reproducibility. Therefore, new designs and fabrication techniques for producing fibre Fabry-Perot sensors are required to address these problems. This thesis investigates hydrofluoric acid etching and F2-laser micro-machining of optical fibres to produce intrinsic Fabry-Perot cavities. Chemical etching of single-mode fused silica fibres produced cavities across the core of the fibres due to preferential etching of the doped region. Scanning electron microscope, interferometric surface profiler and CCD spectrometer studies showed that the optical quality of the etched cavities was adequate to produce Fabry-Perot interference. Controlled fusion splicing of etched fibres produced intrinsic Fabry-Perot cavities. These sensors were surface-mounted on composite coupons and their response to applied strain was studied using low coherence interferometry; they showed a linear and repeatable response against the strain measured by electrical resistance strain gauges. To carry out F2-laser micro-machining of fused silica and sapphire substrates, a micro-machining station was designed and constructed. This involved the design of illumination optics for 157 nm laser beam delivery, the design and construction of the beam delivery chamber, and target alignment and monitoring systems. Ablation of fused silica and sapphire disks was carried out to determine ablation parameters suitable for micro-machining high aspect ratio microstructures of adequate optical quality to produce Fabry-Perot interference. Cavities were micro-machined through the diameter of SMF 28 and SM 800 fibres at different energy densities. CCD interrogation of the intrinsic fibre cavities ablated at an energy density of 25 × 10^4 J m^-2 produced Fabry-Perot interference fringes. The feasibility of micro-machining high aspect ratio cavities at the cleaved end-face of fused silica fibres and through the diameter of sapphire fibres was demonstrated. A technique based on in-situ laser-induced fluorescence monitoring was developed to determine the alignment of the optical fibres and the ablation depth during ablation through the fibre diameter. Ablation of cavities through the diameter of fibre Bragg gratings showed that the heat generated inside the cavity during ablation had no effect on the peak reflection or on the integrity of the core and cladding of the fibre. Finally, a pH sensor, a chemical sensor based on multiple cavities ablated in multimode fibres, and a feasible design for pressure sensor fabrication based on an ablated cavity in a single-mode fibre were demonstrated.
6

Ezbiri, A. "Passive signal processing techniques for miniature fibre fabry-perot interferometric sensors." Thesis, Cranfield University, 1996. http://dspace.lib.cranfield.ac.uk/handle/1826/11243.

Full text
Abstract:
This thesis describes new signal processing techniques applicable to miniature low-finesse Fabry-Perot interferometric sensors. The principle of operation behind the techniques presented resides in the use of the axial modes of a single multimode laser diode to produce a series of phase-shifted interferometric outputs in conjunction with the path imbalance of the sensor microcavity.
7

Machavaram, V. R. "Micro-machining Techniques for the Fabrication of Fibre Fabry-Perot Sensors." Thesis, Cranfield University, 2006. http://hdl.handle.net/1826/1210.

Full text
Abstract:
Fabry-Perot optical fibre sensors have been used extensively for measuring a variety of parameters such as strain, temperature, pressure and vibration. Conventional extrinsic fibre Fabry-Perot sensors are associated with problems such as the calibration of the gauge length of each individual sensor, their relatively large size compared to the diameter of the optical fibre, and a manual manufacturing method that leads to poor reproducibility. Therefore, new designs and fabrication techniques for producing fibre Fabry-Perot sensors are required to address these problems. This thesis investigates hydrofluoric acid etching and F2-laser micro-machining of optical fibres to produce intrinsic Fabry-Perot cavities. Chemical etching of single-mode fused silica fibres produced cavities across the core of the fibres due to preferential etching of the doped region. Scanning electron microscope, interferometric surface profiler and CCD spectrometer studies showed that the optical quality of the etched cavities was adequate to produce Fabry-Perot interference. Controlled fusion splicing of etched fibres produced intrinsic Fabry-Perot cavities. These sensors were surface-mounted on composite coupons and their response to applied strain was studied using low coherence interferometry; they showed a linear and repeatable response against the strain measured by electrical resistance strain gauges. To carry out F2-laser micro-machining of fused silica and sapphire substrates, a micro-machining station was designed and constructed. This involved the design of illumination optics for 157 nm laser beam delivery, the design and construction of the beam delivery chamber, and target alignment and monitoring systems. Ablation of fused silica and sapphire disks was carried out to determine ablation parameters suitable for micro-machining high aspect ratio microstructures of adequate optical quality to produce Fabry-Perot interference. Cavities were micro-machined through the diameter of SMF 28 and SM 800 fibres at different energy densities. CCD interrogation of the intrinsic fibre cavities ablated at an energy density of 25 × 10^4 J m^-2 produced Fabry-Perot interference fringes. The feasibility of micro-machining high aspect ratio cavities at the cleaved end-face of fused silica fibres and through the diameter of sapphire fibres was demonstrated. A technique based on in-situ laser-induced fluorescence monitoring was developed to determine the alignment of the optical fibres and the ablation depth during ablation through the fibre diameter. Ablation of cavities through the diameter of fibre Bragg gratings showed that the heat generated inside the cavity during ablation had no effect on the peak reflection or on the integrity of the core and cladding of the fibre. Finally, a pH sensor, a chemical sensor based on multiple cavities ablated in multimode fibres, and a feasible design for pressure sensor fabrication based on an ablated cavity in a single-mode fibre were demonstrated.
8

Liu, Cong. "New technique for radiolabelling tracer with 64CU for positron emission particles tracking (PEPT) experiments." Master's thesis, University of Cape Town, 2015. http://hdl.handle.net/11427/13709.

Full text
Abstract:
Positron emission particle tracking (PEPT) is a non-invasive technique for studying the flow of particulate systems within industrial equipment. The technique tracks a tracer particle labelled with a positron-emitting radionuclide as it moves within the field of view of a positron emission tomography (PET) scanner; its two key components are therefore a PET camera and PEPT tracers. Currently, the majority of PEPT tracers are made with 68Ga or 18F. However, the relatively short half-lives of these two radionuclides limit PEPT experiments to a maximum of about 3 hours. 64Cu is a potential candidate for PEPT tracer fabrication because its relatively long half-life (12.7 h) could extend the running time of PEPT experiments to two uninterrupted days. The objective of the research described in this thesis was to develop a technique for radiolabelling tracers with 64Cu and to test their efficacy in PEPT experiments. The work was conducted at the Radionuclide Production Department, iThemba LABS near Cape Town, where high-purity 64Cu was obtained by a two-stage separation method using ion exchange chromatography.
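
As a back-of-the-envelope check on the motivation (not a calculation from the thesis), the sketch below compares how much tracer activity survives a long run for 64Cu (half-life 12.7 h) and the commonly used 18F (half-life about 1.8 h); the 24-hour run length is an illustrative assumption.

```python
def remaining_fraction(half_life_h: float, elapsed_h: float) -> float:
    """Fraction of the initial activity left after elapsed_h hours of decay."""
    return 0.5 ** (elapsed_h / half_life_h)

run = 24.0  # hypothetical uninterrupted PEPT run, in hours
for nuclide, t_half in [("Cu-64", 12.7), ("F-18", 1.83)]:
    print(f"{nuclide}: {remaining_fraction(t_half, run):.2%} of activity after {run:.0f} h")
# Cu-64 retains about 27%; F-18 about 0.01% -- far too little to keep tracking
```
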
9

Li, Yunning. "Identify the gas and solid flow structures within bubbling fluidized beds by using the PEPT technique." Thesis, University of Edinburgh, 2016. http://hdl.handle.net/1842/23652.

Full text
Abstract:
Fluidized beds have been applied in many industrial processes (e.g. coal combustion, gasification and granulation) as an effective means for providing excellent gas and solids contact and mixing, as well as good heat transfer. Although research on the fluidized bed has been carried out for more than 70 years, uncertainties and difficulties still remain. These challenges exist primarily due to the complex and dynamic flow structure within fluidized beds and the lack of reliable measurement techniques. The positron emission particle tracking (PEPT) technique, developed at the University of Birmingham, enables individual particles to be tracked non-invasively in opaque three-dimensional (3-D) fluidized beds and offers favourable temporal and spatial resolutions. PEPT is considered to be a powerful tool for fluidized bed studies and was utilized in the current study to investigate the dynamic behaviour of solid and gas in fluidized beds. The experiments in this study were conducted in a 150-mm inner diameter (I.D.) column and operated in the bubbling fluidization regime at ambient conditions. The effects of various factors on the solid flow structure were examined: solid properties, superficial gas velocity, bed height-to-diameter aspect ratio (H/D) and pore size of the air distributor. The solid flow structure was classified into four patterns, namely patterns A, B, C and D, of which pattern C was newly observed in this thesis. The solid motion, bubble behaviour (i.e., bubble spatial distribution, bubble size and bubble rise velocity) and solid mixing were assessed for each flow pattern to understand their unique fluidization behaviours. This assessment was achieved by the development of three methods: a method to reconstruct bubble behaviours based on solid motion, and two methods for estimating the solid mixing profile. The results were discussed and compared with the published literature. The bubble rise velocity and bubble size calculated in this research from the PEPT-measured data were in agreement with other research, particularly that of Kunii and Levenspiel, Yasui and Johanson, and Mori and Wen. Finally, a parameter was developed to predict and control flow patterns based on particle kinetic energy and various factors. The outcomes of this study advance the understanding of the complicated dynamics of bubbling fluidized beds and may benefit several industries in the enhancement of fluidized bed design and control to achieve desirable qualities and efficiencies.
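
PEPT post-processing of the kind described here starts from a time-stamped tracer trajectory. As a generic sketch (not the author's code), a common first step is to estimate the tracer velocity by central differences before building occupancy and velocity maps; the coordinates below are invented.

```python
import numpy as np

# Hypothetical PEPT output: detection times (s) and tracer positions (mm).
t = np.array([0.00, 0.02, 0.04, 0.06, 0.08])
xyz = np.array([[0.0, 0.0, 10.0],
                [0.5, 0.1, 12.0],
                [1.1, 0.2, 14.5],
                [1.8, 0.2, 16.9],
                [2.6, 0.3, 19.0]])

# Central-difference velocity estimate at the interior samples.
v = (xyz[2:] - xyz[:-2]) / (t[2:] - t[:-2])[:, None]
print(v)                          # velocity components in mm/s
print(np.linalg.norm(v, axis=1))  # speed magnitudes
```
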
10

Castagnino, Vera Romina. "Ecological study of the ocelote (Leopardus pardalis) using the camera trap technique, in Las Piedras Region, Madre de Dios-Peru." Pontificia Universidad Católica del Perú. Centro de Investigación en Geografía Aplicada, 2017. http://repositorio.pucp.edu.pe/index/handle/123456789/119873.

Full text
Abstract:
The study focuses on the ecology and conservation of the ocelot (Leopardus pardalis) in the conservation and tourism concession owned by the Amazon Research and Conservation Center (ARCC). The study site covers 11,000 hectares in the Las Piedras region, north of Tambopata province, Madre de Dios, Peru. Camera traps were used to monitor the ocelot population over a 7-month period (August 2012 to February 2013), divided into 9 rounds in which a total of 73 cameras were installed. The camera traps identified 8 independent ocelots, of which only 3 (A1, A3 and A6) were recaptured on more than one occasion. A capture-recapture analysis was carried out. The distances travelled by the ocelots between capture and recapture sites were used to estimate the effectively sampled area using the Mean Maximum Distance Moved (MMDM) and Half MMDM. The methods yielded densities of 70 individuals/100 km2 and 180 individuals/100 km2 with full MMDM and Half MMDM, respectively. The study also analysed the camera-trap capture probability with the PRESENCE software: a closed capture-recapture analysis with a constant capture-probability model yielded a capture probability of 0.3 (SE 0.0567). Finally, the ocelot's habitat preference was studied using a combination of satellite imagery and GIS software. The ocelots were found to make frequent use of tourist transects, to prefer sites near water, and to avoid bamboo forests.
11

Guay, Manon. "Validation d'un algorithme utilisé par l'auxiliaire aux services de santé et sociaux lors de la détermination du besoin d'équipements au bain avec les personnes en perte d'autonomie vivant à domicile." Mémoire, Université de Sherbrooke, 2008. http://savoirs.usherbrooke.ca/handle/11143/3939.

Full text
Abstract:
In most modern societies, washing oneself is considered an essential activity. Yet 31% of seniors living at home have difficulty bathing. In these situations, the occupational therapist is the expert in assessing the causes of the problems encountered and, where appropriate, recommending bathing equipment. However, in a context of human-resource shortages and a growing number of service requests, occupational therapists can no longer keep up with demand. Some health institutions have therefore developed new models of clinical organisation that involve the health and social services aide (ASSS) in determining equipment needs. This institutional policy of including auxiliary staff has two advantages: (1) ASSSs can assess their own working conditions, which reduces the risk of occupational injury, and (2) their involvement supports their mental health through recognition of their skills. The objective of this study was to validate the algorithm "Préalables aux soins d'hygiène", a work tool that guides the ASSS's observations when determining the bath or shower equipment needs of people with loss of autonomy living at home. More specifically, the project aimed to (1) establish the ability of an ASSS using the algorithm to identify the clinical situations that fall within her competence, (2) measure the agreement between the recommendations issued by the occupational therapist (the criterion) and those made by the ASSS using the algorithm, and (3) compare this agreement according to the context of the request: new requests versus service reassessments. People unable to wash themselves without difficulty (n=96) were first assessed by the occupational therapist at home. Then, less than a week apart, an ASSS using the algorithm met the participants. To ensure data integrity, the occupational therapist issued her recommendations blind to those of the ASSS, and vice versa. The participants, mostly women (68%), were on average 77 years old and lived with a relative. The sensitivity of the ASSS using the algorithm in identifying situations within her competence was 96%, and the specificity 69%. Agreement between the two providers' recommendations ranged from substantial to almost perfect for the bathing location (delta=0.93 [0.85; 1.00]) and for the need for a grab bar (delta=0.77 [0.63; 0.91]), and from fair to moderate (Kp=0.63 [0.52; 0.75]) for the bath seat. There was no significant difference when agreement was compared across contexts (new requests vs reassessments). Thus, an ASSS using the algorithm correctly identifies the situations within her competence, the location where hygiene should be carried out, and the need for a grab bar. To increase agreement on the bath seat, it is recommended that less frequently used seat models be removed from the algorithm. The results of the study increase confidence in the algorithm and support including the ASSS in the process of determining bathing equipment needs. This new model of clinical organisation is promising because it improves access to quality health services and has a positive impact on the health and safety of ASSSs.
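
For readers unfamiliar with the reported figures, the sketch below shows how sensitivity and specificity are computed from a 2x2 classification table; the counts are invented so that they reproduce the study's reported 96% and 69%, and are not the study's data.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 48 true positives, 2 false negatives,
# 29 true negatives, 13 false positives.
sens, spec = sensitivity_specificity(48, 2, 29, 13)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
# sensitivity = 96%, specificity = 69%
```
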
12

Baker, Sandra E. "Developing aversion management techniques for use with European badgers Meles meles and red foxes Vulpes vulpes." Thesis, University of Oxford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.275371.

Full text
13

Riba, Pi Edgar. "Geometric Computer Vision Techniques for Scene Reconstruction." Doctoral thesis, Universitat Autònoma de Barcelona, 2021. http://hdl.handle.net/10803/671624.

Full text
Abstract:
From the early stages of Computer Vision, scene reconstruction has been one of the most studied topics, leading to a wide variety of new discoveries and applications. Object grasping and manipulation, localization and mapping, or even visual effects generation are examples of applications in which scene reconstruction has taken an important role in industries such as robotics, factory automation and audiovisual production. However, scene reconstruction is an extensive topic that can be approached in many different ways, with existing solutions that work effectively in controlled environments. Formally, the problem of scene reconstruction can be formulated as a sequence of independent processes which compose a pipeline. In this thesis, we analyse some parts of the reconstruction pipeline and contribute novel methods using Convolutional Neural Networks (CNNs), proposing innovative solutions that consider the optimisation of the methods in an end-to-end fashion. First, we review the state of the art of classical local feature detectors and descriptors and contribute two novel methods that inherently improve pre-existing solutions in the scene reconstruction pipeline. Computer science and software engineering are two fields that usually go hand in hand and evolve according to mutual needs, making the design of complex and efficient algorithms easier. For this reason, we contribute Kornia, a library specifically designed to work with classical computer vision techniques along with deep neural networks. In essence, we created a framework that eases the design of complex pipelines for computer vision algorithms so that they can be included within neural networks and used to backpropagate gradients through a common optimisation framework. Finally, in the last chapter of this thesis we develop the aforementioned concept of designing end-to-end systems with classical projective geometry. Thus, we contribute a solution to the problem of synthetic view generation by hallucinating novel views of highly deformable cloth objects using a geometry-aware end-to-end system. To summarize, in this thesis we demonstrate that a proper design combining classical geometric computer vision methods with deep learning techniques can lead to improved solutions to the problem of scene reconstruction.
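
Kornia is a real, publicly available PyTorch library. As a minimal sketch of the property the abstract emphasises (assuming a recent Kornia release with PyTorch installed), the example below applies a classical image operation that remains differentiable, so gradients flow back to the input.

```python
import torch
import kornia

# A toy image batch (B, C, H, W) for which we want gradients.
img = torch.rand(1, 1, 8, 8, requires_grad=True)

# A classical Gaussian blur, implemented as a differentiable PyTorch op.
blurred = kornia.filters.gaussian_blur2d(img, kernel_size=(3, 3), sigma=(1.5, 1.5))

# Any scalar loss on the result backpropagates through the blur.
loss = blurred.mean()
loss.backward()
print(img.grad.shape)  # torch.Size([1, 1, 8, 8]) -- gradients reached the input
```
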
14

Marris, Hélène. "Métrologie de la fraction fine de l'aérosol métallurgique : apport des techniques micro-analytiques (microspectrométrie X et spectroscopie de perte d'énergie des électrons." Phd thesis, Université du Littoral Côte d'Opale, 2012. http://tel.archives-ouvertes.fr/tel-00871711.

Full text
Abstract:
Dust emitted by the metallurgical industry affects air quality in neighbouring urban areas. These particles, emitted by high-temperature processes, can evolve rapidly within the plumes. The objective of the study is to characterise the particulate phase at a metallurgical emission site and to determine the nature and magnitude of the physico-chemical transformations these particles undergo in the first minutes after emission. Aerosol samples were collected inside the stacks and in the near environment of a metallurgical plant (production of ferromanganese alloy), whose sintering plant is the main emitter. The particle size spectrum in the environment shows an enrichment in nanoparticles (10-100 nm) after air masses pass over the industrial site. The plant's characteristic emissions (iron and manganese oxides, but also aluminosilicates) are mostly found as agglomerates of heterogeneous chemical composition and complex morphology. These agglomerates appear to evolve rapidly through adsorption of volatile organic compounds or soot. The study of Fe and Mn speciation within these particles indicates that they undergo oxidation reactions via gas/particle conversion mechanisms within the industrial process itself, leading in particular to an iron oxidation state inversely proportional to particle size. By contrast, no significant evolution of the Fe and Mn oxidation states was observed in the near environment of the emitter.
15

Bhatia, Vikram. "Signal processing techniques for optical fiber sensors using white light interferometry." Thesis, This resource online, 1993. http://scholar.lib.vt.edu/theses/available/etd-09192009-040440/.

Full text
16

Delacroix, Dimitri. "Modes d'existence sémiotiques des automates à réaction dans l'accompagnement des personnes en perte d'autonomie. : Penser la formation pour les personnes âgées souffrant de maladies neurodégénératives." Thesis, Limoges, 2020. http://www.theses.fr/2020LIMO0020.

Full text
Abstract:
This work examines how to reactivate the individuation process in elderly people suffering from Alzheimer's disease and related illnesses. These diseases impair the relationship to oneself, to others and to one's environment; they thus become a "disease of the relationship" leading to the disappearance of the relational being and of the person as a subject. To address this problem, action research was conducted. The fieldwork consisted of robotic and non-robotic workshops held over six months, with one one-hour session per week, alternating between the two types of session (same place, same schedule, same caregivers). At the theoretical level, the aim is to provide a model for understanding, supporting and evaluating a non-pharmacological practice that integrates and relies on a technical object as a mediating object. The model was built by bringing together semiotics and the operative thought of Gilbert Simondon; it makes it possible to grasp a process of participatory appropriation-individuation that includes the individuation of knowledge itself. The result takes concrete form in training workshops for elderly people suffering from dementia and in a theoretical-practical tool for managing the practice, at both the individual and collective levels, in a course of action that must constantly be maintained in a metastable state.
17

Dobler, Jeremy Todd. "Novel Alternating Frequency Doppler Lidar Instrument for Wind Measurements in the Lower Troposphere." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1358%5F1%5Fm.pdf&type=application/pdf.

Full text
18

Dyer, Mary Anne. "Threads of Time: Technique, Structure and Iconography in an Embroidered Mantle from Paracas." VCU Scholars Compass, 1996. http://scholarscompass.vcu.edu/etd/1501.

Full text
Abstract:
This thesis analyzes the structure, technique and iconography of an embroidered burial mantle from Wari Kayan Necropolis on the Paracas Peninsula, Peru, which dates between approximately 100 B.C. and A.D. 100. The mantle is currently in the collection of the American Museum of Natural History in New York City (Accession no. 41.2/632), and will be referred to subsequently as the AMNH mantle. This study will consist of a structural analysis of the burial mantle, addressing the design of the textile and the iconography. In addition to examining the origin and iconography of the double-headed bird motif which appears throughout the mantle, this study analyzes technical and design considerations involved in the creation of the mantle, including style of embroidery, structure, and color repeats. Ethnographic studies of Andean cultures will also be considered in the analysis of the symbolic and ritual aspects of textiles, and how they relate to the symbolic function of the mantle in its burial context.
19

Lapparent, Matthieu de. "De la valeur du temps à la valeur du risque de perte en temps dans les transports : le cas des déplacements domicile-travail." Paris 1, 2004. http://www.theses.fr/2004PA010017.

Full text
Abstract:
This thesis studies the individual mechanisms by which travel time and its variability are valued in the context of commuting trips. The concepts of travel-time reliability values and reliability premiums are presented and discussed. We first set out the factors that make up transport demand, the set of feasible spatial, scheduling and technological choices, and their influence on individual well-being in terms of consumption, leisure and mobility. We then specify the conditions of our analysis for a particular trip, based on a separation of preferences by activity purpose. Depending on the nature of the environment, an individual's choice behaviour is modelled differently. Under perfect information, the microeconomic theory of the consumer suffices. In the presence of risk, we use rank-dependent expected utility theory, which distinguishes the attitude towards risk (optimism, pessimism) from the perception of the level of well-being. We apply these questions to two empirical settings: the choice of transport mode in the Ile-de-France region and the choice of an air itinerary on the Paris-London corridor. The field of probabilistic discrete choice models is broad; we detail those that are useful to us: Box-Cox transformations, random parameters and heteroscedastic disturbances are used in dichotomous Logit and Probit models. Our results explain the impact of technological and fare offerings on mode choice and provide a range of travel-time values and reliability premiums useful for transport supply planning.
20

Leoni, Elia. "Initial Access Techniques for 5G systems." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17708/.

Full text
Abstract:
Data traffic is expected to grow in the coming years, and the fifth cellular generation (5G) will have to rely on new technologies, such as millimetre waves and massive MIMO, to meet this demand. The spectrum below 6 GHz is over-used, and millimetre-wave frequencies promise high data rates thanks to the large available bandwidth, especially around 60 GHz. Despite this favourable aspect, they suffer from high path loss and difficulty penetrating obstacles. To overcome these problems, beamforming techniques, enabled by the joint use of millimetre-wave frequencies and massive MIMO, make it possible to steer the antenna pattern in the desired spatial directions and to compensate for the path loss through increased directivity. In a 5G cellular system, directional communication requires the user's and base station's beams to be aligned for communication to take place, which can introduce delays in the initial access phase. Consequently, the study of ad-hoc algorithms designed to speed up this phase is an important challenge for the optimisation of future 5G systems. With the aim of speeding up initial access in 5G networks, this thesis first reviews the approaches proposed in the state of the art, highlighting the aspects that can be improved. We then describe the simulator we implemented in Matlab, and finally introduce a new algorithm. In particular, the proposed algorithm is based on remembering the users seen per sector and on using different beam configurations; the combination of these two aspects is novel with respect to the state of the art. The numerical results obtained demonstrate the effectiveness of the proposed technique in the 5G scenarios considered.
21

Rossi, Daniele <1984&gt. "Techniques for Lagrangian modelling of dispersion in geophysical flows." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6356/1/rossi-scienze_della_terra-phd_thesis.pdf.

Full text
Abstract:
Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use as input, for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the dispersion description in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain an analytical consistency between the diffusion coefficient and its derivative is mandatory if the model has to satisfy the WMC. A dynamical time step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of the relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
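
As a hedged illustration of the Markov order 0 model mentioned above (textbook material, not the thesis's code), the sketch below implements the standard one-dimensional random-displacement step that satisfies the well-mixed condition for an inhomogeneous diffusivity K(z); the diffusivity profile and all numbers are invented.

```python
import numpy as np

def rdm_step(z, K, dKdz, dt, rng, z_min=0.0, z_max=1000.0):
    """One Markov-0 (random displacement) step: dz = K'(z) dt + sqrt(2 K(z) dt) * xi.
    The drift term K'(z) dt is what keeps an initially well-mixed cloud well mixed."""
    z_new = z + dKdz(z) * dt + np.sqrt(2.0 * K(z) * dt) * rng.standard_normal(z.size)
    # Reflective boundaries preserve the well-mixed condition at the walls.
    z_new = np.where(z_new < z_min, 2 * z_min - z_new, z_new)
    z_new = np.where(z_new > z_max, 2 * z_max - z_new, z_new)
    return z_new

# Invented parabolic eddy-diffusivity profile (m^2/s) on a 1000 m layer.
K = lambda z: 1.0 + 0.004 * z * (1000.0 - z) / 1000.0
dKdz = lambda z: 0.004 * (1000.0 - 2.0 * z) / 1000.0

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 1000.0, 20000)  # start from a well-mixed cloud
for _ in range(5000):
    z = rdm_step(z, K, dKdz, 1.0, rng)
print(np.histogram(z, bins=10, range=(0, 1000))[0])  # counts stay roughly uniform
```
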
22

Rossi, Daniele <1984&gt. "Techniques for Lagrangian modelling of dispersion in geophysical flows." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6356/.

Full text
Abstract:
Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focuses on LPDMs that use as input, for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by the Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the dispersion description in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that the use of an interpolation algorithm able to maintain an analytical consistency between the diffusion coefficient and its derivative is mandatory if the model has to satisfy the WMC. A dynamical time step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration step selection are discussed. Absolute and relative dispersion experiments are made with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of the relative dispersion and meandering to absolute dispersion, as well as the Lagrangian correlation.
23

Ráez, de Ramírez Matilde. "The present situation about the teaching of Rorschach and other proyective tests in Peru." Pontificia Universidad Católica del Perú, 2013. http://repositorio.pucp.edu.pe/index/handle/123456789/101740.

Full text
Abstract:
This paper presents a brief history of the origin and development of projective techniques in Peru. A questionnaire was administered to university professors and members of the Peruvian Rorschach and Projective Methods Society in order to collect data on the state of test training. Results indicate a broad openness towards scientific thought that stresses research and the use of standardized tests. It is also pointed out that the conceptual framework is based on current changes in psychology.
24

Lund, Mark Andrew. "Aspects of the ecology of a degraded Perth wetland (Lake Monger, Western Australia) and implications for Bio manipulation and other restoration techniques." PhD thesis, Murdoch University, 1992. https://researchrepository.murdoch.edu.au/id/eprint/51730/.

Full text
Abstract:
Lake Monger (32°04'S, 115°20'E) was sampled intensively (18 occasions) between October 1988 and October 1989. Ordination and classification of the water chemistry, plankton and macroinvertebrates revealed three seasonal groups: spring/summer, summer/autumn and winter. In terms of the water chemistry these groups corresponded to periods of hypertrophy, eutrophy and mesotrophy respectively. The lake was found to be shallow (1-1.5 m deep) and polymictic. Internal release from the sediments was believed to be responsible for the high levels of P (> 700 µg L-1) recorded during summer. In summer, the limiting nutrient for algal growth appeared to be N. Two species alternated in dominance of the zooplankton: the cladoceran Daphnia carinata King in winter and the copepod Mesocyclops sp. in summer. These species accounted for >80% of the abundance and biomass of the zooplankton when they were dominant. Changes in the edibility of summer algal populations, rather than the effects of zooplanktivorous fish (Gambusia holbrooki (Girard)), invertebrate predators (e.g. hemipterans) or temperature, were believed responsible for the summer declines in D. carinata. 70 macroinvertebrate taxa were recorded, substantially more than found in other studies at the lake. Mean species richness was highest in areas of emergent reeds. Compilation of available data revealed a decline in water quality in the lake since European settlement, resulting from nutrient enrichment, introductions of exotic biota, removal of native vegetation, physical modification (landfill and dredging) and changes in hydrology (artificial maintenance of water levels). The study year (1988/89) appeared to be similar to other years in the late 1980s. Changes in fertilizer usage around the lake at the start of the 1990s appeared to have been responsible for subsequent significant improvements in the water quality. Twelve in-lake enclosures (1.5 m³) were used to assess the influence of predation by G. holbrooki, N limitation and gilvin (brown colour) on zooplankton and water chemistry. Increased levels of primary productivity were recorded in untreated control enclosures. Only low levels of gilvin were produced, and these resulted in a slight increase in chlorophyll a rather than the anticipated decrease. Gambusia holbrooki was not found to be responsible for any changes in zooplankton structure. Variability between replicate enclosures and P limitation in the lake meant that N limitation could not be established. The presence of large numbers of D. carinata was found to significantly improve water quality through grazing and removal of seston. There appeared to be a nutrient threshold of 150 µg L-1 of P in the water column, above which algal composition or size was inedible for D. carinata. Biomanipulation involving fish manipulations appeared unlikely to succeed in improving water quality, as the link between fish predation and summer declines in D. carinata appeared tenuous. Reductions in fertilizer use on lawns around the lake appeared to have had a significant effect on improving the water quality of the lake.
25

Taquet, Jonathan. "Techniques avancées pour la compression d'images médicales." Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00629429.

Full text
Abstract:
The compression of medical and biological images, in particular for imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI) and virtual slides in anatomical pathology (VS), is an important economic issue, notably for archiving and transmission. This thesis surveys the needs and the existing compression solutions and seeks to propose, in this context, new digital compression algorithms that are efficient in comparison with the standardized reference algorithms. For CT and MRI, medico-legal constraints require very high-quality archiving, so this work focuses on lossless and near-lossless compression. It proposes (i) merging the hierarchical interpolation-based predictive model with the adaptive DPCM predictive model to provide a resolution-scalable representation that is efficient for lossless and especially near-lossless compression, and (ii) relying on an optimisation, specific to the image content, of a wavelet-packet decomposition for lossless compression. The results of these two contributions show that there is still room for improvement in the compression of the most regular and least noisy images. For virtual slides, the physical slide can be kept, so the problem concerns remote consultation more than archiving. Given their content, an approach based on learning the structural specificities of these images seems attractive. The third contribution therefore targets an offline optimisation of K orthonormal transforms that are optimal for decorrelating the training data (K-KLT). This method is applied in particular to learn post-transforms on a wavelet decomposition. Its application in a quality-scalable compression model shows that the approach can yield interesting quality gains in terms of PSNR.
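
To make the DPCM idea concrete (a generic sketch, not the thesis's codec), the example below computes prediction residuals for a 1-D signal with a previous-sample predictor; a lossless coder would then entropy-code these residuals, which are typically much smaller than the raw samples.

```python
import numpy as np

def dpcm_encode(samples: np.ndarray) -> np.ndarray:
    """Residuals of a previous-sample predictor; the first sample is kept verbatim."""
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]
    residuals[1:] = samples[1:] - samples[:-1]
    return residuals

def dpcm_decode(residuals: np.ndarray) -> np.ndarray:
    """Exact inverse: a cumulative sum restores the original samples losslessly."""
    return np.cumsum(residuals)

x = np.array([100, 102, 103, 103, 101, 98], dtype=np.int64)
r = dpcm_encode(x)
print(r)                                  # [100 2 1 0 -2 -3] -- small residuals
assert np.array_equal(dpcm_decode(r), x)  # the round trip is lossless
```
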
APA, Harvard, Vancouver, ISO, and other styles
26

TARIS, ALESSANDRA. "Multivariate techniques applied on spectroscopic data for process analysis and monitoring." Doctoral thesis, Università degli Studi di Cagliari, 2017. http://hdl.handle.net/11584/249570.

Full text
Abstract:
Process analysis and monitoring have become essential in industry to ensure improvement of process performance and to maintain a specific product quality. To this aim, spectroscopy represents an innovative tool that overcomes the issues encountered with conventional analytical techniques (e.g. gas chromatography), since it is fast and non-destructive and can give information about the chemical state of the process in real time. Nevertheless, owing to the huge amount of information present in the collected data, interpretation and information extraction are not straightforward tasks. For this purpose, multivariate techniques significantly aid the treatment of the data and allow information about the analysed system to be inferred. In this thesis, four systems are investigated by means of spectroscopy to show the variety of problems that may arise when dealing with complex and highly informative data coming from different spectroscopic techniques. To this aim, different multivariate techniques are explored and their potential and limitations are shown: (i) strategies based on Principal Component Analysis and Partial Least Squares Regression are suggested for an improved and more robust quality monitoring of liquid commercial detergents; (ii) Moving Window Principal Component Analysis is proposed for the monitoring of an evolving process, the crystallization of an Active Pharmaceutical Ingredient, in order to detect nucleation; (iii) Time Window Statistical Total Correlation Spectroscopy combined with Multivariate Curve Resolution is proposed to investigate the setting reaction of a cementing material; (iv) Multivariate Curve Resolution is employed to infer information from hyperspectral data about the dissolution of a surfactant paste in space and time. The multivariate techniques applied to the spectroscopic data prove capable of achieving the following results: a) in the case of commercial detergents, observations that do not agree with the reference conditions are correctly classified; moreover, the proposed approach can assess when the estimation of compound concentrations cannot be considered accurate, a scenario that may occur when the deviation of one compound is not taken into account during model calibration; b) for the crystallization of the pharmaceutical ingredient, nucleation is accurately detected; c) the spectra and concentrations of the compounds involved in the setting reaction of the cementing material are estimated, and the time evolution of the process can be tracked; d) the dissolution rate of the surfactants present in the paste is estimated. As a result, multivariate methods applied to spectroscopic measurements prove essential to treat the data and aid process understanding and monitoring.
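As a hint of the PCA-based monitoring strategy in point (i), the following Python sketch (a minimal illustration using scikit-learn and synthetic data in place of real spectra; the thesis's preprocessing and control limits are not reproduced) fits a PCA model on reference observations and flags new ones whose squared reconstruction error exceeds a simple empirical limit.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
reference = rng.normal(size=(100, 50))          # stand-in for reference spectra
pca = PCA(n_components=5).fit(reference)

def q_statistic(model, X):
    """Squared reconstruction error of each observation (Q / SPE)."""
    X_hat = model.inverse_transform(model.transform(X))
    return np.sum((X - X_hat) ** 2, axis=1)

limit = np.percentile(q_statistic(pca, reference), 99)  # empirical 99% limit
new_obs = rng.normal(size=(5, 50)) + 1.5                # shifted "faulty" batch
print(q_statistic(pca, new_obs) > limit)                # True marks anomalies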
APA, Harvard, Vancouver, ISO, and other styles
27

Cattel, Julien. "Utilisation des bactéries Wolbachia pour lutter contre une espèce invasive et ravageur de cultures, Drosophila suzukii." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1325/document.

Full text
Abstract:
Since its recent invasion of the European and American continents, the spotted wing Drosophila, D. suzukii, has become a major burden on the fruit industry. Armed with a highly sclerotized ovipositor, females can lay eggs in a wide variety of ripening, healthy fruits, in contrast to other Drosophila species. Economic losses due to D. suzukii reach millions of dollars annually, and methods to control natural populations in the field rely mainly on chemical pesticides. Here we test whether Wolbachia bacteria can represent a potential ally to control this pest species. These symbionts are naturally present in many insects and often induce a form of conditional sterility called Cytoplasmic Incompatibility (CI): the offspring of infected males die, unless the eggs are rescued by the same infection inherited from the mother, which protects the embryo against a yet unidentified toxin. As long recognized, a strategy called the Incompatible Insect Technique (IIT) makes use of the CI phenotype to control insect populations through the mass release of infected males. D. suzukii is naturally infected by a single Wolbachia strain, named wSuz, which has an intermediate prevalence in field populations and does not induce a high level of CI. To implement IIT in D. suzukii, we used back-and-forth Wolbachia transfers between D. suzukii and D. simulans to identify Wolbachia strains that can fully sterilize D. suzukii females despite the presence of wSuz. We identified two potential candidates; both induce a very high level of CI in this pest, which is not attenuated by the presence of wSuz in females. The transinfected males showed a competitiveness similar to that of naturally infected and uninfected males, and were able to induce a high level of CI throughout their lives. Finally, we demonstrated that, in large population cages, IIT can be very efficient in limiting the growth of D. suzukii populations. Together, these results confirm that IIT is a promising approach to control D. suzukii populations and merits moving beyond the laboratory. Combined with an efficient sexing technique, IIT can be a powerful, species-specific and non-polluting tool.
APA, Harvard, Vancouver, ISO, and other styles
28

Lally, Evan M. "A Narrow-Linewidth Laser at 1550 nm Using the Pound-Drever-Hall Stabilization Technique." Thesis, Virginia Tech, 2006. http://hdl.handle.net/10919/34739.

Full text
Abstract:
Linewidth is a measure of the frequency stability of any kind of oscillator, and it is a defining characteristic of coherent lasers. Narrow-linewidth laser technology, particularly in the field of fiber-based infrared lasers, has progressed to the point where highly stable sources are commercially available with linewidths on the order of 1-100 kHz. In order to achieve a higher level of stability, the laser must be augmented by an external frequency stabilization system. This thesis presents the design and operation of a frequency locking system for infrared fiber lasers. Using the Pound-Drever-Hall technique, the system significantly reduces the linewidth of an input laser with an unstabilized linewidth of 2 kHz. It uses a high-finesse Fabry-Perot cavity, mechanically and thermally isolated, as a frequency reference to measure the time-varying frequency of the input laser. An electronic feedback loop works to correct the frequency error and maintain constant optical power. Testing has proven the Pound-Drever-Hall system to be highly stable and capable of operating continuously for several seconds at a time.
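The feedback loop described above is analog electronics, but its logic can be sketched numerically. In the Python toy simulation below, a proportional-integral controller drives a simulated laser's frequency error toward zero from an idealized PDH-style error signal; all gains, time steps and noise levels are assumptions chosen for illustration, not values from the thesis.

import numpy as np

# Toy PI lock: the laser's detuning from the cavity resonance drifts
# randomly, and the error signal is taken proportional to the detuning,
# as in the linear region of a PDH discriminator.  All figures assumed.
dt = 1e-4            # control-loop time step (s)
kp, ki = 0.8, 50.0   # proportional and integral gains
rng = np.random.default_rng(1)

detuning = 2000.0    # initial frequency offset (Hz)
integral = 0.0
for step in range(500):
    error_signal = detuning                  # idealized PDH error signal
    integral += error_signal * dt
    correction = kp * error_signal + ki * integral
    drift = rng.normal(scale=1.0)            # random frequency drift (Hz)
    detuning += drift - correction           # actuator removes the correction
print(f"residual detuning after locking: {detuning:.2f} Hz")

The integral term is what removes the slow, persistent drift that a purely proportional loop would leave behind.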
APA, Harvard, Vancouver, ISO, and other styles
29

Pistoia, Jenny <1983&gt. "Development of SuperEnsemble Techniques for the Mediterranean Ocean Forecasting System." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4303/4/Pistoia_Jenny_tesi.pdf.

Full text
Abstract:
This research activity studied how uncertainties arise and are interrelated in the multi-model approach, since handling them appears to be one of the biggest challenges of ocean and weather forecasting. Moreover, we tried to reduce model error through the superensemble approach. To this end, we created different datasets and, by means of suitable algorithms, obtained the superensemble estimate. We studied the sensitivity of this algorithm as a function of its characteristic parameters. Clearly, a reasonable estimate of the error cannot be obtained while neglecting the grid size of the ocean model, given the large number of sub-grid phenomena embedded in the spatial discretization that can only be roughly parametrized rather than explicitly evaluated. For this reason we also developed a high-resolution model, in order to calculate for the first time the impact of grid resolution on model error.
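A common way to form a superensemble estimate, consistent with the description above though not necessarily identical to the algorithm used in the thesis, is to regress observations on the individual model forecasts over a training period and combine new forecasts with the fitted weights. A minimal Python sketch with synthetic data:

import numpy as np

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0, 6, 120))                    # synthetic observations
models = np.stack([truth * b + rng.normal(scale=0.2, size=truth.size)
                   for b in (0.8, 1.1, 0.6)], axis=1)     # three biased "models"

train = slice(0, 80)                                       # training period
A = np.column_stack([models[train], np.ones(80)])          # forecasts + bias term
weights, *_ = np.linalg.lstsq(A, truth[train], rcond=None)

test = slice(80, 120)                                      # forecast period
superensemble = np.column_stack([models[test], np.ones(40)]) @ weights
print(np.mean((superensemble - truth[test]) ** 2),              # superensemble MSE
      np.mean((models[test].mean(axis=1) - truth[test]) ** 2))  # plain-mean MSE

Because the regression learns each model's bias and skill, the weighted combination typically beats the unweighted ensemble mean.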
APA, Harvard, Vancouver, ISO, and other styles
30

Pistoia, Jenny <1983&gt. "Development of SuperEnsemble Techniques for the Mediterranean Ocean Forecasting System." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amsdottorato.unibo.it/4303/.

Full text
Abstract:
This research activity studied how uncertainties arise and are interrelated in the multi-model approach, since handling them appears to be one of the biggest challenges of ocean and weather forecasting. Moreover, we tried to reduce model error through the superensemble approach. To this end, we created different datasets and, by means of suitable algorithms, obtained the superensemble estimate. We studied the sensitivity of this algorithm as a function of its characteristic parameters. Clearly, a reasonable estimate of the error cannot be obtained while neglecting the grid size of the ocean model, given the large number of sub-grid phenomena embedded in the spatial discretization that can only be roughly parametrized rather than explicitly evaluated. For this reason we also developed a high-resolution model, in order to calculate for the first time the impact of grid resolution on model error.
APA, Harvard, Vancouver, ISO, and other styles
31

Nicolini, Andrea. "Multipath tracking techniques for millimeter wave communications." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/17690/.

Full text
Abstract:
The aim of this work is to study the problem of efficient, continuous tracking of the angle of arrival of the dominant multipath components in a millimetre-wave radio channel. In particular, a reference scenario is considered in which the direct path from a base station and two paths reflected by obstacles must be tracked under different operating conditions and movements of the mobile user. The mobile user is assumed to be able to take noisy measurements of the angle of arrival of the three paths, one in line of sight and the other two not in line of sight, and possibly of the distance between itself and the three "sources" (for example, derived from received-power measurements). Using a state-space model, two different approaches were investigated: the first applies Kalman filtering directly to the angle-of-arrival measurements, while the second adopts a two-step method in which the state is represented by the positions of the base station and of the two obstacles, from which the angle-of-arrival estimates are evaluated. In both cases, the impact on the estimate of fusing the angle-of-arrival measurements with data from the inertial sensors integrated in the device, i.e. the angular velocity and acceleration of the mobile, was investigated. After a mathematical modelling phase, the two approaches were implemented and tested in MATLAB, developing a simulator in which the user can choose the values of various parameters according to the desired scenario. The analyses carried out showed the robustness of the proposed strategies under different operating conditions.
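The simulator described above is in MATLAB, but the first approach, Kalman filtering applied directly to noisy angle-of-arrival measurements, can be sketched in a few lines of Python. The constant-velocity angle model and all noise figures below are illustrative assumptions rather than parameters from the work.

import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])      # state: [angle, angular rate]
H = np.array([[1.0, 0.0]])           # we observe the angle only
Q = 1e-4 * np.eye(2)                 # process noise covariance (assumed)
R = np.array([[0.05]])               # AoA measurement noise (assumed)

x = np.array([0.0, 0.0])             # initial state estimate
P = np.eye(2)
rng = np.random.default_rng(3)

for k in range(100):
    true_angle = 0.02 * k                       # slowly rotating path
    z = true_angle + rng.normal(scale=0.22)     # noisy AoA measurement
    x, P = F @ x, F @ P @ F.T + Q               # predict
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + (K @ y)
    P = (np.eye(2) - K @ H) @ P                 # update
print(f"final angle estimate: {x[0]:.3f} rad (true {0.02 * 99:.3f})")

Tracking three paths simply means running one such filter per path (or stacking them into a larger state vector), and inertial measurements would enter as extra rows of the observation model.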
APA, Harvard, Vancouver, ISO, and other styles
32

Skidmore, Amanda R. "IMPACT OF SELECTED INTEGRATED PEST MANAGEMENT TECHNIQUES ON ARTHROPODS IN CUCURBIT PRODUCTION SYSTEMS." UKnowledge, 2018. https://uknowledge.uky.edu/entomology_etds/44.

Full text
Abstract:
Cucurbits (e.g. squash, melons, pumpkins, gourds) are high-value crops of global importance. Insect pests in these systems are often controlled by chemical insecticides, which are not always effective and can be damaging to the environment. Many integrated pest management (IPM) techniques have been developed for the control of pests in these systems, with the goal of improving system stability and reducing chemical inputs. The overarching goal of my research was to investigate the impact of selected IPM techniques on arthropod populations and yield in organic and conventional cucurbit systems. This dissertation can be divided into three major projects conducted between 2013 and 2017. First, an investigation was conducted to understand the impact of two commonly used IPM practices (tillage regime and the use of row covers) on pest insect populations, beneficial arthropod populations, and plant yield. Four independent studies, in both organic and conventionally managed squash and melon production, were conducted and analyzed to provide a broad understanding of these IPM strategies. In all systems, plant yields and pests were greatest in the plasticulture systems, but reduced tillage had a positive impact on the natural-enemy arthropods within these crops. Row cover use resulted in larger plants and increased yields, but had an inconsistent influence on arthropods in the systems studied. From these initial studies, an additional investigation was developed to better understand the impacts of cultivation on the specialist pollinator Peponapis pruinosa [Hymenoptera: Apidae]. Nesting site selection was examined in two independent experiments. By conducting choice studies that allowed P. pruinosa to select preferred nesting sites, we determined that P. pruinosa prefers to build nests in loose soils and shows reduced nest-making in compact soils. This poses interesting management challenges, since less-compact soils lie within high-tillage zones. This research supports the need to develop cultivation management plans that consider pollinator habitat and reproduction needs. A multi-year, multi-farm study was developed to compare parasitism of cucumber beetles (Acalymma vittatum and Diabrotica undecimpunctata) in organic and conventional growing systems. Parasitoids were reared from beetles collected from working organic and conventional cucurbit farms in central Kentucky. Our results show some seasonal variation in parasitism, but no significant difference between organic and conventional production. We conclude that IPM techniques can contribute effectively to the control of cucurbit pests in agroecosystems and to the improvement of crop yields. These studies show that natural enemies and pollinators react differently to IPM practices, which should be considered when developing IPM plans for cucurbit production. By researching these management techniques we are able to develop production systems with increased stability.
APA, Harvard, Vancouver, ISO, and other styles
33

Zondo, Patience Thembelihle. "Assessment of inoculation techniques to evaluate apple resistance to Phytophthora cactorum." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52141.

Full text
Abstract:
Thesis (MSc)--University of Stellenbosch, 2001. Phytophthora cactorum (Lebert & Cohn) Schrot. is the primary cause of crown, collar and root rot diseases of apple (Malus domestica Borkh.) trees worldwide. This pathogen is most destructive in commercial apple orchards under waterlogged soil conditions and has recently been identified as causing serious disease in some South African apple orchards. Crown, collar and root diseases are difficult to control because of their unpredictability and catastrophic nature. The use of resistant cultivars and rootstocks is economical and environmentally considerate, so the need was identified to develop screening techniques that enable the selection of desirable disease-resistance traits as part of an apple-breeding programme in South Africa. The work undertaken in this study was aimed at optimizing different techniques to test resistance. Using two direct inoculation techniques (excised stem and intact stem), the aggressiveness of 10 isolates of P. cactorum on apple rootstocks was determined. The susceptibilities of five apple rootstocks were also compared. The results showed an isolate-by-rootstock interaction, meaning that isolate aggressiveness was influenced by the rootstock tested. The selectivity of the isolates suggests that there may be several strains of the pathogen; population studies of the pathogen might contribute valuable information that could lead to better interpretation of results. Rootstock susceptibility was monitored in vitro throughout the season by inoculating at monthly intervals for 26 months. Rootstock susceptibility was low during winter compared with high susceptibility during summer. These results revealed new information regarding changes in the relative resistance of the different rootstocks over the growing season; for example, the susceptibility pattern of rootstock MM106 occurred one to two months later than that of the other rootstocks. This finding has important implications for the way in which resistance test results are interpreted, and emphasizes the importance of not relying on point sampling. Furthermore, useful information was acquired regarding the epidemiology of the disease with regard to "windows of susceptibility". The phenomenon of a phase shift in the susceptibility of different rootstocks needs to be tested on a broader scale to assess whether it has any practical application in resistance testing. Although different inoculation techniques are applied in breeding programmes, up to now there has been no consensus on which technique works best for seedling selections. Since large numbers of individuals must be tested to improve the chances of detecting resistant genotypes, mass inoculation of young seedlings is a rapid way of identifying resistant individuals. Two different screening methods were tested during this study. Using the sand-bran technique, seedlings were transplanted onto inoculated soil and root mass was used as a measure of resistance. In a second method, zoospore inoculum was applied at different concentrations to seedlings growing in a sand:bark mixture, with or without water drenching. In both trials the aggressiveness of the isolates differed significantly, and only the higher inoculum concentrations were effective in causing disease. The age of the seedlings emerged as an important factor: seedlings under five months old should not be used. Drenching inoculated seedlings enhanced disease development, but the production of sufficiently high numbers of zoospores was a laborious task. It is therefore recommended that the sand-bran inoculum technique be tested with the drenching treatment for mass selection. In conclusion, this study confirms the importance of both the choice of isolate and the choice of inoculation interval in determining the susceptibility of rootstocks to infection. In spite of the fact that stem inoculation bioassays bear limited resemblance to natural disease situations, they are useful for obtaining an indication of whether genotypes have a degree of resistance and merit further testing; for this reason, refinement of the stem inoculation bioassay is worth pursuing. With regard to seedling trials, both the sand-bran and the zoospore technique appear promising, but refinement of these techniques is necessary in order to provide a more practical way of testing large volumes of seedlings.
APA, Harvard, Vancouver, ISO, and other styles
34

D'AMICO, LILIANA. "Light Management Strategies and Nanostructuring Techniques to Improve Efficiency in Solar Cells." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2015. http://hdl.handle.net/2108/202323.

Full text
Abstract:
The demand for cheap and clean energy sources, not based on fossil fuels and with a low impact on the environment, has become an urgent matter, and photovoltaic (PV) power generation is among the most promising technologies. A large variety of photovoltaic devices, commonly called "solar cells", is currently on the market, but all of them share one research goal: the need to increase their photovoltaic conversion efficiency in a cost-effective way. In addition to improving the electronic properties of the materials used to build solar devices, a very promising approach to efficiency enhancement is based on so-called Light Management (LM) techniques. LM techniques rely on the introduction of particular photonic nanostructures which, depending on their position in the cell architecture and on their morphological characteristics, can produce different phenomena, including diffractive effects, modulation of the refractive index, coupling to waveguide modes through surface structuring, and modification of the photonic band structure of the device. In all cases, the goal of the LM concept is to enhance the probability of photons interacting with the cell's active layer, generating a larger quantity of charge carriers for the photovoltaic process. Continuous improvements in nanotechnology manufacturing have brought great attention to LM techniques applied to photovoltaics, and the present work contributes to this field by focusing on a particular type of PV device, the Dye-Sensitized Solar Cell (DSC). A Bragg grating with defined morphological parameters (theoretically predicted by FEM calculation) was realized on a high-performance photoresist by means of Laser Interference Lithography (LIL) and then replicated on a mesoporous TiO2 layer. The replication takes place by means of a low-cost soft-lithography (SL) process that exploits a PDMS mold to transfer the pattern from one layer to the other. Good-quality replication of the nanostructures over a large area was demonstrated by microscopic analysis. The nanostructured TiO2 layer was then soaked in a dye and the DSC assembled. The PV properties of the nanostructured cell and those of a traditional bare one, both realized following identical experimental procedures and differing only in the presence of the Bragg grating, were compared. The results confirmed an enhanced efficiency, in terms of IPCE, of 31% for the nanostructured cell. The most important achievement of this study is therefore the successful, easy and low-cost nanostructuring of TiO2. The second part of this work provides preliminary guidelines for the realization and ordering of different types of nanostructures. In particular, a LIL method for producing 2D Bragg grating structures is proposed for photovoltaic antireflective coatings. A transfer method that exploits a PDMS mold to align gold nanoparticles (NPs) on the PEDOT:PSS layer of an organic solar cell was applied. The deposition and ordering of such Au NPs along specific patterns permits the photonic effect, whose effectiveness is demonstrated in the first part of the work, to be combined with the plasmonic one. The results demonstrate the great potential of low-cost soft-lithography procedures in the LM field.
APA, Harvard, Vancouver, ISO, and other styles
35

PISANO, BARBARA. "Machine Learning Techniques for Detection of Nocturnal Epileptic Seizures from Electroencephalographic Signals." Doctoral thesis, Università degli Studi di Cagliari, 2018. http://hdl.handle.net/11584/255953.

Full text
Abstract:
Epilepsy is one of the major neurological disorders, affecting more than 50 million people around the world; it is characterized by unpredictable seizures due to abnormal electrical activity in the brain. This thesis investigates nocturnal epilepsy, in particular Nocturnal Frontal Lobe Epilepsy (NFLE), a form of epilepsy in which seizures occur predominantly during sleep, with symptoms including nocturnal awakenings, dystonic and tonic postures and clonic limb convulsions. The electroencephalographic (EEG) signals, which record the electrical activity of the brain, are used by neurologists to diagnose epilepsy. However, in almost 50% of NFLE cases the EEG shows no abnormality during seizures, making the neurologists' work of identifying the epileptic events very difficult and requiring the support of video recordings to verify them, a time-consuming procedure. In the literature, few scientific contributions address the classification of nocturnal epileptic seizures. In this thesis, automatic systems, both customized for a single patient and generalized, were developed to find the best system for detecting nocturnal epileptic seizures from EEG signals. Combinations of feature extraction and selection methods, associated with classification models based on the Self-Organizing Map (SOM), were investigated following the classical machine learning approach. The ability of the SOM to represent data from a high-dimensional space in a low-dimensional space, preserving the topological properties of the original space, was exploited to identify nocturnal epileptic seizures and to track the temporal projection of the EEG signals on the map. The proposed methods allow the definition of maps capable of presenting meaningful information on the actual brain state, revealing the potential of mapping and clustering data coming from seizure and non-seizure states. The results show that the patient-specific system achieves better performance than a patient-independent system. Moreover, compared with a binary classifier widely used in epileptic seizure detection problems, the Support Vector Machine (SVM), the SOM model achieves good and, for some patients, higher performance. In particular, the patient-customized system using the SOM model reaches average sensitivity and specificity values of 82.85% and 89.92% respectively, whereas the SVM classifier achieved an average sensitivity and specificity of 82.11% and 82.85% respectively, suggesting the SOM model as a good alternative for nocturnal epileptic seizure detection. The discriminating power of the SOM and the possibility of following the temporal sequence of the EEG recordings on the map can provide information on an imminent epileptic seizure, highlighting the possibility of promoting therapies aimed at rapid and targeted disarming of seizures.
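As a hint of how a SOM separates two classes of feature vectors, here is a minimal Python sketch using the open-source minisom package (the thesis does not name its implementation, so this choice is an assumption), with synthetic features standing in for the EEG-derived ones.

import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(4)
seizure = rng.normal(loc=1.0, scale=0.3, size=(50, 8))      # synthetic features
non_seizure = rng.normal(loc=-1.0, scale=0.3, size=(50, 8))
data = np.vstack([seizure, non_seizure])

som = MiniSom(6, 6, input_len=8, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, num_iteration=1000)

# Each sample maps to its best-matching unit (BMU); the two classes
# should occupy distinct regions of the 6x6 map.
print("seizure BMUs:    ", {som.winner(x) for x in seizure})
print("non-seizure BMUs:", {som.winner(x) for x in non_seizure})

Labelling each map unit by the class of the training samples it wins then turns the trained map into a simple classifier, and a recording's trajectory across units gives the temporal projection described above.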
APA, Harvard, Vancouver, ISO, and other styles
36

Stabile, Tony Alfredo <1977&gt. "High frequency seismic and underwater acoustic wave propagation and imaging techniques." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1145/1/Tesi_Stabile_Tony_Alfredo.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Stabile, Tony Alfredo <1977&gt. "High frequency seismic and underwater acoustic wave propagation and imaging techniques." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2008. http://amsdottorato.unibo.it/1145/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Lakade, Sameer Shamrao. "Evaluation of Novel Sorptive Extraction Techniques." Doctoral thesis, Universitat Rovira i Virgili, 2017. http://hdl.handle.net/10803/458368.

Full text
Abstract:
The objective of this Doctoral Thesis is the evaluation of novel sorptive extraction techniques, such as fabric phase sorptive extraction (FPSE), dynamic FPSE, capsule phase microextraction (CPME) and dispersive solid-phase extraction (d-SPE) using hypercrosslinked magnetic particles (MPs), for the extraction of emerging organic contaminants covering a wide range of polarity from environmental water samples. In the first section, FPSE with two different extraction modes, static and dynamic, was evaluated to extract a group of pharmaceuticals and personal care products. In the second section, the recently introduced CPME technique was evaluated for a group of personal care products; and, in the last section, MPs with hypercrosslinked properties were employed for the extraction of sweeteners using the d-SPE technique. In all of the methods developed, liquid chromatography followed by tandem mass spectrometry (LC-MS/MS) was used for the determination of the contaminants. The evaluation showed that these are viable alternative extraction techniques and encourages testing them for different types of analytes in different kinds of samples.
APA, Harvard, Vancouver, ISO, and other styles
39

MacKay, W. Iain. "The development of pre-Hispanic art forms in Peru : seen as an outgrowth of textile techniques and their influence upon art forms and depiction of symbols." Thesis, University of St Andrews, 1988. http://hdl.handle.net/10023/7359.

Full text
Abstract:
Pre-Hispanic geometric art forms in Peru and the Andean Area are taken to be an outgrowth of textile techniques. Textiles and fibre arts predate ceramics by several millennia in the Central Andean Area. The artists who created these textiles developed an art style which was to go largely unaltered until the arrival of the Spaniards. The foundations of the Andean art form date to the Pre-ceramic. The restrictive, rather inflexible nature of the warp and the weft of the cloth (the geometric grid) was to influence the methods of representation that were to follow. Geometric designs were well suited to fit into the rigid framework, and a series of conventions was developed for the representation of symbols. With the development of ceramics, there was leeway for a new style to come into being. However, this was not to be the case. The potter borrowed extensively from the weaving tradition and its associated styles (only in Moche times did the potter make a break with the highly geometric style developed centuries before, and even then this break with tradition was a short-lived one). The pre-Columbian artist often portrayed birds, cats, fish and reptiles. Many of these designs were used frequently and repeatedly throughout the centuries, but none, I would maintain, was represented as frequently as the double-headed serpent, and with so few variants. Andean art is a truly distinctive art form, very different from European art, and through its geometricity it conveyed and still conveys a totally different approach to nature and the world surrounding Andean man.
APA, Harvard, Vancouver, ISO, and other styles
40

Bhatnagar, Mohit. "Multiplexing of interferometric fiber optic sensors for smart structure applications using spread spectrum techniques." Thesis, This resource online, 1994. http://scholar.lib.vt.edu/theses/available/etd-12052009-020246/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Kishimoto, Kenny, Gabriel Medina, Fernando Sotelo, and Carlos Raymundo. "Application of lean manufacturing techniques to increase on-time deliveries: Case study of a metalworking company with a make-to-order environment in Peru." Springer Verlag, 2020. http://hdl.handle.net/10757/656093.

Full text
Abstract:
The full text of this work is not available in the UPC Academic Repository owing to restrictions imposed by the publisher. The purpose of this paper is to propose a production management model using lean manufacturing techniques to improve on-time deliveries at a metalworking company that manufactures industrial fans in a make-to-order (MTO) environment. The research proposal refers to the implementation of a production management model at a metalworking company in Peru and analyses its effect on the company's on-time delivery rate. In one month of operation after the implementation of the proposed model, the on-time delivery rate of the company increased from 35% to 80%. Likewise, since the problems present in the metalworking company studied may be shared by many metalworking companies in the country, the present investigation can serve as an example for solving the problems of other companies.
APA, Harvard, Vancouver, ISO, and other styles
42

Terán, Jimmy Efrén Liendo. "A construção da cidade: Diretrizes para um projeto no árido Cono Norte. Arequipa - Peru." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/16/16138/tde-05072017-111946/.

Full text
Abstract:
The present study investigates the conditions that support habitability in arid cities and regions of Peru. Through the evaluation of a concrete reality, it revisits reasoning about the occupation of arid soils and the use of the available resources and natural conditions over the last 1,500 years of local history and, on the basis of this study, highlights how these approaches still inform and stimulate design alternatives for inhabiting desert soils in southern Peru.
APA, Harvard, Vancouver, ISO, and other styles
43

Hamer, Ute. "Priming effects of dissolved organic substrates on the mineralisation of lignin, peat, soil organic matter and black carbon determined with 14C and 13C isotope techniques." [S.l.] : [s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=972001964.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Somov, Andrey. "Power Management and Power Consumption Optimization Techniques in Wireless Sensor Networks." Doctoral thesis, Università degli studi di Trento, 2009. https://hdl.handle.net/11572/367818.

Full text
Abstract:
A Wireless Sensor Network (WSN) is a distributed collection of resource-constrained tiny nodes capable of operating with minimal user attendance. Due to their flexibility and low cost, WSNs have recently become widely applied in traffic regulation, building fire alarms, wildfire monitoring, agriculture, health monitoring, building energy management, and ecological monitoring. However, the deployment of WSNs in difficult-to-access areas makes it hard to replace the batteries, the main power supply of a sensor node, which means that the power limitations of the sensor nodes appreciably constrain their functionality and potential applications. The use of harvesting components such as solar cells alone, together with energy storage elements such as supercapacitors and rechargeable batteries, is insufficient for long-term sensor node operation. This thesis shows that long-term operation can be achieved by adopting a combination of hardware and software techniques along with an energy-efficient WSN design. To demonstrate hardware power management, an energy scavenging module was designed, implemented and tested. This module is able to handle both alternating current (AC) and direct current (DC) ambient sources. The harvested energy is stored in two energy buffers of different kinds and is delivered to the sensor node in accordance with an efficient energy supply switching algorithm. The software part of the thesis presents an analytical criterion to establish the value of the synchronization period that minimizes the average power dissipated by a WSN node. Since the radio chip is usually the most power-hungry component on the board, this approach can help decrease power consumption and prolong the lifetime of the entire WSN. The following part of the thesis demonstrates a methodology for the power consumption evaluation of WSNs. The methodology supports the Platform Based Design (PBD) paradigm, providing power analysis for various sensor platforms by defining separate abstraction layers for application, services, hardware and power supply modules. Finally, we present three applications in which the designed hardware module is used and various power management strategies are applied. In the first application we apply the WSN paradigm to the entertainment area, in particular to the domain of Paintball. The second concerns a wireless sensor platform for the monitoring of dangerous gases and early fire detection; the platform's operation is based on the detection of pyrolysis products, which makes it possible to prevent a fire before ignition. The third application is connected with medical research and describes the powering of wireless brain-machine interfaces.
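The analytical criterion itself is derived in the thesis; the trade-off it optimizes can nevertheless be reproduced with a toy model. In the Python sketch below, each synchronization event costs a fixed radio energy, while clock drift adds a guard-time energy that grows with the period; under these assumed figures the average power P(T) = E_sync/T + k*T + P_sleep has a minimum at T* = sqrt(E_sync/k). All numbers are invented for illustration.

import numpy as np

# Assumed toy model: a fixed radio energy E_sync per synchronization
# event, plus a drift-induced guard-time term that contributes k*T of
# average power, on top of a sleep-mode floor P_sleep.
E_sync = 3e-3      # J per synchronization event (assumed)
k = 2e-6           # W per second of period, from guard time (assumed)
P_sleep = 6e-6     # W, sleep-mode floor (assumed)

T = np.linspace(1, 200, 2000)          # candidate periods (s)
P_avg = E_sync / T + k * T + P_sleep   # average dissipated power (W)

T_opt = T[np.argmin(P_avg)]
print(f"numeric optimum:  {T_opt:.1f} s")
print(f"analytic optimum: {np.sqrt(E_sync / k):.1f} s")  # sqrt(E_sync/k)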
APA, Harvard, Vancouver, ISO, and other styles
45

Ricciu, Marta. "Advanced techniques for Environmental Risk Assessment within the Oil & Gas sector." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
Oil and gas installations in sensitive areas with harsh environmental conditions may require improved risk monitoring, assessment and management in order to prevent and limit the damage caused by accidental hydrocarbon spills at sea. This issue is a priority when the installation under examination is located in an area defined as sensitive. The present work deals with a real reference case study, an offshore installation located in the Barents Sea, which represents a relevant example of an innovative facility operating offshore in the sensitive Arctic region. Hydrocarbon process leaks are a major contributor to offshore risk. The scenarios that may develop from the process area of this installation were selected through the application of a leak frequency model and the modelling of the safety barriers. The Process Leak for Offshore installations Frequency Assessment Model estimates the topside process leak frequencies for use in Quantitative Risk Analysis (QRA) of fire and explosion at installations located on the Norwegian Continental Shelf. It is based on the assumption that the leak frequency is proportional to the number of items of each type of equipment. The performances of the safety barriers were used as QRA parameters. The environmental risk is evaluated through an exposure-based analysis, based on the duration, rate and amount of the release as well as on oil drift simulation; this step was carried out with SINTEF's OSCAR (Oil Spill Contingency and Response) software. The estimated risk level is then compared with the stringent tolerability criteria to which installations located in sensitive areas are subject. Further information about the impact on the ecosystem is given by the EIF factor associated with the different release categories.
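The equipment-count assumption mentioned above can be made concrete with a small Python sketch; all per-item leak frequencies and equipment counts below are invented placeholders, not values from the actual model.

# Total leak frequency as a sum over equipment types:
# f_total = sum(count_i * f_i), the counting assumption described above.
# All numbers are illustrative placeholders (leaks per item per year).
per_item_frequency = {
    "valve":      1.0e-4,
    "flange":     5.0e-5,
    "pump":       8.0e-4,
    "instrument": 2.0e-4,
}
equipment_count = {"valve": 120, "flange": 300, "pump": 4, "instrument": 60}

f_total = sum(equipment_count[e] * per_item_frequency[e]
              for e in equipment_count)
print(f"estimated process-area leak frequency: {f_total:.3e} per year")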
APA, Harvard, Vancouver, ISO, and other styles
46

Somov, Andrey. "Power Management and Power Consumption Optimization Techniques in Wireless Sensor Networks." Doctoral thesis, University of Trento, 2009. http://eprints-phd.biblio.unitn.it/154/1/Andrey_Somov_PhD-Thesis_UniTN_17Dec2009.pdf.

Full text
Abstract:
A Wireless Sensor Network (WSN) is a distributed collection of resource-constrained tiny nodes capable of operating with minimal user attendance. Due to their flexibility and low cost, WSNs have recently become widely applied in traffic regulation, building fire alarms, wildfire monitoring, agriculture, health monitoring, building energy management, and ecological monitoring. However, the deployment of WSNs in difficult-to-access areas makes it hard to replace the batteries, the main power supply of a sensor node, which means that the power limitations of the sensor nodes appreciably constrain their functionality and potential applications. The use of harvesting components such as solar cells alone, together with energy storage elements such as supercapacitors and rechargeable batteries, is insufficient for long-term sensor node operation. This thesis shows that long-term operation can be achieved by adopting a combination of hardware and software techniques along with an energy-efficient WSN design. To demonstrate hardware power management, an energy scavenging module was designed, implemented and tested. This module is able to handle both alternating current (AC) and direct current (DC) ambient sources. The harvested energy is stored in two energy buffers of different kinds and is delivered to the sensor node in accordance with an efficient energy supply switching algorithm. The software part of the thesis presents an analytical criterion to establish the value of the synchronization period that minimizes the average power dissipated by a WSN node. Since the radio chip is usually the most power-hungry component on the board, this approach can help decrease power consumption and prolong the lifetime of the entire WSN. The following part of the thesis demonstrates a methodology for the power consumption evaluation of WSNs. The methodology supports the Platform Based Design (PBD) paradigm, providing power analysis for various sensor platforms by defining separate abstraction layers for application, services, hardware and power supply modules. Finally, we present three applications in which the designed hardware module is used and various power management strategies are applied. In the first application we apply the WSN paradigm to the entertainment area, in particular to the domain of Paintball. The second concerns a wireless sensor platform for the monitoring of dangerous gases and early fire detection; the platform's operation is based on the detection of pyrolysis products, which makes it possible to prevent a fire before ignition. The third application is connected with medical research and describes the powering of wireless brain-machine interfaces.
APA, Harvard, Vancouver, ISO, and other styles
47

Rodriguez, Cancio Marcelino. "Contributions on approximate computing techniques and how to measure them." Thesis, Rennes 1, 2017. http://www.theses.fr/2017REN1S071/document.

Full text
Abstract:
Approximate Computing is based on the idea that significant improvements in CPU, energy and memory usage can be achieved when small levels of inaccuracy can be tolerated. This is an attractive concept, since the lack of resources is a constant problem in almost all computer science domains. From large supercomputers processing today's social media big data, to small, energy-constrained embedded systems, there is always the need to optimize the consumption of some scarce resource. Approximate Computing proposes an alternative to this scarcity, introducing accuracy as yet another resource that can in turn be traded for performance, energy consumption or storage space.
The first part of this thesis proposes two contributions to the field of Approximate Computing. Approximate Loop Unrolling: a compiler optimization that exploits the approximate nature of signal and time-series data to decrease the execution times and energy consumption of the loops processing it. Our experiments showed that the optimization considerably increases the performance and energy efficiency of the optimized loops (150%-200%) while preserving accuracy at acceptable levels. Primer: the first lossy compression algorithm for assembler instructions, which profits from programs' forgiving zones to obtain a compression ratio that outperforms the current state of the art by up to 10%. The main goal of Approximate Computing is to improve the usage of resources such as performance or energy, so a fair deal of effort is dedicated to observing the actual benefit obtained by exploiting a given technique under study. One of the resources that has historically been challenging to measure accurately is execution time. Hence, the second part of this thesis proposes the following tool. AutoJMH: a tool to automatically create performance microbenchmarks in Java. Microbenchmarks provide the finest-grained performance assessment, yet, because they require a great deal of expertise, they remain the craft of a few performance engineers. The tool enables, thanks to automation, the adoption of microbenchmarking by non-experts. Our results show that the generated microbenchmarks match the quality of payloads handwritten by performance experts and outperform those written by professional Java developers without microbenchmarking experience.
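The loop transformation described in this abstract can be illustrated with a minimal sketch: the costly body of a signal-processing loop runs only on every other sample, and the skipped samples are reconstructed by interpolation. This is a generic illustration of the idea, not the thesis's compiler pass; expensive_kernel and approximate_loop are hypothetical names.

```python
import numpy as np

def expensive_kernel(x):
    # Stand-in for a costly per-sample computation.
    return np.sin(x) * np.exp(-0.1 * x)

def approximate_loop(signal):
    out = np.empty_like(signal, dtype=float)
    # Exact computation on even-indexed samples only.
    out[::2] = expensive_kernel(signal[::2])
    # Odd-indexed samples are interpolated from their neighbours; the
    # smoothness of time-series data keeps the introduced error small.
    out[1:-1:2] = 0.5 * (out[0:-2:2] + out[2::2])
    if len(signal) % 2 == 0:
        out[-1] = out[-2]  # last odd sample has no right neighbour
    return out

signal = np.linspace(0.0, 10.0, 1000)
approx = approximate_loop(signal)
exact = expensive_kernel(signal)
print("max absolute error:", np.abs(approx - exact).max())
```

Halving the number of kernel invocations is where performance and energy gains of this kind come from; the tolerated error level determines how aggressively the loop can be strided.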
APA, Harvard, Vancouver, ISO, and other styles
48

Abdelal, Qasem M. "Methodology for Using a Non-Linear Parameter Estimation Technique for Reactive Multi-Component Solute Transport Modeling in Ground-Water Systems." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29758.

Full text
Abstract:
For a numerical or analytical model to be useful, it must be ensured during calibration that the model outcome matches the observations or field measurements. This process has typically been done by manual perturbation of the model input parameters. This research investigates a methodology for using a nonlinear parameter estimation technique (the Marquardt-Levenberg technique) with the multi-component reactive solute transport model SEAM3D. The reactive multi-component solutes considered in this study are chlorinated ethenes. Previous studies have shown that this class of compounds can be degraded by four different biodegradation mechanisms, and that the degradation path is a function of the prevailing oxidation-reduction conditions. Tests were performed at three levels. The first level utilized synthetic, model-generated data; the idea was to develop a methodology and perform preliminary testing where "observations" could be generated as needed. The second level involved testing on a single-redox-zone model; the methodology was refined and tested using data from a site contaminated with chlorinated ethenes. The third level involved testing on a multiple-redox-zone model; the methodology was tested, and statistical validation of the recommended methodology was performed. The results showed a statistical advantage in choosing a subgroup of the available parameters to optimize rather than optimizing the whole available group; it is therefore recommended to perform a parameter sensitivity study prior to the optimization process to identify the suitable parameters. The methodology suggests optimizing the oxidation-reduction species parameters first and then calibrating the chlorinated ethenes model. The results also demonstrated the advantage of sequential optimization of the model parameters: the parameters of the parent compound are optimized and updated in the daughter compound model, whose parameters are then optimized in turn, and so on. The test results suggested considering the concentrations of the daughter compounds when optimizing the parameters of the parent compounds. As for the observation weights, the tests suggest starting with weights of one during the optimization process and changing them only if needed. Overall, the proposed methodology proved very efficient: it yielded sets of model parameters capable of generating concentration profiles that closely resemble the observed concentration profiles in the two chlorinated ethenes site models considered.
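As a rough illustration of the calibration loop described above, the sketch below fits two hypothetical transport parameters of a parent compound to observed concentrations with SciPy's implementation of the Marquardt-Levenberg algorithm, starting from unit observation weights as the methodology recommends. run_forward_model is a stand-in for a SEAM3D simulation, not an actual interface to it.

```python
import numpy as np
from scipy.optimize import least_squares

def run_forward_model(decay_rate, source_conc, times):
    # Hypothetical stand-in for a transport-model run: first-order decay
    # of a parent compound (e.g. PCE) over the observation period.
    return source_conc * np.exp(-decay_rate * times)

times = np.linspace(0.0, 10.0, 20)
observed = run_forward_model(0.3, 1.2, times) + np.random.normal(0.0, 0.01, times.size)

def residuals(params):
    decay_rate, source_conc = params
    # Unit observation weights to start with, adjusted later if needed.
    return run_forward_model(decay_rate, source_conc, times) - observed

fit = least_squares(residuals, x0=[0.1, 1.0], method="lm")
print("estimated decay rate and source concentration:", fit.x)
```

In the sequential scheme, parent-compound estimates from a fit like this would be fixed in the daughter-compound model before its own parameters are optimized.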
APA, Harvard, Vancouver, ISO, and other styles
49

Babel, Marie. "Compression d'images avec et sans perte par la méthode LAR (Locally Adaptive Resolution)." PhD thesis, INSA de Rennes, 2005. http://tel.archives-ouvertes.fr/tel-00131758.

Full text
Abstract:
The diversity of current image compression standards aims both to provide efficient coding schemes and to offer service features adapted to each type of use. The work in this thesis falls within this compression context; its main objective is to develop a unified method for lossy and lossless coding of still images and sequences, integrating advanced functionalities such as scalability, hierarchical region representation, and error resilience.

The basic LAR (Locally Adaptive Resolution) method was designed for lossy compression at low bit rates. By exploiting the intrinsic properties of LAR, the definition of a self-extracting region representation provides an efficient coding solution in terms of both bit rate and reconstructed image quality. Locally variable bit-rate coding is facilitated by introducing the notion of a region of interest, or VOP (Video Object Plane).

A lossless compression scheme was obtained jointly with the integration of scalability, through pyramidal methods. Combined with a prediction stage, three different coders meeting these requirements were developed: LAR-APP, Interleaved S+P, and RWHT+P. LAR-APP (Pyramidal Predictive Approach) relies on an enriched prediction context obtained by an original traversal of the levels of the constructed pyramid. The entropy of the resulting estimation errors (the estimation being performed in the spatial domain) is thereby reduced. By defining a solution operating in the transform domain, we were able to further improve the entropy performance of the scalable lossless coder: the Interleaved S+P is built by interleaving two pyramids of transformed coefficients. As for the RWHT+P method, it relies on a new form of the two-dimensional Walsh-Hadamard transform. Performance in terms of raw entropy proves well superior to the state of the art, with quite remarkable results obtained in particular on medical images.

Moreover, in a telemedicine context, an efficient joint source-channel coding scheme, intended for the secure transmission of compressed medical images over low-bandwidth networks, was defined by combining the LAR pyramidal methods with the Mojette transform. This technique offers differentiated protection that takes into account the hierarchical nature of the streams produced by the LAR multiresolution methods, for end-to-end quality of service.

Another research topic addressed in this dissertation concerns the automatic implementation of LAR coders on heterogeneous multi-component parallel architectures. By describing the algorithms in the SynDEx tool, we were able in particular to prototype the Interleaved S+P on multi-DSP and multi-PC platforms.

Finally, the extension of LAR to video is essentially prospective work here. Three different techniques are proposed, all relying on a common element: the exploitation of the region representation discussed above.
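The pyramidal prediction principle underlying these coders can be sketched in a few lines: a coarse level is built by block averaging, the fine level is predicted from it, and only the coarse level plus the prediction residuals (whose entropy is lower than that of the original pixels) remain to be coded. This is a generic one-level illustration under simplifying assumptions, not the LAR-APP, Interleaved S+P, or RWHT+P algorithms themselves.

```python
import numpy as np

def coarse_level(img):
    # One pyramid decimation step: 2x2 block means (even dimensions assumed).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def predict_fine(coarse):
    # Predict each fine pixel by replicating its coarse parent.
    return np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

# A smooth synthetic image (gradient plus mild noise) stands in for the
# spatial correlation that real images exhibit.
y, x = np.mgrid[0:64, 0:64].astype(float)
img = 2.0 * x + 1.5 * y + np.random.normal(0.0, 2.0, (64, 64))

residual = img - predict_fine(coarse_level(img))
print("image std: %.1f, residual std: %.1f" % (img.std(), residual.std()))
```

A lossless scalable coder transmits the coarse level and the residuals; because the residuals concentrate near zero, their entropy (and hence the bit rate) drops, which is the effect that richer prediction contexts such as LAR-APP's push further.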
APA, Harvard, Vancouver, ISO, and other styles
50

Galeotti, Matteo. "Electrochemical impedance spectroscopy (EIS) techniques for the evaluation of the state of health (SOH) of batteries." Doctoral thesis, Università degli Studi di Roma "Tor Vergata", 2014. http://hdl.handle.net/2108/203205.

Full text
Abstract:
Electrochemical energy storage has become important in recent years for improving the energy efficiency of stationary and hybrid energy production systems. Monitoring the state of the battery is therefore important, in order to know its energy content online, avoid dangerous operating conditions, and extend the useful life of the batteries. The energy content of the batteries can be evaluated through knowledge of the State of Charge (SOC) and the State of Health (SOH). Several methods already exist in the literature to evaluate the SOC precisely, but the SOH is a difficult parameter to evaluate: batteries, even of the same technology, exhibit different ageing phenomena, and there is a lack of methods and results for evaluating the SOH of batteries online, while they are operating. The target of this work is the identification of the SOH through innovative techniques based on Electrochemical Impedance Spectroscopy (EIS). Compared to other methods, EIS is a powerful diagnostic tool for studying the physical and chemical properties of any electrochemical system, opening the possibility of studying physical parameters for the monitoring of batteries. A combination of theoretical, experimental and simulation tools has therefore been identified to determine the SOH. The tested electrochemical cells are nickel-metal hydride (NiMH), with a nominal capacity of 1.3 Ah and nominal voltage of 1.2 V, and lithium polymer (LiPO), with 1.05 Ah and 3.7 V respectively. To reach the goal, we defined an experimental test procedure to accelerate the ageing of the cells, in order to study the ageing phenomena and to acquire experimental data. The impedance spectra (IS) from the EIS were acquired at different SOC and SOH. Through a fitting procedure performed on the acquired IS, we extracted the parameters of an equivalent circuit model. The information extracted from the IS and the parameters extracted from the model were used to build diagnostic diagrams, which were then interpreted through the formalism of the Theory of Evidence (ToE) to determine the SOH. By combining the points obtained from the IS at different SOC and SOH, the ToE can improve the estimation of the SOH iteratively, a great advantage compared to the classic Theory of Probability. Having verified that the IS provide information about the SOH of the batteries, a prototype was built to acquire the IS online. In the future the prototype can be integrated in a battery management system (BMS) to improve the estimation of the SOC and to evaluate the SOH accurately.
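The equivalent-circuit fitting step mentioned above can be sketched as follows, assuming a simple series-resistance plus parallel RC (Randles-type) model rather than the circuit actually used in the thesis; the Theory of Evidence combination is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def circuit_impedance(params, freq):
    # Series resistance R0 plus a parallel R1-C1 branch.
    r0, r1, c1 = params
    omega = 2.0 * np.pi * freq
    return r0 + r1 / (1.0 + 1j * omega * r1 * c1)

freq = np.logspace(-1, 4, 40)  # 0.1 Hz .. 10 kHz EIS sweep
measured = circuit_impedance([0.050, 0.020, 1.5], freq)  # synthetic spectrum

def residuals(params):
    diff = circuit_impedance(params, freq) - measured
    # Fit the real and imaginary parts of the spectrum jointly.
    return np.concatenate([diff.real, diff.imag])

fit = least_squares(residuals, x0=[0.01, 0.01, 1.0])
print("fitted R0 [ohm], R1 [ohm], C1 [F]:", fit.x)
```

Repeating such fits on spectra acquired at different SOC and ageing states yields the parameter trajectories that feed the diagnostic diagrams, which the Theory of Evidence then combines into an SOH estimate.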
APA, Harvard, Vancouver, ISO, and other styles