To see the other types of publications on this topic, follow the link: Particle tracking simulation.

Dissertations / Theses on the topic 'Particle tracking simulation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations / theses for your research on the topic 'Particle tracking simulation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Brighton, Marc. "Tracing particle movement for simulation of light history and algal growth in airlift photobioreactors using Positron Emission Particle Tracking (PEPT)." Doctoral thesis, University of Cape Town, 2017. http://hdl.handle.net/11427/27112.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Sun, Yuanyuan. "Water Quality Simulation with Particle Tracking Method." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-129265.

Full text
Abstract:
In the numerical simulation of fluid flow and solute transport in porous media, the finite element method (FEM) has long been utilized and has proven to be efficient. In this work, an alternative approach called the random walk particle tracking (RWPT) method is proposed. In this method, a finite number of particles represents the distribution of a solute mass. Each particle carries a certain fraction of the total mass and moves in the porous medium according to the velocity field. The proposed RWPT model is built on the scientific software platform OpenGeoSys (OGS), an open-source initiative for the numerical simulation of thermo-hydro-mechanical-chemical (THMC) processes in porous media. The flow equation is solved using the finite element method in OGS, and the obtained hydraulic heads are numerically differentiated to obtain the velocity field. The particle tracking method does not solve the transport equation directly but deals with it in a physically stochastic manner using this velocity field. A parallel computing concept is included in the model implementation to promote computational efficiency. Several benchmarks are developed for the particle tracking method in OGS to simulate solute transport in porous media and pore space, and the simulation results are compared to analytical solutions and other numerical methods to test the presented method. The particle tracking method accommodates Darcy flow, the main consideration in groundwater flow; other flow processes such as Forchheimer flow or Richards flow can be combined with it as well. Two applications indicate the capability of the method to handle real-world problems. This method can be applied as a tool to elicit and discern the detailed structure of evolving contaminant plumes.
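As a quick illustration of the scheme this abstract describes, a random-walk particle-tracking step advects each particle with the local velocity and adds a Gaussian displacement whose variance is 2*D*dt. The sketch below is a minimal 1-D illustration with a placeholder uniform velocity and diffusion coefficient, not the OGS implementation:

```python
import numpy as np

def rwpt_step(x, velocity, D, dt, rng):
    """One random-walk particle-tracking (RWPT) step in 1-D:
    advection by the local velocity plus a Gaussian diffusive
    displacement with variance 2*D*dt."""
    return x + velocity(x) * dt + rng.normal(0.0, np.sqrt(2.0 * D * dt), size=x.shape)

# Advection-diffusion in a uniform flow: the particle cloud drifts
# with the mean velocity and spreads with variance 2*D*t.
rng = np.random.default_rng(0)
x = np.zeros(100_000)                  # all mass released at x = 0
u, D, dt, steps = 1.0, 0.01, 0.01, 100
for _ in range(steps):
    x = rwpt_step(x, lambda s: u, D, dt, rng)
t = steps * dt                         # mean(x) ~ u*t, var(x) ~ 2*D*t
```

With many particles, the ensemble mean and variance recover the advective drift and the dispersive spread of the equivalent advection-diffusion equation, which is the sense in which RWPT "solves" transport without discretizing it.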
APA, Harvard, Vancouver, ISO, and other styles
3

Borovies, Drew A. "Particle filter based tracking in a detection sparse discrete event simulation environment." Thesis, Monterey, Calif. : Naval Postgraduate School, 2007. http://bosun.nps.edu/uhtbin/hyperion.exe/07Mar%5FBorovies.pdf.

Full text
Abstract:
Thesis (M.S. in Modeling, Virtual Environment, and Simulation (MOVES))--Naval Postgraduate School, March 2007. Thesis Advisor: Christian Darken. Includes bibliographical references (p. 115). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
4

Hanafy, Shalaby Hemdan. "ON THE POTENTIAL OF LARGE EDDY SIMULATION TO SIMULATE CYCLONE SEPARATORS." Doctoral thesis, Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700133.

Full text
Abstract:
This study was concerned with the most common reverse-flow type of cyclone, where the flow enters the cyclone through a tangential inlet and leaves via an axial outlet pipe at the top of the cyclone. Numerical computations of two different cyclones were based on the so-called Stairmand cyclone; the difference in geometry between these two cyclones was basically characterized by geometrical swirl numbers Sg of 3.5 and 4. Turbulent secondary flows inside a straight square channel were first studied numerically using Large Eddy Simulation (LES) in order to verify the implementation. Prandtl's secondary motion calculated by LES shows satisfying agreement with both Direct Numerical Simulation (DNS) and experimental results. Numerical calculations were carried out at various axial positions and at the apex cone of a gas cyclone separator. Two different Navier-Stokes solvers (a commercial one and a research code), based on a pressure-correction algorithm of the SIMPLE method, have been applied to predict the flow behavior. The flow was assumed to be unsteady, incompressible and isothermal. A k-epsilon turbulence model was applied first, using the commercial code, to investigate the gas flow. Due to the nature of cyclone flows, which exhibit highly curved streamlines and anisotropic turbulence, advanced turbulence models such as the Reynolds Stress Model (RSM) and LES have been used as well. The RSM simulation was performed using the commercial package CFX4.4, while for the LES calculations the research code MISTRAL/PartFlow-3D, developed in our multiphase research group, was applied utilizing the Smagorinsky model. It was found that the k-epsilon model cannot properly predict the flow phenomena inside the cyclone due to the strong curvature of the streamlines. The RSM results are comparable with the LES results in the area of the apex cone plane.
However, the LES shows qualitative agreement with the experimental data but requires higher computer capacity and longer running times than the RSM. These calculations of the continuous-phase flow were the basis for modeling the behavior of the solid particles in the cyclone separator. Particle trajectories, pressure drop and cyclone separation efficiency have been studied in some detail. This thesis is organized into five chapters. After an introduction and overview, chapter 2 deals with continuous-phase turbulence modeling, including the governing equations, with emphasis on LES modelling. The disperse-phase motion is treated in chapter 3. In chapter 4, the validation of the LES implementation with channel flow is presented; prediction profiles of the gas flow are presented and discussed, as are disperse-phase results such as particle trajectories, pressure drop and cyclone separation efficiency. Chapter 5 summarizes and concludes the thesis.
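For reference, the Smagorinsky model mentioned in this abstract closes the subgrid stresses with an eddy viscosity nu_t = (Cs*Delta)^2 |S|. Below is a minimal 2-D sketch of that formula on a uniform grid (generic, not the MISTRAL/PartFlow-3D code; the constant 0.17 is a common textbook choice, not the thesis value):

```python
import numpy as np

def smagorinsky_nut(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity on a uniform 2-D grid:
    nu_t = (cs * dx)**2 * |S|, with |S| = sqrt(2 * S_ij * S_ij)."""
    dudy, dudx = np.gradient(u, dx)   # axis 0 is "y", axis 1 is "x"
    dvdy, dvdx = np.gradient(v, dx)
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * s_mag

# For a pure shear u = gamma * y, |S| = gamma, so nu_t is uniform.
n, dx, gamma = 32, 0.1, 5.0
y = np.arange(n) * dx
u = np.tile(gamma * y[:, None], (1, n))
v = np.zeros((n, n))
nut = smagorinsky_nut(u, v, dx)
```

The filter width Delta is taken as the grid spacing dx here, the simplest common choice.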
APA, Harvard, Vancouver, ISO, and other styles
5

Sharma, Gaurav. "Direct numerical simulation of particle-laden turbulence in a straight square duct." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/155.

Full text
Abstract:
Particle-laden turbulent flow through a straight square duct at Reτ = 300 is studied using direct numerical simulation (DNS) and Lagrangian particle tracking. A parallelized 3-D particle tracking direct numerical simulation code has been developed to perform the large-scale turbulent particle transport computations reported in this thesis. The DNS code is validated after demonstrating good agreement with the published DNS results for the same flow and Reynolds number. Lagrangian particle transport computations are carried out using a large ensemble of passive tracers and finite-inertia particles and the assumption of one-way fluid-particle coupling. Using four different types of initial particle distributions, Lagrangian particle dispersion, concentration and deposition are studied in the turbulent straight square duct. Particles are released in a uniform distribution on a cross-sectional plane at the duct inlet, released as particle pairs in the core region of the duct, distributed randomly in the domain or distributed uniformly in planes at certain heights above the walls. One- and two-particle dispersion statistics are computed and discussed for the low Reynolds number inhomogeneous turbulence present in a straight square duct. New detailed statistics on particle number concentration and deposition are also obtained and discussed.
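One-way-coupled Lagrangian tracking of the kind this abstract describes integrates, for each particle, a drag-driven equation of motion through the resolved flow field. A minimal sketch with linear Stokes drag and forward Euler time stepping (illustrative only, not the thesis code; the uniform fluid velocity is a placeholder for the DNS field):

```python
import math

def track_particle(x0, v0, u_fluid, tau_p, dt, steps):
    """One-way-coupled Lagrangian tracking with linear Stokes drag:
    dv/dt = (u(x) - v) / tau_p,  dx/dt = v  (forward Euler sketch)."""
    x, v = x0, v0
    for _ in range(steps):
        v = v + dt * (u_fluid(x) - v) / tau_p
        x = x + dt * v
    return x, v

# A particle released at rest in a uniform flow relaxes to the fluid
# velocity on the response time scale: v(t) = u * (1 - exp(-t/tau_p)).
u, tau_p, dt, steps = 1.0, 0.1, 1e-4, 5000    # integrate to t = 5*tau_p
x, v = track_particle(0.0, 0.0, lambda s: u, tau_p, dt, steps)
v_exact = u * (1.0 - math.exp(-steps * dt / tau_p))
```

The particle response time tau_p is what separates the passive tracers from the finite-inertia particles studied in the thesis: tracers have tau_p -> 0 and follow the fluid exactly.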
APA, Harvard, Vancouver, ISO, and other styles
6

Nerisson, Philippe. "Modélisation du transfert des aérosols dans un local ventilé." Thesis, Toulouse, INPT, 2009. http://www.theses.fr/2009INPT001H/document.

Full text
Abstract:
Protecting operators and monitoring working atmospheres when radioactive aerosols become airborne in a ventilated room of a nuclear facility requires knowledge of the spatial and temporal evolution of the particle concentration at every point of the room. To estimate this concentration precisely, specific models of aerosol transport and deposition in a ventilated room were developed, as part of a thesis co-funded by IRSN and EDF in collaboration with IMFT. A simplified Eulerian drift model, called the "diffusion-inertia model", is used for particle transport; it contains a single transport equation for the aerosol concentration. The specific study of deposition on walls led to a boundary-layer model that determines precisely the particle deposition flux at the walls, for any deposition regime and surface orientation. The transport and deposition models finally retained were implemented in Code_Saturne, a computational fluid dynamics code. These models were validated against literature data in simple geometries, and then against tracing experiment campaigns carried out in ventilated laboratory rooms of about 30 m³ and 1500 m³.
APA, Harvard, Vancouver, ISO, and other styles
7

Pachler, Klaus, Thomas Frank, and Klaus Bernert. "Simulation of Unsteady Gas-Particle Flows including Two-way and Four-way Coupling on a MIMD Computer Architectur." Universitätsbibliothek Chemnitz, 2002. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200200352.

Full text
Abstract:
The transport or separation of solid particles or droplets suspended in a fluid flow is a common task in mechanical and process engineering. To improve machinery and physical processes (e.g. coal combustion, reduction of NO_x and soot), optimization of these complex phenomena by simulation applying the fundamental conservation equations is required. Fluid-particle flows are characterized by the density ratio of the two phases gamma=rho_P/rho_F, by the Stokes number St=tau_P/tau_F and by the loading in terms of void and mass fraction. These numbers (Stokes number, gamma) define the flow regime and the relevant forces acting on the particles. Depending on the geometrical configuration, the particle-wall interaction might have a heavy impact on the mean flow structure. Particle-particle collisions also become more and more important as the local void fraction of the particulate phase increases. With increasing particle loading, the interaction with the fluid phase cannot be neglected, and two-way or even four-way coupling between the continuous and disperse phases has to be taken into account. For dilute to moderately dense particle flows, the Euler-Lagrange method is capable of resolving the main flow mechanisms. Unfortunately, an accurate computation needs a high number of numerical particles (1,...,10^7) to obtain reliable statistics for the underlying modelling correlations. Because a Lagrangian algorithm cannot be vectorized for complex meshes, the only way to finish such simulations in a reasonable time is parallelization applying the message-passing paradigm. Frank et al. describe the basic ideas for a parallel Eulerian-Lagrangian solver, which uses multigrid for acceleration of the flow equations. The performance figures are quite good, though only steady problems are tackled.
The presented paper is aimed at the numerical prediction of time-dependent fluid-particle flows using the simultaneous particle tracking approach based on the Eulerian-Lagrangian and particle-source-in-cell (PSI-Cell) approaches. It is shown that for unsteady flow prediction, efficiency and load balancing of the parallel numerical simulation is an even more pronounced problem than for steady flow calculations, because the time steps for the integration along one particle trajectory are very small within one time step of the fluid flow integration, so the floating-point workload on a single processor node is usually rather low. Much time is spent on communication and processor waiting, because cold-flow particle convection does not require very extensive calculations. One remedy might be a high-speed switch like Myrinet or Dolphin PCI/SCI (500 MByte/s), which could balance the relatively high floating-point performance of Intel PIII processors against the weak capacity of the Fast-Ethernet communication network (100 Mbit/s) of the Chemnitz Linux Cluster (CLIC) used for the presented calculations. Calculation times and parallel performance are presented for the discussed examples. Another point is the communication of many small packages, which should be aggregated into bigger messages, because each message requires a startup time independent of its size. Summarising the potential of such a parallel algorithm, it is shown that a Beowulf-type cluster computer is a highly competitive alternative to the classical mainframe computer for the investigated Eulerian-Lagrangian simultaneous particle tracking approach.
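The regime numbers quoted in this abstract are simple to compute. The sketch below evaluates the Stokes number from the particle response time and a fluid time scale, and classifies the coupling regime from the particle volume fraction; the thresholds and material values are indicative textbook choices, not from the paper:

```python
def stokes_number(rho_p, d_p, mu, u_f, l_f):
    """St = tau_p / tau_f, with particle response time
    tau_p = rho_p * d_p**2 / (18 * mu) and fluid time scale tau_f = l_f / u_f."""
    tau_p = rho_p * d_p**2 / (18.0 * mu)
    return tau_p / (l_f / u_f)

def coupling_regime(volume_fraction):
    """Crude classification of one-way / two-way / four-way coupling
    by particle volume fraction (thresholds are indicative only)."""
    if volume_fraction < 1e-6:
        return "one-way"
    if volume_fraction < 1e-3:
        return "two-way"
    return "four-way"

# 100-micron particle, rho_p = 1400 kg/m^3, in air (mu = 1.8e-5 Pa s),
# with flow scales u_f = 10 m/s and l_f = 0.1 m -> St ~ 4.3.
st = stokes_number(1400.0, 100e-6, 1.8e-5, 10.0, 0.1)
```

St >> 1 means the particles respond sluggishly to turbulent fluctuations; St << 1 means they behave nearly as tracers.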
APA, Harvard, Vancouver, ISO, and other styles
8

Winter, Henry deGraffenried III. "Combining hydrodynamic modeling with nonthermal test particle tracking to improve flare simulations." Thesis, Montana State University, 2009. http://etd.lib.montana.edu/etd/2009/winter/WinterH0509.pdf.

Full text
Abstract:
Solar flares remain a subject of intense study in the solar physics community. These huge releases of energy on the Sun have direct consequences for humans on Earth and in space, yet the processes that impart such tremendous amounts of energy are not well understood. In order to test theoretical models of flare formation and evolution, state-of-the-art numerical codes must be created that can accurately simulate the wide range of electromagnetic radiation emitted by flares. A direct comparison of simulated radiation with increasingly detailed observations allows scientists to test the validity of theoretical models. To accomplish this task, numerical codes were developed that can simulate both the thermal and nonthermal components of a flaring plasma, their interactions, and their emissions. The HYLOOP code combines a hydrodynamic equation solver with a nonthermal particle tracking code in order to simulate the thermal and nonthermal aspects of a flare. A solar flare was simulated using this new code with both a static and a dynamic atmosphere, to illustrate the importance of hydrodynamic effects on nonthermal beam evolution. The role of density gradients in the evolution of nonthermal electron beams was investigated by studying their effects in isolation, and the importance of the initial pitch-angle cosine distribution to flare dynamics was examined. Emission in XRT filters was calculated and analyzed for soft X-ray signatures that could give clues to the nonthermal particle distributions. Finally, the HXR source motions that appeared in the simulations were compared to real observations of this phenomenon.
APA, Harvard, Vancouver, ISO, and other styles
9

Elmasdotter, Ajla. "An Interactive Eye-tracking based Adaptive Lagrangian Water Simulation Using Smoothed Particle Hydrodynamics." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281978.

Full text
Abstract:
Many water animations and simulations depend on time-consuming algorithms that create realistic water movement and visualization. However, interest in realistic, real-time, interactive simulations is steadily growing in, among others, the game and virtual reality industries. A common method for particle-based water simulation is Smoothed Particle Hydrodynamics (SPH), which also allows for refinement and adaptivity that focuses the computational power on the parts of the simulation that require it the most. This study suggests an eye-tracking-based adaptive method for SPH water simulation, driven by where the user is looking, on the assumption that what a user can neither see nor perceive is of lesser importance. Its performance is evaluated by comparing the suggested method to a surface-based adaptive method, measuring frames per second, the number of particles in the simulation, and the execution time. It is concluded that the eye-tracking-based adaptive method performs better than the surface-based adaptive method in four out of five scenarios, and should hence be considered for further evaluation and possible use in applications requiring real-time water simulation, with the restriction that eye-tracking hardware is necessary for the method to work.
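At the core of any SPH water simulation is a kernel-weighted density estimate over neighboring particles. Below is a minimal brute-force sketch using the 2-D poly6 kernel (illustrative only; a real simulation would add neighbor search, pressure forces, and the adaptivity the thesis describes):

```python
import numpy as np

def sph_density(positions, mass, h):
    """SPH density estimate rho_i = sum_j m * W(|r_i - r_j|, h), using
    the 2-D poly6 kernel W(r) = 4/(pi h^8) * (h^2 - r^2)^3 for r < h."""
    rho = np.zeros(len(positions))
    norm = 4.0 / (np.pi * h**8)
    for i in range(len(positions)):
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)
        rho[i] = mass * np.sum(np.where(d2 < h * h, norm * (h * h - d2) ** 3, 0.0))
    return rho

# Particles on a uniform grid should recover roughly the nominal
# density mass / spacing^2 away from the free boundary.
spacing = 0.05
xs = np.arange(0.0, 1.0, spacing)                  # 20 x 20 particles
pts = np.array([(a, b) for a in xs for b in xs])
rho = sph_density(pts, mass=1.0, h=4 * spacing)
center = 10 * len(xs) + 10                          # particle at (0.5, 0.5)
```

Adaptivity in SPH amounts to splitting or merging particles (changing mass and h locally), which is exactly the budget an eye-tracking criterion can steer.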
APA, Harvard, Vancouver, ISO, and other styles
10

Contro, Alessandro. "Multi-sensing Data Fusion: Target tracking via particle filtering." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/16835/.

Full text
Abstract:
In this Master's thesis, multi-sensing data fusion is first introduced with a focus on perception and the concepts at the base of this work, such as the mathematical tools that make it possible. Particle filters are one class of these tools; they allow a computer to fuse numerical information perceived from the real environment by sensors. For this reason they are described, and state-of-the-art mathematical formulations and algorithms for particle filtering are presented. At the core of this project, a simple piece of software has been developed in order to test these tools in practice. More specifically, a target tracking simulator is presented in which a virtual trackable object moves freely in a 2-dimensional simulated environment while distributed sensor agents, dispersed in the same environment, perceive the object through a state-dependent measurement affected by additive Gaussian noise. Each sensor employs particle filtering, along with communication with neighboring sensors, to update the perceived state of the object and track it as it moves in the environment. The combination of the Java and AgentSpeak languages is used as the platform for the development of this application.
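A bootstrap particle filter of the kind this thesis applies propagates a particle set through a motion model, reweights it by the measurement likelihood, and resamples. A minimal 1-D sketch in Python rather than the thesis's Java/AgentSpeak setting (the noise levels are placeholders):

```python
import numpy as np

def bootstrap_pf_step(particles, z, motion_std, meas_std, rng):
    """One bootstrap particle-filter cycle for a 1-D random-walk target:
    propagate through the motion model, weight by the Gaussian
    measurement likelihood, then resample."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    return particles[rng.choice(len(particles), size=len(particles), p=w)]

# Track a slowly drifting target from noisy position measurements.
rng = np.random.default_rng(1)
n, true_x = 2000, 0.0
particles = rng.normal(0.0, 1.0, n)
for _ in range(50):
    true_x += 0.1                          # target drifts
    z = true_x + rng.normal(0.0, 0.5)      # noisy sensor reading
    particles = bootstrap_pf_step(particles, z, 0.2, 0.5, rng)
estimate = particles.mean()
```

Resampling at every step, as above, is the simplest variant; practical filters often resample only when the effective sample size drops below a threshold.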
APA, Harvard, Vancouver, ISO, and other styles
11

Jansova, Markéta. "Search for the supersymmetric partner of the top quark and measurements of cluster properties in the silicon strip tracker of the CMS experiment at Run 2." Thesis, Strasbourg, 2018. http://www.theses.fr/2018STRAE018/document.

Full text
Abstract:
This thesis presents three different studies based on the CMS Run 2 data. The first two are measurements of cluster properties in the CMS silicon strip tracker, related respectively to highly ionizing particles (HIP) and to charge sharing among neighboring strips (also known as cross talk). The last topic discussed in this document is the search for the supersymmetric partner of the top quark, called the stop. An increase in the hit inefficiency of the CMS silicon strip tracker was observed during the years 2015 and 2016, and highly ionizing particles were identified as a possible cause of these inefficiencies. This thesis brings qualitative and quantitative results on the HIP effect and its probability. The HIP effect was found not to be the largest source of inefficiency at that time; however, once the dominant source was identified and fixed, the new data revealed that the HIP effect now represents the major source of hit inefficiency. The second study focuses on the conditions plugged into the CMS tracker simulation in order to provide realistic results. These conditions change with the tracker operating conditions and also evolve with tracker ageing resulting from radiation damage. We identified that the outdated cross-talk parameters largely impact the cluster width and seed charge. In this thesis the parameters were remeasured, and it was confirmed that the new parameters greatly improve the agreement of clusters between data and simulation. The last part describes in depth the stop search using data recorded in 2016 (corresponding to ∫L = 35.9 fb−1) with a single lepton in the final state. No excess was observed with respect to the standard model background predictions, and exclusion limits in terms of simplified model spectra were derived.
APA, Harvard, Vancouver, ISO, and other styles
12

Ikardouchene, Syphax. "Analyses expérimentale et numérique de l'interaction departicules avec un jet d'air plan impactant une surface.Application au confinement particulaire." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1046.

Full text
Abstract:
The thesis aims to qualify the containment performance of air curtains with respect to particulate pollution. More precisely, it aims to set up, characterize and improve particulate confinement barriers formed by plane air jets placed at the periphery of abrasive rotating machines used to scour asbestos-containing surfaces.
APA, Harvard, Vancouver, ISO, and other styles
13

Hu, Senmiao. "Simulation and Verification of Fluid Jet Polishing." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6515.

Full text
Abstract:
Fluid jet polishing (FJP) is an advanced polishing technology that finds applications in many industries, especially the optics industry. With the broad application of various surfaces in optics, sub-micrometric form accuracy and nanometric surface roughness are major challenges. Fluid jet polishing developed from abrasive water jet machining, a cutting technology that uses a high-pressure flow to remove material. In this thesis, the working principle, simulations, and verification of fluid jet polishing are thoroughly investigated. The verification includes velocity distributions and material removal derivations. The amount of material removed is directly related to the impact velocity of a particle with the surface. During polishing, the particles travel in a solution called slurry; because the velocities of the particles and the slurry are relatively similar, the two are assumed to travel at the same velocity. Three specific examples are investigated through an advanced model created in FLUENT, a computational fluid dynamics software package. The model simulates the particle paths during the fluid jet polishing process, and this thesis compares the simulation results to prior analytical and experimental results. The results indicate that the erosion area at a particular location is axisymmetric when the 2D cross-sectional shape is investigated. As the impingement angle of the fluid jet is reduced, the central dead area, where no polishing is observed, approaches zero. Additionally, the horizontal component of the velocity vector initially increases and then decreases as one moves away from the center stagnation point.
Finally, this thesis demonstrates that the erosion depth into the polished surface increases when the working pressure of the fluid is increased, and finds that material removal is maximum when the distance between the fluid jet and the workpiece is 7 mm.
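The pressure-to-removal chain this abstract describes can be caricatured in two steps: an ideal Bernoulli jet velocity from the working pressure, and a power-law removal-per-impact model. Both constants below are illustrative placeholders, not fitted FJP parameters:

```python
import math

def jet_velocity(delta_p, rho=1000.0):
    """Ideal (Bernoulli) jet exit velocity from the working pressure:
    v = sqrt(2 * delta_p / rho); real nozzle losses reduce this."""
    return math.sqrt(2.0 * delta_p / rho)

def removal_per_impact(v, k=1.0e-12, n=2.0):
    """Toy power-law removal model E = k * v**n; k and n are
    illustrative placeholders, not fitted FJP constants."""
    return k * v ** n

# With n = 2, doubling the working pressure doubles v^2, so this toy
# model predicts removal proportional to pressure.
v1 = jet_velocity(5e5)                     # 5 bar
v2 = jet_velocity(10e5)                    # 10 bar
ratio = removal_per_impact(v2) / removal_per_impact(v1)
```

Real erosion models also depend on impingement angle and particle properties, which is exactly what the CFD particle tracking in the thesis is used to resolve.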
APA, Harvard, Vancouver, ISO, and other styles
14

Pitiot, Thomas. "Outils multirésolutions pour la gestion des interactions en simulation temps réel." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD048/document.

Full text
Abstract:
Most interactive simulations need a collision detection system. This detection requires, first, proximity queries between the entities concerned and, second, computing the behaviour to apply. To perform these queries, the entities present in a scene are either organized hierarchically in a tree or a proximity graph, or embedded in a registration grid. We present a new collision detection model resting on two pillars: a representation of the environment by multiresolution combinatorial maps, and real-time tracking of particles embedded in these maps. This model allows us to represent complex environments while following, in real time, the entities evolving within them. We present tools for registering, and maintaining the registration of, particles, edges and surfaces in volumetric multiresolution combinatorial maps. The results were validated first in 2D with a crowd-simulation application, and then in 3D, in the medical field, with a percutaneous-surgery simulation.
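A registration grid of the kind mentioned in this abstract buckets each particle by its cell so that proximity queries only scan nearby cells. A minimal uniform-grid sketch (a stand-in illustration; the thesis uses multiresolution combinatorial maps instead):

```python
from collections import defaultdict

def build_grid(points, cell):
    """Register each particle in the bucket of the uniform-grid cell
    containing it (a simple stand-in for the thesis's multiresolution
    combinatorial maps)."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def neighbors(points, grid, p, cell, radius):
    """Proximity query scanning only the 3x3 block of cells around p
    (valid while radius <= cell), instead of every particle."""
    cx, cy = int(p[0] // cell), int(p[1] // cell)
    r2, out = radius * radius, []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for i in grid.get((cx + dx, cy + dy), []):
                if (points[i][0] - p[0]) ** 2 + (points[i][1] - p[1]) ** 2 <= r2:
                    out.append(i)
    return out

pts = [(0.1, 0.1), (0.15, 0.12), (0.9, 0.9), (2.5, 2.5)]
grid = build_grid(pts, cell=1.0)
near = neighbors(pts, grid, (0.1, 0.1), cell=1.0, radius=0.2)
```

The trade-off the thesis addresses is that a single fixed cell size handles neither large nor fine geometry well, which is what motivates a multiresolution structure.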
APA, Harvard, Vancouver, ISO, and other styles
15

Akhtar, Kareem. "A Numerical Study of Supersonic Rectangular Jet Impingement and Applications to Cold Spray Technology." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/71711.

Full text
Abstract:
Particle-laden supersonic jets impinging on a flat surface are of interest to cold gas-dynamic spray technology. Solid particles are propelled to a high velocity through a convergent-divergent nozzle, and upon impact on a substrate surface, they undergo plastic deformation and adhere to the surface. For given particle and substrate materials, particle velocity and temperature at impact are the primary parameters that determine the success of particle deposition. Depending on the particle diameter and density, interactions of particles with the turbulent supersonic jet and the compressed gas region near the substrate surface can have significant effects on particle velocity and temperature. Unlike previous numerical simulations of cold spray, in this dissertation we track solid particles in the instantaneous turbulent fluctuating flow field from the nozzle exit to the substrate surface. Thus, we capture the effects of particle-turbulence interactions on particle velocity and temperature at impact. The flow field is obtained by direct numerical simulations of a supersonic rectangular particle-laden air jet impinging on a flat substrate. An Eulerian-Lagrangian approach with two-way coupling between the solid particles and the gas phase is used. The unsteady three-dimensional Navier-Stokes equations are solved using a sixth-order compact scheme with a tenth-order compact filter combined with WENO dissipation, everywhere except in a region around the bow shock, where a fifth-order WENO scheme is used. A fourth-order low-storage Runge-Kutta scheme is used for time integration of the gas dynamics equations simultaneously with the solid-particle equations of motion and the energy equation for particle temperature. Particles are tracked in the instantaneous turbulent jet flow rather than in the mean flow commonly used in previous studies. 
Supersonic jets of air and helium at Mach numbers 2.5 and 2.8, respectively, are simulated for two standoff distances between the nozzle exit and the substrate. Flow structures, mean flow properties, particle impact velocity and particle deposition efficiency on a flat substrate surface are presented. Different grid resolutions are tested using 2, 4 and 8 million points. Good agreement between DNS results and experimental data is obtained for the pressure distribution on the wall and the maximum Mach number profile in the wall jet. Probability density functions for particle velocity and temperature at impact are presented. Deposition efficiency for aluminum and copper particles with diameters in the range of 1 to 40 microns is calculated. Instantaneous flow fields for the two standoff distances considered exhibit different flow characteristics. For the large standoff distance, the jet is unsteady and flaps, both for air (Mach number 2.5) and for helium (Mach number 2.8), in the direction normal to the large cross-section of the jet. Linear stability analysis of the mean jet profile validates the oscillation frequency observed in the present numerical study, as does the available experimental data. After impingement, the flow re-expands from the compressed gas region into a supersonic wall jet. The pressure on the wall in the expansion region is locally lower than the ambient pressure. A strong bow shock occurs only for the small standoff distance; for the large standoff distance, multiple/oblique shocks are observed due to the flapping of the jet. The one-dimensional model based on isentropic flow calculations produces reliable results for particle velocity and temperature. It is found that the lower efficiency of low-pressure cold spray (LPCS) compared to high-pressure cold spray (HPCS) is mainly due to the low temperature of the particles at the nozzle exit. 
Three-dimensional simulations show that small particles are readily influenced by the large-scale turbulent structures developing on the jet shear layers, and they drift sideways; large particles are less influenced by the turbulent flow. Particle velocity and temperature are affected by the compressed gas layer and remain fairly constant in the jet region. With a small increase in the particles' initial temperature, the deposition efficiency in LPCS can be maximized. There is an optimum particle diameter range for maximum deposition efficiency.<br>Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
16

Hamidi, Mohamed Salim. "Direct numerical simulations of flow in dense fluid-particle systems." Electronic Thesis or Diss., Perpignan, 2024. http://www.theses.fr/2024PERP0004.

Full text
Abstract:
Fluid-particle flows hold significant importance in a variety of industrial applications, particularly in the context of third-generation concentrated solar power plants, where they can be used as both a heat transfer fluid and a storage medium. However, studying these flows presents considerable challenges due to the complex multiscale interactions governing them. Numerical simulation, particularly Direct Numerical Simulation (DNS) methods in which the grid resolution is smaller than the particle diameter, emerges as a promising tool for better understanding these flows and aiding in the design of pilot-scale industrial applications. The increase in computational capabilities and the performance of numerical algorithms has made particle-resolved simulations of fluidized beds increasingly feasible for representative studies. In this thesis, we present a numerical method based on the one-fluid formulation. This method combines the front-tracking method with the viscous penalty method to simulate fluid-particle flow behaviors. The front-tracking method employs a dual mesh system: it tracks the moving solid interfaces, represented as a moving mesh, across a fixed simulation grid, ensuring accuracy in representing the particle movements. The viscous penalty method, on the other hand, plays a pivotal role in enforcing rigid-body motion within the particles. This is achieved by treating the fluid inside the particles as an extremely viscous medium, thereby enabling the simulation to realistically mimic the behavior of solid particles under various conditions. For short-range interactions between particles, a combined collision model is used. This model accounts for both viscous dissipation and solid dissipation, primarily due to lubrication effects and inelastic contacts between particles, respectively. The nuanced approach of this model allows for more natural simulations of particle interactions, reducing the reliance on arbitrary numerical parameters often seen in other models cited in the literature. The algorithm is implemented in TrioCFD, an open-source framework designed for massively parallel computing. The accuracy and reliability of the simulation code were rigorously tested against well-established benchmarks in the literature. Furthermore, the thesis includes a parametric simulation of a lab-scale fluidized bed, comparing the accuracy of the algorithm against both experimental and numerical results. These comparisons demonstrate that the proposed algorithm aligns well with established benchmarks and exhibits good accuracy in its predictions.
APA, Harvard, Vancouver, ISO, and other styles
17

Ege, Emre. "A Comparative Study Of Tracking Algorithms In Underwater Environment Using Sonar Simulation." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608866/index.pdf.

Full text
Abstract:
Target tracking is one of the most fundamental elements of a radar system. The aim of target tracking is the reliable estimation of a target's true state based on a time history of noisy sensor observations. In real life, the sensor data may include substantial noise. This noise can render the raw sensor data unsuitable to be used directly. Instead, we must filter the noise, preferably in an optimal manner. For land, air and surface marine vehicles, very successful filtering methods have been developed. However, because of the significant differences in the underwater propagation environment and the associated differences in the corresponding sensors, the successful use of similar principles and techniques in an underwater scenario is still an active topic of research. A comparative study of the effects of the underwater environment on a number of tracking algorithms is the focus of the present thesis. The tracking algorithms inspected are the Kalman Filter, the Extended Kalman Filter and the Particle Filter. We also investigate in particular the IMM extension to the KF and EKF filters. These algorithms are tested under several underwater environment scenarios.
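For readers unfamiliar with the filters compared in this thesis, the scalar Kalman filter predict/update cycle can be sketched as follows. This is a generic, minimal illustration with made-up noise variances (q, r) and measurements, not the thesis's implementation:

```python
def kalman_step(x, P, z, q=0.01, r=1.0):
    """One predict/update cycle for a scalar random-walk state.
    x, P : prior state estimate and its variance
    z    : noisy measurement
    q, r : process and measurement noise variances (assumed values)"""
    P = P + q            # predict: variance grows by the process noise
    K = P / (P + r)      # Kalman gain: how much to trust the measurement
    x = x + K * (z - x)  # update: blend prediction and measurement
    P = (1.0 - K) * P    # updated variance shrinks after the measurement
    return x, P

# Filter a short sequence of noisy readings scattered around a true value of 1.0.
x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, P = kalman_step(x, P, z)
```

The EKF generalizes this by linearizing nonlinear state and measurement maps around the current estimate, while the particle filter replaces the Gaussian pair (x, P) with a weighted sample set, which is what makes it attractive for the strongly non-Gaussian noise of underwater sensing.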
APA, Harvard, Vancouver, ISO, and other styles
18

Barnard, Jack Michael. "Simulation of non-conservative transport using particle tracking methods with an application to soils contaminated with heavy metals." Thesis, Durham University, 2014. http://etheses.dur.ac.uk/10625/.

Full text
Abstract:
This thesis focuses on the development and application of a discrete time random walk particle tracking model for the simulation of non-conservative transport in porous media. The model includes the simulation of solute transport, reversible bimolecular reactions, and sorption. The functionality of the discrete time random walk method is expanded to allow for the simulation of more complicated chemical systems than previously achieved. The bimolecular reaction simulation is based on a colocation probability function method. This reaction simulation method is analysed to investigate the effects of the controlling parameters on its behaviour. This knowledge is then used to inform a discussion of its application to the simulation of mixing-limited reactive transport and a comparison with other approaches. The reaction simulation method developed in the thesis possesses greater flexibility than previously developed methods for the simulation of reactions using particle tracking. The developed model is also applied, in combination with a chemical speciation model, to produce a reduced-complexity model that simulates the effects of an amendment scheme on soils contaminated with heavy metals. The effect of the soil amendment scheme on the partitioning of Pb between solution, soil surfaces, and dissolved organic matter is approximated by rules fitted as functions of the concentrations of single components within the soil amendment. This allows for the simulation of complicated chemical systems using particle tracking methods. As well as expanding the functionality of particle tracking methods, the issue of computational expense is also addressed. A scheme for the optimization of the reaction simulation is presented and its effectiveness investigated. Together with the use of graphics processing units for code acceleration, the computational and temporal expense of the solution is reduced. 
The combination of the expansion in functionality and reduction in run time makes particle tracking a more attractive simulation method.
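The core step of a discrete-time random walk particle tracking model of the kind described above can be sketched in a few lines; the velocity, dispersion coefficient, time step, and particle count below are illustrative placeholders, not values from the thesis:

```python
import math
import random

def rwpt_step(x, v, D, dt, rng):
    """Advance one particle: deterministic advection by the local velocity
    plus a Gaussian random jump.  The sqrt(2*D*dt) scaling makes the
    ensemble variance grow as 2*D*t, i.e. Fickian dispersion in 1-D."""
    return x + v * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)

# Move a cloud of particles released at x = 0 for 100 steps (illustrative values).
rng = random.Random(0)
particles = [0.0] * 1000
for _ in range(100):
    particles = [rwpt_step(x, v=1.0, D=0.1, dt=0.01, rng=rng) for x in particles]

t = 100 * 0.01
mean = sum(particles) / len(particles)                          # ≈ v * t = 1.0
var = sum((x - mean) ** 2 for x in particles) / len(particles)  # ≈ 2 * D * t = 0.2
```

Because each particle moves independently, the loop parallelizes trivially, which is one reason particle tracking pairs well with the GPU acceleration discussed above.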
APA, Harvard, Vancouver, ISO, and other styles
19

Dimou, Konstantina. "3-D hybrid Eulerian-Lagrangian / particle tracking model for simulating mass transport in coastal water bodies." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/28011.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Dolan, Kevin. "Simulations of Aerosol Exposure from a Dusty Table Source." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1562673613531829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Lambert, Andrew Ryan. "Regional deposition of particles in an image-based airway model: Cfd simulation and left-right lung ventilation asymmetry." Thesis, University of Iowa, 2010. https://ir.uiowa.edu/etd/537.

Full text
Abstract:
Regional deposition and ventilation of particles by generation, lobe and lung during steady inhalation in a computed tomography (CT) based human airway model are investigated numerically. The airway model consists of a seven-generation human airway tree, with oral cavity, pharynx and larynx. The turbulent flow in the upper respiratory tract is simulated by large-eddy simulation. The flow boundary conditions at the peripheral airways are derived from CT images at two lung volumes to produce physiologically-realistic regional ventilation. Particles with diameter less than 2.5 microns are selected for study because smaller particles tend to penetrate to the more distal parts of the lung. The current generational particle deposition efficiencies agree well with existing measurement data. Generational deposition efficiencies exhibit similar dependence on particle Stokes number regardless of generation, whereas deposition and ventilation efficiencies vary by lobe and lung, depending on airway morphology and airflow ventilation. In particular, regardless of particle size, the left lung receives a greater proportion of the particle bolus as compared to the right lung in spite of greater flow ventilation to the right lung. This observation is supported by the left-right lung asymmetry of particle ventilation observed in medical imaging. It is found that the particle-laden turbulent laryngeal jet flow, coupled with the unique geometrical features of the airway, causes a disproportionate amount of particles to enter the left lung.
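The particle Stokes number on which the generational deposition efficiencies collapse can be illustrated with a short calculation; the fluid and particle properties used here are assumed round numbers, not values from the study:

```python
def stokes_number(d_p, rho_p, u, mu, L):
    """Stk = rho_p * d_p**2 * u / (18 * mu * L): the ratio of the particle
    inertial response time to a characteristic flow time.  Stk << 1 means
    the particle closely follows the air; Stk near 1 or above favours
    inertial impaction and hence deposition."""
    return rho_p * d_p ** 2 * u / (18.0 * mu * L)

# A 2.5-micron unit-density particle in air moving at 1 m/s through a
# 1 cm airway segment (assumed round numbers).
stk = stokes_number(d_p=2.5e-6, rho_p=1000.0, u=1.0, mu=1.8e-5, L=0.01)
# stk is on the order of 2e-3, so such fine particles mostly follow the flow
# and can penetrate to the distal lung, consistent with the study's choice
# of particles below 2.5 microns.
```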
APA, Harvard, Vancouver, ISO, and other styles
22

Sun, Yuanyuan [Verfasser], Olaf [Akademischer Betreuer] Kolditz, Rudolf [Akademischer Betreuer] Liedl, and Markus [Akademischer Betreuer] Weitere. "Water Quality Simulation with Particle Tracking Method / Yuanyuan Sun. Gutachter: Olaf Kolditz ; Rudolf Liedl. Betreuer: Olaf Kolditz ; Rudolf Liedl ; Markus Weitere." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://d-nb.info/1068154489/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Fu, Sijie. "Vélocimétrie par suivi 3D de particules pour la caractérisation des champs thermo-convectifs dans le bâtiment." Thesis, Université Côte d'Azur (ComUE), 2016. http://www.theses.fr/2016AZUR4078/document.

Full text
Abstract:
The objective of this thesis is to conduct a comprehensive study on 3D Particle Tracking Velocimetry (PTV) for thermal convective indoor airflow. The work mainly concentrates on a literature survey, the performance evaluation of 3D PTV measurement algorithms, and an experimental investigation of thermal convective indoor airflow using 3D PTV measurement technology. First, typical 3D PTV technology and its main previous applications to indoor airflow studies are carefully reviewed. Then, the performances of different 3D PTV measurement algorithms are evaluated numerically and experimentally. This part consists of two sections: one compares the measurement performance of a typical PIV algorithm with that of a 3D PTV algorithm, the other compares the performances of seven complete 3D PTV algorithms. Last, based on the analysis in the thesis, an experimental investigation of indoor airflow generated by the mixing ventilation method is conducted.
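At the heart of any PTV measurement algorithm is frame-to-frame particle matching; a simple greedy nearest-neighbour version of this step can be sketched as follows (the coordinates and search radius are hypothetical):

```python
def match_particles(frame_a, frame_b, max_dist):
    """Greedily pair each particle in frame_a with its nearest unused
    neighbour in frame_b, rejecting pairs farther apart than max_dist."""
    used = set()
    pairs = []
    for i, (xa, ya) in enumerate(frame_a):
        best, best_d2 = None, max_dist ** 2
        for j, (xb, yb) in enumerate(frame_b):
            if j in used:
                continue
            d2 = (xb - xa) ** 2 + (yb - ya) ** 2
            if d2 <= best_d2:
                best, best_d2 = j, d2
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

a = [(0.0, 0.0), (5.0, 5.0)]
b = [(5.1, 5.0), (0.1, 0.0), (9.0, 9.0)]
pairs = match_particles(a, b, max_dist=0.5)  # [(0, 1), (1, 0)]
```

A production PTV code would typically replace the greedy loop with a globally optimal assignment and a spatial index, and extend the distance test to 3D, but the rejection-radius idea is the same.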
APA, Harvard, Vancouver, ISO, and other styles
24

Butaye, Edouard. "Modélisation et simulations résolues d'écoulement fluide-particules : du régime de Stokes aux lits fluidisés anisothermes." Electronic Thesis or Diss., Perpignan, 2024. http://www.theses.fr/2024PERP0029.

Full text
Abstract:
Solar tower power plants harness concentrated solar flux to heat a fluid and generate electricity through a thermodynamic cycle that produces steam and drives a turbo-alternator. To increase the thermal-to-electrical conversion efficiency, the receiver outlet temperature must be raised to at least 800°C. An alternative to conventional heat transfer fluids is to use air-fluidized particles, raising the working temperature and maximizing wall heat transfer. The solid particles used can withstand temperatures in excess of 1000°C without degradation of their physical properties, and they can also store heat efficiently. To meet these challenges, it is necessary to characterize the flow within the receiver tube, as well as the physical mechanisms of heat transfer in these configurations. This work focuses on the local description of anisothermal fluid-particle flows using particle-resolved direct numerical simulations (PR-DNS) with high-performance computing. Improvements are first implemented in the code to compute quantities of interest and optimize the numerical method. Next, several liquid-solid fluidized bed configurations are studied to extensively characterize the flow dynamics. Wall heat transfer is also captured, as is heat transfer between the fluid and the particles. Gas-solid configurations are studied to validate the numerical simulation tool for modeling these flows. Finally, a new resolution scale is proposed, referred to as Particle-Resolved Subgrid-Corrected Simulation (PR-SCS), which models the hydrodynamic forces accurately despite a coarse resolution.
APA, Harvard, Vancouver, ISO, and other styles
25

MERICO, DAVIDE. "Tracking with high-density, large-scale wireless sensor networks." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2010. http://hdl.handle.net/10281/7785.

Full text
Abstract:
Given the continuous technological advances in computing and communication, it seems that we are rapidly heading towards the realization of paradigms commonly described as ubiquitous computing, pervasive computing, ambient intelligence, or, more recently, "everyware". These paradigms envision living environments pervaded by a high number of invisible technological devices affecting and improving all aspects of our lives. Therefore, it is easy to justify the need of knowing the physical location of users. Outdoor location-aware applications are already widespread today, their growing popularity showing that location-awareness is indeed a very useful functionality. Less obvious is how the growing availability of these locations and tracks will be exploited to provide more intelligent "situation-understanding" services that help people. My work is motivated by the fact that, thanks to location-awareness systems, we are more and more aware of the exact positions of users, but we are rarely capable of understanding exactly what they are doing. Location awareness should rapidly evolve into "situation awareness"; otherwise the ubiquitous-computing vision will become impracticable. The goal of this thesis is to devise alternative and innovative approaches to the problem of indoor position estimation/assessment and to evaluate them in real environments. These approaches are based on: (i) a low-cost and energy-aware localization infrastructure; (ii) multi-sensor, statistically-based localization algorithms; (iii) logic-based situation assessment techniques. The algorithms and techniques that are the outcome of this thesis have all been tested by implementing them and measuring their performance in the field, both quantitatively and qualitatively.
APA, Harvard, Vancouver, ISO, and other styles
26

Koyama, Tomofumi. "Stress, Flow and Particle Transport in Rock Fractures." Doctoral thesis, Stockholm : Mark- och vattenteknik, Kungliga Tekniska högskolan, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4485.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Shah, Anant Pankaj. "Development and application of a dispersed two-phase flow capability in a general multi-block Navier Stokes solver." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/36101.

Full text
Abstract:
Gas turbines for military applications, when operating in harsh environments like deserts, often encounter unexpected operation faults. Such performance deterioration of the gas turbine decreases the mission readiness of the Air Force and simultaneously increases maintenance costs. Some of the major factors responsible for the reduced performance are ingestion of debris during takeoff and landing, distorted intake flows during low-altitude maneuvers, and hot gas ingestion during artillery firing. The focus of this thesis is the study of debris ingestion, specifically sand, with the region of interest being the internal ribbed cooling duct of the turbine blade. The presence of serpentine passages and strong localized cross-flow components makes this region prone to deposition, erosion, and corrosion (DEC) by sand particles. A Lagrangian particle tracking technique was implemented in a generalized-coordinate multi-block Navier-Stokes solver in a distributed parallel framework. The developed algorithm was validated by comparing the computed particle statistics for 28-micron lycopodium, 50-micron glass, and 70-micron copper particles with available data [2] for a turbulent channel flow at a friction Reynolds number of 180. Computations were performed for a particle-laden turbulent flow through a stationary ribbed square duct (rib pitch / rib height = 10, rib height / hydraulic diameter = 0.1) using an Eulerian-Lagrangian framework. Particle sizes of 10, 50, and 100 microns with response times (normalized by friction velocity and hydraulic diameter) of 0.06875, 1.71875, and 6.875, respectively, are considered. The calculations are performed for a nominal bulk Reynolds number of 20,000 under fully developed conditions. The carrier phase was solved using Large Eddy Simulation (LES) with the Dynamic Smagorinsky Model [1]. Due to the low volume fraction of the particles, one-way fluid-particle coupling was assumed. 
It is found that at any given instant in time about 40% of the total number of 10-micron particles are concentrated in the vicinity (within 0.05 Dh) of the duct surfaces, compared to 26% of the 50- and 100-micron particles. The 10-micron particles are more sensitive to the flow features and more prone to preferential concentration than the larger particles. At the side walls of the duct, the 10-micron particles exhibit a high potential to erode the region in the vicinity of the rib due to secondary flow impingement. The larger particles are more prone to eroding the area between the ribs and towards the center of the duct. At the ribbed walls, while the 10-micron particles exhibit a fairly uniform propensity for erosion, the 100-micron particles show a much higher tendency to erode the surface in the vicinity of the reattachment region. The rib face facing the flow is by far the most susceptible to erosion and deposition for all particle sizes. While the top of the rib does not exhibit a large propensity to be eroded, the back of the rib is as susceptible as the other duct surfaces because of particles that are entrained into the recirculation zone behind the rib.<br>Master of Science
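The particle response times quoted above follow from Stokes drag, tau_p = rho_p * d_p**2 / (18 * mu). A one-way-coupled Lagrangian tracking step consistent with that model can be sketched as follows; all numerical values are illustrative, not from the thesis:

```python
import math

def advance_particle(v_p, u_f, tau_p, dt):
    """Advance the particle velocity by integrating dv_p/dt = (u_f - v_p)/tau_p
    exactly over one time step (Stokes drag).  One-way coupling: the fluid
    velocity u_f is read from the carrier-phase solution and never modified."""
    return u_f + (v_p - u_f) * math.exp(-dt / tau_p)

# A particle released at rest relaxes toward the local fluid velocity.
v = 0.0
for _ in range(100):
    v = advance_particle(v, u_f=1.0, tau_p=0.1, dt=0.01)
# After t = 1 s = 10 tau_p the particle has essentially caught up with the fluid;
# particles with larger tau_p lag behind, which is why the large particles above
# respond less to the turbulent fluctuations.
```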
APA, Harvard, Vancouver, ISO, and other styles
28

Miyawaki, Shinjiro. "Automatic construction and meshing of multiscale image-based human airway models for simulations of aerosol delivery." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/1990.

Full text
Abstract:
The author developed a computational framework for the study of the correlation between airway morphology and aerosol deposition based on a population of human subjects. The major improvement on the previous framework, which consists of a geometric airway model, a computational fluid dynamics (CFD) model, and a particle tracking algorithm, lies in automatic geometry construction and mesh generation of airways, which is essential for a population-based study. The new geometric model overcomes the shortcomings of both centerline (CL)-based cylindrical models, which are based on the skeleton and average branch diameters of airways called one-dimensional (1-D) trees, and computed tomography (CT)-based models. CL-based models are efficient in terms of pre- and post-processing, but fail to represent trifurcations and local morphology. In contrast, in spite of the accuracy of CT-based models, it is time-consuming to build these models manually, and non-trivial to match 1-D trees and three-dimensional (3-D) geometry. The new model, also known as a hybrid CL-CT-based model, is able to construct a physiologically-consistent laryngeal geometry, represent trifurcations, fit cylindrical branches to CT data, and create the optimal CFD mesh in an automatic fashion. The hybrid airway geometries constructed for 8 healthy and 16 severe asthmatic (SA) subjects agreed well with their CT-based counterparts. Furthermore, the prediction of aerosol deposition in a healthy subject by the hybrid model agreed well with that by the CT-based model. To demonstrate the potential application of the hybrid model to investigating the correlation between skeleton structure and aerosol deposition, the author applied the large eddy simulation (LES)-based CFD model that accounts for the turbulent laryngeal jet to three hybrid models of SA subjects. The correlation between diseased branch and aerosol deposition was significant in one of the three SA subjects. 
However, whether skeleton structure contributes to airway abnormality requires further investigation.
APA, Harvard, Vancouver, ISO, and other styles
29

Mahee, Durude. "Numerical Simulation and Graphical Illustration of Ionization by Charged Particles as a Tool toward Understanding Biological Effects of Ionizing Radiation." University of Cincinnati / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1535381068931831.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Mårtensson, Oskar. "LHCb Upstream Tracker box : Thermal studies and conceptual design." Thesis, Umeå universitet, Institutionen för fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-116163.

Full text
Abstract:
The LHC (Large Hadron Collider) will have a long shutdown in 2019 and 2020, referred to as LS2. During this stop the LHC injector complex will be upgraded to increase the luminosities, which will be the first step of the high-luminosity LHC program (to be realized during LS3, in 2024-2026). The LHCb experiment, whose main purpose is to study CP violation, will be upgraded during this long stop in order to withstand a higher radiation dose and to read out the detector at a rate of 40 MHz, compared to 1 MHz at present. This change will improve the trigger efficiency significantly. One of the LHCb sub-detectors, the Trigger Tracker (TT), will be replaced by a new sub-detector called the UT. This report presents the early-stage design (in preparation for mock-up building) of the box that will isolate the new UT detector from its surroundings and ensure optimal detector operation. Methods to fulfill requirements such as light and gas tightness, Faraday-cage behavior and condensation-free temperatures, without breaking the fragile beryllium beam pipe, are established.<br>LHCb, LS2 and LS3 Upgrade
APA, Harvard, Vancouver, ISO, and other styles
31

Arimboor, Chinnan Jacob. "Simulation and validation of in-cylinder combustion for a heavy-duty Otto gas engine using 3D-CFD technique." Thesis, KTH, Förbränningsmotorteknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-245172.

Full text
Abstract:
Utsläpp från bilar har spelat stor roll de senaste decennierna. Detta har lett till ökad användning av Otto gasmotorer som använder naturgas som bränsle. Nya motordesigner behöver optimeras för att förbättra motorens effektivitet. Ett effektivt sätt att göra detta på är genom användningen av simuleringar för att minska ledtiden i motorutvecklingen. Verifiering och validering av simuleringarna spelar stor roll för att bygga förtroende för och förutsägbarhet hos simuleringsresultaten. Syftet med detta examensarbete är att föreslå förbränningsmodellparametrarna efter utvärdering av olika kombinationer av förbrännings- och tändmodeller för Otto förbränning, vad gäller beräkningstid och noggrannhet. In-cylindertrycksspår från simulering och mätning jämförs för att hitta den bästa kombinationen av förbrännings- och tändmodell. Inverkan av tändtid, antal motorcykler och randvillkor för simuleringsresultatet studeras också. Resultaten visar att ECFM-förbränningsmodellen förutsäger simuleringsresultaten mer exakt när man jämför med mätningarna. Effekten av tändningstiden på olika kombinationer av förbrännings- och tändningsmodell utvärderas också. Stabiliteten hos olika förbränningssimuleringsmodeller diskuteras också under körning för fler motorcykler. Jämförelse av beräkningstid görs även för olika kombinationer av förbrännings- och tändmodeller. Resultaten visar också att flamspårningsmetoden med Euler är mer känslig för cellstorlek och kvalitet hos simuleringsnätet, jämfört med övriga studerade modeller. Rekommendationer och förslag ges om nät- och simulerings-inställningar för att prediktera förbränningen på ett så bra sätt som möjligt. Några möjliga förbättringsområden ges som framtida arbete för att förbättra noggrannheten i simuleringsresultaten.<br>Emission from automobiles has been gaining importance for the past few decades. This has gained a lot of impetus in the search for alternate fuels among automotive manufacturers. 
This has led to the increased usage of the Otto gas engine, which uses natural gas as fuel. New engine designs have to be optimized to improve engine efficiency. This has led to the usage of virtual simulations for reducing the lead time in engine development. The verification and validation of the actual phenomenon in the virtual simulations with respect to the physical measurements is quite important.  The aim of this master thesis is to suggest the combustion model parameters after evaluating various combinations of combustion and ignition models in terms of computational time and accuracy. The in-cylinder pressure trace from the simulation is compared with the measurement in order to find the best-suited combination of combustion and ignition models. The influence of ignition timing, number of engine cycles and boundary conditions on the simulation results is also studied. Results showed that the ECFM combustion model predicts the simulation results more accurately when compared to the measurements. The impact of ignition timing on various combinations of combustion and ignition models is also assessed. The stability of various combustion simulation models is also discussed while running for more engine cycles. A comparison of computational time is also made for various combinations of combustion and ignition models. Results also showed that the flame tracking method using Euler is dependent on the mesh resolution and the mesh quality.  Recommendations and suggestions are given about the mesh and simulation settings for predicting the combustion simulation accurately. Some possible areas of improvement are given as future work for improving the accuracy of the simulation results.
APA, Harvard, Vancouver, ISO, and other styles
32

Lanoiselée, Yann. "Revealing the transport mechanisms from a single trajectory in living cells." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLX081/document.

Full text
Abstract:
Cette thèse est dédiée à l’analyse et la modélisation d'expériences où la position d'un traceur dans le milieu cellulaire est enregistrée au cours du temps. Il s’agit de pouvoir de retirer le maximum d’information à partir d’une seule trajectoire observée expérimentalement. L’enjeu principal consiste à identifier les mécanismes de transport sous-jacents au mouvement observé. La difficulté de cette tâche réside dans l’analyse de trajectoires individuelles, qui requiert de développer de nouveaux outils d’analyse statistique. Dans le premier chapitre, un aperçu est donné de la grande variété des dynamiques observables dans le milieu cellulaire. Notamment, une revue de différents modèles de diffusion anormale et non-Gaussienne est réalisée. Dans le second chapitre, un test est proposé afin de révéler la rupture d'ergodicité faible à partir d’une trajectoire unique. C’est une généralisation de l’approche de M. Magdziarz et A. Weron basée sur la fonction caractéristique du processus moyennée au cours du temps. Ce nouvel estimateur est capable d’identifier la rupture d’ergodicité de la marche aléatoire à temps continu où les temps d'attente sont distribués selon une loi puissance. Par le calcul de la moyenne de l’estimateur pour plusieurs modèles typiques de sous diffusion, l’applicabilité de la méthode est démontrée. Dans le troisième chapitre, un algorithme est proposé afin reconnaître à partir d’une seule trajectoire les différentes phases d'un processus intermittent (e.g. le transport actif/passif à l'intérieur des cellules, etc.). Ce test suppose que le processus alterne entre deux phases distinctes mais ne nécessite aucune hypothèse sur la dynamique propre dans chacune des phases. Les changements de phase sont capturés par le calcul de quantités associées à l’enveloppe convexe locale (volume, diamètre) évaluées au long de la trajectoire. 
Il est montré que cet algorithme est efficace pour distinguer les états d’une large classe de processus intermittents (6 modèles testés). De plus, cet algorithme est robuste à de forts niveaux de bruit en raison de la nature intégrale de l’enveloppe convexe. Dans le quatrième chapitre, un modèle de diffusion dans un milieu hétérogène où le coefficient de diffusion évolue aléatoirement est introduit et résolu analytiquement. La densité de probabilité des déplacements présente des queues exponentielles et converge vers une Gaussienne au temps long. Ce modèle généralise les approches précédentes et permet ainsi d’étudier en détail les hétérogénéités dynamiques. En particulier, il est montré que ces hétérogénéités peuvent affecter de manière drastique la précision de mesures effectuées sur une trajectoire par des moyennes temporelles. Dans le dernier chapitre, les méthodes d’analyses de trajectoires individuelles sont utilisées pour étudier deux expériences. La première analyse effectuée révèle que les traceurs explorant le cytoplasme montrent que la densité de probabilité des déplacements présente des queues exponentielles sur des temps plus longs que la seconde. Ce comportement est indépendant de la présence de microtubules ou du réseau d’actine dans la cellule. Les trajectoires observées présentent donc des fluctuations de diffusivité témoignant pour la première fois de la présence d’hétérogénéités dynamiques au sein du cytoplasme. La seconde analyse traite une expérience dans laquelle un ensemble de disques de 4mm de diamètre a été vibré verticalement sur une plaque, induisant un mouvement aléatoire des disques. Par une analyse statistique approfondie, il est démontré que cette expérience est proche d'une réalisation macroscopique d'un mouvement Brownien. Cependant les densités de probabilité des déplacements des disques présentent des déviations par rapport à la Gaussienne qui sont interprétées comme le résultat des chocs inter-disque. 
Dans la conclusion, les limites des approches adoptées ainsi que les futures pistes de recherches ouvertes par ces travaux sont discutées en détail<br>This thesis is dedicated to the analysis and modeling of experiments where the position of a tracer in the cellular medium is recorded over time. The goal is to extract as much information as possible from a single experimentally observed trajectory. The main challenge is to identify the transport mechanisms underlying the observed movement. The difficulty of this task lies in the analysis of individual trajectories, which requires the development of new statistical analysis tools. In the first chapter, an overview is given of the wide variety of dynamics that can be observed in the cellular medium. In particular, a review of different models of anomalous and non-Gaussian diffusion is carried out. In the second chapter, a test is proposed to reveal weak ergodicity breaking from a single trajectory. This is a generalization of the approach of M. Magdziarz and A. Weron based on the time-averaged characteristic function of the process. This new estimator is able to identify the ergodicity breaking of the continuous-time random walk whose waiting times are power-law distributed. By calculating the average of the estimator for several typical subdiffusion models, the applicability of the method is demonstrated. In the third chapter, an algorithm is proposed to recognize the different phases of an intermittent process from a single trajectory (e.g. active/passive transport within cells, etc.). This test assumes that the process alternates between two distinct phases but does not require any hypothesis on the dynamics of each phase. Phase changes are captured by calculating quantities associated with the local convex hull (volume, diameter) evaluated along the trajectory. It is shown that this algorithm is effective in distinguishing the states of a large class of intermittent processes (6 models tested). 
In addition, this algorithm is robust to high noise levels due to the integral nature of the convex hull. In the fourth chapter, a diffusion model in a heterogeneous medium where the diffusion coefficient evolves randomly is introduced and solved analytically. The probability density function of the displacements presents exponential tails and converges towards a Gaussian at long times. This model generalizes previous approaches and thus makes it possible to study dynamic heterogeneities in detail. In particular, it is shown that these heterogeneities can drastically affect the accuracy of measurements made by time averages along a trajectory. In the last chapter, single-trajectory methods are used for the analysis of two experiments. The first analysis shows that, for tracers exploring the cytoplasm, the probability density of displacements has exponential tails over times longer than one second. This behavior is independent of the presence of both microtubules and the actin network in the cell. The observed trajectories therefore show fluctuations in diffusivity, indicating for the first time the presence of dynamic heterogeneities within the cytoplasm. The second analysis deals with an experiment in which a set of 4 mm diameter disks was vibrated vertically on a plate, inducing random motion of the disks. Through an in-depth statistical analysis, it is demonstrated that this experiment is close to a macroscopic realization of Brownian motion. However, the probability densities of the disks' displacements show deviations from the Gaussian, which are interpreted as the result of inter-disk shocks. In the conclusion, the limits of the approaches adopted as well as the future research directions opened by this thesis are discussed in detail
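The local-convex-hull idea described in the abstract can be illustrated with a minimal sketch. For points in a sliding window, the diameter of their convex hull equals the largest pairwise distance, so it can be computed directly; the window size and the two-phase toy trajectory below are illustrative assumptions, not the thesis's actual test cases.

```python
import numpy as np

def local_hull_diameter(traj, window=10):
    """Diameter of the convex hull of the trajectory points inside a
    sliding window around each time step (the hull diameter equals the
    largest pairwise distance).  Large values suggest an active/ballistic
    phase, small values a passive/confined one."""
    n = len(traj)
    diam = np.zeros(n)
    for t in range(n):
        pts = traj[max(0, t - window):t + window + 1]
        # all pairwise distances inside the window
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        diam[t] = d.max()
    return diam

# toy trajectory: 100 steps of confined jitter, then 100 steps of directed run
rng = np.random.default_rng(0)
steps = np.vstack([rng.normal(0.0, 0.05, (100, 2)),
                   rng.normal(0.0, 0.05, (100, 2)) + [0.5, 0.0]])
diam = local_hull_diameter(np.cumsum(steps, axis=0))
```

Thresholding `diam` (or the hull volume in 3D) then yields a two-state segmentation; the integral nature of the hull over a whole window is what gives the noise robustness mentioned above.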
APA, Harvard, Vancouver, ISO, and other styles
33

Contin, Giacomo. "The Silicon Strip Detector (SSD) for the ALICE experiment at LHC: construction, characterization and charged particles multiplicity studies." Doctoral thesis, Università degli studi di Trieste, 2009. http://hdl.handle.net/10077/3070.

Full text
Abstract:
2007/2008<br>La presente tesi descrive le attivita' di ricerca legate alla costruzione, la caratterizzazione e la validazione del rivelatore a micro-strisce di silicio (SSD) per il sistema di tracciamento dell'esperimento ALICE presso il collisionatore LHC al CERN. Nel primo capitolo della tesi si introduce la fisica delle collisioni fra ioni pesanti e si descrivono le principali osservabili che saranno utilizzate dall'esperimento ALICE per studiare la formazione e la successiva evoluzione del Plasma di Quark e Gluoni. Nel secondo capitolo e' presentata una descrizione del rivelatore di ALICE e sono discusse in particolare le caratteristiche del sistema di tracciamento, di cui l'SSD e' parte integrante, e le sue prestazioni in relazione alla fisica di ALICE. La terza parte della tesi riguarda le attivita' correlate alla costruzione e alla caratterizzazione dell'SSD: dopo la produzione e i test di accettazione, e' stata condotta un'indagine estensiva ed approfondita sui moduli difettosi, al fine di comprendere l'origine delle problematiche riscontrate e di elaborare soluzioni appropriate. Il lavoro effettuato ha permesso di recuperare numerosi moduli e di innalzare la qualita' globale del rivelatore. Dopo le operazioni di assemblaggio, il rivelatore nella configurazione finale e' stato completamente caratterizzato prima dell'installazione nel sito sperimentale. Una volta installato, le funzionalita' dell'SSD e la sua integrazione in ALICE sono state infine verificate durante la fase di commissioning, attraverso un elevato numero di acquisizioni di dati di rumore e di raggi cosmici. La caratterizzazione del rivelatore completo ha dimostrato l'importanza di un' efficace correzione hardware del common mode per l'efficienza e la qualita' globali dell'SSD. A tal fine, gli effetti di questa particolare fonte di rumore sono stati studiati attraverso una serie di simulazioni. 
I risultati di questo studio sono presentati nel quarto capitolo della tesi e due algoritmi sono proposti per un efficiente trattamento e reiezione del rumore di common mode. Infine, nell'ultimo capitolo viene descritto uno studio di fattibilita' della misura della molteplicita' di particelle cariche con l'SSD. In vista della prima fase di acquisizione dati dell'esperimento ALICE, e' stato simulato un campione di eventi protone-protone a 900 GeV di energia; l'efficienza di ricostruzione dei segnali di particella e' stata studiata e misurata in funzione delle caratteristiche funzionali del detector. Infine, la correlazione tra i segnali ricostruiti nell'SSD e le osservabili fisiche simulate dal Monte Carlo e' stata usata per caratterizzare l'interazione primaria.<br>The present thesis is focused on the construction, characterization and performance assessment of the Silicon Strip Detector (SSD) for the tracking system of the ALICE experiment at LHC. The first part introduces the physics of heavy-ion collisions and describes the main observables which are going to be studied with the ALICE detector as possible signatures of the onset of the Quark Gluon Plasma phase transition. A description of the ALICE detector is presented in the second chapter, with particular emphasis on the tracking system, where the SSD plays an important role, and on its performance related to the ALICE physics topics. The third part of the thesis deals with the activities related to the construction and characterization of the SSD: after the production and the acceptance tests, extensive work was performed on the malfunctioning modules, in order to find out the origin of several defects and to develop proper solutions. Their application made it possible to recover a fair number of modules and to enhance the global quality of the SSD. 
After the assembly operations, a complete characterization of the detector in the final configuration was performed before its installation in the experimental site. Once installed, it was tested and characterized during the commissioning phase, through a large set of noise and cosmic-ray data acquisitions. The characterization showed the relevance of a proper hardware common-mode correction for the efficiency and overall quality of the SSD. In order to improve the SSD data quality, the effects of this particular source of noise were studied with a set of simulations. The results are presented in the fourth part of the present thesis; in addition, two different algorithms for an efficient common-mode noise treatment and rejection are proposed. In the last part of the thesis, the feasibility of a charged-particle multiplicity measurement with the SSD was explored. In view of the first data-taking phase of the ALICE experiment, a 900 GeV proton-proton collision sample was simulated, and the particle-signal reconstruction efficiency was studied and measured as a function of the SSD quality, represented by the number of properly operating channels and their noise characteristics. Finally, the correlation between SSD reconstructed signals and the Monte Carlo simulated physical observables was used to characterize the primary interactions.<br>XXI Ciclo<br>1978
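The abstract stresses common-mode correction without detailing the algorithms; a common software analogue, sketched below under the assumption of 128 strips per front-end chip (a typical but here hypothetical grouping), subtracts the per-chip median, which is robust to the few strips carrying genuine signal.

```python
import numpy as np

def correct_common_mode(adc, strips_per_chip=128):
    """Subtract the per-chip median from each strip: the median estimates
    the coherent (common-mode) baseline shift while being robust to the
    few strips that carry a real particle signal."""
    out = np.asarray(adc, dtype=float).copy()
    for start in range(0, out.size, strips_per_chip):
        chunk = out[start:start + strips_per_chip]
        chunk -= np.median(chunk)          # in-place per-chip correction
    return out

# toy event: Gaussian noise, a +15 ADC common-mode jump on the second chip,
# and one genuine hit of +80 ADC on strip 200
rng = np.random.default_rng(1)
event = rng.normal(0.0, 2.0, 256)
event[128:] += 15.0
event[200] += 80.0
clean = correct_common_mode(event)
```

After correction the baseline of the second chip returns to zero while the hit survives, which is the behaviour a hardware correction must reproduce online.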
APA, Harvard, Vancouver, ISO, and other styles
34

Bandieramonte, Marilena. "Muon Portal project: Tracks reconstruction, automated object recognition and visualization techniques for muon tomography data analysis." Doctoral thesis, Università di Catania, 2015. http://hdl.handle.net/10761/3751.

Full text
Abstract:
The present Ph.D. thesis is contextualized within the Muon Portal project, a project dedicated to the creation of a tomograph for the control and scanning of containers at the border in order to reveal smuggled fissile material by means of cosmic-muon scattering. This work aims to extend and consolidate the research in the field of muon tomography in the context of applied physics. The main purpose of the thesis is to investigate new techniques for the reconstruction of muon tracks within the detector and new approaches to the analysis of muon tomography data for automatic object recognition and 3D visualization, thus making possible the realization of a tomography of the entire container. The research work was divided into different phases, described in this thesis document: from a preliminary speculative study of the state of the art on the tracking issue and on track reconstruction algorithms, to the study of the Muon Portal detector performance in the case of particle tracking at low and high multiplicity. A substantial part of the work was devoted to the study of different image reconstruction techniques based on the POCA algorithm (Point of Closest Approach) and the iterative EM-LM algorithm (Expectation-Maximization). In addition, more advanced methods for track reconstruction and visualization, such as data-mining techniques and clustering algorithms, have been the subject of the research and development activity, which culminated in the development of an unsupervised multiphase clustering algorithm (modified-Friends-of-Friends) for muon tomography data analysis.
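The POCA reconstruction named above admits a compact sketch: treat the incoming and outgoing muon tracks as straight lines, find the closest points on each, and take their midpoint as the scattering vertex (the parallel-track fallback and the toy geometry are illustrative choices, not the project's actual code).

```python
import numpy as np

def poca(p_in, d_in, p_out, d_out):
    """Point of Closest Approach between two straight tracks, each given
    by a point and a direction.  Returns the midpoint of the closest
    points (an estimate of the scattering vertex) and the scattering angle."""
    p1, d1 = np.asarray(p_in, float), np.asarray(d_in, float)
    p2, d2 = np.asarray(p_out, float), np.asarray(d_out, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:                 # (anti)parallel tracks
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    q1, q2 = p1 + s * d1, p2 + t * d2      # closest points on each line
    theta = np.arccos(np.clip(b / np.sqrt(a * c), -1.0, 1.0))
    return 0.5 * (q1 + q2), theta

# muon entering along -z, scattered by ~20 mrad at the origin
vertex, theta = poca([0, 0, 10], [0, 0, -1], [0.2, 0, -10], [0.02, 0, -1])
```

Accumulating the returned vertices in a voxel grid, weighted by the squared scattering angle, is the usual way a POCA-based tomographic image is built.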
APA, Harvard, Vancouver, ISO, and other styles
35

Lindfeldt, Olov. "Railway operation analysis : Evaluation of quality, infrastructure and timetable on single and double-track lines with analytical models and simulation." Doctoral thesis, KTH, Trafik och Logistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12727.

Full text
Abstract:
This thesis shows the advantages of simple models for analysis of railway operation. It presents two tools for infrastructure and timetable planning. It shows how the infrastructure can be analysed through fictive line designs, how the timetable can be treated as a variable and how delays can be used as performance measures. The thesis also gives examples of analyses of complex traffic situations through simulation experiments. Infrastructure configuration, timetable design and delays play important roles in the competitiveness of railway transportation. This is especially true on single-track lines where the run times and other timetable-related parameters are severely restricted by crossings (train meetings). The first half of this thesis focuses on the crossing time, i.e. the time loss that occurs in crossing situations. A simplified analytical model, SAMFOST, has been developed to calculate the crossing time as a function of infrastructure configuration, vehicle properties, timetable and delays for two crossing trains. Three measures of timetable flexibility are proposed and they can be used to evaluate how infrastructure configuration, vehicle properties, punctuality, etc. affect possibilities to alter the timetable. Double-track lines operated with mixed traffic show properties similar to those of single-track lines. In this case overtakings imply scheduled delays as well as a risk of delay propagation. Two different methods are applied for analysis of double-tracks: a combinatorial, mathematical model (TVEM) and simulation experiments. TVEM, Timetable Variant Evaluation Model, is a generic model that systematically generates and evaluates timetable variants. This method is especially useful for mixed traffic operation where the impact of the timetable is considerable. TVEM may also be used for evaluation of different infrastructure designs. 
Analyses performed in TVEM show that the impact on capacity from the infrastructure increases with speed differences and frequency of service for the passenger trains, whereas the impact of the timetable is strongest when the speed differences are low and/or the frequency of passenger services is low. Simulation experiments were performed to take delays and perturbations into account. A simulation model was set up in the micro-simulation tool RailSys and calibrated against real operational data. The calibrated model was used for multi-factor analysis through experiments where infrastructure, timetable and perturbation factors were varied according to an experimental design and evaluated through response surface methods. The additional delay was used as the response variable. Timetable factors, such as frequency of high-speed services and freight train speed, turned out to be of great importance for the additional delay, whereas some of the perturbation factors, such as entry delays, only showed a minor impact. The infrastructure factor, distance between overtaking stations, showed complex relationships with several interactions, principally with timetable factors.<br>QC20100622<br>Framtida infrastruktur och kvalitet i tågföring
APA, Harvard, Vancouver, ISO, and other styles
36

Chenouard, Nicolas. "Avancées en suivi probabiliste de particules pour l'imagerie biologique." Phd thesis, Télécom ParisTech, 2010. http://tel.archives-ouvertes.fr/tel-00560530.

Full text
Abstract:
Particle tracking is a method of choice for understanding intracellular mechanisms, as it provides robust and accurate means of characterizing the dynamics of mobile objects at the micro- and nanometre scale. This thesis addresses several aspects of the problem of tracking several hundred particles under noisy conditions. We present new techniques, based on robust mathematical methods, that allow us to track sub-resolution particles in the varied conditions encountered in cellular imaging. Particle detection: we first addressed the problem of detecting particles in fluorescence images containing a structured background. The key idea of the method is the use of a source-separation technique, the Morphological Component Analysis (MCA) algorithm, to separate the background from the particles by exploiting their difference in morphology in the images. We made a number of modifications to MCA to adapt it to the characteristics of fluorescence images in biology. For example, we proposed the use of the Curvelet dictionary and a wavelet dictionary, with different sparsity priors, to separate the particle signal from the background. Once source separation has been performed, the background-free image can be analysed to robustly identify particle positions and to track them over time. Modelling the tracking problem: we proposed a global statistical framework that accounts for the many aspects of the problem of tracking particles under noisy conditions. The probabilistic framework we developed contains numerous models dedicated to biological imaging, such as statistical models of particle motion in the cellular medium. 
We also defined the concept of perceivability of a target in the case of biological particles. With this model, the existence of a particle is explicitly modelled and quantified, which allows us to solve the problems of track creation and termination within our probabilistic tracking framework itself. The proposed framework enjoys great flexibility while remaining easy to adapt, since each model parameter has a simple and intuitive interpretation. Our probabilistic tracking model thus allowed us to model a large number of different biological systems exhaustively. Design of a tracking algorithm: we reformulated the Multiple Hypothesis Tracking (MHT) algorithm to include our probabilistic tracking model dedicated to biological particles, and we proposed a fast implementation that makes it possible to track numerous particles under degraded imaging conditions. The Enhanced MHT (E-MHT) we propose takes full advantage of the tracking model by incorporating knowledge of future images, which significantly increases the discriminating power of the statistical criteria. As a consequence, the E-MHT is able to automatically identify erroneous detections and to detect particle appearance and disappearance events. We addressed the complexity of the tracking task through an algorithm design that exploits the tree topology of the solutions and the possibility of performing the computations in parallel. A series of comparative tests between the E-MHT and existing tracking methods was carried out with synthetic 2D image sequences and with real 2D and 3D data sets. In every case the E-MHT showed superior performance compared with standard methods, with a remarkable ability to withstand severely degraded imaging conditions. 
We applied the proposed tracking methods within several biological projects, which led to original biological results. The flexibility and robustness of our method notably allowed us to track prions infecting cells, to characterize protein transport during the development of the Drosophila oocyte, and to study messenger RNA trafficking in the Drosophila oocyte.
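As a contrast to the MHT approach summarized above, the data-association problem it solves can be stated with a naive baseline: greedily link detections in consecutive frames in order of increasing distance. This one-frame-at-a-time linker (the function name and gating distance are illustrative) is exactly what MHT-type methods improve on by scoring whole track hypotheses over several future frames.

```python
import numpy as np

def link_greedy(frame_a, frame_b, max_dist=5.0):
    """Greedy nearest-neighbour association of particle detections in two
    consecutive frames: accept candidate pairs in order of increasing
    distance, skipping already-linked detections and pairs beyond the
    gating distance max_dist."""
    a, b = np.asarray(frame_a, float), np.asarray(frame_b, float)
    cost = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    links, used_a, used_b = [], set(), set()
    for i, j in sorted(np.ndindex(cost.shape), key=lambda ij: cost[ij]):
        if i not in used_a and j not in used_b and cost[i, j] <= max_dist:
            links.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return links

# two particles, both drifting slightly to the left between frames
links = link_greedy([[0.0, 0.0], [10.0, 0.0]], [[-0.5, 0.0], [9.5, 0.0]])
```

Detections left unlinked on either side correspond to the appearance/disappearance events that the E-MHT handles through its explicit existence model.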
APA, Harvard, Vancouver, ISO, and other styles
37

Guyot, Coline. "Optimisation of electron beam performance for high peak current laser-plasma and multi-pass energy recovery accelerators with 6D tracking start-to-end simulations." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPASP007.

Full text
Abstract:
Dans la quête d'accélérateurs d'électrons plus compacts et moins consommateurs d'énergie, le courant crête tend à être augmenté pour différentes raisons. Dans le contexte de la thèse deux approches alternatives aux accélérateurs plus conventionnels sont explorées: laser-plasma et linacs à récupération d'énergie (ERL). Pour les accélérateurs à laser-plasma, le courant crête est dû à la durée extrêmement courte des paquets, tandis que pour le cas de l'ERL, le courant de crête est dû à la charge par paquet.Les faisceaux laser-plasma sont des faisceaux d'électrons atypiques en raison de leurs grandes dispersion d'énergie et divergences. Celles-ci influencent fortement leur transport, car cette combinaison conduit à de fortes corrélations entre les propriétés longitudinales et transverses. Les défis de design d'une ligne de transport compacte sont discutés, avec notamment la question des contraintes dues à la forte divergence et des conséquences de la focalisation sur la qualité du faisceau, ainsi que la sélection systématique de l'énergie qui peut être mise en œuvre. Les problèmes de variations d'un tir à l'autre des injecteurs laser-plasma sont également abordés grâce au système de sélection en énergie proposé ici. Dans ce contexte, un compromis entre la qualité du faisceau et la charge est également étudié.L'accélérateur multi-passages à récupération d'énergie a la particularité de combiner des difficultés des accélérateurs circulaires et linéaires. Le processus de récupération d'énergie impose également une phase d'accélération et de décélération avec une propagation du faisceau dans une structure multi-passages, où le faisceau doit être recirculé plusieurs fois dans des arcs dédiés lors des deux phases. L'évolution de l'espace des phases longitudinal est un facteur déterminant. 
La thèse met l'accent sur l'impact de la longueur des paquets sur le transport du faisceau et la conservation de sa qualité, avec un compromis entre les effets collectifs single-bunch, en particulier le rayonnement synchrotron cohérent, et les effets chromatiques afin de minimiser les pertes et de maintenir la qualité du faisceau<br>In the quest for more compact and less energy-consuming electron accelerators, the peak current tends to be increased for different reasons. The context of the thesis is to explore two approaches that are alternatives to more conventional accelerators: laser-plasma accelerators and energy-recovery linacs (ERLs). For laser-plasma accelerators, the peak current is due to the extremely short bunch duration, whereas for the ERL case, the peak current is due to the charge per bunch. The goal of the thesis is to optimise the conditions of the electron beam transport while maximising beam quality and minimising losses, including higher-order tracking, both longitudinal and transverse, as well as collective effects. Laser-plasma beams are atypical electron beams because of their large energy spread and divergence. These heavily influence their transport, as this combination leads to strong correlations between the longitudinal and the transverse properties. The challenges of designing a compact transport line are discussed, including the constraints due to the high divergence, the consequences of focusing on the beam quality, and the systematic energy selection that can be implemented. The shot-to-shot variation issues of laser-plasma injectors are also addressed through the energy selection system proposed here. In this context, a trade-off between beam quality and charge is explored as well. A multi-pass energy recovery accelerator has the particularity of combining the difficulties of circular accelerators with those of linear ones without radiation damping. 
The energy recovery process also imposes an accelerating and a decelerating phase of beam propagation within a multi-pass structure, where the beam has to be re-circulated several times in dedicated arcs in both phases. A determinant factor is the evolution of the longitudinal phase space. The thesis focuses on the impact of the bunch length on the beam transport and the conservation of its quality, with a trade-off between single-bunch collective effects, especially coherent synchrotron radiation, and chromatic effects, in order to minimise losses and maintain the beam quality
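At their simplest, multi-pass tracking studies like those described above rest on linear transfer-matrix optics. The sketch below uses a generic thin-lens FODO cell with illustrative drift lengths and focal lengths (not the thesis lattice) to track a transverse coordinate pair (x, x') through repeated cells and to check the |trace| < 2 stability condition.

```python
import numpy as np

def drift(L):
    """2x2 transfer matrix of a field-free drift of length L (metres)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """2x2 transfer matrix of a thin quadrupole of focal length f (metres)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# one FODO cell: half drift, defocusing quad, full drift, focusing quad, half drift
cell = drift(0.5) @ thin_quad(-2.0) @ drift(1.0) @ thin_quad(2.0) @ drift(0.5)

# linear motion through repeated identical cells is stable iff |trace| < 2
assert abs(np.trace(cell)) < 2.0

x = np.array([1e-3, 0.0])      # 1 mm offset, zero angle
orbit = [x.copy()]
for _ in range(100):           # track through 100 cells
    x = cell @ x
    orbit.append(x.copy())
amplitudes = [abs(p[0]) for p in orbit]
```

The oscillation stays bounded (set by the Courant-Snyder invariant of the initial conditions); chromatic and collective effects such as coherent synchrotron radiation then enter as perturbations on top of this linear picture.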
APA, Harvard, Vancouver, ISO, and other styles
38

Kusumoto, Tamon. "Radial electron fluence around ion tracks as a new physical concept for the detection threshold of PADC detector." Thesis, Strasbourg, 2017. http://www.theses.fr/2017STRAE046/document.

Full text
Abstract:
La structure et le processus de formation des traces latentes dans le poly (allyl diglycol carbonate), PADC, ont été étudiés par spectroscopie FT-IR et par simulation Monte Carlo. La quantité de groupes OH formés est équivalente à la quantité de disparition des groupes éther. L’utilisation de radiations à faible TLE a prouvé que les fonctions carbonyle ne disparaissent que lorsque deux électrons au minimum interagissent avec une seule unité de répétition du polymère. Les résultats obtenus avec des protons de haute énergie permettent de comprendre la différence entre des traces révélables et non-révélables. Sur la base de ces résultats, un nouveau concept physique de Fluence Electronique Radiale autour de la Trace d’un Ion, défini comme la densité d'électrons secondaires qui traversent une surface cylindrique de rayon donné, est proposé pour décrire le seuil de détection du PADC en utilisant le code Geant4-DNA. Les connaissances acquises sont utiles pour trouver des agencements moléculaires appropriés pour de nouveaux détecteurs de sensibilités désirées<br>The structure and formation process of latent tracks in poly(allyl diglycol carbonate), PADC, have been examined using a combination of FT-IR spectrometry and Monte Carlo simulation. The amount of OH groups generated is almost equivalent to the amount of ether lost. Experiments using low-LET radiations clarify an important role of secondary electrons: a carbonyl group is broken only when more than two electrons pass through a single repeat unit of the polymer. Results for high-energy protons lead to an elucidation of the difference between etchable and un-etchable tracks. Based on these results, a new physical concept, the Radial Electron Fluence around Ion Tracks, defined as the number density of secondary electrons that pass through a cylindrical surface of a given radius, is proposed for the detection threshold of PADC using Geant4-DNA. 
The knowledge obtained is helpful for finding appropriate molecular arrangements for new etched-track detectors with the desired sensitivities
APA, Harvard, Vancouver, ISO, and other styles
39

Fansi, Tchango Arsène. "Reconnaissance comportementale et suivi multi-cible dans des environnements partiellement observés." Electronic Thesis or Diss., Université de Lorraine, 2015. http://www.theses.fr/2015LORR0156.

Full text
Abstract:
Dans cette thèse, nous nous intéressons au problème du suivi comportemental des piétons au sein d'un environnement critique partiellement observé. Tandis que plusieurs travaux de la littérature s'intéressent uniquement soit à la position d'un piéton dans l'environnement, soit à l'activité à laquelle il s'adonne, nous optons pour une vue générale et nous estimons simultanément ces deux données. Les contributions présentées dans ce document sont organisées en deux parties. La première partie traite principalement du problème de la représentation et de l'exploitation du contexte environnemental dans le but d'améliorer les estimations résultant du processus de suivi. L'état de l'art fait mention de quelques études adressant cette problématique. Dans ces études, des modèles graphiques aux capacités d'expressivité limitées, tels que des réseaux Bayésiens dynamiques, sont utilisés pour modéliser des connaissances contextuelles a priori. Dans cette thèse, nous proposons d'utiliser des modèles contextuels plus riches issus des simulateurs de comportements d'agents autonomes et démontrons l'efficacité de notre approche au travers d'un ensemble d'évaluations expérimentales. La deuxième partie de la thèse adresse le problème général d'influences mutuelles - communément appelées interactions - entre piétons et l'impact de ces interactions sur les comportements respectifs de ces derniers durant le processus de suivi. Sous l'hypothèse que nous disposons d'un simulateur (ou une fonction) modélisant ces interactions, nous développons une approche de suivi comportemental à faible coût computationnel et facilement extensible dans laquelle les interactions entre cibles sont prises en compte. <br>In this thesis, we are interested in the problem of pedestrian behavioral tracking within a critical environment partially under sensory coverage. While most works in the literature focus only on either the location of a pedestrian or the activity a pedestrian is undertaking, we take a general view and consider estimating both simultaneously. The contributions presented in this document are organized in two parts. The first part focuses on the representation and exploitation of the environmental context for the purpose of behavioral estimation. The state of the art includes only a few studies addressing this issue, in which graphical models with limited expressiveness, such as dynamic Bayesian networks, are used for modeling prior environmental knowledge. We propose, instead, to rely on richer contextual models issued from autonomous agent-based behavioral simulators, and we demonstrate the effectiveness of our approach through extensive experimental evaluations. The second part of the thesis addresses the general problem of pedestrians' mutual influences, commonly known as targets' interactions, and their impact on the targets' respective behaviors during the tracking process. Under the assumption that a generic simulator (or a function) modeling the tracked targets' behaviors is available, we develop a scalable approach in which interactions are taken into account at low computational cost. 
L'originalité de l'approche proposée vient de l'introduction des "représentants'', qui sont des informations agrégées issues de la distribution de chaque cible, de manière à maintenir une diversité comportementale, et sur lesquels le système de filtrage s'appuie pour estimer, de manière fine, les comportements des différentes cibles, et ce, même en cas d'occlusions. Nous présentons nos choix de modélisation, les algorithmes résultants, et un ensemble de scénarios difficiles sur lesquels l'approche proposée est évaluée<br>The originality of the proposed approach resides in the introduction of "representatives'', density-based aggregated information computed in such a way as to guarantee behavioral diversity for each target, on which the filtering system relies to compute fine-grained behavioral estimates, even in the case of occlusions. We present the modeling choices, the resulting algorithms, as well as a set of challenging scenarios on which the proposed approach is evaluated
The originality of the proposed approach resides on the introduction of density-based aggregated information, called "representatives’’, computed in such a way to guarantee the behavioral diversity for each target, and on which the filtering system relies for computing, in a finer way, behavioral estimations even in case of occlusions. We present the modeling choices, the resulting algorithms as well as a set of challenging scenarios on which the proposed approach is evaluated
APA, Harvard, Vancouver, ISO, and other styles
40

Bergere, Bastien. "Apprentissage automatique d'un modèle en Tomographie par Emission de Positons à partir de la simulation Monte Carlo. : Application à la reconstruction en 2D et 3D." Electronic Thesis or Diss., université Paris-Saclay, 2025. http://www.theses.fr/2025UPAST052.

Full text
Abstract:
La simulation Monte Carlo, adossée à un code de transport de particules, offre la modélisation la plus précise du processus générant les données en Tomographie par Emission de Positons (TEP). Cependant, la simulation est de nature purement générative et ne constitue pas un modèle statistique explicite. Dans ce travail, qui appartient au domaine de l'inférence par simulation, nous proposons une méthode d'approximation analytique de la vraisemblance implicite du modèle génératif. Étant donné la grande dimension des paramètres (carte d'émission) et des observations (sinogramme), l'estimation directe de la vraisemblance (tout comme celle de la loi a posteriori) est un problème difficile. Nous montrons cependant que la fonction de vraisemblance peut être approchée en estimant la loi conditionnelle d'une unique coïncidence sachant le lieu d'émission du positon. Le problème d'estimation de densité conditionnelle en faible dimension est résolu en utilisant un estimateur faisant appel au deep learning, à savoir un modèle de mélange gaussien conditionnel défini par un réseau de neurones (Mixture Density Network). L'estimateur appris est ensuite utilisé dans une reconstruction itérative basée sur le maximum de vraisemblance ou le maximum a posteriori, aussi bien en TEP 2D qu'en 3D. L'approche adoptée consiste à apprendre un modèle analytique pour les coïncidences non diffusées, ainsi qu'un modèle analytique pour les coïncidences diffusées, dont la distribution est plus complexe. Ce second modèle permet non seulement d'estimer les comptages diffusés dans le sinogramme, mais aussi d'utiliser les diffusés pour inférer la carte d'émission<br>Monte Carlo simulation relying on a particle transport code offers the most accurate modelling of the Positron Emission Tomography (PET) data generation process. However, the simulation is purely generative in nature and does not constitute an explicit statistical model. 
In this thesis, which falls within the scope of simulation-based inference, we provide a method to analytically approximate the implicit likelihood of the generative model. Given the large dimension of the parameters (emission map) and observations (sinogram), direct likelihood estimation (as well as direct posterior estimation) is a difficult problem. However, we show that the likelihood function can be approximated by estimating the conditional density of a single coincidence given the positron emission location. The conditional density estimation problem in low dimensions is solved by using an estimator based on deep learning, namely a conditional Gaussian mixture model defined by a neural network (Mixture Density Network). The learned estimator is then used in an iterative reconstruction based on maximum likelihood or maximum a posteriori estimation, in both 2D and 3D PET. The chosen approach enables us to learn an analytical model for non-scattered coincidences, as well as an analytical model for scattered coincidences, whose distribution is more complex. The latter model makes it possible not only to estimate the scatter component of the sinogram, but also to use the scattered data to help infer the emission map
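The core statistical object of this abstract, a conditional Gaussian mixture whose parameters would be produced by a neural network, can be sketched generically. The function below is illustrative only (with hypothetical, fixed parameter values), not code from the thesis:

```python
import math

def gaussian_mixture_pdf(y, weights, means, sigmas):
    """Evaluate p(y) = sum_k w_k * N(y; mu_k, sigma_k) for a 1-D mixture.

    In a Mixture Density Network, (w_k, mu_k, sigma_k) would be the
    outputs of a neural network conditioned on an input (here, the
    positron emission location); this sketch only evaluates the
    mixture for fixed parameters.
    """
    p = 0.0
    for w, mu, s in zip(weights, means, sigmas):
        p += w * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
    return p

# A single standard-normal component evaluated at its mean:
print(gaussian_mixture_pdf(0.0, [1.0], [0.0], [1.0]))  # ~0.3989
```

A likelihood-based reconstruction would then sum the log of such densities over the observed coincidences, with the parameters coming from the trained network rather than being fixed.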
APA, Harvard, Vancouver, ISO, and other styles
41

Obr, Jakub. "Pokročilá simulace a vizualizace kapaliny." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-237084.

Full text
Abstract:
This thesis concentrates on physically based simulation of fluids followed by their photorealistic visualization. It describes one form of the Smoothed Particle Hydrodynamics (SPH) method for viscoelastic fluid simulation and extends it to multiple interacting fluids. It also deals with the SPH boundary problem and investigates a solution based on fixed boundary particles. For visualization of the fluids, a ray tracing method is described in detail and extended with light absorption in transparent materials. In connection with this method, the problem of infinite total reflections is also discussed and some solution techniques are offered. The surface of the fluid is extracted using the Marching Cubes method, which is discussed in the context of ray tracing.
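As a hedged illustration of the SPH principle this thesis builds on (a generic textbook form with the common poly6 kernel, not the author's implementation), the density at each particle is a kernel-weighted sum over neighbouring particles:

```python
import math

def poly6_kernel(r, h):
    """Poly6 smoothing kernel in 3D: W(r,h) = 315/(64*pi*h^9) * (h^2 - r^2)^3 for r <= h."""
    if r > h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h ** 9) * (h * h - r * r) ** 3

def sph_density(positions, masses, h):
    """Summation density: rho_i = sum_j m_j * W(|x_i - x_j|, h).

    O(n^2) for clarity; a real simulator would use a neighbour grid.
    """
    rho = []
    for xi in positions:
        s = 0.0
        for xj, mj in zip(positions, masses):
            s += mj * poly6_kernel(math.dist(xi, xj), h)
        rho.append(s)
    return rho
```

Pressure and viscoelastic forces would then be derived from such kernel sums in the same fashion.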
APA, Harvard, Vancouver, ISO, and other styles
42

Fansi, Tchango Arsène. "Reconnaissance comportementale et suivi multi-cible dans des environnements partiellement observés." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0156/document.

Full text
Abstract:
Dans cette thèse, nous nous intéressons au problème du suivi comportemental des piétons au sein d'un environnement critique partiellement observé. Tandis que plusieurs travaux de la littérature s'intéressent uniquement soit à la position d'un piéton dans l'environnement, soit à l'activité à laquelle il s'adonne, nous optons pour une vue générale et nous estimons simultanément ces deux données. Les contributions présentées dans ce document sont organisées en deux parties. La première partie traite principalement du problème de la représentation et de l'exploitation du contexte environnemental dans le but d'améliorer les estimations résultant du processus de suivi. L'état de l'art fait mention de quelques études adressant cette problématique. Dans ces études, des modèles graphiques aux capacités d'expressivité limitées, tels que des réseaux Bayésiens dynamiques, sont utilisés pour modéliser des connaissances contextuelles a priori. Dans cette thèse, nous proposons d'utiliser des modèles contextuels plus riches issus des simulateurs de comportements d'agents autonomes et démontrons l'efficacité de notre approche au travers d'un ensemble d'évaluations expérimentales. La deuxième partie de la thèse adresse le problème général d'influences mutuelles - communément appelées interactions - entre piétons et l'impact de ces interactions sur les comportements respectifs de ces derniers durant le processus de suivi. Sous l'hypothèse que nous disposons d'un simulateur (ou une fonction) modélisant ces interactions, nous développons une approche de suivi comportemental à faible coût computationnel et facilement extensible dans laquelle les interactions entre cibles sont prises en compte. <br>In this thesis, we are interested in the problem of pedestrian behavioral tracking within a critical environment partially under sensory coverage. While most works in the literature focus only on either the location of a pedestrian or the activity a pedestrian is undertaking, we take a general view and consider estimating both simultaneously. The contributions presented in this document are organized in two parts. The first part focuses on the representation and exploitation of the environmental context for the purpose of behavioral estimation. The state of the art includes only a few studies addressing this issue, in which graphical models with limited expressiveness, such as dynamic Bayesian networks, are used for modeling prior environmental knowledge. We propose, instead, to rely on richer contextual models issued from autonomous agent-based behavioral simulators, and we demonstrate the effectiveness of our approach through extensive experimental evaluations. The second part of the thesis addresses the general problem of pedestrians' mutual influences, commonly known as targets' interactions, and their impact on the targets' respective behaviors during the tracking process. Under the assumption that a generic simulator (or a function) modeling the tracked targets' behaviors is available, we develop a scalable approach in which interactions are taken into account at low computational cost. 
L'originalité de l'approche proposée vient de l'introduction des "représentants'', qui sont des informations agrégées issues de la distribution de chaque cible, de manière à maintenir une diversité comportementale, et sur lesquels le système de filtrage s'appuie pour estimer, de manière fine, les comportements des différentes cibles, et ce, même en cas d'occlusions. Nous présentons nos choix de modélisation, les algorithmes résultants, et un ensemble de scénarios difficiles sur lesquels l'approche proposée est évaluée<br>The originality of the proposed approach resides in the introduction of "representatives'', density-based aggregated information computed in such a way as to guarantee behavioral diversity for each target, on which the filtering system relies to compute fine-grained behavioral estimates, even in the case of occlusions. We present the modeling choices, the resulting algorithms, as well as a set of challenging scenarios on which the proposed approach is evaluated
The originality of the proposed approach resides on the introduction of density-based aggregated information, called "representatives’’, computed in such a way to guarantee the behavioral diversity for each target, and on which the filtering system relies for computing, in a finer way, behavioral estimations even in case of occlusions. We present the modeling choices, the resulting algorithms as well as a set of challenging scenarios on which the proposed approach is evaluated
APA, Harvard, Vancouver, ISO, and other styles
43

Mirsaleh, Kohan Leila. "Comparison of the Effects of Cobalt-60 [gamma]-Rays and Tritium [beta][superscript -]Particles on Water Radiolysis and Aqueous Solutions and Radiolysis of the Ceric-Cerous Sulfate Dosimeter at Elevated Temperature." Mémoire, Université de Sherbrooke, 2014. http://savoirs.usherbrooke.ca/handle/11143/168.

Full text
Abstract:
Abstract : Monte Carlo simulations have proven to be very powerful techniques to study the radiolysis of water and the mechanisms underlying this radiolysis. Monte Carlo simulations become particularly important when no experimental results are available in the literature due, for instance, to the difficulty of performing such experiments. This thesis presents a study of the radiolysis of water irradiated by different types of radiation and at various temperatures, employing Monte Carlo simulations. The first part of the thesis uses Monte Carlo simulations to elucidate the mechanisms involved in the self-radiolysis of tritiated water and to examine the importance of the effects of higher “linear energy transfer” (LET) by comparing [superscript 3]H [beta][superscript -] radiations (mean initial energy of ~5.7 keV) with [superscript 60]Co [gamma]-rays (~1 MeV electrons). Our simulations showed that, for [superscript 3]H [beta][superscript -], we observe lower radical and higher molecular yields than in γ-radiolysis. These differences in yields are consistent with differences in the nonhomogeneous distribution of primary transient species in the two cases. Overall, our results corroborate previously reported work, and support a picture of [superscript 3]H [beta][superscript -] radiolysis mainly driven by the chemical action of “short tracks” of high local LET. This same trend in yields of radical and molecular products was also found under acidic conditions as well as in the aerated Fricke dosimeter. One of our main findings was that the measured Fricke yield G(Fe[superscript 3+]) could best be reproduced if a single, mean “equivalent” electron energy of ~7.8 keV were used to mimic the energy deposition by the tritium [beta][superscript -] particles (rather than the commonly used mean of ~5.7 keV), in full agreement with a previous recommendation of ICRU Report 17. 
The second part of this thesis investigates the radiolysis of the ceric-cerous sulfate dosimeter at elevated temperatures. In this radiolysis, H[superscript •] (or HO[subscript 2][superscript •] in the presence of oxygen) and H[subscript 2]O[subscript 2] produced by the radiolytic decomposition of water both reduce Ce[superscript 4+] ions to Ce[superscript 3+] ions, while [superscript •]OH radicals oxidize the Ce[superscript 3+] present back to Ce[superscript 4+]. Our simulations showed that the net Ce[superscript 3+] yield decreases almost linearly with increasing temperature up to ~250 °C, in excellent agreement with experiment. Above 250 °C, our model predicts that G(Ce[superscript 3+]) drops markedly with temperature until, instead of Ce[superscript 4+] reduction, Ce[superscript 3+] oxidation is observed. This drop is shown to result from the occurrence of the reaction of H[superscript •] atoms with water in the homogeneous chemical stage.//Résumé : La méthodologie de simulation Monte-Carlo s'est révélée être une très puissante technique dans l'étude des mécanismes de la radiolyse de l'eau. En particulier, la simulation Monte-Carlo devient encore plus importante quand les résultats expérimentaux ne sont pas disponibles, notamment à cause des difficultés techniques. Le mémoire actuel représente une étude sur la radiolyse de l'eau irradiée par différents rayonnements à différentes températures, en utilisant la simulation Monte-Carlo. Dans la première partie de ce mémoire, on examine les mécanismes d'auto-radiolyse de l'eau tritiée ainsi que l'importance de l'effet de « transfert linéaire d'énergie » (TLE) en comparant les électrons [béta][indice supérieur -] de [indice supérieur 3]H avec les rayons [gamma] de [indice supérieur 60]Co. Nos simulations montrent que, pour les rayons [béta][indice supérieur -] de [indice supérieur 3]H, on observe moins de production de radicaux libres et plus de produits moléculaires. 
Ces différences de rendement sont en accord avec les différences de distribution non-homogène des espèces primaires transitoires dans les deux cas. En résumé, nos résultats corroborent bien les travaux publiés précédemment et donnent une perspective de la radiolyse [béta][indice supérieur -] de [indice supérieur 3]H qui est en majorité contrôlée par l'action chimique de « trajectoires courtes » de TLE local élevé. La même tendance pour la production des radicaux libres et des produits moléculaires a été trouvée en milieu acide ainsi que pour le dosimètre aéré de Fricke. Un de nos résultats principaux montre que le rendement G(Fe[indice supérieur 3+]) du dosimètre de Fricke peut être mieux reproduit si une seule énergie électronique moyenne « équivalente » de ~7.8 keV est utilisée pour mimer la déposition d'énergie par les particules [béta][indice supérieur -] du tritium (au lieu de la valeur moyenne de ~5.7 keV qui est utilisée fréquemment). Ceci est en complet accord avec une recommandation du rapport 17 de l'ICRU. La deuxième partie de ce mémoire concerne la radiolyse du dosimètre au sulfate cérique-céreux à températures élevées. Lors de cette radiolyse, H[indice supérieur •] (ou HO[indice inférieur 2][indice supérieur •] en présence d'oxygène) et H[indice inférieur 2]O[indice inférieur 2] produits par la décomposition radiolytique de l'eau réduisent les ions cériques Ce[indice supérieur 4+] en ions céreux Ce[indice supérieur 3+], tandis que les radicaux [indice supérieur •]OH oxydent Ce[indice supérieur 3+] en Ce[indice supérieur 4+]. Nos simulations montrent que le rendement G(Ce[indice supérieur 3+]) décroît quasi linéairement avec la température entre 25 et 250 °C, en excellent accord avec l'expérience. Au-dessus de 250 °C, notre modèle prédit une diminution marquée de G(Ce[indice supérieur 3+]) jusqu'à ce qu'on observe, au lieu d'une réduction de Ce[indice supérieur 4+], une oxydation de Ce[indice supérieur 3+]. 
Nous montrons que cette diminution est due à l’intervention de la réaction des atomes H[indice supérieur •] avec l’eau en milieu homogène.
APA, Harvard, Vancouver, ISO, and other styles
44

Mustaree, Shayla. "The •OH scavenging effect of bromide ions on the yield of H[subscript 2]O[subscript 2] in the radiolysis of water by [superscript 60]Co γ-rays and tritium β-particles at room temperature : a Monte Carlo simulation study". Mémoire, Université de Sherbrooke, 2016. http://hdl.handle.net/11143/8183.

Full text
Abstract:
Abstract: Monte Carlo simulations were used here to compare the radiation chemistry of pure water and aqueous bromide solutions after irradiation with two different types of radiation, namely, tritium β-electrons (~7.8 keV) and [superscript 60]Co γ-rays/fast electrons (~1 MeV) or high-energy protons. Bromide ions (Br-) are known to be selective scavengers of hydroxyl radicals (•OH), precursors of hydrogen peroxide H[subscript 2]O[subscript 2]. These simulations thus allowed us to determine the yields (or G-values) of H[subscript 2]O[subscript 2] in the radiolysis of dilute aqueous bromide solutions by the two types of radiations studied, the first with low linear energy transfer (LET) (~0.3 keV/μm) and the second with high LET (~6 keV/μm), at 25 °C. This study was carried out over a wide range of Br- concentrations, both in the presence and absence of oxygen. Simulations showed that irradiation by tritium β-electrons favored a clear increase in G(H[subscript 2]O[subscript 2]) compared to [superscript 60]Co γ-rays. We found that these changes could be related to differences in the initial spatial distributions of radiolytic species (i.e., the structure of the electron tracks, the low-energy β-electrons of tritium depositing their energy as cylindrical “short tracks” and the energetic Compton electrons produced by γ-radiolysis forming mainly spherical “spurs”). Moreover, simulations also showed that the presence of oxygen, a very good scavenger of hydrated electrons (e-[subscript aq]) and H• atoms on the 10[superscript -7] s time scale (i.e., before the end of spur expansion), protected H[subscript 2]O[subscript 2] from further reactions with these species in the homogeneous stage of radiolysis. This protection against e-[subscript aq] and H• atoms therefore led to an increase in the H[subscript 2]O[subscript 2] yields at long times, as seen experimentally. 
Finally, for both deaerated and aerated solutions, the H[subscript 2]O[subscript 2] yield in tritium β-radiolysis was found to be more easily suppressed than in the case of cobalt-60 γ-radiolysis, an effect interpreted in terms of the quantitatively different chemistry of short tracks and spurs. These differences in the scavengeability of H[subscript 2]O[subscript 2] precursors in passing from low-LET [superscript 60]Co γ-ray to high-LET tritium β-electron irradiation were in good agreement with experimental data, thereby lending strong support to the picture of tritium-β radiolysis in terms of short tracks of high local LET.<br>Résumé: Les simulations Monte Carlo constituent une approche théorique efficace pour étudier la chimie sous rayonnement de l'eau et des solutions aqueuses. Dans ce travail, nous avons utilisé ces simulations pour comparer l'action de deux types de rayonnement, à savoir, le rayonnement γ de [indice supérieur 60]Co (électrons de Compton ~1 MeV) et les électrons β du tritium (~7,8 keV), sur la radiolyse de l'eau et des solutions aqueuses diluées de bromure. Les ions Br- sont connus comme d'excellents capteurs des radicaux hydroxyles •OH, précurseurs du peroxyde d'hydrogène H[indice inférieur 2]O[indice inférieur 2]. Les simulations Monte Carlo nous ont donc permis de déterminer les rendements (ou valeurs G) de H[indice inférieur 2]O[indice inférieur 2] à 25 °C pour les deux types de rayonnements étudiés, le premier à faible transfert d'énergie linéaire (TEL) (~0,3 keV/μm) et le second à haut TEL (~6 keV/μm). L'étude a été menée pour différentes concentrations d'ions Br-, à la fois en présence et en absence d'oxygène. Les simulations ont montré que l'irradiation par les électrons β du tritium favorisait nettement la formation de H[indice inférieur 2]O[indice inférieur 2] comparativement aux rayons γ du cobalt. 
Ces changements ont pu être reliés aux différences qui existent dans les distributions spatiales initiales des espèces radiolytiques (i.e., la structure des trajectoires d'électrons, les électrons β du tritium déposant leur énergie sous forme de «trajectoires courtes» de nature cylindrique, et les électrons Compton produits par la radiolyse γ formant principalement des «grappes» de géométrie plus ou moins sphérique). Les simulations ont montré également que la présence d'oxygène, capteur d’électrons hydratés et d’atomes H• sur l'échelle de temps de ~10[indice supérieur -7] s (i.e., avant la fin des grappes), protégeait H[indice inférieur 2]O[indice inférieur 2] d’éventuelles réactions subséquentes avec ces espèces. Une telle «protection» conduit ainsi à une augmentation de G(H[indice inférieur 2]O[indice inférieur 2]) à temps longs. Enfin, en milieu tant désaéré qu’aéré, les rendements en H[indice inférieur 2]O[indice inférieur 2] obtenus lors de la radiolyse par les électrons β du tritium ont été trouvés plus facilement supprimés que lors de la radiolyse γ. Ces différences dans l’efficacité de capture des précurseurs de H[indice inférieur 2]O[indice inférieur 2] ont été interprétées par les différences quantitatives dans la chimie intervenant dans les trajectoires courtes et les grappes. Un excellent accord a été obtenu avec les données expérimentales existantes.
APA, Harvard, Vancouver, ISO, and other styles
45

Jung, Paul Matthew. "Hybrid macro-particle moment accelerator tracking algorithm." Thesis, 2020. http://hdl.handle.net/1828/12036.

Full text
Abstract:
A particle accelerator simulation that straddles the gap between multi-particle and moment codes is derived. The hybrid approach represents the beam using macro-particles which carry discrete longitudinal coordinates and transverse second moments. The discretization scheme for the macro-particles is derived using variational principles, as a natural extension of well-known variational approaches. This variational discretization allows for exact transverse emittance conservation. The electrostatic self-potential is discrete in the longitudinal direction and solved semi-analytically in the transverse direction using integrated Green's functions. The algorithm is implemented and tested against both a moment code and a multi-particle code.<br>Graduate
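The exactly conserved quantity mentioned in this abstract is the RMS transverse emittance; its standard definition can be computed as follows (a generic sketch of the quantity, not the hybrid algorithm itself):

```python
import math

def rms_emittance(x, xp):
    """RMS transverse emittance from particle positions x and angles x'.

    eps = sqrt(<x^2><x'^2> - <x x'>^2), using centred second moments.
    Exact conservation of this quantity under the variational
    discretization is the property the abstract highlights.
    """
    n = len(x)
    mx = sum(x) / n
    mxp = sum(xp) / n
    sxx = sum((a - mx) ** 2 for a in x) / n
    spp = sum((b - mxp) ** 2 for b in xp) / n
    sxp = sum((a - mx) * (b - mxp) for a, b in zip(x, xp)) / n
    return math.sqrt(sxx * spp - sxp ** 2)
```

In the hybrid scheme each macro-particle would carry such second moments directly instead of resolving individual particles.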
APA, Harvard, Vancouver, ISO, and other styles
46

Chia-Chi Chen and 陳加奇. "Flow Analysis and Particle Tracking Simulation for Electrochemical Machining." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/tdka25.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Sipaj, Andrej. "Simulation, design and construction of a gas electron multiplier for particle tracking." Thesis, 2012. http://hdl.handle.net/10155/293.

Full text
Abstract:
The biological effects of charged particles are of interest in particle therapy, radiation protection and space radiation science, and are known to depend on both absorbed dose and radiation quality, or LET. Microdosimetry is a technique which uses a tissue-equivalent gas to simulate microscopic tissue sites of the order of cellular dimensions, and the principles of gas ionization devices to measure deposited energy. The Gas Electron Multiplier (GEM) has been used since 1997 for tracking particles and for determining particle energy. In general, a GEM detector works in either tracking or energy-deposition mode. The instrument proposed here combines both, for the purpose of determining the energy deposition in simulated microscopic sites over the charged-particle range, in particular at the end of the range, where local energy deposition increases in the so-called Bragg-peak region. The detector is designed to track particles of various energies over 5 cm in one dimension, while providing the particle energy deposition every 0.5 cm along the track. Reconfiguring the detector for different particle energies is very simple and is achieved by adjusting the gas pressure inside the detector and the resistor chain. In this manner, the detector can be used to study various ion beams and their dose distributions in tissue. Initial work is being carried out using an isotopic source of alpha particles, and this thesis describes the construction of the GEM-based detector, computer modelling of the expected gas gain and performance of the device, as well as comparisons with experimentally measured data of segmented energy deposition.<br>UOIT
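For intuition about the "expected gas gain" modelling mentioned above, a zeroth-order estimate of avalanche multiplication uses the first Townsend coefficient. This is a deliberate simplification with a hypothetical coefficient value; the thesis's computer model would account for the actual GEM field geometry, electron transparency and attachment:

```python
import math

def avalanche_gain(alpha_per_um, gap_um):
    """First-Townsend-coefficient estimate of electron avalanche gain.

    G = exp(alpha * d) for a uniform field over a gap of length d.
    alpha depends strongly on gas mixture, pressure and field, which
    is why adjusting the gas pressure re-tunes the detector.
    """
    return math.exp(alpha_per_um * gap_um)

# Hypothetical numbers: alpha = 0.1 /um over a 50 um amplification gap
print(avalanche_gain(0.1, 50.0))  # ~148, i.e. e^5
```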
APA, Harvard, Vancouver, ISO, and other styles
48

Winter, Henry deGraffenried. "Combining hydrodynamic modeling with nonthermal test particle tracking to improve flare simulations." 2009. http://etd.lib.montana.edu/etd/2009/winter/WinterH0509.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

"Micro-particle Streak Velocimetry - Theory, Simulation Methods and Applications." Doctoral diss., 2011. http://hdl.handle.net/2286/R.I.14462.

Full text
Abstract:
abstract: This dissertation describes a novel, low-cost strategy of using particle streak (track) images for accurate micro-channel velocity field mapping. It is shown that 2-dimensional, 2-component fields can be efficiently obtained using the spatial variation of particle track lengths in micro-channels. The velocity field is a critical performance feature of many microfluidic devices. Since un-modeled micro-scale physics often frustrates principled design methodologies, particle-based velocity field estimation is an essential design and validation tool. Current technologies that achieve this goal use particle constellation correlation strategies and rely heavily on costly, high-speed imaging hardware. The proposed image/video processing based method achieves comparable accuracy for a fraction of the cost. In the context of micro-channel velocimetry, the usability of particle streaks has been poorly studied so far. Their use has remained restricted mostly to bulk flow measurements and occasional ad-hoc uses in microfluidics. A second look at particle streak lengths in this work reveals that they can be used efficiently, approximately 15 years after their first use for micro-channel velocimetry. Particle tracks in steady, smooth microfluidic flows are mathematically modeled, and a framework for using experimentally observed particle track lengths for local velocity field estimation is introduced here, followed by algorithm implementation and quantitative verification. Further, experimental considerations and image processing techniques that can facilitate the proposed methods are also discussed in this dissertation. The unavailability of benchmarked particle track image data motivated the implementation of a simulation framework capable of generating exposure-time-controlled particle track image sequences for given velocity vector fields.
This dissertation also describes this framework and shows that arbitrary velocity fields designed in computational fluid dynamics software tools can be used to obtain such images. Apart from aiding gold-standard data generation, such images would find use for quick microfluidic flow field visualization and help improve device designs.<br>Dissertation/Thesis<br>Ph.D. Electrical Engineering 2011
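The core idea of the abstract above, that a streak's length encodes the local speed once the exposure time and optical magnification are known, can be sketched in a few lines. This is a minimal illustration, not the dissertation's actual algorithm; the function and parameter names are hypothetical, and it assumes the particle moves at constant velocity during the exposure.

```python
def velocity_from_streak(streak_length_px, pixel_size_um, magnification, exposure_s):
    """Estimate in-plane particle speed (um/s) from a streak image.

    Assumes constant velocity during the exposure, so the streak length
    in the object plane equals speed * exposure time.  All names here
    are illustrative, not taken from the dissertation.
    """
    # Convert streak length from sensor pixels to object-plane distance.
    length_um = streak_length_px * pixel_size_um / magnification
    return length_um / exposure_s

# Toy example: a 120 px streak, 6.5 um sensor pixels, 20x objective, 10 ms exposure
speed = velocity_from_streak(120, 6.5, 20.0, 0.010)
# 120 px * 6.5 um / 20 = 39 um travelled in 0.010 s, i.e. 3900 um/s
```

Mapping many such streaks across the channel is what yields the 2-dimensional, 2-component field described above; the hard part, which the dissertation addresses, is robustly segmenting and measuring the streaks themselves.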
APA, Harvard, Vancouver, ISO, and other styles
50

Harshavardhana, A. U. "Flame Particle Tracking Analysis of Turbulence-Premixed Flame Interaction." Thesis, 2018. http://etd.iisc.ac.in/handle/2005/4187.

Full text
Abstract:
This work describes the computational and theoretical developments made in the understanding of turbulence-premixed flame interaction, using both lean and rich H2-air mixtures, in a flow field of near-isotropic turbulence. Two classical flame geometries are considered for the present study, viz., 1) a statistically planar flame in an inflow-outflow channel (type-I) and 2) a premixed igniting flame kernel in a box (type-II). These simple geometries, which could be considered as building blocks of turbulent flames in practical combustors, elucidate the intricate physics of turbulence-premixed flame interaction. In the present work, using direct numerical simulations (DNS) and the flame particle tracking (FPT) framework, we investigate two cases of turbulent premixed flames: 1) an intensely burning flame, and 2) extinguishing ignition kernels. The interaction between turbulence, molecular transport, and energy transport coupled with chemistry determines the characteristics of intensely turbulent premixed flames, such as the evolution of flame surface geometry, propagation, annihilation, and local extinction/re-ignition. In the first part of the thesis, we describe the turbulence-premixed flame interactions for intensely burning turbulent premixed flames using the type-I configuration. The objective is to 1) understand the behavior of the flame displacement speed (Sd) in the negatively curved regions and/or flame islands, corresponding to two different isotherms (665 K and 1321 K), which eventually dissolve in the product gases, and 2) decipher the role of transport in this behavior. This is carried out by considering lean H2-air mixtures (φ = 0.81 and 0.7; Le < 1). DNS computations are performed with different initial conditions and turbulence intensity levels, and FPT is used to analyze these Eulerian datasets. An increase in Sd with time for the annihilating regions of isotherms is a common trend observed across the four simulation conditions considered.
Further investigation reveals that the sharp increase in Sd is due to: 1) heat conduction, 2) increased negative curvature of the flame surface, and 3) eventual homogenization of temperature gradients (|∇T| → 0). The curves of normalized flame displacement speed (⟨Sd/SL,T⟩) vs. stretch rate (⟨KaS⟩) in normalized time for the four different cases of turbulence intensity levels collapse onto a narrow band for normalized times below 1, suggesting a unified behavior in the Lagrangian description. Principal curvature evolution statistics show an ellipsoidal geometry for the annihilating flame islands/pockets. The second part of the thesis addresses the extinction dynamics of igniting kernels in a rich H2-air (φ = 4; Le > 1) premixture in near-isotropic turbulence. This is accomplished using the type-II configuration. Here, the analysis procedures, DNS and FPT, remain the same as mentioned earlier. Turbulence is found to extinguish a freshly ignited, initially spherical premixed flame kernel, which otherwise sustains itself in a quiescent flow field by propagating beyond the minimum radius. The mechanism of kernel extinction is investigated by tracking lifetime trajectories of flame particles on an O2 mass-fraction iso-surface in the flame displacement speed-curvature (Sd-κ) space, using the well-known concept of minimum radius from laminar flames. The classical S-curve in the temperature-Damköhler number (T-Da) space was also analyzed. Ensemble-averaged Sd-κ and S-curves display corresponding turning points which help to elucidate the intricate mechanisms involved in turbulent ignition kernel extinction dynamics. Turbulence locally wrinkles the YO2 iso-surface into positively curved structures which lead to turning points in Sd-κ space, such that the minimum radius is never reached either locally or as an ensemble.
A budget analysis of the principal curvature evolution equation highlights the role of turbulence in bending the surface to form positively curved, pointed structures where heat loss is enhanced, further lowering Da towards extinction. The novel Lagrangian viewpoint of flame particle tracking applied to DNS solutions thus emerges as a powerful tool with which turbulence, the flame, and their interaction dynamics can be systematically analyzed. These analyses eventually provide unified viewpoints of local flame propagation and flame extinction in turbulence.
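The Lagrangian flame-particle viewpoint described above reduces, at its simplest, to advecting a surface point with the fluid velocity plus the flame displacement speed along the local surface normal, dx/dt = u(x) + Sd·n(x). The sketch below shows one explicit Euler step of that kinematic rule; the function names and callable interfaces are illustrative assumptions, not the FPT framework's actual API.

```python
import numpy as np

def advance_flame_particle(x, u_field, sd, normal, dt):
    """One explicit Euler step of flame-particle motion.

    A flame particle on an iso-surface moves with the local fluid
    velocity plus the flame displacement speed along the surface
    normal: dx/dt = u(x) + Sd(x) * n(x).  The callables u_field,
    sd, and normal are illustrative stand-ins for fields that a
    DNS solver would supply.
    """
    return x + dt * (u_field(x) + sd(x) * normal(x))

# Toy example: uniform flow u = (1, 0, 0), Sd = 0.5, normal = (0, 0, 1)
x0 = np.zeros(3)
x1 = advance_flame_particle(
    x0,
    u_field=lambda x: np.array([1.0, 0.0, 0.0]),
    sd=lambda x: 0.5,
    normal=lambda x: np.array([0.0, 0.0, 1.0]),
    dt=0.1,
)
# x1 = [0.1, 0.0, 0.05]: 0.1 of advection plus 0.05 of self-propagation
```

In practice, tracking lifetime trajectories as in the thesis additionally requires interpolating u, Sd, and n from the DNS grid onto particle positions and handling particles that leave the iso-surface, which this one-step sketch omits.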
APA, Harvard, Vancouver, ISO, and other styles
