Dissertations / Theses on the topic 'MIL simulation'

Consult the top 50 dissertations / theses for your research on the topic 'MIL simulation'.

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1. Deniz, Ertan. "DDS Based MIL-STD-1553B Data Bus Interface Simulation." Master's thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12614852/index.pdf.

Abstract:
This thesis describes distributed simulation of the MIL-STD-1553B serial data bus interface and protocol based on the Data Distribution Service (DDS) middleware standard. The data bus connects avionics system components and transports information among them in an aircraft. It is important for system designers to be able to evaluate and verify their component interfaces at the design phase. The 1553 serial data bus requires specialized hardware and wiring to operate, so verifying component interfaces on it is expensive and complex. Modeling the bus on commonly available hardware and networking infrastructure is therefore desirable for evaluating and verifying component interfaces. The DDS middleware provides publish-subscribe communications with a number of QoS (Quality of Service) attributes, and it simplifies the implementation of distributed systems by providing an abstraction layer over the networking interfaces of the operating system. This thesis takes advantage of the DDS middleware to implement a 1553 serial data bus simulation tool. In addition, the tool provides XML-based interfaces and scenario definition capabilities, which enable easy and quick testing and validation of component interfaces. Verification of the tool was performed in a case study using a scenario based on the MIL-STD-1760 standard.
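
The publish-subscribe pattern the tool builds on is easy to illustrate. Below is a minimal in-process sketch in plain Python; it stands in for a DDS topic and is not the thesis's tool, and the topic name and message fields are made up for illustration:

```python
# Minimal publish-subscribe sketch of a simulated 1553 message exchange.
# Plain-Python stand-in for a DDS topic; names and fields are illustrative.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Message1553:
    rt_address: int                             # remote terminal address (0-31)
    subaddress: int                             # subaddress (0-31)
    words: list = field(default_factory=list)   # up to 32 16-bit data words

class Topic:
    """In-process stand-in for a DDS topic with publish/subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, name, callback):
        self._subscribers[name].append(callback)

    def publish(self, name, msg):
        for cb in self._subscribers[name]:
            cb(msg)

bus = Topic()
bus.subscribe("BUS1553/RT05", lambda m: print("RT 5 received", m.words))
bus.publish("BUS1553/RT05", Message1553(rt_address=5, subaddress=1, words=[0x1234]))
```

In a real DDS deployment the topic would be distributed across the network and governed by QoS policies such as reliability and deadline, which is what makes the middleware attractive for bus simulation.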

2. Soltani, Saeed. "Dynamic Architectural Simulation Model of YellowCar in MATLAB/Simulink Using AUTOSAR System." Master's thesis, Universitätsbibliothek Chemnitz, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-212596.

Abstract:
The YellowCar at the professorship of computer engineering of TU Chemnitz is a demonstration vehicle. The car is equipped with multiple networked Electronic Control Units (ECUs), and there are regular software and hardware updates. Before any new update is introduced, it is essential to test the behavior of the car, which can be done through simulation. Since the majority of the ECUs in the YellowCar are AUTOSAR-based, several AUTOSAR simulation tools could be used; however, non-AUTOSAR ECU applications cannot be simulated in these tools. Moreover, simulating with such tools requires the whole application to be implemented and is also very expensive. Simulink is one of the most powerful tools for Model-in-the-Loop (MIL) testing, a popular strategy in the embedded world. The scope of this master thesis is analyzing the YellowCar and its architecture in order to develop a dynamic Simulink architectural model that can be modified and extended to facilitate future updates. The outcome of this thesis is an implementation of a model of the YellowCar which allows both AUTOSAR and non-AUTOSAR ECUs to be simulated as one system. The model also supports extension through the easy addition of new modules, such as ECUs or sensors, via a graphical user interface.

3. Lyubchyk, Andriy. "Gas adsorption in the MIL-53(Al) metal organic framework. Experiments and molecular simulation." Doctoral thesis, Faculdade de Ciências e Tecnologia, 2013. http://hdl.handle.net/10362/10932.

Abstract:
Dissertation for the degree of Doctor in Chemical Engineering. FCT PhD Fellowship at Universidade Nova de Lisboa, Department of Chemistry (grant SFRH/BD/45477/2008); FCT Program, project PTDC/AAC-AMB/108849/2008; NANO_GUARD, Project N°269138; Programme "PEOPLE" – Call ID "FP7-PEOPLE-2010-IRSES".

4. Baca, Dawnielle C. "Data Acquisition, Analysis, and Simulation System (DAAS)." International Foundation for Telemetering, 1994. http://hdl.handle.net/10150/608561.

Abstract:
International Telemetering Conference Proceedings / October 17-20, 1994 / Town & Country Hotel and Conference Center, San Diego, California. The Data Acquisition, Analysis, and Simulation System (DAAS) is a computer system designed to allow data sources on spacecraft in the Flight System Testbed (FST) to be monitored, analyzed, and simulated. This system will be used primarily by personnel in the Flight System Testbed, flight project designers, and test engineers to investigate new technology that may prove useful across many flight projects. Furthermore, it will be used to test various spacecraft design possibilities during prototyping. The basic capabilities of the DAAS involve unobtrusively monitoring various information sources on a developing spacecraft. The system also provides the capability to generate simulated data in appropriate formats at a given data rate, and to inject this data onto the communication line or bus using the necessary communication protocol. The DAAS involves Serial RS232/RS422, Ethernet, and MIL-STD-1553 communication protocols, as well as LabVIEW software, VME hardware, and SunOS/UNIX operating systems.

5. Musil, Filip. "Modelování a HIL simulace ovládání pátých dveří osobního automobilu." Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2017. http://www.nusl.cz/ntk/nusl-318154.

Abstract:
This diploma thesis focuses on the analysis, model creation and simulation of a car boot door mechanism. The problem was analyzed on the basis of real measurements made on three different vehicles. Based on the measurements, computational models describing the real system at different levels of complexity were created. Matlab/Simulink was used to create and calculate the models. The output of the thesis is a simulator of a car boot door which also includes a simplified model of the control unit. The simulator provides an approximation of the current and kinematic quantities of these mechanisms. The model is implemented on the dSPACE platform, which allows real-time simulations. The simulator can be modified by changing the parameters of the mechanism and adjusting some of its outputs.

6. Martins Filho, Geraldo Macias. "Estudo do motor de indução linear utilizado como posicionador e simulações computacionais." Universidade de São Paulo, 2003. http://www.teses.usp.br/teses/disponiveis/18/18133/tde-26052017-111516/.

Abstract:
Ever since rotating electrical machines were conceived, the linear induction motor (LIM) has been considered viable and researched, and over time new technologies have been developed to improve it and its drives. Many academic works have been carried out in research centers and universities around the world; in Brazil, the first to study it with scientific rigor were Professors Morency Arouca and Délio Pereira Guerrini, in the early 1970s, in the Electrical Engineering department of EESC/USP. Even today, little is developed in this field in Brazil, and most work is restricted to simulations and the determination of characteristic parameters of specific LIMs. The present work provides the theoretical and mathematical groundwork for studying and understanding the LIM and proposes an application: a LIM driven by a frequency inverter and used for positioning, supported by computer simulation of several control loops for that positioning.

7. Zouhar, Štěpán. "HIL testovací stav pro soustavu univerzálních elektronických řídících jednotek." Master's thesis, Vysoké učení technické v Brně, Fakulta strojního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-399401.

Abstract:
This diploma thesis is focused on the testing of electronic control units (ECUs): especially functional testing, in which hardware and software are verified, as well as the Model-in-the-Loop, Software-in-the-Loop, Processor-in-the-Loop and Hardware-in-the-Loop testing methods. In the practical part of this thesis, a test stand for functional testing of an ECU was developed and manufactured. It is connected to a PC via an input/output card, and testing is controlled by a MATLAB script. The whole testing process is automated, from the initial upload of test firmware to the tested ECU, through all phases of the test, up to bootloader flashing. A Hardware-in-the-Loop test was also created, in which the ECU acts as a controller and a DC motor is simulated in real time on the PC in the MATLAB environment.
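
The plant in such a HIL loop is typically modeled with the standard two-state DC motor equations, L di/dt = V − Ri − Ke·ω and J dω/dt = Kt·i − b·ω. A minimal fixed-step sketch of that idea (Python rather than MATLAB; all parameter values and the toy proportional controller are illustrative, not taken from the thesis):

```python
# Standard DC motor model:  L di/dt = V - R i - Ke w,  J dw/dt = Kt i - b w
R, L = 1.0, 0.01          # armature resistance [ohm] and inductance [H] (illustrative)
Kt, Ke = 0.05, 0.05       # torque and back-EMF constants (illustrative)
J, b = 1e-4, 1e-5         # rotor inertia and viscous friction (illustrative)

i = w = 0.0               # armature current [A], angular speed [rad/s]
dt = 1e-3                 # 1 ms fixed step, mimicking a real-time HIL tick

def controller(w_ref, w):
    """Toy proportional controller standing in for the ECU under test."""
    return 0.5 * (w_ref - w)

for _ in range(1000):                     # simulate 1 s
    V = controller(100.0, w)              # ECU output: armature voltage
    i += (V - R * i - Ke * w) / L * dt    # electrical dynamics (forward Euler)
    w += (Kt * i - b * w) / J * dt        # mechanical dynamics
    # a real HIL stand would block here until the next 1 ms tick

print(f"speed after 1 s: {w:.1f} rad/s")  # settles near 90.9 rad/s (P-control offset)
```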

8. Ševčík, Martin. "Modelování a implementace řídicího algoritmu 3D tiskárny." Master's thesis, Vysoké učení technické v Brně, Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-403747.

Abstract:
This thesis covers the modeling and implementation of a CNC control algorithm. It describes the issues involved in computer-controlled machines, surveys interpolation algorithms for controlling CNC machines, and models the selected control algorithm with S-curve-shaped motor speed profiles, simulating the chosen algorithms in Matlab & Simulink.
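
An S-curve profile limits jerk by ramping the acceleration linearly instead of stepping it. A minimal sketch of the idea under simplifying assumptions (symmetric ramps, v_max actually reachable; the numeric limits are made up):

```python
import numpy as np

# Jerk-limited ("S-curve") speed profile: acceleration is ramped with
# constant jerk, which removes the jerk discontinuities of a trapezoid.
j_max, a_max, v_max = 50.0, 10.0, 5.0   # illustrative limits
dt = 1e-3

t_j = a_max / j_max          # time to ramp acceleration up or down
t_a = v_max / a_max - t_j    # constant-acceleration time (assumes v_max is reachable)

# Jerk sequence for the acceleration phase: +j, 0, -j
jerk = np.concatenate([
    np.full(round(t_j / dt), +j_max),
    np.zeros(round(t_a / dt)),
    np.full(round(t_j / dt), -j_max),
])
acc = np.cumsum(jerk) * dt
vel = np.cumsum(acc) * dt
print(f"peak acceleration {acc.max():.2f}, final speed {vel[-1]:.3f}")
```

Integrating the same jerk pattern once more gives the position command the interpolator feeds to the axes.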

9. Tasche, Jos. "Theory and simulation of solubility and partitioning in polymers: application of SAFT-γ Mie and molecular dynamics simulations." Thesis, Durham University, 2018. http://etheses.dur.ac.uk/12672/.

Abstract:
Theory, simulation and experiment were used in this work to study solubility and partitioning in polymer systems. The recently published SAFT-γ Mie equation of state was implemented into a stand-alone program together with all algorithms for parametrising new models and predicting phase equilibria. An analysis of the transferability of low-molecular-weight Mie potential parameters for predicting the miscibility of polymer mixtures and the partitioning of oligomers in polymer systems revealed the need for new models optimised for polymers. A systematic overview and analysis of available and typical experimental polymer data concluded that pure-component polymer melt densities and cloud point temperatures (liquid–liquid equilibria) are the best and most practical choice for parametrising new SAFT-γ Mie models. New polymer models were developed for a range of pure polymers, several binary mixtures and one ternary polymer mixture. All models showed very good agreement with the experimental data included in the model development. Good agreement was found for predicted properties and conditions not included in the parametrisation process. Coarse-grained (CG) force fields were developed with the help of the SAFT-γ Mie equation of state. Excellent agreement was found for the direct translation of Mie potentials to CG force fields for modelling properties of low-molecular-weight compounds and densities of polymer melts. Coarse-grained models for molecular dynamics (MD) simulations of polymer phase equilibria are more challenging to develop due to greater computational resource requirements and less perfect agreement between SAFT-γ Mie and MD force fields. The challenges were demonstrated and discussed for a polystyrene solution and a binary mixture of polystyrene and polyisoprene. The synergistic power of SAFT-γ Mie and MD simulations was used to develop coarse-grained models describing the surface of an oligomer/polymer blend. Pure-component parameters were optimised within SAFT-γ Mie. The SAFT-γ Mie CG model reproduced experimental partial density surface profiles as a function of blend composition without the need to rescale length scales. Oligomer surface enrichment, wetting transition and wetting layers were correctly predicted with a single model.
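
The Mie potential underlying the SAFT-γ Mie equation of state is a generalised Lennard-Jones interaction, U(r) = Cε[(σ/r)^λr − (σ/r)^λa], with the prefactor C chosen so that the well depth is exactly ε. A small sketch (parameter values are illustrative):

```python
# Mie (generalised Lennard-Jones) pair potential used in SAFT-gamma Mie.
# For repulsive/attractive exponents (lr, la) = (12, 6) it reduces to LJ.
def mie(r, eps, sigma, lr=12.0, la=6.0):
    c = (lr / (lr - la)) * (lr / la) ** (la / (lr - la))  # keeps well depth = eps
    return c * eps * ((sigma / r) ** lr - (sigma / r) ** la)

# The minimum sits at r_min = (lr/la)**(1/(lr - la)) * sigma with depth -eps;
# for the LJ case that is the familiar 2**(1/6) * sigma.
print(mie(2 ** (1 / 6), eps=1.0, sigma=1.0))   # ~ -1.0
```

Varying λr and λa independently of ε and σ is what gives the Mie form its extra flexibility over plain Lennard-Jones when fitting polymer data.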

10. Chan, Hobart. "Vehicular racing simulation: a MEL scripting approach." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/1124.

Abstract:
The purpose of this thesis is to develop an automated technique for controlling the animation of computer-generated cars for the application of motorsports, also known as car racing. The basic idea is similar to previous work simulating flocks of birds and schools of fish. This simulation system provides a behavior model for each car and driver in a group of cars that enables them to race on a track while avoiding collisions. The technique is implemented using a commercial software package, called MAYA, utilizing its scripting language and built-in dynamics engine. While not a complete real-world dynamic simulation, the cars exhibit realism in both racing behavior and visual motion attributes. The system allows the animator to control the number of vehicles, their properties, and their general path using an interactive interface. The automated technique replaces manual animation of each individual car and expedites production for animation or live-action effects films that include computer-generated racing cars.
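
Flocking-style behavior models of this kind combine a few steering rules per agent. A minimal sketch of one such rule, separation (plain Python rather than MEL; the inverse-square weighting is made up for illustration and is not the thesis's model):

```python
import math

# Toy separation rule of the flocking kind the abstract refers to:
# each car steers away from neighbours that are closer than some radius.
def separation(car, others, radius=10.0):
    fx = fy = 0.0
    for ox, oy in others:
        dx, dy = car[0] - ox, car[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < radius:
            fx += dx / d ** 2     # stronger push the closer the neighbour
            fy += dy / d ** 2
    return fx, fy

print(separation((0.0, 0.0), [(3.0, 0.0), (0.0, -4.0)]))
```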

11. Choi, Nag Jung. "Multimedia electronic mail: standards and performance simulation." Thesis, Monterey, California, Naval Postgraduate School, 1992. http://hdl.handle.net/10945/23975.

12. Mercado-Solis, Rafael David. "Simulation of thermal fatigue in hot mill work rolls." Thesis, University of Sheffield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269378.

13. Darth, Pontus. "Simulation of Rolling Mill to Compute and Improve Load Distribution." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85595.

Abstract:
This master thesis was done at Swerim AB in cooperation with SSAB and Luleå University of Technology with the purpose of preventing spalling problems in hot rolling mills. Spalling is fatigue damage that occurs on the rolls due to extreme loads and unfavorable contact conditions between the rolls in a mill. This report describes how the roughing mill, the first in a series of hot rolling mills, was modelled and simulated in order to compute and improve the load distribution between the rolls. The load distribution reveals a lot about where the spalling problems occur. Using computer-aided design and finite element simulations, a parametric computational model was created and used to simulate the load distribution between the work roll and the backup roll with worn and fresh rolls. These simulations showed what the load distribution looks like when using new rolls, and that the load distribution is especially bad when the work roll is worn. The computational model was used to simulate how the load distribution changes with different geometries of the backup roll, to provide valuable input and suggest new designs for the backup roll currently used by SSAB Borlänge.

14. Lindhorst, Lutz. "Numerische Simulation des Plasma-MIG-Unterwasserschweißens: Eigenspannungen, Gefüge und Bruchmechanik." Düsseldorf: VDI-Verlag, 1999. http://www.gbv.de/dms/bs/toc/266389937.pdf.

15. Mazumder, AKM Monayem Hossain. "Development of a Simulation Model for Fluidized Bed Mild Gasifier." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/101.

Abstract:
A mild gasification method has been developed to provide an innovative clean coal technology. The objective of this study is to develop a numerical model to investigate the thermal-flow and gasification process inside a specially designed fluidized-bed mild gasifier using the commercial CFD solver ANSYS/FLUENT. The Eulerian-Eulerian method is employed to calculate both the primary phase (air) and the secondary phase (coal particles). The Navier-Stokes equations and seven species transport equations are solved with three heterogeneous (gas-solid) and two homogeneous (gas-gas) global gasification reactions. Development of the model starts from simulating single-phase turbulent flow and heat transfer to understand the thermal-flow behavior, followed by the five global gasification reactions, added progressively one equation at a time. Finally, the particles are introduced with the heterogeneous reactions. The simulation model has been successfully developed. The results are reasonable but require future experimental data for verification.

16. Yesilay, Yasemin Ayse. "A Computer Simulator for Ball Mill Grinding." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605350/index.pdf.

Abstract:
Ball mill grinding is an important operation in the processing of most minerals, in that it may be used to produce particles of the required size and shape, to liberate minerals from each other for concentration purposes, and to increase the powder surface area. Grinding of minerals is probably the most energy-consuming task, and optimization of this operation has vital importance in processing plant operations to achieve the lowest operating costs. Predicting the complete product size distribution, mill specifications and power draw are important parts of this optimization. In this study, a computer simulation program is developed in the MATLAB environment to simulate grinding operations using the kinetic model, in which comminution is considered as a process continuous in time. This type of model is commonly and successfully used for tumbling grinding mills having strongly varying residence time as a function of feed rate. The program developed, GRINDSIM, is capable of simulating a ball mill for a specified set of model parameters, estimating grinding kinetic parameters from experimental batch grinding data, and calculating continuous open- and closed-circuit grinding behavior with mill power input. The user interacts with the program through graphical user interfaces (GUIs).
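
The kinetic (population balance) model referred to here is conventionally written as dm_i/dt = −S_i·m_i + Σ_{j&lt;i} b_ij·S_j·m_j, where m_i is the mass fraction in size class i, S_i a first-order breakage rate, and b_ij the breakage distribution. A minimal sketch with made-up parameters (Python rather than MATLAB, and not GRINDSIM's code):

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order breakage (population balance) model of batch grinding:
#   dm_i/dt = -S_i m_i + sum_{j<i} b_ij S_j m_j
# S_i: breakage rate of size class i; b_ij: fraction of broken j landing in i.
S = np.array([0.6, 0.4, 0.2, 0.0])          # illustrative rates, 1/min; finest class unbroken
b = np.zeros((4, 4))
b[1, 0], b[2, 0], b[3, 0] = 0.5, 0.3, 0.2   # illustrative breakage distribution
b[2, 1], b[3, 1] = 0.6, 0.4                 # (each column sums to 1, conserving mass)
b[3, 2] = 1.0

def rhs(t, m):
    return -S * m + b @ (S * m)

sol = solve_ivp(rhs, (0, 10), [1.0, 0.0, 0.0, 0.0])   # all feed in the top size class
print(sol.y[:, -1])                                    # product size distribution after 10 min
```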

17. Hersén, Nicklas. "Measuring Coverage of Attack Simulations on MAL Attack Graphs." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292640.

Abstract:
With the transition from traditional media and the increasing number of digital devices, the threats against digital infrastructure are greater than ever before. New and stricter security requirements are placed on digital platforms in order to protect sensitive information against external cyber threats. Threat modeling is a process which involves identifying the threats and weaknesses of a system with the purpose of eliminating vulnerabilities before they are exploited. The Meta Attack Language is a probabilistic threat modeling language which allows security researchers to instantiate specific attack scenarios through attack simulations. Currently there is no support for gathering coverage data from these simulations other than manually checking the compromised state of all objects present in a simulation. The purpose of this work is to develop a coverage extension in order to simplify the threat modeling process. The coverage extension is able to produce coverage estimates from attack simulations executed on specific Meta Attack Language threat models. The metrics are adaptations of existing code and model coverage metrics commonly used for software and model testing. There are limitations in what type of data can be effectively presented (such as exponentially growing data sets) due to the simplicity of the models.
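
A model coverage metric of the kind described can be illustrated as the fraction of attack steps in the model that any simulation managed to compromise. The sketch below is hypothetical; the names and data structure are not taken from the thesis or from MAL tooling:

```python
# Illustrative model-coverage metric: the fraction of attack steps in the
# threat model that at least one simulation compromised.
def attack_step_coverage(model_steps, simulations):
    compromised = set()
    for sim in simulations:                 # each simulation: set of compromised steps
        compromised |= sim
    return len(compromised & set(model_steps)) / len(model_steps)

steps = ["host.connect", "host.access", "data.read", "data.exfiltrate"]
sims = [{"host.connect", "host.access"}, {"host.connect", "data.read"}]
print(attack_step_coverage(steps, sims))    # -> 0.75
```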

18. Wang, Lingchang XI. "Development of a Hardware-In-the-Loop Simulator for Battery Management Systems." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1397656909.

19. Farley, Mark Harrison. "Predicting machining accuracy and duration of an NC mill by computer simulation." Thesis, Georgia Institute of Technology, 1990. http://hdl.handle.net/1853/16499.

20. Opacic, Luke. "Developing simulation models to improve the production process of a parallam mill." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/58340.

Abstract:
Engineered wood products are manufactured by adhering small pieces of wood together with a bonding agent. They have many benefits: they allow logs to be used more completely and more efficiently, they can increase the structural efficiency of wood-frame construction, and natural wood defects can be dispersed in the product, which increases the uniformity of the mechanical and physical properties. Parallam® is one of these engineered wood products. It is manufactured in only two facilities in the world: Delta, British Columbia, Canada, and Buckhannon, West Virginia, United States. Parallam is manufactured from a grade of veneer that is not suitable for other products, using Douglas Fir at the Canadian plant and various species of pine at the American plant. The veneer is cut into strands, which are then adhered into long billets and cut into the desired sizes. The Canadian plant was experiencing limitations in its total throughput and was interested in exploring solutions to improve it. Since production operations are complex and subject to a variety of uncertainties, discrete-event simulation modelling was used to analyze the processes and evaluate potential improvement scenarios. Two projects were conducted in this research, in which simulation models were developed to analyze different scenarios for possible alternative plant configurations or policies. The first project analyzed the replacement of a machine, a change in the policy of order customization, and the flow of quality assurance pieces. The main finding was that the machine replacement had no positive impact on the throughput and should not be done. In addition, it was determined that a decrease in the amount of customization could increase the throughput by 20%. The second project analyzed the worker-machine interactions within the entire mill and the automation of an outfeed conveyor. The main finding was that the addition of one worker to the packaging station and the automation of the conveyor could result in a 22% increase in throughput. Further research should be conducted to assess the impact of quality assurance pieces through the mill, or to assess the impact of different workers' schedules instead of just their assignments.

21. Ang, Keng Lin (Jason). "Investigation of rheological properties of concentrated milk and the effect of these properties on flow within falling film evaporators." Thesis, University of Canterbury, Chemical and Process Engineering, 2011. http://hdl.handle.net/10092/6720.

Abstract:
The falling film flow of milk was studied both analytically and experimentally. Experiments were carried out for concentrations from 19.93% to 62.09% to obtain rheological data for milk, while analytical studies were done to derive solutions to the flow problem. Calculations and simulations were carried out for a typical milk flow in a falling film evaporator. It was found that milk is non-Newtonian at high concentrations and that the Herschel-Bulkley model was able to describe the milk flow. The typical falling film flow was simulated as a two-phase flow in COMSOL to gain a better understanding of the flow, and counter-current flow was found between the film and the air in the evaporator. A Matlab program was also used to study analytical solutions for the change in film temperature as the film flows down the tube, with results showing that the heat transfer was not linear, as might have been believed. Results from several experiments also enabled the change of milk viscosity with time to be modeled: milk viscosity increased steadily over three hours, and was higher at higher total solids, from 35.47% to 49.25%. Calculations revealed that the milk film was very thin, from 0.00116 m at the entrance of the tube to 0.00146 m at the tube exit. Models developed for the rheological parameters showed that all of these parameters affect the film flow except the yield stress. However, the viscosity and yield stress are factors that will limit the operating range available for a falling film evaporator.
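
The Herschel-Bulkley model combines a yield stress with power-law behavior: τ = τ_y + K·γ̇ⁿ for τ above the yield stress τ_y. A small sketch (the parameter values are illustrative, not the thesis's fitted values for milk):

```python
# Herschel-Bulkley model of the kind fitted to concentrated milk:
#   tau = tau_y + K * gamma_dot**n   (for tau above the yield stress tau_y)
def herschel_bulkley_stress(gamma_dot, tau_y=1.0, K=0.5, n=0.8):
    return tau_y + K * gamma_dot ** n

def apparent_viscosity(gamma_dot, **params):
    return herschel_bulkley_stress(gamma_dot, **params) / gamma_dot

for rate in (1.0, 10.0, 100.0):             # shear rates in 1/s
    print(rate, apparent_viscosity(rate))   # n < 1: viscosity falls with shear rate
```

With n &lt; 1 the fluid is shear-thinning, which is consistent with the abstract's finding that concentrated milk departs from Newtonian behavior.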

22. Kemppainen, Hanne. "Mapping of causations for endospore formation and process optimization at pulp and paper mill." Thesis, KTH, Skolan för bioteknologi (BIO), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-150698.

Abstract:
BillerudKorsnäs is a manufacturer of fiber-based cartonboard and liquid packaging board. Microbial growth occurs at several steps in cartonboard production due to the favourable environment and good access to nutrients from the raw material and additives such as starch. Vegetative bacteria are usually not harmful in production and die in the hot drying end of the cartonboard machine. The most abundant microflora at paper and cartonboard factories consists largely of spore-forming microorganisms from the genera Bacillus and Paenibacillus. The endospores are highly resistant and can remain in the final end product, which is undesirable. Levels of endospores from these species at the BillerudKorsnäs production unit KM5 are usually low, but an occasional increase can be seen when a new cartonboard product, KW1, is produced. Today, the microbiology is controlled by adding biocides to the broke towers, which has proven both expensive and ineffective at KM5. A new method is needed for controlling the microbiology at KM5 that is more effective, cost-beneficial and environmentally friendly. The aim of this project was to test a hypothesis for spore formation at a paperboard factory in lab-scale experiments, and to suggest a technical change in the process that could minimize spore formation and the use of biocides at KM5. A model organism, Bacillus licheniformis (E-022052), was used to study the effects of environmental conditions on spore formation. Experiments were also performed in controlled bioreactor trials, where methods to minimize spore formation were tested. The experiments showed that nutrient deficiency of a primary carbon source was the major reason for spore formation and should be avoided at KM5. Further, the experiments showed that oxygen limitation significantly decreases endospore formation. The conclusion reached was that spore formation could be minimized by a feed addition of glucose to broke tower 1 during the few days' production of KW1. A second alternative is using a feed of concentrated pulp, which could minimize spore formation without the use of biocides and without the need to rebuild the mill.

23. Mathonnière, Sylvain. "Design and fabrication of long wavelength mid-infrared Quantum Cascade Laser." Doctoral thesis, Humboldt-Universität zu Berlin, 2020. http://dx.doi.org/10.18452/21072.

Abstract:
The quantum cascade laser has proven to be the best technology for the mid-infrared because its emission wavelength can be engineered simply by changing its geometry. The goal of this PhD thesis is to analyse and design new active regions for the mid-infrared region above 10 µm. To this effect, the thesis focuses first, in chapter 2, on understanding the key processes occurring in the active region of long-wavelength quantum cascade lasers; this chapter explains in detail the typical simulations of an active region and the simulations of intensity-voltage curves and gain curves. Chapter 3 focuses on the design of quantum cascade lasers: the main points are the understanding of the gain process in an active region and the different types of active regions used to achieve gain. Finally, a semi-automatic program for designing active regions is described and its usefulness demonstrated. Chapter 4 treats the technological processing of quantum cascade lasers and gathers the laboratory experience acquired over the course of this PhD. Chapter 5 is the key chapter of this thesis: several lasers are grown following designs obtained with the program described in chapter 3; those designs are then carefully analysed and compared to better understand the mechanisms at play, and finally compared with state-of-the-art designs. The last two chapters focus on improving the quantum cascade laser once grown. Chapter 6 demonstrates the resilience of quantum cascade lasers to annealing, and even shows a performance increase at certain annealing temperatures. Finally, chapter 7 illustrates the concept of wavelength tuning of quantum cascade lasers by adding an external cavity with a wavelength-selective element; this chapter focuses on the development of a compact external-cavity quantum cascade laser and its application to spectroscopy.

24. Andersson, Per. "A dynamic Na/S balance of a kraft pulp mill: Modeling and simulation of a kraft pulp mill using WinGEMS." Thesis, Karlstads universitet, Institutionen för ingenjörs- och kemivetenskaper, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-31694.

25. Aykent, Baris. "Etude des lois de commande de la plateforme de simulation de conduite et influence sur le mal de simulateur." Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0053/document.

Abstract:
Driving simulation is used heavily in research and development for the automotive industry. Driving simulators are used to evaluate vehicle prototypes for vehicle dynamics and driver assistance systems. However, the use of driving simulators raises a scientific problem that can limit their development: by its very principle, a driving simulator does not reproduce vehicle motion at scale 1, and this limitation causes simulator sickness phenomena that are important to study. This thesis proposes methods and tools to implement in static or dynamic driving simulators. With this implementation, studies of simulator sickness are conducted with objective measures (via a motion tracking sensor, a platform for body stability, electromyography) and subjective ones (through questionnaires). Algorithmic and hardware solutions are proposed and evaluated in the driving simulation context. The approaches proposed in this thesis to reduce simulator sickness are: building and evaluating control algorithms for the hexapod motion platform (seven different algorithms are implemented); measuring the effects of simulator sickness on subjects at the vestibular, neuromuscular and postural levels; and assessing the influence of the subjects' involvement (drivers and passengers).

26. Gill, Satinder Singh. "Simulation based comparison of push versus pull production in a hardwood dimension mill." University Park, Pa.: Pennsylvania State University, 2009. http://etda.libraries.psu.edu/theses/approved/PSUonlyIndex/ETD-3530/index.html.

27. Ribeiro, Alvaro John. "SuperDARN Data Simulation, Processing, Access, and Use in Analysis of Mid-latitude Convection." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/24469.

Abstract:
Super Dual Auroral Radar Network (SuperDARN) data is a powerful tool for space science research. Traditionally this data has been processed using a routine with known limitations. A large issue preventing the development and implementation of new processing algorithms was the lack of a realistic test dataset. We have implemented a robust data simulator, based on physical principles, which is presented in Chapter 2. The simulator is able to generate SuperDARN data with realistic statistical fluctuations and known input Doppler velocity and spectral width. Using the simulator to generate a test data set, we were able to test new algorithms for processing SuperDARN data. The algorithms tested included the traditional method (FITACF), a new approach using the bisection method (FITEX2), and the Levenberg-Marquardt algorithm for nonlinear curve fitting (LMFIT). FITACF is found to have problems when processing data with high (> 1 km/s) Doppler velocity, and is outperformed by both FITEX2 and LMFIT. LMFIT is found to produce slightly better fitting results than FITEX2, and is thus my recommendation for the standard SuperDARN data fitting algorithm. The construction of the new midlatitude SuperDARN chain has revealed that nighttime, quiet-time plasma irregularities with low Doppler velocity and spectral width are a very common (> 50% of nights) occurrence. Following on previous work, we have conducted a study of nighttime midlatitude convection using SuperDARN data. First, the data are processed into convection patterns, and the results are presented. The drifts are mainly zonal and westward throughout the night, and they display significant seasonal variability. Additionally, a large latitudinal gradient is observed in the zonal velocity during the winter months; this is attributed to processes in the conjugate hemisphere, and possible causes are discussed. During my graduate studies, I have been part of the development of a software package for enabling and accelerating space science research known as DaViTpy. This software package is completely free and open source. It allows access to several different space science datasets through a single simple interface, without having to write any code for reading data files, and it incorporates several space science models in a single install. The software package represents a paradigm shift in the space science community, and is presented in Appendix A.
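
Levenberg-Marquardt fitting of the kind LMFIT performs can be sketched with SciPy, whose curve_fit uses the LM algorithm when no bounds are given. The exponentially decaying ACF-magnitude model and the numbers below are illustrative, not the actual FITACF/LMFIT internals:

```python
import numpy as np
from scipy.optimize import curve_fit

# Levenberg-Marquardt fit of an exponentially decaying ACF magnitude,
# a simplified stand-in for the model fitting done on SuperDARN data.
def acf_model(lag, power, decay):
    return power * np.exp(-decay * lag)

lags = np.arange(10) * 2.4e-3                        # lag times [s], illustrative
true = acf_model(lags, power=100.0, decay=80.0)
noisy = true + np.random.default_rng(0).normal(0, 2.0, lags.size)

popt, pcov = curve_fit(acf_model, lags, noisy, p0=[50.0, 50.0], method="lm")
print("fitted power %.1f, decay %.1f" % tuple(popt))  # spectral width follows from decay
```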

28. Cattabiani, Alessandro. "Simulation of low- and mid-frequency response of shocks with a frequency approach." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN012/document.

Abstract:
Recently, the aerospace and automotive industries have become increasingly interested in virtual testing, since it speeds up the design process and reduces costs. This is particularly true for space industries, where specimens are very costly because rockets are unique or produced in limited numbers. Ariane 5 (and 6 in the future) is a heavy-lift launch rocket manufactured by CNES and Airbus DS. During launch the protective fairing is severed from the rocket by pyrotechnic charges once sufficient altitude is reached (typically above 100 km). Shock vibrations propagate through the rocket shell structure to the payload, which can be damaged. The HSS3+ full-scale ground test was developed by CNES and Airbus DS to investigate this eventuality. This thesis develops software capable of simulating the HSS3+ test, in order to characterize the explosion loads and to reduce the number of real tests required in the future. The task is difficult since the frequency band of interest is wide (up to the mid-frequency range), the explosion loads are unknown, the geometry is complex, and the specimen is composed of sandwich composite shells. The software, called Transient Analysis for PYROtechnic Shocks in Shells (TAPYROSS), is based on the Variational Theory of Complex Rays (VTCR), a Trefftz method specifically developed to analyze the mid-frequency band. Many theory and performance improvements are introduced to address this real industrial test case. Finally, comparisons between real data and simulations validate TAPYROSS and characterize the explosion loads.

29. Agne, Aboubakry. "Modélisation et simulation numérique des étapes de déliantage et frittage du procédé de Moulage par Injection de poudres Métalliques (MIM)." Thesis, Bourgogne Franche-Comté, 2019. http://www.theses.fr/2019UBFCD025.

Abstract:
The debinding and sintering steps are crucial phases of the Metal Injection Moulding (MIM) process; they generate most of the dimensional changes. In order to predict the mass losses and deformation behaviour, different models fitted to the observed mechanisms were drawn from the state of the art, adapted and applied to each step. The models used for the numerical simulation are applied to industrial components based on a formulation composed of Inconel 718 nickel-based superalloy powders and a multi-ingredient binder system. The formulation is characterized by thermal and gravimetric analyses. The aim of the thesis is, first, to predict the weight loss during the complete debinding step, including solvent and thermal debinding, followed by modelling of the solid-state sintering of the material. The weight loss during solvent debinding is expressed by an analytic function controlled by a diffusion parameter identified directly from experimental results; hygroscopic swelling and thermal expansion are coupled to the weight loss to follow the deformation during binder extraction. Supercritical CO2 debinding, an innovative way to remove some polymers by diffusion, is also investigated in order to predict the extraction of polyethylene glycol (PEG) from in-house components by finite-element simulation in Comsol Multiphysics®. Thermogravimetric analyses were employed to characterize the kinetics during thermal debinding of the industrial formulation; the Ozawa and Kissinger methods are used to estimate the activation energies of each polymer of the binder system for the numerical simulation. A coupled model combining heat transfer and a thermal degradation law is developed to visualize the binder distribution and the shrinkage due to its elimination. Experimental and numerical studies are carried out on solid-state sintering of the Inconel 718. A thermo-elasto-viscoplastic law based on continuum mechanics is adopted and implemented in the commercial software ABAQUS® to simulate the shrinkage and the density field during sintering. The uniaxial viscosity, a key parameter of the constitutive equations, is evaluated using intermittent compression tests; the sintering stress is then identified from the experimental densification. The relevance of this methodology is discussed by comparison with a method based on the densification rate that showed lower measurement uncertainty. The numerical analyses of the debinding and sintering steps are compared with experimental measurements performed on MIM aeronautical components, leading to a good estimation of the expansions and shrinkages.
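
The Kissinger method mentioned here extracts an activation energy from thermogravimetric runs at several heating rates β via ln(β/T_p²) = −E_a/(R·T_p) + const, where T_p is the peak degradation temperature. A minimal sketch (the peak temperatures below are made up for illustration, not the thesis's data):

```python
import numpy as np

# Kissinger method: Ea follows from the slope of ln(beta/Tp^2) vs 1/Tp
# across several TGA runs at different heating rates beta.
R = 8.314                                   # gas constant, J/(mol K)
beta = np.array([2.0, 5.0, 10.0, 20.0])     # heating rates, K/min (illustrative)
Tp = np.array([598.0, 612.0, 623.0, 635.0]) # TGA peak temperatures, K (illustrative)

slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp ** 2), 1)   # slope = -Ea / R
print(f"activation energy ~ {-slope * R / 1000:.0f} kJ/mol")
```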

30. Baudet, Alvaro. "Optimize cold sector material flow of a steel rolling mill." Thesis, KTH, Industriell produktion, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-50380.

Abstract:
Steel production is a highly capital- and energy-intensive industry that, due to recent raw material price increases and lowered demand, has been squeezed and forced to look more deeply at how to add value for the customer at lower operating costs. The project was carried out on site at the ArcelorMittal mill in Esch-Belval, Luxembourg, which comprises an integrated melt shop, a continuous casting plant and the rolling mill, with the objectives of proposing optimization rules for the cold sector of the rolling mill and analyzing the impact of the future truck bay shipment area. The course of action was to draw a Value Stream Map (VSM) in order to understand the plant's current status and serve as a roadmap for building a discrete-event simulation model that, after its validation, served as a support tool to analyze what-if scenarios. Similarly, a current-status analysis of the shipment/stock area was conducted, collecting statistics about potential truck shipments and finally proposing a series of recommendations for its operation. The main proposed solutions to optimize the rolling mill's cold sector were: (a) an integer programming model to globally optimize the scrap level when cutting the mother beams to customer-size beams; (b) updating pacemaker parameters; and (c) local process time improvements. Concerning the future truck loading, the simulation model was used as a support tool to dimension the transition area between the crane and forklift operations, resulting in a buffer capacity of 6-9 bundles. Additionally, the current length-based storage policy was found to have competing objectives, so a turnover-based class storage policy with A, B, C classes is proposed, which should provide an improved organization of the stock and reduce the travel distance of the cranes. The evaluation of the cranes' performance remains an issue, since there are currently no objective measures such as travelled distance. Optical measuring devices are suggested as one option to obtain a performance indicator that would help further investigate root-cause problems in the shipping/stock area.

31. Halling, Jon. "1553-Simulator. In-/uppspelning av databusstrafik med hjälp av FPGA." Thesis, Linköping University, Department of Electrical Engineering, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-1187.

Abstract:
At Saab Aerospace in Linköping, components for measurement systems for the fighter aircraft JAS 39 Gripen are developed. In this activity you sometimes want to record the traffic transmitted on the data busses that connect different systems. The traffic on these data busses uses the military standard MIL-STD-1553. This project has aimed to create a system for recording and sending 1553 data. The system runs on an ordinary personal computer equipped with a reconfigurable I/O card that, among other things, contains a programmable logic circuit (FPGA). The recorded data are stored on a hard drive. The system has a graphical user interface where the user can configure different methods of filtering the data, among other preferences. The completed system currently has the capacity to record one channel. This works excellently, and the system basically meets all the requirements stated at the start of the project. Using this system instead of the commercially available systems on the market gives a competitive alternative. If the system were to be developed further, with more channels, it would become even more cost-effective, both in price per channel and in functionality, because it is possible to design exactly the functions the user demands. The current version, however, is already fully functional and competitive compared to commercial systems.
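
MIL-STD-1553 words are Manchester II encoded, so a recorder of this kind ultimately turns pairs of sampled half-bit levels back into bits: a logic 1 is a high-to-low bit cell and a logic 0 a low-to-high cell. A minimal decoding sketch (Python on the host side rather than FPGA logic; the input format is simplified for illustration):

```python
def manchester_decode(halfbits):
    """Decode Manchester II half-bit level pairs: (1, 0) -> 1, (0, 1) -> 0."""
    bits = []
    for first, second in zip(halfbits[::2], halfbits[1::2]):
        if (first, second) == (1, 0):
            bits.append(1)
        elif (first, second) == (0, 1):
            bits.append(0)
        else:
            # (1, 1) / (0, 0) never occur inside valid data cells; on the bus
            # such patterns appear only in the word sync field or as noise
            raise ValueError("invalid Manchester pair")
    return bits

print(manchester_decode([1, 0, 0, 1, 1, 0]))   # -> [1, 0, 1]
```

On the real card this decoding runs in the FPGA at the 1 Mbit/s bus rate, with the host side only filtering and storing the decoded words.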

32. Torres Toledo, Victor. "Design, simulation and validation of small-scale solar milk cooling systems." Aachen: Shaker, 2018. http://d-nb.info/1188549162/34.

33. Elmquist, Helena. "Environmental systems analysis of arable, meat and milk production." Uppsala: Dept. of Biometry and Engineering, Swedish University of Agricultural Sciences, 2005. http://epsilon.slu.se/200512.pdf.

34. Sorce, Jenny. "From Spitzer Mid-InfraRed Observations and Measurements of Peculiar Velocities to Constrained Simulations of the Local Universe." Thesis, Lyon 1, 2014. http://www.theses.fr/2014LYO10078/document.

Abstract:
Galaxies are observational probes for studying the Large Scale Structure. Their gravitational motions are tracers of the total matter density and therefore of the Large Scale Structure itself. Besides, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparison with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a remedy to this problem. Achieving such simulations is the goal of the Cosmicflows and CLUES projects. Cosmicflows builds catalogs of accurate distance measurements to map deviations from the expansion. These measurements are mainly obtained from the correlation between galaxy luminosity and gas rotation rate. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs reaching 30 and 150 h−1 Mpc have been released. We report improvements and applications of the CLUES method on these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation, which is then reversed to relocate the reconstructed three-dimensional constraints to their precursors' positions in the initial field. The unequalled size of the second catalog (8000 galaxies within 150 h−1 Mpc) highlighted the importance of minimizing observational biases. By carrying out tests on mock catalogs built from cosmological simulations, a bias-minimization method can be derived. Finally, for the first time, cosmological simulations are constrained solely by the peculiar velocities of galaxies. The process is successful: the resulting simulations resemble the Local Universe. The major attractors and voids are simulated at positions within a few megaparsecs of the observed positions, thus reaching the limit imposed by linear theory.
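The reconstruction step described above rests on the Zel'dovich approximation. As a hedged aside (standard textbook notation, not taken from the thesis itself), the forward map and its reversal can be written as:

% Zel'dovich approximation: particles move along the initial displacement
% field psi(q), scaled by the linear growth factor D(t).
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\psi}(\mathbf{q}),
\qquad
\mathbf{v}(\mathbf{q},t) = a(t)\,\dot{D}(t)\,\boldsymbol{\psi}(\mathbf{q})
% Reversing the map relocates a constraint at present-day position x to its
% precursor (Lagrangian) position q in the initial field:
\mathbf{q} = \mathbf{x}(\mathbf{q},t) - D(t)\,\boldsymbol{\psi}(\mathbf{q})

Since the peculiar velocity v is proportional to the displacement ψ in this approximation, measured velocities constrain the initial field directly, which is what allows constraints to be relocated to their precursors' positions.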
APA, Harvard, Vancouver, ISO, and other styles
35

Scullion, Eamon. "Investigating jets in the lower-to-mid solar atmosphere : observations & numerical simulations." Thesis, University of Sheffield, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556987.

Full text
Abstract:
Understanding the fine structures and transient phenomena associated with the origins of the fast solar wind, in the lower-to-mid solar atmosphere, remains one of the biggest unanswered questions in solar physics today. A variety of jet-like transient events appear in cool plasmas in the form of spicules, macrospicules and surges. Transient and explosive jet-like phenomena are dominant not only in the lower solar atmosphere at UV and EUV temperatures but also in the mid-to-upper coronal atmosphere at X-ray (10-300 keV) and soft X-ray (1-10 keV) temperatures. The recent launch of highly tuned space observatories, such as Hinode (2006), STEREO (2007) and, more recently, SDO (2010), provides a wealth of data concerning the complex dynamics ongoing within the thin band of atmosphere above the photospheric surface of the Sun. In the lower-to-mid solar atmosphere the hot and the cold plasma are separated at the solar transition region. Beneath this layer, mass and energy are coupled from the solar interior into the hot outer solar atmosphere via a number of different physical mechanisms, controlled by either wave-driven or magnetic-reconnection-driven processes. Spicules are one of the most common features in the lower-to-mid solar atmosphere and have recently become the subject of significant debate with regard to their principal formation mechanisms. The spicule family tree can be classified into three branches, namely type-I, type-II and macrospicule. Type-I spicules and macrospicules are generally believed to be formed through distinct processes: type-I spicules are wave driven, while macrospicules are driven by an explosive magnetic reconnection process. The recently discovered type-II spicules (2007) pose an interesting problem in that they are thought to exhibit characteristics of both models. Consequently, type-II spicules are interesting for a number of reasons; in particular, they might address the long-standing questions concerning the origin of the fast solar wind and the coronal heating problem. In this thesis we examine the nature of spicular structures in the lower-to-mid solar atmosphere through observational analysis with supporting numerical simulations (via SAC, the Sheffield Advanced Code; Shelyag et al., 2008). The observational approach is two-fold, involving a spectroscopic study of jets observed in polar coronal holes, for both on-disk and off-limb events, as well as a multi-instrumental imaging approach involving recently launched instruments. The numerical simulations are three-fold: we investigate the wave-driven model behind type-I spicule formation in 3D, the process of magnetic flux emergence in the solar chromosphere in 2.5D to understand the process leading to macrospicule formation, and magnetic reconnection in 2.5D in order to investigate type-II spicule formation. In supporting our numerical models with high spatial, temporal and spectral resolution observations we have discovered a number of interesting phenomena. Firstly, we have modelled the formation of transition region quakes (TRQs) and evaluated the response of the transition region to the propagation of p-modes from the lower chromosphere, as well as acoustic wave energy transmission, in 3D. Secondly, we have simulated convection flows associated with type-I spicules in the corona, the first example of any convection-based process in the corona. Finally, in the spectroscopic analysis, we successfully isolated an important criterion controlling the heating potential of macrospicules, namely the helicity of the emerging magnetic flux.
APA, Harvard, Vancouver, ISO, and other styles
36

Zhang, Qing Jun. "CFD simulations of fluid flow and heat transfer in a model milk vat." Thesis, University of Canterbury. Chemical and Process Engineering, 1998. http://hdl.handle.net/10092/6887.

Full text
Abstract:
To ensure that raw milk quality is maintained during storage, milk needs to be chilled and kept at a certain temperature. To prevent the milk from creaming and to provide a uniform temperature distribution, the milk needs to be smoothly stirred; the milk storage process thus combines heat transfer and fluid flow. This work is part of a project studying the optimisation of the design and operation of farm milk vats used for storing milk awaiting collection on New Zealand dairy farms. It concentrates on CFD simulations of the fluid flow and heat transfer in an unbaffled agitated model milk vat. In previous experimental work, fresh tap water was used instead of milk as the working medium to minimise costs, and heat transfer coefficients were measured for the heating process instead of cooling. The CFD simulations in this work were also performed for heating instead of cooling of the fluid in the vat, to permit comparison with the available experimental results. The geometry simulated was that of the experimental milk vat in the laboratory, a one-third linear scale model of a commercial vat. The Computational Fluid Dynamics (CFD) package CFX4.1 was used to solve the three-dimensional fluid flow and heat transfer in the milk vat. The impeller boundaries were directly simulated using a rotating reference frame. The solution accuracy was examined numerically using a set of different-sized grids and two turbulence models, the k-ε model and the DS model. It was found that the DS model gave better predictions than the k-ε model, but required excessive computing time. Balancing the simulation results against the available computing facility, the k-ε model in conjunction with a rotating reference frame fixed on the impeller was employed in this work. The simulated impeller rotational speed ranged from 18 rpm up to 117 rpm, with corresponding Reynolds numbers of about 20,000 to 144,000, resulting in fully turbulent flow. The simulations of fluid flow for the batch operation mode show that the higher the impeller speed, the stronger the circulation flow, and therefore the larger the impeller pumping capacity. However, both the pumping number and the circulation number are almost independent of the impeller speed. To provide a steady-state heat transfer process, a cooling liquid stream was introduced directly into the milk vat; this was defined as the continuous operation mode. The incoming liquid affects the discharge flow produced by the impeller, and therefore the circulation flow, but this effect is not significant at high Reynolds numbers. The predicted heat transfer coefficients were compared with the available experimental data. The comparison shows that the k-ε model in conjunction with heat transfer can give a reasonable prediction of the heat transfer coefficients in the range of Reynolds numbers simulated.
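For orientation, the impeller Reynolds number quoted above follows the standard stirred-vessel definition Re = ρND²/μ. A minimal sketch in Python, with an assumed impeller diameter (the thesis's actual dimensions are not given here):

# Impeller Reynolds number for a stirred vessel: Re = rho * N * D**2 / mu.
# Fluid properties are for water near 20 C; the impeller diameter is an
# assumption for illustration, not a value taken from the thesis.
rho = 998.0     # density, kg/m^3
mu = 1.0e-3     # dynamic viscosity, Pa.s
D = 0.27        # assumed impeller diameter, m

for rpm in (18, 117):
    N = rpm / 60.0                    # rotational speed, rev/s
    Re = rho * N * D**2 / mu
    regime = "turbulent" if Re > 10_000 else "transitional/laminar"
    print(f"{rpm:>3} rpm -> Re = {Re:,.0f} ({regime})")

With these assumed values both speeds land in the fully turbulent regime (Re > 10,000), consistent with the 20,000 to 144,000 range reported in the abstract.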
APA, Harvard, Vancouver, ISO, and other styles
37

Mills, Jonathan. "Mathematical modelling of starch digestion in the lactating dairy cow." Thesis, University of Reading, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.326969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Minamoto, Yuki. "Physical aspects and modelling of turbulent MILD combustion." Thesis, University of Cambridge, 2014. https://www.repository.cam.ac.uk/handle/1810/245204.

Full text
Abstract:
Moderate or Intense Low-oxygen Dilution (MILD) combustion is one of the combustion technologies that can improve efficiency and reduce emissions simultaneously. This combustion type is characterised by a highly preheated reactant temperature and a relatively small temperature rise during combustion, due to the intense dilution of the reactant mixture. These unique combustion conditions give MILD combustion very attractive features such as high combustion efficiency, reduced pollutant emissions, attenuation of combustion instabilities and flexibility of the flow field. However, our understanding of MILD combustion is not yet sufficient to employ the technology more widely in modern combustion devices. In this thesis, Direct Numerical Simulation (DNS) has been carried out for turbulent MILD combustion under four MILD and classical premixed conditions. A two-phase strategy is employed in the DNS to include the effect of imperfect mixing between fresh and exhaust gases before intense chemical reactions start. In the simulated instantaneous MILD reaction rate fields, both thin and distributed reaction zones are observed. Thin reaction zones with flamelet-like characteristics propagate until colliding with other thin reaction zones to produce distributed reaction zones. The effect of such interacting reaction zones on the scalar gradient also has to be taken into account in flamelet approaches. Morphological features of MILD reaction zones are investigated by employing Minkowski functionals and shapefinders. Although a few local reaction zones are classified as thin, the majority of local reaction zones have pancake- or tube-like shapes. The representative scales computed by the shapefinders also show a typical volume in which intense reactions appear. Given the high temperature and the presence of radicals in the diluted reactants, both reaction-dominated and flame-propagation-dominated regions are observed locally. These two phenomena are closely entangled under high dilution conditions. The favourable conditions for each phenomenon are investigated by focusing on scalar fluxes and the reaction rate. A conditional Probability Density Function (PDF) is proposed to investigate the flamelet/non-flamelet characteristics of MILD combustion. The PDF can be obtained both numerically and experimentally. It shows that MILD combustion still exhibits a direct relationship between reaction rate and scalar gradient, although the tendency is statistically weak due to the distributed nature of MILD reaction zones. Finally, based on the physical aspects of MILD combustion explained in this work, a representative model reactor for MILD combustion is developed. The model reactor is also used in conjunction with the presumed PDF for a mean and filtered reaction rate closure. The results show good agreement between the modelled reaction rate and the DNS results.
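A minimal sketch of how a conditional PDF of reaction rate against scalar gradient can be estimated from field data (the synthetic arrays and bin counts below are placeholders, not the thesis's DNS diagnostics):

import numpy as np

# Placeholder samples standing in for flattened DNS fields: reaction rate
# omega and scalar gradient magnitude |grad c|, loosely correlated on purpose.
rng = np.random.default_rng(0)
grad_c = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
omega = grad_c * (1.0 + 0.3 * rng.standard_normal(100_000))

# Joint histogram P(omega, |grad c|), then normalise each |grad c| column to
# get the conditional PDF P(omega | |grad c|).
H, omega_edges, grad_edges = np.histogram2d(omega, grad_c, bins=64, density=True)
col_mass = H.sum(axis=0, keepdims=True)
cond_pdf = np.divide(H, col_mass, out=np.zeros_like(H), where=col_mass > 0)

# A flamelet-like signature shows up as probability mass concentrated along a
# monotonic ridge omega ~ f(|grad c|); a distributed regime smears it out.
ridge = omega_edges[:-1][cond_pdf.argmax(axis=0)]
print(ridge[:8])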
APA, Harvard, Vancouver, ISO, and other styles
39

McIlwain, Stuart. "Large eddy simulation of the near field of round and coaxial jets with mild swirl." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2001. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq56091.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Hou, Yu. "DEM simulation and analysis of operating parameters on grinding performance of a vertical stirred media mill." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46504.

Full text
Abstract:
Stirred media mills have been increasingly used in ultra-fine grinding. The VXPmill is a vertical high-speed stirred media mill for grinding mineral ores with high efficiency. Since it is a new technology in the industry, there is little understanding of the breakage kinetics of the mill. In order to gain more knowledge about the VXPmill, computer modelling of the mill was performed. Laboratory grinding trials were also conducted on a pilot-scale mill to provide more information about the mill's capability, as well as to verify the simulation results. DEM (Discrete Element Method) is a powerful tool for predicting particle behaviour, which makes it ideal for the study of stirred media milling. CFD (Computational Fluid Dynamics) is used to model the motion of the slurry by numerically solving the Navier–Stokes equations, facilitated with the Volume of Fluid (VOF) and multiphase flow models. Simulation results suggested that a velocity gradient exists in both the fluid field and the grinding media in the mill. The highest grinding media velocity was reached near the disc edge in the horizontal direction and near the bottom of the mill in the vertical direction; these are the most active grinding zones in a vertical stirred mill. Different operating parameters such as stirrer rotational speed, slurry solid content and slurry viscosity have an influence on mill performance. Simulation results show that operating the mill at a high impeller speed helps to improve mineral liberation, while too high an impeller speed leads to a waste of electrical energy without much further improvement in liberation. As well, a mid-level slurry solid content (15% v/v to 30% v/v) was found to achieve the best energy utilization during grinding. The slurry viscosity should be kept low to minimize the effect of high shear stress in the slurry. The influence of the various operating parameters can be combined into the 'stress intensity', which describes the capability of a stirred media mill. Operating parameters also influence the magnitude of the forces between grinding media, which results in different breakage mechanisms. The fracture breakage mechanism plays a more important role than attrition in the VXPmill.
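The 'stress intensity' mentioned above is commonly written, in Kwade's formulation for stirred media mills, as SI = d³·(ρ_GM − ρ_slurry)·v_t². A small sketch with assumed bead and slurry properties (illustrative only, not the thesis's operating points):

# Kwade-style stress intensity of the grinding media in a stirred mill:
#   SI = d_gm**3 * (rho_gm - rho_slurry) * v_t**2   [J]
# All values below are illustrative assumptions, not data from the thesis.
d_gm = 2.0e-3         # grinding bead diameter, m
rho_gm = 3800.0       # ceramic bead density, kg/m^3
rho_slurry = 1400.0   # slurry density, kg/m^3

for v_t in (6.0, 10.0, 14.0):     # impeller tip speed, m/s
    si = d_gm**3 * (rho_gm - rho_slurry) * v_t**2
    print(f"v_t = {v_t:4.1f} m/s -> SI = {si:.2e} J")

The cubic dependence on bead size and quadratic dependence on tip speed make these two parameters the main levers for matching stress intensity to the ore, which is why the abstract singles out impeller speed as the key operating trade-off.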
APA, Harvard, Vancouver, ISO, and other styles
41

Hinton, John S. "Laboratory simulation of microstructural evolution in AISI 430 ferritic stainless steel during the Steckel mill process." Thesis, University of Sheffield, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.434616.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Balls, M. Reed. "Economic Simulation of Selected Management Strategies for a Typical Dairy Farm Faced with Declining Milk Prices." DigitalCommons@USU, 1989. https://digitalcommons.usu.edu/etd/4207.

Full text
Abstract:
The purpose of this thesis is to study the effect of lower milk support prices triggered by chronic surplus production problems and to offer alternative management strategies for dairymen caught in the cash flow squeeze precipitated by the resulting cuts in the producer price of milk. Historical dairy policy is reviewed and recommendations are offered for consideration in developing dairy policy over the next decade. FLIPSIM V, a powerful firm-level computerized simulation model, is employed to predict the probable outcome of employing alternative management strategies designed to improve profitability for individual dairymen. The study focuses on a typical farm devised from survey data to be representative of Utah's dairy industry. A five-year planning horizon is simulated.
APA, Harvard, Vancouver, ISO, and other styles
43

Wang, Yunsong. "Optimization of Monte Carlo Neutron Transport Simulations with Emerging Architectures." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLX090/document.

Full text
Abstract:
Monte Carlo (MC) neutron transport simulations are widely used in the nuclear community to perform reference calculations with minimal approximations. The conventional MC method has slow convergence according to the law of large numbers, which makes simulations computationally expensive. Cross section computation has been identified as the major performance bottleneck for MC neutron codes. Typically, cross section data are precalculated and stored in memory before the simulation for each nuclide; during the simulation, only table lookups are required to retrieve data from memory and the compute cost is trivial. We implemented and optimized a large collection of lookup algorithms in order to accelerate this data retrieval process. Results show that significant speedup can be achieved over the conventional binary search on both CPU and MIC in unit tests, as opposed to real-case simulations. Using vectorization instructions proved effective on the many-core architecture thanks to its 512-bit vector units; on CPU this improvement is limited by the smaller register size. Further optimizations such as memory reduction turn out to be very important, since they largely improve computing performance. As can be imagined, all energy-lookup proposals are totally memory-bound: the computing units do little but wait for data. In other words, the computing capability of modern architectures is largely wasted. Another major issue of energy lookup is that the memory requirement is huge: cross section data at one temperature for the up to 400 nuclides involved in a real-case simulation require nearly 1 GB of memory, which makes simulations with several thousand temperatures infeasible on current computer systems. In order to solve the problems inherent to energy lookup, we investigated another on-the-fly cross section proposal called reconstruction. The basic idea behind reconstruction is to perform the Doppler broadening (a convolution integral) of cross sections on the fly, each time a cross section is needed, with a formulation close to standard neutron cross section libraries and based on the same amount of data. Reconstruction converts the problem from memory-bound to compute-bound: only a few variables per resonance are required, instead of the conventional pointwise table covering the entire resolved resonance region. Though memory space is greatly reduced, this method is very time-consuming. After a series of optimizations, results show that the reconstruction kernel benefits well from vectorization and can achieve 1806 GFLOPS (single precision) on a Knights Landing 7250, which represents 67% of its effective peak performance. Even though the optimization efforts on reconstruction significantly improve the FLOP usage, this on-the-fly calculation is still slower than the conventional lookup method. Given this situation, we began porting the code to GPGPUs to exploit potentially higher performance as well as higher FLOP usage. In addition, a further evaluation has been planned to compare lookup and reconstruction in terms of power consumption: with the help of hardware and software energy measurement support, we expect to find a compromise solution between performance and energy consumption in order to face the "power wall" challenge as hardware evolves.
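A hedged sketch of the conventional energy-lookup step the thesis sets out to optimize: a binary search over a sorted energy grid followed by linear interpolation (the grid and cross-section values below are invented for illustration):

import bisect

# Conventional pointwise cross-section lookup: binary search on a sorted
# energy grid followed by linear interpolation. Grid/values are invented.
energy_grid = [1e-5, 1e-3, 1e-1, 1.0, 10.0, 1e3, 1e5, 2e7]   # eV
xs_values   = [90.0, 40.0, 12.0, 6.0, 3.5, 2.0, 1.5, 1.0]     # barns

def sigma(e: float) -> float:
    """Return the interpolated cross section at energy e (eV)."""
    i = bisect.bisect_right(energy_grid, e) - 1       # bracketing interval
    i = max(0, min(i, len(energy_grid) - 2))
    e0, e1 = energy_grid[i], energy_grid[i + 1]
    s0, s1 = xs_values[i], xs_values[i + 1]
    return s0 + (s1 - s0) * (e - e0) / (e1 - e0)      # linear-linear interp

print(sigma(2.0))

On a real evaluated-data grid with tens of thousands of points per nuclide, the scattered memory accesses of this search defeat the caches, which is exactly the memory-latency behaviour that on-the-fly reconstruction trades for floating-point work.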
APA, Harvard, Vancouver, ISO, and other styles
44

Shum, Pak Ho. "Simulating interactions among multiple characters." Thesis, University of Edinburgh, 2010. http://hdl.handle.net/1842/9961.

Full text
Abstract:
In this thesis, we attack a challenging problem in the field of character animation: synthesizing interactions among multiple virtual characters in real-time. Although there is heavy demand in the gaming and animation industries, no systematic solution has been proposed, due to the difficulty of modelling the complex behaviours of the characters. We represent the continuous interactions among characters as a discrete Markov Decision Process and design a general objective function to evaluate the immediate reward of launching an action. By applying game-theoretic techniques such as tree expansion and min-max search, the optimal actions that benefit the character the most in the future are selected. The simulated characters can interact competitively while cooperatively satisfying the requests of animators. Since the interactions between two characters depend on many criteria, it is difficult to exhaustively precompute the optimal actions for all variations of these criteria. We design an off-policy approach that samples and precomputes only meaningful interactions. With the precomputed policy, the optimal movements under different situations can be evaluated in real-time. To simulate interactions for a large number of characters with minimal computational overhead, we propose a method to precompute short durations of interactions between two characters as connectable patches. The patches are concatenated spatially to generate interactions with multiple characters, and temporally to generate longer interactions. Based on optional instructions given by the animators, our system automatically applies concatenations to create a huge scene of an interacting crowd. We demonstrate our system by creating scenes with high-quality interactions. On the one hand, our algorithm can automatically generate artistic scenes of interactions, such as movie fighting scenes involving hundreds of characters. On the other hand, it can create controllable, intelligent characters that interact with their opponents in real-time applications such as 3D computer games.
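A minimal depth-limited min-max sketch over a discrete action set, in the spirit of the tree expansion described above (the state, action set and reward function are toy placeholders, not the thesis's animation model):

from typing import Callable, Iterable

# Depth-limited min-max over discrete actions: the controlled character
# maximises a reward while the opponent is assumed to minimise it.
def minimax(state, depth: int, maximising: bool,
            actions: Callable[[object], Iterable[object]],
            step: Callable[[object, object], object],
            reward: Callable[[object], float]) -> float:
    if depth == 0:
        return reward(state)                      # immediate reward at horizon
    values = (minimax(step(state, a), depth - 1, not maximising,
                      actions, step, reward)
              for a in actions(state))
    return max(values) if maximising else min(values)

# Toy usage: a 1-D "distance duel" where the agent wants distance 0.
actions_fn = lambda s: (-1, 0, 1)
step_fn = lambda s, a: s + a
reward_fn = lambda s: -abs(s)
best = max(actions_fn(5), key=lambda a: minimax(step_fn(5, a), 3, False,
                                                actions_fn, step_fn, reward_fn))
print(best)   # -> -1: move towards the target despite the adversary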
APA, Harvard, Vancouver, ISO, and other styles
45

Rai, Jitender Kumar. "FEM-MILL: a finite element based 3D transient milling simulation environment for process plan verification and optimization /." Lausanne : EPFL, 2008. http://library.epfl.ch/theses/?nr=4190.

Full text
Abstract:
Thesis, Ecole polytechnique fédérale de Lausanne (EPFL), no. 4190 (2008), Faculté des sciences et techniques de l'ingénieur (STI), doctoral programme Systèmes de production et Robotique, Institut de génie mécanique (IGM), Laboratoire des outils informatiques pour la conception et la production (LICP). Supervisor: Paul Xirouchakis.
APA, Harvard, Vancouver, ISO, and other styles
46

Ng, Seng-Leong. "A simulation study of acoustic variability due to internal solitary waves on the mid-Atlantic continental shelf." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1997. http://handle.dtic.mil/100.2/ADA331078.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Picot, Jean-Baptiste. "Modélisation et simulation de l'atelier de régénération de l'usine Kraft." Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENI063/document.

Full text
Abstract:
Chemical recovery at the kraft mill is the process whereby the valuable inorganic elements are extracted from spent kraft liquors and regenerated into the form effective for the cooking of the wood, while energy is produced from the dissolved organic fraction. Many unit operations are involved, often poorly described. This work aims at a better understanding of the recovery processes. Reliable models describing the physical phenomena were proposed for each unit operation and implemented as computer algorithms. The whole chemical recovery unit was then simulated.
APA, Harvard, Vancouver, ISO, and other styles
48

Jägenstedt, Gabriel. "Analysis and Simulation of Threats in an Open, Decentralized, Distributed Spam Filtering System." Thesis, Linköpings universitet, Databas och informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-81012.

Full text
Abstract:
The existence of spam email has grown from a few hundred messages in the late 1970s to several billion per day in 2010. This continually growing problem is of great concern to businesses and users alike. One attempt to combat this problem comes with a spam filtering tool called TRAP. The primary design goal of TRAP is to enable tracking of the reputation of mail senders in a decentralized and distributed fashion. In order for the tool to be useful, it is important that it does not have any security issues that would let a spammer bypass the protocol or gain a reputation it should not have. As a piece of this puzzle, this thesis analyses TRAP's protocol and design in order to find threats and vulnerabilities capable of bypassing the protocol safeguards. Based on these threats we also evaluate possible mitigations, both by analysis and by simulation. We have found that although the protocol was not designed with regard to certain attacks on the system itself, most of the attacks can be stopped fairly easily. The analysis shows that by adding cryptographic defenses to the protocol, many of the threats would be mitigated. In those cases where cryptography would not suffice, it generally comes down to sane design choices in the implementation, as well as not always trusting that a node is being truthful and following the protocol.
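As an illustration of the kind of cryptographic defense the analysis points to (a generic sketch under assumed message fields, not TRAP's actual wire protocol), reputation reports can be authenticated with an HMAC so that a spammer cannot forge votes on behalf of other nodes:

import hmac, hashlib

# Generic sketch: authenticate a reputation report with an HMAC keyed by a
# per-peer shared secret. Message fields are assumptions for illustration.
def sign_report(key: bytes, sender: str, score: int) -> bytes:
    msg = f"{sender}:{score}".encode()
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_report(key: bytes, sender: str, score: int, tag: bytes) -> bool:
    expected = sign_report(key, sender, score)
    return hmac.compare_digest(expected, tag)   # constant-time comparison

shared_key = b"per-peer shared secret"
tag = sign_report(shared_key, "mail.example.org", -5)
assert verify_report(shared_key, "mail.example.org", -5, tag)
assert not verify_report(shared_key, "mail.example.org", +5, tag)  # forged score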
APA, Harvard, Vancouver, ISO, and other styles
49

Hjort, Victor, and Anton Jonasson. "Simulation of the slab cutting system at SSAB Oxelösund AB – Streamlining the product flow of the rolling mill." Thesis, Linköpings universitet, Kommunikations- och transportsystem, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-131149.

Full text
Abstract:
Organisations that apply Lean Production can use powerful methods to streamline their product flow and reduce waste by mapping where the weakest link in production lies. SSAB Oxelösund AB is to implement a project, PV3000, based on a Lean-oriented mindset with the goal of reducing both inventory levels and lead times in the rolling mill. In the rolling mill, the rolling process is considered a bottleneck, which places high demands on the upstream slab cutting system to deliver the right quantity at the right time. The purpose of the study is to streamline the flow in the slab cutting system with the help of simulation, by investigating where bottlenecks arise and by identifying measures that can be taken both to meet the demand from the slab furnaces and to reduce waste in the slab cutting system. With the aim of creating a model that simulates reality as closely as possible, a simulation methodology consisting of nine steps was followed, covering everything from planning and modelling to experimentation and implementation. In collecting the relevant data, both historical data obtained from SSAB's databases and manually collected data were used. The manually collected data were in some cases obtained through field studies, in which the authors timed the processes at the slab cutting system, and in other cases through estimates. All data were then approximated using mathematical probability functions and incorporated into the simulation model. A simulation of the current state shows that slab cutter 1 cuts the right quantity at the right time. Slab cutter 2, however, cannot meet the slab furnaces' demand and at the same time has a high proportion of active time. The slab saws also have a high proportion of active time and cannot cut at the required pace. The simulation model was then used to perform various experiments based on different measures, such as investing in a new saw, a new roller table or a new gas cutter, but also changing the way of working in the slab cutting system. The measures were compiled into 18 different scenarios divided into two experiments, whose results are presented in terms of lead times, rates and state distributions. The results show that two scenarios satisfy the slab furnaces' demand without exceeding the target set for inventory levels. The results and the analysis indicate that slab cutter 2 is the primary bottleneck of the slab cutting system and the slab saws the secondary one. Furthermore, to meet the slab furnaces' demand and reduce waste in the slab cutting system, a changed way of working in the slab cutting system is needed, together with either a new saw and reduced cutting times or an investment in a new slab cutter 1. Finally, before any implementation in reality, further work on the conclusions obtained is recommended, by analysing how the study's limitations may have affected the results.
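A toy discrete-event sketch of the kind of situation the study models: one cutter feeding a furnace that demands slabs at a fixed takt (all rates are invented, and a plain-Python event loop stands in for whatever simulation package the thesis used):

import heapq, random

# Toy discrete-event model: a cutter produces slabs with random cutting times,
# a furnace consumes one slab per fixed takt. All parameters are invented.
random.seed(1)
CUT_MEAN, TAKT, HORIZON = 9.0, 10.0, 8 * 60.0   # minutes

events = [(0.0, "cut"), (0.0, "demand")]        # (time, kind) priority queue
buffer_level, missed, produced = 0, 0, 0
while events:
    t, kind = heapq.heappop(events)
    if t > HORIZON:
        break
    if kind == "cut":
        buffer_level += 1
        produced += 1
        heapq.heappush(events, (t + random.expovariate(1.0 / CUT_MEAN), "cut"))
    else:  # furnace demands one slab every TAKT minutes
        if buffer_level > 0:
            buffer_level -= 1
        else:
            missed += 1                          # furnace starved: lost takt
        heapq.heappush(events, (t + TAKT, "demand"))

print(f"produced={produced}, buffer={buffer_level}, missed demands={missed}")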
APA, Harvard, Vancouver, ISO, and other styles
50

Peng, Yan. "Synthèse et caractérisation de poussières carbonées dans une décharge radiofréquence." Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10128/document.

Full text
Abstract:
The formation of carbon dust in tokamaks currently raises several real problems (safety, energy losses, ...). To understand the mechanisms of these powders' formation (size distribution, spatial distribution and transport) and then find a way to limit their role, an experimental study was carried out in an Ar/C2H2 radiofrequency discharge. The plasma and the carbon powders were characterized by different techniques (optical emission spectroscopy, scattering of radiation, in-situ FTIR, fast camera, SEM and ex-situ FTIR). The scattering of polychromatic radiation (IR and UV-visible-near-IR) was used to obtain information about the powders' spatial distribution and the evolution of their size distribution. A model based on Mie theory, combined with a Monte Carlo method, was developed to reproduce the in-situ optical measurements. The comparison between experiments and numerical simulations opens new avenues for the interpretation and analysis of the results. This study is a first step towards real-time determination of dust size and density by coupling the optical measurements with the Mie-theory-based numerical model.
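For orientation (standard textbook relations, not the thesis's own derivation): the Mie size parameter and its small-particle (Rayleigh) scattering limit, which govern how scattered intensity encodes dust size, are

% x: size parameter, r: particle radius, lambda: wavelength,
% m: complex refractive index of the dust relative to the surrounding medium.
x = \frac{2\pi r}{\lambda},
\qquad
Q_{\mathrm{sca}} \xrightarrow{\;x \ll 1\;} \frac{8}{3}\, x^{4}
\left| \frac{m^{2}-1}{m^{2}+2} \right|^{2}

The steep x⁴ (i.e. r⁴/λ⁴) scaling in the small-particle limit is what makes multi-wavelength scattering measurements sensitive to the evolving size distribution.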
APA, Harvard, Vancouver, ISO, and other styles