Dissertations / Theses on the topic 'Evaluation of simulation models'

Consult the top 50 dissertations / theses for your research on the topic 'Evaluation of simulation models.'

1

Yilmaz, Deniz. "Evaluation And Comparison Of Helicopter Simulation Models With Different Fidelities." Master's thesis, METU, 2008. http://etd.lib.metu.edu.tr/upload/12609707/index.pdf.

Abstract:
This thesis concerns the development, evaluation, comparison and testing of a UH-1H helicopter simulation model with various fidelity levels. In particular, the well-known minimum-complexity simulation model is updated with various higher-fidelity simulation components, such as the Peters-He inflow model, horizontal tail contribution, an improved tail rotor model, control mapping, ground effect, fuselage interactions, ground reactions, etc. Results are compared with available flight test data. The dynamic model is integrated into the open-source simulation environment FlightGear. Finally, the model is cross-checked through evaluations by test pilots.
2

Chen, Shie. "Appliance simulation models for the evaluation of energy management policies." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for telematikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-26710.

Abstract:
A home energy management system is a system in a house that assists consumers in monitoring and optimizing electricity usage in order to lower electricity costs while maintaining consumers' comfort. One approach to achieve this is to use energy management policies to schedule appliance activities at the appropriate time. A policy is a set of rules defining events and the corresponding actions to reach goals. Simulations are needed to validate the policies. The SMASH (SiMulated Adaptable Smart Home) simulation platform has been developed at the Department of Telematics at NTNU. It is a model of a real adaptable smart home and consists of entity energy models for various physical appliances. Different types of simulation results are produced, such as cumulative energy consumption, cumulative electricity costs, and peak power, so that the feasibility and effectiveness of the policies can be validated. The platform developed so far only contains two entity energy models; additional models and policies are needed to simulate the use of many policies targeting different appliances simultaneously. This project aims at developing at least two entity energy models and policies making use of the models. The models must include detailed energy consumption estimation during all the defined operations of the appliances. The tasks within this project include: 1) studying how the platform works, 2) designing entity energy models compatible with the platform, 3) creating policies that use the designed models, 4) creating case studies that combine the models and the policies, 5) implementing the case studies and collecting results, and 6) analyzing the results and evaluating the policies.
3

Tonga, Melek Mehlika. "Uncertainty Evaluation Through Ranking Of Simulation Models For Bozova Oil Field." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613243/index.pdf.

Abstract:
Producing since 1995, Bozova Field is a mature oil field to be re-evaluated. When evaluating an oil field, the common approach followed in a reservoir simulation study is: generating a geological model that is expected to represent the reservoir; building simulation models by using the most representative dynamic data; and doing sensitivity analysis around a best case in order to get a history-matched simulation model. Each step deals with a great variety of uncertainty, and changing one parameter at a time does not cover the entire uncertainty space. Knowing not only the impact of the uncertainty related to each individual parameter but also their combined effects can lead to a better understanding of the reservoir and better reservoir management. In this study, uncertainties associated only with fluid properties, rock physics functions and water-oil contact (WOC) depth are examined thoroughly. Since sensitivity analysis around a best case covers only a part of the uncertainty, a full factorial experimental design technique is used. Without pursuing the goal of a history-matched case, simulation runs are conducted for all possible combinations of: 19 sets of capillary pressure/relative permeability (Pc/krel) curves taken from special core analysis (SCAL) data; 2 sets of pressure, volume, temperature (PVT) analysis data; and 3 sets of WOC depths. As a result, historical production and pressure profiles from 114 (2 x 3 x 19) cases are presented for screening the impact of the uncertainty related to the aforementioned parameters in the history matching of Bozova Field. The reservoir simulation models that give the best match with the history data are determined by the calculation of an objective function, and they are ranked according to their goodness of fit. It is found that the uncertainty of the Pc/krel curves has the highest impact on the history match values; the uncertainty of WOC depth comes next, and the least effect arises from the uncertainty of the PVT data. This study constitutes a solid basis for further studies on the selection of the best-matched models for history matching purposes.
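For readers who want the mechanics of such a screening, the combinatorial part is easy to script; below is a minimal sketch of enumerating a 2 x 3 x 19 full factorial design and ranking cases by a sum-of-squares misfit. The parameter names, placeholder values and the `simulate` wrapper are illustrative assumptions, not taken from the thesis.

```python
import itertools

# Hypothetical inputs: one entry per uncertain parameter set (illustrative names/values).
pc_krel_sets = [f"SCAL_{i}" for i in range(1, 20)]   # 19 Pc/krel curve sets
pvt_sets = ["PVT_A", "PVT_B"]                        # 2 PVT data sets
woc_depths = [1250.0, 1265.0, 1280.0]                # 3 WOC depths (assumed values)

def objective(case, observed, simulate):
    """Sum of squared errors between observed and simulated history profiles."""
    simulated = simulate(case)                       # user-supplied simulator wrapper
    return sum((o - s) ** 2 for o, s in zip(observed, simulated))

def rank_cases(observed, simulate):
    cases = list(itertools.product(pvt_sets, woc_depths, pc_krel_sets))  # 2 x 3 x 19 = 114
    scored = [(objective(c, observed, simulate), c) for c in cases]
    return sorted(scored)                            # best history match first
```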
4

Nilsson, Håkan O. "Comfort Climate Evaluation with Thermal Manikin Methods and Computer Simulation Models." Doctoral thesis, KTH, Civil and Architectural Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3726.

Abstract:
Increasing concern about energy consumption and the simultaneous need for an acceptable thermal environment makes it necessary to estimate in advance what effect different thermal factors will have on the occupants. Temperature measurements alone do not account for all climate effects on the human body and especially not for local effects of convection and radiation. People as well as thermal manikins can detect heat loss changes on local body parts. This fact makes it appropriate to develop measurement methods and computer models with the corresponding working principles and levels of resolution. One purpose of this thesis is to link together results from these various investigation techniques with the aim of assessing different effects of the thermal climate on people. The results can be used to facilitate detailed evaluations of thermal influences both in indoor environments in buildings and in different types of vehicles.

This thesis presents a comprehensive and detailed description of the theories and methods behind full-scale measurements with thermal manikins. This is done with new, extended definitions of the concept of equivalent temperature, and new theories describing equivalent temperature as a vector-valued function. One specific advantage is that the locally measured or simulated results are presented with newly developed "comfort zone diagrams". These diagrams provide new ways of taking into consideration both seat zone qualities as well as the influence of different clothing types on the climate assessment with "clothing-independent" comfort zone diagrams.

Today, different types of computer programs such as CAD (Computer Aided Design) and CFD (Computational Fluid Dynamics) are used for product development, simulation and testing of, for instance, HVAC (Heating, Ventilation and Air Conditioning) systems, particularly in the building and vehicle industry. Three different climate evaluation methods are used and compared in this thesis: human subjective measurements, manikin measurements and computer modelling. A detailed description is presented of how developed simulation methods can be used to evaluate the influence of thermal climate in existing and planned environments. In different climate situations subjective human experiences are compared to heat loss measurements and simulations with thermal manikins. The calculation relationships developed in this research agree well with full-scale measurements and subject experiments in different thermal environments. The use of temperature and flow field data from CFD calculations as input produces acceptable results, especially in relatively homogeneous environments. In more heterogeneous environments the deviations are slightly larger. Possible reasons for this are presented along with suggestions for continued research, new relationships and computer codes.

Keywords: equivalent temperature, subject, thermal manikin, mannequin, thermal climate assessment, heat loss, office environment, cabin climate, ventilated seat, computer model, CFD, clothing-independent, comfort zone diagram.
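For orientation, the zone-wise equivalent temperature used in manikin-based comfort assessment of this kind is commonly derived from the measured dry heat loss; a sketch of the usual relation is given below. The symbols and the calibration procedure describe general practice and are stated as assumptions, not quoted from the thesis.

```latex
% Zone-wise equivalent temperature from measured dry heat loss (assumed general form)
t_{eq,i} = t_{s,i} - \frac{Q_i}{h_{cal,i}}
```

Here, for manikin zone i, t_{s,i} is the surface temperature, Q_i the measured dry heat loss per unit area, and h_{cal,i} a heat transfer coefficient obtained by calibrating the manikin in a homogeneous reference environment.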
5

Nilsson, Håkan O. "Comfort climate evaluation with thermal manikin methods and computer simulation models /." Stockholm : Arbetslivsinstitutet, förlagstjänst, 2004. http://ebib.arbetslivsinstitutet.se/ah/2004/ah2004_02.pdf.

6

Hedlund, André. "Evaluation of RANS turbulence models for the simulation of channel flow." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-238649.

Abstract:
The objective of this report is to investigate how RANS models perform on fully developed channel flow at Re = 13 350; the simulations are made with the open-source software OpenFOAM. The velocity and turbulent kinetic energy profiles are compared with previously published DNS results. A short introduction to turbulence modelling is given, with focus on channel flow and the boundary layer. In total, eleven models are evaluated, and the results are of varying quality. A convergence study is presented for two models and reveals that the expected second-order convergence is fulfilled for one of them, whereas the study for the other model is more ambiguous, without a clear conclusion. The OpenFOAM case setups for each model and the results gathered from the simulations are publicly available.
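As a pointer to what such a convergence study computes, the observed order of accuracy can be estimated from errors on two successively refined grids under the assumption that the error behaves like C·h^p. The sketch below uses placeholder error values, not results from the report.

```python
import math

def observed_order(e_coarse: float, e_fine: float, refinement_ratio: float = 2.0) -> float:
    """Observed order of accuracy p from errors on two grids, assuming e ~ C * h**p."""
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

# Placeholder errors against a DNS reference, on grids refined by a factor of 2.
print(observed_order(e_coarse=4.0e-3, e_fine=1.0e-3))  # ~2.0 => second-order behaviour
```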
7

Padilla, Ryan Michael. "Performance Evaluation of Optimal Rate Allocation Models for Wireless Networks." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3166.

Abstract:
Convex programming is used in wireless networks to optimize the sending or receiving rates of links or flows in a network. This kind of optimization problem is formulated into a rate allocation problem, where each node in the network will distributively solve the convex problem and all links or flows will converge to their optimal rate. The objective function and constraints of these problems are represented in a simplified model of contention, interference, and sending or receiving rates. The Partial Interference model is an optimal rate allocation model for use in wireless mesh networks that has been shown to be theoretically superior to other conceptual models. This paper compares the Partial Interference model to three other models of wireless networks using the ns-3 simulator to verify these claims. It discusses where the model works as expected, where the model fails to improve network utility, and the limitations inherent to its use.
8

Kathrada, Muhammad. "Uncertainty evaluation of reservoir simulation models using particle swarms and hierarchical clustering." Thesis, Heriot-Watt University, 2009. http://hdl.handle.net/10399/2268.

Abstract:
History matching production data in finite difference reservoir simulation models has been, and always will be, a challenge for the industry. The principal hurdles to overcome are finding a match in the first place and, more importantly, finding a set of matches that can capture the uncertainty range of the simulation model, and doing this in as short a time as possible, since the bottleneck in this process is the length of time taken to run the model. This study looks at the implementation of Particle Swarm Optimisation (PSO) in history matching finite difference simulation models. Particle swarms are a class of evolutionary algorithms that have shown much promise over the last decade. The method draws parallels from the social interaction of swarms of bees, flocks of birds and shoals of fish. Essentially, a swarm of agents is allowed to search the solution hyperspace, keeping in memory each individual's historical best position and iteratively improving the optimisation by the emergent interaction of the swarm. An intrinsic feature of PSO is its local search capability. A sequential niching variation of the PSO has been developed, viz. Flexi-PSO, that enhances the exploration and exploitation of the hyperspace and is capable of finding multiple minima. This new variation has been applied to history matching synthetic reservoir simulation models to find multiple distinct history matches in order to try to capture the uncertainty range. Hierarchical clustering is then used to post-process the history match runs to reduce the size of the ensemble carried forward for prediction. The success of the uncertainty modelling exercise is then assessed by checking whether the production profile forecasts generated by the ensemble cover the truth case.
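For context, the plain global-best PSO update that such variants build on can be written in a few lines. This is a generic textbook-style sketch with an illustrative quadratic objective, not the Flexi-PSO variant or the history-matching objective developed in the thesis.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimiser for a function f over box bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Example: minimise a simple quadratic misfit in place of a history-match objective.
best_x, best_val = pso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                                bounds=[(-5, 5), (-5, 5)])
```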
9

Ritchie-Dunham, James Loomis. "Balanced scorecards, mental models, and organizational performance : a simulation experiment /." Thesis, Full text (PDF) from UMI/Dissertation Abstracts International, 2002. http://wwwlib.umi.com/cr/utexas/fullcit?p3082891.

10

Gustavsson, Niklas. "Evaluation and Simulation of Black-box Arc Models for High-Voltage Circuit-Breakers." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2299.

Abstract:
The task for this Master's thesis was to evaluate different black-box arc models for circuit-breakers with the purpose of finding criteria for the breaking ability. A black-box model is a model that requires no knowledge from the user of the underlying physical processes. Black-box arc models have been used in circuit-breaker development for many years. Arc voltages from tests made in the High Power Laboratory in Ludvika were used for validation, along with the resistance calculated at current zero, R0, and 500 ns before current zero, R500.

Three different arc models were evaluated: Cassie-Mayr, KEMA and an arc model based on power calculations. The third model gave very good results, and if the model is developed further, the breaking ability could easily be estimated.

The arc model based on power calculations could be improved by using better approximations of the quantities in the model, and by representing the current better. A further suggestion for the following work is to combine the second arc model tested, the KEMA model, with the model based on power calculations in order to estimate the KEMA model parameters.

The R0 and R500 values should also be calculated from more tests, in order to find a clear limit of the breaking ability.
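One of the classic black-box formulations behind the Cassie-Mayr family is the Mayr arc equation, which relates the arc conductance g to the instantaneous power balance. Below is a minimal sketch of integrating it for a prescribed sinusoidal current; the time constant, cooling power and current amplitude are illustrative assumptions, not values from the Ludvika tests.

```python
import math

def simulate_mayr_arc(tau=3e-4, p0=2.0e4, i_peak=1.0e3, freq=50.0,
                      dt=1.0e-6, t_end=0.02):
    """Integrate the Mayr model dg/dt = (g/tau) * (u*i/P0 - 1) with u = i/g (explicit Euler)."""
    g = 1.0          # initial arc conductance [S]
    t = 0.0
    trace = []
    while t < t_end:
        i = i_peak * math.sin(2 * math.pi * freq * t)   # prescribed current [A]
        u = i / g                                       # arc voltage [V]
        dgdt = (g / tau) * (u * i / p0 - 1.0)
        g = max(g + dgdt * dt, 1e-9)                    # keep conductance positive
        trace.append((t, u, g))
        t += dt
    return trace
```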
11

Dawson, Christopher Peter. "The evaluation of two models of Human Resource Accounting using a simulation methodology." Thesis, City University London, 1992. http://openaccess.city.ac.uk/7890/.

Abstract:
This research is concerned with the subject of Human Resource Accounting (HRA) and how two particular HRA models may be operationalized. The two models concerned are the "Stochastic Rewards Valuation Model" and the "Replacement Cost Model". The research takes the form of three case exercises in which managers from different organisations used a computerized simulation model to assist them in their employee resourcing. In the process of using this simulation, figures were obtained that could operationalize the two HRA models under consideration. These figures, and the managers' explanations of their computation and application, are used to compare the two HRA models and to evaluate their utility. The conclusions drawn are that whilst managers implicitly hold human resources to have value, in the normally accepted non-technical sense, the two HRA models do not provide a consistent way of measuring it. Whilst the models themselves may therefore have limited utility, it was concluded that the process of operationalising them is nevertheless useful.
12

Khanta, Pothu Raju. "Evaluation of traffic simulation models for work zones in the New England area." Connect to this title, 2008. http://scholarworks.umass.edu/theses/184/.

13

Forsell, Johannes, and Elias Furenstam. "Cash Flow Simulation in Private Equity : An evaluation and comparison of two models." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-148620.

Abstract:
The uncertain pattern of cash flows poses a liquidity and risk management challenge for investors in private equity funds. The structure of a private equity investment, where the total committed capital is paid out in portions on an undetermined schedule, makes it vital for the investor to hold sufficient levels of cash in order to meet the capital called by the fund manager. As an investor can hold several investments, it is important to predict future cash flows in order to manage cash effectively.

The purpose of this thesis is to increase the supporting institution's knowledge of cash flow predictions for investments in private equity. To do this, two models for cash flow prediction from the view of a limited partner, i.e. the investor, have been analysed and evaluated. The comparison is made between a deterministic model, the Yale model, currently used by the institution supporting this thesis, and a new stochastic model, the Stochastic model, implemented during the work of this thesis.

The evaluation of the models has been done with backtests and a coefficient of determination (R2) test on the Institution's portfolio. It is hard to draw an absolute conclusion on the performance of the two models, as they outperform each other in different periods. Overall, the Yale model was better than the Stochastic model in the conducted tests, but the Stochastic model offers desirable attributes from a risk management perspective that the deterministic model lacks. This gives the Stochastic model the potential to outperform the Yale model as a better option for cash flow simulation in private equity, provided a better parameter estimation.
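For context, the deterministic model referred to here as the Yale model is usually described in the literature as the Takahashi-Alexander cash-flow model. Below is a minimal sketch of its call/distribution/NAV recursion, with illustrative parameter values rather than those used by the supporting institution.

```python
def yale_model(commitment=100.0, rates_of_contribution=(0.25, 0.33, 0.5, 1.0),
               growth=0.10, yield_=0.0, lifetime=12, bow=2.5):
    """Deterministic Takahashi-Alexander style cash-flow projection for one fund."""
    paid_in, nav = 0.0, 0.0
    rows = []
    for t in range(1, lifetime + 1):
        rc = rates_of_contribution[t - 1] if t <= len(rates_of_contribution) else 1.0
        call = rc * (commitment - paid_in)        # capital call this period
        paid_in += call
        rd = max(yield_, (t / lifetime) ** bow)   # rate of distribution ("bow" shape)
        grown_nav = nav * (1 + growth)
        dist = rd * grown_nav                     # distribution this period
        nav = grown_nav + call - dist
        rows.append((t, call, dist, nav))
    return rows

for period, call, dist, nav in yale_model():
    print(period, round(call, 2), round(dist, 2), round(nav, 2))
```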
14

Schmitz, Anja. "Basic anesthesia skills simulation curriculum for medical students development and empirical evaluation based on an instructional design model /." [S.l. : s.n.], 2006. http://nbn-resolving.de/urn:nbn:de:bsz:16-opus-71870.

15

BAPAT, SACHIN VASUDEO. "THE PERFORMANCE EVALUATION OF VHDL-AMS SIMULATORS BY CREATING LARGE, SCALABLE VHDL-AMS MODELS." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1032179532.

16

Anissian, H. Lucas. "In vitro evaluation of hip prostheses /." Stockholm, 2001. http://diss.kib.ki.se/2001/20010420anis/.

17

Nader, Hannah Milad Bahaa. "Evaluation des simulations de feux de forêts." Thesis, Corte, 2015. http://www.theses.fr/2015CORT0005/document.

Abstract:
Performance evaluation of models is a fundamental step towards their efficient development and improvement. The research work presented in this manuscript is devoted to the evaluation of forest fire simulation models. A review of current work showed that, although many elements were available, no standardized and automated solution had been proposed in this field. A solution to this problem is therefore proposed, built upon a formal approach from the theory of modelling and simulation. This formal framework made it possible to identify conceptually which components should be developed and how they would be interconnected.

Realizing this framework first required a normalization of the available wildfire data, as no standard file format or even nomenclature was available and/or used by all modellers and engineers (for observations or simulation). A set of standard names and notations, together with a standard data encoding format in a scientific NetCDF container, is proposed along with associated software. A second step is devoted to the identification of the scoring methods required to quantify the simulation error. While four standard methods have been identified in the literature, we have shown that these methods are limited to comparison at a specific time and do not clearly report the performance of the simulation dynamics. This issue has been addressed by proposing two new specific score calculations. These different evaluation methods are implemented in an open-source computation library.

Finally, a model evaluation was performed using an implementation of the proposed experimental frame. This evaluation consisted in confronting four formulations of flame-front velocity models with 80 real wildfire simulations in a fully automatic way. Because the evaluation was automatic, no parameter adjustments could be performed by an operator after the fire was observed, making it more representative of an operational context with little information available immediately after a fire has been reported. The results show that this approach, while unable to provide an absolute measure of the model error, is capable of revealing a hierarchy of performance between parameterizations or formulations.
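As an illustration of the kind of snapshot score such an evaluation framework computes, a common way to compare a simulated burned area with an observed one at a given time is an overlap index such as the Jaccard coefficient. The sketch below uses tiny made-up rasters and does not reproduce the two new dynamic scores proposed in the thesis.

```python
def jaccard_score(simulated, observed):
    """Overlap between two boolean burn rasters given as same-shaped nested lists."""
    inter = union = 0
    for sim_row, obs_row in zip(simulated, observed):
        for s, o in zip(sim_row, obs_row):
            inter += s and o
            union += s or o
    return inter / union if union else 1.0

sim = [[0, 1, 1], [0, 1, 1], [0, 0, 1]]
obs = [[0, 1, 1], [1, 1, 0], [0, 0, 1]]
print(jaccard_score(sim, obs))  # 4 overlapping cells / 6 burned in either map => ~0.67
```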
18

Mrabet, Radouane. "Reusability and hierarchical simulation modeling of communication systems for performance evaluation: Simulation environment, basic and generic models, transfer protocols." Doctoral thesis, Universite Libre de Bruxelles, 1995. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/212586.

Abstract:
The main contribution of this thesis is the emphasis placed on the reusability concept, on one side, for designing a simulation environment, and on the other side, for defining two different levels of granularity for reusable network component libraries.

The design of our simulation environment, called AMS (Atelier for Modeling and Simulation), was based on existing pieces of software which have proved their usefulness in their respective fields. In order to carry out this integration efficiently, a modular structure of the atelier was proposed. The structure has been divided into four phases, each phase being responsible for a part of the performance evaluation cycle. The main novelty of this structure is the usage of a dedicated language as a means to define a clear border between the editing and simulation phases and to allow the portability of the atelier across different platforms. A prototype of the atelier has been developed on a SUN machine running the SunOS operating system. It is developed in the C language.

The kernel of the AMS is its library of Detailed Basic Models (DBMs). Each DBM was designed to comply with the most important criterion, which is reusability. Indeed, each DBM can be used in several network architectures and can be a component of generic and composite models. Before the effective usage of a DBM, it is verified and validated in order to increase the model's credibility. The most important contribution of this research is the definition of a methodology for modeling protocol entities as DBMs. We have thereby tried to partly bridge the gap between specification and modeling. This methodology is based on the concept of function. Simple functions are modeled as reusable modules and stored in a library. The Function Based Methodology was designed to help the modeler build, efficiently and rapidly, new protocols designed for the new generation of networks where several services can be provided. These new protocols can be dynamically tailored to the user's requirements.
19

Kahsu, Lidia. "Evaluation of a method for identifying timing models." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-15093.

Abstract:
In today's world, embedded systems with very large and highly configurable software, consisting of hundreds of tasks with many lines of code and mostly subject to real-time constraints, have replaced traditional systems. In real-time systems the worst-case execution time (WCET) of a program, i.e. the longest execution time of a specified task, is a crucial quantity. WCET is determined by WCET analysis techniques, and the values produced should be tight and safe to ensure the proper timing behavior of a real-time system. Static WCET analysis is one of the techniques used to compute upper bounds on the execution time of programs without actually executing them, relying instead on mathematical models of the software and the hardware involved. Such models can be used to generate timing estimations on source code level when the hardware is not yet fully accessible or the code is not yet ready to compile. In this thesis, the methods for building timing models developed by the WCET group at MDH have been assessed by evaluating the accuracy of the resulting timing models for a number of combinations of hardware architectures. Furthermore, the timing model identification is extended to further hardware platforms, such as a more advanced architecture with cache and pipeline, and to floating-point instructions by selecting benchmarks that use floating-point operations as well.
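A timing model of this general kind can be thought of as assigning a cost to each instruction category and fitting those costs to measured end-to-end execution times. The least-squares sketch below illustrates that idea only; the instruction categories and measurements are made-up placeholders and do not represent the MDH identification method itself.

```python
import numpy as np

# Hypothetical instruction-count matrix: one row per benchmark,
# one column per instruction category (e.g. ALU, load, store, branch).
counts = np.array([
    [1200, 300, 150, 200],
    [ 800, 900, 400, 100],
    [1500, 100,  80, 500],
    [ 600, 700, 600, 250],
    [2000, 250, 120, 300],
], dtype=float)

# Hypothetical measured execution times (cycles) for the same benchmarks on the target.
measured = np.array([2650.0, 4100.0, 2780.0, 3475.0, 3990.0])

# Fit per-category cycle costs so that counts @ costs approximates measured times.
costs, residuals, rank, _ = np.linalg.lstsq(counts, measured, rcond=None)
predicted = counts @ costs
print("estimated cycle costs per category:", costs)
print("relative prediction error:", np.abs(predicted - measured) / measured)
```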
20

Bosché, Kerry N. "An empirical evaluation of a factor effects screening procedure for exploring complex simulation models." Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2006. http://library.nps.navy.mil/uhtbin/hyperion/06Jun%5FBosche.pdf.

Abstract:
Thesis (M.S. in Applied Science (Operations Research))--Naval Postgraduate School, June 2006. Thesis Advisor(s): Susan M. Sanchez. "June 2006." Includes bibliographical references (p. 33-34). Also available in print.
21

Hernández, Aguilar José Ramón. "Computational and experimental evaluation of two models for the simulation of thermoplastics injection molding." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ64224.pdf.

22

Satish, Prabhu Nachiketh, and Ranjan Tunga Sarapady. "Evaluation of parametric CAD models from a manufacturing perspective to aid simulation driven design." Thesis, Linköpings universitet, Maskinkonstruktion, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167724.

Abstract:
Scania is known as one of the world's leading suppliers of transport solutions for heavy trucks and buses. Scania's goal is to develop combustion engines that achieve low pollutant emissions as well as a lower carbon footprint with higher efficiency. To achieve this, Scania has invested resources in simulation-driven design of parametric CAD models, which drives design innovation rather than merely following the design and enables the creation of flexible and robust models in the design process. This master's thesis was conducted in collaboration with Scania's exhaust aftertreatment systems department, focusing on developing a methodology to automatically evaluate the cost and manufacturability of a parametric model, intended for an agile working environment with fast iterations within Scania. From the data collected through a literature study, former thesis work, and interviews with designers and cost engineers at Scania, a method is proposed that can be implemented during the design process. The method involves four phases: a design phase, an analysis phase, a validation phase, and an improvement phase. The proposed method is evaluated to check its feasibility for assessing parametric CAD parts with respect to manufacturability and cost, and it is applied to two different parts of a silencer in a case study, mainly to evaluate the results from the improvement phase. The focus of this thesis is to realise the proposed method through simulation software, such as sheet-metal stamping/forming simulation and a cost evaluation tool, through which the simulation-driven design process is achieved. This is done by coupling the parametric CAD models with the above simulation software under a common MDO framework through DOE or optimisation study runs. The resulting designs are then considered improved in terms of manufacturability and cost.
23

Bosché, Kerry N. "An empirical evaluation of a factor effects screening procedure for exploring complex simulation models." Thesis, Monterey California. Naval Postgraduate School, 2006. http://hdl.handle.net/10945/2788.

Abstract:
Screening experiments are procedures designed to identify the most important factors in simulation models. Previously proposed one-stage procedures such as sequential bifurcation (SB) and controlled sequential bifurcation (CSB) require factor effects to be arranged according to estimated sign or magnitude prior to screening. FF-CSB is a two-stage screening procedure for simulation experiments proposed by Sanchez et al. (2005) which uses an efficient fractional factorial experiment to estimate factor effects automatically, removing the need for pre-estimation. Empirical results show that FF-CSB classifies factor effects as well as CSB in fewer runs when factors are only grouped by their sign (positive or negative). In theory, the procedure can achieve more efficient run times when factors are also sorted by estimated effect after the first stage. This analysis tests the efficiency and performance characteristics of a sorted FF-CSB procedure under a variety of conditions and finds that the procedure classifies factors as well as unsorted FF-CSB with significant improvement in run times. Additionally, various model- and user-determined scenarios are tested in an initial attempt to parameterize run times against parameters known or controlled by the modeler. Further experimentation is also suggested.
24

Kwon, Jaewook. "Evaluation of FDS V.4: Upward Flame Spread." Digital WPI, 2006. https://digitalcommons.wpi.edu/etd-theses/1022.

Abstract:
"NIST's Fire Dynamics Simulator (FDS) is a powerful tool for simulating the gas phase fire environment of scenarios involving realistic geometries. If the fire engineer is interested in simulating fire spread processes, FDS provides possible tools involving simulation of the decomposition of the condensed phase: gas burners and simplified pyrolysis models. Continuing to develop understanding of the capability and proper use of FDS related to fire spread will provide the practicing fire engineer with valuable information. In this work three simulations are conducted to evaluate FDS V.4's capabilities for predicting upward flame spread. The FDS predictions are compared with empirical correlations and experimental data for upward flame spread on a 5 m PMMA panel. A simplified flame spread model is also applied to assess the FDS simulation results. Capabilities and limitations of FDS V.4 for upward flame spread predictions are addressed, and recommendations for improvements of FDS and practical use of FDS for fire spread are presented."
25

Renwick, Randall R. "Evaluation of a crop simulation model for potatoes." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ41762.pdf.

26

Patrick, Hugh Alton Jr. "An Empirical Evaluation of Human Figure Tracking Using Switching Linear Models." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4838.

Abstract:
One of the difficulties of human figure tracking is that humans move their bodies in complex, non-linear ways. An effective computational model of human motion could therefore be of great benefit in figure tracking. We are interested in the use of a class of dynamic models called switching linear dynamic systems for figure tracking. This thesis makes two contributions. First, we present an empirical analysis of some of the technical issues involved with applying linear dynamic systems to figure tracking. The lack of high-level theory in this area makes this type of empirical study valuable and necessary. We show that sensitivity of these models to perturbations in input is a central issue in their application to figure tracking. We also compare different types of LDS models and identification algorithms. Second, we describe 2-DAFT, a flexible software framework we have created for figure tracking. 2-DAFT encapsulates data and code involved in different parts of the tracking problem in a number of modules. This architecture leads to flexibility and makes it easy to implement new tracking algorithms.
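To make the model class in the abstract above concrete: a switching linear dynamic system alternates between a small set of linear state-space regimes selected by a discrete Markov switch. The simulation sketch below is generic; the matrices and transition probabilities are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linear regimes (e.g. "slow" and "fast" limb motion), chosen by a Markov switch.
A = [np.array([[1.0, 0.1], [0.0, 1.0]]),      # regime 0: near-constant velocity
     np.array([[1.0, 0.3], [0.0, 0.9]])]      # regime 1: faster, damped velocity
Q = [0.01 * np.eye(2), 0.05 * np.eye(2)]      # process noise covariance per regime
P = np.array([[0.95, 0.05],                   # switch transition probabilities
              [0.10, 0.90]])

def simulate_slds(steps=50):
    x = np.zeros(2)
    s = 0
    states = []
    for _ in range(steps):
        s = rng.choice(2, p=P[s])                                   # discrete switch
        x = A[s] @ x + rng.multivariate_normal(np.zeros(2), Q[s])   # linear dynamics
        states.append((int(s), x.copy()))
    return states

for s, x in simulate_slds(5):
    print(s, x)
```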
27

Watson, William J. "An evaluation of growth simulation models for non-industrial private forest stewardship planning in southern Illinois /." Available to subscribers only, 2009. http://proquest.umi.com/pqdweb?did=1967797541&sid=2&Fmt=2&clientId=1509&RQT=309&VName=PQD.

28

Watson, William Joseph. "AN EVALUATION OF GROWTH SIMULATION MODELS FOR NON-INDUSTRIAL PRIVATE FOREST STEWARDSHIP PLANNING IN SOUTHERN ILLINOIS." OpenSIUC, 2009. https://opensiuc.lib.siu.edu/theses/143.

Abstract:
AN ABSTRACT OF THE THESIS OF WILLIAM J. WATSON, for the Master of Science degree in Forestry, presented on August 2009, at Southern Illinois University Carbondale. TITLE: An Evaluation of Growth Simulation Models for Non-Industrial Private Forest Stewardship Planning in Southern Illinois. MAJOR PROFESSOR: Dr. Andrew Carver. After timber is harvested, changes occur that should be recorded to provide guidelines and justification for future forest stewardship practices. In 1997, an inventory of the Scherrer property timber was conducted in order to record the impacts that might result in changes in a stand following a timber harvest and a timber stand improvement project. The timber was harvested in the summer of 1998 using a 20-inch diameter limit, with a lower limit on sugar maples and hickory species. Pre-harvest and post-harvest data were collected using a Biltmore cruising stick, diameter tape, 100' tape measure, forester's compass, and a 10-factor wedge prism. Pre-harvest data were collected on 71 plots, and post-harvest data were collected on 61 plots. The pre-harvest and post-harvest data were analyzed using the SILVAH computer program. The NED/SIPS program was used to run four growth simulators: FIBER, NW-TWIGS, SILVAH, and OAKSIM, each programmed for a five-year time horizon after harvest. The actual relative density measured before treatment was 77 percent per acre. The field-measured post-harvest density decreased to 57 percent. The computer-simulated relative density ranged between 31 and 69 percent per acre. The pre-harvest board-foot volume was 8266 board feet per acre (International ¼-inch scale). The actual field-measured post-harvest volume was 6591 board feet per acre. The range of simulated board-foot volumes was 2395 to 6295 board feet per acre. The results of the study support the validity of using the NW-TWIGS simulator for stewardship planning in Southern Illinois. The NW-TWIGS model yielded results closest to the actual field measurements; FIBER and SILVAH produced the same result, and OAKSIM was the least accurate. A study of the hardwood regeneration indicates that, at present, ash is the dominant species overall. The white oak, hickory, sugar maple, and black oak volumes decreased in composition within stands. Ash, northern red oak, cherrybark oak, sweetgum, and beech composition remained largely unchanged, whereas yellow poplar, black walnut and the chestnut oak group increased in percent presence within stands.
29

Imam, Bisher 1960. "Evaluation of disaggregation model in arid land stream flow generation." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277033.

Abstract:
A disaggregation model was tested for arid-land streamflow generation. The test was performed on data from the Black River, near Fort Apache, Arizona. The model was tested in terms of preserving the relevant historical statistics on both monthly and daily levels: the monthly time series were disaggregated into a random realization of their daily components, and the daily components were then reaggregated to yield monthly values. A computer model (DSGN) was developed to perform the model implementation. The model was written and executed on a Macintosh Plus personal computer. Data from two months were studied: the October data represented the low-flow season, while the April data represented the high-flow season. Twenty-five years of data for each month were used. The generated data for the two months were compared with the historical data.
30

Madeira, de Campos Velho Pedro Antonio. "Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle." Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM027/document.

Abstract:
Large-Scale Distributed Computing (LSDC) systems are in production today to solve problems that require huge amounts of computational power or storage. Such systems are composed of a set of computational resources sharing a communication infrastructure. In such systems, as in any computing environment, specialists need to conduct experiments to validate alternatives and compare solutions. However, due to the distributed nature of the resources, performing experiments in LSDC environments is hard and costly. In such systems, the execution flow depends on the order of events, which is likely to change from one execution to another. Consequently, it is hard to reproduce experiments, hindering the development process. Moreover, resources are very likely to fail or go off-line. LSDC architectures are also shared, and interference among different applications, or even among processes of the same application, affects the overall application behavior. Finally, LSDC applications are time consuming, so conducting many experiments with several parameters is often unfeasible. For all these reasons, experiments in LSDC often rely on simulation.

Today we find many simulation approaches for LSDC. Most of them target specific architectures, such as cluster, grid or volunteer computing, and each simulator claims to be better adapted to a particular research purpose. Nevertheless, these simulators must address the same problems: modeling the network and managing computing resources. Moreover, they must satisfy the same requirements, providing fast, accurate, scalable and repeatable simulations. To meet these requirements, LSDC simulation uses models to approximate the system behavior, neglecting some aspects to focus on the desired phenomena. However, models may be wrong, and when this is the case, trusting models leads to random conclusions. In other words, we need evidence that the models are accurate in order to accept the conclusions supported by simulated results. Although many simulators exist for LSDC, studies of their accuracy are rarely found.

In this thesis, we are particularly interested in analyzing and proposing accurate models that respect the requirements of LSDC research. To this end, we propose an accuracy evaluation study to verify common and new simulation models. Throughout this document, we propose model improvements to mitigate the simulation error of LSDC simulation, using SimGrid as a case study. We also evaluate the effect of these improvements on scalability and speed. As a main contribution, we show that intuitive models have better accuracy, speed and scalability than other state-of-the-art models. These better results are achieved by performing a thorough and systematic analysis of problematic situations. This analysis reveals that many small yet common phenomena had been neglected in previous models and had to be accounted for to design sound models.
31

San, Jose Angel. "Analysis, design, implementation and evaluation of graphical design tool to develop discrete event simulation models using event graphs and SIMKIT." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2001. http://handle.dtic.mil/100.2/ADA397405.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, Sept. 2001. Thesis advisor(s): Buss, Arnold; Miller, Nita. "September 2001." Includes bibliographical references (p. 109-110). Also available in print.
32

Karnon, J. D. "Economic evaluation of health care technologies : a comparison of alternative decision modelling techniques." Thesis, Brunel University, 2001. http://bura.brunel.ac.uk/handle/2438/4806.

Abstract:
The focus of this thesis is on the application of decision models to the economic evaluation of health care technologies. The primary objective addresses the correct choice of modelling technique, as the attributes of the chosen technique could have a significant impact on the process, as well as the results, of an evaluation. Separate decision models, a Markov process and a discrete event simulation (DES) model, are applied to a case study evaluation comparing alternative adjuvant therapies for early breast cancer. The case study models are built and analysed as stochastic models, whereby probability distributions are specified to represent the uncertainty about the true values of the model input parameters. Three secondary objectives are also specified. Firstly, the empirical application of the alternative decision models requires the specification of a 'modelling process' that is not well defined in the health economics literature. Secondly, a comparison of alternative methods for specifying probability distributions to describe the uncertainty in the model's input parameters is undertaken. The final secondary objective covers the application of methods for valuing the collection of additional information to inform the resource allocation decision. The empirical application of the two relevant modelling techniques clarifies the potential advantages derived from the increased flexibility provided by DES over Markov models. The thesis concludes that the use of DES should be strongly considered if either of the following issues appears relevant: model parameters are a function of the time spent in particular states, or the data describing the timing of events are not in the form of transition probabilities. The full description of the modelling process provides a resource for health economists wanting to use decision models. No definitive process is established, however, as there exist competing methods for various stages of the modelling process. The main conclusion from the comparison of methods for specifying probability distributions around the input parameters is that the theoretically specified distributions are most likely to provide a common baseline for comparisons between evaluations. The central question that remains to be addressed is which method is the most theoretically correct. The application of a value-of-information (VoI) analysis provides useful insights into the methods employed and leads to the identification of particular methodological issues requiring future research in this area.
33

Cornwall, Maxwell W. "MEEBS a model for multi-echelon evaluation by simulation /." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA237099.

Abstract:
Thesis (M.S. in Management)--Naval Postgraduate School, June 1990. Thesis Advisor(s): McMasters, Alan W.; Bailey, Michael P. "June 1990." Description based on signature page as viewed on October 21, 2009. DTIC Identifier(s): Computerized simulation, logistics management. Author(s) subject terms: Multi-echelon, simulation, SLAM II, models. Includes bibliographical references (p. 142-147). Also available in print.
34

Razmpa, Ali. "An Assessment of Post-Encroachment Times for Bicycle-Vehicle Interactions Observed in the Field, a Driving Simulator, and in Traffic Simulation Models." PDXScholar, 2016. https://pdxscholar.library.pdx.edu/open_access_etds/3379.

Abstract:
Most safety analysis is conducted using crash data. Surrogate safety measures, such as various time-based measures of time-to-collision, can be related to crash potential and used to gain insight into the frequency and severity of crashes at a specific location. One of the most common and acknowledged measures is post-encroachment time (PET), which is defined as the time between vehicles occupying a conflicting space. While commonly used in studies of motor vehicle interactions, studies of PET for bicycle-vehicle interactions are few. In this research, the PETs of bicycle-vehicle interactions measured in the field, in a driving simulator, and in a micro-simulation are compared. A total of 52 right-hook conflicts were identified in 135 hours of video footage over 14 days at a signalized intersection in Portland, OR (SW Taylor and SW Naito Pkwy). The results showed that 4 of 17 high-risk conflicts could not be identified by the conventional definition of PET, and the PET values of some conflicts did not reflect the true risk of collision. Therefore, right-hook conflicts were categorized into two types and a modified measure of PET was proposed so that their frequency and severity could be properly measured. PETs from the field were then compared to those measured in the Oregon State University driving simulator during research conducted by Hurwitz et al. (2015) studying right-hook conflicts. Statistical and graphical methods were used to compare field PETs to those in the simulator. The results suggest that the relative validity of the OSU driving simulator was good but not conclusive, due to differences in traffic conditions and intersections. To further explore the field-observed PET values, traffic simulation models of the field intersection were developed and calibrated. Right-hook conflicts were extracted from the simulation files, and conflicts observed in PM-peak hours over 6 days in the field were compared to those obtained from 24 traffic simulation runs. The field-observed PET values did not match the simulated values very well; however, the approach does show promise, and further calibration of driving and bicycling behaviors would likely improve the results.
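As a reminder of what the conventional measure captures, post-encroachment time is simply the elapsed time between the first road user leaving the conflict area and the second one entering it. The sketch below illustrates that definition only; the event records and the severity threshold are illustrative assumptions, not values from the study.

```python
from dataclasses import dataclass

@dataclass
class ConflictEvent:
    first_exit_s: float    # time the first road user leaves the conflict area [s]
    second_entry_s: float  # time the second road user enters the same area [s]

def post_encroachment_time(event: ConflictEvent) -> float:
    """PET = time gap between occupation of the conflicting space by the two users."""
    return event.second_entry_s - event.first_exit_s

events = [ConflictEvent(12.4, 13.1), ConflictEvent(40.0, 44.9), ConflictEvent(71.2, 71.6)]
pets = [post_encroachment_time(e) for e in events]
high_risk = [p for p in pets if p < 1.5]   # illustrative severity threshold, not from the thesis
print(pets, high_risk)
```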
35

Tetsoane, S. T., Y. E. Woyessa, and W. A. Welderufael. "Evaluation of the SWAT model in simulating catchment hydrology : case study of the Modder River Basin." Interim : Interdisciplinary Journal, Vol 13, Issue 3: Central University of Technology Free State Bloemfontein, 2013. http://hdl.handle.net/11462/313.

Abstract:
This paper presents the set-up and the performance of the SWAT model in the Modder River Basin. Two techniques widely used in evaluating hydrological models, namely quantitative statistics and graphical techniques, were used to evaluate the performance of the SWAT model. The three quantitative statistics used were the Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS), and the ratio of the root mean square error to the standard deviation of measured data (RSR). The performance of the model was compared with the recommended statistical performance ratings for monthly time-step data. The model performed well when compared against the monthly model performance ratings during the calibration and validation stages.
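Since the abstract names the three statistics without giving their forms, here is a minimal sketch of their standard definitions as commonly used for hydrological model evaluation; the observed and simulated flow values are placeholders, not Modder River data.

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; values <= 0 mean the observed mean predicts as well."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias; with this sign convention, positive values indicate underestimation."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    mean_obs = sum(obs) / len(obs)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))
    sd = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return rmse / sd

obs = [3.1, 5.4, 2.2, 8.0, 4.3]   # placeholder monthly flows
sim = [2.8, 5.9, 2.5, 7.1, 4.6]
print(nse(obs, sim), pbias(obs, sim), rsr(obs, sim))
```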
36

Jeton, Anne Elizabeth 1956. "Vegetation management and water yield in a southwestern ponderosa pine watershed: An evaluation of three hydrologic simulation models." Thesis, The University of Arizona, 1990. http://hdl.handle.net/10150/277298.

Abstract:
Three hydrologic simulation models of different resolutions were evaluated to determine model response in predicting runoff under changing vegetation cover. Two empirically based regression models (the Baker-Kovner Streamflow Regression Model and ECOSIM) and one multiple-component water balance model (Yield) were modified using FORTRAN 77 and calibrated on a southwestern ponderosa pine ecosystem. Statistical analysis indicates no significant difference between the Baker-Kovner and Yield models, while ECOSIM consistently under-predicts by as much as 50 percent from the observed runoff. This is mainly attributed to a sensitivity to the insolation factor. Yield is the best predictor for moderate and high flows, to within 10 and 20 percent respectively. Of the four watershed treatments, the light overstory thinning on Watershed 8 yielded the best response for all three models. This is in contrast to the strip-cut treatment on Watershed 14, which consistently over-predicted, in large part due to an inaccurate estimation of snowpack evaporation on the exposed, south-facing strip-cuts. Runoff responses are highly influenced by the precipitation regime and the soil and topographic characteristics of a watershed, as well as by the reduction in evapotranspiration losses from changes in vegetation cover.
37

Lu, Ming. "System Dynamics Model for Testing and Evaluating Automatic Headway Control Models for Trucks Operating on Rural Highways." Diss., This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-01292008-113749/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Ren, Weijia. "Impact of Design Features for Cross-Classified Logistic Models When the Cross-Classification Structure Is Ignored." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1322538958.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Kreku, J. (Jari). "Early-phase performance evaluation of computer systems using workload models and SystemC." Doctoral thesis, Oulun yliopisto, 2012. http://urn.fi/urn:isbn:9789514299902.

Full text
Abstract:
Novel methods and tools are needed for the performance evaluation of future embedded systems due to the increasing system complexity. Systems accommodate a large number of on-terminal and/or downloadable applications offering users numerous services related to telecommunication, audio and video, digital television, internet and navigation. More flexibility, scalability and modularity are expected from execution platforms to support applications. Digital processing architectures will evolve from the current system-on-chips to massively parallel computers consisting of heterogeneous subsystems connected by a network-on-chip. As a consequence, the overall complexity of system evaluation will increase by orders of magnitude. The ABSOLUT performance simulation approach presented in this thesis combats evaluation complexity by abstracting the functionality of the applications with workload models consisting of instruction-like primitives. Workload models can be created from application specifications, measurement results, execution traces, or the source code. The complexity of execution platform models is also reduced since the data paths of processing elements need not be modelled in detail and data transfers and storage are simulated only from the performance point of view. The modelling approach enables early evaluation since mature hardware or software is not required for the modelling or simulation of complete systems. ABSOLUT is applied to a number of case studies including mobile phone usage, MP3 playback, MPEG4 encoding and decoding, 3D gaming, virtual network computing, and parallel software-defined radio applications. The platforms used in the studies represent both embedded systems and personal computers, and at the same time both currently existing platforms and future designs. The results obtained from simulations are compared to measurements from real platforms, which reveals an average difference of 12% in the results. This exceeds the accuracy requirements expected from virtual system-based simulation approaches intended for early evaluation.
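As a rough illustration of the workload-model idea (not ABSOLUT's actual SystemC interface), an application can be reduced to counts of instruction-like primitives that a simple capacity model of the processor turns into an execution-time estimate; all primitive names, cycle costs and the example workload here are assumptions.

```python
# An application abstracted into instruction-like primitives, "executed" on a capacity model.
CYCLES_PER_PRIMITIVE = {"load": 4, "store": 5, "alu": 1, "branch": 2}

class ProcessorCapacityModel:
    def __init__(self, clock_hz, cycles_per_primitive=CYCLES_PER_PRIMITIVE):
        self.clock_hz = clock_hz
        self.cycles_per_primitive = cycles_per_primitive

    def run(self, workload):
        """workload: list of (primitive, count) pairs; returns simulated time in seconds."""
        cycles = sum(self.cycles_per_primitive[p] * n for p, n in workload)
        return cycles / self.clock_hz

# A hypothetical workload model of decoding one audio frame
frame_workload = [("load", 12_000), ("alu", 45_000), ("store", 6_000), ("branch", 3_000)]
cpu = ProcessorCapacityModel(clock_hz=600e6)
print(f"estimated time per frame: {cpu.run(frame_workload) * 1e6:.1f} us")
```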
APA, Harvard, Vancouver, ISO, and other styles
40

Gustafsson, Magnus. "Evaluation of StochSD for Epidemic Modelling, Simulation and Stochastic Analysis." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-426227.

Full text
Abstract:
Classical Continuous System Simulation (CSS) is restricted to modelling continuous flows and therefore cannot correctly realise a conceptual model with discrete objects. The development of Full Potential CSS solves this problem by (1) handling discrete quantities as discrete and continuous matter as continuous, (2) preserving the sojourn time distribution of a stage, (3) implementing attributes correctly, and (4) describing different types of uncertainties in a proper way. In order to apply Full Potential CSS, a new software package, StochSD, has been developed. This thesis evaluates StochSD's ability to model Full Potential CSS, where points 1-4 above are included. As a test model, a well-defined conceptual epidemic model, which includes all aspects of Full Potential CSS, was chosen. The study was performed by starting with a classical SIR model and then stepwise adding the different aspects of the conceptual model. The effects of each step were demonstrated in terms of the size and duration of the epidemic. Finally, the conceptual model was also realised as an Agent Based Model (ABM). The results from 10 000 replications each of the CSS and ABM models were compared and no statistical differences could be confirmed. The conclusion is that StochSD passed the evaluation.
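A minimal sketch of the kind of discrete, stochastic SIR realisation the thesis builds up to is shown below; the binomial-chain formulation and parameter values are illustrative assumptions, not StochSD's implementation.

```python
# Discrete-entity, stochastic SIR: individuals are handled as discrete counts and
# transitions as random draws, then replicated to study epidemic size and duration.
import numpy as np

def stochastic_sir(S=990, I=10, R=0, beta=0.3, gamma=0.1, dt=1.0, t_max=200, rng=None):
    rng = rng or np.random.default_rng()
    N = S + I + R
    history = [(0.0, S, I, R)]
    t = 0.0
    while t < t_max and I > 0:
        p_inf = 1.0 - np.exp(-beta * I / N * dt)   # per-susceptible infection probability
        p_rec = 1.0 - np.exp(-gamma * dt)          # per-infective recovery probability
        new_inf = rng.binomial(S, p_inf)           # discrete individuals, not a continuous flow
        new_rec = rng.binomial(I, p_rec)
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        t += dt
        history.append((t, S, I, R))
    return history

# Replicate to obtain the distribution of final epidemic size
sizes = [hist[-1][3] for hist in (stochastic_sir() for _ in range(1000))]
print("mean final size:", np.mean(sizes))
```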
APA, Harvard, Vancouver, ISO, and other styles
41

Bobe, Bedadi Woreka. "Evaluation of soil erosion in the Harerge region of Ethiopia using soil loss models, rainfall simulation and field trails." Pretoria : [s.n.], 2004. http://upetd.up.ac.za/thesis/available/etd-08022004-141533.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Bobe, Bedadi Woreka. "Evaluation of soil erosion in the Harerge region of Ethiopia using soil loss models, rainfall simulation and field trials." Thesis, University of Pretoria, 2004. http://hdl.handle.net/2263/26929.

Full text
Abstract:
Accelerated soil erosion is one of the major threats to agricultural production in Ethiopia, and the Harerge region is no exception. It is estimated that about 1.5 billion tonnes of soil are eroded every year in Ethiopia. In extreme cases, especially in the highlands, the rate of soil loss is estimated to reach up to 300 t ha-1 yr-1, with an average of about 70 t ha-1 yr-1, which is beyond any tolerable level. The government has made different attempts to avert the situation since 1975 through the initiation of a massive program of soil conservation and rehabilitation of severely degraded lands. Despite considerable efforts, the achievements were far below expectations. This study was aimed at assessing the effect of some soil properties, rainfall intensity and slope gradients on surface sealing, soil erodibility, runoff and soil loss from selected sites in the Harerge region, eastern Ethiopia, using simulated rainfall. Soil loss was also estimated for the sites using the Soil Loss Estimation Model for Southern Africa (SLEMSA) and the Universal Soil Loss Equation (USLE). Moreover, the effectiveness of various rates and patterns of wheat residue mulching in controlling soil loss was also evaluated for one of the study sites (i.e. the Regosol of Alemaya University) under both rainfall simulation and field natural rainfall conditions. For most of the erosion parameters, the interaction among soil texture, slope gradient and rainfall intensity was significant. In general, however, high rainfall intensity induced high runoff, sediment yield and splash. The effect of slope gradients on most of the erosion parameters was not significant, as the slope length was too small to bring about a concentrated flow. The effect of soils dominated by any one of the three soil separates on the erosion parameters was largely dependent on rainfall intensity and slope gradient. The soils from the 15 different sites in Harerge showed different degrees of vulnerability to surface sealing, runoff and sediment yield. These differences were associated with various soil properties. Correlation of soil properties to the erosion parameters revealed that aggregate stability was the main factor that determined the susceptibility of soils to sealing, runoff and soil loss. This was in turn affected by organic carbon content, percent clay and exchangeable sodium percentage (ESP). Soils with relatively high ESP, such as those at Babile (13.85) and Gelemso (7.18), were among the lowest in their aggregate stability (percent water-stable aggregates of 0.25-2.0 mm diameter) and had the highest runoff and sediment yield as compared to other soils in the study. Similarly, most of the soils with relatively low ESP, high organic carbon content (OC%) and high water-stable aggregates, such as Hamaressa, AU (Alemaya University) Vertisol and AU Regosol, were among the least susceptible to sealing and interrill erosion. Nevertheless, some exceptions include soils like those of Hirna, where high runoff was recorded despite relatively high OC%, low ESP and high water-stable aggregates. Both the SLEMSA and USLE models were able to identify the erosion hazards for the study sites. Despite the differences in the procedures of the two models, a significant correlation (r = 0.87) was observed between the values estimated by the two methods. Both models estimated higher soil loss for Gelemso, Babile, Karamara and Hamaressa. Soil loss was lower for Diredawa, AU-Vertisol and AU-Alluvial, all of which occur on relatively low slope gradients.
The high soil loss for Babile and Gelemso conforms with the relative soil erodibility values obtained under rainfall simulation, suggesting that soil erodibility, among other factors, is the main contributor to high soil loss for these soils. The difference in the estimated soil losses for the different sites was a function of the interaction of the various factors involved. Though the laboratory soil erodibility values were low to medium for Hamaressa and Karamara, the estimated soil loss was higher owing to field topographic situations such as high slope gradient. SLEMSA and USLE showed different degrees of sensitivity to their input variables for the conditions of the study sites. SLEMSA was highly sensitive to changes in rainfall kinetic energy (E) and soil erodibility (F) and less sensitive to the cover and slope length factors. The sensitivity of SLEMSA to changes in the cover factor was higher for areas having initially smaller percentage rainfall interception values. On the other hand, USLE was highly sensitive to slope gradient and less so to slope length as compared to the other input factors. The study on the various rates and application patterns of wheat residue on runoff and soil loss, both in the laboratory rainfall simulation and under field natural rainfall conditions, revealed that surface application of crop residue is more effective in reducing soil loss and runoff than incorporating the same amount of residue into the soil. Likewise, for a particular residue application method, runoff and soil loss decreased with increasing application rate of the mulch. However, the difference between the 4 Mg ha-1 and 8 Mg ha-1 wheat straw rates was not significant, suggesting that the former can effectively control soil loss and can be used in areas where crop residues are limited, provided that other conditions are similar to those of the study site (AU Regosol). The effectiveness of lower rates of straw (i.e. less than 4 Mg ha-1) should also be studied. It should, however, be noted that the effectiveness of mulching in controlling soil loss and runoff could differ under various slope gradients, rainfall characteristics and cover types that were not covered in this study. Integrated soil and water conservation research is required to develop a comprehensive database for modelling various soil erosion parameters. Further research is therefore required on the effect of soil properties (with special emphasis on aggregate stability, clay mineralogy, exchangeable cations, soil texture and organic matter), types and rates of crop residues, cropping and tillage systems, and mechanical and biological soil conservation measures on soil erosion and its conservation for a better estimation of the actual soil loss in the study sites.
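For orientation, the USLE referenced in the abstract predicts mean annual soil loss as a product of factors, A = R K LS C P; the sketch below uses placeholder factor values, not the site-specific values derived in the thesis.

```python
def usle_soil_loss(R, K, LS, C, P):
    """A = R * K * LS * C * P (mean annual soil loss in t ha-1 yr-1 with metric factors)."""
    return R * K * LS * C * P

A = usle_soil_loss(R=4500.0,   # rainfall erosivity factor
                   K=0.030,    # soil erodibility factor
                   LS=2.1,     # slope length and steepness factor
                   C=0.25,     # cover-management factor
                   P=0.8)      # support practice factor
print(f"predicted soil loss: {A:.1f} t/ha/yr")
```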
APA, Harvard, Vancouver, ISO, and other styles
43

Nguyen, Diep Thi. "Statistical Models to Test Measurement Invariance with Paired and Partially Nested Data: A Monte Carlo Study." Scholar Commons, 2019. https://scholarcommons.usf.edu/etd/7869.

Full text
Abstract:
When assessing the emotions, behaviors or performance of preschoolers and young children, scores from adult informants, such as parent, psychiatrist and teacher ratings, are used rather than scores from the children themselves. Data from parent ratings, or from parents and teachers, are often nested, for example students within teachers and a child within their parents. This common nested structure of data in educational, social and behavioral sciences makes measurement invariance (MI) testing across informants of children methodologically challenging. There has been a lack of studies that take into account the nested structure of data in MI testing for multiple adult informants, and especially no simulation study that examines the performance of different models used to test MI across different raters. This dissertation focused on two specific types of nested data in testing MI between adult raters of children: paired and partially nested data. For the paired data, the independence assumption of regular MI testing is often violated because the two informants (e.g., father and mother) rate the same child and their scores are anticipated to be related or dependent. The partially nested data refer to the research situation where teacher and parent ratings are compared. In this scenario, it is common that each parent has only one child to rate while each teacher has multiple children in their classroom. Thus, in the case of teacher and parent ratings of the same children, the data are repeated measures and also partially nested. Because of these unique features of the data, MI testing between adult informants of children requires statistical models that take into account different types of data dependency. I proposed and evaluated the performance of two statistical models that can handle repeated measures and partial nesting, in addition to one commonly used and one potentially appropriate statistical model, across several simulated research scenarios. Results of the two simulation studies in this dissertation showed that for the paired data, both the multiple-group confirmatory factor analysis (CFA) and the repeated measure CFA models were able to detect scalar invariance most of the time using the Δχ2 test and ΔCFI. Although the multiple-group CFA (Model 2) was able to detect scalar invariance better than the repeated measure CFA model (Model 1), the detection rates of Model 1 were still high (88%-91% using the Δχ2 test and 84%-100% using ΔCFI or ΔRMSEA). For the configural invariance and metric invariance conditions with the paired data, Model 1 had a higher detection rate than Model 2 in almost every research scenario examined in this dissertation. In particular, while Model 1 could detect noninvariance (either in intercepts only or in both intercepts and factor loadings) most of the time for paired data, Model 2 could rarely catch it when using the suggested cut-off of 0.01 for RMSEA differences. For the paired data, although both Models 1 and 2 could be a good choice to test measurement invariance, Model 1 might be favored if researchers are more interested in detecting noninvariance, due to its overall high detection rates for all three levels (i.e. configural, metric, and scalar) of measurement invariance. For scalar invariance with partially nested data, both the multilevel repeated measure CFA and the design-based multilevel CFA could detect invariance most of the time (from 81% to 100% of examined cases), with a slightly higher detection rate for the former model than the latter.
The multiple-group CFA model hardly detected scalar invariance except when the ICC was small. The detection rates for configural invariance using the Δχ2 test or Satorra-Bentler LRT were also highest for Model 3 (82% to 100%, except for two conditions with detection rates of 61%), followed by Model 5 and lowest for Model 4. Models 4 and 5 could reach these rates only with the largest sample sizes (i.e., a large number of clusters, a large cluster size, or both) when the magnitude of noninvariance was small. Unlike scalar and configural invariance, the ability to detect metric invariance was highest for Model 4, followed by Model 5 and lowest for Model 3, across many conditions using all three performance criteria. Given its high detection rates for configural and scalar invariance and moderate detection rates for many metric invariance conditions (except cases of a small number of clusters combined with a large ICC), Model 3 could be a good candidate to test measurement invariance with partially nested data when there is a sufficient number of clusters, or when a small number of clusters is combined with a small ICC. Model 5 might also be a reasonable option for this type of data if both the number of clusters and the cluster size are large (i.e., 80 and 20, respectively), or if either one of these two factors is large coupled with a small ICC. If the ICC is not small, it is recommended to have a large number of clusters, or a combination of a large number of clusters and a large cluster size, to ensure high detection rates of measurement invariance for partially nested data. As the multiple-group CFA had better and reasonable detection rates than the design-based and multilevel repeated measure CFA models across configural, metric and scalar invariance under the conditions of a small cluster size (10) and a small ICC (0.13), researchers can consider using this model to test measurement invariance when they can only collect 10 participants within a cluster (e.g. students within a classroom) and there is a small degree of data dependency (e.g. small variance between clusters) in the data.
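The decision rules behind the reported detection rates compare a more constrained model (e.g., scalar) against a less constrained one (e.g., metric) with a χ2 difference test and change-in-fit-index cut-offs; the sketch below illustrates those rules with made-up fit values and the commonly used 0.01 cut-off for ΔCFI.

```python
# Illustration of Δχ2 and ΔCFI decision rules for flagging noninvariance; fit values invented.
from scipy.stats import chi2

def invariance_flags(chisq_constrained, df_constrained, chisq_free, df_free,
                     cfi_constrained, cfi_free, alpha=0.05, cfi_cutoff=0.01):
    d_chisq = chisq_constrained - chisq_free    # constraints can only worsen (raise) chi-square
    d_df = df_constrained - df_free
    p_value = chi2.sf(d_chisq, d_df)            # chi-square difference test
    d_cfi = cfi_free - cfi_constrained          # drop in CFI caused by the constraints
    return {"chisq_diff_rejects": p_value < alpha,
            "cfi_diff_rejects": d_cfi > cfi_cutoff,
            "p_value": p_value, "delta_cfi": d_cfi}

print(invariance_flags(chisq_constrained=312.4, df_constrained=130,
                       chisq_free=289.7, df_free=124,
                       cfi_constrained=0.941, cfi_free=0.955))
```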
APA, Harvard, Vancouver, ISO, and other styles
44

Lo, Chang-yun. "Optimizing ship air-defense evaluation model using simulation and inductive learning." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26678.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Molten, K. W. "Development and preliminary evaluation of the simulation model C-maize VT1.0." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/50065.

Full text
Abstract:
C-Maize VT1.0 is a simulation model of corn (Zea mays) growth in a soil-plant-atmosphere continuum. The purpose of developing the C-Maize VT1.0 simulation model was to provide an additional tool to researchers investigating the effects of water and nitrogen stress on corn growth and the movement of water and nitrates in the soil. The user may select either a 1-dimensional or a 2-dimensional approach to the simulation of the soil system. After an initial series of runs and a preliminary assessment of the model's credibility, it was concluded that the 2-dimensional approach provided a 'sufficiently credible' solution to modeling all aspects of the soil-plant-atmosphere system. The 1-dimensional approach as currently programmed provides a 'non-credible' solution: it failed to adequately simulate the soil subsystem and failed to simulate the plant's response to water and nitrogen stresses.
APA, Harvard, Vancouver, ISO, and other styles
46

Ma, Jiajie. "Accuracy and reliability of non-linear finite element analysis for surgical simulation." University of Western Australia. School of Mechanical Engineering, 2006. http://theses.library.uwa.edu.au/adt-WU2010.0089.

Full text
Abstract:
In this dissertation, the accuracy and reliability of non-linear finite element computations in application to surgical simulation are evaluated. The evaluation is performed through comparison between experiment and finite element analysis of the indentation of a soft tissue phantom and a human brain phantom. The evaluation is done in terms of the forces acting on the cylindrical aluminium indenter and the deformation of the phantoms due to these forces. The deformation of the phantoms is measured by tracking the 3D motions of X-ray opaque markers implanted in the direct neighbourhood under the indenter, using a custom-made biplane X-ray image intensifier (XRII) system. The phantoms are made of Sylgard® 527 gel to simulate the hyperelastic constitutive behaviour of brain tissue. The phantoms are prepared layer by layer to facilitate the implantation of the X-ray opaque markers. The modelling of soft tissue phantom indentation and human brain phantom indentation is performed using the ABAQUS/Standard finite element solver. A realistic geometry model of the human brain phantom obtained from Magnetic Resonance images is used. Specific constitutive properties of the phantom layers, determined through uniaxial compression tests, are used in the model. The models accurately predict the indentation force-displacement relations and marker displacements in both soft tissue phantom indentation and human brain phantom indentation. The good agreement between the experimental and modelling results verifies the reliability and accuracy of the finite element analysis techniques used in this study and confirms the predictive power of these techniques in application to surgical simulation.
APA, Harvard, Vancouver, ISO, and other styles
47

Lindfeldt, Olov. "Railway operation analysis : Evaluation of quality, infrastructure and timetable on single and double-track lines with analytical models and simulation." Doctoral thesis, KTH, Trafik och Logistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-12727.

Full text
Abstract:
This thesis shows the advantages of simple models for analysis of railway operation. It presents two tools for infrastructure and timetable planning. It shows how the infrastructure can be analysed through fictive line designs, how the timetable can be treated as a variable and how delays can be used as performance measures. The thesis also gives examples of analyses of complex traffic situations through simulation experiments. Infrastructure configuration, timetable design and delays play important roles in the competitiveness of railway transportation. This is especially true on single-track lines, where the run times and other timetable-related parameters are severely restricted by crossings (train meetings). The first half of this thesis focuses on the crossing time, i.e. the time loss that occurs in crossing situations. A simplified analytical model, SAMFOST, has been developed to calculate the crossing time as a function of infrastructure configuration, vehicle properties, timetable and delays for two crossing trains. Three measures of timetable flexibility are proposed; they can be used to evaluate how infrastructure configuration, vehicle properties, punctuality, etc. affect the possibilities to alter the timetable. Double-track lines operated with mixed traffic show properties similar to those of single tracks. In this case overtakings imply scheduled delays as well as a risk of delay propagation. Two different methods are applied for analysis of double tracks: a combinatorial, mathematical model (TVEM) and simulation experiments. TVEM, the Timetable Variant Evaluation Model, is a generic model that systematically generates and evaluates timetable variants. This method is especially useful for mixed traffic operation, where the impact of the timetable is considerable. TVEM may also be used for evaluation of different infrastructure designs. Analyses performed in TVEM show that the impact on capacity from the infrastructure increases with speed differences and frequency of service for the passenger trains, whereas the impact of the timetable is strongest when the speed differences are low and/or the frequency of passenger services is low. Simulation experiments were performed to take delays and perturbations into account. A simulation model was set up in the micro-simulation tool RailSys and calibrated against real operational data. The calibrated model was used for multi-factor analysis through experiments where infrastructure, timetable and perturbation factors were varied according to an experimental design and evaluated through response surface methods. The additional delay was used as the response variable. Timetable factors, such as the frequency of high-speed services and freight train speed, turned out to be of great importance for the additional delay, whereas some of the perturbation factors, i.e. entry delays, only showed a minor impact. The infrastructure factor, distance between overtaking stations, showed complex relationships with several interactions, principally with timetable factors.
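As a toy illustration of the crossing-time concept (not the SAMFOST model), the time loss for the non-priority train can be approximated from its braking, waiting and re-acceleration phases relative to a non-stop run over the same distance; all numbers below are invented.

```python
def crossing_time_loss(line_speed_kmh=160.0,  # line speed of the non-priority train
                       brake_decel=0.6,       # braking deceleration, m/s^2
                       accel=0.5,             # acceleration back to line speed, m/s^2
                       wait_time=120.0):      # s standing until the opposing train has cleared
    v = line_speed_kmh / 3.6
    t_brake = v / brake_decel
    t_accel = v / accel
    dist = 0.5 * v * (t_brake + t_accel)      # distance covered while braking and re-accelerating
    t_nonstop = dist / v                      # time a non-stopping train would need for that distance
    return (t_brake + t_accel + wait_time) - t_nonstop

print(f"time loss per crossing: {crossing_time_loss():.0f} s")
```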
APA, Harvard, Vancouver, ISO, and other styles
48

Sefastsson, Ulf. "Evaluation of Missile Guidance and Autopilot through a 6 DOF Simulation Model." Thesis, KTH, Optimeringslära och systemteori, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-188897.

Full text
Abstract:
Missile guidance and autopilot design have been active fields of research since the Second World War. There is a large body of literature on the subjects, but the bulk of it is confined to overly simplified models, and publications applying the methods to more realistic models are therefore scarce. In this report, a nonlinear 6 DOF simulation model of a tail-controlled air-to-air missile is considered. Through several assumptions and simplifications, a linearized approximation of the plant is obtained, which is then used in the implementation of 5 guidance laws and 2 autopilots. The guidance laws are all based on a linearized collision geometry, and the autopilots are based on model predictive control (MPC). Both autopilots use linear quadratic MPC (LQMPC), and one is more robust to modelling errors than the conventional LQMPC. The guidance laws and autopilots are then evaluated with respect to performance in terms of miss distance in 4 interception scenarios with a moving target. The results show that in this model the autopilots perform equally well, and that the guidance laws with more information about the target generally exhibit smaller miss distances, but at the cost of a considerably larger flight time for some scenarios. The conclusions are that the simplifying assumptions in the modelling are legitimate and that the challenges of missile control probably do not lie in the guidance or autopilot, but rather in the target tracking. It is therefore suggested that future work include measurement noise and process disturbances in the model.
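The abstract does not name the individual guidance laws; a classic law derived from a linearized collision geometry is proportional navigation, sketched here in 2D with invented example values.

```python
# Proportional navigation: commanded lateral acceleration a_c = N * V_c * lambda_dot,
# where V_c is the closing speed and lambda_dot the line-of-sight rotation rate.
import numpy as np

def pn_acceleration(r_missile, v_missile, r_target, v_target, N=4.0):
    """Return commanded lateral acceleration (scalar, 2D) for a PN guidance law."""
    r = r_target - r_missile                  # relative position (line-of-sight vector)
    v = v_target - v_missile                  # relative velocity
    rng = np.linalg.norm(r)
    closing_speed = -np.dot(r, v) / rng
    los_rate = (r[0] * v[1] - r[1] * v[0]) / rng**2   # 2D line-of-sight rate: cross(r, v) / |r|^2
    return N * closing_speed * los_rate

# Example: target crossing from the right (positions in m, velocities in m/s)
a_cmd = pn_acceleration(np.array([0.0, 0.0]), np.array([600.0, 0.0]),
                        np.array([5000.0, 1000.0]), np.array([-250.0, 0.0]))
print(f"commanded lateral acceleration: {a_cmd:.1f} m/s^2")
```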
APA, Harvard, Vancouver, ISO, and other styles
49

Liu, Mingyi. "Evaluation of a Simple Model for the Acoustics of Bat Swarms." Thesis, Virginia Tech, 2016. http://hdl.handle.net/10919/78168.

Full text
Abstract:
Bats using their biosonar while flying in dense swarms may face significant bioacoustic challenges, in particular mutual sonar jamming. While possible solutions to the jamming problem have been investigated multiple times in the literature, the severity of this problem has received far less attention. To characterize the acoustics of bat swarms, a simple model of the acoustically relevant properties of a bat swarm has been set up and evaluated. The model contains only four parameters: bat spatial density, biosonar beamwidth, duty cycle, and a scalar measure for the smoothness of the flight trajectories. In addition, a threshold to define substantial jamming was set relative to the emission level. The simulation results show that all four model parameters can have a major impact on jamming probability. Depending on the combination of parameter values, situations with or without substantial jamming probabilities could be produced within reasonable ranges of all model parameters. Hence, the model suggests that not every bat swarm necessarily poses a grave jamming problem. A fitting process was introduced to describe the relationship between the four parameters and jamming probability, hence producing a function with jamming probability as output and the four parameters as input. Since the model parameters should be comparatively easy to estimate for actual bat swarms, the simulation results could give researchers a way to assess the acoustic environment of actual bat swarms and determine cases where a study of biosonar jamming could be worthwhile.
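A back-of-the-envelope Monte Carlo version of the parameter idea (flight-path smoothness omitted, and with invented geometry, threshold and parameter values) could look like the following; it is an illustration of the modelling approach, not the thesis's model.

```python
# Place neighbours around a focal bat at a given density, give each a conical beam and a
# duty cycle, and count how often a neighbour's call reaches the focal bat above a threshold.
import numpy as np

rng = np.random.default_rng(0)

def jamming_probability(density=0.001,        # bats per cubic metre
                        beamwidth_deg=60.0,   # full cone angle of the sonar beam
                        duty_cycle=0.1,       # fraction of time a bat is emitting
                        box=30.0,             # side length (m) of the volume around the focal bat
                        threshold_db=-25.0,   # received level (re emission level at 1 m) counted as jamming
                        trials=20_000):
    half_angle = np.radians(beamwidth_deg / 2.0)
    jammed = 0
    for _ in range(trials):
        n = rng.poisson(density * box ** 3)                # number of neighbours in the volume
        if n == 0:
            continue
        pos = rng.uniform(-box / 2, box / 2, size=(n, 3))  # neighbour positions, focal bat at origin
        heading = rng.normal(size=(n, 3))
        heading /= np.linalg.norm(heading, axis=1, keepdims=True)
        dist = np.maximum(np.linalg.norm(pos, axis=1), 0.1)
        emitting = rng.random(n) < duty_cycle              # is the neighbour calling right now?
        in_beam = np.einsum("ij,ij->i", heading, -pos / dist[:, None]) > np.cos(half_angle)
        level_db = -20.0 * np.log10(dist)                  # spherical spreading loss re 1 m
        if np.any(emitting & in_beam & (level_db > threshold_db)):
            jammed += 1
    return jammed / trials

print(jamming_probability())
```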
APA, Harvard, Vancouver, ISO, and other styles
50

Helsing, Joseph. "Validation and Evaluation of Emergency Response Plans through Agent-Based Modeling and Simulation." Thesis, University of North Texas, 2018. https://digital.library.unt.edu/ark:/67531/metadc1157648/.

Full text
Abstract:
Biological emergency response planning plays a critical role in protecting the public from the possibly devastating results of sudden disease outbreaks. These plans describe the distribution of medical countermeasures across a region using limited resources within a restricted time window. Thus, the ability to determine that such a plan will be feasible, i.e. successfully provide service to affected populations within the time limit, is crucial. Many of the current efforts to validate plans are in the form of live drills and training, but those may not test plan activation at the appropriate scale or with sufficient numbers of participants. This necessitates the use of computational resources to aid emergency managers and planners in developing and evaluating plans before they must be used. Current emergency response plan generation software packages, such as RE-PLAN or RealOpt, provide rate-based validation analyses. However, these types of analysis may neglect details of real-world traffic dynamics. Therefore, this dissertation presents Validating Emergency Response Plan Execution Through Simulation (VERPETS), a novel computational system for the agent-based simulation of biological emergency response plan activation. This system converts raw road network, population distribution, and emergency response plan data into a format suitable for simulation, and then performs these simulations using SUMO (Simulation of Urban Mobility) to simulate realistic traffic dynamics. Additionally, high performance computing methodologies were utilized to decrease agent load on simulations and improve performance. Further strategies, such as the use of agent scaling and a time limit on simulation execution, were also examined. Experimental results indicate that the time to plan completion, i.e. the time when all individuals of the population have received medication, as determined by VERPETS aligned well with current alternative methodologies. It was determined that the dynamic of traffic congestion at the point of dispensing (POD) itself was one of the major factors affecting the completion time of the plan, and thus allowed for more rapid calculations of plan completion time. Thus, this system provides not only a novel methodology to validate emergency response plans, but also a validation of other current strategies of emergency response plan validation.
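A minimal sketch of driving a SUMO run through its TraCI Python API to read off a plan-completion time is shown below; the configuration file name is a placeholder, and VERPETS itself additionally generates the network, population and dispensing-site demand.

```python
import traci  # shipped with SUMO; requires <SUMO_HOME>/tools on PYTHONPATH

# "plan_scenario.sumocfg" is a placeholder configuration; "sumo-gui" can be used to watch the run.
traci.start(["sumo", "-c", "plan_scenario.sumocfg"])
try:
    while traci.simulation.getMinExpectedNumber() > 0:   # vehicles still running or waiting to depart
        traci.simulationStep()
    completion_time = traci.simulation.getTime()          # simulated seconds when the last trip ended
    print(f"all trips completed at t = {completion_time:.0f} s")
finally:
    traci.close()
```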
APA, Harvard, Vancouver, ISO, and other styles