
Dissertations / Theses on the topic 'Design-based simulations'


Consult the top 50 dissertations / theses for your research on the topic 'Design-based simulations.'


1

Singh, Harpreet. "Computer simulations of realistic microstructures: implications for simulation-based materials design." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/22564.

Abstract:
Thesis (Ph. D.)--Materials Science and Engineering, Georgia Institute of Technology, 2008. Committee Chair: Dr. Arun Gokhale; Committee Members: Dr. Hamid Garmestani, Dr. Karl Jacob, Dr. Meilin Liu, Dr. Steve Johnson.
2

Hardman, Richard H. III. "Systemic Formation: Multi-Agent Simulations for Architecture." University of Cincinnati / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ucin155351382588639.

3

Lin, Yiben. "A quadrature-based technique for robust design with computer simulations." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/39699.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007. Includes bibliographical references (p. 73-76).

This thesis presents a method for estimating transmitted variance to enable robust parameter design in computer simulations. The method is based on Hermite-Gaussian quadrature for a single input and is extended to multiple variables; for simulations with n randomly varying inputs, it requires 4n + 1 samples. For separable polynomial responses, it is proven that (1) the method gives the exact transmitted variance for responses up to fourth order, and (2) the estimation error is negative (the variance is underestimated) for fifth-order separable responses. For non-separable polynomial responses, two probability models based on the effect hierarchy principle are used to generate a large population of polynomial response functions, and the proposed method and alternatives are applied to them to assess accuracy. For typical populations of problems, the method shows good accuracy, with less than 5% error in 90% of cases. It is far more accurate than Latin Hypercube Sampling or Hammersley Sequence Sampling when those techniques are also restricted to 4n + 1 samples; Hammersley Sequence Sampling needs at least ten times as many samples to reach comparable accuracy. A cubature method is slightly more accurate, but requires n^2 + 3n + 3 samples. As an independent check, simulations of five engineering systems are developed and 12 case studies are conducted. Although the predicted accuracy in the case-based evaluations is somewhat lower, their data are consistent with the results from the model-based evaluation.

by Yiben Lin. Ph.D.
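The univariate quadrature idea behind this method can be illustrated with a short sketch. The code below is not the thesis's exact 4n + 1-sample scheme; it is a minimal Gauss-Hermite estimator of transmitted variance under the additive-separability assumption discussed in the abstract, with all function names chosen here for illustration.

```python
import numpy as np

def transmitted_variance(f, mu, sigma, n_nodes=5):
    """Estimate Var[f(X)] for X ~ N(mu, diag(sigma^2)), assuming an
    additively separable response: sum the variance transmitted along
    each input axis, computed by 1-D Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    mu = np.asarray(mu, dtype=float)
    total = 0.0
    for i in range(len(mu)):
        x = np.tile(mu, (n_nodes, 1))
        # change of variables: E[g(X_i)] = (1/sqrt(pi)) * sum_j w_j g(mu_i + sqrt(2)*sigma_i*t_j)
        x[:, i] = mu[i] + np.sqrt(2.0) * sigma[i] * nodes
        fx = np.array([f(row) for row in x])
        m1 = np.sum(weights * fx) / np.sqrt(np.pi)       # E[f], varying axis i only
        m2 = np.sum(weights * fx ** 2) / np.sqrt(np.pi)  # E[f^2] along axis i
        total += m2 - m1 ** 2                            # variance contribution of axis i
    return total
```

For the separable response f(x) = x0^2 + 3*x1 with independent standard-normal inputs, the exact transmitted variance is Var(x0^2) + 9*Var(x1) = 2 + 9 = 11, which five nodes per axis recover exactly, since the rule integrates polynomials up to degree nine.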
4

Chauffour, Marie-Laure. "Shock-based waverider design with pressure gradient corrections and computational simulations." College Park, Md. : University of Maryland, 2004. http://hdl.handle.net/1903/1829.

Abstract:
Thesis (M.S.)--University of Maryland, College Park, 2004. Thesis research directed by: Dept. of Aerospace Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
5

Karim, Michael K. "A design template for the development of computer-based instructional simulations." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1298569562.

6

Bacciaglia, Antonio. "Advanced voxel-based CAD modelling for FSI simulations for automotive structures design." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10181/1/Antonio_Bacciaglia_thesis.pdf.

Abstract:
Additive Manufacturing (AM) is nowadays considered an important alternative to traditional manufacturing processes. The literature reports several advantages of AM, such as design flexibility, and its use is growing in automotive, aerospace and biomedical applications. As a systematic literature review suggests, AM is often coupled with voxelization, mainly for representation and simulation purposes. Voxelization is a volumetric representation technique based on discretizing a model with hexahedral elements, much as pixels discretize a 2D image. Voxels are used to simplify geometric representation, store intricate interior details and speed up geometric and algebraic manipulation. Compared to the boundary representation used in common CAD software, the inherent advantages of voxels are magnified in specific applications such as lattice or topologically optimized structures for visualization or simulation; such structures can only be manufactured with AM because of their complex topology. Building on a review of the existing literature, this project exploits the potential of the voxelization algorithm to develop optimized Design for Additive Manufacturing (DfAM) tools. The final aim is to manipulate and support mechanical simulation of lightweight, optimized structures ready to be manufactured with AM, with particular attention to automotive applications. A voxel-based methodology is developed for efficient structural simulation of lattice structures. Moreover, thanks to an optimized smoothing algorithm specific to voxel-based geometries, a topologically optimized, voxelized structure can be transformed into a surface-triangulated mesh file ready for the AM process. In addition, a modified panel code that uses voxels as the discretization unit is developed for simple CFD simulations, enabling preliminary evaluation of the fluid-dynamic performance of industrial components. The developed design tools and methodologies fit the automotive industry's need to accelerate and streamline the design workflow from the conceptual idea to the final product.
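The voxel discretization at the heart of this workflow is easy to sketch. The snippet below is not the thesis's pipeline; it is a minimal implicit-solid voxelizer (all names invented here) that fills an occupancy grid from a point-membership test and estimates volume by counting voxels — the kind of representation that smoothing and panel-code stages would then consume.

```python
import numpy as np

def voxelize(inside, bounds, n):
    """Sample an implicit solid on an n^3 grid of cubic voxels.

    inside(p) -> bool marks points interior to the solid. Returns the
    boolean occupancy grid and the voxel edge length."""
    lo, hi = bounds
    h = (hi - lo) / n
    c = lo + h * (np.arange(n) + 0.5)          # voxel centers along one axis
    X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1)
    grid = np.apply_along_axis(inside, -1, pts)
    return grid, h

# Unit sphere voxelized on a 64^3 grid; the occupied-voxel volume should
# approach 4/3*pi as the grid is refined.
grid, h = voxelize(lambda p: p @ p <= 1.0, (-1.2, 1.2), 64)
volume = grid.sum() * h ** 3
```

Refining n shrinks the volume error roughly with the voxel size; the same occupancy grid could equally feed a hexahedral FE mesh or a voxel-based panel code.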
7

Zapata, Usandivaras Jose. "Surrogate models based on large eddy simulations and deep learning for coaxial rocket engine injector design." Electronic Thesis or Diss., Toulouse, ISAE, 2024. http://www.theses.fr/2024ESAE0024.

Abstract:
The design of rocket propulsion systems is under growing pressure to reduce development costs. The use of CFD codes to simulate rocket engine combustion processes can provide an economical alternative to the costly experiments that have traditionally been at the core of liquid rocket engine (LRE) development. Nonetheless, a holistic approach to preliminary design analysis and optimization is not yet practical, as exploring the entire engine design space via high-fidelity numerical simulations is intractable. Appropriate surrogate models may circumvent this dilemma through fast restitution times, without significant loss of accuracy. The LRE injector is a key subsystem whose design directly impacts flame development, combustion efficiency and thermal loads, and the multiscale nature of turbulent, non-premixed combustion makes the modeling of injection particularly complex. In this work, we evaluate data-driven strategies for obtaining surrogate models of LRE shear-coaxial injectors, with a specific emphasis on supervised deep learning (DL) techniques for regression tasks. The base injector configuration is inspired by an existing experimental rocket combustor from TUM, operating with a GOx/GCH4 mixture. We begin with a proof of concept (PoC), sampling offline a database of ~3600 Reynolds-Averaged Navier-Stokes (RANS) 2D axisymmetric simulations of single-element coaxial injectors spanning a 9-dimensional parameter space comprising geometry and combustion regime. Models of scalar quantities of interest (QoIs), the 1D wall heat flux profile, and the 2D average temperature field are trained and validated, using fully connected neural networks (FCNNs) and an adapted U-Net for the 2D case; the results compare well against other established surrogate modeling methods over the test dataset. The RANS approach has evident shortcomings when dealing with turbulent combustion applications. Large Eddy Simulations (LES) are, in principle, better suited to modeling turbulent combustion, while also furnishing information about dynamic flow features. We therefore replicate the PoC effort on a database of ~100 LES of shear-coaxial injectors spanning a 3D design space, at a much larger cost per sample than RANS, with a dedicated LES data-generation pipeline. Because of this cost, the LES are low-fidelity (LF) in view of the modeling simplifications (coarse meshes, global chemistry, etc.). CNNs and U-Nets are used to obtain surrogate models of scalar QoIs and 2D stationary fields with satisfactory performance on the LF prediction task. To improve the overall fidelity of the surrogate, a multi-fidelity (MF) approach is considered by leveraging inductive transfer learning on the U-Nets: the decoding layers are retrained and validated on a smaller pool of ~10 high-fidelity (HF) samples with finer resolution. The MF surrogate performs well on the HF prediction task over the test samples, reproducing the desired flame topology at a much lower offline sampling cost than training on HF data alone. The dynamic data from the LES motivate the development of reduced-order models (ROMs) for spatio-temporal prediction of the injector flame. We develop emulators of an LRE injector flame by means of convolutional autoencoders (CNN-AE), with a multi-layer perceptron (MLP) propagating the latent vectors in time. The reconstructed spectral content of the signal outperforms that of a standard POD with equal latent-space dimension, demonstrating the superior compression capability of the CNN-AE; however, manifold-regularity concerns arise when propagating the emulator beyond the training horizon. Finally, this work evidences the challenges and opportunities of using DL to predict stationary and dynamic features of LES data for a complex reactive-flow configuration of an LRE coaxial injector.
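The POD baseline against which the CNN-AE is compared can be sketched compactly. This is a generic snapshot-POD via SVD on synthetic data, not the thesis's LES pipeline; all names and the toy "flow" are invented here for illustration.

```python
import numpy as np

def pod_basis(snapshots, r):
    """Rank-r POD basis of mean-centered snapshot columns via SVD."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    return U[:, :r], mean, s

def pod_project(snapshots, basis, mean):
    """Encode to r latent coefficients (compression), then decode back."""
    a = basis.T @ (snapshots - mean)   # latent vectors
    return basis @ a + mean            # linear reconstruction

# Synthetic 'flow' snapshots: two spatial modes with time-varying amplitudes.
x = np.linspace(0, 2 * np.pi, 200)
t = np.linspace(0, 10, 50)
S = np.outer(np.sin(x), np.cos(t)) + 0.3 * np.outer(np.sin(2 * x), np.sin(3 * t))
U2, mean, sv = pod_basis(S, r=2)
err = np.linalg.norm(S - pod_project(S, U2, mean)) / np.linalg.norm(S)
```

With exactly two coherent modes in the data, a rank-2 basis reconstructs the snapshots to machine precision; a nonlinear autoencoder earns its keep only when the underlying manifold is not well captured by such a linear subspace.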
8

Vignetti, Matteo Maria. "Thermal simulations and design guidelines on multi-finger PAs based on 28nm FD-SOI technology." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-142677.

Abstract:
The electrical performance of Silicon-On-Insulator (SOI) devices can be dramatically enhanced in terms of reduced parasitic capacitances, leakage current and power consumption. On the other hand, self-heating effects (SHE) are more pronounced than in a bulk device because the buried oxide limits power dissipation through the substrate. This issue is particularly important in the design of power amplifiers (PAs) for mobile applications, where excellent RF performance is required while the current-carrying capability of the devices has to be very high. In the present work, the thermal behavior of multi-finger FD-SOI MOSFET power amplifiers has been investigated and thermal design guidelines have been proposed. Nano-scale thermal conduction and heat generation in nano-devices were studied first, in order to account for nano-scale effects. A finite element analysis (FEA) model was built in the COMSOL multi-physics environment, thermal simulations were performed, and the thermal behavior of the simulated devices was studied with respect to their geometrical parameters. Based on the simulation results, thermal design guidelines have been proposed and a PA unit-cell design has been presented. An LVT device with a pitch p = 130 nm was found to be the best choice for the design of a multi-finger MOSFET power amplifier and was adopted as the core of a unit cell, which in turn was used to design a power amplifier to be manufactured in the first tape-out of the Dynamic-ULP project.
9

Ploé, Patrick. "Surrogate-based optimization of hydrofoil shapes using RANS simulations." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0012/document.

Abstract:
This thesis presents a practical hydrodynamic optimization framework for hydrofoil shape design. Automated simulation-based optimization of hydrofoils is a challenging process: it may involve conflicting optimization objectives, and it imposes a trade-off between the cost of numerical simulations and the limited budgets available for ship design. The optimization framework is based on sequential sampling and surrogate modeling. Gaussian Process Regression (GPR) is used to build a predictive model from data issued from fluid simulations of selected hydrofoil geometries. The GPR model is then combined with other criteria into an acquisition function that is evaluated over the design space to define new query points, which are added to the data set in order to improve the model. A custom acquisition function is developed, based on GPR variance and cross-validation of the data. A hydrofoil geometric modeler is also developed to automatically create the hydrofoil shapes from the parameters determined by the optimizer. To complete the optimization loop, FINE/Marine, a RANS flow solver, is embedded in the framework to perform the fluid simulations. Optimization capabilities are tested on analytical test cases; the results show that the custom acquisition function is more robust than other existing acquisition functions on difficult functions. The entire framework is then tested on 2D hydrofoil sections and on 3D hydrofoil optimization cases with a free surface. In both cases the optimization process performs well, producing optimized hydrofoil shapes and confirming the results obtained on the analytical test cases. However, the optimum is shown to be sensitive to operating conditions.
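The sequential-sampling loop can be sketched with a tiny GPR surrogate. The code below is a generic illustration, not this thesis's framework or its custom acquisition function: it uses a plain lower-confidence-bound acquisition, an RBF kernel, and a cheap analytic stand-in for the flow solver; every name and constant is an assumption made here.

```python
import numpy as np

def rbf(A, B, ls=0.15):
    """Squared-exponential kernel on 1-D inputs."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gpr_posterior(Xs, ys, Xq, noise=1e-6):
    """GP posterior mean and variance at query points Xq."""
    K = rbf(Xs, Xs) + noise * np.eye(len(Xs))
    Kq = rbf(Xq, Xs)
    mu = Kq @ np.linalg.solve(K, ys)
    v = np.clip(1.0 - np.einsum('ij,ji->i', Kq, np.linalg.solve(K, Kq.T)), 0.0, None)
    return mu, v

def optimize(f, n_init=4, n_iter=12, kappa=2.0):
    """Sequential sampling: next design = argmin of LCB = mu - kappa*sigma."""
    Xq = np.linspace(0.0, 1.0, 201)           # candidate designs
    Xs = np.linspace(0.0, 1.0, n_init)        # initial space-filling sample
    ys = np.array([f(x) for x in Xs])
    for _ in range(n_iter):
        mu, v = gpr_posterior(Xs, ys, Xq)
        xn = Xq[np.argmin(mu - kappa * np.sqrt(v))]   # acquisition minimum
        Xs = np.append(Xs, xn)
        ys = np.append(ys, f(xn))
    return Xs[np.argmin(ys)]

# Toy stand-in for an expensive flow-solver objective (hypothetical).
best = optimize(lambda x: (x - 0.37) ** 2)
```

On the toy objective the loop concentrates samples near x = 0.37; swapping the lambda for a call into a flow solver and the grid for a parameterized geometry gives the shape-optimization structure described above.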
10

Nordhoff, Helga Irene. "The design and implementation of a computer-based course using Merrill's model of instructional design." Thesis, Pretoria : [s.n.], 2002. http://upetd.up.ac.za/thesis/available/etd-08022002-094043/.

11

Lunsford, Ian M. "SUBSYSTEM FAILURE ANALYSIS WITHIN THE HORIZON SIMULATION FRAMEWORK." DigitalCommons@CalPoly, 2016. https://digitalcommons.calpoly.edu/theses/1560.

Abstract:
System design is an inherently expensive and time-consuming process, and engineers are constantly tasked with investigating new solutions for various programs. Model-based systems engineering (MBSE) is an increasingly successful method for reducing the time spent during the design process: by utilizing simulations, it can verify high-level system requirements quickly and at low cost early in the design process. The Horizon Simulation Framework (HSF) provides the capability of simulating a system and verifying its performance. This paper outlines an improvement to the Horizon Simulation Framework that reports to the user which schedule failures were caused by subsystem failures and which by constraint violations. Using the C# language, constraint violation rates and subsystem failure rates are organized by magnitude and written to .csv files, and the resulting subsystem failure and constraint-violation checking orders are stored for HSF to use as new evaluation sequences. The functionality of the systemEval framework was verified by five test cases. The output information can be used to improve the system under design and possibly reduce the total run time of the Horizon Simulation Framework.
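The value of storing a good checking order is easy to see with a toy calculation. The thesis's implementation is in C#; the sketch below uses Python for brevity, with hypothetical subsystem names and failure rates, and shows why evaluating the most failure-prone checks first lowers the expected number of evaluations per candidate schedule.

```python
def expected_checks(order, p_fail):
    """Expected number of checks evaluated before the first failure
    (or until all pass), assuming independent failures."""
    exp, p_reach = 0.0, 1.0
    for name in order:
        exp += p_reach              # this check is reached and evaluated
        p_reach *= 1.0 - p_fail[name]
    return exp

# Hypothetical per-subsystem failure rates observed in earlier runs.
p = {"power": 0.30, "adcs": 0.05, "comms": 0.10}
naive = expected_checks(["adcs", "comms", "power"], p)
ordered = sorted(p, key=p.get, reverse=True)     # most likely failure first
tuned = expected_checks(ordered, p)
```

Fail-fast ordering prunes doomed schedules sooner, which is the run-time saving the stored evaluation sequences aim for.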
12

Skordoulis, Dionysios. "Design and optimization of QoS-based medium access control protocols for next-generation wireless LANs." Thesis, Brunel University, 2013. http://bura.brunel.ac.uk/handle/2438/7375.

Abstract:
In recent years there have been tremendous advances in wireless and mobile communications, including wireless radio techniques, networking protocols, and mobile devices. Different broadband wireless access technologies, e.g. WiFi (IEEE 802.11) and WiMAX (IEEE 802.16), are expected to coexist in the future. In the meantime, multimedia applications have experienced explosive growth with increasing user demands: people now expect to receive high-speed video, audio, voice and web services even while mobile. The key question is how to ensure that users always get the "best" network performance at the "lowest" cost in such complicated situations. The latest IEEE 802.11n standard attains rates of more than 100 Mbps by introducing innovative enhancements at the PHY and MAC layers, e.g. MIMO and frame aggregation respectively. However, this thesis demonstrates that frame aggregation's performance suffers under the EDCA scheduler's priority mechanism, degrading the network's overall performance: short waiting times for high-priority flows in the aggregation queue result in poor channel utilization. A Delayed Channel Access (DCA) algorithm was designed to intentionally postpone the channel access procedure so that the number of packets in a formed frame, and with it the network's overall performance, can be increased. In some cases, however, the DCA algorithm has a negative impact on applications that use the TCP protocol, especially when small TCP window sizes are engaged: the TCP process refrains from sending data due to delayed acknowledgements, and the overall throughput drops. This thesis addresses these issues by firstly demonstrating the potential performance benefits of frame aggregation in next-generation wireless networks. The efficiency and behaviour of frame aggregation within a single queue are mathematically analysed with the aid of an M/G[a,b]/1/K model. The results show that a trade-off has to be made between minimizing waiting time and maximizing utilization, and that there is no generally valid optimum batch-collection rule; individual cases have to be considered separately. Secondly, extensive simulations demonstrate that the DCA algorithm, which dynamically determines and adapts batch collection based on traffic characteristics, QoS requirements and the server's maximum capacity, also improves efficiency. Thirdly, the behaviour of TCP flows over the WLAN and the influence of DCA on the degrading performance of the TCP protocol are investigated; the cause of the problem is examined and the foundations for designing and implementing possible solutions are provided. Fourthly, two innovative proposals are introduced, one amendment and one extension of the original DCA algorithm, called Adaptive DCA and Selective DCA respectively. Both have been implemented in OPNET, and extensive simulation runs over a wide set of scenarios show their effectiveness, each in its own way, on the network's overall performance.
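The trade-off that the M/G[a,b]/1/K analysis formalizes can be reproduced with a toy event simulation. This is not the OPNET DCA model; it is a minimal Poisson-arrival sketch (all rates and costs invented here) showing that delaying channel access amortizes the per-frame overhead over bigger batches while inflating per-packet waiting time.

```python
import random

def simulate(delay, rate=200.0, overhead=0.5e-3, per_pkt=0.1e-3,
             horizon=30.0, seed=1):
    """Toy frame-aggregation model: packets arrive as a Poisson stream;
    the sender waits `delay` seconds collecting a batch, then transmits
    it, paying a fixed per-frame overhead plus a per-packet cost.
    Returns (overhead share of airtime, mean packet waiting time)."""
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < horizon:                      # Poisson arrivals
        t += rng.expovariate(rate)
        arrivals.append(t)
    busy_overhead = busy_payload = total_wait = 0.0
    i = 0
    while i < len(arrivals):
        start = arrivals[i] + delay         # delayed channel access
        j = i
        while j < len(arrivals) and arrivals[j] <= start:
            j += 1                          # packets collected into this frame
        batch = arrivals[i:j]
        busy_overhead += overhead
        busy_payload += per_pkt * len(batch)
        total_wait += sum(start - a for a in batch)
        i = j
    share = busy_overhead / (busy_overhead + busy_payload)
    return share, total_wait / len(arrivals)

s0, w0 = simulate(0.0)    # immediate access: one packet per frame
s1, w1 = simulate(0.02)   # 20 ms collection window
```

With zero delay every frame carries one packet and overhead dominates the airtime; a 20 ms window cuts the overhead share sharply at the price of added queueing delay, which is exactly the minimize-waiting versus maximize-utilization tension described above.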
13

Park, In-Hee. "Computational Simulations of Protein-Ligand Molecular Recognition via Enhanced Samplings, Free Energy Calculations and Applications to Structure-Based Drug Design." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1276745410.

14

Tee, Chin Yen. "Market Design for the Future Electricity Grid: Modeling Tools and Investment Case Studies." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/856.

Abstract:
The future electricity grid is likely to be increasingly complex and uncertain due to the introduction of new technologies in the grid, the increased use of control and communication infrastructure, and the uncertain political climate. In recent years, the transactive energy market framework has emerged as the key framework for future electricity market design. However, most work in this area has focused on developing retail-level transactive energy markets, with an underlying assumption that wholesale electricity markets are ready to support any retail market design. This dissertation focuses on designing wholesale electricity markets that can better support transactive retail markets. On the highest level, it contributes tools and models for future electricity market design, with a particular focus on the relationship between wholesale markets and investment planning. Part I of this dissertation uses relatively simple models and case studies to evaluate key impediments to flexible transmission operation, identifying several potential areas of concern in wholesale market designs:

1. a lack of consideration of demand flexibility, both in the long run and in the short run;
2. a disconnect between operational practices and investment planning;
3. a need to rethink forward markets to better manage resource adequacy under long-term uncertainty;
4. a need for more robust modeling tools for wholesale market design.

In Parts II and III, mathematical decomposition and agent-based simulations are used to tackle these concerns. Part II uses Benders decomposition and Lagrangian decomposition to spatially and temporally decompose a power system operation problem with active participation of flexible loads. This not only improves the computational efficiency of the problem but also yields insights on market structure and pricing; in particular, the decomposition suggests the need for a coordinated investment market and forward energy market to bridge the disconnect between operational practices and investment planning. Part III combines agent-based modeling with state-machine-based modeling to test various spot, forward, and investment market designs, including the coordinated investment market and forward energy market proposed in Part II. In addition, a forward energy market design in which 75% of load must be purchased in a 2-year-ahead forward market is tested, along with various transmission cost-recovery strategies. The different market designs are shown to result in different investment decisions, winners, and losers, leading to further policy recommendations and open questions. Overall, this dissertation takes initial steps toward demonstrating how mathematical decomposition and agent-based simulations can be used as part of a larger market-design toolbox to gain insights into different market designs and rules for the future electricity grid, and it identifies market design ideas for further study, particularly in the design of forward markets and investment cost-recovery mechanisms.
15

CAMMARANO, SILVIA. "Daylighting design for energy saving in a building global energy simulation context." Doctoral thesis, Politecnico di Torino, 2015. http://hdl.handle.net/11583/2603987.

Abstract:
A key factor in substantially reducing the energy consumption for electric lighting is a more widespread exploitation of daylight, combined with the most energy-efficient lighting technologies, including LEDs and electric lighting controls. At the same time, daylight harvesting in indoor spaces also influences the global energy performance of a building in terms of heating and cooling loads, so the balance between daylighting benefits and energy requirements must always be accounted for. Furthermore, the increasing awareness of the potential benefits of daylight has created a need for objective information and data on the impact that different design solutions, in terms of architectural features, can have on the daylighting condition and energy demand of a space. Within this frame, the research activity has focused on three main aspects:

- Analyzing the limits and potential of current daylighting design practice, and proposing synthetic information and tools that the design team can use in the earliest design stage to predict the daylight condition within a space.
- Analyzing the effect of a proper daylighting design approach on energy requirements for electric lighting, in association with efficient lighting technologies and control systems.
- Assessing the influence of the energy demand for electric lighting on the global energy performance.

The adopted methodology relies on dynamic simulations carried out with Daysim and EnergyPlus, used in synergy to perform a parametric study assessing the indoor daylighting conditions and energy performance of rooms with different architectural features. In the first phase, the database of lighting-analysis results was used to assess the sensitivity of new metrics proposed by the scientific community as predictors of the dynamic variation of daylight, and to analyze how indoor daylight is influenced by a room's architectural features. Then the energy demand for electric lighting of all simulated case studies was analyzed, to examine the influence of a proper daylighting design in the presence of different lighting control systems. Finally, the results on the amount of daylight available in a space were compared with the annual energy demand for lighting, heating and cooling, to highlight the influence of a proper daylighting design on the global energy performance.
APA, Harvard, Vancouver, ISO, and other styles
16

Loibl, André, René Andrae, and Peter Köhler. "Unterstützung bei der konstruktionsbegleitenden Simulation von Flanschverbindungen." Technische Universität Chemnitz, 2018. https://monarch.qucosa.de/id/qucosa%3A21509.

Full text
Abstract:
Using flange connections as an example, this contribution shows how simulation and calculation models can be built and linked in a knowledge-based way, leading to a partially automated design process. In a further step, a method for optimising the flange connection is presented; parameters suitable for optimisation were identified. Both methods were prototypically implemented in the CAD system Siemens NX10 (Siemens PLM Software).
APA, Harvard, Vancouver, ISO, and other styles
17

El, Kassem Bilal [Verfasser], Bernd [Akademischer Betreuer] Markert, and Peter [Akademischer Betreuer] Eberhard. "Machine design for auger dosing of powders by utilizing DEM simulations and multivariate-regression-based experimental analysis / Bilal El Kassem ; Bernd Markert, Peter Eberhard." Aachen : Universitätsbibliothek der RWTH Aachen, 2021. http://d-nb.info/1231434724/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Bjelic, Sinisa. "Molecular Simulation of Enzyme Catalysis and Inhibition." Doctoral thesis, Uppsala : Acta Universitatis Upsaliensis, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7468.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Sang, Shengbo [Verfasser], Hartmut [Akademischer Betreuer] Witte, Klaus [Akademischer Betreuer] Liefeith, and Theodor [Akademischer Betreuer] Doll. "An approach to the design of surface stress-based PDMS micro-membrane biosensors - concept, numerical simulations and prototypes / Shengbo Sang. Gutachter: Klaus Liefeith ; Theodor Doll. Betreuer: Hartmut Witte." Ilmenau : Universitätsbibliothek Ilmenau, 2010. http://d-nb.info/1010814516/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Suzuki, Seiichi [Verfasser], and Jan [Akademischer Betreuer] Knippers. "Topology-driven form-finding : interactive computational modelling of bending-active and textile hybrid structures through active-topology based real-time physics simulations, and its emerging design potentials / Seiichi Suzuki ; Betreuer: Jan Knippers." Stuttgart : Universitätsbibliothek der Universität Stuttgart, 2020. http://d-nb.info/1210926245/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Ahmad, Aftab. "Effective development of haptic devices using a model-based and simulation-driven design approach." Doctoral thesis, KTH, Maskinkonstruktion (Avd.), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-144216.

Full text
Abstract:
Virtual reality (VR)-based surgical simulators using haptic devices can increase the effectiveness of surgical training for procedures on hard tissues such as bone or tooth milling. The realism of virtual surgery through a surgical simulator depends largely on the precision and reliability of the haptic device, which renders the interaction with the virtual model. The quality of perception (sensation, force/torque) depends on the design of the haptic device, which presents a complex design space due to its multi-criteria and conflicting functional and performance requirements. These requirements include high stiffness, large workspace, high manipulability, small inertia, low friction, high transparency, and cost constraints. This thesis proposes a design methodology to improve the realism of force/torque feedback from the VR-based surgical simulator while fulfilling end-user requirements. The main contributions of this thesis are: 1. The development of a model-based and simulation-driven design methodology, in which one starts from an abstract, top-level model that is extended via stepwise refinements and design space exploration into a detailed and integrated systems model that can be physically realized. 2. A methodology for creating an analytical and compact model of the quasi-static stiffness of a haptic device, which considers the stiffness of actuation systems, flexible links and passive joints. 3. A robust design optimization approach to find the optimal numerical values for a set of design parameters that maximize the kinematic, dynamic and kinetostatic performance of a 6-degrees-of-freedom (DOF) haptic device, while minimizing its sensitivity to variations in manufacturing tolerances and cost, and satisfying the allowed variations in the performance indices. 4. A cost-effective approach to force/torque feedback control using force/torque estimated through recursive least squares estimation. 5. A model-based control strategy to increase the transparency and fidelity of force/torque feedback from the device by compensating for its natural dynamics, joint friction, platform gravity, and elastic deformations.
APA, Harvard, Vancouver, ISO, and other styles
22

Ling, Jay Michael. "Managing Information Collection in Simulation-Based Design." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11504.

Full text
Abstract:
An important element of successful engineering design is the effective management of resources to support design decisions. Design decisions can be thought of as having two phases: a formulation phase and a solution phase. As part of the formulation phase, engineers must decide how much information to collect and which models to use to support the design decision. Since more information and more accurate models come at a greater cost, a cost-benefit trade-off must be made. Previous work has considered such trade-offs in decision problems in which all aspects of the problem can be represented using precise probabilities, an assumption that is not justified when information is sparse. In this thesis, we use imprecise probabilities to manage the information cost-benefit trade-off for two decision problems in which the quality of the information is imprecise: 1) the decision of when to stop collecting statistical data about a quantity that is characterized by a probability distribution with unknown parameters; and 2) the selection of the most preferred model to help guide a particular design decision when the model accuracy is characterized as an interval. For each case, a novel approach is developed in which the principles of information economics are incorporated into the information management decision. The problem of statistical data collection is explored with a pressure vessel design, which requires characterizing the probability distribution that describes a novel material's strength. The model selection approach is explored with the design of an I-beam structure, where the designer must decide how accurate a model to use to predict the maximum deflection in the span of the structure. For both problems, it is concluded that the information economic approach developed in this thesis can assist engineers in their information management decisions.
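The flavour of decision-making under imprecise probabilities can be illustrated with lower and upper expectations over an interval-valued probability. This is a generic Python sketch with hypothetical payoffs and bounds, not the thesis's actual formulation:

```python
def expected_value_bounds(outcomes, p_low, p_high):
    """Lower and upper expectation for a binary outcome when the
    probability of the first outcome is only known to lie in
    [p_low, p_high] (a simple imprecise-probability model)."""
    a, b = outcomes
    e = lambda p: p * a + (1 - p) * b  # expectation for a precise p
    vals = (e(p_low), e(p_high))
    return min(vals), max(vals)

# Hypothetical design payoff: 100 if the material meets spec, -40 if not,
# with the success probability only bounded in [0.6, 0.9] from sparse tests.
lo, hi = expected_value_bounds((100.0, -40.0), 0.6, 0.9)
# If even the lower expectation exceeds the cost of collecting further
# samples, stopping data collection is defensible.
```

With sparse data the interval [lo, hi] stays wide; as more samples tighten [p_low, p_high], the bounds converge toward a single precise expectation, which is the intuition behind the stopping decision described in the abstract.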
APA, Harvard, Vancouver, ISO, and other styles
23

Boyd, James L. "Interactive simulations| Improving learning retention in knowledge-based online training courses." Thesis, Capella University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10261889.

Full text
Abstract:
The purpose of this quasi-experimental quantitative study was to investigate whether online interactive simulations provide a positive improvement in learners' ability to apply critical thinking skills in a dangerous work environment. The course in which an improvement in critical thinking skills was the target outcome addressed electrical safety-related work practices for electrical apprentices in dangerous work environments. The interactive simulation identified for this study provided different levels of high-fidelity simulation of a dangerous electrical environment, in which the learner was subjected to scenarios involving simulated injury or death. Critical thinking was measured by a post-test instrument developed using a Delphi process and designed to evaluate critical thinking skills in the electrical scenarios presented in the simulation. An independent-samples t-test was conducted to determine whether there was a significant difference on the post-test between a comparison group that did not use the simulation and an experimental group that did. In this study, there was no significant difference between the comparison group and the experimental group on the post-test. The theoretical framework examined in this study included constructivism, self-guided study, cognitive overload, and motivation, and the effect of each was discussed. This research study identifies the need for additional research into the best use of interactive simulations in online course development.
APA, Harvard, Vancouver, ISO, and other styles
24

Almlöf, Martin. "Computational Methods for Calculation of Ligand-Receptor Binding Affinities Involving Protein and Nucleic Acid Complexes." Doctoral thesis, Uppsala University, Department of Cell and Molecular Biology, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-7421.

Full text
Abstract:
The ability to accurately predict binding free energies from computer simulations is an invaluable resource in understanding biochemical processes and drug action. Several methods based on microscopic molecular dynamics simulations exist, and in this thesis the validation, application, and development of the linear interaction energy (LIE) method is presented.

For a test case of several hydrophobic ligands binding to P450cam, it is found that the LIE parameters do not change when simulations are performed with three different force fields. The nonpolar contribution to binding of these ligands is best reproduced with a constant offset and a previously determined scaling of the van der Waals interactions.

A new methodology for predicting binding free energies of protein-protein complexes is investigated and found to give excellent agreement with experimental results. In order to reproduce the nonpolar contribution to binding, a different scaling of the van der Waals interactions is necessary (compared to small-ligand binding), which is found to be due, in part, to an electrostatic preorganization effect not present when binding small ligands.

A new treatment of the electrostatic contribution to binding is also proposed. In this new scheme, the chemical makeup of the ligand determines the scaling of the electrostatic ligand interaction energies. These scaling factors are calibrated using the electrostatic contribution to hydration free energies and proposed to be applicable to ligand binding.

The issue of codon-anticodon recognition on the ribosome is addressed using LIE. The calculated binding free energies are in excellent agreement with experimental results, and further predict that the Leu2 anticodon stem-loop is about 10 times more stable than the Ser stem-loop in complex with a ribosome loaded with the Phe UUU codon. The simulations also support the previously suggested roles of A1492, A1493, and G530 in the codon-anticodon recognition process.
APA, Harvard, Vancouver, ISO, and other styles
25

Nnene, Obiora Amamifechukwu. "Simulation-based optimisation of public transport networks." Doctoral thesis, Faculty of Engineering and the Built Environment, 2020. http://hdl.handle.net/11427/32308.

Full text
Abstract:
Public transport network design deals with finding, among a set of alternatives, the network that best satisfies the often-conflicting objectives of different stakeholders such as passengers and operators. Simulation-based Optimisation (SBO) is a discipline that solves optimisation problems by combining simulation and optimisation models: the former evaluates the alternative solutions, while the latter searches for the optimal solution among them. An SBO model for designing public transport networks is developed in this dissertation. The context of the research is the MyCiTi Bus Rapid Transit (BRT) network in the City of Cape Town, South Africa. A multi-objective optimisation algorithm, the Non-dominated Sorting Genetic Algorithm (NSGA-II), is integrated with an Activity-based Travel Demand Model (ABTDM), the Multi-Agent Transport Simulation (MATSim). The first step taken to achieve the research objectives is to generate a set of feasible network alternatives. This is done by manipulating the existing routes of the MyCiTi BRT with a computer-based heuristic algorithm, guided by feasibility conditions which guarantee that each network has routes acceptable for public transport operations. MATSim is then used to evaluate the generated alternatives by simulating the daily plans of travellers on each network. A typical daily plan is a sequential ordering of all the trips made by a commuter within a day; Automated Fare Collection (AFC) data from the MyCiTi BRT was used to create these plans. Lastly, the NSGA-II is used to search for an efficient set of network solutions, also known as a Pareto set or non-dominated set in the context of Multi-objective Optimisation (MOO). In each generation of the optimisation process, MATSim is used to evaluate the current solutions; hence a suitable encoding scheme is defined to enable a smooth translation of solutions between the NSGA-II and MATSim.
Since the solution of a multi-objective optimisation problem is a set of networks, further analysis is done to identify the best compromise solution in the Pareto set. Extensive computational testing of the SBO model has been carried out. The first test measures the repeatability of the model's results. The second considers its performance on indicators such as the hypervolume and spacing indicators, together with an analysis of the model's Pareto front. Lastly, the model's performance is benchmarked against other optimisation algorithms. After testing, the so-called Simulation-based Transit Network Design Model (SBTNDM) is used to design public transport networks for the MyCiTi BRT. Two applications are considered. The first deals with the public transport performance of the network solutions in the Pareto front obtained from the SBTNDM; different transport network indicators are used to measure how each solution performs. In the second scenario, network design is done for the 85th percentile of travel demand on the MyCiTi network over 12 months. The results show that the model can design robust transit networks. The use of simulation as the engine of optimisation of public transport networks represents the main innovation of the work; the approach has not previously been used for public transport network design. The specific contribution of this work is the improved modelling of public transport user behaviour with Agent-based Simulation (ABS) within a Transit Network Design (TND) framework. This differs from the conventional approaches in the literature, where static trip-based travel demand models such as the four-step model have mostly been used.
Another contribution of the work is the development of a robust technique that facilitates the simultaneous optimisation of network routes and their operational frequencies. Future endeavours will focus on extending the network design model to a multi-modal context.
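The Pareto (non-dominated) set that the NSGA-II searches for can be illustrated with a minimal dominance filter. This is a generic Python sketch, not code from the dissertation; the objective vectors are hypothetical (e.g. passenger cost versus operator cost, both minimised):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the Pareto set: solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical objective vectors: (passenger cost, operator cost).
candidates = [(4.0, 9.0), (5.0, 7.0), (6.0, 8.0), (7.0, 5.0)]
front = non_dominated(candidates)
# (6.0, 8.0) is dominated by (5.0, 7.0); the other three trade off against
# each other and together form the Pareto front.
```

In the full SBO loop each objective vector would come from a MATSim evaluation of one candidate network, and NSGA-II would use this dominance relation (plus crowding distance) to rank and select solutions.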
APA, Harvard, Vancouver, ISO, and other styles
26

M, Venkata Raghu Chaitanya. "Model Based Aircraft Control System Design and Simulation." Thesis, Linköping University, Linköping University, Department of Management and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-19264.

Full text
Abstract:
Development of modern aircraft has become more and more expensive and time consuming. In order to minimize development cost, an improvement of the conceptual design phase is needed. The goal of the project is to enhance the functionality of a framework produced in-house at the Department of Machine Design, consisting of parametric models representing a large variety of aircraft concepts.

The first part of the work consists of constructing geometric aircraft control surfaces, such as flaps, ailerons, rudder and elevator, parametrically in CATIA V5.

The second part of the work involves designing and simulating an inverse dynamic model in the Dymola software.

An Excel interface has been developed between CATIA and Dymola. Parameters can be varied in the interface as per user specification; these values are sent to CATIA or Dymola and vice versa. The constructed concept model of the control surfaces has been tested for different aircraft shapes and layouts, and the simulation of the control surfaces has been done in Dymola.
APA, Harvard, Vancouver, ISO, and other styles
27

Seidel, Sabine. "Optimal simulation based design of deficit irrigation experiments." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-100254.

Full text
Abstract:
There is growing societal concern about excessive water and fertilizer use in agricultural systems. High water productivity while maintaining high crop yields can be achieved with appropriate irrigation scheduling. Moreover, freshwater pollution through nitrogen (N) leaching due to the widespread use of N fertilizers demands efficient N fertilization management. Sustainable crop management, however, requires good knowledge of soil water and N dynamics as well as of crop water and N demand. Crop growth models, which describe the physical and physiological processes of crop growth as well as water and matter transport, are considered powerful tools to assist in the optimization of irrigation and fertilization management. The reliability of simulation-based predictions generally depends on the quality and quantity of the data used for model calibration and validation, obtained e.g. in field experiments. A lack of data, or data of low quality, for model calibration and validation may cause low performance and large uncertainties in simulation results. The large number of model parameters to be calibrated requires appropriate calibration methods and a sequential calibration strategy. Moreover, simulation-based planning of the field design saves cost and effort while supporting maximal experimental outcomes. An adjustment of crop growth modeling and experiments is required for model improvement and development, to reliably predict crop growth and to generalize predicted results. In this research study, a new approach for simulation-based optimal experimental design was developed, aiming to integrate simulation models, experiments, and optimization methods in one framework for optimal and sustainable irrigation and N fertilization management. The approach is composed of three steps: 1. The preprocessing consists of the calibration and validation of the crop growth model based on existing experimental data, the generation of long time series of climate data, and the determination of the optimal irrigation control. 2. The implementation comprises the determination and experimental application of the simulation-based optimized deficit irrigation and N fertilization schedules and an appropriate experimental data collection. 3. The postprocessing includes the evaluation of the experimental results, namely observed yield, water productivity (WP), nitrogen use efficiency (NUE), and economic aspects, as well as a model evaluation. Five main tools were applied within the new approach: an algorithm for inverse model parametrization, a crop growth model for simulating crop growth and the water and N balances, an optimization algorithm for deficit irrigation and N fertilization scheduling, and a stochastic weather generator. Furthermore, a water flow model was used to determine the optimal irrigation control functions and for simulation-based estimation of the optimal field design. The approach was implemented within three case studies presented in this work. The new approach combines crop growth modeling and experiments with stochastic optimization. It contributes to a successful application of crop growth modeling based on an appropriate experimental data collection. The presented model calibration and validation procedure using the collected data facilitates reliable predictions. The stochastic optimization framework for deficit irrigation and N fertilization scheduling proved to be a powerful tool to enhance yield, WP, NUE and profit.
APA, Harvard, Vancouver, ISO, and other styles
28

Hernandez, Enriquez Aurora. "Simulation-based process design and integration for retrofit." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/simulationbased-process-design-and-integration-for-retrofit(90c6bcf4-6421-4731-8f82-1e839478daab).html.

Full text
Abstract:
This research proposes a novel Retrofit Design Approach based on process simulation and the Response Surface Methodology (RSM). The Retrofit Design Approach comprises: 1) a diagnosis stage, in which the variables are screened and promising variables to improve system performance are identified through a sensitivity analysis; 2) an evaluation stage, in which RSM is applied to assess the impact of those promising variables and the most important factors are determined by building a reduced model from the process response behaviour; and 3) an optimisation stage, to identify the optimal conditions and performance of the system subject to the objective function and model constraints. All these stages are simulation-supported. The main advantages of the proposed Retrofit Design Approach using RSM are that it can handle a large industrial-scale design problem within reasonable computational effort, obtain valuable conceptual insights into the design interactions and economic trade-offs existing in the system, and systematically identify cost-effective solutions by optimizing the reduced model based on the most important factors. This simplifies the path to pseudo-optimal solutions while clarifying the techno-economic and system-wide impacts of key design variables and parameters. To demonstrate the applicability and robustness of the proposed design method, the Retrofit Design Approach has been applied to two case studies based on existing gas processing plants. Steady-state process simulation using Aspen Plus® has been carried out, and the simulation results agree well with the plant data. Reduced models for both case studies have been obtained to represent the techno-economic behaviour of the plants.
Both continuous and discrete design options are considered in the retrofitting of the plant, and the results show that the Retrofit Design Approach is effective in providing reliable, cost-effective retrofit solutions which yield improvements in the studied processes, not only economic (i.e. cost and product recovery) but also environmental (i.e. CO₂ emissions and energy efficiency). The main retrofit solutions identified are, for the first case, a column pressure change, a pump-around arrangement and additional turbo-expansion capacity; for the second case, column pressure changes, tray efficiency, HEN retrofit arrangements (re-piping) and on-site utility generation schemes are considered. These promising sets of retrofit design options were further investigated to reflect the implications of capital investment for the retrofit scenarios, and this portfolio of opportunities can be very useful for supporting decision-making in practice. It is important to note that in some cases a cost-effective retrofit does not require structural modifications. In conclusion, the proposed Retrofit Design Approach has been found to be a reliable approach to the retrofit problem in the context of industrial applications.
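The reduced model at the heart of RSM is typically a second-order polynomial fitted to simulated responses by least squares. The following Python/NumPy sketch is purely illustrative, with made-up factors and coefficients, not the thesis's actual plant model:

```python
import numpy as np

# Hypothetical screened factors x1, x2 (e.g. column pressure, tray efficiency,
# both coded to [-1, 1]) and a noise-free simulated cost response y.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)
x2 = rng.uniform(-1, 1, 30)
y = 5.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + 1.2 * x1**2 + 0.5 * x2**2

# Design matrix for the full quadratic (second-order) response surface:
# intercept, linear, interaction, and pure quadratic terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# With noise-free data the fit recovers the generating coefficients,
# giving a cheap surrogate that an optimiser can interrogate instead of
# the full process simulation.
```

In the actual approach the response values would come from Aspen Plus simulations rather than a closed-form function, and the fitted surrogate is what the optimisation stage works on.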
APA, Harvard, Vancouver, ISO, and other styles
29

Iachini, Valerio. "Agent-based simulation for renewable energy incentive design." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/8047/.

Full text
Abstract:
In this thesis, we propose a novel approach to model the diffusion of residential PV systems. For this purpose, we use an agent-based model in which the agents are the families living in the area of interest. The case study is the Emilia-Romagna Regional Energy Plan, which aims to increase the production of electricity from renewable sources. We study the microdata from the Survey on Household Income and Wealth (SHIW) provided by the Bank of Italy in order to obtain the characteristics of families living in Emilia-Romagna. These data allowed us to artificially generate families and reproduce the socio-economic features of the region. The generated families are placed in the virtual world by associating them with buildings, which are acquired by analysing the vector data of regional buildings made available by the region. Each year, the model determines the level of diffusion by simulating the installed capacity. The adoption behaviour is influenced by social interactions, the household's economic situation, the environmental benefits arising from adoption, and the payback period of the investment.
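The yearly adoption mechanism described above can be caricatured in a few lines: each simulated year, non-adopters adopt with a probability that rises with the installed share, a crude stand-in for the social-interaction term. All names and numbers here are illustrative, not taken from the thesis:

```python
import random

def simulate_adoption(n_households=1000, years=10, p_base=0.01,
                      social_weight=0.2, seed=42):
    """Toy diffusion model: each year a household adopts PV with a
    probability that grows with the fraction of the population that has
    already adopted (a placeholder for social interactions, economics,
    and payback considerations)."""
    random.seed(seed)
    adopted = [False] * n_households
    trajectory = []
    for _ in range(years):
        share = sum(adopted) / n_households
        p = p_base + social_weight * share
        for i in range(n_households):
            if not adopted[i] and random.random() < p:
                adopted[i] = True
        trajectory.append(sum(adopted))
    return trajectory

curve = simulate_adoption()
# The cumulative adoption curve is monotone non-decreasing and
# accelerates as the installed share grows (S-shaped diffusion).
```

A full model of this kind would replace the single global share with each family's neighbourhood, income from the SHIW microdata, and the investment payback period as adoption drivers.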
APA, Harvard, Vancouver, ISO, and other styles
30

Lopez, Alejandro, and Mario Garcia. "Simulator-Based Design in Practice." Thesis, Linköping University, Department of Management and Engineering, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-12164.

Full text
Abstract:
The automotive field is becoming more and more complex, and cars are no longer purely mechanical artifacts. Today much more than 50 % of the functionality of a car is computerized, so a modern car is a system of mixed technologies, which emphasizes the need for new approaches to the design process compared to the processes of yesterday. A corresponding technology shift was experienced in the aerospace industry starting in the late sixties: today aircraft could not fly without their computers, and the pilot's environment has turned into a so-called glass cockpit with no iron-made instrumentation left. A very similar change is still going on in the automotive area.

Simulator-Based Design (SBD) refers to designing, developing and testing new products, systems and applications that include an operator in their operation. SBD has been used for decades in the aviation industry, where it is a common process. It may be considered a more specific application of simulation-based design, where the specific feature is a platform, the simulator itself. The simulator may consist of a generic computer environment combined with dedicated hardware components, for instance a cockpit. This solution makes it possible to include the human operator in the simulation.

The name of the project is Simulator-Based Design in Practice. The purpose of this master thesis is to gain thorough practice in using a human-in-the-loop simulator as a tool in design activities, focusing on the automotive area. This application area may be seen as an example of systems where an operator is included in the operation, and thus experience from the car application can be transferred to other areas such as aviation or control rooms in the process industry.

During the project we have gone through the main parts of the SBD process. Many steps are needed to complete the whole cycle, and many of them have iterative loops connecting them with the previous step. The process starts with a concept (product/system) and continues with a virtual prototyping stage followed by implementation, test design, human-in-the-loop simulation, data analysis, design synthesis and, in the end, a product/system decision. An iterative process approach makes the cycle flexible and goal-oriented.

We have learnt how to use the simulator and how to perform the whole SBD cycle. We first got familiar with the simulator and the ASim software, then tried to reduce the number of computers in the simulator and changed the network in order to find a good optimization of the computer power. The second step was to implement a new application in the simulator: the rear-mirror view, consisting of a new LCD monitor and the rear view that must be shown on the new monitor. Finally, we updated the cockpit to the new programming language ActionScript 3.0.

The information gathering consisted of the Human-System Interaction course at the university, the introductory course on the ASim software and the course on ActionScript 3.0.
APA, Harvard, Vancouver, ISO, and other styles
31

Hu, Jun. "Semiconductor nanowire based optoelectronic devices: physics, simulation and design /." Diss., Digital Dissertations Database. Restricted to UC campuses, 2009. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Wittwer, Jonathan W. "Simulation-Based Design Under Uncertainty for Compliant Microelectromechanical Systems." Diss., CLICK HERE for online access, 2005. http://contentdm.lib.byu.edu/ETD/image/etd723.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lilly, M. T. "Computer based design and simulation for manufacturing facilities relayout." Thesis, University of Liverpool, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.370859.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

FERRARA, MARIA. "Simulation-based optimization for NZEB design: insights and beyond." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2725946.

Full text
Abstract:
Given the potentially significant role of buildings in reducing urban greenhouse emissions and reshaping the urban future towards sustainability, in the last few years many research activities have been focused on the design of high-performing buildings. It is widely recognized that the high-performing building of the future will have a very low energy demand and that renewable energy sources should have the leading role in supplying the energy needed. Furthermore, such high performance of the building should ensure not only environmental but also economic sustainability. In Europe, the European Directive on the Energy Performance of Buildings (EPBD - 2010/31/EU) introduced, together with the idea of nearly Zero Energy Buildings, the key concept of cost optimality, which provides a methodology for evaluating the economic performance of the building over the estimated lifecycle. Results from many applications of the cost-optimal analysis across Europe demonstrated that nowadays there is still a gap between the building environmental optimum and the economic optimum, and therefore many research activities are being carried out on design methods, policies and measures with the goal of reducing this gap. In this context, the NZEB design emerges as the solution of a complex optimization problem, as it was demonstrated that the best performance does not result from a single pre-defined set of energy efficiency measures applied to the building, but is strictly related to the local scale, depending on the interaction of the many design variables with the boundary conditions (climate, availability of technologies, investment and operational costs, urban context, ...). Therefore, this work aims at identifying the key variables, stating the optimization objectives and developing an automated methodology for effectively supporting the solution of such a complex design optimization problem. 
A comprehensive review of the studies related to NZEB design reports the state of the art on the topic, emerging trends and open issues in the field. Based on this background, the NZEB design optimization problem is stated and investigated in all its phases. A simulation-based optimization method is set up and tailored to the so-defined design problem. Insights on different aspects related to the NZEB design problem are provided through several applications on representative case studies, which validate and refine the proposed methodology framework. The mutual relationships between envelope and systems in determining the NZEB optimal design solutions are demonstrated and quantified. Special attention is given to the evaluation of the resilience of the resulting optimum to the variation of future climate scenarios: it was demonstrated that the higher the performance of the building systems, the higher the resilience to possible future climate change. The advantages of implementing an integrated approach by means of simultaneous energy demand and supply optimization rather than a traditional sequential approach are quantified and discussed. Energy and cost performance optimization for the building operation is also investigated in a context of social housing, while the combined evaluation of comfort conditions within the energy design optimization at the early design stage is studied. Such a methodology is demonstrated to be able to effectively support the integrated NZEB design process with a high level of accuracy and a manageable computation time. It was demonstrated that it can increase the building performance by up to 40% (depending on the objective) with respect to current construction practice. Several applications in real contexts also demonstrate the high potential of such a framework in successfully supporting the design process beyond theory. 
In fact, the developed methodology framework was successfully applied to collaborate with other researchers and professionals to calibrate energy models, to optimize the design and operation of innovative energy systems, to optimally combine energy, cost and acoustic high-performance in a NZEB, to support effective communication to clients...and even to win an international contest on design and construction of a NZEB prototype.
APA, Harvard, Vancouver, ISO, and other styles
35

Panchal, Jitesh H. "A framework for simulation-based integrated design of multiscale products and design processes." Diss., Available online, Georgia Institute of Technology, 2005, 2005. http://etd.gatech.edu/theses/available/etd-11232005-112626/.

Full text
Abstract:
Thesis (Ph. D.)--Mechanical Engineering, Georgia Institute of Technology, 2006. Eastman, Chuck, Committee Member; Paredis, Chris, Committee Co-Chair; Allen, Janet, Committee Member; Rosen, David, Committee Member; Tsui, Kwok, Committee Member; McDowell, David, Committee Member; Mistree, Farrokh, Committee Chair. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
36

Moon, Kyungjin. "Self-reconfigurable ship fluid-network modeling for simulation-based design." Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/34733.

Full text
Abstract:
Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy that enable the significant improvement of systems' robustness, efficiency, and performance, with considerably reduced manning and maintenance costs, and the U.S. Navy's DD(X), the next-generation destroyer program, is considered as an extreme example of such a trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through the investigations on the Navy's approach for designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by the capability gaps in damage modeling, dynamic model reconfiguration, and simulation speed of the domain specific models, especially fluid network models. As enablers of filling these gaps, two essential elements were identified in the formulation of the modeling method. The first one is the graph-based topological modeling method, which will be employed for rapid model reconstruction and damage modeling, and the second one is the recurrent neural network-based, component-level surrogate modeling method, which will be used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods can deliver computationally efficient, flexible, and automation-friendly M&S which will create an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration for evaluating the developed method, a simulation model of a notional ship fluid system was created, and a damage analysis was performed. 
Next, the models representing different design configurations of the fluid system were created, and damage analyses were performed with them in order to find an optimal design configuration for system survivability. Finally, the benefits and drawbacks of the developed method were discussed based on the result of the demonstration.
APA, Harvard, Vancouver, ISO, and other styles
37

Lacroix, René. "A framework for the design of simulation-based greenhouse control." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=41652.

Full text
Abstract:
The main objectives were: (1) to develop tools to aid in the design of enclosed agro-ecosystems, and (2) to use these tools to develop a prototype simulation-based control system. Three tools were developed: (1) a conceptual framework, (2) a (simulated) greenhouse system and (3) a simulation approach within OS/2.
Part of the conceptual framework was dedicated to "conscious control", defined as a form of control practised by an entity that uses models of itself in its decision-making processes. The greenhouse system was composed of six modules (a simulation manager, a weather generator, a greenhouse model, a crop model, a Pavlovian controller and a cognitive controller), which were implemented under OS/2 as separate processes.
The greenhouse system was used to develop a prototype simulation-based controller. Primarily, the role of the controller was to determine temperature setpoints that would minimize the heating load. The simulation model used by the controller was an artificial neural network. The controller adapted temperature setpoints to anticipated meteorological conditions and reduced greenhouse energy consumption, in comparison with a more traditional controller.
Generally, the results showed the feasibility and illustrated some of the advantages of using simulation-based control. The research resulted in the definition of elements that will allow the creation of a methodological framework for the design of simulation-based control and, eventually, a theory of conscious control.
APA, Harvard, Vancouver, ISO, and other styles
38

Harb, Shadi. "WEB-BASED CIRCUIT DESIGN SIMULATION PACKAGE FOR SOLVING ELECTRICAL ENG." Master's thesis, University of Central Florida, 2004. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4027.

Full text
Abstract:
A Web-based circuit design package has been improved and evaluated to provide students with an enhanced and innovative teaching tools package for the electrical circuit design course. The project objectives can be summarized as follows: 1) developing enhanced problem solving skills using a Web-based environment, 2) developing design skills and sharpening the critical thinking process, 3) developing a generic and comprehensive teaching/learning circuit package as an extension to the Electrical Engineering virtual lab environment, which gives students the capability to practice and experience all the circuit design skills with minimum cost and effort. The project provides students with an enhanced and powerful graphical computer aided design (CAD) tool with which they can carry out online simulation of AC and DC designs, with the capability to plot simulation results graphically. The proposed prototype is implemented in Java, which is used to implement Web-based applications with support for different platforms. The project provides students with an enhanced graphical user interface (GUI) by which they can build any electrical circuit using either text or schematic entry format, generate the Netlist, which describes all circuit information (circuit topology, circuit attributes and so on), and simulate the design by parsing the Netlist to CIRML format, which is sent over the network to the remote server. The server processes the CIRML data, runs the simulation using PSPICE and eventually sends the simulation results back to the client for display. M.S.Cp.E. -- Department of Electrical and Computer Engineering, Engineering and Computer Science, Computer Engineering.
APA, Harvard, Vancouver, ISO, and other styles
39

Drawid, H. J. "Simulation-based design of agile cellular systems for manual assembly." Thesis, Queen's University Belfast, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.411108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Gbededo, Mijoh Ayodele. "Simulation-based impact analysis for sustainable manufacturing design and management." Thesis, University of Derby, 2018. http://hdl.handle.net/10545/623483.

Full text
Abstract:
This research focuses on effective decision-making for sustainable manufacturing design and management. The research contributes to the decision-making tools that can enable sustainability analysts to capture the aspects of the economic, environmental and social dimensions in a common framework. The framework will enable practitioners to conduct a sustainability impact analysis of a real or proposed manufacturing system and use the outcome to support sustainability decisions. In the past, industries had focused more on the economic aspects in gaining and sustaining their competitive positions; this has changed in recent years following the Brundtland report, which centred on incorporating the sustainability of future generations into our decisions for meeting today's needs (Brundtland, 1987). Government regulations and legislation, coupled with changes in consumers' preference for ethical and environmentally friendly products, are other factors that are challenging and changing the way companies and organisations perceive and drive their competitive goals (Gu et al., 2015). Another challenge is the lack of adequate tools to address the dynamism of the manufacturing environment and the need to balance the business' competitive goal with sustainability requirements. The launch of the Life Cycle Sustainability Analysis (LCSA) framework further emphasised the need for the integration and analysis of the interdependencies of the three dimensions for effective decision-making and the control of unintended consequences (UNEP, 2011). Various studies have also demonstrated the importance of interdependence impact analysis and the integration of the three sustainability dimensions at the product, process and system levels (Jayal et al., 2010; Valdivia et al., 2013; Eastwood and Haapala, 2015). 
Although there are tools capable of assessing the performance of one or two of the three sustainability dimensions, these tools have not adequately integrated the three dimensions or addressed holistic sustainability issues. Hence, this research proposes an approach that provides a solution for successful interdependence impact analysis and trade-offs amongst the three sustainability dimensions, and enables support for effective decision-making in a manufacturing environment. This novel approach explores and integrates the concepts and principles of existing sustainability methodologies and frameworks and the simulation model construction process into a common descriptive framework for process-level assessment. The thesis deploys a Delphi study to verify and validate the descriptive framework and demonstrates its applicability in a case study of a real manufacturing system. The results of the research demonstrate the completeness, conciseness, correctness, clarity and applicability of the descriptive framework. Thus, the outcome of this research is a simulation-based impact analysis framework which provides a new way for sustainability practitioners to build an integrated and holistic computer simulation model of a real system, capable of assessing both the production and sustainability performance of a dynamic manufacturing system.
APA, Harvard, Vancouver, ISO, and other styles
41

Sweitzer, Timothy J. (Timothy James) 1972. "A simulation-based concurrent engineering approach for assembly system design." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/82902.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2002. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 81-82). By Timothy J. Sweitzer. S.M.; M.B.A.
APA, Harvard, Vancouver, ISO, and other styles
42

Bolognini, Francesca. "An integrated simulation-based generative design method for microelectromechanical systems." Thesis, University of Cambridge, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.611409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Obaidat, Mohammad Salameh. "A 68000 based modular multiprocessor system : design and simulation analysis /." The Ohio State University, 1986. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487266362336169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Kernstine, Kemp H. "Design space exploration of stochastic system-of-systems simulations using adaptive sequential experiments." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44799.

Full text
Abstract:
The complexities of our surrounding environments are becoming increasingly diverse, more integrated, and continuously more difficult to predict and characterize. These modeling complexities are ever more prevalent in System-of-Systems (SoS) simulations where computational times can surpass real-time and are often dictated by stochastic processes and non-continuous emergent behaviors. As the number of connections continue to increase in modeling environments and the number of external noise variables continue to multiply, these SoS simulations can no longer be explored with traditional means without significantly wasting computational resources. This research develops and tests an adaptive sequential design of experiments to reduce the computational expense of exploring these complex design spaces. Prior to developing the algorithm, the defining statistical attributes of these spaces are researched and identified. Following this identification, various techniques capable of capturing these features are compared and an algorithm is synthesized. The final algorithm will be shown to improve the exploration of stochastic simulations over existing methods by increasing the global accuracy and computational speed, while reducing the number of simulations required to learn these spaces.
APA, Harvard, Vancouver, ISO, and other styles
45

Mocko, Gregory Michael. "A Knowledge Framework for Integrating Multiple Perspective in Decision-Centric Design." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/10522.

Full text
Abstract:
Problem: Engineering design decisions require the integration of information from multiple and disparate sources. However, this information is often independently created, limited to a single perspective, and not formally represented, thus making it difficult to formulate decisions. Hence, the primary challenge is the development of computational representations that facilitate the exchange of information for decision support. Approach: First, the scope of this research is limited to representing design decisions as compromise decision support problems (cDSP). To address this challenge, the primary hypothesis is that a formal language will enable the semantics of cDSP to be captured, thus providing a digital interface through which design information can be exchanged. The primary hypothesis is answered through the development of a description logic (DL) based formal language. The primary research question is addressed in four sub-questions. The first two research questions relate to the development of a vocabulary for representing the semantics of the cDSP. The first hypothesis used to answer this question is that formal information modeling techniques can be used to explicitly capture the semantics and structure of the cDSP. The second research question is focused on the realization of a computer-processible representation. The hypothesis used to answer this question is that DL can be used for developing computational-based representations. The third research question is related to the organization and retrieval of decision information. The hypothesis used to answer this question is DL reasoning algorithms can be used to support organization and retrieval. Validation: The formal language developed in this dissertation is theoretically and empirically validated using the validation square approach. Validation of the hypotheses is achieved by systematically building confidence through example problems. 
Examples include the cDSP construct, analysis support models, the design of a cantilever beam, and design of a structural fin array heat sink. Contributions: The primary contribution from this dissertation is a formal language for capturing the semantics of cDSPs and analysis support models comprised of: (1) a systematic methodology for decision formulation, (2) a cDSP vocabulary, (3) a graphical information model, and (4) a DL-based representation. The components, collectively, provide a means for exchanging cDSP information.
APA, Harvard, Vancouver, ISO, and other styles
46

Gehrig, Stefan K. "Design, simulation, and implementation of a vision based vehicle following system." [S.l. : s.n.], 2000. http://deposit.ddb.de/cgi-bin/dokserv?idn=963216104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Dreisewerd, Björn [Verfasser]. "A Contribution to Simulation-Based Process Design of Phytoextractions / Björn Dreisewerd." München : Verlag Dr. Hut, 2017. http://d-nb.info/1128467968/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Bell, Jonathan. "Interpretation of simulation for model-based design analysis of engineered systems." Thesis, Aberystwyth University, 2006. http://hdl.handle.net/2160/05880836-32b8-4f65-abfe-9b0ecf16cc42.

Full text
Abstract:
This thesis attempts to answer the question "Can we devise a language for interpretation of behavioural simulation of engineered systems (of arbitrary complexity) in terms of the systems' purpose?" It does so by presenting a language that represents a device's function as achieving some purpose if the device is in a state that is intended to trigger the function and the function's expected effect is present. While most work in the qualitative and model-based reasoning community has been concerned with simulation, this language is presented as a basis for interpreting the results of the simulation of a system, enabling these results to be expressed in terms of the system's purpose. This, in turn, enables the automatic production of draft design analysis reports using model-based analysis of the subject system. The increasing behavioural complexity of modern systems (resulting from the increasing use of microprocessors and software) has led to a need to interpret the results of simulation in cases beyond the capabilities of earlier functional modelling languages. The present work is concerned with such cases and presents a functional modelling language that enables these complex systems to be analysed. Specifically, the language presented herein allows functional description and interpretation of the following.
• Cases where it is desired to distinguish between partial and complete failure of a function.
• Systems whose functionality depends on achieving a sequence of intermittent effects.
• Cases where a function being achieved in an untimely manner (typically late) needs to be distinguished from a function failing completely.
• Systems with functions (such as warning functions) that depend upon the state of some other system function.
This offers significant increases both in the range of systems and of design analysis tasks for which the language can be used, compared to earlier work.
APA, Harvard, Vancouver, ISO, and other styles
49

Hu, Wenqing. "A model-based simulation tool for impact/blast-resistant structural design /." free to MU campus, to others for purchase, 2004. http://wwwlib.umi.com/cr/mo/fullcit?p3144424.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Dhanal, Avirat. "Design and Analysis of Material Handling System with Simulation-Based Optimization." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16411.

Full text
Abstract:
In today’s world, simulation and optimization are playing a vital role in reducing time and cost and preserving resources. In manufacturing industries, an ample number of problems arise as the industry expands. In such cases, simulation can be helpful to tackle these problems by checking whether any change in the current situation has an effect on the current efficiency of the overall plant. In the presented case study, a solution to the problem of internal and external logistics has been designed by using simulation and optimization to improve part of the material flow of an organization. The organization, whose major production is established in the south of Sweden, deals with the manufacturing and assembly of equipment. Before dispatch, all of the equipment goes to the painting section, which is an expansion of the present shop floor. The design and analysis of the material handling system to feed the new painting line which is going to be established by the organization is the aim of this case study. While achieving this aim, a review of the literature regarding discrete event simulation, Lean and simulation-based optimization related to material handling systems has been carried out. Furthermore, appropriate material handling systems along with different scenarios were suggested to reduce the cost and the lead times between the production line and the new painting line. To support this process, a methodology combining simulation, optimization and lean production has been implemented under the framework of the design and creation research strategy. In a Kaizen workshop organized at the company with managers and stakeholders, the designed scenarios were presented; after some discussion one of them was chosen, and the selected scenario was designed and optimized. Moreover, simulation-based multi-objective optimization has been helpful for optimizing the designed model proposed as the final solution.
APA, Harvard, Vancouver, ISO, and other styles
