To see the other types of publications on this topic, follow the link: Component optimization.

Dissertations / Theses on the topic 'Component optimization'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Component optimization.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Azumi, Takuya, Hiroaki Takada, and Hiroshi Oyama. "Optimization of Component Connections for an Embedded Component System." IEEE, 2009. http://hdl.handle.net/2237/13983.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Carlson, Susan Elizabeth. "Component selection optimization using genetic algorithms." Diss., Georgia Institute of Technology, 1993. http://hdl.handle.net/1853/17886.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Rezazadeh, Mehdi, and Reza Delavar. "Production cost reduction through optimization of machine component." Thesis, KTH, Industriell produktion, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-152364.

Full text
Abstract:
This thesis aims to reduce production cost by analyzing and optimizing a set of weaving machine components consisting of five legs: three main legs and two support legs. This set of legs performs a reciprocating rotary movement around a central axis and is driven by a crankshaft. Static structural, explicit dynamics and fatigue finite element analyses were carried out in Ansys Workbench. The results show that some areas of the legs are under small stresses, far below the material yield strength. This provides the potential for mass reduction of the legs without a significant effect on the mechanical safety factors. Ansys Workbench parameter optimization and shape optimization were applied in this study to reduce mass while maintaining almost the same safety factors. Besides optimizing the original legs, new optimized design alternatives are presented for both the main legs and the support legs. A mass reduction of up to 18% is obtained in the new designs.
APA, Harvard, Vancouver, ISO, and other styles
4

Hilber, Patrik. "Component reliability importance indices for maintenance optimization of electrical networks." Licentiate thesis, Stockholm, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-274.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Meoni, Francesco <1987>. "Modeling, Component Selection and Optimization of Servo-controlled Automatic Machinery." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/8140/1/Meoni_Francesco_tesi.pdf.

Full text
Abstract:
A servo-controlled automatic machine can perform tasks that involve synchronized actuation of a significant number of servo-axes, namely one-degree-of-freedom (DoF) electromechanical actuators. Each servo-axis comprises a servo-motor, a mechanical transmission and an end-effector, and is responsible for generating the desired motion profile and providing the power required to achieve the overall task. The design of such a machine must involve a detailed study from a mechatronic viewpoint, due to its combined electric and mechanical nature. The first objective of this thesis is the development of an overarching electromechanical model for a servo-axis. Every loss source is taken into account, be it mechanical or electrical. The mechanical transmission is modeled by means of a sequence of lumped-parameter blocks. The electric model of the motor and the inverter takes into account winding losses, iron losses and controller switching losses. No experimental characterizations are needed to implement the electric model, since the parameters are inferred from the data available in commercial catalogs. With the global model at our disposal, the second objective of this work is to perform an optimization analysis, in particular the selection of the motor-reducer unit. The optimal transmission ratios that minimize several objective functions are found. An optimization process is carried out and repeated for each candidate motor. We then present a novel method in which the discrete set of available motors is extended to a continuous domain by fitting manufacturer data. The problem becomes a two-dimensional nonlinear optimization subject to nonlinear constraints, and the solution gives the optimal choice for the motor-reducer system. The presented electromechanical model, along with the implementation of optimization algorithms, forms a complete and powerful simulation tool for servo-controlled automatic machines. The tool allows for determining a wide range of electric and mechanical parameters and the behavior of the system in different operating conditions.
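As a toy illustration of the motor-reducer trade-off described above (not the author's model), here is a minimal sketch of picking the transmission ratio that minimizes peak motor torque for a purely inertial load; the inertia and acceleration values are assumed:

```python
# Hedged sketch: optimal transmission ratio for a purely inertial servo-axis.
# J_m, J_L and acc_load are assumed values, not taken from the thesis.
from scipy.optimize import minimize_scalar

J_m, J_L = 2e-4, 0.05   # kg*m^2: assumed motor and load inertias
acc_load = 80.0         # rad/s^2: assumed peak load acceleration

def peak_motor_torque(n):
    # Motor torque reflected through ratio n: tau = (J_m*n + J_L/n) * a_load
    return (J_m * n + J_L / n) * acc_load

res = minimize_scalar(peak_motor_torque, bounds=(1.0, 100.0), method="bounded")
print(f"n* = {res.x:.2f}, peak torque = {res.fun:.3f} N*m")
# The numeric optimum should match the classical inertia-matching result
# n* = sqrt(J_L / J_m), about 15.8 for these values.
```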
APA, Harvard, Vancouver, ISO, and other styles
6

Santi, Gian Maria <1991>. "Mesh Morphing Methods for Virtual Prototyping and Mechanical Component Optimization." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amsdottorato.unibo.it/9608/1/Tesi_Dottorato_Santi_Finale.pdf.

Full text
Abstract:
In this thesis, the coupling of mathematical geometry and its discretization (mesh) is performed using a method that fills the gap between simulation and design. Different modelling strategies are studied, tested and developed to bridge commercial CAD with a new methodology able to perform more accurate simulations without losing the connection with the geometrical features. The aim of the thesis is to enhance the capabilities of Finite Element Methods (FEM) with the properties of Non-Uniform Rational B-Splines (NURBS) inherited from CAD models in the design phase, leading to a perfect representation of the model's boundary. The parametric-space definition of the basis functions is borrowed from standard IGA (Isogeometric Analysis), and the ability to process CAD models without the need for trivariate NURBS is borrowed from NEFEM (NURBS Enhanced Finite Element Method). This particular combination yields a bilinear Lagrangian basis and a new mapping between Cartesian and parametric spaces for quadrilaterals. Using this new formulation it is possible to track changes of the geometry and, thanks to the perfect shape representation, reduce the simulation error by up to 25-50% compared to an equivalent FEM system. The problems presented are defined in a 2D space and solved using Matlab. NURBS are the key to performing parametric morphing and simple optimizations, while FEM remains the best way to perform simulations. This new method removes the need to remodel B-Rep (Boundary Representation) parts after simple modifications arising from the analysis, and improves the geometric accuracy of the discretization. The geometry file is imported directly from commercial software and processed by the method. Accuracy, convergence and seamless integration with commercial CAD packages are demonstrated on problems of arbitrary 2D geometry. The main problems treated are thermal analysis and solid mechanics, where the best results are achieved.
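For reference, the standard rational form behind NURBS (Non-Uniform Rational B-Splines), which gives the exact boundary representation the abstract relies on; this is the textbook definition, not a formula specific to the thesis:

```latex
% Degree-p NURBS curve with control points P_i, weights w_i and
% B-spline basis functions N_{i,p} on a non-uniform knot vector:
C(u) = \frac{\sum_{i=0}^{n} N_{i,p}(u)\, w_i\, \mathbf{P}_i}
            {\sum_{i=0}^{n} N_{i,p}(u)\, w_i}
```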
APA, Harvard, Vancouver, ISO, and other styles
7

Stambaugh, Craig T. (Craig Todd) 1960. "Improving gas turbine engine control system component optimization by delaying decisions." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/91787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Svensson, Marcus. "Selection of a product component for topology optimization and additive manufacturing." Thesis, Jönköping University, JTH, Industriell produktutveckling, produktion och design, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-52791.

Full text
Abstract:
This is a master thesis research on how to select the right components in a product, with the aim of reducing weight through topology optimization (TO) and adaptation for additive manufacturing (AM). It is well established that complex structures can be manufactured with AM; the possibility of integrating assembled components and improving features is therefore investigated. The new component structure must still withstand the loads it is subjected to during usage, so as not to permanently deform or break. In this research the studied product was a handheld Husqvarna chainsaw. Initially a feasibility study was conducted, in which the product was disassembled and physically investigated for potential component cases. Additional knowledge was gathered with one semi-structured interview per case with experienced design engineers, followed by one semi-structured interview with AM experts regarding available AM techniques and similar materials. Selection of the case to continue with was based on the interview information and Pugh's decision matrix, with weighted criteria. TO was used to find the optimal material distribution. The new component design was analyzed with linear finite element analysis to fulfill both the component and material stress requirements. Component orientation and support structure for AM were analyzed with computer-aided engineering software. This resulted in integrating thirteen components of the engine's cylinder into one component. The new design achieved a weight reduction of 31%, while utilizing only 57% of the allowed stress limit. In addition, the first 23 natural frequencies were improved with a new type of cooling fin structure, with an increased area of 15%. These results encourage the use of the thesis workflow methodology for other products. In conclusion, the established workflow of methods resulted in selecting a suitable case for integrating components with feature improvement, and in adapting the new design with TO for AM to reduce the weight.
APA, Harvard, Vancouver, ISO, and other styles
9

Moeller, Robert D. (Robert David). "Optimization in-line vehicle sequencing systems : applications to Ford component manufacturing." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/10158.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1997, and Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1997.
Includes bibliographical references (p. 155-156).
by Robert D. Moeller.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
10

Bachmeyer, Paul Joseph. "Simulation-Based Design Strategies for Component Optimization in Steer-by-Wire Applications." NCSU, 2008. http://www.lib.ncsu.edu/theses/available/etd-03182008-041203/.

Full text
Abstract:
The objective of this thesis is to develop simulation-based design strategies for optimizing the selection of active, semi-active, and passive components for industrial steer-by-wire (SBW) applications. Experimental steering data is collected from an instrumented Honda Accord (1987 model) and used to validate a lateral vehicle model. This model is used to investigate the tactile feedback performance of various SBW configurations at specific driving conditions. Although peak performance is obtained with fully active components (direct-drive electric motors), comparable performance can be obtained using a combination of passive springs, semi-active dampers, and active motors with a 16.3% reduction in cost and an 87.7% reduction in electrical energy required.
APA, Harvard, Vancouver, ISO, and other styles
11

Hemadri, Vinayak B. "Thermodynamic analysis and optimization of multi-pressure, multi-component organic rankine cycle." Thesis, IIT Delhi, 2016. http://localhost:8080/iit/handle/2074/7039.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Olsson, Daniel. "Applications and Implementation of Kernel Principal Component Analysis to Special Data Sets." Thesis, KTH, Optimeringslära och systemteori, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-31130.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

CHAKKALAKKAL, JOSEPH JUNIOR. "Design of a weight optimized casted ADI component using topology and shape optimization." Thesis, KTH, Maskin- och processteknologi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-236518.

Full text
Abstract:
Structural optimization techniques are widely used in the product development process in modern industry to generate optimal designs with only sufficient material to serve the purpose of the component. Conventional design processes usually generate overdesigned components with excess material and weight, which in turn increases the lifetime cost of machines, both in terms of material wastage and cost of usage. The thesis “Design of a weight optimized casted ADI component using topology and shape optimization” deals with redesigning a component from a welded steel plate structure into a castable design for reduced manufacturing cost and weight. The component “Drill Steel Support”, mounted at the front of the drilling boom of a face drilling machine, is redesigned during this work. The main objective of the thesis is to provide an alternative design with lower weight that can be mounted on the existing machine layout without any changes to the mounting interfaces. This thesis report covers in detail the procedure followed for attaining the weight reduction of the “Drill Steel Support” and presents the results and methodology, which is based on both topology and shape optimization.
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Xuemei. "Simulation-optimization in real-time decision making." Ohio : Ohio University, 1997. http://www.ohiolink.edu/etd/view.cgi?ohiou1184619898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Mahdavi, Babak. "The design of a distributed, object-oriented, component-based framework in multidisciplinary design optimization /." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=79039.

Full text
Abstract:
Multidisciplinary Design Optimization (MDO) can be defined as a methodology for the design of complex engineering systems in which collaboration and the ability of different disciplines to interact with one another are fundamental. In this thesis, the Virtual Aircraft Design and Optimization fRamework (VADOR), a distributed, object-oriented, component-based framework enabling MDO practice at Bombardier Aerospace, is introduced. The purpose of the VADOR framework is to enable the seamless integration of commercial and in-house analysis applications in a heterogeneous, distributed computing environment, and to allow the management and sharing of data. The VADOR distributed environment offers visibility into the process, permitting teams to monitor progress or track changes in design projects and problems. Documentation of the MDO process is vital to ensure clear communication of the process within the team defining it and in the broader design team interacting with it. VADOR is implemented in Java, providing an object-oriented, platform-independent framework. Design patterns and a component-based approach are used along with a multi-tiered distributed design to deliver a highly modular and flexible architecture. (Abstract shortened by UMI.)
APA, Harvard, Vancouver, ISO, and other styles
16

Graf, Sebastian [Verfasser]. "Design and Optimization of Multi-Variant Automotive E/E Architecture Component Platforms / Sebastian Graf." München : Verlag Dr. Hut, 2015. http://d-nb.info/1079768335/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Ozden, Burak Samil. "Modeling And Optimization Of Hybrid Electric Vehicles." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615583/index.pdf.

Full text
Abstract:
The main goal of this thesis study is the optimization of the basic design parameters of hybrid electric vehicle drivetrain components to minimize fuel consumption and emission objectives, together with constraints derived from performance requirements. In order to provide a user-friendly and flexible platform to model and select drivetrain components, simulate performance, and optimize parameters of series and parallel hybrid electric vehicles, a MATLAB-based graphical user interface is designed. A basic sizing procedure for the internal combustion engine, electric motor, and battery is developed. Pre-defined control strategies are implemented for both types of hybrid configurations. To achieve better fuel consumption and emission values while satisfying nonlinear performance constraints, a multi-objective gradient-based optimization procedure is carried out with user-defined upper and lower bounds on the optimization parameters. The optimization process is applied to a number of case studies and the results are evaluated by comparison with similar cases found in the literature.
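A minimal sketch of the kind of weighted, gradient-based sizing optimization the abstract outlines, assuming invented surrogate models for fuel and emissions and a placeholder total-power constraint (none of these values or functions come from the thesis):

```python
# Hedged sketch: weighted-sum, gradient-based HEV component sizing.
# The surrogates and constraint below are toy placeholders, not the thesis models.
from scipy.optimize import minimize

def objectives(x):
    p_ice, p_em, e_batt = x        # engine power [kW], motor power [kW], battery [kWh]
    fuel = 4.0 + 0.02 * p_ice - 0.015 * p_em   # toy surrogate [l/100 km]
    emis = 90.0 + 0.5 * p_ice - 0.3 * p_em     # toy surrogate [g CO2/km]
    return fuel, emis

def weighted_sum(x, w=0.5):
    f, e = objectives(x)
    return w * f / 5.0 + (1 - w) * e / 100.0   # normalized, weighted objectives

cons = [{"type": "ineq", "fun": lambda x: (x[0] + x[1]) - 110.0}]  # performance: total power >= 110 kW
bounds = [(40, 120), (10, 80), (1, 20)]        # user-defined lower/upper bounds
res = minimize(weighted_sum, x0=[80, 30, 5], bounds=bounds,
               constraints=cons, method="SLSQP")
print(res.x, res.fun)
```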
APA, Harvard, Vancouver, ISO, and other styles
18

Lichei, Andre. "Analysis and Optimization of the Packet Scheduler in Open MPI." Master's thesis, Universitätsbibliothek Chemnitz, 2006. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200601910.

Full text
Abstract:
We compare well-known measurement methods for LogGP parameters and discuss their accuracy and network contention. Based on this, a new, theoretically exact measurement method that does not saturate the network is derived and explained in detail. The applicability of our method is shown for the low-level communication API of Open MPI across several interconnection networks. Based on the LogGP model, we developed a low-overhead packet scheduling algorithm. It can handle different types of interconnects with different characteristics and is able to produce schedules which are very close to the optimum for both small and large messages. The efficiency of the algorithm for small messages is shown for an Open MPI implementation. The implementation uses the LogGP benchmark to obtain the LogGP parameters of the available interconnects and can thus adapt to any given system.
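A small sketch of scheduling under the LogGP model, in which a k-byte point-to-point message costs roughly T(k) = L + 2o + (k-1)G; splitting one message across two interconnects so both fragments finish together is the flavor of optimization the abstract describes. The parameter values are assumed, and this is not the thesis's algorithm:

```python
# Hedged sketch: split one k-byte message across two interconnects so that
# both fragments finish at the same time under LogGP: T(k) = L + 2o + (k-1)G.
def loggp_time(k, L, o, G):
    return L + 2 * o + (k - 1) * G

def split_two_networks(k, net_a, net_b):
    La, oa, Ga = net_a
    Lb, ob, Gb = net_b
    # Equalize finish times: La+2oa+(ka-1)Ga = Lb+2ob+(k-ka-1)Gb
    ka = ((Lb + 2 * ob - La - 2 * oa) + Gb * (k - 1) + Ga) / (Ga + Gb)
    ka = max(0, min(k, round(ka)))
    return ka, k - ka

slow = (30e-6, 5e-6, 8e-9)   # assumed (L, o, G) for a slower link
fast = (4e-6, 2e-6, 1e-9)    # assumed (L, o, G) for a faster link
ka, kb = split_two_networks(64_000, slow, fast)
print(ka, kb, loggp_time(ka, *slow), loggp_time(kb, *fast))  # near-equal times
```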
APA, Harvard, Vancouver, ISO, and other styles
19

Schincariol, Simone. "Optimization of nanopaint with metallic touch for polymeric auto components." Master's thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/16443.

Full text
Abstract:
Master's in Mechanical Engineering
Traditional chrome plating is a process that gives prestige to car components, but it is an expensive and lengthy process that requires metal substrates. This thesis proposes an alternative to the chroming process that involves the use of polymeric substrates coated with a chrome paint doped with high-thermal-conductivity nanoparticles, in order to obtain plastic components that give the feeling of a metallic touch. In this dissertation the production process of a nanopaint with added carbon nanotubes and nano-Fe3O4 was optimized, and RGB color tests, thermal analyses and electrical analyses were performed.
APA, Harvard, Vancouver, ISO, and other styles
20

Yesuf, Jemil N. "Determination of single and multi-component adsorption isotherms using nonlinear error functions and spreadsheet optimization technique /." Available to subscribers only, 2006. http://proquest.umi.com/pqdweb?did=1136096201&sid=12&Fmt=2&clientId=1509&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Kallio, R. (Roope). "Towards test suite optimization in software component testing by finding and analysing repeats in test trace." Master's thesis, University of Oulu, 2015. http://urn.fi/URN:NBN:fi:oulu-201505291729.

Full text
Abstract:
Regression testing is often a very resource- and time-consuming activity with large-scale software. However, regression tests may be run in continuous integration, where everything should run fast in order to get quick feedback. This thesis reviews the existing solutions for optimizing regression test suites, and test suites in general, by looking at various heuristic, computational intelligence and other methods. In most cases, these methods are based on the fact that test suites may contain redundant test cases. They also require code coverage, fault detection capability or requirement coverage information for each test case in order to be implemented. Additionally, the thesis studies ways to detect redundancy in test traces, and discusses whether this kind of redundancy information can be used for test suite optimization instead. A new concept, pair-wise supermaximal repeats, is presented. A case study was conducted in a software component continuous integration testing environment at Nokia Networks. The main methods include looking for patterns manually in a plot representing the test trace, as well as finding pair-wise supermaximal repeats in the test trace with an enhanced suffix array implementation developed using test-driven development; a new application, MRS Finder, was developed. The repeats are then analyzed. The study shows that a huge amount of repetition can be found, which makes analyzing all of it by hand difficult. The repeats also tend to occur more in the setup or teardown phase of a test case than in the action or assertion phase. The study shows that MRS Finder offers different optimization possibilities than originally thought.
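To illustrate the idea of repeats in a test trace, a naive n-gram counter is sketched below; it is a simplified stand-in for the enhanced-suffix-array search for pair-wise supermaximal repeats described in the abstract, and the toy trace is invented:

```python
# Hedged sketch: detect repeated subsequences in a test trace (naive n-grams,
# not the thesis's enhanced suffix array / supermaximal-repeat algorithm).
from collections import Counter

def repeated_ngrams(trace, n, min_count=2):
    grams = Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

trace = ["setup", "init_db", "login", "action", "assert", "teardown",
         "setup", "init_db", "login", "action2", "assert", "teardown"]
print(repeated_ngrams(trace, 3))
# Mirrors the study's observation: the repeated trigram here comes from the
# setup phase, while the action phase differs between the two test cases.
```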
APA, Harvard, Vancouver, ISO, and other styles
22

Alghanmi, Sameer Alghanmi. "OPPORTUNISTIC MAINTENANCE OPTIMIZATION BASED ON STOCHASTIC DEPENDENCE FOR MULTI-COMPONENT SYSTEM EXPOSED TO DIFFERENT FAILURE MODES." University of Akron / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=akron1541695508236435.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

SINGH, BHUPINDER. "A HYBRID MSVM COVID-19 IMAGE CLASSIFICATION ENHANCED USING PARTICLE SWARM OPTIMIZATION." Thesis, DELHI TECHNOLOGICAL UNIVERSITY, 2021. http://dspace.dtu.ac.in:8080/jspui/handle/repository/18864.

Full text
Abstract:
COVID-19 (novel coronavirus disease) is a serious illness that has killed millions of people and affected millions more around the world. As a result, technologies that enable rapid and accurate identification of COVID-19 illness will provide much assistance to healthcare practitioners. A machine learning-based approach is used for the detection of COVID-19. In general, artificial intelligence (AI) approaches have yielded positive outcomes in healthcare visual processing and analysis. CXR is the digital image processing modality that plays a vital role in the analysis of COVID-19 disease. Owing to the wide accessibility of large-scale annotated image databases, great success has been achieved using multiclass support vector machines for image classification. Image classification is the main challenge in medical diagnosis. Existing work used a CNN with a transfer learning mechanism, which offers a solution by transferring information from generic object recognition tasks. The DeTrac method has been used to detect the disease in CXR images, achieving an accuracy of 93.1-97 percent. In this proposed work, the hybrid PSO+MSVM method handles irregularities in the CXR image database by studying its group distances using a group or class mechanism. In the initial phase of the process, a median filter is used for noise reduction in the image. Edge detection is an essential step in the process of COVID-19 detection, and the Canny edge detector is implemented for the detection of edges in the chest x-ray images. PCA (Principal Component Analysis) is implemented for the feature extraction phase; multiple features are extracted through PCA, and the essential features are optimized by particle swarm optimization (PSO). For the detection of COVID-19 from CXR images, a hybrid multi-class support vector machine technique is implemented. A comparative analysis of various existing techniques is also presented in this work. The proposed system achieved an accuracy of 97.51 percent, SP of 97.49 percent, and SN of 98.0 percent, outperforming the compared systems DeTrac, GoogleNet, and SqueezeNet.
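A hedged sketch of the pipeline stages named in the abstract (median filter, Canny edges, PCA features, multi-class SVM). The PSO feature-selection step is replaced by a simple variance-based stand-in, and the image data and labels are placeholders; this is not the author's implementation:

```python
# Hedged sketch of a median-filter -> Canny -> PCA -> SVM pipeline.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def extract_features(gray_image):
    denoised = cv2.medianBlur(gray_image, 5)   # noise reduction
    edges = cv2.Canny(denoised, 50, 150)       # edge detection
    return edges.flatten().astype(np.float32)

# Placeholder data: random grayscale "CXR" arrays and 3 classes.
X_raw = [np.random.randint(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
y = np.random.randint(0, 3, 40)

X = np.stack([extract_features(img) for img in X_raw])
X_pca = PCA(n_components=20).fit_transform(X)  # feature extraction

# Stand-in for PSO feature optimization: keep the highest-variance components.
keep = np.argsort(X_pca.var(axis=0))[-10:]
clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X_pca[:, keep], y)
print(clf.score(X_pca[:, keep], y))
```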
APA, Harvard, Vancouver, ISO, and other styles
24

Osberghaus, Anna Zofia [Verfasser], and J. [Akademischer Betreuer] Hubbuch. "Optimization of chromatographic multi-component separations in silico using HTS-data / Anna Zofia Osberghaus. Betreuer: J. Hubbuch." Karlsruhe : KIT-Bibliothek, 2012. http://d-nb.info/1027141757/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Usan, Massimo 1967. "Automotive component product development enhancement through multi-attribute system design optimization in an integrated concurrent engineering framework." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34814.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, System Design & Management Program, 2005.
Includes bibliographical references (p. 211-218).
The automotive industry is facing a tough period. Production overcapacity and high fixed costs constrain companies' profits and challenge the very existence of some corporations. Strangled by reduced cash availability and petrified by organizational and product complexity, companies find themselves more and more inadequate to stay in sync with the pace and rate of change of consumer and regulatory demands. To boost profits, nearly everyone pursues cost cutting. However, aggressive cost cutting as the sole approach to fattening margins invariably results in a reduction of operational capabilities, which is likely to produce a decline in sales volume that leads to further cost reductions in a continuous death spiral. Long-term profitable growth requires, instead, a continuous flow of innovative products and processes. The focus should therefore be shifted from cost reduction to increased throughput. Automotive companies need to change their business model, morphing into new organizational entities based on systems thinking and change, which are agile and can swiftly adapt to the new business environment. The advancement of technology and the relentless increase in computing power will provide the necessary means for this radical transformation. This transformation cannot happen if the Product Development Process (PDP) does not break the iron gate of cycle time, product cost, development expense and reduced product performance that constrains it. A new approach to PD should be applied to the early phases, where the leverage is higher, and should be targeted at dramatic reduction of the time taken to perform design iterations, which, at 50-70% of the total development time, are a burden of today's practice. Multi-disciplinary Design Analysis and Optimization, enabled by an Integrated Concurrent Engineering virtual product development framework, has the required characteristics and the potential to respond to today's and tomorrow's automotive challenges. In this new framework, the product or system is not defined by a rigid CAD model which is then manipulated by product team engineers, but by a parametric flexible architecture handled by optimization and analysis software, with limited user interaction. In this environment, design engineers govern computer programs, which automatically select appropriate combinations of geometry parameters and seamlessly drive the analysis software programs (structural, fluid dynamic, costing, etc.) to compute the system's performance attributes. Optimization algorithms explore the design space, identifying the Pareto-optimal set of designs that satisfy the multiple simultaneous objectives they are given and, at the same time, the problem's constraints. Examples of application of the MDO approach to automotive systems are multiplying. However, the number of disciplines and engineering aspects considered is still limited to a few (two or three), thus not exploiting the full multi-disciplinary potential of the approach. In the present work, a prototype of an Enhanced Development Framework has been set up for a particular automotive subsystem: a maniverter (a combination of exhaust manifold and catalytic converter) for internal combustion engines ...
by Massimo Usan.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Cengel, Savas Mehmet. "A Case Study: Improvement Of Component Placement Sequence Of A Turret Style Smt Machine." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/2/12608057/index.pdf.

Full text
Abstract:
This study aims to improve the component placement sequencing of a number of PCBs produced on a turret-style SMT machine. After modeling the problem and finding that an optimal solution to the real PCB problem is hard to achieve because of the concurrent behavior of the machine and the PCB design parameters, two heuristics are developed by oversimplifying the problem down to a TSP. The performance of the heuristics and the lower bounds is evaluated by comparing the results with the optimal solution for two sets of randomly generated PCBs. The heuristic solutions are also compared with the lower bounds and with the current implementation for the real PCBs. It is found that the heuristics improve the company's current efficiency figures.
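As a toy illustration of the TSP simplification the abstract mentions, a nearest-neighbour tour over placement coordinates is sketched below; the thesis's actual heuristics also account for the turret machine's concurrent behaviour, which is omitted here, and the pad coordinates are invented:

```python
# Hedged sketch: nearest-neighbour TSP heuristic over PCB placement points.
import math

def nearest_neighbour_sequence(points):
    unvisited = list(range(1, len(points)))
    tour = [0]                       # start at the first placement
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

placements = [(0, 0), (12, 3), (2, 9), (11, 10), (5, 4)]  # hypothetical pads
print(nearest_neighbour_sequence(placements))
```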
APA, Harvard, Vancouver, ISO, and other styles
27

Zhang, Xuning. "Passive Component Weight Reduction for Three Phase Power Converters." Diss., Virginia Tech, 2014. http://hdl.handle.net/10919/47788.

Full text
Abstract:
Over the past ten years, there has been increased use of electronic power processing in alternative, sustainable, and distributed energy sources, as well as energy storage systems, transportation systems, and the power grid. Three-phase voltage source converters (VSCs) have become the converter of choice in many ac medium- and high-power applications due to their many advantages, such as high efficiency and fast response. For transportation applications, high power density is the key design target, since increasing power density can reduce fuel consumption and increase the total system efficiency. While power electronics devices have greatly improved the efficiency, overall performance and power density of power converters, using power electronic devices also introduces EMI issues to the system, which means filters are inevitable in those systems, and they make up a significant portion of the total system size and cost. Thus, designing for high power density for both power converters and passive components, especially filters, becomes the key issue for three-phase converters. This dissertation explores two different approaches to reducing the EMI filter size. One approach focuses on the EMI filters itself, including using advanced EMI filter structures to improve filter performance and modifying the EMI filter design method to avoid overdesign. The second approach focuses on reducing the EMI noise generated from the converter using a three-level and/or interleaving topology and changing the modulation and control methods to reduce the noise source and reduce the weight and size of the filters. This dissertation is divided into five chapters. Chapter 1 describes the motivations and objectives of this research. After an examination of the surveyed results from the literature, the challenges in this research area are addressed. Chapter 2 studies system-level EMI modeling and EMI filter design methods for voltage source converters. Filter-design-oriented EMI modeling methods are proposed to predict the EMI noise analytically. Based on these models, filter design procedures are improved to avoid overdesign using in-circuit attenuation (ICA) of the filters. The noise propagation path impedance is taken into consideration as part of a detailed discussion of the interaction between EMI filters, and the key design constraints of inductor implementation are presented. Based on the modeling, design and implementation methods, the impact of the switching frequency on EMI filter weight design is also examined. A two-level dc-fed motor drive system is used as an example, but the modeling and design methods can also be applied to other power converter systems. Chapter 3 presents the impact of the interleaving technique on reducing the system passive weight. Taking into consideration the system propagation path impedance, small-angle interleaving is studied, and an analytical calculation method is proposed to minimize the inductor value for interleaved systems. The design and integration of interphase inductors are also analyzed, and the analysis and design methods are verified on a 2 kW interleaved two-level (2L) motor drive system. Chapter 4 studies noise reduction techniques in multi-level converters. Nearest three space vector (NTSV) modulation, common-mode reduction (CMR) modulation, and common-mode elimination (CME) modulation are studied and compared in terms of EMI performance, neutral point voltage balancing, and semiconductor losses. 
In order to reduce the impact of dead time on CME modulation, two solutions are proposed: improving CME modulation and compensating for dead time. To verify the validity of the proposed methods for high-power applications, a 100 kW dc-fed motor drive system with EMI filters on both the AC and DC sides is designed, implemented and tested. This topology gains benefits from both interleaving and multilevel topologies, which can reduce the noise and filter size significantly. The trade-offs of system passive component design are discussed, and a detailed implementation method and full-power test results on the real system are presented to verify the validity of this study for higher-power converter systems. Finally, Chapter 5 summarizes the contributions of this dissertation and discusses some potential improvements for future work.
Ph. D.
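As context for the filter-sizing discussion, a common textbook starting point (not the dissertation's in-circuit-attenuation procedure) relates the required attenuation at the first significant noise harmonic to the corner frequency of a single-stage LC filter rolling off at 40 dB/decade:

```latex
% Textbook EMI-filter sizing starting point (generic, not the thesis method):
\mathrm{Att}_{\mathrm{req}}(f) = V_{\mathrm{noise}}(f) - V_{\mathrm{limit}}(f)
  + \mathrm{margin}\quad[\mathrm{dB\mu V}],
\qquad
f_c = f_1 \cdot 10^{-\mathrm{Att}_{\mathrm{req}}(f_1)/40},
\qquad
f_c = \frac{1}{2\pi\sqrt{L_f C_f}}
```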
APA, Harvard, Vancouver, ISO, and other styles
28

Teonacio, Bezerra Leonardo. "A component-wise approach to multi-objective evolutionary algorithms: From flexible frameworks to automatic design." Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/232586.

Full text
Abstract:
Multi-objective optimization is a growing field of interest for both theoretical and applied research, mostly due to the higher accuracy with which multi-objective problems (MOPs) model real-world scenarios. While single-objective models simplify real-world problems, MOPs can contain several (and often conflicting) objective functions to be optimized at once. This increased accuracy, however, comes at the expense of a higher difficulty that MOPs pose for optimization algorithms in general, and so a significant research effort has been dedicated to the development of approximate and heuristic algorithms. In particular, a number of proposals concerning the adaptation of evolutionary algorithms (EAs) for multi-objective problems can be seen in the literature, evidencing the interest they have received from the research community. This large number of proposals, however, does not mean that the full search power offered by multi-objective EAs (MOEAs) has been properly exploited. For instance, in an attempt to propose significantly novel algorithms, many authors propose a number of algorithmic components at once, but evaluate their proposed algorithms as monolithic blocks. As a result, each time a novel algorithm is proposed, several questions that should be addressed are left unanswered, such as (i) the effectiveness of individual components, (ii) the benefits and drawbacks of their interactions, and (iii) whether a better algorithm could be devised if some of the selected/proposed components were replaced by alternative options available in the literature. This component-wise view of MOEAs becomes even more important when tackling a new application, since one cannot anticipate how they will perform on the target scenario, nor predict how their components may interact. In order to avoid the expensive experimental campaigns that this analysis would require, many practitioners choose algorithms that in the end present suboptimal performance on the application they intend to solve, wasting much of the potential MOEAs have to offer. In this thesis, we take several significant steps towards redefining the existing algorithmic engineering approach to MOEAs. The first step is the proposal of a flexible and representative algorithmic framework that assembles components originally used by many different MOEAs from the literature, providing a way of seeing algorithms as instantiations of a unified template. In addition, the components of this framework can be freely combined to devise novel algorithms, offering the possibility of tailoring MOEAs according to the given application. We empirically demonstrate the efficacy of this component-wise approach by designing effective MOEAs for different target applications, ranging from continuous to combinatorial optimization. In particular, we show that the MOEAs one can tailor from a collection of algorithmic components are able to outperform the algorithms from which those components were originally gathered. More importantly, the improved MOEAs we present have been designed without manual assistance by means of automatic algorithm design. This algorithm engineering approach considers algorithmic components of flexible frameworks as parameters of a tuning problem, and automatically selects the component combinations that lead to better performance on a given application. In fact, this thesis also represents significant advances in this research direction.
Primarily, this is the first work in the literature to investigate this approach for problems with any number of objectives, as well as the first to apply it to MOEAs. Secondarily, our efforts have led to a significant number of improvements in the automatic design methodology applied to multi-objective scenarios, as we have refined several aspects of this methodology to be able to produce better-quality algorithms. A second significant contribution of this thesis concerns understanding the effectiveness of MOEAs (and in particular of their components) on the application domains we consider. Concerning combinatorial optimization, we have conducted several investigations on the multi-objective permutation flowshop problem (MO-PFSP) with four variants differing as to the number and nature of their objectives. Through thorough experimental campaigns, we have shown that some components are only effective when jointly used. In addition, we have demonstrated that well-known algorithms could easily be improved by replacing some of their components with other existing proposals from the literature. Regarding continuous optimization, we have conducted a thorough and comprehensive performance assessment of MOEAs and their components, a concrete first step towards clearly defining the state-of-the-art for this field. In particular, this assessment also encompasses many-objective optimization problems (MaOPs), a sub-field within multi-objective optimization that has recently stirred the MOEA community given its theoretical and practical demands. In fact, our analysis is instrumental to better understanding the application of MOEAs to MaOPs, as we have discussed a number of important insights for this field. Among the most relevant, we highlight the empirical verification of performance metric correlations, and also the interactions between structural problem characteristics and the difficulty increase incurred by the high number of objectives. The last significant contribution from this thesis concerns the previously mentioned automatically generated MOEAs. In an initial feasibility study, we have shown that MOEAs automatically generated from our framework are able to consistently outperform the original MOEAs from which their components were gathered, both for the MO-PFSP and for MOPs/MaOPs. The major contribution from this subset, however, regards continuous optimization, as we significantly advance the state-of-the-art for this field. To accomplish this goal, we have extended our framework to encompass approaches that are primarily used for continuous problems, although the conceptual modeling we use is general enough to be applied to any domain. From this extended framework we have then automatically designed state-of-the-art MOEAs for a wide range of experimental scenarios. Moreover, we have conducted an in-depth analysis to explain their effectiveness, correlating the role of algorithmic components with experimental factors such as the stopping criterion or the performance metric adopted. Finally, we highlight that the contributions of this thesis have been increasingly recognized by the scientific community. In particular, the contributions to the research of MOEAs applied to continuous optimization are remarkable given that this is the primary application domain for MOEAs, having been extensively studied for a couple of decades now.
As a result, chapters from this work have been accepted for publication in some of the best conferences and journals in our field.
Doctorat en Sciences de l'ingénieur et technologie
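To make the component-wise idea concrete, here is a minimal single-objective stand-in (not the thesis's multi-objective framework): an evolutionary loop whose selection, variation and replacement steps are plug-in parameters, so that "an algorithm" is one point in a space of component combinations that an automatic designer could search over. All components below are invented placeholders:

```python
# Hedged sketch: a component-wise evolutionary algorithm template.
import random

def tournament(pop, fit, k=2):
    return min(random.sample(pop, k), key=fit)

def gaussian_mutation(x, sigma=0.1):
    return [xi + random.gauss(0, sigma) for xi in x]

def replace_worst(pop, child, fit):
    worst = max(pop, key=fit)
    return [child if p is worst else p for p in pop]

def run_ea(fit, select=tournament, vary=gaussian_mutation,
           replace=replace_worst, n=20, dim=3, iters=500):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        pop = replace(pop, vary(select(pop, fit)), fit)
    return min(pop, key=fit)

# Swapping in a different `vary` or `select` yields a different algorithm
# from the same template.
print(run_ea(lambda x: sum(v * v for v in x)))
```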
APA, Harvard, Vancouver, ISO, and other styles
29

Abdelal, Qasem M. "Methodology for Using a Non-Linear Parameter Estimation Technique for Reactive Multi-Component Solute Transport Modeling in Ground-Water Systems." Diss., Virginia Tech, 2006. http://hdl.handle.net/10919/29758.

Full text
Abstract:
For a numerical or analytical model to be useful, it should be ensured that the model outcome matches the observations or field measurements during calibration. This process has typically been done by manual perturbation of the model input parameters. This research investigates a methodology for using a nonlinear parameter estimation technique (the Marquardt-Levenberg technique) with the multi-component reactive solute transport model SEAM3D. The reactive multi-component solutes considered in this study are chlorinated ethenes. Previous studies have shown that this class of compounds can be degraded by four different biodegradation mechanisms, and that the degradation path is a function of the prevailing oxidation-reduction conditions. Tests were performed at three levels. The first level utilized synthetic, model-generated data; the idea was to develop a methodology and perform preliminary testing in which "observations" could be generated as needed. The second level of testing involved a single redox zone model; the methodology was refined and tested using data from a site contaminated with chlorinated ethenes. The third level involved performing the tests on a multiple redox zone model; the methodology was tested, and statistical validation of the recommended methodology was performed. The results of the tests showed that there is a statistical advantage to choosing a subgroup of the available parameters to optimize instead of optimizing the whole available group. It is therefore recommended to perform a parameter sensitivity study prior to the optimization process to identify the suitable parameters. The methodology suggests optimizing the oxidation-reduction species parameters first and then calibrating the chlorinated ethenes model. The results of the tests also proved the advantage of sequential optimization of the model parameters: the parameters of the parent compound are optimized and updated in the daughter compound model, for which the parameters are then optimized, and so on. The test results suggested considering the concentrations of the daughter compounds when optimizing the parameters of the parent compounds. As for the observation weights, the tests suggest starting the applied observation weights at values of one during the optimization process and changing them if needed. Overall, the proposed methodology proved to be very efficient. The optimization methodology yielded sets of model parameters capable of generating concentration profiles with great resemblance to the observed concentration profiles in the two chlorinated ethene site models considered.
Ph. D.
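For reference, the Marquardt-Levenberg step at the core of this kind of calibration, in its standard weighted form (textbook notation, not drawn from the thesis): J is the Jacobian of the residuals r(θ) between simulated and observed concentrations, and W is the observation-weight matrix the abstract discusses:

```latex
% Standard weighted Levenberg-Marquardt update:
\left(J^{\mathsf T} W J + \lambda I\right)\delta = J^{\mathsf T} W\, r(\theta),
\qquad \theta \leftarrow \theta + \delta
```

Large λ makes the step behave like damped gradient descent; small λ approaches the Gauss-Newton step.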
APA, Harvard, Vancouver, ISO, and other styles
30

Sedin, John. "Optimization of Extreme Environment Cyclic Testing : Analysis of thermal cycle load cases on a plastic cab component through simulation and testing." Thesis, KTH, Fordonsdynamik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-198501.

Full text
Abstract:
The purpose of this Master thesis was to deepen the knowledge and understanding of the control parameters for Extreme Environment Cyclic Testing (EECT) of interior and exterior cab components. The investigated parameters were the temperature gradient, the length of the warm and cold sections, and the number of cycles. These parameters were investigated since they control the settings of the Extreme Environment Cyclic Testing. In addition, temperature data was gathered for analysis, along with a simplified case of sun radiation. The method consisted of three parts: the first part was a literature survey to gather relevant data and knowledge; the second part was simulations in COMSOL Multiphysics; and the third part consisted of physical testing at Scania and at SP in Borås. To gather temperature data, a simulation of a field test was performed in a wind tunnel at Scania. The results displayed a difference in the thermal image of the component when a simplified sun case was compared to a case without applied sunlight. Regarding temperatures and temperature gradients, it was found, based on testing from South Africa, that a temperature gradient can be up to 2.91°C/min in nature. The temperature results displayed a clear difference between the temperatures obtained in a cab and results from a car; the angle of the windscreen and the volume difference are believed to be part of the explanation. The simulations showed that the temperature gradient can be increased from 1°C/min to 2°C/min without significantly changing the time for which the material is heated or cooled. These results were supported by the component testing at Scania, which showed that the difference in strain range when the temperature gradient was changed between 1°C/min and 2°C/min was below 1.2%, corresponding to less than 1E-4. The testing at Scania also showed that the change in maximum strain for the different length configurations (3 h cold / 6 h warm, 4 h cold / 8 h warm, and 6 h cold / 12 h warm) could be neglected; the deviation in strain range between the 3h6h and 4h8h configurations was found to be below 1%, which in absolute terms was 5E-5. It was also shown that the variance of the strain range did not change significantly after six cycles; the maximum deviation in strain range between six and ten cycles was 0.15%. The testing at SP, with deformation scans using structured light, displayed fluctuation in the deformation during the first cycles and a consistent decrease of maximum deformation after 8 cycles. The conclusions from the sunlight simulations in COMSOL Multiphysics were that the difference between a simplified sun radiation case with a homogeneous ambient temperature and the more realistic one, with a set temperature on one surface of the component in combination with a homogeneous ambient temperature, could be neglected for components with a height up to 0.01 m. This was only valid if the temperature difference was below 10°C; for a larger temperature difference it was found valid only for a height up to 0.001 m.
Based on the results, the author recommends that the control parameters of the Extreme Environment Cyclic Testing be set as follows to obtain a more efficient testing method:
- The number of cycles in the EECT should be 8, since more cycles will not make a significant change to the results
- The time should be 3 h in the cold section and 6 h in the warm section
- The temperature increase should be 2°C/min to improve testing efficiency
An additional suggestion is to investigate the possibility of a pre-thermal heating phase in the EECT.
APA, Harvard, Vancouver, ISO, and other styles
31

Shah, Aditya Arunkumar. "Combining mathematical programming and SysML for component sizing as applied to hydraulic systems." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33890.

Full text
Abstract:
In this research, the focus is on improving a designer's capability to determine near-optimal sizes of components for a given system architecture. Component sizing is a hard problem to solve because of the presence of competing objectives, requirements from multiple disciplines, and the need to find a solution quickly for the architecture being considered. In current approaches, designers rely on heuristics and iterate over the multiple objectives and requirements until a satisfactory solution is found. To improve on this state of practice, this research introduces advances in the following two areas: (a) formulating a component sizing problem in a manner that is convenient to designers, and (b) solving the component sizing problem in an efficient manner so that all of the imposed requirements are satisfied simultaneously and the solution obtained is mathematically optimal. In particular, an acausal, algebraic, equation-based, declarative modeling approach is taken to solve component sizing problems efficiently. This is because global optimization algorithms exist for algebraic models and the computation time is considerably less than for the optimization of dynamic simulations. In this thesis, the mathematical programming language GAMS (General Algebraic Modeling System) and its associated global optimization solvers are used to solve component sizing problems efficiently. Mathematical programming languages such as GAMS are not convenient for formulating component sizing problems, and therefore the Systems Modeling Language developed by the Object Management Group (OMG SysML) is used to formally capture and organize models related to component sizing into libraries that can be reused to compose new models quickly by connecting them together. Model transformations are then used to generate low-level mathematical programming models in GAMS that can be solved using commercial off-the-shelf solvers such as BARON (Branch and Reduce Optimization Navigator) to determine the component sizes that satisfy the requirements and objectives imposed on the system. This framework is illustrated by applying it to an example application for sizing a hydraulic log splitter.
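As a toy version of the log-splitter sizing idea, here is a small algebraic model solved with scipy rather than GAMS/BARON; the decision variables, constants and mass objective are illustrative assumptions, not the thesis model:

```python
# Hedged sketch: size a hydraulic cylinder (bore, stroke) to meet a force
# requirement at minimum mass. All constants below are assumed values.
import math
from scipy.optimize import minimize

P_MAX = 20e6        # Pa: assumed maximum system pressure
F_REQ = 80e3        # N: assumed required splitting force
RHO_STEEL = 7850.0  # kg/m^3

def cylinder_mass(x):
    bore, stroke = x
    wall = 0.1 * bore                       # crude wall-thickness rule
    return RHO_STEEL * math.pi * bore * wall * stroke

cons = [{"type": "ineq",                    # P_MAX * piston area >= F_REQ
         "fun": lambda x: P_MAX * math.pi * (x[0] / 2) ** 2 - F_REQ}]
res = minimize(cylinder_mass, x0=[0.08, 0.5],
               bounds=[(0.02, 0.2), (0.3, 0.8)], constraints=cons, method="SLSQP")
print(f"bore = {res.x[0]*1000:.1f} mm, stroke = {res.x[1]*1000:.0f} mm")
```

A declarative GAMS model would state the same objective and constraint algebraically and let a global solver such as BARON certify optimality, which the local SLSQP solver here does not.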
APA, Harvard, Vancouver, ISO, and other styles
32

Sharrett, Zachary T. "Design and synthesis of substituted boronic acid containing mono-viologen and bis-viologen quenchers : optimization of the two-component saccharide sensing system /." Diss., Digital Dissertations Database. Restricted to UC campuses, 2008. http://uclibs.org/PID/11984.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Gao, Huanhuan. "Categorical structural optimization : methods and applications." Thesis, Compiègne, 2019. http://www.theses.fr/2019COMP2471/document.

Full text
Abstract:
The thesis concentrates on a methodological research on categorical structural optimization by means of manifold learning. The main difficulty of handling categorical optimization problems lies in the description of the categorical variables: they are presented in a category and do not have any order. Thus the treatment of the design space is a key issue. In this thesis, the non-ordinal categorical variables are treated as multi-dimensional discrete variables, thus the dimensionality of the corresponding design space becomes high. In order to reduce the dimensionality, manifold learning techniques are introduced to find the intrinsic dimensionality and map the original design space to a reduced-order space. The mechanisms of both linear and non-linear manifold learning techniques are firstly studied. Then numerical examples are tested to compare the performance of the manifold learning techniques mentioned above. It is found that PCA and MDS can only deal with linear or globally approximately linear cases. Isomap preserves the geodesic distances for a non-linear manifold; however, it is the most time-consuming. LLE preserves the neighbour weights and can yield good results in a short time. KPCA works like a non-linear classifier, and we prove why it cannot preserve distances or angles in some cases. Based on the reduced-order representation obtained by Isomap, graph-based evolutionary crossover and mutation operators are proposed to deal with categorical structural optimization problems, including the design of dome, six-story rigid frame and dame-like structures. The results show that the proposed graph-based evolutionary approach constructed on the reduced-order space performs more efficiently than traditional methods, including the simplex approach or the evolutionary approach without a reduced-order space.
In chapter 5, LLE is applied to reduce the data dimensionality, and a polynomial interpolation helps to construct the response surface from the lower-dimensional representation to the original data. Then the continuous search method of moving asymptotes is executed and yields a competitively good but inadmissible solution within only a few iterations. In the second stage, a discrete search strategy is proposed to find better solutions based on a neighbour search. The ten-bar truss and dome structural design problems are tested to show the validity of the method. In the end, this method is compared to the Simulated Annealing algorithm and the Covariance Matrix Adaptation Evolutionary Strategy, showing its better optimization efficiency. In chapter 6, in order to deal with the case in which the categorical design instances are distributed on several manifolds, we propose a k-manifolds learning method based on the Weighted Principal Component Analysis, and the obtained manifolds are integrated in the lower-dimensional design space. Then the method introduced in chapter 4 is applied to solve the ten-bar truss, the dome and the dame-like structural design problems.
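As a rough illustration of the reduced-order representation step, the following sketch embeds numerically encoded design instances with scikit-learn's Isomap; the random matrix is only a placeholder for real categorical design instances:

    import numpy as np
    from sklearn.manifold import Isomap

    rng = np.random.default_rng(0)
    X = rng.random((200, 12))        # 200 design instances encoded in 12 dimensions

    embedding = Isomap(n_neighbors=10, n_components=2)
    Z = embedding.fit_transform(X)   # geodesic-distance-preserving 2-D coordinates
    print(Z.shape)                   # (200, 2): search operators can now act on Z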
APA, Harvard, Vancouver, ISO, and other styles
34

Wang, Lu. "Nonnegative joint diagonalization by congruence for semi-nonnegative independent component analysis." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S141/document.

Full text
Abstract:
The Joint Diagonalization of a set of matrices by Congruence (JDC) appears in a number of signal processing problems, such as in Independent Component Analysis (ICA). Recent developments in ICA under the nonnegativity constraint of the mixing matrix, known as semi-nonnegative ICA, allow us to obtain a more realistic representation of some real-world phenomena, such as audios, images and biomedical signals. Consequently, during this thesis, the main objective was not only to design and develop semi-nonnegative ICA methods based on novel nonnegative JDC algorithms, but also to illustrate their interest in applications involving Blind Source Separation (BSS).
The proposed nonnegative JDC algorithms belong to two fundamental strategies of optimization. The first family, containing five algorithms, is based on Jacobi-like optimization. The nonnegativity constraint is imposed by means of a square change of variable, leading to an unconstrained problem. The general idea of the Jacobi-like optimization is to factorize the matrix variable as a product of a sequence of elementary matrices, each defined by only one parameter, and then to estimate these elementary matrices one by one in a specific order. The second family, containing one algorithm, is based on the alternating direction method of multipliers. Such an algorithm is derived by successively minimizing the augmented Lagrangian function of the cost function with respect to the variables and the multipliers. Experimental results on simulated matrices show a better performance of the proposed algorithms in comparison with several classical JDC methods, which do not exploit nonnegativity as a prior constraint. It appears that our methods can achieve a better estimation accuracy, particularly in difficult contexts, for example for a low signal-to-noise ratio, a small number of input matrices and a high coherence level of the mixing matrix. Then we show the interest of our approaches in solving real-life problems. To name a few, we are interested in i) the analysis of the chemical compounds in magnetic resonance spectroscopy, ii) the identification of the harmonically fixed spectral profiles (such as piano notes) of a single-channel music recording by decomposing its spectrogram, iii) the partial removal of the show-through effect in digital images, where the show-through effect is caused by scanning a semi-transparent paper. These applications demonstrate the validity and improvement of our algorithms compared with several state-of-the-art BSS methods.
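The square change of variable can be illustrated with a toy problem: parametrize the nonnegative factor as A = W∘W (elementwise) and minimize the fit to the target matrices with a generic quasi-Newton solver. The thesis's Jacobi-like and ADMM schemes are specialized and faster; this sketch only shows the constraint trick, on noiseless synthetic data:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    n, p, K = 4, 3, 6
    A_true = rng.random((n, p))
    Cs = [A_true @ np.diag(rng.random(p)) @ A_true.T for _ in range(K)]

    def unpack(theta):
        W = theta[:n * p].reshape(n, p)
        ds = theta[n * p:].reshape(K, p)
        return W**2, ds                  # A = W∘W is nonnegative by construction

    def loss(theta):
        A, ds = unpack(theta)
        return sum(np.linalg.norm(C - A @ np.diag(d) @ A.T, "fro")**2
                   for C, d in zip(Cs, ds))

    res = minimize(loss, rng.random(n * p + K * p), method="L-BFGS-B")
    print(loss(res.x))   # small on this noiseless toy problem; the objective is
                         # nonconvex, so random restarts may be needed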
APA, Harvard, Vancouver, ISO, and other styles
35

Dinh, Duc-Hanh. "Opportunistic Predictive Maintenance for Multi-Component Systems with Multiple Dependences." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0171.

Full text
Abstract:
Recently, maintenance modeling for multi-component systems with dependences (economic, stochastic, and/or structural dependences) has been extensively studied. However, most of the existing studies only consider one type of dependence, since combining more than one makes the models too complicated to analyze and solve. In practice, however, several types of dependences, especially the economic and structural dependences, may exist together in the system. To face this issue, the main objective of this thesis is to consider both economic and structural dependences in maintenance modeling and optimization for multi-component systems in the framework of predictive maintenance.
For this purpose, the impacts of economic and structural dependences on the maintenance cost, duration and degradation process of the components are first investigated. Mathematical models for quantifying the impacts of the economic and structural dependences are then developed. Finally, a multi-level opportunistic maintenance policy is proposed to consider the impacts of these dependences between components. Due to the structural dependence between components, when a maintenance action (preventive or corrective) occurs, only a few components need to be disassembled. The disassembled components are subject to both economic and structural dependences, while the non-disassembled components are subject only to economic dependence. In that way, the proposed maintenance policy is characterized by one preventive threshold, which is used to select survival components for preventive maintenance, and two opportunistic maintenance thresholds, which are used for opportunistic maintenance. When a maintenance action occurs, the first opportunistic threshold is defined to select the non-disassembled components (with only economic dependence), while the second opportunistic threshold is used to consider the disassembled components for opportunistic maintenance (with both economic and structural dependences). To evaluate the performance of the proposed opportunistic maintenance policy, a cost model is developed. A particle swarm optimization algorithm is then implemented to find the optimal decision variables. Finally, the proposed opportunistic maintenance policy is illustrated through a conveyor system to show its feasibility and added value in a maintenance optimization framework.
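A schematic version of such a threshold rule can be written in a few lines; the helper select_actions, the degradation values and all three thresholds below are invented for illustration, not the thesis's calibrated policy:

    def select_actions(deg, disassembled, t_prev=0.8, t_opp1=0.6, t_opp2=0.5):
        """deg: dict component -> degradation level in [0, 1];
        disassembled: set of components opened up by the triggering maintenance."""
        actions = {}
        for c, x in deg.items():
            if x >= t_prev:
                actions[c] = "preventive"                         # survival threshold
            elif c in disassembled and x >= t_opp2:
                actions[c] = "opportunistic (structural + economic)"
            elif c not in disassembled and x >= t_opp1:
                actions[c] = "opportunistic (economic only)"
        return actions

    print(select_actions({"gear": 0.85, "belt": 0.55, "roller": 0.62},
                         disassembled={"belt"}))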
APA, Harvard, Vancouver, ISO, and other styles
36

Brushammar, Tobias, and Erik Windelhed. "An Optimization-Based Approach to the Funding of a Loan Portfolio." Thesis, Linköping University, Department of Mathematics, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2664.

Full text
Abstract:
This thesis grew out of a problem encountered by a subsidiary of a Swedish multinational industrial corporation. This subsidiary is responsible for the corporation's customer financing activities. In the thesis, we refer to these entities as the Division and the Corporation. The Division needed to find a new approach to finance its customer loan portfolio. Risk control and return maximization were important aspects of this need. The objective of this thesis is to devise and implement a method that allows the Division to make optimal funding decisions, given a certain risk limit.

We propose a funding approach based on stochastic programming. Our approach allows the Division's portfolio manager to minimize the funding costs while hedging against market risk. We employ principal component analysis and Monte Carlo simulation to develop a multicurrency scenario generation model for interest and exchange rates. Market rate scenarios are used as input to three different optimization models. Each of the optimization models presents the optimal funding decision as positions in a unique set of financial instruments. By choosing between the optimization models, the portfolio manager can decide which financial instruments he wants to use to fund the loan portfolio.

To validate our models, we perform empirical tests on historical market data. Our results show that our optimization models have the potential to deliver sound and profitable funding decisions. In particular, we conclude that the utilization of one of our optimization models would have resulted in an increase in the Division's net income over the past 3.5 years.
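A compact sketch of PCA-based scenario generation in this spirit: fit principal components to historical daily rate changes, then simulate new paths by resampling component scores. The data here are synthetic placeholders rather than real market curves:

    import numpy as np

    rng = np.random.default_rng(2)
    hist = np.cumsum(rng.normal(0, 0.01, size=(500, 6)), axis=0)  # 6 tenors, 500 days
    dX = np.diff(hist, axis=0)

    mu = dX.mean(axis=0)
    U, s, Vt = np.linalg.svd(dX - mu, full_matrices=False)
    k = 3                                        # keep three principal factors
    vols = s[:k] / np.sqrt(len(dX) - 1)          # factor standard deviations

    n_scen, horizon = 1000, 20
    shocks = rng.normal(size=(n_scen, horizon, k)) * vols
    paths = hist[-1] + np.cumsum(shocks @ Vt[:k] + mu, axis=1)
    print(paths.shape)                           # (1000, 20, 6) simulated rate scenarios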
APA, Harvard, Vancouver, ISO, and other styles
37

Kim, Yong Yook. "Inverse Problems In Structural Damage Identification, Structural Optimization, And Optical Medical Imaging Using Artificial Neural Networks." Diss., Virginia Tech, 2004. http://hdl.handle.net/10919/11111.

Full text
Abstract:
The objective of this work was to employ artificial neural networks (NN) to solve inverse problems in different engineering fields, overcoming various obstacles in applying NN and benefiting from the experience of solving different types of inverse problems. The inverse problems investigated are: 1) damage detection in structures, 2) detection of an anomaly in a light-diffusive medium, such as human tissue, using optical imaging, 3) structural optimization of fiber optic sensor design. All of these are highly complex inverse problems, and their treatment benefits from employing neural networks, which are strong in generalization, pattern recognition, and fault tolerance. Moreover, the neural networks for the three problems are similar, and a method found suitable for solving one type of problem can be applied to other types. Solution of inverse problems using neural networks consists of two parts. The first is repeatedly solving the direct problem, obtaining the response of a system for known parameters and constructing the set of solutions to be used as training sets for the NN. The next step is training the neural networks so that the trained networks can produce the set of parameters of interest for a given response of the system. Mainly feed-forward backpropagation NN were used in this work. One of the obstacles in applying artificial neural networks is the need to solve the direct problem repeatedly to generate a large enough number of training sets. To reduce the time required to solve the direct problems of structural dynamics and photon transport in opaque tissue, the finite element method was used. To solve transient problems, which include some of the problems addressed here and are computationally intensive, the modal superposition and modal acceleration methods were employed. The need for a large number of training sets was fulfilled by automatically generating them using a script program in the MATLAB environment. This program automatically generated finite element models with different parameters, and it also included scripts that combined the whole solution processes in different engineering packages for the direct problem and the inverse problem using neural networks. Another obstacle in applying artificial neural networks to inverse problems is that the dimension and size of the training sets required can be too large to use NN effectively with the available computational resources. To overcome this obstacle, Principal Component Analysis is used to reduce the dimension of the inputs for the NN without excessively impairing the integrity of the data. Orthogonal Arrays were also used to select a smaller number of training sets that can efficiently represent the given system.
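A minimal sketch of the PCA-plus-network pipeline described here, with synthetic data standing in for finite element solutions; the response model and all sizes are assumptions:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    params = rng.random((400, 2))                  # e.g. damage location and severity
    responses = np.hstack([np.sin(params @ rng.random((2, 50))),
                           np.cos(params @ rng.random((2, 50)))])  # 100-D "measurements"

    # inverse map: reduce the 100-D response with PCA, then regress the parameters
    model = make_pipeline(PCA(n_components=10),
                          MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                       random_state=0))
    model.fit(responses, params)
    print(model.predict(responses[:3]))            # recovered parameters for 3 samples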
APA, Harvard, Vancouver, ISO, and other styles
38

Hrádek, Zbyněk. "Metodika modelování lepených spojů v automobilovém průmyslu." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2010. http://www.nusl.cz/ntk/nusl-229340.

Full text
Abstract:
This diploma thesis deals with the modeling of high-strength glued joints, which are used in barrier crash test simulations according to EuroNCAP methods. The purpose of the thesis is to design a methodology for modelling glued joints that gives acceptable agreement with their behavior observed in experiments.
APA, Harvard, Vancouver, ISO, and other styles
39

Yildirim, Asil. "Analysis And Classification Of Spelling Paradigm Eeg Data And An Attempt For Optimization Of Channels Used." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612763/index.pdf.

Full text
Abstract:
Brain Computer Interfaces (BCIs) are systems developed to control devices using only brain signals. In BCI systems, different mental activities to be performed by the users are associated with different actions on the device to be controlled. The Spelling Paradigm is a BCI application which aims to construct words by detecting letters using P300 signals recorded via electrodes attached to various points on the scalp. Reducing letter detection error rates and increasing the speed of letter detection are crucial for the Spelling Paradigm; in this way, disabled people can express their needs more easily using this application. In this thesis, two different methods, Support Vector Machine (SVM) and AdaBoost, are used for classification in the analysis. Classification and Regression Trees are used as the weak classifier of AdaBoost. Time-frequency domain characteristics of P300 evoked potentials are analyzed in addition to time domain characteristics. The Wigner-Ville Distribution is used for transforming time domain signals into the time-frequency domain. It is observed that classification results are better in the time domain. Furthermore, the optimal subset of channels that models P300 signals with minimum error rate is sought. A method that uses both SVM and AdaBoost is proposed to select channels, and 12 channels are selected in the time domain with this method. Finally, the effect of dimension reduction is analyzed using Principal Component Analysis (PCA) and AdaBoost methods.
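A skeletal version of the two classifiers compared here, applied to a placeholder feature matrix standing in for windowed P300 epochs (rows = epochs, columns = channel-time samples); the data are random and the scores are therefore meaningless except as a template:

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(300, 120))          # synthetic epochs
    y = rng.integers(0, 2, size=300)         # target vs non-target flashes

    svm = SVC(kernel="rbf", C=1.0)
    # AdaBoost with a shallow CART weak learner (scikit-learn >= 1.2 naming)
    ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=2),
                             n_estimators=100)
    for name, clf in [("SVM", svm), ("AdaBoost+CART", ada)]:
        print(name, cross_val_score(clf, X, y, cv=5).mean())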
APA, Harvard, Vancouver, ISO, and other styles
40

Ratliff, Adam R. "Designing a Surrogate Upper Body Mass for a Projectile Pedestrian Legform." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1204662790.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Perez, Gallardo Jorge Raúl. "Ecodesign of large-scale photovoltaic (PV) systems with multi-objective optimization and Life-Cycle Assessment (LCA)." Phd thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/10505/1/perez_gallardo_partie_1_sur_2.pdf.

Full text
Abstract:
Because of the increasing demand for energy worldwide and the numerous damages caused by a major use of fossil sources, the contribution of renewable energies has been increasing significantly in the global energy mix, with the aim of moving towards a more sustainable development. In this context, this work aims at the development of a general methodology for designing PV systems based on ecodesign principles, taking into account simultaneously both techno-economic and environmental considerations. In order to evaluate the environmental performance of PV systems, an environmental assessment technique was used based on Life Cycle Assessment (LCA). The environmental model was successfully coupled with the design-stage model of a PV grid-connected system (PVGCS). The PVGCS design model was then developed, involving the estimation of solar radiation received in a specific geographic location, the calculation of the annual energy generated from the solar radiation received, the characteristics of the different components and the evaluation of the techno-economic criteria through Energy PayBack Time (EPBT) and PayBack Time (PBT). The performance model was then embedded in an outer multi-objective genetic algorithm optimization loop based on a variant of NSGA-II. A set of Pareto solutions was generated, representing the optimal trade-off between the objectives considered in the analysis. A multi-variable statistical method (i.e., Principal Component Analysis, PCA) was then applied to detect and omit redundant objectives that could be left out of the analysis without disturbing the main features of the solution space. Finally, a decision-making tool based on M-TOPSIS was used to select the alternative that provided the best compromise among all the objective functions investigated. The results showed that while PV modules based on c-Si have a better performance in energy generation, the environmental aspect is what makes them fall to the last positions. TF PV modules present the best trade-off in all scenarios under consideration. Special attention was paid to the recycling process of PV modules, even if there is not yet enough information currently available for all the technologies evaluated; the main cause of this lack of information is the lifetime of PV modules. The data relating to the recycling processes for m-Si and CdTe PV technologies were introduced into the optimization procedure for ecodesign. By considering energy production and EPBT as optimization criteria in bi-objective optimization cases, the importance of the benefits of PV module end-of-life management was confirmed. An economic study of the recycling strategy must be investigated in order to have a more comprehensive view for decision making.
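The final TOPSIS-style ranking step can be sketched in a few lines of numpy; the alternatives, criteria and weights below are illustrative placeholders (rows = Pareto solutions, columns = objectives, all assumed to be minimized), not the thesis's M-TOPSIS variant:

    import numpy as np

    F = np.array([[1200., 2.1, 30.],   # e.g. cost, EPBT, impact for each solution
                  [1100., 2.4, 25.],
                  [1300., 1.9, 40.]])
    w = np.array([0.4, 0.3, 0.3])

    R = F / np.linalg.norm(F, axis=0)            # vector-normalize each criterion
    V = R * w
    ideal, nadir = V.min(axis=0), V.max(axis=0)  # all criteria minimized
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - nadir, axis=1)
    closeness = d_worst / (d_best + d_worst)
    print(np.argsort(closeness)[::-1])           # best-compromise alternative first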
APA, Harvard, Vancouver, ISO, and other styles
42

Edwall, Bill. "Virtual Power Plant Optimization Utilizing the FCR-N Market : A revenue maximization modelling study based on building components and a Battery Energy Storage System. Based on values from Sweden's first virtual power plant, Väla." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-279520.

Full text
Abstract:
Renewable energy resources are projected to claim a larger part of the Swedish power mix in coming years. This could potentially increase frequency fluctuations in the power grid due to the intermittency of renewable power generating resources. These fluctuations can in turn cause issues in the power grid if left unchecked. In order to resolve these issues, countermeasures are employed. One such countermeasure is for private actors to regulate power; in exchange they are financially compensated through reserve markets. The reserve market studied in this thesis is called Frequency Containment Reserve – Normal (FCR-N). Currently hydroelectric power provides almost all regulated power within this market. As the need for power regulation is expected to increase in the coming years, other technologies capable of power regulation need to be studied. This thesis focuses on one such technology: virtual power plants. While virtual power plants are operating in other parts of the world, none were operating in Sweden at the time of writing. As a result, the nature of an optimized virtual power plant and the economic benefits of optimization had not been previously investigated. To answer such questions, this thesis modelled and optimized the revenue of a virtual power plant. The examined virtual power plant consisted of cooling chillers, lighting, ventilation fans and a battery energy storage system, where varying their total power demand allowed them to provide power regulation. With the virtual power plant market in Sweden being in its infancy, this thesis serves as a first look into how an optimized virtual power plant using these components could function. To put the economic results of the optimization into context, a comparative model was constructed, based on the semi-static linear model that the thesis's industry partner Siemens currently uses. For the simulated scenarios, the optimized model generated at least 85% higher net revenues than the semi-static linear model. The increase in revenue holds potential to increase the uptake of virtual power plants on the Swedish market, thus increasing stability in the power grid and easing the transition to renewable energy.
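A toy linear program in the spirit of such revenue maximization: choose hourly FCR-N capacity bids for a battery subject to power and energy limits. The prices, battery data and the energy-headroom rule are invented for illustration, not the thesis's model:

    import numpy as np
    from scipy.optimize import linprog

    price = np.array([18., 22., 30., 25.])   # assumed FCR-N prices, EUR/MW per hour
    p_max, e_max = 1.0, 2.0                  # MW power rating, MWh usable energy

    # maximize price @ bid  ==  minimize -price @ bid;
    # each MW bid per hour is assumed to reserve 0.5 MWh of energy headroom
    A_ub = [np.full(4, 0.5)]                 # total reserved energy <= e_max
    res = linprog(-price, A_ub=A_ub, b_ub=[e_max],
                  bounds=[(0, p_max)] * 4, method="highs")
    print(res.x, -res.fun)                   # hourly bids [MW] and revenue [EUR]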
APA, Harvard, Vancouver, ISO, and other styles
43

Rabêlo, Ricardo de Andrade Lira. "Componentes de software no planejamento da operação energética de sistemas hidrotérmicos." Universidade de São Paulo, 2010. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-15092010-102039/.

Full text
Abstract:
The operation planning of hydrothermal power systems can be classified as a nonseparable, nonlinear, nonconvex, stochastic, large-scale optimization problem. The complexity of this problem justifies the need for various computational tools with different approaches. This work carries out studies related to the operation planning of hydrothermal power systems through the use of software components and fuzzy inference systems. It provides and implements a development process (UML Components), based on software components, for building computational models of optimization and simulation to support the operation planning of the Brazilian hydrothermal power system. The UML Components development process is applied in a way that guides the software development, encompassing the different activities realized in the workflows as well as the various artifacts produced. As an additional contribution, in parallel to the use of software components, an operational policy for reservoirs based on Takagi-Sugeno fuzzy inference systems is presented. The proposed policy is based on the optimization of hydropower operation, using the optimization model developed.
Through the optimized operation, relations between the system's stored energy and the reservoir volume of each plant are obtained. With these relationships, the parameters of the first-order Takagi-Sugeno model are adjusted. In choosing a fuzzy inference system for determining the operational policy of a set of reservoirs, a strategy of action/control is obtained that can be monitored and interpreted, including from a linguistic standpoint. Another benefit of the fuzzy system application is that human specialists can consistently represent, through linguistic rules, their decision-making process, making the fuzzy system's action as consistent and sound as theirs.
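A minimal first-order Takagi-Sugeno inference sketch: Gaussian memberships over stored energy and linear consequents giving an operating volume. The two rules and their parameters are invented, not the fitted model from the thesis:

    import numpy as np

    def ts_policy(energy):
        centers, sigma = np.array([0.3, 0.8]), 0.2          # rule premises (assumed)
        a, b = np.array([0.9, 0.4]), np.array([0.05, 0.5])  # consequents y = a*x + b
        w = np.exp(-0.5 * ((energy - centers) / sigma) ** 2)
        y = a * energy + b
        return float(np.sum(w * y) / np.sum(w))             # weighted-average output

    for e in (0.2, 0.5, 0.9):
        print(e, ts_policy(e))   # normalized operating volume per energy level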
APA, Harvard, Vancouver, ISO, and other styles
44

Leslabay, Pablo Enrique [Verfasser], and A. [Akademischer Betreuer] Albers. "Robust optimization of mechanical systems affected by large system and component variability = Robustheitsbasierte Optimierung für mechanische Systeme mit großer Streuung der relevanten System- und Elementgrößen / Pablo Enrique Leslabay ; Betreuer: A. Albers." Karlsruhe : KIT-Bibliothek, 2021. http://d-nb.info/1235072568/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

DESHMUKH, DINAR VIVEK. "Design Optimization of Mechanical Components." University of Cincinnati / OhioLINK, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1028738547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Almeida, André Batista de 1978. "Otimização estrutural em componentes mecânicos utilizando algoritmos genéticos : Structural optimization in mechanics components using genetics algorithms." [s.n.], 2014. http://repositorio.unicamp.br/jspui/handle/REPOSIP/265956.

Full text
Abstract:
This study consisted of using a metaheuristic optimization tool, the Genetic Algorithm, for mechanical component shape optimization. It involved the study of usual and several new tools for optimization, focusing on GA. For this, specific computing routines were developed. The results obtained were compared to those generated by a commercial program (Ansys®) that uses traditional optimization tools (the subproblem approximation method). The study also compared its results with those obtained in the literature. The results showed that it is possible to use GA for shape optimization, with adequate performance relative to the subproblem approximation method and the literature results. It was also shown that, using optimized parameters, it is possible to obtain more adequate results with the program developed in this dissertation.
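A bare-bones real-coded genetic algorithm of the kind studied here, shown on a stand-in objective (the sphere function) rather than an FE-based shape model; all operator choices and rates are illustrative:

    import numpy as np

    rng = np.random.default_rng(5)

    def fitness(x):                       # placeholder for the structural objective
        return -np.sum(x**2, axis=1)      # maximize => minimize sum of squares

    pop = rng.uniform(-5, 5, size=(40, 3))
    for gen in range(100):
        f = fitness(pop)
        parents = pop[np.argsort(f)[::-1][:20]]            # truncation selection
        mates = parents[rng.permutation(20)]
        alpha = rng.random((20, 1))
        children = alpha * parents + (1 - alpha) * mates   # arithmetic crossover
        children += rng.normal(0, 0.1, children.shape)     # Gaussian mutation
        pop = np.vstack([parents, children])
    print(pop[np.argmax(fitness(pop))])   # near the optimum at the origin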
APA, Harvard, Vancouver, ISO, and other styles
47

Olšarová, Nela. "Inference propojení komponent." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236505.

Full text
Abstract:
The Master's Thesis deals with the design of a hardware component interconnection inference algorithm intended for use in the FPGA schema editor integrated into the educational integrated development environment VLAM IDE. The aim of the algorithm is to support the user by finding an optimal interconnection of two given components. The editor and the development environment are implemented as an Eclipse plugin using the GMF framework. A brief description of these technologies and of embedded systems design is followed by the design of the inference algorithm. This problem is a topic of combinatorial optimization, related to bipartite matching and the assignment problem. After this, the implementation of the algorithm is described, followed by tests and a summary of achieved results.
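Since the interconnection task is related to the assignment problem, a minimal sketch of matching output ports to input ports by minimizing a cost matrix with the Hungarian algorithm follows; the cost values are invented:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j]: penalty for wiring output port i to input port j
    # (e.g. a width mismatch or name dissimilarity in a real editor)
    cost = np.array([[0, 4, 2],
                     [3, 0, 5],
                     [1, 2, 0]])
    rows, cols = linear_sum_assignment(cost)
    print(list(zip(rows, cols)), cost[rows, cols].sum())  # optimal port pairing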
APA, Harvard, Vancouver, ISO, and other styles
48

Talevi, Iacopo. "Optimal and Automated Microservice Deployment: formal definition, implementation and validation of a deployment engine." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2019. http://amslaurea.unibo.it/18426/.

Full text
Abstract:
The main purpose of this work was to study the problem of optimal and automated deployment and reconfiguration (at the architectural level) of microservice systems, proving formal properties and realizing an implemented solution. It started from the Aeolus component model, which was used to formally define the problem of deploying component-based software systems and to prove different results about decidability and complexity. In particular, the Aeolus authors formally prove that, in the general case, this problem is undecidable. Starting from these results we expanded the analysis of automated deployment and scaling, focusing on microservice architectures. Using a model inspired by Aeolus and considering the characteristics of microservices, we formally proved that optimal and automated deployment and scaling for microservice architectures are algorithmically treatable. However, the decision version of the problem is NP-complete, and to obtain the optimal solution it is necessary to solve an NP-optimization problem. To show the applicability of our approach we also realized a model of a simple but realistic case study. The model is developed using the Abstract Behavioral Specification (ABS) language, and to calculate the different deployment and scaling plans we used an ABS tool called SmartDepl. To solve the problem, SmartDepl relies on Zephyrus2, a configuration optimizer that computes the optimal deployment configuration of the described applications. This work resulted in an extended abstract accepted at the Microservices 2019 conference in Dortmund (Germany), a paper accepted at the FASE 2019 conference (part of ETAPS) in Prague (Czech Republic), and an accepted book chapter.
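To make the optimization flavor of the deployment problem concrete, here is a toy first-fit-decreasing placement of service instances onto virtual machines under a memory limit; the real toolchain (SmartDepl/Zephyrus2) solves the problem exactly with a constraint solver, and the services and sizes below are invented:

    services = {"gateway": 2, "auth": 1, "orders": 3, "search": 2}   # GB each
    vm_capacity = 4
    vms = []
    for name, mem in sorted(services.items(), key=lambda kv: -kv[1]):
        for vm in vms:                     # first-fit decreasing heuristic
            if vm["free"] >= mem:
                vm["services"].append(name)
                vm["free"] -= mem
                break
        else:                              # no VM fits: open a new one
            vms.append({"services": [name], "free": vm_capacity - mem})
    print([(vm["services"], vm["free"]) for vm in vms])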
APA, Harvard, Vancouver, ISO, and other styles
49

Berguin, Steven Henri. "A method for reducing dimensionality in large design problems with computationally expensive analyses." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/53504.

Full text
Abstract:
Strides in modern computational fluid dynamics and leaps in high-power computing have led to unprecedented capabilities for handling large aerodynamic problems. In particular, the emergence of adjoint design methods has been a breakthrough in the field of aerodynamic shape optimization. It enables expensive, high-dimensional optimization problems to be tackled efficiently using gradient-based methods in CFD, a task that was previously inconceivable. However, adjoint design methods are intended for gradient-based optimization; the curse of dimensionality is still very much alive when it comes to design space exploration, where gradient-free methods cannot be avoided. This research describes a novel approach for reducing dimensionality in large, computationally expensive design problems to a point where gradient-free methods become possible. This is done using an innovative application of Principal Component Analysis (PCA), where the latter is applied to the gradient distribution of the objective function, something that had not been done before. This yields a linear transformation that maps a high-dimensional problem onto an equivalent low-dimensional subspace. None of the original variables are discarded; they are simply linearly combined into a new set of variables that are fewer in number. The method is tested on a range of analytical functions, a two-dimensional staggered airfoil test problem and a three-dimensional Over-Wing Nacelle (OWN) integration problem. In all cases, the method performed as expected and was found to be cost effective, requiring only a relatively small number of samples to achieve large dimensionality reduction.
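A short numpy sketch of the core idea: apply PCA to sampled gradients of the objective and keep the leading directions as a reduced design subspace. The quadratic test objective below, varying only within a hidden 2-D subspace, is a stand-in for a CFD-based function:

    import numpy as np

    rng = np.random.default_rng(6)
    n = 50                                     # original design dimension
    u = rng.normal(size=(n, 2))                # hidden active directions

    def grad_f(x):                             # gradient of f(x) = 0.5 x'uu'x
        return u @ (u.T @ x)                   # always lies in span(u)

    G = np.array([grad_f(rng.normal(size=n)) for _ in range(200)])
    _, s, Vt = np.linalg.svd(G - G.mean(axis=0), full_matrices=False)
    k = int(np.sum(s > 1e-8 * s[0]))           # numerical rank of the gradients
    W = Vt[:k].T                               # maps reduced variables back to R^n
    print(k)                                   # 2: the problem is intrinsically 2-D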
APA, Harvard, Vancouver, ISO, and other styles
50

Ebeling, A. (Andreas). "Cost optimization for wear resistant components." Master's thesis, University of Oulu, 2013. http://urn.fi/URN:NBN:fi:oulu-201308291655.

Full text
Abstract:
The goal of this thesis was to give a cost view on wear and on how to optimize components cost-wise taking operating conditions into consideration, though it was later decided not to limit the focus only to wear-resistant applications. The thesis was made for Componenta as a part of the HICON project, which studies high-friction and low-wear contacts. HICON is a subproject of DEMAPP under the Finnish consortium company FIMECC Oy, Finnish Metals and Engineering Competence Cluster. The work was done by studying books and articles; having conversations within Componenta, HICON, and the customers of the case examples; studying the results of the case examples; and evaluating the process in general. A five-step material selection process was developed, of which four are consecutive steps: finding out customer demands, preselection of materials, House of Quality (HOQ) and decision matrix; and one is a process-long step: life cycle costing. HOQ, which is the first step of Quality Function Deployment (QFD), was used in order to find out which material properties are the most important and to obtain more objective weightings for the decision matrix. The decision matrix is the fourth step, giving a guiding suggestion for the optimized material. Life cycle costing was separated into an independent, process-long step because of its high importance in cost optimization; it should guide decisions throughout the process. Four wear-related case examples from actual customers of Componenta were studied: a sheave-wire contact, a wear plate in a coupling, a feed roller in a forestry machine, and a wear plate in a concrete pump. It was found that there are very promising aspects in this process, but the process is only guiding and the results should not be taken with certainty. Especially the life cycle costing step was found to be very promising, because when it is kept in mind throughout the process, significant savings in life cycle costs (LCC) can be found. For example, in light trucks the significance of weight is approximately $12/kg, or, according to simple calculations made at Componenta, 7 €/kg in life cycle costs. HOQ, on the other hand, seems very difficult to apply when deciding correlations between customer demands and material properties, and it is very difficult to estimate the accuracy of its results. The decision matrix seems simpler and easier to understand, but it is only guiding. It is difficult to estimate the efficiency of the process based on four case examples, but it is a customer-oriented approach with a focus on life cycle costing, so if the process is used selectively, with the parts that work best for the company, it could be useful. The process could probably be used with minor changes in many other applications as well, and its applicability could be even better.
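A tiny weighted decision-matrix sketch matching the fourth step described above; the candidate materials, criterion scores and weights are invented, with the weights imagined as coming from the HOQ step:

    import numpy as np

    materials = ["quenched steel", "white cast iron", "overlay-welded plate"]
    # columns: wear resistance, toughness, relative cost (higher = better, 1-5)
    scores = np.array([[4, 4, 3],
                       [5, 2, 4],
                       [4, 3, 2]], dtype=float)
    weights = np.array([0.5, 0.3, 0.2])

    totals = scores @ weights
    for m, t in sorted(zip(materials, totals), key=lambda p: -p[1]):
        print(f"{m}: {t:.2f}")             # a guiding ranking, not a final decision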
APA, Harvard, Vancouver, ISO, and other styles