Dissertations / Theses on the topic 'Formulation optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Formulation optimization.'


1

Berg, Lisa. "Optimization of a biostimulant formulation." Thesis, KTH, Skolan för bioteknologi (BIO), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-215305.

2

Navarro, Luis Fernando Piccino. "Application of hybrid-mixed stress formulation on topology optimization." Repositório Institucional da UFABC, 2018.

Abstract:
Advisor: Prof. Dr. Wesley Góis
Master's dissertation - Universidade Federal do ABC, Graduate Program in Mechanical Engineering, São Bernardo do Campo, 2018.
Over the past decades, the Topology Optimization Method has become one of the most popular conceptual design tools in both industry and academia, and many advances in the field have followed. Despite the method's maturity, only recently has research proposed adopting non-conventional formulations of the Finite Element Method to address stress constraints, checkerboard instability, incompressible media, and pressure-load problems in Topology Optimization. In this vein, the present dissertation explores an alternative Finite Element formulation for the topology optimization of continuum structures: the Hybrid-Mixed Stress Formulation (HMSF), in which the stresses in the domain and the displacements in the domain and on the boundary are all primary variables, i.e., approximated directly by the method. The minimum-compliance problem with a volume constraint, described entirely in terms of the stress field (with compliance computed from the complementary energy), is examined, as is the problem with a global stress constraint imposed through a P-mean norm. The resulting designs are free of checkerboard instability and agree with results in the literature, most of which are based on the classical Finite Element Method; the optimized layouts also show no fading at the edges. For the stress-constrained problem, the formulation relieves stress concentrations, in agreement with the literature, although it did not achieve layouts with maximum stress below the prescribed limit.
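As a minimal illustration of the global stress measure mentioned in this abstract, a P-mean norm aggregates all element stresses into a single scalar that approaches the true maximum as the exponent grows; the element values and exponents below are arbitrary, and this sketch is not the dissertation's implementation:

```python
import numpy as np

def p_mean(stresses, p=8):
    """Aggregate element stresses into one global value; the P-mean
    grows toward max(stresses) as the exponent p increases."""
    s = np.asarray(stresses, dtype=float)
    return float(np.mean(s ** p) ** (1.0 / p))

# toy element von Mises stresses in MPa (arbitrary values)
sigma = [120.0, 180.0, 240.0, 150.0]
print(p_mean(sigma, p=8), p_mean(sigma, p=64))
```

A single constraint of the form `p_mean(sigma) <= limit` then stands in for one constraint per element; because the P-mean underestimates the maximum, this also suggests why a design can relieve stress concentrations yet still exceed the prescribed limit locally.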
3

Chen, Ying. "Formulation of a Multi-Disciplinary Design Optimization of Containerships." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/36069.

Abstract:
Developing a computer tool that yields the best ship design using an optimization technique is one of the objectives of the FIRST project. With a containership design chosen as the test case, the Design Optimization Tools (DOT) package is used as the optimization tool. The problem is approached from the ship owner's point of view: the required freight rate is chosen as the objective function, because the ship owner's chief concern is whether the ship will make a profit and, if so, how much. DOT, like any other numerical optimization tool, only approximates the optimum design and relies on numerical approximation during the optimization, so it is very important for users to formulate the optimization problem carefully in order to obtain a stable and reasonable solution. Developing a geometric module and choosing suitable empirical formulas for performance evaluation are also major parts of the project.
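The formulation idea described here, minimizing required freight rate (RFR) over a ship's principal dimensions, can be sketched with a toy cost model; every coefficient below is hypothetical, and a coarse grid search stands in for the DOT package:

```python
import itertools

def required_freight_rate(L, B):
    """Toy model (not from the thesis): annual costs rise with size,
    cargo capacity rises with L*B; RFR = annual cost / annual cargo."""
    capital = 1.2e6 + 900.0 * (L * B) ** 1.1   # hypothetical capital cost
    fuel = 40.0 * L ** 1.5                     # hypothetical fuel cost
    cargo = 2.0e4 * L * B                      # hypothetical TEU-mile proxy
    return (capital + fuel) / cargo

# coarse grid search over length and beam in metres (a stand-in for DOT)
best = min(((required_freight_rate(L, B), L, B)
            for L, B in itertools.product(range(150, 301, 10),
                                          range(20, 41, 2))),
           key=lambda t: t[0])
print(best)
```

The point of the sketch is the problem structure, not the numbers: an economic objective evaluated on top of engineering sizing relations, exactly the coupling the abstract says must be formulated carefully.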
Master of Science
4

Kato, Junji, and Ekkehard Ramm. "Multiphase Layout Optimization for Fiber Reinforced Composites applying a Damage Formulation." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1244047693853-06457.

Abstract:
The present study addresses an optimization strategy for maximizing the structural ductility of Fiber Reinforced Concrete (FRC) with long textile fibers. Because both concrete and fiber are brittle, and the interfacial behavior between these constituents is complex, the structural response of FRC is highly nonlinear; accounting for this material nonlinearity, including the interface, is mandatory for this kind of composite. In the present contribution, three optimization strategies based on a damage formulation are described. The performance of the proposed method is demonstrated by a series of numerical examples, which verify that the ductility can be substantially improved.
5

Kato, Junji. "Material optimization for fiber reinforced composites applying a damage formulation." Stuttgart Inst. für Baustatik und Baudynamik, 2010. http://d-nb.info/1001076508/34.

6

Fazzolari, Antonio. "An aero-structure adjoint formulation for efficient multidisciplinary wing optimization." [S.l.] : [s.n.], 2005. http://www.digibib.tu-bs.de/?docid=00013997.

7

Curtis, Shane Keawe. "A Method for Exploring Optimization Formulation Space in Conceptual Design." BYU ScholarsArchive, 2012. https://scholarsarchive.byu.edu/etd/3213.

Abstract:
Formulation space exploration is a new strategy for multiobjective optimization that facilitates both divergent searching and convergent optimization during the early stages of design. The formulation space is the union of all variable and design objective spaces identified by the designer as being valid and pragmatic problem formulations. By extending a computational search into the formulation space, the solution to an optimization problem is no longer predefined by any single problem formulation, as it is with traditional optimization methods. Instead, a designer is free to change, modify, and update design objectives, variables, and constraints and explore design alternatives without requiring a concrete understanding of the design problem a priori. To facilitate this process, a new vector/matrix-based definition for multiobjective optimization problems is introduced, which is dynamic in nature and easily modified. Additionally, a set of exploration metrics is developed to help guide designers while exploring the formulation space. Finally, several examples are presented to illustrate the use of this new, dynamic approach to multiobjective optimization.
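The dynamic, easily modified problem definition described in this abstract might look something like the following sketch; the class name, fields, and update style are illustrative only, not the thesis's actual vector/matrix notation:

```python
from dataclasses import dataclass, field

@dataclass
class Formulation:
    """Hypothetical mutable multiobjective problem definition: the
    designer can add or remove pieces as understanding evolves."""
    variables: dict = field(default_factory=dict)    # name -> (lo, hi)
    objectives: list = field(default_factory=list)   # callables, minimized
    constraints: list = field(default_factory=list)  # callables, g(x) <= 0

    def feasible(self, x):
        return (all(lo <= x[k] <= hi for k, (lo, hi) in self.variables.items())
                and all(g(x) <= 0 for g in self.constraints))

    def evaluate(self, x):
        return [f(x) for f in self.objectives]

f = Formulation(variables={"t": (1.0, 10.0)},
                objectives=[lambda x: x["t"] ** 2])
f.objectives.append(lambda x: 1.0 / x["t"])   # designer adds an objective later
f.constraints.append(lambda x: 2.0 - x["t"])  # ...and a new constraint t >= 2
print(f.feasible({"t": 3.0}), f.evaluate({"t": 3.0}))
```

The design choice being illustrated is that no single frozen formulation predefines the solution: the same search machinery can be re-run after each modification.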
8

Ezechukwu, Obinna Chidiebere. "Automated formulation of financial optimization models with support for multiple views." Thesis, Imperial College London, 2004. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.415016.

9

Lin, Shiow-Min. "Formulation and evaluation of a methodology for network-wide signal optimization." [Florida] : State University System of Florida, 1999. http://etd.fcla.edu/etd/uf/1999/amp7369/lin.pdf.

Abstract:
Thesis (Ph. D.)--University of Florida, 1999.
Title from first page of PDF file. Document formatted into pages; contains xvii, 161 p.; also contains graphics. Vita. Includes bibliographical references (p. 152-159).
10

Coleman, Jessica M. Ms. "Formulation and Optimization of Aliskiren Loaded Poly(Lactide-Co-Glycolide) Nanoparticles." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/honors/275.

Abstract:
Aliskiren is a non-peptide, orally active renin inhibitor with poor absorption and low bioavailability (~2.6%). To improve on the current drug delivery system, a commercially available biodegradable copolymer, poly(lactide-co-glycolide) (PLGA), was employed for a nanoparticle (NP) reformulation of aliskiren. An emulsion-diffusion-evaporation technique was implemented in which aliskiren and PLGA were dissolved in dichloromethane, ethyl acetate, or ethyl acetate/acetone. The organic phase was then added drop-wise to an aqueous phase containing 0.25% w/v didodecyldimethylammonium bromide (DMAB) as stabilizer. Following sonication, NP diffusion was expedited by the addition of water, and the organic phase was evaporated to form a suspension. The suspension was centrifuged at 10,000 rpm, and the supernatant was analyzed for drug entrapment efficiency by ultraviolet-visible spectroscopy; particle morphology was examined by transmission electron microscopy (TEM). Because it gave the highest entrapment efficiency (82.68 ± 1.18%), ethyl acetate was used as the organic solvent in further testing, such as examining the effects of varying the DMAB stabilizer concentration (0.10, 0.25, 0.50, or 1.00% w/v) and the centrifugation speed (10,000 or 12,000 rpm). The optimum formulation was identified from NP characteristics such as entrapment efficiency, particle size, zeta potential, and polydispersity index (PDI), with a NICOMP Particle Sizer used to measure the latter three. The smallest NP size (67.27 ± 0.87 nm) was achieved with 0.50% w/v DMAB and centrifugation at 12,000 rpm, while the highest zeta potential (18.73 ± 0.03 mV) was obtained with 1.00% w/v DMAB and centrifugation at 10,000 rpm. The best entrapment efficiency and PDI (82.68 ± 1.18% and 0.15 ± 0.03, respectively) were achieved with 0.25% w/v DMAB and centrifugation at 10,000 rpm.
The most favorable formulation, with the highest zeta potential (18.73 ± 0.03 mV), used 1.00% w/v DMAB stabilizer and centrifugation at 10,000 rpm; its particle size and entrapment efficiency were 75.67 ± 0.89 nm and 71.62 ± 0.11%, respectively.
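Entrapment efficiency as reported in this abstract is conventionally computed from the drug left free in the supernatant after centrifugation; a minimal sketch, with hypothetical masses chosen only to reproduce the 82.68% figure:

```python
def entrapment_efficiency(total_drug_mg, free_drug_mg):
    """Percent of drug entrapped in the nanoparticles; the free
    (unentrapped) drug is what is assayed in the supernatant."""
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# hypothetical masses: 10 mg drug loaded, 1.732 mg found free
print(round(entrapment_efficiency(10.0, 1.732), 2))  # -> 82.68
```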
11

Bakry, Marc. "Fiabilité et optimisation des calculs obtenus par des formulations intégrales en propagation d'ondes." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLY013/document.

Abstract:
The aim of this work is to help popularize integral-equation methods for solving wave propagation problems by providing a posteriori error estimates usable in self-adaptive mesh refinement algorithms. Developing such estimates is difficult because of the non-locality of both the norms associated with the Sobolev spaces and the integral operators involved. Estimates from the literature are extended to the propagation of an acoustic wave, and the quasi-optimal convergence proofs for the associated self-adaptive algorithms are established for this case. A new approach relative to the literature is then proposed: a norm-localization technique based not on inverse inequalities, as previously, but on a well-chosen localization operator Λ. This yields a posteriori error estimates that are reliable, efficient, local, and asymptotically exact with respect to the Galerkin norm of the error, and a method for constructing such estimates is given. Numerical applications on 2D and 3D geometries confirm the asymptotic exactness and the optimality of the self-adaptive algorithm. These estimates are then extended to the propagation of an electromagnetic wave, specifically the EFIE. Generalizations of the estimates in the literature are proposed, and quasi-optimal convergence is proved for an estimate based on a localization of the residual norm. The Λ principle is used to construct the first reliable, efficient, local error estimate for this equation, and a second form is proposed that is, in theory, also asymptotically exact.
12

Stults, Ian Collier. "A multi-fidelity analysis selection method using a constrained discrete optimization formulation." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31706.

Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Mavris, Dimitri; Committee Member: Beeson, Don; Committee Member: Duncan, Scott; Committee Member: German, Brian; Committee Member: Kumar, Viren. Part of the SMARTech Electronic Thesis and Dissertation Collection.
13

Thitilertdecha, Premrutai. "Formulation optimization for the topical delivery of active agents in traditional medicines." Thesis, University of Bath, 2013. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.582798.

Abstract:
In Thailand, Acanthus ebracteatus Vahl and Clerodendrum petasites S. Moore have been prescribed to treat skin diseases, such as rash, abscess, and urticaria, for at least 30 years. However, there is limited scientific support, and no clinical trials have identified and verified the compounds that elicit useful pharmacological effects following topical delivery. Vanillic acid was identified for the first time in A. ebracteatus, together with verbascoside; furthermore, nine phenolic compounds (vanillic acid, 4-coumaric acid, ferulic acid, verbascoside, nepetin, luteolin, chrysin, naringenin, and hesperetin), along with the two previously reported compounds apigenin and hispidulin, were found in C. petasites. C. petasites (CP) was therefore chosen as the principal plant to be studied in this thesis. Hispidulin was quantified as the predominant compound, present at 39 μmol/g (1.2% w/w) in a dried ethanolic extract. Various formulations of CP extracts were examined (a) in in vitro skin penetration experiments using Franz diffusion cells, and (b) in vivo using the tape-stripping method. Hispidulin penetrated through the skin within 3 hours; vanillic acid and nepetin were absorbed after 6 hours. In contrast, verbascoside was only taken up into the superficial layers of the stratum corneum (SC). There was no difference in the permeation of hispidulin, nepetin, and vanillic acid from 10% w/w CP cream and lotion formulations. Hispidulin was percutaneously absorbed through the skin and taken up into the stratum corneum in the greatest amount, followed by vanillic acid and nepetin. The in vitro model proved useful for preliminary formulation development, and the tape-stripping method was robust and effective. Verbascoside, although a poor penetrant, was well released from the formulations in an in vitro release test, suggesting that it might be a potential skin surface-active compound, such as an antimicrobial.
Hispidulin, nepetin and vanillic acid, based on their uptake and penetration into the skin, together with their known biological activities, may be considered as feasible candidates for the development of novel and effective antimicrobial, anti-inflammatory, and antioxidant formulations.
14

Fogaça, Mateus Paiva. "A new quadratic formulation for incremental timing-driven placement." Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/164067.

Abstract:
Interconnect delay is a dominant factor in achieving timing closure in nanoCMOS circuits. During physical synthesis, placement aims to spread cells over the available area while optimizing an objective function subject to the design constraints; it is therefore a key step in determining total wirelength and, consequently, in achieving timing closure. Incremental placement techniques aim to improve the quality of a given solution. This work proposes two quadratic approaches for incremental timing-driven placement that mitigate late violations through path smoothing and net load balancing. Unlike previous works, the proposed formulations include a delay model in the quadratic function. Quadratic placement is applied incrementally through an operation called neutralization, which helps preserve the qualities of the initial placement solution. In both techniques, the quadratic wirelength is weighted by the cells' drive strengths and pin criticalities. The final results outperform the state of the art by 9.4% and 7.6% on average for WNS and TNS, respectively.
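Classical quadratic placement, which formulations like the above build on, minimizes a weighted sum of squared wirelengths and reduces to a linear system. The sketch below uses a toy two-cell netlist with edge weights standing in for drive strengths and pin criticalities; it does not implement the thesis's delay model or neutralization operation:

```python
import numpy as np

# toy netlist: movable cells 0 and 1, fixed pads at x=0 and x=10
# edges (i, j, w): w can encode drive strength / pin criticality
movable = 2
edges = [(0, "padL", 2.0), (0, 1, 1.0), (1, "padR", 1.0)]
pads = {"padL": 0.0, "padR": 10.0}

# assemble the Laplacian-like system A x = b for the movable cells
A = np.zeros((movable, movable))
b = np.zeros(movable)
for i, j, w in edges:
    for u, v in ((i, j), (j, i)):
        if isinstance(u, int):
            A[u, u] += w
            if isinstance(v, int):
                A[u, v] -= w
            else:
                b[u] += w * pads[v]      # fixed pad contributes to the RHS

x = np.linalg.solve(A, b)  # minimizer of sum w * (x_i - x_j)^2
print(x)  # [2. 6.]  -- cell 0 sits nearer the heavily weighted left pad
```

Raising a weight pulls the connected cell toward that pin, which is the mechanism by which criticality weighting shortens timing-critical paths.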
15

Kato, Junji [Verfasser]. "Material optimization for fiber reinforced composites applying a damage formulation / von Junji Kato." Stuttgart : Inst. für Baustatik und Baudynamik, 2010. http://d-nb.info/1001076508/34.

16

Lamamy, Julien-Alexandre 1978. "Methods and tools for the formulation, evaluation and optimization of rover mission concepts." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40354.

Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2007.
Page 256 blank.
Includes bibliographical references (p. 245-255).
Traditionally, Mars rover missions have been conceived with a single point design approach, exploring a limited architectural trade space. The design of future missions must resolve a conflict between increasingly ambitious scientific objectives and strict technical and programmatic constraints. There is therefore a need for advanced mission study engineers to consider a wider range of surface exploration concepts in order to identify those with superior performance and robustness with respect to evolving mission objectives. To this end, a three-stage trade space exploration approach has been developed to supplement point design development in the early conceptual phase of Mars rover missions. The product is an integrated set of theoretical methods and analytical tools that enhances understanding and enables rapid exploration of the rover mission trade space. In the formulation stage, the first stage of the approach, a parallel decomposition of the functional and physical aspects of Mars exploration architectures is employed to explore the trade space of surface mission concepts. At each step of the decomposition, architectural alternatives are assessed with respect to stakeholder figures of merit. The resulting concept development trees allow for a rapid assessment of a given design's strength and robustness with respect to stakeholder priorities. In the evaluation stage, the Mars Surface Exploration (MSE) rover system design tool is used to support quantitative analysis of the superior designs identified in the formulation stage. This tool, intended for advanced mission studies, offers unique functionality: breadth of exploration, system-level modeling fidelity, and rapidity. As a demonstration of its capabilities, the tool is used to model and evaluate a multi-rover mission concept in less than two hours. In the optimization stage, two systems engineering methods are developed to optimize, with MSE, the more complex technical and physical aspects of rover mission architectures. The first method assesses the value of autonomy technologies in future missions; it is based on the principle that the monetary worth of autonomy can be evaluated by benchmarking its performance against competing solutions with known cost. The method is applied to value autonomy development for site-to-site traverse and sample approach activities. The second method optimizes platform strategies for space exploration systems; an innovative optimization technique is developed to enumerate all platform options. In the six rover mission campaigns analyzed, the best platform strategies are shown to generate very limited savings compared to traditional strategies. The two case studies demonstrate that the analytical capabilities of MSE, combined with a theoretical structure, form a valuable decision-making tool for early conceptual design trade-offs.
by Julien-Alexandre Lamamy.
Ph.D.
17

Kulkarni, Mandar D. "Continuum Sensitivity Analysis using Boundary Velocity Formulation for Shape Derivatives." Diss., Virginia Tech, 2016. http://hdl.handle.net/10919/73057.

Abstract:
The method of Continuum Sensitivity Analysis (CSA) with Spatial Gradient Reconstruction (SGR) is presented for calculating the sensitivity of fluid, structural, and coupled fluid-structure (aeroelastic) response with respect to shape design parameters. One of the novelties of this work is the derivation of local CSA with SGR for obtaining flow derivatives using finite volume formulation and its nonintrusive implementation (i.e. without accessing the analysis source code). Examples of a NACA0012 airfoil and a lid-driven cavity highlight the effect of the accuracy of the sensitivity boundary conditions on the flow derivatives. It is shown that the spatial gradients of flow velocities, calculated using SGR, contribute significantly to the sensitivity transpiration boundary condition and affect the accuracy of flow derivatives. The effect of using an inconsistent flow solution and Jacobian matrix during the nonintrusive sensitivity analysis is also studied. Another novel contribution is derivation of a hybrid adjoint formulation of CSA, which enables efficient calculation of design derivatives of a few performance functions with respect to many design variables. This method is demonstrated with applications to 1-D, 2-D and 3-D structural problems. The hybrid adjoint CSA method computes the same values for shape derivatives as direct CSA. Therefore accuracy and convergence properties are the same as for the direct local CSA. Finally, we demonstrate implementation of CSA for computing aeroelastic response shape derivatives. We derive the sensitivity equations for the structural and fluid systems, identify the sources of the coupling between the structural and fluid derivatives, and implement CSA nonintrusively to obtain the aeroelastic response derivatives. Particularly for the example of a flexible airfoil, the interface that separates the fluid and structural domains is chosen to be flexible. 
This leads to coupling terms in the sensitivity analysis which are highlighted. The integration of the geometric sensitivity with the aeroelastic response for obtaining shape derivatives using CSA is demonstrated.
Ph. D.
18

Frits, Andrew P. "Formulation of an Integrated Robust Design and Tactics Optimization Process for Undersea Weapon Systems." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6827.

Abstract:
In the current Navy environment of undersea weapons development, the engineering aspect of design is decoupled from the development of the tactics with which the weapon is employed: tactics are developed by intelligence experts, warfighters, and wargamers, while torpedo design is handled by engineers and contractors. This dissertation examines methods by which the conceptual design process for undersea weapon systems, including both torpedo systems and mine countermeasure systems, can be improved. It is shown that by simultaneously designing the torpedo and the tactics with which undersea weapons are used, a more effective overall weapon system can be created. In addition to integrating torpedo tactics with design, the thesis also examines design methods that account for uncertainty. The uncertainty is attributable to multiple sources, including lack of detailed analysis tools early in the design process, incomplete knowledge of the operational environments, and uncertainty in the performance of potential technologies. A robust design process is introduced to account for this uncertainty in the analysis and optimization of torpedo systems through the combination of Monte Carlo simulation with response surface methodology and metamodeling techniques. Various other methods appropriate to uncertainty analysis are also discussed and analyzed. The thesis further advances a new approach to examining robustness and risk: treating probability of success (POS) as an independent variable. By examining the cost and performance tradeoffs between high and low probability-of-success designs, the decision-maker can better determine which designs are most promising and find the optimal balance of risk, cost, and performance. Finally, the thesis examines the use of non-dimensionalized parameters for torpedo design, showing that non-dimensional torpedo parameters lead to increased knowledge about the scalability of torpedo systems and improved performance of designs of experiments. The integration of these ideas concerning tactics, robust design under uncertainty, and non-dimensionalization of torpedo parameters has led to a general, powerful technique by which torpedo and other undersea weapon systems can be fully optimized, increasing performance and decreasing the total cost of future weapon systems.
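The combination of response surface metamodeling with Monte Carlo simulation, and the treatment of probability of success as a quantity of interest, can be sketched as follows; the "analysis" function, the uncertainty distribution, and the requirement threshold are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def analysis(x):
    """Stand-in for an expensive performance analysis code."""
    return 10.0 + 3.0 * x - 0.5 * x ** 2

# fit a quadratic response surface to a handful of "runs"
xs = np.linspace(0.0, 4.0, 9)
coef = np.polyfit(xs, analysis(xs), 2)   # exact here: the model is quadratic
surrogate = np.poly1d(coef)

# Monte Carlo on the cheap surrogate: x uncertain, requirement perf >= 12
x_samples = rng.normal(loc=2.5, scale=0.5, size=100_000)
pos = float(np.mean(surrogate(x_samples) >= 12.0))  # probability of success
print(pos)
```

Sampling the surrogate rather than the analysis itself is what makes a hundred thousand evaluations affordable; sweeping the requirement threshold then traces out the cost/performance tradeoff at each POS level.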
19

Ghosh, Priyanka. "Formulation Optimization for Pore Lifetime Enhancement and Sustained Drug Delivery Across Microneedle Treated Skin." UKnowledge, 2013. http://uknowledge.uky.edu/pharmacy_etds/22.

Abstract:
Microneedle (MN)-enhanced drug delivery is a safe, effective, and efficient method for delivering drug molecules across the skin. The "poke (press) and patch" approach employs solid stainless steel MN to permeabilize the skin prior to application of a regular drug patch over the treated area. It has previously been shown that MN can be used to deliver naltrexone (NTX) at a rate that provides plasma concentrations in the lower end of the therapeutic range in humans. The drug delivery potential of this technique is, however, limited by the re-sealing of the micropores within a 48-72 h timeframe. The goal of the current research was to optimize the formulation for a 7-day MN-enhanced delivery system for NTX, either by adding a second active pharmacological moiety or by optimizing formulation characteristics alone. Three formulation strategies were explored: optimizing formulation pH with NTX; a codrug approach with NTX and a nonspecific cyclooxygenase inhibitor, diclofenac (DIC); and a topical/transdermal approach with NTX and an inhibitor of the cholesterol synthesis pathway, fluvastatin (FLU). The results indicated that formulation pH cannot be used to extend micropore lifetime, although formulation optimization does enhance transport, and thus drug delivery, across MN-treated skin. The codrug approach was successful in extending micropore lifetime, and further screening of codrug structures and formulation optimization led to the selection of a codrug candidate suitable for evaluation in animal pharmacokinetic studies. Local treatment with FLU helped keep the micropores open and enabled delivery of NTX for an extended period. The pores re-sealed within 30-45 minutes of treatment removal, indicating that infection and irritation should not be major issues, as they can be with other topical chemical enhancers.
Overall, it can be concluded that different formulation strategies can be used to extend micropore lifetime and enhance the delivery of drug molecules across the skin.
APA, Harvard, Vancouver, ISO, and other styles
20

Wijaya, Harry Martha. "Application of thermoanalytical techniques to the optimization and characterization of a freeze-dried formulation." Thesis, Queen's University Belfast, 2014. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.677851.

Full text
Abstract:
Azithromycin, a macrolide antibiotic, has poor stability in solution. Hydrolysis of the α-glycosidic bond under acidic conditions yields the major degradation product of azithromycin, desosaminylazithromycin. One answer to this problem is the application of the freeze-drying process. However, freeze-drying is known as an expensive and time-consuming process, so finding the optimum freeze-drying conditions is critical for cost-effective manufacturing. In this thesis, a new, more efficient freeze-drying process for an azithromycin formulation has been successfully developed. Integrated pharmaceutical freeze-dried product development was performed, beginning with characterization of the formulation solution and continuing through freeze-drying cycle optimization, a stability study, and the scale-up process. Various thermoanalytical techniques including Differential Scanning Calorimetry, Freeze Drying Microscopy, Thermogravimetric Analysis, and Dynamic Vapor Sorption were used. Scanning Electron Microscopy, X-Ray Powder Diffraction, Raman Microscopy, and HPLC were used as complementary methods to fully characterize the formulation. A significant reduction in freeze-drying time compared to the established process, from 98 to 46 hours, was achieved without compromising product quality. This time saving could not only increase production efficiency but also reduce production cost due to lower power consumption by the freeze-dryer. The stability of the developed formulation was demonstrated for 6 months at 40 °C/75% RH. The design space of the primary drying process of the azithromycin formulation was successfully developed based on the heat and mass transfer equations. This model proved useful for predicting the product temperature, the most important parameter determining product quality in the freeze-drying process, and furthermore makes the scale-up process straightforward.
The last part of this thesis discussed the promising possibility of a new drug delivery system based on azithromycin encapsulation in liposomes. Freeze-drying and spray-drying were proven useful in the drying process of this liposomal azithromycin formulation.
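The primary-drying design space described above rests on a steady-state balance between shelf heat input and sublimation heat demand. The sketch below solves that balance for product temperature by bisection; all coefficient values (`Kv`, `Rp`, the vapor-pressure fit) are illustrative assumptions, not the thesis's data.

```python
import math

def ice_vapor_pressure(T):
    """Vapor pressure of ice in Pa, Clausius-Clapeyron-style fit (T in kelvin)."""
    return math.exp(28.9 - 6148.0 / T)

def product_temperature(T_shelf, P_chamber, Kv, Rp, dHs=2.84e6):
    """Steady-state T where heat input Kv*(T_shelf - T) matches the sublimation
    demand dHs*(P_ice(T) - P_chamber)/Rp, per unit area, found by bisection."""
    def f(T):
        return Kv * (T_shelf - T) - dHs * (ice_vapor_pressure(T) - P_chamber) / Rp
    lo, hi = 210.0, T_shelf
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:     # heat surplus: the product would warm up
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative run: -10 C shelf, 10 Pa chamber, assumed Kv and cake resistance.
Tp = product_temperature(T_shelf=263.0, P_chamber=10.0, Kv=20.0, Rp=1.0e5)
```

Raising the shelf temperature or lowering the cake resistance shifts the solved product temperature, which is how such a model maps out a design space.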
APA, Harvard, Vancouver, ISO, and other styles
21

Stapel, Florian [Verfasser]. "Ontology-based representation of abstract optimization models for model formulation and system generation / Florian Stapel." Paderborn : Universitätsbibliothek, 2016. http://d-nb.info/1108389333/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Yu, Hang. "Reliability-based design optimization of structures: methodologies and applications to vibration control." PhD thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00769937.

Full text
Abstract:
Deterministic design optimization is widely used to design products or systems. However, due to the inherent uncertainties involved in different model parameters or operation processes, deterministic design optimization without considering uncertainties may result in unreliable designs. In this case, it is necessary to develop and implement optimization under uncertainties. One way to deal with this problem is reliability-based robust design optimization (RBRDO), in which additional uncertainty analysis (UA, including both of reliability analysis and moment evaluations) is required. For most practical applications however, UA is realized by Monte Carlo Simulation (MCS) combined with structural analyses that renders RBRDO computationally prohibitive. Therefore, this work focuses on development of efficient and robust methodologies for RBRDO in the context of MCS. We presented a polynomial chaos expansion (PCE) based MCS method for UA, in which the random response is approximated with the PCE. The efficiency is mainly improved by avoiding repeated structural analyses. Unfortunately, this method is not well suited for high dimensional problems, such as dynamic problems. To tackle this issue, we applied the convolution form to compute the dynamic response, in which the PCE is used to approximate the modal properties (i.e. to solve random eigenvalue problem) so that the dimension of uncertainties is reduced since only structural random parameters are considered in the PCE model. Moreover, to avoid the modal intermixing problem when using MCS to solve the random eigenvalue problem, we adopted the MAC factor to quantify the intermixing, and developed a univariable method to check which variable results in such a problem and thereafter to remove or reduce this issue. We proposed a sequential RBRDO to improve efficiency and to overcome the nonconvergence problem encountered in the framework of nested MCS based RBRDO. 
In this sequential RBRDO, we extended the conventional sequential strategy, which mainly aims to decouple the reliability analysis from the optimization procedure, to make the moment evaluations independent from the optimization procedure. A locally "first-order" exponential approximation around the current design was utilized to construct the equivalent deterministic objective functions and probabilistic constraints. In order to efficiently calculate the coefficients, we developed an auxiliary-distribution-based reliability sensitivity analysis and a PCE-based moment sensitivity analysis. We investigated and demonstrated the effectiveness of the proposed methods for UA as well as RBRDO by several numerical examples. At last, RBRDO was applied to design the tuned mass damper (TMD) in the context of passive vibration control, for both deterministic and uncertain structures. The associated optimal designs obtained by RBRDO can not only reduce the variability of the response, but also control the amplitude within the prescribed threshold.
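The PCE-based MCS idea described above (build the surrogate once from a few model runs, then run Monte Carlo on the cheap surrogate) can be sketched in one dimension with a stand-in model; the model, polynomial degree, and threshold below are invented for illustration.

```python
import numpy as np
from numpy.polynomial import hermite_e as H

rng = np.random.default_rng(0)

def model(x):
    """Stand-in for an expensive structural analysis; input x ~ N(0, 1)."""
    return np.exp(0.3 * x)

# 1) Fit a degree-3 probabilists'-Hermite PCE surrogate from 200 model runs.
x_train = rng.standard_normal(200)
coeffs = H.hermefit(x_train, model(x_train), deg=3)

# 2) Monte Carlo on the surrogate only: no further calls to the costly model.
x_mc = rng.standard_normal(100_000)
y_mc = H.hermeval(x_mc, coeffs)

mean = y_mc.mean()              # close to the exact mean exp(0.045)
p_fail = np.mean(y_mc > 1.8)    # estimated exceedance ("failure") probability
```

The 100,000 surrogate evaluations cost almost nothing compared to 100,000 structural analyses, which is the efficiency argument the abstract makes.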
APA, Harvard, Vancouver, ISO, and other styles
23

Kempe, Henrik. "Advances in Separation Science: Molecular Imprinting: Development of Spherical Beads and Optimization of the Formulation by Chemometrics." Doctoral thesis, Stockholm University, Department of Analytical Chemistry, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-6582.

Full text
Abstract:

An intrinsic mathematical model for simulation of fixed bed chromatography was demonstrated and compared to more simplified models. The former model was shown to describe variations in the physical, kinetic, and operating parameters better than the latter ones. This resulted in a more reliable prediction of the chromatography process as well as a better understanding of the underlying mechanisms responsible for the separation. A procedure based on frontal liquid chromatography and a detailed mathematical model was developed to determine effective diffusion coefficients of proteins in chromatographic gels. The procedure was applied to lysozyme, bovine serum albumin, and immunoglobulin γ in Sepharose™ CL-4B. The effective diffusion coefficients were comparable to those determined by other methods.

Molecularly imprinted polymers (MIPs) are traditionally prepared as irregular particles by grinding monoliths. In this thesis, a suspension polymerization providing spherical MIP beads is presented. Droplets of pre-polymerization solution were formed in mineral oil, with no need for stabilizers, by vigorous stirring. The droplets were transformed into solid spherical beads by free-radical polymerization. The method is fast and the performance of the beads is comparable to that of irregular particles. Optimizing a MIP formulation requires a large number of experiments since the number of possible combinations of the components is huge. To facilitate the optimization, chemometrics was applied. The amounts of monomer, cross-linker, and porogen were chosen as the factors in the model. Multivariate data analysis indicated the influence of the factors on the binding, and an optimized MIP composition was identified. The combined use of the suspension polymerization method to produce spherical beads with the application of chemometrics was shown in this thesis to drastically reduce the number of experiments and the time needed to design and optimize a new MIP.
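A chemometric optimization of this kind can be sketched as a small coded factorial design with a least-squares effect model; the factor coding and the synthetic "binding" response below are invented for illustration, not the thesis's data.

```python
import itertools
import numpy as np

# 2^3 full factorial in coded units: monomer (m), cross-linker (c), porogen (p).
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

def binding(m, c, p):
    """Synthetic template-binding response with known effects (illustration)."""
    return 50 + 8*m + 5*c - 3*p + 2*m*c

y = np.array([binding(*r) for r in runs])

# Model matrix: intercept, main effects, two-factor interactions.
X = np.column_stack([np.ones(len(runs)), runs,
                     runs[:, 0]*runs[:, 1],
                     runs[:, 0]*runs[:, 2],
                     runs[:, 1]*runs[:, 2]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

best = runs[np.argmax(X @ beta)]   # coded recipe with highest predicted binding
```

Eight runs fix all main effects and two-factor interactions here, which is the economy a designed experiment offers over changing one variable at a time.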

APA, Harvard, Vancouver, ISO, and other styles
24

Fassi, Imen. "XFOR (Multifor): A new programming structure to ease the formulation of efficient loop optimizations." Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAD043/document.

Full text
Abstract:
Nous proposons une nouvelle structure de programmation appelée XFOR (Multifor), dédiée à la programmation orientée réutilisation de données. XFOR permet de gérer simultanément plusieurs boucles "for" ainsi que d’appliquer/composer des transformations de boucles d’une façon intuitive. Les expérimentations ont montré des accélérations significatives des codes XFOR par rapport aux codes originaux, mais aussi par rapport au codes générés automatiquement par l’optimiseur polyédrique de boucles Pluto. Nous avons mis en œuvre la structure XFOR par le développement de trois outils logiciels: (1) un compilateur source-à-source nommé IBB, qui traduit les codes XFOR en un code équivalent où les boucles XFOR ont été remplacées par des boucles for sémantiquement équivalentes. L’outil IBB bénéficie également des optimisations implémentées dans le générateur de code polyédrique CLooG qui est invoqué par IBB pour générer des boucles for à partir d’une description OpenScop; (2) un environnement de programmation XFOR nommé XFOR-WIZARD qui aide le programmeur dans la ré-écriture d’un programme utilisant des boucles for classiques en un programme équivalent, mais plus efficace, utilisant des boucles XFOR; (3) un outil appelé XFORGEN, qui génère automatiquement des boucles XFOR à partir de toute représentation OpenScop de nids de boucles transformées générées automatiquement par un optimiseur automatique
We propose a new programming structure named XFOR (Multifor), dedicated to data-reuse-aware programming. It allows several for-loops to be handled simultaneously and their respective iteration domains to be mapped onto each other. Additionally, XFOR eases the application and composition of loop transformations. Experiments show that XFOR codes provide significant speed-ups when compared to the original code versions, but also to the Pluto-optimized versions. We implemented the XFOR structure through the development of three software tools: (1) a source-to-source compiler named IBB, for Iterate-But-Better!, which automatically translates any C/C++ code containing XFOR-loops into an equivalent code where XFOR-loops have been translated into for-loops. IBB also benefits from optimizations implemented in the polyhedral code generator CLooG, which is invoked by IBB to generate for-loops from an OpenScop specification; (2) an XFOR programming environment named XFOR-WIZARD that assists the programmer in re-writing a program with classical for-loops into an equivalent but more efficient program using XFOR-loops; (3) a tool named XFORGEN, which automatically generates XFOR-loops from any OpenScop representation of transformed loop nests produced by an automatic optimizer.
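The data-reuse benefit that XFOR expresses can be illustrated, independently of the construct's actual C syntax, by manually fusing two loops that share an iteration domain. This Python sketch only mirrors the idea, not XFOR's syntax.

```python
# Original: two loop nests traverse the array a twice.
def two_loops(a):
    b, c = [0.0] * len(a), [0.0] * len(a)
    for i in range(len(a)):
        b[i] = 2.0 * a[i]
    for i in range(len(a)):
        c[i] = a[i] + 1.0
    return b, c

# XFOR-like: both iteration domains mapped onto one loop, so each a[i] is
# loaded once and reused by both statement bodies.
def fused(a):
    b, c = [0.0] * len(a), [0.0] * len(a)
    for i in range(len(a)):
        ai = a[i]
        b[i] = 2.0 * ai
        c[i] = ai + 1.0
    return b, c
```

XFOR lets the programmer express this kind of domain mapping declaratively, while IBB generates the equivalent fused for-loops.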
APA, Harvard, Vancouver, ISO, and other styles
25

Twigg, Shannon. "Optimal Path Planning for Single and Multiple Aircraft Using a Reduced Order Formulation." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/14584.

Full text
Abstract:
High-flying unmanned reconnaissance and surveillance systems are now being used extensively in the United States military. Current development programs are producing demonstrations of next-generation unmanned flight systems that are designed to perform combat missions. Their use in first-strike combat operations will dictate operations in densely cluttered environments that include unknown obstacles and threats, and will require the use of terrain for masking. The demand for autonomy of operations in such environments dictates the need for advanced trajectory optimization capabilities. In addition, the ability to coordinate the movements of more than one aircraft in the same area is an emerging challenge. This thesis examines using an analytical reduced-order formulation for trajectory generation for minimum-time and terrain-masking cases. First, pseudo-3D constant-velocity equations of motion are used for path planning for a single vehicle. In addition, the inclusion of winds, moving targets and moving threats is considered. Then, this formulation is extended to 3D equations of motion, both with a constant velocity and with a simplified varying-velocity model. Next, the constant-velocity equations of motion are expanded to include the simultaneous path planning of an unspecified number of vehicles, for both aircraft-avoidance situations and formation-flight cases.
APA, Harvard, Vancouver, ISO, and other styles
26

Junnarkar, Gunjan Harshad. "Effect of selected adjuvants on metronidazole release from poly(ortho ester) matrix and computer optimization of the formulation." Scholarly Commons, 1995. https://scholarlycommons.pacific.edu/uop_etds/2782.

Full text
Abstract:
In the present study, an 8 × 4 mm biodegradable device was formulated using poly(ortho ester) and metronidazole for the treatment of periodontitis. The investigation focused on determining the formulation parameters, in the form of drug (metronidazole) and adjuvant concentrations (oleic acid and palmitic acid) and device thickness, necessary to achieve a constant release of 0.6 μg/hr over a period of 7 days and complete degradation of the device over a period of 11 to 13 days. The presence of oleic or palmitic acid influenced the release and erosion profiles considerably. The thickness of the device did not have a significant influence on drug release. The DSC and NMR studies indicated the absence of interaction between drug and polymer. Computer optimization studies indicate that the optimum formulation for 7-day constant drug delivery and disappearance in 13 days should contain 0.28% w/w of oleic acid and 5.26% w/w of metronidazole at a thickness of 400-450 or 500-550 μm. This is in close agreement with the optimum formulation obtained from the experimental data.
APA, Harvard, Vancouver, ISO, and other styles
27

Mkentane, Kwezikazi. "The development and optimization of a cosmetic formulation that facilitates the process of detangling braids from African hair." Thesis, Nelson Mandela Metropolitan University, 2012. http://hdl.handle.net/10948/1662.

Full text
Abstract:
A large number of people throughout the world have naturally kinky hair that may be very difficult to manage. These people often subject their hair to vigorous and harsh treatment processes in order to straighten it and hence make it more manageable. Hair braiding is a popular and fashionable trend amongst many people, in particular people of African descent. Braided hairstyles serve to preserve and protect hair, and to give it time to rejuvenate after a period of harsh treatment. During the braiding process, synthetic hair is attached to natural hair by weaving a length of the natural hair into one end of each braid. Other materials like wool or cotton may be used to achieve different hairstyles and textures. Several strands of natural hair are used to secure each braid. The braids are normally left intact for a number of weeks or even months. Although braiding is a helpful African hair grooming practice, the process of taking down or detangling the braids is labor-intensive: each braid is cut just below where the natural hair ceases, and the natural hair is untangled from the braid using a safety pin, a needle or a fine-toothed comb. The labor and long hours required to detangle braided hairstyles often result in braid wearers frustratingly pulling on their braided hair. This behavior inevitably destroys the hair follicle and leaves the hair damaged. According to a study conducted by the University of Cape Town’s dermatology department, braiding may be the root cause of traction alopecia (TA) amongst braid wearers. Traction alopecia is a form of alopecia, or gradual hair loss, that is caused primarily by excessive pulling forces applied to the hair. The purpose of the current study was to investigate the factors, other than braid tightness, that affect the way and ease with which braids are detangled from human hair. The study hypothesized that frictional forces present in braided hair were amongst these factors.
It was hypothesized that introducing a lubricating formulation into the braids would allow for easier braid detangling. In order to decrease the prevalence of traction alopecia from braided hair, two hair-strengthening actives were included in the test formulation. The study investigated the effects of the test formulations on braid detangling, hair friction and the tensile strength of human hair. The study found that the method used did not pick up any significant differences between the braid detangling forces of treated braids and those of untreated hair. The same method was, however, able to show that there are variations in the braid detangling forces of different sections along the braid length. The method to measure braid detangling was based on the principles of hair combability measurements. The study also found that although the method used to measure braid detangling forces was unsuccessful in picking up significant differences between treated and untreated hair, the method used to measure the frictional forces of human hair showed that the frictional forces of hair treated with the test formulations were significantly different from those of untreated hair. The method used to measure frictional forces was based on the capstan approach. The capstan method measures the forces required to slide a weighted hair fibre over a curved surface of reference material. The interaction between the weighted fibre and the reference material simulates the movement of hair out of a braid ensemble in the braid detangling process. The optimized mixture was predicted to have a minimum coefficient of friction of 0.61 ± 0.04. The optimum formulation was found to be one that contained 30% Cyclopentasiloxane, 0% PEG-12 Dimethicone, 10% 18-MEA, 29% water, 10% hair-strengthening actives, 12.86% emulsifier combination and 8% other oils.
The study also showed that including hair-strengthening actives, such as hydrolysed proteins, had significant effects on the tensile strength properties of chemically treated African hair.
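The capstan approach mentioned above relates the two fibre tensions through the wrap angle (T_slide = T_hold * e^(mu*theta)), so a friction coefficient can be backed out from force measurements. A minimal sketch with hypothetical readings (the numbers are invented, not the study's data):

```python
import math

def capstan_mu(T_hold, T_slide, wrap_angle):
    """Friction coefficient from the capstan equation T_slide = T_hold*exp(mu*theta)."""
    return math.log(T_slide / T_hold) / wrap_angle

# Hypothetical readings: 10 mN hold-side tension, 26 mN needed to slide the
# fibre over half a turn (pi radians) of the reference surface.
mu = capstan_mu(0.010, 0.026, math.pi)
```

Comparing the coefficient obtained this way for treated versus untreated fibres is how a lubricating formulation's effect would show up in capstan data.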
APA, Harvard, Vancouver, ISO, and other styles
28

Chadha, Aastha. "Design, Optimization and Evaluation of a Novel Emulgel of Ibuprofen for Enhanced Skin Delivery using Formulating for Efficacy™ software." University of Toledo Health Science Campus / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=mco1533217234921014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Selvatici, Elena. "Variational formulation for Granular Contact Dynamics simulation via the Alternating Direction Method of Multipliers." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021.

Find full text
Abstract:
The subject of this thesis is the development of calculation software for the numerical analysis and the dynamic simulation of a granular flow. One of the major problems encountered in the dynamic analysis of the mechanical behaviour of a granular medium is the enormous computational time taken to reach the solution of the problem. Following studies that have verified the effectiveness of the implicit formulation proposed by the Granular Contact Dynamics approach, the idea of this thesis arises from the desire to apply the Alternating Direction Method of Multipliers for the optimization of the solution, a parallelizable algorithm already validated in similar contexts. The main part of the work consisted of implementing the program in the Python programming language. During the process, particular importance was given to computational optimization, and each part of the program has been designed to handle large-scale problems. To generate the starting conditions, we implemented algorithms both for importing and for generating a model, as well as methods for the introduction and management of the static and kinematic boundary conditions. As for the solution algorithm, we reviewed the mathematical model of the GCD: the solution of the problem leads to the application of the principle of minimum total potential energy of the system. We introduced the augmented Lagrangian: its minimization with respect to one primary variable, holding the other unknowns constant, constitutes the core of the ADMM. The software has been included within mechpy, an open-source platform for the development of unconventional finite element formulations, and is able to manage both two-dimensional and three-dimensional models. The results are very promising: the output of the simulations has been compared with experimental results, and the close correspondence validates the software's functionality.
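The ADMM splitting described above can be sketched on a toy quadratic "potential energy" minimization with a non-penetration-style constraint x >= 0. This is a generic ADMM illustration, not the mechpy implementation: minimize f(x) + g(z) with g the indicator of the nonnegative orthant, subject to x = z.

```python
import numpy as np

def admm_qp_nonneg(Q, q, rho=1.0, iters=200):
    """min 0.5*x'Qx + q'x  s.t. x >= 0, via the ADMM splitting x = z, z >= 0."""
    n = len(q)
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    K = Q + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(K, rho * (z - u) - q)  # unconstrained quadratic step
        z = np.maximum(0.0, x + u)                 # projection (contact) step
        u += x - z                                 # scaled dual (multiplier) update
    return z

# Toy "energy": unconstrained minimum at (1, -1); the constraint clips to (1, 0).
Q = np.diag([2.0, 2.0])
q = np.array([-2.0, 2.0])
x_opt = admm_qp_nonneg(Q, q)
```

The projection step is elementwise, which is what makes this scheme parallelizable across contacts, the property the abstract highlights.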
APA, Harvard, Vancouver, ISO, and other styles
30

Rabie, Ahmed Ibrahim El Said. "Nonlinear estimation of water network demands from limited measurement information." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-3132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Bonifonte, Anthony. "Optimal summer camp layout." Oberlin College Honors Theses / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=oberlin1350314765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Mojica, Velazquez Jose Luis. "A Dynamic Optimization Framework with Model Predictive Control Elements for Long Term Planning of Capacity Investments in a District Energy System." BYU ScholarsArchive, 2013. https://scholarsarchive.byu.edu/etd/3886.

Full text
Abstract:
The capacity expansion of a district heating system is studied with the objective of evaluating the investment decision timing and type of capacity expansion. District energy is an energy generation system that provides energy, such as heat and electricity, generated at central locations and distributed to the surrounding area. The study develops an optimization framework to find the optimal investment schedule over a 30-year horizon with the options of investing in traditional heating sources (boilers) or a next-generation combined heat and power (CHP) plant that can provide heat and electricity. In district energy systems, the investment decision on the capacity and type of system is dependent on demand-side requirements, energy prices, and environmental costs. The main contribution of this work is to formulate the capacity planning over a time horizon as a dynamic optimal control problem. In this way, an initial system configuration can be modified by a 'controller' that optimally applies control actions that drive the system from an initial state to an optimal state. The optimal control is a model predictive control (MPC) formulation that not only provides the timing and size of the capacity investment, but also guidance on the mode of operation that meets optimal economic objectives with the given capacity.
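The receding-horizon (MPC) element can be sketched as: at each year, enumerate expansion options over a short lookahead window, apply only the first decision, then advance one year and re-plan. All capacities, costs, and demands below are invented toy numbers, not the study's data.

```python
from itertools import product

HORIZON = 4           # lookahead window (years) used at each MPC step
OPTIONS = {           # action -> (added heat capacity MW, capital cost, yearly credit)
    "none":   (0, 0.0, 0.0),
    "boiler": (5, 2.0, 0.0),
    "chp":    (5, 6.0, 0.9),   # CHP is dearer but earns an electricity credit
}

def plan_step(cap, demands):
    """Cheapest feasible action sequence over the window; return its first action."""
    best = None
    for seq in product(OPTIONS, repeat=len(demands)):
        c, cost, feasible = cap, 0.0, True
        for t, (a, d) in enumerate(zip(seq, demands)):
            add, capex, credit = OPTIONS[a]
            c += add
            cost += capex - credit * (len(demands) - t)  # credit over remaining years
            if c < d:
                feasible = False
                break
        if feasible and (best is None or cost < best[0]):
            best = (cost, seq[0])
    return best[1]

# Receding horizon: apply the first action, advance one year, re-plan.
demand = [4, 6, 9, 12, 14, 15, 16, 17]    # growing heat demand (MW)
cap, schedule = 5, []
for year in range(len(demand) - HORIZON + 1):
    action = plan_step(cap, demand[year:year + HORIZON])
    schedule.append(action)
    cap += OPTIONS[action][0]
```

Only the first decision of each window is executed, so later investments are re-optimized as the demand forecast rolls forward, which is the MPC element the abstract refers to.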
APA, Harvard, Vancouver, ISO, and other styles
33

Barjhoux, Pierre-Jean. "Towards efficient solutions for large scale structural optimization problems with categorical and continuous mixed design variables." Thesis, Toulouse, ISAE, 2020. http://depozit.isae.fr/theses/2020/2020_Barjhoux_Pierre-Jean.pdf.

Full text
Abstract:
Dans l’industrie aéronautique, les problèmes d’optimisation de structure peuvent impliquer des changements de matériaux, de types de raidisseurs, et de tailles d’éléments. Dans ce travail, il est ainsi proposé de résoudre des problèmes de grande taille (minimisation de masse) par rapport à des variables catégorielles et continues, sujets à des contraintes de stress et de déplacements. Trois algorithmes sont présentés, discutés dans le manuscrit au regard de cas tests de plus en plus complexes. En tout premier lieu, un algorithme basé sur le "branch and bound" a été mis en place. Une formulation d’un problème dédié au calcul de minorants de la masse optimale est proposée. Bien que l’algorithme permette de trouver des solutions optimales, la tendance du coût de calcul en fonction de l’augmentation du nombre d’éléments est exponentielle. Le second algorithme s’appuie sur une formulation bi-niveau du problème d’origine, où le problème supérieur consiste à minimiser une approximation au premier ordre du résultat du niveau inférieur. L’évolution du coût de calcul par rapport à l’augmentation du nombre d’éléments et de valeurs catégorielles est quasiment linéaire. Enfin, un troisième algorithme tire parti d’une reformulation du problème mixte catégoriel continu en un problème bi-niveau mixte avec variables entières continûment relâchables. Les cas tests numériques montrent la résolution d’un problème avec plus d’une centaine d’éléments. Également, le coût de calcul est quasi-indépendant du nombre de valeurs de variables catégorielles disponibles par élément
Nowadays in the aircraft industry, structural optimization problems can be really complex and combine changes in choices of materials, stiffeners, or sizes/types of elements. In this work, it is proposed to solve large-scale structural weight minimization problems with both categorical and continuous variables, subject to stress and displacement constraints. Three algorithms have been proposed. As a first attempt, an algorithm based on the branch and bound generic framework has been implemented. A specific formulation to compute lower bounds has been proposed. According to the numerical tests, the algorithm returned the exact optima. However, the exponential scalability of the computational cost with respect to the number of structural elements prevents an industrial application. The second algorithm relies on a bi-level formulation of the mixed categorical problem. The master full categorical problem consists of minimizing a first-order-like approximation of the slave problem with respect to the categorical design variables. The method offers a quasi-linear scaling of the computational cost with respect to the number of elements and categorical values. Finally, in the third approach the optimization problem is formulated as a bi-level mixed integer non-linear program with relaxable design variables. Numerical tests include an optimization case with more than one hundred structural elements. Also, the computational cost scaling is quasi-independent from the number of available categorical values per element
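The branch-and-bound idea of the first algorithm can be sketched on a toy mixed categorical-continuous sizing problem: each bar picks a material (categorical) while its area (continuous) is sized by the stress constraint, and a shared elongation budget couples the bars. Materials, loads, and the budget are invented for illustration; the lower bound relaxes the unassigned bars to their per-bar optima.

```python
# Toy data: material -> (density kg/m^3, Young's modulus Pa, allowable stress Pa)
MATS = {"steel": (7800.0, 210e9, 250e6),
        "alu":   (2700.0,  70e9, 150e6),
        "cfrp":  (1600.0, 100e9, 400e6)}
LOADS = [2.0e5, 1.0e5, 3.0e5]   # axial force in each bar (N)
L, D_MAX = 1.0, 5.0e-3          # bar length (m) and total elongation budget (m)

def sized(mat, F):
    """Mass and elongation of one bar, its area sized by the stress constraint."""
    rho, E, sig = MATS[mat]
    A = F / sig                  # continuous variable eliminated analytically
    return rho * A * L, F * L / (E * A)

def bound(k):
    """Optimistic (relaxed) mass and elongation for the unassigned bars k..n."""
    m = sum(min(sized(mat, F)[0] for mat in MATS) for F in LOADS[k:])
    d = sum(min(sized(mat, F)[1] for mat in MATS) for F in LOADS[k:])
    return m, d

best = {"mass": float("inf"), "choice": None}

def branch(k=0, mass=0.0, disp=0.0, choice=()):
    lb_m, lb_d = bound(k)
    if disp + lb_d > D_MAX or mass + lb_m >= best["mass"]:
        return                   # prune: cannot be feasible or cannot improve
    if k == len(LOADS):
        best["mass"], best["choice"] = mass, choice
        return
    for mat in MATS:
        dm, dd = sized(mat, LOADS[k])
        branch(k + 1, mass + dm, disp + dd, choice + (mat,))

branch()
```

The pruning keeps the search exact, but since the tree grows as (number of materials)^(number of bars), this also illustrates why the abstract reports exponential scaling for the pure branch-and-bound approach.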
APA, Harvard, Vancouver, ISO, and other styles
34

Hadbi, Djamel. "Formulations de problèmes d’optimisation multiniveaux pour la conception de réseaux de bord électriques en aéronautique." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT115/document.

Full text
Abstract:
Dans le contexte de l’avion plus électrique, les réseaux électriques aéronautiques sont en pleine évolution. Cette évolution est poussée par le besoin d’une intégration à forte densité énergétique ce qui pose des défis aux concepteurs en termes d’architectures, de systèmes et de méthodes de dimensionnement.Un réseau de bord est composé d’un ensemble de systèmes électriques multidisciplinaire qui proviennent de différents fournisseurs dont le design est actuellement effectué en répondant à des standards de qualité spécifiés par l’agrégateur. L’objectif de la thèse est de proposer de nouvelles approches intégrées qui permettent de gérer la complexité des réseaux électriques tout en convergeant vers un résultat optimal, offrant des gains de masses en référence à un design par des « approches mécanistes » reposant sur un agrégat de boucles d’optimisation locales. Une approche multiniveau a été développée en s’inspirant des travaux sur la MDO « Multidisciplinary Design Optimization ». L’élaboration de cette approche a été le résultat d’une expertise accumulée en appliquant différentes méthodes disponibles dans la bibliographie. L’optimisation porte plus spécifiquement sur les filtres d’entrée des charges du réseau ainsi que sur le filtre de sortie du canal de génération du réseau électrique embarqué. L’optimisation multiniveau vise, dans un contexte collaboratif, à itérer entre le niveau agrégateur (niveau réseau) et le niveau équipementier (charges et source du réseau). L’utilisation d’une formulation agrégée au niveau réseau et le respect des causalités au niveau des sous-problèmes sont les principaux atouts de cette approche qui conduit à des solutions proches de l’optimum global de masse de filtres
Within the more-electric aircraft context, electric systems and networks have to evolve. High-energy-density integration pushes designers to reconsider their systems, architectures and tools. An aircraft network contains a large number of multidisciplinary systems which come from different manufacturers. Each manufacturer designs its system separately, following quality standards specified by the aggregator. The goal of this thesis is to provide system approaches which can deal with the high level of complexity of the network while reaching the optimal design of the whole system, and so reduce the total weight in comparison with mechanistic approaches based on independent optimization loops for the different subsystems. Consulting MDO "Multidisciplinary Design Optimization" research, we developed a multilevel approach based on our previous studies and conclusions on classical approaches used in the design of electrical systems. The optimization concerns the input filters of the loads connected to the HVDC bus and the output filter of the generating channel which supplies the electric power. The multilevel collaborative optimization allows an automated exchange of data between the aggregator (system level) and manufacturers (sub-system level), and thanks to that the optimal design of the whole system is reached. The strong points of this approach are the aggregated formulation and the causality connections between sub-systems.
APA, Harvard, Vancouver, ISO, and other styles
35

Sakuda, Telma Mary. "Otimização de suspensões de benzoilmetronidazol." Universidade de São Paulo, 1993. http://www.teses.usp.br/teses/disponiveis/9/9139/tde-10062011-172958/.

Full text
Abstract:
For the study of benzoyl metronidazole suspension formulations, a statistical experimental design was used. This allowed the research to be conducted more efficiently than the traditional approach, in which experiments are planned by studying one variable at a time while holding the others constant. The factorial design made it possible to elucidate the effect of the different factors, both individually and in terms of the interactions among the components of the benzoyl metronidazole suspension formulation. A fractional factorial Graeco-Latin square design was used, combining four types of each class of adjuvant (surfactants, suspending agents, polyols, and preservatives) to obtain sixteen formulations. The best formulations were selected by analysis of variance and comparison of means using Student's "t" test. For a better understanding of the influence of the excipients on the formulation, further experiments were carried out according to a half-fraction of a full 2⁴ factorial design. The results were analyzed with coded independent variables, evaluating their estimated effects. A first-order factorial design was then used to verify whether the experimental region already contained the best conditions. From the equation of the linear model representing the explored region, it was concluded that additional experiments would be needed to obtain a quadratic model, that is, to locate the optimum region through a second-order factorial arrangement. With the optimum point established, the values of each independent variable can be determined, and the model representing the region can be used to predict the responses when the factors are varied within the established limits. The model was expressed as an equation and as a graph.
The application of optimization techniques to formulation planning thus broadened the prospects for rationalizing formulation processes.
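The half-fraction factorial methodology described above can be illustrated concretely. Below is a minimal Python sketch, with invented response values (not the thesis's measurements), of how a half-fraction of a 2⁴ design is generated from the generator D = ABC and how main effects are estimated:

```python
from itertools import product

# Half-fraction of a 2^4 design: 8 runs, with generator D = ABC.
factors = ["A", "B", "C", "D"]
runs = [{"A": a, "B": b, "C": c, "D": a * b * c}
        for a, b, c in product((-1, 1), repeat=3)]

# Hypothetical responses, one per run (illustrative values only).
responses = [45.0, 100.0, 45.0, 65.0, 75.0, 60.0, 80.0, 96.0]

def main_effect(factor):
    """Estimated main effect: mean response at +1 minus mean at -1."""
    hi = [y for run, y in zip(runs, responses) if run[factor] == +1]
    lo = [y for run, y in zip(runs, responses) if run[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {f: main_effect(f) for f in factors}
# A large |effect| relative to experimental error flags an influential
# adjuvant class; note that in this half-fraction, D is aliased with ABC.
```

An analysis of variance would then compare these effect estimates against an estimate of experimental error, as done in the thesis.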
APA, Harvard, Vancouver, ISO, and other styles
36

Agrawal, Swati. "Investigation and Optimization of a Solvent / Anti-Solvent Crystallization Process for the Production of Inhalation Particles." VCU Scholars Compass, 2010. http://scholarscompass.vcu.edu/etd/2244.

Full text
Abstract:
Dry powder inhalers (DPIs) are commonly used to deliver drugs to the lungs. The drug particles used in these DPIs should possess a number of key properties. These include an aerodynamic particle size < 5 μm and particle crystallinity for long-term formulation stability. The conventionally used micronization technique to produce inhalation particles offers limited opportunities to control and optimize the particle characteristics. It is also known to induce crystalline disorder in the particles leading to formulation instability. Hence, this research project investigates and optimizes a solvent/anti-solvent crystallization process capable of directly yielding inhalation particles using albuterol sulfate (AS) as a model drug. Further, the feasibility of the process to produce combination particles of AS and ipratropium bromide monohydrate (IB) in predictable proportions and in a size suitable for inhalation is also investigated. The solvent / anti-solvent systems employed were water / ethyl acetate (EA) and water / isopropanol (IPA). Investigation and optimization of the crystallization variables with the water / EA system revealed that particle crystallinity was significantly influenced by an interaction between the drug solution / anti-solvent ratio (R_a ratio), stirring speed and crystal maturation time. Inducing a temperature difference between the drug solution and anti-solvent (T_drug solution > T_anti-solvent) resulted in smaller particles being formed at a positive temperature difference of 65°C. IPA was shown to be the optimum anti-solvent for producing AS particles (IPA-AS) in a size range suitable for inhalation. In vitro aerosol performance of these IPA-AS particles was found to be superior compared to the conventionally used micronized particles when aerosolized from the Novolizer®. The solvent / anti-solvent systems investigated and optimized for combination particles were water / EA, water / IPA, and water / IPA:EA 1:10 (w/w).
IPA was found to be the optimum anti-solvent for producing combination particles of AS and IB with the smallest size. These combination particles showed uniform co-deposition during in vitro aerosol performance testing from the Novolizer®. Pilot molecular modeling studies in conjunction with the analysis of particle interactions using HINT provided an improved understanding of the possible interactions between AS and IB within a combination particle matrix.
APA, Harvard, Vancouver, ISO, and other styles
37

Freitas, Luís Henrique de. "Otimização de formulações de fluidos para freios do tipo ABNT 3 (DOT3)." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/3/3137/tde-01082007-181618/.

Full text
Abstract:
The aim of this work is to obtain optimal formulations of ABNT 3 (DOT 3) brake fluids that satisfy technical and market specifications, using modeling and optimization techniques over five physicochemical properties. A methodology is proposed for the design of commercial products based on the information available in databases where previous test results are recorded. The knowledge stored in those databases is exploited in a systematic manner to build models that correlate the properties of the final product with the substances in the formulations. The characteristics of interest of the mixtures are: boiling point, kinematic viscosity at -40°C, evaporation loss, wet boiling point, and effect on SBR rubber at 120°C. Mixture models are built with techniques suited to systems with incomplete or redundant information, such as Principal Components Regression (PCR) and Partial Least Squares (PLS) regression. The models are used in the mathematical formulation of the problem, which is solved by Mixed-Integer Linear Programming (MILP) techniques. Constraints can be added to restrict the solution to the region where information is available, avoiding extrapolations that could require an excessive number of experiments to confirm the predictions. The results obtained by the developed models show good agreement with those from validation experiments. This methodology can be applied to other types of brake fluids.
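As a structural illustration of the mixture-modeling step described above: the thesis uses PCR and PLS precisely because real formulation data are collinear, but the shape of a linear mixture model can be sketched with plain least squares on synthetic data (the two "base fluids", their fractions, and the boiling points below are all hypothetical):

```python
# Synthetic data: each row holds the mass fractions of two hypothetical
# base fluids; y is the measured boiling point (degrees C) of the blend.
X = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (0.7, 0.3)]
y = [230.0, 260.0, 245.0, 239.0]

# Normal equations for y ~ b1*x1 + b2*x2 (no intercept term, since the
# mass fractions sum to one).
s11 = sum(x1 * x1 for x1, _ in X)
s12 = sum(x1 * x2 for x1, x2 in X)
s22 = sum(x2 * x2 for _, x2 in X)
t1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
t2 = sum(x2 * yi for (_, x2), yi in zip(X, y))
det = s11 * s22 - s12 * s12
b1 = (s22 * t1 - s12 * t2) / det  # fitted contribution of fluid 1
b2 = (s11 * t2 - s12 * t1) / det  # fitted contribution of fluid 2

def predict_boiling_point(x1, x2):
    """Predicted blend property from the fitted mixture model."""
    return b1 * x1 + b2 * x2
```

In the thesis, fitted property models of this kind become constraints in an MILP that selects the optimal formulation; that step requires an MILP solver and is omitted here.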
APA, Harvard, Vancouver, ISO, and other styles
38

Lunday, Brian Joseph. "Resource Allocation on Networks: Nested Event Tree Optimization, Network Interdiction, and Game Theoretic Methods." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/77323.

Full text
Abstract:
This dissertation addresses five fundamental resource allocation problems on networks, all of which have applications to support Homeland Security or industry challenges. In the first application, we model and solve the strategic problem of minimizing the expected loss inflicted by a hostile terrorist organization. An appropriate allocation of certain capability-related, intent-related, vulnerability-related, and consequence-related resources is used to reduce the probabilities of success in the respective attack-related actions, and to ameliorate losses in case of a successful attack. Given the disparate nature of prioritizing capital and material investments by federal, state, local, and private agencies to combat terrorism, our model and accompanying solution procedure represent an innovative, comprehensive, and quantitative approach to coordinate resource allocations from various agencies across the breadth of domains that deal with preventing attacks and mitigating their consequences. Adopting a nested event tree optimization framework, we present a novel formulation for the problem as a specially structured nonconvex factorable program, and develop two branch-and-bound schemes based respectively on utilizing a convex nonlinear relaxation and a linear outer-approximation, both of which are proven to converge to a global optimal solution. We also investigate a fundamental special-case variant for each of these schemes, and design an alternative direct mixed-integer programming model representation for this scenario. Several range reduction, partitioning, and branching strategies are proposed, and extensive computational results are presented to study the efficacy of different compositions of these algorithmic ingredients, including comparisons with the commercial software BARON. 
The developed set of algorithmic implementation strategies and enhancements are shown to outperform BARON over a set of simulated test instances, where the best proposed methodology produces an average optimality gap of 0.35% (compared to 4.29% for BARON) and reduces the required computational effort by a factor of 33. A sensitivity analysis is also conducted to explore the effect of certain key model parameters, whereupon we demonstrate that the prescribed algorithm can attain significantly tighter optimality gaps with only a near-linear corresponding increase in computational effort. In addition to enabling effective comprehensive resource allocations, this research permits coordinating agencies to conduct quantitative what-if studies on the impact of alternative resourcing priorities. The second application is motivated by the author's experience with the U.S. Army during a tour in Iraq, during which combined operations involving U.S. Army, Iraqi Army, and Iraqi Police forces sought to interdict the transport of selected materials used for the manufacture of specialized types of Improvised Explosive Devices, as well as to interdict the distribution of assembled devices to operatives in the field. In this application, we model and solve the problem of minimizing the maximum flow through a network from a given source node to a terminus node, integrating different forms of superadditive synergy with respect to the effect of resources applied to the arcs in the network. Herein, the superadditive synergy reflects the additional effectiveness of forces conducting combined operations, vis-à-vis unilateral efforts. We examine linear, concave, and general nonconcave superadditive synergistic relationships between resources, and accordingly develop and test effective solution procedures for the underlying nonlinear programs. 
For the linear case, we formulate an alternative model representation via Fourier-Motzkin elimination that reduces average computational effort by over 40% on a set of randomly generated test instances. This is followed by extensive analyses of instance parameters to determine their effect on the levels of synergy attained using different specified metrics. For the case of concave synergy relationships, which yields a convex program, we design an inner-linearization procedure that attains solutions on average within 3% of optimality with a reduction in computational effort by a factor of 18 in comparison with the commercial codes SBB and BARON for small- and medium-sized problems; and outperforms these solvers on large-sized problems, where both failed to attain an optimal solution (and often failed to detect a feasible solution) within 1800 CPU seconds. Examining a general nonlinear synergy relationship, we develop solution methods based on outer-linearizations, inner-linearizations, and mixed-integer approximations, and compare these against the commercial software BARON. Considering increased granularities for the outer-linearization and mixed-integer approximations, as well as different implementation variants for both these approaches, we conduct extensive computational experiments to reveal that, whereas both these techniques perform comparably with respect to BARON on small-sized problems, they significantly improve upon the performance for medium- and large-sized problems. Our best procedure reduces the computational effort by a factor of 461 for the subset of test problems for which the commercial global optimization software BARON could identify a feasible solution, while also achieving solutions of objective value 0.20% better than BARON.
The third application is likewise motivated by the author's military experience in Iraq, both by several instances in which coalition forces attempted to interdict the transport of a kidnapping victim by a sectarian militia and, from the opposite perspective, by instances involving coalition forces transporting detainees between internment facilities. For this application, we examine the network interdiction problem of minimizing the maximum probability of evasion by an entity traversing a network from a given source to a designated terminus, while incorporating novel forms of superadditive synergy between resources applied to arcs in the network. Our formulations examine either linear or concave (nonlinear) synergy relationships. Conformant with military strategies that frequently involve a combination of overt and covert operations to achieve an operational objective, we also propose an alternative model for sequential overt and covert deployment of subsets of interdiction resources, and conduct theoretical as well as empirical comparative analyses between models for purely overt (with or without synergy) and composite overt-covert strategies to provide insights into absolute and relative threshold criteria for recommended resource utilization. In contrast to existing static models, in a fourth application, we present a novel dynamic network interdiction model that improves realism by accounting for interactions between an interdictor deploying resources on arcs in a digraph and an evader traversing the network from a designated source to a known terminus, wherein the agents may modify strategies in selected subsequent periods according to respective decision and implementation cycles.
We further enhance the realism of our model by considering a multi-component objective function, wherein the interdictor seeks to minimize the maximum value of a regret function that consists of the evader's net flow from the source to the terminus; the interdictor's procurement, deployment, and redeployment costs; and penalties incurred by the evader for misperceptions as to the interdicted state of the network. For the resulting minimax model, we use duality to develop a reformulation that facilitates a direct solution procedure using the commercial software BARON, and examine certain related stability and convergence issues. We demonstrate cases for convergence to a stable equilibrium of strategies for problem structures having a unique solution to minimize the maximum evader flow, as well as convergence to a region of bounded oscillation for structures yielding alternative interdictor strategies that minimize the maximum evader flow. We also provide insights into the computational performance of BARON for these two problem structures, yielding useful guidelines for other research involving similar non-convex optimization problems. For the fifth application, we examine the problem of apportioning railcars to car manufacturers and railroads participating in a pooling agreement for shipping automobiles, given a dynamically determined total fleet size. This study is motivated by the existence of such a consortium of automobile manufacturers and railroads, for which the collaborative fleet sizing and efforts to equitably allocate railcars amongst the participants are currently orchestrated by the TTX Company in Chicago, Illinois. In our study, we first demonstrate potential inequities in the industry standard resulting either from failing to address disconnected transportation network components separately, or from utilizing the current manufacturer allocation technique that is based on average nodal empty transit time estimates.
We next propose and illustrate four alternative schemes to apportion railcars to manufacturers, respectively based on total transit time that accounts for queuing; two marginal cost-induced methods; and a Shapley value approach. We also provide a game-theoretic insight into the existing procedure for apportioning railcars to railroads, and develop an alternative railroad allocation scheme based on capital plus operating costs. Extensive computational results are presented for the ten combinations of current and proposed allocation techniques for automobile manufacturers and railroads, using realistic instances derived from representative data of the current business environment. We conclude with recommendations for adopting an appropriate apportionment methodology for implementation by the industry.
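The interdiction models in the second and third applications above are built on maximum-flow computations over a capacitated network. As a minimal, self-contained sketch (toy network and capacities invented for illustration, with interdiction reduced to a simple capacity reduction rather than the dissertation's synergistic resource model):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on a dict-of-dicts capacity graph."""
    # Build residual capacities, adding zero-capacity reverse arcs.
    res = {u: dict(vs) for u, vs in cap.items()}
    for u, vs in cap.items():
        for v in vs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Trace the path back and find its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= aug
            res[v][u] += aug
        flow += aug

# Toy network (hypothetical capacities, not the dissertation's data).
g = {"s": {"a": 10, "b": 5}, "a": {"t": 8}, "b": {"t": 7}, "t": {}}
base = max_flow(g, "s", "t")
# Interdicting arc a->t (capacity cut from 8 to 2) lowers the max flow.
g["a"]["t"] = 2
interdicted = max_flow(g, "s", "t")
```

An interdictor's problem then chooses which capacities to reduce, subject to a resource budget, so as to minimize the resulting maximum flow; modeling synergy between resources makes those reductions interdependent and nonlinear, as the dissertation discusses.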
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
39

Amdouni, Saber. "Numerical analysis of some saddle point formulation with X-FEM type approximation on cracked or fictitious domains." Thesis, Lyon, INSA, 2013. http://www.theses.fr/2013ISAL0007/document.

Full text
Abstract:
This Ph.D. thesis was carried out within a scientific collaboration with "La Manufacture Française des Pneumatiques Michelin". It concerns the mathematical and numerical analysis of the convergence and stability of mixed or hybrid formulations of constrained optimization problems using the Lagrange multiplier method, in the framework of the eXtended Finite Element Method (XFEM). First, we seek to prove the stability of the X-FEM discretization for the incompressible linear elastostatic problem by establishing an LBB (inf-sup) condition. The second axis, which constitutes the main content of the thesis, is dedicated to the study of certain stabilized Lagrange multiplier methods. The particularity of these methods is that the stability of the multiplier is ensured by adding supplementary terms to the weak formulation. In this context, we first study the Barbosa-Hughes stabilization technique applied to the frictionless unilateral contact problem with the XFEM cut-off method. We then present a new consistent method, based on local projection techniques, for stabilizing a Dirichlet condition in the framework of XFEM with a fictitious domain approach, and carry out a comparative study between the local projection stabilization and the Barbosa-Hughes stabilization. Finally, we apply the local projection stabilization to the two-dimensional elastostatic unilateral contact problem with Tresca friction in the X-FEM framework.
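The Barbosa-Hughes idea mentioned above can be written schematically. Up to sign and scaling conventions that vary between references, for a Dirichlet condition u = g on a boundary Γ imposed via a multiplier λ, the stabilized discrete problem seeks (u_h, λ_h) such that, for all test pairs (v_h, μ_h),

```latex
a(u_h, v_h) + \langle \lambda_h, v_h \rangle_{\Gamma}
+ \langle \mu_h, u_h - g \rangle_{\Gamma}
- \gamma \sum_{E} h_E \int_{E}
  \bigl(\lambda_h - \partial_n u_h\bigr)\bigl(\mu_h - \partial_n v_h\bigr)\,ds
= L(v_h),
```

where h_E is the local mesh size and γ > 0 a stabilization parameter. The added term penalizes the mismatch between the multiplier and the normal derivative it approximates, restoring stability without requiring a discrete inf-sup (LBB) condition; the local projection methods studied in the thesis achieve a similar effect by a different, projection-based term.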
APA, Harvard, Vancouver, ISO, and other styles
40

Ziane, Mikal, and François Bouillé. "Optimisation de requêtes pour un système de gestion de bases de données parallèle." Paris 6, 1992. http://www.theses.fr/1992PA066689.

Full text
Abstract:
As part of the ESPRIT II EDS project, we designed and implemented a physical query optimizer for a parallel database management system. This optimizer takes into account several types of parallelism, parallel algorithms, and fragmentation strategies. We also identify which types of knowledge determine the extensibility and efficiency of an optimizer. Finally, we propose a new method for optimizing path traversal in object-oriented databases that improves on traditional methods.
APA, Harvard, Vancouver, ISO, and other styles
41

Tatjana, Veličković. "Optimizacija formulacije medijuma za proizvodnju antibiotika ciljanog delovanja primenom prirodnog izolata Streptomyces hygroscopicus." Phd thesis, Univerzitet u Novom Sadu, Tehnološki fakultet Novi Sad, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108239&source=NDLTD&language=en.

Full text
Abstract:
One of the greatest achievements of the twentieth century was the discovery of antibiotics and their application in human medicine. Over time, however, it has become clear that disease-causing microorganisms "learn" and have the ability to "change", which inevitably leads to the emergence of antibiotic resistance, reinforced by the widespread use of antibiotics in the treatment of patients and the application of human antimicrobial drugs in veterinary medicine and phytopharmacy. Developing new pharmaceuticals and improving existing ones requires huge investments that may or may not pay off for years. The first step in the development of a new pharmaceutical is the identification of the targeted antibiotic activity of the metabolites of the selected production microorganism, followed by the optimization of the biosynthesis conditions in terms of the composition of the production medium. The main goal of this PhD thesis is the optimization of the formulation of the medium for the cultivation of the natural isolate Streptomyces hygroscopicus with respect to the sources of macronutrients and their amounts, in order to direct the metabolic activity of the production microorganism, under the defined production conditions, towards the synthesis of antibiotics with targeted action. By optimizing the medium formulation under the applied experimental conditions, the most suitable macronutrient sources were selected and their optimal concentrations for the production of antibiotics with targeted action were defined. The biosynthesis of bactericides effective against B. cereus ATCC 10876, S. aureus ATCC 11632 and P. aeruginosa ATCC 27853 was most pronounced in a medium with fructose, soy flour and phosphate salts, while for fungicides acting on C. albicans ATCC 10231 and A. niger ATCC 16404 a medium with glucose as the carbon source and the aforementioned nitrogen and phosphorus sources is most appropriate, the ratio of these medium components being specific to each test microorganism.
From a technological point of view, the results of these studies represent a reliable source of information for improving the production characteristics of the applied biocatalyst, selecting the cultivation technique, defining the course of the process, and optimizing the production of bactericides and fungicides, with the ultimate goal of scaling up the observed bioprocess.
APA, Harvard, Vancouver, ISO, and other styles
42

Gugenheim, Dan. "Modélisation et optimisation d’un réseau de transport de gaz." Phd thesis, Toulouse, INPT, 2011. http://oatao.univ-toulouse.fr/11760/1/gugenheim.pdf.

Full text
Abstract:
Over the last 40 years, the use of natural gas has grown to the point that it is now the world's third-largest energy resource. It has therefore become necessary to transport it over ever longer distances between extraction and consumption sites. This transport can take place in liquid form by LNG carriers or in gaseous form through natural gas transmission networks composed of pipelines of large dimensions, in both diameter and length. This thesis deals with the modeling and optimization of the configuration of natural gas transmission networks, applied to the French main transmission network, which has several particular features. It is a large, highly meshed network in which several supply sources can serve various consumption points, and it includes interconnection stations between pipelines. GRTgaz is its operator. This work studies the feasibility of configuring the transmission network for a given supply and consumption scenario. The core of the thesis concerns the development of a gas transmission network model and the determination of flows and interconnection station configurations in this network using optimization tools. One of the innovations is the description and modeling of the interconnection stations, the essential crossroads of the network. Two models are proposed, involving a mixed-integer nonlinear formulation on the one hand and a continuous nonlinear formulation on the other, and their efficiency with different optimization solvers is discussed. The choice of the best formulation of the natural gas transmission problem was studied on a set of fictitious networks representative of the French network.
The best strategy, based on the combined use of a continuous nonlinear formulation, the choice of pressure as a variable, and initialization by a subproblem, was then applied to full-scale instances. The difficulties of moving to real instances were resolved through two improvements: scaling the variables to better condition the problem, and using a sequence of relaxations to solve all the real cases. The solutions are finally validated against existing in-house solutions.
APA, Harvard, Vancouver, ISO, and other styles
43

Gasnier, Swann. "Environnement d’aide à la décision pour les réseaux électriques de raccordement des fermes éoliennes en mer : conception et évaluation robuste sous incertitudes." Thesis, Ecole centrale de Lille, 2017. http://www.theses.fr/2017ECLI0013.

Full text
Abstract:
Offshore wind power is developing quickly, but its cost-effectiveness, measured by the LCOE (levelized cost of energy), has not yet reached that of onshore wind power. The cost of the electrical connection affects this competitiveness. Depending on the distance to shore and the power of the farm, a wide range of connection network architectures and technologies can be considered (AC, DC, etc.). The goal of this research is to provide a decision support framework for the assessment and planning of electrical connection network architectures. The architecture assessment relies on computing the annual energy dissipated in the network, the network investment costs, and the annual energy not delivered due to network unavailability; models and computational methods are proposed for these quantities. Since architectures should be compared on the basis of near-optimal designs, a formulation of the network design optimization problem is proposed. The formulation is generic with respect to the various architectures considered, and a fast heuristic solution method yielding near-optimal solutions is implemented. The decision support environment, which enables the design and then the assessment of an architecture, is applied to several case studies involving very different architectures. Finally, an analytical probabilistic method is proposed to take into account model uncertainties and their propagation to the decision criteria.
APA, Harvard, Vancouver, ISO, and other styles
44

Shao, Shengzhi. "Integrated Aircraft Fleeting, Routing, and Crew Pairing Models and Algorithms for the Airline Industry." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/49609.

Full text
Abstract:
The air transportation market has been growing steadily for the past three decades since the airline deregulation in 1978. With competition also becoming more intense, airline companies have been trying to enhance their market shares and profit margins by composing favorable flight schedules and by efficiently allocating their resources of aircraft and crews so as to reduce operational costs. In practice, this is achieved based on demand forecasts and resource availabilities through a structured airline scheduling process that is comprised of four decision stages: schedule planning, fleet assignment, aircraft routing, and crew scheduling. The outputs of this process are flight schedules along with associated assignments of aircraft and crews that maximize the total expected profit. Traditionally, airlines deal with these four operational scheduling stages in a sequential manner. However, there exist obvious interdependencies among these stages so that restrictive solutions from preceding stages are likely to limit the scope of decisions for succeeding stages, thus leading to suboptimal results and even infeasibilities. To overcome this drawback, we first study the aircraft routing problem, and develop some novel modeling foundations based on which we construct and analyze an integrated model that incorporates fleet assignment, aircraft routing, and crew pairing within a single framework. Given a set of flights to be covered by a specific fleet type, the aircraft routing problem (ARP) determines a flight sequence for each individual aircraft in this fleet, while incorporating specific considerations of minimum turn-time and maintenance checks, as well as restrictions on the total accumulated flying time, the total number of takeoffs, and the total number of days between two consecutive maintenance operations. This stage is significant to airline companies as it directly assigns routes and maintenance breaks for each aircraft in service. 
Most approaches for solving this problem adopt set partitioning formulations that include exponentially many variables, thus requiring the design of specialized column generation or branch-and-price algorithms. In this dissertation, however, we present a novel compact polynomially sized representation for the ARP, which is then linearized and lifted using the Reformulation-Linearization Technique (RLT). The resulting formulation remains polynomial in size, and we show that it can be solved very efficiently by commercial software without complicated algorithmic implementations. Our numerical experiments using real data obtained from United Airlines demonstrate significant savings in computational effort; for example, for a daily network involving 344 flights, our approach required only about 10 CPU seconds for deriving an optimal solution. We next extend Model ARP to incorporate its preceding and succeeding decision stages, i.e., fleet assignment and crew pairing, within an integrated framework. We formulate a suitable representation for the integrated fleeting, routing, and crew pairing problem (FRC), which accommodates a set of fleet types in a compact manner similar to that used for constructing the aforementioned aircraft routing model, and we generate eligible crew pairings on-the-fly within a set partitioning framework. Furthermore, to better represent industrial practice, we incorporate itinerary-based passenger demands for different fare-classes. The large size of the resulting model obviates a direct solution using off-the-shelf software; hence, we design a solution approach based on Benders decomposition and column generation using several acceleration techniques along with a branch-and-price heuristic for effectively deriving a solution to this model. 
In order to demonstrate the efficacy of the proposed model and solution approach and to provide insights for the airline industry, we generated several test instances using historical data obtained from United Airlines. Computational results reveal that the massively-sized integrated model can be effectively solved in reasonable times ranging from several minutes to about ten hours, depending on the size and structure of the instance. Moreover, our benchmark results demonstrate an average of 2.73% improvement in total profit (which translates to about 43 million dollars per year) over a partially integrated approach that combines the fleeting and routing decisions, but solves the crew pairing problem sequentially. This improvement is observed to accrue due to the fact that the fully integrated model effectively explores alternative fleet assignment decisions that better utilize available resources and yield significantly lower crew costs.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
45

Wike, Carl E. 1948. "Supply chain optimization : formulations and algorithms." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/9763.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 1999.
Includes bibliographical references (leaves 103-106).
In this thesis, we develop practical solution methods for a supply chain optimization problem: a multi-echelon, uncapacitated, time-expanded network of distribution centers and stores, for which we seek the shipping schedule that minimizes total inventory, backlogging, and shipping costs, assuming deterministic, time-varying demand over a fixed time horizon for a single product. Because of fixed ordering and shipping costs, this concave-cost network flow problem is in a class of NP-hard network design problems. We develop mathematical programming formulations, heuristic algorithms, and enhanced algorithms using approximate dynamic programming (ADP). We achieve a strong mixed integer programming (MIP) formulation, and fast, reliable algorithms, which can be extended to problems with multiple products. Beginning with a lot-size based formulation, we strengthen the formulation in steps to develop one which is a variation of a node-arc formulation for the network design problem. In addition, we present a path-flow formulation for the single-product case and an enhanced network design formulation for the multiple-product case. The basic algorithm we develop uses a dynamic lot-size model with backlogging together with a greedy procedure that emulates inventory pull systems. Four related algorithms perform local searches of the basic algorithm's solution or explore alternative solutions using pricing schemes, including a Lagrangian-based heuristic. We show how approximate dynamic programming can be used to solve this supply chain optimization problem as a dynamic control problem using any of the five algorithms. In addition to improving all the algorithms, the ADP enhancement turns the simplest algorithm into one comparable to the more complex ones. 
Our computational results illustrate that our enhanced network design formulation almost always produces integral solutions and can be used to solve problems of moderate size (3 distribution centers, 30 stores, 30 periods). Our heuristic methods, particularly those enhanced by ADP methods, produce near-optimal solutions for truly large-scale problems.
by Carl E. Wike.
S.M.
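The dynamic lot-size model that anchors the basic algorithm above is a classic Wagner-Whitin-style recursion. A minimal sketch follows, omitting backlogging for brevity; the function name, costs, and demands are illustrative and not taken from the thesis:

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """Minimum cost to satisfy all demand; f[t] covers periods 1..t."""
    n = len(demand)
    f = [0.0] + [float("inf")] * n
    for t in range(1, n + 1):
        for j in range(1, t + 1):  # last order is placed in period j and serves j..t
            hold = sum(holding_cost * (i - j) * demand[i - 1]
                       for i in range(j, t + 1))
            f[t] = min(f[t], f[j - 1] + setup_cost + hold)
    return f[n]
```

With demand [10, 10, 10], a setup cost of 100, and a unit holding cost of 1, a single order in period 1 (total cost 130) beats ordering in every period (300); the pull-emulating greedy procedure in the thesis trades off exactly this kind of setup-versus-holding tension.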
APA, Harvard, Vancouver, ISO, and other styles
46

Morais, Máicon de. "Estudo da separação de glicocorticoides e aplicações em formulações farmacêuticas utilizando eletroforese capilar." Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/46/46136/tde-08022019-101018/.

Full text
Abstract:
Studies involving glucocorticoids deserve attention because these hormones are responsible for transferring information and instructions to cells, thereby regulating metabolism, development, growth, and immune function, and assisting in the control of both reproductive and tissue functions. They are also synthesized and widely used for therapeutic purposes (allergic processes, treatment of autoimmune diseases, and preoperative and/or postoperative transplant care) owing to their efficient action as immunosuppressants and anti-inflammatories. The first two chapters of this work present a literature review focusing on general considerations about glucocorticoids, methodologies used in the analysis of these hormones, and fundamentals of capillary electrophoresis. The fourth chapter then presents the optimization of the separation of 17 glucocorticoids using micellar electrokinetic chromatography, chosen because of the highly hydrophobic character of the analytes. To this end, the electrolyte composition consisted of 20 mM sodium tetraborate (pH 9.3) and 30 mM sodium dodecyl sulfate (as surfactant), and the solute-micelle interaction, and therefore solute retention, was manipulated by adding (v/v) organic solvents up to 20% acetonitrile (ACN), 20% ethanol (EtOH), and 1% tetrahydrofuran (THF), following a mixture design model (ten different electrolytes in total); through this approach an optimal separation was obtained (13.3% EtOH, 3.3% ACN, and 0.17% THF). The best separation condition was tested qualitatively on a urine sample from a volunteer on continuous prednisone corticosteroid therapy. The solvent mixtures studied in this work affect the solubility of the hormones in the aqueous phase, and the micellar structure is also strongly affected, especially in the solvation layer. 
The fourth chapter seeks to rationalize these effects by obtaining descriptors; the information contained in the hydrophobic and hydrophilic descriptors is consistently relevant and contributes to the correlations found. Three groups of distinct behavior were obtained, in which the proton-donor and proton-acceptor capacity for hydrogen bonding were the interactions considered most relevant to the observed separation behavior. The final chapter presents possibilities for use in quality control in the pharmaceutical industry: methods based on reversed injection and voltage were proposed to reduce analysis time (to a maximum of 5 minutes). These were validated following the protocol recommended by ANVISA (the Brazilian National Agency of Sanitary Surveillance) for the parameters precision, accuracy, selectivity, linearity, limits of detection and quantification, and robustness, and applied to the quantification of four different commercial formulations containing glucocorticoids (prednisone 20 mg, betamethasone 4 mg, mometasone furoate 200 mcg, and beclomethasone dipropionate 200 mcg).
APA, Harvard, Vancouver, ISO, and other styles
47

Liu, Mingli. "Supply Chain Management in Humanitarian Aid and Disaster Relief." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31572.

Full text
Abstract:
Humanitarian aid and disaster relief are delivered in times of crises or natural disasters, such as after a conflict or in response to a hurricane, typhoon, or tsunami. Different from regular aid programs, aid and relief are provided to deal with emergencies in immediate local areas, and to shelter affected people and refugees impacted by sudden traumatic events. There is evidence that natural and man-made disasters are increasing in number all around the world, affecting hundreds of millions of people every year. In spite of this fact, only in recent years – beginning in 2005 – has management of the supply chain of resources and materials for humanitarian aid and disaster relief been a topic of interest for researchers. Consequently, the academic literature in this field is comparatively new and still sparse, indicating a requirement for more academic studies. As a key part of the C-Change International Community-University Research Alliance (ICURA) project for managing adaptation to environmental change in coastal communities of Canada and the Caribbean, this thesis develops a framework and analytical model for domestic supply chain management in humanitarian aid and disaster relief in the event of severe storm and flooding in the Canadian C-Change community of Charlottetown, Prince Edward Island. In particular, the focus includes quantitative modeling of two specific aspects during the preparedness phase of emergency management: (1) inventory prepositioning and (2) transportation planning. In addition, this thesis proposes and analyses the characteristics of an effective supply chain management framework in practice to assist Canadian coastal communities in improving their preparation and performance in disaster relief efforts. The results indicate that system effectiveness in Charlottetown is improved, and the time to assist affected people is decreased, by distributing the central emergency supply among more than one base station.
APA, Harvard, Vancouver, ISO, and other styles
48

Ganti, Mahapatruni Ravi Sastry. "New formulations for active learning." Diss., Georgia Institute of Technology, 2014. http://hdl.handle.net/1853/51801.

Full text
Abstract:
In this thesis, we provide computationally efficient algorithms with provable statistical guarantees for the problem of active learning, by using ideas from sequential analysis. We provide a generic algorithmic framework for active learning in the pool setting, and instantiate this framework using ideas from learning with experts, stochastic optimization, and multi-armed bandits. For the problem of learning a convex combination of a given set of hypotheses, we provide a stochastic mirror descent based active learning algorithm in the stream setting.
APA, Harvard, Vancouver, ISO, and other styles
49

Cook, Laurence William. "Effective formulations of optimization under uncertainty for aerospace design." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/276146.

Full text
Abstract:
Formulations of optimization under uncertainty (OUU) commonly used in aerospace design—those based on treating statistical moments of the quantity of interest (QOI) as separate objectives—can result in stochastically dominated designs. A stochastically dominated design is undesirable, because it is less likely than another design to achieve a QOI at least as good as a given value, for any given value. As a remedy to this limitation for the multi-objective formulation of moments, a novel OUU formulation is proposed—dominance optimization. This formulation seeks a set of solutions and makes use of global optimizers, so is useful for early stages of the design process when exploration of design space is important. Similarly, to address this limitation for the single-objective formulation of moments (combining moments via a weighted sum), a second novel formulation is proposed—horsetail matching. This formulation can make use of gradient-based local optimizers, so is useful for later stages of the design process when exploitation of a region of design space is important. Additionally, horsetail matching extends straightforwardly to different representations of uncertainty, and is flexible enough to emulate several existing OUU formulations. Existing multi-fidelity methods for OUU are not compatible with these novel formulations, so one such method—information reuse—is generalized to be compatible with these and other formulations. The proposed formulations, along with generalized information reuse, are compared to their most comparable equivalent in the current state-of-the-art on practical design problems: transonic aerofoil design, coupled aero-structural wing design, high-fidelity 3D wing design, and acoustic horn shape design. Finally, the two novel formulations are combined in a two-step design process, which is used to obtain a robust design in a challenging version of the acoustic horn design problem. 
Dominance optimization is given half the computational budget for exploration; then horsetail matching is given the other half for exploitation. Using exactly the same computational budget as a moment-based approach, the design obtained using the novel formulations is 95% more likely to achieve a better QOI than the best value achievable by the moment-based design.
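The single-objective moment formulation that horsetail matching is contrasted against can be sketched as a weighted sum of Monte Carlo moment estimates. The quadratic quantity of interest, the noise samples, and the weights below are illustrative assumptions, not the aerospace models from the thesis:

```python
import statistics

def moment_objective(design, noise_samples, w_mean=1.0, w_std=1.0):
    # Illustrative QOI: nominal quadratic performance plus design-scaled uncertainty
    qois = [(design - 1.0) ** 2 + design * z for z in noise_samples]
    return w_mean * statistics.mean(qois) + w_std * statistics.pstdev(qois)

# Exploitation step: pick the design minimizing the weighted-sum objective
samples = [-1.0, -0.5, 0.0, 0.5, 1.0]
best = min((i / 20 for i in range(21)), key=lambda d: moment_objective(d, samples))
```

Because the weights collapse the QOI distribution to two numbers, two designs with very different tails can score identically, which is the stochastic-dominance blind spot the thesis's formulations address.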
APA, Harvard, Vancouver, ISO, and other styles
50

Stefana, Janićijević. "Metode promena formulacija i okolina za problem maksimalne klike grafa." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2016. http://www.cris.uns.ac.rs/record.jsf?recordId=101446&source=NDLTD&language=en.

Full text
Abstract:
This Ph.D. thesis addresses approaches to solving computationally hard (NP-hard) problems in combinatorial optimization, highlighting the maximum clique problem as a representative of certain structures in graphs. The maximum clique problem and related problems are formulated as nonlinear functions and solved with the aim of discovering new methods that find good solution approximations in reasonable time. Several variants of the Variable Neighborhood Search method are proposed for solving the maximum clique problem. Related graph problems have applications in information retrieval, scheduling, signal processing, classification theory, coding theory, etc. All algorithms are implemented and successfully tested on a variety of instances.
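The shake-then-rebuild loop of Variable Neighborhood Search can be illustrated on the maximum clique problem with a minimal sketch; the greedy rebuild, shaking scheme, and parameters are simplifications for illustration, not the extensions developed in the thesis:

```python
import random

def is_clique(adj, nodes):
    return all(v in adj[u] for u in nodes for v in nodes if u != v)

def greedy_extend(adj, clique, vertices):
    # Add any vertex adjacent to every current member, highest degree first
    for v in sorted(vertices, key=lambda x: -len(adj[x])):
        if v not in clique and all(v in adj[u] for u in clique):
            clique.add(v)
    return clique

def vns_max_clique(adj, k_max=3, iters=100, seed=1):
    rng = random.Random(seed)
    best = greedy_extend(adj, set(), list(adj))
    for _ in range(iters):
        k = 1
        while k <= k_max:
            # Shaking: drop k random vertices, then rebuild greedily
            cand = set(best)
            for v in rng.sample(sorted(cand), min(k, len(cand))):
                cand.discard(v)
            cand = greedy_extend(adj, cand, list(adj))
            if len(cand) > len(best):
                best, k = cand, 1  # improvement: restart from the nearest neighborhood
            else:
                k += 1  # no improvement: widen the neighborhood
    return best
```

On a small graph whose largest clique is {0, 1, 2, 3}, the greedy construction alone already finds it; the shaking step matters on instances where greedy choices get trapped, which is the situation the thesis's neighborhood changes target.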
APA, Harvard, Vancouver, ISO, and other styles