Dissertations / Theses on the topic 'Optimization of lens'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 dissertations / theses for your research on the topic 'Optimization of lens.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.
King, Angela, Ph. D., Massachusetts Institute of Technology. "Regression under a modern optimization lens." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/98719.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 131-139).
In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing 200 billion factor speedup in solving mixed integer optimization (MIO) problems. The common mindset of MIO as theoretically elegant but practically irrelevant is no longer justified. In this thesis, we propose a methodology for regression modeling that is based on optimization techniques and centered around MIO. In Part I we propose a method to select a subset of variables to include in a linear regression model using continuous and integer optimization. Despite the natural formulation of subset selection as an optimization problem with an ℓ0-norm constraint, current methods for subset selection do not attempt to use integer optimization to select the best subset. We show that, although this problem is non-convex and NP-hard, it can be practically solved for large scale problems. We numerically demonstrate that our approach outperforms other sparse learning procedures. In Part II of the thesis, we build on Part I to modify the objective function and include constraints that will produce linear regression models with other desirable properties, in addition to sparsity. We develop a unified framework based on MIO which aims to algorithmize the process of building a high-quality linear regression model. This is the only methodology we are aware of that constructs models by imposing statistical properties simultaneously rather than sequentially. Finally, we turn our attention to logistic regression modeling. It is the goal of Part III of the thesis to efficiently solve the mixed integer convex optimization problem of logistic regression with cardinality constraints to provable optimality. We develop a tailored algorithm to solve this challenging problem and demonstrate its speed and performance. We then show how this method can be used within the framework of Part II, thereby also creating an algorithmic approach to fitting high-quality logistic regression models. In each part of the thesis, we illustrate the effectiveness of our proposed approach on both real and synthetic datasets.
by Angela King.
Ph. D.
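For readers who want the formulation behind the abstract above, best subset selection is usually written as an ℓ0-constrained least-squares problem and lifted to a mixed integer program with binary indicator variables; this is the standard textbook form, not a quotation from the thesis, and the big-M bound is an assumption of the sketch:

\min_{\beta \in \mathbb{R}^p} \ \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 \quad \text{subject to} \quad \lVert \beta \rVert_0 \le k
\qquad\Longleftrightarrow\qquad
\min_{\beta,\, z} \ \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 \quad \text{subject to} \quad -M z_i \le \beta_i \le M z_i,\ \ z_i \in \{0,1\},\ \ \textstyle\sum_{i=1}^{p} z_i \le k,

where z_i indicates whether variable i enters the model and M bounds the magnitude of any optimal coefficient; the equivalence holds only when M is chosen large enough.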
Rogers, Adam. "Gravitational lens modeling with iterative source deconvolution and global optimization of lens density parameters." Journal of the Royal Astronomical Society of Canada, 2012. http://hdl.handle.net/1993/5283.
Dong, Junwei. "Microwave Lens Designs: Optimization, Fast Simulation Algorithms, and 360-Degree Scanning Techniques." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/29081.
Ph. D.
Côté, Marie. "Optimization of waveguide coupling lenses using lens design software." Diss., The University of Arizona, 1995. http://hdl.handle.net/10150/187385.
Full textWei, Kang. "Bio-inspired Reconfigurable Elastomer-liquid Lens: Design, Actuation and Optimization." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1429657034.
Full textEricsson, Kenneth, and Robert Grann. "Image optimization algorithms on an FPGA." Thesis, Mälardalen University, School of Innovation, Design and Engineering, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-5727.
In this thesis a method to compensate for camera distortion is developed for an FPGA platform as part of a complete vision system. Several methods and models are presented and described to give a good introduction to the complexity of the problems that are overcome with the developed method. The solution to the core problem is shown to have good precision at a sub-pixel level.
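As a language-agnostic illustration of the kind of distortion compensation discussed above (the thesis targets an FPGA; this sketch is plain Python, and the centre and coefficients k1, k2 are hypothetical calibration outputs), a two-term radial polynomial can be used to remap pixel coordinates; when the coefficients are fitted for the inverse mapping, applying it corrects the distortion at sub-pixel resolution:

def remap_radial(points, center, k1, k2):
    # Apply a two-term radial polynomial around the distortion centre.
    # With coefficients fitted for the inverse model, this maps distorted
    # pixel coordinates back to corrected ones.
    cx, cy = center
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy              # offset from the distortion centre
        r2 = dx * dx + dy * dy               # squared radial distance
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        out.append((cx + dx * scale, cy + dy * scale))
    return out

# Example with made-up calibration values:
print(remap_radial([(320.0, 200.0)], center=(320.0, 240.0), k1=1.0e-7, k2=0.0))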
Garcia Gonzalez, Hector Camerino. "Optimization of composite tubes for a thermal optical lens housing design." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969/383.
Full textFreiheit, Andrew J. "improving contact lens manufacturing through cost modeling and batch production scheduling optimization." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/122592.
Thesis: S.M., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2019. In conjunction with the Leaders for Global Operations Program at MIT.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 55).
J&J Vision Care (JJVC) uses production scheduling methods that are not fully optimized, causing over-production of certain SKUs, and reducing capacity for other SKUs on backorder. This makes planning a weekly run-schedule for each line difficult. It is also difficult to understand where to invest capital to create an optimally flexible fleet of production lines. JJVC is currently capacity-constrained, so optimizing the production to increase output will directly translate to additional revenue. The three main areas that the leadership team wants to explore in this project are: 1. What is our current fleet flexibility? 2. How much capacity can be freed up if our fleet was more flexible? 3. Can we create a cost modeling tool that will provide more granularity in brand and sales channel profitability? First, the brands and SKUs on each line that are "validated" to run (by FDA, etc.) must be quantified.
Not all validated SKUs on a line are "runnable" though: process issues often arise in the plant that prevent some of these validated SKUs from being produced (e.g. mechanical tolerances, chemistry, etc.). Therefore, the gap between validated and runnable SKUs will be an opportunity to explore. One constraint originally studied was the "runnable" vs. "validated" prescriptions at the Jacksonville site; the percentage of runnable vs. validated SKUs is only 73%, meaning that 27% of the prescriptions that J&J invested time and money to validate cannot be produced on certain lines due to manufacturing issues. The impact of this constraint and others can be quantified to identify improvement opportunities. Second, potential additional capacity can be calculated by running a sensitivity analysis with the planning tool (i.e. the optimization model) to analyze how outputs (e.g. throughput, changeover times, etc.) are affected by changing certain inputs: mold, core, and pack change times, production rate, minimum lot sizes, service level, etc. It is also possible to change the objective function to place more weight on certain user-defined parameters.
The impact of these changes was observed by collecting the master planning data for a defined time period and running optimization scenarios. Various time horizons were used to gain an accurate understanding of the impact. Third, to understand how the initiatives described above improve both revenue and costs, a clear understanding of the profitability of each lens must be considered before JJVC management makes high-level strategic decisions. To make this possible, a Total Delivered Cost (TDC) model was developed and published for the contact lens supply chain.
by Andrew J. Freiheit.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Department of Mechanical Engineering
Khamlaj, Tariq A. "Analysis and Optimization of Shrouded Horizontal Axis Wind Turbines." University of Dayton / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1543845571758119.
Full textJabbour, Toufic. "DESIGN, ANALYSIS, AND OPTIMIZATION OF DIFFRACTIVE OPTICAL ELEMENTS UNDER HIGH NUMERICAL APERTURE FOCUSING." Doctoral diss., University of Central Florida, 2009. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2837.
Ph.D.
Optics and Photonics
Optics PhD
Vladu, Adrian Valentin. "Shortest paths, Markov chains, matrix scaling and beyond : improved algorithms through the lens of continuous optimization." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112828.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 289-302).
In this thesis, we build connections between classic methods from convex optimization and the modern toolkit from the fast Laplacian solver literature, in order to make progress on a number of fundamental algorithmic problems: (1) We develop a faster algorithm for the unit capacity minimum cost flow problem, which encompasses the shortest path with negative weights and minimum cost bipartite perfect matching problems. In the case of sparse graphs, this provides the first running time improvement for these problems in over 25 years. (2) We initiate the study of solving linear systems involving directed Laplacian matrices, and devise an almost-linear time algorithm for this task. This primitive enables us to also obtain almost-linear time algorithms for computing an entire host of quantities associated with Markov chains, such as stationary distributions, personalized PageRank vectors, hitting times, or escape probabilities. This significantly improves over the previous state-of-the-art, which was based on simulating random walks, or applying fast matrix multiplication. (3) We develop faster algorithms for scaling and balancing nonnegative matrices, two fundamental problems in scientific computing, significantly improving over the previously known best running times. In particular, if the optimal scalings/balancings have polynomially bounded condition numbers, our algorithms run in nearly-linear time. Beyond that, we leverage and extend tools from convex geometry in order to design an algorithm for online pricing with nearly-optimal regret. We also use convex optimization to shed new light on the approximate Caratheodory problem, for which we give a deterministic nearly-linear time algorithm, as well as matching lower bounds.
by Adrian Valentin Vladu.
Ph. D.
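The matrix scaling problem mentioned in the abstract above asks for diagonal matrices that give a nonnegative matrix prescribed row and column sums; the classical baseline is Sinkhorn's alternating normalization, sketched below for the doubly stochastic case (this is the textbook baseline, not the faster algorithm developed in the thesis, and it assumes a strictly positive matrix):

import numpy as np

def sinkhorn_scale(A, iters=1000, tol=1e-9):
    # Find positive vectors r, c such that diag(r) A diag(c) is (approximately) doubly stochastic.
    A = np.asarray(A, dtype=float)
    r = np.ones(A.shape[0])
    c = np.ones(A.shape[1])
    for _ in range(iters):
        r = 1.0 / (A @ c)                         # rescale rows
        c = 1.0 / (A.T @ r)                       # rescale columns
        B = A * r[:, None] * c[None, :]
        if (np.abs(B.sum(axis=1) - 1.0).max() < tol and
                np.abs(B.sum(axis=0) - 1.0).max() < tol):
            break
    return r, c

r, c = sinkhorn_scale(np.array([[1.0, 2.0], [3.0, 4.0]]))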
Li, Qinggele. "Optimization of point spread function of a high numerical aperture objective lens : application to high resolution optical imaging and fabrication." Thesis, Cachan, Ecole normale supérieure, 2014. http://www.theses.fr/2014DENS0059/document.
Nowadays, far field optical microscopy is widely used in many fields, for fundamental research and applications. Its low cost, simple operation and high flexibility are its main advantages. The key parameter of an optical microscope is the objective lens. This thesis focuses mainly on the characterization and optimization of the point spread function (PSF) of a high numerical aperture (NA) objective lens (OL) for applications in high resolution imaging and nano-fabrication. In the first part of the thesis, we have systematically investigated the dependency of the polarization and intensity distributions of the focusing spot on numerous parameters, such as the phase, the polarization, and the beam mode of the incident beam, as well as the refractive index mismatch. Then, we demonstrated theoretically different methods for manipulating the polarization and intensity distributions of the focusing spot, which can take desired shapes and are useful for different applications. By using a home-made confocal microscope, we have experimentally verified some of the theoretical predictions, for example, the vector properties of a light beam under a tight focusing condition. In the second part of the dissertation work, a new, simple and inexpensive method based on the one-photon absorption mechanism has been demonstrated theoretically and experimentally for 3D sub-micrometer imaging and fabrication applications. The theoretical calculation, based on the vectorial Debye approximation and taking into account the absorption of the material, shows that it is possible to focus the light tightly and deeply inside the material if the material presents a very low one-photon absorption (LOPA) at the excitation wavelength. We have then demonstrated experimentally that LOPA microscopy makes it possible to achieve 3D imaging and 3D fabrication with sub-micrometer resolution, similar to those obtained by two-photon absorption microscopy.
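For intuition about the point spread function studied above, the scalar, paraxial limit that the vectorial Debye treatment generalizes is the Airy pattern of a circular pupil (a textbook formula, not a result from the thesis):

I(r) \ \propto\ \left[ \frac{2\,J_1(v)}{v} \right]^2, \qquad v = \frac{2\pi}{\lambda}\,\mathrm{NA}\; r,

where J_1 is the first-order Bessel function, λ the wavelength, NA the numerical aperture and r the radial distance in the focal plane. At high NA, and in the presence of index mismatch or engineered polarization, this scalar form breaks down, which is precisely why the thesis relies on the vectorial Debye model.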
Bayrac, Abdullah Tahir. "Optimization of a Regeneration and Transformation System for Lentil (Lens culinaris M., cv. Sultan-i) Cotyledonary Petioles and Epicotyls." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/3/12605453/index.pdf.
M was found to enhance the percentage of somatic embryos by 25 % and to reduce necrosis by 24 %. However, none of the globular and heart shape embryos were able to regenerate. Transient GUS expression efficiencies of roots, shoot tips, and cotyledonary petioles were tested after Agrobacterium-mediated transformation. Transformation frequencies were 26, 74, and 38 % for cotyledonary petioles, shoot tips, and roots, respectively.
De, Villiers Jason Peter. "Correction of radially asymmetric lens distortion with a closed form solution and inverse function." Diss., Pretoria : [s.n.], 2008. http://upetd.up.ac.za/thesis/available/etd-01232009-161525/.
Full textBarbosa, José Luiz Ferraz. "Metodologia de otimização de lentes para lâmpadas de LED." Universidade Federal de Goiás, 2013. http://repositorio.bc.ufg.br/tede/handle/tede/3180.
Full textApproved for entry into archive by Luciana Ferreira (lucgeral@gmail.com) on 2014-09-26T11:57:17Z (GMT) No. of bitstreams: 3 Dissertacao_JLFB_VFinal.pdf: 18601269 bytes, checksum: ec6c9955a2f1d95120cab2f797232e97 (MD5) Capa Dissertacao_JLFB_VFinal.pdf: 4769532 bytes, checksum: 3d15c52d57be870999449e2a7b78fbb3 (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5)
Made available in DSpace on 2014-09-26T11:57:17Z (GMT). No. of bitstreams: 3 Dissertacao_JLFB_VFinal.pdf: 18601269 bytes, checksum: ec6c9955a2f1d95120cab2f797232e97 (MD5) Capa Dissertacao_JLFB_VFinal.pdf: 4769532 bytes, checksum: 3d15c52d57be870999449e2a7b78fbb3 (MD5) license_rdf: 23148 bytes, checksum: 9da0b6dfac957114c6a7714714b86306 (MD5) Previous issue date: 2013-06-14
The purpose of this work is to present a methodology for optimizing the geometry of the Light Emitting Diode (LED) secondary lens, in non-imaging applications, which focuses on the distribution of illuminance on a target plane. The ray-tracing simulation is produced by a stochastic method, and the optimization process, based on heuristic search, interacts with the ray tracing to find the optimized parameters of the LED secondary lens geometry.
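As a rough sketch of the loop described in the abstract above, where a heuristic search repeatedly calls a stochastic ray tracer to evaluate a candidate secondary-lens geometry, the code below performs a plain random search; the function trace_uniformity_error, the parameter names and their bounds are hypothetical stand-ins for the actual ray-tracing simulation and lens parameterization:

import random

def trace_uniformity_error(lens_params):
    # Hypothetical objective: run the stochastic ray tracer for the given
    # secondary-lens geometry and return how far the illuminance on the
    # target plane is from the desired distribution (lower is better).
    raise NotImplementedError("replace with a call to the ray-tracing simulation")

def random_search(bounds, evaluations=200):
    # Minimal heuristic search over the lens geometry parameters.
    best_params, best_err = None, float("inf")
    for _ in range(evaluations):
        candidate = {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        err = trace_uniformity_error(candidate)
        if err < best_err:
            best_params, best_err = candidate, err
    return best_params, best_err

# Hypothetical lens parameters, in millimetres:
# best, err = random_search({"inner_radius": (2.0, 6.0), "outer_radius": (6.0, 12.0), "height": (3.0, 10.0)})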
Jarboui, Ahmed. "Etude de l'oxygénation de la cornée en présence d'un dispositif oculaire par des approches couplées de modélisation et d'expérimentations." Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTG038.
This work presents a study of corneal oxygenation in the presence of an ocular device (sensor). This device aims to allow continuous measurement of intraocular pressure in order to better anticipate the onset of glaucoma. The oxygenation study was carried out using coupled modeling and experimental approaches. The experimental apparatus developed in this work, based on a chronoamperometric method, made it possible to measure the oxygen permeability of the materials used to manufacture the sensor as well as the overall device permeability. Experimental OCT measurements of the change in corneal thickness have shown that corneal swelling, caused mainly by a lack of oxygenation, varies locally across the cornea. To explain this spatial heterogeneity, a mathematical model of corneal oxygenation has been developed in 2D geometry. The model involves the description of mass transfer phenomena (oxygen transfer and diffusion) and biochemical reactions within the cornea through aerobic and anaerobic pathways. The model made it possible to identify the phenomena limiting corneal oxygenation under different conditions of sensor wearing, by integrating a potential decentering, and for different designs of the device. As a predictive tool, the model also identified improvement strategies such as reducing the surface area of the circuit, implementing oxygen channels or increasing the permeability of the manufacturing materials.
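A common way to write the kind of oxygen transport and consumption balance described above is a diffusion-reaction equation with Michaelis-Menten style aerobic consumption; this is a generic textbook form, not the exact 2D model of the thesis:

\frac{\partial C}{\partial t} \;=\; D\,\nabla^{2} C \;-\; \frac{Q_{\max}\, C}{K_m + C},

where C is the local oxygen tension, D its diffusivity in the corneal tissue, Q_max the maximal consumption rate and K_m the half-saturation constant; the sensor and the permeability of its materials enter through the boundary condition imposed on the anterior corneal surface.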
Abbas, Ibtisam. "Optimization of the optical properties of electrostrictive polyurethane for a smart lens." Thesis submitted in partial fulfilment of the degree of Master of Engineering, Auckland University of Technology, February 2005. Full thesis. Abstract, 2005. http://puka2.aut.ac.nz/ait/theses/AbbasI.pdf.
Full textMeske, Ralf. "Non-parametric gradient-less shape optimization in solid mechanics /." Aachen : Shaker, 2007. http://www.gbv.de/dms/ilmenau/toc/538233001.PDF.
Hamaz, Idir. "Méthodes d'optimisation robuste pour les problèmes d'ordonnancement cyclique." Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30205/document.
Several studies on cyclic scheduling problems have been presented in the literature. However, most of them consider the problem parameters to be deterministic and do not take into account possible uncertainties on these parameters. Yet the best solution for a deterministic problem can quickly become the worst one in the presence of uncertainties, leading to bad schedules or infeasibilities. Many sources of uncertainty can be encountered in scheduling problems; for example, activity durations can decrease or increase, machines can break down, new activities can be incorporated, etc. In this PhD thesis, we focus on scheduling problems that are cyclic and whose activity durations are affected by uncertainties. More precisely, we consider an uncertainty set where each task duration belongs to an interval, and the number of parameters that can deviate from their nominal values is bounded by a parameter called the budget of uncertainty. This parameter allows us to control the degree of conservatism of the resulting schedule. In particular, we study two cyclic scheduling problems. The first one is the basic cyclic scheduling problem (BCSP). We formulate the problem as a two-stage robust optimization problem and, using the properties of this formulation, we propose three algorithms to solve it. The second considered problem is the cyclic jobshop problem (CJSP). As for the BCSP, we formulate the problem as a two-stage robust optimization problem and, by exploiting the algorithms proposed for the robust BCSP, we propose a branch-and-bound algorithm to solve it. In order to evaluate the efficiency of our method, we compared it with classical decomposition methods for two-stage robust optimization problems that exist in the literature. We also studied a version of the CJSP where each task duration takes values uniformly within an interval and where the objective is to minimize the mean value of the cycle time. In order to solve the problem, we adapted the branch-and-bound algorithm so that in each node of the search tree, the problem to be solved is the computation of the volume of a polytope. Numerical experiments assess the efficiency of the proposed methods.
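The budget of uncertainty mentioned above is usually formalized in the Bertsimas-Sim way (a standard definition stated here for reference, not quoted from the thesis): each task duration p_i lies in an interval around its nominal value and at most Γ durations may deviate,

\mathcal{U}_{\Gamma} \;=\; \Bigl\{\, p \;:\; p_i = \bar{p}_i + \hat{p}_i\,\xi_i,\ \ 0 \le \xi_i \le 1,\ \ \textstyle\sum_i \xi_i \le \Gamma \,\Bigr\},

where \bar{p}_i is the nominal duration, \hat{p}_i the maximal deviation and Γ the budget that controls how conservative the resulting cyclic schedule is (Γ = 0 recovers the deterministic problem, while a large Γ protects against all deviations at once).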
Phadke, Nandan Neelkanth. "OPTIMIZATIONS ON FINITE THREE DIMENSIONAL LARGE EDDY SIMULATIONS." The Ohio State University, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=osu1431084092.
Meske, Ralf [Verfasser]. "Non-parametric gradient-less shape optimization in solid mechanics / Ralf Meske." Aachen : Shaker, 2007. http://d-nb.info/1166511979/34.
Full textTruong, Huu Tram. "Workflow-based applications performance and execution cost optimization on cloud infrastructures." Nice, 2010. http://www.theses.fr/2010NICE4091.
Cloud computing is increasingly exploited to tackle the computing challenges raised in both science and industry. Clouds provide computing, network and storage resources on demand to satisfy the needs of large-scale distributed applications. To adapt to the diversity of cloud infrastructures and usage, new tools and models are needed. Estimating the amount of resources consumed by each application in particular is a difficult problem, both for end users who aim at minimizing their cost and for infrastructure providers who aim at controlling their resource allocation. Although a quasi-unlimited amount of resources may be allocated, a trade-off has to be found between the allocated infrastructure cost, the expected performance and the optimal achievable performance, which depends on the level of parallelization of the applications. Focusing on medical image analysis, a scientific domain representative of the large class of data-intensive distributed applications, this thesis proposes a fine-grained cost function model relying on the expertise captured from the application. Based on this cost function model, four resource allocation strategies are proposed. Taking into account both computing and network resources, these strategies help users determine the amount of resources to reserve and compose their execution environment. In addition, the data transfer overhead and the low reliability level, which are well-known problems of large-scale distributed systems impacting application performance and infrastructure usage cost, are also considered. The experiments reported in this thesis were carried out on the Aladdin/Grid'5000 infrastructure, using the HIPerNet virtualization middleware. This virtual platform manager enables the joint virtualization of computing and network resources. A real medical image analysis application was considered for all experimental validations. The experimental results assess the validity of the approach in terms of infrastructure cost and application performance control. Our contributions both facilitate the exploitation of cloud infrastructures, delivering a higher quality of service to end users, and help the planning of cloud resource delivery.
Lin, Hongzhou. "Algorithmes d'accélération générique pour les méthodes d'optimisation en apprentissage statistique." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM069/document.
Optimization problems arise naturally in machine learning for supervised problems. A typical example is the empirical risk minimization (ERM) formulation, which aims to find the best a posteriori estimator minimizing the regularized risk on a given dataset. The current challenge is to design efficient optimization algorithms that are able to handle large amounts of data in high-dimensional feature spaces. Classical optimization methods such as the gradient descent algorithm and its accelerated variants are computationally expensive under this setting, because they require to pass through the entire dataset at each evaluation of the gradient. This was the motivation for the recent development of incremental algorithms. By loading a single data point (or a minibatch) for each update, incremental algorithms reduce the computational cost per iteration, yielding a significant improvement compared to classical methods, both in theory and in practice. A natural question arises: is it possible to further accelerate these incremental methods? We provide a positive answer by introducing several generic acceleration schemes for first-order optimization methods, which is the main contribution of this manuscript. In chapter 2, we develop a proximal variant of the Finito/MISO algorithm, which is an incremental method originally designed for smooth strongly convex problems. In order to deal with the non-smooth regularization penalty, we modify the update by introducing an additional proximal step. The resulting algorithm enjoys a similar linear convergence rate as the original algorithm, when the problem is strongly convex. In chapter 3, we introduce a generic acceleration scheme, called Catalyst, for accelerating gradient-based optimization methods in the sense of Nesterov. Our approach applies to a large class of algorithms, including gradient descent, block coordinate descent, incremental algorithms such as SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. The Catalyst algorithm can be viewed as an inexact accelerated proximal point algorithm, applying a given optimization method to approximately compute the proximal operator at each iteration. The key to achieving acceleration is to appropriately choose an inexactness criterion and control the required computational effort. We provide a global complexity analysis and show that acceleration is useful in practice. In chapter 4, we present another generic approach called QNing, which applies Quasi-Newton principles to accelerate gradient-based optimization methods. The algorithm is a combination of the inexact L-BFGS algorithm and the Moreau-Yosida regularization, and applies to the same class of functions as Catalyst. To the best of our knowledge, QNing is the first Quasi-Newton type algorithm compatible with both composite objectives and the finite sum setting. We provide extensive experiments showing that QNing gives significant improvement over competing methods in large-scale machine learning problems. We conclude the thesis by extending the Catalyst algorithm into the nonconvex setting. This is joint work with Courtney Paquette and Dmitriy Drusvyatskiy, from the University of Washington, and my PhD advisors. The strength of the approach lies in its ability to adapt automatically to convexity, meaning that no information about the convexity of the objective function is required before running the algorithm.
When the objective is convex, the proposed approach enjoys the same convergence result as the convex Catalyst algorithm, leading to acceleration. When the objective is nonconvex, it achieves the best known convergence rate to stationary points for first-order methods. Promising experimental results have been observed when applying it to sparse matrix factorization problems and neural network models.
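To make the Catalyst scheme described above concrete, here is a minimal sketch of the outer loop (simplified from the general method, with a fixed extrapolation coefficient and hypothetical helper names): at each iteration an inner first-order solver approximately minimizes the objective plus a quadratic proximal term centred at an extrapolated point, and the centre is then updated with Nesterov-style momentum.

def catalyst_outer_loop(x0, inner_solve, iterations, alpha=0.9):
    # inner_solve(center, x_start) should approximately minimize
    #     f(x) + (kappa / 2) * ||x - center||^2
    # starting from x_start, using any first-order method (SAG, SVRG, MISO, ...).
    # In the full method alpha is updated from kappa and the strong convexity
    # parameter; it is kept fixed here for brevity.
    x = x0
    y = x0                                    # extrapolated proximal centre
    for _ in range(iterations):
        x_new = inner_solve(y, x)             # inexact proximal point step
        y = x_new + alpha * (x_new - x)       # Nesterov-style extrapolation
        x = x_new
    return x

Any incremental method covered by the analysis can play the role of inner_solve, provided each inner problem is solved to the accuracy prescribed by the theory.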
Khan, Ejaz. "Techniques itératives pour les systèmes CDMA et algorithmes de détection MIMO." Paris, ENST, 2003. http://www.theses.fr/2003ENST0020.
We focus on low complexity maximum likelihood (ML) detection. The EM algorithm is a broadly applicable approach to the iterative computation of ML estimates, useful in a variety of incomplete-data problems, where algorithms such as the Newton-Raphson method may turn out to be more complicated. In the first part of the thesis, we use the EM algorithm to estimate the channel amplitudes blindly and compare the results with the Cramer-Rao bound (CRB). The second part of the thesis concerns the detection problem in MIMO systems. We are able to devise an algorithm for approximate ML detection using a discrete geometric approach. The advantage of this algorithm is that its complexity is polynomial irrespective of the SNR and no heuristic is employed in our algorithm. An alternative way to tackle the ML problem is to devise low complexity algorithms whose performance is close to the exact ML. This can be done using a semidefinite programming (SDP) approach. The computational complexity of the SDP approach is comparable to the average complexity of the sphere decoder, but it is still quite complicated for large systems. We obtained a low complexity (by reducing the number of variables) approximate ML detector using a second-order cone programming (SOCP) approach. In the above discussion the channel state information is assumed to be known at the receiver. We further looked into the problem of detection with no channel knowledge at the receiver. The result was joint channel-symbol estimation. We obtained the results of joint channel-symbol estimation using the EM algorithm and, in order to reduce the complexity of the resulting EM algorithm, we used a mean field theory (MFT) approach.
Li, Yang. "Simple techniques for piezoelectric energy harvesting optimization." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0077/document.
Piezoelectric energy harvesting is a promising technique for battery-less miniature electronic devices. The objective of this work is to evaluate simple and robust approaches to optimize the extracted power. First, a lightweight equivalent circuit derived from the Mason equivalent circuit is proposed. It is a comprehensive circuit, which is suitable for investigating piezoelectric seismic energy harvesters and for power optimization. The optimal load impedance for both the resistive load and the complex load is given and analyzed. When a complex load can be implemented, the power output is constant at any excitation frequency under constant acceleration excitation. This power output is exactly the maximum power that can be extracted with a matched resistive load without losses. However, this wide bandwidth optimization is not practical due to the high sensitivity to mismatch of the reactive component. Another approach to improve power extraction is the capability to implement a network of piezoelectric generators harvesting at various frequency nodes and different locations on a host structure. Simulations are conducted in the case of direct harvesting on a planar structure excited by a force pulse. These distributed harvesters, equipped with nonlinear SSHI (Synchronized Switching Harvesting on Inductor) devices, were connected in parallel, in series, independently, and in other more complex forms. The comparison results showed that the energy output did not depend on the storage capacitor connection method. However, using only one SSHI circuit for a whole distributed harvester system degrades the energy scavenging capability due to switching conflicts. Finally, a novel non-linear approach is proposed to allow optimization of the extracted energy while keeping simplicity and standalone capability. This circuit, named S3H for "Synchronized Serial Switch Harvesting", does not rely on any inductor and is built around a simple switch. The power harvested is more than twice that of the conventional technique over a wide band of resistive loads.
Zhang, Bo. "Self-optimization of infrastructure and platform resources in cloud computing." Thesis, Lille 1, 2016. http://www.theses.fr/2016LIL10207/document.
Elasticity is considered an important solution to handle performance issues in scalable distributed systems. However, most research on elasticity only concerns the provisioning and de-provisioning of resources in automatic ways, and ignores the utilization of the provisioned resources. This might lead to resource leaks while provisioning redundant resources, thereby causing unnecessary expenditure. To avoid resource leaks and redundant resources, my research therefore focuses on how to maximize resource utilization through self resource management. In this thesis, addressing diverse problems of resource usage and allocation in different layers, I propose two resource management approaches corresponding to the infrastructure and the platform, respectively. To overcome infrastructure limitations, I propose CloudGC, a middleware service which aims to free occupied resources by recycling idle VMs. In the platform layer, a self-balancing approach is introduced to adjust the Hadoop configuration at runtime, thereby avoiding memory loss and dynamically optimizing Hadoop performance. Finally, this thesis addresses the rapid deployment of services, which is also an elasticity issue. A new tool, named "hadoop-benchmark", uses Docker to accelerate the installation of Hadoop clusters and to provide a set of Docker images which contain several well-known Hadoop benchmarks. The assessments show that these approaches and tools achieve resource management and self-optimization in various layers, and in turn facilitate the elasticity of infrastructure and platform in scalable platforms such as cloud computing.
Belabed, Dallal. "Design and Evaluation of Cloud Network Optimization Algorithms." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066149/document.
This dissertation aims to give a deep understanding of the impact of the new cloud paradigms with respect to the Traffic Engineering goal, the Energy Efficiency goal, and the fairness in the throughput offered to endpoints, and of the new opportunities given by virtualized network functions. In the first part of the dissertation we investigate the impact of these novel features on Data Center Network optimization, providing a formal comprehensive mathematical formulation of virtual machine placement and a metaheuristic for its resolution. We show in particular how virtual bridging and multipath forwarding impact common DCN optimization goals, Traffic Engineering and Energy Efficiency, and assess their utility in the various cases on four different DCN topologies. In the second part of the dissertation our interest moves to better understanding the impact of novel flattened and modular DCN architectures on congestion control protocols, and vice versa. In fact, since one of the major concerns in congestion control is the fairness in the offered throughput, the impact of the additional path diversity, brought by the novel DCN architectures and protocols, on the throughput of individual endpoints and aggregation points is unclear. Finally, in the third part we present preliminary work on the new Network Function Virtualization paradigm. In this part we provide a linear programming formulation based on the virtual network function chain routing problem in a carrier network. The goal of our formulation is to find the best route in a carrier network where customer demands have to pass through a number of NFV nodes, taking into consideration the unique constraints set by NFV.
Wang, Chen. "Variants of Deterministic and Stochastic Nonlinear Optimization Problems." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112294/document.
Combinatorial optimization problems are generally NP-hard, so one can only rely on heuristic or approximation algorithms to find a local optimum or a feasible solution. During the last decades, more general solving techniques have been proposed, namely metaheuristics, which can be applied to many types of combinatorial optimization problems. This PhD thesis proposes to solve deterministic and stochastic optimization problems with metaheuristics. We especially studied Variable Neighborhood Search (VNS) and chose this algorithm to solve our optimization problems since it is able to find satisfactory approximate optimal solutions within a reasonable computation time. Our thesis starts with a relatively simple deterministic combinatorial optimization problem: the Bandwidth Minimization Problem. The proposed VNS procedure offers an advantage in terms of CPU time compared to the literature. Then, we focus on resource allocation problems in OFDMA systems, and present two models. The first model aims at maximizing the total bandwidth channel capacity of an uplink OFDMA-TDMA network subject to user power and subcarrier assignment constraints while simultaneously scheduling users in time. For this problem, VNS gives tight bounds. The second model is a stochastic resource allocation model for uplink wireless multi-cell OFDMA networks. After transforming the original model into a deterministic one, the proposed VNS is applied to the deterministic model and finds near-optimal solutions. Subsequently, several problems, either in OFDMA systems or in many other resource allocation topics, can be modeled as hierarchical problems, e.g., bi-level optimization problems. Thus, we also study stochastic bi-level optimization problems, and use a robust optimization framework to deal with uncertainty. The distributionally robust approach can obtain slightly conservative solutions when the number of binary variables in the upper level is larger than the number of variables in the lower level. Our numerical results for all the problems studied in this thesis show the performance of our approaches.
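For readers unfamiliar with the metaheuristic named above, a generic Variable Neighborhood Search skeleton looks as follows; this is a textbook-style sketch in which the objective, shaking move and local search are problem-specific placeholders, not the ones designed in the thesis:

def vns(x0, objective, shake, local_search, k_max, iterations):
    # Generic Variable Neighborhood Search: perturb the incumbent in
    # neighborhood k ("shaking"), refine with local search, and restart
    # from k = 1 whenever an improvement is found.
    best, best_val = x0, objective(x0)
    for _ in range(iterations):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            candidate_val = objective(candidate)
            if candidate_val < best_val:      # improvement: recentre and reset k
                best, best_val = candidate, candidate_val
                k = 1
            else:                             # no improvement: widen the neighborhood
                k += 1
    return best, best_val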
Costa, da Silva Marco Aurelio. "Applications and algorithms for two-stage robust linear optimization." Thesis, Avignon, 2018. http://www.theses.fr/2018AVIG0229/document.
The research scope of this thesis is two-stage robust linear optimization. We are interested in investigating algorithms that can explore its structure and also in adding alternatives to mitigate the conservatism inherent to a robust solution. We develop algorithms that incorporate these alternatives and are customized to work with medium- or large-scale problem instances. By doing this we experiment with a holistic approach to conservatism in robust linear optimization and bring together the most recent advances in areas such as data-driven robust optimization, distributionally robust optimization and adaptive robust optimization. We apply these algorithms to defined applications of the network design/loading problem, the scheduling problem, a min-max-min combinatorial problem and the airline fleet assignment problem. We show how the algorithms developed improve performance when compared to previous implementations.
Nikbakht, Silab Rasoul. "Unsupervised learning for parametric optimization in wireless networks." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/671246.
Full textAqueta tesis estudia l’optimització paramètrica a les xarxes cel.lulars i xarxes cell-free, explotant els paradigmes basats en dades i basats en experts. L’assignació i control de la potencia, que ajusten la potencia de transmissió per complir amb diferents criteris d’equitat com max-min o max-product, son tasques crucials en les telecomunicacions inalàmbriques pertanyents a la categoria d’optimització paramètrica. Les tècniques d’última generació per al control i assignació de la potència solen exigir enormes costos computacionals i no son adequats per aplicacions en temps real. Per abordar aquesta qüestió, desenvolupem una tècnica de propòsit general utilitzant aprenentatge no supervisat per resoldre optimitzacions paramètriques; i al mateix temps ampliem el reconegut algoritme de control de potencia fraccionada. En el paradigma basat en dades, creem un marc d’aprenentatge no supervisat que defineix una xarxa neuronal (NN, sigles de Neural Network en Anglès) especifica, incorporant coneixements experts a la funció de cost de la NN per resoldre els problemes de control i assignació de potència. Dins d’aquest enfocament, s’entrena una NN de tipus feedforward mitjançant el mostreig repetit en l’espai de paràmetres, però, en lloc de resoldre completament el problema d’optimització associat, es pren un sol pas en la direcció del gradient de la funció objectiu. El mètode resultant ´es aplicable tant als problemes d’optimització convexos com no convexos. Això ofereix una acceleració de dos a tres ordres de magnitud en els problemes de control i assignació de potencia en comparació amb un algoritme de resolució convexa—sempre que sigui aplicable. En el paradigma dirigit per experts, investiguem l’extensió del control de potencia fraccionada a les xarxes sense cèl·lules. La solució tancada resultant pot ser avaluada per a l’enllaç de pujada i el de baixada sense esforç i assoleix una solució (gaire) òptima en el cas de l’enllaç de pujada. En ambdós paradigmes, ens centrem especialment en els guanys a gran escala—la quantitat d’atenuació que experimenta la potencia mitja local rebuda. La naturalesa de variació lenta dels guanys a gran escala relaxa la necessitat d’una actualització freqüent de les solucions tant en el paradigma basat en dades com en el basat en experts, permetent d’aquesta manera l’ús dels dos mètodes en aplicacions en temps real.
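A minimal sketch of the training loop described above, assuming a PyTorch-style setup; the network architecture, the sampling of large-scale gains, the single-cell interference model and the sum-log-rate utility are illustrative choices rather than the exact ones used in the thesis. A feedforward network maps sampled large-scale gains to transmit powers and, for each batch, a single gradient step is taken on the negative utility instead of solving the allocation problem to optimality:

import torch

n_users, p_max, noise = 8, 1.0, 0.1
net = torch.nn.Sequential(                               # feedforward map: gains -> powers
    torch.nn.Linear(n_users, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, n_users), torch.nn.Sigmoid())    # outputs in (0, 1), scaled by p_max
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(10000):
    g = torch.rand(32, n_users)                          # sampled large-scale gains (placeholder distribution)
    p = p_max * net(g)                                   # candidate power allocation
    rx = g * p                                           # received powers at a common receiver (toy model)
    sinr = rx / (noise + rx.sum(dim=1, keepdim=True) - rx)
    utility = torch.log(1.0 + sinr).sum(dim=1).mean()    # sum log-rate (max-product proxy)
    opt.zero_grad()
    (-utility).backward()                                # one gradient step per sampled batch
    opt.step()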
Ismaïl, Mohamed Amine. "Study and optimization of data protection, bandwidth usage and simulation tools for wireless networks." Nice, 2010. http://www.theses.fr/2010NICE4074.
Today, many technical challenges remain in the design of wireless networks to support emerging services. The main contributions of this thesis are three-fold in addressing some of these issues. The first contribution addresses the reliability of wireless links, in particular through data protection against long fading times (also known as slow fading) in the context of a direct satellite-to-mobile link. We propose an innovative algorithm, called Multi Burst Sliding Encoding (MBSE), that extends the existing DVB-H intra-burst (MPE-FEC) protection to an inter-burst protection. Our MBSE algorithm allows complete burst losses to be recovered, while taking into account the specificity of mobile hand-held devices. Based on an optimized data organization, our algorithm provides protection against long term fading, while still using the Reed-Solomon code already implemented in mobile hand-held chipsets. MBSE has been approved by the DVB Forum and was integrated into the DVB-SH standard, in which it now plays a key role. The second contribution is related to the practical optimization of bandwidth usage in the context of wireless links. We have proposed WANcompress, a bandwidth compression technique for detecting and eliminating redundant network traffic by sending only a label instead of the original packets. It differs from standard compression techniques in that it removes redundant patterns over a large range of time (days/weeks, i.e. gigabytes), whereas existing compression techniques operate on smaller window scales (seconds, i.e. a few kilobytes). We performed intensive experiments that achieved compression factors up to 25 times and acceleration factors up to 22 times. In a corporate trial conducted over a WiMAX network for one week, WANcompress improved the bitrate up to 10 times, and on average 33% of the bandwidth was saved. The third contribution is related to the simulation of wireless networks. We have proposed an 802.16 WiMAX module for the widely used ns-3 simulator. Our module provides a detailed and standard-compliant implementation of the Point to Multi-Point (PMP) topology with Time Division Duplex (TDD) mode. It supports a large number of features, thus enabling the simulation of a rich set of WiMAX scenarios and providing close-to-real results. These features include Quality of Service (QoS) management, efficient scheduling for both uplink and downlink, packet classification, bandwidth management, dynamic flow creation, as well as scalable OFDM physical layer simulation. This module was merged with the main development branch of the ns-3 simulator, and has become one of its standard features as of version 3.8.
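The WANcompress idea summarized above (replace a payload the peer has already seen with a short label) can be illustrated with a toy sender/receiver cache; this is a sketch of the principle only, not the protocol from the thesis, and the hashing and framing choices here are hypothetical:

import hashlib

class LabelCache:
    # Toy redundancy-elimination endpoint: both sides keep payloads indexed by digest.
    def __init__(self):
        self.store = {}

    def encode(self, payload):
        digest = hashlib.sha1(payload).digest()
        if digest in self.store:              # payload already sent once: send the 20-byte label only
            return b"L" + digest
        self.store[digest] = payload          # first occurrence: send the full payload
        return b"P" + payload

    def decode(self, frame):
        if frame[:1] == b"L":
            return self.store[frame[1:]]      # expand the label back to the cached payload
        payload = frame[1:]
        self.store[hashlib.sha1(payload).digest()] = payload
        return payload

sender, receiver = LabelCache(), LabelCache()
assert receiver.decode(sender.encode(b"same bytes")) == b"same bytes"
assert receiver.decode(sender.encode(b"same bytes")) == b"same bytes"   # second time travels as a label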
Ben Halima Kchaou, Rania. "Cost optimization of business processes based on time constraints on cloud resources." Electronic Thesis or Diss., Institut polytechnique de Paris, 2020. http://www.theses.fr/2020IPPAS014.
Motivated by the need to optimize the deployment cost of business processes, organizations outsource some of their operations to cloud computing. Cloud providers offer competitive pricing strategies (e.g., on-demand, reserved, and spot) specified based on temporal constraints to accommodate users' changing and last-minute demands. Besides, the organizations' business processes are time constrained and any violation of these constraints could lead to serious consequences. Therefore, there is a need to formally verify that the cloud resource allocation in a business process is temporally correct. However, due to the lack of a formal definition of cloud pricing strategies, which are specified in natural language, the temporal correctness of cloud resource allocation in a business process management context cannot be verified. Furthermore, the variety of cloud resources, pricing strategies, and activity requirements does not help the business process designer easily find the optimal deployment cost of a business process. In this thesis, our objectives are to: (i) improve the support of business processes for temporal constraints on activities and cloud resources, as well as pricing strategies, and (ii) minimize the business process deployment cost. To this end, we propose a formal specification for cloud resources, pricing strategies, and activities' temporal constraints. This specification is used to formally verify the temporal correctness of cloud resource allocation in time-aware business processes. Then, we propose two linear programming models, a binary linear program and a mixed integer program, to find the optimal cost of deploying time-aware business processes on cloud resources.
Morros Rubio, Josep Ramon. "Optimization of Segmentation-Based Video Sequence Coding Techniques. Application to content based functionalities." Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/6888.
Full textEn este trabajo se estudia el problema de la compresión de vídeo utilizando funcionalidades basadas en el contenido en el marco teórico de los sistemas de codificación de secuencias de vídeo basados en regiones. Se tratan básicamente dos problemas: El primero está relacionado con la obtención de una codificación óptima en sistemas de codificación de vídeo basados en regiones. En concreto, se muestra como se puede utilizar un metodología de 'rate-distortion' para este tipo de problemas. El segundo problema tratado es como introducir funcionalidades basadas en el contenido en uno de estos sistemas de codificación de vídeo.La teoría de 'rate-distortion' define la optimalidad en la codificación como la representación de una señal que, para un tasa de bits dada, resulta en una distorsión mínima al reconstruir la señal. En el caso de sistemas de codificación basados en regiones, esto implica obtener una partición óptima y al mismo tiempo, un reparto óptimo de los bits entre las diferentes regiones de esta partición. Este problema se formaliza para sistemas de codificación no escalables y se propone un algoritmo para solucionar este problema. Este algoritmo se aplica a un sistema de codificación concreto llamado SESAME. En SESAME, cada cuadro de la secuencia de vídeo se segmenta en un conjunto de regiones que se codifican de forma independiente. La segmentación se hace siguiendo criterios de homogeneidad espacial y temporal. Para eliminar la redundancia temporal, se utiliza un sistema predictivo basado en la información de movimiento tanto para la partición como para la textura. El sistema permite seguir la evolución temporal de cada región a lo largo de la secuencia. Los resultados de la codificación son óptimos (o casi-óptimos) para el marco dado en un sentido de 'rate-distortion'. El proceso de codificación incluye encontrar una partición óptima y también encontrar la técnica de codificación y nivel de calidad más adecuados para cada región.Más adelante se investiga el problema de la codificación de vídeo en sistemas con escalabilidad y que suporten funcionalidades basadas en el contenido. El problema se generaliza incluyendo en el esquema de codificación las dependencias espaciales y temporales entre los diferentes cuadros o entre las diferentes capas de escalabilidad. En este caso, la solución requiere encontrar la partición óptima y las técnicas de codificación de textura óptimas tanto para la capa base como para la capa de mejora. A causa de les dependencias que hay entre estas capas, la partición y el conjunto de técnicas de codificación para la capa de mejora dependerán de las decisiones tomadas en la capa base. Dado que este tipo de soluciones generalmente son muy costosas computacionalmente, también se propone una solución que no tiene en cuenta estas dependencias.Los algoritmos obtenido se usan en la extensión de SESAME. El sistema de codificación extendido, llamado XSESAME soporta diferentes tipos de escalabilidad (PSNR, espacial y temporal) así como funcionalidades basadas en el contenido y la posibilidad de seguimiento de objetos a través de la secuencia de vídeo. El sistema de codificación permite utilizar dos modos diferentes por lo que hace referencia a la selección de les regiones de la partición de la capa de mejora: El primer modo (supervisado) está pensado para utilizar funcionalidades basadas en el contenido. 
El segundo modo (no supervisado) no soporta funcionalidades basadas en el contenido y su objetivo es simplemente obtener una codificación óptima en la capa de mejora.Otro tema investigado es la integración de un método de seguimiento de objetos en el sistema de codificación.En el caso general, el seguimiento de objetos en secuencias de vídeo es un problema muy complejo. Si este seguimiento se quiere integrar en un sistema de codificación aparecen problemas adicionales debido a que los requisitos necesarios para obtener eficiencia en la codificación pueden entrar en conflicto con los requisitos para obtener una buena precisión en el seguimiento de objetos. Esta aparente incompatibilidad se soluciona usando un enfoque basado en una doble partición de cada cuadro de la secuencia. La partición que se usa para codificar se resegmenta usando criterios puramente espaciales. Proyectando esta segunda partición se obtiene una mejor adaptación de los contornos al objeto a seguir. El exceso de regiones que implicaría esta resegmentación se elimina con una etapa de fusión de regiones realizada a posteriori.
This work addresses the problem of video compression with content-based functionalities in the framework of segmentation-based video coding systems. Two major problems are considered. The first one is related to coding optimality in segmentation-based coding systems. Regarding this subject, the feasibility of a rate-distortion approach for a complete region-based coding system is shown. The second one is how to address content-based functionalities in the coding system proposed as a solution to the first problem. Optimality, as defined in the framework of rate-distortion theory, deals with obtaining a representation of the video sequence that leads to a minimum distortion of the coded signal for a given bit budget. In the case of segmentation-based coding systems this means obtaining an 'optimal' partition together with the best coding technique for each region of this partition, so that the result is optimal in an operational rate-distortion sense. The problem is formalized for independent, non-scalable coding. An algorithm to solve this problem is provided as well. This algorithm is applied to a specific segmentation-based coding system, the so-called SESAME. In SESAME, each frame is segmented into a set of regions that are coded independently. Segmentation involves both spatial and motion homogeneity criteria. To exploit temporal redundancy, a prediction for both the partition and the texture of the current frame is created by using motion information. The time evolution of each region is defined along the sequence (time tracking). The results are optimal (or near-optimal) for the given framework in a rate-distortion sense. The definition of the coding strategy involves a global optimization of the partition as well as of the coding technique/quality level for each region. Later, the investigation is also extended to the problem of video coding optimization in the framework of a scalable video coding system that can address content-based functionalities. The focus is set on the various types of content-based scalability and object tracking. The generality of the problem has also been extended by including the spatial and temporal dependencies between frames and scalability layers in the optimization scheme. In this case the solution implies finding the optimal partition and set of quantizers for both the base and the enhancement layers. Due to the coding dependencies of the enhancement layer with respect to the base layer, the partition and the set of quantizers of the enhancement layer depend on the decisions made on the base layer. Also, a solution for the independent optimization problem (i.e. without taking into account dependencies between different frames or scalability layers) has been proposed to reduce the computational complexity. These solutions are used to extend the SESAME coding system. The extended coding system, named XSESAME, supports different types of scalability (PSNR, spatial and temporal) as well as content-based functionalities, such as content-based scalability and object tracking. Two different operating modes for region selection in the enhancement layer have been presented: one (supervised) aimed at providing content-based functionalities at the enhancement layer and the other (unsupervised) aimed at coding efficiency, without content-based functionalities. Integration of object tracking into the segmentation-based coding system is also investigated. In the general case, tracking is a very complex problem.
If this capability has to be integrated into a coding system, additional problems arise due to conflicting requirements between coding efficiency and tracking accuracy. This is solved by using a double partition approach, where pure spatial criteria are used to re-segment the partition used for coding. The projection of the re-segmented partition results in more precise adaptation to object contours. A merging step is performed a posteriori to eliminate the excess of regions originated by the re-segmentation.
Morros, Rubió Josep Ramon. "Optimization of Segmentation-Based Video Sequence Coding Techniques. Application to content based functionalities." Doctoral thesis, Universitat Politècnica de Catalunya, 2004. http://hdl.handle.net/10803/6888.
Full textThis work addresses the problem of video compression with content-based functionalities in the framework of segmentation-based video coding systems. Two major problems are considered. The first one is related with coding optimality in segmentation-based coding systems. Regarding this subject, the feasibility of a rate-distortion approach for a complete region-based coding system is shown. The second one is how to address content-based functionalities in the coding system proposed as a solution of the first problem.
Optimality, as defined in the framework of rate-distortion theory, deals with obtaining a representation of the video sequence that leads to a minimum distortion of the coded signal for a given bit budget. In the case of segmentation-based coding systems this means to obtain an 'optimal' partition together with the best coding technique for each region of this partition so that the result is optimal in an operational rate-distortion sense. The problem is formalized for independent, non-scalable coding.
An algorithm to solve this problem is provided as well.
This algorithm is applied to a specific segmentation-based coding system, the so-called SESAME. In SESAME, each frame is segmented into a set of regions that are coded independently. Segmentation involves both spatial and motion homogeneity criteria. To exploit temporal redundancy, a prediction of both the partition and the texture of the current frame is created by using motion information. The time evolution of each region is defined along the sequence (time tracking). The results are optimal (or near-optimal) for the given framework in a rate-distortion sense. The definition of the coding strategy involves a global optimization of the partition as well as of the coding technique/quality level for each region.
Later, the investigation is extended to the problem of video coding optimization in the framework of a scalable video coding system that can address content-based functionalities. The focus is set on the various types of content-based scalability and on object tracking. The generality of the problem has also been extended by including the spatial and temporal dependencies between frames and scalability layers in the optimization scheme. In this case the solution implies finding the optimal partition and set of quantizers for both the base and the enhancement layers. Due to the coding dependencies of the enhancement layer with respect to the base layer, the partition and the set of quantizers of the enhancement layer depend on the decisions made on the base layer. Also, a solution for the independent optimization problem (i.e. without taking into account dependencies between different frames or scalability layers) has been proposed to reduce the computational complexity.
These solutions are used to extend the SESAME coding system. The extended coding system, named XSESAME, supports different types of scalability (PSNR, spatial and temporal) as well as content-based functionalities, such as content-based scalability and object tracking.
Two different operating modes for region selection in the enhancement layer have been presented: One (supervised) aimed at providing content-based functionalities at the enhancement layer and the other (unsupervised) aimed at coding efficiency, without content-based functionalities.
Integration of object tracking into the segmentation-based coding system is also investigated.
In the general case, tracking is a very complex problem. If this capability has to be integrated into a coding system, additional problems arise due to conflicting requirements between coding efficiency and tracking accuracy. This is solved by using a double partition approach, where pure spatial criteria are used to re-segment the partition used for coding. The projection of the re-segmented partition results in more precise adaptation to object contours. A merging step is performed a posteriori to eliminate the excess of regions originated by the re-segmentation.
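The operational rate-distortion optimization described in this abstract is, at its core, a constrained allocation problem. Below is a minimal illustrative sketch of the standard Lagrangian approach to such problems, in which each region independently picks the operating point minimizing D + lambda*R and lambda is bisected to meet the bit budget. It is not the SESAME algorithm itself; the region names and rate-distortion points are invented for illustration.

```python
# Illustrative sketch of operational rate-distortion allocation over regions.
# NOT the SESAME algorithm: each region has a hypothetical list of
# (rate, distortion) operating points (one per coding technique / quality level),
# and a Lagrange multiplier is bisected until the bit budget is met.

def allocate(regions, budget, iters=50):
    """regions: dict name -> list of (rate, distortion) points."""
    lo, hi = 0.0, 1e6  # search interval for the Lagrange multiplier
    best = None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        # For a fixed multiplier, the optimum decouples: every region
        # independently minimizes its Lagrangian cost D + lam * R.
        choice = {name: min(pts, key=lambda p: p[1] + lam * p[0])
                  for name, pts in regions.items()}
        rate = sum(p[0] for p in choice.values())
        if rate > budget:
            lo = lam          # too many bits -> penalize rate more
        else:
            hi = lam          # feasible -> try a smaller penalty
            best = choice
    return best

# Hypothetical example: two regions, three operating points each.
regions = {
    "background": [(10, 50.0), (20, 30.0), (40, 10.0)],
    "object":     [(15, 80.0), (30, 40.0), (60, 15.0)],
}
print(allocate(regions, budget=50))
```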
Lei, Shenghui. "CFD analysis/optimization of thermo-acoustic instabilities in liquid fuelled aero stationary gas turbine combustors." Thesis, University of Manchester, 2010. https://www.research.manchester.ac.uk/portal/en/theses/cfd-analysis-optimization-of-thermoacoustic-instabilities-in-liquid-fuelled-aero-stationary-gas-turbine-combustors(38bc317e-aa3d-4fd6-825d-e45e7637e841).html.
Full text
Daghsen, Ahmed. "Methodology of analysis and optimization of real-time embedded systems : application to automotive field." Compiègne, 2013. http://www.theses.fr/2013COMP2062.
Full textToday, the design and development of automotive software systems has become very complex. This complexity is due to the high number of functions, execution codes and the diversity of communication buses embedded in the vehicle. The heterogeneity of the architecture also makes the design of such systems more difficult and time consuming. The introduction of Model-Based Development (MBD) in the automotive field promised to improve the development process by allowing continuity between requirements definition, system design and distributed system implementation. In the same direction, the emergence of the AUTOSAR consortium standardized the design of automotive embedded systems by allowing the portability and reuse of software functions on the hardware architecture. It defines a set of rules and interfaces to design, interconnect, deploy and configure a set of application software components (SWCs). However, designing an embedded system according to the AUTOSAR standard requires the configuration of hundreds of parameters and several software allocation decisions. Each decision may influence the system performance as well as the development cost. This architectural complexity leads to a large design decision space which is difficult to explore without an analytical method or a design tool. We introduce in this thesis a methodology that assists the system designer in configuring an AUTOSAR-compliant system. It is based on a Design Space Exploration (DSE) framework that evaluates and analyzes several design alternatives in order to identify the optimal solutions. The DSE task relies on a multi-objective evolutionary algorithm. The DSE can be performed for two purposes: (1) the mapping of SWCs to ECUs and the mapping of runnables (code entities) to OS tasks, and (2) the configuration of software parameters such as OS task priorities and types. The flexibility and scalability of the DSE framework allow it to be applied to other description and modeling languages such as SysML/MARTE.
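The design-space exploration loop described above can be illustrated with a deliberately simplified toy: randomly sampled mappings of software components (SWCs) to ECUs are scored on two hypothetical objectives (load imbalance and inter-ECU communication), and only the Pareto-nondominated mappings are kept. This is only a schematic stand-in for the multi-objective evolutionary search used in the thesis; all component names, loads and message volumes are invented.

```python
import random

# Toy design-space exploration: map SWCs to ECUs and keep Pareto-optimal mappings.
# Objectives (both minimized): load imbalance across ECUs and inter-ECU messaging.
LOADS = {"swc_a": 3.0, "swc_b": 1.5, "swc_c": 2.0, "swc_d": 4.0}
MSGS = {("swc_a", "swc_b"): 10, ("swc_b", "swc_c"): 5, ("swc_c", "swc_d"): 8}
ECUS = ["ecu_0", "ecu_1"]

def objectives(mapping):
    load = {e: 0.0 for e in ECUS}
    for swc, ecu in mapping.items():
        load[ecu] += LOADS[swc]
    imbalance = max(load.values()) - min(load.values())
    comm = sum(v for (a, b), v in MSGS.items() if mapping[a] != mapping[b])
    return imbalance, comm

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and p != q

def explore(samples=200, seed=0):
    rng = random.Random(seed)
    cands = [(objectives(m), m) for m in
             ({swc: rng.choice(ECUS) for swc in LOADS} for _ in range(samples))]
    # Keep the non-dominated front.
    return [(f, m) for f, m in cands
            if not any(dominates(g, f) for g, _ in cands)]

for front, mapping in explore():
    print(front, mapping)
```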
Vaudolon, Julien. "Electric field determination and magnetic topology optimization in Hall thrusters." Thesis, Orléans, 2015. http://www.theses.fr/2015ORLE2026/document.
Full textElectric propulsion is facing new challenges. Recently, the launch of "all-electric" satellites has marked the debut of a new era. Going all-electric now appears as an interesting alternative to conventional systems for telecom operators. A laser spectroscopy technique was used during this research to investigate the ion velocity distribution dynamics. The different methods for determining the electric field in Hall thrusters are presented. Two unstable ion regimes were identified and examined. Measurement uncertainties associated with electrostatic probes were assessed. Planar probes have been designed and tested. A thorough investigation of the influence of the magnetic field parameters on the performance of Hall thrusters was performed. The wall-less Hall thruster design is presented, and preliminary experiments have revealed its interest for the electric propulsion community.
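For context, electric-field determination from ion velocity measurements typically exploits energy conservation for collisionless ions, E(x) = (m_i/e) v dv/dx, so that the field can be recovered from the measured mean velocity profile. The short sketch below applies this relation to a made-up velocity profile; it illustrates the principle only and is not the data processing used in the thesis.

```python
import numpy as np

# Illustrative only: recover an axial electric field profile from a mean ion
# velocity profile using energy conservation for collisionless ions,
#   (1/2) m v(x)^2 = (1/2) m v0^2 + e * (phi0 - phi(x))  =>  E = (m/e) v dv/dx.
# The xenon ion mass is real; the velocity profile itself is hypothetical.

E_CHARGE = 1.602e-19        # C
M_XE = 131.29 * 1.6605e-27  # kg, xenon ion mass

x = np.linspace(0.0, 0.03, 61)                  # m, axial positions (hypothetical)
v = 20e3 / (1.0 + np.exp(-(x - 0.015) / 2e-3))  # m/s, made-up acceleration profile

E_field = (M_XE / E_CHARGE) * v * np.gradient(v, x)  # V/m
print(f"peak field ~ {E_field.max():.3e} V/m")
```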
Grassberger, Lena [Verfasser]. "Towards cost-efficient preparation of nanoporous materials: formation kinetics, process optimization and material characterization / Lena Grassberger." München : Verlag Dr. Hut, 2016. http://d-nb.info/1100968482/34.
Full text
Srivastav, Abhinav. "Sur les aspects théoriques et pratiques des compromis dans les problèmes d'allocation des ressources." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM009/document.
Full textThe content of this thesis is divided into two parts. The first part deals with heuristic-based approaches for the approximation of Pareto fronts. We propose a new Double Archive Pareto local search algorithm for solving multi-objective combinatorial optimization problems. We embed our technique into a genetic framework where our algorithm restarts with the set of new solutions formed by recombination and mutation of solutions found in the previous run. This method improves upon the existing Pareto local search algorithm for bi-objective and tri-objective quadratic assignment problems. In the second part of the thesis, we focus on non-preemptive scheduling algorithms. Here, we study the online problem of minimizing maximum stretch on a single machine. We present both positive and negative theoretical results and then provide an optimally competitive semi-online algorithm. Furthermore, we study the problem of minimizing stretch on a single machine in a recently proposed rejection model. We show that an O(1)-approximation exists for minimizing average stretch, as well as an O(1)-approximation for minimizing average flow time on a single machine. Lastly, we study the weighted average flow time minimization problem in online settings. We present a mathematical programming based framework that unifies multiple forms of resource augmentation. Using the concept of duality, we show that there exists an O(1)-competitive algorithm for the weighted average flow time problem on unrelated machines. Furthermore, we propose that this idea can be extended to minimizing the l_k norms of weighted flow on unrelated machines.
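As a point of reference for the scheduling part, the stretch of a job is its flow time (completion minus release) divided by its processing time. The sketch below computes the maximum and average stretch of a single-machine schedule under a simple FIFO policy; it only illustrates the metric being optimized, not the competitive algorithms proposed in the thesis, and the job data are hypothetical.

```python
# Illustrative computation of stretch on a single machine (FIFO order).
# stretch(j) = (completion_j - release_j) / processing_j

def fifo_stretches(jobs):
    t = 0.0
    stretches = []
    for release, proc in sorted(jobs):          # process in release order
        t = max(t, release) + proc              # job completes at time t
        stretches.append((t - release) / proc)  # flow time / processing time
    return stretches

jobs = [(0.0, 5.0), (1.0, 1.0), (2.0, 0.5)]     # short jobs stuck behind a long one
s = fifo_stretches(jobs)
print("max stretch:", max(s), "avg stretch:", sum(s) / len(s))
```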
Cassisa, Cyril. "Optical flow estimation with subgrid model for study of turbulent flow." Phd thesis, Ecole Centrale de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00674772.
Full text
Futrzynski, Romain. "Effect of drag reducing plasma actuators using LES." Doctoral thesis, KTH, Farkost och flyg, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-199873.
Full textThe work carried out in this thesis investigates new ways of reducing the aerodynamic drag of ground vehicles. In particular, the effect of plasma actuators is studied numerically with the aim of delaying flow separation around a half-cylinder, a geometry chosen to represent a simplified A-pillar of a truck. To make the study possible, plasma actuators must be included in computations of turbulent flow fields. Therefore, a numerical model that can reproduce the effect of the plasma without increasing the computational cost is investigated first. The plasma actuator is modelled in this work by adding a source term to the Navier-Stokes equations. To determine the strength and spatial distribution of the source term, an optimization is performed that minimizes the difference between experimental and simulated profiles of the plasma-induced flow velocity. The plasma actuator model is then used in Large Eddy Simulations (LES) to compute the flow around a half-cylinder at Reynolds numbers Re=65*10^3 and Re=32*10^3. Two types of cases are studied. In the first case a single actuator is used. In the second case, a pair of consecutive actuators is placed on the half-cylinder and their position is varied. The results show that a drag reduction of up to 10% can be obtained. The ideal location for the actuator is found to be close to the point where the uncontrolled flow separates. Finally, Dynamic Mode Decomposition (DMD) is investigated as a tool for extracting coherent dynamic structures from a turbulent flow. DMD is first used to analyse a pulsating channel flow in which the pulsation has a known frequency. The results show that DMD gives results similar to phase averaging at the oscillation frequency; the presence of turbulent noise, however, prevents the identification of modes at higher harmonics. DMD is also used to analyse the flow around the half-cylinder. The spectrum in the wake is shown to be broadband, but modes within distinct frequency ranges are located in delimited regions of the wake.
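Since this abstract relies on dynamic mode decomposition, a compact sketch of the standard "exact DMD" algorithm (SVD of the snapshot matrix, reduced operator, eigendecomposition) is given below for orientation. It is the textbook formulation applied to synthetic data, not the parallel implementation developed in the thesis.

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: snapshots X = [x_0..x_{m-1}], Y = [x_1..x_m], truncation rank r.
    Returns discrete-time eigenvalues and DMD modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    # Reduced linear operator approximating Y ~ A X.
    A_tilde = U.conj().T @ Y @ V @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ V @ np.diag(1.0 / s) @ W      # exact DMD modes
    return eigvals, modes

# Synthetic test: a 5 Hz oscillation on two spatial structures, plus noise.
rng = np.random.default_rng(0)
x_grid = np.linspace(0.0, 1.0, 64)
t = np.arange(200) * 0.01
data = (np.outer(np.sin(np.pi * x_grid), np.cos(2 * np.pi * 5 * t))
        + np.outer(np.sin(2 * np.pi * x_grid), np.sin(2 * np.pi * 5 * t))
        + 0.01 * rng.standard_normal((64, 200)))

lam, modes = dmd(data[:, :-1], data[:, 1:], r=4)
print("|eigenvalues|:", np.abs(lam))                      # near 1 for the oscillatory pair
print("frequencies [Hz]:", np.abs(np.angle(lam)) / (2 * np.pi * 0.01))
```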
Nguyen, Dinh Chuong. "Contribution à l'optimisation du rendement d'électroluminescence des LED de puissance : décorrélation des différentes composantes du rendement." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAY027/document.
Full textThis PhD work, carried out at CEA-LETI, aims to dissociate the various mechanisms occurring inside a GaN-based LED using numerical simulation and experimental characterization. In chapters 1 and 2, the various mechanisms occurring inside a diode/LED are described theoretically. In chapter 3, through numerical simulation, the dominant mechanisms as well as their locations in a VTF ("vertical thin film") LED structure are determined for different voltage ranges. A parametric study follows to assess the interactions between the mechanisms. In chapter 4, the simulations are carried out with an additional field-dependent model for the charge carrier mobility. With this model enabled, the simulated electrical and optical characteristics of the LED approximate the real LED characteristics. Carrier-velocity characterization on p-type GaN, using a specific sample structure and the resistivity method, is also presented in chapter 4. It can be inferred from the results that under strong electric fields the carrier velocity might saturate, or the carrier mobility might decrease. These results strengthen the hypothesis used for the simulations in chapter 4. The simulations introduced in chapters 3 and 4 lead to the proposal of an equivalent circuit for a GaN-based LED, dissociating the different mechanisms and retaining the dominant ones. This equivalent circuit could help, for instance, to identify the different regimes in the electrical characteristics of a real LED in order to improve the LED's performance. Chapter 5 introduces pulsed electroluminescence, a frequency-domain characterization method, applied to commercial LEDs. The study of the rise and fall times of the electro-optical signals, and of the differential lifetime of the charge carriers in an LED, provides supplementary information concerning carrier injection into the LED.
Futrzynski, Romain. "Drag reduction using plasma actuators." Licentiate thesis, KTH, Farkost och flyg, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-161409.
Full textThis thesis deals with the application of active flow control to truck cabs, a new method for reducing aerodynamic drag. More specifically, the overall goal is to show how plasma actuators can be used to reduce the drag caused by flow separation around the A-pillars. In this thesis this is studied through numerical simulations; the work is part of a project in which experimental tests are also performed. The effect of the plasma actuators is modelled by a body force, which adds no appreciable computational cost and is suitable for implementation in most CFD solvers. The spatial distribution of the force is determined by coefficients which, in this work, were computed from experimental data. The model was shown to reproduce with good accuracy the wall jet produced by a single plasma actuator on a half-cylinder in quiescent air. The same geometry, a half-cylinder used here as a simplified representation of the A-pillar of a truck, was used in a preliminary LES study, which showed that the actuator alone, operated continuously, was not sufficient to achieve a significant drag reduction. A significant drag reduction was obtained simply by increasing the strength of the force, which shows that this type of flow control is relevant for drag reduction. In order to improve the efficiency of the actuator, dynamic mode decomposition was studied as a post-processing tool for extracting flow structures. These structures are identified by their spatial shape and frequency and can help in understanding how the actuators should be operated to reduce drag. A parallelized code for dynamic mode decomposition was developed to facilitate the post-processing of the large data sets obtained from the LES computations. Finally, this code and the LES computations were evaluated on a pulsating channel flow case. Dynamic mode decomposition was shown to extract the oscillating flow profiles with high accuracy at the forced frequency. Harmonics with amplitudes lower than the turbulence intensity could, however, not be recovered.
Quintero, Garcia Karla Rossa. "Optimisation d'alignements d'un réseau de pipelines basée sur les algèbres tropicales et les approches génétiques." Thesis, Lyon, INSA, 2015. http://www.theses.fr/2015ISAL0030/document.
Full textThis thesis addresses operations optimization in an oil seaport with the fundamental purpose of assisting supervision operators. The objective is to provide candidate solutions for pipeline alignment (path) selection and for the scheduling of oil transfer operations, as well as maintenance operations, considering that the system has limited and conflicting resources. Informed decision making should consider operations scheduling and alignment selection based on: device availability, the operative capacity of the network, financial aspects (penalties) due to late service, and a predefined maintenance schedule. The optimization problem is addressed by tropical approaches given their potential for yielding concise and intuitive representations when modeling synchronization phenomena. The proposals developed herein start with algebraic mono-objective optimization models. They subsequently become more complex, as new aspects are included, leading to the formulation of hybrid multi-objective optimization models based on artificial intelligence approaches as well as (max,+)-linear system theory. Firstly, a mono-objective optimization model is proposed in (max,+) algebra for penalty minimization. This model integrates phenomena of different natures into one single constraint. It is nonlinear, considers predefined alignments for transfer operations and is validated through an optimization solver (LINGO). Secondly, a linearization of this model is introduced for the prioritization of conflicting operations. Within this context, two criteria are addressed: potential penalties for clients on the one hand, and operations criticality on the other. The nonlinear (max,+) model minimizing penalties is extended in order to consider alignment search and delay minimization for maintenance operations. In order to address the multi-objective nature of the problem, an approach based on genetic algorithms and (max,+)-linear system theory is proposed. Finally, in a more formal framework, a new synchronous product for tropical automata exploiting parallelism phenomena at the earliest is defined in order to minimize the makespan. The proposed models and methods have been validated with industrial data gathered from the oil company PDVSA and from the supervision solutions provider Thales Group. The main contributions of this research are, firstly, the application of tropical approaches to this specific optimization problem, yielding concise and potentially linear models; secondly, the proposal of a hybrid approach based on genetic algorithms and (max,+)-linear systems, which exploits the advantages of the distributed search of artificial intelligence approaches and the conciseness of the models stemming from (max,+) algebra. The final contribution focuses on the definition of the alphabet for a new synchronous product of tropical automata.
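A minimal (max,+) illustration, unrelated to the actual PDVSA network data: in the (max,+) semiring, addition is the max operator and multiplication is ordinary addition, so the recursion x(k+1) = A (x) x(k) propagates earliest completion dates through a synchronization graph. The sketch below implements that product with hypothetical transfer durations.

```python
import numpy as np

NEG_INF = -np.inf  # the neutral element of max, playing the role of zero

def maxplus_matvec(A, x):
    """(max,+) product: y_i = max_j (A[i, j] + x[j])."""
    return np.max(A + x[np.newaxis, :], axis=1)

# Hypothetical 3-operation synchronization pattern: A[i, j] is the duration
# that operation j contributes before operation i can start (NEG_INF = no dependency).
A = np.array([[2.0,     NEG_INF, NEG_INF],
              [3.0,     1.0,     NEG_INF],
              [NEG_INF, 4.0,     2.0]])

x = np.array([0.0, 0.0, 0.0])   # earliest start dates at cycle k
for k in range(3):
    x = maxplus_matvec(A, x)    # earliest dates at cycle k+1
    print(f"cycle {k + 1}: {x}")
```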
Fillion, Anthony. "Méthodes variationnelles d'ensemble et optimisation variationnelle pour les géosciences." Thesis, Paris Est, 2019. http://www.theses.fr/2019PESC1012/document.
Full textData assimilation consists in estimating the state of a physical system. This estimation should optimally combine erroneous observations with imperfect numerical simulations of the system. In practice, the estimate is the initial state of a dynamical system, and it can be used to precisely predict the system evolution, especially in geophysics where data sets are substantial. A first strategy is based on maximum a posteriori estimation, so that the estimate is the solution of an optimization problem. This strategy, called 4DVar, often requires the computation of the adjoints of the model and observation operators, an operation that is time consuming in the development of a forecasting system. A second strategy analyses the system state sequentially. It is based on "ensemble" techniques: perturbations of the system state allow its statistics to be estimated sequentially through the Kalman equations. Both strategies were recently combined successfully in EnVar methods, which are currently used in operational systems. They benefit from both: an efficient treatment of the operators' nonlinearity through variational optimization techniques, together with the estimation of statistics and derivatives through ensembles. The IEnKS is an archetype of such EnVar methods. It uses a data assimilation window (DAW) which is shifted in time at each cycle to combine both strategies. Various DAW parameterizations lead to non-equivalent assimilations when the system dynamics are nonlinear. In particular, long DAWs reduce the frequency of the Gaussian approximation of the prior density. This results in a performance improvement, but only up to a point beyond which the variational minimization of the cost function fails because of its complex shape. A solution called "quasi-static variational assimilation" (QSVA) gradually adds observations to the cost function over multiple minimizations. The second chapter of the thesis generalizes the QSVA to EnVar methods; theoretical and numerical aspects of the QSVA applied to the IEnKS are addressed. However, the QSVA relies on the absence of model error: the information contained in an observation remote in time may be deteriorated by model error. The third chapter of the thesis is therefore dedicated to the introduction of model error in the IEnKS. The IEnKS-Q, a 4D ensemble variational method that sequentially solves the smoothing problem with model error, is built in this chapter. Unfortunately, with model error a state trajectory is no longer determined by its initial condition, and the number of parameters required to describe its statistics increases with the DAW length. When this number is paired with the number of model evaluations, the consequences for the computing time are disastrous. A proposed solution is to dissociate those quantities through decompositions of the anomaly matrices. In this case, the IEnKS-Q is as expensive as an IEnKS in terms of model evaluations.
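To fix ideas about the variational side, the sketch below minimizes a toy strong-constraint 4DVar cost function, J(x0) = 1/2 ((x0 - xb)/sigma_b)^2 + 1/2 sum_k ((M^k(x0) - y_k)/sigma_o)^2, for a scalar logistic-map model, and mimics the quasi-static idea by re-minimizing as observations are appended one at a time. It is a schematic illustration only, not the IEnKS/IEnKS-Q machinery developed in the thesis; the model, error levels and observations are all made up.

```python
import numpy as np
from scipy.optimize import minimize

# Toy strong-constraint 4DVar with a scalar nonlinear model (logistic map).
# Quasi-static flavour: observations are appended one by one and the analysis
# of the previous window is reused as the starting point of the next one.

def model(x, steps, a=3.7):
    for _ in range(steps):
        x = a * x * (1.0 - x)
    return x

def cost(x, xb, obs, sig_b=0.1, sig_o=0.05):
    x0 = np.atleast_1d(x)[0]
    jb = 0.5 * ((x0 - xb) / sig_b) ** 2                       # background term
    jo = sum(0.5 * ((model(x0, k + 1) - y) / sig_o) ** 2      # observation terms
             for k, y in enumerate(obs))
    return jb + jo

rng = np.random.default_rng(1)
truth0, xb = 0.62, 0.55
obs = [model(truth0, k + 1) + 0.05 * rng.standard_normal() for k in range(6)]

x0 = xb
for n in range(1, len(obs) + 1):          # quasi-static: grow the window gradually
    res = minimize(cost, x0, args=(xb, obs[:n]), method="Nelder-Mead")
    x0 = res.x[0]
print("background:", xb, "analysis:", x0, "truth:", truth0)
```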
Belhadj, Aram. "L'intégration monétaire et les pays émergents : application au Maghreb." Thesis, Orléans, 2014. http://www.theses.fr/2014ORLE0501.
Full textThis thesis focuses on the option of the launching of a monetary union across three Maghreb Countries, notably Algeria, Morocco and Tunisia. It tries to answer the following questions: What are the characteristics of monetary regimes in the Maghreb Countries and what are the underlying factors for their choices? Do the countries' structures and institutions constitute a favourable environment for the creation of a monetary union between these countries? What would be the macroeconomic consequences for these countries if they decided to create a monetary union? Are there any alternative monetary regimes which would enable them to move toward the final steps of monetary integration in a more appropriate way? In order to answer these questions, we opted for a presentation of four chapters. In a first chapter, we described the theoretical foundations of monetary integration through the study of Optimum Currency Area Theory (OCA), their theoretical developments, their drawbacks, their extensions and their empirical applications. We tried in a second chapter to present the structural and institutional mechanisms that insure the viability of the monetary integration process and to recourse to some historical experiences. We attempted in a third chapter to describe the monetary regimes currently in use in the Maghreb Countries and to explain the origin of their heterogeneity before analyzing the possibility of setting up an OCA in this context of heterogeneity. Finally, in a fourth chapter, we assessed the consequences of the creation of a monetary union across the three countries. We also suggest possible monetary regimes which, in fine, might allow these countries to successfully move toward monetary union. Our main results show that the creation of a monetary union (and its corollary, the implementation of a common monetary rule) is not beneficial, especially for Algeria where the variability of inflation and activity is more important than in Morocco and Tunisia. On the other hand, we came to the conclusion that the harmonization of inflation targets within a quasi-flexible exchange rate or the simultaneous setting up of a currency board in these countries could represent an appropriate monetary regime which would allow a safe move to monetary union.
Roda, Fabio. "Intégration d'exigences de haut niveau dans les problèmes d'optimisation : théorie et applications." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00817782.
Full text
Bufi, Elio Antonio. "Optimisation robuste de turbines pour les cycles organiques de Rankine (ORC)." Thesis, Paris, ENSAM, 2016. http://www.theses.fr/2016ENAM0070/document.
Full textIn recent years, the Organic Rankine Cycle (ORC) technology has received great interest from the scientific and technical community because of its capability to recover energy from low-grade heat sources. In some applications, as the Waste Heat Recovery (WHR), ORC plants need to be as compact as possible because of geometrical and weight constraints. Recently, these issues have been studied in order to promote the ORC technology for Internal Combustion Engine (ICE) applications. The idea to recover this residual energy is not new and the 1970s energy crisis encouraged the development of feasible ORC small-scale plants (1-10 kWe). Due to the molecular complexity of the working fluids, strong real gas effects have to be taken into account because of the high pressures and densities, if compared to an ideal gas. In these conditions the fluid is known as dense gas. Dense gases are defined as single phase vapors, characterized by complex molecules and moderate to large molecular weights. The role of dense gas dynamics in transonic internal flows has been widely studied for its importance in turbomachinery applications involved in low-grade energy exploitation, such as the ORC. Recently, the attention has been focused on axial turbines, which minimize the system size, if compared with radial solutions at the same pressure ratios and enthalpy drops. In this work, a novel design methodology for supersonic ORC axial impulse turbine stages is proposed. It consists in a fast, accurate two-dimensional design which is carried out for the mean-line stator and rotor blade rows of a turbine stage by means of a method of characteristic (MOC) extended to a generic equation of state. The viscous effects are taken into account by introducing a proper turbulent compressible boundary layer correction to the inviscid design obtained with MOC. Since proposed heat sources for ORC turbines typically include variable energy sources such as WHR from industrial processes or automotive applications, as a result, to improve the feasibility of this technology, the resistance to variable input conditions is taken into account. The numerical optimization under uncertainties is called Robust Optimization (RO) and it overcomes the limitation of deterministic optimization that neglects the effect of uncertainties in design variables and/or design parameters. To measure the robustness of a new design, statistics such as mean and variance (or standard deviation) of a response are calculated in the RO process. In this work, the MOC design of supersonic ORC nozzle blade vanes is used to create a baseline injector shape. Subsequently, this is optimized through a RO loop. The stochastic optimizer is based on a Bayesian Kriging model of the system response to the uncertain parameters, used to approximate statistics of the uncertain system output, coupled to a multi-objective non-dominated sorting genetic algorithm (NSGA). An optimal shape that maximizes the mean and minimizes the variance of the expander isentropic efficiency is searched. The isentropic efficiency is evaluated by means of RANS (Reynolds Average Navier-Stokes) simulations of the injector. The fluid thermodynamic behavior is modelled by means of the well-known Peng-Robinson-Stryjek-Vera equation of state. The blade shape is parametrized by means of a Free Form Deformation approach. 
In order to speed-up the RO process, an additional Kriging model is built to approximate the multi-objective fitness function and an adaptive infill strategy based on the Multi Objective Expected Improvement for the individuals is proposed in order to improve the surrogate accuracy at each generation of the NSGA. The robustly optimized ORC expander shape is compared to the results provided by the MOC baseline shape and the injector designed by means of a standard deterministic optimizer
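For reference, the expected-improvement criterion that typically drives this kind of adaptive infill has a closed form under a Gaussian surrogate: for minimization, EI(x) = (f_min - mu) Phi(z) + sigma phi(z) with z = (f_min - mu)/sigma. The sketch below runs a few infill iterations with a Gaussian-process surrogate on an invented 1D objective; it illustrates the Kriging-plus-infill idea only and does not reproduce the multi-objective NSGA loop of the thesis.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Kriging (GP) surrogate + expected improvement on a made-up 1D objective.
def objective(x):
    return np.sin(3.0 * x) + 0.2 * x ** 2          # hypothetical expensive function

def expected_improvement(mu, sigma, f_min):
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(6, 1))            # initial design points
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
for it in range(5):                                 # adaptive infill loop
    gp.fit(X, y)
    Xc = np.linspace(-2.0, 2.0, 401).reshape(-1, 1)
    mu, sigma = gp.predict(Xc, return_std=True)
    x_new = Xc[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, [x_new]])
    y = np.append(y, objective(x_new))
print("best point:", X[np.argmin(y)].item(), "best value:", y.min())
```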
Barreda, Vayá María. "Performance and Energy Optimization of the Iterative Solution of Sparse Linear Systems on Multicore Processors." Doctoral thesis, Universitat Jaume I, 2017. http://hdl.handle.net/10803/401547.
Full textIn this dissertation we target the solution of large sparse systems of linear equations using preconditioned iterative methods based on Krylov subspaces. Specifically, we focus on ILUPACK, a library that offers multi-level ILU preconditioners for the effective solution of sparse linear systems. The increase in the number of equations and the introduction of new HPC architectures motivate us to develop a parallel version of ILUPACK which optimizes both execution time and energy consumption on current multicore architectures and on clusters of nodes built from this type of technology. Thus, the main goal of this thesis is the design, implementation and evaluation of parallel and energy-efficient iterative sparse linear system solvers for multicore processors as well as recent manycore accelerators such as the Intel Xeon Phi. To fulfill this general objective, we optimize ILUPACK by exploiting task parallelism via OmpSs and MPI, and we also develop an automatic framework to detect energy inefficiencies.
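As background on the class of solvers ILUPACK accelerates, the snippet below solves a sparse system with GMRES preconditioned by an incomplete LU factorization using SciPy. It illustrates the preconditioned-Krylov idea only; it has no relation to ILUPACK's multilevel preconditioner or to the task-parallel runtime developed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Preconditioned Krylov illustration: ILU-preconditioned GMRES on a 2D Poisson matrix.
n = 50
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()          # 2D Laplacian, 2500 x 2500
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)   # incomplete LU factorization
M = spla.LinearOperator(A.shape, ilu.solve)          # preconditioner M ~ A^-1

x, info = spla.gmres(A, b, M=M, maxiter=200)
print("converged:" if info == 0 else "not converged:", info,
      "residual:", np.linalg.norm(b - A @ x))
```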
Pirayre, Aurélie. "Reconstruction et classification par optimisation dans des graphes avec à priori pour les réseaux de gènes et les images." Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1170/document.
Full textThe discovery of novel gene regulatory processes improves the understanding of cell phenotypic responses to external stimuli for many biological applications, such as medicine, environment or biotechnologies. To this purpose, transcriptomic data are generated and analyzed from microarrays or, more recently, RNAseq experiments. For each gene of a studied organism placed in different living conditions, they consist in a sequence of genetic expression levels. From these data, gene regulation mechanisms can be recovered by revealing topological links encoded in geometric graphs. In regulatory graphs, nodes correspond to genes. A link between two nodes is identified if a regulation relationship exists between the two corresponding genes. Such networks are called Gene Regulatory Networks (GRNs). Their construction as well as their analysis remain challenging despite the large number of available inference methods. In this thesis, we propose to address this network inference problem with recently developed techniques pertaining to graph optimization. Given all the pairwise gene regulation information available, we propose to determine the presence of edges in the final GRN by adopting an energy optimization formulation integrating additional constraints. Either biological (information about gene interactions) or structural (information about node connectivity) a priori have been considered to reduce the space of possible solutions. Different priors lead to different properties of the global cost function, for which various optimization strategies can be applied. The post-processing network refinements we propose led to a software suite named BRANE, for "Biologically-Related A priori for Network Enhancement". For each of the proposed methods BRANE Cut, BRANE Relax and BRANE Clust, our contributions are threefold: the a priori-based formulation, the design of the optimization strategy, and validation (numerical and/or biological) on benchmark datasets. In a ramification of this thesis, we slide from graph inference to more generic data processing such as inverse problems. We notably investigate HOGMep, a Bayesian approach using a Variational Bayesian Approximation framework for its resolution. This approach allows reconstruction and clustering/segmentation tasks to be performed jointly on multi-component data (for instance signals or images). Its performance in a color image deconvolution context demonstrates both the quality of the reconstruction and of the segmentation. A preliminary study in a medical data classification context linking genotype and phenotype yields promising results for forthcoming bioinformatics adaptations.
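To make the idea of a priori-constrained edge selection concrete, the toy sketch below scores candidate regulatory edges by the absolute correlation between expression profiles, boosts edges incident to known transcription factors (a crude biological prior in the spirit of, but far simpler than, the BRANE methods), and keeps those above a threshold. The expression matrix, gene names and weights are all invented.

```python
import numpy as np

# Toy gene-network inference: correlation scores + a transcription-factor prior.
# This is only a schematic of prior-guided edge selection, not BRANE Cut/Relax/Clust.

rng = np.random.default_rng(0)
genes = ["tf1", "g2", "g3", "g4"]
tfs = {"tf1"}                                # hypothetical known transcription factors
expr = rng.standard_normal((4, 20))          # 4 genes x 20 conditions (synthetic)
expr[1] = 0.9 * expr[0] + 0.1 * rng.standard_normal(20)   # g2 follows tf1

corr = np.abs(np.corrcoef(expr))             # pairwise co-expression scores

def select_edges(corr, genes, tfs, bonus=0.1, threshold=0.6):
    edges = []
    for i in range(len(genes)):
        for j in range(i + 1, len(genes)):
            # Edges touching a known TF get a small prior bonus.
            score = corr[i, j] + (bonus if genes[i] in tfs or genes[j] in tfs else 0.0)
            if score >= threshold:           # keep the edge if score + prior is high enough
                edges.append((genes[i], genes[j], round(score, 3)))
    return edges

print(select_edges(corr, genes, tfs))
```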