Dissertations / Theses on the topic 'Joint optimization'

Consult the top 50 dissertations / theses for your research on the topic 'Joint optimization.'

1

Frick, Eric. "Joint center estimation by single-frame optimization." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6575.

Abstract:
Joint center location is the driving parameter for determining the kinematics, and later kinetics, associated with human motion capture. Therefore, the accuracy with which said location is determined is of great import to all subsequent calculation and analysis. The most significant barrier to accurate determination of this parameter is soft tissue artifact, which contaminates the measurements of on-body measurement devices by allowing them to move relative to the underlying rigid bone. This leads to inaccuracy in both bone pose estimation and joint center location. The complexity of soft tissue artifact (it is nonlinear, multimodal, subject-specific, and trial-specific) makes it difficult to model, and therefore difficult to mitigate. This thesis proposes a novel method, termed Single Frame Optimization, for determining joint center location (through mitigation of soft tissue artifact) via a linearization approach, in which the optimal vector relating a joint center to a corresponding inertial sensor is calculated at each time frame. This results in a time-varying joint center location vector that captures the relative motion due to soft tissue artifact, from which that relative motion can be isolated and removed. The method's, and therefore the optimization's, driving assumption is that the derivative terms in the kinematic equation are negligible relative to the rigid terms. More plainly, any relative motion is assumed negligible in comparison with the rigid body motion in the chosen data frame. The validity of this assumption is investigated in a series of numerical simulations and experimental investigations. Each item in said series is presented as a chapter in this thesis, but retains the format of a standalone article. This is intended to foster critical analysis of the method at each stage in its development, rather than solely in its practical (and more developed) form.
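The per-frame linear solve at the heart of the method can be sketched compactly. In the rigid-body kinematic relation, the joint-center acceleration equals the sensor acceleration plus terms built from the sensor's angular velocity and angular acceleration; with the derivative (relative-motion) terms assumed negligible, each frame yields a small linear system in the sensor-to-joint vector. The sketch below is a schematic reading of that idea, not the thesis's exact formulation: supplying the joint-center acceleration directly is an illustrative assumption (in practice it comes from pairing measurements across the joint).

```python
import numpy as np

def skew(v):
    """Matrix form of the cross product: skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def sensor_to_joint_vector(a_sensor, omega, alpha, a_joint):
    """Single-frame estimate of the vector r from an inertial sensor to the
    joint center. Neglecting the derivative (relative-motion) terms,
    rigid-body kinematics give
        a_joint = a_sensor + (skew(alpha) + skew(omega) @ skew(omega)) @ r,
    a 3x3 linear system solved independently at every frame; stacking the
    per-frame solutions yields the time-varying r(t) in which the soft
    tissue artifact shows up and can be isolated."""
    K = skew(alpha) + skew(omega) @ skew(omega)
    r, *_ = np.linalg.lstsq(K, a_joint - a_sensor, rcond=None)
    return r

# Synthetic single frame of consistent rigid-body data (hypothetical values).
rng = np.random.default_rng(0)
omega, alpha = rng.normal(size=3), rng.normal(size=3)
r_true = np.array([0.10, 0.25, -0.05])
a_joint = rng.normal(size=3)
a_sensor = a_joint - (skew(alpha) + skew(omega) @ skew(omega)) @ r_true
print(np.round(sensor_to_joint_vector(a_sensor, omega, alpha, a_joint), 3))
```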
2

Khojastehnia, Mahdi. "Massive MIMO Channels Under the Joint Power Constraints." Thesis, Université d'Ottawa / University of Ottawa, 2019. http://hdl.handle.net/10393/39992.

Abstract:
Massive MIMO has been recognized as a key technology for 5G systems due to its high spectral efficiency. The capacity and optimal signaling for a MIMO channel under the total power constraint (TPC) are well-known and can be obtained by the water-filling (WF) procedure. However, much less is known about optimal signaling under the per-antenna power constraint (PAC) or under the joint power constraints (TPC+PAC). In this thesis, we consider a massive MIMO Gaussian channel under favorable propagation (FP) and obtain the optimal transmit covariance under the joint constraints. The effect of the joint constraints on the optimal power allocation (OPA) is shown: while it has some similarities to the standard WF, it also has a number of notable differences. The numbers of active streams and active PACs are obtained, and a closed-form expression for the optimal dual variable is given. A capped water-filling interpretation of the OPA is given, similar to the standard WF but where the container has both floor and ceiling profiles. An iterative water-filling algorithm is proposed to find the OPA under the joint constraints, and its convergence to the OPA is proven. The robustness of optimal signaling under FP is demonstrated: it remains nearly optimal for a nearly favorable propagation channel. An upper bound on the sub-optimality gap is given which characterizes nearly (or ε-) favorable propagation; this upper bound quantifies how close the channel is to FP. A bisection algorithm is developed to numerically compute the optimal dual variable. Newton-barrier and Monte-Carlo algorithms are developed to find the optimal signaling under the joint constraints for an arbitrary channel, not necessarily a favorable propagation one. When the diagonal entries of the channel Gram matrix are fixed, it is shown that a favorable propagation channel is not necessarily the best among all possible propagation scenarios capacity-wise. We further show that the main theorems in [1] on favorable propagation are not correct in general; to make their conclusions valid, some modifications as well as additional assumptions are needed, which are given here.
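In its simplest form, the capped water-filling solution described above clips the classic water level 1/μ by the per-antenna ceilings, with the dual variable μ found by bisection exactly as the abstract proposes. A minimal sketch under simplifying assumptions (fixed eigen-channel gains g_i, PACs treated as per-eigenmode caps; the thesis's mapping from the joint TPC+PAC constraints is more involved):

```python
import numpy as np

def capped_waterfilling(g, p_total, p_cap, tol=1e-10):
    """Power allocation maximizing sum_i log(1 + g_i p_i) subject to
    sum(p_i) <= p_total (TPC) and 0 <= p_i <= p_cap_i (PACs).

    KKT conditions give p_i(mu) = clip(1/mu - 1/g_i, 0, p_cap_i): the usual
    water level 1/mu over the floor 1/g_i, additionally 'capped' by the
    per-antenna ceiling. mu is found by bisection on the monotone function
    sum_i p_i(mu) - p_total."""
    g, p_cap = np.asarray(g, float), np.asarray(p_cap, float)

    def alloc(mu):
        return np.clip(1.0 / mu - 1.0 / g, 0.0, p_cap)

    # If even the fully capped allocation fits the budget, the PACs bind alone.
    if p_cap.sum() <= p_total:
        return p_cap
    lo, hi = 1e-12, g.max()          # bracket for the dual variable mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if alloc(mu).sum() > p_total:
            lo = mu                  # water level too high -> raise mu
        else:
            hi = mu
    return alloc(0.5 * (lo + hi))

# Example: 4 eigen-channels, unit total power, per-antenna caps of 0.4.
p = capped_waterfilling(g=[2.0, 1.5, 0.8, 0.3], p_total=1.0, p_cap=[0.4] * 4)
print(p, p.sum())
```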
3

Widdowson, Brian L. "A joint service optimization of the phased threat distribution." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1998. http://handle.dtic.mil/100.2/ADA344694.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, March 1998.
Thesis advisor(s): Richard E. Rosenthal. "March 1998."-Cover. Includes bibliographical references (p. 69). Also available online.
4

Ramachandran, Iyappan. "Joint PHY-MAC optimization for energy-constrained wireless networks /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5968.

5

Sherrod, Vallan Gray. "Design Optimization for a Compliant, Continuum-Joint, Quadruped Robot." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7766.

Abstract:
Legged robots have the potential to cover terrain not accessible to wheel-based robots and vehicles. This makes them better suited to perform tasks, such as search and rescue, in real-world unstructured environments. Pneumatically-actuated, compliant robots are also more suited than their rigid counterparts to work in real-world unstructured environments with humans, where unintentional contact may occur. This thesis seeks to combine the benefits of these two types of robots by implementing design methods to aid in the design choice of a 16 degree-of-freedom (DoF) compliant, continuum-joint quadruped. This work focuses on the design optimization, especially the definition of design metrics, for this type of robot. The work also includes the construction and closed-loop control of a four-DoF continuum-joint leg used to validate design methods. We define design metrics for legged robots that evaluate their ability to traverse unstructured terrain, carry payloads, find stable footholds, and move in desired directions. These design metrics require a sampling of a legged robot's complete configuration space. For high-DoF robots, such as the 16-DoF robot evaluated in this work, the evaluation of these metrics becomes intractable with contemporary computing power. Therefore, we present methods that can be used to simplify and approximate these metrics. These approximations have been validated on a simulated four-DoF legged robot, where they can tractably be compared against their full counterparts. Using the approximations of the defined metrics, we have performed a multi-objective design optimization to investigate the ten-dimensional design space of a 16-DoF compliant, continuum-joint quadruped. The design variables used include leg link geometry, robot base dimensions, and the leg mount angles. We have used an evolutionary algorithm as our optimization method, which converged on a Pareto front of optimal designs. From this set of designs, we are able to identify the trade-offs and design differences between robots that perform well in each of the different design metrics. Because of our approximation of the metrics, we were able to perform this optimization on a supercomputer with 28 cores in less than 40 hours. We have constructed a 1.3 m long continuum-joint leg from one of the resulting quadruped designs of the optimization. We have implemented configuration estimation and control and force control on this leg to evaluate the leg payload capability. Using these controllers, we have conducted an experiment comparing the leg's measured downward force with its theoretical payload capabilities. We then demonstrated how the torque model used in the calculation of payload capabilities can accurately calculate trends in force output from the leg.
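The multi-objective search itself can be sketched in a few lines: evaluate a population of design vectors on competing metrics, keep the nondominated set, and evolve. The objectives below are toy surrogates standing in for the thesis's configuration-space metrics, which are far more expensive to evaluate:

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Two competing toy metrics, both minimized (surrogates for, e.g., a
    payload metric and a size/weight metric; the real metrics come from
    sampling the robot's configuration space)."""
    return np.array([np.sum((x - 0.25) ** 2), np.sum((x - 0.75) ** 2)])

def nondominated(F):
    """Boolean mask of the Pareto-optimal rows of the objective matrix F."""
    keep = np.ones(len(F), bool)
    for i in range(len(F)):
        dominators = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        keep[i] = not dominators.any()
    return keep

# (mu + lambda) evolutionary loop over a 10-dimensional design vector in
# [0, 1], mirroring the ten-dimensional design space mentioned above.
pop = rng.random((60, 10))
for gen in range(100):
    children = np.clip(pop + rng.normal(0.0, 0.05, pop.shape), 0.0, 1.0)
    union = np.vstack([pop, children])
    F = np.array([objectives(x) for x in union])
    front = union[nondominated(F)]
    refill = rng.random((max(0, 60 - len(front)), 10))  # random immigrants
    pop = np.vstack([front[:60], refill])

print(len(front), "designs on the final Pareto front")
```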
6

Santhanam, Arvind V. "Joint optimization of radio resources in wireless multihop networks /." Diss., Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3158467.

7

Furst, Séverine. "Multi-objective optimization for joint inversion of geodetic data." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS017/document.

Abstract:
The Earth's surface is affected by numerous local processes like volcanic events, landslides or earthquakes. Along with these natural processes, anthropogenic activities including extraction and storage of deep resources (e.g. minerals, hydrocarbons) shape the Earth at different space and time scales. These mechanisms produce ground deformation that can be detected by various geodetic instruments such as GNSS, InSAR and tiltmeters. The purpose of the thesis is to develop a numerical tool to provide the joint inversion of multiple geodetic data associated to plate deformation or volume strain change at depth. Four kinds of applications are targeted: interseismic plate deformation, volcano deformation, deep mining, and oil & gas extraction. Different inverse model complexities were considered: the I-level considers a single type of geodetic data with a time-independent process. An application is made with inverting GPS data across southern California to determine the lateral variations of lithospheric rigidity (Furst et al., 2017). The II-level also accounts for a single type of geodetic data but with a time-dependent process. The joint determination of strain change history and the drift parameters of a tiltmeter network is studied through a synthetic example (Furst et al., submitted). The III-level considers different types of geodetic data and a time-dependent process. A fictitious network made of GNSS, InSAR, tiltmeter and levelling surveys is defined to compute the time-dependent volume change of a deep source of strain. We develop a methodology to implement these different levels of complexity in a single software package. Because the inverse problem is possibly ill-posed, the functional to minimize may display several minima; therefore, a global optimization algorithm is used (Mohammadi and Saïac, 2003). The forward part of the problem is treated by using a collection of numerical and analytical elastic models allowing us to model the deformation processes at depth. Thanks to these numerical developments, new advances in inverse geodetic problems should be possible, such as the joint inversion of various types of geodetic data acquired for volcano monitoring. In this perspective, the possibility to determine tiltmeter drift parameters by inverse problem should allow for a precise determination of deep strain sources. Also, the developed methodology can be used for accurate monitoring of oil & gas reservoir deformation.
8

Giannakas, Theodoros. "Joint modeling and optimization of caching and recommendation systems." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS317.

Abstract:
Caching content closer to the users has been proposed as a win-win scenario in order to offer better rates to the users while saving costs for the operators. Nonetheless, caching can only be successful if the cached files manage to attract a lot of requests. To this end, we take advantage of the fact that the internet is becoming more entertainment oriented and propose to bind recommendation systems and caching in order to increase the hit rate. We model a user who requests multiple contents from a network which is equipped with a cache. We propose a modeling framework for such a user which is based on Markov chains and departs from the IRM (independent reference model). We delve into different versions of the problem and derive optimal and suboptimal solutions according to the case we examine. Finally, we examine the variation of the recommendation-aware caching problem and propose practical algorithms that come with performance guarantees. For the former, the results indicate that there are high gains for the operators and that myopic schemes without foresight are heavily suboptimal; for the latter, we conclude that caching decisions can significantly improve when taking into consideration the underlying recommendations.
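The Markov-chain user model makes the role of recommendations concrete: the long-run hit rate of a static cache is simply the stationary probability mass of the cached contents. A minimal sketch with an invented five-content catalogue (caching the most-visited contents is the myopic baseline; the thesis's point is that shaping the transition probabilities via recommendations can improve on it):

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of a row-stochastic matrix P (pi P = pi)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Toy catalogue of 5 contents; P[i, j] = prob. the user requests j after i,
# i.e. a recommendation-driven request process that departs from the IRM.
P = np.array([[0.1, 0.6, 0.1, 0.1, 0.1],
              [0.5, 0.1, 0.2, 0.1, 0.1],
              [0.2, 0.2, 0.2, 0.2, 0.2],
              [0.1, 0.1, 0.5, 0.2, 0.1],
              [0.3, 0.1, 0.1, 0.4, 0.1]])

pi = stationary(P)
cache_size = 2
cache = np.argsort(pi)[::-1][:cache_size]   # cache the most-visited contents
print(f"cache={sorted(cache.tolist())}, long-run hit rate={pi[cache].sum():.3f}")
```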
9

Diehl, Douglas D. "How to optimize joint theater ballistic missile defense." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2004. http://library.nps.navy.mil/uhtbin/hyperion/04Mar%5FDiehl.pdf.

10

Foster, James C. "Joint optimization of the technical and social aspects of workplace design." Thesis, Massachusetts Institute of Technology, 1985. http://hdl.handle.net/1721.1/31002.

Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Sloan School of Management, 1985.
MICROFICHE COPY AVAILABLE IN ARCHIVES AND DEWEY.
Bibliography: leaves 91-97.
by James C. Foster.
M.S.
11

Fallgren, Mikael. "Optimization of Joint Cell, Channel and Power Allocation in Wireless Communication Networks." Doctoral thesis, KTH, Optimeringslära och systemteori, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-40274.

Abstract:
In this thesis we formulate joint cell, channel and power allocation problems within wireless communication networks. The objectives are to maximize the user with minimum data throughput (Shannon capacity) or to maximize the total system throughput, referred to as the max-min and max-sum problem respectively. The complexity is studied together with proposed optimization- and heuristic-based approaches. In the first paper an overall joint cell, channel and power allocation max-min problem is formulated. We show that the decision problem is NP-hard and that the optimization problem is not approximable unless P is equal to NP, for instances with a sufficiently large number of channels. Further, it follows that for a feasible binary cell and channel allocation, the remaining continuous power allocation optimization problem is still not approximable unless P is equal to NP. In addition, it is shown that first-order optimality conditions give the global optimum of the single channel power allocation optimization problem, although the problem is in general not convex. In the following two papers heuristics for solving the overall problem are proposed. In the second paper we consider the single channel problem with convex combinations of the max-min and the max-sum objective functions. This variable utility provides the ability of tuning the amount of fairness and total throughput. The third paper investigates the multiple channel setting. On a system with three cells, eight mobile users and three channels, we perform an exhaustive search over feasible cell and channel allocations. The exhaustive search is then compared to the less computationally expensive heuristic approaches, presenting potential earnings to strive for. A conclusion is that several of the proposed heuristics perform very well. The final paper incorporates fixed relay stations into the overall joint cell, channel and power allocation max-min problem. The complexity is inherited from the formulation without relay stations. Further, we propose a heuristic channel allocation approach that shows good performance, compared to an optimization based approach, in numerical simulations on the relay setting.
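The variable utility of the second paper, a convex combination of the max-min and max-sum objectives, is easy to illustrate numerically. The sketch below uses invented channel gains and a generic solver; the max-min term is nonsmooth, so this only illustrates the fairness/throughput trade-off being tuned, not the thesis's dedicated heuristics:

```python
import numpy as np
from scipy.optimize import minimize

g = np.array([3.0, 1.0, 0.5])        # channel gains of three users (invented)
P_TOT = 1.0

def utility(p, alpha):
    """Convex combination of the max-min and max-sum throughput objectives:
    alpha = 1 recovers pure max-min fairness, alpha = 0 pure sum rate."""
    rates = np.log2(1.0 + g * p)
    return alpha * rates.min() + (1.0 - alpha) * rates.sum()

def solve(alpha):
    cons = ({'type': 'eq', 'fun': lambda p: p.sum() - P_TOT},)
    res = minimize(lambda p: -utility(p, alpha), x0=np.full(3, P_TOT / 3),
                   bounds=[(0.0, P_TOT)] * 3, constraints=cons, method='SLSQP')
    return res.x

for alpha in (0.0, 0.5, 1.0):
    p = solve(alpha)
    print(alpha, np.round(p, 3), np.round(np.log2(1 + g * p), 3))
```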
Financial support by the Swedish Foundation for Strategic Research (SSF).
12

Rao, Tingting. "LP-based subgradient algorithm for joint pricing and inventory control problems." Thesis, Massachusetts Institute of Technology, 2008. http://hdl.handle.net/1721.1/45282.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008.
Includes bibliographical references (p. 93-94).
It is important for companies to manage their revenues and reduce their costs efficiently. These goals can be achieved through effective pricing and inventory control strategies. This thesis studies a joint multi-period pricing and inventory control problem for a make-to-stock manufacturing system. Multiple products are produced under shared production capacity over a finite time horizon. The demand for each product is a function of the prices, and no back orders are allowed. Inventory and production costs are linear functions of the levels of inventory and production, respectively. In this thesis, we introduce an iterative gradient-based algorithm. A key idea is that given a demand realization, the cost minimization part of the problem becomes a linear transportation problem. Given this idea, if we knew the optimal demand, we could solve the production problem efficiently. At each iteration of the algorithm, given a demand vector, we solve a linear transportation problem and use its dual variables to solve a quadratic optimization problem that optimizes the revenue part and generates a new pricing policy. We illustrate computationally that this algorithm obtains the optimal production and pricing policy over the finite time horizon efficiently. The computational experiments in this thesis use a wide range of simulated data. The results show that the algorithm studied in this thesis indeed computes the optimal solution for the joint pricing and inventory control problem and is efficient compared to solving a reformulation of the problem directly using commercial software. The algorithm proposed in this thesis solves large-scale problems and can handle a wide range of nonlinear demand functions.
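The iteration described above alternates a transportation-style LP (cost side) with a closed-form pricing update (revenue side) driven by the LP duals. A single-product toy version with linear demand d_t = a - b p_t is sketched below; the parameters are invented, and it assumes SciPy 1.7+, where the HiGHS solver exposes equality-constraint duals as res.eqlin.marginals:

```python
import numpy as np
from scipy.optimize import linprog

# One product, T periods; linear demand d_t = a - b * p_t, no back orders.
T, a, b = 4, 100.0, 2.0
c = np.array([1.0, 1.5, 1.2, 2.0])     # unit production costs
h = np.full(T, 0.3)                    # unit holding costs
cap = 80.0                             # production capacity per period

def min_cost_production(d):
    """Transportation-style LP: choose production x_t and inventory I_t
    (variables z = [x; I]) to meet demand d at minimum cost.
    Balance constraints: x_t + I_{t-1} - I_t = d_t, with I_0 = 0.
    Returns the duals lambda_t = d(cost)/d(d_t)."""
    A_eq = np.zeros((T, 2 * T))
    for t in range(T):
        A_eq[t, t] = 1.0               # x_t
        A_eq[t, T + t] = -1.0          # -I_t (end-of-period inventory)
        if t > 0:
            A_eq[t, T + t - 1] = 1.0   # +I_{t-1}
    res = linprog(np.concatenate([c, h]), A_eq=A_eq, b_eq=d,
                  bounds=[(0, cap)] * T + [(0, None)] * T, method='highs')
    return res.eqlin.marginals         # marginal cost of one more unit in t

p = np.full(T, a / (2 * b))            # start at the capacity-free optimum
for it in range(20):
    d = np.maximum(a - b * p, 0.0)
    lam = min_cost_production(d)
    # Revenue step: maximize (p_t - lambda_t) * (a - b p_t) per period,
    # whose first-order condition gives p_t = (a + b * lambda_t) / (2b).
    p = (a + b * lam) / (2 * b)

print("prices:", np.round(p, 2), "demands:", np.round(a - b * p, 1))
```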
by Tingting Rao.
S.M.
13

Yamani, Jana H. (Jana Hashim). "Approximation of the transient joint queue-length distribution in tandem networks." Thesis, Massachusetts Institute of Technology, 2013. http://hdl.handle.net/1721.1/85470.

Abstract:
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 95-97).
This work considers an urban traffic network and represents it as a Markovian queueing network. This work proposes an analytical approximation of the time-dependent joint queue-length distribution of the network. The challenge is to provide an accurate analytical description of between-queue and within-queue (i.e., link) dynamics while deriving a tractable approach. In order to achieve this, we use an aggregate description of queue states (i.e., state space reduction), referred to as aggregate (queue-length) distributions. This reduces the dimensionality of the joint distribution. The proposed method is formulated over three stages: we approximate the time-dependent aggregate distribution of 1) a single queue, 2) a tandem 3-queue network, and 3) a tandem network of arbitrary size. The third stage decomposes the network into overlapping 3-queue sub-networks. The methods are validated against simulation results. We then use the proposed tandem network model to solve an urban traffic signal control problem, and analyze the added value of accounting for time-dependent between-queue dependency in traffic management problems for congested urban networks.
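Stage 1, the time-dependent distribution of a single queue, is the only part that fits in a few lines: for a finite-capacity Markovian queue the transient distribution is a matrix exponential of the birth-death generator. A minimal sketch (the tandem decomposition into overlapping 3-queue sub-networks, the work's actual contribution, is not attempted here):

```python
import numpy as np
from scipy.linalg import expm

def transient_queue_dist(lam, mu, K, t, p0=None):
    """Time-dependent queue-length distribution of a single M/M/1/K queue:
    p(t) = p(0) expm(Q t) for the birth-death generator Q on {0, ..., K}."""
    Q = np.zeros((K + 1, K + 1))
    for n in range(K + 1):
        if n < K:
            Q[n, n + 1] = lam          # arrival
        if n > 0:
            Q[n, n - 1] = mu           # service completion
        Q[n, n] = -Q[n].sum()          # diagonal makes rows sum to zero
    p0 = np.eye(K + 1)[0] if p0 is None else np.asarray(p0, float)
    return p0 @ expm(Q * t)

# Queue starts empty; distribution after 2 time units, rho = 0.8, 10 spaces.
print(np.round(transient_queue_dist(lam=0.8, mu=1.0, K=10, t=2.0), 4))
```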
by Jana H. Yamani.
S.M.
14

Chen, Yuan. "Joint Design of Redundancy and Maintenance for Parallel-Series Continuous-State Systems." Ohio University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1628594957896883.

15

Keskin, Burcu Baris. "Joint optimization of location and inventory decisions for improving supply chain cost performance." [College Station, Tex.]: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-2528.

16

Liu, Fenghua. "Joint optimization of source and channel coding based on a nonlinear estimate receiver." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq24328.pdf.

17

Nesbitt, Jesse. "Robust Optimization in Operational Risk: A Study of the Joint Platform Allocation Tool." Monterey, California: Naval Postgraduate School, 2014. http://hdl.handle.net/10945/43063.

Abstract:
The Joint Platform Allocation Tool (JPAT) is a tool currently used to inform Army decision makers on resource management, procurement, and operational employment of Army aerial intelligence, surveillance, and reconnaissance (ISR) assets. The tool is modeled and implemented using point estimates for input data on future resource, equipment capability, and employment demand. This research expands the capability of the JPAT to account for uncertainty and changes in those parameters that bear on the overall operational risk of the Army's ISR mission: uncertain and changing future budgets, and uncertainty and unpredictability of future operational demands for ISR assets. Techniques of robust optimization are explored and applied to JPAT, and results and methodology are shown to be applicable to other operational areas.
18

Lee, Jinwoo. "Joint Optimization of Pavement Management and Reconstruction Policies for Segment and System Problems." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3733346.

Abstract:

This dissertation presents a methodology for the joint optimization of a variety of pavement construction and management activities for segment and system problems under multiple budget constraints. The objective of pavement management is to minimize the total discounted lifetime costs for the agency and the highway users by finding optimal policies. The scope of the dissertation is focused on continuous-time and continuous-state formulations of pavement condition. We use a history-dependent pavement deterioration model to account for the influence of history on the deterioration rate.

Three topics, representing different aspects of the problem, are covered in the dissertation. In the first part, the subject is the joint optimization of pavement design, maintenance and rehabilitation (M&R) strategies for the segment-level problem. A combination of analytical and numerical tools is proposed to solve the problem. In the second part of the dissertation, we present a methodology for the joint optimization of pavement maintenance, rehabilitation and reconstruction (MR&R) activities for the segment-level problem. The majority of existing Pavement Management Systems (PMS) do not optimize reconstruction jointly with maintenance and rehabilitation policies. We show that not accounting for reconstruction in maintenance and rehabilitation planning results in suboptimal policies for pavements undergoing cumulative damage in the underlying layers (base, sub-base or subgrade). We propose dynamic programming solutions using an augmented state which includes current surface condition and age. In the third part, we propose a methodology for the joint optimization of rehabilitation and reconstruction activities for heterogeneous pavement systems under multiple budget constraints. Within a bottom-up solution approach, a Genetic Algorithm (GA) is adopted. The complexity of the algorithm is polynomial in the size of the system and the policy-related parameters.
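Why the augmented state matters can be shown with a toy discounted dynamic program: with age in the state, rehabilitation (which restores the surface but not the underlying layers) and reconstruction (which resets both) are genuinely different actions. All numbers below are invented, and the sketch is discrete-time and discrete-state, unlike the dissertation's continuous formulation:

```python
import numpy as np

# Augmented state (s, a): surface condition s (0 = new, S-1 = failed) and
# age a since last reconstruction; age drives the deterioration rate, the
# history-dependent element that makes reconstruction worth modeling.
S, A_MAX, BETA = 6, 10, 0.95
REHAB_COST, RECON_COST = 20.0, 100.0
user_cost = 4.0 * np.arange(S)                 # worse surface -> higher user cost

def worsen_prob(a):
    return min(0.2 + 0.06 * a, 0.9)            # older base layers decay faster

def value_iteration(tol=1e-8):
    V = np.zeros((S, A_MAX + 1))
    while True:
        Vn = np.empty_like(V)
        for s in range(S):
            for a in range(A_MAX + 1):
                a1 = min(a + 1, A_MAX)
                q, s1 = worsen_prob(a), min(s + 1, S - 1)
                # do nothing: condition worsens w.p. q, age grows
                nothing = user_cost[s] + BETA * (q * V[s1, a1] + (1 - q) * V[s, a1])
                # rehabilitation: surface restored, underlying age keeps growing
                rehab = REHAB_COST + user_cost[0] + BETA * V[0, a1]
                # reconstruction: both surface condition and age reset
                recon = RECON_COST + user_cost[0] + BETA * V[0, 0]
                Vn[s, a] = min(nothing, rehab, recon)
        if np.abs(Vn - V).max() < tol:
            return Vn
        V = Vn

V = value_iteration()
print("cost-to-go, new base vs old base:", round(V[0, 0], 1), round(V[0, A_MAX], 1))
```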

19

Alam, Md Zahangir. "Joint transceiver design and power optimization for wireless sensor networks in underground mines." Master's thesis, Université Laval, 2018. http://hdl.handle.net/20.500.11794/30663.

Abstract:
With the great developments in wireless communication technologies, Wireless Sensor Networks (WSNs) have gained attention worldwide in the past decade and are now being used in health monitoring, disaster management, defense, telecommunications, etc. Such networks are used in many industrial and consumer applications such as industrial process and environment monitoring, among others. A WSN is a collection of specialized transducers known as sensor nodes with a communication link, distributed randomly in any location to monitor environmental parameters such as water level and temperature. Each sensor node is equipped with a transducer, a signal processor, a power unit, and a transceiver. WSNs are now being widely used in the underground mining industry to monitor environmental parameters, including the amount of gas, water, temperature, humidity, oxygen level, dust, etc. A WSN for environment monitoring can be equivalently replaced by a multiple-input multiple-output (MIMO) relay network. Multi-hop relay networks have attracted significant research interest in recent years for their capability of increasing the coverage range. The network communication link from a source to a destination is implemented using the amplify-and-forward (AF) or decode-and-forward (DF) schemes. The AF relay receives information from the previous relay and simply amplifies the received signal and then forwards it to the next relay. On the other hand, the DF relay first decodes the received signal and then forwards it to the next relay in the second stage if it can perfectly decode the incoming signal. For analytical simplicity, in this thesis, we consider the AF relaying scheme, and the results of this work can also be developed for the DF relay. The transceiver design for multi-hop MIMO relay is very challenging. This is because at the L-th relay stage, there are 2^L possible channels. So, for a large-scale network, it is not economical to send the signal through all possible links. Instead, we can find the best path from source to destination that gives the highest end-to-end signal-to-noise ratio (SNR). We can minimize the mean square error (MSE) or bit error rate (BER) objective function by sending the signal using the selected path. The set of relays in the path remains active and the rest of the relays are turned off, which saves power and enhances network lifetime. Best-path signal transmission has been carried out in the literature for 2-hop MIMO relays, and for multiple relaying it becomes very complex. In the first part of this thesis, we propose an optimal best-path-finding algorithm under perfect channel state information (CSI). We consider a parallel multi-hop multiple-input multiple-output (MIMO) AF relay system where a linear minimum mean-squared error (MMSE) receiver is used at the destination. We simplify the parallel network into an equivalent series multi-hop MIMO relay link using best relaying, where the best relay ...
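The best-path idea can be sketched with a standard simplification: under the common high-SNR approximation for AF relaying, the reciprocal end-to-end SNR is roughly the sum of the per-hop reciprocal SNRs, so the best path minimizes additive weights 1/SNR and a stage-wise dynamic program avoids enumerating the exponentially many paths. Both the harmonic combining rule and the DP are assumptions of this sketch; the thesis's algorithm also co-designs the MMSE transceivers, which is omitted here:

```python
import numpy as np

def best_relay_path(snr_per_stage):
    """snr_per_stage[l][i, j]: SNR of the hop from node i at stage l to node j
    at stage l+1 (stage 0 holds only the source, the last stage only the
    destination). Minimizes the additive weights 1/snr stage by stage
    (Viterbi-style) and backtracks the chosen relays."""
    cost = np.zeros(1)                         # cost-to-reach the source
    back = []
    for W in snr_per_stage:
        w = 1.0 / np.asarray(W, float)
        total = cost[:, None] + w              # candidate costs into next stage
        back.append(np.argmin(total, axis=0))  # best predecessor per node
        cost = total.min(axis=0)
    node, path = 0, [0]                        # single destination node
    for bp in reversed(back):
        node = int(bp[node])
        path.append(node)
    return 1.0 / cost[0], path[::-1]           # approx. e2e SNR, node indices

# Source -> 3 relays -> 3 relays -> destination (invented linear-scale SNRs).
rng = np.random.default_rng(1)
stages = [rng.uniform(1, 20, (1, 3)), rng.uniform(1, 20, (3, 3)),
          rng.uniform(1, 20, (3, 1))]
snr, path = best_relay_path(stages)
print(f"approx. e2e SNR {snr:.2f} via per-stage nodes {path}")
```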
20

Richard, Vincent. "Multi-body optimization method for the estimation of joint kinematics : prospects of improvement." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSE1090/document.

Abstract:
Human movement analysis generally relies on skin-marker monitoring techniques to reconstruct joint kinematics. However, these acquisition techniques have important limitations, including the "soft tissue artefacts" (i.e., the relative movement between the skin markers and the underlying bones). The multi-body optimization method aims to compensate for these artefacts by imposing the degrees of freedom of a predefined kinematic model on the marker trajectories. The mechanical linkages typically used for modeling the joints, however, prevent a satisfactory estimate of the joint kinematics. This thesis addresses the prospects of improvement of the multi-body optimization method for the estimation of joint kinematics of the lower limb through different approaches: (1) the reconstruction of the kinematics by monitoring the angular velocity, the acceleration and the orientation of magneto-inertial measurement units instead of tracking markers, (2) the introduction of an elastic joint model based on the knee stiffness matrix, enabling a physiological estimation of joint kinematics, and (3) the introduction of a "kinematic-dependent" soft tissue artefact model to assess and compensate for soft tissue artefact concurrently with estimating the joint kinematics. This work demonstrated the versatility of the multi-body optimization method. The results give hope for significant improvement in this method, which is becoming increasingly used in biomechanics, especially for musculoskeletal modeling.
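The core of multi-body optimization is a least-squares fit of a constrained kinematic model to measured marker positions: the joint constraint lives in the model, so soft tissue artefact in the measurements cannot pull the segments apart. A planar two-segment toy version with an ideal hinge (the thesis works in 3D and with richer joint models, including the elastic knee model mentioned above):

```python
import numpy as np
from scipy.optimize import least_squares

# Local (segment-frame) marker positions: two markers on each of two
# segments linked by an ideal hinge (a 'thigh' of length L1 and a 'shank').
L1 = 0.4
M1 = np.array([[0.1, 0.03], [0.3, 0.03]])     # markers on segment 1
M2 = np.array([[0.1, -0.03], [0.3, -0.03]])   # markers on segment 2

def rot(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def model_markers(q):
    """Forward kinematics, q = (hip x, hip y, theta1, theta2): the hinge
    constraint is built into the model itself."""
    o1, t1, t2 = q[:2], q[2], q[3]
    knee = o1 + rot(t1) @ np.array([L1, 0.0])
    g1 = o1 + M1 @ rot(t1).T                   # segment-1 markers, global
    g2 = knee + M2 @ rot(t1 + t2).T            # segment-2 markers, global
    return np.vstack([g1, g2])

def fit_frame(measured, q0):
    res = least_squares(lambda q: (model_markers(q) - measured).ravel(), q0)
    return res.x

# Synthetic frame: true pose plus marker noise standing in for artefacts.
q_true = np.array([0.0, 1.0, -1.2, 0.6])
rng = np.random.default_rng(2)
meas = model_markers(q_true) + rng.normal(0, 0.005, (4, 2))
q0 = np.array([0.1, 0.9, -1.0, 0.4])          # in practice: previous frame
print(np.round(fit_frame(meas, q0), 3), "vs", q_true)
```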
21

Sharawi, Abeer Tarief. "OPTIMIZATION MODELS FOR EMERGENCY RELIEF SHELTER PLANNING FOR ANTICIPATED HURRICANE EVENTS." Doctoral diss., University of Central Florida, 2007. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4151.

Abstract:
Natural disasters, specifically hurricanes, can cause catastrophic loss of life and property. In recent years, the United States has endured significant losses due to a series of devastating hurricanes (e.g., Hurricanes Charley and Ivan in 2004, and Hurricanes Katrina and Wilma in 2005). Several Federal authorities report that there are weaknesses in the emergency and disaster planning and response models currently employed in practice, thus creating a need for better decision models in emergency situations. The current models not only lack fast communication with emergency responders and the public, but are also inadequate for advising the pre-positioning of supplies at emergency shelters before the storm's impact. The problem of emergency evacuation relief shelter planning during anticipated hurricane events is addressed in this research. The shelter planning problem is modeled as a joint location-allocation-inventory problem, where the number and location of shelter facilities must be identified. In addition, the evacuating citizens must be assigned to the designated shelter facilities, and the amount of emergency supply inventory to pre-position at each facility must be determined. The objective is to minimize the total emergency evacuation cost, equal to the combined facility opening and preparation, evacuee transportation, and emergency supply inventory costs. A review of the emergency evacuation planning literature reveals that this class of problems has not been largely addressed to date. First, the emergency evacuation relief sheltering problem is formulated under deterministic conditions as a mixed integer non-linear programming (MINLP) model. For three different evacuation scenarios, the proposed MINLP model yields a plan that identifies the locations of relief shelters for evacuees, the assignment of evacuees to those shelters, and the amount of emergency supplies to stockpile in advance of an anticipated hurricane. The MINLP model is then used (with minor modifications) to explore the idea of equally distributing the evacuees across the open shelters. The results for the three different scenarios indicate that a balanced utilization of the open shelters is achieved with little increase in the total evacuation cost. Next, the MINLP is enhanced to consider the stochastic characteristics of both hurricane strength and projected trajectory, which can directly influence the storm's behavior. The hurricane's strength is based on its category according to the Saffir-Simpson Hurricane Scale. Its trajectory is represented as a Markov chain, where the storm's path is modeled as transitions among states (i.e., coordinate locations) within a spherical coordinate system. A specific hurricane that made landfall in the state of Florida is used as a test case for the model. Finally, the stochastic model is employed within a robust optimization strategy, where several probable hurricane behavioral scenarios are solved. Then, a single, robust evacuation sheltering plan is generated that provides the best results, not only in terms of maximum deviation of total evacuation cost across the likely scenarios, but also in terms of maximum deviation of unmet evacuee demand at the shelter locations. The practical value of this robust plan is quite significant.
This plan should accommodate unexpected changes in the behavior of an approaching storm to a reasonable degree with minimal negative impact to the total evacuation cost and the fulfillment of evacuee demand at the shelter locations. Most importantly, the re-allocation and re-mobilization of emergency personnel and supplies are not required, which can cause confusion and potentially increase the response time of responders to the hurricane emergency. The computational results show the promise of this research and usefulness of the proposed models. This work is an initial step in addressing the simultaneous identification of shelter locations, assignment of citizens to those shelters, and determination of a policy for stockpiling emergency supplies in advance of a hurricane. Both the location-allocation problem and the inventory problem have been extensively and individually studied by researchers as well as practitioners. However, this joint location-allocation-inventory problem is a difficult problem to solve, especially in the presence of stochastic storm behavior. The proposed models, even in the deterministic case, are a significant step beyond the current state-of-the-art in the area of emergency and disaster planning.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering PhD
22

Mori, Gerald M. "Sociotechnical systems analysis and design for selecting and designing the optimum manufacturing process." Master's thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-02162010-020334/.

23

Andersson, Katarina. "Optimization of the Implantation Angle for a Talar Resurfacing Implant : A Finite Element Study." Thesis, KTH, Neuronik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-154237.

Abstract:
Osteochondral lesions of the talus (OLTs) are the third most common type of osteochondral lesion and can cause pain and instability of the ankle joint. Episurf Medical AB is a medical technology company that develops individualized implants for patients who are suffering from focal cartilage lesions. Episurf has recently started a project that aims to implement its implantation technique in the treatment of OLTs. This master's thesis was a part of Episurf's talus project, and its main goal was to find the optimal implantation angle of the Episurf implant when treating OLTs. The optimal implantation angle was defined as the angle that minimized the maximum equivalent (von Mises) strain acting on the implant shaft during the stance phase of a normal gait cycle. It is desirable to minimize the strain acting on the implant shaft, since a reduction of the strain can improve the longevity of the implant. To find the optimal implantation angle, a finite element model of an ankle joint treated with the Episurf implant was developed. In the model, an implant with a diameter of 12 millimeters was placed in the middle part of the medial side of the talar dome. An optimization algorithm was designed to find the implantation angle which minimized the maximum equivalent strain acting on the implant shaft. The optimal implantation angle was found to be a sagittal angle of 12.5 degrees and a coronal angle of 0 degrees. Both the magnitude and the direction of the force applied to the ankle joint in the simulated stance phase seemed to influence the maximum equivalent strain acting on the implant shaft. A number of simplifications were made in the simulations of this project, which might affect the accuracy of the results. It is therefore recommended that further, more detailed simulations based on this project be performed in order to improve the result accuracy.
24

Vennapusa, Siva Koti Reddy. "Design of bi-adhesive joint for optimal strength." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-16675.

Abstract:
To support trust in the design of adhesively bonded joints, it is important to precisely predict their mechanical failure load. A two-dimensional linear elastic cohesive zone model using a combination of a soft and a stiff adhesive is developed to optimize the strength of a lap joint. Separation under mixed-mode conditions (normal and shear directions) is considered. By varying the lengths of the two adhesives, the fracture load is optimized. The results obtained from the numerical experiments show an improvement in strength.
25

Song, Ruoyu. "Game theoretic optimization for product line evolution." Thesis, Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54472.

Abstract:
Product line planning aims at optimal planning of product variety. The traditional product line planning problem, moreover, develops new product lines based on product attributes without considering existing product lines. In reality, however, almost all new product lines evolve from existing ones, which leads to the product line evolution problem. Product line evolution involves trade-offs between the marketing perspective and the engineering perspective. The marketing concern focuses on maximizing utility for customers; the engineering concern focuses on minimizing engineering cost. Utility represents the satisfaction experienced by the customers of a product. Engineering cost is the total cost involved in the process of developing a product line. These two goals are in conflict, since high utility requires high-end product attributes, which increase the engineering cost, and vice versa. Rather than aggregating both concerns into a single-level optimization problem, the marketing and engineering concerns entail a non-collaborative game per se. This research investigates a game-theoretic approach to the product line evolution problem. A leader-follower joint optimization model is developed to leverage the conflicting goals of marketing and engineering within a coherent framework of game-theoretic optimization. To solve the joint optimization model efficiently, a bi-level nested genetic algorithm is developed. A case study of smart watch product line evolution is reported to illustrate the feasibility and potential of the proposed approach.
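The bi-level structure can be sketched as a nested loop: an outer (leader) evolutionary search over discrete attribute levels, with each evaluation invoking the follower's best response. Everything below is invented for illustration; in the thesis both levels are genetic algorithms and the objectives come from real utility and engineering-cost models:

```python
import numpy as np

rng = np.random.default_rng(3)
LEVELS = np.array([1.0, 2.0, 3.0])    # attribute levels for each of 2 variants

def utility(design):
    """Leader (marketing) payoff: utility grows with attribute level but
    with diminishing returns. Purely illustrative numbers."""
    return np.sum(np.sqrt(design))

def engineering_cost(design, process):
    """Follower (engineering) cost for a design and a process variable."""
    return np.sum(design ** 2) * (1 + (process - 0.5) ** 2)

def follower_best_response(design):
    """Inner optimization: here the optimum is analytic (process = 0.5);
    in the nested GA a second evolutionary search would sit here."""
    return 0.5

def leader_payoff(design):
    proc = follower_best_response(design)     # leader anticipates the follower
    return utility(design) - 0.1 * engineering_cost(design, proc)

# Outer loop: a tiny GA over discrete attribute levels (the bi-level nesting).
pop = LEVELS[rng.integers(0, 3, size=(20, 2))]
for gen in range(50):
    scores = np.array([leader_payoff(d) for d in pop])
    parents = pop[np.argsort(scores)[-10:]]              # truncation selection
    kids = parents[rng.integers(0, 10, 20)].copy()
    mut = rng.random(kids.shape) < 0.2                   # random mutation
    kids[mut] = LEVELS[rng.integers(0, 3, mut.sum())]
    pop = kids

best = pop[np.argmax([leader_payoff(d) for d in pop])]
print("best design levels:", best, "payoff:", round(leader_payoff(best), 3))
```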
26

Mazumdar, Anupam S. M. Massachusetts Institute of Technology. "Iterative algorithms for a joint pricing and inventory control problem with nonlinear demand functions." Thesis, Massachusetts Institute of Technology, 2009. http://hdl.handle.net/1721.1/55076.

Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 79-81).
Price management, production planning and inventory control are important determinants of a firm's profitability. The intense competition brought about by rapid innovation, lean manufacturing time and the internet revolution has compelled firms to adopt a dynamic strategy that involves a complex interplay between pricing and production decisions. In this thesis we consider some of these problems and develop computationally efficient algorithms that aim to tackle and optimally solve them in a finite amount of time. In the first half of the thesis we consider the joint pricing and inventory control problem in a deterministic and multiperiod setting, utilizing the popular log-linear demand model. We develop four algorithms that aim to solve the resulting profit maximization problem in a finite amount of time. The developed algorithms are then tested in a variety of settings, ranging from small to large instances of trial data. The second half of the thesis deals with setting prices effectively when customer demand is assumed to follow the multinomial logit demand model, the most popular discrete choice demand model. The profit maximization problem (even in the absence of constraints) is non-convex and hard to solve. Despite this fact, we develop algorithms that compute the optimal solution efficiently. We test the algorithms in a wide variety of scenarios, from small to large customer segments, with and without production/inventory constraints. The last part of the thesis develops solution methods for the joint pricing and inventory control problem when costs are linear and demand follows the multinomial logit model.
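The multinomial logit part can be made concrete. With purchase probabilities s_i(p) = exp(a_i - b p_i) / (1 + Σ_j exp(a_j - b p_j)) and profit Σ_i (p_i - c_i) s_i(p), the problem is non-convex in p, yet small instances are solved easily by a generic optimizer; with a common price sensitivity b, the optimal markups p_i - c_i come out equal, a known structural property. All parameter values below are invented:

```python
import numpy as np
from scipy.optimize import minimize

a = np.array([4.0, 3.5, 3.0])        # product attractiveness
c = np.array([1.0, 0.8, 0.6])        # unit costs
b = 1.2                              # common price sensitivity

def shares(p):
    """Multinomial-logit purchase probabilities with a no-purchase option."""
    w = np.exp(a - b * p)
    return w / (1.0 + w.sum())

def profit(p):
    return np.dot(p - c, shares(p))

# Quasi-Newton search from cost-plus prices works on this small instance.
res = minimize(lambda p: -profit(p), x0=c + 1.0, method='BFGS')
p_opt = res.x
print("prices:", np.round(p_opt, 3), "profit:", round(profit(p_opt), 4))
print("markups p_i - c_i:", np.round(p_opt - c, 3))  # equal at the optimum
```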
by Anupam Mazumdar.
S.M.
27

Buchal, Ralph Oliver. "Determination of robot trajectories satisfying joint limit and interference constraints using an optimization method." Thesis, University of British Columbia, 1987. http://hdl.handle.net/2429/26967.

Abstract:
An important problem in robotics research is the automatic off-line planning of optimal robot trajectories to perform specified tasks while satisfying physical constraints. This thesis proposes a method for finding an optimal geometric robot trajectory subject to the constraints of joint displacement limits and interference avoidance. A geometric method for calculating the distance between convex polyhedra is presented, and the method is implemented in two dimensions for the calculation of interference. Point-to-point trajectory planning is posed as a two-point boundary value problem in the calculus of variations. The kinematic constraints are formulated as exterior penalty functions and are combined with other optimization criteria to form a cost functional. The problem is solved by discretizing it and numerically minimizing the cost functional with a steepest-descent approach that iteratively modifies the trajectory. Any starting trajectory which satisfies the boundary conditions is acceptable, but different starting trajectories may converge to different locally optimal final trajectories. The method has been implemented for the two-dimensional case in an interactive FORTRAN program running on a VAX 11/750 computer. Successful results were obtained for a number of test cases, and further work has been identified to allow application of the method to a wider range of problems.
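The discretize-and-descend scheme translates almost directly into code: represent the trajectory by its interior points, form a cost of path length plus exterior penalties for interference and limit violations, and follow the negative gradient. The sketch below uses a circular obstacle, a box standing in for joint limits, and numerical gradients; all constants are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

START, GOAL = np.array([0.0, 0.0]), np.array([1.0, 0.0])
OBS_C, OBS_R = np.array([0.5, 0.0]), 0.2                       # obstacle
LIM_LO, LIM_HI = np.array([-0.1, -0.1]), np.array([1.1, 0.5])  # 'joint limits'
W_OBS = W_LIM = 10.0                                           # penalty weights

def cost(flat):
    """Path length plus exterior penalties: the discretized cost functional."""
    pts = np.vstack([START, flat.reshape(-1, 2), GOAL])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    pen_obs = np.sum(np.maximum(OBS_R - np.linalg.norm(pts - OBS_C, axis=1),
                                0.0) ** 2)
    pen_lim = np.sum(np.maximum(LIM_LO - pts, 0.0) ** 2
                     + np.maximum(pts - LIM_HI, 0.0) ** 2)
    return length + W_OBS * pen_obs + W_LIM * pen_lim

def num_grad(f, x, eps=1e-6):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

# Straight-line start (it crosses the obstacle); small jitter breaks the
# symmetry so steepest descent can push the path around the disk.
traj = np.linspace(START, GOAL, 12)[1:-1].ravel()
traj += rng.normal(0.0, 0.01, traj.shape)
for it in range(500):
    traj -= 0.02 * num_grad(cost, traj)
print("final cost:", round(cost(traj), 4))
```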
Applied Science, Faculty of
Mechanical Engineering, Department of
Graduate
28

Colpo, Kristie M. "Joint Sensing/Sampling Optimization for Surface Drifting Mine Detection with High-Resolution Drift Model." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/17345.

Abstract:
Approved for public release; distribution is unlimited
Every mine countermeasures (MCM) operation is a balance of time versus risk. In attempting to reduce both, it is in the interest of the MCM community to use unmanned, stationary sensors to detect and monitor drifting mines through harbor inlets and straits. A network of stationary sensors positioned along an area of interest could be critical in such a process by removing the MCM warfighter from a threat area and reducing the time required to detect a moving target. Although many studies have been conducted to optimize sensors and sensor networks for moving target detection, few of them have considered the effects of the environment. In a drifting mine scenario, an oceanographic drift model could offer an estimate of surrounding environmental effects and therefore provide time-critical estimates of target movement. These approximations can be used to further optimize sensor network components and locations through a defined methodology using estimated detection probabilities. The goal of this research is to provide such a methodology by modeling idealized stationary sensors and surface drift for the Hampton Roads Inlet.
APA, Harvard, Vancouver, ISO, and other styles
30

Cizaire, Claire (Claire Jia Ling). "Optimization models for joint airline pricing and seat inventory control : multiple products, multiple periods." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/72842.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 153-157).
Pricing and revenue management are two essential levers for optimizing the sales of an airline's seat inventory and maximizing revenues. Over the past few decades they have generated a great deal of research but have typically been studied and optimized separately: the pricing process focused on demand segmentation and optimal fares, regardless of any capacity constraints, while researchers in revenue management developed algorithms to set booking limits by fare product, given a set of fares and capacity constraints. This thesis develops several approaches for solving for the optimal fares and booking limits jointly and simultaneously, with the underlying demand volume in an airline market modeled as a function of the fares. We propose an initial approach to the two-product, two-period revenue optimization problem by first assuming that demand is deterministic, and show that the booking limit on sales of the lower-priced product is then unnecessary, allowing us to simplify the optimization problem. We then develop a stochastic optimization model and analyze the combined impacts of fares and booking limits on the total number of accepted bookings when the underlying demand is uncertain, demonstrating that this joint optimization approach can provide a 3-4% increase in revenues over traditional pricing and revenue management practices. The stochastic model is then extended to the joint pricing and seat inventory control problem for booking horizons involving more than two booking periods, as is the case in reality. A generalized optimization methodology is presented, and we show that the complexity of the joint optimization problem increases substantially with the number of booking periods. We therefore develop three heuristics; simulations for a three-period problem show that all heuristics outperform the deterministic optimization model, and two of them provide revenues close to those obtained with the stochastic model. This thesis provides a basis for the integration of pricing and revenue management: the combined effects of fares and booking limits on the number of accepted bookings, and thus on revenues, are explicitly taken into account in our joint optimization models, and the proposed approaches are shown to further enhance revenues.
by Claire Cizaire.
Ph.D.
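The deterministic two-product case discussed in this abstract can be illustrated with a small sketch. Assuming (purely for illustration) linear price-demand curves and a shared seat capacity, the fares alone determine the accepted bookings once demand is deterministic, which is why the booking limit on the lower-priced product becomes unnecessary; a brute-force grid search over the fares then solves the toy problem.

```python
import numpy as np

A, B = np.array([100.0, 60.0]), np.array([0.5, 0.8])  # toy demand curves
CAP = 60.0                                            # shared seat capacity

def revenue(p):
    d = np.maximum(0.0, A - B * p)        # deterministic demand volumes
    if d.sum() > CAP:                     # fares must keep bookings feasible
        return -np.inf
    return float((p * d).sum())

grid = np.linspace(0.0, 200.0, 401)
best = max((revenue(np.array([p1, p2])), p1, p2)
           for p1 in grid for p2 in grid)
print("revenue %.0f at fares (%.1f, %.1f)" % best)
```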
APA, Harvard, Vancouver, ISO, and other styles
31

Guan, Kyle Chi. "Cost-effective optical network architecture : a joint optimization of topology, switching, routing and wavelength assignment." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/38678.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
Includes bibliographical references (p. 279-285).
To provide end users with economic access to high bandwidth, the architecture of next-generation metropolitan area networks (MANs) needs to be judiciously designed from the cost perspective. Beyond a low initial capital investment, the ultimate goal is to design networks that exhibit excellent scalability: a decreasing cost per node per unit traffic as user number and transaction size increase. As an effort toward this goal, this thesis searches for scalable network architectures over a solution space that embodies the key aspects of optical networks: fiber connection topology, switching architecture selection and resource dimensioning, and routing and wavelength assignment (RWA). Because these design elements are inter-related, the design problem is solved jointly in the optimization process in order to achieve overall good performance. To evaluate how cost drives architectural tradeoffs, an analytical approach is taken in most parts of the thesis, first focusing on networks with symmetric, well-defined structures (i.e., regular networks) and symmetric traffic patterns (i.e., all-to-all uniform traffic), which are fair representations that suggest trends. We start with an examination of various measures of regular topologies. The average minimum hop distance plays a crucial role in evaluating the efficiency of a network architecture: from the perspective of designing optical networks, the amount of switching resources used at nodes is proportional to the average minimum hop distance, so a smaller average minimum hop distance translates into a lower fraction of pass-through traffic and fewer switching resources. Next, a first-order cost model is set up and an optimization problem is formulated to characterize the tradeoffs between fiber and switching resources. Via convex optimization techniques, the joint optimization problem is solved analytically for (static) uniform traffic and symmetric networks. Two classes of regular graphs, Generalized Moore Graphs and Δ-nearest Neighbors Graphs, are identified to yield lower and upper cost bounds, respectively. The investigation of cost scalability further demonstrates the advantage of the Generalized Moore Graphs as benchmark topologies: with a linear switching cost structure, the minimal normalized cost per unit traffic decreases with increasing network size for the Generalized Moore Graphs and their relatives. In comparison, for less efficient fiber topologies (e.g., Δ-nearest Neighbors) and switching cost structures (e.g., quadratic cost), the minimal normalized cost per unit traffic plateaus or even increases with increasing network size. The study also reveals other attractive properties of Generalized Moore Graphs in conjunction with minimum-hop routing: the aggregate network load is evenly distributed over each fiber, so Generalized Moore Graphs also require the minimum number of wavelengths to support a given uniform traffic demand. Furthermore, the theoretical work on the Generalized Moore Graphs and their close relatives is extended to more realistic design scenarios in two respects. One addresses irregular topologies and (static) non-uniform traffic, for which the results of Generalized Moore networks provide useful estimates of network cost and thus good references for cost-efficient optical networks. The other deals with network design under random demands, for which two optimization formulations incorporating traffic variability are presented. The results show that, as physical architectures, Generalized Moore Graphs are most robust (in cost) to demand uncertainties. Analytical results also provide design guidelines on how the optimum dimensioning, network connectivity, and network costs vary as functions of risk aversion, service-level requirements, and probability distributions of demands.
by Kyle Chi Guan.
Ph.D.
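The average minimum hop distance that drives the switching-cost analysis above is straightforward to compute by breadth-first search. The sketch below evaluates it for a small ring topology, an illustrative stand-in: Generalized Moore Graphs are the topologies that minimize this metric for a given size and degree.

```python
from collections import deque

def avg_min_hops(adj):
    # Mean shortest-path hop count over all ordered node pairs,
    # computed by one BFS per source node.
    n, total = len(adj), 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# 8-node bidirectional ring (degree-2 regular topology).
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
print("average minimum hop distance: %.3f" % avg_min_hops(ring))
```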
APA, Harvard, Vancouver, ISO, and other styles
32

Kadaikar, Aysha-Khatoon. "Optimization of the Rate-Distortion Compromise for Stereoscopic Image Coding using Joint Entropy-Distortion Metric." Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCD083/document.

Full text
Abstract:
During the last decades, a wide range of applications using stereoscopic technology has emerged, offering increased immersion to users: video games with autostereoscopic displays, 3D-TV, stereo visio-conferencing. These applications require fast processing and efficient compression techniques. In particular, stereoscopic images require twice the amount of information needed to transmit or store 2D images, since they are composed of two views of the same scene. Our work contributes to the field of stereoscopic image compression, and more precisely to the improvement of disparity map estimation, with the aim of achieving a better compromise between the bitrate needed to encode the disparity map and the quality of the predicted view. Generally, disparities are selected by minimizing a distortion metric, sometimes subject to a smoothness constraint on the assumption that a smooth disparity map needs a smaller bitrate to be encoded. However, a smoother disparity map does not always significantly reduce the bitrate, and the smoothing can noticeably increase the distortion of the predicted view. The first algorithm we propose therefore selects disparities by minimizing a joint entropy-distortion metric: at each step of the algorithm, the bitrate of the final disparity map is estimated and included in the metric to be minimized, and the global distortion is updated as the image is processed. The algorithm relies on the sequential construction of a tree in which only a fixed number of paths is kept at each depth, ensuring good rate-distortion performance. In the second part of the work, we propose a sub-optimal solution with a smaller computational complexity: a reference solution (the one minimizing the distortion of the predicted view) is successively modified as long as an improvement is observed in terms of rate-distortion. We then study how to take advantage of large search areas for disparity selection. A larger search area offers more disparity choices, and thus potentially a better reconstruction of the predicted view; on the other hand, the wider the range of selected disparities, the higher the expected bitrate of the disparity map. We propose two approaches that restrict a large search area to subsets of disparities able to achieve a given bitrate while minimizing the distortion of the predicted image. The last part of the work concerns variable block sizes, which undeniably improve rate-distortion performance since the block size adapts to the image features; we propose a novel algorithm that jointly estimates and optimizes the disparity and block-length maps.
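The tree search with a fixed number of retained paths described above is essentially a beam search, which the following sketch illustrates on synthetic data: blocks are processed sequentially, each candidate disparity-map prefix is scored by its accumulated distortion plus LAMBDA times an empirical entropy estimate of the chosen disparities, and only the best paths survive at each depth. The random distortions, beam width, and LAMBDA value are illustrative assumptions, not the thesis's actual matching costs or settings.

```python
import heapq
import math
import numpy as np

rng = np.random.default_rng(1)
N_BLOCKS, DISPARITIES, BEAM, LAMBDA = 12, range(-3, 4), 8, 2.0
# Random stand-ins for per-block matching distortions.
distortion = {(b, d): rng.uniform(0, 10)
              for b in range(N_BLOCKS) for d in DISPARITIES}

def entropy_bits(counts):
    # Total bits = n * H(empirical distribution of chosen disparities).
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values()) * n

paths = [(0.0, [])]                        # (score, disparity prefix)
for b in range(N_BLOCKS):
    candidates = []
    for _, prefix in paths:
        for d in DISPARITIES:
            chosen = prefix + [d]
            counts = {}
            for x in chosen:
                counts[x] = counts.get(x, 0) + 1
            dist = sum(distortion[(i, x)] for i, x in enumerate(chosen))
            candidates.append((dist + LAMBDA * entropy_bits(counts), chosen))
    paths = heapq.nsmallest(BEAM, candidates, key=lambda t: t[0])  # keep beam

print("best score %.2f, disparities %s" % (paths[0][0], paths[0][1]))
```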
APA, Harvard, Vancouver, ISO, and other styles
33

Chatzitheodoridi, Maria-Elisavet. "Processing Optimization for Continuous Phase Modulation-based Joint Radar-Communication System : Application on Imaging Radar." Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPAST022.

Full text
Abstract:
Due to the continuous growth of electromagnetic applications, the spectrum is becoming more and more congested. A possible solution to this problem is the creation of joint radar-communication systems, which alleviate spectrum occupancy by using the same bandwidth for both applications. In this thesis, a joint Synthetic Aperture Radar (SAR)-communication system based on a communication waveform is considered. Among the many existing codes, we chose Continuous Phase Modulation (CPM) codes, and more specifically the sub-family of Continuous Phase Frequency Shift-Keying (CPFSK) codes. Their properties, in particular their spectral occupation, are first studied and compared with other well-known communication codes. However, these waveforms exhibit degraded pulse-compression qualities compared with the chirp usually used in radar: the sidelobes resulting from matched-filter compression are higher, which deteriorates the resulting SAR image. A mismatched filter that minimizes the sidelobe level is therefore proposed, along with a fast algorithm that provides the filters for all transmitted signals at an acceptable computational cost. This mismatched filter is further improved to deal with unknown parameters, namely an unknown Doppler shift or an off-grid delay applied to the received signal; such problems extend to radar applications beyond SAR. Once the range-compression method is established, the results are evaluated in two ways. On the one hand, re-synthesized SAR images are generated, reconstructed from real chirp-based data using CPM codes and mismatched filters, and assessed with several comparison tools. On the other hand, real data are acquired in an ISAR framework in order to validate the system in a realistic context. Finally, we can give a positive answer to the question: can we create a joint SAR-communication system that transmits information and provides images of good radar quality?
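A mismatched filter minimizing sidelobe energy admits a classical least-squares sketch: choose a filter, longer than the code, whose correlation output is as close as possible to a delta. The snippet below applies this to a random binary sequence standing in for an actual CPFSK waveform and compares peak sidelobe levels against the matched filter; the code length, filter length, and stand-in waveform are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.choice([-1.0, 1.0], size=32)          # stand-in transmit code
L = 3 * len(s)                                # filter longer than the code

# Convolution matrix A such that A @ h is the compression output of
# filter h applied to code s.
A = np.array([[s[i - j] if 0 <= i - j < len(s) else 0.0
               for j in range(L)] for i in range(L + len(s) - 1)])
d = np.zeros(A.shape[0])
d[A.shape[0] // 2] = 1.0                      # desired delta response
h, *_ = np.linalg.lstsq(A, d, rcond=None)     # least-squares mismatched filter

mf = np.correlate(s, s, mode="full")          # matched-filter response
mmf = A @ h                                   # mismatched-filter response

def psl_db(x):                                # peak sidelobe level (dB)
    x = np.abs(x) / np.abs(x).max()
    x[np.argmax(x)] = 0.0                     # remove the mainlobe peak
    return 20 * np.log10(x.max())

print("PSL matched %.1f dB vs mismatched %.1f dB" % (psl_db(mf), psl_db(mmf)))
```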
APA, Harvard, Vancouver, ISO, and other styles
34

Grenville, N. Delia. "A Sociotechnical Approach to Evaluating the Effects of Managerial Time Allotment on Department Performance." Thesis, Virginia Tech, 1997. http://hdl.handle.net/10919/36811.

Full text
Abstract:
Current organizational design changes such as restructuring, production advancements, and information technology improvements have caused many organizations to move to flatter management structures. Changes in organizational structure, along with the demand for improved performance, have broadened the scope of responsibilities for first-level managers in manufacturing organizations, who are required to balance their time to meet greater demands. The sociotechnical systems principle of joint optimization states that organizations function optimally when design changes are made to meet the needs of both the social and technical subsystems in the context of the organization's environment. This study uses time allotment at the supervisory level to operationalize the sociotechnical systems principle of joint optimization. Ninety-one first-level managers from both the production and distribution areas of thirteen North American facilities participated in this study. Four survey instruments were used to measure the following dimensions: joint optimization, department performance, time allotment to the social and technical subsystems, and organizational values of appropriate time use. Five time allotment constructs emerged from the data collected on time use in the social and technical subsystems: time spent on Participation and Information Sharing, Customer Needs and Strategic Planning, Skill Development and Compensation, Quality, and Department Operational Needs. The results indicated that the time allotment constructs, along with the organization's values of appropriate time use, can be used to predict both joint optimization and performance at the department level. The results also indicated a strong relationship (r = .607, p < .05) between the level of joint optimization and department performance.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
35

IRFAN, MUHAMMAD ABEER. "Joint geometry and color denoising for 3D point clouds." Doctoral thesis, Politecnico di Torino, 2021. http://hdl.handle.net/11583/2912976.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Gecili, Hakan. "Joint Shelf Design and Shelf Space Allocation Problem for Retailers." Wright State University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=wright1594374475655644.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Teague, Kory Alan. "Approaches to Joint Base Station Selection and Adaptive Slicing in Virtualized Wireless Networks." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/85966.

Full text
Abstract:
Wireless network virtualization is a promising avenue of research for next-generation 5G cellular networks. This work investigates the problem of selecting base stations to construct virtual networks for a set of service providers, and of adaptively slicing the resources among the service providers to satisfy their demands. A two-stage stochastic optimization framework is introduced to solve this problem, and two methods are presented for approximating the stochastic model. The first uses a sampling approach applied to the deterministic equivalent program of the stochastic model. The second uses a genetic algorithm for base station selection and performs adaptive slicing via a single-stage linear optimization problem. A number of scenarios are simulated using a log-normal model designed to emulate demand in real-world cellular networks. Simulations indicate that the first approach can provide a reasonably tight solution but scales poorly, as its time expense grows exponentially with the number of parameters; the second approach provides a significant improvement in run time with the introduction of marginal error.
Master of Science
5G, the next-generation cellular network standard, promises significant improvements over current-generation standards; for 5G to be successful, this must be accompanied by similarly significant efficiency improvements. Wireless network virtualization is a promising technology that has been shown to improve the cost efficiency of current-generation cellular networks. By abstracting a physical resource, such as a cell-tower base station, from the use of that resource, virtual resources are formed. This work investigates the problem of selecting virtual resources (e.g., base stations) to construct virtual wireless networks with minimal cost, and of slicing the selected resources among individual networks to optimally satisfy their demands. The problem is framed as a stochastic optimization and two approximation approaches are presented. The first converts the framework into a deterministic equivalent and reduces it to a tractable form; the second uses a genetic algorithm to approximate resource selection. The approaches are simulated and evaluated using a demand model constructed to emulate the statistics of an observed real-world urban network. Simulations indicate that the first approach can provide a reasonably tight solution at significant time expense, and that the second provides a solution in significantly less time with the introduction of marginal error.
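The second approach lends itself to a compact sketch: a bit-string selects base stations, and fitness combines build cost with a penalty for demand, sampled from a log-normal model as in the simulations described, that the selected capacity cannot serve. The genetic algorithm below (truncation selection, one-point crossover, bit-flip mutation) uses illustrative costs, capacities, and penalty weight, and omits the per-scenario slicing optimization of the actual method.

```python
import numpy as np

rng = np.random.default_rng(3)
N_BS, POP, GENS = 10, 30, 60
cost = rng.uniform(1.0, 3.0, N_BS)               # per-station build cost
capacity = rng.uniform(5.0, 15.0, N_BS)          # per-station capacity
demand = rng.lognormal(mean=3.0, sigma=0.4, size=200)   # demand samples

def fitness(bits):
    cap = float(capacity @ bits)
    unmet = np.maximum(0.0, demand - cap).mean() # expected unserved demand
    return float(cost @ bits) + 5.0 * unmet      # penalty weight is assumed

pop = rng.integers(0, 2, size=(POP, N_BS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:POP // 2]]         # truncation selection
    n_kids = POP - len(parents)
    mates = parents[rng.integers(0, len(parents), (n_kids, 2))]
    cuts = rng.integers(1, N_BS, n_kids)
    kids = np.array([np.concatenate([m[0][:c], m[1][c:]])  # 1-point crossover
                     for m, c in zip(mates, cuts)])
    flip = rng.random(kids.shape) < 0.1                  # bit-flip mutation
    kids[flip] ^= 1
    pop = np.vstack([parents, kids])

best = min(pop, key=fitness)
print("selected stations:", np.flatnonzero(best),
      "fitness %.2f" % fitness(best))
```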
APA, Harvard, Vancouver, ISO, and other styles
38

Camacho, Torregrosa Esteban Efraím. "Dosage optimization and bolted connections for UHPFRC ties." Doctoral thesis, Universitat Politècnica de València, 2014. http://hdl.handle.net/10251/34790.

Full text
Abstract:
Concrete technology has been in constant evolution since the time of the Roman Empire, and the pace of technological progress increased markedly from the second half of the XX century. Advances in the development of new cements, the appearance of fibers as reinforcement for structural applications, and especially the great progress in the field of water-reducing admixtures enabled the emergence of several types of special concretes. One of the latest is Ultra High Performance Fiber Reinforced Concrete (UHPFRC), which incorporates advances from Self-Compacting Concrete (SCC), Fiber-Reinforced Concrete (FRC) and Ultra High Strength Concrete (UHSC) technology. This exclusive material requires a detailed analysis of component compatibility and tight control of materials and processes. Mainly patented products have been used for the few structural elements built so far, and their cost makes the development of many other potential applications doubtful. Accordingly, a simplification of UHPFRC components and processes is needed. This is the first main goal of this Ph.D. thesis, which emphasizes the use of locally available components and simpler mixing processes. Moreover, the singular properties of this material, between those of ordinary concrete and steel, allow not only slenderer structures but also new concepts unthinkable with ordinary concrete. The second part of the Ph.D. thesis is focused on this field and develops a bolted connection system between UHPFRC elements. The research first summarizes the subfamilies belonging to HPC-UHPC materials. A detailed comparison is then provided between the dosages and properties of more than a hundred mixtures proposed by several authors during the last ten years of the technology, a useful tool for recognizing correlations between dosages and properties and for validating, or not, preconceived ideas about this material. This state-of-the-art analysis underpinned the development of mixtures in Chapter 4, which analyzed the effect of using simpler components and processes in UHPFRC. The main idea was to use local components available on the Spanish market, identifying the combinations that provide the best rheological and mechanical properties. Steam curing was avoided, since a simplification of the process was intended. Different dosages were developed to suit various performance levels, always trying to be as economical as possible. The concretes designed were self-compacting and mainly combined two fiber types (hybrid), as flexural performance was of greater relevance. The compressive strengths obtained ranged between 100 and 170 MPa (cube, L = 100 mm), and the flexural strengths between 15 and 45 MPa (prism, 100 x 100 x 500 mm). Some of the components introduced are very rarely used in UHPFRC, such as limestone coarse aggregate or FC3R, a white active residue from the petroleum industry. As a result of the research, simple and practical guidelines are provided for designers of UHPFRC dosages. At the end of this chapter, five dosages are characterized as examples of concretes suitable for applications with different requirements. In the second part, a bolted joint connection between UHPFRC elements is proposed. The connection system would be especially useful for strut-and-tie elements, such as truss structures.
The possible UHPFRC failure modes are introduced, and two different types of tests were designed and performed to evaluate the joint capacity. The geometry of the UHPFRC elements was varied in order to correlate it with the failure mode and the maximum load reached. A linear finite element analysis was also performed to analyze the connection of the UHPFRC elements; it supported the results of the experimental tests in deriving formulations that predict the maximum load for each failure mode. Finally, a real-size truss structure was assembled with bolted joints and tested to verify the good structural behavior of these connections. To conclude, some applications designed and developed at the Universitat Politècnica de València with the methods and knowledge acquired on UHPFRC are summarized. In many of them the material was mixed and poured in a traditional precast concrete plant, with adequate rheological and mechanical results, showing the viability of a simpler UHPFRC technology and enabling some of the first applications in Spain with this material.
Camacho Torregrosa, EE. (2013). Dosage optimization and bolted connections for UHPFRC ties [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/34790
TESIS
APA, Harvard, Vancouver, ISO, and other styles
39

Coursen, Jeffrey Thomas. "An experiment in joint product price optimization price elasticities and subsitution [sic] decisions of the hungry barfly /." Connect to this title online, 2007. http://etd.lib.clemson.edu/documents/1202498978/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

SILVA, ALEXANDRE MOREIRA DA. "TWO-STAGE ROBUST OPTIMIZATION MODELS FOR POWER SYSTEM OPERATION AND PLANNING UNDER JOINT GENERATION AND TRANSMISSION SECURITY CRITERIA." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2014. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=24754@1.

Full text
Abstract:
PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
Recent major blackouts all over the world have been a driving force in making power system reliability, regarding multiple contingencies, a subject of worldwide research. Within this context, it is important to investigate efficient methods of protecting the system against dependent and/or independent failures. In this sense, the incorporation of tighter security criteria in power system operation and planning has become crucial. Multiple contingencies are more common and dangerous than natural independent faults, mainly because of the complexity of power system dynamic stability. In addition, the protection system that operates in parallel to the supply system is not free of failures, so natural faults can cause subsequent cascading contingencies due to the malfunction of protection mechanisms or the instability of the overall system. These facts drive the search for more stringent security criteria, for example n - K, where K can be greater than 2. In the present work, the main objective is to incorporate joint generation and transmission security criteria in power system operation and planning models; in addition to generator outages, network constraints and transmission line failures are also accounted for. This improvement leads to new computational challenges, for which we design efficient solution methodologies based on Benders decomposition. Regarding operation, two approaches are presented. The first proposes a trilevel optimization model to decide the optimal scheduling of energy and reserves under an n - K security criterion; the high dimensionality of considering network constraints as well as outages of generation and transmission assets is withstood by implicitly taking the set of possible contingencies into account. The second approach includes correlated nodal demand uncertainty in the same framework. Regarding transmission expansion planning, another trilevel optimization model is proposed to decide which transmission assets should be built within a set of candidates in order to meet an n - K security criterion and, consequently, boost power system reliability. The main contributions of this work are therefore: 1) trilevel models that consider general n - K security criteria in power system operation and planning; 2) implicit consideration of the whole contingency set by means of an adjustable robust optimization approach; 3) co-optimization of energy and reserves for power system operation, regarding network constraints and ensuring the deliverability of reserves in all considered post-contingency states; 4) efficient solution methodologies based on Benders decomposition that converge in finitely many steps to the global optimal solution; and 5) the development of valid constraints that boost computational efficiency. Case studies highlight the effectiveness of the proposed methodologies in capturing the economic effect of nodal demand correlation on power system operation under an n - 1 security criterion, in reducing the computational effort needed to consider the conventional n - 1 and n - 2 security criteria, and in handling security criteria tighter than n - 2, an intractable problem heretofore.
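As a toy illustration of the n - K criterion these models enforce, the check below lets an adversary remove up to K generating units and declares the schedule secure only if the surviving capacity still covers demand. Network constraints, and hence reserve deliverability, are omitted, and all numbers are illustrative; the thesis handles the full problem without enumerating contingencies, via adjustable robust optimization and Benders decomposition.

```python
from itertools import combinations

capacity = [50.0, 40.0, 30.0, 30.0, 20.0]   # scheduled output + reserve (MW)
demand, K = 100.0, 2

def worst_case_shed(capacity, demand, K):
    # Enumerate every outage of up to K units and track the largest
    # resulting load shed (0 means the schedule is n-K secure).
    shed = 0.0
    for k in range(1, K + 1):
        for out in combinations(range(len(capacity)), k):
            remaining = sum(c for i, c in enumerate(capacity) if i not in out)
            shed = max(shed, max(0.0, demand - remaining))
    return shed

shed = worst_case_shed(capacity, demand, K)
print("secure under n-%d" % K if shed == 0.0
      else "worst-case shed %.1f MW" % shed)
```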
APA, Harvard, Vancouver, ISO, and other styles
41

Watanabe, Tetsuyou. "Optimization of grasping by a robotic hand and trajectory design of 3-D.O.F. arm with an unactuated joint." 京都大学 (Kyoto University), 2003. http://hdl.handle.net/2433/148855.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Asseln, Malte [Verfasser], Klaus [Akademischer Betreuer] Radermacher, and Dieter Christian [Akademischer Betreuer] Wirtz. "Morphological and functional analysis of the knee joint for implant design optimization / Malte Asseln ; Klaus Radermacher, Dieter Christian Wirtz." Aachen : Universitätsbibliothek der RWTH Aachen, 2019. http://d-nb.info/1221099043/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Moety, Farah. "Joint minimization of power and delay in wireless access networks." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S108/document.

Full text
Abstract:
In wireless access networks, one of the most recent challenges is reducing the power consumption of the network while preserving the quality of service perceived by the end users. This thesis provides solutions to this challenging problem considering two objectives, namely saving power and minimizing the transmission delay. Since these objectives are conflicting, a tradeoff becomes inevitable. We therefore formulate a multi-objective optimization problem that jointly minimizes the network power consumption and the transmission delay. Power saving is achieved by adjusting the operation mode of the network base stations (BSs) from high transmit power levels to lower levels or even sleep mode; minimizing the transmission delay is achieved by selecting the best user association with the network BSs. We cover two different wireless networks chosen for their relevance: IEEE 802.11 wireless local area networks and LTE cellular networks.
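The power/delay compromise can be previewed with a tiny weighted-sum sketch: each base station is either ON or in sleep mode, each user associates with one active BS, and a scalarization weight w trades total power against total transmission delay. The power levels, user rates, and 1/rate delay model below are toy assumptions, not the system model used in the thesis.

```python
import numpy as np
from itertools import product

P_ON, P_SLEEP = 10.0, 1.0
rate = np.array([[5.0, 2.0],    # achievable rate of user u on BS b
                 [2.0, 5.0],
                 [4.0, 4.0]])

def evaluate(modes, assoc):
    if any(modes[b] == 0 for b in assoc):
        return None                            # user tied to a sleeping BS
    power = sum(P_ON if m else P_SLEEP for m in modes)
    delay = sum(1.0 / rate[u, b] for u, b in enumerate(assoc))
    return power, delay

for w in (0.1, 0.5, 0.9):
    best = None
    for modes in product([0, 1], repeat=2):        # BS operation modes
        for assoc in product([0, 1], repeat=3):    # user-BS association
            pd = evaluate(modes, assoc)
            if pd is None:
                continue
            score = w * pd[0] + (1 - w) * pd[1]
            if best is None or score < best[0]:
                best = (score, pd, modes)
    print("w=%.1f: power %.1f W, delay %.2f s, BS modes %s"
          % (w, best[1][0], best[1][1], best[2]))
```

Sweeping w traces the tradeoff: delay-heavy weights keep both BSs on, while power-heavy weights put one BS to sleep at the cost of longer transmission delays.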
APA, Harvard, Vancouver, ISO, and other styles
44

Azegami, Hideyuki (畔上秀幸), and Satoshi Koyama (小山悟史). "Shape design of the interface between dissimilar materials producing a prescribed deformation [規定した変形を生む異種材料境界面の形状設計]." The Japan Society of Mechanical Engineers (日本機械学会), 2005. http://hdl.handle.net/2237/12187.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Cheng, Jianqiang. "Stochastic Combinatorial Optimization." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112261.

Full text
Abstract:
In this thesis, we study three types of stochastic problems: chance constrained problems, distributionally robust problems, and simple recourse problems. Stochastic programming presents two main difficulties: the feasible set of a stochastic problem is in general not convex, and evaluating conditional expectations or probabilities involves multi-dimensional integration. Due to these two major difficulties, all three problems are solved with approximation approaches. We first study two types of chance constrained problems: the linear program with joint chance constraints (LPPC) and the maximum probability problem (MPP). For both problems, we assume that the random matrix is normally distributed and that its row vectors are independent. LPPC, which is generally not convex, is approximated by two second-order cone programming (SOCP) problems; under mild conditions, the optimal values of the two SOCPs are lower and upper bounds of the original problem, respectively. For MPP, we study a variant of the stochastic resource constrained shortest path problem (SRCSP), which maximizes the probability of satisfying the resource constraints. To solve it, we propose a branch-and-bound framework to find the optimal solution; as the corresponding linear relaxation is generally not convex, we give a convex approximation. Numerical tests on random instances were conducted for both problems: for LPPC, the proposed approach outperforms the Bonferroni and Jagannathan approximations, and for MPP, the convex approximation outperforms the individual chance constraint approximation. We then study a distributionally robust stochastic quadratic knapsack problem, where only partial information about the random variables, such as the first and second moments, is known. We prove that the single knapsack problem (SKP) becomes a semidefinite program (SDP) after applying the SDP relaxation scheme to the binary constraints. Although this is not the case for the multidimensional knapsack problem (MKP), two good approximations of the relaxed version of the problem are provided, yielding upper and lower bounds that appear numerically close to each other for a range of problem instances. Our numerical experiments also indicate that the proposed lower bounding approximation outperforms the approximations based on Bonferroni's inequality and on the work of Zymler et al. An extensive set of experiments illustrates how the conservativeness of the robust solutions pays off in ensuring that the chance constraint is satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, our approach can be applied to a large class of stochastic optimization problems with binary variables. Finally, a stochastic version of the shortest path problem is studied. We prove that in some cases the stochastic shortest path problem can be greatly simplified by reformulating it as the classic shortest path problem, which can be solved in polynomial time. To solve the general problem, we propose a branch-and-bound framework to search the set of feasible paths. Lower bounds are obtained by solving the corresponding linear relaxation, which in turn is done using a stochastic projected gradient algorithm involving an active set method. Numerical examples illustrate the effectiveness of the algorithm; concerning the resolution of the continuous relaxation, our stochastic projected gradient algorithm clearly outperforms the Matlab optimization toolbox on large graphs.
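For an individual Gaussian chance constraint, the deterministic equivalent underlying such SOCP approximations is classical: with a ~ N(mu, Sigma) and x fixed, P(a^T x <= b) >= 1 - eps holds exactly when mu^T x + z_{1-eps} * sqrt(x^T Sigma x) <= b. The sketch below verifies this identity by Monte Carlo on toy data; joint chance constraints, as in LPPC, require the additional bounding machinery the thesis develops.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
mu = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.3]])
x, eps = np.array([1.0, 1.0]), 0.05

# Tightest right-hand side for which the deterministic equivalent holds.
b = mu @ x + norm.ppf(1 - eps) * np.sqrt(x @ Sigma @ x)

samples = rng.multivariate_normal(mu, Sigma, size=200_000)
print("empirical P(a^T x <= b) = %.4f (target %.2f)"
      % ((samples @ x <= b).mean(), 1 - eps))
```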
APA, Harvard, Vancouver, ISO, and other styles
46

Potvin, Brigitte. "Predicting Muscle Activations in a Forward-Inverse Dynamics Framework Using Stability-Inspired Optimization and an In Vivo-Based 6DoF Knee Joint." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/34647.

Full text
Abstract:
Modeling and simulations are useful tools to help understand knee function and injuries. As there are more muscles in the human knee joint than equations of motion, optimization protocols are required to resolve the redundancy. The purpose of this thesis was to improve the biofidelity of these simulations by adding in vivo constraints, derived from experimental intra-cortical pin data, and stability-inspired objective functions within an OpenSim-Matlab forward-inverse dynamics simulation framework, and to study their effect on lower limb muscle activation predictions. Results of this project suggest that constraining the model knee joint's ranges of motion with pin data had a significant impact on lower limb kinematics, especially in the rotational degrees of freedom; this affected muscle activation predictions and knee joint loading when compared with unconstrained kinematics. Furthermore, changing the objective changes the muscle activation predictions, although minimization of muscle activation as an objective remains more accurate than the stability-inspired functions, at least for gait.
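The redundancy problem and the classical minimum-activation objective mentioned above can be sketched as a static optimization at a single joint: activations are found by minimizing the sum of squared activations subject to a joint-torque balance. The moment arms, muscle strengths, and required torque below are illustrative assumptions, far simpler than the OpenSim models used in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.035, -0.03])        # moment arms (m); last is antagonist
F_max = np.array([1500.0, 1200.0, 1800.0])  # maximal isometric forces (N)
tau_required = 60.0                       # joint torque to produce (N*m)

def objective(a):
    # Classic minimum-activation criterion (sum of squared activations).
    return float(np.sum(a ** 2))

cons = {"type": "eq",
        "fun": lambda a: r @ (a * F_max) - tau_required}  # torque balance
res = minimize(objective, x0=np.full(3, 0.5),
               bounds=[(0.0, 1.0)] * 3, constraints=cons)
print("activations:", np.round(res.x, 3))
```

A stability-inspired objective, as in the thesis, would replace `objective` with a function rewarding joint stiffness or co-contraction, generally yielding different activation predictions for the same torque demand.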
APA, Harvard, Vancouver, ISO, and other styles
47

Lourens, Spencer. "Bias in mixtures of normal distributions and joint modeling of longitudinal and time-to-event data with monotonic change curves." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1685.

Full text
Abstract:
Estimating parameters in a mixture of normal distributions dates back to the 19th century, when Pearson originally considered data on crabs from the Bay of Naples. Since then, many real-world applications of mixtures have led to various proposed methods for studying similar problems. Among them, maximum likelihood estimation (MLE) and the continuous empirical characteristic function (CECF) methods have drawn the most attention. However, the performance of these competing estimation methods has not been thoroughly studied in the literature, and conclusions in published research have not been consistent. In this work, we review this classical problem with a focus on estimation bias. An extensive simulation study is conducted to compare the estimation bias between the MLE and CECF methods over a wide range of disparity values. We use the overlapping coefficient (OVL) to measure the amount of disparity, and provide a practical guideline for estimation quality in mixtures of normal distributions. Application to an ongoing multi-site Huntington disease study is illustrated for ascertaining cognitive biomarkers of disease progression. We also study joint modeling of longitudinal and time-to-event data and discuss pattern-mixture and selection models, but focus on shared parameter models, which utilize unobserved random effects in order to "join" a marginal longitudinal model and a marginal survival model, allowing assessment of an internal time-dependent covariate's effect on time-to-event. The marginal models used in the analysis are the Cox Proportional Hazards model and the Linear Mixed model, and both are covered in some detail before joint models are defined and the estimation process is described. Joint modeling provides a framework that accounts for correlation between the longitudinal data and the time-to-event data, while also accounting for measurement error in the longitudinal process, which previous methods failed to do. Since it has been shown that bias is incurred in proportion to the amount of measurement error, a joint modeling approach is preferred. Our setting is further complicated by monotone degeneration of the internal covariate considered, so a joint model is proposed which utilizes monotone B-splines to recover the longitudinal trajectory and a Cox Proportional Hazards (CPH) model for the time-to-event data. The monotonicity constraints are satisfied via the projected Newton-Raphson algorithm described by Cheng et al., 2012, with the baseline hazard profiled out of the $Q$ function in each M-step of the Expectation Maximization (EM) algorithm used for optimizing the observed likelihood. This method is applied to assess the ability of the Total Motor Score (TMS) to predict Huntington disease motor diagnosis in the Biological Predictors of Huntington's Disease study (PREDICT-HD) data.
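The mixture-estimation problem whose bias the first part studies is the classic two-component normal mixture; a minimal EM implementation of the MLE, on simulated data with substantial overlap and illustrative starting values, looks as follows.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated two-component mixture with substantial overlap (toy data).
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
pi, mu, sd = 0.5, np.array([-1.0, 3.0]), np.array([1.0, 1.0])

def normal_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(200):                       # EM iterations
    # E-step: posterior responsibility of component 2.
    p1 = (1 - pi) * normal_pdf(x, mu[0], sd[0])
    p2 = pi * normal_pdf(x, mu[1], sd[1])
    g = p2 / (p1 + p2)
    # M-step: weighted maximum likelihood updates.
    pi = g.mean()
    mu = np.array([np.average(x, weights=1 - g), np.average(x, weights=g)])
    sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=1 - g),
                  np.average((x - mu[1]) ** 2, weights=g)])

print("pi=%.2f, means=%s, sds=%s" % (pi, np.round(mu, 2), np.round(sd, 2)))
```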
APA, Harvard, Vancouver, ISO, and other styles
48

Liu, Penghuan. "Statistical and numerical optimization for speckle blind structured illumination microscopy." Thesis, Ecole centrale de Nantes, 2018. http://www.theses.fr/2018ECDN0008/document.

Full text
Abstract:
Conventional structured illumination microscopy (SIM) can surpass the resolution limit in optical microscopy caused by the diffraction effect, by illuminating the object with a set of perfectly known harmonic patterns. However, controlling the illumination patterns is a difficult task; even worse, strong distortions of the light grid can be induced by the sample within the investigated volume, which may give rise to strong artifacts in SIM reconstructed images. Recently, blind-SIM strategies were proposed, where images are acquired through unknown, non-harmonic speckle illumination patterns, which are much easier to generate in practice. The super-resolution capacity of such approaches has been observed, although it was not well understood theoretically. This thesis presents two new reconstruction methods for SIM using unknown speckle patterns (blind-speckle-SIM): a joint reconstruction approach and a marginal reconstruction approach. In the joint reconstruction approach, the object and the speckle patterns are estimated together through a basis pursuit denoising (BPDN) model with ℓp,q-norm regularization, with p ≥ 1 and 0 < q ≤ 1.
APA, Harvard, Vancouver, ISO, and other styles
49

ALJhayyish, Anwer K. "Optimizing Slab Thickness and Joint Spacing for Long-Life Concrete Pavement in Ohio." Ohio University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1550099928352708.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wang, Lu. "Nonnegative joint diagonalization by congruence for semi-nonnegative independent component analysis." Thesis, Rennes 1, 2014. http://www.theses.fr/2014REN1S141/document.

Full text
Abstract:
The Joint Diagonalization of a set of matrices by Congruence (JDC) appears in a number of signal processing problems, such as Independent Component Analysis (ICA). Recent developments in ICA under a nonnegativity constraint on the mixing matrix, known as semi-nonnegative ICA, allow us to obtain a more realistic representation of some real-world phenomena, such as audio signals, images and biomedical signals. Consequently, the main objective of this thesis was not only to design and develop semi-nonnegative ICA methods based on novel nonnegative JDC algorithms, in which the sought transformation matrix is nonnegative, but also to illustrate their interest in applications involving Blind Source Separation (BSS). The proposed nonnegative JDC algorithms exploit two fundamental optimization strategies. The first family, containing five semi-algebraic algorithms, is based on Jacobi-like optimization; the nonnegativity constraint is imposed by means of a square change of variable, leading to an unconstrained problem. The general idea of the Jacobi-like optimization is to i) factorize the matrix variable as a product of a sequence of elementary matrices, each defined by only one parameter, and then ii) estimate these elementary matrices one by one in a specific order. The second family, containing one algorithm, is based on the alternating direction method of multipliers; the algorithm is derived by successively minimizing the augmented Lagrangian function with respect to the variables and the multipliers. Experimental results on simulated matrices show better performance of the proposed algorithms in comparison with several classical JDC methods, which do not exploit the nonnegativity constraint as prior information. Our methods achieve better estimation accuracy, particularly in difficult contexts, for example for a low signal-to-noise ratio, a small number of input matrices, or a high coherence level of the mixing matrix. We then show the interest of our approaches in solving real-life BSS problems, among them: i) the analysis of chemical compounds in magnetic resonance spectroscopy; ii) the identification of the harmonically fixed spectral profiles (such as piano notes) of a single-channel music record by decomposing its spectrogram; iii) the partial removal of the show-through effect in digital images, caused by scanning a semi-transparent paper. These applications demonstrate the validity and improvements of our algorithms in comparison with several state-of-the-art BSS methods.
APA, Harvard, Vancouver, ISO, and other styles