Dissertations / Theses on the topic 'Probability constraints'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Probability constraints.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chernyy, Vladimir. "On portfolio optimisation under drawdown and floor type constraints." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:19dee50e-466b-46b5-83ae-5816d3b27c62.

Full text
Abstract:
This work is devoted to a portfolio optimisation problem arising in the context of constrained optimisation. In contrast to the classical convex constraints imposed on the proportion of wealth invested in the stock, this work deals with pathwise constraints. The drawdown constraint requires an investor's wealth process to dominate a given function of its up-to-date maximum. Typically, fund managers are required to post information about their maximum portfolio drawdowns as part of the risk management procedure. One of the results of this work connects the drawdown-constrained and the unconstrained asymptotic portfolio optimisation problems in an explicit manner. The main tools for achieving the connection are Azema-Yor processes, which by their nature satisfy the drawdown condition. The other result deals with the constraint given as a floor process which the wealth process is required to dominate. The motivation arises from the financial market, where this class of products serves as protection from a downfall, e.g. out-of-the-money put options. The main result provides the wealth process which dominates any fraction of a given floor and preserves optimality. In the second part of this work we consider the problem of maximising lifetime utility of consumption subject to a drawdown constraint. One contribution to the existing literature consists of extending the results to incorporate a general drawdown constraint for the case of a zero interest rate market. The second result provides the first heuristic results for the problem in the presence of interest rates, which differs qualitatively from the zero interest rate case. The last chapter concludes with a conjecture for the general case of the problem.
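For orientation, a minimal sketch of the drawdown constraint described above (generic notation, not taken from the thesis): writing X for the wealth process and \bar X_t = \sup_{s \le t} X_s for its running maximum, the constraint requires

X_t \ge w(\bar X_t) \quad \text{for all } t \ge 0,

a common choice being w(x) = \alpha x with \alpha \in (0,1), i.e. the portfolio is never allowed to fall below a fixed fraction \alpha of its historical maximum; Azema-Yor processes satisfy such a condition by construction.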
APA, Harvard, Vancouver, ISO, and other styles
2

Al-jasser, Faisal M. A. "Phonotactic probability and phonotactic constraints : processing and lexical segmentation by Arabic learners of English as a foreign language." Thesis, University of Newcastle Upon Tyne, 2008. http://hdl.handle.net/10443/537.

Full text
Abstract:
A fundamental skill in listening comprehension is the ability to recognize words. The ability to accurately locate word boundaries (i.e. to lexically segment) is an important contributor to this skill. Research has shown that English native speakers use various cues in the signal in lexical segmentation. One such cue is phonotactic constraints; more specifically, the presence of illegal English consonant sequences such as AV and MY signals word boundaries. It has also been shown that phonotactic probability (i.e. the frequency of segments and sequences of segments in words) affects native speakers' processing of English. However, the role that phonotactic probability and phonotactic constraints play in the EFL classroom has hardly been studied, while much attention has been devoted to teaching listening comprehension in EFL. This thesis reports on an intervention study which investigated the effect of teaching English phonotactics upon Arabic speakers' lexical segmentation of running speech in English. The study involved a native English group (N = 12), a non-native-speaking control group (N = 20) and a non-native-speaking experimental group (N = 20). Each of the groups took three tests, namely Non-word Rating, Lexical Decision and Word Spotting. These tests probed how sensitive the subjects were to English phonotactic probability and to the presence of illegal sequences of phonemes in English, and investigated whether they used these sequences in the lexical segmentation of English. The non-native groups were post-tested with the same tasks after only the experimental group had been given a treatment which consisted of explicit teaching of relevant English phonotactic constraints and related activities for 8 weeks. The gains made by the experimental group are discussed, with implications for teaching both pronunciation and listening comprehension in an EFL setting.
APA, Harvard, Vancouver, ISO, and other styles
3

Hu, Yanling. "SOME CONTRIBUTIONS TO THE CENSORED EMPIRICAL LIKELIHOOD WITH HAZARD-TYPE CONSTRAINTS." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/150.

Full text
Abstract:
Empirical likelihood (EL) is a recently developed nonparametric method of statistical inference. Owen’s 2001 book contains many important results for EL with uncensored data. However, fewer results are available for EL with right-censored data. In this dissertation, we first investigate a right-censored-data extension of Qin and Lawless (1994). They studied EL with uncensored data when the number of estimating equations is larger than the number of parameters (over-determined case). We obtain results similar to theirs for the maximum EL estimator and the EL ratio test, for the over-determined case, with right-censored data. We employ hazard-type constraints which are better able to handle right-censored data. Then we investigate EL with right-censored data and a k-sample mixed hazard-type constraint. We show that the EL ratio test statistic has a limiting chi-square distribution when k = 2. We also study the relationship between the constrained Kaplan-Meier estimator and the corresponding Nelson-Aalen estimator. We try to prove that they are asymptotically equivalent under certain conditions. Finally we present simulation studies and examples showing how to apply our theory and methodology with real data.
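As background (this is the standard uncensored formulation, not the censored hazard-type version developed in the dissertation), the empirical likelihood ratio for a parameter \theta defined through estimating equations E[g(X,\theta)] = 0 is

R(\theta) = \max\Big\{ \prod_{i=1}^{n} n p_i \;:\; p_i \ge 0,\ \sum_{i=1}^{n} p_i = 1,\ \sum_{i=1}^{n} p_i\, g(X_i,\theta) = 0 \Big\},

and, as in Qin and Lawless (1994), the number of components of g may exceed the dimension of \theta, which is the over-determined case studied here with right-censored data and hazard-type constraints.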
APA, Harvard, Vancouver, ISO, and other styles
4

Brahmantio, Bayu Beta. "Efficient Sampling of Gaussian Processes under Linear Inequality Constraints." Thesis, Linköpings universitet, Statistik och maskininlärning, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-176246.

Full text
Abstract:
In this thesis, newer Markov Chain Monte Carlo (MCMC) algorithms are implemented and compared in terms of their efficiency in the context of sampling from Gaussian processes under linear inequality constraints. Extending the Gaussian process framework that uses the Gibbs sampler, two MCMC algorithms, Exact Hamiltonian Monte Carlo (HMC) and Analytic Elliptical Slice Sampling (ESS), are used to sample from the truncated multivariate Gaussian distributions that arise in Gaussian process regression models with linear inequality constraints. In terms of generating samples from Gaussian processes under linear inequality constraints, the proposed methods generally produce samples that are less correlated than samples from the Gibbs sampler. Time-wise, Analytic ESS proves to be the faster choice, while Exact HMC produces the least correlated samples.
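A minimal sketch of the target distribution involved here, assuming a naive rejection strategy for illustration only (the Gibbs, Exact HMC and Analytic ESS samplers compared in the thesis are far more efficient): drawing from a multivariate Gaussian restricted to linear inequality constraints F x >= g.

import numpy as np

def sample_truncated_mvn(mean, cov, F, g, n_samples, rng=None):
    """Rejection sampler for x ~ N(mean, cov) conditioned on F @ x >= g.

    Simple but inefficient when the feasible region has small probability;
    dedicated samplers (Gibbs, Exact HMC, Analytic ESS) avoid this problem.
    """
    rng = np.random.default_rng() if rng is None else rng
    samples = []
    while len(samples) < n_samples:
        x = rng.multivariate_normal(mean, cov)
        if np.all(F @ x >= g):  # keep only draws satisfying every constraint
            samples.append(x)
    return np.array(samples)

# Example: bivariate Gaussian restricted to x0 >= 0 and x1 >= x0
mean = np.zeros(2)
cov = np.array([[1.0, 0.5], [0.5, 1.0]])
F = np.array([[1.0, 0.0], [-1.0, 1.0]])
g = np.zeros(2)
draws = sample_truncated_mvn(mean, cov, F, g, n_samples=100)

Rejection sampling becomes impractical when the feasible region carries little probability mass, which is precisely why the specialised samplers compared in the thesis matter.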
APA, Harvard, Vancouver, ISO, and other styles
5

Kokrda, Lukáš. "Optimalizace stavebních konstrukcí s pravděpodobnostními omezeními." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232181.

Full text
Abstract:
The diploma thesis deals with a penalty approach to stochastic optimization with chance constraints, applied to structural mechanics. The problem of optimal design of beam dimensions is modelled and solved. The uncertainty is involved in the form of a random load. The corresponding mathematical model contains a condition in the form of an ordinary differential equation that is solved by the finite element method. The probability condition is approximated by several types of penalty functions. The results are obtained by computations in the MATLAB software.
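A minimal sketch of the penalty idea described above (generic notation assumed for illustration, not taken from the thesis): a design constraint required to hold with probability at least 1 - \varepsilon,

P\big(g(x,\xi) \le 0\big) \ge 1 - \varepsilon,

is replaced by the penalised problem

\min_x \; f(x) + \rho\,\big[P\big(g(x,\xi) > 0\big) - \varepsilon\big]_+ ,

where [u]_+ = \max(u,0) and \rho > 0 is the penalty parameter; the violation probability is evaluated numerically, with the beam response obtained from the finite element solution under the random load.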
APA, Harvard, Vancouver, ISO, and other styles
6

Henry, Tyler R. "Constrained short selling and the probability of informed trade /." Thesis, Connect to this title online; UW restricted, 2005. http://hdl.handle.net/1773/8716.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Pfeiffer, Laurent. "Sensitivity analysis for optimal control problems. Stochastic optimal control with a probability constraint." Palaiseau, Ecole polytechnique, 2013. https://pastel.hal.science/docs/00/88/11/19/PDF/thesePfeiffer.pdf.

Full text
Abstract:
This thesis is divided into two parts. In the first part, we study constrained deterministic optimal control problems and sensitivity analysis issues from the point of view of abstract optimization. Second-order necessary and sufficient optimality conditions, which play an important role in sensitivity analysis, are also investigated. In this thesis, we are interested in strong solutions; we use this generic term for controls that are, roughly speaking, locally optimal for the L1-norm. We use two essential tools: a relaxation technique, which consists in using several controls simultaneously, and a decomposition principle, which is a particular second-order Taylor expansion of the Lagrangian. Chapters 2 and 3 deal with second-order necessary and sufficient optimality conditions for strong solutions of problems with pure, mixed and final-state constraints. In Chapter 4, we perform a sensitivity analysis for strong solutions of relaxed problems with final-state constraints. In Chapter 5, we perform a sensitivity analysis for a problem of nuclear energy production. In the second part of the thesis, we study stochastic optimal control problems with a probability constraint. We study a dynamic programming approach in which the probability level is viewed as an additional state variable. In this framework, we show that the sensitivity of the value function with respect to the probability level is constant along optimal trajectories. We use this analysis to design numerical schemes for continuous-time problems. These results are presented in Chapter 6, in which we also study an application to asset-liability management.
APA, Harvard, Vancouver, ISO, and other styles
8

Prezioso, Luca. "Financial risk sources and optimal strategies in jump-diffusion frameworks." Doctoral thesis, Università degli studi di Trento, 2020. http://hdl.handle.net/11572/254880.

Full text
Abstract:
An optimal dividend problem with investment opportunities is considered, taking into account a source of strategic risk as well as the effect of market frictions on the decision process of financial entities. It concerns the problem of determining an optimal control of the dividend under debt constraints and investment opportunities in an economy with business cycles. It is assumed that the company is allowed to accept or reject investment opportunities arriving at random times with random sizes, by changing its outstanding indebtedness, which would impact its capital structure and risk profile. This work mainly focuses on the strategic risk faced by companies; in particular, it focuses on the manager's problem of setting appropriate priorities to deploy the limited resources available. This component is taken into account by introducing frictions in the capital structure modification process. The problem is formulated as a bi-dimensional singular control problem under regime switching in the presence of jumps. An explicit condition is obtained in order to ensure that the value function is finite. A viscosity solution approach is used to obtain qualitative descriptions of the solution. Moreover, a lending scheme for a system of interconnected banks with probabilistic constraints of failure is considered. The problem arises from the fact that financial institutions cannot possibly carry enough capital to withstand counterparty failures or systemic risk. In such situations, the central bank or the government effectively becomes the risk manager of last resort or, in extreme cases, the lender of last resort. If, on the one hand, the health of the whole financial system depends on government intervention, on the other hand, guaranteeing a high probability of salvage may increase the moral hazard of the banks in the financial network. A closed-form solution for an optimal control problem related to interbank lending schemes has been derived, subject to terminal probability constraints on the failure of banks which are interconnected through a financial network. The derived solution applies to real bank networks, and a general solution is obtained when the aforementioned probability constraints are assumed for all the banks. We also present a direct method to compute the systemic relevance parameter for each bank within the network. Finally, a possible computation technique for the Default Risk Charge under regulatory risk measurement processes is considered. We focus on the Default Risk Charge measure as an effective alternative to the Incremental Risk Charge, proposing its implementation by a quasi-exhaustive heuristic algorithm to determine the minimum capital requested of a bank facing the market risk associated with portfolios based on assets issued by several financial agents. While most banks use the Monte Carlo simulation approach and the empirical quantile to estimate this risk measure, we provide new computational approaches, exhaustive or heuristic, which are currently becoming feasible because of both new regulation and the high-speed, low-cost technology available nowadays.
APA, Harvard, Vancouver, ISO, and other styles
9

Et, Tabii Mohamed. "Contributions aux descriptions optimales par modèles statistiques exponentiels." Rouen, 1997. http://www.theses.fr/1997ROUES028.

Full text
Abstract:
This thesis is devoted to improving the approximation of an unknown distribution by the method of I-projection under constraints. The improvement consists in minimizing a Kullback information in order to obtain the I-projection. The I-projection is defined through a reference probability and the functions that constitute the constraints. The role of these ingredients in improving the approximation is made explicit. We develop a methodology for characterizing the best constraints, taking into account plausible properties of the density of the unknown distribution. It further consists in constructing, for the I-projection method, a reference probability better than the initial one. Numerical applications show the relevance of the method.
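For orientation (this is the standard definition, stated here for context rather than taken from the thesis), the I-projection of a reference probability Q onto a constraint set C is

P^{*} = \arg\min_{P \in C} D(P \,\|\, Q), \qquad C = \{\, P : \mathbb{E}_P[f_i] = a_i,\ i = 1,\dots,k \,\},

where D(P\|Q) = \int \log\frac{dP}{dQ}\, dP is the Kullback information; under standard conditions P^{*} has a density proportional to \exp\big(\sum_i \lambda_i f_i\big) with respect to Q, i.e. it lies in an exponential family built on the constraint functions, which is why the choice of Q and of the f_i drives the quality of the approximation.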
APA, Harvard, Vancouver, ISO, and other styles
10

Hansson, Patrik. "Overconfidence and Format Dependence in Subjective Probability Intervals: Naive Estimation and Constrained Sampling." Licentiate thesis, Umeå University, Department of Psychology, 2005. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-14737.

Full text
Abstract:

A particular field in research on judgment and decision making (JDM) is concerned with realism of confidence in one's knowledge. An interesting finding is the so-called format dependence effect, which implies that assessment of the same probability distribution generates different conclusions about over- or underconfidence bias depending on the assessment format. In particular, expressing a belief about some unknown quantity in the form of a confidence interval is severely prone to overconfidence as compared to expressing the belief as an assessment of a probability. This thesis gives a tentative account of this finding in terms of a Naïve Sampling Model (NSM; Juslin, Winman, & Hansson, 2004), which assumes that people accurately describe their available information stored in memory but they are naïve in the sense that they treat sample properties as proper estimators of population properties. The NSM predicts that it should be possible to reduce the overconfidence in interval production by changing the response format into interval evaluation, and to manipulate the degree of format dependence between interval production and interval evaluation. These predictions are verified in empirical experiments which contain both general knowledge tasks (Study 1) and laboratory learning tasks (Study 2). A bold hypothesis, that working memory is a constraining factor for sample size in judgment, which suggests that experience per se does not eliminate overconfidence, is investigated and verified. The NSM predicts that the absolute error of the placement of the interval is a constant fraction of interval size, a prediction that is verified (Study 2). This thesis suggests that no cognitive processing bias (Tversky & Kahneman, 1974) over and above naivety is needed to understand and explain the overconfidence bias in interval production and hence the format dependence effect.

APA, Harvard, Vancouver, ISO, and other styles
11

Elbahloul, Salem A. "Modelling of turbulent flames with transported probability density function and rate-controlled constrained equilibrium methods." Thesis, Imperial College London, 2013. http://hdl.handle.net/10044/1/30826.

Full text
Abstract:
In this study, turbulent diffusion flames have been modelled using the Transported Probability Density Function (PDF) method and chemistry reduction with Rate-Controlled Constrained Equilibrium (RCCE). RCCE is a systematic method of chemistry reduction which is employed to simulate the evolution of the chemical composition with a reduced number of species. It is based on the principle of chemical time-scale separation and is formulated in a generalised and systematic manner that allows a reduced mechanism to be derived given a set of constraint species. The transported scalar PDF method was coupled with RANS turbulence modelling, and this PDF-RANS methodology was exploited to simulate several turbulent diffusion flames with detailed and RCCE-reduced chemistry. The phenomena of extinction and reignition, soot formation and thermal radiation in these flames are explored. Sandia Flames D, E and F have been simulated with both the detailed GRI-3.0 mechanism and RCCE-reduced mechanisms. Scatter plots show that PDF methods with simple mixing models are able to reproduce different degrees of local extinction in the Sandia piloted flames. The PDF-RCCE results are compared with PDF simulations with the detailed mechanism and with measurements of the Sandia flames. The RCCE method predicted the three flames with the same level of accuracy as the detailed mechanism. The methodology has also been applied to sooting flames with radiative heat transfer. A semi-empirical soot model and an optically thin radiation model have been combined with the PDF-RCCE method to compute these flames. Methane flames measured by Brooks and Moss [26] have been predicted using several RCCE mechanisms with good agreement with measurements. The propane flame with preheated air [162] has also been simulated with the PDF-RCCE methodology. Gaseous species profiles of the propane flame compare reasonably well with measurements, but soot and temperature predictions in this flame were weak and improvements are still needed.
APA, Harvard, Vancouver, ISO, and other styles
12

Prakash, Sunil. "Modeling the Constraint Effects on Fracture Toughness of Materials." University of Akron / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=akron1259271280.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Picard, Vincent. "Réseaux de réactions : de l’analyse probabiliste à la réfutation." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S093/document.

Full text
Abstract:
A major goal in systems biology is to investigate the dynamical behavior of reaction networks. There exist two main dynamical frameworks: the first is deterministic, where the dynamics is described using ordinary differential equations; the second is probabilistic and relies on Markov chains. In both cases, one major issue is to determine the kinetic laws of the system together with its kinetic parameters. As a consequence, the direct study of large biological reaction networks is impossible. To deal with this issue, stationarity assumptions have been used. A widely used method is flux balance analysis, where systems of constraints are derived from information on the average slopes of the system trajectories. In this thesis, we construct a probabilistic analogue of this stationary analysis. The results are divided into three parts. First, we introduce a stationary analysis of the probabilistic dynamics which relies on a Bernoulli approximation. Second, this approximated dynamics allows us to derive systems of constraints from information about the means, variances and covariances of the system trajectories. Third, we present several applications of these systems of constraints, such as the possibility of rejecting reaction networks using information from experimental variances and covariances, and the formal verification of logical properties concerning the stationary regime of the system.
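For context, the deterministic stationary analysis mentioned above, flux balance analysis, has the standard form (generic notation, not the probabilistic analogue developed in the thesis)

\max_{v}\; c^{\top} v \quad \text{subject to} \quad S v = 0, \quad l \le v \le u,

where S is the stoichiometric matrix of the reaction network, v the vector of reaction fluxes (the average slopes of the trajectories) and c a linear objective; the thesis derives analogous constraint systems on the means, variances and covariances of the trajectories under a Bernoulli approximation of the stochastic dynamics.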
APA, Harvard, Vancouver, ISO, and other styles
14

Baek, Yeongcheon. "An interior point approach to constrained nonparametric mixture models /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/5753.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Sohul, Munawwar Mahmud. "PERFORMANCE OF LINEAR DECISION COMBINER FOR PRIMARY USER DETECTION IN COGNITIVE RADIO." OpenSIUC, 2011. https://opensiuc.lib.siu.edu/theses/705.

Full text
Abstract:
The successful implementation and deployment of various cognitive radio services are largely dependent on the spectrum sensing performance of the cognitive radio terminals. Previous work on detection for cognitive radio has suggested the necessity of user cooperation in order to be able to detect at the low signal-to-noise ratios experienced in practical situations. This report provides a brief overview of the impact of different fusion strategies on the spectrum hole detection performance of a fusion center in a distributed detection environment. Different decision rules and fusion strategies, such as the single-sensor scenario, the counting rule, and the linear decision metric, were used to analyze their influence on the spectrum sensing performance of the cognitive radio network. We consider a system of cognitive radio users who cooperate with each other in trying to detect licensed transmissions. Assuming that the cooperating nodes use identical energy detectors, we model the received signals as correlated log-normal random variables and study the problem of fusing the decisions made by the individual nodes. The cooperating radios were assumed to be designed in such a way that they satisfy the interference probability constraint individually. The interference probability constraint was also met at the fusion center. The simulation results strongly suggest that, even when the observations at the individual sensors are only moderately correlated, it is important not to ignore the correlation between the nodes when fusing the local decisions made by the secondary users. The thesis mainly focuses on the performance measurement of the linear decision combiner in detecting primary users in a cognitive radio network.
APA, Harvard, Vancouver, ISO, and other styles
16

Peng, Shen. "Optimisation stochastique avec contraintes en probabilités et applications." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS153/document.

Full text
Abstract:
Chance-constrained optimization is a natural and widely used approach to provide profitable and reliable decisions under uncertainty, and the topics around the theory and applications of chance-constrained problems are interesting and attractive. However, some important issues still require non-trivial efforts to solve. In view of this, we systematically investigate chance-constrained problems from the following perspectives. As the basis for chance-constrained problems, we first review the main research results about chance constraints from three perspectives: convexity of chance constraints, reformulations and approximations for chance constraints, and distributionally robust chance constraints. For stochastic geometric programs, we consider a joint rectangular geometric chance-constrained program. Under elliptically distributed and pairwise independent assumptions for the stochastic parameters, we derive a reformulation of joint rectangular geometric chance-constrained programs. As the reformulation is not convex, we propose new convex approximations based on a variable transformation together with piecewise linear approximation methods. Our numerical results show that these approximations are asymptotically tight. When the probability distributions are not known in advance, or the reformulation of the chance constraints is hard to obtain, bounds on chance constraints can be very useful. Therefore, we develop four upper bounds for individual and joint chance constraints with independent matrix row vectors. Based on the one-sided Chebyshev inequality and the Chernoff, Bernstein and Hoeffding inequalities, we propose deterministic approximations for chance constraints, and derive various sufficient conditions under which these approximations are convex and tractable. To further reduce computational complexity, we reformulate the approximations as tractable convex optimization problems based on piecewise linear and tangent approximations. Finally, numerical experiments on randomly generated data are discussed in order to identify the tight deterministic approximations. In some complex systems, the distribution of the random parameters is only known partially. To deal with uncertainty in terms of both the distribution and sample data, we propose a data-driven, mixture-distribution-based uncertainty set, constructed from the perspective of simultaneously estimating higher-order moments. With this uncertainty set, we derive a reformulation of the data-driven robust chance-constrained problem. As the reformulation is not a convex program, we propose new and tight convex approximations based on the piecewise linear approximation method under certain conditions. For the general case, we propose a DC approximation to derive an upper bound and a relaxed convex approximation to derive a lower bound for the optimal value of the original problem, and we establish the theoretical foundation for these approximations. Simulation experiments show that the proposed approximations are practical and efficient. Finally, we consider a stochastic n-player non-cooperative game. When the strategy set of each player contains a set of stochastic linear constraints, we model these constraints as a joint chance constraint. For each player, we assume that the row vectors of the matrix defining the stochastic constraints are pairwise independent, and we formulate the chance constraints under the viewpoints of normal distributions, elliptical distributions and distributional robustness, respectively. Under certain conditions, we show the existence of a Nash equilibrium for the stochastic game.
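For orientation (generic notation assumed for illustration), a joint linear chance constraint has the form

P\big( a_i(\xi)^{\top} x \le b_i,\ i = 1,\dots,m \big) \ge 1 - \varepsilon,

and a typical deterministic safe approximation of an individual constraint follows from the one-sided Chebyshev inequality: if a(\xi) has mean \mu and covariance \Sigma, then

\mu^{\top} x + \sqrt{\tfrac{1-\varepsilon}{\varepsilon}}\; \sqrt{x^{\top} \Sigma\, x} \le b \quad \Longrightarrow \quad P\big(a(\xi)^{\top} x \le b\big) \ge 1 - \varepsilon,

which is the kind of bound that the Chernoff, Bernstein and Hoeffding inequalities refine under additional assumptions.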
APA, Harvard, Vancouver, ISO, and other styles
17

Val, Petran. "BINOCULAR DEPTH PERCEPTION, PROBABILITY, FUZZY LOGIC, AND CONTINUOUS QUANTIFICATION OF UNIQUENESS." Case Western Reserve University School of Graduate Studies / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=case1504749439893027.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Wang, Xun. "Essays on Down Payment Constraint, House Price and Young People's Homeownership Behavior." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1275015646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Looper, Jason K. "Semiparametric Estimation of Unimodal Distributions." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Van, Ackooij Wim. "Chance Constrained Programming : with applications in Energy Management." Thesis, Châtenay-Malabry, Ecole centrale de Paris, 2013. http://www.theses.fr/2013ECAP0071/document.

Full text
Abstract:
In optimization problems involving uncertainty, probabilistic constraints are an important tool for defining the safety of decisions. In energy management, many optimization problems have underlying uncertainty; in particular, this is the case for unit commitment problems. In this thesis, we investigate probabilistic constraints from theoretical, algorithmic and applicative points of view. We provide new insights on the differentiability of probabilistic constraints and on convexity results for feasible sets. New variants of bundle methods, of both proximal and level type, specially tailored for convex optimization under probabilistic constraints, are given, and convergence is shown. Both methods explicitly deal with evaluation errors in both the gradient and the value of the probabilistic constraint. We also look at two applications from energy management: cascaded reservoir management with uncertainty on inflows, and unit commitment with uncertainty on customer load. In both applications, uncertainty is dealt with through the use of probabilistic constraints. The presented numerical results seem to indicate the feasibility of solving an optimization problem with a joint probabilistic constraint on a system having up to 200 constraints, which is roughly the order of magnitude needed in the applications. The differentiability results involve probabilistic constraints on uncertain linear and nonlinear inequality systems; in the latter case, a convexity structure in the underlying uncertainty vector is required. The uncertainty vector is assumed to have a multivariate Gaussian or Student law. The provided gradient formulae allow for efficient numerical sampling schemes. For probabilistic constraints that can be rewritten through the use of copulae, we provide new insights on the convexity of the feasible set. These results require a generalized concavity structure of the copulae, of the marginal distribution functions of the underlying random vector, and of the underlying inequality system. These generalized concavity properties may hold only on specific sets.
APA, Harvard, Vancouver, ISO, and other styles
21

Mahmood, Khalid. "Constrained linear and non-linear adaptive equalization techniques for MIMO-CDMA systems." Thesis, De Montfort University, 2013. http://hdl.handle.net/2086/10203.

Full text
Abstract:
Researchers have shown that by combining multiple-input multiple-output (MIMO) techniques with CDMA, higher gains in capacity, reliability and data transmission speed can be attained. A major drawback of MIMO-CDMA systems, however, is multiple access interference (MAI), which can reduce the capacity and increase the bit error rate (BER), so the statistical analysis of MAI becomes a very important factor in the performance analysis of these systems. In this thesis, a detailed analysis of MAI is performed for binary phase-shift keying (BPSK) signals with random signature sequences in a Rayleigh fading environment, and closed-form expressions for the probability density function of MAI and of MAI plus noise are derived. Further, the probability of error is derived for the maximum likelihood receiver. These derivations are verified through simulations and are found to reinforce the theoretical results. Since the performance of MIMO suffers significantly from MAI and inter-symbol interference (ISI), equalization is needed to mitigate these effects. It is well known from the theory of constrained optimization that the learning speed of any adaptive filtering algorithm can be increased by adding a constraint to it, as in the case of the normalized least mean squares (NLMS) algorithm. Thus, in this work both linear and non-linear decision feedback (DFE) equalizers for MIMO systems with a least mean squares (LMS) based constrained stochastic gradient algorithm have been designed. More specifically, an LMS algorithm has been developed which is equipped with knowledge of the number of users, the spreading sequence (SS) length, the additive noise variance, as well as MAI with noise (the new constraint), and is named the MIMO-CDMA MAI-with-noise-constrained LMS (MNCLMS) algorithm. Convergence and tracking analyses of the proposed algorithm are carried out in the scenario of interference- and noise-limited systems, and simulation results are presented to compare the performance of the MIMO-CDMA MNCLMS algorithm with other adaptive algorithms.
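As a reference point (this is the standard normalized LMS update cited above as an example of constrained adaptation, not the MNCLMS recursion derived in the thesis): with weight vector w_n, input x_n, desired output d_n and error e_n = d_n - w_n^{\top} x_n, the update reads

w_{n+1} = w_n + \frac{\mu}{\delta + x_n^{\top} x_n}\, e_n\, x_n,

where \mu is the step size and \delta > 0 a small regularisation constant; constraint-aware variants such as MNCLMS modify this step using side information (number of users, spreading sequence length, noise variance and MAI-plus-noise statistics).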
APA, Harvard, Vancouver, ISO, and other styles
22

Ferdjoukh, Adel. "Une approche déclarative pour la génération de modèles." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT325/document.

Full text
Abstract:
Having data available in order to validate or test an approach or a concept is of primary importance in many different fields. Unfortunately, such data are not always available, are costly to obtain, or do not meet certain quality requirements, which makes them useless in some situations. An automated data generator is a good way to obtain, quickly and easily, data that are valid, of different sizes, relevant and diverse. In this thesis, we propose a novel and complete model-driven approach, based on constraint programming, for data generation.
APA, Harvard, Vancouver, ISO, and other styles
23

Björnemo, Erik. "Energy Constrained Wireless Sensor Networks : Communication Principles and Sensing Aspects." Doctoral thesis, Uppsala universitet, Institutionen för teknikvetenskaper, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-9519.

Full text
Abstract:
Wireless sensor networks are attractive largely because they need no wired infrastructure. But precisely this feature makes them energy constrained, and the consequences of this hard energy constraint are the overall topic of this thesis. We are in particular concerned with principles for energy-efficient wireless communication and the energy-wise trade-off between sensing and radio communication. Radio transmission between sensors incurs both a fixed energy cost from radio circuit processing and a variable energy cost related to the level of radiated energy. We find that transmission techniques that are otherwise considered efficient consume too much processing energy. Currently available sensor node radios typically have a maximum output power that is too limited to benefit from transmission-efficient, but processing-intensive, techniques. Our results provide new design guidelines for the radio output power. With increasing transmission energy -- that is, with increasing distance -- the considered techniques should be applied in the following order: output power control, polarisation receiver diversity, error-correcting codes, multi-hop communication, and cooperative multiple-input multiple-output transmissions. To assess the measurement capability of the network as a whole, and to facilitate a study of the sensing-communication trade-off, we devise a new metric: the network measurement capacity. It is based on the number of different measurement sequences that a network can provide, and is hence a measure of the network's readiness to meet a large number of possible events. Optimised multi-hop routing under this metric reveals that the energy consumed for sensing has a decisive impact on the best multi-hop routes. We also find support for the use of hierarchical heterogeneous network structures. Model parameter uncertainties have a large impact on our results, and we use probability theory as logic to include them consistently. Our analysis shows that common assumptions can give misleading results, and our analysis of radio channel measurements confirms the inadequacy of the Rayleigh fading channel model.
APA, Harvard, Vancouver, ISO, and other styles
24

Zhao, Jianmin. "Optimal Clustering: Genetic Constrained K-Means and Linear Programming Algorithms." VCU Scholars Compass, 2006. http://hdl.handle.net/10156/1583.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Moran, Michael. "On Comparative Algorithmic Pathfinding in Complex Networks for Resource-Constrained Software Agents." ScholarWorks, 2017. https://scholarworks.waldenu.edu/dissertations/3951.

Full text
Abstract:
Software engineering projects that utilize inappropriate pathfinding algorithms carry a significant risk of poor runtime performance for customers. Using social network theory, this experimental study examined the impact of algorithms, frameworks, and map complexity on elapsed time and computer memory consumption. The 1,800 2D map samples utilized were randomly computer generated, and data were collected and processed using Python language scripts. Memory consumption and elapsed time results for each of the 12 experimental treatment groups were compared using factorial MANOVA to determine the impact of the 3 independent variables on elapsed time and computer memory consumption. The MANOVA indicated a significant factor interaction between algorithms, frameworks, and map complexity upon elapsed time and memory consumption, F(4, 3576) = 94.09, p < .001, η² = .095. The main effects of algorithms, F(4, 3576) = 885.68, p < .001, η² = .498; frameworks, F(2, 1787) = 720,360.01, p < .001, η² = .999; and map complexity, F(2, 1787) = 112,736.40, p < .001, η² = .992, were also all significant. This study may contribute to positive social change by providing software engineers who write software for complex networks, such as analyzing terrorist social networks, with empirical pathfinding algorithm results. This is crucial to enabling selection of appropriately fast, memory-efficient algorithms that help analysts identify and apprehend criminal and terrorist suspects in complex networks before the next attack.
APA, Harvard, Vancouver, ISO, and other styles
26

Excoffier, Mathilde. "Chance-Constrained Programming Approaches for Staffing and Shift-Scheduling Problems with Uncertain Forecasts : application to Call Centers." Thesis, Paris 11, 2015. http://www.theses.fr/2015PA112244/document.

Full text
Abstract:
The staffing and shift-scheduling problems in call centers consist in deciding how many agents handling the calls should be assigned to work during a given period in order to reach the required Quality of Service and minimize the costs. These problems are the subject of growing interest, both for their interesting theoretical formulation and for their possible practical impact. This thesis aims at proposing chance-constrained approaches that take uncertainty on demand forecasts into account. First, this thesis proposes a model solving the problems in one step through a joint chance-constrained stochastic program, providing a cost-reducing solution. A continuous-based approach leading to an easily tractable optimization program is formulated with random variables following continuous distributions, a new continuous relation between arrival rates and theoretical real agent numbers, and constraint linearizations. The global risk level is dynamically shared among the periods during the optimization process, providing a reduced-cost solution. The resulting solutions respect the targeted risk level while reducing the cost compared to other approaches. Moreover, this model is extended so that it provides a better representation of real situations. First, the queuing system model is improved to take the limited patience of customers into account. Second, another formulation of uncertainty is proposed so that the correlation between periods is considered. Finally, another uncertainty representation is proposed. The distributionally robust approach provides a formulation under the assumption that the correct probability distribution is unknown and belongs to a set of possible distributions defined by a given mean and variance. The problem is formulated with a joint chance constraint, and the risk at each period is a decision variable to be optimized. A deterministic equivalent problem is proposed, and an easily tractable mixed-integer linear formulation is obtained through piecewise linearizations.
APA, Harvard, Vancouver, ISO, and other styles
27

Liu, Hai. "Semiparametric regression analysis of zero-inflated data." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/308.

Full text
Abstract:
Zero-inflated data abound in ecological studies as well as in other scientific and quantitative fields. Nonparametric regression with zero-inflated response may be studied via the zero-inflated generalized additive model (ZIGAM). ZIGAM assumes that the conditional distribution of the response variable belongs to the zero-inflated 1-parameter exponential family which is a probabilistic mixture of the zero atom and the 1-parameter exponential family, where the zero atom accounts for an excess of zeroes in the data. We propose the constrained zero-inflated generalized additive model (COZIGAM) for analyzing zero-inflated data, with the further assumption that the probability of non-zero-inflation is some monotone function of the (non-zero-inflated) exponential family distribution mean. When the latter assumption obtains, the new approach provides a unified framework for modeling zero-inflated data, which is more parsimonious and efficient than the unconstrained ZIGAM. We develop an iterative algorithm for model estimation based on the penalized likelihood approach, and derive formulas for constructing confidence intervals of the maximum penalized likelihood estimator. Some asymptotic properties including the consistency of the regression function estimator and the limiting distribution of the parametric estimator are derived. We also propose a Bayesian model selection criterion for choosing between the unconstrained and the constrained ZIGAMs. We consider several useful extensions of the COZIGAM, including imposing additive-component-specific proportional and partial constraints, and incorporating threshold effects to account for regime shift phenomena. The new methods are illustrated with both simulated data and real applications. An R package COZIGAM has been developed for model fitting and model selection with zero-inflated data.
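A minimal sketch of the mixture just described (generic notation assumed for illustration), for a discrete response such as a zero-inflated Poisson: with probability 1 - p the observation is a structural zero, and with probability p it is drawn from a one-parameter exponential-family density f(y; \mu), so that

P(Y = 0) = (1 - p) + p\, f(0; \mu), \qquad P(Y = y) = p\, f(y; \mu) \ \text{for } y \neq 0;

the COZIGAM additionally assumes p = h(\mu) for some monotone function h, which ties the zero-inflation and mean components together and makes the model more parsimonious than the unconstrained ZIGAM.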
APA, Harvard, Vancouver, ISO, and other styles
28

Sassi, Achille. "Numerical methods for hybrid control and chance-constrained optimization problems." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLY005/document.

Full text
Abstract:
This thesis is devoted to the analysis of numerical methods in the field of optimal control, and it is composed of two parts. The first part is dedicated to new results on numerical methods for the optimal control of hybrid systems, which can be controlled simultaneously by measurable functions and by discontinuous jumps in the state variable. The second part focuses on a particular application: trajectory optimization for space launchers under probability constraints. Here we use nonlinear optimization methods combined with non-parametric statistics techniques. This kind of problem belongs to the family of stochastic optimization problems and features the minimization of a cost function in the presence of a constraint which needs to be satisfied within a desired probability threshold.
APA, Harvard, Vancouver, ISO, and other styles
29

Marêché, Laure. "Kinetically constrained models : relaxation to equilibrium and universality results." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7125.

Full text
Abstract:
Cette thèse étudie la classe de systèmes de particules en interaction appelés modèles avec contraintes cinétiques (KCM). La première question considérée est celle de l’universalité : peut-on classer l’infinité de modèles possibles en un nombre fini de classes selon leurs propriétés ? Un tel résultat a été récemment démontré dans une classe de modèles proche, la percolation bootstrap, où les modèles se subdivisent en surcritiques, critiques et sous-critiques. Cette classification s’applique aussi aux KCM, mais elle n’est pas assez fine : les KCM surcritiques doivent être subdivisés en enracinés et non enracinés, et les KCM critiques selon qu’ils ont ou pas une infinité de directions stables. Cette thèse prouve la pertinence de cette classification des KCM et complète la preuve de leur universalité dans les cas surcritique et critique, en démontrant une borne inférieure pour deux grandeurs caractéristiques, le temps de relaxation et le premier temps auquel un site est à 0, dans les cas surcritique enraciné (travail avec F. Martinelli et C. Toninelli, reposant sur un résultat combinatoire réalisé sans collaboration) et critique avec une infinité de directions stables (travail avec I. Hartarsky et C. Toninelli). Elle établit aussi une borne inférieure plus précise dans le cas particulier du modèle de Duarte (travail avec F. Martinelli et C. Toninelli). Dans un deuxième temps, cette thèse montre des résultats de convergence exponentielle vers l’équilibre, pour tous les KCM surcritiques sous certaines conditions et dans le cas particulier du modèle Est en dimension d sans restriction
This thesis studies the class of interacting particle systems called kinetically constrained models (KCMs). It first considers the question of universality: can the infinitely many possible models be sorted into a finite number of classes according to their properties? Such a result was recently proven in a related class of models, bootstrap percolation, where models can be divided into supercritical, critical and subcritical. This classification can also be applied to KCMs, but it is not precise enough: supercritical KCMs have to be divided into rooted and unrooted, and critical KCMs according to whether or not they have an infinite number of stable directions. This thesis shows the relevance of this classification of KCMs and completes the proof of their universality in the supercritical and critical cases, by proving a lower bound for two characteristic scales, the relaxation time and the first time at which a site is at 0, in the supercritical rooted case (work with F. Martinelli and C. Toninelli, relying on a combinatorial result obtained by the author alone) and in the case of critical models with an infinite number of stable directions (work with I. Hartarsky and C. Toninelli). It also establishes a more precise lower bound in the particular case of the Duarte model (work with F. Martinelli and C. Toninelli). Secondly, this thesis proves exponential convergence to equilibrium for all supercritical KCMs under certain conditions and, without restrictions, for the d-dimensional East model
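For readers unfamiliar with kinetically constrained models, the following toy discrete-time simulation sketches the one-dimensional East model and the "first time at which a site is at 0" quantity mentioned above (the convention that a site may refresh only when its left neighbour is empty, and the finite chain with a free left boundary, are assumptions of this sketch; the thesis works in continuous time).

```python
# Toy discrete-time simulation of the one-dimensional East model, an example
# of a kinetically constrained model.  Assumed convention: a site may refresh
# its state only when its left neighbour is empty (state 0); site 0 is
# unconstrained.  We estimate the first time the right-most site reaches 0.
import numpy as np

def first_zero_time(L=10, q=0.2, max_steps=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.ones(L, dtype=int)          # start fully occupied
    for t in range(1, max_steps + 1):
        i = rng.integers(L)                # pick a site uniformly at random
        if i == 0 or sigma[i - 1] == 0:    # kinetic constraint satisfied?
            sigma[i] = 0 if rng.random() < q else 1
        if sigma[L - 1] == 0:
            return t
    return np.inf

times = [first_zero_time(seed=s) for s in range(10)]
print("mean first time site L-1 is at 0 (attempted updates):", float(np.mean(times)))
```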
APA, Harvard, Vancouver, ISO, and other styles
30

Ekberg, Marie. "Sensitivity analysis of optimization : Examining sensitivity of bottleneck optimization to input data models." Thesis, Högskolan i Skövde, Institutionen för ingenjörsvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-12624.

Full text
Abstract:
The aim of this thesis is to examine the sensitivity of SCORE optimization to the accuracy of particular input data models used in a simulation model of a production line. The purpose is to evaluate whether it is sufficient to model input data using sample means and default distributions instead of fitted distributions. An existing production line has been modeled for the simulation study. SCORE is based on maximizing a key performance measure of the production line while simultaneously minimizing the number of improvements necessary to achieve maximum performance. The sensitivity to the input models should become more apparent as more changes are required. The experiments showed that the optimization struggles to converge when fitted distribution models are used. Configuring the input parameters of the optimization might yield better optimization results. The final conclusion is that the optimization is sensitive to which input data models are used in the simulation model.
APA, Harvard, Vancouver, ISO, and other styles
31

Serra, Romain. "Opérations de proximité en orbite : évaluation du risque de collision et calcul de manoeuvres optimales pour l'évitement et le rendez-vous." Thesis, Toulouse, INSA, 2015. http://www.theses.fr/2015ISAT0035/document.

Full text
Abstract:
Cette thèse traite de l'évitement de collision entre un engin spatial opérationnel, appelé objet primaire, et un débris orbital, dit secondaire. Ces travaux concernent aussi bien la question de l'estimation du risque pour une paire d'objets sphériques que celle du calcul d'un plan de manoeuvres d'évitement pour le primaire. Pour ce qui est du premier point, sous certaines hypothèses, la probabilité de collision s'exprime comme l'intégrale d'une fonction gaussienne sur une boule euclidienne, en dimension deux ou trois. On en propose ici une nouvelle méthode de calcul, basée sur les théories de la transformée de Laplace et des fonctions holonomes. En ce qui concerne le calcul de manoeuvres de propulsion, différentes méthodes sont développées en fonction du modèle considéré. En toute généralité, le problème peut être formulé dans le cadre de l'optimisation sous contrainte probabiliste et s'avère difficile à résoudre. Dans le cas d'un mouvement considéré comme relatif rectiligne, l'approche par scénarios se prête bien au problème et permet d'obtenir des solutions admissibles. Concernant les rapprochements lents, une linéarisation de la dynamique des objets et un recouvrement polyédral de l'objet combiné sont à la base de la construction d'un problème de substitution. Deux approches sont proposées pour sa résolution : une première directe et une seconde par sélection du risque. Enfin, la question du calcul de manoeuvres de proximité en consommation optimale et temps fixé, sans contrainte d'évitement, est abordée. Par l'intermédiaire de la théorie du vecteur efficacité, la solution analytique est obtenue pour la partie hors-plan de la dynamique képlérienne linéarisée
This thesis is about collision avoidance for a pair of spherical orbiting objects. The primary object - the operational satellite - is active in the sense that it can use its thrusters to change its trajectory, while the secondary object is a space debris that cannot be controlled in any way. Ground radars or other means make it possible to foresee a conjunction involving an operational spacecraft, leading to the production of a collision alert. The latter contains statistical data on the position and velocity of the two objects, enabling the construction of a probabilistic collision model. The work is divided into two parts: the computation of collision probabilities and the design of maneuvers to lower the collision risk. In the first part, two kinds of probabilities - which can be written as integrals of a Gaussian distribution over a Euclidean ball in 2 and 3 dimensions - are expanded in convergent power series with positive terms. This is done using the theories of the Laplace transform and holonomic functions. In the second part, the question of collision avoidance is formulated as a chance-constrained optimization problem. Depending on the collision model, namely short-term or long-term encounters, it is respectively tackled via the scenario approach or relaxed using polyhedral collision sets. For the latter, two methods are proposed. The first one directly tackles the joint chance constraints while the second uses another relaxation called risk selection to obtain a mixed-integer program. Additionally, the solution to the problem of fixed-time, fuel-minimizing out-of-plane proximity maneuvers is derived. This optimal control problem is solved via primer vector theory
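The central quantity of the first part, the integral of a Gaussian distribution over a Euclidean ball, can be cross-checked numerically in dimension 2 with a brute-force sketch (assumed toy numbers; the thesis derives convergent power series rather than using quadrature or Monte Carlo).

```python
# Sketch: probability that a 2-D Gaussian relative position falls inside the
# combined hard-body disk of radius R (short-term encounter setting), checked
# by Monte Carlo and by direct numerical integration.
import numpy as np
from scipy.integrate import dblquad
from scipy.stats import multivariate_normal

m = np.array([1.0, 0.5])                  # mean miss distance in the encounter plane
S = np.array([[0.8, 0.2], [0.2, 0.5]])    # covariance of the relative position
R = 1.2                                    # combined radius of the two objects

# Monte Carlo estimate of P(||X|| <= R) with X ~ N(m, S)
rng = np.random.default_rng(0)
pts = rng.multivariate_normal(m, S, size=200_000)
p_mc = float(np.mean(np.einsum("ij,ij->i", pts, pts) <= R**2))

# Direct numerical integration of the Gaussian density over the disk
pdf = multivariate_normal(m, S).pdf
p_int, _ = dblquad(lambda y, x: pdf([x, y]),
                   -R, R,
                   lambda x: -np.sqrt(R**2 - x**2),
                   lambda x: np.sqrt(R**2 - x**2))
print(f"P(collision): MC ~ {p_mc:.4f}, quadrature ~ {p_int:.4f}")
```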
APA, Harvard, Vancouver, ISO, and other styles
32

Benhida, Soufia. "De l'optimisation pour l'aide à la décision : applications au problème du voyageur de commerce probabiliste et à l'approximation de données." Thesis, Normandie, 2018. http://www.theses.fr/2018NORMIR27.

Full text
Abstract:
La première partie de ce travail traite l'optimisation des tournées sous la forme d'un problème d'optimisation nommé le problème du Voyageur de Commerce. Dans cette partie nous nous intéressons à faire une riche présentation du problème du Voyageur de Commerce et de ses variantes, puis nous proposons une stratégie de génération de contraintes pour la résolution du TSP. Ensuite on traite sa version stochastique : le problème du Voyageur de Commerce Probabiliste. Nous proposons une formulation mathématique du PTSP et nous présentons des résultats numériques obtenus par résolution exacte pour une série d'instances de petite taille. Dans la seconde partie, nous proposons une méthode d'approximation générale permettant d'approcher différents types de données : d'abord nous traitons l'approximation d'un signal de vent (cas simple, 1D), ensuite l'approximation d'un champ de vecteurs avec prise en compte de la topographie, qui constitue la principale contribution de cette partie
The first part of this work deals with route optimization in the form of an optimization problem known as the Traveling Salesman Problem (TSP). In this part we give a detailed presentation of the TSP and its variants, then we propose a constraint-generation strategy for solving the TSP. We then treat its stochastic version: the Probabilistic Traveling Salesman Problem (PTSP). We propose a mathematical formulation of the PTSP and we present numerical results obtained by exact resolution for a series of small instances. In the second part, we propose a general approximation method for different types of data: first we treat the approximation of a wind signal (simple 1D case), then the approximation of a vector field taking the topography into account, which is the main contribution of this part
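A minimal illustration of the probabilistic ingredient of the PTSP (assumed toy instance, no depot, Monte Carlo instead of the exact formulation studied in the thesis): each customer is present independently with probability p, and absent customers are simply skipped along the a priori tour.

```python
# Sketch: Monte Carlo estimate of the expected length of an a priori tour in
# the Probabilistic Traveling Salesman Problem (PTSP).  Customers are present
# independently with probability p; the tour visits present customers in the
# a priori order and the cycle is closed (no fixed depot in this toy version).
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 0.7
pts = rng.uniform(0, 100, size=(n, 2))     # customer coordinates
tour = np.arange(n)                        # a priori visiting order 0,1,...,n-1

def tour_length(order):
    if len(order) < 2:
        return 0.0
    d = np.diff(pts[np.r_[order, order[0]]], axis=0)   # close the cycle
    return float(np.sum(np.hypot(d[:, 0], d[:, 1])))

samples = []
for _ in range(20_000):
    present = tour[rng.uniform(size=n) < p]            # keep a priori order
    samples.append(tour_length(present))
print("estimated expected a priori tour length:", round(float(np.mean(samples)), 2))
```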
APA, Harvard, Vancouver, ISO, and other styles
33

Merlinge, Nicolas. "State estimation and trajectory planning using box particle kernels." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS425/document.

Full text
Abstract:
L'autonomie d'un engin aérospatial requiert de disposer d'une boucle de navigation-guidage-pilotage efficace et sûre. Cette boucle intègre des filtres estimateurs et des lois de commande qui doivent dans certains cas s'accommoder de non-linéarités sévères et être capables d'exploiter des mesures ambiguës. De nombreuses approches ont été développées à cet effet et parmi celles-ci, les approches particulaires présentent l'avantage de pouvoir traiter de façon unifiée des problèmes dans lesquels les incertitudes d’évolution du système et d’observation peuvent être soumises à des lois statistiques quelconques. Cependant, ces approches ne sont pas exemptes de défauts dont le plus important est celui du coût de calcul élevé. D'autre part, dans certains cas, ces méthodes ne permettent pas non plus de converger vers une solution acceptable. Des adaptations récentes de ces approches, combinant les avantages du particulaire tel que la possibilité d'extraire la recherche d'une solution d'un domaine local de description et la robustesse des approches ensemblistes, ont été à l'origine du travail présenté dans cette thèse. Cette thèse présente le développement d’un algorithme d’estimation d’état, nommé le Box Regularised Particle Filter (BRPF), ainsi qu’un algorithme de commande, le Box Particle Control (BPC). Ces algorithmes se basent tous deux sur l’utilisation de mixtures de noyaux bornés par des boites (i.e., des vecteurs d’intervalles) pour décrire l’état du système sous la forme d’une densité de probabilité multimodale. Cette modélisation permet un meilleur recouvrement de l'espace d'état et apporte une meilleure cohérence entre la prédite et la vraisemblance. L’hypothèse est faite que les incertitudes incriminées sont bornées. L'exemple d'application choisi est la navigation par corrélation de terrain qui constitue une application exigeante en termes d'estimation d'état. Pour traiter des problèmes d’estimation ambiguë, c’est-à-dire lorsqu’une valeur de mesure peut correspondre à plusieurs valeurs possibles de l’état, le Box Regularised Particle Filter (BRPF) est introduit. Le BRPF est une évolution de l’algorithme de Box Particle Filter (BPF) et est doté d’une étape de ré-échantillonnage garantie et d’une stratégie de lissage par noyau (Kernel Regularisation). Le BRPF assure théoriquement une meilleure estimation que le BPF en termes de Mean Integrated Square Error (MISE). L’algorithme permet une réduction significative du coût de calcul par rapport aux approches précédentes (BPF, PF). Le BRPF est également étudié dans le cadre d’une intégration dans des architectures fédérées et distribuées, ce qui démontre son efficacité dans des cas multi-capteurs et multi-agents. Un autre aspect de la boucle de navigation–guidage-pilotage est le guidage qui nécessite de planifier la future trajectoire du système. Pour tenir compte de l'incertitude sur l'état et des contraintes potentielles de façon versatile, une approche nommée Box Particle Control (BPC) est introduite. Comme pour le BRPF, le BPC se base sur des mixtures de noyaux bornés par des boites et consiste en la propagation de la densité d’état sur une trajectoire jusqu’à un certain horizon de prédiction. Ceci permet d’estimer la probabilité de satisfaire les contraintes d’état au cours de la trajectoire et de déterminer la séquence de futures commandes qui maintient cette probabilité au-delà d’un certain seuil, tout en minimisant un coût. Le BPC permet de réduire significativement la charge de calcul
State estimation and trajectory planning are two crucial functions for autonomous systems, and in particular for aerospace vehicles. Particle filters and sample-based trajectory planning have been widely considered to tackle non-linearities and non-Gaussian uncertainties. However, these approaches may produce erratic results due to the sampled approximation of the state density. In addition, they have a high computational cost which limits their practical interest. This thesis investigates the use of box kernel mixtures to describe multimodal probability density functions. A box kernel mixture is a weighted sum of basic functions (e.g., uniform kernels) that integrate to unity and whose supports are bounded by boxes, i.e., vectors of intervals. This modelling yields a more extensive description of the state density while requiring a lower computational load. New algorithms are developed, based on a derivation of the Box Particle Filter (BPF) for state estimation, and of a particle-based chance-constrained optimisation (Particle Control) for trajectory planning under uncertainty. In order to tackle ambiguous state estimation problems, a Box Regularised Particle Filter (BRPF) is introduced. The BRPF consists of an improved BPF with a guaranteed resampling step and a smoothing strategy based on kernel regularisation. The proposed strategy is theoretically proved to outperform the original BPF in terms of Mean Integrated Square Error (MISE), and empirically shown to reduce the Root Mean Square Error (RMSE) of estimation. BRPF reduces the computation load in a significant way and is robust to measurement ambiguity. BRPF is also integrated to federated and distributed architectures to demonstrate its efficiency in multi-sensor and multi-agent systems. In order to tackle constrained trajectory planning under non-Gaussian uncertainty, a Box Particle Control (BPC) is introduced. BPC relies on an interval-bounded kernel mixture state density description, and consists of propagating the state density along a state trajectory at a given horizon. It yields a more accurate description of the state uncertainty than previous particle-based algorithms. A chance-constrained optimisation is performed, which consists of finding the sequence of future control inputs that minimises a cost function while ensuring that the probability of constraint violation (failure probability) remains below a given threshold. For similar performance, BPC yields a significant computation load reduction with respect to previous approaches
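A highly simplified one-dimensional sketch of the box-particle idea (assumed toy values; the guaranteed resampling and kernel regularisation that define the BRPF, as well as the BPC optimisation, are omitted): boxes carry the state density, prediction inflates them by the bounded process noise, and the measurement update contracts them and reweights by the surviving fraction.

```python
# Simplified 1-D illustration of a box-particle predict/contract/reweight step.
import numpy as np

N = 20
boxes = np.stack([np.linspace(0, 10, N), np.linspace(0, 10, N) + 0.5], axis=1)
w = np.full(N, 1.0 / N)

u, w_max = 1.0, 0.2            # known input, bound on the process noise
y, e = 6.3, 0.8                # measurement and its (bounded) error

# Prediction: [lo, hi] -> [lo + u - w_max, hi + u + w_max]
boxes = boxes + u + np.array([-w_max, w_max])

# Contraction by the measurement box [y - e, y + e] and reweighting
meas = np.array([y - e, y + e])
lo = np.maximum(boxes[:, 0], meas[0])
hi = np.minimum(boxes[:, 1], meas[1])
frac = np.clip(hi - lo, 0.0, None) / (boxes[:, 1] - boxes[:, 0])
w = w * frac                   # boxes with empty intersection get weight 0
boxes = np.stack([lo, hi], axis=1)

if w.sum() > 0:
    w /= w.sum()
    centers = boxes.mean(axis=1)
    print("state estimate:", float(np.sum(w * centers)))
```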
APA, Harvard, Vancouver, ISO, and other styles
34

Gabriš, Ondrej. "Software Projects Risk Management Support Tool." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2011. http://www.nusl.cz/ntk/nusl-412827.

Full text
Abstract:
Project and risk management is currently a developing discipline that is attracting increasing attention and practical use. This thesis gives an introduction to risk management, examining methods for risk identification, assessment and management, and for preventing and coping with their consequences. In the next part, an analysis of risk samples from real projects was carried out; methods for identifying and assessing the consequences of risks in the early phases of a software project were described, the attributes of risks were described and a way of documenting them was proposed. In the final part of the assignment, a prototype of a model application supporting the risk management of software projects was designed and implemented.
APA, Harvard, Vancouver, ISO, and other styles
35

Alais, Jean-Christophe. "Risque et optimisation pour le management d'énergies : application à l'hydraulique." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1071/document.

Full text
Abstract:
L'hydraulique est la principale énergie renouvelable produite en France. Elle apporte une réserve d'énergie et une flexibilité intéressantes dans un contexte d'augmentation de la part des énergies intermittentes dans la production. Sa gestion soulève des problèmes difficiles dus au nombre des barrages, aux incertitudes sur les apports d'eau et sur les prix, ainsi qu'aux usages multiples de l'eau. Cette thèse CIFRE, effectuée en partenariat avec Electricité de France, aborde deux questions de gestion hydraulique formulées comme des problèmes d'optimisation dynamique stochastique. Elles sont traitées dans deux grandes parties. Dans la première partie, nous considérons la gestion de la production hydroélectrique d'un barrage soumise à une contrainte dite de cote touristique. Cette contrainte vise à assurer une hauteur de remplissage du réservoir suffisamment élevée durant l'été avec un niveau de probabilité donné. Nous proposons différentes modélisations originales de ce problème et nous développons les algorithmes de résolution correspondants. Nous présentons des résultats numériques qui éclairent différentes facettes du problème utiles pour les gestionnaires du barrage. Dans la seconde partie, nous nous penchons sur la gestion d'une cascade de barrages. Nous présentons une méthode de résolution approchée par décomposition-coordination, l'algorithme Dual Approximate Dynamic Programming (DADP). Nous montrons comment décomposer, barrage par barrage, le problème de la cascade en sous-problèmes obtenus en dualisant la contrainte de couplage spatial ``déversé supérieur = apport inférieur''. Sur un cas à trois barrages, nous sommes en mesure de comparer les résultats de DADP à la solution exacte (obtenue par programmation dynamique), obtenant des gains à quelques pourcents de l'optimum avec des temps de calcul intéressants. Les conclusions auxquelles nous sommes parvenus offrent des perspectives encourageantes pour l'optimisation stochastique de systèmes de grande taille
Hydropower is the main renewable energy produced in France. It brings both an energy reserve and a flexibility, of great interest in a context of increasing penetration of intermittent sources in the production of electricity. Its management raises difficulties stemming from the number of dams, from uncertainties in water inflows and prices, and from multiple uses of water. This PhD thesis has been carried out in partnership with Electricité de France and addresses two hydropower management issues, modeled as stochastic dynamic optimization problems. The manuscript is divided into two parts. In the first part, we consider the management of a hydroelectric dam subject to a so-called tourist constraint. This constraint ensures that a given minimum dam stock level is respected during the summer months with a prescribed probability level. We propose different original modelings and we provide corresponding numerical algorithms. We present numerical results that highlight the problem under various angles useful for dam managers. In the second part, we focus on the management of a cascade of dams. We present the approximate decomposition-coordination algorithm called Dual Approximate Dynamic Programming (DADP). We show how to decompose an original (large scale) problem into smaller subproblems by dualizing the spatial coupling constraints. On a three dams instance, we are able to compare the results of DADP with the exact solution (obtained by dynamic programming); we obtain approximate gains that are within a few percent of the optimum, with interesting running times. The conclusions we arrived at offer encouraging perspectives for the stochastic optimization of large scale problems
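The "tourist constraint" of the first part can be illustrated by a toy Monte Carlo evaluation (all numbers are assumptions for illustration, not EDF data): under a fixed release policy and random inflows, estimate the probability that the stock stays above the prescribed level during the summer weeks.

```python
# Toy Monte Carlo illustration of a probability constraint on the dam stock.
import numpy as np

rng = np.random.default_rng(0)
S0, S_max = 60.0, 100.0                       # initial stock and reservoir capacity
release = 8.0                                 # fixed turbined volume per week
tourist_level, summer = 50.0, range(20, 33)   # constraint level and summer weeks
n_weeks, n_sim = 52, 10_000

ok = 0
for _ in range(n_sim):
    S = S0
    satisfied = True
    for t in range(n_weeks):
        inflow = rng.gamma(shape=4.0, scale=2.5)        # random weekly inflow
        S = np.clip(S + inflow - release, 0.0, S_max)   # spill above capacity
        if t in summer and S < tourist_level:
            satisfied = False
    ok += satisfied
print("P(stock >= tourist level all summer) ~", ok / n_sim)
```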
APA, Harvard, Vancouver, ISO, and other styles
36

Prigent, Sylvain. "Approche novatrice pour la conception et l’exploitation d’avions écologiques." Thesis, Toulouse, ISAE, 2015. http://www.theses.fr/2015ESAE0014/document.

Full text
Abstract:
L'objectif de ce travail de thèse est de poser, d'analyser et de résoudre le problème multidisciplinaire et multi-objectif de la conception d'avions plus écologiques et plus économiques. Dans ce but, les principaux drivers de l'optimisation des performances d'un avion seront: la géométrie de l'avion, son moteur ainsi que son profil de mission, autrement dit sa trajectoire. Les objectifs à minimiser considérés sont la consommation de carburant, l'impact climatique et le coût d'opération de l'avion. L'étude sera axée sur la stratégie de recherche de compromis entre ces objectifs, afin d'identifier les configurations d'avions optimales selon le critère sélectionné et de proposer une analyse de ces résultats. L'incertitude présente au niveau des modèles utilisés sera prise en compte par des méthodes rigoureusement sélectionnées. Une configuration d'avion hybride est proposée pour atteindre l'objectif de réduction d'impact climatique
The objective of this PhD work is to pose, investigate, and solve the highly multidisciplinary and multiobjective problem of environmentally efficient aircraft design and operation. For this purpose, the three main drivers for optimizing the environmental performance of an aircraft are the airframe, the engine, and the mission profiles. The figures of merit considered for optimization are fuel burn, local emissions, global emissions, and climate impact (noise excluded). The study focuses on finding efficient compromise strategies and identifying the most powerful design architectures and design-driver combinations for improving environmental performance. Modeling uncertainty is taken into account using rigorously selected methods. A hybrid aircraft configuration is proposed to reach the climate impact reduction objective
APA, Harvard, Vancouver, ISO, and other styles
37

Hee, Sonke. "Computational Bayesian techniques applied to cosmology." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/273346.

Full text
Abstract:
This thesis presents work around 3 themes: dark energy, gravitational waves and Bayesian inference. Both dark energy and gravitational wave physics are not yet well constrained. They present interesting challenges for Bayesian inference, which attempts to quantify our knowledge of the universe given our astrophysical data. A dark energy equation of state reconstruction analysis finds that the data favours the vacuum dark energy equation of state $w = -1$ model. Deviations from vacuum dark energy are shown to favour the super-negative ‘phantom’ dark energy regime of $w < -1$, but at low statistical significance. The constraining power of various datasets is quantified, finding that data constraints peak around redshift $z = 0.2$ due to baryonic acoustic oscillation and supernovae data constraints, whilst cosmic microwave background radiation and Lyman-$\alpha$ forest constraints are less significant. Specific models with a conformal time symmetry in the Friedmann equation and with an additional dark energy component are tested and shown to be competitive to the vacuum dark energy model by Bayesian model selection analysis: that they are not ruled out is believed to be largely due to poor data quality for deciding between existing models. Recent detections of gravitational waves by the LIGO collaboration enable the first gravitational wave tests of general relativity. An existing test in the literature is used and sped up significantly by a novel method developed in this thesis. The test computes posterior odds ratios, and the new method is shown to compute these accurately and efficiently. Compared to computing evidences, the method presented provides an approximate 100 times reduction in the number of likelihood calculations required to compute evidences at a given accuracy. Further testing may identify a significant advance in Bayesian model selection using nested sampling, as the method is completely general and straightforward to implement. We note that efficiency gains are not guaranteed and may be problem specific: further research is needed.
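The evidence-based model selection used throughout can be illustrated on an assumed one-dimensional toy example (not the cosmological pipeline, and without nested sampling): the evidence of a one-parameter model is obtained by integrating the likelihood against the prior, and posterior odds follow.

```python
# Toy illustration of Bayesian model selection via evidences and posterior
# odds: model M0 fixes mu = 0, model M1 has mu ~ N(0, sigma0^2); the evidence
# of M1 is obtained by integrating the likelihood over the prior on a grid.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=50)            # observed data (assumed)

def loglike(mu):
    return np.sum(norm.logpdf(data, loc=mu, scale=1.0))

# Evidence of M0 (no free parameter)
logZ0 = loglike(0.0)

# Evidence of M1: Z1 = integral of L(mu) * prior(mu) d mu
sigma0 = 1.0
mu_grid = np.linspace(-5, 5, 2001)
integrand = np.exp([loglike(m) - logZ0 for m in mu_grid]) * norm.pdf(mu_grid, 0, sigma0)
logZ1 = logZ0 + np.log(np.trapz(integrand, mu_grid))

posterior_odds = np.exp(logZ1 - logZ0)          # assuming equal prior odds
print("ln(Z1/Z0) =", round(float(logZ1 - logZ0), 3),
      " posterior odds M1:M0 ~", round(float(posterior_odds), 3))
```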
APA, Harvard, Vancouver, ISO, and other styles
38

Mirroshandel, Seyedabolghasem. "Towards less supervision in dependency parsing." Thesis, Aix-Marseille, 2015. http://www.theses.fr/2015AIXM4096/document.

Full text
Abstract:
L'analyse syntaxique probabiliste est l'un des domaines de recherche les plus attractifs du traitement automatique des langues. Les analyseurs probabilistes actuels nécessitent de grands corpus arborés (treebanks), difficiles, longs et coûteux à produire. Nous avons donc concentré notre attention sur des approches moins supervisées. Nous proposons deux catégories de solutions : l'apprentissage actif et les algorithmes semi-supervisés. Les stratégies d'apprentissage actif permettent de sélectionner les échantillons les plus informatifs pour l'annotation. La plupart des stratégies d'apprentissage actif existantes pour l'analyse syntaxique reposent sur la sélection de phrases incertaines à annoter. Nous montrons dans nos travaux, sur quatre langues différentes (français, anglais, persan et arabe), que la sélection de phrases complètes n'est pas une solution optimale et nous proposons une méthode permettant de ne sélectionner que des sous-parties de phrases. Comme nos expériences l'ont montré, certaines parties des phrases ne contiennent aucune information utile pour l'entraînement d'un analyseur, et se concentrer sur les sous-parties incertaines des phrases est une solution plus efficace en apprentissage actif
Probabilistic parsing is one of the most attractive research areas in natural language processing. Current successful probabilistic parsers require large treebanks which are difficult, time consuming, and expensive to produce. Therefore, we focused our attention on less-supervised approaches. We suggested two categories of solution: active learning and semi-supervised algorithm. Active learning strategies allow one to select the most informative samples for annotation. Most existing active learning strategies for parsing rely on selecting uncertain sentences for annotation. We show in our research, on four different languages (French, English, Persian, and Arabic), that selecting full sentences is not an optimal solution and propose a way to select only subparts of sentences. As our experiments have shown, some parts of the sentences do not contain any useful information for training a parser, and focusing on uncertain subparts of the sentences is a more effective solution in active learning
APA, Harvard, Vancouver, ISO, and other styles
39

Safadi, El Abed El. "Contribution à l'évaluation des risques liés au TMD (transport de matières dangereuses) en prenant en compte les incertitudes." Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAT059/document.

Full text
Abstract:
Le processus d'évaluation des risques technologiques, notamment liés au Transport de Matières Dangereuses (TMD), consiste, quand un événement accidentel se produit, à évaluer le niveau de risque potentiel des zones impactées afin de pouvoir dimensionner et prendre rapidement des mesures de prévention et de protection (confinement, évacuation...) dans le but de réduire et maitriser les effets sur les personnes et l'environnement. La première problématique de ce travail consiste donc à évaluer le niveau de risque des zones soumises au transport des matières dangereuses. Pour ce faire, un certain nombre d'informations sont utilisées, comme la quantification de l'intensité des phénomènes qui se produisent à l'aide de modèles d'effets (analytique ou code informatique). Pour ce qui concerne le problème de dispersion de produits toxiques, ces modèles contiennent principalement des variables d'entrée liées à la base de données d'exposition, de données météorologiques,… La deuxième problématique réside dans les incertitudes affectant certaines entrées de ces modèles. Pour correctement réaliser une cartographie en déterminant la zone de de danger où le niveau de risque est jugé trop élevé, il est nécessaire d'identifier et de prendre en compte les incertitudes sur les entrées afin de les propager dans le modèle d'effets et ainsi d'avoir une évaluation fiable du niveau de risque. Une première phase de ce travail a consisté à évaluer et propager l'incertitude sur la concentration qui est induite par les grandeurs d'entrée incertaines lors de son évaluation par les modèles de dispersion. Deux approches sont utilisées pour modéliser et propager les incertitudes : l'approche ensembliste pour les modèles analytiques et l'approche probabiliste (Monte-Carlo) qui est plus classique et utilisable que le modèle de dispersion soit analytique ou défini par du code informatique. L'objectif consiste à comparer les deux approches pour connaitre leurs avantages et inconvénients en termes de précision et temps de calcul afin de résoudre le problème proposé. Pour réaliser les cartographies, deux modèles de dispersion (Gaussien et SLAB) sont utilisés pour évaluer l'intensité des risques dans la zone contaminée. La réalisation des cartographies a été abordée avec une méthode probabiliste (Monte Carlo) qui consiste à inverser le modèle d'effets et avec une méthode ensembliste générique qui consiste à formuler ce problème sous la forme d'un ensemble de contraintes à satisfaire (CSP) et le résoudre ensuite par inversion ensembliste. La deuxième phase a eu pour but d'établir une méthodologie générale pour réaliser les cartographies et améliorer les performances en termes de temps du calcul et de précision. Cette méthodologie s'appuie sur 3 étapes : l'analyse préalable des modèles d'effets utilisés, la proposition d'une nouvelle approche pour la propagation des incertitudes mixant les approches probabiliste et ensembliste en tirant notamment partie des avantages des deux approches précitées, et utilisable pour n'importe quel type de modèle d'effets spatialisé et statique, puis finalement la réalisation des cartographies en inversant les modèles d'effets. L'analyse de sensibilité présente dans la première étape s'adresse classiquement à des modèles probabilistes. Nous discutons de la validité d'utiliser des indices de type Sobol dans le cas de modèles intervalles et nous proposerons un nouvel indice de sensibilité purement intervalle cette fois-ci
When an accidental event occurs, the process of technological risk assessment, in particular the one related to Dangerous Goods Transportation (DGT), makes it possible to assess the potential risk level of the impacted areas in order to quickly design and take prevention and protection actions (containment, evacuation, ...). The objective is to reduce and control the effects on people and the environment. The first issue of this work is to evaluate the risk level for areas subjected to dangerous goods transportation. The quantification of the intensity of the events, which is needed for this evaluation, is based on effect models (analytical expressions or computer codes). Regarding the problem of dispersion of toxic products, these models mainly contain inputs linked to different databases, such as exposure data and meteorological data. The second issue is related to the uncertainties affecting some model inputs. To determine the geographical danger zone where the estimated risk level is not acceptable, it is necessary to identify and take into consideration the uncertainties on the inputs in order to propagate them through the effect model and thus obtain a reliable evaluation of the risk level. The first phase of this work is to evaluate and propagate the uncertainty on the gas concentration induced by uncertain model inputs during its evaluation by dispersion models. Two approaches are used to model and propagate the uncertainties. The first one is the set-membership approach based on interval calculus for analytical models. The second one is the probabilistic approach (Monte Carlo), which is more classical and used more frequently when the dispersion model is described by an analytic expression or is defined by a computer code. The objective is to compare the two approaches to determine their advantages and disadvantages in terms of precision and computation time to solve the proposed problem. To determine the danger zones, two dispersion models (Gaussian and SLAB) are used to evaluate the risk intensity in the contaminated area. The risk mapping is achieved by using two methods: a probabilistic method (Monte Carlo) which consists in solving an inverse problem on the effect model, and a generic set-membership method that formulates the problem as a constraint satisfaction problem (CSP) and solves it with a set-membership inversion method. The second phase consists in establishing a general methodology to realize the risk mapping and to improve performance in terms of computation time and precision. This methodology is based on three steps: - Firstly the analysis of the effect model used. - Secondly the proposal of a new method for the uncertainty propagation, based on a mix between the probabilistic and set-membership approaches, that takes advantage of both approaches and is suited to any type of spatial and static effect model. - Finally the realization of risk mapping by inverting the effect models. The sensitivity analysis in the first step classically applies to probabilistic models. The validity of using Sobol indices for interval models is discussed and a new, purely interval-based sensitivity index is proposed
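The probabilistic (Monte Carlo) propagation step can be sketched on a simplified ground-level Gaussian dispersion formula (the formula and all numbers are assumptions for illustration; the thesis also treats the SLAB model and set-membership propagation): uncertain release rate and wind speed are sampled and pushed through the effect model to obtain the distribution of the concentration at a receptor.

```python
# Sketch of Monte Carlo uncertainty propagation through a simplified
# ground-level Gaussian dispersion model.
import numpy as np

rng = np.random.default_rng(0)

def concentration(Q, u, y, sigma_y=30.0, sigma_z=15.0, H=10.0):
    # Ground-level concentration of an elevated continuous release (simplified)
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * np.exp(-H**2 / (2 * sigma_z**2)))

n = 50_000
Q = rng.normal(1.0, 0.2, n).clip(min=1e-3)      # uncertain release rate (kg/s)
u = rng.normal(3.0, 0.8, n).clip(min=0.5)       # uncertain wind speed (m/s)
c = concentration(Q, u, y=20.0)                  # receptor 20 m off the plume axis

threshold = 2.0e-4                               # assumed toxicity threshold (kg/m^3)
print("median C:", float(np.median(c)),
      " P(C > threshold):", float(np.mean(c > threshold)))
```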
APA, Harvard, Vancouver, ISO, and other styles
40

Gouyou, Doriane. "Introduction de pièces déformables dans l’analyse de tolérances géométriques de mécanismes hyperstatiques." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0343/document.

Full text
Abstract:
Les mécanismes hyperstatiques sont souvent utilisés dans l’industrie pour garantir une bonne tenue mécanique du système et une bonne robustesse aux écarts de fabrication des surfaces. Même si ces assemblages sont très courants, les méthodologies d’analyse de tolérances de ces mécanismes sont difficiles à mettre en oeuvre.En fonction de ses écarts de fabrication, un assemblage hyperstatique peut soit présenter des interférences de montage, soit être assemblé avec jeu. Dans ces travaux de thèse, nous avons appliqué la méthode des polytopes afin de détecter les interférences de montage. Pour un assemblage donné, le polytope résultant du mécanisme est calculé. Si ce polytope est non vide, l’assemblage ne présente pas d’interférence. Si ce polytope est vide, l’assemblage présente des interférences de montage. En fonction du résultat obtenu, deux méthodes d’analyse distinctes sont proposées.Si l’assemblage est réalisable sans interférence le polytope résultant du mécanisme permet de conclure sur sa conformité au regard de l’exigence fonctionnelle. Si l’assemblage présente des interférences de montage, une analyse prenant en compte la raideur des pièces est réalisée. Cette approche est basée sur une réduction de modèle avec des super-éléments. Elle permet de déterminer rapidement l’état d’équilibre du système après assemblage. Un effort de montage est ensuite estimé à partir de ces résultats pour conclure sur la faisabilité de l’assemblage. Si l’assemblage est déclaré réalisable, la propagation des déformations dans les pièces est caractérisée pour vérifier la conformité du système au regard de l’exigence fonctionnelle.La rapidité de mise en oeuvre de ces calculs nous permet de réaliser des analyses de tolérances statistiques par tirage de Monte Carlo pour estimer les probabilités de montage et de respect d’une Condition Fonctionnelle
Over-constrained mechanisms are often used in industry to ensure good mechanical strength and good robustness to manufacturing deviations of parts. The tolerance analysis of such assemblies is difficult to implement. Indeed, depending on the geometrical deviations of the parts, over-constrained mechanisms can exhibit assembly interferences. In this work, we used the polytope method to check whether or not the assembly has interferences. For each assembly, the resulting polytope of the mechanism is computed. If it is non-empty, the assembly can be performed without interference. If not, there are interferences in the assembly. Depending on the result, two different methods can be applied. For an assembly without interference, the resulting polytope makes it possible to check its compliance directly. For an assembly with interferences, a study taking into account the stiffness of the parts is undertaken. This approach uses model reduction with super-elements. It enables the deformed assembly to be computed quickly. Then, an assembly load is computed to conclude on its feasibility. Finally, the spreading of deformation through the parts is calculated to check the compliance of the mechanism. The short computation time makes it possible to perform stochastic tolerance analyses in order to provide the rates of compliant assemblies
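The stochastic tolerance analysis of the last sentence can be caricatured by a toy Monte Carlo clearance check (illustration only; the thesis relies on polytopes and reduced stiffness models rather than this one-dimensional example).

```python
# Toy Monte Carlo tolerance analysis: sample manufacturing deviations, then
# estimate the probability that the assembly is feasible and that a simple
# functional requirement on the clearance is met.  All values are assumed.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
hole = rng.normal(10.05, 0.02, n)        # hole diameter (mm)
pin = rng.normal(10.00, 0.02, n)         # pin diameter (mm)

clearance = hole - pin
assemblable = clearance > 0.0                    # no interference
functional = assemblable & (clearance < 0.12)    # functional requirement on play

print("P(assembly feasible) ~", float(np.mean(assemblable)))
print("P(functional requirement met) ~", float(np.mean(functional)))
```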
APA, Harvard, Vancouver, ISO, and other styles
41

Basei, Matteo. "Topics in stochastic control and differential game theory, with application to mathematical finance." Doctoral thesis, Università degli studi di Padova, 2016. http://hdl.handle.net/11577/3424239.

Full text
Abstract:
We consider three problems in stochastic control and differential game theory, arising from practical situations in mathematical finance and energy markets. First, we address the problem of optimally exercising swing contracts in energy markets. Our main result consists in characterizing the value function as the unique viscosity solution of a Hamilton-Jacobi-Bellman equation. The case of contracts with penalties is straightforward. Conversely, the case of contracts with strict constraints gives rise to stochastic control problems where a non-standard integral constraint is present: we get the anticipated characterization by considering a suitable sequence of unconstrained problems. The approximation result is proved for a general class of problems with an integral constraint on the controls. Then, we consider a retailer who has to decide when and how to intervene and adjust the price of the energy he sells, in order to maximize his earnings. The intervention costs can be either fixed or depending on the market share. In the first case, we get a standard impulsive control problem and we characterize the value function and the optimal price policy. In the second case, classical theory cannot be applied, due to the singularities of the penalty function; we then outline an approximation argument and we finally consider stronger conditions on the controls to characterize the optimal policy. Finally, we focus on a general class of non-zero-sum stochastic differential games with impulse controls. After defining a rigorous framework for such problems, we prove a verification theorem: if a couple of functions is regular enough and satisfies a suitable system of quasi-variational inequalities, it coincides with the value functions of the problem and a characterization of the Nash equilibria is possible. We conclude by a detailed example: we investigate the existence of equilibria in the case where two countries, with different goals, can affect the exchange rate between the corresponding currencies.
In questa tesi vengono considerati tre problemi relativi alla teoria del controllo stocastico e dei giochi differenziali; tali problemi sono legati a situazioni concrete nell'ambito della finanza matematica e, più precisamente, dei mercati dell'energia. Innanzitutto, affrontiamo il problema dell'esercizio ottimale di opzioni swing nel mercato dell'energia. Il risultato principale consiste nel caratterizzare la funzione valore come unica soluzione di viscosità di un'opportuna equazione di Hamilton-Jacobi-Bellman. Il caso relativo ai contratti con penalità può essere trattato in modo standard. Al contrario, il caso relativo ai contratti con vincoli stretti porta a problemi di controllo stocastico in cui è presente un vincolo non standard sui controlli: la suddetta caratterizzazione è allora ottenuta considerando un'opportuna successione di problemi non vincolati. Tale approssimazione viene dimostrata per una classe generale di problemi con vincolo integrale sui controlli. Successivamente, consideriamo un fornitore di energia che deve decidere quando e come intervenire per cambiare il prezzo che chiede ai suoi clienti, al fine di massimizzare il suo guadagno. I costi di intervento possono essere fissi o dipendere dalla quota di mercato del fornitore. Nel primo caso, otteniamo un problema standard di controllo stocastico impulsivo, in cui caratterizziamo la funzione valore e la politica ottimale di gestione del prezzo. Nel secondo caso, la teoria classica non può essere applicata a causa delle singolarità nella funzione che definisce le penalità. Delineiamo quindi una procedura di approssimazione e consideriamo infine condizioni più forti sui controlli, così da caratterizzare, anche in questo caso, il controllo ottimale. Infine, studiamo una classe generale di giochi differenziali a somma non nulla e con controlli di tipo impulsivo. Dopo aver definito rigorosamente tali problemi, forniamo la dimostrazione di un teorema di verifica: se una coppia di funzioni è sufficientemente regolare e soddisfa un opportuno sistema di disequazioni quasi-variazionali, essa coincide con le funzioni valore del problema ed è possibile caratterizzare gli equilibri di Nash. Concludiamo con un esempio dettagliato: indaghiamo l'esistenza di equilibri nel caso in cui due nazioni, con obiettivi differenti, possono condizionare il tasso di cambio tra le rispettive valute.
APA, Harvard, Vancouver, ISO, and other styles
42

Xu, Chuan. "Power-Aware Protocols for Wireless Sensor Networks." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS498/document.

Full text
Abstract:
Ce manuscrit contient d'abord l'étude d'une extension du modèle des protocoles de populations, qui représentent des réseaux de capteurs asynchrones, passivement mobiles, limités en ressources et anonymes. Pour la première fois (à notre connaissance), un modèle formel de consommation d'énergie est proposé pour les protocoles de populations. A titre d'application, nous étudions à la complexité en énergie (dans le pire des cas et en moyenne) pour le problème de collecte de données. Deux protocoles prenant en compte la consommation d'énergie sont proposés. Le premier est déterministe et le second randomisé. Pour déterminer les valeurs optimales des paramètres, nous faisons appel aux techniques d'optimisation. Nous appliquons aussi ces techniques dans un cadre différent, celui des réseaux de capteurs corporels (WBAN). Une formulation de flux est proposée pour acheminer de manière optimale les paquets de données en minimisant la pire consommation d'énergie. Une procédure de recherche à voisinage variable est développée et les résultats numériques montrent son efficacité. Enfin, nous considérons le problème d'optimisation avec des paramètres aléatoires. Précisément, nous étudions un modèle semi-défini positif sous contrainte en probabilité. Un nouvel algorithme basé sur la simulation est proposé et testé sur un problème réel de théorie du contrôle. Nous montrons que notre méthode permet de trouver une solution moins conservatrice que d'autres approches en un temps de calcul raisonnable
In this thesis, we propose a formal energy model which allows an analytical study of energy consumption, for the first time in the context of population protocols. Population protocols model a special kind of sensor network in which anonymous sensors with uniformly bounded memory move unpredictably and communicate in pairs. To illustrate the power and usefulness of the proposed energy model, we present formal analyses of the time and energy, in the worst and average cases, needed to accomplish the fundamental task of data collection. Two power-aware population protocols, (deterministic) EB-TTFM and (randomized) lazy-TTF, are proposed and studied under two different fairness conditions, respectively. Moreover, to obtain the best parameters of lazy-TTF, we adopt optimization techniques and evaluate the resulting performance by experiments. Then, we continue the study of optimization for the power-aware data collection problem in wireless body area networks. A minmax multi-commodity network flow formulation is proposed to optimally route data packets by minimizing the worst-case power consumption. A variable neighborhood search approach is developed and the numerical results show its efficiency. Finally, a stochastic optimization model, namely chance-constrained semidefinite programming, is considered for realistic decision-making problems with random parameters. A novel simulation-based algorithm is proposed, with experiments on a real control theory problem. We show that our method yields a less conservative solution than other approaches within reasonable time
APA, Harvard, Vancouver, ISO, and other styles
43

Cheng, Jianqiang. "Stochastic Combinatorial Optimization." Thesis, Paris 11, 2013. http://www.theses.fr/2013PA112261.

Full text
Abstract:
Dans cette thèse, nous étudions trois types de problèmes stochastiques : les problèmes avec contraintes probabilistes, les problèmes distributionnellement robustes et les problèmes avec recours. Les difficultés des problèmes stochastiques sont essentiellement liées aux problèmes de convexité du domaine des solutions, et du calcul de l’espérance mathématique ou des probabilités qui nécessitent le calcul complexe d’intégrales multiples. A cause de ces difficultés majeures, nous avons résolu les problèmes étudiées à l’aide d’approximations efficaces.Nous avons étudié deux types de problèmes stochastiques avec des contraintes en probabilités, i.e., les problèmes linéaires avec contraintes en probabilité jointes (LLPC) et les problèmes de maximisation de probabilités (MPP). Dans les deux cas, nous avons supposé que les variables aléatoires sont normalement distribués et les vecteurs lignes des matrices aléatoires sont indépendants. Nous avons résolu LLPC, qui est un problème généralement non convexe, à l’aide de deux approximations basée sur les problèmes coniques de second ordre (SOCP). Sous certaines hypothèses faibles, les solutions optimales des deux SOCP sont respectivement les bornes inférieures et supérieures du problème du départ. En ce qui concerne MPP, nous avons étudié une variante du problème du plus court chemin stochastique contraint (SRCSP) qui consiste à maximiser la probabilité de la contrainte de ressources. Pour résoudre ce problème, nous avons proposé un algorithme de Branch and Bound pour calculer la solution optimale. Comme la relaxation linéaire n’est pas convexe, nous avons proposé une approximation convexe efficace. Nous avons par la suite testé nos algorithmes pour tous les problèmes étudiés sur des instances aléatoires. Pour LLPC, notre approche est plus performante que celles de Bonferroni et de Jaganathan. Pour MPP, nos résultats numériques montrent que notre approche est là encore plus performante que l’approximation des contraintes probabilistes individuellement.La deuxième famille de problèmes étudiés est celle relative aux problèmes distributionnellement robustes où une partie seulement de l’information sur les variables aléatoires est connue à savoir les deux premiers moments. Nous avons montré que le problème de sac à dos stochastique (SKP) est un problème semi-défini positif (SDP) après relaxation SDP des contraintes binaires. Bien que ce résultat ne puisse être étendu au cas du problème multi-sac-à-dos (MKP), nous avons proposé deux approximations qui permettent d’obtenir des bornes de bonne qualité pour la plupart des instances testées. Nos résultats numériques montrent que nos approximations sont là encore plus performantes que celles basées sur les inégalités de Bonferroni et celles plus récentes de Zymler. Ces résultats ont aussi montré la robustesse des solutions obtenues face aux fluctuations des distributions de probabilités. Nous avons aussi étudié une variante du problème du plus court chemin stochastique. Nous avons prouvé que ce problème peut se ramener au problème de plus court chemin déterministe sous certaine hypothèses. Pour résoudre ce problème, nous avons proposé une méthode de B&B où les bornes inférieures sont calculées à l’aide de la méthode du gradient projeté stochastique. Des résultats numériques ont montré l’efficacité de notre approche. Enfin, l’ensemble des méthodes que nous avons proposées dans cette thèse peuvent s’appliquer à une large famille de problèmes d’optimisation stochastique avec variables entières
In this thesis, we studied three types of stochastic problems: chance-constrained problems, distributionally robust problems, and simple recourse problems. For stochastic programming problems, there are two main difficulties. One is that the feasible sets of stochastic problems are not convex in general. The other main challenge arises from the need to calculate conditional expectations or probabilities, both of which involve multi-dimensional integration. Because of these two major difficulties, all three problems studied are solved with approximation approaches. We first study two types of chance-constrained problems: the linear program with joint chance constraints (LPPC) and the maximum probability problem (MPP). For both problems, we assume that the random matrix is normally distributed and that its row vectors are independent. We first deal with LPPC, which is generally not convex. We approximate it with two second-order cone programming (SOCP) problems. Furthermore, under mild conditions, the optimal values of the two SOCP problems are lower and upper bounds of the original problem, respectively. For the second problem, we study a variant of the stochastic resource constrained shortest path problem (called SRCSP for short), which maximizes the probability of satisfying the resource constraints. To solve the problem, we propose a branch-and-bound framework to come up with the optimal solution. As the corresponding linear relaxation is generally not convex, we give a convex approximation. Finally, numerical tests on random instances are conducted for both problems. With respect to LPPC, the numerical results show that the approach we propose outperforms the Bonferroni and Jagannathan approximations. For the MPP, the numerical results on generated instances substantiate that the convex approximation outperforms the individual approximation method. Then we study a distributionally robust stochastic quadratic knapsack problem, where we only know part of the information about the random variables, such as their first and second moments. We prove that the single knapsack problem (SKP) becomes a semidefinite program (SDP) after applying the SDP relaxation scheme to the binary constraints. Although this is not the case for the multidimensional knapsack problem (MKP), two good approximations of the relaxed version of the problem are provided which obtain upper and lower bounds that appear numerically close to each other for a range of problem instances. Our numerical experiments also indicate that our proposed lower-bounding approximation outperforms the approximations based on Bonferroni's inequality and the work by Zymler et al. Besides, an extensive set of experiments was conducted to illustrate how the conservativeness of the robust solutions pays off in terms of ensuring that the chance constraint is satisfied (or nearly satisfied) under a wide range of distribution fluctuations. Moreover, our approach can be applied to a large number of stochastic optimization problems with binary variables. Finally, a stochastic version of the shortest path problem is studied. We prove that in some cases the stochastic shortest path problem can be greatly simplified by reformulating it as the classic shortest path problem, which can be solved in polynomial time. To solve the general problem, we propose a branch-and-bound framework to search the set of feasible paths.
Lower bounds are obtained by solving the corresponding linear relaxation, which in turn is done using a Stochastic Projected Gradient algorithm involving an active-set method. Numerical experiments illustrate the effectiveness of the obtained algorithm. Concerning the resolution of the continuous relaxation, our Stochastic Projected Gradient algorithm clearly outperforms the MATLAB optimization toolbox on large graphs
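The building block behind the SOCP bounds of the first part is the classical deterministic equivalent of an individual Gaussian chance constraint; the following sketch states it and cross-checks it by Monte Carlo on assumed toy data.

```python
# Deterministic (second-order cone) form of an individual Gaussian chance
# constraint: if a ~ N(mu, Sigma) and eps < 1/2, then P(a'x <= b) >= 1 - eps
# is equivalent to  mu'x + z_{1-eps} * ||Sigma^{1/2} x|| <= b.
import numpy as np
from scipy.stats import norm

mu = np.array([1.0, 2.0])
Sigma = np.array([[0.20, 0.05], [0.05, 0.10]])
b, eps = 6.0, 0.05
x = np.array([1.0, 1.5])                          # candidate solution (assumed)

lhs = mu @ x + norm.ppf(1 - eps) * np.sqrt(x @ Sigma @ x)
print("deterministic LHS:", round(float(lhs), 3), "<= b =", b, "?", bool(lhs <= b))

rng = np.random.default_rng(0)
a = rng.multivariate_normal(mu, Sigma, size=200_000)
print("Monte Carlo P(a'x <= b) ~", float(np.mean(a @ x <= b)))
```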
APA, Harvard, Vancouver, ISO, and other styles
44

Nenna, Luca. "Numerical Methods for Multi-Marginal Optimal Transportation." Thesis, Paris Sciences et Lettres (ComUE), 2016. http://www.theses.fr/2016PSLED017/document.

Full text
Abstract:
Dans cette thèse, notre but est de donner un cadre numérique général pour approcher les solutions des problèmes du transport optimal (TO). L’idée générale est d’introduire une régularisation entropique du problème initial. Le problème régularisé correspond à minimiser une entropie relative par rapport à une mesure de référence donnée. En effet, cela équivaut à trouver la projection d’un couplage par rapport à la divergence de Kullback-Leibler. Cela nous permet d’utiliser l’algorithme de Bregman/Dykstra et de résoudre plusieurs problèmes variationnels liés au TO. Nous nous intéressons particulièrement à la résolution des problèmes du transport optimal multi-marges (TOMM) qui apparaissent dans le cadre de la dynamique des fluides (équations d’Euler incompressible à la Brenier) et de la physique quantique (la théorie de fonctionnelle de la densité ). Dans ces cas, nous montrons que la régularisation entropique joue un rôle plus important que de la simple stabilisation numérique. De plus, nous donnons des résultats concernant l’existence des transports optimaux (par exemple des transports fractals) pour le problème TOMM
In this thesis we aim at giving a general numerical framework to approximate solutions to optimal transport (OT) problems. The general idea is to introduce an entropic regularization of the initial problems. The regularized problem corresponds to the minimization of a relative entropy with respect to a given reference measure. Indeed, this is equivalent to finding the projection of the joint coupling with respect to the Kullback-Leibler divergence. This allows us to make use of the Bregman/Dykstra algorithm and solve several variational problems related to OT. We are especially interested in solving multi-marginal optimal transport problems (MMOT) arising in Physics, such as in Fluid Dynamics (e.g. incompressible Euler equations à la Brenier) and in Quantum Physics (e.g. Density Functional Theory). In these cases we show that the entropic regularization plays a more important role than a simple numerical stabilization. Moreover, we also give some important results concerning the existence and characterization of optimal transport maps (e.g. fractal maps) for MMOT
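In the two-marginal case, the iterative Bregman projections reduce to the well-known Sinkhorn iterations; a minimal sketch follows (assumed toy data; the multi-marginal extension treated in the thesis is not shown).

```python
# Minimal two-marginal sketch of entropic optimal transport solved by
# Sinkhorn iterations (a special case of iterative Bregman projections).
import numpy as np

rng = np.random.default_rng(0)
n, m, eps = 50, 60, 0.05
x, y = np.sort(rng.uniform(0, 1, n)), np.sort(rng.uniform(0, 1, m))
C = (x[:, None] - y[None, :]) ** 2                  # quadratic ground cost
a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)     # uniform marginals

K = np.exp(-C / eps)
u, v = np.ones(n), np.ones(m)
for _ in range(500):
    u = a / (K @ v)
    v = b / (K.T @ u)
P = u[:, None] * K * v[None, :]                     # regularised transport plan

print("marginal errors:", float(np.abs(P.sum(1) - a).max()),
      float(np.abs(P.sum(0) - b).max()))
print("regularised transport cost:", float(np.sum(P * C)))
```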
APA, Harvard, Vancouver, ISO, and other styles
45

CHUANG, HUI-TING, and 莊惠婷. "Optimization of GPRS Time Slot Allocation Considering Call Blocking Probability Constraints." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/54202451602068361393.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Information Management
90
GPRS is a better solution for mobile data transfer before the arrival of 3G networks. Using a packet-switched technique, one to eight time-slot channels can be dynamically assigned to GPRS users on demand. Because GSM and GPRS users share the same channels and resources, admission control of the different traffic types is needed to optimize data channel allocation. The dynamic slot allocation methodology plays an important role both in maximizing system revenue and in satisfying users' QoS requirements. We try to find the methodology that yields maximum system revenue under call blocking probability constraints. In this thesis we propose two mathematical models for the slot allocation problem. The goal of both models is to find a slot allocation policy that maximizes system revenue subject to the capacity and call blocking probability constraints; the main difference between them is the treatment of time, the first being a discrete-time model and the second a continuous-time model. A Markovian decision process can be applied to maximize system revenue when the call blocking probability constraints are ignored. To take these constraints into account, we propose three approaches: linear programming in the Markovian decision process, Lagrangian relaxation with the Markovian decision process, and an expansion of the Markovian decision process. The most significant contribution of this thesis is the combination of Lagrangian relaxation with the Markovian decision process, which successfully solves the Markovian decision process with additional constraints. The computational results in our experiments are good: we can find a slot allocation policy that maximizes system revenue under the call blocking probability constraints, and compared with the policy commonly used by vendors, the policy we found greatly improves system revenue. Our model can therefore provide much better decisions for system vendors and network planners.
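For orientation, the "linear programming in the Markovian decision process" approach mentioned above typically rests on the standard occupation-measure LP for an average-reward MDP with an additional cost constraint; the formulation below is the generic textbook one (under a unichain assumption) and does not reproduce the thesis's state, action, revenue or blocking definitions:

\[
\begin{aligned}
\max_{\rho \ge 0}\;& \sum_{s,a} \rho(s,a)\, r(s,a) \\
\text{s.t. }& \sum_{a} \rho(s',a) \;=\; \sum_{s,a} \rho(s,a)\, P(s' \mid s,a) \quad \forall s', \\
& \sum_{s,a} \rho(s,a) \;=\; 1, \qquad \sum_{s,a} \rho(s,a)\, c(s,a) \;\le\; \beta ,
\end{aligned}
\]

where \rho(s,a) is the stationary state-action frequency, r the per-step revenue, c a cost counting blocking events and \beta the blocking-probability budget. Dualizing the last constraint with a multiplier \lambda gives the Lagrangian variant: for each fixed \lambda one solves an ordinary unconstrained MDP with reward r - \lambda c and then adjusts \lambda, which is one standard way of combining Lagrangian relaxation with a Markovian decision process, in the spirit of the approach described above.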
APA, Harvard, Vancouver, ISO, and other styles
46

Wu, Chia-Lin, and 吳家麟. "Downlink Interference Management in Underlay Femtocell Networks with Outage Probability Constraints." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/43839186532879003281.

Full text
Abstract:
Master's thesis
National Chiao Tung University
Institute of Communications Engineering
98
Femtocell technology is an attractive alternative for solving the capacity and coverage problems of existing macrocellular networks. To fully exploit and realize its promised potential, the interference issue must be overcome when femtocells are deployed over a macrocell system sharing the same spectrum. In this thesis, we adopt a resource-allocation-based interference control approach. We consider downlink transmissions of an orthogonal frequency division multiple access (OFDMA) based femtocell network. The interference constraint is expressed as an (average) outage probability requirement: a femto base station (BS) is allowed to use a certain part of the spectrum only if such use does not cause an outage for a macrocellular user within the coverage neighborhood. The advantage of the outage probability constraint is that knowledge of the locations of the primary (macrocellular) users is not needed. To satisfy the average outage probability constraint, a femtocell BS has to allocate the downlink resources (subcarriers and power) properly. We divide the resource allocation problem into two subproblems. Semidefinite relaxation (SDR) is used to convert the non-convex subcarrier assignment problem into a convex semidefinite program (SDP), which is then solved by a primal-dual interior-point method. Given the subcarrier assignment, we suggest an iterative water-filling method obtained by solving the KKT conditions of the power allocation problem.
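As a reference point for the power allocation step, the sketch below shows the classic water-filling rule to which KKT-based allocations reduce when only a total power budget is active; it is illustrative Python only (names and interface are ours), and the outage-constrained allocation studied in the thesis modifies the water level through its additional constraints.

import numpy as np

def water_filling(gains, p_total):
    # Classic water-filling over parallel channels: gains[i] is the effective
    # channel gain (received SNR per unit transmit power) of subcarrier i.
    g = np.asarray(gains, dtype=float)
    g_sorted = np.sort(g)[::-1]                               # strongest channels first
    level = 0.0
    for k in range(len(g_sorted), 0, -1):
        level = (p_total + np.sum(1.0 / g_sorted[:k])) / k    # candidate water level
        if level >= 1.0 / g_sorted[k - 1]:                    # weakest of the k channels still active
            break
    return np.maximum(level - 1.0 / g, 0.0)                   # per-subcarrier powers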
APA, Harvard, Vancouver, ISO, and other styles
47

Szabados, Viktor. "Nové trendy ve stochastickém programování." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-367895.

Full text
Abstract:
Stochastic methods are present in our daily lives, especially when we need to make decisions based on uncertain events. In this thesis, we present the basic approaches used in stochastic problems. In the first chapter, we define the stochastic problem and introduce the basic methods and tasks found in the literature. In the second chapter, we present various problems that depend non-linearly on the probability measure, and we introduce deterministic and non-deterministic multicriteria tasks. In the third chapter, we give an insight into the concept of stochastic dominance and describe the methods used in tasks with multidimensional stochastic dominance. In the fourth chapter, we build on the material of chapters two and three and solve a portfolio optimization problem on real data using different approaches.
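For readers unfamiliar with the notion, the second-order stochastic dominance relation used in such portfolio models is the standard one (generic notation, not the thesis's): a random return X dominates a benchmark Y in second order when

\[
\mathbb{E}\big[(\eta - X)_{+}\big] \;\le\; \mathbb{E}\big[(\eta - Y)_{+}\big]
\qquad \text{for all } \eta \in \mathbb{R},
\]

and a dominance-constrained portfolio problem then maximizes, say, expected return over admissible weights x subject to x^{\top}\xi \succeq_{\mathrm{SSD}} Y.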
APA, Harvard, Vancouver, ISO, and other styles
48

Lapšanská, Alica. "Úlohy vícestupňového stochastického programování - dekompozice." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-336705.

Full text
Abstract:
The thesis deals with a multistage stochastic model and its application to a number of practical problems. Special attention is devoted to the case where the random element follows an autoregressive sequence and the constraint sets correspond to individual probability constraints. For this case, conditions under which the problem is well defined are specified. Further, the approximation of the problem and its convergence rate under the empirical estimate of the distribution function are analyzed. Finally, an example of investment in financial instruments is solved; it is formulated as a two-stage stochastic programming problem with a probability constraint and a random element following an autoregressive sequence.
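Schematically, and with generic notation rather than the thesis's, the setting combines individual probability constraints with an autoregressive random element:

\[
\min_{x \in X} \; \mathbb{E}\big[f(x,\xi_t)\big]
\quad \text{s.t.} \quad
\mathbb{P}\big(g_i(x,\xi_t) \le 0\big) \ge 1-\alpha_i, \;\; i=1,\dots,m,
\qquad
\xi_t = \rho\,\xi_{t-1} + \varepsilon_t ,
\]

with i.i.d. innovations \varepsilon_t; in computations the distribution of \xi_t is replaced by an empirical estimate, which is where the convergence-rate analysis mentioned above enters.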
APA, Harvard, Vancouver, ISO, and other styles
49

Uhliar, Miroslav. "Ekonomické růstové modely ve stochastickém prostředí." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-367898.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Chen, Jiunn-Yann, and 陳俊諺. "Low-Complexity Symmetry-Constrained Maximum-A-Posteriori Probability Algorithm for Adaptive Blind Beamforming." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/09482189312002402759.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
101
In this thesis, we propose a real-valued SC-MAP (RSC-MAP) algorithm for the concurrent adaptive filter (CAF) applied to beamforming. We first derive a closed-form optimal weight expression for the blind MAP algorithm and further establish a conjugate symmetry property of the optimal blind MAP weights. We then use the conjugate symmetry constraint to guide the proposed RSC-MAP algorithm so that it follows the form of the optimal blind MAP expression during the adaptation procedure. Simulations show that the proposed RSC-MAP algorithm performs better than the classic ones. Compared with SC-MAP, RSC-MAP achieves the same bit-error-rate performance with lower computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
