Academic literature on the topic 'Convexe optimierung'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Convexe optimierung' (convex optimization). Abstracts are reproduced below whenever they are available in the metadata.

Dissertations / Theses on the topic "Convexe optimierung"

1. Bot, Radu Ioan, Ernö Robert Csetnek, and Erika Nagy. "Solving systems of monotone inclusions via primal-dual splitting techniques." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-108172.

Abstract:
In this paper we propose an algorithm for solving systems of coupled monotone inclusions in Hilbert spaces. The operators arising in each inclusion of the system are processed separately in each iteration: the single-valued operators are evaluated explicitly (forward steps), while the set-valued ones are handled via their resolvents (backward steps). In addition, most steps of the iterative scheme can be executed simultaneously, which makes the method applicable to a variety of convex minimization problems. The numerical performance of the proposed splitting algorithm is illustrated through applications to average consensus on colored networks and to image classification via support vector machines.
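
For orientation, the forward-backward pattern described in this abstract can be illustrated on the simplest case of a single inclusion 0 ∈ ∇f(x) + ∂g(x), i.e. minimizing f(x) + g(x) with f smooth and g nonsmooth. The Python sketch below is a generic textbook proximal-gradient iteration, not the authors' coupled-system algorithm; the instance (least squares plus an l1 penalty) and all parameters are illustrative assumptions.

import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=500):
    # Forward-backward splitting: an explicit (forward) gradient step on the
    # smooth part f, followed by a backward step, i.e. the resolvent/prox of g.
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Illustrative instance: min_x 0.5*||Ax - b||^2 + lam*||x||_1 (hypothetical data).
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)              # gradient of the smooth part
prox_g = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # soft-thresholding
step = 1.0 / np.linalg.norm(A, 2) ** 2            # step 1/L, L = Lipschitz constant of grad_f
x_opt = forward_backward(grad_f, prox_g, np.zeros(5), step)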

2. Vargyas, Emese Tünde. "Duality for convex composed programming problems." [S.l. : s.n.], 2004. http://archiv.tu-chemnitz.de/pub/2004/0179.

3. Schellewald, Christian. "Convex Mathematical Programs for Relational Matching of Object Views." [S.l. : s.n.], 2005. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB11947807.

4. Hodrea, Ioan Bogdan. "Farkas-type results for convex and non-convex inequality systems." Doctoral thesis, [S.l. : s.n.], 2008. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-200800075.

5. Kuhn, Daniel. "Generalized bounds for convex multistage stochastic programs." Berlin [u.a.]: Springer, 2005. http://www.loc.gov/catdir/enhancements/fy0818/2004109705-d.html.

6. Vargyas, Emese Tünde. "Duality for convex composed programming problems." Doctoral thesis, Universitätsbibliothek Chemnitz, 2004. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200401793.

Abstract:
The goal of this work is to present a conjugate duality treatment of composed programming and to give an overview of some recent developments in both scalar and multiobjective optimization. To this end, we first study a single-objective optimization problem in which both the objective function and the constraints are given by composed functions. By means of the conjugacy approach based on perturbation theory, we attach several kinds of dual problems to it and examine the relations between their optimal objective values. Under additional assumptions, we verify the equality of the optimal objective values of the duals and establish strong duality between the primal and the dual problems, respectively. Having proved strong duality, we derive optimality conditions for each of these duals. As special cases of the original problem, we study duality for the classical optimization problem with inequality constraints and for the unconstrained optimization problem. The second part of this work is devoted to location analysis. Considering first the location model with monotonic gauges, it turns out that the same conjugate duality principle can also be used to solve this kind of problem. Replacing the monotonic gauges in the objective function by various norms, we then investigate duality for different location problems. We finish with the study of composed multiobjective optimization problems. To do so, we first scalarize the problem and study the scalarized version using the conjugacy approach developed before. The optimality conditions obtained in this case allow us to construct a multiobjective dual problem to the primal one, and weak and strong duality are proved. In conclusion, some special cases of the composed multiobjective optimization problem are considered. Having treated the general problem, we particularize the results to construct a multiobjective dual for each of them and verify weak and strong duality.
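
As background for the conjugacy approach based on perturbation theory mentioned in this abstract, the classical Fenchel duality scheme for a composed objective (a generic textbook construction, not the specific duals derived in the thesis) reads:

\[
  (P)\quad \inf_{x \in X} \; f(x) + g(Ax),
  \qquad
  (D)\quad \sup_{y \in Y^{*}} \; \bigl( -f^{*}(-A^{*}y) - g^{*}(y) \bigr),
\]

where $f^{*}$ and $g^{*}$ denote the conjugate functions and $A^{*}$ the adjoint of the linear operator $A$. Weak duality $v(D) \le v(P)$ always holds, while strong duality $v(D) = v(P)$, together with the associated optimality conditions, requires an additional regularity (constraint qualification) assumption.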

7. Hendrich, Christopher. "Proximal Splitting Methods in Nonsmooth Convex Optimization." Doctoral thesis, Universitätsbibliothek Chemnitz, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-149548.

Abstract:
This thesis is concerned with the development of novel numerical methods for solving nondifferentiable convex optimization problems in real Hilbert spaces and with the investigation of their asymptotic behavior. To this end we also make use of monotone operator theory, as some of the provided algorithms are originally designed to solve monotone inclusion problems. After introducing basic notation and preliminary results in convex analysis, we derive two numerical methods based on different smoothing strategies for solving nondifferentiable convex optimization problems. The first approach, known as the double smoothing technique, solves the optimization problem to a given a priori accuracy by applying two regularizations to its conjugate dual problem. A special fast gradient method then solves the regularized dual problem such that an approximate primal solution can be reconstructed from it. The second approach acts on the primal optimization problem directly by applying a single regularization to it, and is capable of using variable smoothing parameters, which lead to a more accurate approximation of the original problem as the iteration counter increases. We then derive and investigate different primal-dual methods in real Hilbert spaces. In general, one considerable advantage of primal-dual algorithms is that they provide a complete splitting: the resolvents arising in the iterative process are taken separately for each maximally monotone operator occurring in the problem description. We first analyze the forward-backward-forward algorithm of Combettes and Pesquet in terms of its convergence rate for the objective of a nondifferentiable convex optimization problem, and we propose accelerations of this method under the additional assumption that certain monotone operators occurring in the problem formulation are strongly monotone. Subsequently, we derive two Douglas–Rachford type primal-dual methods for solving monotone inclusion problems involving finite sums of linearly composed parallel sum type monotone operators. To prove their asymptotic convergence, we use a product Hilbert space strategy, reformulating the corresponding inclusion problem so that the Douglas–Rachford algorithm can be applied to it. Finally, we propose two primal-dual algorithms relying on forward-backward and forward-backward-forward approaches for solving monotone inclusion problems involving parallel sums of linearly composed monotone operators. The last part of this thesis deals with numerical experiments in which we compare our methods against algorithms from the literature. The problems arising in this part are manifold and reflect the importance of this field of research, as convex optimization problems appear in many applications of interest.
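
To give a flavour of the 'complete splitting' philosophy highlighted in this abstract, the Python sketch below shows a generic Douglas–Rachford iteration for minimizing f(x) + g(x), in which f and g are accessed only through their separate proximal mappings. This is a textbook scheme, not one of the primal-dual methods developed in the thesis; the feasibility instance (a box intersected with a ball) and all parameters are illustrative assumptions.

import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, n_iter=300):
    # Douglas-Rachford splitting: each iteration evaluates the resolvents
    # (proximal maps) of f and g separately, never of their sum.
    z = z0
    for _ in range(n_iter):
        x = prox_f(z, gamma)
        y = prox_g(2 * x - z, gamma)   # prox of g at the reflected point
        z = z + (y - x)                # update the governing sequence
    return prox_f(z, gamma)

# Illustrative instance: find a point in the intersection of the box [-1,1]^3
# and a Euclidean ball (hypothetical sets, chosen so that they intersect).
prox_f = lambda v, t: np.clip(v, -1.0, 1.0)         # projection onto the box
center, radius = np.full(3, 2.0), 2.0
def prox_g(v, t):                                    # projection onto the ball
    d = v - center
    n = np.linalg.norm(d)
    return center + d * min(1.0, radius / n) if n > 0 else center.copy()

x = douglas_rachford(prox_f, prox_g, np.zeros(3))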

8. Waldherr, Steffen. "Uncertainty and robustness analysis of biochemical reaction networks via convex optimisation and robust control theory." Düsseldorf: VDI-Verl., 2009. http://d-nb.info/999624148/04.

9. Rätsch, Gunnar. "Robust boosting via convex optimization." PhD thesis, Universität Potsdam, 2001. http://opus.kobv.de/ubp/volltexte/2005/39/.

Abstract:
In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predicts well on unseen data. We address the following issues:

o The statistical learning theory framework for analyzing boosting methods.
We study learning-theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques have emerged as a practical result of the theory of generalization, in particular boosting and support vector machines. A large margin implies good generalization performance. Hence, we analyze how large the margins in boosting are and propose an improved algorithm that efficiently generates maximum margin solutions.

o How can boosting methods be related to mathematical optimization techniques?
To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large-scale constrained optimization problems whose solutions are well characterized. To show this, we relate boosting methods to methods known from convex optimization and derive convergence guarantees for a quite general family of boosting algorithms.

o How can boosting be made robust to noise?
One problem of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft-margin idea from support vector learning to boosting. We develop theoretically motivated, regularized algorithms that exhibit high noise robustness.

o How can boosting be adapted to regression problems?
Boosting methods were originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties.

o Can boosting techniques be useful in practice?
The presented theoretical results are accompanied by simulation results, either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis, and a drug discovery process.
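
To make the iteratively reweighted combination of base hypotheses concrete, the following Python sketch implements plain AdaBoost with decision stumps. It is the classical unregularized algorithm on made-up data, not the soft-margin or regression variants developed in this thesis.

import numpy as np

def adaboost_stumps(X, y, n_rounds=20):
    # Plain AdaBoost: reweight the examples in each round and combine the
    # weak 'stumps' (one-feature threshold tests) into a weighted vote.
    n, d = X.shape
    w = np.full(n, 1.0 / n)                   # example weights
    ensemble = []                             # (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        for j in range(d):                    # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()  # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, thr, s, pred)
        err, j, thr, s, pred = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err) # weight of this weak rule
        w = w * np.exp(-alpha * y * pred)     # upweight misclassified examples
        w = w / w.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def predict(ensemble, X):
    votes = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(votes)

# Hypothetical toy data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
model = adaboost_stumps(X, y)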

---
Note:
The author is the recipient of the Michelson Prize, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best doctoral dissertation of the year 2001/2002.

10. Fischer, Frank. "Dynamic Graph Generation and an Asynchronous Parallel Bundle Method Motivated by Train Timetabling." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-118358.

Abstract:
Lagrangian relaxation is a successful solution approach for many combinatorial optimisation problems, one of them being the train timetabling problem (TTP). We model this problem using time-expanded networks for the single-train schedules and coupling constraints to enforce restrictions like station capacities and headway times. Lagrangian relaxation of these coupling constraints leads to shortest-path subproblems in the time-expanded networks and is solved using a proximal bundle method. However, large instances provided by our practical partner Deutsche Bahn lead to computationally intractable models. In this thesis we develop two new algorithmic techniques to improve the solution process for this kind of optimisation problem. The first technique, Dynamic Graph Generation (DGG), aims at improving the computation of the shortest-path subproblems in large time-expanded networks. Without sacrificing any accuracy, DGG makes it possible to store only small parts of the networks and to extend them dynamically whenever the stored part proves to be too small. This is possible by exploiting a property of the objective function common to many scheduling applications, namely that it prefers early paths or due times, respectively. We prove that DGG can be implemented very efficiently and that its running time and the number of nodes that have to be stored additionally depend not on the size of the time-expanded network but only on the length of the train routes. The second technique is an asynchronous parallel bundle method (APBM). Traditional bundle methods require one solution of each subproblem in each iteration. However, many practical applications, e.g. the TTP, consist of rather loosely coupled subproblems. The APBM chooses only small subspaces corresponding to the Lagrange multipliers of strongly violated coupling constraints and optimises only these variables while keeping all others fixed. Several subspaces of disjoint variables may be chosen simultaneously and optimised in parallel. The solutions of the subspace problems are incorporated into the global data as soon as they are available, without any synchronisation mechanism. In order to guarantee convergence, however, the algorithm automatically detects dependencies between different subspaces and respects them in future subspace selections. We prove the convergence of the APBM under reasonable assumptions for both the dual and the associated primal aggregate data. The APBM is then further extended to problems with unknown dependencies between subproblems and constraints in the Lagrangian relaxation problem. Again the algorithm automatically detects these dependencies and respects them in future iterations, and again we prove convergence under reasonable assumptions. Finally, we test our solution approach for the TTP on real-world instances of Deutsche Bahn. Using an iterative rounding heuristic based on the approximate fractional solutions obtained from the Lagrangian relaxation, we are able to compute feasible schedules for all trains in a subnetwork covering about 10% of the whole German network in about 12 hours. In these timetables, 99% of all passenger trains are scheduled with no significant delay, and the travel time of the freight trains is reduced by about one hour on average.
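
As a pocket-sized illustration of the Lagrangian relaxation scheme described in this abstract, the Python sketch below relaxes the coupling constraints of a toy binary program and maximises the resulting dual with a plain projected subgradient method. The data are invented, and the subgradient update is a deliberately simple stand-in for the proximal bundle method (and the DGG/APBM machinery) developed in the thesis.

import numpy as np

def lagrangian_dual_bound(c, A, b, n_iter=200):
    # Toy problem: min c@x subject to coupling constraints A@x <= b, x in {0,1}^n.
    # Relaxing A@x <= b with multipliers lam >= 0 makes the inner problem
    # separable, so it can be solved exactly coordinate by coordinate.
    lam = np.zeros(len(b))
    best = -np.inf
    for k in range(1, n_iter + 1):
        reduced = c + A.T @ lam             # reduced costs under current multipliers
        x = (reduced < 0).astype(float)     # exact solution of the relaxed subproblem
        dual_val = reduced @ x - lam @ b    # dual function value, a lower bound
        best = max(best, dual_val)
        g = A @ x - b                       # subgradient of the concave dual at lam
        lam = np.maximum(lam + g / k, 0.0)  # projected ascent with diminishing steps
    return best

# Hypothetical instance with 4 variables and 2 coupling constraints.
c = np.array([-2.0, -1.0, -3.0, 1.0])
A = np.array([[1.0, 1.0, 1.0, 0.0], [0.0, 1.0, 1.0, 1.0]])
b = np.array([2.0, 1.0])
print(lagrangian_dual_bound(c, A, b))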

Books on the topic "Convexe optimierung"

1. Grötschel, Martin, László Lovász, and Alexander Schrijver. Geometric Algorithms and Combinatorial Optimization. Berlin: Springer-Verlag, 1988.

2. Grötschel, Martin, László Lovász, and Alexander Schrijver. Geometric Algorithms and Combinatorial Optimization. 2nd ed. Berlin: Springer-Verlag, 1993.

3. Teboulle, M., ed. Asymptotic Cones and Functions in Optimization and Variational Inequalities. New York: Springer, 2003.

4. Rockafellar, R. Tyrrell. Network Flows and Monotropic Optimization. Athena Scientific, 1998.

5. Auslender, Alfred, and Marc Teboulle. Asymptotic Cones and Functions in Optimization and Variational Inequalities. Springer, 2002.
