Academic literature on the topic 'Algorithms. Convex functions. Programming (Mathematics)'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Algorithms. Convex functions. Programming (Mathematics).'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Algorithms. Convex functions. Programming (Mathematics)"

1

Kebaili, Zahira, and Mohamed Achache. "Solving nonmonotone affine variational inequalities problem by DC programming and DCA." Asian-European Journal of Mathematics 13, no. 03 (December 17, 2018): 2050067. http://dx.doi.org/10.1142/s1793557120500679.

Abstract:
In this paper, we consider an optimization model for solving the nonmonotone affine variational inequalities problem (AVI). It is formulated as a DC (Difference of Convex functions) program to which DCA (DC Algorithms) is applied. The resulting DCA is simple: it consists of solving a sequence of convex quadratic programs. Numerical experiments on several test problems illustrate the efficiency of the proposed approach in terms of the quality of the obtained solutions and the speed of convergence.
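As a toy illustration of the DCA scheme this abstract refers to (not the paper's AVI formulation; the test function f(x) = x^4 - 8x^2 and its DC split g(x) = x^4, h(x) = 8x^2 are invented here), each iteration linearizes the concave part and solves the resulting convex subproblem:

```python
# Toy DCA (Difference-of-Convex Algorithm) sketch. Minimize
# f(x) = g(x) - h(x) with g(x) = x**4 (convex) and h(x) = 8*x**2 (convex);
# the function and DC split are illustrative, not taken from the paper.

def dca(x0, iters=50):
    x = x0
    for _ in range(iters):
        y = 16.0 * x  # gradient of h at the current iterate
        # Convex subproblem: argmin_x g(x) - y*x, i.e. 4*x**3 = y (closed form)
        x = (abs(y) / 4.0) ** (1.0 / 3.0) * (1.0 if y >= 0 else -1.0)
    return x

print(round(dca(1.0), 4))  # 2.0, a minimizer of x**4 - 8*x**2
```

Each subproblem here has a closed-form solution; in the paper's setting the analogous step is a convex quadratic program.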
2

Eckstein, Jonathan. "Nonlinear Proximal Point Algorithms Using Bregman Functions, with Applications to Convex Programming." Mathematics of Operations Research 18, no. 1 (February 1993): 202–26. http://dx.doi.org/10.1287/moor.18.1.202.

3

Awais, Hafiz Muhammad, Tahir Nadeem Malik, and Aftab Ahmad. "Artificial Algae Algorithm with Multi-Light Source Movement for Economic Dispatch of Thermal Generation." Mehran University Research Journal of Engineering and Technology 39, no. 3 (July 1, 2020): 564–82. http://dx.doi.org/10.22581/muet1982.2003.12.

Abstract:
Economic Dispatch (ED) is one of the major concerns for the efficient and economical operation of the modern power system. The actual ED problem is non-convex in nature due to Ramp Rate Limits (RRL), Valve-Point Loading Effects (VPLE), and Prohibited Operating Zones (POZs). It is generally converted into a convex problem because mathematical-programming-based approaches cannot handle non-convex cost functions, except for dynamic programming, which suffers from the curse of dimensionality. Heuristic techniques are potential solution methodologies for the non-convex ED problem. The Artificial Algae Algorithm (AAA), a recent meta-heuristic optimization approach, showed remarkable results on certain MATLAB benchmark functions, but its application to industrial problems such as ED is yet to be explored. In this paper, AAA is used to investigate the convex and non-convex ED problem with valve-point effects and POZs while considering transmission losses. The robustness and effectiveness of the proposed approach are validated by implementing it on IEEE standard test systems (3-, 6-, 13- and 40-unit test systems), which are widely addressed in the literature. The simulation results are promising when compared with other well-known evolutionary algorithms, showing the potential and stability of this algorithm.
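The non-convexity from valve-point loading that the abstract describes is commonly modeled by adding a rectified sinusoid to a quadratic fuel cost. A sketch with made-up coefficients (not the paper's IEEE test-system data, and omitting POZs and ramp limits):

```python
import math

# Illustrative valve-point fuel-cost curve; all coefficients are invented.
def fuel_cost(p, a=100.0, b=2.0, c=0.01, e=50.0, f=0.063, p_min=10.0):
    return a + b * p + c * p * p + abs(e * math.sin(f * (p_min - p)))

# Midpoint test of convexity over a grid: a convex function satisfies
# g((x+y)/2) <= (g(x)+g(y))/2 for all x, y in the interval.
def is_midpoint_convex(g, lo, hi, n=200):
    pts = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return all(g(0.5 * (x + y)) <= 0.5 * (g(x) + g(y)) + 1e-9
               for x in pts for y in pts)

print(is_midpoint_convex(fuel_cost, 10.0, 100.0))  # False: the ripple breaks convexity
```

The sinusoidal ripple dominates the small quadratic curvature near its peaks, which is exactly why gradient-free heuristics such as AAA are attractive here.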
4

Cocan, Moise, and Bogdana Pop. "An algorithm for solving the problem of convex programming with several objective functions." Korean Journal of Computational & Applied Mathematics 6, no. 1 (January 1999): 79–88. http://dx.doi.org/10.1007/bf02941908.

5

Östermark, Ralf. "A parallel algorithm for optimizing the capital structure contingent on maximum value at risk." Kybernetes 44, no. 3 (March 2, 2015): 384–405. http://dx.doi.org/10.1108/k-08-2014-0171.

Abstract:
Purpose – The purpose of this paper is to measure the financial risk and optimal capital structure of a corporation.
Design/methodology/approach – Irregular disjunctive programming problems arising in firm models and risk management can be solved by the techniques presented in the paper.
Findings – Parallel processing and mathematical modeling provide a fruitful basis for solving ultra-scale non-convex general disjunctive programming (GDP) problems, where the computational challenge in direct mixed-integer non-linear programming (MINLP) formulations or single-processor algorithms would be insurmountable.
Research limitations/implications – The test is limited to a single firm in an experimental setting. Repeating the test on a large sample of firms in future research will indicate the general validity of Monte-Carlo-based VAR estimation.
Practical implications – The authors show that the risk surface of the firm can be approximated by integrated use of accounting logic, corporate finance, mathematical programming, stochastic simulation and parallel processing.
Originality/value – Parallel processing has the potential to simplify large-scale MINLP and GDP problems with non-convex, multi-modal and discontinuous parameter-generating functions and to solve them faster and more reliably than conventional approaches on single processors.
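The Monte-Carlo-based VAR estimation mentioned in the limitations can be sketched minimally; the return distribution (normal, mean 5%, standard deviation 20%) and the confidence level are invented for illustration, not taken from the paper's firm model:

```python
import random

# Toy Monte-Carlo Value-at-Risk: simulate returns, take the empirical
# alpha-quantile of the loss distribution. All parameters are illustrative.
def monte_carlo_var(n=100_000, alpha=0.95, seed=7):
    random.seed(seed)
    losses = sorted(-random.gauss(0.05, 0.2) for _ in range(n))
    return losses[int(alpha * n)]  # empirical alpha-quantile of the loss

print(monte_carlo_var() > 0.0)  # True: the 95% tail loss is positive despite a positive mean
```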
6

Chao, Miantao, Yongxin Zhao, and Dongying Liang. "A Proximal Alternating Direction Method of Multipliers with a Substitution Procedure." Mathematical Problems in Engineering 2020 (April 27, 2020): 1–12. http://dx.doi.org/10.1155/2020/7876949.

Abstract:
In this paper, we consider the separable convex programming problem with linear constraints. Its objective function is the sum of m individual blocks with nonoverlapping variables, and each block consists of two convex functions, one of which is smooth. For the general case m ≥ 3, we present a gradient-based alternating direction method of multipliers with a substitution. We prove the convergence of the proposed algorithm via the analytic framework of contractive-type methods and derive a worst-case O(1/t) convergence rate in the nonergodic sense. Finally, some preliminary numerical results are reported to support the efficiency of the proposed algorithm.
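A minimal two-block ADMM sketch conveys the flavor of the method; the paper's algorithm is an m-block proximal variant with a substitution procedure, while the toy objective (x-3)^2 + |z| subject to x = z and the penalty rho below are illustrative choices:

```python
# Classic two-block ADMM template (not the paper's m-block variant):
# minimize (x-3)**2 + |z| subject to x = z.

def soft_threshold(v, k):      # prox operator of k*|.|
    return max(v - k, 0.0) + min(v + k, 0.0)

def admm(rho=1.0, iters=200):
    x = z = u = 0.0            # u is the scaled dual variable
    for _ in range(iters):
        x = (6.0 + rho * (z - u)) / (2.0 + rho)  # x-update: quadratic solve
        z = soft_threshold(x + u, 1.0 / rho)     # z-update: prox of |.|
        u += x - z                               # dual ascent on the constraint
    return x

print(round(admm(), 3))  # 2.5, the minimizer of (x-3)**2 + |x|
```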
7

Dias, Bruno H., André L. M. Marcato, Reinaldo C. Souza, Murilo P. Soares, Ivo C. Silva Junior, Edimar J. de Oliveira, Rafael B. S. Brandi, and Tales P. Ramos. "Stochastic Dynamic Programming Applied to Hydrothermal Power Systems Operation Planning Based on the Convex Hull Algorithm." Mathematical Problems in Engineering 2010 (2010): 1–20. http://dx.doi.org/10.1155/2010/390940.

Abstract:
This paper presents a new approach for the expected cost-to-go functions modeling used in the stochastic dynamic programming (SDP) algorithm. The SDP technique is applied to the long-term operation planning of electrical power systems. Using state space discretization, the Convex Hull algorithm is used for constructing a series of hyperplanes that composes a convex set. These planes represent a piecewise linear approximation for the expected cost-to-go functions. The mean operational costs for using the proposed methodology were compared with those from the deterministic dual dynamic problem in a case study, considering a single inflow scenario. This sensitivity analysis shows the convergence of both methods and is used to determine the minimum discretization level. Additionally, the applicability of the proposed methodology for two hydroplants in a cascade is demonstrated. With proper adaptations, this work can be extended to a complete hydrothermal system.
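The piecewise-linear approximation of a convex expected cost-to-go function can be sketched as a maximum over supporting hyperplanes; the sample cost curve and cut points below are illustrative, not the paper's hydrothermal model:

```python
# Piecewise-linear under-approximation of a convex cost-to-go function as
# the max over tangent cuts; the quadratic stand-in curve is illustrative.

def cost_to_go(v):
    return (v - 5.0) ** 2  # stand-in for an expected cost-to-go curve

def build_cuts(points, h=1e-6):
    cuts = []
    for p in points:
        slope = (cost_to_go(p + h) - cost_to_go(p - h)) / (2 * h)  # numeric tangent
        cuts.append((slope, cost_to_go(p) - slope * p))            # line y = a*v + b
    return cuts

def approx(v, cuts):
    return max(a * v + b for a, b in cuts)  # piecewise-linear lower envelope

cuts = build_cuts([0.0, 2.5, 5.0, 7.5, 10.0])
print(approx(5.0, cuts) <= cost_to_go(5.0))  # True: cuts under-estimate a convex function
```

Adding more cut points tightens the approximation, which mirrors the discretization-level analysis in the abstract.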
8

SUN, YIJUN, SINISA TODOROVIC, and JIAN LI. "REDUCING THE OVERFITTING OF ADABOOST BY CONTROLLING ITS DATA DISTRIBUTION SKEWNESS." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 07 (November 2006): 1093–116. http://dx.doi.org/10.1142/s0218001406005137.

Abstract:
AdaBoost rarely suffers from overfitting in low-noise data cases. However, recent studies with highly noisy patterns have clearly shown that overfitting can occur. A natural strategy to alleviate the problem is to penalize the skewness of the data distribution during learning, to prevent a few of the hardest examples from spoiling the decision boundary. In this paper, we pursue such a penalty scheme in the mathematical programming setting, which allows us to define a suitable classifier soft margin. Using two smooth convex penalty functions, based on the Kullback–Leibler (KL) divergence and the l2 norm, we derive two new regularized AdaBoost algorithms, referred to as AdaBoostKL and AdaBoostNorm2, respectively. We prove that our algorithms perform stage-wise gradient descent on a cost function defined in the domain of their associated soft margins. We demonstrate the effectiveness of the proposed algorithms through experiments over a wide variety of data sets. Compared with other regularized AdaBoost algorithms, our methods achieve at least the same or better performance.
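The distribution skewness that the paper penalizes comes from AdaBoost's exponential reweighting; a sketch with fabricated margins (not the paper's AdaBoostKL/AdaBoostNorm2 updates) shows a single noisy example coming to dominate the distribution:

```python
import math

# AdaBoost's weight update: examples with negative margin (misclassified)
# gain weight exponentially. Margins and alpha are fabricated for illustration.
def update_weights(w, margins, alpha):
    w = [wi * math.exp(-alpha * m) for wi, m in zip(w, margins)]
    s = sum(w)
    return [wi / s for wi in w]  # renormalize to a distribution

w = [0.25] * 4
# three consistently correct examples (margin +1), one noisy one (margin -1)
for _ in range(5):
    w = update_weights(w, [1, 1, 1, -1], alpha=0.5)
print(max(w) > 0.9)  # True: the single hard example dominates the distribution
```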
9

Xu, Lei. "One-Bit-Matching Theorem for ICA, Convex-Concave Programming on Polyhedral Set, and Distribution Approximation for Combinatorics." Neural Computation 19, no. 2 (February 2007): 546–69. http://dx.doi.org/10.1162/neco.2007.19.2.546.

Abstract:
According to the proof by Liu, Chiu, and Xu (2004) of the so-called one-bit-matching conjecture (Xu, Cheung, and Amari, 1998a), all the sources can be separated as long as there is a one-to-one same-sign correspondence between the kurtosis signs of all source probability density functions (pdf's) and the kurtosis signs of all model pdf's, which is widely believed and implicitly supported by many empirical studies. However, this proof holds only in a weak sense: the conjecture is true when the global optimal solution of an independent component analysis criterion is reached. Thus, it cannot support the successes of many existing iterative algorithms that usually converge to one of the local optimal solutions. This article presents a new mathematical proof in a strong sense, namely that the conjecture is also true when any local optimal solution is reached, obtained by investigating convex-concave programming on a polyhedral set. Theorems are also provided on partial separation of sources when there is a partial matching between the kurtosis signs, and on an interesting duality of maximization and minimization in source separation. Corollaries of this duality show that supergaussian sources are separated by maximization and subgaussian sources by minimization. A further corollary confirms the symmetric orthogonalization implementation of the kurtosis extreme approach for separating multiple sources in parallel, which works empirically but lacked mathematical proof. Furthermore, a linkage has been set up to combinatorial optimization from a distribution-approximation perspective and a Stiefel-manifold perspective, with algorithms that guarantee convergence as well as satisfaction of constraints.
10

Popkov, Alexander S. "Optimal program control in the class of quadratic splines for linear systems." Vestnik of Saint Petersburg University. Applied Mathematics. Computer Science. Control Processes 16, no. 4 (2020): 462–70. http://dx.doi.org/10.21638/11701/spbu10.2020.411.

Abstract:
This article describes an algorithm for solving the optimal control problem in the case when the considered process is described by a linear system of ordinary differential equations. The initial and final states of the system are fixed, and two-sided constraints on the control functions are defined. The purpose of optimization is to minimize a quadratic functional of the control variables. The control is selected in the class of quadratic splines, an evolution of the method in which the control is selected in the class of piecewise constant functions. Conveniently, through the addition or removal of constraints at the knots, the control function can be made piecewise continuous, continuous, or continuously differentiable. The solution algorithm reduces the control problem to a convex mixed-integer quadratically-constrained programming problem, which can be solved using well-known optimization methods implemented in special software.

Dissertations / Theses on the topic "Algorithms. Convex functions. Programming (Mathematics)"

1

Potaptchik, Marina. "Portfolio Selection Under Nonsmooth Convex Transaction Costs." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2940.

Abstract:
We consider a portfolio selection problem in the presence of transaction costs. Transaction costs on each asset are assumed to be a convex function of the amount sold or bought. This function can be nondifferentiable at a finite number of points. The objective function of this problem is the sum of a convex twice-differentiable function and a separable convex nondifferentiable function. We first consider the problem in the presence of linear constraints and later generalize the results to the case when the constraints are given by convex piecewise-linear functions.

Due to the special structure, this problem can be replaced by an equivalent differentiable problem in a higher dimension. Its main drawback is efficiency, since the higher-dimensional problem is computationally expensive to solve.

We propose several alternative ways to solve this problem which do not require introducing new variables or constraints. We derive the optimality conditions for this problem using subdifferentials. First, we generalize an active set method to this class of problems. We solve the problem by considering a sequence of equality constrained subproblems, each subproblem having a twice differentiable objective function. Information gathered at each step is used to construct the subproblem for the next step. We also show how the nonsmoothness can be handled efficiently by using spline approximations. The problem is then solved using a primal-dual interior-point method.

If a higher accuracy is needed, we do a crossover to an active set method. Our numerical tests show that we can solve large scale problems efficiently and accurately.
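The spline-approximation idea for a nondifferentiable transaction cost can be sketched on the toy cost |t|; this is a generic C^1 quadratic patch around the kink, not the thesis's actual construction:

```python
# Smooth a nondifferentiable cost |t| with a quadratic spline on [-eps, eps].
# The patch matches |t| in value and slope at t = +/-eps, so the smoothed
# cost is continuously differentiable; eps is an illustrative parameter.

def smoothed_abs(t, eps=0.1):
    if abs(t) >= eps:
        return abs(t)
    return t * t / (2 * eps) + eps / 2  # C^1 quadratic patch near the kink

print(smoothed_abs(0.1) == abs(0.1))  # True: value agrees at the junction
```

Inside the patch the smoothed cost lies slightly above |t|, a deliberate trade of accuracy near the kink for differentiability, which lets interior-point methods be applied directly.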
2

Visagie, S. E. "Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme /." Link to the online version, 2007. http://hdl.handle.net/10019.1/1082.

3

Visagie, Stephan E. "Algoritmes vir die maksimering van konvekse en verwante knapsakprobleme." Thesis, Stellenbosch : University of Stellenbosch, 2007. http://hdl.handle.net/10019.1/1082.

Abstract:
Thesis (PhD (Logistics))--University of Stellenbosch, 2007.
In this dissertation original algorithms are introduced to solve separable resource allocation problems (RAPs) with increasing nonlinear functions in the objective function, and lower and upper bounds on each variable. Algorithms are introduced for three special cases. The first case arises when the objective function of the RAP consists of the sum of convex functions and all the variables for these functions range over the same interval. In the second case RAPs with the sum of convex functions in the objective function are considered, but the variables of these functions can range over different intervals. In the last special case RAPs with an objective function comprising the sum of convex and concave functions are considered. In this case the intervals of the variables can also range over different values. For the first case two new algorithms, namely the fraction and the slope algorithm, are presented to solve the RAPs adhering to the conditions of the case. Both these algorithms yield far better solution times than the existing branch and bound algorithm. A new heuristic and three new algorithms are presented to solve RAPs falling into the second case. The iso-bound heuristic yields, on average, good solutions relative to the optimal objective function value in faster times than exact algorithms. The three algorithms, namely the iso-bound algorithm, the branch and cut algorithm and the iso-bound branch and cut algorithm, also yield considerably better solution times than the existing branch and bound algorithm. It is shown that, on average, the iso-bound branch and cut algorithm yields the fastest solution times, followed by the iso-bound algorithm and then by the branch and cut algorithm. In the third case the necessary and sufficient conditions for optimality are considered. From this, the conclusion is drawn that search techniques for points complying with the necessary conditions will take too long relative to branch and bound techniques.
Thus three new algorithms, namely the KL, SKL and IKL algorithms, are introduced to solve RAPs falling into this case. These algorithms are generalisations of the branch and bound, branch and cut, and iso-bound algorithms respectively. The KL algorithm was then used as a benchmark. Only the IKL algorithm yields a considerable improvement on the KL algorithm.
4

Phan, Duy Nhat. "Algorithmes basés sur la programmation DC et DCA pour l’apprentissage avec la parcimonie et l’apprentissage stochastique en grande dimension." Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0235/document.

Abstract:
These days, with the increasing abundance of high-dimensional data, high-dimensional classification problems have been highlighted as a challenge in the machine learning community and have attracted a great deal of attention from researchers in the field. In recent years, sparse and stochastic learning techniques have been proven to be useful for this kind of problem. In this thesis, we focus on developing optimization approaches for solving some classes of optimization problems in these two topics. Our methods are based on DC (Difference of Convex functions) programming and DCA (DC Algorithms), which are well known as among the most powerful tools in optimization. The thesis is composed of three parts. The first part tackles the issue of variable selection. The second part studies the problem of group variable selection. The final part of the thesis concerns stochastic learning. In the first part, we start with variable selection in Fisher's discriminant problem (Chapter 2) and the optimal scoring problem (Chapter 3), which are two different approaches to supervised classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. Continuing this study, we study the structure of the sparse covariance matrix estimation problem and propose four appropriate DCA-based algorithms (Chapter 4). Two applications in finance and classification are conducted to illustrate the efficiency of our methods. The second part studies the L_{p,0} regularization for group variable selection (Chapter 5). Using a DC approximation of the L_{p,0} norm, we show that the approximate problem is, with suitable parameters, equivalent to the original problem. Considering two equivalent reformulations of the approximate problem, we develop DCA-based algorithms to solve them. Regarding applications, we implement the proposed algorithms for group feature selection in the optimal scoring problem and the estimation problem of multiple covariance matrices. In the third part of the thesis, we introduce a stochastic DCA for large-scale parameter estimation problems (Chapter 6) in which the objective function is a large sum of nonconvex components. As an application, we propose a special stochastic DCA for the log-linear model incorporating latent variables.

Books on the topic "Algorithms. Convex functions. Programming (Mathematics)"

1

Hiriart-Urruty, Jean-Baptiste. Convex analysis and minimization algorithms. 2nd ed. Berlin: Springer-Verlag, 1996.

2

Hiriart-Urruty, Jean-Baptiste. Convex analysis and minimization algorithms. Berlin: Springer-Verlag, 1993.

3

Xiaoqi, Yang, ed. Lagrange-type functions in constrained non-convex optimization. Boston: Kluwer Academic Publishers, 2003.

4

Crama, Yves. Boolean functions: Theory, algorithms, and applications. Cambridge: Cambridge University Press, 2011.

5

Hiriart-Urruty, Jean-Baptiste. Fundamentals of convex analysis. Berlin: Springer, 2001.

6

1944-, Lemaréchal Claude, ed. Fundamentals of convex analysis. Berlin: Springer, 2001.

7

N, Iusem Alfredo, ed. Totally convex functions for fixed points computation and infinite dimensional optimization. Dordrecht: Kluwer Academic Publishers, 2000.

8

Nanda, Sudarsan. Two applications of functional analysis. Kingston, Ont., Canada: Queen's University, 1986.

9

M, Teboulle, ed. Asymptotic cones and functions in optimization and variational inequalities. New York: Springer, 2003.

10

Liana, Lupșa, ed. Non-connected convexities and applications. Dordrecht: Kluwer Academic Publishers, 2002.


Book chapters on the topic "Algorithms. Convex functions. Programming (Mathematics)"

1

Peressini, Anthony L., J. J. Uhl, and Francis E. Sullivan. "Convex Sets and Convex Functions." In The Mathematics of Nonlinear Programming, 37–81. New York, NY: Springer New York, 1988. http://dx.doi.org/10.1007/978-1-4612-1025-2_2.

2

Sridharan, Sriraman, and R. Balakrishnan. "Sets, Relations and Functions." In Foundations of Discrete Mathematics with Algorithms and Programming, 1–46. Boca Raton, FL: Chapman and Hall/CRC, 2018. http://dx.doi.org/10.1201/9781351019149-1.

3

Masood, Talha Bin, and Ingrid Hotz. "Continuous Histograms for Anisotropy of 2D Symmetric Piece-Wise Linear Tensor Fields." In Mathematics and Visualization, 39–70. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-56215-1_3.

Abstract:
In this chapter we present an accurate derivation of the distribution of scalar invariants with quadratic behavior, represented as continuous histograms. The anisotropy field, computed from a two-dimensional piece-wise linear tensor field, is used as an example and is discussed in full detail. Histograms visualizing an approximation of the distribution of scalar values play an important role in visualization. They are used as an interface for the design of transfer functions for volume rendering or for feature selection in interactive interfaces. While there are standard algorithms to compute continuous histograms for piece-wise linear scalar fields, they are not directly applicable to tensor invariants with non-linear, often even non-convex, behavior in cells when linear tensor interpolation is applied. Our derivation is based on a subdivision of the mesh into triangles that exhibit monotonic behavior. We compare the results to a naïve approach based on linear interpolation on the original mesh or the subdivision.
4

"2. Self-Concordant Functions and Newton Method." In Interior-Point Polynomial Algorithms in Convex Programming, 11–55. Society for Industrial and Applied Mathematics, 1994. http://dx.doi.org/10.1137/1.9781611970791.ch2.

5

Mäenpää, Petri. "Analytic program derivation in type theory." In Twenty Five Years of Constructive Type Theory. Oxford University Press, 1998. http://dx.doi.org/10.1093/oso/9780198501275.003.0009.

Abstract:
This work proposes a new method of deriving programs from their specifications in constructive type theory: the method of analysis-synthesis. It is new as a mathematical method only in the area of programming methodology, as it is modelled upon the most successful and widespread method in the history of exact sciences. The method of analysis-synthesis, also known as the method of analysis, was devised by Ancient Greek mathematicians for solving geometric construction problems with ruler and compass. Its most important subsequent elaboration is Descartes’s algebraic method of analysis, which pervades all exact sciences today. The present work expands this method further into one that aims at systematizing program derivation in a heuristically useful way, analogously to the way Descartes’s method systematized the solution of geometric and arithmetical problems. To illustrate the method, we derive the Boyer-Moore algorithm for finding an element that has a majority of occurrences in a given list. It turns out that solving programming problems need not be too different from solving mathematical problems in general. This point of view has been emphasized in particular by Martin-Löf (1982) and Dijkstra (1986). The idea of a logic of problem solving originates in Kolmogorov (1932). We aim to refine the analogy between programming and mathematical problem solving by investigating the mathematical method of analysis in the context of programming. The central idea of the analytic method, in modern terms, is to analyze the functional dependencies between the constituents of a geometric configuration. The aim is to determine how the sought constituents depend on the given ones. A Greek analysis starts by drawing a diagram with the sought constructions drawn on the given ones, in the relation required by the problem specification. Then the sought constituents of the configuration are determined in terms of the given ones. 
Analysis was the Greeks’ method of discovering solutions to problems. Their method of justification was synthesis, which cast analysis into standard deductive form. First it constructed the sought objects from the given ones, and then demonstrated that they relate as required to the given ones. In his Geometry, Descartes developed Greek geometric analysis-synthesis into the modern algebraic method of analysis.
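The Boyer-Moore majority-vote algorithm that the chapter derives can be stated compactly; the verification pass is needed because the vote is only guaranteed correct when a majority element actually exists:

```python
# Boyer-Moore majority vote: one pass, O(1) extra state, followed by a
# verification pass that confirms the candidate really is a majority.

def majority(xs):
    candidate, count = None, 0
    for x in xs:
        if count == 0:
            candidate, count = x, 1   # adopt a new candidate
        elif x == candidate:
            count += 1                # candidate survives a matching vote
        else:
            count -= 1                # a mismatch cancels one vote
    if xs and xs.count(candidate) > len(xs) // 2:
        return candidate
    return None                       # no element occurs more than n/2 times

print(majority([2, 1, 2, 3, 2, 2]))  # 2
```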
6

Ray, Jhuma, Siddhartha Bhattacharyya, and N. Bhupendro Singh. "Portfolio Optimization and Asset Allocation With Metaheuristics." In Research Anthology on Multi-Industry Uses of Genetic Programming and Algorithms, 78–96. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8048-6.ch005.

Abstract:
Portfolio optimization is the problem of finding an optimal allocation of wealth among the available assets. Markowitz structured the problem as a dual-objective mean-risk optimization that seeks the best trade-off within a portfolio between risk, measured by variance, and mean return. The major intention is thus nothing other than hunting for the optimum distribution of wealth over a specific number of assets by diminishing risk and maximizing the return of a portfolio. Complex risk measures such as value-at-risk, expected shortfall and semi-variance, additional objective functions for the maximization of skewness, liquidity and dividends, and real-world constraints such as cardinality constraints, quantity constraints, minimum transaction lots and class constraints are all incorporated in modern portfolio selection models and furnish numerous optimization challenges. The emerging portfolio optimization problem turns out to be extremely tough to handle with exact approaches because it exhibits nonlinearities, discontinuities and high-dimensional efficient boundaries. Because of these attributes, a number of researchers have been motivated to research the usage of metaheuristics, which are effective measures for finding near-optimal solutions to tough optimization problems in an adequate computational time frame. This review report serves as a short note on the portfolio optimization field with the usage of metaheuristics and finally states how multi-objective metaheuristics prove to be efficient in dealing with portfolio selection problems whose complex measures of risk define non-convex, non-differentiable objective functions.

Conference papers on the topic "Algorithms. Convex functions. Programming (Mathematics)"

1

Hamza, Karim, and Mohammed Shalaby. "Convex Estimators for Optimization of Kriging Model Problems." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-48566.

Abstract:
This paper presents a framework for identification of the global optimum of Kriging models. The framework is based on a branch and bound scheme for sub-division of the search space into hypercubes while constructing convex under-estimators of the Kriging models. The convex under-estimators, which are a key development in this paper, provide a relaxation of the original problem. The relaxed problem has two key features: i) convex optimization algorithms such as sequential quadratic programming (SQP) are guaranteed to find the global optimum of the relaxed problem, and ii) the objective value of the relaxed problem is a lower bound on the best attainable solution within a hypercube for the original (Kriging model) problem. The convex under-estimators improve in accuracy as the size of a hypercube gets smaller via the branching search. A hypercube branch is terminated when either: i) the solution of the relaxed problem within the hypercube is no better than the current best solution of the original problem, or ii) the best solutions of the original problem and of the relaxed problem are within tolerance limits. To assess the significance of the proposed framework, comparison studies against a genetic algorithm (GA) are conducted using Kriging models that approximate standard nonlinear test functions, as well as application problems of water desalination and vehicle crashworthiness. Results of the studies show that the proposed framework deterministically provides a solution within tolerance limits of the global optimum, while GA is observed not to reliably discover the best solutions in problems with a larger number of design variables.
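The branch-and-bound scheme with convex under-estimators can be sketched on a one-dimensional multimodal function; the objective, the crude constant under-estimator, and the tolerances below are illustrative, not the paper's Kriging-specific construction:

```python
import math

def f(x):                      # multimodal stand-in objective (not a Kriging model)
    return math.sin(3.0 * x) + 0.1 * x * x

def lower_bound(lo, hi):       # crude valid under-estimator: sin >= -1 on any interval
    q = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
    return -1.0 + 0.1 * q

def branch_and_bound(lo, hi, tol=1e-4):
    best_x, best_f = lo, f(lo)
    stack = [(lo, hi)]
    while stack:
        a, b = stack.pop()
        mid = 0.5 * (a + b)
        if f(mid) < best_f:                       # update the incumbent
            best_x, best_f = mid, f(mid)
        # branch only if the relaxation could still beat the incumbent
        if lower_bound(a, b) < best_f - tol and b - a > tol:
            stack += [(a, mid), (mid, b)]
    return best_x, best_f

x_best, f_best = branch_and_bound(-3.0, 3.0)
print(round(f_best, 3))  # -0.973, near the global minimum
```

The bound gets tighter as intervals shrink, so branches far from the optimum are pruned early; this mirrors the hypercube termination rules in the abstract.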