Academic literature on the topic 'Large-scale optimization methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Large-scale optimization methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Large-scale optimization methods"

1. Papadrakakis, M., N. D. Lagaros, Y. Tsompanakis, and V. Plevris. "Large scale structural optimization: Computational methods and optimization algorithms." Archives of Computational Methods in Engineering 8, no. 3 (2001): 239–301. http://dx.doi.org/10.1007/bf02736645.

2. Gould, Nick, Dominique Orban, and Philippe Toint. "Numerical methods for large-scale nonlinear optimization." Acta Numerica 14 (April 19, 2005): 299–361. http://dx.doi.org/10.1017/s0962492904000248.
Abstract:
Recent developments in numerical methods for solving large differentiable nonlinear optimization problems are reviewed. State-of-the-art algorithms for solving unconstrained, bound-constrained, linearly constrained and nonlinearly constrained problems are discussed. As well as important conceptual advances and theoretical aspects, emphasis is also placed on more practical issues, such as software availability.

3. Bottou, Léon, Frank E. Curtis, and Jorge Nocedal. "Optimization Methods for Large-Scale Machine Learning." SIAM Review 60, no. 2 (2018): 223–311. http://dx.doi.org/10.1137/16m1080173.

4. Rogers, Jack W., Jr., and Robert A. Donnelly. "Potential Transformation Methods for Large-Scale Global Optimization." SIAM Journal on Optimization 5, no. 4 (1995): 871–91. http://dx.doi.org/10.1137/0805042.

5. LeBlanc, Larry J. "Optimization methods for large-scale multiechelon stockage problems." Naval Research Logistics 34, no. 2 (1987): 239–49. http://dx.doi.org/10.1002/1520-6750(198704)34:2<239::aid-nav3220340209>3.0.co;2-t.

6. Bagattini, Francesco, Fabio Schoen, and Luca Tigli. "Clustering methods for large scale geometrical global optimization." Optimization Methods and Software 34, no. 5 (2019): 1099–122. http://dx.doi.org/10.1080/10556788.2019.1582651.

7. Tsompanakis, Yiannis, and Manolis Papadrakakis. "Efficient Computational Methods for Large-Scale Structural Optimization." International Journal of Computational Engineering Science 1, no. 2 (2000): 331–54. http://dx.doi.org/10.1142/s146587630000015x.

8. Bertsekas, Dimitri P. "Incremental proximal methods for large scale convex optimization." Mathematical Programming 129, no. 2 (2011): 163–95. http://dx.doi.org/10.1007/s10107-011-0472-0.

9. Robinson, Daniel P. "Primal-Dual Active-Set Methods for Large-Scale Optimization." Journal of Optimization Theory and Applications 166, no. 1 (2015): 137–71. http://dx.doi.org/10.1007/s10957-015-0708-x.

10. Fasano, Giovanni, and Massimo Roma. "Preconditioning Newton–Krylov methods in nonconvex large scale optimization." Computational Optimization and Applications 56, no. 2 (2013): 253–90. http://dx.doi.org/10.1007/s10589-013-9563-6.

Dissertations / Theses on the topic "Large-scale optimization methods"

1. Barclay, Alexander. "SQP methods for large-scale optimization." Diss., University of California, San Diego, 1999. http://wwwlib.umi.com/cr/ucsd/fullcit?p9936873.

2. Zhang, Hongchao. "Gradient methods for large-scale nonlinear optimization." University of Florida, 2006. http://purl.fcla.edu/fcla/etd/UFE0013703.

3. Erway, Jennifer B. "Iterative methods for large-scale unconstrained optimization." Ph.D. diss., University of California, San Diego, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3222051.

4. Fountoulakis, Kimon. "Higher-order methods for large-scale optimization." Thesis, University of Edinburgh, 2015. http://hdl.handle.net/1842/15797.
Abstract:
There has been an increased interest in optimization for the analysis of large-scale data sets that require gigabytes or terabytes of storage. A variety of applications originate from the fields of signal processing, machine learning and statistics; seven representative ones are:
- Magnetic Resonance Imaging (MRI): a medical imaging tool used to scan the anatomy and the physiology of a body.
- Image inpainting: a technique for reconstructing degraded parts of an image.
- Image deblurring: an image processing tool for removing blurriness caused by natural phenomena, such as motion.
- Radar pulse reconstruction.
- Genome-Wide Association studies (GWA): DNA comparison between two groups of people (with/without a disease) in order to investigate the factors on which a disease depends.
- Recommendation systems: classification of data (e.g., music or video) based on user preferences.
- Data fitting: sampled data are used to simulate the behaviour of observed quantities, for example estimating global temperature from historic data.
Large-scale problems impose restrictions on the methods that have been employed so far: the new methods have to be memory efficient and, ideally, should offer noticeable progress towards a solution within seconds. First-order methods meet some of these requirements. They avoid matrix factorizations, they have low memory requirements, and they sometimes offer fast progress in the initial stages of optimization. Unfortunately, as demonstrated by numerical experiments in this thesis, first-order methods miss essential information about the conditioning of the problems, which can result in slow practical convergence. The main advantage of first-order methods, relying only on simple gradient or coordinate updates, thus becomes their essential weakness, and we do not think this inherent weakness can be remedied. For this reason, the present thesis aims at the development and implementation of inexpensive higher-order methods for large-scale problems.

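The central claim of the abstract above, that first-order methods miss conditioning information which higher-order methods exploit, is easy to see on a toy problem. The following minimal sketch (ours, not from the thesis) compares plain gradient descent against a single Newton step on an ill-conditioned quadratic:

```python
import numpy as np

# Toy ill-conditioned quadratic f(x) = 0.5 * x^T A x with condition number 1e4.
A = np.diag([1.0, 1e4])

def grad(x):
    return A @ x

# Gradient descent sees only the gradient, so its step size is dictated by the
# largest curvature (1/L); progress along the flattest direction is crippled.
x_gd = np.array([1.0, 1.0])
for _ in range(1000):
    x_gd = x_gd - (1.0 / 1e4) * grad(x_gd)

# One Newton step uses the Hessian (exactly the conditioning information that
# first-order methods lack) and lands on the minimizer of a quadratic.
x_nt = np.array([1.0, 1.0])
x_nt = x_nt - np.linalg.solve(A, grad(x_nt))

print(np.linalg.norm(x_gd))  # ~0.9 after 1000 iterations
print(np.linalg.norm(x_nt))  # 0.0 after a single step
```
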
5. Griffin, Joshua D. "Interior-point methods for large-scale nonconvex optimization." Diss., University of California, San Diego, 2005. http://wwwlib.umi.com/cr/ucsd/fullcit?p3167839.

6. Lu, Haihao. "Large-Scale Optimization Methods for Data-Science Applications." Ph.D. thesis, Massachusetts Institute of Technology, Department of Mathematics, 2019. https://hdl.handle.net/1721.1/122272.
Abstract:
In this thesis, we present several contributions to large-scale optimization methods, with applications in data science and machine learning. In the first part, we present new computational methods and associated computational guarantees for solving convex optimization problems using first-order methods. We consider general convex optimization problems in which we presume knowledge of a strict lower bound (as in empirical risk minimization in machine learning). We introduce a new functional measure called the growth constant of the convex objective function, which measures how quickly the level sets grow relative to the function value and plays a fundamental role in the complexity analysis. Based on this measure, we present new computational guarantees for both smooth and non-smooth convex optimization that improve existing guarantees in several ways, most notably when the initial iterate is far from the optimal solution set. The usual approach to developing and analyzing first-order methods for convex optimization assumes that either the gradient of the objective function is uniformly continuous (in the smooth setting) or the objective function itself is uniformly continuous; however, in many settings, especially in machine learning applications, the convex function is neither. Examples include the Poisson linear inverse model, the D-optimal design problem and the support vector machine problem. In the second part, we develop notions of relative smoothness, relative continuity and relative strong convexity that are determined relative to a user-specified "reference function" (which should be computationally tractable for algorithms), and we show that many differentiable convex functions are relatively smooth or relatively continuous with respect to a correspondingly fairly simple reference function. We extend the mirror descent algorithm to this new setting, with associated computational guarantees. The Gradient Boosting Machine (GBM) introduced by Friedman is an extremely powerful supervised learning algorithm that is widely used in practice: it routinely features as a leading algorithm in machine learning competitions such as Kaggle and the KDD Cup. In the third part, we propose the Randomized Gradient Boosting Machine (RGBM) and the Accelerated Gradient Boosting Machine (AGBM). RGBM achieves significant computational gains over GBM by using a randomization scheme to reduce the search over the space of weak learners. AGBM incorporates Nesterov's acceleration techniques into the design of GBM, and it is the first GBM-type algorithm with a theoretically justified accelerated convergence rate. We demonstrate the effectiveness of RGBM and AGBM over GBM in obtaining a model with good training and/or testing data fidelity.

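The second part of this thesis builds on the mirror descent setup, in which the reference function is chosen to match the problem. As background, here is a minimal sketch of the classical entropic instance of mirror descent on the probability simplex; it is a standard textbook example, not code from the thesis, and the toy linear objective is our own:

```python
import numpy as np

# Entropic mirror descent on the probability simplex: the Euclidean reference
# function of gradient descent is replaced by the negative-entropy reference
# h(x) = sum_i x_i*log(x_i), whose Bregman proximal step has the closed-form
# multiplicative update below.
def md_step(x, g, step):
    y = x * np.exp(-step * g)  # solves min_y <g, y> + (1/step) * D_h(y, x)
    return y / y.sum()         # normalization keeps the iterate on the simplex

# Toy objective f(x) = <c, x> over the simplex; the minimizer puts all mass on
# the coordinate with the smallest cost c_i.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(200):
    x = md_step(x, c, step=0.5)

print(np.round(x, 3))  # ~[0, 1, 0]: mass concentrates on the cheapest coordinate
```
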
7. Marcia, Roummel F. "Primal-dual interior-point methods for large-scale optimization." Diss., University of California, San Diego, 2002. http://wwwlib.umi.com/cr/ucsd/fullcit?p3044769.

8. Mai, Vien Van. "Large-Scale Optimization With Machine Learning Applications." Licentiate thesis, KTH Royal Institute of Technology, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-263147.
Abstract:
This thesis aims at developing efficient algorithms for solving some fundamental engineering problems in data science and machine learning. We investigate a variety of acceleration techniques for improving the convergence times of optimization algorithms. First, we investigate how problem structure can be exploited to accelerate the solution of highly structured problems such as generalized eigenvalue and elastic-net regression. We then consider Anderson acceleration, a generic and parameter-free extrapolation scheme, and show how it can be adapted to accelerate practical convergence of proximal gradient methods for a broad class of non-smooth problems. For all the methods developed in this thesis, we design novel algorithms, perform mathematical analysis of convergence rates, and conduct practical experiments on real-world data sets.

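The proximal gradient methods that the thesis accelerates have a simple baseline form. For reference, here is a minimal sketch (our own, with hypothetical toy data) of the unaccelerated proximal gradient method, ISTA, applied to the lasso, a representative non-smooth problem from this class:

```python
import numpy as np

# Unaccelerated proximal gradient (ISTA) for the lasso,
#   min_x 0.5*||A x - b||^2 + lam*||x||_1,
# a composite smooth-plus-nonsmooth problem of the kind discussed above.
def soft_threshold(v, t):
    # Proximal operator of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - step * g, step * lam)  # prox of the l1 term
    return x

# Hypothetical toy data: a 5-sparse signal observed through a random matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true

print(np.round(ista(A, b, lam=0.1)[:8], 2))  # first five entries near 1, rest near 0
```

Extrapolation schemes such as Anderson acceleration, the topic of the thesis, operate on past iterates of exactly this kind of fixed-point loop to speed up its convergence.
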
9. Ghadimi, Euhanna. "Accelerating Convergence of Large-scale Optimization Algorithms." Doctoral thesis, KTH Royal Institute of Technology, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162377.
Abstract:
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems. In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz-continuous gradients. The second contribution is to tailor the Heavy-ball method to network optimization problems, in which a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms the tailor-made state-of-the-art algorithms. In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted a large interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and the constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that allows the performance to be accelerated even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.

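Polyak's Heavy-ball method, the focus of the first part of the thesis, augments gradient descent with a momentum term that remembers the previous iterate. A minimal sketch, using the classical step-size tuning for strongly convex quadratics (the thesis treats far more general settings):

```python
import numpy as np

# Polyak's Heavy-ball method: gradient descent plus a momentum term that
# accounts for the previous iterate. The alpha/beta tuning below is the
# classical choice for an L-smooth, mu-strongly convex quadratic.
def heavy_ball(grad, x0, L, mu, n_iter=100):
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
    beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2
    x_prev, x = x0, x0
    for _ in range(n_iter):
        x, x_prev = x - alpha * grad(x) + beta * (x - x_prev), x
    return x

A = np.diag([1.0, 100.0])  # toy quadratic with mu = 1, L = 100
x = heavy_ball(lambda z: A @ z, np.array([1.0, 1.0]), L=100.0, mu=1.0)
print(np.linalg.norm(x))   # ~1e-8; plain gradient descent is far slower here
```
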
10. Becker, Adrian Bernard Druke. "Decomposition methods for large scale stochastic and robust optimization problems." Ph.D. thesis, Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. http://hdl.handle.net/1721.1/68969.
Abstract:
We propose new decomposition methods for use on broad families of stochastic and robust optimization problems in order to yield tractable approaches for large-scale real-world applications. We introduce a new type of Markov decision problem, named the Generalized Restless Bandits Problem, that encompasses a broad generalization of the restless bandit problem. For this class of stochastic optimization problems, we develop a nested policy heuristic which iteratively solves a series of sub-problems operating on smaller bandit systems. We also develop linear-optimization-based bounds for the Generalized Restless Bandit problem and demonstrate promising computational performance of the nested policy heuristic on a large-scale real-world application of search term selection for sponsored search advertising. We further study the distributionally robust optimization problem with known mean, covariance and support. These optimization models are attractive in their real-world applications as they require the model consumer to rely only on those statistics of uncertainty that are known with relative confidence, rather than making arbitrary assumptions about the exact dynamics of the underlying distribution of uncertainty. Known to be NP-hard, current approaches invoke tractable but often weak relaxations for real-world applications. We develop a decomposition method for this family of problems which recursively derives sub-policies along projected dimensions of uncertainty and provides a sequence of bounds on the value of the derived policy. In the development of this method, we prove that non-convex quadratic optimization in n dimensions over a box in two dimensions is efficiently solvable. We also show that this same decomposition method yields a promising heuristic for the MAXCUT problem. We then provide promising computational results in the context of a real-world fixed income portfolio optimization problem. The decomposition methods developed in this thesis recursively derive sub-policies on projected dimensions of the master problem. These sub-policies are optimal on relaxations which admit "tight" projections of the master problem; that is, the projection of the feasible region for the relaxation is equivalent to the projection of that of the master problem along the dimensions of the sub-policy. Additionally, these decomposition strategies provide a hierarchical solution structure that aids in solving large-scale problems.

Books on the topic "Large-scale optimization methods"

1. Tsurkov, Vladimir. Large-Scale Optimization — Problems and Methods. Applied Optimization. Springer US, 2001. http://dx.doi.org/10.1007/978-1-4757-3243-6.

2. Hager, William W., Shu-Jen Huang, Panos M. Pardalos, and Oleg A. Prokopyev, eds. Multiscale Optimization Methods and Applications. Nonconvex Optimization and Its Applications. Springer, 2005.

3. Ding, Tao. Power System Operation with Large Scale Stochastic Wind Power Integration: Interval Arithmetic Based Analysis and Optimization Methods. Springer, 2018.

4. Pedram, Massoud, and Jui-Ming Chang. Power Optimization and Synthesis at Behavioral and System Levels Using Formal Methods. Kluwer, 1999.

Book chapters on the topic "Large-scale optimization methods"

1. Leibfritz, Friedemann, and Ekkehard W. Sachs. "Numerical Solution of Parabolic State Constrained Control Problems Using SQP- and Interior-Point-Methods." In Large Scale Optimization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4613-3632-7_13.

2. Nash, Stephen G., R. Polyak, and Ariela Sofer. "A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization." In Large Scale Optimization. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4613-3632-7_16.

3. Blaheta, Radim, Rostislav Hrtus, Roman Kohut, and Ondřej Jakl. "Optimization Methods for Calibration of Heat Conduction Models." In Large-Scale Scientific Computing. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-29843-1_61.

4. Murray, Walter. "Some Aspects of Sequential Quadratic Programming Methods." In Large-Scale Optimization with Applications. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1960-6_2.

5. Hazra, Subhendu Bikash. "PDE-Constrained Optimization Methods." In Large-Scale PDE-Constrained Optimization in Applications. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-01502-1_3.

6. Van Den Berg, P. M., and R. E. Kleinman. "Gradient Methods in Inverse Acoustic and Electromagnetic Scattering." In Large-Scale Optimization with Applications. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1962-0_10.

7. Morales, José Luis, Jorge Nocedal, Richard A. Waltz, Guanghui Liu, and Jean-Pierre Goux. "Assessing the Potential of Interior Methods for Nonlinear Optimization." In Large-Scale PDE-Constrained Optimization. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55508-4_10.

8. Bank, Randolph E., Philip E. Gill, and Roummel F. Marcia. "Interior Methods for a Class of Elliptic Variational Inequalities." In Large-Scale PDE-Constrained Optimization. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-642-55508-4_13.

9. Ghattas, Omar, and Jai-Hyeong Bark. "Large-Scale SQP Methods for Optimization of Navier-Stokes Flows." In Large-Scale Optimization with Applications. Springer New York, 1997. http://dx.doi.org/10.1007/978-1-4612-1960-6_11.

10. Tran-Dinh, Quoc, and Volkan Cevher. "Smoothing Alternating Direction Methods for Fully Nonsmooth Constrained Convex Optimization." In Large-Scale and Distributed Optimization. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97478-1_4.

Conference papers on the topic "Large-scale optimization methods"

1. Kazimipour, Borhan, Xiaodong Li, and A. K. Qin. "Initialization methods for large scale global optimization." In 2013 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2013. http://dx.doi.org/10.1109/cec.2013.6557902.

2. Akinfiev, Valery, and Anatoly Tsvirkun. "Simulation and Optimization Methods for Choosing Investment Decisions." In 2020 13th International Conference Management of Large-Scale System Development (MLSD). IEEE, 2020. http://dx.doi.org/10.1109/mlsd49919.2020.9247730.

3. Stoven-Dubois, Alexis, Aziz Dziri, Bertrand Leroy, and Roland Chapuis. "Graph Optimization Methods for Large-Scale Crowdsourced Mapping." In 2020 IEEE 23rd International Conference on Information Fusion (FUSION). IEEE, 2020. http://dx.doi.org/10.23919/fusion45008.2020.9190292.

4. "Session MP3a: Distributed methods for large-scale optimization (invited)." In 2017 51st Asilomar Conference on Signals, Systems, and Computers. IEEE, 2017. http://dx.doi.org/10.1109/acssc.2017.8335185.

5. Fleury, Claude. "Structural Optimization Methods for Large Scale Problems: Status and Limitations." In ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/detc2007-34326.
Abstract:
This paper presents results from recent numerical experiments, supported by theoretical arguments, which indicate the limits of current optimization methods when applied to problems involving a large number of design variables as well as a large number of constraints, many of them active. This situation is typical of optimal sizing problems with local stress constraints, especially when composite materials are employed. It is shown that in both primal and dual methods the CPU time spent in the optimizer is related to the numerical effort needed to invert a symmetric positive definite matrix of size jact, jact being the effective number of active constraints, i.e. constraints associated with positive Lagrange multipliers. This CPU time varies as jact^3. When the number m of constraints increases, jact has a tendency to grow, but there is a limit: another well-known theoretical property is that the number of active constraints jact should not exceed the number iact of free primal variables, i.e. the number of variables that do not reach a lower or upper bound. This number iact is itself smaller than the actual number of design variables n. The conclusion is that for problems with many active constraints the CPU time can grow as fast as n^3, while with respect to m the increase in CPU time remains approximately linear. Some practical applications to real-life industrial problems are briefly shown: design optimization of an aircraft composite wing with local buckling constraints, and topology optimization of an engine pylon.

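The cost model in the abstract, CPU time growing as jact^3 because a jact-by-jact symmetric positive definite matrix must effectively be inverted, is easy to check empirically. A quick, machine-dependent sketch (ours, not from the paper) that times a Cholesky factorization as jact doubles:

```python
import time
import numpy as np

# Factorizing (a step toward inverting) a jact x jact symmetric positive
# definite matrix costs O(jact^3) floating-point operations.
for jact in (500, 1000, 2000):
    M = np.random.rand(jact, jact)
    spd = M @ M.T + jact * np.eye(jact)  # guaranteed positive definite
    t0 = time.perf_counter()
    np.linalg.cholesky(spd)
    print(f"jact={jact:5d}  time={time.perf_counter() - t0:.4f}s")

# Each doubling of jact should multiply the runtime by roughly 2^3 = 8.
```
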
6. Arul, Sivasankar, Tomáš Brzobohatý, and Tomáš Kozubek. "FETI based domain decomposition methods for large-scale topology optimization." In International Conference of Numerical Analysis and Applied Mathematics (ICNAAM 2019). AIP Publishing, 2020. http://dx.doi.org/10.1063/5.0026486.

7. Lopes, Rodolfo A., Rodrigo C. P. Silva, and Alan R. R. de Freitas. "An abstract interface for large-scale continuous optimization decomposition methods." In GECCO '21: Genetic and Evolutionary Computation Conference. ACM, 2021. http://dx.doi.org/10.1145/3449726.3463188.

8. Cao, Ming-yuan, and Yue-ting Yang. "Two Classes of Conjugate Gradient Methods for Large-Scale Unconstrained Optimization." In 2011 Fourth International Joint Conference on Computational Sciences and Optimization (CSO). IEEE, 2011. http://dx.doi.org/10.1109/cso.2011.290.

9. Mulkay, Eric, and S. Rao. "Sequential linear programming with interior-point methods for large-scale optimization." In 39th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference and Exhibit. American Institute of Aeronautics and Astronautics, 1998. http://dx.doi.org/10.2514/6.1998-1967.

10. Niewiadomska-Szynkiewicz, Ewa, and Jacek Blaszczyk. "Simulation-Based Optimization Methods Applied to Large Scale Water Systems Control." In 2016 Intl IEEE Conferences on Ubiquitous Intelligence & Computing, Advanced and Trusted Computing, Scalable Computing and Communications, Cloud and Big Data Computing, Internet of People, and Smart World Congress (UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld). IEEE, 2016. http://dx.doi.org/10.1109/uic-atc-scalcom-cbdcom-iop-smartworld.2016.0108.

Reports on the topic "Large-scale optimization methods"

1. Nocedal, Jorge. Nonlinear Optimization Methods for Large-Scale Learning. Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1571768.

2. Schnabel, Robert B., and Richard H. Byrd. Developing and Understanding Methods for Large Scale Nonlinear Optimization. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada413924.

3. Byrd, Richard H., and Robert B. Schnabel. Developing and Understanding Methods for Large-Scale Nonlinear Optimization. Defense Technical Information Center, 2006. http://dx.doi.org/10.21236/ada454804.

4. Tolle, Jon W. Methods and Convergence Analysis in Large Scale Nonlinear Optimization. Defense Technical Information Center, 1992. http://dx.doi.org/10.21236/ada258182.

5. Schnabel, Robert B., and Richard H. Byrd. Developing and Understanding Methods for Large-Scale Nonlinear Optimization. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada369917.

6. Schnabel, Robert B., and Richard H. Byrd. Large Scale Optimization Methods with a Focus on Chemistry Problems. Defense Technical Information Center, 2002. http://dx.doi.org/10.21236/ada418451.

7. Schnabel, Robert B., and Richard H. Byrd. Large-Scale Optimization Methods With a Focus on Chemistry Problems. Defense Technical Information Center, 2001. http://dx.doi.org/10.21236/ada387261.

8. Draganescu, Andrei. Efficient Solution Methods for Large-scale Optimization Problems Constrained by Time-dependent Partial Differential Equations (Final Report). Office of Scientific and Technical Information (OSTI), 2019. http://dx.doi.org/10.2172/1494701.

9. Friedman, A. Mathematical methods in material science and large scale optimization workshops: Final report, June 1, 1995–November 30, 1996. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/467125.

10. Wen, Zaiwen, and Donald Goldfarb. A Line Search Multigrid Method for Large-Scale Convex Optimization. Defense Technical Information Center, 2007. http://dx.doi.org/10.21236/ada478093.