
Dissertations / Theses on the topic 'Cutting plate'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Cutting plate.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Lu, Guoxing. "Cutting of a plate by a wedge." Thesis, University of Cambridge, 1989. https://www.repository.cam.ac.uk/handle/1810/250955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Thomas, Paul Francis. "The mechanics of plate cutting with application to ship grounding." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/12839.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 1992 and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1992.
Includes bibliographical references (leaves 160-163).
by Paul Francis Thomas.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
3

Agulov, A. V., A. A. Goncharov, S. A. Goncharova, A. I. Bazhin, V. A. Stupak, and V. V. Petukhov. "Thermal Stability of Hafnium Diboride Films, Obtained on Substrates of Steel 12X18H9T and Cutting Plate T15K6." Thesis, Sumy State University, 2012. http://essuir.sumdu.edu.ua/handle/123456789/34923.

Full text
Abstract:
Results are presented on the influence of high-temperature annealing in air on the phase composition and structure of hafnium diboride films deposited on substrates of 12Х18Н9Т steel and Т15К6 cutting plates. It is shown that during annealing an HfО2 oxide layer with a monoclinic structure forms on the surface of the HfВ2 film. Increasing the annealing temperature from 600 to 1000 °C increases the thickness of the oxide layer from 100 to 600 nm and leads to the formation of a multilayered HfB2-HfO2 coating. On substrates of 12Х18Н9Т steel the coating is destroyed at a temperature about 800 °C higher than on Т15К6.
APA, Harvard, Vancouver, ISO, and other styles
4

Franko, Matej. "Optimalizace laserového přivařování tvrdokovových řezných destiček na nosnou trubku." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-231678.

Full text
Abstract:
This thesis examines options for manufacturing a core drill. One of these options is laser welding, which is described in detail in the theoretical part of the thesis. In the applied part, two core drills were manufactured experimentally using Yb:YAG fibre-laser technology. One core drill was used to test the welding parameters; these parameters were then applied under real working conditions and evaluated, and the results are reported in the final part of the thesis. The second core drill was welded using the best parameter values measured in a fixture designed especially for this purpose.
APA, Harvard, Vancouver, ISO, and other styles
5

Білоус, Д. О. "Моделювання впливу захисних покриттів на теплоперенесення в системі з узагальненими граничними умовами." Master's thesis, Сумський державний університет, 2019. http://essuir.sumdu.edu.ua/handle/123456789/75493.

Full text
Abstract:
The mathematical problem of the study is formulated, and the initial and boundary conditions for the propagation of thermal energy are defined. The thermal field is modelled on the basis of the solution of the differential heat-conduction equation for a two-dimensional homogeneous isotropic medium. A computer program is developed in the Matlab environment. The thermal-protective role of the coating is established for very short time intervals, for example during milling with an assembled cutter, when a single cutting insert is in contact with the machined surface for only a few microseconds.
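The kind of calculation described above can be sketched with an explicit finite-difference scheme. The following Python fragment is only an illustration of that approach (the thesis itself used Matlab): the grid, diffusivities, contact temperature and time step are all assumed values, and the treatment of the varying diffusivity is deliberately rough.

    import numpy as np

    # explicit finite-difference marching of the 2D heat equation dT/dt = a * (Txx + Tyy)
    nx, ny = 40, 20                 # grid for a small cross-section of a coated insert
    dx = 1.0e-5                     # assumed cell size: 10 micrometres
    a_substrate = 1.2e-5            # assumed diffusivity of the carbide substrate, m^2/s
    a_coating = 3.0e-6              # assumed (lower) diffusivity of the protective coating, m^2/s

    a = np.full((ny, nx), a_substrate)
    a[:2, :] = a_coating            # top two rows of cells stand for the coating layer

    dt = 0.2 * dx**2 / a.max()      # below the explicit stability limit dx^2 / (4 a)
    T = np.full((ny, nx), 300.0)    # initial temperature, K

    for _ in range(500):            # march over a short tool-chip contact interval
        T[0, 10:30] = 1200.0        # hot contact zone imposed on the coated face
        lap = np.zeros_like(T)
        lap[1:-1, 1:-1] = ((T[2:, 1:-1] - 2.0 * T[1:-1, 1:-1] + T[:-2, 1:-1]) +
                           (T[1:-1, 2:] - 2.0 * T[1:-1, 1:-1] + T[1:-1, :-2])) / dx**2
        T = T + dt * a * lap        # crude treatment of the spatially varying diffusivity

    print("peak temperature just below the coating:", round(float(T[2, 10:30].max()), 1), "K")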
APA, Harvard, Vancouver, ISO, and other styles
6

Гончаров, Олександр Андрійович, Александр Андреевич Гончаров, Oleksandr Andriiovych Honcharov, Андрій Миколайович Юнда, Андрей Николаевич Юнда, Andrii Mykolaiovych Yunda, and Р. Ю. Бондаренко. "Моделювання теплових процесів в ріжучий пластині із захисним покриттям." Thesis, Сумський державний університет, 2017. http://essuir.sumdu.edu.ua/handle/123456789/65329.

Full text
Abstract:
Domestic and foreign experience shows that the quality, accuracy, productivity and production cost of machine-building products depend strongly on the properties of the cutting tool used. It is known that the deposition of wear-resistant protective coatings is widely used to reduce cutting-tool wear, both by slowing recrystallization processes in the tool material and by reducing the heat flux acting on the cutting tool.
APA, Harvard, Vancouver, ISO, and other styles
7

Dobiáš, Radek. "Řešení technologie součásti chladicí věže." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230152.

Full text
Abstract:
This diploma thesis addresses the production technology of a cooling tower component. A model of the cooling tower is designed according to customer specifications. Two plates from the steel structure are selected for machining by plasma cutting, and a production technology for these plates is proposed. A technical and economic evaluation of production compares two versions of plate production using unconventional metal cutting methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Pessoa, Davi Felipe [Verfasser], Martina [Gutachter] Zimmermann, and Pedro Dolabella [Gutachter] Portella. "Influence of notches due to laser beam cutting on the fatigue behavior of plate-like shaped parts made of metastable austenitic stainless steel / Davi Felipe Pessoa ; Gutachter: Martina Zimmermann, Pedro Dolabella Portella." Dresden : Technische Universität Dresden, 2020. http://d-nb.info/1227833504/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zheng, Zi-Ming. "Theoretical analyses of wedge cutting through metal plates." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/38035.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 1995.
Vita.
Includes bibliographical references (leaves 171-174).
by Zi-Ming Zheng.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
10

Hansen, Stephen Lee. "Complete Randomized Cutting Plane Algorithms for Propositional Satisfiability." NSUWorks, 2000. http://nsuworks.nova.edu/gscis_etd/565.

Full text
Abstract:
The propositional satisfiability problem (SAT) is a fundamental problem in computer science and combinatorial optimization. A considerable number of prior researchers have investigated SAT, and much is already known concerning limitations of known algorithms for SAT. In particular, some necessary conditions are known, such that any algorithm not meeting those conditions cannot be efficient. This paper reports research to develop and test a new algorithm that meets the currently known necessary conditions. In chapter three, we give a new characterization of the convex integer hull of SAT, and two new algorithms for finding strong cutting planes. We also show the importance of choosing which vertex to cut, and present heuristics to find a vertex that allows a strong cutting plane. In chapter four, we describe an experiment to implement a SAT solving algorithm using the new algorithms and heuristics, and to examine their effectiveness on a set of problems. In chapter five, we describe the implementation of the algorithms, and present computational results. For an input SAT problem, the output of the implemented program provides either a witness to satisfiability or a complete cutting plane proof of unsatisfiability. The description, implementation, and testing of these algorithms yield both empirical data to characterize the performance of the new algorithms, and additional insight to further advance the theory. We conclude from the computational study that cutting plane algorithms are efficient for the solution of a large class of SAT problems.
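Cutting-plane approaches to SAT of the kind discussed above operate on the standard 0-1 linear encoding of CNF clauses. The sketch below shows only that encoding and its plain LP relaxation in Python with SciPy; the clause set is assumed for illustration, and none of the thesis' cut-generation or vertex-selection heuristics are reproduced.

    import numpy as np
    from scipy.optimize import linprog

    # CNF clauses over variables x1..x4; literal k means x_k, -k means NOT x_k (assumed example)
    clauses = [(1, 2), (-1, 3), (-2, -3), (2, 4), (-4, 3)]
    n = 4

    # clause (l1 v l2 v ...) becomes  sum_pos x_i + sum_neg (1 - x_j) >= 1,
    # written here in linprog's  A_ub @ x <= b_ub  form
    A_ub, b_ub = [], []
    for clause in clauses:
        row = np.zeros(n)
        neg = 0
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1.0
            else:
                row[-lit - 1] += 1.0
                neg += 1
        A_ub.append(row)
        b_ub.append(neg - 1)

    # LP relaxation of the 0-1 feasibility problem (any objective will do; a cutting-plane
    # method would add cuts whenever the relaxation returns a fractional vertex)
    res = linprog(np.zeros(n), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, 1)] * n, method="highs")
    print("LP relaxation feasible:", res.status == 0, "point:", res.x)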
APA, Harvard, Vancouver, ISO, and other styles
11

Oskoorouchi, Mohammad R. "The analytic center cutting plane method with semidefinite cuts /." Thesis, McGill University, 2002. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=38507.

Full text
Abstract:
We propose an analytic center cutting plane algorithm for semidefinite programming (SDP). Reformulation of the dual problem of SDP into an eigenvalue optimization, when the trace of any feasible primal matrix is a positive constant, is well known. We transform the eigenvalue optimization problem into a convex feasibility problem. The problem of interest seeks a feasible point in a bounded convex set, which contains a full-dimensional ball of radius ε (< 1) and is contained in a compact convex set described by matrix inequalities, known as the set of localization. At each iteration, an approximate analytic center of the set of localization is computed. If this point is not in the solution set, an oracle is called to return a p-dimensional semidefinite cut. The set of localization is then updated by adding the semidefinite cut through the center. We prove that the analytic center is recovered after adding a p-dimensional semidefinite cut in O(p log(p + 1)) damped Newton iterations and that the ACCPM with semidefinite cuts is a fully polynomial approximation scheme. We report numerical results of our algorithm when applied to the semidefinite relaxation of the Max-Cut problem.
APA, Harvard, Vancouver, ISO, and other styles
12

Denault, M. (Michel). "Variational inequalities with the analytic center cutting plane method." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=34945.

Full text
Abstract:
This thesis concerns the solution of variational inequalities (VIs) with analytic center cutting plane methods (ACCPMs). A convex feasibility problem reformulation of the variational inequality is used; this reformulation applies to VIs defined with pseudo-monotone, single-valued mappings or with maximal monotone, multi-valued mappings.
Two cutting plane methods are presented: the first is based on linear cuts while the second uses quadratic cuts. The first method, ACCPM-VI (linear cuts), requires mapping evaluations but no Jacobian evaluations; in fact, no differentiability assumption is needed. The cuts are placed at approximate analytic centers that are tracked with infeasible primal-dual Newton steps. Linear equality constraints may be present in the definition of the VI's set of reference, and are treated explicitly. The set of reference is assumed to be polyhedral, or is convex and iteratively approximated by polyhedra. Alongside of the sequence of analytic centers, another sequence of points is generated, based on convex combinations of the analytic centers. This latter sequence is observed to converge to a solution much faster than the former sequence.
The second method, ACCPM-VI (quadratic cuts), has cuts based on both mapping evaluations and Jacobian evaluations. The use of such a richer information set allows cuts that guide more accurately the sequence of analytic centers towards a solution. Mappings are assumed to be strongly monotone. However, Jacobian approximations, relying only on mapping evaluations, are observed to work very well in practice, so that differentiability of the mappings may not be required. There are two versions of the ACCPM-VI (quadratic cuts), that differ in the way a new analytic center is reached after the introduction of a cut. One version uses a curvilinear search followed by dual Newton centering steps. The search entails a full eigenvector-eigenvalue decomposition of a dense matrix of the order of the number of variables. The other version uses two line searches, primal-dual Newton steps, but no eigenvector-eigenvalue decomposition.
The algorithms described in this thesis were implemented in the MATLAB environment. Numerical tests were performed on a variety of problems, some new and some traditional applications of variational inequalities.
APA, Harvard, Vancouver, ISO, and other styles
13

Denault, Michel. "Variational inequalities with the analytic center cutting plane method." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0005/NQ44411.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Sontag, David Alexander. "Cutting plane algorithms for variational inference in graphical models." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40327.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2007.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Includes bibliographical references (leaves 65-66).
In this thesis, we give a new class of outer bounds on the marginal polytope, and propose a cutting-plane algorithm for efficiently optimizing over these constraints. When combined with a concave upper bound on the entropy, this gives a new variational inference algorithm for probabilistic inference in discrete Markov Random Fields (MRFs). Valid constraints are derived for the marginal polytope through a series of projections onto the cut polytope. Projecting onto a larger model gives an efficient separation algorithm for a large class of valid inequalities arising from each of the original projections. As a result, we obtain tighter upper bounds on the log-partition function than possible with previous variational inference algorithms. We also show empirically that our approximations of the marginals are significantly more accurate. This algorithm can also be applied to the problem of finding the Maximum a Posteriori assignment in an MRF, which corresponds to a linear program over the marginal polytope. One of the main contributions of the thesis is to bring together two seemingly different fields, polyhedral combinatorics and probabilistic inference, showing how certain results in either field can carry over to the other.
by David Alexander Sontag.
S.M.
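As a concrete illustration of why outer bounds on the marginal polytope matter, the sketch below builds the standard local (pairwise-consistency) relaxation of MAP inference for a small three-node model with disagreement-rewarding potentials and shows that its LP value exceeds the best integral assignment. The model is assumed purely for illustration; this is not the cutting-plane algorithm of the thesis.

    import numpy as np
    from scipy.optimize import linprog

    nodes = [0, 1, 2]
    edges = [(0, 1), (1, 2), (0, 2)]          # a frustrated triangle

    def node_idx(i, v):                       # position of mu_i(v), v in {0, 1}
        return 2 * i + v

    def edge_idx(e, a, b):                    # position of mu_e(a, b)
        return 6 + 4 * e + 2 * a + b

    n_vars = 6 + 4 * len(edges)
    A_eq, b_eq = [], []

    # each node marginal sums to one
    for i in nodes:
        row = np.zeros(n_vars)
        row[node_idx(i, 0)] = row[node_idx(i, 1)] = 1.0
        A_eq.append(row); b_eq.append(1.0)

    # edge marginals must be consistent with node marginals
    for e, (i, j) in enumerate(edges):
        for a in (0, 1):
            row = np.zeros(n_vars)
            row[edge_idx(e, a, 0)] = row[edge_idx(e, a, 1)] = 1.0
            row[node_idx(i, a)] = -1.0
            A_eq.append(row); b_eq.append(0.0)
        for b in (0, 1):
            row = np.zeros(n_vars)
            row[edge_idx(e, 0, b)] = row[edge_idx(e, 1, b)] = 1.0
            row[node_idx(j, b)] = -1.0
            A_eq.append(row); b_eq.append(0.0)

    # reward disagreement on every edge (linprog minimizes, so negate the objective)
    c = np.zeros(n_vars)
    for e in range(len(edges)):
        for a in (0, 1):
            for b in (0, 1):
                if a != b:
                    c[edge_idx(e, a, b)] = -1.0

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * n_vars, method="highs")
    print("local-polytope LP value:", -res.fun)      # 3.0, attained at a fractional point
    print("best integral assignment value: 2.0")     # any 0/1 labelling disagrees on at most 2 edges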
APA, Harvard, Vancouver, ISO, and other styles
15

Harris, Andrew William. "Generating an original Cutting-plane Algorithm in Three Sets." Thesis, Kansas State University, 2010. http://hdl.handle.net/2097/7013.

Full text
Abstract:
Master of Science
Department of Industrial & Manufacturing Systems Engineering
Todd W. Easton
Integer programs (IP) are a commonly researched class of problems used by governments and businesses to improve decision making through optimal resource allocation and scheduling. However, integer programs require an exponential amount of effort to solve and in some instances a feasible solution is unknown even with the most powerful computers. There are several methods commonly used to reduce the solution time for IPs. One such approach is to generate new valid inequalities through lifting. Lifting strengthens a valid inequality by changing the coefficients of the variables in the inequality. Lifting can result in facet defining inequalities, which are the theoretically strongest inequalities. This thesis introduces the Cutting-plane Algorithm in Three Sets (CATS) that can help reduce the solution time of integer programs. CATS uses synchronized simultaneous lifting to generate a new class of previously undiscovered valid inequalities. These inequalities are based upon three sets of indices from a binary knapsack integer program, which is a commonly studied integer program. CATS requires quartic effort times the number of inequalities generated. Some theoretical results describe easily verifiable conditions under which CATS inequalities are facet defining. A small computational study shows CATS obtains about an 8.9% runtime improvement over commercial IP software. CATS preprocessing time is fast and requires an average time of approximately 0.032 seconds to perform. With the exciting new class of inequalities produced relatively quickly compared to the solution time, CATS is advantageous and should be implemented to reduce solution time of many integer programs.
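CATS itself performs synchronized simultaneous lifting over three index sets, which is too involved to sketch here. As a much simpler hint of how a binary knapsack relaxation can be strengthened with a valid inequality, the fragment below separates a classical cover cut; the weights, capacity and fractional point are assumed for illustration, and this is not the CATS procedure.

    # greedy separation of a minimal cover cut for a binary knapsack constraint  sum a_i x_i <= b
    def cover_cut(a, b, x_frac):
        """Return indices C with sum of a[i] over C exceeding b; the valid inequality
        sum of x_i over C <= len(C) - 1 then cuts off points setting all of C to 1."""
        order = sorted(range(len(a)), key=lambda i: -x_frac[i])   # items the fractional point uses most, first
        C, weight = [], 0
        for i in order:
            C.append(i)
            weight += a[i]
            if weight > b:
                break
        if weight <= b:
            return None                                           # no cover exists
        for i in sorted(C, key=lambda i: a[i]):                   # shrink to a minimal cover
            if weight - a[i] > b:
                C.remove(i)
                weight -= a[i]
        return C

    a = [5, 6, 4, 3, 7]                       # assumed knapsack weights
    b = 12                                    # assumed capacity
    x_frac = [1.0, 0.9, 0.8, 0.1, 0.0]        # a fractional LP point to separate
    C = cover_cut(a, b, x_frac)
    print("cover:", C, "-> cut: sum of x_i over the cover <=", len(C) - 1)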
APA, Harvard, Vancouver, ISO, and other styles
16

Su, Ning. "Cutting force modeling and optimization in 3D plane surface machining." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp01/MQ39890.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Duan, Zhaoyang. "Parallel cutting plane algorithms for inverse mixed integer linear programming." [Ames, Iowa : Iowa State University], 2009. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:1468079.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Chen, Cameron Kai-Ming. "Analysis of the metal cutting process using the shear plane model." Thesis, Montana State University, 2010. http://etd.lib.montana.edu/etd/2010/chen/ChenC1210.pdf.

Full text
Abstract:
The objective of the metal cutting process is to reshape a piece of metal, or workpiece, of initial geometry into a new geometry of desired shape. Although there are a variety of ways to cut metal, this study focuses on the type of cutting where metal is sheared away from the workpiece as is commonly done with machine tools such as the lathe or mill. Typically, the correct machine settings can be found from reference guides that summarize a great amount of empirical data on metal cutting. Trial and error when combined with experience, often suffices to select the proper process parameters. The aim of this study is to predict the outcome of a metal cutting process given the properties of the workpiece, feed and cutting speed in order to understand the cutting process and predict optimum conditions. The shear plane model is well known, having been developed in the early and mid-20th century. However the empirical nature of the model and approximations made in making predictions of the metal cutting process serve to limit the usefulness of this model. A calculation routine devised by P.L.B Oxley to predict how to cut steel was created with modifications allowing predictions of the metal cutting process with any metal. A comparative study was done with 1006 steel, 6Al-4V titanium, 2024-T3 aluminum and OFE copper regarding the differences in tool forces and temperatures that would result if each metal was cut with the same process. A quantitative prediction of the metal cutting process was made for the four metals under study. Although there is no experimental data with which to evaluate these predictions, a number of case studies were performed. These case studies involved the prediction of experimental data presented in literature from other laboratories. The metal cutting model presented here has great promise as a guide to predict the best machine tool parameters.
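Oxley's predictive machining theory, which this thesis adapts, is far more elaborate than a few lines allow. The fragment below gives only the classical Merchant shear-plane relations that such models build on; every numerical input is an assumed, illustrative value.

    import math

    def merchant_forces(t0, w, rake_deg, mu, tau_s):
        """Classic Merchant shear-plane estimates for orthogonal cutting.
        t0: uncut chip thickness [m], w: width of cut [m], rake_deg: rake angle [deg],
        mu: tool-chip friction coefficient, tau_s: shear flow stress of the work material [Pa]."""
        alpha = math.radians(rake_deg)
        beta = math.atan(mu)                              # friction angle
        phi = math.pi / 4 + alpha / 2 - beta / 2          # Merchant's shear-angle solution
        F_s = tau_s * t0 * w / math.sin(phi)              # force along the shear plane
        R = F_s / math.cos(phi + beta - alpha)            # resultant tool force
        return math.degrees(phi), R * math.cos(beta - alpha), R * math.sin(beta - alpha)

    # assumed inputs: 0.2 mm uncut chip thickness, 3 mm width, 10 deg rake, mu = 0.5, 400 MPa shear stress
    phi, Fc, Ft = merchant_forces(0.2e-3, 3.0e-3, 10.0, 0.5, 400.0e6)
    print(f"shear angle {phi:.1f} deg, cutting force {Fc:.0f} N, thrust force {Ft:.0f} N")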
APA, Harvard, Vancouver, ISO, and other styles
19

Magnanti, Thomas L., and Rita Vachani. "A Strong Cutting Plane Algorithm for Production Scheduling with Changeover Costs." Massachusetts Institute of Technology, Operations Research Center, 1987. http://hdl.handle.net/1721.1/5192.

Full text
Abstract:
Changeover costs (and times) are central to numerous manufacturing operations. These costs arise whenever work centers capable of processing only one product at a time switch from the manufacture of one product to another. Although many researchers have contributed to the solution of scheduling problems that include changeover costs, due to the problem's combinatorial explosiveness, optimization-based methods have met with limited success. In this paper, we develop and apply polyhedral methods from integer programming for a dynamic version of the problem. Computational tests with problems containing one to five products (and up to 225 integer variables) show that polyhedral methods based upon a set of facet inequalities developed in this paper can effectively reduce the gap between the value of an integer program formulation of the problem and its linear programming relaxation (by 94 to 100 percent). These results suggest the use of a combined cutting plane/branch and bound procedure as a solution approach. In a test with a five product problem, this procedure, when compared with a standard linear programming-based branch and bound approach, reduced computation time by a factor of seven.
APA, Harvard, Vancouver, ISO, and other styles
20

Trouiller, Cyril. "Capacitated multi-item lot sizing with an interior point cutting plane algorithm." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23429.

Full text
Abstract:
The capacitated multi-item lot sizing problem is a model which aims at scheduling production of several products over a finite number of periods, while minimizing production costs, holding inventory costs and setup costs subject to demand and capacity constraints. These costs may vary for each product and each period and are all linear. Our model includes setup times for each product.
We compare two approaches: a classic Lagrangean relaxation of the capacity constraints and a Lagrangean decomposition by variable splitting. In both cases, the Lagrangean multipliers are updated with an interior point cutting plane technique. The results show: (1) The superiority of the interior point method over the commonly used subgradient optimization in terms of accuracy at termination, number of iterations and ease of utilization. (2) The better quality of the bounds obtained by the Lagrangean decomposition by variable splitting over the Lagrangean relaxation.
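For orientation only, the fragment below shows the projected-subgradient multiplier update that the commonly used baseline in this comparison relies on; the interior point cutting plane update actually advocated in the thesis is more involved, and all numbers here are assumed for illustration.

    import numpy as np

    def subgradient_step(lmbda, capacity_use, capacity, step):
        """One projected-subgradient update of the Lagrange multipliers attached to the
        relaxed capacity constraints  capacity_use <= capacity  (one multiplier per period)."""
        violation = capacity_use - capacity          # a subgradient of the Lagrangean dual function
        return np.maximum(0.0, lmbda + step * violation)

    # assumed toy data: three periods, current multipliers, capacity usage from the relaxed subproblems
    lmbda = np.array([0.0, 0.5, 0.2])
    capacity_use = np.array([120.0, 80.0, 105.0])
    capacity = np.array([100.0, 100.0, 100.0])
    print(subgradient_step(lmbda, capacity_use, capacity, step=0.01))
    # multipliers rise on the overloaded periods and shrink toward zero on the slack period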
APA, Harvard, Vancouver, ISO, and other styles
21

Fernando, L. Greshan. "Development of an analytical model for electrochemical machining (ECM) of an axisymmetric disk." Ohio : Ohio University, 1999. http://www.ohiolink.edu/etd/view.cgi?ohiou1175884893.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Abdul-Hamid, Fatimah. "An investigation of algorithms for the solution of integer programming problems." Thesis, Brunel University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.294883.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Armendariz, Charles E. "Makeready reduction in a platen die cutting operation : an analysis of process improvement methodologies /." Online version of thesis, 2009. http://ritdml.rit.edu/handle/1850/10844.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Krishnamurthy, Ravi S. "Enhanced intersection cutting plane and reformulation-linearization enumeration based approaches for linear complementarity problems." Diss., Virginia Tech, 1995. http://hdl.handle.net/10919/38573.

Full text
Abstract:
In this research effort, we consider the linear complementarity problem (LCP) that arises in diverse areas including optimal control, economics, engineering, mechanics, and quadratic programming. This class of problems has posed a challenge to researchers for over three decades now. Most of the current algorithms designed to solve LCP are guaranteed to work only under some restrictive assumptions on the matrix M associated with LCP. In this research, we introduce two new algorithms based on an equivalent 0-1 mixed integer bilinear programming formulation of LCP. In the first approach, we develop an enhanced intersection cutting plane algorithm for solving LCP. The matrix M is not assumed to possess any special structure, except that the corresponding feasible region is assumed to be bounded. A procedure is described to generate cuts that are deeper versions of Tuy's intersection cuts, based on a relaxation of the usual polar set. The proposed algorithm then attempts to find an LCP solution in the process of generating either a single or a pair of such strengthened intersection cuts. The process of generating these cuts involves a vertex ranking scheme that either finds an LCP solution, or else, these cuts eliminate the entire feasible region leading to the conclusion that no LCP solution exists. Based on the bilinear formulation, a heuristic is also proposed to front-end the algorithm, in order to possibly solve LCP. In the second part of the dissertation, we present a global optimization algorithm based on a novel Reformulation-Linearization Technique (RLT). We do not place any restrictions on the matrix M associated with LCP in this case. This RLT scheme provides an equivalent linear, mixed integer programming formulation of LCP, that possesses a tight linear programming relaxation. The solution strategy developed is a composite implicit enumeration-Lagrangian relaxation scheme. In addition to the bounds provided by the RLT-based relaxation, we further tighten these bounds at each node of the branch-and-bound tree through the use of strongest surrogate cuts, and strengthened intersection cuts. The heuristic developed in the previous algorithm is also invoked within this scheme. Both algorithms have been implemented and tested on randomly generated test problems. These problems include both indefinite and negative definite defining matrices. The results of these tests indicate that both methods are effective in solving the LCP with the heuristic being quite effective in recovering LCP solutions when the matrix M is negative definite. The Lagrangian dual approach for the implicit enumeration scheme proved to be computationally more efficient than a similar simplex based lower bounding scheme.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
25

Sharifi, Mokhtarian Faranak. "Analytic center cutting plane and path-following interior-point methods in convex programming and variational inequalities." Thesis, McGill University, 1997. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35615.

Full text
Abstract:
Interior-point methods have not only shown their efficiency for linear and some nonlinear programming problems, but also for cutting plane methods and large scale optimization. The analytic center cutting plane method uses the analytic center of the current polyhedral approximation of the feasible region to add a new cutting plane. In this thesis, analytic center cutting plane and path-following interior-point methodologies are used to solve the following problems: (1) convex feasibility problems defined by a deep cut separation oracle; (2) convex optimization problems involving a nonlinear objective and a constraint set defined implicitly by a separation oracle; (3) variational inequalities involving a nonlinear operator and a convex set explicitly defined; (4) variational inequalities involving an affine operator and a constraint set defined implicitly by a deep cut separation oracle; and (5) variational inequalities involving a nonlinear operator and a constraint set defined implicitly by a deep cut separation oracle. Here, the oracle is a routine that takes as input a test point. If the point belongs to the feasible region, it answers "yes", otherwise it answers "no" and returns a cut separating the point from the feasible region. Complexity bounds are established for algorithms developed for Cases 1, 2 and 4. The algorithm developed for Case 3 will be proven to be convergent, whereas, in Case 5, the developed algorithm will be shown to find an approximate solution in finite time.
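The analytic-center machinery summarised above does not reduce to a few lines, but the basic cutting-plane loop (query a point, receive a cut, re-optimise a polyhedral model) can be illustrated with the classical Kelley method for a smooth convex objective over a box. The test function, box and iteration count below are assumed for illustration; this is not the analytic-center method of the thesis.

    import numpy as np
    from scipy.optimize import linprog

    def kelley(f, grad, lb, ub, iters=30):
        """Kelley's cutting-plane method for  min f(x)  over the box [lb, ub]: every evaluated
        point xk contributes the cut  t >= f(xk) + grad(xk) . (x - xk), and the next iterate
        minimises the epigraph variable t over all cuts gathered so far."""
        n = len(lb)
        cuts_A, cuts_b = [], []                  # rows of  grad . x - t <= grad . xk - f(xk)
        x = (np.array(lb) + np.array(ub)) / 2.0
        best = np.inf
        for _ in range(iters):
            fx, g = f(x), grad(x)
            best = min(best, fx)
            cuts_A.append(np.append(g, -1.0))
            cuts_b.append(g @ x - fx)
            c = np.zeros(n + 1)
            c[-1] = 1.0                          # minimise t
            res = linprog(c, A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                          bounds=list(zip(lb, ub)) + [(None, None)], method="highs")
            x = res.x[:n]
        return best, x

    # assumed smooth convex test problem on [-2, 2]^2 with minimiser (1, -0.5)
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 0.5)])
    print(kelley(f, grad, [-2.0, -2.0], [2.0, 2.0]))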
APA, Harvard, Vancouver, ISO, and other styles
26

Sharifi, Mokhtarian Faranak. "Analytic center cutting plane and path-following interior-point methods in convex programming and variational inequalities." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape11/PQDD_0015/NQ44580.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Lemmon, Heber. "Methods for reduced platen compression (RPC) test specimen cutting locations using micro-CT and planar radiographs." Texas A&M University, 2003. http://hdl.handle.net/1969/310.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

PRAIS, MARCELO. "A STUDY ON CUTTING PLANE AND FIXING VARIABLE TECHNIQUES APPLIED TO THE RESOLUTION OF SET PARTITIONING PROBLEMS." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 1987. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=10249@1.

Full text
Abstract:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
This work consists of the application of cutting plane techniques (the accelerated Euclidean algorithm and disjunctive cuts) to the solution of pure 0-1 integer problems and their specializations for the set partitioning problem, when combined with penalty techniques for fixing variables. A study on penalty techniques, which allow variables to be fixed at integer values, is also developed. These penalties are directly derived from the optimal tableau of the linear relaxation of the integer problem. The variables fixed due to penalties are eliminated and the problem is reformulated, having its initial dimensions reduced. Some improvements on the evaluation of penalties are suggested, taking into account the special structure of the set partitioning problem. Finally, a new approach to the solution of set partitioning problems is proposed: a cutting plane algorithm which uses penalty techniques, in order to accelerate the convergence of pure cutting plane methods and overcome the problems arising from their use. Computational results are shown, allowing a comparison of the performance of (i) the accelerated Euclidean algorithm, (ii) the disjunctive cut algorithm and (iii) the latter combined with penalty techniques. For the last of these, the results obtained by the use of generic penalties for 0-1 integer programs are compared with those obtained by the use of the improved penalties for set partitioning problems. Taking into account set partitioning problems and the improvements proposed for the evaluation of penalties, it is shown that very often it is possible to fix more variables to integer values and even to solve the original 0-1 problem directly. In some cases, by applying the cutting plane algorithm together with penalties, it is possible to accelerate the convergence and overcome the dual degeneracy and round-off errors arising from the use of pure cutting plane algorithms.
APA, Harvard, Vancouver, ISO, and other styles
29

Little, Patrick E. (Patrick Edward). "A study of the wedge cutting force through transversely stiffened plates : an application to ship grounding resistance." Thesis, Massachusetts Institute of Technology, 1994. http://hdl.handle.net/1721.1/37522.

Full text
Abstract:
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 1994, and Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1994.
Includes bibliographical references (leaves 115-117).
by Patrick E. Little.
M.S.
APA, Harvard, Vancouver, ISO, and other styles
30

Smith, Jonathan Cole. "Tight Discrete Formulations to Enhance Solvability with Applications to Production, Telecommunications, and Air Transportation Problems." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/26710.

Full text
Abstract:
In formulating discrete optimization problems, it is not only important to have a correct mathematical model, but to have a well structured model that can be solved effectively. Two important characteristics of a general integer or mixed-integer program are its size (the number of constraints and variables in the problem), and its strength or tightness (a measure of how well it approximates the convex hull of feasible solutions). In designing model formulations, it is critical to ensure a proper balance between compactness of the representation and the tightness of its linear relaxation, in order to enhance its solvability. In this dissertation, we consider these issues pertaining to the modeling of mixed-integer 0-1 programming problems in general, as well as in the context of several specific real-world applications, including a telecommunications network design problem and an airspace management problem. We first consider the Reformulation-Linearization Technique (RLT) of Sherali and Adams and explore the generation of reduced first-level representations for mixed-integer 0-1 programs that tend to retain the strength of the full first-level linear programming relaxation. The motivation for this study is provided by the computational success of the first-level RLT representation (in full or partial form) experienced by several researchers working on various classes of problems. We show that there exists a first-level representation having only about half the RLT constraints that yields the same lower bound value via its relaxation. Accordingly, we attempt to a priori predict the form of this representation and identify many special cases for which this prediction is accurate. However, using various counter-examples, we show that this prediction as well as several variants of it are not accurate in general, even for the case of a single binary variable. Since the full first-level relaxation produces the convex hull representation for the case of a single binary variable, we investigate whether this is the case with respect to the reduced first-level relaxation as well, and show similarly that it holds true only for some special cases. Empirical results on the prediction capability of the reduced, versus the full, first-level representation demonstrate a high level of prediction accuracy on a set of random as well as practical, standard test problems. Next, we focus on a useful modeling concept that is frequently ignored while formulating discrete optimization problems. Very often, there exists a natural symmetry inherent in the problem itself that, if propagated to the model, can hopelessly mire a branch-and-bound solver by burdening it to explore and eliminate such alternative symmetric solutions. We discuss three applications where such a symmetry arises. For each case, we identify the indistinguishable objects in the model which create the problem symmetry, and show how imposing certain decision hierarchies within the model significantly enhances its solvability. These hierarchies render an otherwise virtually intractable formulation computationally viable using commercial software. For the first problem, we consider a problem of minimizing the maximum dosage of noise to which workers are exposed while working on a set of machines. We next examine a problem of minimizing the cost of acquiring and utilizing machines designed to cool large facilities or buildings, subject to minimum operational requirements. For each of these applications, we generate realistic test beds of problems. 
The decision hierarchies allow all previously intractable problems to be solved relatively quickly, and dramatically decrease the required computational time for all other problems. For the third problem, we investigate a network design problem arising in the context of deploying synchronous optical networks (SONET) using a unidirectional path switched ring architecture, a standard of transmission using optical fiber technology. Given several rings of this type, the problem is to find a placement of nodes to possibly multiple rings, and to determine what portion of demand traffic between node pairs spanned by each ring should be allocated to that ring. The constraints require that the demand traffic between each node pair should be satisfiable given the ring capacities, and that no more than a specified maximum number of nodes should be assigned to each ring. The objective function is to minimize the total number of node-to-ring assignments, and hence, the capital investment in add-drop multiplexer equipments. We formulate the problem as a mixed-integer programming model, and propose several alternative modeling techniques designed to improve the mathematical representation of this problem. We then develop various classes of valid inequalities for the problem along with suitable separation procedures for tightening the representation of the model, and accordingly, prescribe an algorithmic approach that coordinates tailored routines with a commercial solver (CPLEX). We also propose a heuristic procedure which enhances the solvability of the problem and provides bounds within 5-13% of the optimal solution. Promising computational results that exhibit the viability of the overall approach and that lend insights into various modeling and algorithmic constructs are presented. Following this we turn our attention to the modeling and analysis of several issues related to airspace management. Currently, commercial aircraft are routed along certain defined airspace corridors, where safe minimum separation distances between aircraft may be routinely enforced. However, this mode of operation does not fully utilize the available airspace resources, and may prove to be inadequate under future National Airspace (NAS) scenarios involving new concepts such as Free-Flight. This mode of operation is further compounded by the projected significant increase in commercial air traffic. (Free-Flight is a paradigm of aircraft operations which permits the selection of more cost-effective routes for flights rather than simple traversals between designated way-points, from various origins to different destinations.) We begin our study of Air Traffic Management (ATM) by first developing an Airspace Sector Occupancy Model (AOM) that identifies the occupancies of flights within three dimensional (possibly nonconvex) regions of space called sectors. The proposed iterative procedure effectively traces each flight's progress through nonconvex sector modules which comprise the sectors. Next, we develop an Aircraft Encounter Model (AEM), which uses the information obtained from AOM to efficiently characterize the number and nature of blind-conflicts (i.e., conflicts under no avoidance or resolution maneuvers) resulting from a selected mix of flight-plans. 
Besides identifying the existence of a conflict, AEM also provides useful information on the severity of the conflict, and its geometry, such as the faces across which an intruder enters and exits the protective shell or envelope of another aircraft, the duration of intrusion, its relative heading, and the point of closest approach. For purposes of evaluation and assessment, we also develop an aggregate metric that provides an overall assessment of the conflicts in terms of their individual severity and resolution difficulty. We apply these models to real data provided by the Federal Aviation Administration (FAA) for evaluating several Free-Flight scenarios under wind-optimized and cruise-climb conditions. We digress at this point to consider a more general collision detection problem that frequently arises in the field of robotics. Given a set of bodies with their initial positions and trajectories, we wish to identify the first collision that occurs between any two bodies, or to determine that none exists. For the case of bodies having linear trajectories, we construct a convex hull representation of the integer programming model of Selim and Almohamad, and exhibit the relative effectiveness of solving this problem via the resultant linear program. We also extend this analysis to model a situation in which bodies move along piecewise linear trajectories, possibly rotating at the end of each linear translation. For this case, we again compare an integer programming approach with its linear programming convex hull representation, and exhibit the relative effectiveness of solving a sequence of problems based on applying the latter construct to each time segment. Returning to Air Traffic Management, another future difficulty in airspace resource utilization stems from a projected increase in commercial space traffic, due to the advent of Reusable Launch Vehicle (RLV) technology. Currently, each shuttle launch cordons off a large region of Special Use Airspace (SUA) in which no commercial aircraft are permitted to enter for the specified duration. Of concern to airspace planners is the expense of routinely disrupting air traffic, resulting in circuitous diversions and delays, while enforcing such SUA restrictions. To provide a tool for tactical and planning purposes in such a context within the framework of a coordinated decision making process between the FAA and commercial airlines, we develop an Airspace Planning Model (APM). Given a set of flights for a particular time horizon, along with (possibly several) alternative flight-plans for each flight that are based on delays and diversions due to special-use airspace (SUA) restrictions prompted by launches at spaceports or weather considerations, this model prescribes a set of flight-plans to be implemented. The model formulation seeks to minimize a delay and fuel cost based objective function, subject to the constraints that each flight is assigned one of the designated flight-plans, and that the resulting set of flight-plans satisfies certain specified workload, safety, and equity criteria. These requirements ensure that the workload for air-traffic controllers in each sector is held under a permissible limit, that any potential conflicts which may occur are routinely resolvable, and that the various airlines involved derive equitable levels of benefits from the overall implemented schedule. 
In order to solve the resulting 0-1 mixed-integer programming problem more effectively using commercial software (CPLEX-MIP), we explore the use of various facetial cutting planes and reformulation techniques designed to more closely approximate the convex hull of feasible solutions to the problem. We also prescribe a heuristic procedure which is demonstrated to provide solutions to the problem that are either optimal or are within 0.01% of optimality. Computational results are reported on several scenarios based on actual flight data obtained from the Federal Aviation Administration (FAA) in order to demonstrate the efficacy of the proposed approach for air traffic management (ATM) purposes. In addition to the evaluation of these various models, we exhibit the usefulness of this airspace planning model as a strategic planning tool for the FAA by exploring the sensitivity of the solution provided by the model to changes both in the radius of the SUA formulated around the spaceport, and in the duration of the launch-window during which the SUA is activated.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
31

El-Bialy, B. H. M. "A metallurgical and machining study of the behaviour of ion plated titanium nitride coated high speed steel cutting tools." Thesis, University of Salford, 1986. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.372142.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Chen, Binyuan. "FINITE DISJUNCTIVE PROGRAMMING METHODS FOR GENERAL MIXED INTEGER LINEAR PROGRAMS." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/145120.

Full text
Abstract:
In this dissertation, a finitely convergent disjunctive programming procedure, the Convex Hull Tree (CHT) algorithm, is proposed to obtain the convex hull of a general mixed-integer linear program with bounded integer variables. The CHT algorithm constructs a linear program that has the same optimal solution as the associated mixed-integer linear program. The standard notion of sequential cutting planes is then combined with ideas underlying the CHT algorithm to help guide the choice of disjunctions to use within a new cutting plane method, the Cutting Plane Tree (CPT) algorithm. We show that the CPT algorithm converges to an integer optimal solution of the general mixed-integer linear program with bounded integer variables in finitely many steps. We also enhance the CPT algorithm with several techniques including a "round-of-cuts" approach and an iterative method for solving the cut generation linear program (CGLP). Two normalization constraints are discussed in detail for solving the CGLP. For moderately sized instances, our study shows that the CPT algorithm provides significant gap closures with a pure cutting plane method.
APA, Harvard, Vancouver, ISO, and other styles
33

Riedel, Sebastian. "Efficient prediction of relational structure and its application to natural language processing." Thesis, University of Edinburgh, 2009. http://hdl.handle.net/1842/4167.

Full text
Abstract:
Many tasks in Natural Language Processing (NLP) require us to predict a relational structure over entities. For example, in Semantic Role Labelling we try to predict the 'semantic role' relation between a predicate verb and its argument constituents. Often NLP tasks not only involve related entities but also relations that are stochastically correlated. For instance, in Semantic Role Labelling the roles of different constituents are correlated: we cannot assign the agent role to one constituent if we have already assigned this role to another. Statistical Relational Learning (also known as First Order Probabilistic Logic) allows us to capture the aforementioned nature of NLP tasks because it is based on the notions of entities, relations and stochastic correlations between relationships. It is therefore often straightforward to formulate an NLP task using a First Order probabilistic language such as Markov Logic. However, the generality of this approach comes at a price: the process of finding the relational structure with highest probability, also known as maximum a posteriori (MAP) inference, is often inefficient, if not intractable. In this work we seek to improve the efficiency of MAP inference for Statistical Relational Learning. We propose a meta-algorithm, namely Cutting Plane Inference (CPI), that iteratively solves small subproblems of the original problem using any existing MAP technique and inspects parts of the problem that are not yet included in the current subproblem but could potentially lead to an improved solution. Our hypothesis is that this algorithm can dramatically improve the efficiency of existing methods while remaining at least as accurate. We frame the algorithm in Markov Logic, a language that combines First Order Logic and Markov Networks. Our hypothesis is evaluated using two tasks: Semantic Role Labelling and Entity Resolution. It is shown that the proposed algorithm improves the efficiency of two existing methods by two orders of magnitude and leads an approximate method to more probable solutions. We also show that CPI, at convergence, is guaranteed to be at least as accurate as the method used within its inner loop. Another core contribution of this work is a theoretic and empirical analysis of the boundary conditions of Cutting Plane Inference. We describe cases when Cutting Plane Inference will definitely be difficult (because it instantiates large networks or needs many iterations) and when it will be easy (because it instantiates small networks and needs only few iterations).
APA, Harvard, Vancouver, ISO, and other styles
34

Essmann, Erich C. "A cost model for the manufacture of bipolar plates using micro milling." Thesis, Stellenbosch : Stellenbosch University, 2012. http://hdl.handle.net/10019.1/20319.

Full text
Abstract:
Thesis (MScEng)--Stellenbosch University, 2012.
ENGLISH ABSTRACT: In a move towards cleaner and more sustainable energy systems, hydrogen as an energy carrier and hydrogen fuel cells as energy converters are receiving increasing global attention. Considering the vital role that platinum plays in the operation of hydrogen fuels cells, South Africa stands to gain enormously as the world’s leading platinum group metals supplier. Therefore, in order to benefit across the whole value chain, it is imperative to develop the capability to manufacture hydrogen fuel cell stacks locally. This project addresses this imperative, in part, by building a framework to evaluate the manufacturing performance of one of the more costly components of the hydrogen fuel cell stack. More specifically, this project builds a cost evaluation model (or cost model) for the manufacture of bipolar plates using micro milling. In essence, the model characterises manufacturing cost (and time) as a function of relevant inputs. The model endeavours to be flexible in accommodating relevant contributing cost drivers such as tool life and manufacturing time. Moreover, the model lays the groundwork, from a micro milling perspective, for a comparison of different manufacturing methods for bipolar plates. The approach taken in building the cost model is a fundamental one, owing to the lack of historical cost data for this particular process. As such, manufacturing knowledge and experimentation are used to build the cost model in a structured way. The process followed in building the cost model begins with the formulation of the cost components by reviewing relevant examples from literature. Thereafter, two main cost drivers are comprehensively addressed. Tool life is characterised experimentally as a function of cutting parameters and manufacturing time is characterised as a function of relevant inputs. The work is then synthesized into a coherent cost model. Following the completion of the cost model, analysis is done to find the near-optimal combination of machine cutting parameters. Further, analysis is done to quantify the sensitivity of manufacturing cost to design changes and production volumes. This attempts to demonstrate how typical managerial issues can be addressed using the cost model format. The value of this work must be seen in terms of its practical contribution. That is, its contribution to the development of the capability to manufacture hydrogen fuel cells locally. By understanding the effect of relevant input factors on manufacturing cost, ‘upstream’ design and development activities can be integrated with ‘downstream’ manufacturing activities. Therefore, this project supports the development of manufacturing capability by providing a mechanism to control cost throughout the process.
APA, Harvard, Vancouver, ISO, and other styles
35

Da, Costa Fontes Fábio Francisco. "Optimization Models and Algorithms for the Design of Global Transportation Networks." Thesis, Artois, 2017. http://www.theses.fr/2017ARTO0206/document.

Full text
Abstract:
The development of efficient network structures for freight transport is a major concern for the current global market. Demands need to be transported quickly and should meet customer needs within a short period of time. Traffic congestion and delays must be minimized, CO2 emissions must be controlled, and affordable transport costs have to be offered to customers. The hub-and-spoke structure is a common network model used for both regional and intercontinental transportation, which offers economies of scale by aggregating demands at hub nodes. However, delays, traffic congestion and long delivery times are drawbacks of this kind of network. In this thesis, a new concept, called the "sub-hub", is added to the classic hub-and-spoke network structure. In the proposed network models, economies of scale and shorter alternative paths are implemented, thus minimizing transport cost and delivery time. The sub-hub can be viewed as a connection point between two routes from distinct, neighbouring regions. Transshipments without the need to pass through hub nodes are possible at sub-hubs. This way, congestion can be avoided and, consequently, the associated delays are minimized. Four binary integer linear programming models for the hub location and routing problem are developed in this thesis. Networks with sub-hubs and networks without sub-hubs, taking into account circular hub routes or direct connections between hubs, are compared. These models are composed of four sub-problems (location, allocation, service design and routing), which makes them hard to solve. A cutting plane approach was used to solve small instances of the problem, while a Variable Neighborhood Decomposition Search (VNDS) composed of exact methods (matheuristic) was developed to solve large instances. The VNDS explores each sub-problem with different operators. Major benefits are provided by the models with sub-hubs, thus promoting the development of more competitive networks.
APA, Harvard, Vancouver, ISO, and other styles
36

Bracco, Mark Douglas. "A study of the wedge cutting force through longitudinally stiffened plates : an application to grounding resistance of single and double hull ships." Thesis, Monterey, California. Naval Postgraduate School, 1994. http://hdl.handle.net/10945/26279.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Perakis, Georgia, and M. (Marina) Zaretsky. "On the Efficient Solution of Variational Inequalities; Complexity and Computational Efficiency." Massachusetts Institute of Technology, Operations Research Center, 2002. http://hdl.handle.net/1721.1/5099.

Full text
Abstract:
In this paper we combine ideas from cutting plane and interior point methods in order to solve variational inequality problems efficiently. In particular, we introduce a general framework that incorporates nonlinear as well as linear "smarter" cuts. These cuts utilize second-order information on the problem through the use of a gap function. We establish convergence as well as complexity results for this framework. Moreover, in order to devise more practical methods, we consider an affine scaling method as it applies to symmetric, monotone variational inequality problems and demonstrate its convergence. Finally, in order to further improve the computational efficiency of the methods in this paper, we combine the cutting plane approach with the affine scaling approach.
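As a rough illustration of the cutting plane side of such a framework (not the authors' nonlinear gap-function cuts or their affine scaling method), the sketch below uses only the generic fact that, for a monotone operator F, every query point x_k yields the valid linear cut F(x_k)^T (x - x_k) <= 0, a half-space that still contains the VI solution. The example operator, the box feasible set and the Chebyshev-center re-centering are all assumptions made for the sketch.

```python
# Minimal localization/cutting-plane sketch for a monotone variational inequality
# VI(F, K): find x* in K with F(x*)^T (y - x*) >= 0 for all y in K.
# Monotonicity gives F(x_k)^T (x* - x_k) <= 0, so each query point yields a valid cut.
import numpy as np
from scipy.optimize import linprog

def F(x):
    # Example monotone (affine) operator: F(x) = M x + q with M positive definite.
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    return M @ x + q

def chebyshev_center(A, b):
    """Center of the largest ball inside {x : A x <= b}; LP variables are (x, r)."""
    n = A.shape[1]
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    A_ub = np.hstack([A, norms])
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # maximize the radius r
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(None, None)] * n + [(0, None)])
    return res.x[:n]

# K = [0, 1]^2 written as A x <= b
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])

x = chebyshev_center(A, b)
for _ in range(30):
    g = F(x)
    if np.linalg.norm(g) < 1e-9:                   # stationary point inside K
        break
    # Add the cut g^T (x' - x) <= 0 and re-center the localization set.
    A = np.vstack([A, g])
    b = np.append(b, g @ x)
    x = chebyshev_center(A, b)

print("approximate VI solution:", np.round(x, 3))
```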
APA, Harvard, Vancouver, ISO, and other styles
38

Залога, Вільям Олександрович, Вильям Александрович Залога, Viliam Oleksandrovych Zaloha, Костянтин Олександрович Дядюра, Константин Александрович Дядюра, Kostiantyn Oleksandrovych Diadiura, Олександр Володимирович Івченко, et al. "Нормативне забезпечення неруйнівного експрес-методу оцінювання якості лез різального інструменту." Thesis, Сумський державний університет, 2017. http://essuir.sumdu.edu.ua/handle/123456789/66758.

Full text
Abstract:
It is known that at the current stage of industrial development, owing to the substantial increase in the range of products and the decrease in the batch sizes being ordered, in-house manufacture of cutting tools and tooling is becoming irrational; as a result, the share of purchased tools and tooling produced by specialised manufacturers (firms) has grown considerably.
APA, Harvard, Vancouver, ISO, and other styles
39

Chandrasekaran, Karthekeyan. "New approaches to integer programming." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44814.

Full text
Abstract:
Integer Programming (IP) is a powerful and widely-used formulation for combinatorial problems. The study of IP over the past several decades has led to fascinating theoretical developments, and has improved our ability to solve discrete optimization problems arising in practice. This thesis makes progress on algorithmic solutions for IP by building on combinatorial, geometric and Linear Programming (LP) approaches. We use a combinatorial approach to give an approximation algorithm for the feedback vertex set problem (FVS) in a recently developed Implicit Hitting Set framework. Our algorithm is a simple online algorithm which finds a nearly optimal FVS in random graphs. We also propose a planted model for FVS and show that an optimal hitting set for a polynomial number of subsets is sufficient to recover the planted subset. Next, we present an unexplored geometric connection between integer feasibility and the classical notion of discrepancy of matrices. We exploit this connection to show a phase transition from infeasibility to feasibility in random IP instances. A recent algorithm for small discrepancy solutions leads to an efficient algorithm to find an integer point for random IP instances that are feasible with high probability. Finally, we give a provably efficient implementation of a cutting-plane algorithm for perfect matchings. In our algorithm, cuts separating the current optimum are easy to derive while a small LP is solved to identify the cuts that are to be retained for later iterations. Our result gives a rigorous theoretical explanation for the practical efficiency of the cutting plane approach for perfect matching evident from implementations. In summary, this thesis contributes to new models and connections, new algorithms and rigorous analysis of well-known approaches for IP.
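The thesis's contribution lies in provably efficient cut retention via a small LP; the sketch below only illustrates the textbook outer loop it builds on: solve the degree-constrained LP relaxation of minimum-weight perfect matching, separate violated odd-set ("blossom") inequalities, add them, and re-solve. The brute-force separation by enumerating odd subsets is an assumption made for the sketch and is practical only for the tiny example graph.

```python
# Tiny cutting-plane sketch for minimum-weight perfect matching (illustration only).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Two triangles joined by one expensive edge: the degree LP alone is fractional here.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
weights = np.array([1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 10.0])
n, m = 6, len(edges)

A_eq = np.zeros((n, m))                     # degree constraints: each vertex matched once
for j, (u, v) in enumerate(edges):
    A_eq[u, j] = A_eq[v, j] = 1.0
b_eq = np.ones(n)

cuts_A, cuts_b = [], []                     # blossom cuts in <= form: -sum_{delta(U)} x_e <= -1
for _ in range(20):
    res = linprog(weights,
                  A_ub=np.array(cuts_A) if cuts_A else None,
                  b_ub=np.array(cuts_b) if cuts_b else None,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * m)
    x = res.x
    violated = None
    for k in range(3, n, 2):                # enumerate odd vertex subsets (size 3, 5, ...)
        for U in combinations(range(n), k):
            Uset = set(U)
            cut_val = sum(x[j] for j, (u, v) in enumerate(edges)
                          if (u in Uset) != (v in Uset))
            if cut_val < 1.0 - 1e-6:        # blossom inequality sum_{delta(U)} x_e >= 1 violated
                violated = Uset
                break
        if violated:
            break
    if violated is None:
        break
    row = np.zeros(m)
    for j, (u, v) in enumerate(edges):
        if (u in violated) != (v in violated):
            row[j] = -1.0
    cuts_A.append(row)
    cuts_b.append(-1.0)

print("matching edges:", [edges[j] for j in range(m) if x[j] > 0.5])
```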
APA, Harvard, Vancouver, ISO, and other styles
40

Soberanis, Policarpio Antonio. "Risk optimization with p-order conic constraints." Diss., University of Iowa, 2009. https://ir.uiowa.edu/etd/437.

Full text
Abstract:
My dissertation considers the solution of linear programming problems with p-order conic constraints, which are related to a class of stochastic optimization models whose risk objective or constraints involve higher moments of loss distributions. The proposed approach is based on the construction of polyhedral approximations of p-order cones, thereby approximating the non-linear convex p-order conic programming problems by linear programming models. It is shown that the resulting LP problems possess a special structure that makes them amenable to efficient decomposition techniques. The developed algorithms are tested on a portfolio optimization problem with higher-moment coherent risk measures, which reduces to a p-order conic programming problem. Case studies on real financial data demonstrate that the proposed computational techniques compare favorably against a number of benchmark methods, including second-order conic programming methods.
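The dissertation builds structured, lifted polyhedral approximations that suit decomposition; the sketch below shows only the simpler underlying idea that a p-order cone constraint can be outer-approximated by supporting hyperplanes (gradient cuts) inside an LP loop. The exponent, objective and tolerance are arbitrary assumptions.

```python
# Sketch of outer-approximating the p-order cone constraint ||x||_p <= 1 by linear
# (supporting-hyperplane) cuts inside an LP loop.  This shows only the basic idea;
# the dissertation's structured lifted approximations are not reproduced here.
import numpy as np
from scipy.optimize import linprog

p = 3.0
q = p / (p - 1.0)                       # conjugate exponent, used only as a check
c = np.array([2.0, -1.0, 0.5])
n = len(c)

A_ub, b_ub = [], []                     # accumulated cuts g . x <= 1
for _ in range(60):
    res = linprog(-c,                   # maximize c.x  <=>  minimize -c.x
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  bounds=[(-1.0, 1.0)] * n)    # the box [-1,1]^n contains the unit p-ball
    x = res.x
    norm = np.linalg.norm(x, ord=p)
    if norm <= 1.0 + 1e-7:              # current LP optimum already satisfies ||x||_p <= 1
        break
    # Supporting hyperplane of the unit p-ball: g . y <= 1 holds for every ||y||_p <= 1
    # (since ||g||_q = 1) and cuts off the current infeasible point x.
    g = np.sign(x) * np.abs(x) ** (p - 1) / norm ** (p - 1)
    A_ub.append(g)
    b_ub.append(1.0)

print("LP outer approximation:", -res.fun, " exact value ||c||_q:", np.linalg.norm(c, ord=q))
```

For maximising c.x over the unit p-ball the exact optimum is the dual norm ||c||_q with 1/p + 1/q = 1, which the final print uses as a sanity check; after a few dozen cuts the LP value should be close to it.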
APA, Harvard, Vancouver, ISO, and other styles
41

Qi, Yunwei. "Time-staged decomposition and related algorithms for stochastic mixed-integer programming." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1343416038.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Hellman, Fredrik. "Towards the Solution of Large-Scale and Stochastic Traffic Network Design Problems." Thesis, Uppsala University, Department of Information Technology, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-130013.

Full text
Abstract:

This thesis investigates the second-best toll pricing and capacity expansion problems when stated as mathematical programs with equilibrium constraints (MPEC). Three main questions are raised: First, whether conventional descent methods give sufficiently good solutions, or whether global solution methods are to be preferred. Second, how the performance of the considered solution methods scales with network size. Third, how a discretized stochastic mathematical program with equilibrium constraints (SMPEC) formulation of a stochastic network design problem can be practically solved. An attempt to answer these questions is made through a series of numerical experiments.

The traffic system is modeled using Wardrop's principle for user behavior and separable cost functions of BPR and TU71 type. Elastic demand is also considered for some problem instances.

Two previously developed solution approaches are considered: implicit programming and a cutting constraint algorithm. For the implicit programming approach, several methods, both local and global, are applied, and for the traffic assignment problem an implementation of the disaggregate simplicial decomposition (DSD) method is used. Regarding the first question concerning local and global methods, our results do not give a clear answer.

The results from numerical experiments with both approaches on networks of different sizes show that the implicit programming approach has the potential to solve large-scale problems, while the cutting constraint algorithm scales worse with network size.

Also for the stochastic extension of the network design problem, the numerical experiments indicate that implicit programming is a good approach to the problem.

Further, a number of theorems providing sufficient conditions for strong regularity of the traffic assignment solution mapping for OD connectors and BPR cost functions are given.
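To make the bilevel structure concrete, a toy second-best toll pricing instance on two parallel links can be written down directly: the lower level is a Wardrop user equilibrium with BPR-type costs found by bisection, and the upper level simply grid-searches the toll. All link parameters below are invented, and the brute-force upper level merely stands in for the implicit-programming or cutting-constraint methods actually studied in the thesis.

```python
# Toy illustration of the bilevel (MPEC) structure of second-best toll pricing.

def bpr(t0, cap, flow):
    """BPR travel time: t0 * (1 + 0.15 * (flow / cap)**4)."""
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def user_equilibrium(toll1, demand=10.0, t0=(1.0, 1.5), cap=(5.0, 8.0)):
    """Wardrop split of `demand` over two parallel links; used links have equal
    generalized cost t_i + toll_i.  Solved by bisection on the flow of link 1."""
    def gap(f1):                       # generalized cost difference, increasing in f1
        return bpr(t0[0], cap[0], f1) + toll1 - bpr(t0[1], cap[1], demand - f1)

    if gap(0.0) >= 0.0:                # link 1 too costly even when empty
        return 0.0, demand
    if gap(demand) <= 0.0:             # link 1 cheaper even when carrying everything
        return demand, 0.0
    lo, hi = 0.0, demand
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) < 0.0 else (lo, mid)
    f1 = 0.5 * (lo + hi)
    return f1, demand - f1

def total_travel_time(f1, f2, t0=(1.0, 1.5), cap=(5.0, 8.0)):
    return f1 * bpr(t0[0], cap[0], f1) + f2 * bpr(t0[1], cap[1], f2)

# Upper level: brute-force the toll on link 1 (a descent or global MPEC method would go here).
best = min((total_travel_time(*user_equilibrium(toll)), toll)
           for toll in [0.05 * k for k in range(0, 101)])
print(f"best toll on link 1: {best[1]:.2f}, total travel time: {best[0]:.3f}")
```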

APA, Harvard, Vancouver, ISO, and other styles
43

Kulla, Lukáš. "Statické zajištění zámku." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2016. http://www.nusl.cz/ntk/nusl-240358.

Full text
Abstract:
The aim of this thesis was the static securing of the castle in Miroslavské Knínice. It was necessary to survey the structure in several respects and identify signs of failure, then to analyse the findings and propose suitable strengthening of the individual parts, divide the work into construction stages, and assess the proposal with respect to the resistance of the materials. Finally, detailed documentation was produced in a scope suitable for execution. Based on an engineering-geological and visual survey, horizontal bracing with prestressing cables was designed at three levels. The first level, "A", consists of a closed circuit of prestressed reinforced-concrete bands, supplemented by a cross arrangement of prestressed reinforced-concrete bands. The next levels, "B" and "C", secure the top of the building and are proposed using prestressing cables placed in spare cable channels.
APA, Harvard, Vancouver, ISO, and other styles
44

Arenas, Jaén Manuel Jesús [Verfasser], Dietmar [Akademischer Betreuer] Hömberg, Dietmar [Gutachter] Hömberg, Pedreira María Dolores [Gutachter] Gómez, and Alfred [Gutachter] Schmidt. "Thermal cutting of steel plates : modelling, simulation and optimal control of preheating strategies / Manuel Jesús Arenas Jaén ; Gutachter: Dietmar Hömberg, María Dolores Gómez Pedreira, Alfred Schmidt ; Betreuer: Dietmar Hömberg." Berlin : Technische Universität Berlin, 2021. http://d-nb.info/1234550032/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Cardozo, Arteaga Carmen. "Optimisation of power system security with high share of variable renewables : Consideration of the primary reserve deployment dynamics on a Frequency Constrained Unit Commitment model." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLC024/document.

Full text
Abstract:
The Unit Commitment problem (UC) is a family of optimisation models for determining the optimal short-term generation schedule to supply electric power demand with a defined risk level. The UC objective function is given by the operational costs over the optimisation horizon. The constraints include, among others, technical, operational and security limits. Traditionally, the security constraints are given by the requirement of a certain volume of on-line spare capacity, which is called the reserve and is meant to handle uncertainty, while preventing the interruption of power supply. It is commonly specified following a static reliability criterion, such as the N-1 rule.Nevertheless, in small systems the fixed, and a priori defined, reserve constraint could entail a violation of the N-1 criterion, although the reserve constraint was met. More recently, the increasing share of variable generation from renewable sources (V-RES), such as wind and solar, may lead to UC solutions that no longer ensure system security. Therefore, different impact mitigation techniques have been proposed in literature, which include the revision of UC models to provide a better representation of the system dynamics. This subfamily of UC models is formally defined in this work as the frequency constrained UC problem (FCUC), and aims to keep the frequency above a certain threshold, following pre-defined contingencies, by adding enhanced security constraints. In this work this topic is addressed in four parts.The first part identifies the main challenge of formulating the FCUC problem. Indeed, the frequency minimum, also called the frequency nadir, constraint is strongly non-linear on the decision variables of the UC model. Moreover, the behaviour of the frequency nadir regarding the binary decision variables is hard to approximate by analytical functions. Thus, a sequential simulation approach is proposed, based on a classic UC model and a reduced order model of the primary frequency response. The potential benefits of a smarter allocation of the primary reserve is revealed.The second part of this work investigates the impact of V-RES sources on the primary frequency response. The underlying processes that lead to the increase of the Under-Frequency Load Shedding (UFLS) risk are thoroughly discussed. The need of formulating more accurate FCUC models is highlighted.The third part of this work examines the cost/benefit and limitation of FCUC models based on indirect constraints over certain dynamic parameters of the generating units. A methodology is proposed that assesses the effectiveness and optimality of some existing V-RES impact mitigation techniques, such as the increase of the primary reserve requirement, the prescription of an inertia requirement, the authorisation of V-RES dispatch-down or the consideration of fast non-synchronous providers of frequency regulation services. This study showed the need for new methods to properly handle the frequency nadir constraint in order to ensure optimality, without compromising the optimisation problem’s tractability.The fourth part of this work offers a new formulation of the FCUC problem following a Bender’s decomposition approach. This method is based on the decomposition of an optimisation problem into two stages: the master and the slave problems. Here, the master problem deals with the generating unit states and the slave problem handles the frequency nadir constraints through a cutting plane model. 
Simulation results showed that the more accurate representation of the frequency nadir in the slave problem reduces the risk of UFLS and the security cost relative to other FCUC models, such as those based on inertia constraints. In addition, the optimality of the global solution is guaranteed, although the convergence of the master problem is slow due to the well-known tailing-off effect of cutting plane methods.
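A heavily simplified skeleton of this master/slave interplay is sketched below: the master is solved by enumeration instead of a MILP, the slave replaces the cutting-plane nadir model with a crude linear-ramp swing-equation estimate (neglecting damping and governor dynamics), and insecure commitments are excluded with simple no-good cuts. All unit data, time constants and limits are invented toy values, not the thesis's model.

```python
# Benders-style master/slave skeleton for a frequency constrained unit commitment toy.
from itertools import product

# unit data: (capacity MW, cost per MWh at full output, inertia constant H in s, rating MVA)
units = [(200.0, 20.0, 6.5, 250.0),
         (150.0, 30.0, 6.0, 180.0),
         (120.0, 45.0, 5.5, 150.0),
         (100.0, 60.0, 5.0, 120.0)]
demand = 300.0
f0, t_deliver, nadir_limit = 50.0, 0.5, 0.8     # Hz, s, max allowed deviation (toy values)

def slave_nadir_ok(commitment):
    """Check post-contingency frequency security after losing the largest committed unit."""
    on = [u for u, z in zip(units, commitment) if z]
    lost = max(on, key=lambda u: u[0])
    rest = [u for u in on if u is not lost]
    if not rest or sum(u[0] for u in rest) < demand:
        return False                             # not enough capacity left after the loss
    kinetic = sum(h * s for _, _, h, s in rest)  # sum of H_i * S_i (MW*s)
    dP = lost[0]
    # Linear-ramp swing estimate of the nadir (reserve fully delivered at t_deliver).
    nadir = f0 * dP * t_deliver / (4.0 * kinetic)
    return nadir <= nadir_limit

cuts = set()                                     # excluded commitments (no-good cuts)
while True:
    # "Master": cheapest commitment covering demand that no cut has excluded yet
    # (cost proxy: every committed unit at full output for one hour).
    feasible = [z for z in product((0, 1), repeat=len(units))
                if sum(u[0] for u, zi in zip(units, z) if zi) >= demand and z not in cuts]
    if not feasible:
        print("no secure commitment exists under these assumptions")
        break
    z = min(feasible, key=lambda z: sum(u[0] * u[1] for u, zi in zip(units, z) if zi))
    # "Slave": frequency-security check; add a cut and iterate if it fails.
    if slave_nadir_ok(z):
        print("secure commitment:", z)
        break
    cuts.add(z)
```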
APA, Harvard, Vancouver, ISO, and other styles
46

Hadji, Makhlouf. "Synthèse de réseaux à composantes connexes unicycliques." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2009. http://tel.archives-ouvertes.fr/tel-00459860.

Full text
Abstract:
This thesis belongs to the field of combinatorial optimisation. It uses the polyhedral approach to solve combinatorial problems arising in the context of telecommunication networks. We introduce and study the problem of designing networks whose connected components are unicyclic. After recalling that the problem is easy to solve in the absence of other constraints, we study new variants that incorporate additional technical constraints. We begin with a constraint on cycle size: we wish to forbid all cycles containing at most p vertices. The problem then becomes NP-hard. Valid inequalities are proposed for this problem, and we show under precise conditions that these inequalities can define facets. Several polynomial algorithms are proposed for separating the valid inequalities; these algorithms are implemented and numerical results are reported. We then focus on a new Steiner-type problem, which consists in partitioning a network into unicyclic components while requiring that certain vertices lie on the cycles. We show that this problem is easy in the sense of computational complexity by proposing a polynomial algorithm and an extended formulation of the problem. We also present a partial description of the convex hull of the incidence vectors of these networks. Separation of the inequalities is studied as well; in particular, we propose a generalisation of the Padberg-Rao algorithm to separate the blossom inequalities. Other technical constraints are taken into account: degree constraints, a constraint on the number of connected components, membership of certain vertices in the same connected component, and the requirement that certain vertices lie in different components. Finally, we carry out a spectral study of two specific classes of unicyclic graphs. Keywords: combinatorial optimisation, polyhedra, cutting plane algorithm, graphs.
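The polyhedral machinery is not reproduced here, but the structural property these models rely on is easy to state and check: a connected graph contains exactly one cycle if and only if it has as many edges as vertices. A small feasibility check of a candidate network, written from that observation alone, might look as follows.

```python
# Check whether every connected component of a graph is unicyclic (|E| = |V| per component).
from collections import defaultdict

def unicyclic_components(n, edges):
    """Return True iff every connected component of the graph on n vertices is unicyclic."""
    parent = list(range(n))

    def find(v):                      # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        parent[find(u)] = find(v)

    vertices, edge_count = defaultdict(int), defaultdict(int)
    for v in range(n):
        vertices[find(v)] += 1
    for u, v in edges:
        edge_count[find(u)] += 1
    return all(edge_count[root] == vertices[root] for root in vertices)

# Two components, each a cycle with a pendant vertex attached or a plain cycle: feasible.
print(unicyclic_components(7, [(0, 1), (1, 2), (2, 0), (2, 3), (4, 5), (5, 6), (6, 4)]))  # True
# A tree component (no cycle) makes the partition infeasible.
print(unicyclic_components(3, [(0, 1), (1, 2)]))                                          # False
```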
APA, Harvard, Vancouver, ISO, and other styles
47

El-Hajj, Racha. "Vehicle routing problems with profits, exact and heuristic approaches." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2192.

Full text
Abstract:
We focus in this thesis on developing new algorithms to solve the Team Orienteering Problem (TOP) and two of its variants. This problem derives from the well-known vehicle routing problem by imposing resource limitations. We propose an exact method based on Mixed Integer Linear Programming (MILP) to solve this problem, adding valid inequalities to speed up the solution process. Then, by considering strict working periods for each vehicle during its route, we treat one of the variants of the TOP, the multi-period TOP (mTOP), for which we develop a metaheuristic based on particle swarm optimization. An optimal split procedure is proposed to extract the optimal solution from each particle by considering saturated and pseudo-saturated routes. Finally, in order to take the availability of customers into consideration, a time window is associated with each of them, during which they must be served. The resulting variant is the TOP with Time Windows (TOPTW). Two exact algorithms are proposed to solve this problem: the first is based on a column generation approach and the second on the MILP, to which we add additional cuts specific to this problem. The comparison of our exact and heuristic methods with existing ones in the literature shows the effectiveness of our approaches.
APA, Harvard, Vancouver, ISO, and other styles
48

Бабій, Михайло Володимирович, Михаил Владимирович Бабий, and M. V. Babiy. "Обґрунтування параметрів відрізних різців з бічною установкою багатогранних непереточуваних пластин." Thesis, Тернопільський національний технічний університет ім. Івана Пулюя, 2013. http://elartu.tntu.edu.ua/handle/123456789/2573.

Full text
Abstract:
The work was carried out at the Department of Operation of Ship Power Plants and General Engineering Training of the Kherson State Maritime Academy, Ministry of Education and Science of Ukraine. The defence took place on 27 November 2013 at 10:00 at a meeting of the specialised academic council K58.052.03 at the Ternopil Ivan Puluj National Technical University, 46001 Ternopil, 56 Ruska St., building 2, room 79. The dissertation is available in the library of the Ternopil Ivan Puluj National Technical University, 46001 Ternopil, 56 Ruska St.
The dissertation addresses the scientific and applied problem of systematically substantiating, on the basis of methods for finding new technical solutions, improved designs of multifaceted indexable (non-resharpenable) inserts (MNP) for parting tools and methods of their clamping, as well as the choice of their structural and geometrical parameters from their conditions of use and production, and the development of methods for their design. The basic principles for classifying and selecting MNP are developed, together with a structural-mathematical model of their synthesis, with which all possible designs were synthesised and fully analysed, and rational variants were selected. Calculated dependences are theoretically substantiated and derived for determining the structural and geometrical parameters of MNP with flats and with single- and double-radius recesses at the corners, based on the conditions of reducing the machining allowance, ensuring maximum strength and better chip flow; for determining the lateral undercuts from the condition of uniform distribution of the friction surfaces; for determining the rational mounting angle of the MNP in the body of the parting tool for side mounting; and for calculating its clamping elements. On the basis of experimental studies, empirical equations are provided that describe the cutting forces acting on the developed cutting edges as a function of the cutting conditions and cutting angles. Recommendations are substantiated for replacing the CoroCut XS, CoroCut-3, Multicut 4 and PentaCut inserts with the proposed MNP when the cutting depth increases from 6...10 to 12...24 mm. Compared with the Q-Cut and CoroCut 2 inserts, the proposed MNP consume more tool material per cutting edge, but their greater mass increases the heat capacity, which is an advantage when cutting in confined spaces.
APA, Harvard, Vancouver, ISO, and other styles
49

Štaffa, Jiří. "Ztráty jednofázového asynchronního motoru s trvale připojeným kondenzátorem." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2015. http://www.nusl.cz/ntk/nusl-221263.

Full text
Abstract:
This project deals with increasing the efficiency of a single-phase induction motor with a permanently connected capacitor. The thesis can be divided into two parts: a background part and an analysis-and-measurement part. The first part covers the construction of the single-phase induction motor, an explanation of its operating principle, starting and running behaviour, and the calculation of efficiency, including the types of losses that reduce it. The second part concerns loss analysis, including the torque-load characteristic, locked-rotor and no-load measurements, and the measurement of mechanical and additional losses. Quantities needed to build a simulation model (winding reactances, etc.) are also measured. A model is then created in ANSYS Maxwell with the RMxprt module. Load characteristics obtained from the analytic calculation in RMxprt and from the Finite Element Method (FEM) are compared, which indicates the accuracy of the simulation model. Simulation and measurement are also carried out on another motor whose magnetic circuit uses a high-quality ferromagnetic material, and the motor is further simulated with the modifications proposed in the preceding chapters for high efficiency.
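As a small numeric illustration of how the efficiency is assembled from the loss components listed above (all figures are invented rather than taken from the thesis):

```python
# Illustrative efficiency bookkeeping for a small single-phase motor (invented numbers):
# efficiency = P_out / (P_out + sum of losses).
losses_w = {
    "stator copper": 28.0,    # I^2 R in the main and auxiliary windings
    "rotor copper": 17.0,     # slip-dependent cage losses
    "iron": 14.0,             # hysteresis and eddy-current losses in the magnetic circuit
    "mechanical": 6.0,        # friction and windage
    "additional": 4.0,        # stray load losses
}
p_out_w = 180.0
p_in_w = p_out_w + sum(losses_w.values())
print(f"input power {p_in_w:.0f} W, efficiency {100 * p_out_w / p_in_w:.1f} %")
# A higher-grade ferromagnetic material mainly shrinks the "iron" entry, which is the
# kind of comparison the thesis makes between the two motors.
```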
APA, Harvard, Vancouver, ISO, and other styles
50

Neamatian, Monemi Rahimeh. "Fixed cardinality linear ordering problem, polyhedral studies and solution methods." Thesis, Clermont-Ferrand 2, 2014. http://www.theses.fr/2014CLF22516/document.

Full text
Abstract:
The Linear Ordering Problem (LOP) has received significant attention in different areas of application, ranging from transportation and scheduling to economics and even archeology and mathematical psychology. It is classified as an NP-hard problem. Assume a complete weighted directed graph on V_n, |V_n| = n. A permutation of the elements of this finite set of vertices is a linear order. Now let p be a given fixed integer, 0 ≤ p ≤ n. The p-Fixed Cardinality Linear Ordering Problem (FCLOP) looks for a subset of vertices S containing p nodes and a linear order on the nodes in S. Graphically, there exists exactly one directed arc between every pair of vertices in an LOP feasible solution, which is a complete cycle-free digraph, and the objective is to maximize the sum of the weights of all the arcs in a feasible solution. In the FCLOP, we are looking for a subset S ⊆ V_n such that |S| = p and an LOP on these S nodes. Hence the objective is to find the best subset of the nodes and an LOP over these p nodes that maximizes the sum of the weights of all the arcs in the solution. Graphically, a feasible solution of the FCLOP is a complete cycle-free digraph on S plus a set of n − p vertices that are not connected to any of the other vertices. There are several studies in the literature focused on polyhedral aspects of the linear ordering problem as well as various exact and heuristic solution methods. The fixed cardinality linear ordering problem is presented for the first time in this PhD study, so as far as we know, there is no other study in the literature that has addressed this problem. The linear ordering problem is already known to be NP-hard. However, one sees that there exist many instances in the literature that can be solved by CPLEX in less than 10 seconds (when p = n), but once the cardinality is limited to p (p < n), the instance is no longer solvable due to memory issues. We have studied the polytope corresponding to the FCLOP for different cardinality values. We have identified the dimension of the polytope, proposed several classes of valid inequalities and shown that, among these sets of valid inequalities, some define facets of the FCLOP polytope for different cardinality values. We have then introduced a Relax-and-Cut algorithm based on these results to solve instances of the FCLOP. To solve the instances of the problem, we first applied the Lagrangian relaxation algorithm. We studied different relaxation strategies and compared the dual bound obtained in each case to detect the most suitable subproblem. Numerical results show that some of the relaxation strategies yield better dual bounds while others contribute more to reducing the computational time and provide a relatively good dual bound in a shorter time. We have also implemented a Lagrangian decomposition algorithm, decomposing the FCLOP model into three subproblems (instead of only two). The interest of decomposing the FCLOP model into three subproblems comes mostly from the nature of the three subproblems, which are relatively easier to solve than the initial FCLOP model. Numerical results show a significant improvement in the quality of dual bounds for several instances. We could also obtain better dual bounds in a shorter time compared to the other relaxation strategies. We have proposed a cutting plane algorithm based on the pure relaxation strategy.
In this algorithm, we first relax a subset of constraints of which, due to the problem structure, only very few are active. Then, in the course of the branch-and-bound tree, we verify whether any of the relaxed constraints are violated. The identified violated constraints are then added globally to the model. (...)
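Since the problem definition above is self-contained, a brute-force reference implementation (usable only for tiny instances, and in no way a substitute for the polyhedral and Lagrangian machinery of the thesis) can make the objective concrete:

```python
# Brute-force reference implementation of the p-Fixed Cardinality Linear Ordering Problem
# as defined above: pick a subset S of p nodes and an order on S maximizing the total
# weight of arcs from earlier to later nodes.
from itertools import combinations, permutations

def fclop_brute_force(weights, p):
    """weights[i][j] = weight of arc (i, j); returns (best value, best linear order)."""
    n = len(weights)
    best_value, best_order = float("-inf"), None
    for subset in combinations(range(n), p):
        for order in permutations(subset):
            # In a linear order, the arc (i, j) is used whenever i precedes j.
            value = sum(weights[order[a]][order[b]]
                        for a in range(p) for b in range(a + 1, p))
            if value > best_value:
                best_value, best_order = value, order
    return best_value, best_order

w = [[0, 4, 1, 7],
     [2, 0, 6, 3],
     [5, 1, 0, 2],
     [1, 8, 4, 0]]
print(fclop_brute_force(w, p=3))   # best subset of 3 nodes and their ordering
```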
APA, Harvard, Vancouver, ISO, and other styles