
Dissertations / Theses on the topic 'Methode gauss newton'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 25 dissertations / theses for your research on the topic 'Methode gauss newton.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chen, Min. "Excitation optimale d'un systeme parabolique en vue de son identification." Nantes, 1987. http://www.theses.fr/1987NANT2050.

Full text
Abstract:
The system under consideration is of nonlinear parabolic type. We prove existence and uniqueness of the solution of the system and solve it numerically. The conjugate-gradient and Gauss-Newton optimisation methods are used to identify the parameters for a given excitation of the system; we then determine the optimal excitation for parameter estimation in the case where the parameters are functions of the state.
APA, Harvard, Vancouver, ISO, and other styles
2

Simonis, Joseph P. "Newton-Picard Gauss-Seidel." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-051305-162036/unrestricted/simonis.pdf.

Full text
3

Simonis, Joseph P. "Newton-Picard Gauss-Seidel." Digital WPI, 2005. https://digitalcommons.wpi.edu/etd-dissertations/285.

Full text
Abstract:
Newton-Picard methods are iterative methods that work well for computing roots of nonlinear equations within a continuation framework. This project presents one of these methods and includes the results of a computation involving the Brusselator problem performed by an implementation of the method. This work was done in collaboration with Andrew Salinger at Sandia National Laboratories.
4

Parkhurst, Steven Christopher. "Solution of equations arising in reservoir simulation by the truncated Gauss-Newton method." Thesis, University of Hertfordshire, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.283463.

Full text
5

Meadows, Leslie J. "Iteratively Regularized Methods for Inverse Problems." Digital Archive @ GSU, 2013. http://digitalarchive.gsu.edu/math_diss/13.

Full text
Abstract:
We are examining iteratively regularized methods for solving nonlinear inverse problems. Of particular interest for these types of methods are application problems which are unstable. For these application problems, special methods of numerical analysis are necessary, since classical algorithms tend to be divergent.
6

Aguiar, Ademir Alves. "Análise semi-local do método de Gauss-Newton sob uma condição majorante." Universidade Federal de Goiás, 2014. http://repositorio.bc.ufg.br/tede/handle/tede/4251.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
In this dissertation we present a semi-local convergence analysis of the Gauss-Newton method for solving a special class of systems of nonlinear equations, under the hypothesis that the derivative of the nonlinear operator satisfies a majorant condition. The proofs and convergence conditions presented in this work are simplified by the use of a simple majorant condition. Another tool that simplifies our study is the identification of regions where the Gauss-Newton iteration is well defined. Moreover, special cases of the general theory are presented as applications.
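The iteration analysed in this dissertation can be stated in standard notation (our summary of the method, not text from the thesis; J denotes the Jacobian of the nonlinear operator F):

```latex
% Nonlinear least squares: minimize f(x) = \tfrac{1}{2}\,\|F(x)\|^2
% Gauss-Newton step, with J_k = F'(x_k) assumed of full column rank:
x_{k+1} \;=\; x_k \;-\; \bigl(J_k^{\top} J_k\bigr)^{-1} J_k^{\top} F(x_k)
% A semi-local analysis imposes its hypotheses (here, a majorant condition
% on the derivative) only at and around the starting point x_0, and from
% them concludes that the iterates are well defined and converge.
```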
7

Dolák, Martin. "Nelineární regrese v programu R." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-193088.

Full text
Abstract:
This thesis deals with solving nonlinear regression problems using the R programming language. The introductory theoretical part presents the principles of solving nonlinear regression models and their application in R. Both the theoretical and the practical parts present the best-known and most widely used algorithms for estimating the parameters of a nonlinear regression, in particular the Gauss-Newton and steepest-descent methods. The practical part then demonstrates solutions of particular tasks using nonlinear regression methods. Throughout the thesis, the author uses a large number of graphs for better comprehension.
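As a minimal illustration of the Gauss-Newton method this thesis applies (the thesis itself works in R; the sketch below is Python, and the model y ≈ a·exp(b·x), the function name and the data are our own, not the author's):

```python
import numpy as np

def gauss_newton(x, y, theta, n_iter=50):
    """Fit y ~ a*exp(b*x) by Gauss-Newton: linearise the residual
    r(theta) = a*exp(b*x) - y and solve the normal equations each step."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(n_iter):
        a, b = theta
        e = np.exp(b * x)
        r = a * e - y                          # residual vector
        J = np.column_stack([e, a * x * e])    # Jacobian of r w.r.t. (a, b)
        # Gauss-Newton step: minimise ||r + J d||^2, i.e. d = lstsq(J, -r)
        d, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + d
    return theta
```

From a starting point reasonably close to the minimiser, and with small residuals, the iteration converges rapidly; far from it, plain Gauss-Newton can diverge, which is why the steepest-descent comparison in the thesis matters.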
8

Bokka, Naveen. "Comparison of Power Flow Algorithms for inclusion in On-line Power Systems Operation Tools." ScholarWorks@UNO, 2010. http://scholarworks.uno.edu/td/1237.

Full text
Abstract:
The goal of this thesis is to develop a new, fast, adaptive load flow algorithm that "automatically alternates" between numerical methods, including the Newton-Raphson, Gauss-Seidel and Gauss methods, during a load flow run to reduce run time. Unlike the proposed method, traditional load flow analysis uses only one numerical method at a time. The adaptive algorithm performs all the computation for finding the bus voltage angles and magnitudes and the real and reactive powers for the given generation and load values, while keeping track of how close the solution is to convergence. This work focuses on finding an algorithm that uses multiple numerical techniques, rather than on investigating programming techniques and programming languages. The convergence time is compared with those obtained using each of the numerical techniques alone. The proposed method is implemented on the IEEE 39-bus system with different contingencies, and the solutions obtained are verified with PowerWorld Simulator, a commercial software package for load flow analysis.
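The Gauss-Seidel iteration that this thesis alternates with Newton-Raphson can be sketched for a generic linear system rather than the power flow equations themselves (a simplified illustration; the function name and the test matrix are ours):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=500):
    """Plain Gauss-Seidel sweeps for A x = b; the residual norm plays the
    role of the 'proximity to convergence' that an adaptive solver tracks."""
    x = np.array(x0, dtype=float)
    n = len(b)
    for k in range(1, max_iter + 1):
        for i in range(n):
            # use already-updated entries x[:i] within the same sweep
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(b - A @ x) < tol:
            return x, k
    return x, max_iter
```

Gauss-Seidel needs little memory and converges reliably on diagonally dominant systems, but slowly; Newton-Raphson converges quadratically near the solution, which is the trade-off an adaptive scheme exploits.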
9

Gumpert, Ben Allen. "A recursive Gauss-Newton method for model independent eye-in-hand visual servoing / by Ben Allen Gumpert." Thesis, Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/17260.

Full text
10

Mollevik, Iris. "Bundle adjustment for large problems - The effect of a truncated Gauss-Newton method on performance and precision." Thesis, Umeå universitet, Institutionen för datavetenskap, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-155346.

Full text
Abstract:
We implement a truncated Gauss-Newton algorithm and apply it to the bundle adjustment problem in a photogrammetry application. The normal equations are solved approximately using the conjugate gradient method preconditioned with the incomplete Cholesky factor.  Our implementation is compared to an exact Gauss-Newton implementation.  Improvements in time performance are found in some cases. The observed relative errors in estimated parameters are of order 10^−10 or smaller.  The preconditioner proves to be very important, as does the permutation of the Jacobian. Excluding the time to re-permute the Jacobian, execution times are lowered by up to 24%. The truncated algorithm is observed to improve performance for larger datasets but not for smaller ones.
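A truncated Gauss-Newton step of the kind described above, with the normal equations solved only approximately by a few conjugate-gradient iterations, can be sketched as follows (a minimal unpreconditioned version with invented names and a toy model; the thesis additionally uses an incomplete Cholesky preconditioner and sparse Jacobians):

```python
import numpy as np

def cg(matvec, b, maxiter):
    """A few conjugate-gradient iterations on the (SPD) normal equations;
    capping maxiter is what makes the outer method 'truncated'."""
    x = np.zeros_like(b)
    r = b.copy()
    rs = r @ r
    if rs == 0.0:
        return x
    p = r.copy()
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def truncated_gauss_newton(residual, jac, theta, outer=20, inner=5):
    """Each outer step solves J^T J d = -J^T r only approximately,
    applying J and J^T matrix-free inside the CG iteration."""
    theta = np.asarray(theta, dtype=float)
    for _ in range(outer):
        r, J = residual(theta), jac(theta)
        d = cg(lambda v: J.T @ (J @ v), -J.T @ r, maxiter=inner)
        theta = theta + d
    return theta
```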
11

Cho, Taewon. "Numerical Methods for Separable Nonlinear Inverse Problems with Constraint and Low Rank." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/82929.

Full text
Abstract:
Inverse problems arise in many applications, ranging from astronomy to geoscience. For example, image reconstruction and deblurring require methods for solving inverse problems. Since these problems are affected by many factors and by noise, general inversion methods cannot simply be applied. Furthermore, in the problems of interest the number of unknown variables is huge, and some may depend nonlinearly on the data, so that nonlinear problems must be solved. Solving nonlinear problems is quite different from, and significantly more challenging than, solving linear inverse problems, and more sophisticated methods are needed for these kinds of problems.
Master of Science
In various research areas there are many required measurements which cannot be observed directly, for physical or economic reasons. Instead, these unknown measurements can be recovered from known measurements. This phenomenon can be modeled and solved mathematically.
12

Mirsad, Ćosović. "Distributed State Estimation in Power Systems using Probabilistic Graphical Models." Phd thesis, Univerzitet u Novom Sadu, Fakultet tehničkih nauka u Novom Sadu, 2019. https://www.cris.uns.ac.rs/record.jsf?recordId=108459&source=NDLTD&language=en.

Full text
Abstract:
We present a detailed study on the application of factor graphs and the belief propagation (BP) algorithm to the power system state estimation (SE) problem. We start from the BP solution for the linear DC model, for which we provide a detailed convergence analysis. Using the BP-based DC model we propose a fast real-time state estimator for power system SE. The proposed estimator is easy to distribute and parallelize, thus alleviating computational limitations and allowing measurements to be processed in real time. The presented algorithm may run as a continuous process, with each new measurement being seamlessly processed by the distributed state estimator. In contrast to matrix-based SE methods, the BP approach is robust to ill-conditioned scenarios caused by significant differences between measurement variances, thus resulting in a solution that eliminates observability analysis. Using the DC model, we numerically demonstrate the performance of the state estimator in a realistic real-time system model with asynchronous measurements. We note that the extension to non-linear SE is possible within the same framework. Using insights from the DC model, we use two different approaches to derive the BP algorithm for the non-linear model. The first method directly applies BP methodology, however providing only an approximate BP solution for the non-linear model. In the second approach, we make a key further step by providing a solution in which BP is applied sequentially over the non-linear model, akin to what is done by the Gauss-Newton method. The resulting iterative Gauss-Newton belief propagation (GN-BP) algorithm can be interpreted as a distributed Gauss-Newton method with the same accuracy as the centralized SE, while introducing a number of advantages of the BP framework.
The thesis provides an extensive numerical study of the GN-BP algorithm, provides details on its convergence behavior, and gives a number of useful insights for its implementation. Finally, we define a bad data test based on the BP algorithm for the non-linear model. The presented model establishes local criteria to detect and identify bad data measurements. We numerically demonstrate that the BP-based bad data test significantly improves bad data detection over the largest normalized residual test.
13

Derflinger, Gerhard, Wolfgang Hörmann, and Josef Leydold. "Random Variate Generation by Numerical Inversion when only the Density Is Known." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2008. http://epub.wu.ac.at/1112/1/document.pdf.

Full text
Abstract:
We present a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma, t-distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that our algorithm always reaches the required precision. The setup time is moderate and the marginal execution time is very fast and the same for all distributions. Thus for the case that large samples with fixed parameters are required the proposed algorithm is the fastest inversion method known. Speed-up factors up to 1000 are obtained when compared to inversion algorithms developed for the specific distributions. This makes our algorithm especially attractive for the simulation of copulas and for quasi-Monte Carlo applications.
Series: Research Report Series / Department of Statistics and Mathematics
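The inversion idea behind this report and its online supplement below, tabulate the CDF and interpolate its inverse, can be sketched in a deliberately cruder form (linear interpolation and the trapezoidal rule instead of the authors' polynomial interpolation and Gauss-Lobatto quadrature; all names here are ours):

```python
import numpy as np

def make_inverse_cdf(pdf, lo, hi, n=2001):
    """Tabulate the CDF of an (unnormalised) density on [lo, hi] with the
    trapezoidal rule and return a linear-interpolation inverse."""
    x = np.linspace(lo, hi, n)
    f = pdf(x)
    cdf = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(x))])
    cdf /= cdf[-1]                      # normalise so the CDF ends at 1
    return lambda u: np.interp(u, cdf, x)

# usage: standard-normal variates by inversion (normalising constant not needed)
inverse_cdf = make_inverse_cdf(lambda t: np.exp(-t ** 2 / 2.0), -8.0, 8.0)
u = np.random.default_rng(0).uniform(size=1000)
samples = inverse_cdf(u)
```

As in the paper, all the work happens at setup time; each variate afterwards costs only a table lookup, which is what makes fixed-parameter inversion sampling so fast and so well suited to quasi-Monte Carlo points.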
14

Derflinger, Gerhard, Wolfgang Hörmann, and Josef Leydold. "Online Supplement to "Random Variate Generation by Numerical Inversion When Only the Density Is Known"." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2009. http://epub.wu.ac.at/162/1/document.pdf.

Full text
Abstract:
We present a numerical inversion method for generating random variates from continuous distributions when only the density function is given. The algorithm is based on polynomial interpolation of the inverse CDF and Gauss-Lobatto integration. The user can select the required precision which may be close to machine precision for smooth, bounded densities; the necessary tables have moderate size. Our computational experiments with the classical standard distributions (normal, beta, gamma, t-distributions) and with the noncentral chi-square, hyperbolic, generalized hyperbolic and stable distributions showed that our algorithm always reaches the required precision. The setup time is moderate and the marginal execution time is very fast and nearly the same for all distributions. Thus for the case that large samples with fixed parameters are required the proposed algorithm is the fastest inversion method known. Speed-up factors up to 1000 are obtained when compared to inversion algorithms developed for the specific distributions. This makes our algorithm especially attractive for the simulation of copulas and for quasi-Monte Carlo applications.

This paper is the revised final version of the working paper no. 78 of this research report series.
Series: Research Report Series / Department of Statistics and Mathematics

15

AMARAL, Magali Teresópolis Reis. "Abordagem bayesiana para curva de crescimento com restrições nos parâmetros." Universidade Federal Rural de Pernambuco, 2008. http://www.tede2.ufrpe.br:8080/tede2/handle/tede2/5184.

Full text
Abstract:
The adjustment of weight-age growth curves for animals plays an important role in animal production planning. These adjusted growth curves must be coherent with the biological interpretation of animal growth, which often demands imposing constraints on the model parameters. Inference for the parameters of nonlinear models with constraints using classical techniques presents various difficulties. To bypass these, a Bayesian approach to the adjustment of growth curves is proposed, which introduces the restrictions on the model parameters through the choice of the prior density. Due to the nonlinearity, the posterior density of the parameters does not have a kernel that can be identified among the traditional distributions, and its moments can only be obtained numerically. In this work, MCMC (Markov chain Monte Carlo) simulation was implemented to obtain a summary of the posterior density. In addition, model selection criteria based on samples generated from the posterior density were applied to the observed data. The main purpose of this work is to show that the Bayesian approach can be of practical use, and to compare the Bayesian inference of the estimated parameters under a noninformative (Jeffreys) prior density with the classical inference obtained by the Gauss-Newton method. It was thus possible to observe that the calculation of confidence intervals based on asymptotic theory fails, indicating non-significance of certain parameters of some models, while in the Bayesian approach the credibility intervals do not present this problem. The programs in this work were implemented in the R language, and to illustrate the utility of the proposed method an analysis of real data was performed, from an experiment evaluating crossbreeding systems among beef cattle carried out by Embrapa Pecuária Sudeste.
The data correspond to 12 weight measurements of animals between 8 and 19 months old, from the genetic groups of the Nelore and Canchim breeds, belonging to the genotype AALLAB (Paz 2002). The results reveal excellent applicability of the Bayesian method; the Richards model presented convergence difficulties in both the classical and the Bayesian approach (with a noninformative prior), while the logistic model provided the best adjustment to the data under both methodologies, with either noninformative or informative prior densities.
16

Kanduri, Srinivasa Rangarajan Mukhesh, and Vinay Kumar Reddy Medapati. "Evaluation of TDOA based Football Player’s Position Tracking Algorithm using Kalman Filter." Thesis, Blekinge Tekniska Högskola, Institutionen för tillämpad signalbehandling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-16433.

Full text
Abstract:
Time Difference Of Arrival (TDOA) based position tracking is one of the pinnacles of sports tracking technology. Using radio frequency communication, advanced filtering techniques and various computation methods, the position of a moving player in a virtually created sports arena can be identified using MATLAB and related to the player's movement in real time. For football in particular, this is a powerful tool for coaches to enhance team performance. Football clubs can use the player tracking data to boost their own team's strengths and to gain insight into competing teams as well. The method helps improve the success rate of athletes and clubs by analyzing the results, which informs their tactical and strategic approach to game play. The algorithm can also be used to enhance the viewing experience of the audience in the stadium, as well as of the broadcast. In this thesis work, a typical football field scenario is assumed and an array of base stations (BS) is installed equidistantly along the perimeter of the field. The player carries a radio transmitter which emits radio frequency signals throughout the assigned game time. Using the concept of TDOA, position estimates of the player are generated and the transmitter is tracked continuously by the BS. The position estimates are then fed to a Kalman filter, which filters and smooths the position estimates between the sample points considered. Different paths of the player (straight-line, circular and zig-zag paths in the field) are animated and the player's positions are tracked. The performance of the Kalman filter is evaluated from the error in the player's estimated position, and is analyzed by varying the number of sample points.
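One predict/update cycle of the linear Kalman filter used above can be sketched as follows (a generic textbook form, not the thesis implementation; the constant-velocity model mentioned in the usage note is our assumption):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter: x is the state
    (say, player position and velocity), z a noisy position fix such as a
    TDOA-derived estimate."""
    # predict the state forward one sample
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measurement z
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

With a constant-velocity state [px, py, vx, vy], F advances the positions by the velocities each sample and H extracts the two position components; feeding in successive TDOA fixes then yields the smoothed track described in the abstract.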
17

Altoumaimi, Rasha Talal. "Nonlinear Least-Square Curve Fitting of Power-Exponential Functions: Description and comparison of different fitting methods." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-38606.

Full text
Abstract:
This thesis examines how to find the best fit to a series of data points when curve fitting with power-exponential models. We describe numerical methods such as the Gauss-Newton and Levenberg-Marquardt methods and compare them for solving the nonlinear least-squares curve-fitting problem with different power-exponential functions. In addition, we present the results of numerical experiments that illustrate the effectiveness of this approach. Furthermore, we show its application to practical problems using different data sets, such as death rates and rocket-triggered lightning return strokes based on the transmission line model.
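A minimal Levenberg-Marquardt sketch of the kind compared in this thesis, a damped Gauss-Newton step with an adaptive damping parameter, might look like this (Python rather than the thesis's tooling; the function names and the power-law test model are our own):

```python
import numpy as np

def levenberg_marquardt(residual, jac, theta, n_iter=100, lam=1e-3):
    """Damped Gauss-Newton step (J^T J + lam*I) d = -J^T r, with lam lowered
    on success and raised on failure; this interpolates between Gauss-Newton
    (lam -> 0) and a short steepest-descent step (lam large)."""
    theta = np.asarray(theta, dtype=float)
    cost = lambda t: 0.5 * np.sum(residual(t) ** 2)
    for _ in range(n_iter):
        r, J = residual(theta), jac(theta)
        A = J.T @ J
        d = np.linalg.solve(A + lam * np.eye(A.shape[0]), -J.T @ r)
        if cost(theta + d) < cost(theta):
            theta, lam = theta + d, lam * 0.5    # accept: behave more like Gauss-Newton
        else:
            lam *= 2.0                           # reject: behave more like gradient descent
    return theta
```

The damping is what buys robustness far from the solution, at the cost of extra iterations, which is exactly the trade-off such method comparisons measure.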
18

Jurča, Ondřej. "Ustálený chod a zkratové poměry v síti 110 kV E.ON napájené z rozvodny 110 kV Otrokovice v roce 2011." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219016.

Full text
Abstract:
The 110 kV distribution network owned by E.ON in the Otrokovice area is fed from the 110 kV substation, and two variants of its connection are considered. The first variant is the basic connection, without the use of the bridge; the second includes the bridge. The aim of this study is to compare these two variants by calculating the steady-state operation and the short-circuit conditions of the network. The thesis is divided into a theoretical and a practical part. The theoretical part describes the steady-state operation of high-voltage networks and short-circuit calculations. Load flow calculations are described using the Gauss-Seidel and Newton iterative methods. For short-circuit conditions, their effects, characteristic values, time behaviour and various calculation methods are described. In the practical part, this theoretical knowledge is applied to the input data in a dispatching programme, with the corresponding calculations of network operation and short-circuit conditions. The calculated values are listed in the thesis, and on their basis the two possible connections are evaluated.
19

Adámek, Daniel. "Automatická kalibrace robotického ramene pomocí kamer/y." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2019. http://www.nusl.cz/ntk/nusl-402130.

Full text
Abstract:
To replace a human operator in the task of testing touch-screen embedded devices, a complete automated robotic system has to be developed. One of the essential tasks is to calibrate this system automatically. This thesis investigates possible approaches to the automatic calibration of a robotic arm in space with respect to a touch device using one or more cameras. It then presents a solution based on estimating the pose of a single camera using iterative methods such as Gauss-Newton or Levenberg-Marquardt. Finally, the achieved accuracy is evaluated and a procedure for improving it is proposed.
20

Mohamed, Ibrahim Daoud Ahmed. "Automatic history matching in Bayesian framework for field-scale applications." Texas A&M University, 2004. http://hdl.handle.net/1969.1/3170.

Full text
Abstract:
Conditioning geologic models to production data and assessing uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with full covariance as regularization scales quadratically with model size; second, the sensitivity calculation using finite differences as the forward model depends upon the number of model parameters or the number of data points; and third, the high CPU time and memory required for the covariance matrix calculation. Attempts have been made to alleviate the third limitation using analytically derived stencils, but these are limited to exponential models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite difference simulator, ECLIPSE, as the forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, effectively reducing the number of data points to one per well while ensuring the matching of the entire production history. Third, we use both the adjoint method and a streamline-based sensitivity method for sensitivity calculations. The adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is more efficient and faster, as it requires only one simulation run per iteration regardless of the number of model parameters or data points.
Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically-derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with respect to model size. This makes automatic history matching and uncertainty assessment using a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
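The regularized update described above, a least-squares step with the square root of the inverse covariance stacked as extra rows, can be sketched for a small dense case (np.linalg.lstsq stands in here for the iterative LSQR solver the thesis uses on large sparse systems; all matrices and values below are illustrative):

```python
import numpy as np

def regularized_step(J, r, C_inv_sqrt):
    """One regularized inverse-modelling step: minimise
    ||J d + r||^2 + ||C^{-1/2} d||^2 by stacking the prior term as extra
    rows of the least-squares system."""
    A = np.vstack([J, C_inv_sqrt])
    b = np.concatenate([-r, np.zeros(C_inv_sqrt.shape[0])])
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d
```

Solving the stacked system iteratively avoids ever forming the (dense) normal-equation matrix J^T J + C^{-1}, which is the point of using LSQR at field scale.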
APA, Harvard, Vancouver, ISO, and other styles
21

Truscott, Simon. "A heterogenous three-dimensional computational model for wood drying." Queensland University of Technology, 2004. http://eprints.qut.edu.au/15960/.

Full text
Abstract:
The objective of this PhD research program is to develop an accurate and efficient heterogeneous three-dimensional computational model for simulating the drying of wood at temperatures below the boiling point of water. The complex macroscopic drying equations comprise a coupled and highly nonlinear system of physical laws for liquid and energy conservation. Due to the heterogeneous nature of wood, the physical model parameters strongly depend upon the local pore structure, wood density variation within growth rings and variations in primary and secondary system variables. In order to provide a realistic representation of this behaviour, a set of previously determined parameters derived using sophisticated image analysis methods and homogenisation techniques is embedded within the model. From the literature it is noted that current three-dimensional computational models for wood drying do not take into consideration the heterogeneities of the medium. A significant advance made by the research conducted in this thesis is the development of a three-dimensional computational model that takes into account the heterogeneous board material properties, which vary within the transverse plane with respect to the pith position that defines the radial and tangential directions. The development of an accurate and efficient computational model requires the consideration of a number of significant numerical issues, including the virtual board description, an effective mesh design based on triangular prismatic elements, the control volume finite element discretisation process for the coupled conservation laws, the derivation of an accurate flux expression based on gradient approximations together with flux limiting, and finally the solution of a large, coupled, nonlinear system using an inexact Newton method with a suitably preconditioned iterative linear solver for computing the Newton correction.
This thesis addresses all of these issues for the case of low temperature drying of softwood. Specific case studies are presented that highlight the efficiency of the proposed numerical techniques and illustrate the complex heat and mass transport processes that evolve throughout drying.
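The closing numerical step — solving the large nonlinear system with an inexact Newton method, where the Newton correction is computed only approximately by a preconditioned iterative linear solver — can be sketched generically as follows. This is not the thesis code: the forcing tolerance `eta` and a damped Jacobi inner iteration stand in for the preconditioned Krylov solver the thesis employs.

```python
import numpy as np

def inexact_newton(F, J, x, eta=1e-2, tol=1e-10, max_outer=50, max_inner=500):
    """Inexact Newton: solve J(x) dx = -F(x) only to a relative (forcing)
    tolerance eta, using Jacobi sweeps as a stand-in inner solver."""
    for _ in range(max_outer):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        A = J(x)
        D = np.diag(A)                      # Jacobi "preconditioner"
        dx = np.zeros_like(x)
        for _ in range(max_inner):          # inexact inner solve
            resid = -f - A @ dx
            if np.linalg.norm(resid) <= eta * np.linalg.norm(f):
                break                       # forcing condition met
            dx += resid / D
        x = x + dx
    return x
```

The forcing condition is what makes the method "inexact": each outer step tolerates a linear-solve residual proportional to the current nonlinear residual, saving inner iterations far from the solution.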
APA, Harvard, Vancouver, ISO, and other styles
22

Cayemitte, Jean-Marie. "Accumulation des biens, croissance et monnaie." Thesis, Paris 2, 2014. http://www.theses.fr/2014PA020001/document.

Full text
Abstract:
This thesis builds a theoretical model that renews the traditional approach to market equilibrium. By introducing the principle of preference for quantity into the neoclassical paradigm, it optimally generates inventories in a competitive market. The results are very important, since they explain both the emergence of unsold goods and the existence of economic cycles. It also studies the optimal behavior of a monopoly whose market power depends not only on the quantity of goods displayed but also on that of goods purchased. Contrary to the traditional assumption that the monopolist chooses the price or quantity that maximizes its profit, it attracts demand, via a generalized Lerner index, through both the price and the quantity of displayed goods. Whatever the market structure, the phenomenon of inventory accumulation appears in the economy. Moreover, the model has the advantage of explicitly explaining impulse purchases, not yet treated by economic theory. To verify the robustness of the theoretical model's results, they are tested on U.S. data. Because of their nonlinearity, the Gauss-Newton method is appropriate for analyzing the impact of the preference for quantity on production and the accumulation of goods, and consequently on GDP forecasts. Finally, this thesis builds a two-country overlapping-generations model that extends the dynamic equilibrium to a frictionless dynamic gamma-equilibrium. Based on the cash-in-advance constraint, it derives the conditions for over-accumulation of capital and the welfare consequences of capital mobility in a context of accumulation of unsold inventories.
This thesis constructs a theoretical model that renews the traditional approach to market equilibrium. By introducing into the neoclassical paradigm the principle of preference for quantity, it optimally generates inventories within a competitive market. The results are very important since they explain both the emergence of unsold goods and the existence of economic cycles. In addition, it studies the optimal behavior of a monopolist whose market power depends not only on the quantity of displayed goods but also on that of goods consumers are willing to buy. Contrary to the traditional assumption that the monopolist chooses the price or quantity that maximizes its profit, through a generalized Lerner index (GLI) it attracts customers' demand by both the price and the quantity of displayed goods. Whatever the market structure, the phenomenon of inventory accumulation appears in the economy. Furthermore, it has the advantage of explicitly explaining impulse purchases, not yet treated by economic theory. To check the robustness of the results, the theoretical model is fitted to U.S. data. Due to its nonlinearity, the Gauss-Newton method is appropriate to highlight the impact of consumers' preference for quantity on production and accumulation of goods and, consequently, on GDP forecasts. Finally, this thesis builds a two-country overlapping generations (OLG) model which extends the dynamic OLG equilibrium to a frictionless dynamic OLG gamma-equilibrium. Based on the cash-in-advance constraint, it highlights the conditions of over-accumulation of capital and the welfare implications of capital mobility in a context of accumulation of a stock of unsold goods.
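The estimation step described here — Gauss-Newton applied to a nonlinear least squares fit of a model to data — can be illustrated with a minimal generic sketch. The saturation model fitted below is purely illustrative (the thesis fits its own macroeconomic model to U.S. data); the routine itself is the standard algorithm: linearise the residual, solve for the step, repeat.

```python
import numpy as np

def gauss_newton_fit(residual, theta, n_iter=50):
    """Plain Gauss-Newton for nonlinear least squares: repeatedly
    linearise the residual vector and solve for the step via lstsq."""
    for _ in range(n_iter):
        r = residual(theta)
        eps = 1e-7
        # forward-difference Jacobian of the residual vector
        J = np.column_stack([
            (residual(theta + eps * np.eye(theta.size)[j]) - r) / eps
            for j in range(theta.size)
        ])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta = theta + step
    return theta
```

For example, fitting `y = a * (1 - exp(-b * t))` to noiseless synthetic data recovers `a` and `b` from a reasonable starting guess.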
APA, Harvard, Vancouver, ISO, and other styles
23

Martin, Petitfrere. "EOS based simulations of thermal and compositional flows in porous media." Thesis, Pau, 2014. http://www.theses.fr/2014PAUU3036/document.

Full text
Abstract:
Three- and four-phase equilibrium calculations are at the heart of reservoir simulations involving tertiary recovery processes. In gas or steam injection processes, the oil-gas system is enriched with a new phase that plays an important role in recovering the oil in place. Equilibrium calculations account for the major part of the computation time in compositional reservoir simulations, where the thermodynamic routines are called a large number of times. It is therefore important to design algorithms that are reliable, robust and fast. In the literature, few simulators based on equations of state are applicable to thermal recovery processes. To our knowledge, there is no fully compositional thermal simulation of these processes for heavy-oil applications. Such simulations appear essential and could offer improved tools for the predictive study of certain fields. In this thesis, robust and efficient algorithms for multiphase equilibrium calculations are proposed that overcome the difficulties encountered during simulations of steam injection for heavy oils. Most phase-equilibrium algorithms are based on Newton's method and use conventional variables as independent variables. First, improvements of these algorithms are proposed. Reduced variables make it possible to reduce the dimensionality of the system from nc (the number of components) in the case of conventional variables to M (M<
Three- to four-phase equilibrium calculations are at the heart of tertiary recovery simulations. In gas/steam injection processes, additional phases emerging from the oil-gas system are added to the set and have a significant impact on oil recovery. The most important computational effort in many chemical process simulators and in petroleum compositional reservoir simulations is required by phase equilibrium and thermodynamic property calculations. In field-scale reservoir simulations, a huge number of phase equilibrium calculations is required. For all these reasons, the algorithms must be robust and time-saving. In the literature, few simulators based on equations of state (EoS) are applicable to thermal recovery processes such as steam injection. To the best of our knowledge, no fully compositional thermal simulation of the steam injection process has been proposed for extra-heavy oils; these simulations are essential and will offer improved tools for predictive studies of heavy oil fields. Thus, in this thesis different algorithms of improved efficiency and robustness for multiphase equilibrium calculations are proposed, able to handle conditions encountered during the simulation of steam injection for heavy oil mixtures. Most phase equilibrium calculations are based on the Newton method and use conventional independent variables. These algorithms are first investigated and different improvements are proposed. Michelsen's (Fluid Phase Equil. 9 (1982) 21-40) method for multiphase-split problems is modified to take full advantage of symmetry (in the construction of the Jacobian matrix and the resolution of the linear system). The reduction methods reduce the space of study from nc (the number of components) with conventional variables to M (M<
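As a minimal illustration of the Newton-based phase-split machinery discussed above, the sketch below solves the classical two-phase Rachford-Rice equation for the vapour fraction; the thesis generalises this kind of calculation to three and four phases with symmetric Jacobians and reduced variables. The function name and the test mixture are our own illustrative choices.

```python
import numpy as np

def rachford_rice(z, K, beta=0.5, tol=1e-12, max_iter=50):
    """Newton solve of the two-phase Rachford-Rice equation
        f(beta) = sum_i z_i (K_i - 1) / (1 + beta (K_i - 1)) = 0
    for the vapour fraction beta, given feed fractions z and K-values."""
    for _ in range(max_iter):
        d = 1.0 + beta * (K - 1.0)
        f = np.sum(z * (K - 1.0) / d)
        if abs(f) < tol:
            break
        fp = -np.sum(z * (K - 1.0) ** 2 / d ** 2)  # f'(beta) < 0: monotone
        beta -= f / fp
    return beta
```

Because f is monotone decreasing in beta, Newton's method converges rapidly whenever a two-phase root exists between the asymptotes.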
APA, Harvard, Vancouver, ISO, and other styles
24

Huang, Chung-Wei, and 黃崇瑋. "Gauss-Newton and Nelder-Mead Nonlinear Least Squares Methods for Target Localization in Wireless Sensor Networks." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/64305946205589444305.

Full text
Abstract:
Master's thesis
National Central University
Department of Communication Engineering
101
Wireless sensor networks (WSNs) conventionally consist of a large number of low-cost, low-power, densely distributed, and mostly heterogeneous sensors. In the localization application, the target signal strength in a WSN is usually reported by sensors at quantized levels, and all quantized data are collected at a fusion center to estimate the target location from the nonlinear relationship between distance and signal strength. Instead of using the computation-intensive maximum likelihood (ML) method, we study the least squares method, whose cost function deteriorates significantly under nonlinear parameter estimation. To address this problem, the μ-law compression technique is considered for robust position estimation. Two nonlinear least squares estimation methods, Gauss-Newton and Nelder-Mead, are discussed in our work. Numerical results show that the proposed method achieves a mean square error performance close to that of the ML method with a lower computational load.
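A hedged sketch of the idea — Gauss-Newton least squares localization where the strength-distance nonlinearity is tamed by μ-law companding — is given below. It is not the thesis estimator (no quantization is modelled, and the companding is applied in a simplified way); `locate_target`, the path-loss exponent `p` and the anchor layout are illustrative assumptions.

```python
import numpy as np

def mu_law(x, mu=255.0):
    """mu-law compressor, used here to flatten the strength-distance
    nonlinearity before forming the least squares cost."""
    return np.log1p(mu * x) / np.log1p(mu)

def locate_target(anchors, s_meas, p=2.0, n_iter=30):
    """Gauss-Newton position estimate from signal strengths s ~ 1/d^p,
    comparing mu-law-companded measured and modelled strengths."""
    def residual(x):
        d = np.linalg.norm(anchors - x, axis=1)
        return mu_law(d ** (-p)) - mu_law(s_meas)
    x = anchors.mean(axis=0)                # start at anchor centroid
    for _ in range(n_iter):
        r = residual(x)
        eps = 1e-6
        # forward-difference Jacobian in the two position coordinates
        J = np.column_stack([
            (residual(x + eps * np.eye(2)[j]) - r) / eps for j in range(2)
        ])
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x
```

With four anchors at the corners of a 10×10 field and noiseless strengths, the iteration recovers the target position from the centroid start.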
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Yu-Ting, and 陳昱廷. "Regularized Semi-Dense Map Reconstruction from a Monocular Sequence based on Piecewise Planar Constraint and Gauss Newton Method." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/69236153826950334115.

Full text
Abstract:
Master's thesis
National Taiwan University
Graduate Institute of Electrical Engineering
104
Three-dimensional environment reconstruction from a monocular camera has been a popular and challenging research topic in the past few years. The technique can be applied to unmanned vehicles for autonomous navigation, environment exploration and obstacle avoidance, and also to augmented reality. Since the camera is not equipped with an inertial measurement unit (IMU), it is necessary to localize the camera and map the environment simultaneously. In this thesis, camera pose estimation is based on a feature-based method [24: Lepetit et al. 2009] and a direct method [1: Engel et al. 2014]. The camera localization thread depends on the semi-dense map, which covers the high-gradient areas of the image and easily becomes noisy. Hence, this thesis proposes a method that regularizes the reconstructed semi-dense map without affecting the accuracy of camera pose localization. The regularization method eliminates noise and smooths the semi-dense map. Furthermore, it exploits the photometric information between two images, unlike other methods that use only depth values and spatial relations. The reconstruction algorithm can be divided into three parts: stereo matching, the piecewise planar constraint, and plane optimization. Since high-gradient areas are always narrow and therefore hard to subject to the piecewise planar constraint, a stereo matching method is proposed that broadens the high-gradient areas using their nearby low-gradient pixels. After the semi-dense map is reconstructed, it is passed to the piecewise planar constraint, which estimates an initial piecewise plane for each pixel. Finally, an optimization method refines each estimated piecewise plane.
In this thesis, the proposed stereo matching is composed of prior depths from ORB features [27: Rublee et al. 2011], a KD-Tree [36: Bentley 1975], a priority queue, and the entropy of the histogram of oriented gradients. The aim is to correctly match, via epipolar geometry, the low-gradient areas around the high-gradient areas between two images. Because two textureless areas are hard to match directly, the best nearby textured area is searched for and used in the matching procedure. First, if a pixel does not hold an inverse-depth hypothesis, nearby ORB features with known initial depth are used to initialize its inverse depth, which shortens the epipolar line search and improves the accuracy of the matching result. The textured area containing high-gradient pixels is found by a k-nearest-neighbor search on the KD-Tree, and the retrieved pixels are sorted by gradient magnitude in the priority queue. If a retrieved high-gradient point passes the stereo search constraints, it forms a 5×5-pixel template used for the search along the epipolar line. Two points are considered matched if the residual between the templates in the two images passes a stereo matching threshold that varies with the entropy of the search region's histogram of oriented gradients. In the regularization part of this thesis, each tiny piece of the point cloud back-projected from the image into 3D coordinates is assumed to fit a plane; the corresponding size of each piece in the image is set to 5×5 pixels. Since this assumption does not hold when a piece lies on the border between two objects or in a depth-discontinuous area, the planar constraint is applied to discriminate non-planar regions. For pieces passing the planar constraint, the Gauss-Newton method is used to minimize the photometric error between the two image patches projected from the 3D piece, yielding the optimal plane parameters.
Afterwards, the optimal parameters are used to eliminate noise and smooth the point cloud. The experimental results demonstrate that the proposed regularization algorithm eliminates most of the noise and reconstructs a noticeably cleaner point cloud.
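The planar assumption above — each 5×5 patch of back-projected points should fit a plane, with poor fits rejected by the planar constraint — can be sketched with a total-least-squares plane fit. This covers only the initialisation/rejection step; the photometric Gauss-Newton refinement of the thesis is omitted, and `fit_patch_plane` and its tolerance are our own illustrative choices.

```python
import numpy as np

def fit_patch_plane(points, planar_tol=1e-2):
    """Fit a plane to the 3D points of one 5x5 patch by total least
    squares (SVD of the centred cloud); the out-of-plane RMS distance
    serves as a simple planarity test."""
    centroid = points.mean(axis=0)
    # the right singular vector of the smallest singular value is
    # the direction of least spread, i.e. the plane normal
    _, s, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    rms = s[-1] / np.sqrt(len(points))     # out-of-plane RMS distance
    return normal, centroid, rms < planar_tol
```

A patch sampled from a true plane passes the test and returns its normal (up to sign), while a curved patch is rejected.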
APA, Harvard, Vancouver, ISO, and other styles