To see the other types of publications on this topic, follow the link: Expansion de Taylor.

Dissertations / Theses on the topic 'Expansion de Taylor'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 dissertations / theses for your research on the topic 'Expansion de Taylor.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Dula, Mark, Eunice Mogusu, Sheryl Strasser, Ying Liu, and Shimin Zheng. "Median and Mode Approximation for Skewed Unimodal Continuous Distributions using Taylor Series Expansion." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etsu-works/112.

Full text
Abstract:
Background: Measures of central tendency are one of the foundational concepts of statistics, with the most commonly used measures being the mean, median, and mode. While these are all very simple to calculate when data conform to a unimodal symmetric distribution, either discrete or continuous, measures of central tendency are more challenging to calculate for data distributed asymmetrically. There is a gap in the current statistical literature on computing the median and mode for most skewed unimodal continuous distributions. For example, for a standard normal distribution, the mean, median, and mode are all equal to 0, and for a more general normal distribution, the mode and median are still equal to the mean. Unfortunately, the mean is highly affected by extreme values. If the distribution is skewed either positively or negatively, the mean is pulled in the direction of the skew; the median and mode, however, are more robust statistics and are not pulled as far as the mean. The traditional response is to provide an estimate of the median and mode, as current methodological approaches are limited in determining their exact values once the mean is pulled away. Methods: The purpose of this study is to test a new statistical method, utilizing the first-order and second-order partial derivatives in a Taylor series expansion, for approximating the median and mode of skewed unimodal continuous distributions. Specifically, to compute the approximated mode, the first-order derivative of the sum of the first three terms of the Taylor series expansion is set to zero, and the equation is solved for the unknown. To compute the approximated median, the integral of the density from negative infinity to the median is set equal to one half, and the equation is solved for the median. Finally, to evaluate the accuracy of the derived formulae for computing the mode and median of skewed unimodal continuous distributions, a simulation study will be conducted with respect to skew normal distributions, skew t-distributions, skew exponential distributions, and others, with various parameters. Conclusions: This study may have a great impact on the advancement of current central tendency measurement, a gold standard used in public health and social science research. The study may answer an important question concerning the precision of median and mode estimates for skewed unimodal continuous distributions of data. If this method proves to be an accurate approximation of the median and mode, then it should become the method of choice when measures of central tendency are required.
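As a concrete illustration of the mode step implied by this abstract: truncating the Taylor series of a density f at second order around a point a and setting the derivative of the truncated series to zero yields x ≈ a − f′(a)/f″(a). The sketch below applies that step to a skew-normal density; the skew-normal choice, the expansion point (the mean), and the numerical derivatives are illustrative assumptions, not details taken from the thesis.

```python
# Mode approximation from a 3-term Taylor expansion of the density
# (illustrative sketch; the density and expansion point are assumptions).
import numpy as np
from scipy.stats import skewnorm

shape = 4.0                            # skewness parameter of the skew-normal
f = lambda x: skewnorm.pdf(x, shape)

def taylor_mode(f, a, h=1e-5):
    """Critical point of f(a) + f'(a)(x-a) + 0.5*f''(a)(x-a)^2."""
    d1 = (f(a + h) - f(a - h)) / (2 * h)           # central difference f'(a)
    d2 = (f(a + h) - 2 * f(a) + f(a - h)) / h**2   # central difference f''(a)
    return a - d1 / d2

a = skewnorm.mean(shape)               # expand around the mean
print("Taylor mode estimate:", taylor_mode(f, a))

xs = np.linspace(-2, 4, 400001)        # brute-force reference mode
print("grid-search mode:    ", xs[np.argmax(f(xs))])
```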
2

Guillot, Jérémie. "Optimization techniques for high level synthesis and pre-compilation based on Taylor expansion diagrams." Lorient, 2009. http://www.theses.fr/2009LORIS121.

Full text
Abstract:
This thesis addresses the problem of automatic specification optimization in the integrated circuit design flow, i.e., the design productivity gap in design automation, by employing a canonical representation called the Taylor Expansion Diagram (TED). A TED is a graphical representation based on the Taylor series decomposition of a data-flow computation. The optimizations and high-level transformations developed in this thesis are based on transformations and pattern recognition applied to the TED representation, without a priori knowledge of the application to be implemented. The results of such transformations are optimized data-flow graphs, which provide input to standard HLS tools for final architectural synthesis. Such optimizations cannot be achieved by the traditional architectural and high-level synthesis tools or compilers available today.
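To make the TED idea concrete: a word-level polynomial function is decomposed with respect to a variable x into its Taylor coefficients, F = F|x=0 + x·(∂F/∂x)|x=0 + ..., and these coefficients become the children of the node labelled x. A minimal sketch of that decomposition, using sympy on a toy expression (an illustrative assumption, not the thesis' tool):

```python
# TED-style decomposition: the children of a node for variable x are the
# Taylor coefficients of F with respect to x (toy polynomial example).
import sympy as sp

x, a, b = sp.symbols('x a b')
F = (a + b) * x**2 + a * x + b            # toy data-flow computation

coeffs = sp.Poly(F, x).all_coeffs()[::-1] # constant term first
print(coeffs)                             # [b, a, a + b] -> children of node x
```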
3

Liljas, Erik. "Stochastic Differential Equations : and the numerical schemes used to solve them." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-86799.

Full text
Abstract:
This thesis explains the theoretical background of stochastic differential equations in one dimension. We show how to solve such differential equations using strong Itô-Taylor expansion schemes over large time grids. We also attempt to solve a problem regarding a specific approximation of a stochastic integral for which there is no explicit solution. This approximation, which utilizes the distribution of this particular stochastic integral, gives the wrong order of convergence when performing a grid convergence study. We use numerical integration of the stochastic integral as an alternative approximation, which is correct with regard to convergence.
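For context on the strong Itô-Taylor schemes mentioned above, here is a minimal sketch of the Milstein scheme, the simplest Itô-Taylor scheme of strong order 1.0 beyond Euler-Maruyama, applied to geometric Brownian motion; the equation and parameters are illustrative assumptions, not taken from the thesis.

```python
# Milstein scheme for dX = mu*X dt + sigma*X dW (illustrative parameters).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0, T, n = 0.05, 0.2, 1.0, 1.0, 1000
dt = T / n

x = x0
for _ in range(n):
    dw = rng.normal(0.0, np.sqrt(dt))
    # Euler-Maruyama terms plus the Milstein correction, i.e. the next
    # term of the Ito-Taylor expansion: 0.5*sigma*(sigma*x)*(dw^2 - dt)
    x += mu * x * dt + sigma * x * dw + 0.5 * sigma**2 * x * (dw**2 - dt)

print("Milstein endpoint of one sample path:", x)
```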
4

García Monera, María. "r-critical points and Taylor expansion of the exponential map, for smooth immersions in Rk+n." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/50935.

Full text
Abstract:
Classically, the study of contacts with hyperplanes and hyperspheres has been carried out using the family of height and distance-squared functions. In the first part of the thesis, we analyze the Taylor expansion of the exponential map up to order three of a submanifold $M$ immersed in $\mathbb{R}^n$. Our main goal is to show its usefulness for the description of special contacts of submanifolds with geometrical models. As we analyze contacts of higher order, the complexity of the calculations increases. In this work, through the Taylor expansion of the exponential map, we characterize the geometry of order higher than 3 in terms of invariants of the immersion, so that effective computations in specific cases become more affordable; this also yields new geometric insights. In the second part of the thesis, we introduce the concept of critical point of a smooth map between submanifolds. If we consider a differentiable $k$-dimensional manifold $M$ immersed in $\mathbb{R}^{k+n}$, its focal set can be interpreted as the image of the critical points of the normal map $\nu: NM \to \mathbb{R}^{k+n}$ defined by $\nu(m,u) = \pi_N(m,u) + u$, for $m \in M$ and $u \in N_mM$, where $\pi_N: NM \to M$ denotes the normal bundle. In the same way, the parabolic set of a differentiable submanifold is given by the analysis of the singularities of the height functions on the submanifold, and it can be interpreted as the image of the critical points of the generalized Gauss map $\psi: NM \to \mathbb{R}^{k+n}$ defined by $\psi(m,u) = u$, for $u \in N_mM$. Finally, we characterize the asymptotic directions of a $k$-dimensional manifold $M$ immersed in $\mathbb{R}^{k+n}$ through the study of the singularities of the tangent map $\Omega: TM \to \mathbb{R}^{k+n}$ defined by $\Omega(m,y) = \pi(m,y) + y$, for $y \in T_mM$, where $\pi: TM \to M$ denotes the tangent bundle. We first describe the focal set and its geometrical relation to the Veronese of curvature for $k$-dimensional immersions in $\mathbb{R}^{k+n}$. Then we define the $r$-critical points of a differentiable map $f: H \to K$ between two manifolds and characterize the 2- and 3-critical points of the normal map and of the generalized Gauss map. The number of these critical points at $m \in M$ may depend on the degeneration of the curvature ellipse, and we compute these numbers in the particular case of a surface immersed in $\mathbb{R}^4$ for the normal map and in $\mathbb{R}^5$ for the generalized Gauss map.
García Monera, M. (2015). r-critical points and Taylor expansion of the exponential map, for smooth immersions in Rk+n [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/50935
5

Kratky, Joseph J. "Series Expansion for Semi-SPDEs with Remarks on Hyperbolic SPDEs on the Lattice." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1310614464.

Full text
6

Semeraro, Emanuele. "Experimental investigation on hydrodynamic phenomena associated with a sudden gas expansion in a narrow channel." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066516/document.

Full text
Abstract:
The rapid vaporization of superheated liquid sodium is investigated; it is suspected to be at the origin of the automatic shutdowns for negative reactivity that occurred in the Phénix reactor at the end of the eighties. An experimental apparatus has been designed and operated to reproduce the expansion of overpressurized air superposed on water in a narrow vertical channel of elongated rectangular cross-section. When the expansion begins, the initially flat interface separating the two fluids becomes corrugated under the development of two-dimensional Rayleigh-Taylor instabilities; because the channel is very narrow, instabilities along the channel depth do not develop, and the interface area increases significantly, by up to a factor of 50. The gas expansion can be divided into two main phases: a Rayleigh-Taylor phase (linear and non-linear) and a multi-structure phase (transitional and chaotic). The former is characterized by the dynamics of the corrugated interface, and the resulting interface area is proportional to the amplitude of the corrugations. The latter is governed by the behavior of the liquid structures dispersed in the gas matrix, and the interface area is mainly proportional to the number of such structures. The distribution of volume fraction suggests a three-region model of the channel flow: a region where the bubble boundary is regular and clearly defined, a region partitioned by liquid membranes issuing from the bubble boundaries, and a two-phase region formed by the tails of these structures. A sensitivity analysis with respect to surface tension confirms that the lower the surface tension, the more unstable the configuration: the interface corrugations are more pronounced and more structures are produced, leading to a higher rate of interface-area production.
7

Volkmer, Toni. "Taylor and rank-1 lattice based nonequispaced fast Fourier transform." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-106489.

Full text
Abstract:
The nonequispaced fast Fourier transform (NFFT) allows the fast approximate evaluation of trigonometric polynomials with frequencies supported on full box-shaped grids at arbitrary sampling nodes. Due to the curse of dimensionality, the total number of frequencies, and thus the total arithmetic complexity, can already be very large for small refinements at medium dimensions. In this paper, we present an approach for the fast approximate evaluation of trigonometric polynomials with frequencies supported on an arbitrary subset of the full grid at arbitrary sampling nodes, which is based on Taylor expansion and rank-1 lattice methods. For the special case of symmetric hyperbolic cross index sets in the frequency domain, we present error estimates and numerical results.
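The Taylor-based half of this idea, evaluating a trigonometric polynomial at arbitrary nodes by expanding it around the nearest point of an oversampled equispaced grid, can be sketched in a few lines. The bandwidth, oversampling factor, and expansion order below are illustrative assumptions, and the grid values of the derivatives are computed by direct summation where a real implementation would use FFTs.

```python
# Taylor-based NFFT sketch: f(x) = sum_k fhat[k] exp(2*pi*i*k*x) evaluated at
# arbitrary nodes via expansion around the nearest oversampled grid point.
import math
import numpy as np

N, sigma, M = 32, 4, 3                 # bandwidth, oversampling, Taylor order
n = sigma * N
k = np.arange(-N // 2, N // 2)
rng = np.random.default_rng(1)
fhat = rng.normal(size=N) + 1j * rng.normal(size=N)

grid = np.arange(n) / n                # equispaced grid on [0, 1)
# f^(m) on the grid; direct summation here, FFTs in a real NFFT
E = np.exp(2j * np.pi * np.outer(k, grid))
derivs = [(fhat * (2j * np.pi * k) ** m) @ E for m in range(M + 1)]

x = rng.uniform(size=5)                # arbitrary sampling nodes
j = np.rint(x * n).astype(int) % n     # index of nearest grid point
dx = x - np.rint(x * n) / n            # signed distance to it

approx = sum(derivs[m][j] * dx**m / math.factorial(m) for m in range(M + 1))
exact = fhat @ np.exp(2j * np.pi * np.outer(k, x))
print("max abs error:", np.max(np.abs(approx - exact)))
```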
8

Matchie, Lydienne. "Cubature methods and applications to option pricing." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5374.

Full text
Abstract:
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010.
In this thesis, higher-order numerical methods for the weak approximation of solutions of stochastic differential equations (SDEs) are presented. They are motivated by option pricing problems in finance, where the price of a given option can be written as the expectation of a functional of a diffusion process. Numerical methods of order at most one have been the most used so far, and higher-order methods have been difficult to perform because of the unknown density of iterated integrals of the d-dimensional Brownian motion present in the stochastic Taylor expansion. In 2001, Kusuoka constructed a higher-order approximation scheme based on Malliavin calculus. The iterated stochastic integrals are replaced by a family of finitely-valued random variables whose moments up to a certain fixed order are equivalent to moments of iterated Stratonovich integrals of Brownian motion. This method has been shown to outperform the traditional Euler-Maruyama method. In 2004, this method was refined by Lyons and Victoir into cubature on Wiener space: Lyons and Victoir extended the classical cubature method for approximating integrals in finite dimension to approximating integrals on infinite-dimensional Wiener space. Since then, many authors have intensively applied these ideas, and the topic is today an active domain of research. Our work is essentially based on the recently developed higher-order schemes built on the ideas of the Kusuoka approximation and the Lyons-Victoir "cubature on Wiener space", and is mostly applied to option pricing. These are the Ninomiya-Victoir (N-V) and Ninomiya-Ninomiya (N-N) approximation schemes. It should be stressed that many other applications of these schemes have been developed, among which are the Alfonsi scheme for the CIR process and the decomposition method presented by Kohatsu and Tanaka for jump-driven SDEs. After sketching the main ideas of numerical approximation methods in Chapter 1, we start Chapter 2 by setting up some essential terminology and definitions. A discussion of the stochastic Taylor expansion based on iterated Stratonovich integrals is presented, and we close this chapter by illustrating this expansion with the Euler-Maruyama approximation scheme. Chapter 3 contains the main ideas of the Kusuoka approximation scheme, with emphasis on the implementation of the algorithm. This scheme is applied to the pricing of an Asian call option, and numerical results are presented. We start Chapter 4 by taking a look at the classical cubature formulas, after which we present in a simple way the general ideas of "cubature on Wiener space", also known as the Lyons-Victoir approximation scheme. The aim of this scheme is to construct cubature formulas for approximating integrals defined on Wiener space and, consequently, to develop higher-order numerical schemes. It is based on the stochastic Stratonovich expansion and can be viewed as an extension of the Kusuoka scheme. Applying the ideas of the Kusuoka and Lyons-Victoir approximation schemes, Ninomiya-Victoir and Ninomiya-Ninomiya developed new numerical schemes of order 2, in which they transformed the problem of solving an SDE into a problem of solving ordinary differential equations (ODEs). In Chapter 5, we begin with a general presentation of the N-V algorithm. We then apply this algorithm to the pricing of an Asian call option, and we also consider the optimal portfolio strategies problem introduced by Fukaya. The implementation and numerical simulation of the algorithm for these problems are performed, and we find that the N-V algorithm performs significantly faster than the traditional Euler-Maruyama method. Finally, the N-N approximation method is introduced. The idea behind this scheme is to construct an ODE-valued random variable whose average approximates the solution of a given SDE. The Runge-Kutta method for ODEs is then applied to the ODE drawn from the random variable, and a linear operator is constructed. We derive the general expression for the constructed operator and apply the algorithm to the pricing of an Asian call option under the Heston volatility model.
9

Adolfsson, David, and Tom Claesson. "Estimation methods for Asian Quanto Basket options." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160920.

Full text
Abstract:
All financial institutions that provide options to counterparties will in most cases get involved with Monte Carlo simulations. Options with a payoff function that depends on the asset's value at different time points over its lifespan are so-called path-dependent options. This path dependency implies that there exists no parametric solution and the price must hence be estimated; it is here that Monte Carlo methods come into the picture. The problem with this fundamental option-pricing method, though, is the computational time. Prices fluctuate continuously on the open market with respect to different risk factors, and since it is impossible to re-evaluate the option for all shifts due to its computationally intensive nature, estimations of the option price must be used. Estimating the price from known points will of course never produce the same result as a full re-evaluation, but an estimation method that produces reliable results and greatly reduces computing time is desirable. This thesis evaluates different approaches and tries to minimize the estimation error with respect to a certain number of risk factors. This is the background for our master thesis at Swedbank. The goal is to create multiple estimation methods and compare them to Swedbank's current estimation model. By doing this we could potentially provide Swedbank with improvement ideas regarding some of its option products and risk measurements. This thesis is primarily based on two estimation methods that estimate option prices with respect to two variable risk factors: the value of the underlying assets and the volatility. The first method is a grid that uses a second-order Taylor expansion and the sensitivities delta, gamma and vega. The other method uses a grid of pre-simulated option prices for different shifts in risk factors; the interpolation technique used in this method is called Piecewise Cubic Hermite interpolation. The methods (referred to as approaches in the report) are implemented to handle a relative change of 50 percent in the underlying asset's index value, which is the first risk factor. Concerning the second risk factor, volatility, both methods estimate prices for a 50 percent relative downward change and an upward change of 400 percent from the initial volatility. Should even more extreme market conditions emerge, both methods use linear extrapolation to estimate a new option price.
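The first approach reduces to a single formula: a second-order Taylor expansion in the underlying value combined with a first-order vega term in volatility. A minimal sketch, with all numbers invented for illustration (this is not Swedbank's model):

```python
# Taylor-grid price estimate from precomputed sensitivities (toy numbers).
def taylor_price(base_price, delta, gamma, vega, dS, dvol):
    """Estimate the option price after shifts dS in spot and dvol in vol."""
    return base_price + delta * dS + 0.5 * gamma * dS**2 + vega * dvol

# Hypothetical call option: base price 5.20, delta 0.55, gamma 0.04, vega 0.12
p = taylor_price(5.20, 0.55, 0.04, 0.12, dS=3.0, dvol=0.02)
print(f"estimated shifted price: {p:.4f}")  # 5.20 + 1.65 + 0.18 + 0.0024
```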
10

Guo, Longkai. "Numerical investigation of Taylor bubble and development of phase change model." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI095.

Full text
Abstract:
The motion of a nitrogen Taylor bubble rising through different types of expansions and contractions in glycerol-water mixtures is investigated by a numerical approach. The CFD procedure is based on the open-source solver Basilisk, which adopts the volume-of-fluid (VOF) method to capture the gas-liquid interface. The results for sudden expansions/contractions are compared with experimental results and show good agreement. The bubble velocity increases in sudden expansions and decreases in sudden contractions. A bubble break-up pattern is observed in sudden expansions with large expansion ratios, and a bubble blocking pattern is found in sudden contractions with small contraction ratios. In addition, the wall shear stress, the liquid film thickness, and the pressure in the simulations are studied to understand the hydrodynamics of the Taylor bubble rising through expansions/contractions. The transient process of the Taylor bubble passing through a sudden expansion/contraction is further analyzed for three different singularities: gradual, parabolic convex, and parabolic concave. A unique feature of the parabolic concave contraction is that the Taylor bubble passes through the contraction even for small contraction ratios. Moreover, a phase change model is developed in the Basilisk solver. In order to use the existing geometric VOF method in Basilisk, a general two-step geometric VOF method is implemented: the mass flux is not calculated in the interfacial cells but transferred to the neighboring cells around the interface. The saturated-temperature boundary condition is imposed at the interface by a ghost cell method. The phase change model is validated by droplet evaporation with a constant mass transfer rate, the one-dimensional Stefan problem, the sucking interface problem, and a planar film boiling case. The results show good agreement with analytical solutions or correlations.
11

Mwangota, Lutufyo. "Cubature on Wiener Space for the Heath-Jarrow-Morton framework." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42804.

Full text
Abstract:
This thesis establishes the cubature method developed by Gyurkó & Lyons (2010) and Lyons & Victoir (2004) for the Heath-Jarrow-Morton (HJM) model. The HJM model was first proposed by Heath, Jarrow, and Morton (1992) to model the evolution of interest rates through the dynamics of the forward rate curve. These dynamics are described by an infinite-dimensional stochastic equation with the whole forward rate curve as a state variable. To construct the cubature method, we first discretize the infinite-dimensional HJM equation and thereafter apply stochastic Taylor expansion to obtain cubature formulae. We use these results to construct cubature formulae of degree 3, 5, 7 and 9 in 1-dimensional space, and we give a detailed step-by-step calculation of the construction of cubature formulae on Wiener space.
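To make the notion of a cubature formula concrete: on 1-dimensional Wiener space, the degree-3 formula consists of two straight-line paths ending at ±√T, each with weight 1/2, and a weak expectation is approximated by solving the underlying ODE along each path and averaging. The sketch below applies this to geometric Brownian motion in Stratonovich form; the SDE and parameters are illustrative assumptions, not the thesis' HJM setting.

```python
# Degree-3 cubature on 1-D Wiener space applied to the Stratonovich SDE
# dX = a*X dt + s*X o dW; along a path w the ODE solves to x0*exp(a*T + s*w(T)).
import math

a, s, x0, T = 0.1, 0.3, 1.0, 0.25

endpoints = [math.sqrt(T), -math.sqrt(T)]    # w(T) for the two cubature paths
weights = [0.5, 0.5]

approx = sum(w * x0 * math.exp(a * T + s * wT)
             for w, wT in zip(weights, endpoints))
exact = x0 * math.exp((a + 0.5 * s**2) * T)  # E[X_T] of the Ito-form solution

print(f"cubature: {approx:.6f}   exact: {exact:.6f}")
```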
12

Midez, Jean-Baptiste. "Une étude combinatoire du lambda-calcul avec ressources uniforme." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4093/document.

Full text
Abstract:
The resource lambda-calculus is a variant of the lambda-calculus based on linearity: resource lambda-terms are to lambda-terms what polynomials are to real functions, namely multilinear approximations. In particular, reductions in the resource lambda-calculus can be viewed as approximations of beta-reductions, but the linearity constraint has important consequences, notably the strong normalisation of resource reduction. So to speak, beta-reduction is obtained as the limit of the resource reductions that approximate it. This thesis studies the rich combinatorial aspects of the resource lambda-calculus. First, we define precisely the notion of resource reduction associated with a beta-reduction: given a lambda-term t, an approximant s of t and a beta-reduct t' of t, we associate with it a resource reduction (called gamma-reduction) of s which reduces the "same" redexes as that of t and produces a set S' of approximants of t'. This definition yields a slightly more intuitive proof of one of the fundamental theorems of the theory and also allows it to be generalized. Then we study the "family" relations between resource terms, the central question being to characterize when two resource terms are reducts of the same term. This central and difficult problem is not fully resolved, but the thesis presents several preliminary results and lays the foundations of a theory aimed at resolving it.
13

Ashu, Tom A. "Non-Smooth SDEs and Hyperbolic Lattice SPDEs Expansions via the Quadratic Covariation Differentiation Theory and Applications." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1500334062680747.

Full text
14

Anderson, Travis V. "Efficient, Accurate, and Non-Gaussian Error Propagation Through Nonlinear, Closed-Form, Analytical System Models." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2675.

Full text
Abstract:
Uncertainty analysis is an important part of system design. The formula for error propagation through a system model that is most often cited in the literature is based on a first-order Taylor series. This formula makes several important assumptions and has several important limitations that are often ignored. This thesis explores those assumptions and addresses two of the major limitations. First, the results obtained from propagating error through nonlinear systems can be wrong by one or more orders of magnitude, due to the linearization inherent in a first-order Taylor series. This thesis presents a method for overcoming that inaccuracy which is capable of achieving fourth-order accuracy without significant additional computational cost. Second, system designers using a Taylor series to propagate error typically only propagate a mean and variance and ignore all higher-order statistics. Consequently, a Gaussian output distribution must be assumed, which often does not reflect reality. This thesis presents a proof that nonlinear systems do not produce Gaussian output distributions, even when inputs are Gaussian. A second-order Taylor series is then used to propagate both skewness and kurtosis through a system model. This allows the system designer to obtain a fully-described non-Gaussian output distribution. The benefits of having a fully-described output distribution are demonstrated using the examples of both a flat rolling metalworking process and the propeller component of a solar-powered unmanned aerial vehicle.
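The first-order formula referred to here, together with the second-order correction to the output mean, can be written in a few lines; the example model and input moments below are invented for illustration and are not the thesis' case studies.

```python
# First-order Taylor variance propagation and second-order mean correction
# through y = f(x1, x2) with independent inputs (toy model and moments).
import math

def f(x1, x2):
    return x1 * math.exp(x2)            # example nonlinear system model

mu = (2.0, 0.5)                         # input means
sig = (0.1, 0.05)                       # input standard deviations
h = 1e-5                                # step for central differences

d1 = [(f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h),
      (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h)]
d2 = [(f(mu[0] + h, mu[1]) - 2 * f(*mu) + f(mu[0] - h, mu[1])) / h**2,
      (f(mu[0], mu[1] + h) - 2 * f(*mu) + f(mu[0], mu[1] - h)) / h**2]

var_y = sum((d1[i] * sig[i]) ** 2 for i in range(2))                # 1st order
mean_y = f(*mu) + 0.5 * sum(d2[i] * sig[i] ** 2 for i in range(2))  # 2nd order

print(f"output mean ~ {mean_y:.4f}, output std ~ {math.sqrt(var_y):.4f}")
```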
15

Chouquet, Jules. "Une géométrie de calcul : Réseaux de preuve, appel-par-pousse-valeur et topologie du consensus." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7015.

Full text
Abstract:
This PhD thesis presents a quantitative study of various computation models from fundamental computer science and proof theory, in two principal directions. The first consists in examining the mechanisms of multilinear approximation in systems related to the lambda-calculus and Linear Logic. The second consists in studying topological models for asynchronous distributed systems and probabilistic algorithms. We first study the Taylor expansion of Linear Logic proof nets. We introduce proof methods that use the geometry of cut elimination in multiplicative nets and allow infinite sums of nets to be handled in a safe and correct way, in order to extract properties about reduction. Then, we introduce a language allowing us to define the syntactic Taylor expansion for Call-By-Push-Value, while capturing some properties of the denotational semantics related to coalgebra morphisms. We then focus on fault-tolerant distributed systems with shared memory and on the Consensus problem. We use a topological model which interprets communication with simplicial complexes, and we adapt it so as to transform the well-known impossibility results into probability lower-bound results for probabilistic algorithms.
16

Seo, Dong-Won. "Performance analysis of queueing networks via Taylor series expansions." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/25098.

Full text
17

Li, Rihua. "Analysis for Taylor vortex flow." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/53628.

Full text
Abstract:
Taylor vortex flow is one of the basic problems of nonlinear hydrodynamic stability. In contrast with the wide region of wavenumbers predicted by linear theory, experiments show that Taylor vortex flow only appears in a small region containing the critical wavenumber β_cr; this phenomenon is called wave selection. In this work, several high-order perturbation methods and a numerical method are established. Both the evolution and the steady state of the flow caused by single or several disturbances are studied. The existence of multiple steady states for disturbances with small wavenumbers is discovered and proved. The stable and unstable steady-state solutions and some associated phenomena, such as the jump phenomenon and the hysteresis phenomenon, are found and explained. In the small region, the wavenumbers and initial amplitudes of the disturbances determine the wavenumber of the flow, but outside this region only the wavenumbers of the disturbances affect the wave selection. These results indicate that unstable solutions play a key role in wave selection. The side-band stability curve produced by the high-order perturbation methods is accurate at low Taylor numbers but incorrect at relatively high Taylor numbers. The relation between the unstable solutions and side-band stability is discussed. In addition, the overshoot and oscillation phenomena during evolution are studied in detail. Connections between this work and experiments are discussed.
Ph. D.
18

Doan, Van Tu. "Modèles réduits pour des analyses paramètriques du flambement de structures : application à la fabrication additive." Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0017/document.

Full text
Abstract:
The development of additive manufacturing allows structures with highly complex shapes to be produced; complex lattice shapes are particularly interesting in the context of lightweight structures. However, although the use of this technology is growing in numerous engineering domains, it is not yet fully mature, and correlations between experimental data and deterministic simulations are not straightforward. To take into account the observed variations of behavior, multiparametric approaches are nowadays efficient solutions for tending towards robust and reliable designs. The aim of this thesis is to integrate material and geometric uncertainties, quantified experimentally, into buckling analyses. To achieve this objective, different surrogate models, based on regression and correlation techniques, as well as different reduced-order models, were first evaluated in order to reduce the prohibitive computational time. The selected projections rely on modes calculated either from Proper Orthogonal Decomposition, from homotopy developments, or from Taylor series expansions. Second, the resulting mathematical model is used in fuzzy and probabilistic analyses to estimate the evolution of the critical buckling load of lattice structures.
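One way to picture the combination of a Taylor-based parametric model with a buckling analysis: expand the stiffness matrix to first order in an uncertain parameter p and feed it into the linearized buckling eigenproblem K(p)v = λK_g v, whose smallest eigenvalue is the critical load factor. The toy 2-DOF matrices below are invented for illustration and are not the thesis' lattice models.

```python
# Parametric buckling sketch: K(p) ~ K0 + p*dK in the generalized eigenproblem
# K(p) v = lam * Kg v (toy 2-DOF matrices; smallest lam = critical load factor).
import numpy as np
from scipy.linalg import eigh

K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])   # nominal stiffness
dK = np.array([[0.5, 0.0], [0.0, 0.2]])     # stiffness sensitivity to p
Kg = np.eye(2)                              # geometric stiffness

def critical_load(p):
    lams = eigh(K0 + p * dK, Kg, eigvals_only=True)  # ascending eigenvalues
    return lams[0]

for p in (-0.1, 0.0, 0.1):                  # sweep the uncertain parameter
    print(f"p = {p:+.1f}  critical load factor = {critical_load(p):.4f}")
```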
19

Sun, Yong. "Reliability prediction of complex repairable systems : an engineering approach." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16273/.

Full text
Abstract:
This research has developed several models and methodologies with the aim of improving the accuracy and applicability of reliability predictions for complex repairable systems. A repairable system is usually defined as one that will be repaired to recover its functions after each failure. Physical assets such as machines, buildings and vehicles are often repairable. Optimal maintenance strategies require accurate prediction of the reliability of complex repairable systems. Numerous models and methods have been developed for predicting system reliability. After an extensive literature review, several limitations of the existing research and needs for future research were identified. These include the following: the need for an effective method to predict the reliability of an asset with multiple preventive maintenance intervals during its entire life span; the need to consider interactions among failures of components in a system; and the need for an effective method for predicting reliability with sparse or zero failure data. In this research, the Split System Approach (SSA), an Analytical Model for Interactive Failures (AMIF), the Extended SSA (ESSA) and the Proportional Covariate Model (PCM) were developed by the candidate to meet the needs identified previously in an effective manner. These new methodologies/models are expected to rectify the identified limitations of current models and significantly improve the accuracy of reliability prediction for repairable systems. The reliability characteristics of a system alter after regular preventive maintenance. This alteration makes prediction of the reliability of complex repairable systems difficult, especially when the prediction covers a number of imperfect preventive maintenance actions over multiple intervals during the asset's lifetime. SSA uses a new concept to address this issue effectively and virtually splits a system into repaired and unrepaired parts. SSA has been used to analyse system reliability at the component level and to address different states of a repairable system after single or multiple preventive maintenance activities over multiple intervals. The results obtained from this investigation demonstrate that SSA has an excellent ability to support optimal asset preventive maintenance decisions over the asset's whole life. It is noted that SSA, like most existing models, is based on the assumption that failures are independent of each other. This assumption is often unrealistic in industrial circumstances and may lead to unacceptable prediction errors. To ensure the accuracy of reliability prediction, interactive failures were considered. The concept of interactive failure presented in this thesis is a new variant of the definition of failure. The candidate has made several original contributions, such as introducing and defining related concepts and terminology, developing a model to analyse interactive failures quantitatively, and revealing that interactive failures can be either stable or unstable. The research results effectively assist in avoiding unstable interactive relationships in machinery during its design phase. This research on interactive failures pioneers a new area of reliability prediction and enables the estimation of failure probabilities more precisely. ESSA was developed through an integration of SSA and AMIF.
ESSA is the first effective method to address the reliability prediction of systems with interactive failures and with multiple preventive maintenance actions over multiple intervals. It enhances the capability of SSA and AMIF. PCM was developed to further enhance the capability of the above methodologies/models. It addresses the issue of reliability prediction using both failure data and condition data. The philosophy and procedure of PCM are different from existing models such as the Proportional Hazard Model (PHM). PCM has been used successfully to investigate the hazard of gearboxes and truck engines. The candidate demonstrated that PCM had several unique features: 1) it automatically tracks the changing characteristics of the hazard of a system using symptom indicators; 2) it estimates the hazard of a system using symptom indicators without historical failure data; 3) it reduces the influence of fluctuations in condition monitoring data on hazard estimation. These newly developed methodologies/models have been verified using simulations, industrial case studies and laboratory experiments. The research outcomes of this research are expected to enrich the body of knowledge in reliability prediction through effectively addressing some limitations of existing models and exploring the area of interactive failures.
20

Semenov, Andrei. "Intertemporal utility models for asset pricing : reference levels and individual heterogeneity." Thèse, [Montréal] : Université de Montréal, 2003. http://wwwlib.umi.com/cr/umontreal/fullcit?pNQ92724.

Full text
Abstract:
Thesis (Ph.D.) -- Université de Montréal, 2004.
Thesis presented to the Faculté des études supérieures for the degree of Philosophiae Doctor (Ph.D.) in economics. An electronic version is also available on the Internet.
21

Quintanilha, Laura de Mesquita. "Análise do modelo de fluxo de potência retangular intervalar baseado na expansão completa da série de Taylor." Universidade Federal de Juiz de Fora (UFJF), 2018. https://repositorio.ufjf.br/jspui/handle/ufjf/7554.

Full text
Abstract:
Power flow analysis aims to calculate bus voltages and branch currents for a given pre-established generation and load scenario, and it is an essential tool in the operation and control of electrical power systems. In traditional analysis, the parameters are treated as deterministic quantities. In practice, however, these parameters may carry uncertainties associated with measurement or with their inherent variation over time. In addition, the growing participation of intermittent sources such as wind and solar in power grids increases the level of uncertainty, so specific power flow studies must be developed to deal with this data variability. In this context, this work investigates a method, published in the literature, that models the power flow subject to uncertainties in the active and reactive bus loads. The basic idea of the method is to carry out the complete Taylor series expansion of the power equations expressed in rectangular coordinates of the bus voltages. The method is implemented in MATLAB and applied, under different uncertainty levels, to the IEEE 57-bus and Brazilian 107-bus systems. The results are then compared with those generated by interval mathematics and by Monte Carlo simulation. In general, the intervals generated by the method under study are of better quality than those produced by interval mathematics.
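Because the power injections are quadratic in the rectangular voltage coordinates, their second-order Taylor expansion is complete, i.e., exact with no truncation error. A minimal sketch of this property on a single hypothetical quadratic injection, with all numbers invented for illustration (not the thesis implementation):

```python
# Toy "injection" P(e, f) = G*(e^2 + f^2): its Taylor expansion around an
# operating point (constant + gradient + Hessian) is complete at second order.
import numpy as np

G = 1.25                       # hypothetical branch conductance
e0, f0 = 1.0, 0.05             # operating point (real/imag voltage parts)

def P(e, f):
    return G * (e**2 + f**2)

P0 = P(e0, f0)
grad = np.array([2*G*e0, 2*G*f0])
H = np.array([[2*G, 0.0], [0.0, 2*G]])

def taylor(de, df):
    d = np.array([de, df])
    return P0 + grad @ d + 0.5 * d @ H @ d

# Deviation intervals (e.g., +/-5% uncertainty), checked by dense sampling:
des = np.linspace(-0.05, 0.05, 41)
dfs = np.linspace(-0.05, 0.05, 41)
vals = [taylor(de, df) for de in des for df in dfs]
exact = [P(e0 + de, f0 + df) for de in des for df in dfs]
assert np.allclose(vals, exact)   # complete expansion: no truncation error
print(f"P interval ~ [{min(vals):.4f}, {max(vals):.4f}]")
```

This exactness is what lets the interval bounds come from the expansion terms themselves rather than from a truncated approximation.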
APA, Harvard, Vancouver, ISO, and other styles
22

Chiou, Sheng-Yu, and 邱昇昱. "Approximation for Quantile Using Taylor Expansion." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/61867953366538993196.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Applied Mathematics
Academic year 100 (ROC calendar)
The quantile is a basic and important quantity of a random variable. For some distributions, the quantiles have closed-form expressions; for many continuous distributions, however, closed-form expressions of the quantiles do not exist. Yu and Zelterman (2011) and Chang (2004) have proposed approximations of quantiles. In this paper, we propose an improved method that combines the Taylor expansion with Newton's method. Some examples are given to compare the computing time of the proposed method with the methods in Yu and Zelterman (2011) and Chang (2004).
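A minimal sketch of the kind of update this abstract describes: Newton's method derived from the first-order Taylor expansion of the CDF, F(x + h) ≈ F(x) + h·f(x), assuming a standard normal target for illustration (not the thesis code):

```python
# Solve F(x) = p with the update x <- x - (F(x) - p) / f(x).
from math import erf, exp, pi, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def quantile_newton(p, x0=0.0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = (norm_cdf(x) - p) / norm_pdf(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(quantile_newton(0.975))   # ~1.959964 for the standard normal
```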
APA, Harvard, Vancouver, ISO, and other styles
23

Askar, Serkan. "Behavioral synthesis using Taylor expansion diagrams." 2006. https://scholarworks.umass.edu/dissertations/AAI3242096.

Full text
Abstract:
An original technique to transform a functional representation of the design into a structural representation in the form of a data flow graph (DFG) is described. A canonical, word-level data structure, the Taylor Expansion Diagram (TED), is used as a vehicle to effect this transformation. The problem is formulated as that of applying a sequence of decomposition cuts to the TED that transforms it into a family of DFGs. A systematic approach to generating cut sequences and metrics to evaluate the cost of the resulting DFGs are described. Experimental results show that DFGs constructed this way provide a better starting point for architectural synthesis than those derived directly from HDL specifications.
APA, Harvard, Vancouver, ISO, and other styles
24

Ren, Qian. "Optimizing behavioral transformations using Taylor Expansion Diagrams." 2008. https://scholarworks.umass.edu/dissertations/AAI3325111.

Full text
Abstract:
Optimization of designs specified at higher levels of abstraction than gate level or register-transfer level (RTL) has been shown to have the greatest impact on the quality of synthesized hardware. This work presents a systematic method and an experimental software system for behavioral transformations of designs specified at the algorithmic and behavioral levels. It targets data-flow and computation-intensive designs used in digital signal processing applications. The system is intended to provide transformations of the initial design specifications prior to architectural and RTL synthesis, and it aims at optimizing practical designs while taking hardware design constraints into consideration. The system is based on a canonical, graph-based representation called the Taylor Expansion Diagram (TED). The design, initially specified in C, SystemC, or a behavioral hardware description language (HDL), is translated into a hybrid network composed of islands of functional blocks, represented as TEDs, and structural operators, represented as black boxes. TEDs, constructed from polynomial expressions describing the functionality of the arithmetic components, are transformed into a structural data flow graph (DFG) representation through a series of TED transformation steps, such as TED linearization, factorization, common subexpression elimination, and TED decomposition. The resulting DFGs are combined with other operators in a hybrid structural network, which is then further restructured to minimize the design latency, subject to the imposed resource constraints. The behavioral transformation system presented in this work relies on novel TED decomposition and DFG restructuring algorithms to produce minimum-latency DFGs and heuristically minimize the overall TED network under the resource constraints. The results show that this system can produce high-quality results and can be applied to practical industrial designs. To the best of our knowledge, this is the first truly behavioral optimization system that performs transformations of behavioral design descriptions in a systematic fashion.
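As an illustration of the algebraic effect of two of the transformation steps named above, factorization and common subexpression elimination, here is a small sketch on a polynomial "behavior" using sympy as a stand-in for TED manipulation (this is not the thesis TED package):

```python
# Factoring reduces operator count; CSE shares repeated subterms across the DFG.
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
expr = a*c + a*d + b*c + b*d          # 3 adds, 4 multiplies as written

factored = sp.factor(expr)            # (a + b)*(c + d): 2 adds, 1 multiply
subs, reduced = sp.cse([factored + (a + b)**2])
print(factored)                       # fewer operators -> smaller DFG
print(subs, reduced)                  # shared subterm (a + b) extracted once
```

Fewer and shared operators in the expression translate directly into fewer functional units or shorter schedules after architectural synthesis, which is the cost driver the TED transformations target.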
APA, Harvard, Vancouver, ISO, and other styles
25

Wu, Yi-Hao, and 吳逸豪. "On the Immunization Effect of the Nth Order Taylor Series Expansion of Bond Price." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/80457861770856532347.

Full text
Abstract:
Master's thesis
National Taiwan University
Department of Finance
Academic year 86 (ROC calendar)
The immunization strategy is extensively used to protect the nominal value of a portfolio against interest rate changes, and is particularly suitable for pension funds and insurance companies. Given various types of term structure of interest rates, this paper studies how effective the following strategies are at immunizing the interest rate risk of a bond portfolio: duration matching, M-absolute, M-square, and M-vector. Taiwan government bond data are used to investigate the immunization effects of these strategies. The evidence indicates that actual portfolio values deviate from target values under the different strategies; these results guide the appropriate application of immunization strategies by bond portfolio managers. This paper also shows how to manage and hedge bond portfolios, and hence how to solve the following practical problems. First, how does an asset manager construct a bond portfolio and hedge the expected return for the cash outflow? Second, how does the portfolio manager apply operations research to construct the bond portfolio? Third, how does the portfolio manager explain the properties of the constructed portfolio under different immunization strategies? Finally, can the portfolio manager construct the optimal bond portfolio by simultaneously taking into account the hedging effectiveness, yield to maturity, and construction cost of the portfolio? The evidence indicates that, due to the time-varying characteristics of interest rates, it is impossible to perfectly hedge the risk of the bond price with traditional duration matching. For the sake of immunization effectiveness, management cost, and convenience, it is better to execute the M-absolute or M-square strategies than duration matching.
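For reference, a minimal sketch of two of the risk measures compared above, for a bond under a flat continuously compounded yield; the cash flows and numbers are illustrative assumptions, not data from the thesis:

```python
# With present-value weights w_t = CF_t * exp(-y*t) / price:
#   duration D = sum t * w_t,    M-square = sum (t - H)^2 * w_t,
# where H is the investment horizon.
import numpy as np

def bond_measures(times, cashflows, y, H):
    times = np.asarray(times, float)
    cashflows = np.asarray(cashflows, float)
    pv = cashflows * np.exp(-y * times)
    price = pv.sum()
    w = pv / price
    duration = (times * w).sum()
    m_square = ((times - H) ** 2 * w).sum()
    return price, duration, m_square

# 5-year 6% annual-coupon bond, face 100, flat 5% yield, 4-year horizon:
print(bond_measures([1, 2, 3, 4, 5], [6, 6, 6, 6, 106], 0.05, H=4.0))
```

Duration matching sets D = H; the M-square criterion additionally penalizes cash-flow dispersion around the horizon, which is why it can hedge better when the term structure shifts in a non-parallel way.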
APA, Harvard, Vancouver, ISO, and other styles
26

Zhang, Liangliang. "Essays on numerical solutions to forward-backward stochastic differential equations and their applications in finance." Thesis, 2017. https://hdl.handle.net/2144/26430.

Full text
Abstract:
In this thesis, we provide convergent numerical solutions to nonlinear forward-backward stochastic differential equations (FBSDEs). Applications in mathematical finance, financial economics, and financial econometrics are discussed. Numerical examples show the effectiveness of our methods.
APA, Harvard, Vancouver, ISO, and other styles
27

CHIU, SHAO-WEN, and 邱紹文. "Assessment of The Overall Survival Rate of Cancer Using A Revised Taylor Expansion Algorithm- A Taiwanese Population-Based Survey for Breast And Colon Cancers." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3y7taq.

Full text
Abstract:
Doctoral dissertation
Central Taiwan University of Science and Technology
Department and Graduate Institute of Medical Imaging and Radiological Sciences
Academic year 107 (ROC calendar)
This dissertation explores overall survival (OS) analysis using the Taylor expansion algorithm to give different interpretations of the survival curve for different cancer stages and treatments. Breast and colon cancers were selected as examples for in-depth discussion. The proposed approach is based on the theory of the hit-and-target model and applies the Taylor expansion algorithm to a Taiwanese population-based survey. Overall survival, also called the observed survival rate, is the most commonly used indicator in survival analysis; it takes death as the event of interest and measures the time from diagnosis to death, regardless of the cause of death. Since life extension is the primary goal of cancer treatment, this indicator is recognized as the most significant evidence of clinical efficacy. The survival analysis in this work adopted the Kaplan-Meier method, also known as the product-limit method, to estimate the survival rate over a series of intervals according to the actual survival times, together with a revised Taylor expansion algorithm that exploits the exponential form of the survival curve to calculate, from population-level statistics, how many years of survival radiotherapy adds compared with no radiotherapy. This presentation allows the general public to quickly grasp the necessity of treatment. The Taylor expansion algorithm is a polynomial approximation, that is, a function is represented by a polynomial. Writing the survival rate as S(t) = exp(-αt), the lethal frequency α [yr⁻¹] is the reciprocal of the time at which the fitted survival curve falls to exp(-1) ≈ 0.37; its interpretation is analogous to the decay constant in radionuclide decay. The characteristics of the Taylor expansion algorithm were used to describe the variation of the survival rate of breast cancer patients and to predict their response to radiotherapy. If the death of breast cancer patients is regarded as the decay of a radionuclide, the survival curve can be viewed as a decay curve, which interprets the breast cancer data quite reasonably. For stage 0-IV breast cancer patients who received radiotherapy, the lethal frequencies were {0.0029, 0.0066, 0.0178, 0.0475, 0.1785}, respectively; for those without radiotherapy, they were {0.0072, 0.0137, 0.0264, 0.0913, 0.2425}. The average life expectancy of stage II breast cancer patients was 37.8 years without radiotherapy and increased to 56.3 years with it; for stage III patients, the corresponding figures were 11.0 and 21.0 years. Breast cancer patients who received radiotherapy were therefore able to increase their average life expectancy. For stage IV patients, however, the average life expectancy did not increase significantly, which may reflect the fact that radiotherapy is not the main treatment method at that stage. A high lethal frequency means a low survival rate and a short average life. Based on the revised Taylor expansion algorithm, a modified calculation method was proposed to predict the survival rate of breast cancer patients at each stage; it improves the accuracy of the prediction by adding a term for the patient's recovery from therapy to the original survival-rate regression.
The same method was applied to assess the effect of surgery for stage 0-IV colon cancer patients. For patients with surgery, the lethal frequencies were {0.029, 0.036, 0.058, 0.077, 0.236} and the average life expectancies were {34.5, 27.8, 17.2, 13.0, 4.2} years; for patients without surgery, the lethal frequencies were {0.116, 0.181, 0.256, 0.203, 0.504} and the average life expectancies were {8.6, 5.5, 3.9, 4.9, 2.0} years. Colon cancer patients with surgery thus gained at least twice the average life expectancy of those without surgery, and a shoulder effect can be seen. In conclusion, the four treatment arms, breast or colon cancer with or without radiotherapy (surgery), follow the hit-and-target model; a shoulder effect indicates a good response to treatment and hence an increased survival rate.
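A minimal sketch of the exponential reading used in this abstract, with the stage-III-with-radiotherapy figure reproduced for illustration (the fitting procedure here is an assumption, not the dissertation's code):

```python
# If S(t) = exp(-a*t), the lethal frequency a is the reciprocal of the time
# at which S falls to exp(-1) ~ 0.37, and the mean life expectancy is 1/a --
# the same relations used for a radionuclide decay constant.
import numpy as np

def lethal_frequency(times, survival):
    # least-squares fit of log S(t) = -a*t through the origin
    times = np.asarray(times, float)
    survival = np.asarray(survival, float)
    return -np.sum(times * np.log(survival)) / np.sum(times ** 2)

t = np.array([1.0, 2.0, 5.0, 10.0, 15.0])
S = np.exp(-0.0475 * t)          # illustrative stage-III-with-RT-like curve
a = lethal_frequency(t, S)
print(f"alpha = {a:.4f} /yr, mean life = {1/a:.1f} yr")  # ~0.0475 -> ~21 yr
```

Note that 1/0.0475 ≈ 21 years reproduces the stage III with-radiotherapy life expectancy quoted above, which is the internal consistency the exponential model relies on.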
APA, Harvard, Vancouver, ISO, and other styles
28

Gaikwad, Akash S. "Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment." Thesis, 2018. http://hdl.handle.net/1805/17923.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment involves many complexities because of limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through various compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the promising techniques to solve these problems. This thesis proposes methods to prune the convolution neural network (SqueezeNet) without introducing network sparsity in the pruned model. Specifically, three methods are proposed to decrease the model size of the CNN without a significant drop in the accuracy of the model: 1) pruning based on the Taylor expansion of the change in the cost function, Delta C; 2) pruning based on the L2 normalization of activation maps; and 3) pruning based on a combination of methods 1 and 2. The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. The transfer learning technique is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in the accuracy of the model (optimal pruning efficiency result). Results also show that pruning based on a combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than the individual criteria, and that most of the pruned kernels come from mid- and high-level layers. The pruned model is deployed on BlueBox 2.0 using RTMaps software, and model performance was evaluated.
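A minimal sketch of one common form of the first criterion, the first-order Taylor estimate of the cost change from zeroing a feature map, |a · ∂C/∂a| averaged over the map and batch; the exact ranking details are assumptions, not the thesis code:

```python
# Rank channels by the Taylor criterion; low-scoring channels are pruning candidates.
import torch

def taylor_rank(activation: torch.Tensor, gradient: torch.Tensor) -> torch.Tensor:
    """activation, gradient: (batch, channels, H, W) captured from a
    forward/backward pass. Returns one score per channel."""
    scores = (activation * gradient).abs().mean(dim=(0, 2, 3))
    return scores / (scores.norm() + 1e-8)   # layer-wise L2 rescaling

# Toy usage with random tensors standing in for a SqueezeNet fire-module output:
act = torch.randn(8, 64, 16, 16)
grad = torch.randn(8, 64, 16, 16)
print(taylor_rank(act, grad).argsort()[:5])  # 5 lowest-ranked channel indices
```

In practice the activations and gradients would be captured with forward/backward hooks during a pass over training data, and whole filters (not individual weights) are removed so the pruned model stays dense.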
APA, Harvard, Vancouver, ISO, and other styles
29

Chen, Bing Chang, and 陳炳嘗. "The Taylor expansions of Jacobi forms over the Cayley numbers." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/89707241756026849180.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ellis, Trenton Blake. "A subsurface investigation in Taylor clay." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-08-3973.

Full text
Abstract:
A comprehensive field and laboratory investigation at the location of the Lymon C. Reese Research Wall is presented. The soil at the site is a stiff, fissured, and heavily overconsolidated clay from the Taylor Group. Index properties such as Atterberg limits and clay fractions were used with common empirical guidelines to assess the qualitative swell potential. The soil's compressibility and strength characteristics were difficult to measure in the lab, owing to the stiff soil's secondary structure. Measured values were compared with well-established correlations and with test results from similar soils sampled near the present test site. Cyclic swell tests were used to predict the soil's lateral swell potential after multiple cycles of wetting and drying. Empirical guidelines indicated the soil has a "high" to "very high" swell potential, which was validated by the swelling observed during consolidation and cyclic swell tests. The soil's drained and undrained strengths were both rather large, often more typical of rock than soil. The stress history was not evident from the consolidation results, due either to disturbance, cementation, or extreme overconsolidation. The hydraulic conductivity was particularly elusive, again due to the soil's secondary structure.
APA, Harvard, Vancouver, ISO, and other styles
31

Meng, Jin. "Coding Theorems via Jar Decoding." Thesis, 2013. http://hdl.handle.net/10012/7359.

Full text
Abstract:
In the development of digital communication and information theory, each new channel decoding rule has triggered a revolution at the time it was invented. In information theory, early channel coding theorems were established mainly by maximum likelihood decoding, while the arrival of typical sequence decoding signaled the era of multi-user information theory, in which achievability proofs became simple and intuitive. Practical channel code design, on the other hand, was based on minimum distance decoding at the early stage. The invention of belief propagation decoding with soft input and soft output, leading to the birth of turbo codes and low-density parity-check (LDPC) codes, which are indispensable coding techniques in current communication systems, changed the whole research area so dramatically that people started to use the term "modern coding theory" to refer to research based on this decoding rule. In this thesis, we propose a new decoding rule, dubbed jar decoding, which is expected to bring new ideas to both code performance analysis and code design. Given any channel with input alphabet X and output alphabet Y, the jar decoding rule can be expressed simply as follows: upon receiving the channel output y^n ∈ Y^n, the decoder first forms a set (called a jar) of sequences x^n ∈ X^n considered to be close to y^n and picks any codeword (if one exists) inside this jar as the decoding output. The way the decoder forms the jar is defined independently of the actual channel code, and in certain cases even of the channel statistics. Various coding theorems are then proved under jar decoding. First, focusing on the word error probability, jar decoding is shown to be near optimal by achievabilities proved via jar decoding and converses proved via a related proof technique, dubbed the outer mirror image of the jar. Combining these achievability and converse theorems yields a Taylor-type expansion of the optimal channel coding rate at finite block length, and jar decoding is shown to be optimal up to the second order in this expansion. The flexibility of jar decoding is then illustrated by proving LDPC coding theorems via jar decoding, where the bit error probability is of concern. Finally, we consider a coding scenario called interactive encoding and decoding, and show that jar decoding can also be used to prove coding theorems and guide code design in two-way communication.
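A minimal sketch of the jar rule on a binary symmetric channel (BSC); the Hamming-distance radius n(p + δ) and the tiny random codebook are illustrative assumptions, not the thesis construction:

```python
# Jar decoding on a BSC: the jar around y^n is the set of sequences within
# Hamming distance n*(p + delta); output any codeword that lands inside it.
import random

n, p, delta = 12, 0.1, 0.08
codebook = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(4)]

def jar_decode(y):
    radius = n * (p + delta)
    for c in codebook:
        if sum(ci != yi for ci, yi in zip(c, y)) <= radius:
            return c          # any codeword inside the jar is admissible
    return None               # empty jar: declare a decoding error

x = codebook[0]
y = tuple(b ^ (random.random() < p) for b in x)   # pass x through the BSC
print(jar_decode(y) == x)     # True with high probability at these parameters
```

At this toy block length the decoder occasionally fails, as expected; the thesis's point is that as n grows, jar decoding achieves the optimal rate up to the second order of the finite-block-length expansion.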
APA, Harvard, Vancouver, ISO, and other styles
32

(5931047), Akash Gaikwad. "Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment." Thesis, 2019.

Find full text
Abstract:

In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment involves many complexities because of limited resources such as memory, computational power, and energy. Recent research in deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through various compression techniques such as architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the promising techniques to solve these problems.

This thesis proposes methods to prune the convolution neural network (SqueezeNet) without introducing network sparsity in the pruned model.

This thesis proposes three methods to prune the CNN to decrease the model size of CNN without a significant drop in the accuracy of the model.

1: Pruning based on Taylor expansion of change in cost function Delta C.

2: Pruning based on L2 normalization of activation maps.

3: Pruning based on a combination of method 1 and method 2.

The proposed methods use these ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. The transfer learning technique is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in the accuracy of the model (optimal pruning efficiency result). Results also show that pruning based on a combination of the Taylor expansion of the cost function and the L2 normalization of activation maps achieves better pruning efficiency than the individual criteria, and that most of the pruned kernels come from mid- and high-level layers. The pruned model is deployed on BlueBox 2.0 using RTMaps software, and model performance was evaluated.

APA, Harvard, Vancouver, ISO, and other styles
33

Hsiang, Te-Cheng, and 向德成. "Base on the truncated 6-order Taylor series expansions to implement ROM-less DDFS." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/51853765444635929073.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Communications Engineering
Academic year 96 (ROC calendar)
A ROM-less direct digital frequency synthesizer (DDFS) architecture is presented. It uses a truncated Taylor series expansion of the cosine function and is compared with existing systems in terms of spurious-free dynamic range (SFDR) and approximation error. To simultaneously maximize the SFDR and minimize the error, we combine the truncated 6th-order Taylor series expansion with a minimum-square-error method. The synthesizer achieves a wide SFDR of 100 dB and an error of less than 3.1×10⁻⁴.
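A minimal sketch of the phase-to-amplitude idea with the plain truncated 6th-order Taylor coefficients (illustrative, not the thesis hardware); note the thesis additionally tunes the coefficients by the minimum-square-error method, which is how it reaches the 3.1×10⁻⁴ figure, whereas the untuned Taylor coefficients give roughly 9×10⁻⁴:

```python
# Replace the ROM with a truncated 6th-order Taylor polynomial for cosine,
# evaluated on the first quadrant and extended by symmetry.
import numpy as np

def cos_taylor6(theta):
    # cos x ~ 1 - x^2/2! + x^4/4! - x^6/6!, used on [0, pi/2]
    x2 = theta * theta
    return 1 - x2/2 + x2*x2/24 - x2*x2*x2/720

phase = np.linspace(0, np.pi / 2, 4096)      # quarter-wave phase accumulator
err = np.abs(np.cos(phase) - cos_taylor6(phase))
print(f"max |error| on first quadrant: {err.max():.2e}")   # ~8.9e-4 untuned
```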
APA, Harvard, Vancouver, ISO, and other styles
34

Reetz, Sunny W. H. "Effects of Land Use, Market Integration, and Poverty on Tropical Deforestation: Evidence from Forest Margins Areas in Central Sulawesi, Indonesia." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-EF46-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
