Dissertations / Theses on the topic 'Expansion de Taylor'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 34 dissertations / theses for your research on the topic 'Expansion de Taylor.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Dula, Mark, Eunice Mogusu, Sheryl Strasser, Ying Liu, and Shimin Zheng. "Median and Mode Approximation for Skewed Unimodal Continuous Distributions using Taylor Series Expansion." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etsu-works/112.
Guillot, Jérémie. "Optimization techniques for high level synthesis and pre-compilation based on Taylor expansion diagrams." Lorient, 2009. http://www.theses.fr/2009LORIS121.
This thesis addresses the design productivity gap in design automation by employing a canonical representation called the Taylor Expansion Diagram (TED). The TED is a graphical representation based on the Taylor series decomposition of a data-flow computation. The optimizations and high-level transformations developed in this thesis are based on transformations and pattern recognition applied to the TED representation. The results of such transformations are optimized data-flow graphs, which provide input to standard HLS tools for final architectural synthesis. Such optimizations cannot be achieved by the traditional architectural and high-level synthesis tools or compilers available today.
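As a rough illustration of the idea (a sketch, not Guillot's implementation), the one-step Taylor decomposition F = F|x=0 + x·F' that underlies TEDs can be applied recursively, one variable at a time, to a polynomial data-flow expression. The dict-of-monomials representation and all names below are illustrative assumptions:

```python
# Illustrative sketch only: the one-step Taylor decomposition
# F = F|x=0 + x * F', applied recursively per variable, which is the core
# construction behind Taylor Expansion Diagrams.

def taylor_split(poly, var):
    """Split a polynomial (dict: exponent tuple -> coefficient) as
    const + x_var * rest with respect to the variable at index `var`."""
    const, rest = {}, {}
    for exps, c in poly.items():
        if exps[var] == 0:
            const[exps] = c
        else:
            reduced = list(exps)
            reduced[var] -= 1
            key = tuple(reduced)
            rest[key] = rest.get(key, 0) + c
    return const, rest

def ted(poly, var, nvars):
    """Build a TED-like nested triple (var, const_child, linear_child)."""
    if not poly:
        return 0
    if var == nvars:
        # only the constant monomial remains at the leaves
        return poly.get((0,) * nvars, 0)
    const, rest = taylor_split(poly, var)
    return (var, ted(const, var + 1, nvars), ted(rest, var, nvars))

def evaluate(node, values):
    """Evaluate the nested structure back to a number (sanity check)."""
    if isinstance(node, int):
        return node
    var, const, rest = node
    return evaluate(const, values) + values[var] * evaluate(rest, values)

# F(x, y) = x^2 + x*y + 2, over the variable order (x, y)
poly = {(2, 0): 1, (1, 1): 1, (0, 0): 2}
tree = ted(poly, 0, 2)
```

Evaluating the decomposition reproduces the polynomial; for instance, `evaluate(tree, [2, 3])` returns 12, matching F(2, 3) = 4 + 6 + 2.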
Liljas, Erik. "Stochastic Differential Equations : and the numerical schemes used to solve them." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-86799.
García Monera, María. "r-critical points and Taylor expansion of the exponential map, for smooth immersions in Rk+n." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/50935.
In general, the study of contact with hyperplanes and hyperspheres has been carried out using the family of height functions and the squared distance function. In the first part of the thesis we analyze the Taylor expansion of the exponential map up to order 3 of a submanifold $M$ immersed in $\r n$. Our main goal is to show its usefulness in the study of special contacts of submanifolds with geometric models. As we analyze contacts of higher order, the complexity of the computations increases. In this work, through the Taylor expansion of the exponential map, we characterize the geometry of order higher than $3$ in terms of geometric invariants of the immersion, so that working out the computations in special cases becomes more manageable. This also allows us to obtain new geometric results. The second part of the thesis introduces the concept of critical point of a smooth map between submanifolds. If we consider a differentiable manifold $M$ of dimension $k$ immersed in $\r{k+n}$, we know that its focal set can be interpreted as the image of the critical points of the {\it normal map} $\nu(m,u): NM\to \r{k+n}$ defined by $\nu(m,u)=\pi_N(m,u)+u$, for $m\in M$ and $u\in N_mM$, where $\pi_N:NM\to M$ denotes the normal bundle. Similarly, the parabolic set of a differentiable submanifold is given by the analysis of the singularities of the height function on the submanifold.
If we consider a submanifold $M$ of dimension $k$ immersed in $\r{k+n}$, we know that its parabolic set can be interpreted as the image of the critical points of the {\it generalized Gauss map} $\psi(m,u): NM\to \r{k+n}$ defined by $\psi(m,u)=u$, where $u\in N_mM$. Finally, we characterize the asymptotic directions as the set of tangent directions of a submanifold $M$ of dimension $k$ immersed in $\r{k+n}$ through the study of the singularities of the tangent map $\Omega(m,y): TM\to \r{k+n}$ defined by $\Omega(m,y)=\pi(m,y)+y$, for $y\in T_mM$, where $\pi:TM\to M$ denotes the tangent bundle. We first describe the focal set and its geometric relation to the curvature Veronese for a $k$-dimensional manifold immersed in $\r{k+n}$. We then define the $r$-critical points of a map $f:H \to K$ between two submanifolds and characterize the $2$- and $3$-critical points of the normal map and the generalized Gauss map. The number of these critical points at $m\in M$ depends on the degeneration of the curvature ellipse, and we compute that number in the particular case of a surface immersed in $\r{4}$ for the normal map and in $\r{5}$ for the generalized Gauss map.
García Monera, M. (2015). r-critical points and Taylor expansion of the exponential map, for smooth immersions in Rk+n [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/50935
Kratky, Joseph J. "SERIES EXPANSION FOR SEMI-SPDES WITH REMARKS ON HYPERBOLIC SPDES ON THE LATTICE." Kent State University / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=kent1310614464.
Semeraro, Emanuele. "Experimental investigation on hydrodynamic phenomena associated with a sudden gas expansion in a narrow channel." Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066516/document.
The sharp vaporization of superheated liquid sodium is investigated. It is suspected to be at the origin of the automatic shutdown for negative reactivity that occurred in the Phénix reactor at the end of the eighties. An experimental apparatus was designed and operated to reproduce the expansion of overpressurized air superposed on water in a narrow vertical channel of rectangular cross-section. When the expansion begins, the initially flat interface separating the two fluids becomes corrugated under the development of two-dimensional Rayleigh-Taylor instabilities. The interface area increases significantly, becoming up to 50 times larger than its initial value. Since the channel is very narrow, instabilities along the channel depth do not develop. The gas expansion in a narrow channel can be divided into two main phases: a Rayleigh-Taylor (linear and non-linear) phase and a multi-structure (transition and chaotic) phase. The former is characterized by the dynamics of the corrugated profile, and the interface area is proportional to the amplitude of the corrugation. The latter is influenced by the behavior of the liquid structures dispersed in the gas matrix, and the interface area is mainly proportional to the number of liquid structures. The distribution of volume fraction suggests a model of channel flow consisting of three regions: the regular profile of peaks, the spike region, and the structure tails. The sensitivity analysis with respect to surface tension confirms that, with a lower surface tension, the fluid configuration is more unstable: the interface corrugations are more pronounced and more structures are produced, leading to a larger increase of the interface area.
Volkmer, Toni. "Taylor and rank-1 lattice based nonequispaced fast Fourier transform." Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-106489.
Matchie, Lydienne. "Cubature methods and applications to option pricing." Thesis, Stellenbosch : University of Stellenbosch, 2010. http://hdl.handle.net/10019.1/5374.
ENGLISH ABSTRACT: In this thesis, higher order numerical methods for weak approximation of solutions of stochastic differential equations (SDEs) are presented. They are motivated by option pricing problems in finance, where the price of a given option can be written as the expectation of a functional of a diffusion process. Numerical methods of order at most one have been the most used so far, and higher order methods have been difficult to perform because of the unknown density of iterated integrals of the d-dimensional Brownian motion present in the stochastic Taylor expansion. In 2001, Kusuoka constructed a higher order approximation scheme based on Malliavin calculus. The iterated stochastic integrals are replaced by a family of finitely-valued random variables whose moments up to a certain fixed order are equivalent to moments of iterated Stratonovich integrals of Brownian motion. This method has been shown to outperform the traditional Euler-Maruyama method. In 2004, this method was refined by Lyons and Victoir into Cubature on Wiener space. Lyons and Victoir extended the classical cubature method for approximating integrals in finite dimension to approximating integrals in infinite dimensional Wiener space. Since then, many authors have intensively applied these ideas, and the topic is today an active domain of research. Our work is essentially based on the recently developed higher order schemes built on ideas of the Kusuoka approximation and the Lyons-Victoir "Cubature on Wiener space", and mostly applied to option pricing. These are the Ninomiya-Victoir (N-V) and Ninomiya-Ninomiya (N-N) approximation schemes. It should be stressed here that many other applications of these schemes have been developed, among which are the Alfonsi scheme for the CIR process and the decomposition method presented by Kohatsu and Tanaka for jump-driven SDEs.
After sketching the main ideas of numerical approximation methods in Chapter 1, we start Chapter 2 by setting up some essential terminology and definitions. A discussion of the stochastic Taylor expansion based on iterated Stratonovich integrals is presented; we close this chapter by illustrating this expansion with the Euler-Maruyama approximation scheme. Chapter 3 contains the main ideas of the Kusuoka approximation scheme, and we concentrate on the implementation of the algorithm. This scheme is applied to the pricing of an Asian call option, and numerical results are presented. We start Chapter 4 by taking a look at the classical cubature formulas, after which we present in a simple way the general ideas of "Cubature on Wiener space", also known as the Lyons-Victoir approximation scheme. This is an extension of the classical cubature method. The aim of this scheme is to construct cubature formulas for approximating integrals defined on Wiener space and, consequently, to develop higher order numerical schemes. It is based on the stochastic Stratonovich expansion and can be viewed as an extension of the Kusuoka scheme. Applying the ideas of the Kusuoka and Lyons-Victoir approximation schemes, Ninomiya-Victoir and Ninomiya-Ninomiya developed new numerical schemes of order 2, in which they transformed the problem of solving an SDE into a problem of solving ordinary differential equations (ODEs). In Chapter 5, we begin with a general presentation of the N-V algorithm. We then apply this algorithm to the pricing of an Asian call option, and we also consider the optimal portfolio strategies problem introduced by Fukaya. The implementation and numerical simulation of the algorithm for these problems are performed. We find that the N-V algorithm performs significantly faster than the traditional Euler-Maruyama method. Finally, the N-N approximation method is introduced.
The idea behind this scheme is to construct an ODE-valued random variable whose average approximates the solution of a given SDE. The Runge-Kutta method for ODEs is then applied to the ODE drawn from the random variable and a linear operator is constructed. We derive the general expression for the constructed operator and apply the algorithm to the pricing of an Asian call option under the Heston volatility model.
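For contrast with the higher-order schemes discussed above, the Euler-Maruyama baseline for weak approximation of an SDE, applied to pricing an arithmetic-average Asian call under geometric Brownian motion, can be sketched as follows. This is a toy Monte Carlo illustration with made-up parameter values, not the thesis code:

```python
# Hedged sketch: weak approximation by the Euler-Maruyama scheme,
# used to price an arithmetic-average Asian call under the GBM SDE
# dS = r*S dt + sigma*S dW. All parameter values are illustrative.
import math
import random

def asian_call_euler(s0, r, sigma, strike, maturity, steps, paths, seed=1):
    random.seed(seed)
    dt = maturity / steps
    total = 0.0
    for _ in range(paths):
        s, running = s0, 0.0
        for _ in range(steps):
            dw = random.gauss(0.0, math.sqrt(dt))
            s += r * s * dt + sigma * s * dw   # Euler-Maruyama step
            running += s
        total += max(running / steps - strike, 0.0)
    # discounted Monte Carlo average of the payoff
    return math.exp(-r * maturity) * total / paths

price = asian_call_euler(s0=100, r=0.05, sigma=0.2, strike=100,
                         maturity=1.0, steps=50, paths=2000)
```

With these parameters the estimate lands near the commonly quoted benchmark value of roughly 5.8, up to Monte Carlo and discretization error; the higher-order N-V and N-N schemes aim to reach a given weak error with far fewer time steps.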
Adolfsson, David, and Tom Claesson. "Estimation methods for Asian Quanto Basket options." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160920.
Guo, Longkai. "Numerical investigation of Taylor bubble and development of phase change model." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI095.
The motion of a nitrogen Taylor bubble in glycerol-water mixed solutions rising through different types of expansions and contractions is investigated by a numerical approach. The CFD procedure is based on the open-source solver Basilisk, which adopts the volume-of-fluid (VOF) method to capture the gas-liquid interface. The results for sudden expansions/contractions are compared with experimental results and show that the simulations are in good agreement with experiments. The bubble velocity increases in sudden expansions and decreases in sudden contractions. A bubble break-up pattern is observed in sudden expansions with large expansion ratios, and a bubble blocking pattern is found in sudden contractions with small contraction ratios. In addition, the wall shear stress, the liquid film thickness, and the pressure in the simulations are studied to understand the hydrodynamics of the Taylor bubble rising through expansions/contractions. The transient process of the Taylor bubble passing through a sudden expansion/contraction is further analyzed for three different singularities: gradual, parabolic convex, and parabolic concave. A unique feature of the parabolic concave contraction is that the Taylor bubble passes through the contraction even for small contraction ratios. Moreover, a phase change model is developed in the Basilisk solver. In order to reuse the existing geometric VOF method in Basilisk, a general two-step geometric VOF method is implemented: the mass flux is not calculated in the interfacial cells but is transferred to the neighboring cells around the interface. The saturated temperature boundary condition is imposed at the interface by a ghost cell method. The phase change model is validated against droplet evaporation with a constant mass transfer rate, the one-dimensional Stefan problem, the sucking interface problem, and a planar film boiling case. The results show good agreement with analytical solutions or correlations.
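As an aside, the one-dimensional Stefan benchmark mentioned above has a classical similarity solution that phase-change models are typically validated against: the interface sits at x(t) = 2·λ·sqrt(α·t), where λ solves a transcendental equation. A minimal sketch for the one-phase formulation, with illustrative parameter values not taken from the thesis, is:

```python
# Hedged sketch of the analytical one-phase Stefan benchmark:
# interface position x(t) = 2*lam*sqrt(alpha*t), where lam solves
# lam * exp(lam^2) * erf(lam) = St / sqrt(pi) for Stefan number St.
# Parameter values are illustrative.
import math

def stefan_lambda(stefan_number, lo=1e-9, hi=5.0, iters=200):
    """Solve the transcendental equation for lam by bisection
    (the left-hand side is monotone increasing on [0, inf))."""
    f = lambda x: x * math.exp(x * x) * math.erf(x) - stefan_number / math.sqrt(math.pi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def interface_position(t, alpha, lam):
    """Interface location at time t for thermal diffusivity alpha."""
    return 2.0 * lam * math.sqrt(alpha * t)

lam = stefan_lambda(stefan_number=0.1)
x = interface_position(t=1.0, alpha=1e-5, lam=lam)
```

A simulated interface trajectory can then be compared against `interface_position` to quantify the error of the numerical phase-change model.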
Mwangota, Lutufyo. "Cubature on Wiener Space for the Heath--Jarrow--Morton framework." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42804.
Midez, Jean-Baptiste. "Une étude combinatoire du lambda-calcul avec ressources uniforme." Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4093/document.
The resource lambda-calculus is a variant of the lambda-calculus based on linearity: resource lambda-terms are to lambda-terms as polynomials are to real functions. In particular, reductions in the resource lambda-calculus can be viewed as approximations of beta-reductions. But the linearity constraint has important consequences, especially the strong normalisation of resource reduction. So to speak, beta-reduction is obtained by passage to the limit of the resource reduction which approximates it. This thesis is a study of the combinatorial aspects of the resource lambda-calculus. First, we define precisely the notion of resource reduction associated with beta-reduction: given a lambda-term t, an approximant s of t, and a beta-reduct t' of t, we associate a resource reduction (called gamma-reduction) of s which reduces the "same" redex as the beta-reduction of t, and this generates a set S' of approximants of t'. This definition allows us to find a new, more intuitive proof of one of the fundamental theorems of this theory, and it also allows us to generalize it. We then study the "family" relations between resource lambda-terms. The main question is to characterize the resource lambda-terms which are reducts of the same term. This central problem is hard and not completely resolved, but this thesis exhibits several preliminary results and lays the foundations of a theory aimed at resolving it.
Ashu, Tom A. Ashu. "Non-Smooth SDEs and Hyperbolic Lattice SPDEs Expansions via the Quadratic Covariation Differentiation Theory and Applications." Kent State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=kent1500334062680747.
Anderson, Travis V. "Efficient, Accurate, and Non-Gaussian Error Propagation Through Nonlinear, Closed-Form, Analytical System Models." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2675.
Chouquet, Jules. "Une géométrie de calcul : Réseaux de preuve, appel-par-pousse-valeur et topologie du consensus." Thesis, Université de Paris (2019-....), 2019. http://www.theses.fr/2019UNIP7015.
This PhD thesis presents a quantitative study of various computation models of fundamental computer science and proof theory, in two principal directions: the first consists in the examination of mechanisms of multilinear approximation in systems related to the λ-calculus and Linear Logic; the second consists in a study of topological models for asynchronous distributed systems and probabilistic algorithms. We first study Taylor expansion in Linear Logic proof nets. We introduce proof methods that use the geometry of cut elimination in multiplicative nets and allow infinite sums of nets to be handled in a safe and correct way, in order to extract properties about reduction. Then, we introduce a language allowing us to define Taylor expansion for Call-By-Push-Value, while capturing some properties of the denotational semantics related to coalgebra morphisms. We then focus on fault-tolerant distributed systems with shared memory and on the consensus problem. We use a topological model which allows communication to be interpreted with simplicial complexes, and we adapt it so as to transform the well-known impossibility results into lower bounds for probabilistic algorithms.
Seo, Dong-Won. "Performance analysis of queueing networks via Taylor series expansions." Diss., Georgia Institute of Technology, 2002. http://hdl.handle.net/1853/25098.
Li, Rihua. "Analysis for Taylor vortex flow." Diss., Virginia Polytechnic Institute and State University, 1986. http://hdl.handle.net/10919/53628.
Doan, Van Tu. "Modèles réduits pour des analyses paramétriques du flambement de structures : application à la fabrication additive." Thesis, Valenciennes, 2018. http://www.theses.fr/2018VALE0017/document.
The development of additive manufacturing allows structures with highly complex shapes to be produced. Complex lattice shapes are particularly interesting in the context of lightweight structures. However, although the use of this technology is growing in numerous engineering domains, it is not yet mature enough, and the correlations between experimental data and deterministic simulations are not obvious. To take the observed variations of behavior into account, multiparametric approaches are nowadays efficient solutions for tending toward robust and reliable designs. The aim of this thesis is to integrate experimentally quantified material and geometric uncertainties into buckling analyses. To achieve this objective, different surrogate models, based on regression and correlation techniques, as well as different reduced order models, were first evaluated to reduce the prohibitive computational time. The selected projections rely on modes calculated either from Proper Orthogonal Decomposition, from homotopy developments, or from Taylor series expansion. Second, the proposed mathematical model is integrated into fuzzy and probabilistic analyses to estimate the evolution of the critical buckling load for lattice structures.
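The Taylor-series flavor of such a reduced model can be sketched in miniature: replace an expensive response with its second-order Taylor expansion around a nominal design and evaluate the cheap surrogate for perturbed parameters. Here the Euler critical buckling load of a pinned column stands in for the expensive lattice model, and all values are illustrative assumptions:

```python
# Hedged sketch of a Taylor-series surrogate: a second-order expansion of a
# response around a nominal parameter value, with finite-difference
# derivatives. The Euler column formula is only a cheap stand-in for an
# expensive buckling analysis; all numbers are illustrative.
import math

def p_critical(E, I=1e-6, L=2.0):
    """Euler critical buckling load of a pinned column (stand-in model)."""
    return math.pi ** 2 * E * I / L ** 2

def taylor_surrogate(f, x0, h):
    """Second-order Taylor expansion of f around x0, using central
    finite differences for the first and second derivatives."""
    f0 = f(x0)
    df = (f(x0 + h) - f(x0 - h)) / (2 * h)
    d2f = (f(x0 + h) - 2 * f0 + f(x0 - h)) / h ** 2
    return lambda x: f0 + df * (x - x0) + 0.5 * d2f * (x - x0) ** 2

E0 = 210e9  # nominal Young's modulus (Pa), illustrative
surrogate = taylor_surrogate(p_critical, E0, h=1e6)
```

Since P_cr is linear in E, the surrogate reproduces the model almost exactly here; for a genuinely nonlinear response the surrogate is only locally accurate, which is why such expansions are paired with fuzzy or probabilistic analyses around the nominal design.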
Sun, Yong. "Reliability prediction of complex repairable systems : an engineering approach." Queensland University of Technology, 2006. http://eprints.qut.edu.au/16273/.
Semenov, Andrei. "Intertemporal utility models for asset pricing : reference levels and individual heterogeneity." Thèse, [Montréal] : Université de Montréal, 2003. http://wwwlib.umi.com/cr/umontreal/fullcit?pNQ92724.
"Thesis presented to the Faculté des études supérieures for the degree of Philosophiae Doctor (Ph.D.) in economics." Electronic version also available on the Internet.
Quintanilha, Laura de Mesquita. "Análise do modelo de fluxo de potência retangular intervalar baseado na expansão completa da série de Taylor." Universidade Federal de Juiz de Fora (UFJF), 2018. https://repositorio.ufjf.br/jspui/handle/ufjf/7554.
The power flow analysis aims to calculate the bus voltages and branch currents for a given pre-established scenario of generation and load. It is an essential tool in the operation and control of electrical power systems. In traditional analysis, the parameters are treated as deterministic quantities. However, in practice, these parameters may present uncertainties associated with measurement as well as their inherent variation over time. In addition, the growth of the participation of intermittent sources, such as wind and solar, in power grids has increased the level of uncertainty, which demands the development of specific power flow studies to deal with this data variability. In this context, this work investigates a method, published in the literature, that models the power flow subject to uncertainties associated with the active and reactive bus loads. The basic idea of this method is to carry out the complete Taylor series expansion of the power equations expressed in rectangular coordinates of the bus voltages. The method is implemented in MATLAB, considering different uncertainties applied to the IEEE 57-bus and the Brazilian 107-bus systems. The results are then compared with those generated by interval mathematics and by Monte Carlo simulation. In general, the quality of the intervals generated by the method under study is better than that of the intervals produced by interval mathematics.
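The key observation behind such a method is that power-flow equations written in rectangular voltage coordinates are quadratic, so their Taylor expansion terminates at second order and is exact, which makes interval enclosures straightforward. A toy scalar sketch of that idea, with a made-up quadratic "injection" standing in for the full multivariable equations, is:

```python
# Hedged sketch: a complete (second-order, hence exact) Taylor expansion of a
# quadratic function evaluated with interval arithmetic. The toy quadratic
# p(e) stands in for the rectangular-coordinate power equations; all numbers
# are illustrative.

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_scale(c, a):
    lo, hi = c * a[0], c * a[1]
    return (min(lo, hi), max(lo, hi))

def interval_square(a):
    cands = [a[0] * a[0], a[0] * a[1], a[1] * a[1]]
    lo = 0.0 if a[0] <= 0.0 <= a[1] else min(cands)
    return (lo, max(cands))

def p(e):
    return 3.0 * e * e - 2.0 * e + 1.0  # toy quadratic "injection"

def p_interval(e0, h):
    """Enclosure from the complete Taylor expansion around e0:
    p(e0 + h) = p(e0) + p'(e0) h + (1/2) p''(e0) h^2, with interval h."""
    dp = 6.0 * e0 - 2.0   # p'(e0)
    d2p = 6.0             # p''(e0), constant for a quadratic
    out = (p(e0), p(e0))
    out = interval_add(out, interval_scale(dp, h))
    out = interval_add(out, interval_scale(0.5 * d2p, interval_square(h)))
    return out

lo, hi = p_interval(e0=1.0, h=(-0.1, 0.1))
```

Up to floating-point rounding, the resulting interval encloses p(e) for every e in [0.9, 1.1]; because no remainder term is truncated, the enclosure is typically tighter than naive interval evaluation of the original expression.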
Chiou, Sheng-Yu (邱昇昱). "Approximation for Quantile Using Taylor Expansion." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/61867953366538993196.
National Sun Yat-sen University
Graduate Institute of Applied Mathematics
Academic year 100 (2011-2012)
The quantile is a basic and important quantity of a random variable. For some distributions, the quantiles have closed-form expressions. For many continuous distributions, however, closed-form expressions of the quantiles do not exist. Yu and Zelterman (2011) and Chang (2004) have proposed approximations of quantiles. In this paper, we propose an improved method that combines the Taylor expansion with Newton's method. Some examples are given to compare the computing time of the proposed method with the methods of Yu and Zelterman (2011) and Chang (2004).
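As an illustration of the kind of scheme discussed (not the thesis' exact algorithm), Newton's method applied to F(q) = p iterates q ← q - (F(q) - p)/f(q), using the density as the derivative of the CDF; here it is shown for the standard normal distribution:

```python
# Hedged sketch: Newton's method for the quantile equation F(q) = p,
# illustrated on the standard normal distribution. The starting point and
# tolerances are illustrative choices.
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def normal_quantile(p, q0=0.0, tol=1e-10, max_iter=100):
    """Newton iteration q <- q - (F(q) - p) / f(q)."""
    q = q0
    for _ in range(max_iter):
        step = (normal_cdf(q) - p) / normal_pdf(q)
        q -= step
        if abs(step) < tol:
            break
    return q

q = normal_quantile(0.975)   # close to the familiar value 1.95996...
```

A Taylor expansion of F around the current iterate gives exactly this update at first order; keeping a second-order term yields a Halley-type correction that typically converges in fewer iterations.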
Askar, Serkan. "Behavioral synthesis using Taylor expansion diagrams." 2006. https://scholarworks.umass.edu/dissertations/AAI3242096.
Full text
Ren, Qian. "Optimizing behavioral transformations using Taylor Expansion Diagrams." 2008. https://scholarworks.umass.edu/dissertations/AAI3325111.
Full text
Wu, Yi-Hao, and 吳逸豪. "On the Immunization Effect of the Nth Order Taylor Series Expansion of Bond Price." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/80457861770856532347.
Full text
National Taiwan University
Department of Finance
86
The immunization strategy is extensively used to protect the nominal value of a portfolio against interest rate changes, and is particularly suitable for pension funds and insurance companies. Given various types of the term structure of interest rates, this paper studies how effective the following strategies are at immunizing the interest rate risk of a bond portfolio: Duration Matching, M-Absolute, M-Square, and M-Vector. Taiwan government bond data is used to investigate the immunization effects of these strategies. The evidence indicates that actual portfolio values deviate from target values under different strategies. These results guide the appropriate application of the immunization strategies for bond portfolio managers. This paper also shows how to manage and hedge bond portfolios, and hence addresses the following practical problems. First, how does an asset manager construct a bond portfolio and hedge the expected return for the cash outflow? Second, how does the portfolio manager apply operations research to construct the bond portfolio? Third, how does the portfolio manager explain the properties of the constructed portfolio under different immunization strategies? Finally, can the portfolio manager construct the optimal bond portfolio by simultaneously taking into account the hedging effectiveness, yield to maturity, and construction cost of the portfolio? The evidence indicates that, due to the time-varying characteristics of the interest rate, it is impossible to perfectly hedge the risk of the bond price with traditional duration matching. For the sake of immunization effectiveness, management cost, and convenience, it is better to execute the M-Absolute or M-Square strategies than duration matching.
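A minimal sketch of the Taylor-expansion view behind these strategies: expanding the bond price P(y) around the current yield gives duration (first-order) and convexity (second-order) terms, and higher-order strategies extend this further. The 10-year 5%-coupon bond below is an illustrative toy, not data from the thesis.

```python
# Toy bond: annual cash flows (time in years, amount per 100 face value).
cfs = [(t, 5.0) for t in range(1, 10)] + [(10, 105.0)]

def price(y):
    return sum(c / (1 + y) ** t for t, c in cfs)

y0, dy = 0.05, 0.01
P0 = price(y0)                                   # prices at par (= 100)

# Modified duration (-P'/P) and convexity (P''/P) from the Taylor terms.
dur = sum(t * c / (1 + y0) ** (t + 1) for t, c in cfs) / P0
conv = sum(t * (t + 1) * c / (1 + y0) ** (t + 2) for t, c in cfs) / P0

approx1 = P0 * (1 - dur * dy)                        # first order only
approx2 = P0 * (1 - dur * dy + 0.5 * conv * dy**2)   # adds convexity term
print(abs(approx2 - price(y0 + dy)) < abs(approx1 - price(y0 + dy)))
```

Adding the second-order term visibly shrinks the hedging error, which is the motivation for M-Square-type strategies over pure duration matching.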
Zhang, Liangliang. "Essays on numerical solutions to forward-backward stochastic differential equations and their applications in finance." Thesis, 2017. https://hdl.handle.net/2144/26430.
Full text
CHIU, SHAO-WEN, and 邱紹文. "Assessment of The Overall Survival Rate of Cancer Using A Revised Taylor Expansion Algorithm- A Taiwanese Population-Based Survey for Breast And Colon Cancers." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/3y7taq.
Full text
Central Taiwan University of Science and Technology
Department and Graduate Institute of Medical Imaging and Radiological Science
107
This paper explores overall survival (OS) analysis using the Taylor expansion algorithm to give different interpretations of the survival curve under different stages of cancer with different treatments. Breast and colon cancers were selected as examples for in-depth discussion. The proposed approach is based on the theory of the hit-target model and makes use of the Taylor expansion algorithm applied to a Taiwanese population-based survey. Overall survival (OS) is the most commonly used assessment indicator for survival analysis, also called the observed survival rate. This indicator takes death as the event in order to focus on the time duration of a case from diagnosis to death (regardless of the cause of death). Since life extension is the primary goal of cancer treatment, this assessment indicator is recognized as the most significant proof of clinical efficacy. The survival rate analysis in this paper adopted the Kaplan-Meier (KM) method, also known as the “product-limit” method, to estimate the survival rate in a series of intervals according to the actual survival time, along with the revised Taylor expansion algorithm, which exploits the characteristics of the exponential function to calculate, with the support of big-data statistics, how many survival years radiotherapy adds compared to cases without radiotherapy. This method allows the general public to quickly understand the necessity of treatment. The Taylor expansion algorithm is a polynomial approximation, that is, a function is represented by a polynomial. Using the lethal frequency [α, yr⁻¹] (i.e., survival rate = exp(-αt)), the specific year at which the preset survival curve falls to 0.37 is identified, and α is the reciprocal of that year. The interpretation of the lethal frequency is similar to that of the decay constant in radionuclide decay. The characteristics of the Taylor expansion algorithm are used to discuss the variation in the survival rate of breast cancer patients and to predict their positive response to radiotherapy.
If the death of breast cancer patients is regarded as the decay of a radionuclide, the survival curve can be viewed as a decay curve, which makes it quite reasonable to interpret the breast cancer data in this way. Among stage 0-IV breast cancer patients who received radiotherapy, the lethal frequencies were {0.0029, 0.0066, 0.0178, 0.0475, 0.1785}, respectively; conversely, for those without radiotherapy, the lethal frequencies were {0.0072, 0.0137, 0.0264, 0.0913, 0.2425}. The average life expectancy of stage II breast cancer patients was 37.8 years without radiotherapy and increased to 56.3 years with radiotherapy; for stage III patients, the average life expectancy was 11.0 years without radiotherapy and rose to 21.0 years with it. This means that breast cancer patients who received radiotherapy were able to increase their average life expectancy. However, the average life expectancy of stage IV breast cancer patients did not increase significantly, which may be due to the fact that radiotherapy is not the main treatment method at that stage. A high lethal frequency means a low survival rate and a short average life expectancy. Based on the revised Taylor expansion algorithm, a modified calculation method is proposed to predict the survival rate of breast cancer patients at different stages. The algorithm improves the accuracy of survival rate prediction by adding a term for the patient’s recovery from therapy to the original survival rate regression. The same method was applied to assess the effect for stage 0-IV colon cancer patients with or without surgery. For patients with surgery, the lethal frequencies were {0.029, 0.036, 0.058, 0.077, 0.236} and the average life expectancies were {34.5, 27.8, 17.2, 13.0, 4.2} years; on the contrary, for patients without surgery, the lethal frequencies were {0.116, 0.181, 0.256, 0.203, 0.504} and the average life expectancies were {8.6, 5.5, 3.9, 4.9, 2.0} years.
It is obvious that colon cancer patients with surgery gain about twice the average life expectancy of those without surgery, and the shoulder effect can be seen. In conclusion, the four recovery patterns, for breast cancer patients with or without radiotherapy and colon cancer patients with or without surgery, follow the hit-target model. The shoulder effect indicates a good response to the treatment and hence an increased survival rate.
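A small sketch of the exponential survival model described above: with S(t) = exp(-αt), the curve falls to e⁻¹ ≈ 0.37 at t = 1/α, and the mean life expectancy is also 1/α, exactly as for radionuclide decay. The α values below are the stage II breast-cancer lethal frequencies quoted in the abstract.

```python
import math

def survival(alpha, t):
    # Exponential survival curve S(t) = exp(-alpha * t).
    return math.exp(-alpha * t)

def mean_life(alpha):
    # Mean life expectancy of an exponential model is 1/alpha.
    return 1.0 / alpha

# Stage II breast cancer lethal frequencies (yr^-1) from the abstract.
a_with_rt, a_without_rt = 0.0178, 0.0264

print(round(mean_life(a_with_rt), 1))      # ~56.2 years with radiotherapy
print(round(mean_life(a_without_rt), 1))   # ~37.9 years without
# At t = 1/alpha the survival rate is e^-1 ~ 0.37, matching the text.
print(round(survival(a_without_rt, mean_life(a_without_rt)), 2))
```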
Gaikwad, Akash S. "Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment." Thesis, 2018. http://hdl.handle.net/1805/17923.
Full text
In recent years, deep learning models have become popular in real-time embedded applications, but hardware deployment presents many complexities because of limited resources such as memory, computational power, and energy. Recent research in the field of deep learning focuses on reducing the model size of the Convolution Neural Network (CNN) through various compression techniques like architectural compression, pruning, quantization, and encoding (e.g., Huffman encoding). Network pruning is one of the promising techniques for solving these problems. This thesis proposes methods to prune a convolution neural network (SqueezeNet) without introducing network sparsity in the pruned model. Three methods are proposed to decrease the model size of the CNN without a significant drop in accuracy: (1) pruning based on the Taylor expansion of the change in the cost function, ΔC; (2) pruning based on L2 normalization of activation maps; and (3) pruning based on a combination of methods 1 and 2. The proposed methods use various ranking criteria to rank the convolution kernels and prune the lower-ranked filters; afterwards, the SqueezeNet model is fine-tuned by backpropagation. A transfer learning technique is used to train SqueezeNet on the CIFAR-10 dataset. Results show that the proposed approach reduces the SqueezeNet model by 72% without a significant drop in accuracy (the optimal pruning efficiency result). Results also show that pruning based on a combination of the Taylor expansion of the cost function and L2 normalization of activation maps achieves better pruning efficiency than either individual criterion, and that most of the pruned kernels are from mid- and high-level layers. The pruned model is deployed on BlueBox 2.0 using RTMaps software and its performance is evaluated.
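A hedged sketch of the first criterion above: the first-order Taylor estimate of the change in cost when a filter's activation map a is zeroed is |ΔC| ≈ |Σ a · ∂C/∂a|. The arrays here are random stand-ins for the activations and gradients a real framework (e.g. PyTorch hooks) would supply; the shapes and pruning count are illustrative, not from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_filters, h, w = 8, 4, 4
activations = rng.normal(size=(n_filters, h, w))   # stand-in feature maps
gradients = rng.normal(size=(n_filters, h, w))     # stand-in dC/d(activation)

# First-order Taylor importance score per filter, then L2-normalize the
# score vector (a common practice so scores are comparable across layers).
scores = np.abs((activations * gradients).sum(axis=(1, 2)))
scores = scores / np.linalg.norm(scores)

prune_count = 3
pruned = np.argsort(scores)[:prune_count]   # lowest-importance filters first
print(sorted(pruned.tolist()))
```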
Chen, Bing Chang, and 陳炳嘗. "The Taylor expansions of Jacobi forms over the Cayley numbers." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/89707241756026849180.
Full text
Ellis, Trenton Blake. "A subsurface investigation in Taylor clay." Thesis, 2011. http://hdl.handle.net/2152/ETD-UT-2011-08-3973.
Full text
Meng, Jin. "Coding Theorems via Jar Decoding." Thesis, 2013. http://hdl.handle.net/10012/7359.
Full text
(5931047), Akash Gaikwad. "Pruning Convolution Neural Network (SqueezeNet) for Efficient Hardware Deployment." Thesis, 2019.
Find full text
Hsiang, Te-Cheng, and 向德成. "Base on the truncated 6-order Taylor series expansions to implement ROM-less DDFS." Thesis, 2008. http://ndltd.ncl.edu.tw/handle/51853765444635929073.
Full text
Yuan Ze University
Department of Communications Engineering
96
A ROM-less direct digital frequency synthesizer (DDFS) architecture is presented. It uses Taylor series expansions for the cosine function and is compared with existing systems in terms of spurious-free dynamic range (SFDR) and error. To both maximize the SFDR and minimize the error, we utilize the truncated 6th-order Taylor series expansion together with the minimum square error method. The synthesizer achieves a wide SFDR of 100 dB and an error of less than 3.1×10⁻⁴.
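A minimal sketch of the phase-to-amplitude idea: a ROM-less DDFS can evaluate the truncated 6th-order Taylor polynomial cos(x) ≈ 1 − x²/2! + x⁴/4! − x⁶/6! in hardware instead of looking the value up in a ROM. Restricting the evaluation to a reduced phase range (here [−π/4, π/4]; the full circle follows by quadrant symmetry) keeps the truncation error well under the bound quoted above. The range choice is illustrative, not taken from the thesis.

```python
import math

def cos_taylor6(x):
    # Truncated 6th-order Taylor polynomial for cosine, evaluated with
    # only multiplies and adds (the hardware-friendly form).
    x2 = x * x
    return 1.0 - x2 / 2.0 + x2 * x2 / 24.0 - x2 * x2 * x2 / 720.0

# Worst-case truncation error over the reduced phase range [-pi/4, pi/4].
worst = max(abs(cos_taylor6(x) - math.cos(x))
            for x in [i * (math.pi / 4) / 1000 for i in range(-1000, 1001)])
print(worst < 3.1e-4)   # True: well inside the quoted error bound
```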
Reetz, Sunny W. H. "Effects of Land Use, Market Integration, and Poverty on Tropical Deforestation: Evidence from Forest Margins Areas in Central Sulawesi, Indonesia." Doctoral thesis, 2012. http://hdl.handle.net/11858/00-1735-0000-000D-EF46-1.
Full text