
Dissertations / Theses on the topic 'MYT decomposition'



Consult the top 49 dissertations / theses for your research on the topic 'MYT decomposition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

GALUPPI, FRANCESCO. "Waring decompositions via degenerations." Doctoral thesis, Università degli studi di Ferrara, 2018. http://hdl.handle.net/11392/2488103.

Full text
Abstract:
The Waring problem was first stated as a number theory problem. As an attempt to generalize Lagrange's four-squares theorem, Waring stated that every number can be decomposed as a sum of 9 cubes, 14 fourth powers, and so on. This question can be extended to homogeneous polynomials, asking when a degree d form f admits a Waring decomposition as a sum of d-th powers of linear forms. Many different questions arise in this context, but our main interest is identifiability: f is identifiable if it admits a unique decomposition. There are classical theorems proving that the general degree d form in n+1 indeterminates is identifiable for specific values of n and d, and it is a challenging task to find all pairs (n, d) with this property. My thesis contains a classification of all such pairs; in particular, we prove that the general f is never identifiable, except for the few classically known cases. Identifiability can also be investigated for two or more polynomials, generalizing the notion of simultaneous diagonalization of two square matrices. A computational approach allowed us to find a new example: the general triple of ternary forms of degrees 3, 3 and 4 admits a unique simultaneous decomposition. Moreover, we extend another classical result about ternary forms. A theorem by Roberts states that a general plane conic and cubic are simultaneously identifiable, and we prove that this is the only generically identifiable case when the difference of the degrees is 1. We also give a lower bound on the number of decompositions. We use a geometric interpretation of generic identifiability in terms of the secant variety of the Veronese variety. One of the advantages of this point of view is that the uniqueness of the decomposition implies the birationality of a certain tangential projection, so in order to disprove identifiability it is enough to show that the degree of the map is greater than 1. For this reason we work with the associated linear system. This topic is widely studied, and we could use different techniques, in particular degenerations. The study of such degenerations led us to consider flat limits of 0-dimensional subschemes of projective space. Unlike the standard specialization approach, we find it convenient to consider the collision of some of the fat points. This yields the new problem of fully understanding and describing such a limit scheme. However, once this is done, we have a new degeneration tool which proves useful to our goal.
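For readers new to the terminology, the following display (a toy illustration added here, not taken from the thesis) shows what a Waring decomposition looks like in the simplest non-trivial case:

```latex
% A Waring decomposition writes a degree-d form as a sum of d-th powers of linear forms:
f \;=\; \sum_{i=1}^{r} \lambda_i \, \ell_i^{\,d},
\qquad\text{e.g.}\qquad
xy \;=\; \tfrac{1}{4}\,(x+y)^{2} \;-\; \tfrac{1}{4}\,(x-y)^{2}.
```

The smallest r for which such an expression exists is the Waring rank of f, and identifiability asks whether the minimal expression is unique.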
APA, Harvard, Vancouver, ISO, and other styles
2

Parriani, Tiziano <1984>. "Decomposition Methods and Network Design Problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6551/1/Thesis_Parriani.pdf.

Full text
Abstract:
Decomposition-based approaches are recalled from the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated versus disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve the master. This trade-off is explored for several sets of instances and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality by using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation with public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible for every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
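Since the route assignment part of this abstract hinges on the Frank-Wolfe algorithm, here is a minimal generic sketch of a Frank-Wolfe loop on a toy problem (the function names, the simplex oracle and the step-size rule are illustrative assumptions, not the thesis's traffic-assignment code):

```python
import numpy as np

def frank_wolfe(grad, linear_oracle, x0, n_iter=100):
    """Generic Frank-Wolfe: at each step move toward the minimiser of the
    linearised objective over the feasible set (given by an oracle)."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        s = linear_oracle(grad(x))        # argmin over the feasible set of <grad(x), s>
        gamma = 2.0 / (k + 2.0)           # standard diminishing step size
        x = x + gamma * (s - x)
    return x

# toy example: minimise ||x - c||^2 over the probability simplex
c = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2 * (x - c)
simplex_oracle = lambda g: np.eye(len(g))[np.argmin(g)]   # best vertex of the simplex
print(frank_wolfe(grad, simplex_oracle, np.array([1.0, 0.0, 0.0])))   # approximately [0.2, 0.5, 0.3]
```

Each iteration only requires a linear minimization oracle over the feasible set, which is what makes the method practical for large route assignment problems.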
APA, Harvard, Vancouver, ISO, and other styles
3

Parriani, Tiziano <1984>. "Decomposition Methods and Network Design Problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6551/.

Full text
Abstract:
Decomposition-based approaches are recalled from the primal and the dual points of view. The possibility of building partially disaggregated reduced master problems is investigated. This extends the idea of aggregated versus disaggregated formulations to a gradual choice among alternative levels of aggregation. Partial aggregation is applied to the linear multicommodity minimum-cost flow problem. The possibility of having only partially aggregated bundles opens a wide range of alternatives with different trade-offs between the number of iterations and the computation required to solve the master. This trade-off is explored for several sets of instances and the results are compared with those obtained by directly solving the natural node-arc formulation. An iterative solution process for the route assignment problem is proposed, based on the well-known Frank-Wolfe algorithm. In order to provide a first feasible solution to the Frank-Wolfe algorithm, a linear multicommodity min-cost flow problem is solved to optimality by using the decomposition techniques mentioned above. Solutions of this problem are useful for network orientation and design, especially in relation with public transportation systems such as Personal Rapid Transit. A single-commodity robust network design problem is addressed: an undirected graph with edge costs is given together with a discrete set of balance matrices representing different supply/demand scenarios, and the goal is to determine the minimum-cost installation of capacities on the edges such that the flow exchange is feasible for every scenario. A set of new instances that are computationally hard for the natural flow formulation is solved by means of a new heuristic algorithm. Finally, an efficient decomposition-based heuristic approach for a large-scale stochastic unit commitment problem is presented. The addressed real-world stochastic problem employs at its core a deterministic unit commitment planning model developed by the California Independent System Operator (ISO).
APA, Harvard, Vancouver, ISO, and other styles
4

Paronuzzi, Paolo <1989>. "Models and algorithms for decomposition problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amsdottorato.unibo.it/9330/1/Thesis_Paronuzzi.pdf.

Full text
Abstract:
This thesis deals with decomposition both as a solution method and as a problem in itself. A decomposition approach can be very effective for mathematical problems presenting a specific structure, in which the associated coefficient matrix is sparse and block-diagonalizable. However, this kind of structure may not be evident from the most natural formulation of the problem, so the coefficient matrix may be preprocessed by solving a structure detection problem in order to understand whether a decomposition method can successfully be applied. Thus, this thesis deals with the k-Vertex Cut problem, that is, the problem of finding the minimum subset of nodes whose removal disconnects a graph into at least k components; it models relevant applications in matrix decomposition for solving systems of equations by parallel computing. The capacitated k-Vertex Separator problem, instead, asks to find a subset of vertices of minimum cardinality whose deletion disconnects a given graph into at most k shores, where the size of each shore must not be larger than a given capacity value. This problem is also of great importance for matrix decomposition algorithms. The thesis also addresses Chance-Constrained Mathematical Programs, a significant example in which decomposition techniques can be successfully applied. This is a class of stochastic optimization problems in which the feasible region depends on the realization of a random variable and the solution must optimize a given objective function while belonging to the feasible region with a probability above a given value. In this thesis, a decomposition approach for this problem is introduced. The thesis also addresses the Fractional Knapsack Problem with Penalties, a variant of the knapsack problem in which items can be split at the expense of a penalty depending on the fractional quantity.
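As a point of reference for the k-Vertex Cut problem, the classical case k = 2 (the minimum vertex cut) can be computed with standard tools; a small sketch with a toy graph chosen here for illustration (the thesis, of course, studies the general k and the capacitated variant):

```python
import networkx as nx
from itertools import combinations

# toy graph: two 4-cliques linked only by the edges (2, 4) and (3, 5)
G = nx.Graph()
G.add_edges_from(combinations(range(4), 2))        # clique on {0, 1, 2, 3}
G.add_edges_from(combinations(range(4, 8), 2))     # clique on {4, 5, 6, 7}
G.add_edges_from([(2, 4), (3, 5)])

cut = nx.minimum_node_cut(G)   # smallest vertex set whose removal disconnects G
print(cut)                     # a 2-element set, e.g. {2, 3} or {4, 5}
```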
APA, Harvard, Vancouver, ISO, and other styles
5

Brandoni, Domitilla <1994>. "Tensor-Train decomposition for image classification problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amsdottorato.unibo.it/10121/3/phd_thesis_DomitillaBrandoni_final.pdf.

Full text
Abstract:
In recent years a great effort has been put into the development of new techniques for automatic object classification, also because of its impact on many applications such as medical imaging or driverless cars. To this end, several mathematical models have been developed, from logistic regression to neural networks. A crucial aspect of these so-called classification algorithms is the use of algebraic tools to represent and approximate the input data. In this thesis, we examine two different models for image classification based on a particular tensor decomposition named the Tensor-Train (TT) decomposition. The use of tensor approaches preserves the multidimensional structure of the data and the neighboring relations among pixels. Furthermore, the Tensor-Train, differently from other tensor decompositions, does not suffer from the curse of dimensionality, making it an extremely powerful strategy when dealing with high-dimensional data. It also allows data compression when combined with truncation strategies that reduce memory requirements without spoiling classification performance. The first model we propose is based on a direct decomposition of the database by means of the TT decomposition to find basis vectors used to classify a new object. The second model is a tensor dictionary learning model, based on the TT decomposition, where the terms of the decomposition are estimated using a proximal alternating linearized minimization algorithm with a spectral stepsize.
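For background, a minimal numpy sketch of the standard TT-SVD construction of a Tensor-Train decomposition (the textbook algorithm due to Oseledets; the helper names and the truncation threshold eps are choices made for this note, and this is not the classification model of the thesis):

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Textbook TT-SVD: split a d-way array into a list of 3-way TT cores
    by successive truncated SVDs of unfoldings."""
    dims = tensor.shape
    cores, c, r_prev = [], tensor.copy(), 1
    for k in range(len(dims) - 1):
        c = c.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(c, full_matrices=False)
        r = max(1, int(np.sum(S > eps)))          # keep singular values above eps
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        c, r_prev = S[:r, None] * Vt[:r, :], r
    cores.append(c.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract the TT cores back into the full tensor (for checking)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.reshape(full.shape[1:-1])

A = np.random.rand(4, 5, 6)
cores = tt_svd(A)
print(np.allclose(tt_to_full(cores), A))   # True (no effective truncation at this eps)
```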
APA, Harvard, Vancouver, ISO, and other styles
6

Furini, Fabio <1982>. "Decomposition and reformulation of integer linear programming problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2011. http://amsdottorato.unibo.it/3593/.

Full text
Abstract:
This thesis investigates decomposition and reformulation for solving Integer Linear Programming problems. This method is often a very successful approach computationally, producing high-quality solutions for well-structured combinatorial optimization problems like vehicle routing, cutting stock, p-median and generalized assignment. However, until now the method has always been tailored to the specific problem under investigation. The principal innovation of this thesis is to develop a new framework able to apply this concept to a generic MIP problem. The new approach is thus capable of auto-decomposition and auto-reformulation of the input problem, applicable as a black-box solver and working as a complement and alternative to the usual solution techniques. The idea of decomposing and reformulating (usually called in the literature Dantzig-Wolfe decomposition, DWD) is, given a MIP, to convexify one or more subsets of constraints (the slaves) and to work on the partially convexified polyhedra obtained. For a given MIP several decompositions can be defined, depending on which sets of constraints we want to convexify. In this thesis we mainly reformulate MIPs using two sets of variables: the original variables and the extended variables (representing the exponentially many extreme points). The master constraints consist of the original constraints not included in any slave, plus the convexity constraint(s) and the linking constraints (ensuring that each original variable can be viewed as a linear combination of extreme points of the slaves). The solution procedure consists of iteratively solving the reformulated MIP (the master) and checking (pricing) whether a variable with negative reduced cost exists, in which case it is added to the master, which is then solved again (column generation); otherwise the procedure stops. The advantage of using DWD is that the reformulated relaxation gives bounds stronger than the original LP relaxation; in addition, it can be incorporated in a branch-and-bound scheme (branch-and-price) in order to solve the problem to optimality. If the computational time for the pricing problem is reasonable, this leads in practice to a strong speed-up in the solution time, especially when the convex hull of the slaves is easy to compute, usually because of its special structure.
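To make the reformulation concrete, the standard Dantzig-Wolfe scheme can be written as follows (generic textbook notation chosen for this note, assuming for brevity a bounded slave polyhedron so that only extreme points appear):

```latex
% Original compact MIP and its Dantzig-Wolfe master:
\min\ c^{\top}x \quad \text{s.t.}\quad Ax \ge b,\ \ Dx \ge d,\ \ x \in \mathbb{Z}^{n}_{\ge 0}
\qquad\leadsto\qquad
\min\ \sum_{p \in P} \bigl(c^{\top}x^{p}\bigr)\lambda_{p}
\quad \text{s.t.}\quad
\sum_{p \in P} \bigl(Ax^{p}\bigr)\lambda_{p} \ge b,\qquad
\sum_{p \in P} \lambda_{p} = 1,\qquad \lambda_{p} \ge 0 .
```

Here Ax >= b plays the role of the master constraints, Dx >= d is the convexified slave with extreme points x^p, and the linking relation x = sum_p lambda_p x^p ties the original variables to the extended ones; pricing searches for an extreme point whose column has negative reduced cost.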
APA, Harvard, Vancouver, ISO, and other styles
7

Felisetti, Camilla <1990>. "Two applications of the decomposition theorem to moduli spaces." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amsdottorato.unibo.it/8681/1/felisetti_camilla_tesi.pdf.

Full text
Abstract:
The decomposition theorem is a statement about the (derived) direct image of the intersection cohomology by an algebraic projective map. The decomposition theorem and more generally the theory of perverse sheaves have found many interesting applications, especially in representation theory. Usually a lot of work is needed to apply it in concrete situations, to identify the various summands. This thesis proposes two applications of the decomposition theorem. In the first we consider the moduli space of Higgs bundles of rank 2 and degree 0 over a curve of genus 2. Applying the decomposition theorem, we are able to compute the weight polynomial of the intersection cohomology of this moduli space. The second result contained in this thesis is concerned with the general problem of determining the support of a map, and is therefore in line with the "support theorem" by Ngo. We consider families C → B of integral curves with at worst planar singularities, and the relative "nested" Hilbert scheme C^[m,m+1]. Applying the technique of higher discriminants, recently developed by Migliorini and Shende, we prove that in this case there are no supports other than the whole base B of the family. Along the way we investigate smoothness properties of C^[m,m+1], which may be of interest on their own.
APA, Harvard, Vancouver, ISO, and other styles
8

Delorme, Maxence <1989>. "Mathematical Models and Decomposition Algorithms for Cutting and Packing Problems." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amsdottorato.unibo.it/7828/1/Mathematical%20Models%20and%20Decomposition%20Algorithms%20for%20Cutting%20and%20Packing%20Problems.pdf.

Full text
Abstract:
In this thesis, we provide (or review) new and effective algorithms based on Mixed-Integer Linear Programming (MILP) models and/or decomposition approaches to solve exactly various cutting and packing problems. The first three contributions deal with the classical bin packing and cutting stock problems. First, we propose a survey on the problems, in which we review more than 150 references, implement and computationally test the most common methods used to solve the problems (including branch-and-price, constraint programming (CP) and MILP), and successfully propose new instances that are difficult to solve in practice. Then, we introduce the BPPLIB, a collection of codes, benchmarks, and links for the two problems. Finally, we study in detail the main MILP formulations that have been proposed for the problems, provide a clear picture of the dominance and equivalence relations that exist among them, and introduce reflect, a new pseudo-polynomial formulation that achieves state-of-the-art results for both problems and some variants. The following three contributions deal with two-dimensional packing problems. First, we propose a method using logic-based Benders decomposition for the orthogonal stock cutting problem and some extensions. We solve the master problem through an MILP model, while CP is used to solve the slave problem. Computational experiments on classical benchmarks from the literature show the effectiveness of the proposed approach. Then, we introduce TwoBinGame, a visual application we developed for students to interactively solve two-dimensional packing problems, and analyze the results obtained by 200 students. Finally, we study a complex optimization problem that originates from the packaging industry, which combines cutting and scheduling decisions. For its solution, we propose mathematical models and heuristic algorithms that involve a non-trivial decomposition method. In the last contribution, we study and strengthen various MILP and CP approaches for three project scheduling problems.
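For reference, the textbook assignment formulation of bin packing that such surveys start from can be stated in a few lines of PuLP (toy data; this is a generic illustration, not one of the new models introduced in the thesis):

```python
from pulp import LpProblem, LpVariable, lpSum, LpMinimize, LpBinary

sizes = [4, 8, 1, 4, 2, 1]   # toy item sizes
C = 10                       # bin capacity
n = len(sizes)

prob = LpProblem("bin_packing", LpMinimize)
y = [LpVariable(f"y_{j}", cat=LpBinary) for j in range(n)]                           # bin j opened
x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n)] for i in range(n)]   # item i in bin j

prob += lpSum(y)                                                     # minimise the number of open bins
for i in range(n):
    prob += lpSum(x[i][j] for j in range(n)) == 1                    # assign every item exactly once
for j in range(n):
    prob += lpSum(sizes[i] * x[i][j] for i in range(n)) <= C * y[j]  # respect bin capacity
prob.solve()
print(sum(int(v.value()) for v in y))   # optimal number of bins (2 for this data)
```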
APA, Harvard, Vancouver, ISO, and other styles
9

VON DER OHE, ULRICH. "Toward a structural theory of learning algebraic decompositions." Doctoral thesis, Università degli studi di Genova, 2021. http://hdl.handle.net/11567/1045060.

Full text
Abstract:
We propose a framework generalizing several variants of Prony's method and explaining their relations. These methods are suitable for determining the support of linear combinations, in particular in vector spaces of functions, from evaluations. They are based on suitable sequences of linear maps (resp. their matrices) and include Hankel and Toeplitz variants of Prony's method for the decomposition of multivariate exponential sums, polynomials (w.r.t. the monomial and Chebyshev bases), Gaussian sums and spherical harmonic sums, also taking into account whether they have their support on an algebraic set.
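For orientation, a compact numpy sketch of the classical one-dimensional Prony method, the simplest member of the family that the proposed framework generalizes (variable names and the least-squares recovery of the coefficients are illustrative choices made for this note):

```python
import numpy as np

def prony(samples, r):
    """Classical 1-D Prony method: recover nodes z_i and coefficients c_i
    from samples f(j) = sum_i c_i * z_i**j, j = 0, ..., 2r-1."""
    f = np.asarray(samples, dtype=complex)
    # Hankel system: sum_{k<r} p_k f(j+k) = -f(j+r) for j = 0, ..., r-1
    H = np.array([f[j:j + r] for j in range(r)])
    p = np.linalg.solve(H, -f[r:2 * r])
    # roots of the Prony polynomial z^r + p_{r-1} z^{r-1} + ... + p_0 are the nodes
    nodes = np.roots(np.concatenate(([1.0], p[::-1])))
    # coefficients from the transposed Vandermonde system V c = f
    V = np.vander(nodes, N=2 * r, increasing=True).T
    coeffs, *_ = np.linalg.lstsq(V, f[:2 * r], rcond=None)
    return nodes, coeffs

samples = [1 * 2**j + 1 * 3**j for j in range(4)]   # f(j) = 2^j + 3^j
nodes, coeffs = prony(samples, 2)
print(np.sort(nodes.real), coeffs.real)             # nodes ~ [2, 3], coefficients ~ [1, 1]
```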
APA, Harvard, Vancouver, ISO, and other styles
10

Girardi, Nicola. "Regular biproduct decompositions of objects." Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3422119.

Full text
Abstract:
This thesis mainly pertains to biproduct decompositions of objects in certain additive categories that exhibit a peculiar regular behaviour. More precisely, in certain additive categories, a biproduct of objects $\{X_i\}_{i<r}$ is completely determined up to isomorphism by a list of invariants $([X_i]_{\equiv_\mu})_{i<r,\mu<n}$, where $\{\equiv_\mu\}_{\mu<n}$ are suitable equivalence relations (n-Krull-Schmidt Theorem). In the first chapter we introduce prerequisite notions that enable us to extend results regarding certain module categories to suitable preadditive categories: the Jacobson radical of a preadditive category and ideals associated to ideals of endomorphism rings (a subject of research by Facchini and Prihoda), the universal embedding of a preadditive category into an additive category, and the universal embedding of an additive category into an idempotent-complete additive category. We give a version of the Chinese Remainder Theorem for preadditive categories, extrapolated from results of Facchini and Perone and generalised, and we provide an improved version of the classical Krull-Schmidt Theorem which is the starting point of later developments. Semilocal rings and categories are reviewed in the second chapter, and their relationship with the notion of dual Goldie dimension is explained. The third chapter also deals with prerequisites; namely, we try to give a careful review of the theory of the Auslander-Bridger transpose. In the fourth chapter we generalise Warfield's results on finitely presented modules over semiperfect rings to Auslander-Bridger modules, a more general class of modules over arbitrary rings. We show how such modules are characterised by two invariants and how these invariants are interchanged by the Auslander-Bridger transpose. The fifth chapter culminates in a criterion for the aforementioned n-Krull-Schmidt Theorem to hold in a given additive category, and we give some concrete examples in the case of categories of modules, such as artinian modules with prescribed heterogeneous socle, and quiver representations. The case n=2 of this theorem has long been known as the 'Weak Krull-Schmidt Theorem' and has been proved over the years for various classes of modules. One of these, the class of couniformly presented modules, is dealt with in a more elementary way in the sixth chapter.
APA, Harvard, Vancouver, ISO, and other styles
11

Nguyen, Hong Thuy. "The algebraic representation of OWA functions in the binomial decomposition framework and its applications in large-scale problems." Doctoral thesis, Università degli studi di Trento, 2019. https://hdl.handle.net/11572/367977.

Full text
Abstract:
In the context of multicriteria decision making, the ordered weighted averaging (OWA) functions play a crucial role in aggregating multiple criteria evaluations into an overall assessment to support decision makers in reaching a decision. The determination of OWA weights is, therefore, an important task in this process. Solving real-life problems with a large number of OWA weights, however, can be very challenging and time consuming. In this research we recall that OWA functions correspond to the Choquet integrals associated with symmetric capacities. The problem of defining all Choquet capacities on a set of n criteria requires 2^n real coefficients. Grabisch introduced the k-additive framework to reduce the exponential computational burden. We review the binomial decomposition framework with a constraint on k-additivity whereby OWA functions can be expressed as linear combinations of the first k binomial OWA functions and the associated coefficients of the binomial decomposition framework. In particular, we investigate the role of k-additivity in two particular cases of the binomial decomposition of OWA functions, the 2-additive and 3-additive cases. We identify the relationship between OWA weights and the associated coefficients of the binomial decomposition of OWA functions. Analogously, this relationship is also studied for two well-known parametric families of OWA functions, namely the S-Gini and Lorenzen welfare functions. Finally, we propose a new approach to determine OWA weights in large-scale problems by using the binomial decomposition of OWA functions with natural constraints on k-additivity to control the complexity of the OWA weight distributions.
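For readers unfamiliar with the notation, an OWA function simply applies a weight vector to the order statistics of the input; a minimal sketch (the example weights are arbitrary, not weights produced by the binomial decomposition approach of the thesis):

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: the weights are applied to the values
    sorted in non-increasing order, and must sum to one."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    assert v.shape == w.shape and np.isclose(w.sum(), 1.0)
    return float(np.dot(w, v))

print(owa([0.3, 0.9, 0.6], [0.5, 0.3, 0.2]))   # 0.5*0.9 + 0.3*0.6 + 0.2*0.3 = 0.69
```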
APA, Harvard, Vancouver, ISO, and other styles
12

SAPIENZA, ANNA. "Tensor decomposition techniques for analysing time-varying networks." Doctoral thesis, Politecnico di Torino, 2017. http://hdl.handle.net/11583/2668112.

Full text
Abstract:
The aim of this Ph.D. thesis is the study of time-varying networks via theoretical and data-driven approaches. Networks are natural objects to represent a vast variety of systems in nature, e.g., communication networks (phone calls and e-mails), online social networks (Facebook, Twitter), infrastructural networks, etc. Considering the temporal dimension of networks helps to better understand and predict complex phenomena, by taking into account both the fact that links in the network are not continuously active over time and the potential relation between multiple dimensions, such as space and time. A fundamental challenge in this area is the definition of mathematical models and tools able to capture topological and dynamical aspects and to reproduce properties observed in the real dynamics of networks. Thus, the purpose of this thesis is threefold: 1) we focus on the analysis of the complex mesoscale patterns, such as community-like structures and their evolution in time, that characterize time-varying networks; 2) we study how these patterns impact dynamical processes that occur over the network; 3) we sketch a generative model to study the interplay between topological and temporal patterns of time-varying networks and dynamical processes occurring over the network, e.g., disease spreading. To tackle these problems, we adopt and extend an approach at the intersection between multi-linear algebra and machine learning: the decomposition of time-varying networks represented as tensors (multi-dimensional arrays). In particular, we focus on the study of Non-negative Tensor Factorization (NTF) techniques to detect complex topological and temporal patterns in the network. We first extend the NTF framework to tackle the problem of detecting anomalies in time-varying networks. Then, we propose a technique to approximate and reconstruct time-varying networks affected by missing information, both to recover the missing values and to reproduce dynamical processes on top of the network. Finally, we focus on the analysis of the interplay between the discovered patterns and dynamical processes. To this aim, we use the NTF as a hint to devise a generative model of time-varying networks, in which we can control both the topological and temporal patterns, to identify which of them has a major impact on the dynamics.
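As a small illustration of the representation this abstract relies on (not code from the thesis): a time-varying network on N nodes observed over T snapshots can be stored as an N x N x T array by stacking the adjacency matrix of each snapshot; the edge format and the undirectedness below are assumptions.

```python
import numpy as np

def build_network_tensor(edges, n_nodes, n_snapshots):
    """Stack the adjacency matrix of every time snapshot into a 3-way tensor
    of shape (node, node, time); edges are (u, v, t) triples."""
    T = np.zeros((n_nodes, n_nodes, n_snapshots))
    for u, v, t in edges:
        T[u, v, t] = 1.0
        T[v, u, t] = 1.0   # assuming an undirected network
    return T

T = build_network_tensor([(0, 1, 0), (1, 2, 0), (0, 2, 1)], n_nodes=3, n_snapshots=2)
print(T.shape)   # (3, 3, 2)
```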
APA, Harvard, Vancouver, ISO, and other styles
13

Perone, Marco. "Direct sum decompositions and weak Krull-Schmidt Theorems." Doctoral thesis, Università degli studi di Padova, 2011. http://hdl.handle.net/11577/3427427.

Full text
Abstract:
In this thesis we discuss the behaviour of direct sum decompositions in additive categories and in particular in categories of modules. In the first part of the thesis, we investigate the ring-theoretical properties that play a main role in the theory of factorization in additive categories, like the exchange property, semilocality and Goldie dimension. We stress the importance of the latter and we investigate with care the infinite case of the dual Goldie dimension of rings. In the rest of the thesis, we use a more categorical approach, studying the behaviour of direct sum decompositions in additive categories. Given an additive category C, its skeleton V(C) has the structure of a commutative monoid under the operation of direct sum, and all the information about the regularity of direct sum decompositions in the category C is traceable from the monoid V(C). We study classes of categories where direct sum decomposition behaves quite regularly; mainly we restrict to categories C whose monoid V(C) is a Krull monoid, underlining the prominent role played by semilocal endomorphism rings. We analyze the peculiar behaviour of direct sum decomposition in some categories of modules, where the uniqueness of the decomposition is obtained up to two permutations, and we notice how this phenomenon is due to the presence of endomorphism rings of type two. In the last chapter we investigate what happens when we pass from finite direct sums of indecomposable objects to infinite direct sums, and we develop the setting in which the phenomena we studied in the finite case appear, both at a monoid-theoretical and at a categorical level.
APA, Harvard, Vancouver, ISO, and other styles
14

Nguyen, Hong Thuy. "The algebraic representation of OWA functions in the binomial decomposition framework and its applications in large-scale problems." Doctoral thesis, University of Trento, 2019. http://eprints-phd.biblio.unitn.it/3485/1/Thesis-Nguyen.pdf.

Full text
Abstract:
In the context of multicriteria decision making, the ordered weighted averaging (OWA) functions play a crucial role in aggregating multiple criteria evaluations into an overall assessment to support decision makers in reaching a decision. The determination of OWA weights is, therefore, an important task in this process. Solving real-life problems with a large number of OWA weights, however, can be very challenging and time consuming. In this research we recall that OWA functions correspond to the Choquet integrals associated with symmetric capacities. The problem of defining all Choquet capacities on a set of n criteria requires 2^n real coefficients. Grabisch introduced the k-additive framework to reduce the exponential computational burden. We review the binomial decomposition framework with a constraint on k-additivity whereby OWA functions can be expressed as linear combinations of the first k binomial OWA functions and the associated coefficients of the binomial decomposition framework. In particular, we investigate the role of k-additivity in two particular cases of the binomial decomposition of OWA functions, the 2-additive and 3-additive cases. We identify the relationship between OWA weights and the associated coefficients of the binomial decomposition of OWA functions. Analogously, this relationship is also studied for two well-known parametric families of OWA functions, namely the S-Gini and Lorenzen welfare functions. Finally, we propose a new approach to determine OWA weights in large-scale problems by using the binomial decomposition of OWA functions with natural constraints on k-additivity to control the complexity of the OWA weight distributions.
APA, Harvard, Vancouver, ISO, and other styles
15

Campanini, Federico. "Weak forms of the Krull-Schmidt theorem and Prüfer rings in distinguished constructions." Doctoral thesis, Università degli studi di Padova, 2019. http://hdl.handle.net/11577/3422693.

Full text
Abstract:
This thesis is divided into two chapters. The first one concerns direct-sum decompositions in additive categories. It is well known that if a module admits a direct-sum decomposition into indecomposable modules with local endomorphism rings, then this decomposition is essentially unique, up to isomorphism and a permutation of the direct summands. However, there are situations in which direct-sum decompositions into indecomposable modules are not essentially unique. Among these cases, particularly interesting are those in which it is possible to find some kind of regularity: direct-sum decompositions can be described via two invariants, up to two permutations. Such behaviour was first discovered for uniserial modules by A. Facchini in 1996, and it was subsequently investigated for several other classes of modules, such as cyclically presented modules over a local ring, couniformly presented modules, and kernels of morphisms between indecomposable injective modules. In this thesis, we provide examples of additive categories in which direct-sum decompositions can be classified via finitely many invariants. It is worth noting that, in our constructions, we treat cases in which the number of invariants needed to describe finite direct-sum decompositions can be arbitrarily large. The second chapter is devoted to the study of Prüfer (commutative) rings with zero-divisors. We investigate the so-called "Prüfer-like conditions" in several constructions, most of them related to pullbacks. It is well known that fiber products provide a rich source of examples and counterexamples in Commutative Algebra, because of their ability to produce rings with certain predetermined properties. Our investigation moves from very natural settings, for example those of regular conductor squares, up to more technical constructions, such as bi-amalgamated algebras, introduced by Kabbaj, Louartiti and Tamekkante in 2017 as a generalization of amalgamated algebras. Our main results in the pullback framework cover several different situations studied up to now by Bakkaki and Mahdou, Boynton, Houston and Taylor. We also investigate Prüfer rings from other points of view. We introduce the notion of regular morphism and we prove that if a ring R is the homomorphic image of a Prüfer ring via a regular morphism, then R is Prüfer. Finally, we turn our attention to the ideal theory of pre-Prüfer rings, proving a number of generalizations of some results of Boisen and Larsen.
APA, Harvard, Vancouver, ISO, and other styles
16

Zampini, S. "NON-OVERLAPPING DOMAIN DECOMPOSITION METHODS FOR CARDIAC REACTION-DIFFUSION MODELS AND APPLICATIONS." Doctoral thesis, Università degli Studi di Milano, 2010. http://hdl.handle.net/2434/150076.

Full text
Abstract:
In this thesis we consider different aspects related to the mathematical modeling of cardiac electrophysiology, both from the cellular and from the tissue perspective, and we develop novel numerical methods for the parallel iterative solution of the resulting reaction-diffusion models. In Chapter one we develop and validate the HHRd model, which accounts for transmural cellular heterogeneities of the canine left ventricle. Next, we introduce the reaction-diffusion models describing the spread of excitation in cardiac tissue, namely the anisotropic Bidomain and Monodomain models. For their discretization, we consider trilinear isoparametric finite elements in space and a semi-implicit (IMEX) method in time. In order to reduce the computational costs of parallel three-dimensional cardiac simulations, in Chapter three we consider different strategies to accelerate the convergence of the Preconditioned Conjugate Gradient method. We consider novel choices for the Krylov initial guess in order to reduce the number of iterations per time step, using either Lagrangian interpolants in time or the Proper Orthogonal Decomposition technique combined with a usual Galerkin projection. In the last three chapters we construct non-overlapping domain decomposition methods for both cardiac reaction-diffusion models. In Chapter four we deal with preconditioners of the Neumann-Neumann type; in particular we consider the additive Neumann-Neumann method for the Monodomain model and the Balancing Neumann-Neumann method for the Bidomain model. In Chapter five we construct a Balancing Domain Decomposition by Constraints (BDDC) method for the Bidomain model, whereas in Chapter six we investigate the use of an approximate BDDC method for the Bidomain model. For all the preconditioners considered, we develop novel theoretical estimates for the condition number of the preconditioned systems with respect to the spatial discretization, the subdomains' diameter and the time step, also in the case of discontinuities in the conductivity coefficients of the cardiac tissue, with jumps aligned with the interface between subdomains. We prove scalability and quasi-optimality for the balancing methods, providing parallel numerical results confirming the theoretical estimates.
APA, Harvard, Vancouver, ISO, and other styles
17

Marini, F. "PARALLEL ADDITIVE SCHWARZ PRECONDITIONING FOR ISOGEOMETRIC ANALYSIS." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/336923.

Full text
Abstract:
We present a multi-level, massively parallel additive Schwarz preconditioner for Isogeometric Analysis, a FEM-like numerical method for PDEs that permits exact geometry representation and high-regularity basis functions. Two model problems are considered: the scalar elliptic equation and the advection-diffusion equation. Theoretical analysis proves that the adoption of a coarse correction grid is crucial in order to make the condition number of the preconditioned stiffness matrix independent of the number of subdomains, whenever the ratio between the coarse mesh size and the fine mesh size is kept fixed. Numerical tests for the scalar elliptic equation (in 2D and 3D, on trivial and non-trivial domains) confirm the theory. The preconditioner is then applied to the advection-diffusion equation in 2D and 3D. Again, the numerical results show that the condition number of the preconditioned linear system scales with the number of subdomains up to 8100 processors, also with SUPG stabilization. The tests are implemented in the C programming language on top of the PETSc library.
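To fix ideas about what applying an additive Schwarz preconditioner means operationally, here is a minimal one-level numpy sketch (the coarse correction that the abstract identifies as crucial is deliberately omitted, and the dense local solves are a simplification for illustration only):

```python
import numpy as np

def additive_schwarz_apply(A, r, subdomains):
    """One application z = M^{-1} r of a one-level additive Schwarz
    preconditioner: z = sum_j R_j^T (R_j A R_j^T)^{-1} R_j r, where each
    subdomain is an overlapping set of unknown indices."""
    z = np.zeros_like(r)
    for idx in subdomains:
        idx = np.asarray(idx)
        A_loc = A[np.ix_(idx, idx)]              # local (overlapping) block of A
        z[idx] += np.linalg.solve(A_loc, r[idx])  # local solve, prolonged back
    return z

# toy usage: 1D Laplacian split into two overlapping subdomains
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)
print(additive_schwarz_apply(A, r, [range(0, 5), range(3, 8)]))
```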
APA, Harvard, Vancouver, ISO, and other styles
18

ABDELHAKIM, MOUHAMED Ahmed Abdelsattar. "On the space - time integrability properties of the solution of the inhomogeneous Schrödinger equation." Doctoral thesis, Università degli studi di Ferrara, 2013. http://hdl.handle.net/11392/2388925.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

FERNICOLA, FRANCO. "Strutture di incidenza finite con blocchi di forma assegnata." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2020. http://hdl.handle.net/11380/1200565.

Full text
Abstract:
In this thesis we consider the problem of decomposing the complete graph on v vertices into subgraphs, all of which are isomorphic to a given graph H. The subgraphs of the decomposition are called blocks. According to the definition of a decomposition, each edge of the complete graph must occur in precisely one block of the decomposition. This notion generalizes the idea of a block design: from this point of view, a block design is a decomposition of the complete graph into complete subgraphs, all having the same cardinality k. In the definition of a block design, blocks have no additional structure other than that of bare subsets of the set of points. The notion of an H-design studied in this thesis therefore forces blocks to inherit a certain structure, namely that of being a copy of the graph H, which, in some sense, determines the "shape" of the blocks. One of the main problems in the theory of classical block designs, as well as in the theory of H-designs, consists in the determination of the existence spectrum, that is, the determination of the values v for which the design exists. Open problems related to the existence spectrum survive, as is known, even for incidence structures that were studied long before block designs, for instance finite projective planes. In the case of interest for this thesis, the task is to establish, for a given graph H, the existence spectrum for H-designs, that is, the set of all values v for which an H-decomposition of the complete graph on v vertices exists. A contribution to the determination of the spectrum is obtained in the case where H is a connected graph with 7 vertices and 7 edges containing a cycle of length 3. This case remained open after previous investigations, in which the graph H is smaller or contains a longer cycle. Generalizations in various directions are also approached, either slightly changing the type of the graph H or requiring special incidence features for the decomposition, such as those that generally involve some kind of balance, typically the requirement that a certain parameter of the decomposition remain uniform. In our specific context the decomposition is balanced if the number of blocks containing any given vertex is constant.
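For the simplest instance of an H-design, H = K_3 (a triangle), decompositions of the complete graph are the classical Steiner triple systems; the following small check (toy verification code written for this note, not from the thesis) confirms that the cyclic triple system of order 7 partitions the edges of K_7:

```python
from itertools import combinations

def is_decomposition(v, blocks):
    """Check that the edge sets of the blocks partition the edge set of K_v;
    each block is given as a list of vertex pairs."""
    all_edges = {frozenset(e) for e in combinations(range(v), 2)}
    used = [frozenset(e) for block in blocks for e in block]
    return len(used) == len(set(used)) and set(used) == all_edges

# cyclic Steiner triple system of order 7, generated by the base block {0, 1, 3} mod 7
triples = [tuple(sorted(((0 + i) % 7, (1 + i) % 7, (3 + i) % 7))) for i in range(7)]
blocks = [list(combinations(t, 2)) for t in triples]
print(is_decomposition(7, blocks))   # True: K_7 decomposes into 7 triangles
```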
APA, Harvard, Vancouver, ISO, and other styles
20

CHINELLO, GIANMARCO. "Représentations l-modulaires des groupes p-adiques. Décomposition en blocs de la catégorie des représentations lisses de GL(m,D), groupe métaplectique et représentation de Weil." Doctoral thesis, Université de Versailles St-Quentin-en-Yvelines, 2015. http://hdl.handle.net/10281/123569.

Full text
Abstract:
This thesis focuses on two problems in the l-modular representation theory of p-adic groups. Let F be a non-archimedean local field of residue characteristic p different from l. In the first part, we study the block decomposition of the category of smooth modular representations of GL(n, F) and its inner forms. We want to reduce the description of a positive-level block to the description of a level-0 block (of a similar group) by seeking equivalences of categories. Using the type theory of Bushnell-Kutzko in the modular case and a theorem from category theory, we reduce the problem to finding an isomorphism between two intertwining algebras. The proof of the existence of such an isomorphism is not complete, because it relies on a conjecture that we state and prove in several cases. In the second part we generalize the construction of the metaplectic group and of the Weil representation to the case of representations over an integral domain. We define a central extension of the symplectic group over F by the multiplicative group of an integral domain, and we prove that it satisfies the same properties as in the complex case.
APA, Harvard, Vancouver, ISO, and other styles
21

Staffolani, Reynaldo. "Schur apolarity and how to use it." Doctoral thesis, Università degli studi di Trento, 2022. https://hdl.handle.net/11572/330432.

Full text
Abstract:
The aim of this thesis is to investigate the tensor decomposition of structured tensors related to SL(n)-irreducible representations. Structured tensors are multilinear objects satisfying specific symmetry relations, and their decompositions are of great interest in applications. In this thesis we look for decompositions of tensors belonging to irreducible representations of SL(n) into sums of elementary objects associated to points of SL(n)-rational homogeneous varieties. This family includes Veronese varieties (symmetric tensors), Grassmann varieties (skew-symmetric tensors), and flag varieties. A classic tool to study the decomposition of symmetric tensors is apolarity theory, which dates back to Sylvester. An analogous skew-symmetric apolarity theory for skew-symmetric tensors has been developed only a few years ago. In this thesis we describe a global apolarity theory called Schur apolarity, which is suitable for tensors belonging to any irreducible representation of SL(n). Examples, properties and applications of this apolarity are studied in detail, and original results both in algebra and geometry are provided.
APA, Harvard, Vancouver, ISO, and other styles
22

CHARAWI, LARA ANTONELLA. "ISOGEOMETRIC OVERLAPPING ADDITIVE SCHWARZ PRECONDITIONERS IN COMPUTATIONAL ELECTROCARDIOLOGY." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/237557.

Full text
Abstract:
In this thesis we present and study overlapping additive Schwarz preconditioners for the isogeometric discretization of reaction-diffusion systems modeling the bioelectrical activity of the heart, known as the Bidomain and Monodomain models. The cardiac Bidomain model consists of a degenerate system of parabolic and elliptic PDEs, whereas the simplified Monodomain model consists of a single parabolic equation. These models include intramural fiber rotation and anisotropic conductivity coefficients, and are coupled through the reaction term with a system of ODEs modeling the ionic currents of the cellular membrane. The overlapping Schwarz preconditioner is applied with a PCG accelerator to solve the linear system arising at each time step from the isogeometric discretization in space and a semi-implicit adaptive method in time. A theoretical convergence rate analysis shows that the resulting solver is scalable, optimal in the ratio of subdomain to element size, and that the convergence rate improves with increasing overlap size. Numerical tests in three-dimensional ellipsoidal domains confirm the theoretical estimates and additionally show robustness with respect to jump discontinuities of the orthotropic conductivity coefficients.
APA, Harvard, Vancouver, ISO, and other styles
23

Wenzel, Anne. "Komponentenzerlegung des Regelleistungsbedarfs mit Methoden der Zeitreihenanalyse." Master's thesis, Universitätsbibliothek Chemnitz, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-66420.

Full text
Abstract:
In this thesis, the minute-by-minute data of the control reserve demand (the sum of secondary control reserve and minute reserve) of one control area for the months April to December 2009 were subjected to a time series analysis and decomposed into components according to the classical component model. These are the trend component, determined by a moving average with a window length of one hour; two periodic components, with period lengths of one hour and one day, respectively; and the remainder component, which was modelled with an ARIMA(2,1,5) process. In the future, the resulting model of the control reserve demand should be improved by adding a seasonal (annual) component. This was not possible within this thesis, since no data spanning several years were available. In addition, it can be examined to what extent forecasts are feasible with the component model. For this purpose the trend component should be chosen differently, since the approach chosen here follows the data too closely. The second part of the task of this thesis consisted in identifying explanatory components, i.e. possible relationships between the control reserve demand and various conceivable causes. The load profile and the wind power feed-in were examined as potential causes. There was a slight positive correlation between the load time series and that of the control reserve demand, and a small negative correlation between the wind power feed-in time series and that of the control reserve demand.
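As a rough, hypothetical illustration of the classical component model used here (hour-long moving-average trend, hourly and daily periodic components, ARIMA(2,1,5) remainder), the following Python sketch decomposes a synthetic minute-resolution series; the data, window choices and variable names are assumptions, not the thesis code.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# synthetic stand-in for the minute-resolution control reserve demand
rng = np.random.default_rng(0)
idx = pd.date_range("2009-04-01", periods=60 * 24 * 7, freq="min")
y = pd.Series(rng.normal(size=len(idx)).cumsum(), index=idx)

# trend: centred moving average over one hour
trend = y.rolling(window=60, center=True).mean()
detrended = y - trend

# periodic components: mean pattern within the hour and within the day
hourly = detrended.groupby(idx.minute).transform("mean")
daily = (detrended - hourly).groupby(idx.hour * 60 + idx.minute).transform("mean")

# remainder, modelled as an ARIMA(2,1,5) process as described in the abstract
remainder = (detrended - hourly - daily).dropna()
fit = ARIMA(remainder.to_numpy(), order=(2, 1, 5)).fit()
print(fit.aic)
```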
APA, Harvard, Vancouver, ISO, and other styles
24

Ebeling, Adierson Gilvani. "Características Estruturais da Matéria Orgânica em Organossolos Háplicos." Universidade Federal Rural do Rio de Janeiro, 2010. https://tede.ufrrj.br/jspui/handle/jspui/1838.

Full text
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq, Brasil.<br>The Histosols have a small geographic extension in the Brazilian territory; however, they are intensively used in family agriculture systems and have a great environmental importance. The drainage of Histosols leads to the subsidence process and other changes in the soil organic matter (SOM), with consequences for their characteristics and potential. The nature of the humic substances (HSs) is determinant in the alterations of the Histosols. The characterization of the HSs allows the understanding of the processes of SOM transformation in the Histosols and their environmental impact. The objectives of this study were: to characterize Histosols from different environments and land usage intensities; and to evaluate alterations in the humic fractions of the SOM, by using elemental composition analyses, spectroscopic and thermal degradation techniques, and nuclear magnetic resonance (NMR). The study was developed on eight Histosols from the States of Rio de Janeiro, Maranhão and Paraná, in Brazil. Their chemical properties: total organic carbon (TOC), pH, sum of bases (SB), H+, Al3+, cation exchange capacity (CEC), and V%; and physical properties: bulk density (BD), MR, MM, and OMD, were evaluated. Also evaluated was the quantitative fractionation of the HSs: fulvic acid (C-FAF), humic acid (C-FAH), and humin (C-HUM), and the C-FAH/C-FAF and C-EA/C-HUM ratios (C-EA = C-FAF + C-FAH). The humic acids were extracted using the method of the International Humic Substances Society (IHSS) and evaluated by different techniques. The chemical attributes varied with the intensity of burning and agricultural usage. In general, though, the Histosols presented low natural fertility, and it was related to the humic acid fraction (high C-FAH/C-FAF ratio). Amongst the SOM fractions the HUM predominated, with an average value of 59.98% of the total carbon determined by CHN analysis, followed by the FAH. The C-FAH/C-FAF ratio diminished with the increase of agricultural usage intensity. The results of the TGI (Thermal Gravimetric Index) suggested strong resistance to thermal degradation for the majority of the organic horizons. The elemental composition (%C, %H, %N, %O) of the humic acids showed a large amplitude of variation between the horizons, but no pattern was observed between the Histosols. The increase of carbon content, the high values of TGI, and the reduction of oxygen content in the humic acids (HA) might explain the high thermal decomposition resistance found in the HA extracted from the Histosols. A correlation between H/C and TGI was observed, where the lower values of H/C were related to the highest resistance of the humic acids to thermal degradation. The spectroscopic and NMR techniques allowed characterizing compounds and groups of substances in the HA, showing the great potential of these tools in studies of HSs from Histosols. The multivariate methods allowed a combined analysis of the techniques applied in the study, showing groups of labile and recalcitrant materials in the soils.
The results, in general, indicated the fragility of the Histosol areas in terms of agricultural management and the formation environment. Their importance for the environment should be a priority in comparison to agricultural usage, mainly due to their relevant role in the preservation of aquifers.<br>Os Organossolos têm pequena representatividade geográfica no Brasil, entretanto, são utilizados intensamente em modelos de agricultura familiar e têm grande importância ambiental. Porém, a sua drenagem conduz ao processo de subsidência e outras modificações na matéria orgânica do solo (MOS), com implicações nas características dos Organossolos e em sua potencialidade. A natureza das substâncias húmicas (SHs) é determinante nessas alterações nos Organossolos. A caracterização das SHs permite a compreensão dos processos de transformação da MOS nos Organossolos e seu impacto no ambiente. Os objetivos deste trabalho foram: caracterizar Organossolos em vários ambientes e intensidade de uso agrícola e avaliar alterações nas frações humificadas da MO, através de técnicas de análise da composição elementar, espectroscópicas, termodegradativas e de ressonância magnética nuclear (RMN). Foram estudados oito perfis de solos, nos Estados do Rio de Janeiro, Maranhão e Paraná. Foram avaliadas as suas propriedades químicas: carbono orgânico total (COT), pH, soma de bases (SB), H+, Al3+, CTC e V%; e propriedades físicas: densidade do solo (Ds), RM, MM e DMO. Além do fracionamento quantitativo das SHs: ácidos fúlvicos (C-FAF), ácidos húmicos (C-FAH) e humina (C-HUM), e relações C-FAH/C-FAF, C-EA/C-HUM (C-EA = C-FAF + C-FAH). Os ácidos húmicos (AH) foram extraídos pelo método da Sociedade Internacional de Substâncias Húmicas (IHSS) e avaliados por distintas técnicas. Os atributos químicos variaram com o efeito das queimadas e da intensidade de uso agrícola; porém, em geral, os Organossolos apresentaram baixa fertilidade natural, a qual, em geral, esteve relacionada à fração ácido húmico (maior razão C-FAH/C-FAF). Dentre as frações da MO, a HUM predominou, com valor médio de 59,98% do carbono total determinado pelo CHN, seguida da FAH. A relação C-FAH/C-FAF diminuiu à medida que o uso agrícola é intensificado. Os dados do ITG (Índice Termogravimétrico) sugeriram forte resistência à termodegradação para a maioria dos horizontes orgânicos. A composição elementar (%C, %H, %N, %O) dos ácidos húmicos apresentou grande amplitude entre os horizontes, porém sem padrão diferenciado entre os Organossolos. O aumento do conteúdo de carbono, os altos valores de ITG e a diminuição do conteúdo de oxigênio nos ácidos húmicos podem explicar a maior resistência à termodecomposição dos AH extraídos dos Organossolos. Foi observada correlação entre a razão H/C e o ITG, onde os menores valores de H/C estiveram relacionados à maior resistência dos AH à termodegradação. As técnicas espectroscópicas e de RMN permitiram caracterizar compostos e grupamentos nos AH, demonstrando o potencial dessas ferramentas nos estudos de SHs provenientes de Organossolos. Os métodos de análise multivariada permitiram uma avaliação conjunta das técnicas utilizadas, mostrando um grupo de amostras lábeis e recalcitrantes nos solos. Os resultados encontrados, em geral, indicam a fragilidade das áreas de Organossolos, em função do manejo para agricultura e do seu ambiente de formação. A sua importância em termos ambientais deveria ser priorizada em relação ao uso agrícola, principalmente pelo papel relevante na preservação de aquíferos.
APA, Harvard, Vancouver, ISO, and other styles
25

SACCO, FRANCESCO. "Mathematical models and analysis of turbulent, wall-bounded, complex flows." Doctoral thesis, Gran Sasso Science Institute, 2020. http://hdl.handle.net/20.500.12571/15321.

Full text
Abstract:
In classical wall-bounded turbulent flow a fundamental statement is the existence of a layer, called the overlap layer, in which every flow behaves the same and the mean streamwise velocity of each system can be described, as a function of the wall-normal coordinate alone, by a logarithmic profile characterized by the von Kármán constant. This law was first derived from data on parallel flows and boundary layers, which are model flows for wall turbulence but have a much simpler structure than flows in complex geometries. The formulation of Millikan has much more general requirements on the flow and is based on the asymptotic expansion of the velocity field; this theory of the logarithmic behavior of the overlap layer is an asymptotic approximation, and so holds for very high Reynolds numbers, Re_τ → ∞. For this reason much of the research effort has been directed at increasing the Reynolds number. However, due to the limits in resources, and so in the possibility of reaching the highest possible value, every similarity theory is still incomplete; but like all asymptotic approximations, it can be improved with the addition of higher-order terms. We develop a correction of the classical von Kármán logarithmic law for turbulent Taylor-Couette (TC) flow, the fluid flow developing between two coaxial, independently rotating cylinders, when the curvature of the system is small, i.e. with an inner to outer radius ratio η = r_i /r_o ≥ 0.9, and when both cylinders rotate with the same magnitude but in opposite directions. While in straight geometries like channels or pipes the deviation from the law can be ascribed to the effect of the pressure gradient, in small-gap TC flow this effect can be attributed to the conserved transverse current of azimuthal motion. We show that, when the correction is applied, the logarithmic law is restored even when varying the curvature, and that the parameters found here for TC flow converge to the ones found in [P. Luchini. European Journal of Mechanics B Fluids, 71, 2018.] for plane Couette flow, in the limit of vanishing curvature η → 1.<br>In many shear- and pressure-driven wall-bounded turbulent flows secondary motions spontaneously develop and their interaction with the main flow alters the overall large-scale features and transfer properties. Taylor–Couette flow, the fluid motion developing in the gap between two concentric cylinders rotating at different angular velocities, is not an exception, and toroidal Taylor rolls have been observed from the early development of the flow up to the fully turbulent regime. In this manuscript we show that under the generic name of ‘Taylor rolls’ there is a wide variety of structures that differ in the vorticity distribution within the cores, the way they are driven and their effects on the mean flow. We relate the rolls at high Reynolds numbers not to centrifugal instabilities, but to a combination of shear and anti-cyclonic rotation, showing that they are preserved in the limit of vanishing curvature and can be better understood as a pinned cycle which shows similar characteristics as the self-sustained process of shear flows. By analysing the effect of the computational domain size, we show that this pinning is not a product of numerics, and that the position of the rolls is governed by a random process with the space and time variations depending on domain size.<br>We use experiments and direct numerical simulations to probe the phase space of low-curvature Taylor–Couette flow in the vicinity of the ultimate regime.
The cylinder radius ratio is fixed at η = r_i /r_o = 0.91, where r_i (r_o ) is the inner (outer) cylinder radius. Non-dimensional shear drivings (Taylor numbers Ta) in the range 10^7 ≤ Ta ≤ 10^11 are explored for both co- and counter-rotating configurations. In the Ta range 10^8 ≤ Ta ≤ 10^10, we observe two local maxima of the angular momentum transport as a function of the cylinder rotation ratio, which can be described as either ‘co-’ or ‘counter-rotating’ due to their location or as ‘broad’ or ‘narrow’ due to their shape. We confirm that the broad peak is accompanied by the strengthening of the large-scale structures, and that the narrow peak appears once the driving (Ta) is strong enough. As first evidenced in numerical simulations by Brauckmann et al. (J. Fluid Mech., vol. 790, 2016, pp. 419–452), the broad peak is produced by centrifugal instabilities and the narrow peak is a consequence of shear instabilities. We describe how the peaks change with Ta as the flow becomes more turbulent. Close to the transition to the ultimate regime, when the boundary layers (BLs) become turbulent, the usual structure of counter-rotating Taylor vortex pairs breaks down and stable unpaired rolls appear locally. We attribute this state to changes in the underlying roll characteristics during the transition to the ultimate regime. Further changes in the flow structure around Ta ≈ 10^10 cause the broad peak to disappear completely and the narrow peak to move. This second transition occurs when the locally smooth regions inside the BLs disappear and the whole boundary layer becomes active.<br>Large-scale structures have been observed in many turbulent wall-bounded flows, such as pipe, Couette or square duct flows. Many efforts have been made in order to capture such structures to understand and model them. However, commonly used methods have their limitations, such as arbitrariness in parameter choice or specificity to certain setups. In this manuscript we attempt to overcome these limitations by using two variants of Dynamic Mode Decomposition (DMD). We apply these methods to (rotating) Plane Couette flow, and verify that DMD-based methods are adequate to detect the coherent structures and to extract the distinct properties arising from different control parameters. In particular, these DMD variants are able to capture the influence of rotation on large-scale structures by coupling velocity components. We also show how high-order DMD methods are able to capture some complex temporal dynamics of the large-scale structures. These results show that DMD-based methods are a promising way of filtering and analysing wall-bounded flows.
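For readers unfamiliar with DMD, the following is a minimal sketch of the standard SVD-based (exact) DMD algorithm referred to in this abstract; the truncation rank, the toy snapshot data and the function name are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def exact_dmd(X, Xprime, r):
    """Exact DMD of the snapshot pair (X, X'): eigenvalues and spatial modes."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh.conj().T[:, :r]
    Atilde = U.conj().T @ Xprime @ V / s   # rank-r operator U* X' V S^{-1}
    lam, W = np.linalg.eig(Atilde)
    Phi = Xprime @ V / s @ W               # exact DMD modes
    return lam, Phi

# toy usage: snapshots of a translating sine wave (a single advecting structure)
x = np.linspace(0, 2 * np.pi, 128)
snapshots = np.array([np.sin(x - 0.1 * k) for k in range(50)]).T
lam, Phi = exact_dmd(snapshots[:, :-1], snapshots[:, 1:], r=2)
print(np.abs(lam))   # moduli close to 1: the structure neither grows nor decays
```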
APA, Harvard, Vancouver, ISO, and other styles
26

MOLINARI, MARIA CHIARA. "Decomposizioni in cicli pari di indice 3 nei line graph 4-regolari." Doctoral thesis, Università degli studi di Modena e Reggio Emilia, 2022. http://hdl.handle.net/11380/1266028.

Full text
Abstract:
Una decomposizione in cicli pari (ECD) di un grafo euleriano è una partizione dell'insieme degli spigoli in cicli pari. Coloriamo i cicli pari in modo che i cicli che condividono almeno un vertice ricevano colori distinti. Se m è il numero minimo di colori richiesti, allora diciamo che la decomposizione in cicli pari ha indice m. La nozione di ECD di indice m è connessa al palette index di un grafo, un parametro cromatico che descrive un grafo a partire dal numero minimo di palette dei suoi vertici. In particolare, i possibili valori per il palette index di un grafo 4-regolare sono 1, 3, 4 e 5. Il palette index è 3 se e solo se il grafo ha un 2-fattore pari o un ECD di indice 3. Esistono infinite famiglie di grafi 4-regolari con un ECD di indice 3. Per quanto ne sappiamo, non è noto alcun esempio di grafo 4-regolare il cui insieme di spigoli può essere partizionato in cicli pari e ogni ECD ha indice maggiore di 3. Motivati dal problema sull'esistenza di un tale grafo 4-regolare, studiamo ECD in line graph 4-regolari di grafi cubici di classe 2. Per alcune delle infinite famiglie di grafi cubici di classe 2, caratterizzati da una grande oddness, possiamo trovare un ECD di indice 3 nel line graph corrispondente.<br>An even cycle decomposition (ECD) of an Eulerian graph is a partition of the edge set into even cycles. We color the even cycles so that two cycles sharing at least one vertex receive distinct colors. If m is the minimum number of required colors, then we say that the even cycle decomposition has index m. The notion of an ECD of index m is connected to the palette index of a graph, a chromatic parameter describing a graph by the minimum number of palettes of its vertices. In particular, the possible values for the palette index of a 4-regular graph are 1, 3, 4 and 5. It is 3 if and only if the graph has an even 2-factor or an ECD of index 3. There exist infinite families of 4-regular graphs with an ECD of index 3. As far as we know, no example is known of a 4-regular graph whose edge set can be partitioned into even cycles and all of whose ECDs have index larger than 3. Motivated by the problem of the existence of such a 4-regular graph, we study ECDs in 4-regular line graphs of class 2 cubic graphs. For some of the infinite families of class 2 cubic graphs that are characterized by arbitrarily large oddness, we can find an ECD of index 3 in the corresponding line graph.
APA, Harvard, Vancouver, ISO, and other styles
27

Lösch, Manfred. "Ungeordnete Zahlpartitionen mit k Parts, ihre 2^(k - 1) Typen und ihre typspezifischen erzeugenden Funktionen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-95635.

Full text
Abstract:
Every unordered integer partition with k parts (k-partition) has a type, which can be defined by means of an ordered partition of k. In this way 2^(k - 1) types can be defined. For each type there is a uniquely numberable generating function in closed form. Using recursions, these functions can be expanded into (infinitely long) power series. With these generating functions, bijections between the partition sets of different types can be discovered.
APA, Harvard, Vancouver, ISO, and other styles
28

Krautz, Maria. "Wege zur Optimierung magnetokalorischer Fe-basierter Legierungen mit NaZn13-Struktur für die Kühlung bei Raumtemperatur." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2015. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-168939.

Full text
Abstract:
Magnetic cooling is an established technology in the field of low-temperature physics. However, the scalability of the magnetocaloric effect and the possibility of a compact design also offer a broad range of applications at room temperature. Particularly noteworthy is the possibility of adjusting the magnetostructural transition temperature of La(Fe, Si)13-based materials to the working temperature of a cooling unit. The production of precursor material by melt spinning is of high technological relevance, since, compared with conventionally cast bulk material, the subsequent annealing time can be drastically reduced [1]. The present work first addresses the optimal annealing conditions for the formation of the relevant magnetocaloric phase in rapidly solidified ribbon material. By varying the annealing temperature, the influence of secondary phases on the magnetocaloric effect is assessed. Furthermore, with an optimal choice of the alloy composition, a large magnetocaloric effect and the desired working temperature range can be obtained. Particular attention is paid to relating the substitution effect (here: Si for Fe) and the expansion of the lattice by hydrogenation to the resulting magnetocaloric effect. A further point is the investigation of the long-term stability of the properties of hydrogenated ribbon and bulk material. Fundamental and comprehensive investigations of the substitution of iron by manganese and of the resulting influence on phase formation, transition temperature and the magnetocaloric effect, in particular after hydrogenation, are also presented. The results of the present work thus allow an assessment of different strategies for optimising the magnetocaloric properties of La(Fe, Si)13.
APA, Harvard, Vancouver, ISO, and other styles
29

Lösch, Manfred. "Ungeordnete Zahlpartitionen mit k Parts, ihre 2^(k - 1) Typen und ihre typspezifischen erzeugenden Funktionen." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-143512.

Full text
Abstract:
The 2^(k – 1) types of the unordered integer partitions with k parts (k-partitions) are defined here with the help of the ordered partitions of k. For each type there is a generating function in closed form with a unique numbering. The well-known generating function of the k-partitions is the sum of these 2^(k – 1) type-specific generating functions. The expansion of these type-specific generating functions into (infinitely long) power series is possible recursively. Decompositions of generating functions of the simple types into generating functions of other types are investigated. In this way, bijections between the partitions of different types can be discovered. The type-specific considerations are extended to the ordered partitions and to their generating functions.
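As a small worked illustration of the objects involved (standard facts, not results specific to this thesis): the ordered partitions (compositions) of k number 2^(k-1), and the ordinary generating function of all k-partitions, of which the type-specific functions described here are a refinement, has the classical closed form below.

```latex
% compositions of k = 3 (there are 2^{3-1} = 4 of them):
%   (3), (2,1), (1,2), (1,1,1)
%
% generating function of the unordered partitions with exactly k parts:
\sum_{n \ge 0} p_k(n)\, x^n \;=\; \frac{x^k}{(1-x)(1-x^2)\cdots(1-x^k)}
```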
APA, Harvard, Vancouver, ISO, and other styles
30

Lin, J. "EXACT ALGORITHMS FOR SIZE CONSTRAINED CLUSTERING." Doctoral thesis, Università degli Studi di Milano, 2012. http://hdl.handle.net/2434/172513.

Full text
Abstract:
This thesis investigates the following general constrained clustering problem: given a dimension $d$, an $L^p$-norm, a set $X\subset\mathbb{R}^d$, a positive integer $k$ and a finite set $\mathcal M\subset\mathbb{N}$, find the optimal $k$-partition $\{A_1,\dots,A_k\}$ of $X$ w.r.t. the $L^p$-norm satisfying $|A_i|\in \mathcal M$, $i=1,\dots,k$. First of all, we prove that the problem is NP-hard even if $k=2$ (for all $p>1$), or $d=2$ and $|\mathcal M|=2$ (with the Euclidean norm). Moreover, we show that the problem is computationally hard if $p$ is a non-integer rational. When $d=2$, $k=2$ and $\mathcal M=\{m,n-m\}$, we design an algorithm for solving the problem in time $O(n\sqrt[3]{m} \log^2 n)$ in the case of the Euclidean norm; this result relies on combinatorial geometry techniques concerning $k$-sets and dynamic convex hulls. Finally, we study the problem in fixed dimension $d$ with $k=2$; by means of tools of real algebraic geometry and numerical techniques for localising algebraic roots, we construct a polynomial-time method for solving the constrained clustering problem with integer $p$ given in unary notation.
APA, Harvard, Vancouver, ISO, and other styles
31

TAVERNA, ANDREA. "ALGORITHMS FOR THE LARGE-SCALE UNIT COMMITMENT PROBLEM IN THE SIMULATION OF POWER SYSTEMS." Doctoral thesis, Università degli Studi di Milano, 2017. http://hdl.handle.net/2434/487071.

Full text
Abstract:
Lo Unit Commitment Problem (UCP) è un problema di programmazione matematica dove un insieme di impianti termoelettrici deve essere programmato per soddisfare la domanda di energia e altri vincoli di sistema. Il modello è impiegato da decenni per supportare la pianificazione operazionale di breve termine dei sistemi elettrici. In questo lavoro affrontiamo il problema di risolvere UCP lineari di larga-scala per realizzare simulazioni accurate di sistemi elettrici, con i requisiti aggiuntivi di impiegare capacità di calcolo convenzionali, ad esempio i personal computers, ed un tempo di soluzione di poche ore. Il problema, sotto le medesime condizioni, è affrontato abitualmente dal nostro partner industriale RSE S.p.A. (Ricerche Sistema Energetico), uno dei principali centri ricerche industriali su sistemi energetici in Italia. L’ottimizzazione diretta di queste formulazioni con solutori generici è impraticabile. Nonostante sia possibile calcolare buone soluzioni euristiche, ovvero con un gap di ottimalità sotto il 10%, in tempi ragionevoli per UCP di larga scala, si richiedono soluzioni più accurate, per esempio con gap sotto l’1%, per migliorare l’affidabilità delle simulazioni ed aiutare gli esperti di dominio, che potrebbero non essere familiari con i dettagli dei metodi di programmazione matematica, a supportare meglio le loro analisi. Tra le idee che abbiamo esplorato i seguenti metodi risultano i più promettenti: una mateuristica per calcolare efficientemente buone soluzioni e due metodi esatti di bounding: column generation e Benders decomposition. Questi metodi decompongono il problema disaccoppiando il commitment degli impianti termoelettrici, rappresentati da variabili discrete, e il loro livello di produzione, rappresentato da variabili continue. I nostri esperimenti dimostrano che il modello possiede proprietà intrinseche come degenerazione e forma della funzione obbiettivo piatta che ostacolano o impediscono la convergenza in risolutori allo stato dell’arte. Tuttavia, i metodi che abbiamo sviluppato, sfruttando efficacemente le proprietà strutturali del modello, permettono di raggiungere soluzioni quasi ottime in poche iterazioni per la maggior parte delle istanze.<br>The Unit Commitment Problem (UCP) is a mathematical programming problem where a set of power plants needs to be scheduled to satisfy energy demand and other system-wide constraints. It has been employed for decades to support short-term operational planning of power plants. In this work we tackle the problem of solving large-scale linear UCPs to perform accurate medium-term power systems simulations, with the additional requirements of employing conventional computing power, such as personal computers, and a solution time of a few hours. The problem, under such conditions, is routinely faced by our industry partner, the Energy Systems Development department at RSE S.p.A. (Ricerche Sistema Energetico), a major industrial research centre on power systems in Italy. The direct optimization of these formulations via general-purpose solvers is impractical. While good heuristic solutions, that is with an optimality gap below 10%, can be found for large-scale UCPs in affordable time, more accurate solutions, for example with a gap below 1%, are sought to improve the reliability of the simulations and help domain experts, who may not be familiar with the details of mathematical programming methods, to better support their analysis. 
Among the ideas we explored, the following methods are the most promising: a matheuristic to efficiently compute good solutions and two exact bounding methods: column generation and Benders decomposition. These methods decompose the problem by decoupling the commitment of thermal plants, represented by discrete variables, and their level of production, represented by continuous variables. Our experiments proved that the model possesses inherent properties, such as degeneracy and a flat objective function, which hinder or prevent convergence in state-of-the-art solvers. On the other hand, the methods we devised, by effectively exploiting structural properties of the model, allow us to reach quasi-optimal solutions within a few iterations on most instances.
APA, Harvard, Vancouver, ISO, and other styles
32

Deolmi, Giulia. "Computational Parabolic Inverse Problems." Doctoral thesis, Università degli studi di Padova, 2012. http://hdl.handle.net/11577/3423351.

Full text
Abstract:
This thesis presents a general approach to numerically solve parabolic inverse problems, whose underlying mathematical model is discretized using the finite element method. The proposed solution is based upon an adaptive parametrization and it is applied specifically to a geometric conduction inverse problem of corrosion estimation and to a boundary convection inverse problem of pollution rate estimation.<br>In questa tesi viene presentato un approccio numerico volto alla risoluzione di problemi inversi parabolici, basato sull'utilizzo di una parametrizzazione adattativa. L'algoritmo risolutivo viene descritto per due specifici problemi: mentre il primo consiste nella stima della corrosione di una faccia incognita del dominio, il secondo ha come scopo la quantificazione di inquinante immesso in un fiume.
APA, Harvard, Vancouver, ISO, and other styles
33

Khadir, Omar. "Algorithmes et combinatoire dans l'algèbre de Jordan spéciale libre." Rouen, 1994. http://www.theses.fr/1994ROUES014.

Full text
Abstract:
This thesis is a contribution to the study of free special Jordan algebras from the viewpoint of algebraic combinatorics. Computer algebra was used as an exploration tool. The thesis consists of five chapters and an appendix. Chapter 1 presents non-commutative polynomials and Lyndon words, which are the minimal words of Lie polynomials and which reappear among the minimal words of Jordan monomials. Chapter 2 reviews the results on the search for minimal words in the free Lie algebra. Chapter 3 contains our result on the minimal words of Jordan monomials: they are the powers of Lyndon words. This result is discussed at the end of the chapter, where it is shown, thanks to Cohn's theorem, that Jordan polynomials have other minimal words. Chapter 4 uses the preceding theorem as the basis for an algorithm that decomposes a palindromic polynomial into Jordan monomials. The algorithms of this chapter were implemented in the computer algebra language Maple, and a listing of the procedures is given at the end of the chapter. Finally, Chapter 5 contains a study of the relations between Jordan monomials and tree equivalences. The result obtained is that two standard Jordan monomials are equal if, and only if, the trees of which they are the evaluation are equivalent under the group of tree equivalences. This group is itself a limit of Sylow 2-subgroups of symmetric groups, and a presentation of it is given. This provides a way to compute automatically a complete system of identities, as illustrated by an example at the end of the chapter. The appendix contains the tables and programs that were developed.
APA, Harvard, Vancouver, ISO, and other styles
34

Martens, Christoph. "Wellenleiterquantenelektrodynamik mit Mehrniveausystemen." Doctoral thesis, Humboldt-Universität zu Berlin, Mathematisch-Naturwissenschaftliche Fakultät, 2016. http://dx.doi.org/10.18452/17416.

Full text
Abstract:
Mit dem Begriff Wellenleiterquantenelektrodynamik (WQED) wird gemeinhin die Physik des quantisierten und in eindimensionalen Wellenleitern geführten Lichtes in Wechselwirkung mit einzelnen Emittern bezeichnet. In dieser Arbeit untersuche ich Effekte der WQED für einzelne Dreiniveausysteme (3NS) bzw. Paare von Zweiniveausystemen (2NS), die in den Wellenleiter eingebettet sind. Hierzu bediene ich mich hauptsächlich numerischer Methoden und betrachte die Modellsysteme im Rahmen der Drehwellennäherung. Ich untersuche die Dynamik der Streuung einzelner Photonen an einzelnen, in den Wellenleiter eingebetteten 3NS. Dabei analysiere ich den Einfluss dunkler bzw. nahezu dunkler Zustände der 3NS auf die Streuung und zeige, wie sich mit Hilfe stationärer elektrischer Treibfelder gezielt auf die Streuung einwirken lässt. Ich quantifiziere Verschränkung zwischen dem Lichtfeld im Wellenleiter und den Emittern mit Hilfe der Schmidt-Zerlegung und untersuche den Einfluss der Form der Einhüllenden eines Einzelphotonpulses auf die Ausbeute der Verschränkungserzeugung bei der Streuung des Photons an einem einzelnen Lambda-System im Wellenleiter. Hier zeigt sich, dass die Breite der Einhüllenden im k-Raum und die Emissionszeiten der beiden Übergänge des 3NS die maßgeblichen Parameter darstellen. Abschließend ergründe ich die Emissionsdynamik zweier im Abstand L in den Wellenleiter eingebetteter 2NS. Diese Dynamik wird insbesondere durch kavitätsartige und polaritonische Zustände des Systems aus Wellenleiter und Emitter ausschlaggebend beeinflusst. Bei der kollektiven Emission der 2NS treten - abhängig vom Abstand L - Sub- bzw. Superradianz auf. Dabei nimmt die Intensität dieser Effekte mit längerem Abstand L zu. Diese Eigenart lässt sich auf die Eindimensionalität des Wellenleiters zurückführen.<br>The field of waveguide quantum electrodynamics (WQED) deals with the physics of quantised light in one-dimensional (1D) waveguides coupled to single emitters. In this thesis, I investigate WQED effects for single three-level systems (3LS) and pairs of two-level systems (2LS), respectively, which are embedded in the waveguide. To this end, I utilise numerical techniques and consider all model systems within the rotating wave approximation. I investigate the dynamics of single-photon scattering by single, embedded 3LS. In doing so, I analyse the influence of dark and almost-dark states of the 3LS on the scattering dynamics. I also show how stationary electrical driving fields can control the outcome of the scattering. I quantify entanglement between the waveguide's light field and single emitters by utilising the Schmidt decomposition. I apply this formalism to a lambda-system embedded in a 1D waveguide and study the generation of entanglement by scattering single-photon pulses with different envelopes on the emitter. I show that this entanglement generation is mainly determined by the photon's width in k-space and the 3LS's emission times. Finally, I explore the emission dynamics of a pair of 2LS embedded by a distance L into the waveguide. These dynamics are primarily governed by bound states in the continuum and by polaritonic atom-photon bound states. For collective emission processes of the two 2LS, sub- and superradiance appear and depend strongly on the 2LS's distance: the effects increase for larger L. This is an exclusive property of the 1D nature of the waveguide.
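Since the Schmidt decomposition is the entanglement measure named in this abstract, a minimal numerical sketch may help: the Schmidt coefficients of a bipartite pure state are the singular values of the reshaped state vector. The example state, dimensions and function name below are assumptions, not the thesis code.

```python
import numpy as np

def schmidt_entropy(psi, dim_a, dim_b):
    """Schmidt decomposition of a bipartite pure state via SVD;
    returns the Schmidt coefficients and the entanglement entropy (in bits)."""
    coeffs = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    p = coeffs**2
    p = p[p > 1e-15]                       # discard numerically zero weights
    return coeffs, float(-np.sum(p * np.log2(p)))

# example: a Bell-like state of an emitter qubit and a single photonic mode
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(schmidt_entropy(psi, 2, 2)[1])       # -> 1.0, i.e. one ebit of entanglement
```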
APA, Harvard, Vancouver, ISO, and other styles
35

Plinke, Burkhard. "Größenanalyse an nicht separierten Holzpartikeln mit regionenbildenden Algorithmen am Beispiel von OSB-Strands." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-98518.

Full text
Abstract:
Bei strukturorientierten, aus relativ großen Holzpartikeln aufgebauten Holzwerkstoffen wie z.B. OSB (oriented strand board) addieren sich die gerichteten Festigkeiten der einzelnen Lagen je nach Orientierung der Partikel und der Verteilung ihrer Größenparameter. Wünschenswert wäre eine Messung der Partikelgeometrie und Orientierung möglichst im Prozess, z.B. am Formstrang vor der Presse direkt durch den „Blick auf das Vlies“. Bisher sind regelmäßige on-line-Messungen der Spangeometrie aber nicht möglich, und Einzelspanmessungen werden nicht vorgenommen, weil sie zu aufwändig wären. Um die Partikelkonturen zunächst hinreichend für die Vermessung zu restaurieren und dann zu vermessen, muss ein mehrstufiges Verfahren angewendet werden, das eine Szene mit Strands und mehr oder weniger deutlichen Kanten zunächst als „Grauwertgebirge“ auffasst. Zur Segmentierung reicht ein Watershed-Algorithmus nicht aus. Auch ein zweistufiger Kantendetektor nach Canny liefert allein noch kein ausreichendes Ergebnis, weil sich keine geschlossenen Objektkonturen ergeben. Hinreichend dagegen ist ein komplexes Verfahren auf der Grundlage der Höhenschichtzerlegung und nachfolgenden Synthese: Nach einer Transformation der Grauwerte des Bildes in eine reduzierte, gleichverteilte Anzahl von Höhenschichten werden zwischen diesen die lokalen morphologischen Gradienten berechnet und herangezogen für die Rekonstruktion der ursprünglichen Spankonturen. Diese werden aus den Höhenschichten aufaddiert, wobei allerdings nur Teilflächen innerhalb eines für die gesuchten Spangrößen plausiblen Größenintervalls einbezogen werden, um Störungen zu unterdrücken. Das Ergebnis der Rekonstruktion wird zusätzlich verknüpft mit den bereits durch einen Canny-Operator im Originalbild detektierten deutlichen Kanten und morphologisch bereinigt. Diese erweiterte Höhenschichtanalyse ergibt ausreichend segmentierte Bilder, in denen die Objektgrenzen weitgehend den Spankonturen entsprechen. Bei der nachfolgenden Vermessung der Objekte werden Standard-Algorithmen eingesetzt, wobei sich die Approximation von Spankonturen durch momentengleiche Ellipsen als sinnvoll erwies. Verbliebene Fehldetektionen können bei der Vermessung unterdrückt werden durch Formfaktoren und zusätzliche Größenintervalle. Zur Darstellung und Charakterisierung der Größenverteilungen für die Länge und die Breite wurden die nach der Objektfläche gewichtete, linear skalierte Verteilungsdichte (q2-Verteilung), die Verteilungssumme und verschiedene Quantile verwendet. Zur Umsetzung und Demonstration des Zusammenwirkens der verschiedenen Algorithmen wurde auf der Basis von MATLAB das Demonstrationsprogramm „SizeBulk“ entwickelt, das Bildfolgen verarbeiten kann und mit dem die verschiedenen Varianten der Bildaufbereitung und Parametrierung durchgespielt werden können. Das Ergebnis des Detektionsverfahrens enthält allerdings nur die vollständigen Konturen der ganz oben liegenden Objekte; Objekte unterhalb der Außenlage sind teilweise verdeckt und können daher nur unvollständig vermessen werden. Zum Test wurden daher synthetische Bilder mit vereinzelten und überlagerten Objekten bekannter Größenverteilung erzeugt und dem Detektions- und Messverfahren unterworfen. Dabei zeigte sich, dass die Größenstatistiken durch den Überlagerungseffekt und auch die Spanorientierung zwar beeinflusst werden, dass aber zumindest die Modalwerte der wichtigsten Größenparameter Länge und Breite meist erkennbar bleiben. 
Als Versuchsmaterial dienten außer den synthetischen Bildern verschiedene Sortimente von OSB-Strands aus Industrie- und Laborproduktion. Sie wurden sowohl manuell vereinzelt als auch zu einem Vlies arrangiert vermessen. Auch bei realen Strands zeigten sich gleiche Einflüsse der Überlagerung auf die Größenverteilungen wie in der Simulation. Es gilt aber auch hier, dass die Charakteristika verschiedener Spankontingente bei gleichen Aufnahmebedingungen und Auswerteparametern gut messbar sind bzw. dass Änderungen in der gemessenen Größenverteilung eindeutig den geometrischen Eigenschaften der Späne zugeordnet werden können. Die Eignung der Verarbeitungsfolge zur Charakterisierung von Spangrößenverteilungen bestätigte sich auch an Bildern, die ausschließlich am Vlies auf einem Formstrang aufgenommen wurden. Zusätzlich wurde nachgewiesen, dass mit der erweiterten Höhenschichtanalyse auch Bilder von Spanplattenoberflächen ausgewertet werden könnten und daraus auf die Größenverteilung der eingesetzten Deckschichtspäne geschlossen werden kann. Das vorgestellte Verfahren ist daher eine gute und neuartige Möglichkeit, prozessnah an Teilflächen von OSB-Vliesen anhand von Grauwertbildern die Größenverteilungen der Strands zu charakterisieren und eignet sich grundsätzlich für den industriellen Einsatz. Geeignete Verfahren waren zumindest für Holzpartikel bisher nicht bekannt. Diese Möglichkeit, Trends in der Spangrößenverteilung automatisch zu erkennen, eröffnet daher neue Perspektiven für die Prozessüberwachung<br>The strength of wood-based materials made of several layers of big and oriented particles like OSB (oriented strand board) is a superposition of the strengths of the layers according to the orientation of the particles and depending from their size distribution. It would be desirable to measure particle geometry and orientation close to the production process, e.g. with a “view onto the mat”. Currently, continuous on-line measurements of the particle geometry are not possible, while measurements of separated particles would be too costly and time-consuming. Before measuring particle shapes they have to be reconstructed in a multi-stage procedure which considers an image scene with strands as “gray value mountains”. Segmentation using a watershed algorithm is not sufficient. Also a two-step edge detector according to Canny does not yield closed object shapes. A multi-step procedure based on threshold decomposition and recombination however is successful: The gray values in the image are transformed into a reduced and uniformly distributed set of threshold levels. The local morphological gradients between these levels are used to re-build the original particle shapes by adding the threshold levels. Only shapes with a plausible size corresponding to real particle shapes are included in order to suppress noise. The result of the reconstruction from threshold levels is then matched with the result of the strong edges in the original image, which had been detected using a Canny operator, and is finally cleaned with morphological operators. This extended threshold analysis produces sufficiently segmented images with object shapes corresponding extensively to the particle shapes. Standard algorithms are used to measure geometric features of the objects. An approximation of particle shapes with ellipses of equal moments of inertia is useful. Remaining incorrectly detected objects are removed by form factors and size intervals. 
Size distributions for the parameters length and width are presented and characterized as density distribution histograms, weighted by the object area and linearly scaled (q2 distribution), as well as the cumulated distribution and different quantiles. A demonstration software "SizeBulk" based on MATLAB has been developed to demonstrate the computation and the interaction of algorithms. Image sequences can be processed and different variations of image preprocessing and parametrization can be tested. However, the detection procedure yields complete shapes only for those particles in the top layer. Objects in lower layers are partially hidden and cannot be measured completely. Artificial images with separated and with overlaid objects with a known size distribution were generated to study this effect. It was shown that size distributions are influenced by this covering effect and also by the strand orientation, but that at least the modes of the most important size parameters length and width remain in evidence. Artificial images and several samples with OSB strands from industrial and laboratory production were used for testing. They were measured as single strands as well as arrangements similar to an OSB mat. For real strands, the same covering effects on the size distributions were revealed as in the simulation. Under stable image acquisition conditions and using similar processing parameters the characteristics of these samples can well be measured, and changes in the size distributions are definitely due to the geometric properties of the strands. The suitability of the processing procedure for the characterization of strand size distributions could also be confirmed for images acquired from OSB mats in a production line. Moreover, it could be shown that the extended threshold analysis is also suitable to evaluate images of particle board surfaces and to draw conclusions about the size distribution of the top layer particles. Therefore, the method presented here is a novel possibility to measure size distributions of OSB strands through the evaluation of partial gray value images of the mat surface. In principle, this method is suitable to be transferred to an industrial application. So far, methods that address the problem of detecting trends of the strand size distribution were not known, and this work shows new perspectives for process monitoring.
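As a loose, hypothetical illustration of the measurement chain described above (segmentation of a gray-value image, size filtering, approximation by ellipses of equal moments), the following Python/scikit-image sketch substitutes a simple Otsu threshold for the extended threshold-decomposition analysis; it is not the SizeBulk implementation, and the function name and parameters are assumptions.

```python
import numpy as np
from skimage import feature, filters, measure, morphology

def measure_strands(gray, min_area=500, max_area=50000):
    """Segment bright strand-like regions in a 2-D grayscale float image and
    return (length, width, orientation) of the fitted equal-moment ellipses."""
    edges = feature.canny(gray, sigma=2.0)                  # strong edges (Canny step)
    binary = gray > filters.threshold_otsu(gray)            # stand-in for threshold decomposition
    binary = morphology.binary_closing(binary & ~edges, morphology.disk(2))
    labels = measure.label(binary)
    results = []
    for region in measure.regionprops(labels):
        if min_area <= region.area <= max_area:             # keep plausible strand sizes only
            results.append((region.major_axis_length,        # "length"
                            region.minor_axis_length,        # "width"
                            region.orientation))
    return results
```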
APA, Harvard, Vancouver, ISO, and other styles
36

Müller, Hannes. "Ein Konzept zur numerischen Berechnung inkompressibler Strömungen auf Grundlage einer diskontinuierlichen Galerkin-Methode in Verbindung mit nichtüberlappender Gebietszerlegung." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 1999. http://nbn-resolving.de/urn:nbn:de:swb:14-992350020281-96843.

Full text
Abstract:
A new combination of techniques for the numerical computation of incompressible flow is presented. The temporal discretization is based on the discontinuous Galerkin formulation. Both constant (DG(0)) and linear (DG(1)) approximation in time are discussed. In the case of DG(1), an iterative method reduces the problem to a sequence of problems, each with the dimension of the DG(0) approach. For the semi-discrete problems a Galerkin/least-squares method is applied. Furthermore, a non-overlapping domain decomposition method can be used for a parallelized computation. The main advantage of this approach is the low amount of information which must be exchanged between the subdomains. Due to the small bandwidth, a workstation cluster is a suitable platform. On the other hand, this method is efficient only for a small number of subdomains. The interface condition is of Robin/Robin type, and for the Navier-Stokes equations a formulation introducing an additional pressure interface condition is used. Additionally, a suggestion for the implementation of the standard k-epsilon turbulence model with a special wall function is made in this context. All the features mentioned above are implemented in a code called ParallelNS. Using this code, the approach was verified on a large number of examples ranging from simple advection-diffusion problems to turbulent convection in a closed cavity.
APA, Harvard, Vancouver, ISO, and other styles
37

Phruksahiran, Narathep. "Polarimetrische Streuungseigenschaften und Fokussierungsmethoden zur quantitativen Auswertung der polarimetrischen SAR-Daten." Doctoral thesis, Universitätsbibliothek Chemnitz, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-107764.

Full text
Abstract:
Synthetic aperture radar (SAR) provides a quasi-photographic image of the illuminated ground surface together with additional information that depends on the transmitted and received polarisation of the waves. A useful application of polarimetric SAR data is the classification of the ground structure based on the polarimetric scattering properties. In this context, the present work deals with the development and investigation of new polarimetric focusing functions for SAR data processing based on the polarimetric backscattering properties, which can lead to an alternative quantitative evaluation of polarimetric SAR data. The physical optics approximation is used for the numerical computation of the backscattered electric fields of canonical targets under SAR geometry, taking the polarisation state into account. The polarimetric radar cross sections are computed from the backscattered electric fields. A SAR simulator is developed for processing data of the E-SAR system of DLR. The polarimetric radar cross section approach enables an approximate numerical computation of the backscattering properties of canonical targets in both the co-polarised and the cross-polarised channels. In the SAR data processing, the raw data sets are processed in the range direction with the reference function of a point target. For the azimuth compression, four reference functions are used: the reference function of a point target, the polarimetric focusing function of a flat plate, the polarimetric focusing function of a dihedral reflector and the polarimetric focusing function of a trihedral reflector. The quantitative evaluation of the SAR data is carried out by means of the Pauli decomposition theorem, the differential reflectivity and the linear depolarisation ratio.
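To make the polarimetric evaluation step concrete, below is a minimal sketch of the standard Pauli decomposition of a monostatic scattering matrix mentioned in the abstract; the function name and the toy values are assumptions, and this is not the author's SAR simulator.

```python
import numpy as np

def pauli_components(S_hh, S_hv, S_vv):
    """Pauli decomposition of a monostatic (reciprocal) scattering matrix."""
    a = (S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface-like) scattering
    b = (S_hh - S_vv) / np.sqrt(2)   # even-bounce (dihedral-like) scattering
    c = np.sqrt(2) * S_hv            # cross-polarised / rotated-dihedral term
    return a, b, c

# toy example: an ideal flat plate (S_HH = S_VV, no cross-pol response)
print(pauli_components(1 + 0j, 0j, 1 + 0j))   # only the first component is non-zero
```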
APA, Harvard, Vancouver, ISO, and other styles
38

Maron, Oded, and Tomas Lozano-Perez. "Visible Decomposition: Real-Time Path Planning in Large Planar Environments." 1998. http://hdl.handle.net/1721.1/5935.

Full text
Abstract:
We describe a method called Visible Decomposition for computing collision-free paths in real time through a planar environment with a large number of obstacles. This method divides space into local visibility graphs, ensuring that all operations are local. The search time is kept low since the number of regions is proved to be small. We analyze the computational demands of the algorithm and the quality of the paths it produces. In addition, we show test results on a large simulation testbed.
APA, Harvard, Vancouver, ISO, and other styles
39

TARTAGLIONE, Michela. "Analysis and decomposition of frequency modulated multicomponent signals." Doctoral thesis, 2021. http://hdl.handle.net/11573/1516269.

Full text
Abstract:
Frequency modulated (FM) signals are studied in many research fields, including seismology, astrophysics, biology, acoustics, animal echolocation, radar and sonar. They are referred to as multicomponent signals (MCS), as they are generally composed of multiple waveforms with specific time-dependent frequencies, known as instantaneous frequencies (IFs). Many applications require the extraction of signal characteristics (i.e. amplitudes and IFs); that is why MCS decomposition is an important topic in signal processing. It consists of the recovery of each individual mode and it is often performed by IF separation. The task becomes very challenging if the signal modes overlap in the TF domain, i.e. they interfere with each other in the so-called non-separability region. For this reason, a general solution to MCS decomposition is not available yet. As a matter of fact, the existing methods addressing overlapping modes share the same limitations: they are parametric, therefore they adapt only to the assumed signal class, or they rely on signal-dependent and parametric TF representations; otherwise, they are interpolation techniques, i.e. they almost ignore the information corrupted by interference and they recover the IF curves by fitting procedures, resulting in high computational cost and poor performance in noise. This thesis aims at overcoming these drawbacks, providing efficient tools for dealing with MCS with interfering modes. An extended review of the state of the art is provided, as well as the mathematical tools and the main definitions needed to introduce the topic. Then, the problem is addressed following two main strategies: the former is an iterative approach that aims at enhancing the resolution of the MCS in the TF domain; the latter is a transform-based approach that combines TF analysis and the Radon transform for separating individual modes. As a main advantage, the methods derived from both the iterative and the transform-based approaches are non-parametric, as they do not require specific assumptions on the signal class. As confirmed by the experimental results and the comparative studies, the proposed approach contributes to improving the current state of the art.
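To make the notion of interfering modes concrete, here is a small hypothetical example that builds a two-component FM signal whose instantaneous frequencies cross and computes a TF representation with SciPy's STFT; all parameter values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.signal import stft

# two-component FM signal with crossing instantaneous frequencies
fs, T = 1024, 2.0
t = np.arange(0, T, 1 / fs)
if1 = 100 + 150 * t                   # chirp sweeping up (Hz)
if2 = 400 - 150 * t                   # chirp sweeping down; the IFs cross at t = 1 s
x = (np.cos(2 * np.pi * np.cumsum(if1) / fs)
     + np.cos(2 * np.pi * np.cumsum(if2) / fs))

f, tau, Z = stft(x, fs=fs, nperseg=128)
print(Z.shape)   # TF plane in which the two ridges overlap in the non-separability region
```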
APA, Harvard, Vancouver, ISO, and other styles
40

He, Kun. "Automated Measurement of Neuromuscular Jitter Based on EMG Signal Decomposition." Thesis, 2007. http://hdl.handle.net/10012/3332.

Full text
Abstract:
The quantitative analysis of decomposed electromyographic (EMG) signals reveals information for diagnosing and characterizing neuromuscular disorders. Neuromuscular jitter is an important measure that reflects the stability of the operation of a neuromuscular junction. It is conventionally measured using single fiber electromyographic (SFEMG) techniques. SFEMG techniques require substantial physician dexterity and subject cooperation. Furthermore, SFEMG needles are expensive, and their re-use increases the risk of possible transmission of infectious agents. Using disposable concentric needle (CN) electrodes and automating the measurement of neuromuscular jitter would greatly facilitate the study of neuromuscular disorders. An improved automated jitter measurement system based on the decomposition of CN-detected EMG signals is developed and evaluated in this thesis. Neuromuscular jitter is defined as the variability of time intervals between two muscle fiber potentials (MFPs). Given the candidate motor unit potentials (MUPs) of a decomposed EMG signal, which is represented by a motor unit potential train (MUPT), the automated jitter measurement system designed in this thesis can be summarized as a three-step procedure: 1) identify isolated motor unit potentials in a MUPT, 2) detect the significant MFPs of each isolated MUP, 3) track significant MFPs generated by the same muscle fiber across all isolated MUPs, select typical MFP pairs, and calculate jitter. In step one, a minimal spanning tree-based 2-phase clustering algorithm was developed for identifying isolated MUPs in a train. For the second step, a pattern recognition system was designed to classify detected MFP peaks. Finally, the neuromuscular jitter is calculated based on the tracked and selected MFP pairs in the third step. These three steps were simulated and evaluated independently using synthetic EMG signals, and the whole system is preliminarily implemented and evaluated using a small simulated database. Compared to previous work in this area, the algorithms in this thesis showed better performance and great robustness across a variety of EMG signals, so that they can be applied widely to similar scenarios. The whole system developed in this thesis can be implemented in a large EMG signal decomposition system and validated using real data.
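As a small illustration of the quantity being automated (not the thesis system itself): once MFP pairs have been tracked, jitter is conventionally summarized as the mean consecutive difference (MCD) of the inter-potential intervals. A minimal sketch with hypothetical interval values:

```python
import numpy as np

def mcd_jitter(ipi):
    """Neuromuscular jitter as the mean consecutive difference (MCD)
    of the inter-potential intervals (IPIs) between two fibre potentials."""
    ipi = np.asarray(ipi, dtype=float)
    return np.mean(np.abs(np.diff(ipi)))

# hypothetical IPIs (in microseconds) from consecutive firings of one motor unit
print(mcd_jitter([310, 325, 305, 340, 318, 322]))
```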
APA, Harvard, Vancouver, ISO, and other styles
41

Wenzel, Anne. "Komponentenzerlegung des Regelleistungsbedarfs mit Methoden der Zeitreihenanalyse." Master's thesis, 2010. https://monarch.qucosa.de/id/qucosa%3A19478.

Full text
Abstract:
In this thesis, the minute-by-minute data of the control reserve demand (the sum of secondary control reserve and minute reserve) of one control area for the months April to December 2009 were subjected to a time series analysis and decomposed into components according to the classical component model. These are the trend component, determined by a moving average with a window length of one hour; two periodic components, with period lengths of one hour and one day, respectively; and the remainder component, which was modelled with an ARIMA(2,1,5) process. In the future, the resulting model of the control reserve demand should be improved by adding a seasonal (annual) component. This was not possible within this thesis, since no data spanning several years were available. In addition, it can be examined to what extent forecasts are feasible with the component model. For this purpose the trend component should be chosen differently, since the approach chosen here follows the data too closely. The second part of the task of this thesis consisted in identifying explanatory components, i.e. possible relationships between the control reserve demand and various conceivable causes. The load profile and the wind power feed-in were examined as potential causes. There was a slight positive correlation between the load time series and that of the control reserve demand, and a small negative correlation between the wind power feed-in time series and that of the control reserve demand. Outline: Introduction; 1 Initial situation and technical background; 2 Mathematical foundations; 3 Analysis of the control reserve data; 4 Summary and outlook.
APA, Harvard, Vancouver, ISO, and other styles
42

Krautz, Maria. "Wege zur Optimierung magnetokalorischer Fe-basierter Legierungen mit NaZn13-Struktur für die Kühlung bei Raumtemperatur." Doctoral thesis, 2014. https://tud.qucosa.de/id/qucosa%3A28714.

Full text
Abstract:
Magnetic cooling is an established technology in the field of low-temperature physics. However, the scalability of the magnetocaloric effect and the possibility of a compact design also offer a broad range of applications at room temperature. Particularly noteworthy is the possibility of adjusting the magnetostructural transition temperature of La(Fe, Si)13-based materials to the working temperature of a cooling unit. The production of precursor material by melt spinning is of high technological relevance, since, compared with conventionally cast bulk material, the subsequent annealing time can be drastically reduced [1]. The present work first addresses the optimal annealing conditions for the formation of the relevant magnetocaloric phase in rapidly solidified ribbon material. By varying the annealing temperature, the influence of secondary phases on the magnetocaloric effect is assessed. Furthermore, with an optimal choice of the alloy composition, a large magnetocaloric effect and the desired working temperature range can be obtained. Particular attention is paid to relating the substitution effect (here: Si for Fe) and the expansion of the lattice by hydrogenation to the resulting magnetocaloric effect. A further point is the investigation of the long-term stability of the properties of hydrogenated ribbon and bulk material. Fundamental and comprehensive investigations of the substitution of iron by manganese and of the resulting influence on phase formation, transition temperature and the magnetocaloric effect, in particular after hydrogenation, are also presented. The results of the present work thus allow an assessment of different strategies for optimising the magnetocaloric properties of La(Fe, Si)13.
APA, Harvard, Vancouver, ISO, and other styles
43

Jee, Joo-Eun [Verfasser]. "Mechanistic studies on the formation and decomposition reactions of iron(III) porphyrin complexes with NO = Mechanistische Untersuchungen der Bildungs- und Zersetzungsreaktionen von Eisen(III)-Porphyrin-Komplexen mit NO / vorgelegt von Joo-Eun Jee." 2007. http://d-nb.info/984903968/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Held, Joachim. "Ein Gebietszerlegungsverfahren für parabolische Probleme im Zusammenhang mit Finite-Volumen-Diskretisierung." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-B39E-E.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Rioux-Lavoie, Damien. "Méthode SPH implicite d’ordre 2 appliquée à des fluides incompressibles munis d’une frontière libre." Thèse, 2017. http://hdl.handle.net/1866/19377.

Full text
Abstract:
The objective of this thesis is to introduce a new implicit, purely Lagrangian smoothed particle hydrodynamics (SPH) method for the resolution of the two-dimensional incompressible Navier-Stokes equations in the presence of a free surface. Our discretization scheme is based on that of Kéou Noutcheuwa and Owens [19]. We have treated the free surface by combining the multiple boundary tangent (MBT) method of Yildiz et al. [43] with the boundary conditions on the auxiliary fields of Yang and Prosperetti [42]. In this way, we obtain a discretization scheme of order $\mathcal{O}(\Delta t ^2)$ and $\mathcal{O}(\Delta x ^2)$, subject to certain constraints on the smoothing length $h$. First, we tested our scheme with a two-dimensional Poiseuille flow, by means of which we analyze the discretization error of the SPH method. Then, we tried to simulate a two-dimensional Newtonian extrusion problem. Unfortunately, although the behavior of the free surface is satisfactory, we encountered numerical problems with the singularity at the outlet of the die.
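As a point of reference, the following sketch shows only the generic SPH building blocks that any such scheme rests on, namely the 2D cubic spline kernel and the summation estimate of the density; the implicit second-order scheme and the MBT free-surface treatment developed in the thesis are considerably more involved and are not reproduced here.

```python
# Generic SPH ingredients: the standard 2D cubic spline kernel and the density
# summation rho_i = sum_j m_j W(|x_i - x_j|, h). Particle layout and parameters
# below are illustrative only.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 2D cubic spline kernel W(r, h) with support radius 2h."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)   # 2D normalisation constant
    w = np.zeros_like(q)
    inner = q <= 1.0
    outer = (q > 1.0) & (q <= 2.0)
    w[inner] = sigma * (1.0 - 1.5 * q[inner] ** 2 + 0.75 * q[inner] ** 3)
    w[outer] = sigma * 0.25 * (2.0 - q[outer]) ** 3
    return w

def sph_density(positions, masses, h):
    """Summation density estimate at every particle."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_kernel(dist, h)).sum(axis=1)

# Particles on a regular lattice of spacing dx; h is a small multiple of dx.
dx = 0.05
xs, ys = np.meshgrid(np.arange(0, 1, dx), np.arange(0, 1, dx))
positions = np.column_stack([xs.ravel(), ys.ravel()])
masses = np.full(len(positions), 1000.0 * dx ** 2)   # water-like reference density
rho = sph_density(positions, masses, h=1.3 * dx)
print(rho.mean(), rho.max())  # interior values approach the reference density of 1000
```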
APA, Harvard, Vancouver, ISO, and other styles
46

LANTERI, ALESSANDRO. "Novel methods for Intrinsic dimension estimation and manifold learning." Doctoral thesis, 2016. http://hdl.handle.net/11573/905425.

Full text
Abstract:
One of the most challenging problems in modern science is how to deal with the huge amount of data that today's technologies provide. Several difficulties may arise. For instance, the number of samples may be too big and the stream of incoming data may be faster than the algorithm needed to process them. Another common problem is that when the data dimension grows so does the volume of the space, leading to a sparsification of the available data. This may cause problems in the statistical analysis since the amount of data needed to support our conclusions often grows exponentially with the dimension. This problem is commonly referred to as the Curse of Dimensionality and it is one of the reasons why high dimensional data cannot be analyzed efficiently with traditional methods. Classical methods for dimensionality reduction, like principal component analysis and factor analysis, may fail due to a nonlinear structure of the data. In recent years several methods for nonlinear dimensionality reduction have been proposed. A general way to model a high dimensional data set is to represent the observations as noisy samples drawn from a probability distribution mu in the real coordinate space of D dimensions. It has been observed that the essential support of mu can often be well approximated by low dimensional sets. These sets can be assumed to be low dimensional manifolds embedded in the ambient dimension D. A manifold is a topological space which globally may not be Euclidean but in a small neighborhood of each point behaves like a Euclidean space. In this setting we call intrinsic dimension the dimension of the manifold, which is usually much lower than the ambient dimension D. Roughly speaking, the intrinsic dimension of a data set can be described as the minimum number of variables needed to represent the data without significant loss of information. In this work we propose different methods aimed at estimating the intrinsic dimension. The first method we present models the neighbors of each point as stochastic processes, in such a way that a closed form likelihood function can be written. This leads to a closed form maximum likelihood estimator (MLE) for the intrinsic dimension, which has all the good features that an MLE can have. The second method is based on a multiscale singular value decomposition (MSVD) of the data. This method performs singular value decomposition (SVD) on neighborhoods of increasing size and finds an estimate of the intrinsic dimension by studying the behavior of the singular values as the radius of the neighborhood increases. We also introduce an algorithm to estimate the model parameters when the data are assumed to be sampled around an unknown number of planes with different intrinsic dimensions, embedded in a high dimensional space. This kind of model has many applications in computer vision and pattern recognition, where the data can be described by multiple linear structures or need to be clustered into groups that can be represented by low dimensional hyperplanes. The algorithm relies on both MSVD and spectral clustering, and it is able to estimate the number of planes, their dimension as well as their arrangement in the ambient space. Finally, we propose a novel method for manifold reconstruction based on a multiscale approach, which approximates the manifold from coarse to fine scales with increasing precision.
The basic idea is to produce, at a generic scale j, a piecewise linear approximation of the manifold using a collection of low dimensional planes and to use those planes to create clusters for the data. At scale j + 1, each cluster is independently approximated by another collection of low dimensional planes. The process is iterated until the desired precision is achieved. This algorithm is fast because it is highly parallelizable and its computational time is independent of the sample size. Moreover, this method automatically constructs a tree structure for the data. This feature can be particularly useful in applications which require an a priori tree data structure. The aim of the collection of methods proposed in this work is to provide algorithms to learn and estimate the underlying structure of high dimensional data sets.
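The multiscale SVD idea can be illustrated with a short sketch: singular values of local neighbourhoods are computed for increasing radii, and the intrinsic dimension is read off from the number of singular values that behave like tangent directions. The threshold and the synthetic data set below are illustrative choices, not the estimator developed in the thesis.

```python
# Illustration of the multiscale SVD idea on noisy samples from a 2-dimensional
# sphere embedded in R^10 (intrinsic dimension 2).
import numpy as np

def local_singular_values(X, center, radius):
    """Singular values of the centred neighbourhood of `center` with given radius."""
    nbrs = X[np.linalg.norm(X - center, axis=1) <= radius]
    nbrs = nbrs - nbrs.mean(axis=0)
    return np.linalg.svd(nbrs, compute_uv=False) / np.sqrt(len(nbrs))

rng = np.random.default_rng(0)
D, n = 10, 5000
sphere = rng.normal(size=(n, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
X = np.zeros((n, D))
X[:, :3] = sphere
X += 0.01 * rng.normal(size=X.shape)          # ambient noise

center = X[0]
for radius in (0.2, 0.4, 0.8):
    sv = local_singular_values(X, center, radius)
    # Tangent singular values grow roughly linearly with the radius, the others
    # stay near the noise/curvature level; counting those above a (here arbitrary)
    # gap threshold gives the dimension estimate.
    estimate = int(np.sum(sv > 0.5 * sv[0]))
    print(radius, np.round(sv[:4], 3), "estimated dimension:", estimate)
```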
APA, Harvard, Vancouver, ISO, and other styles
47

Phruksahiran, Narathep. "Polarimetrische Streuungseigenschaften und Fokussierungsmethoden zur quantitativen Auswertung der polarimetrischen SAR-Daten." Doctoral thesis, 2012. https://monarch.qucosa.de/id/qucosa%3A19855.

Full text
Abstract:
Synthetic aperture radar (SAR) provides a quasi-photographic image of the illuminated ground surface with additional information that depends on the transmitted and received polarisation of the waves. A useful application of polarimetric SAR data is the classification of the ground structure based on its polarimetric scattering properties. In this context, the present thesis is concerned with the development and investigation of new polarimetric focusing functions for SAR data processing based on the polarimetric backscattering properties, which can lead to an alternative quantitative evaluation of polarimetric SAR data. The physical optics approximation is used for the numerical computation of the backscattered electric fields of canonical targets under SAR geometry, taking the polarisation state into account. From the backscattered electric fields, the polarimetric radar cross sections are computed. A SAR simulator is developed for processing data from DLR's E-SAR system. The polarimetric radar cross section approach enables the approximate numerical computation of the backscattering properties of canonical targets in both co-polar and cross-polar operation. In the SAR data processing, the raw data sets are processed in the range direction with the reference function of a point target. For azimuth compression, four reference functions are used, namely the reference function of a point target, the polarimetric focusing function of a flat plate, the polarimetric focusing function of a dihedral reflector and the polarimetric focusing function of a trihedral reflector. The quantitative evaluation of the SAR data is carried out by means of the Pauli decomposition theorem, the differential reflectivity and the linear depolarisation ratio.
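As background for the quantitative evaluation mentioned above, the following sketch computes the standard Pauli scattering vector of a reciprocal 2x2 scattering matrix; the example matrix is a synthetic placeholder, not E-SAR data.

```python
# Pauli decomposition of a reciprocal scattering matrix S (S_hv = S_vh): the
# Pauli vector k_P = [S_hh + S_vv, S_hh - S_vv, 2*S_hv] / sqrt(2) separates
# odd-bounce, even-bounce and cross-polarised (volume-like) contributions.
import numpy as np

def pauli_vector(S):
    """Pauli scattering vector k_P of a 2x2 complex scattering matrix S."""
    s_hh, s_hv = S[0, 0], S[0, 1]
    s_vv = S[1, 1]
    return np.array([s_hh + s_vv, s_hh - s_vv, 2.0 * s_hv]) / np.sqrt(2.0)

# Example: an ideal dihedral (double-bounce) reflector has S_hh = -S_vv, so all
# of its power falls into the second Pauli component.
S_dihedral = np.array([[1.0 + 0j, 0.0], [0.0, -1.0 + 0j]])
k = pauli_vector(S_dihedral)
print(np.abs(k) ** 2)   # -> [0., 2., 0.]: pure even-bounce scattering
```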
APA, Harvard, Vancouver, ISO, and other styles
48

ARRIGHETTI, Walter. "Mathematical models and methods for Electromagnetism on fractal geometries." Doctoral thesis, 2007. http://hdl.handle.net/11573/1656600.

Full text
Abstract:
This work summarizes the research path taken by Walter Arrighetti during his three years of Doctorate of Research in Electromagnetism at Università degli Studi di Roma “La Sapienza,” Rome, under the guidance of Professor Giorgio Gerosa. This work was mainly motivated by the search for ever simpler models to introduce complex geometries (like fractal ones, for example, which are complicated but far from being ‘irregular’) into physical field theories like Classical Electrodynamics, which stand at the base of most contemporary applied research activities: from antennas (of any size, bandwidth and operational distance) to waveguides & resonators (for devices ranging from IC motherboards to high-speed fibre channel links), to magnetic resonance (MRI) devices (for both diagnostic and research purposes), all the way up to particle accelerators. All of these models need not only a solid physical base, but also a specifically crafted ensemble of mathematical methods, in order to tackle problems which “standard-geometry” models (both in the continuum and the discrete cases) are not best suited for. During his previous years of study towards the Laurea degree in Electronic Engineering, the author used different approaches toward Fractal Electrodynamics, from purely analytical ones, to computer-assisted numerical simulations of applied electromagnetic structures (both radiating and wave-guiding), down to algebraic-topological ones. The latter approaches, more often than not, proved to be the best way to start with, because the author found out that self-similarity (a property which many complicated geometries, even non-fractal ones, seem at least to tend to possess) can be easily interpreted as a topological symmetry, well described using “ad hoc” nontrivial algebraic languages. Whatever can be successfully described in the language of Algebra (via numbers, symmetry groups, graphs, polynomials, etc.) is then always simplified (or “quotiented”, so to speak, in a stricter mathematical language) and, when numerical computation takes the way towards the solution of a specific applied problem, those simplifications come in handy to reduce its complexity. For example, the strict self-similarity possessed by some fractals (like those generated via an Iterated Function System, or IFS) makes it possible to numerically store the geometrical data for a fractal object as a sequence of simpler and simpler data which are instantly recovered by a computer starting from the simplest data (like simplices, squares/cubes, circles/spheres and regular polygons/polytopes). For the same reason, all the physical properties that depend on the geometry (or the topology, i.e. basically the number of “holes” or inner connections) of the domain can be reduced, estimated or even be completely known a priori, even before a numerical simulation is performed. In this work, several of these methods (coming from apparently different branches of pure and applied Mathematics) are presented and finally joined with the equations of Electromagnetism to solve some more or less applied problems. Since many of the mathematical tools used to build the studied models and methods are advanced and generally not sufficiently known to experts in either of such different fields, the first two Chapters are devoted to a brief introduction of some purely mathematical topics. 
In that context, the author found that the best way to accomplish this was to rewrite all those different results from different branches of both pure and applied Mathematics in a formalism as solid and unified as possible, with continuous links back and forth between different topics (and to the more applied Chapters that follow). That approach is seldom found in most graduate-level texts. For example, very similar mathematical objects may even be called or classified in different ways, according to the different mathematical contexts they are introduced in, which is exactly the opposite of the philosophy that has guided the writing of these first Chapters. On the other hand, simpler and more elementary mathematical definitions, formalisms or electromagnetic problems, when not referenced elsewhere, can be found in [9], Arrighetti W., Analisi di Strutture Elettromagnetiche Frattali, the author's Laurea degree dissertation (currently only in Italian). The most original part of the work is in the last three Chapters, where, always using the same “language” and with the help of cross-links to the earlier Chapters as well as to the Bibliography, methods are introduced and then applied to model some electromagnetic problems (previously either unsolved, or already known but here solved with a different, usually simpler, or at least more elegant approach).
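As a generic illustration of the strict self-similarity produced by an Iterated Function System (the property exploited in the work to store fractal geometries compactly), the sketch below draws the Sierpinski gasket with the classical chaos game; it is not a reproduction of any electromagnetic model from the thesis.

```python
# Chaos-game rendering of the Sierpinski gasket: three affine IFS maps, each
# contracting the plane by 1/2 towards one vertex of an equilateral triangle.
import numpy as np
import matplotlib.pyplot as plt

vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])

def chaos_game(n_points=50_000, seed=0):
    rng = np.random.default_rng(seed)
    points = np.empty((n_points, 2))
    p = np.array([0.3, 0.3])
    for i in range(n_points):
        # Apply a randomly chosen IFS map at each step.
        p = 0.5 * (p + vertices[rng.integers(3)])
        points[i] = p
    return points

pts = chaos_game()
plt.scatter(pts[:, 0], pts[:, 1], s=0.1, color="k")
plt.gca().set_aspect("equal")
plt.show()
```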
APA, Harvard, Vancouver, ISO, and other styles
49

Kundu, Madan Gopal. "Advanced Modeling of Longitudinal Spectroscopy Data." Thesis, 2014. http://hdl.handle.net/1805/5454.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI). Magnetic resonance (MR) spectroscopy is a neuroimaging technique. It is widely used to quantify the concentration of important metabolites in brain tissue. An imbalance in the concentration of brain metabolites has been found to be associated with the development of neurological impairment. There has been an increasing trend of using MR spectroscopy as a diagnostic tool for neurological disorders. We established statistical methodology to analyze data obtained from MR spectroscopy in the context of HIV-associated neurological disorder. First, we developed novel methodology to study the association of a marker of neurological disorder with the MR spectrum from the brain and how this association evolves with time. The entire problem fits into the framework of a scalar-on-function regression model with the individual spectrum being the functional predictor. We extended one of the existing cross-sectional scalar-on-function regression techniques to the longitudinal set-up. Advantages of the proposed method include: (1) the ability to model a flexible time-varying association between the response and the functional predictor and (2) the ability to incorporate prior information. The second part of the research attempts to study the influence of clinical and demographic factors on the progression of brain metabolites over time. In order to understand the influence of these factors in a fully non-parametric way, we proposed the LongCART algorithm to construct regression trees with longitudinal data. Such a regression tree helps to identify smaller subpopulations (characterized by baseline factors) with differential longitudinal profiles and hence helps us to identify the influence of baseline factors. Advantages of the LongCART algorithm include: (1) it maintains the type-I error rate in determining the best split, (2) it substantially reduces computation time and (3) it is applicable even when observations are taken at subject-specific time points. Finally, we carried out an in-depth analysis of longitudinal changes in the brain metabolite concentrations in three brain regions, namely white matter, gray matter and basal ganglia, in chronically infected HIV patients enrolled in the HIV Neuroimaging Consortium study. We studied the influence of important baseline factors (clinical and demographic) on these longitudinal profiles of brain metabolites using the LongCART algorithm in order to identify subgroups of patients at higher risk of neurological impairment. Partial research support was provided by the National Institutes of Health grants U01-MH083545, R01-CA126205 and U01-CA086368.
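For orientation, the following sketch implements the cross-sectional scalar-on-function regression that the thesis extends to the longitudinal setting: the coefficient function is expanded in a small basis so that the model reduces to ordinary least squares on quadrature-weighted scores. The synthetic curves stand in for MR spectra, and none of the names below come from the thesis.

```python
# Scalar-on-function regression via basis expansion: y_i = integral X_i(t) beta(t) dt + noise,
# with beta(t) expanded in a small Fourier-type basis and the integral approximated
# by a Riemann sum. Data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 100                              # subjects, grid points
t = np.linspace(0.0, 1.0, m)

# Synthetic functional predictors X_i(t) and a smooth true coefficient function.
X = rng.normal(size=(n, m)).cumsum(axis=1) / np.sqrt(m)
beta_true = np.sin(2 * np.pi * t)
y = X @ beta_true / m + 0.1 * rng.normal(size=n)

# Fourier-type basis (intercept plus three harmonics) for beta(t).
n_harmonics = 3
B = np.column_stack(
    [np.ones(m)]
    + [f(2 * np.pi * (j + 1) * t) for j in range(n_harmonics) for f in (np.sin, np.cos)]
)

# Quadrature-weighted scores Z[i, l] ~ integral X_i(t) B_l(t) dt, then ordinary least squares.
Z = X @ B / m
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
beta_hat = B @ coef
print(np.round(np.corrcoef(beta_hat, beta_true)[0, 1], 3))  # close to 1
```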
APA, Harvard, Vancouver, ISO, and other styles