
Theses on the topic "Element matriciel"



Consult the top 50 theses for your research on the topic "Element matriciel".


You can also download the full text of each academic publication in PDF format and read its abstract online whenever it is available in the metadata.

Explore theses on a wide variety of disciplines and organise your bibliography correctly.

1

Vaquer, Charles. "Optimisation du dimensionnement et comportement des matrices frettees". Toulouse 3, 1988. http://www.theses.fr/1988TOU30024.

Full text
2

Ernenwein, René. "Mise au point d'un systeme de programmes vectorise et multitaches pour le calcul ab initio scf/ci sur cray 2". Université Louis Pasteur (Strasbourg) (1971-2008), 1988. http://www.theses.fr/1988STR13128.

Full text
Abstract
Application of the McMurchie–Davidson formalism to the vectorised computation of one- and two-electron integrals, exploiting the combined vector and parallel architecture of the Cray 2 supercomputer. A substantial performance gain is obtained in the SCF calculation through judicious use of supermatrix elements reordered with respect to their first two indices. By jointly exploiting the large memory and the vector indirect addressing of the Cray 2, the computation of these elements and of their contributions to the Fock matrix is vectorised.
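The abstract mentions reordered supermatrix elements and vectorised Fock-matrix contributions. As a loose, modern illustration of what a Fock build from two-electron integrals looks like (a dense NumPy sketch with made-up array names, not the thesis's Cray-2 supermatrix code):

```python
import numpy as np

def fock_matrix(hcore, eri, dm):
    """Closed-shell Fock matrix from the one-electron Hamiltonian `hcore`,
    the two-electron integrals eri[p, q, r, s] = (pq|rs) (chemists' notation)
    and the density matrix `dm`.  Illustrative dense-array sketch only."""
    coulomb = np.einsum('pqrs,rs->pq', eri, dm)    # J_pq = sum_rs (pq|rs) D_rs
    exchange = np.einsum('prqs,rs->pq', eri, dm)   # K_pq = sum_rs (pr|qs) D_rs
    return hcore + coulomb - 0.5 * exchange

# Tiny random example with n = 4 basis functions (numbers are meaningless physically).
n = 4
rng = np.random.default_rng(0)
eri = rng.random((n, n, n, n))
eri = 0.5 * (eri + eri.transpose(1, 0, 2, 3))      # impose (pq|rs) = (qp|rs) for illustration
hcore = rng.random((n, n)); hcore = 0.5 * (hcore + hcore.T)
dm = rng.random((n, n)); dm = 0.5 * (dm + dm.T)
print(fock_matrix(hcore, eri, dm))
```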
3

Drechsler, Florian. "Über die Lösung von elliptischen Randwertproblemen mittels Gebietszerlegungstechniken, Hierarchischer Matrizen und der Methode der finiten Elemente". Doctoral thesis, Universitätsbibliothek Leipzig, 2011. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-68462.

Full text
Abstract
In this thesis we develop a solver for elliptic boundary value problems. We discretise a boundary value problem with the finite element method and obtain a system of equations. Using domain decomposition techniques we subdivide the domain of the differential equation and can define subproblems of the boundary value problem. The domain decomposition defines a hierarchy of decompositions, which we record in a domain decomposition tree. Based on this tree we then define a solver for the boundary value problem, computing the various matrices of the solver by means of the so-called HDD algorithm (hierarchical domain decomposition). Most of the matrices to be constructed are dense, which is why we approximate them by hierarchical matrices. With the help of hierarchical matrices, these matrices can be built and stored with almost linear cost, and the cost of the matrix operations is likewise almost linear. In order to use hierarchical matrices within the HDD algorithm, the hierarchical-matrix technique has to be extended: among other things, we introduce restricted cluster trees, restricted block cluster trees and a generalised addition for hierarchical matrices. In addition, we introduce a new cluster tree construction tailored to the HDD algorithm. Combining the HDD algorithm with hierarchical matrices yields a solver that can be computed with almost linear cost; the cost of computing a solution, as well as the storage cost, is also almost linear. Furthermore, we give several modifications of the HDD algorithm for additional applications. We also discuss the possibilities for parallelisation, since the domain decomposition splits the boundary value problem into independent subproblems that parallelise very well. We conclude the thesis with numerical tests that confirm the theoretical statements.
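The key ingredient behind the hierarchical matrices mentioned above is the low-rank approximation of admissible (well-separated) blocks. A minimal sketch of that single building block, using a truncated SVD on a dense block, is given below; it is illustrative only and is not the HDD algorithm or the H-matrix arithmetic of the thesis:

```python
import numpy as np

def low_rank_block(block, k):
    """Rank-k approximation A ≈ U @ V of a dense admissible block via truncated SVD.
    Storage drops from m*n to k*(m+n) entries when k is small."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k, :]          # factors U (m x k) and V (k x n)

# Example: the smooth kernel 1/(x - y) evaluated on two well-separated clusters.
x = np.linspace(0.0, 1.0, 200)[:, None]
y = np.linspace(3.0, 4.0, 200)[None, :]
block = 1.0 / (x - y)
U, V = low_rank_block(block, k=6)
print(np.linalg.norm(block - U @ V) / np.linalg.norm(block))  # small relative error
```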
4

Ozdamar, Huseyin Hasan. "A Stiffened Dkt Shell Element". Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/2/12605741/index.pdf.

Full text
Abstract
A stiffened DKT shell element is formulated for the linear static analysis of stiffened plates and shells. Three-noded triangular shell elements and two-noded beam elements, with 18 and 12 degrees of freedom respectively, are used in the formulation. The stiffeners follow the nodal lines of the shell element, and the eccentricity of the stiffener is taken into account. The dynamic and stability characteristics of the element are also investigated. With the developed computer program, the results obtained with the proposed element agree fairly well with the existing literature.
5

Lopes, Lidia Velazquez. "Sorption of the platinum-group elements in selected solid matrices". Master's thesis, University of Cape Town, 2003. http://hdl.handle.net/11427/4210.

Full text
Abstract
Summary in English.
Bibliography: leaves 70-75.
Recent research on the platinum-group elements (PGE) has shown increased concentrations in environmental samples, probably as a result of the widespread use of PGE (Pt, Pd and Rh in particular) as catalysts in the chemical and car industry. Most of the recent research on PGE focuses on the analysis of concentrations in environmental samples exposed to anthropogenic sources of PGE, but there are very few studies that have investigated sorption behaviour of PGE in soils.
6

Billing, Caren. "The determination of trace elements in complex matrices by electrochemical techniques". Pretoria : [s.n.], 2000. http://upetd.up.ac.za/thesis/available/etd-03272006-114615/.

Full text
7

Back, Sung-Yong. "A shear-flexible finite element model for lateral torsional buckling analysis of thin-walled open beams". Diss., Georgia Institute of Technology, 1992. http://hdl.handle.net/1853/20999.

Full text
8

Odi, A. R. A. "Bonded Repair of Composite Structures; A Finite Element Approach". Thesis, Department of Materials and Medical Sciences, 2009. http://hdl.handle.net/1826/3893.

Full text
Abstract
This thesis addresses the issues surrounding the application of the finite element method to analyse composite structure repairs, with an emphasis on aircraft applications. A comprehensive literature survey has been carried out for this purpose and the results are presented. A preliminary study and a comparative study of different modelling approaches have been completed. These studies aim to explore and identify the problems in modelling repairs on simple composite panels, with special attention given to adhesive modelling. Three modelling approaches have been considered: Siener's model, which is an extension of the traditional plane strain 2D model used for adhesively bonded joints; Bait's model, which is a promising new approach; and a full 3D model. These studies have shown that these methods are complementary, providing different insights into bonded repairs. They have also highlighted the need for a new modelling approach which will provide an overall view of bonded repairs. Improved modelling approaches have been developed for externally bonded patch and flush repairs. These models enable the study of adhesive failure as well as composite adherend failures. These approaches have been applied to real repairs and the predicted results compared to experimental data. Four case studies have been conducted: external bonded patch repairs to composite plates, a scarf joint for bonded repairs, a flat panel repaired with a scarfed patch and a repaired curved panel. These case studies have shown that bonded repairs to composite structures can be analysed successfully using PC-based commercial finite element codes.
9

Odi, A. Randolph A. "Bonded repair of composite structures : a finite element approach". Thesis, Cranfield University, 1998. http://dspace.lib.cranfield.ac.uk/handle/1826/3893.

Full text
Abstract
This thesis addresses the issues surrounding the application of the finite element method to analyse composite structure repairs, with an emphasis on aircraft applications. A comprehensive literature survey has been carried out for this purpose and the results are presented. A preliminary study and a comparative study of different modelling approaches have been completed. These studies aim to explore and identify the problems in modelling repairs on simple composite panels, with special attention given to adhesive modelling. Three modelling approaches have been considered: Siener's model, which is an extension of the traditional plane strain 2D model used for adhesively bonded joints; Bait's model, which is a promising new approach; and a full 3D model. These studies have shown that these methods are complementary, providing different insights into bonded repairs. They have also highlighted the need for a new modelling approach which will provide an overall view of bonded repairs. Improved modelling approaches have been developed for externally bonded patch and flush repairs. These models enable the study of adhesive failure as well as composite adherend failures. These approaches have been applied to real repairs and the predicted results compared to experimental data. Four case studies have been conducted: external bonded patch repairs to composite plates, a scarf joint for bonded repairs, a flat panel repaired with a scarfed patch and a repaired curved panel. These case studies have shown that bonded repairs to composite structures can be analysed successfully using PC-based commercial finite element codes.
10

Zanoni, Gian Marco. "Analisi distribuzioni degli elementi delle matrici di un controllo h-infinito". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amslaurea.unibo.it/7724/.

Full text
Abstract
This thesis analyses an optimal control problem that I developed in collaboration with Ing. Stefano Varisco and Dott.ssa Francesca Mincigrucci at Ferrari Spa in Maranello. The task was to analyse the data of an H-infinity controller; to do so I used the numerical simulation programs Matlab and Simulink. The first chapter presents the theory of systems of differential equations in state-space form and analyses their properties. The second chapter introduces the theory of automatic control, and in particular optimal control. The third chapter analyses in detail the control technique used to tackle the problem, namely H-infinity control. Finally, the fourth and last chapter specifies the model used and reports the numerical implementation of the control algorithm and the analysis of the resulting data.
11

Elom, Nwabueze. "Human health risk assessment of potentially toxic elements (PTEs) from environmental matrices". Thesis, Northumbria University, 2012. http://nrl.northumbria.ac.uk/15594/.

Full text
Abstract
In assessing the human health risk of potentially toxic elements (PTEs), it is not the concentration of PTEs in environmental matrices that is of greatest concern but the fraction that is absorbed into the body via the exposure pathways. The determination of this fraction (i.e. the bioaccessible fraction) through the application of bioaccessibility protocols is the focus of this work. The study investigated the human health risk of PTEs (As, Cd, Cr, Cu, Pb, Mn, Ni and Zn) from oral ingestion of soil/dust and from inhalation of urban street dust and airborne dust (PM10). To assess the health risk via oral ingestion of soil and dust, total PTEs were determined in twenty-nine soil samples collected from children's playing fields and ninety urban street dusts collected from six cities. Analysis of the total PTE content of these samples by ICP-MS revealed high Pb concentrations (> 450 mg/kg) in 3 playground soils and 32 urban street dusts. A detailed quantitative risk assessment (DQRA) carried out for the playgrounds showed that no significant possibility of significant harm exists in the playgrounds. The daily Pb intake that a child might receive from a particular dust sample, based on a 50 mg/day ingestion rate, was calculated and exceeded the estimated tolerable daily intake for oral ingestion in 4 cities. The bioaccessible PTEs were determined in both the soil and dust samples using the Unified BARGE Method, and the results showed that in all the samples the PTEs solubilised more in the gastric phase than in the intestinal phase. A new method, a simulated epithelial lung fluid (SELF), was developed and used to assess the respiratory bioaccessibility of Pb from inhalable urban dust (< 10 µm). Low bioaccessibility (< 10 %) was recorded in all the samples analysed.
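For orientation, the exposure arithmetic referred to above (concentration times soil/dust ingestion rate, scaled by the bioaccessible fraction) is simple; the sketch below uses invented numbers purely to show the shape of the calculation, not the thesis's data or its DQRA methodology:

```python
# Illustrative exposure arithmetic only; all values below are hypothetical.
concentration_mg_per_kg = 800.0      # Pb in dust, mg/kg (made-up value)
ingestion_rate_kg_per_day = 50e-6    # 50 mg of dust ingested per day = 5e-5 kg/day
bioaccessible_fraction = 0.57        # gastric-phase bioaccessibility (made-up value)
body_weight_kg = 15.0                # young child (made-up value)

intake_mg_per_day = concentration_mg_per_kg * ingestion_rate_kg_per_day
bioaccessible_intake = intake_mg_per_day * bioaccessible_fraction
intake_per_kg_bw = bioaccessible_intake / body_weight_kg

print(f"total intake       : {intake_mg_per_day * 1000:.1f} ug/day")
print(f"bioaccessible      : {bioaccessible_intake * 1000:.1f} ug/day")
print(f"per kg body weight : {intake_per_kg_bw * 1000:.2f} ug/kg bw/day")
```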
12

Nguyen, Manh Cuong. "Elements continus de plaques et coques avec prise en compte du cisaillement transverse : application à l'interaction fluide-structure". Paris 6, 2003. http://www.theses.fr/2003PA066466.

Full text
13

Jones, O. R. "Resonance ionisation mass spectrometry of trace elements in metallic and organic host matrices". Thesis, Swansea University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.637710.

Full text
Abstract
In this work, sputter initiated resonance ionisation mass spectrometry has been used for quantitative analysis of trace amounts of elements in both metallic and organic host matrices. In parallel with the experimental work, theoretical methods have been developed to describe both the interaction of the laser radiation with the plume of sputtered particles and the processes involved in resonance ionisation. On the experimental side, general one-colour, two-step resonance ionisation schemes were demonstrated, using a reflectron type time-of-flight mass spectrometer combined with a duoplasmatron primary sputter ion source and a Nd:YAG pumped dye laser system. Timing electronics were developed to precisely synchronise the pulsed laser system to the sputter process and the mass spectrometric analysis. For metal matrices, the elements titanium, chromium, iron, nickel and molybdenum were probed in the 290-300nm wavelength range. The detection of an enhanced molybdenum signal at 294.421nm is believed to be the first resonance ionisation signal to have been obtained for this element. Aluminium was probed in the 305-310nm wavelength range. For organic matrices, the feasibility of using spatially resolved resonance ionisation mass spectrometry for the analysis of potentially toxic element accumulation in neural tissue was investigated. In particular it was shown that aluminium, which is linked to brain disorders such as Alzheimer's disease, could be detected in brain tissue at concentrations of around 100ppm, with a detection limit of about 3ppm using the current set-up. On the theoretical side, the use of a quantum mechanical density matrix approach in describing the process of resonance ionisation was shown to be more generally applicable than a simple rate-equation or Schrödinger equation approach. In particular, the saturation of the resonant and ionisation steps, and the power broadening of the resonant transition at high laser fluences were investigated, with satisfactory agreement found between theory and experiment in both cases.
14

Borchers, Brian Edward. "Uniquely clean elements, optimal sets of units and counting minimal sets of units". Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1829.

Full text
Abstract
Let R be a ring. We say x ∈ R is clean if x = e + u where u is a unit and e is an idempotent (e² = e). R is clean if every element of R is clean. I will give the motivation for clean rings, which comes from Fitting's Lemma for Vector Spaces. This leads into the ABCD lemma, which is the foundation of a paper by Camillo, Khurana, Lam, Nicholson and Zhou. Semi-perfect rings are a well-known type of ring. I will show a relationship that occurs between clean rings and semi-perfect rings which will allow me to utilize what is known already about semi-perfect rings. It is also important to note that I will be using the Fundamental Theorem of Torsion-free Modules over Principal Ideal Domains to work with finite dimensional vector spaces. These finite dimensional vector spaces are in fact strongly clean, which simply means they are clean and the idempotent and unit commute. This additionally means that since L = e + u, Le = eL. Several types of rings are clean, including a weaker version of commutative Von Neumann regular rings, Duo Von Neumann regular, which I have proved. The goal of my research is to find out how many ways there are to write matrices or other ring elements as sums of units and idempotents. To do this, I have come up with a method that is self-contained, drawing from but not requiring the entire literature of Nicholson. We also examine sets other than idempotents, such as upper-triangular and row-reduced elements, and examine the possibility or exclusion that an element may be represented as the sum of an upper-triangular (resp. row-reduced) element and a unit. These and other element properties highlight some of the complexity of examining an additive property when the underlying properties are multiplicative.
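Because the definition is constructive, cleanness can be checked by brute force in a small finite ring. The following sketch (not taken from the dissertation) enumerates all 2×2 matrices over GF(2) and confirms that each one can be written as an idempotent plus a unit:

```python
import itertools
import numpy as np

def all_matrices():
    """All sixteen 2x2 matrices over GF(2)."""
    for bits in itertools.product((0, 1), repeat=4):
        yield np.array(bits, dtype=int).reshape(2, 2)

def is_idempotent(e):
    return np.array_equal((e @ e) % 2, e)

def is_unit(u):
    return int(round(np.linalg.det(u))) % 2 == 1   # invertible over GF(2) iff det is odd

def is_clean(x):
    """x is clean if x = e + u with e idempotent and u a unit (arithmetic mod 2)."""
    return any(is_idempotent(e) and is_unit((x - e) % 2) for e in all_matrices())

print(all(is_clean(x) for x in all_matrices()))    # True: every element of M_2(GF(2)) is clean
```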
15

Murphy, Steven. "Methods for solving discontinuous-Galerkin finite element equations with application to neutron transport". Phd thesis, Toulouse, INPT, 2015. http://oatao.univ-toulouse.fr/14650/1/murphy.pdf.

Full text
Abstract
We consider high order discontinuous-Galerkin finite element methods for partial differential equations, with a focus on the neutron transport equation. We begin by examining a method for preprocessing block-sparse matrices, of the type that arise from discontinuous-Galerkin methods, prior to factorisation by a multifrontal solver. Numerical experiments on large two and three dimensional matrices show that this pre-processing method achieves a significant reduction in fill-in, when compared to methods that fail to exploit block structures. A discontinuous-Galerkin finite element method for the neutron transport equation is derived that employs high order finite elements in both space and angle. Parallel Krylov subspace based solvers are considered for both source problems and $k_{eff}$-eigenvalue problems. An a-posteriori error estimator is derived and implemented as part of an h-adaptive mesh refinement algorithm for neutron transport $k_{eff}$-eigenvalue problems. This algorithm employs a projection-based error splitting in order to balance the computational requirements between the spatial and angular parts of the computational domain. An hp-adaptive algorithm is presented and results are collected that demonstrate greatly improved efficiency compared to the h-adaptive algorithm, both in terms of reduced computational expense and enhanced accuracy. Computed eigenvalues and effectivities are presented for a variety of challenging industrial benchmarks. Accurate error estimation (with effectivities of 1) is demonstrated for a collection of problems with inhomogeneous, irregularly shaped spatial domains as well as multiple energy groups. Numerical results are presented showing that the hp-refinement algorithm can achieve exponential convergence with respect to the number of degrees of freedom in the finite element space
16

Song, Huimin. "Rigorous joining of advanced reduced-dimensional beam models to 3D finite element models". Diss., Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33901.

Full text
Abstract
This dissertation developed a method that can accurately and efficiently capture the response of a structure by rigorous combination of a reduced-dimensional beam finite element model with a model based on full two-dimensional (2D) or three-dimensional (3D) finite elements. As a proof of concept, a joint 2D-beam approach is studied for planar in-plane deformation of strip-beams. This approach is developed to obtain the understanding needed for the joint 3D-beam model, and a Matlab code is developed to implement the 2D-beam approach. For the joint 2D-beam approach, the static response of a basic 2D-beam model is studied. The whole beam structure is divided into two parts: the root part, where the boundary condition is applied, is constructed as a 2D model, and the free end part is constructed as a beam model. To assemble the two models of different dimensionality, a transformation matrix is used to enforce deflection continuity or load continuity at the interface. Once the transformation matrix from deflection continuity or from load continuity is obtained, the 2D part and the beam part can be assembled together and solved as one linear system. For the joint 3D-beam approach, the static and dynamic response of a basic 3D-beam model is studied, and a Fortran program is developed to implement it. For the uniform beam constrained at the root end, similarly to the joint 2D-beam analysis, the whole beam structure is divided into two parts: the root part, where the boundary condition is applied, is constructed as a 3D model, and the free end part is constructed as a beam model. To assemble the two models, load continuity at the interface is used to combine the 3D model with the beam model. The load continuity at the interface is achieved by stress recovery using the variational-asymptotic method; the beam properties and warping functions required for stress recovery are obtained from a VABS constitutive analysis. Once the transformation matrix from load continuity is obtained, the 3D part and the beam part can be assembled together and solved as one linear system. For a non-uniform beam example, the whole structure is divided into several parts, where the root end and the non-uniform parts are constructed as 3D models and the uniform parts are constructed as beams. At all the interfaces, load continuity is used to connect the 3D models with the beam models, and stress recovery using the variational-asymptotic method is used to achieve it. For each interface there is a transformation matrix from load continuity; once all the transformation matrices are available, the 3D parts and the beam parts are assembled together and solved as one linear system.
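As a toy analogue of the assembly idea described above, two independently built stiffness matrices can be tied together by Boolean transformation matrices that enforce displacement continuity at a shared interface node and then solved as one linear system. The 1D bar sketch below is illustrative only; it is not the dissertation's 2D/3D-to-beam coupling and does not involve VABS or stress recovery:

```python
import numpy as np

def bar_stiffness(n_elems, k=1.0):
    """Stiffness matrix of a 1D bar made of n_elems identical springs of stiffness k."""
    K = np.zeros((n_elems + 1, n_elems + 1))
    for e in range(n_elems):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

# Part A (root, nodes 0..2) and part B (tip, nodes 2..4) share interface node 2.
K_a, K_b = bar_stiffness(2), bar_stiffness(2)
n_global = 5

# Boolean transformation matrices: local DOFs = T @ global DOFs (deflection continuity).
T_a = np.zeros((3, n_global)); T_a[[0, 1, 2], [0, 1, 2]] = 1.0
T_b = np.zeros((3, n_global)); T_b[[0, 1, 2], [2, 3, 4]] = 1.0

K = T_a.T @ K_a @ T_a + T_b.T @ K_b @ T_b      # one assembled linear system
f = np.zeros(n_global); f[-1] = 1.0            # unit load at the free end
free = slice(1, None)                          # clamp node 0
u = np.zeros(n_global)
u[free] = np.linalg.solve(K[free, free], f[free])
print(u)   # linear displacement field; tip displacement = 4 for k = 1
```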
17

Sharaf, Jamal Mahmood. "Elemental analysis of biological matrices using emission and transmission tomographic techniques". Thesis, University of Surrey, 1994. http://epubs.surrey.ac.uk/844448/.

Full text
Abstract
The main objective of this study has been to investigate the feasibility of using tomographic techniques for non-destructive analysis. A potentially useful technique with neutrons as probes for material characterisation is presented. The technique combines the principles of reconstructive tomography with instrumental neutron activation analysis (INAA) so that elemental distributions in a section through a specimen can be mapped. Neutron induced gamma-ray emission tomography (NIGET) technique, where prompt or delayed gamma-rays can be detected in a tomographic mode, has been developed for samples irradiated in the core of a nuclear reactor and used in studies of different biological matrices. The capabilities of the technique will be illustrated using a spatial resolution of 1 mm. The quantitative usefulness of NIGET depends on the accuracy of compensation for the effect of scattering and attenuation as well as determination of the tomographic system characteristics which contribute to the intrinsic measurement process. It will be shown how quantitative information about the induced radionuclide concentration distribution in a specimen can be obtained when compensation for scattered gamma-rays is taken into account employing a high resolution semiconductor detector and a method of scattering correction based upon the use of three energy windows to collect emission data. For attenuation correction an iterative method which combined emission and transmission measurements has been implemented and its performance was compared to the performance of a number of other attenuation correction algorithms. The work involved investigation into the role of a number of factors which influence the accuracy of data acquisition. An efficiency-resolution figure of merit as a function of collimator efficiency, system resolution and object diameter has been defined. Further, a number of reconstruction techniques were investigated and compared for accuracy, minimum number of projections required and their ability to handle noise. Reconstruction by filtered back projection was fastest to compute, but performed poorly when compared to iterative techniques.
18

Boisa, Ndokiari. "Bioaccessibility of potentially harmful elements (PHEs) from environmental matrices and implications for human health". Thesis, Northumbria University, 2013. http://nrl.northumbria.ac.uk/13333/.

Full text
Abstract
Internationally publicized impacts upon human health associated with exposure to potentially harmful elements (PHE) have been reported globally. Particular concern has surrounded the exposure to Pb indicated by the presence of highly elevated concentrations of Pb in blood and hair samples amongst internally displaced populations (IDPs) in Mitrovica, Kosovo, following the Kosovan War (Runow, 2005). The exposure risk to humans depends in part on the potential of the PHE to mobilise from its matrices in the human digestive and respiratory systems (bioaccessibility) and enter the blood stream (bioavailability). This study utilizes physiologically based in-vitro extraction methods to assess the bioaccessibility of PHEs in surface soils and metallurgical waste in Mitrovica and assesses the potential daily ingestion of soil-bound PHEs (As, Cd, Cu, Mn, Pb, and Zn) and inhalation (Pb) of particulate matter < 10 μm (PM10). A total of 63 samples (52 surface soils and 11 mine/smelter waste) were selected based on PHE loadings and their spatial distribution. For the in-vitro oral bioaccessibility 0.3 g subsamples were analysed using the UBM method (adopted by BARGE, Wragg et al., 2009). The mean bioaccessibility of Cd, Pb and Zn in the gastric phase is 51 %, 57 % and 41 %, respectively, compared to 18 %, 16% and 14%, respectively, in the gastric-intestinal phase. The trend with As and Cu data is less consistent across the sample locations, with a mean of 20 % and 22 % in the gastric phase and 22 % and 26 % bioaccessibility in the gastric-intestinal phase, respectively. To investigate the role of mineralogy in understanding the bioaccessibility data subsamples (< 250 μm) were submitted to the British Geological Survey, Nottingham, for X-ray diffraction (XRD) analyses. Samples associated with lower bioaccessibilities typically contain a number of XRD-identifiable primary and secondary mineral phases, particularly As- and Pb-bearing arseninian pyrite, beudantite, galena and cerrusite. For the inhalation bioaccessibility, PM10 subsamples were extracted from 33 samples using a locally developed laboratory based wet method. The 0.3 g PM10 subsamples were analysed using a new tracheobronchial fluid and protocol developed as part of this study. The bioaccessibility of Pb for all the 33 samples tested ranged from 0.02 to 11 % and it is consistent with a range (0.17 to 11 %) previously reported by Harris and Silberman (1988) for Pb bioaccessibility in inhalable particulates (< 22 μm) using canine serum. Quantification of the potential human exposure risk associated with the inhalation and ingestion of soil-associated PHEs indicates the likely possibility of local populations exceeding the recommended tolerable daily intake of Pb. IEUBK model (USEPA, 2007) predicted mean blood Pb concentrations for children based on bioaccessible (ingestion) data are above the CDC level of concern (10 μg/dL).
19

Pallavicini, Nicola. "Method development for isotope analysis of trace and ultra-trace elements in environmental matrices". Doctoral thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-59705.

Full text
Abstract
The increasing load of toxic elements entering the ecosystems, as a consequence of anthropogenic processes, has grown public awareness in the last decades, resulting in a great number of studies focusing on pollution sources, transport, distribution, interactions with living organisms and remediation. Physical/chemical processes that drive the uptake, assimilation, compartmentation and translocation of heavy metals in biota has received a great deal of attention recently, since elemental concentrations and isotopic composition in biological matrices can be used as  probes of both natural and anthropogenic sources. Further they can help to evaluate fate of contaminants and to assess bioavailability of such elements in nature. While poorly defined isotopic pools, multiple sources and fractionating processes add complexity to source identification studies, tracing is hindered mainly by poorly known or unidentified fractionating factors. High precision isotope ratio measurements have found increasing application in various branches of science, from classical isotope geochronology to complex multi-tracer experiments in environmental studies. Instrumental development and refining separation schemes have allowed higher quality data to be obtained and played a major role in the recent progress of the field. The use of modern techniques such as inductively coupled plasma sector field mass spectrometry (ICP-SFMS) and multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) for trace and ultra-trace element concentrations and isotope ratio measurements have given new opportunities.  However, sources of errors must be accurately evaluated and avoided at every procedural step. Moreover, even with the utilization of sound analytical measurement protocols, source and process tracing in natural systems can be complicated further by spatial and temporal variability. The work described in the present thesis has been focused primarily on analytical method development, optimization and evaluation (including sample preparation, matrix separation, instrumental analysis and data evaluation stages) for isotopic and multi-elemental analyses in environmental samples at trace and ultra-trace levels. Special attention was paid to evaluate strengths and limitations of the methods as applied to complex natural environments, aiming at correct interpretation of isotopic results in environmental forensics. The analytical protocols covered several isotope systems of both stable (Cd, B, Cr, Cu, Fe, Tl and Zn) and radiogenic (Os, Pb and Sr) elements. Paper I was dedicated to the optimization and testing of a rapid and high sample throughput method for Os concentrations and isotope measurements by ICP-SFMS. If microwave (MW) digestion followed by sample introduction to ICP-SFMS by traditional solution nebulization (SN) offered unparalleled throughput important for processing large number of samples, high-pressure ashing (HPA) combined with gas-phase introduction (GPI) proved to be advantageous for samples with low (below 500 pg) analyte content. The method was applied to a large scale bio-monitoring case, confirming accumulation of anthropogenic Os in animals from an area affected by emissions from a stainless steel foundry. The method for Cr concentrations and isotope ratios in different environmental matrices was optimized in Paper II. 
A coupling between a high pressure/temperature acid digestion and a one pass, single column matrix separation allowed the analysis of chromites, soils, and biological matrices (first Cr isotope study in lichens and mosses) by ICP-SFMS and MC-ICP-MS. With an overall reproducibility of 0.11‰ (2σ), the results suggested a uniform isotope composition in soil depth profiles. On the other hand a strong negative correlation found between δ53Cr and Cr concentrations in lichens and mosses indicates that airborne Cr from local anthropogenic source(s) is depleted in heavy isotopes, therefore highlighting the possibility of utilization of Cr isotopes to trace local airborne pollution source from steel foundries.   Paper III describes development of high-precision Cd isotope ratio measurement by MC-ICP-MS in a variety of environmental matrices. Several digestion methods (HPA, MW, ultrawave and ashing) were tested for sample preparation, followed by analyte separation from matrix using ion-exchange chromatography. The reproducibility of the method (2σ for δ114Cd/110Cd) was found to be better than 0.1‰. The method was applied to a large number of birch leaves (n>80) collected at different locations and growth stages. Cd in birch leaves is enriched in heavier isotopes relative to the NIST SRM 3108 Cd standard with a mean δ114Cd/110Cd of 0.7‰. The fractionation is assumed to stem from sample uptake through the root system and element translocation in the plant and it exhibits profound between-tree as well as seasonal variations. The latter were compared with seasonal isotopic variations for other isotopic systems (Zn, Os, Pb) in the same trees to aid a better understanding of underlying processes. In Paper IV the number of isotope systems studied was extended to include B, Cd, Cu, Fe, Pb, Sr, Tl and Zn. The analytical procedure utilized a high pressure acid digestion (UltraCLAVE), which provides complete oxidation of the organic material in biological samples, and a two-column ion-exchange separation which represents further development of the separation scheme described in Paper III. Such sample preparation ensures low blank levels, efficient separation of matrix elements, sufficiently high analyte recoveries and reasonably high sample throughput. The method was applied to a large number of biological samples (n>240) and the data obtained represent the first combined characterization of variability in isotopic composition for eight elements in leaves, needles, lichens and mushrooms collected from a geographically confined area. To further explore the reason of variability observed, soil profiles from the same area were analyzed for both concentrations and isotopic compositions of B, Cd, Cr, Cu, Fe, Pb, Sr, Tl and Zn in Paper V. Results of this study suggest that the observed high variability can be dependent on operationally-defined fractions (assessed by applying a modified SEP to process soil samples) and on the typology of the individual matrix analyzed (assessed through the coupling of soil profile results to those obtained for other matrices: lysimetric waters, mushrooms, litter, needles, leaves and lichens). The method development conducted in this work highlights the importance of considering all possible sources of biases/errors as well as possibility to use overlapping sample preparation schemes for multi-isotope studies. 
The results obtained for different environmental matrices represent a starting point for discussing the role of natural isotopic variability in isotope applications and forensics, and the importance of in-depth knowledge of the multiple parameters affecting the variability observed.
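For readers unfamiliar with the notation, the δ values quoted above (e.g. δ114Cd/110Cd, δ53Cr) follow the usual per-mil convention for isotope ratios relative to a standard; assuming the thesis uses the standard definition, for cadmium it reads:

```latex
\delta^{114/110}\mathrm{Cd} \;=\;
\left(
  \frac{\left(^{114}\mathrm{Cd}/^{110}\mathrm{Cd}\right)_{\text{sample}}}
       {\left(^{114}\mathrm{Cd}/^{110}\mathrm{Cd}\right)_{\text{standard}}}
  - 1
\right) \times 1000
```

expressed in ‰, so a mean δ114Cd/110Cd of 0.7‰ corresponds to a 114Cd/110Cd ratio about 0.07 % higher than that of the NIST SRM 3108 standard.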
20

Albouy, William. "De la contribution de la visco-élasto-plasticité au comportement en fatigue de composites à matrice thermoplastique et thermodurcissable". Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00942294.

Full text
Abstract
The present study aims to understand the influence of the visco-elasto-plastic behaviour of a thermoplastic (PPS) and a thermoset (epoxy) matrix on the high-temperature fatigue behaviour of carbon-fibre woven composites. A fractographic analysis revealed the decisive role of the matrix-rich zones in the ±45° plies in the damage chronology and in the fatigue behaviour of cross-ply and quasi-isotropic laminates. In order to evaluate the contribution of the viscoelasticity and viscoplasticity of the thermoplastic matrix to the thermomechanical behaviour of C/TP laminates at T > Tg, a generalised Norton-type viscoelastic model was implemented in the finite element code Cast3m. A digital image correlation (DIC) technique was used to test the ability of the model to predict the laminate response in the case of structures with strong stress gradients.
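The "generalised Norton-type" model mentioned above builds on the classical Norton power law for viscous flow, quoted here in its textbook form for context only; the exact generalised formulation implemented in Cast3m in the thesis may differ:

```latex
\dot{\varepsilon}_{v} \;=\; \left( \frac{\sigma}{K} \right)^{n}
```

where K and n are material parameters identified from creep or relaxation tests.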
21

Boim, Alexys Giorgia Friol. "Human bioaccessibility and absorption by intestinal cells of potentially harmful elements from urban environmental matrices". Universidade de São Paulo, 2018. http://www.teses.usp.br/teses/disponiveis/11/11140/tde-14032019-181637/.

Full text
Abstract
Potentially harmful elements (PHE) are found naturally in soils, usually in low concentrations. However, due to the intensity of the anthropic activities, the concentrations of these elements may increase and have negative effects on the environment and human health. Methods for risk assessment may predict or indicate the level of exposure to contamination of an area. In addition to the total or pseudo-total concentration of PHE, generally extracted with acidic solutions, it is possible to determine the reactive, bioavailable and bioaccessible levels of these elements in order to evaluate the degree of soil contamination. Urban soil samples located in residential areas were collected in Piracicaba, State of São Paulo (SP) and in Santo Amaro, State of Bahia, including soils collected near a primary lead smelter area (COBRAC/Plumbum), where researchers detected elevated levels of PHE. Soils samples in an old lead metallurgy plant (Usina do Calabouço / IPT), which today belongs to the Centro Integrado de Ensino Multidisciplinar (CIEM/ Companhia de Pesquisa de Recursos Minerais (CPRM) - Geological Survey of Brazil) in Apiaí, located in the Upper Ribeira Valley (SP) were also collected. In vitro methods have been used in several countries to assess the bioaccessibility of PHE in humans. In this study, procedures based on ingestion and inhalation of soils using the Unified BARGE Method (UBM) and Artificial Lysosomal Fluid (ALF) methods were used to obtain the bioaccessible concentration in the gastrointestinal and pulmonary tract, respectively. As the bioaccessible fraction does not estimate the concentration absorbed and transported into the bloodstream, the in vitro method using Caco-2 cells, which are derived from human colon adenocarcinoma, was used to assess the amount of PHE that intestinal cells can absorb. The mineralogical data was obtained, and the sequential extraction of As, Cd, Cu, Mn, Pb and Zn was carried out to evaluate their interaction with lung fluid and gastric/gastrointestinal fluids. As expected, mine tailing samples had the highest pseudo-total concentrations of PHE in comparison to soil and sediment samples, both in the bulk soil (2 mm) and in the 250 μm and 10 μm sizes. Both respiratory and oral bioaccessibility of PHE varied widely among matrices, indicating that they were influenced by matrices´ chemistry, physical and mineralogical characteristics. The respiratory bioaccessible fraction, calculated as a percentage of the PHE pseudo-total concentrations, ranged from 13 - 109% for As; 14 - 98% for Cd; 21 - 89% for Cu; 46 - 140% for Pb, 35 - 88% for Mn and; 21 - 154% for Zn. Gastric bioaccessibility was greater than gastrointestinal bioaccessibility, ranging from 0-33% and 0-26% for As; 0-69% and 0-40% for Cd; 18-75% and 12-89% for Cu; 24-83% and 7-50% for Pb; 43-105% and 27-97% for Mn; 14-88% and 6-46% for Zn. Pseudo-total concentration provided a good estimate of respiratory and oral bioaccessibility, but the in-vitro methods provided more accurate results. Caco-2 cell line (in vitro test) was a good model for evaluating the effect of PHE exposure, but further studies on the transport and bioavailability of PHE in intestinal cells are needed.
22

Okorie, Ikechukwu Alexander. "Determination of potentially toxic elements (PTEs) and an assessment of environmental health risk from environmental matrices". Thesis, Northumbria University, 2010. http://nrl.northumbria.ac.uk/1502/.

Full text
Abstract
A former industrial site now used for recreational activities was investigated for total PTE content, uptake of the PTEs by foraged fruits and mobility of the PTEs using single extraction such as HOAc and EDTA. In order to evaluate the health risks arising from ingestion of the PTE contaminated soil, the oral bioaccessibility using in vitro physiologically based extraction test (PBET) and tolerable daily intake (TDI) or mean daily intake (MDI) was used. The PBET simulates the transition of the PTE pollutants in the soil into human gastrointestinal system while the TDI or MDI is the mass of soil that a child would require to take without posing any health risk. In addition to the former industrial site, an investigation of the urban road dust from Newcastle city centre and its environs was undertaken with the view to looking into the PTE content, oral bioaccessibility and the platinum group elements (PGEs). Optimized microwave procedure was applied to 19 samples obtained from a former industrial site (St Anthony's lead works) in Newcastle upon Tyne. Of the range of PTEs potentially present at the site as a consequence of former industrial activity (As, Cd, Cr, Cu, Ni, Pb and Zn), the majority of top soil samples indicated elevated concentrations of one or more of these PTEs. In particular, data obtained using either inductively coupled plasma mass spectrometry (ICP-MS) or flame atomic absorption spectroscopy (FAAS) indicates the high and wide concentration of Pb on the site (174 to 33,306 mg/kg). Comparing the resulting PTEs data with UK Soil Guidelines Values (SGVs) suggests at least parts of the site represent areas of potential human health risk. It was found that Pb soil values exceeded the SGV on 17 out of the 19 sampling sites; similarly for As 7 out of 19 sampling sites exceeded the SGV. While for Cd and Ni the soil levels were below the stated SGVs. Samples of foraged fruits collected from the same site were also analysed for the same PTEs. The foraged fruit was gathered over two seasons along with samples of soil from the same sampling areas, acid digested using a microwave oven, and then analysed by ICP-MS. The foraged fruits samples included blackberries, rosehips and sloes which were readily available on the site. The concentration levels of the selected elements in foraged samples varied between not detectable limits and 24.6 mg/kg (Zn). Finally, the soil-to plant transfer factor was assessed for the 7 elements. In all cases, the transfer values obtained were below 1.00,except Cd in 2007 which is 1.00, indicating that the majority of the PTE remains in the soil and that the uptake of PTE from soil to plant at this site is not significant. The determination of total or pseudo total PTE content of soil is often insufficient to assess the risk to humans. A range of extraction protocols were applied to the 19 samples urban topsoils, and report on the correlations between pseudo total PTE content and results obtained following a physiologically-based extraction procedure (oral bioaccessibility), EDTA and HOAc extraction protocols (reagent-specific available fraction), for a broad range of PTEs (As, Cd, Cu, Cr, Ni, Pb, Zn). Results of the single-reagent extraction procedures did not, in general, provide a good indication of oral bioaccessibility but shows positive correlation with the pseudo total PTE content. 
The bioaccessibility data shows that considerable variation exists both spatially across the site, and between the different PTEs, but correlates well with the pseudo-total concentrations for all elements (r2 exceeding 0.8). One of the main objectives of this work is to show the role of bioaccessibility in generic risk assessment. Comparison of the pseudo-total PTE concentrations with SGV or generic assessment criteria (GAC) indicated that all of the PTEs investigated need further action, such as receptor exposure modelling.
23

Alves, Vancler Ribeiro. "Analise de Elementos Estruturais em Hastes de Paredes Delgadas". Universidade Federal Fluminense, 2003. http://www.bdtd.ndc.uff.br/tde_busca/arquivo.php?codArquivo=285.

Full text
Abstract
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
The objective of this work is to evaluate the behaviour of structures containing thin-walled members and the bimoment stresses that arise in them, mainly in beams and plane frames. Particular cases are studied for plane frames and beams under loads that induce warping and the deformations characteristic of thin-walled bars, using analytical methods. For a broader analysis of the bimoment and shear stresses and of the structural behaviour, the cross-sectional dimensions of the open-section profiles are varied. External loads induce warping and strains due to the slenderness of thin open-section profiles, essential characteristics of a thin-walled structure, in a wider setting that cannot be captured by the theory of thick-walled beams.
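For context, in Vlasov's theory of thin-walled bars the normal stress associated with a bimoment B is usually expressed through the sectorial coordinate ω(s) and the warping constant I_ω; this is the standard textbook relation, not a formula quoted from the thesis:

```latex
\sigma_{\omega}(s) \;=\; \frac{B\,\omega(s)}{I_{\omega}},
\qquad
I_{\omega} \;=\; \int_{A} \omega^{2}\, \mathrm{d}A
```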
24

Diamoutani, Mamadou. "De quelques méthodes de calcul de valeurs propres de grandes matrices". Grenoble INPG, 1986. http://tel.archives-ouvertes.fr/tel-00321850.

Full text
Abstract
A study of several algorithms for computing eigenelements of large matrices: the power method, simultaneous Chebyshev iterations and the Lanczos algorithm, with an orthonormal basis of the dominant subspace built from the Schur form of the projection matrix. Numerical results are presented.
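Of the algorithms listed, the power method is the simplest to state; a minimal, self-contained sketch (with an illustrative test matrix, not the thesis's implementation) is:

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Dominant eigenpair of a symmetric matrix A by repeated multiplication and normalisation."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        x_new = y / np.linalg.norm(y)
        lam_new = x_new @ A @ x_new          # Rayleigh quotient
        if abs(lam_new - lam) < tol:
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x

B = np.random.default_rng(1).standard_normal((50, 50))
A = B @ B.T                                  # symmetric test matrix with a dominant eigenvalue
lam, _ = power_method(A)
print(lam, np.linalg.eigvalsh(A)[-1])        # the two values should agree
```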
25

Young, Kuao-John. "A unified approach to the formulation of non-consistent rod and beam mass matrices for improved finite element modal analysis". Diss., This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-07282008-135633/.

Full text
26

Temin, Gendron Pascale. "Approche numérique du comportement homogénéisé des composites à matrice métallique et renforts continus : validation expérimentale". Châtenay-Malabry, Ecole centrale de Paris, 1990. http://www.theses.fr/1990ECAP0163.

Full text
Abstract
After recalling the homogenisation method for periodic media, we study how the elastic characteristics of composites depend on the nature of the reinforcement (isotropic and anisotropic fibres) and on the fibre volume fraction. Ultrasonic bench measurements were carried out in parallel in order to validate the elastic characteristics obtained for certain composites. Tensile tests in two directions, longitudinal and transverse, are performed to evaluate the anisotropy of the failure behaviour of fibre-reinforced metal matrix composites. We then study the construction of macroscopic failure convexes by homogenisation, from extreme loadings on the unit cell (microstructure). We highlight the problem of the existence of limit loads, the points on the boundary of the convex, which requires some care when the microstructure combines elastic and elastoplastic materials. We are thus led to construct two approximation methods using conforming elements with reduced integration, and an approximation method using non-conforming elements. The results of these approximation methods lead to a precise bracketing of the macroscopic failure stresses. This approach also makes it possible to predict the microscopic failure modes.
27

Mach, Thomas. "Eigenvalue Algorithms for Symmetric Hierarchical Matrices". Doctoral thesis, Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-85308.

Full text
Abstract
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derived from the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition with some properties that are superior to the existing ones, which is shown by experiments. Using the HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices, however, does not lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices together with deflation and shift strategies yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices in linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log⁴ n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so there is no statement on the complexity of the algorithm besides the numerical results in Table 5.7. The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, then preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is necessary; the squared and shifted matrix (M − µI)² is positive definite, so inner eigenvalues can be computed by the combination of the folded spectrum method and PINVIT. Numerical experiments show that the approximate inversion of (M − µI)² is more expensive than the approximate inversion of M, so that the computation of the inner eigenvalues is more expensive. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive; the LDLT slicing algorithm is competitive with H-PINVIT for the computation of inner eigenvalues. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, then the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm. If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. The H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage, so they can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time for the computation of all eigenvalues. For large matrices, both algorithms are faster than the dense LAPACK function dsyev.
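The slicing idea described above rests on Sylvester's law of inertia: the number of negative pivots of an LDLᵀ factorisation of M − µI equals the number of eigenvalues of M below µ, so any single eigenvalue can be bracketed by bisection. The dense-matrix sketch below illustrates only that principle, using SciPy's LDLᵀ on an ordinary symmetric matrix rather than hierarchical-matrix arithmetic:

```python
import numpy as np
from scipy.linalg import ldl

def eigs_below(M, mu):
    """Number of eigenvalues of the symmetric matrix M smaller than mu,
    read off from the inertia of the LDL^T factorisation of M - mu*I."""
    _, d, _ = ldl(M - mu * np.eye(M.shape[0]))
    return int(np.sum(np.linalg.eigvalsh(d) < 0))   # d is block diagonal (1x1 / 2x2 blocks)

def kth_eigenvalue(M, k, lo, hi, tol=1e-10):
    """k-th smallest eigenvalue (k = 1, 2, ...) of M by bisection on the inertia count."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eigs_below(M, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 40)); A = 0.5 * (A + A.T)
print(kth_eigenvalue(A, k=3, lo=-20.0, hi=20.0))
print(np.sort(np.linalg.eigvalsh(A))[2])             # reference value from a dense solver
```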
28

Cayzac, Henri-Alexandre. "Analyses expérimentale et numérique de l'endommagement matriciel d'un matériau composite : Cas d'un pultrudé thermoplastique renforcé de fibres de verre". Thesis, Paris, ENMP, 2014. http://www.theses.fr/2014ENMP0034/document.

Texto completo
Resumen
L'utilisation croissante des matériaux composites à matrice polymère dans les structures industrielles est impulsée par le besoin de contraintes environnementales tout en conservant d'excellentes propriétés mécaniques. L'évolution des procédés de fabrication et l'émergence de la pultrusion réactive permet la production de matériaux composites à matrice thermoplastique possédant des taux de fibres très importants. Ceci leur confère les propriétés longitudinales souhaitées mais ces procédés induisent une variabilité microstructurale importante. De plus, les pièces industrielles sont bien souvent sollicitées de façon complexe induisant des contraintes multiaxiales. Ces contraintes sont alors ``ressenties'' par la microstructure du matériau composite et par la matrice confinée par les fibres notamment. La variabilité microstructurale tend alors à amplifier les contraintes. C'est dans ce contexte qu'une approche multi-échelle macro-micro (globale/locale) expérimentale et numérique a été développée. Les mécanismes de déformation, d'endommagement et de rupture ont été expérimentalement analysés à l'échelle globale du matériau composite ainsi qu'à l'échelle locale de sa microstructure. Pour ce faire, de techniques expérimentales liées à la tomographie aux rayons X ont été mises en place et permettent d'observer in-situ l'évolution de la microstructure sollicitée. Il a été observé que l'endommagement se développe au sein de la matrice thermoplastique. Un modèle de comportement de la matrice endommageable a donc été mis au point à l'aide des approches issues de la mécanique des milieux poreux et permet de rendre compte des micro-mécanismes de déformation et d'endommagement de la matrice confinée par les fibres. Une approche de type ``top-down'' a été développée. Celle-ci permet de localiser les zones critiques d'une structure industrielle composite. Le chargement appliqué localement sur la pièce sert alors de conditions aux limites sur une microstructure réelle modélisée. Ainsi, il est possible de simuler la cinétique d'endommagement, permettant de comprendre l'amorçage et la propagation de fissures dans une structure industrielle. Cette approche appliquée au cas d'une canalisation composite sous pression a permis de déterminer des pressions d'amorçage de fissures en fonction de l'enroulement du composite sur la canalisation
The use of composite materials with a polymeric matrix has attracted growing interest in industrial structures, owing to the trade-off between structural weight reduction and reliable mechanical properties. The pultrusion process with in-situ polymerization allows high fiber volume fractions, which provide the required longitudinal mechanical properties; nevertheless, such a process induces microstructural variability. These engineering structures are often subjected to complex multiaxial stresses. Such stresses are locally amplified by the microstructural variability and, in particular, by the fact that the matrix is constrained by the fibres. It is in this context that a multi-scale, top-down (global/local) experimental and numerical approach has been developed. Deformation, damage and fracture mechanisms have been experimentally studied at both the global and local scales. To do so, experimental techniques related to X-ray tomography have been used, allowing in-situ observation of damage in the composite material under different stress states. A constitutive model of the polymeric matrix has been developed using approaches from the mechanics of porous media; it accounts for the damage behavior of the matrix constrained by the fibres. A multi-scale model allowing the localization of critical zones in industrial structures has been set up. The resulting stresses in the critical zones are then applied to the modelled microstructure of the composite material. This model is able to reproduce the damage kinetics, as well as the initiation and propagation of transverse cracks through the microstructure. This approach has been used to determine crack initiation pressures for different ply orientations of a composite pipe
Los estilos APA, Harvard, Vancouver, ISO, etc.
29

Mach, Thomas. "Eigenvalue Algorithms for Symmetric Hierarchical Matrices". Doctoral thesis, Max Planck Institute for Dynamics of Complex Technical Systems, 2011. https://monarch.qucosa.de/id/qucosa%3A19684.

Texto completo
Resumen
This thesis is on the numerical computation of eigenvalues of symmetric hierarchical matrices. The numerical algorithms used for this computation are derivations of the LR Cholesky algorithm, the preconditioned inverse iteration, and a bisection method based on LDLT factorizations. The investigation of QR decompositions for H-matrices leads to a new QR decomposition. It has some properties that are superior to the existing ones, which is shown by experiments. Using the HQR decomposition to build a QR (eigenvalue) algorithm for H-matrices does not, however, lead to a more efficient algorithm than the LR Cholesky algorithm. The implementation of the LR Cholesky algorithm for hierarchical matrices, together with deflation and shift strategies, yields an algorithm that requires O(n) iterations to find all eigenvalues. Unfortunately, the local ranks of the iterates show a strong growth in the first steps. These H-fill-ins make the computation expensive, so that O(n³) flops and O(n²) storage are required. Theorem 4.3.1 explains this behavior and shows that the LR Cholesky algorithm is efficient for the simply structured Hl-matrices. There is an exact LDLT factorization for Hl-matrices and an approximate LDLT factorization for H-matrices of linear-polylogarithmic complexity. These factorizations can be used to compute the inertia of an H-matrix. With the knowledge of the inertia for arbitrary shifts, one can compute an eigenvalue by bisection. The slicing-the-spectrum algorithm can compute all eigenvalues of an Hl-matrix in linear-polylogarithmic complexity; a single eigenvalue can be computed in O(k²n log^4 n). Since the LDLT factorization for general H-matrices is only approximate, the accuracy of the LDLT slicing algorithm is limited. The local ranks of the LDLT factorization for indefinite matrices are generally unknown, so that no statement on the complexity of the algorithm can be made beyond the numerical results in Table 5.7. The preconditioned inverse iteration computes the smallest eigenvalue and the corresponding eigenvector. This method is efficient, since the number of iterations is independent of the matrix dimension. If eigenvalues other than the smallest are sought, preconditioned inverse iteration cannot simply be applied to the shifted matrix, since positive definiteness is required. The squared and shifted matrix (M-mu I)², however, is positive definite, so inner eigenvalues can be computed by combining the folded spectrum method with PINVIT. Numerical experiments show that the approximate inversion of (M-mu I)² is more expensive than the approximate inversion of M, so the computation of inner eigenvalues is more expensive as well. We compare the different eigenvalue algorithms. The preconditioned inverse iteration for hierarchical matrices is better than the LDLT slicing algorithm for the computation of the smallest eigenvalues, especially if the inverse is already available. The computation of inner eigenvalues with the folded spectrum method and preconditioned inverse iteration is more expensive. The LDLT slicing algorithm is competitive with H-PINVIT for the computation of inner eigenvalues. In the case of large, sparse matrices, specially tailored algorithms for sparse matrices, like the MATLAB function eigs, are more efficient. If one wants to compute all eigenvalues, the LDLT slicing algorithm seems to be better than the LR Cholesky algorithm.
If the matrix is small enough to be handled in dense arithmetic (and is not an Hl(1)-matrix), then dense eigensolvers, like the LAPACK function dsyev, are superior. The H-PINVIT and the LDLT slicing algorithm require only an almost linear amount of storage. They can handle larger matrices than eigenvalue algorithms for dense matrices. For Hl-matrices of local rank 1, the LDLT slicing algorithm and the LR Cholesky algorithm need almost the same time for the computation of all eigenvalues. For large matrices, both algorithms are faster than the dense LAPACK function dsyev.
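The folded spectrum idea mentioned in the abstract can be illustrated with a small dense sketch: plain inverse iteration (rather than the preconditioned iteration used in the thesis) applied to (M - mu I)² converges to the eigenpair of M closest to the shift. The matrix and shift below are illustrative.

```python
import numpy as np

def folded_spectrum_eig(M, mu, max_iter=500, tol=1e-12):
    """Eigenpair of the symmetric matrix M whose eigenvalue lies closest to the
    shift mu, obtained by plain inverse iteration on the positive definite
    folded matrix F = (M - mu*I)^2 (dense sketch with exact solves)."""
    n = M.shape[0]
    F = (M - mu * np.eye(n)) @ (M - mu * np.eye(n))
    x = np.random.default_rng(1).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(max_iter):
        y = np.linalg.solve(F, x)          # one inverse iteration step on the folded matrix
        y /= np.linalg.norm(y)
        if np.linalg.norm(y - np.sign(y @ x) * x) < tol:
            x = y
            break
        x = y
    return x @ (M @ x), x                  # Rayleigh quotient with the original matrix

if __name__ == "__main__":
    n = 50
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    M = Q @ np.diag(np.arange(1.0, n + 1.0)) @ Q.T   # known spectrum 1, 2, ..., 50
    lam, _ = folded_spectrum_eig(M, mu=5.3)
    print(f"eigenvalue closest to 5.3: {lam:.6f}")   # expected: 5.000000
```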
Los estilos APA, Harvard, Vancouver, ISO, etc.
30

Ladenheim, Scott Aaron. "Constraint Preconditioning of Saddle Point Problems". Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/319906.

Texto completo
Resumen
Mathematics
Ph.D.
This thesis is concerned with the fast iterative solution of linear systems of equations of saddle point form. Saddle point problems are a ubiquitous class of matrices that arise in a host of computational science and engineering applications. The focus here is on improving the convergence of iterative methods for these problems by preconditioning. Preconditioning is a way to transform a given linear system into a different problem for which iterative methods converge faster. Saddle point matrices have a very specific block structure and many preconditioning strategies for these problems exploit this structure. The preconditioners considered in this thesis are constraint preconditioners. This class of preconditioner mimics the structure of the original saddle point problem. In this thesis, we prove norm- and field-of-values-equivalence for constraint preconditioners associated with saddle point matrices of a particular structure. As a result of these equivalences, the number of iterations needed for convergence of a constraint-preconditioned minimal residual Krylov subspace method is bounded, independent of the size of the matrix. In particular, for saddle point systems that arise from the finite element discretization of partial differential equations (p.d.e.s), the number of iterations it takes for GMRES to converge for these constraint-preconditioned systems is bounded (asymptotically), independent of the mesh width. Moreover, we extend these results to the case where appropriate inexact versions of the constraint preconditioner are used. We illustrate this theory by presenting numerical experiments on saddle point matrices that arise from the finite element solution of coupled Stokes-Darcy flow. This is a system of p.d.e.s that models the coupling of a free flow to a porous media flow by conditions across the interface of the two flow regions. We present experiments in both two and three dimensions, using different types of elements (triangular, quadrilateral), different finite element schemes (continuous and discontinuous Galerkin methods), and different geometries. In all cases, the effectiveness of the constraint preconditioner is demonstrated.
Temple University--Theses
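A minimal sketch of the constraint-preconditioning idea for a saddle point system follows: the preconditioner keeps the constraint blocks B and B^T exactly and replaces the (1,1) block by a simple approximation (here its diagonal), and it is applied inside SciPy's GMRES through a sparse LU factorization. The system below is a toy example, not one of the Stokes-Darcy problems studied in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n, m = 200, 50

# Toy saddle point system K = [[A, B^T], [B, 0]] with A symmetric positive definite
# and a full-rank (dense, for simplicity) constraint block B.
A = sp.diags([3.0, -1.0, -1.0], [0, -1, 1], shape=(n, n)).tocsc()
B = sp.csc_matrix(rng.standard_normal((m, n)))
K = sp.bmat([[A, B.T], [B, None]]).tocsc()
b = rng.standard_normal(n + m)

# Constraint preconditioner: keep the constraint blocks exactly, replace A by its diagonal G.
G = sp.diags(A.diagonal())
P = sp.bmat([[G, B.T], [B, None]]).tocsc()
P_lu = spla.splu(P)
M = spla.LinearOperator(K.shape, matvec=P_lu.solve, dtype=K.dtype)

x, info = spla.gmres(K, b, M=M, maxiter=500)
print("converged" if info == 0 else "not converged",
      "| residual:", np.linalg.norm(K @ x - b))
```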
Los estilos APA, Harvard, Vancouver, ISO, etc.
31

Schwetz, Paulete Fridman. "Análise numérico-experimental de lajes nervuradas sujeitas a cargas estáticas de serviço". reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/32552.

Texto completo
Resumen
Soluções estruturais sofisticadas e racionais são exigências crescentes no cotidiano de projetistas de estruturas, como conseqüência da evolução dos projetos arquitetônicos e dos novos conceitos de gerenciamento das construções. As lajes nervuradas se enquadram nesta realidade como uma atraente alternativa, por propiciar economia de materiais e mão-deobra, com redução de perdas e aumento da produtividade, exigindo, porém, uma laboriosa modelagem numérica. Para entender melhor como funciona, na prática, este sistema construtivo, torna-se necessário obter um maior conhecimento sobre seu comportamento estrutural, bem como aperfeiçoar os modelos teóricos empregados para seu projeto e simulação. O objetivo principal desta pesquisa é analisar a adequação de métodos de cálculo empregados na modelagem destas estruturas, verificando se os mesmos representam satisfatoriamente seu comportamento. Para tanto, foram instrumentadas três lajes nervuradas de concreto armado em escala natural e um modelo reduzido de microconcreto armado na escala 1:7,5 representativo de uma laje nervurada real. O estudo mediu deformações no concreto/microconcreto e deslocamentos verticais em seções características das estruturas, submetidas a diferentes tipos de carregamento. A modelagem numérica foi feita empregando-se o programa Sistema Computacional TQS versão 11.9.9, que utiliza a análise matricial de grelhas, e o programa SAP2000 versão 14.2.2, que utiliza o método dos elementos finitos. Os valores medidos de deslocamentos verticais apresentaram-se na mesma ordem de grandeza das previsões teóricas e as deformações específicas indicaram a presença de momentos fletores nas seções instrumentadas coincidentes com os previstos pela análise numérica. Os resultados indicaram que as previsões teóricas, obtidas através de análises lineares e não lineares, bem como os valores medidos experimentalmente, sugeriram comportamentos semelhantes das estruturas, comprovando que as modelagens numéricas foram satisfatórias na simulação do comportamento de lajes nervuradas de concreto armado.
Waffle slabs are nowadays a frequent demand on structural designers, as a consequence of the evolution of architectural design and of new building management concepts, in spite of their laborious numerical modeling. It therefore becomes necessary to know more about their structural behavior and to improve the theoretical models used to simulate these slabs. The objective of this work is to analyze the adequacy of two methods widely used in the modeling of waffle slabs, verifying whether they represent the slab behavior satisfactorily. Three full-scale waffle slabs, as well as a reduced microconcrete model, were submitted to different loads and instrumented with strain and deflection gauges. The numerical analysis was made using a grid model program developed by a local software company specialized in concrete structural design, and a finite element model developed by an American software company specialized in concrete structural analysis. Numerically computed deflections presented a good estimate of the measured results, and experimental strains defined bending moments coincident with the forecast of the theoretical models in all structures. The results indicated that the theoretical linear and nonlinear analyses and the measured values suggested a similar behavior for all structures, confirming that concrete waffle slabs may be numerically simulated in a satisfactory way by such models.
Los estilos APA, Harvard, Vancouver, ISO, etc.
32

Bossaller, Daniel P. "Some Topics in Infinite Dimensional Algebra". Ohio University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1520332321386827.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
33

Montagné, Nicolaïdes Nathalie. "Prediction de champs thermiques instationnaires : methode des elements finis". Toulouse 3, 1986. http://www.theses.fr/1986TOU30121.

Texto completo
Resumen
When numerically simulating a part-forming operation, the plastic deformation software must be coupled with a finite element code that computes the evolution of the temperature field. The metallic part considered is axisymmetric, with an evolving meridian profile. Four different phases are considered, during which the part may stand alone, be in contact with the lower die block, be in contact with both the lower and upper die blocks, and then undergo deformation. The various surface contacts may be perfect or imperfect conductive thermal junctions. On the external boundaries of the assembly, a third-kind (free surface) boundary condition is retained. These different contact conditions and the underlying problem (heat transfer) lead to the Fourier equation supplemented by boundary conditions. This equation is solved by the finite element method with an implicit scheme. Given the complex configuration of the assembly after deformation, curved isoparametric quadrilaterals are employed. The final linear system, obtained after various approximations, is solved by adapting the Gauss elimination method to the skyline storage of the matrix. It is thus possible to handle fine and more accurate meshes with a large number of nodes.
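A much simplified sketch of the implicit time-stepping idea described above follows: backward Euler for the heat equation, here reduced to a 1-D finite-difference rod with a fixed temperature at one end and a third-kind (convective) condition at the other. The geometry and material data are illustrative; the thesis itself uses isoparametric finite elements and skyline storage.

```python
import numpy as np

# Backward-Euler (implicit) time stepping for the 1-D heat equation
# rho*c*dT/dt = k*d2T/dx2, with T(0) fixed and -k*dT/dx = h*(T - T_inf) at x = L.
k, rho, c = 40.0, 7800.0, 500.0          # illustrative steel-like material data
h, T_inf = 100.0, 20.0                   # convection coefficient and ambient temperature
L, nx, dt, nsteps = 0.1, 51, 1.0, 600
dx = L / (nx - 1)
r = (k / (rho * c)) * dt / dx**2

# Assemble the implicit system once, since it does not change in time.
A = np.zeros((nx, nx))
rhs_fixed = np.zeros(nx)
A[0, 0] = 1.0                             # Dirichlet node: T(0) = 800 C
for i in range(1, nx - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1.0 + 2.0 * r, -r
beta = 2.0 * r * h * dx / k               # convective end node (ghost-node elimination)
A[-1, -2], A[-1, -1] = -2.0 * r, 1.0 + 2.0 * r + beta
rhs_fixed[-1] = beta * T_inf

T = np.full(nx, 20.0)
T[0] = 800.0
for _ in range(nsteps):
    b = T + rhs_fixed
    b[0] = 800.0
    T = np.linalg.solve(A, b)             # one implicit time step

print(f"tip temperature after {nsteps * dt:.0f} s: {T[-1]:.1f} C")
```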
Los estilos APA, Harvard, Vancouver, ISO, etc.
34

Falco, Aurélien. "Bridging the Gap Between H-Matrices and Sparse Direct Methods for the Solution of Large Linear Systems". Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0090/document.

Texto completo
Resumen
De nombreux phénomènes physiques peuvent être étudiés au moyen de modélisations et de simulations numériques, courantes dans les applications scientifiques. Pour être calculable sur un ordinateur, des techniques de discrétisation appropriées doivent être considérées, conduisant souvent à un ensemble d’équations linéaires dont les caractéristiques dépendent des techniques de discrétisation. D’un côté, la méthode des éléments finis conduit généralement à des systèmes linéaires creux, tandis que les méthodes des éléments finis de frontière conduisent à des systèmes linéaires denses. La taille des systèmes linéaires en découlant dépend du domaine où le phénomène physique étudié se produit et tend à devenir de plus en plus grand à mesure que les performances des infrastructures informatiques augmentent. Pour des raisons de robustesse numérique, les techniques de solution basées sur la factorisation de la matrice associée au système linéaire sont la méthode de choix utilisée lorsqu’elle est abordable. A cet égard, les méthodes hiérarchiques basées sur de la compression de rang faible ont permis une importante réduction des ressources de calcul nécessaires pour la résolution de systèmes linéaires denses au cours des deux dernières décennies. Pour les systèmes linéaires creux, leur utilisation reste un défi qui a été étudié à la fois par la communauté des matrices hiérarchiques et la communauté des matrices creuses. D’une part, la communauté des matrices hiérarchiques a d’abord exploité la structure creuse du problème via l’utilisation de la dissection emboitée. Bien que cette approche bénéficie de la structure hiérarchique qui en résulte, elle n’est pas aussi efficace que les solveurs creux en ce qui concerne l’exploitation des zéros et la séparation structurelle des zéros et des non-zéros. D’autre part, la factorisation creuse est accomplie de telle sorte qu’elle aboutit à une séquence d’opérations plus petites et denses, ce qui incite les solveurs à utiliser cette propriété et à exploiter les techniques de compression des méthodes hiérarchiques afin de réduire le coût de calcul de ces opérations élémentaires. Néanmoins, la structure hiérarchique globale peut être perdue si la compression des méthodes hiérarchiques n’est utilisée que localement sur des sous-matrices denses. Nous passons en revue ici les principales techniques employées par ces deux communautés, en essayant de mettre en évidence leurs propriétés communes et leurs limites respectives, en mettant l’accent sur les études qui visent à combler l’écart qui les séparent. Partant de ces observations, nous proposons une classe d’algorithmes hiérarchiques basés sur l’analyse symbolique de la structure des facteurs d’une matrice creuse. Ces algorithmes s’appuient sur une information symbolique pour grouper les inconnues entre elles et construire une structure hiérarchique cohérente avec la disposition des non-zéros de la matrice. Nos méthodes s’appuient également sur la compression de rang faible pour réduire la consommation mémoire des sous-matrices les plus grandes ainsi que le temps que met le solveur à trouver une solution. Nous comparons également des techniques de renumérotation se fondant sur des propriétés géométriques ou topologiques. Enfin, nous ouvrons la discussion à un couplage entre la méthode des éléments finis et la méthode des éléments finis de frontière dans un cadre logiciel unique
Many physical phenomena may be studied through modeling and numerical simulations, commonplace in scientific applications. To be tractable on a computer, appropriate discretization techniques must be considered, which often lead to a set of linear equations whose features depend on the discretization techniques. Among them, the Finite Element Method usually leads to sparse linear systems, whereas the Boundary Element Method leads to dense linear systems. The size of the resulting linear systems depends on the domain where the studied physical phenomenon develops and tends to become larger and larger as the performance of computing facilities increases. For the sake of numerical robustness, solution techniques based on the factorization of the matrix associated with the linear system are the methods of choice when affordable. In that respect, hierarchical methods based on low-rank compression have allowed a drastic reduction of the computational requirements for the solution of dense linear systems over the last two decades. For sparse linear systems, their application remains a challenge that has been studied by both the hierarchical matrix community and the sparse matrix community. On the one hand, the community of hierarchical matrices most often takes advantage of the sparsity of the problem through the use of nested dissection. While this approach benefits from the resulting hierarchical structure, it is not as efficient as sparse solvers regarding the exploitation of zeros and the structural separation of zeros from non-zeros. On the other hand, sparse factorization is organized so as to lead to a sequence of smaller dense operations, enticing sparse solvers to use this property and exploit compression techniques from hierarchical methods in order to reduce the computational cost of these elementary operations. Nonetheless, the globally hierarchical structure may be lost if the compression of hierarchical methods is used only locally on dense submatrices. We here review the main techniques employed by both communities, trying to highlight their common properties and their respective limits, with a special emphasis on studies that have aimed to bridge the gap between them. With these observations in mind, we propose a class of hierarchical algorithms based on the symbolic analysis of the structure of the factors of a sparse matrix. These algorithms rely on symbolic information to cluster the unknowns and to construct a hierarchical structure coherent with the non-zero pattern of the matrix. Moreover, the resulting hierarchical matrix relies on low-rank compression to reduce the memory consumption of large submatrices as well as the time to solution of the solver. We also compare several ordering techniques based on geometrical or topological properties. Finally, we open the discussion to a coupling between the Finite Element Method and the Boundary Element Method in a unified computational framework.
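The low-rank compression on which hierarchical methods rely can be illustrated by a minimal sketch: an admissible off-diagonal block generated by a smooth kernel between two well-separated clusters is replaced by a truncated SVD, trading a controlled accuracy loss for storage. The kernel and cluster sizes below are illustrative.

```python
import numpy as np

# A smooth off-diagonal interaction block, e.g. a 1/|x - y| kernel evaluated
# between two well-separated point clusters (illustrative setup).
m = 400
x = np.linspace(0.0, 1.0, m)             # source cluster
y = np.linspace(3.0, 4.0, m)             # well-separated target cluster
B = 1.0 / np.abs(x[:, None] - y[None, :])

# Truncated SVD: keep only the singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(B, full_matrices=False)
tol = 1e-8
k = int(np.sum(s > tol * s[0]))          # numerical rank of the block
B_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

storage_full = B.size
storage_lowrank = k * (B.shape[0] + B.shape[1])
print(f"rank {k}, relative error {np.linalg.norm(B - B_k) / np.linalg.norm(B):.1e}, "
      f"storage {storage_lowrank}/{storage_full}")
```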
Los estilos APA, Harvard, Vancouver, ISO, etc.
35

Carratalá, Sáez Rocío. "Analysis of Parallelization Strategies in the context of Hierarchical Matrix Factorizations". Doctoral thesis, Universitat Jaume I, 2021. http://hdl.handle.net/10803/671577.

Texto completo
Resumen
H-matrices offer log-linear storage and computation costs, thanks to a controlled accuracy loss. This is why they are especially well suited to Boundary Element Methods (BEM). Task-parallelism strategies are applied to tiled/block algorithms to provide powerful and efficient parallel solutions for multicore architectures. The main objective of this thesis is to design, implement and evaluate parallel algorithms that operate efficiently with H-matrices on multicore architectures. The first contribution is a study in which we prove that task-parallelism is suitable for operating with H-matrices, while illustrating the difficulties of parallelizing their complex implementations. Afterwards, we explain how the OmpSs-2 programming model helped us avoid the described issues and attain good efficiency. Lastly, we describe the creation of the open-source library H-Chameleon, based on Tile H-Matrices (a regularized version of H-matrices), whose precision and compression ratios are competitive with those of pure H-matrices and which leverages the benefits of tile algorithms applied to (regular) tiles.
Las H-Matrices presentan un coste de almacenamiento y cómputo logarítmico-lineal gracias a una pérdida de precisión controlable. Por ello, son apropiadas para los Métodos de Elementos de Contorno. Las estrategias de paralelismo de tareas, aplicadas a algoritmos a bloques, posibilitan soluciones paralelas eficientes para arquitecturas multinúcleo. El objetivo principal de esta tesis es diseñar, implementar y evaluar algoritmos paralelos para operar eficientemente con H-Matrices en arquitecturas multinúcleo. En la primera contribución de esta tesis demostramos que el paralelismo de tareas es apropiado para operar con H-Matrices, ilustrando también las dificultades de dichas implementaciones. A continuación, explicamos cómo el modelo de programación OmpSs-2 permite sortear dichas cuestiones para alcanzar una buena eficiencia. Finalmente, explicamos el diseño de H-Chameleon, una librería de código abierto basada en Tile H-Matrices (H-Matrices regularizadas), capaz de mantener un ratio de precisión y compresión competitivo con las H-Matrices puras, beneficiándose de los algoritmos a bloques (regulares).
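A minimal sequential sketch of the tiled (right-looking) Cholesky factorization on which task-parallel runtimes such as OmpSs-2 build is given below. Each tile kernel (POTRF, TRSM, SYRK/GEMM) would be submitted as a task whose dependencies are the tiles it reads and writes; here the kernels are plain NumPy/SciPy calls executed in dependency order, so this is only an illustration of the tile structure, not of the runtime itself.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def tiled_cholesky(A, nb):
    """Tiled right-looking Cholesky of the SPD matrix A (lower factor), tile size nb.
    Each kernel call below corresponds to one task in a task-parallel runtime;
    the loop order encodes the data dependencies. Assumes nb divides n."""
    n = A.shape[0]
    L = A.copy()
    nt = n // nb
    T = lambda i, j: L[i * nb:(i + 1) * nb, j * nb:(j + 1) * nb]   # view of tile (i, j)
    for k in range(nt):
        T(k, k)[:] = cholesky(T(k, k), lower=True)                  # POTRF task
        for i in range(k + 1, nt):
            T(i, k)[:] = solve_triangular(T(k, k), T(i, k).T,       # TRSM task
                                          lower=True).T
        for i in range(k + 1, nt):
            for j in range(k + 1, i + 1):
                T(i, j)[:] -= T(i, k) @ T(j, k).T                   # SYRK/GEMM task
    return np.tril(L)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((256, 256))
    A = X @ X.T + 256 * np.eye(256)        # SPD test matrix
    L = tiled_cholesky(A, nb=64)
    print(np.allclose(L @ L.T, A))
```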
Los estilos APA, Harvard, Vancouver, ISO, etc.
36

Brillon, Laura. "Matrices de Cartan, bases distinguées et systèmes de Toda". Thesis, Toulouse 3, 2017. http://www.theses.fr/2017TOU30077/document.

Texto completo
Resumen
Dans cette thèse, nous nous intéressons à plusieurs aspects des systèmes de racines des algèbres de Lie simples. Dans un premier temps, nous étudions les coordonnées des vecteurs propres des matrices de Cartan. Nous commençons par généraliser les travaux de physiciens qui ont montré que les masses des particules dans la théorie des champs de Toda affine sont égales aux coordonnées du vecteur propre de Perron -- Frobenius de la matrice de Cartan. Puis nous adoptons une approche différente, puisque nous utilisons des résultats de la théorie des singularités pour calculer les coordonnées des vecteurs propres de certains systèmes de racines. Dans un deuxième temps, en s'inspirant des idées de Givental, nous introduisons les matrices de Cartan q-déformées et étudions leur spectre et leurs vecteurs propres. Puis, nous proposons une q-déformation des équations de Toda et construisons des 1-solitons solutions en adaptant la méthode de Hirota, d'après les travaux de Hollowood. Enfin, notre intérêt se porte sur un ensemble de transformations agissant sur l'ensemble des bases ordonnées de racines comme le groupe de tresses. En particulier, nous étudions les bases distinguées, qui forment l'une des orbites de cette action, et des matrices que nous leur associons
In this thesis, our goal is to study various aspects of the root systems of simple Lie algebras. In the first part, we study the coordinates of the eigenvectors of the Cartan matrices. We start by generalizing the work of physicists who showed that the particle masses of the affine Toda field theory are equal to the coordinates of the Perron-Frobenius eigenvector of the Cartan matrix. Then, we adopt another approach: using ideas coming from singularity theory, we compute the coordinates of the eigenvectors of some root systems. In the second part, inspired by Givental's ideas, we introduce q-deformations of Cartan matrices and study their spectra and eigenvectors. Then, we propose a q-deformation of Toda's equations and compute 1-soliton solutions, using Hirota's method and following Hollowood's work. Finally, our interest is focused on a set of transformations which induce an action of the braid group on the set of ordered root bases. In particular, we study an orbit of this action, the set of distinguished bases, and some associated matrices
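The statement about Perron-Frobenius eigenvectors recalled above can be checked numerically for the A_n series: the eigenvector of the Cartan matrix associated with its smallest eigenvalue (the Perron-Frobenius direction of the underlying adjacency matrix) has components proportional to sin(k*pi/h), with h the Coxeter number, which are the affine Toda mass ratios. A small sketch for A_6 only, as an illustration:

```python
import numpy as np

def cartan_matrix_An(n):
    """Cartan matrix of the simple Lie algebra A_n: 2 on the diagonal,
    -1 on the sub- and super-diagonals."""
    C = 2.0 * np.eye(n)
    C -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return C

n = 6
C = cartan_matrix_An(n)
w, V = np.linalg.eigh(C)
v = np.abs(V[:, 0])                 # eigenvector of the smallest eigenvalue: all components of one sign

h = n + 1                           # Coxeter number of A_n
masses = np.sin(np.arange(1, n + 1) * np.pi / h)

print(np.allclose(v / v[0], masses / masses[0]))   # same ratios -> True
```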
Los estilos APA, Harvard, Vancouver, ISO, etc.
37

Beuchler, Sven. "Multi-level solver for degenerated problems with applications to p-versions of the fem". [S.l. : s.n.], 2003. http://www.bsz-bw.de/cgi-bin/xvms.cgi?SWB10673667.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
38

Vyhlídal, Michal. "Porušování vybraných stavebních kompozitů v blízkosti rozhraní plniva a matrice". Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2018. http://www.nusl.cz/ntk/nusl-372021.

Texto completo
Resumen
The interface between aggregate grains and the matrix is the weakest element of cementitious composites. This topic is particularly significant for high-performance and high-strength concrete technology, for which the elimination or reduction of these weak links is necessary. The aim of this thesis is to determine the influence of the interface on the fracture behaviour of cementitious composites. Fracture experiments were performed for this purpose and complemented by nanoindentation and scanning electron microscopy results. A numerical model was created in the ANSYS software on the basis of these data, and the fracture toughness values of the interface were evaluated by means of generalized fracture mechanics principles. The thesis concludes that the interface properties have a significant influence on the fracture behaviour of cementitious composites.
Los estilos APA, Harvard, Vancouver, ISO, etc.
39

D'Ascenzo, Marco. "Analisi del comportamento a caldo dell'acciaio AISI - H11 per la stima della vita utile di matrici per estrusione". Master's thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amslaurea.unibo.it/508/.

Texto completo
Resumen
The effect of process parameters on the creep-fatigue behavior of a hot-work tool steel for aluminum extrusion dies was investigated through a technological test in which the specimen geometry resembled the mandrel of a hollow extrusion die. Tests were performed on a Gleeble thermomechanical simulator by heating the specimen through Joule heating and by applying cyclic loading for up to 6.30 h or until specimen failure. Displacements during the tests at 380, 490, 540 and 580 °C and under average stresses of 400, 600 and 800 MPa were determined. In the first set of tests, a dwell time of 3 min was introduced in each test to characterize the creep behavior. The results showed that the test could indeed physically simulate the cyclic loading on the hollow die during extrusion and reveal all the mechanisms of creep-fatigue interaction. In the second set a pure fatigue load was applied, and in the third set a static creep load was applied to the specimens. Further tests, finite element simulations and microstructural analyses are also presented.
Los estilos APA, Harvard, Vancouver, ISO, etc.
40

Trousset, Emilie. "Prévision des dommages d'impact basse vitesse et basse énergie dans les composites à matrice organique stratifiés". Phd thesis, Paris, ENSAM, 2013. http://pastel.archives-ouvertes.fr/pastel-00942339.

Texto completo
Resumen
To better understand and quantify the formation of impact damage and its consequences for the strength of composite structures, numerical simulation appears to be an indispensable complement to experimental campaigns. The objective of this thesis is to develop an impact model for implicit dynamic finite element simulation, capable of predicting the induced damage. The first step consisted in building a model based on the ply behaviour model "Onera Progressive Failure Model" (OPFM) and on the bilinear cohesive zone model proposed by Alfano and Crisfield, and then in assessing the sensitivity of the predicted impact response and damage to the different components of the constitutive laws. Impact and indentation tests on carbon/epoxy laminated plates were then carried out, analysed and compared with the numerical results, in order to evaluate the impact performance of the OPFM model and its limits. Three main conclusions emerge from this work. First, cohesive zone models appear necessary to predict the load drop characteristic of impact on laminates. Second, accounting for the out-of-plane stresses, in particular the transverse shear stresses, is essential to correctly predict impact damage. Finally, although the OPFM model can predict impact damage qualitatively, the absence of softening behaviour or viscoplasticity seems to limit its quantitative predictions.
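As a side illustration of the bilinear cohesive zone model of Alfano and Crisfield mentioned above, the following sketch implements a generic mode-I bilinear traction-separation law with an irreversible damage variable. The parameter values are illustrative, and the coupling with the OPFM ply model used in the thesis is not reproduced.

```python
def bilinear_cohesive_traction(delta, delta_max, sigma_max=50.0, delta0=0.01, deltaf=0.10):
    """Mode-I bilinear traction-separation law: linear elastic up to (delta0, sigma_max),
    then linear softening to zero traction at deltaf. The damage variable is driven by
    the largest separation reached so far (delta_max), so unloading and reloading are
    elastic with a degraded stiffness. Separations in mm, tractions in MPa (illustrative)."""
    K0 = sigma_max / delta0                       # undamaged interface stiffness
    delta_max = max(delta_max, delta)             # damage is irreversible
    if delta_max <= delta0:
        d = 0.0
    elif delta_max >= deltaf:
        d = 1.0
    else:
        d = deltaf * (delta_max - delta0) / (delta_max * (deltaf - delta0))
    return (1.0 - d) * K0 * max(delta, 0.0), delta_max

# Loading, unloading and reloading of a single cohesive point.
history = 0.0
for delta in [0.0, 0.01, 0.02, 0.03, 0.04, 0.02, 0.06]:
    traction, history = bilinear_cohesive_traction(delta, history)
    print(f"separation {delta:.2f} mm -> traction {traction:5.1f} MPa")
```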
Los estilos APA, Harvard, Vancouver, ISO, etc.
41

Kessentini, Ahmed. "Approche numérique pour le calcul de la matrice de diffusion acoustique : application pour les cas convectifs et non convectifs". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEC019/document.

Texto completo
Resumen
La propagation acoustique guidée est étudiée dans ce travail. La propagation des ondes acoustiques dans une direction principale est privilégiée. La méthode des éléments finis ondulatoires est donc exploitée pour extraire les nombres d'ondes. Les déformées des différents modes de conduit rigide sont aussi obtenues. Pour des conduits avec des discontinuités d'impédance, la matrice de diffusion peut être calculée à l'aide d'une modélisation par éléments finis de la partie traitée acoustiquement. Une modélisation tridimensionnelle des conduits traités acoustiquement permet une étude de la propagation pour tous les ordres des modes, de leur diffusion et du comportement acoustique des matériaux absorbants. Les réponses forcées de diverses configurations de guides d'ondes aux conditions aux limites imposées sont également calculées. L'étude est finalement étendue à la propagation acoustique dans les guides d'ondes avec un écoulement moyen uniforme
Guided acoustic propagation is investigated in this work. The propagation of acoustic waves in a main direction is privileged, and a Wave Finite Element method is therefore exploited to extract the wavenumbers. The mode shapes of a rigid duct are also obtained. For ducts with impedance discontinuities, the scattering matrix can then be calculated through a Finite Element modelling of the lined part. A three-dimensional modelling of the lined ducts allows a study of the propagation for all mode orders, of their scattering, and of the acoustic behaviour of the absorbing materials. The forced responses of various waveguide configurations with imposed boundary conditions are also calculated. The study is finally extended to acoustic propagation within waveguides with a uniform mean flow
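For orientation, the kind of dispersion information such a wave-based duct model provides can be sketched analytically in the simplest case of a rigid-walled rectangular duct without flow, where the axial wavenumber of mode (m, n) follows from the cut-off relation k_z = sqrt((omega/c)^2 - (m*pi/a)^2 - (n*pi/b)^2). The dimensions and frequency below are illustrative; the thesis computes such wavenumbers numerically with the Wave Finite Element method for lined and convected ducts.

```python
import numpy as np

def axial_wavenumbers(freq, a, b, c0=343.0, max_order=3):
    """Axial wavenumbers k_z of the (m, n) modes of a rigid-walled rectangular duct
    of cross-section a x b at frequency freq, without mean flow. A purely imaginary
    k_z means the mode is cut off (evanescent)."""
    k0 = 2.0 * np.pi * freq / c0
    modes = {}
    for m in range(max_order + 1):
        for n in range(max_order + 1):
            kt2 = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2   # transverse wavenumber squared
            modes[(m, n)] = np.sqrt(complex(k0 ** 2 - kt2))
    return modes

for (m, n), kz in sorted(axial_wavenumbers(2000.0, a=0.10, b=0.08).items()):
    state = "propagating" if abs(kz.imag) < 1e-12 else "evanescent"
    print(f"mode ({m},{n}): k_z = {kz:.2f} 1/m  ({state})")
```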
Los estilos APA, Harvard, Vancouver, ISO, etc.
42

Fruhauf, Jean-Baptiste. "ELABORATION ET CARACTERISATION MECANIQUE DE COMPOSITES A MATRICE TITANE RENFORCES PAR DES PARTICULES DE TIC". Thesis, Saint-Etienne, EMSE, 2012. http://www.theses.fr/2012EMSE0670/document.

Texto completo
Resumen
Les propriétés spécifiques du titane en font un matériau de choix pour remplacer l'acier dans des applications où le poids est un paramètre de conception important. Cependant, contrairement à l'acier, le titane souffre de mauvaises propriétés tribologiques. C'est pour répondre à cette problématique qu'il est envisagé de développer des composites à matrice métallique (CMM) titane renforcée par des particules de carbure de titane. Dans le cadre de ce projet, plusieurs nuances de CMM à matrice Ti ou Ti-6Al-4V contenant différentes fractions volumiques de particules de TiC ont été élaborées par métallurgie des poudres. Trois procédés ont été employés : le frittage libre, la compression isostatique à chaud et le filage. Les différentes nuances ont ensuite été caractérisées du point de vue microstructural (taux de densification, taille des grains) et mécanique (traction). La confrontation des résultats a permis d'établir un lien entre microstructure et propriétés mécaniques. Dans l'optique d'étudier la mise en forme mais également d'améliorer les propriétés mécaniques, un post-traitement de type forgeage a été appliqué à la suite de la phase d'élaboration. Excepté dans le cas des CMM filés, la présence de renforts entraîne l'apparition d'endommagement lors de la déformation à chaud. Nous avons alors déterminé les conditions de forgeage les plus adaptées selon les nuances. Finalement, à travers un travail de modélisation analytique et de simulation numérique par méthode d'homogénéisation, nous avons déterminé les grandeurs mécaniques (module de Young et limite d'élasticité) et prévu la loi de comportement des CMM en traction
The specific properties of titanium make it a key material for the replacement of steel in weight-sensitive applications. However, unlike steel, titanium suffers from poor wear resistance. In order to overcome this weakness, it is proposed to develop titanium metal matrix composites (MMC) reinforced with titanium carbide particles. To this end, Ti and Ti-6Al-4V MMC were prepared with reinforcement fractions ranging from 5 percent to 20 percent using three powder metallurgy techniques: free sintering, hot isostatic pressing and extrusion. The composites were then characterized from a microstructural (density, grain size) and a mechanical (tensile test) point of view. By comparing the results, it was possible to establish a relationship between microstructural features and mechanical properties. Following their preparation, the composites were subjected to a forging step in order to study their behavior during hot deformation and to further improve their mechanical properties. Except for the extruded MMC, the presence of particles induces the appearance of damage during hot deformation. We therefore determined the best forging conditions for the different composites while taking the microstructure into account. Finally, through analytical modeling and numerical simulations based on homogenization, we determined the Young's modulus and the yield stress and predicted the behavior of an MMC during a tensile test
Los estilos APA, Harvard, Vancouver, ISO, etc.
43

Beluffi, Camille. "Search for rare processes with a Z+bb signature at the LHC, with the matrix element method". Thesis, Strasbourg, 2015. http://www.theses.fr/2015STRAE022/document.

Texto completo
Resumen
Cette thèse présente une étude détaillée de l'état final avec un boson Z se désintégrant en deux leptons, produit dans le détecteur CMS au LHC. Pour identifier cette topologie, des algorithmes sophistiqués d'étiquetage des jets b ont été utilisés, et la calibration de l'un d'entre eux, Jet Probability, est exposée. Une étude de la dégradation de cet algorithme à hautes énergies a été menée et une amélioration des performances a pu être notée. Cette étude est suivie par une recherche du boson de Higgs du modèle standard (MS) se désintégrant en deux quarks b, et produit en association avec un boson Z (canal ZH), à l'aide de la Méthode des Éléments de Matrice (MEM) et deux algorithmes d'étiquetage des jets b: JP et Combined Secondary Vertex (CSV). La MEM est un outil perfectionné qui produit une variable discriminante par événement, appelée poids. Pour l'appliquer, plusieurs lots de fonctions de transfert ont été produits. Le résultat final renvoie une limite observée sur le taux de production de ZH avec le rapport d'embranchement H → bb de 5.46xσMS en utilisant CSV et de 4.89xσMS en faisant usage de JP : un léger excès de données est observé seulement pour CSV. Enfin, à partir de l'analyse précédente, une recherche de nouvelle physique modèle-indépendante a été faite. Le but était de discriminer entre eux les processus du MS afin de catégoriser l'espace de phase Zbb, à l'aide d'une méthode récursive basée sur les poids de la MEM. Des paramètres libres ont été ajustés pour obtenir la meilleure limite d'exclusion pour le signal ZH. En utilisant la meilleure configuration trouvée, la limite calculée est de 3.58xσMS, proche de celle obtenue avec l'analyse précédente
This thesis presents a detailed study of the final state with a Z boson decaying into two leptons, produced in the CMS detector at the LHC. In order to tag this topology, sophisticated b-jet tagging algorithms have been used, and the calibration of one of them, the Jet Probability (JP) tagger, is presented. A study of the tagger degradation at high energy has been carried out and led to a small gain in performance. This investigation is followed by the search for the associated production of the standard model (SM) Higgs boson with a Z boson, with the Higgs decaying into two b quarks (ZH channel), using the Matrix Element Method (MEM) and two b-taggers: JP and Combined Secondary Vertex (CSV). The MEM is an advanced tool that produces an event-by-event discriminating variable, called the weight. To apply it, several sets of transfer functions have been produced. The final results give an observed limit on the ZH production rate with the H → bb branching ratio of 5.46xσSM when using the CSV tagger and 4.89xσSM when using the JP algorithm; a small excess of data is observed only for CSV. Finally, based on the previous analysis, a model-independent search for new physics has been performed. The goal was to discriminate all the SM processes in order to categorize the Zbb phase space, with a recursive approach using the MEM weights. Free parameters had to be tuned in order to find the best exclusion limit on the ZH signal strength. For the best configuration, the computed limit was 3.58xσSM, close to the one obtained with the dedicated search
Los estilos APA, Harvard, Vancouver, ISO, etc.
44

Castro, Waleska. "Elemental Analysis of Biological Matrices by Laser Ablation High Resolution Inductively Coupled Plasma Mass Spectrometry (LA-HR-ICP-MS) and High Resolution Inductively Coupled Plasma Mass Spectrometry (HR-ICP-MS)". FIU Digital Commons, 2008. http://digitalcommons.fiu.edu/etd/185.

Texto completo
Resumen
The need for elemental analysis of biological matrices such as bone, teeth, and plant matter for sourcing purposes has emerged within forensic and geochemical laboratories. Trace elemental analyses for the comparison of materials such as glass by inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS have been shown to offer a high degree of discrimination between different manufacturing sources. Unit resolution ICP-MS instruments may suffer from polyatomic interferences, including 40Ar16O+, 40Ar16O1H+, and 40Ca16O+, that affect iron measurement at trace levels. Iron is an important element in the analysis of glass and is also of interest for the analysis of several biological matrices. A comparison of the analytical performance of two different ICP-MS systems for iron analysis in glass is presented, determining the method detection limits (MDLs), accuracy, and precision of the measurement. Acid digestion and laser ablation methods are also compared. Iron polyatomic interferences were reduced or resolved by using a dynamic reaction cell and high resolution ICP-MS. MDLs as low as 0.03 µg g-1 and 0.14 µg g-1 were achieved for laser ablation and solution-based analyses, respectively. The use of helium as a carrier gas demonstrated an improvement in the detection limits of both iron isotopes (56Fe and 57Fe) in medium resolution for the HR-ICP-MS and with a dynamic reaction cell (DRC) coupled to a quadrupole ICP-MS system. The development and application of robust analytical methods for the quantification of trace elements in biological matrices has led to a better understanding of the potential utility of these measurements in forensic chemical analyses. Standard reference materials (SRMs) were used in the development of an analytical method using HR-ICP-MS and LA-HR-ICP-MS that was subsequently applied to the analysis of real samples. Bone, teeth and ashed marijuana samples were analyzed with the developed method. Elemental analysis of bone samples from 12 different individuals provided discrimination between individuals when femur and humerus bones were considered separately. Discrimination of 14 teeth samples based on elemental composition was achieved, with the exception of one case in which samples from the same individual were not associated with each other. The discrimination of 49 different ashed plant (cannabis) samples was achieved using the developed method.
Los estilos APA, Harvard, Vancouver, ISO, etc.
45

Muratt, Diana Tomazi. "Desenvolvimento e validação de métodos voltamétricos sequenciais para a determinação de elementos em matrizes complexas". Universidade Federal de Santa Maria, 2013. http://repositorio.ufsm.br/handle/1/10578.

Texto completo
Resumen
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Toxic elements are continuously accumulating in the environment, mainly due to anthropogenic activities. In this work, a voltammetric method of sequential analysis was developed for application to matrices with complex characteristics. Using a mixture of organic complexing agents, SVRS (alizarin violet), DMG (dimethylglyoxime), 8-hydroxyquinoline (oxine), CA (chloranilic acid) and DTPA (diethylenetriaminepentaacetic acid), 13 elements could be determined with two different methods. According to the formation constants with the respective ligands, in method 1, step 1, Al3+, Fe3+ and Mo6+ were determined in the presence of SVRS by AdSV (Adsorptive Stripping Voltammetry). In step 2, Zn2+, Cd2+, Pb2+ and Cu2+ were determined by ASV. In step 3, Ni2+ and Co2+ were determined in the presence of DMG and oxine. In method 2, V5+ and U6+ were determined by AdSV in the presence of CA, Cr(total) was then determined in the presence of DTPA by AdSV and, finally, Tl+ was determined by ASV. The figures of merit showed that the proposed method is suitable for the complex matrix samples studied (certified materials and commercial plant products). High concentrations of some elements were found in the commercial samples, indicating that these species translocate through the environment in which they occur and may come into contact with humans.
Elementos tóxicos estão em contínuo acúmulo no ambiente principalmente devido a atividades antropogênicas. Neste trabalho, foi desenvolvido um método voltamétrico de análise sequencial para aplicação em matrizes de natureza complexa. Utilizando uma mistura de complexantes orgânicos, SVRS (violeta de solocromo), DMG (dimetilglioxima), 8-Hidroxiquinolina (oxina), CA (ácido cloranílico) e DTPA (ácido dietilenotriamino pentaacético) foi possível determinar 13 elementos em dois diferentes métodos. De acordo com as constantes de formação com os respectivos ligantes foi determinado no método 1, etapa 1 Al3+, Fe3+, Mo6+ na presença de SVRS por AdSV (Adsorptive Stripping Voltammetry). Na etapa 2 determinou-se Zn2+, Cd2+, Pb2+, Cu2+ por ASV. Na etapa 3, foram determinados Ni2+ e Co2+, na presença de DMG e oxina. No método 2, U6+ e V5+ foram determinados por AdSV na presença de CA (ácido cloranílico), Cr(total) foi determinado na sequência na presença de DTPA por AdSV e por fim, Tl+ foi determinado por ASV. Os dados obtidos para as figuras de mérito mostraram que o método proposto é adequado para as amostras de matrizes complexas estudadas (materiais certificados e compostos vegetais comerciais). Concentrações altas para alguns elementos foram encontradas nas amostras comerciais. Este dado indica que as espécies translocam-se através do meio em que estão inseridos estando suscetíveis a entrar em contato com o ser humano.
Los estilos APA, Harvard, Vancouver, ISO, etc.
46

Cachia, Maxime. "Caractérisation des transferts d’éléments trace métalliques dans une matrice gaz/eau/roche représentative d'un stockage subsurface de gaz naturel". Thesis, Pau, 2017. http://www.theses.fr/2017PAUU3006/document.

Texto completo
Resumen
Le gaz naturel représente environ 20% de la consommation énergétique mondiale et cette part est attendue à la hausse dans les prochaines années en raison de la transition énergétique. Pour des raisons économiques et stratégiques, et afin de réguler les demandes d’énergie entre l’été et l’hiver, le gaz naturel est stocké temporairement dans des réservoirs souterrains, notamment des réservoirs aquifères. Les opérations d’injection et de soutirage du gaz mettent donc en contact des espèces gazeuses, liquides et solides, et rendent potentiellement possibles de nombreux phénomènes de transferts d’espèces chimiques d’un milieu vers un autre. Ainsi, bien que composé majoritairement de méthane (70-90%vol), le gaz naturel peut présenter des concentrations variées d’éléments trace métalliques (arsenic, mercure, plomb…). Compte tenu du caractère néfaste de ces composés, à la fois pour les installations industrielles et pour l’environnement, il est de la première importance de connaître l’impact de la composition chimique du gaz sur l’aquifère.Les travaux réalisés dans le cadre de cette thèse s’inscrivent dans ce contexte et ont eu pour objectif de caractériser les matrices gaz/eau/roche ainsi que les interactions qui existent entre elles, avec pour centre d’intérêt principal les éléments trace métalliques.Pour cela nous avons fait porter nos efforts sur l’optimisation (i) des conditions d’utilisation d’un banc de prélèvement ATEX, basé sur le principe de barbotage, et (ii) des méthodes de piégeages des métaux lourds puis d’analyses employées. Ce dispositif unique permet d’échantillonner les métaux présents dans un gaz naturel sous pression (100 bar maximum). Utilisé sur des sites industriels, ce banc a permis de mesurer et suivre sur plusieurs années la composition chimique en éléments trace métalliques du gaz naturel, mais aussi ponctuellement d’un biogaz et d’un biomethane. En effet, Ces deux derniers gaz ont vocation à réduire l’utilisation des énergies fossiles, celle du gaz naturel en particulier. Les biométhanes sont donc amenés à parcourir les mêmes réseaux de transport et à séjourner dans les mêmes sites de stockage que ceux utilisés pour le gaz naturel.En complément de la caractérisation de la phase gazeuse, nous nous sommes intéressés aux évolutions des compositions chimiques des phases aqueuse et minérale du stockage souterrain, sans pouvoir identifier de mécanisme de transfert spécifiquement lié aux activités de stockage de gaz
Natural gas represents about 20% of the world's energy consumption, and this share is expected to increase in the coming years due to the energy transition. For economic and strategic reasons, and in order to balance energy demand between summer and winter, natural gas may be stored in underground reservoirs such as aquifers. Injection and withdrawal operations therefore bring gaseous, liquid and solid species into contact and make possible numerous transfer phenomena of chemical species from one matrix to another. In addition, even though natural gases are composed essentially of methane (70-90 vol%), they can also show various metallic trace element concentrations (mercury, arsenic, tin, etc.). Given the harmful effects of these compounds on industrial infrastructures and on the environment, knowing the impact of the chemical composition of the gas on the aquifer storage is crucial. The work of this thesis falls within this context, with the objective of characterizing the gas/water/rock matrices and their potential interactions, focusing on metallic trace elements. We therefore concentrated part of this work on optimizing (i) the operating conditions of an ATEX (EX zone 0) sampling bench based on the principle of bubbling and (ii) the heavy-metal trapping methodology and the analytical methods employed. This unique device allows the sampling of the metals present in a natural gas under pressure (up to 100 bar). Its use on industrial sites made it possible to measure and monitor, over several years, the metallic trace element composition of a natural gas, as well as, more occasionally, that of a biogas and a biomethane. Indeed, these two gases are intended to reduce the use of fossil fuels, natural gas in particular, and biomethanes will therefore travel through the same transport networks and be temporarily stored in the same facilities as natural gas. In addition to the gaseous phase, we studied the evolution of the chemical compositions of the aqueous and mineral phases of the underground storage, without being able to identify any transfer mechanism specifically linked to the gas storage activities
Los estilos APA, Harvard, Vancouver, ISO, etc.
47

Essongue-Boussougou, Simon. "Méthode des éléments finis augmentés pour la rupture quasi-fragile : application aux composites tissés à matrice céramique". Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0018/document.

Texto completo
Resumen
Computing the lifetime of woven Ceramic Matrix Composites (CMC) requires determining the evolution of the crack density in the material (which can reach 10 mm⁻¹). To represent these cracks finely, the work is carried out at the mesoscopic scale. Embedded Finite Element Methods (EFEM) appeared to be the most appropriate for this problem: they allow a discrete representation of cracks without introducing additional degrees of freedom. We chose an EFEM that is free from element-level iterations, known as the Augmented Finite Element Method (AFEM). A variant of AFEM, correcting shortcomings of the original method, was developed. We demonstrated that, under certain conditions, AFEM and the classical Finite Element Method (FEM) are equivalent. We then compared the accuracy of AFEM and FEM in representing strong and weak discontinuities. The thesis concludes with examples of application of the method to CMC.
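To make concrete what "a discrete representation of cracks with no additional global degrees of freedom" means, the following is a minimal, schematic statement of EFEM-type kinematics for a strong discontinuity; the notation (shape functions N_i, crack Γ, jump [[u]]) is generic and is not taken from the thesis itself.

```latex
% Schematic EFEM-type kinematics for one cracked element: the standard
% nodal field is enriched by a displacement jump [[u]] across the crack
% Gamma.  The jump is an element-local unknown that can be statically
% condensed, so no additional global degrees of freedom are introduced.
\[
  \mathbf{u}(\mathbf{x}) =
    \sum_{i} N_i(\mathbf{x})\,\mathbf{d}_i
    + \bigl(H_\Gamma(\mathbf{x}) - \varphi(\mathbf{x})\bigr)\,[\![\mathbf{u}]\!],
  \qquad
  \varphi(\mathbf{x}) = \sum_{i \in \Omega^{+}} N_i(\mathbf{x}).
\]
```

Here H_Γ is the Heaviside function across the crack and φ keeps the enrichment confined to the cracked element, which is what allows the jump to be condensed at the element level.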
Los estilos APA, Harvard, Vancouver, ISO, etc.
48

Hertz-Clemens, Stéphane. "Etude d'un composite aéronautique à matrice métallique sous chargements de fatigue : solution mécano-thermique et propagation de fissures". Paris, ENMP, 2002. http://www.theses.fr/2002ENMP1072.

Texto completo
Los estilos APA, Harvard, Vancouver, ISO, etc.
49

Francescato, Pascal. "Prévision du comportement plastique des matériaux hétérogènes à constituants métalliques : application aux composites à matrice métallique à fibres continues et aux plaques perforées". Université Joseph Fourier (Grenoble), 1994. http://www.theses.fr/1994GRE10110.

Texto completo
Resumen
This thesis deals with the numerical implementation of the periodic homogenization method in yield design (limit analysis) and its application to predicting the macroscopic strength properties of long-fiber composite materials and of perforated metal plates. The numerical method adopted consists in reducing the implementation of the static (lower bound) and kinematic (upper bound) approaches to the solution of a linear optimization problem posed on the representative volume element (RVE). The numerical programs rely on an original use of the finite element method with a discontinuous discretization of the various RVEs studied, as well as on a new linearization technique for the Tresca and von Mises criteria. A first validation of these methods is carried out against theoretical and experimental results obtained elsewhere, and an experimental campaign is conducted on thin plates perforated with circular holes. These methods give a very precise evaluation of the strength anisotropy of this type of material. An extension of these methods to generalized plane strain and to the general three-dimensional case is then proposed in order to study the plastic behavior of metal matrix composites (MMC). Since the finite element meshes remain planar, owing to the case of unidirectional continuous fibers considered here, the objective is to determine the strength domain of unidirectional MMCs under arbitrary off-axis loading. In the case of a fiber/matrix interface with perfect bonding, the computations clearly highlight the transverse anisotropy of this type of composite, including for a hexagonal RVE. Finally, the study ends with a series of computations accounting for a decohesion criterion at the fiber/matrix interface, while the fiber and the matrix obey the isotropic three-dimensional Tresca criterion. The characteristic parameters of the interface criterion are identified from a simple tension test transverse to the fibers.
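As an illustration of how the static (lower bound) approach reduces to a linear program once the yield criterion is linearized, here is a schematic formulation; the macroscopic stress direction Σ⁰, the facet coefficients a_k, c_k and the number of facets m are generic notation under stated assumptions, not quantities defined in the thesis.

```latex
% Schematic lower-bound (static) limit-analysis problem on the RVE,
% with the von Mises or Tresca criterion replaced by an inner polyhedral
% approximation so that the whole problem becomes a linear program.
\[
  \lambda^{-} \;=\; \max_{\lambda,\ \boldsymbol{\sigma}} \ \lambda
  \quad \text{subject to} \quad
  \operatorname{div}\boldsymbol{\sigma} = \mathbf{0} \ \text{in the RVE},
  \quad
  \langle \boldsymbol{\sigma} \rangle = \lambda\,\boldsymbol{\Sigma}^{0},
  \quad
  \mathbf{a}_k : \boldsymbol{\sigma} \,\le\, c_k, \quad k = 1,\dots,m,
\]
```

with the usual periodicity conditions on the RVE boundary. Because each facet inequality cuts the yield surface from the inside, any feasible λ is a guaranteed lower bound on the macroscopic strength along Σ⁰.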
Los estilos APA, Harvard, Vancouver, ISO, etc.
50

Di Stasio, Luca. "Effet de la microstructure sur le décollement à l'interface fibre/matrice dans les stratifiés à matrice polymère avec renfort en fibre soumis à traction". Electronic Thesis or Diss., Université de Lorraine, 2019. http://www.theses.fr/2019LORR0229.

Texto completo
Resumen
The main objective of this work is to investigate the influence of the microstructure on the initiation and growth of fiber/matrix debonds along the fiber arc direction. To this end, two-dimensional Representative Volume Element (RVE) models of unidirectional (UD) composites and cross-ply laminates are developed, characterized by different fiber configurations and damage states. Debond initiation is studied through the analysis of the stress distribution at the fiber/matrix interface in the absence of damage. Debond growth is characterized using the approach of Linear Elastic Fracture Mechanics (LEFM), specifically through the evaluation of the Mode I, Mode II and total Energy Release Rate (ERR). Displacement and stress fields are computed with the Finite Element Method (FEM) using the commercial solver Abaqus, and the ERR components are then evaluated with the Virtual Crack Closure Technique (VCCT), implemented in a custom Python routine. The elastic solution of the debonding problem exhibits two regimes: open-crack and closed-crack behavior. In the open-crack regime, the debond faces are not in contact and the stress and displacement fields at the debond tip present an oscillatory singularity; in the closed-crack regime, the faces are in contact over a region of finite size at the debond tip. A convergence analysis of the VCCT within the FEM solution is therefore required to guarantee the validity of the results and constitutes the first step of the work presented in this thesis. Debond growth under remote tensile loading is then studied in RVEs of: UD composites of varying thickness, measured in number of rows of fibers; cross-ply laminates with a central 90° ply of varying thickness, also measured in number of rows of fibers; and thick UD composites, modelled as infinite in the through-the-thickness direction. Different damage configurations are also considered, corresponding to different stages of transverse crack onset: non-interacting isolated debonds; interacting debonds distributed along the loading direction; and debonds on consecutive fibers along the through-the-thickness direction. Finally, an estimate of the debond size at initiation and of the maximum debond size is proposed, based on the analysis of the interface stress distribution (for initiation) and on Griffith's criterion from LEFM (for propagation).
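Since the abstract states that the ERR components are obtained with the VCCT in a custom Python routine, the snippet below gives a minimal sketch of the standard two-dimensional VCCT formulas; it is not the author's implementation, and the function name, input layout and numerical values are hypothetical.

```python
def vcct_2d(f_tip, du_behind, delta_a, thickness=1.0):
    """Minimal 2D Virtual Crack Closure Technique (illustrative sketch).

    f_tip     : (Fx, Fy) nodal forces at the crack-tip node holding the faces closed
    du_behind : (dux, duy) relative sliding/opening displacements of the released
                node pair one element behind the tip
    delta_a   : length of the crack-tip element (assumed equal ahead of and behind the tip)
    thickness : out-of-plane thickness of the 2D model
    Returns (G_I, G_II, G_total) energy release rates.
    """
    fx, fy = f_tip
    dux, duy = du_behind
    closed_area = 2.0 * delta_a * thickness   # surface virtually closed by the tip forces
    g_ii = fx * dux / closed_area             # sliding (Mode II) contribution
    g_i = fy * duy / closed_area              # opening (Mode I) contribution
    return g_i, g_ii, g_i + g_ii

# Hypothetical numbers, for illustration only
print(vcct_2d(f_tip=(1.2, 3.4), du_behind=(1.0e-4, 2.5e-4), delta_a=1.0e-3))
```

For a debond growing along the curved fiber/matrix interface, the tip forces and relative displacements would first be rotated into the local normal-tangential frame of the interface before applying these formulas.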
Los estilos APA, Harvard, Vancouver, ISO, etc.
