Dissertations on the topic "Accelerating methods"

To see other types of publications on this topic, follow the link: Accelerating methods.

Format your citation in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 50 dissertations for your research on the topic "Accelerating methods".

Next to every source in the list of references there is an "Add to bibliography" button. Press on it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a .pdf and read its abstract online, whenever these details are available in the metadata.

Browse dissertations on a wide variety of disciplines and organise your bibliography correctly.

1

Kerdreux, Thomas. "Accelerating conditional gradient methods." Thesis, Université Paris sciences et lettres, 2020. http://www.theses.fr/2020UPSLE002.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The Frank-Wolfe algorithms, a.k.a. conditional gradient algorithms, solve constrained optimization problems. They break down a non-linear problem into a series of linear minimizations on the constraint set. This contributes to their recent revival in many applied domains, in particular those involving large-scale optimization problems. In this dissertation, we design and analyze new versions of the Frank-Wolfe algorithms that converge faster to the solution of the optimization problem under reasonably generic structural assumptions. We notably show that, contrary to other types of algorithms, this family is adaptive to a broad spectrum of structural assumptions, without the need to know and specify the parameters controlling these hypotheses.
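As a concrete point of reference for the family of methods described above, here is a minimal sketch of the classical Frank-Wolfe iteration on the l1 ball, where the linear minimization oracle reduces to picking a single coordinate. This is the baseline method, not the accelerated variants of the thesis; the toy least-squares problem and all names are illustrative assumptions.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, n_iters=100):
    """Minimal Frank-Wolfe sketch on the l1 ball of a given radius.

    grad: callable returning the gradient of the (convex, smooth) objective.
    Each iteration solves a *linear* problem over the constraint set, which
    for the l1 ball amounts to picking the coordinate with largest |gradient|.
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad(x)
        i = np.argmax(np.abs(g))           # linear minimization oracle
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])     # best vertex of the l1 ball
        gamma = 2.0 / (t + 2.0)            # classical open-loop step size
        x = (1 - gamma) * x + gamma * s    # convex combination stays feasible
    return x

# Toy usage: minimize ||Ax - b||^2 over the l1 ball.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(20, 10)), rng.normal(size=20)
x = frank_wolfe_l1(lambda x: 2 * A.T @ (A @ x - b), np.zeros(10))
```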
2

Dahlin, Johan. "Accelerating Monte Carlo methods for Bayesian inference in dynamical models." Doctoral thesis, Linköpings universitet, Reglerteknik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-125992.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Making decisions and predictions from noisy observations are two important and challenging problems in many areas of society. Some examples of applications are recommendation systems for online shopping and streaming services, connecting genes with certain diseases and modelling climate change. In this thesis, we make use of Bayesian statistics to construct probabilistic models given prior information and historical data, which can be used for decision support and predictions. The main obstacle with this approach is that it often results in mathematical problems lacking analytical solutions. To cope with this, we make use of statistical simulation algorithms known as Monte Carlo methods to approximate the intractable solution. These methods enjoy well-understood statistical properties but are often computationally prohibitive to employ. The main contribution of this thesis is the exploration of different strategies for accelerating inference methods based on sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC). That is, strategies for reducing the computational effort while keeping or improving the accuracy. A major part of the thesis is devoted to proposing such strategies for the MCMC method known as the particle Metropolis-Hastings (PMH) algorithm. We investigate two strategies: (i) introducing estimates of the gradient and Hessian of the target to better tailor the algorithm to the problem and (ii) introducing a positive correlation between the point-wise estimates of the target. Furthermore, we propose an algorithm based on the combination of SMC and Gaussian process optimisation, which can provide reasonable estimates of the posterior but with a significant decrease in computational effort compared with PMH. Moreover, we explore the use of sparseness priors for approximate inference in over-parametrised mixed effects models and autoregressive processes. This can potentially be a practical strategy for inference in the big data era. Finally, we propose a general method for increasing the accuracy of the parameter estimates in non-linear state space models by applying a designed input signal.
Should the Riksbank raise or lower the repo rate at its next meeting to reach the inflation target? Which genes are associated with a certain disease? How can Netflix and Spotify know which films and music I would like to watch and listen to next? These three problems are examples of questions where statistical models can be useful for providing support and input for decisions. Statistical models combine theoretical knowledge about, for example, the Swedish economic system with historical data to produce forecasts of future events. These forecasts can then be used to evaluate, for example, what would happen to inflation in Sweden if unemployment falls, or how the value of my pension savings changes when the Stockholm stock exchange crashes. Applications like these and many others make statistical models important for many parts of society. One way to construct statistical models is to continuously update a model as more information is collected. This approach is called Bayesian statistics and is particularly useful when one has good prior insight into the model or access to only a small amount of historical data for building the model. A drawback of Bayesian statistics is that the computations required to update the model with the new information are often very complicated. In such situations one can instead simulate the outcome of millions of variants of the model and then compare these against the historical observations at hand. One can then average over the variants that gave the best results in order to arrive at a final model. It can therefore sometimes take days or weeks to construct a model. The problem becomes particularly severe when using more advanced models that could give better forecasts but take too long to build. In this thesis we use a number of different strategies to facilitate or improve these simulations. For example, we propose taking more insights about the system into account and thereby reducing the number of model variants that need to be examined; we can then rule out certain models in advance because we have a good idea of roughly what a good model should look like. We can also modify the simulation so that it moves more easily between different types of models, so that the space of all possible models is explored more efficiently. We propose a number of different combinations and modifications of existing methods to speed up the fitting of the model to the observations, and we show that the computation time can in some cases be reduced from a few days to about an hour. Hopefully this will in the future make it possible to use more advanced models in practice, which in turn will lead to better forecasts and decisions.
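The particle Metropolis-Hastings algorithm studied in the thesis wraps a particle-filter estimate of the likelihood inside a Metropolis-Hastings loop. A full PMH implementation needs a particle filter, so the sketch below only shows the outer random-walk Metropolis-Hastings structure on an exactly known log-target; it is a simplified stand-in, not the thesis's algorithm.

```python
import numpy as np

def random_walk_mh(log_target, x0, step=0.5, n_samples=5000, seed=1):
    """Random-walk Metropolis-Hastings sketch.

    In particle Metropolis-Hastings the exact log-target below is replaced
    by a noisy particle-filter estimate; the acceptance rule is unchanged,
    which is what makes strategies such as correlating successive estimates
    (one of the thesis's themes) possible.
    """
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        x_prop = x + step * rng.normal(size=np.shape(x))
        lp_prop = log_target(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = x_prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy usage: sample a 2-D standard normal.
draws = random_walk_mh(lambda x: -0.5 * np.sum(x**2), np.zeros(2))
```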
3

Lopes, Antonio Roldao. "Accelerating iterative methods for solving systems of linear equations using FPGAs." Thesis, Imperial College London, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.526401.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
4

Ghadimi, Euhanna. "Accelerating Convergence of Large-scale Optimization Algorithms." Doctoral thesis, KTH, Reglerteknik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-162377.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Several recent engineering applications in multi-agent systems, communication networks, and machine learning deal with decision problems that can be formulated as optimization problems. For many of these problems, new constraints limit the usefulness of traditional optimization algorithms. In some cases, the problem size is much larger than what can be conveniently dealt with using standard solvers. In other cases, the problems have to be solved in a distributed manner by several decision-makers with limited computational and communication resources. By exploiting problem structure, however, it is possible to design computationally efficient algorithms that satisfy the implementation requirements of these emerging applications. In this thesis, we study a variety of techniques for improving the convergence times of optimization algorithms for large-scale systems. In the first part of the thesis, we focus on multi-step first-order methods. These methods add memory to the classical gradient method and account for past iterates when computing the next one. The result is a computationally lightweight acceleration technique that can yield significant improvements over gradient descent. In particular, we focus on the Heavy-ball method introduced by Polyak. Previous studies have quantified the performance improvements over the gradient method through a local convergence analysis of twice continuously differentiable objective functions. However, the convergence properties of the method on more general convex cost functions have not been known. The first contribution of this thesis is a global convergence analysis of the Heavy-ball method for a variety of convex problems whose objective functions are strongly convex and have Lipschitz continuous gradients. The second contribution is to tailor the Heavy-ball method to network optimization problems. In such problems, a collection of decision-makers collaborate to find the decision vector that minimizes the total system cost. We derive the optimal step-sizes for the Heavy-ball method in this scenario, and show how the optimal convergence times depend on the individual cost functions and the structure of the underlying interaction graph. We present three engineering applications where our algorithm significantly outperforms tailor-made state-of-the-art algorithms. In the second part of the thesis, we consider the Alternating Direction Method of Multipliers (ADMM), an alternative powerful method for solving structured optimization problems. The method has recently attracted a large interest from several engineering communities. Despite its popularity, its optimal parameters have been unknown. The third contribution of this thesis is to derive optimal parameters for the ADMM algorithm when applied to quadratic programming problems. Our derivations quantify how the Hessian of the cost functions and constraint matrices affect the convergence times. By exploiting this information, we develop a preconditioning technique that allows the performance to be accelerated even further. Numerical studies of model-predictive control problems illustrate significant performance benefits of a well-tuned ADMM algorithm. The fourth and final contribution of the thesis is to extend our results on optimal scaling and parameter tuning of the ADMM method to a distributed setting. We derive optimal algorithm parameters and suggest heuristic methods that can be executed by individual agents using local information. The resulting algorithm is applied to the distributed averaging problem and shown to yield substantial performance improvements over state-of-the-art algorithms.
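For reference, below is a minimal sketch of the Heavy-ball iteration with the textbook parameter choices for a mu-strongly convex objective with L-Lipschitz gradient. The thesis's contributions (global guarantees for this class and optimal step-sizes for networked problems) are not reproduced by this toy.

```python
import numpy as np

def heavy_ball(grad, x0, L, mu, n_iters=200):
    """Polyak Heavy-ball sketch: gradient step plus a momentum term that
    reuses the previous iterate. alpha and beta are the classical tunings
    for mu-strongly convex objectives with L-Lipschitz gradients."""
    alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2          # step size
    beta = ((np.sqrt(L) - np.sqrt(mu)) /
            (np.sqrt(L) + np.sqrt(mu))) ** 2               # momentum weight
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iters):
        x_next = x - alpha * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Toy usage: a strongly convex quadratic with known mu = 1 and L = 100.
Q = np.diag([1.0, 10.0, 100.0])
x_min = heavy_ball(lambda x: Q @ x, np.ones(3), L=100.0, mu=1.0)
```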


5

Singh, Karanpreet. "Accelerating Structural Design and Optimization using Machine Learning." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/104114.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Machine learning techniques promise to greatly accelerate structural design and optimization. In this thesis, deep learning and active learning techniques are applied to different non-convex structural optimization problems. Finite Element Analysis (FEA) based standard optimization methods for aircraft panels with bio-inspired curvilinear stiffeners are computationally expensive. The main reason for employing many of these standard optimization methods is the ease of their integration with FEA. However, each optimization requires multiple computationally expensive FEA evaluations, making their use impractical at times. To accelerate optimization, the use of Deep Neural Networks (DNNs) is proposed to approximate the FEA buckling response. The results show that DNNs obtained an accuracy of 95% for evaluating the buckling load, and that the DNN accelerated the optimization by a factor of nearly 200. This demonstrates the potential of DNN-based machine learning algorithms for accelerating the optimization of bio-inspired curvilinearly stiffened panels. However, the approach has drawbacks: it is specific to similar structural design problems, and it requires large datasets for DNN training. An adaptive machine learning technique called active learning is therefore used in this thesis to accelerate the evolutionary optimization of complex structures. The active learner helps the Genetic Algorithm (GA) by predicting whether a candidate design will satisfy the required constraints. The approach does not need a surrogate model trained prior to the optimization; instead, the active learner adaptively improves its own accuracy during the optimization, reducing the required number of FEA evaluations. The results show that the approach has the potential to reduce the total required FEA evaluations by more than 50%. Lastly, machine learning is used to make recommendations for modeling choices while analyzing a structure using FEA. Decisions about the selection of appropriate modeling techniques are usually based on an analyst's judgement, drawing on knowledge and intuition from past experience. The machine learning-based approach provides recommendations within seconds, thus saving significant computational resources while supporting accurate design choices.
Doctor of Philosophy
This thesis presents an innovative application of artificial intelligence (AI) techniques for designing aircraft structures. An important objective for the aerospace industry is to design robust and fuel-efficient aerospace structures. State-of-the-art research suggests that the structure of future aircraft could mimic organic cellular structures. However, the design of these new panels with arbitrary structures is computationally expensive. For instance, applying the standard optimization methods currently used for aerospace structures can take anywhere from a few days to months. The presented research demonstrates the potential of AI for accelerating the optimization of aircraft structures. This will give aircraft designers an efficient way to design futuristic fuel-efficient aircraft, with a positive impact on the environment and the world.
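A minimal sketch of the constraint-screening idea described in the abstract, under assumed simplifications: a hand-rolled k-nearest-neighbour classifier stands in for the thesis's active learner, and the cheap analytic `feasibility_oracle` below is a hypothetical placeholder for the expensive FEA constraint check.

```python
import numpy as np

def feasibility_oracle(x):
    """Hypothetical stand-in for an expensive FEA constraint evaluation."""
    return float(np.sum(x**2) <= 1.0)

class ActiveFeasibilityScreen:
    """Active-learning sketch: a k-NN vote screens candidate designs; only
    uncertain candidates trigger the expensive oracle, whose results are
    added to the training set on the fly."""

    def __init__(self, k=5, margin=0.2):
        self.X, self.y = [], []
        self.k, self.margin = k, margin

    def is_feasible(self, x):
        if len(self.X) >= self.k:
            d = np.linalg.norm(np.array(self.X) - x, axis=1)
            votes = np.array(self.y)[np.argsort(d)[:self.k]]
            p = votes.mean()
            if abs(p - 0.5) > self.margin:    # confident: skip the FEA call
                return p > 0.5
        label = feasibility_oracle(x)         # uncertain: pay for the truth
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(label)
        return bool(label)

# Toy usage: screen 100 random candidate designs.
screen = ActiveFeasibilityScreen()
rng = np.random.default_rng(2)
feasible = [screen.is_feasible(rng.normal(size=3)) for _ in range(100)]
```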
6

Bryan, Paul David. "Accelerating microarchitectural simulation via statistical sampling principles." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47715.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The design and evaluation of computer systems rely heavily upon simulation. Simulation is also a major bottleneck in the iterative design process. Applications that may be executed natively on physical systems in a matter of minutes may take weeks or months to simulate. As designs incorporate increasingly higher numbers of processor cores, it is expected that the times required to simulate future systems will become an even greater issue. Simulation exhibits a tradeoff between speed and accuracy. By basing experimental procedures upon known statistical methods, the simulation of systems may be dramatically accelerated while retaining reliable methods to estimate error. This thesis focuses on the acceleration of simulation through statistical processes. The first two techniques discussed in this thesis focus on accelerating single-threaded simulation via cluster sampling. Cluster sampling extracts multiple groups of contiguous population elements to form a sample. This thesis introduces techniques to reduce sampling and non-sampling bias components, which must be reduced for sample measurements to be reliable. Non-sampling bias is reduced through the Reverse State Reconstruction algorithm, which removes ineffectual instructions from the skipped instruction stream between simulated clusters. Sampling bias is reduced via the Single Pass Sampling Regimen Design Process, which guides the user towards selected representative sampling regimens. Unfortunately, the extension of cluster sampling to include multi-threaded architectures is non-trivial and raises many interesting challenges; approaches to overcoming them are discussed. This thesis also introduces thread skew, a useful metric that quantitatively measures the non-sampling bias associated with divergent thread progressions at the beginning of a sampling unit. Finally, the Barrier Interval Simulation method is discussed as a technique to dramatically decrease the simulation times of certain classes of multi-threaded programs. It segments a program into discrete intervals, separated by barriers, which are leveraged to avoid many of the challenges that prevent multi-threaded sampling.
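A minimal sketch of the cluster-sampling estimate underlying the techniques above, with assumed simplifications: contiguous windows of a synthetic performance trace play the role of simulated clusters, and the spread of the per-cluster means provides the error estimate.

```python
import numpy as np

def cluster_sample_mean(population, n_clusters=30, cluster_len=100, seed=3):
    """Cluster-sampling sketch: draw groups of contiguous elements and
    estimate the population mean and its standard error from per-cluster
    means (clusters, not individual elements, are the sampling units)."""
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(population) - cluster_len, size=n_clusters)
    cluster_means = np.array(
        [population[s:s + cluster_len].mean() for s in starts])
    est = cluster_means.mean()
    se = cluster_means.std(ddof=1) / np.sqrt(n_clusters)
    return est, se    # est +/- ~2*se gives a rough 95% interval

# Toy usage: a correlated synthetic trace of per-interval measurements.
trace = np.cumsum(np.random.default_rng(0).normal(size=100_000)) % 5 + 1
print(cluster_sample_mean(trace))
```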
7

Parks, Paula L. "Moving at the speed of potential: A mixed-methods study of accelerating developmental students in a California community college." Thesis, Capella University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3611804.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:

Most developmental community college students are not completing the composition sequence successfully. This mixed-methods study examined acceleration as a way to help developmental community college students complete the composition sequence more quickly and more successfully. Acceleration is a curricular redesign that includes challenging readings and assignments and reduces the number of required classes in the developmental composition sequence. Developmental students taking an accelerated composition class at the California community college studied were as successful as developmental students taking the traditional segmented basic skills course. Students who pass the accelerated course skip a developmental class and are eligible to take the college-level course, which saves them time and money. The students who were interviewed cited the main factors leading to their success: the academic support from faculty, academic support from fellow students, the personality/caring of the teacher, and an interest in the class theme. Data were from the first semester the college offered this class. Findings from the study indicate that the college studied should continue offering accelerated composition classes and should encourage attendance at professional development meetings so that all parts of the accelerated curriculum will be implemented in the future. Implementing all parts of the accelerated curriculum may increase the success rates. The college studied should also re-examine its traditional basic skills curriculum and the timed writing departmental final exam, which causes unnecessary stress and lowers expectations. More effort could be made to include readings from minority authors and to provide support, such as through learning communities.

8

O'Brien, Gerard. "Comparison and evaluation of United Nations and ARC based test methods for the determination of self-accelerating decomposition temperatures." Thesis, London South Bank University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.388169.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
9

Drzisga, Daniel. "Accelerating Isogeometric Analysis and Matrix-free Finite Element Methods Using the Surrogate Matrix Methodology." München: Universitätsbibliothek der TU München, 2020. http://d-nb.info/122693434X/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
10

Macedo Alves de Lima, Jean. "Développement et validation d'un nouveau critère de déformation progressive pour les REPs." Electronic Thesis or Diss., Ecully, Ecole centrale de Lyon, 2023. http://www.theses.fr/2023ECDL0011.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
During the design, construction and operation of a nuclear component, it is necessary to ensure its integrity under all operating conditions, nominal or accidental. Demonstrating the resistance of the fundamental components of the primary and secondary circuits to failure modes is necessary in order to validate the design of these structures. Among the possible failure modes is the phenomenon of ratcheting (progressive deformation). Ratcheting assessment of nuclear power plant structures is mainly carried out by means of simplified methods or a complete inelastic analysis. From an industrial point of view, neither type of evaluation is satisfactory, as these methods are either too conservative or too complex to use and implement. In this context, the aim of this thesis is to develop a new industrial design rule and/or a new calculation methodology applicable to complex structures. The first chapter is devoted to the state of the art, in particular the ratcheting phenomenon. The second chapter presents the modelling of metallic materials and the numerical methods used to simulate cyclic calculations; we propose a new method for accelerating cyclic calculations in order to make the step-by-step integration method faster. The third chapter is devoted to the modelling of the COTHAA tests. Constitutive models are evaluated in order to propose a robust model capable of describing the ratcheting observed in structures; the results predicted by a simplified version of the Chaboche model are in good agreement with experimental measurements, and we also show the ability of the new acceleration method to simulate these tests. The fourth chapter is dedicated to the experimental study: we first propose a new structural ratcheting test, the DEFPROG test, and then validate the model proposed in the third chapter against these experimental results. The fifth and last chapter is devoted to the proposal of the new design rule to guard against the risk of ratcheting: we propose and validate a new simplified method, relying on both experimental results and modelling.
11

Pelletier, Stéphane. "Acceleration methods for image super-resolution." Thesis, McGill University, 2010. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=86530.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Image super-resolution (SR) attempts to recover a high-resolution (HR) image or video sequence from a set of degraded and aliased low-resolution (LR) ones. The computational complexity associated with many SR algorithms may hinder their use in time-critical applications. This motivates our interest in techniques for accelerating computations associated with edge-preserving image SR problems. Edge-preserving formulations are preferable to quadratic ones since they yield perceptually improved images with sharper edges. First, we propose a simple preconditioning method for accelerating the solution of edgepreserving image restoration problems in which a linear shift-invariant (LSI) point spread function (PSF) is employed. This application is a special case of SR with a single LR image and a magnification factor of one. We demonstrate that the proposed approach offers significant advantages of simplicity, and in several cases, speed, over traditional methods for accelerating such problems.
Secondly, we adapt the previous approach to edge-preserving SR problems from multiple translated LR images. Our technique involves reordering the HR pixels in a similar way to what is done in preconditioning methods for quadratic formulations. However, due to the edge-preserving requirements, the Hessian matrix of the cost function varies during the minimization process. We develop an efficient update scheme for the preconditioner in order to cope with this situation. Unlike some other acceleration strategies that round the displacement values between the LR images on the HR grid, the proposed method does not sacrifice the optimality of the observation model.
Thirdly, we describe a technique for preconditioning SR problems involving rational magnification factors. The use of such factors is motivated in part by the fact that, under certain circumstances, optimal SR zooms are non-integers. We show that by reordering the pixels of the LR images, the structure of the problem to solve is modified in such a way that preconditioners based on circulant operators can be used.
Finally, we apply our SR acceleration techniques to compressed color video sequences and to Bayer pattern images taken from a camera whose sensor is covered with a color filter array (CFA). Through experimental results, we demonstrate that the proposed techniques can provide significant speed improvement in many scenarios.
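A minimal sketch of why circulant structure matters for the preconditioners discussed above: a circulant system is diagonalized by the FFT, so applying or inverting the preconditioner costs O(n log n). The one-dimensional kernel and sizes are illustrative assumptions.

```python
import numpy as np

def circulant_solve(c, b):
    """Solve Cx = b where C is circulant with first column c: circulant
    matrices are diagonalized by the FFT, with eigenvalues fft(c)."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

# Toy usage: a symmetric circulant blur along one dimension.
n = 8
c = np.zeros(n)
c[0], c[1], c[-1] = 0.6, 0.2, 0.2                   # kernel, nonzero spectrum
C = np.array([np.roll(c, i) for i in range(n)]).T   # dense check matrix
x_true = np.arange(n, dtype=float)
b = C @ x_true
assert np.allclose(circulant_solve(c, b), x_true)
```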
12

Chen, Binbin. "Pyrolytic biochar stability assessed by chemical accelerating aging method." Thesis, KTH, Materialvetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-277933.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The EU and Sweden have adopted a new climate policy framework to regulate net carbon emissions. In this context, the concept of negative CO2 emissions has been proposed to neutralize the CO2 generated from the necessary consumption of fossil fuels. Biochar, a pyrolytic product of biomass, can store carbon in a relatively stable way, which makes it one of the most promising tools for creating a carbon sink. Biochar stability, defined as the ratio of carbon remaining in biochar after 100 years, is the most crucial factor when using biochar for carbon storage. So far, various approaches have been proposed to measure and predict biochar stability, such as elemental analysis, proximate analysis, and accelerated aging methods. Each method has its pros and cons, and the reliability of these methods still needs to be verified. In this project, the chemical accelerated aging method was selected for assessing biochar stability, because it captures both the chemical and physical properties of biochar. In addition, the gas, liquid, and solid products generated during the chemical treatment are collected and analyzed separately in order to study the oxidation mechanism. The biochar in this project is produced from miscanthus and seaweed at various pyrolysis temperatures. It is found that biochar stability can be increased by raising the pyrolysis temperature, and that miscanthus biochar is more sensitive to pyrolysis temperature within the range of 350-600 °C. The highest biochar stability (73%) was achieved with miscanthus-derived biochar produced at 550 °C, which demonstrates high potential as a carbon sequestration tool.
13

Li, Lulu, Ph. D. Massachusetts Institute of Technology. "Acceleration methods for Monte Carlo particle transport simulations." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112521.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 166-175).
Performing nuclear reactor core physics analysis is a crucial step in the process of both designing and understanding nuclear power reactors. Advancements in the nuclear industry demand more accurate and detailed results from reactor analysis. Monte Carlo (MC) eigenvalue neutron transport methods are uniquely qualified to provide these results, due to their accurate treatment of space, angle, and energy dependencies of neutron distributions. Monte Carlo eigenvalue simulations are, however, challenging, because they must resolve the fission source distribution and accumulate sufficient tally statistics, resulting in prohibitive run times. This thesis proposes the Low Order Operator (LOO) acceleration method to reduce the run time challenge, and provides analyses to support its use for full-scale reactor simulations. LOO is implemented in the continuous energy Monte Carlo code, OpenMC, and tested in 2D PWR benchmarks. The Low Order Operator (LOO) acceleration method is a deterministic transport method based on the Method of Characteristics. Similar to Coarse Mesh Finite Difference (CMFD), the other acceleration method evaluated in this thesis, LOO parameters are constructed from Monte Carlo tallies. The solutions to the LOO equations are then used to update Monte Carlo fission sources. This thesis deploys independent simulations to rigorously assess LOO, CMFD, and unaccelerated Monte Carlo, simulating up to a quarter of a trillion neutron histories for each simulation. Analysis and performance models are developed to address two aspects of the Monte Carlo run time challenge. First, this thesis demonstrates that acceleration methods can reduce the vast number of neutron histories required to converge the fission source distribution before tallies can be accumulated. Second, the slow convergence of tally statistics is improved with the acceleration methods for the earlier active cycles. A theoretical model is developed to explain the observed behaviors and predict convergence rates. Finally, numerical results and theoretical models shed light on the selection of optimal simulation parameters such that a desired statistical uncertainty can be achieved with minimum neutron histories. This thesis demonstrates that the conventional wisdom (e.g., maximizing the number of cycles rather than the number of neutrons per cycle) in performing unaccelerated MC simulations can be improved simply by using more optimal parameters. LOO acceleration provides reduction of a factor of at least 2.2 in neutron histories, compared to the unaccelerated Monte Carlo scheme, and the CPU time and memory overhead associated with LOO are small.
by Lulu Li.
Ph. D.
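Converging the fission source over successive cycles is, in deterministic terms, a power iteration whose convergence is governed by the dominance ratio; that slow mode is what schemes such as CMFD and LOO accelerate. Below is a minimal power-iteration sketch on a toy nonnegative operator, offered as an analogy rather than the thesis's Monte Carlo scheme.

```python
import numpy as np

def power_iteration(F, n_cycles=100):
    """Power-iteration sketch: the deterministic analogue of converging a
    Monte Carlo fission source over successive cycles."""
    src = np.ones(F.shape[0]) / F.shape[0]
    k = 1.0
    for _ in range(n_cycles):
        new = F @ src
        k = new.sum() / src.sum()   # eigenvalue (k-effective analogue)
        src = new / new.sum()       # renormalized fission source
    return k, src

# Toy usage: a small nonnegative "fission" operator.
F = np.array([[0.6, 0.3, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.3, 0.6]])
k_eff, source = power_iteration(F)
```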
14

Vašíček, Zdeněk. "Acceleration Methods for Evolutionary Design of Digital Circuits." Doctoral thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-261257.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Although the literature offers many examples presenting evolutionary design as an interesting and promising alternative to traditional design techniques used in the field of digital circuits, its practical deployment is often problematic, mainly because of the so-called scalability problem: the evolutionary algorithm is able to provide satisfactory results only for small instances of the problem at hand. A serious issue is the scalability of fitness evaluation, which is especially pronounced in the synthesis of combinational circuits, where the time needed to evaluate a candidate solution typically grows exponentially with the number of primary inputs. This dissertation proposes several methods for reducing the fitness-evaluation scalability problem in the evolutionary design and optimization of digital systems. The goal is to show, by means of several case studies, that with suitable acceleration techniques evolutionary methods can automatically design innovative and competitive solutions to practical problems. To reduce the scalability problem in the evolutionary design of digital filters, a domain-specific FPGA-based accelerator was designed. This task represents a case in which a large amount of training data has to be evaluated and many generations have to be produced. Using the proposed accelerator, efficient implementations of various nonlinear image filters were discovered. Building on the evolved filters, a robust nonlinear impulse-noise filter was created, which is protected by a utility model; compared to conventional solutions, the proposed filter exhibits high filtering quality and low implementation cost. By combining evolutionary design with techniques known from formal verification, a system was created that significantly reduces the scalability problem of gate-level evolutionary synthesis of combinational circuits. The proposed method produces complex yet high-quality solutions that can compete with commercial logic-synthesis tools. The algorithm was experimentally evaluated on a set of benchmark circuits, including so-called hard-to-synthesize circuits, where it achieved on average 25% better results than available academic and commercial tools. The last domain addressed in the thesis is the acceleration of the evolutionary design of linear systems. Using the evolutionary design of multiple constant multipliers as an example, it was shown that the time needed to evaluate a candidate solution can be substantially reduced (in fact, to the evaluation of a single test vector) when the character of the problem, in this case linearity, is taken into account.
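The thesis attacks fitness-evaluation scalability with FPGA accelerators and formal-verification techniques. A related, purely software-level trick common in this field is bit-parallel truth-table evaluation, sketched below under the assumption of a small combinational circuit; the candidate expression is arbitrary.

```python
# Bit-parallel evaluation sketch: for a combinational circuit with n <= 6
# inputs, all 2**n input combinations fit in a single 64-bit word per
# signal, so one pass of bitwise operators evaluates the whole truth table.
N_IN = 4
ROWS = 1 << N_IN
MASK = (1 << ROWS) - 1

# Pack column i of the input truth table into one integer per input.
inputs = []
for i in range(N_IN):
    word = 0
    for row in range(ROWS):
        if (row >> i) & 1:
            word |= 1 << row
    inputs.append(word)

def fitness(candidate_output, target_output):
    """Number of matching truth-table rows (higher is better)."""
    return ROWS - bin((candidate_output ^ target_output) & MASK).count("1")

# Candidate circuit: out = (a AND b) XOR (c OR d), evaluated in one shot.
a, b, c, d = inputs
candidate = ((a & b) ^ (c | d)) & MASK
target = candidate                  # pretend this matches the specification
print(fitness(candidate, target))   # -> 16, a perfect score
```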
15

Bhalekar, Aniruddha Ramesh. "Internet content delivery acceleration methods for hybrid network topologies." College Park, Md. : University of Maryland, 2003. http://hdl.handle.net/1903/132.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (M.S.) -- University of Maryland, College Park, 2003.
Thesis research directed by: Electrical and Computer Engineering. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
16

Jonart, Douglas E. (Douglas Edward). "Methods and devices for corrosion fatigue testing without acceleration." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/107074.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis: Ph. D. in Ocean Engineering, Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016.
Cataloged from PDF version of thesis.
Includes bibliographical references.
Substantial submarine procurement and maintenance costs could be saved by extending the submarine propulsion shaft inspection interval from 6 to 12 years as part of the design of the next class of vessels. On existing classes, corrosion fatigue limits this interval, but data on corrosion fatigue life is sparse and incomplete. An existing model from previous research has been updated and stands ready to provide predictions, given more relevant data. Techniques and devices are developed to obtain this data. First, traditional fatigue machines and samples are adapted to provide information on corrosion fatigue on pre-pitted and unpitted samples. Artificial seawater is used for comparative consistency; tests with enzymatic or actual seawater are recommended. Next, direct-current potential drop is proven as a means to detect transitions in the corrosion fatigue failure chain on a bending fatigue specimen exposed to artificial seawater. This method can be used to detect transition of pits to cracks in situ, and it is believed that it can be used to detect ingress of water through protective coatings, which has not previously been measured or credited in a review of predictive models and design life analyses. This technique should be verified and expanded to detect additional transitions and to apply to the devices developed as part of this research. Second, test devices are developed to more accurately reflect the operational submarine propulsion shaft, in terms of loading, environment, and number of test cycles. The benchtop prototype intended to prove the concept has been identified by the Navy as an improvement over existing machines, and is subsequently redesigned as an inexpensive and rapidly deployable test stand for uncoated shaft specimens. The originally envisioned device is also designed and assembled. It leverages non-contact air bearings and motors, as well as flexural pivots, to enable very high cycle fatigue testing while minimizing the parasitic loads imparted on the sample by the test machine. The next recommended step is deployment of this device as a tool for verification testing of fully coated samples, necessary based on the large scope of the desired increase in shaft life.
by Douglas E. Jonart.
Ph. D. in Ocean Engineering
17

Lezar, Evan. "GPU acceleration of matrix-based methods in computational electromagnetics." Thesis, Stellenbosch : University of Stellenbosch, 2011. http://hdl.handle.net/10019.1/6507.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Thesis (PhD (Electrical and Electronic Engineering))--University of Stellenbosch, 2011.
ENGLISH ABSTRACT: This work considers the acceleration of matrix-based computational electromagnetic (CEM) techniques using graphics processing units (GPUs). These massively parallel processors have gained much support since late 2006, with software tools such as CUDA and OpenCL greatly simplifying the process of harnessing the computational power of these devices. As with any advances in computation, the use of these devices enables the modelling of more complex problems, which in turn should give rise to better solutions to a number of global challenges faced at present. For the purpose of this dissertation, CUDA is used in an investigation of the acceleration of two methods in CEM that are used to tackle a variety of problems. The first of these is the Method of Moments (MOM), which is typically used to model radiation and scattering problems, with the latter being considered here. For the CUDA acceleration of the MOM presented here, the assembly and subsequent solution of the matrix equation associated with the method are considered. This is done for both single and double precision floating-point matrices. For the solution of the matrix equation, general dense linear algebra techniques are used, which allow for the use of a vast expanse of existing knowledge on the subject. This also means that the implementations developed here, along with the results presented, are immediately applicable to the same wide array of applications where these methods are employed. The implementations presented for both the assembly and the solution of the matrix equation result in significant speedups over multi-core CPU implementations, with speedups of up to 300x and 10x, respectively, being measured. The implementations presented also overcome one of the major limitations in the use of GPUs as accelerators (that of limited memory capacity), with problems up to 16 times larger than would normally be possible being solved. The second matrix-based technique considered is the Finite Element Method (FEM), which allows for the accurate modelling of complex geometric structures, including non-uniform dielectric and magnetic properties of materials, and is particularly well suited to handling bounded structures such as waveguides. In this work the CUDA acceleration of the cutoff and dispersion analysis of three waveguide configurations is presented. The modelling of these problems using an open-source software package, FEniCS, is also discussed. Once again, the problem can be approached from a linear algebra perspective, with the formulation in this case resulting in a generalised eigenvalue (GEV) problem. For the problems considered, a total solution speedup of up to 7x is measured for the solution of the generalised eigenvalue problem, with up to 22x being attained for the solution of the standard eigenvalue problem that forms part of the GEV problem.
18

Chaudhary, Suneal K. "Acceleration of Monte Carlo methods using low discrepancy sequences." Diss., Restricted to subscribing institutions, 2004. http://proquest.umi.com/pqdweb?did=766110621&sid=1&Fmt=2&clientId=1564&RQT=309&VName=PQD.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
19

Rossi, Francesco <1987>. "Numerical and Analytical Methods for Laser-Plasma Acceleration Physics." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6771/1/tesi3.pdf.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. For a range of parameters of interest for laser-plasma acceleration, the dependence of the threshold for self-injection in the non-evolving wake on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages with present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine for modeling the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
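The per-particle kernel that a GPU PIC code such as jasmine parallelizes over millions of particles is the particle push. Below is a minimal sketch of the standard non-relativistic Boris push in normalized units; it is generic PIC background under stated assumptions, not code from the thesis.

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """Boris particle push (non-relativistic, normalized units): half an
    electric kick, a rotation about B, then the second half kick. E and B
    are the fields interpolated to the particle position."""
    v_minus = v + 0.5 * q_over_m * dt * E          # first half electric kick
    t = 0.5 * q_over_m * dt * B                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)        # magnetic rotation
    v_new = v_plus + 0.5 * q_over_m * dt * E       # second half kick
    return x + dt * v_new, v_new

# Toy usage: gyration in a uniform magnetic field.
x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x, v = boris_push(x, v, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]),
                      q_over_m=1.0, dt=0.1)
```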
20

Rossi, Francesco <1987>. "Numerical and Analytical Methods for Laser-Plasma Acceleration Physics." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2015. http://amsdottorato.unibo.it/6771/.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
21

Fenelius, Jonathan. "Test method for high acceleration : A concept study of methods for testing electrical and mechanical components under high loads." Thesis, Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-72748.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The objective of this thesis is to present a suitable high-acceleration test method for SAAB Dynamics. SAAB needs an easier and cheaper way to test components such as the fuze and the electrical components embedded in the fuze system. SAAB Dynamics develops ground combat weapon systems for the global market as well as civilian products. Products produced by SAAB are used in armed combat, making this thesis project somewhat controversial; however, the concept produced by this work can also be used in civilian applications such as aeronautics, space and materials science. The thesis used systematic methods and research to gain as much knowledge as possible about the needs and demands of the customer, in this case SAAB. The project presents its own concept as a valid option instead of buying one from a supplier. The concept is based on the needs of SAAB, was generated through creative brainstorming sessions and a morphological matrix, and was benchmarked against similar test methods and test benches on the market. In order to present a suitable concept, the project conducted a large feasibility study of the fuze system and the products in need of testing, as well as of how other industries test similar accelerations and impacts. The concept consists of a high-grade industrial compressor that generates high air pressure inside a pressure chamber. The built-up pressure breaks a sensitive disc and releases the air into the launch chamber, where the projectile accelerates through a rifled pipe and then travels freely in a wider pipe. The projectile then decelerates on impact with an energy-absorbing material such as aluminum honeycomb or foam.
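A hedged back-of-envelope for the launch concept described above: neglecting friction and the pressure drop during expansion, a projectile of mass m driven by pressure p over bore area A experiences a ≈ pA/m and exits a barrel of length L at v ≈ sqrt(2aL). All numbers below are illustrative assumptions, not SAAB figures.

```python
import math

# Illustrative numbers only: a 0.1 kg test projectile, a 40 mm bore,
# 20 MPa chamber pressure, and a 2 m accelerating pipe.
m = 0.1                      # projectile mass [kg]
d = 0.040                    # bore diameter [m]
p = 20e6                     # driving pressure [Pa]
L = 2.0                      # accelerating length [m]

A = math.pi * (d / 2) ** 2   # bore area [m^2]
a = p * A / m                # acceleration (constant-pressure idealization)
v = math.sqrt(2 * a * L)     # exit velocity for constant acceleration

print(f"a = {a:.3g} m/s^2 (~{a / 9.81:.0f} g), v = {v:.0f} m/s")
```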
22

Friedrich, Ulrich. "Adaptive Wavelet Methods for Inverse Problems: Acceleration Strategies, Adaptive Rothe Method and Generalized Tensor Wavelets." Marburg: Philipps-Universität Marburg, 2015. http://d-nb.info/1076865518/34.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
23

Roussel-Ragot, Pierre. "La méthode du recuit simulé : accélération et parallélisation." Paris 6, 1990. http://www.theses.fr/1990PA066305.

Full text of the source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The simulated annealing method is a powerful optimization technique that is simple to implement and has proved its effectiveness in finding optimal or near-optimal solutions in many practical applications. However, even with well-suited annealing schedules, the computation time can become prohibitive. Several parallelizations of the algorithm have been proposed, but they depart from the sequential behaviour of the algorithm. We present a problem-independent parallelization method for the simulated annealing algorithm that retains the convergence properties of the sequential algorithm. We introduce two parallelization modes, depending on the value of the temperature, and we model their behaviour analytically, which makes it possible to predict the speedup of the method for any problem. We also present a specialized processor architecture that accelerates the execution of the algorithm for a simplified placement problem. The performance of the parallel algorithm is evaluated on a simplified placement problem using a network of transputers, and the models are compared with experimental results.
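For reference, here is a minimal sketch of the sequential simulated-annealing loop that the thesis parallelizes (geometric cooling, Metropolis acceptance). The parallelization modes and the analytical speedup model are the thesis's contribution and are not reproduced here; the toy objective is an assumption.

```python
import numpy as np

def simulated_annealing(cost, neighbour, x0, t0=1.0, alpha=0.95,
                        steps_per_temp=100, n_temps=100, seed=4):
    """Sequential simulated-annealing sketch with geometric cooling:
    worse moves are accepted with probability exp(-delta/temperature)."""
    rng = np.random.default_rng(seed)
    x, c, t = x0, cost(x0), t0
    best_x, best_c = x, c
    for _ in range(n_temps):
        for _ in range(steps_per_temp):
            x_new = neighbour(x, rng)
            c_new = cost(x_new)
            if c_new < c or rng.uniform() < np.exp(-(c_new - c) / t):
                x, c = x_new, c_new
                if c < best_c:
                    best_x, best_c = x, c
        t *= alpha   # geometric cooling schedule
    return best_x, best_c

# Toy usage: minimize a bumpy one-dimensional function.
best, _ = simulated_annealing(
    cost=lambda x: x**2 + 2 * np.sin(5 * x),
    neighbour=lambda x, rng: x + rng.normal(scale=0.3),
    x0=3.0)
```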
24

Dammertz, Holger [Verfasser]. "Acceleration methods for ray tracing based global illumination / Holger Dammertz." Ulm : Universität Ulm. Fakultät für Ingenieurwissenschaften und Informatik, 2011. http://d-nb.info/1016659350/34.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
25

Blake, Jack. "Domain decomposition methods for nuclear reactor modelling with diffusion acceleration." Thesis, University of Bath, 2016. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.698988.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In this thesis we study methods for solving the neutron transport equation (or linear Boltzmann equation). This is an integro-differential equation that describes the behaviour of neutrons during a nuclear fission reaction. Applications of this equation include modelling behaviour within nuclear reactors and the design of shielding around x-ray facilities in hospitals. Improvements in existing modelling techniques are an important way to address environmental and safety concerns of nuclear reactors, and also the safety of people working with or near radiation. The neutron transport equation typically has seven independent variables; however, to facilitate rigorous mathematical analysis we consider the monoenergetic, steady-state equation without fission, and with isotropic interactions and isotropic source. Due to its high dimension, the equation is usually solved iteratively and we begin by considering a fundamental iterative method known as source iteration. We prove that the method converges assuming piecewise smooth material data, a result that is not present in the literature. We also improve upon known bounds on the rate of convergence assuming constant material data. We conclude by numerically verifying this new theory. We move on to consider the use of a specific, well-known diffusion equation to approximate the solution to the neutron transport equation. We provide a thorough presentation of its derivation (along with suitable boundary conditions) using an asymptotic expansion and matching procedure, a method originally presented by Habetler and Matkowsky in 1975. Next we state the method of diffusion synthetic acceleration (DSA) for which the diffusion approximation is instrumental. From there we move on to explore a new method of seeing the link between the diffusion and transport equations through the use of a block operator argument. Finally we consider domain decomposition algorithms for solving the neutron transport equation. Such methods have great potential for parallelisation and for the local application of different solution methods. A motivation for this work was to build an algorithm applying DSA only to regions of the domain where it is required. We give two very different domain decomposed source iteration algorithms, and we prove the convergence of both of these algorithms. This work provides a rigorous mathematical foundation for further development and exploration in this area. We conclude with numerical results to illustrate the new convergence theory, but also solve a physically motivated problem using hybrid source iteration/DSA algorithms and see significant reductions in the required computation time.
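As a concrete reference point for the analysis described above, here is a minimal Python sketch of source iteration for a monoenergetic, isotropically scattering 1D slab with vacuum boundaries and a diamond-difference sweep; all parameter values are illustrative assumptions, not the thesis's test problems, and no DSA or domain decomposition is included.

import numpy as np

def source_iteration(sigma_t=1.0, sigma_s=0.9, q=1.0, slab=10.0, nx=200, n_mu=8,
                     tol=1e-8, max_it=500):
    dx = slab / nx
    mu, w = np.polynomial.legendre.leggauss(n_mu)   # discrete-ordinates quadrature
    phi = np.zeros(nx)                              # scalar flux guess
    for it in range(max_it):
        src = 0.5 * (sigma_s * phi + q)             # lagged isotropic scattering source
        phi_new = np.zeros(nx)
        for m in range(n_mu):
            psi_in = 0.0                            # vacuum inflow
            cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
            for i in cells:                         # diamond-difference transport sweep
                a = 2.0 * abs(mu[m]) / dx
                psi = (src[i] + a * psi_in) / (sigma_t + a)
                psi_in = 2.0 * psi - psi_in
                phi_new[i] += w[m] * psi
        if np.linalg.norm(phi_new - phi) <= tol * np.linalg.norm(phi_new):
            return phi_new, it
        phi = phi_new
    return phi, max_it

The error in this loop contracts roughly by the scattering ratio per sweep, so convergence stalls as sigma_s/sigma_t approaches one; that slow-down in diffusive regimes is exactly what DSA is designed to remedy.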
26

McDonald, Terry E. "A comprehensive literature review and critique of the identification of methods and practical applications of accelerated learning strategies." Online version, 2001. http://www.uwstout.edu/lib/thesis/2001/2001mcdonaldt.pdf.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
27

Wang, Xin, Joe Giacalone, Yihua Yan, Mingde Ding, Na Wang, and Hao Shan. "Particle Acceleration in Two Converging Shocks." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/624679.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Observations by spacecraft such as ACE, STEREO, and others show that there are proton spectral "breaks" with energy E_br at 1-10 MeV in some large CME-driven shocks. Generally, a single shock with the diffusive acceleration mechanism would not predict the "broken" energy spectrum. The present paper focuses on two converging shocks to identify this energy spectral feature. In this case, the converging shocks comprise one forward CME-driven shock on 2006 December 13 and another backward Earth bow shock. We simulate the detailed particle acceleration processes in the region of the converging shocks using the Monte Carlo method. As a result, we not only obtain an extended energy spectrum with an energy "tail" up to a few tens of MeV, higher than in the previous single-shock model, but we also find an energy spectral "break" occurring at ~5.5 MeV. The predicted energy spectral shape is consistent with observations from multiple spacecraft. The spectral "break" in this case is thus caused by the interaction between the CME shock and Earth's bow shock, and would not be present if Earth were not in the path of the CME.
28

SHAHAM, NOAM. "METHODS FOR THE ACCELERATION OF NON-LOCAL MEANS NOISE REDUCTION ALGORITHM." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2007. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11325@1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
COORDENAÇÃO DE APERFEIÇOAMENTO DO PESSOAL DE ENSINO SUPERIOR
Non-local means é um novo algoritmo de redução de ruídos para imagens apresentado por Buades e Morel em 2004. Este algoritmo funciona consideravelmente melhor do que os algoritmos anteriores, mas sua lenta execução causada pela alta complexidade o impede de ser usado em aplicações comuns. O objetivo deste trabalho é investigar maneiras de reduzir o tempo de execução do algoritmo, possibilitando seu uso em aplicações comuns de processamento de imagem, tal como fotografia e centros de impressão.
Non-local means is an innovative noise reduction algorithm for images presented by Buades and Morel in 2004. It performs remarkably better than older-generation algorithms but carries a performance penalty that prevents it from being used in mainstream consumer applications. The objective of this work is to find ways of reducing the time complexity of the algorithm, enabling its use in mainstream image processing applications such as home photography or photo printing centers.
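For orientation, a direct, unaccelerated windowed non-local means filter looks roughly like the Python sketch below; the patch size, search window and filtering parameter h are assumed values, not those studied in the thesis.

import numpy as np

def nlm_denoise(img, patch=3, window=10, h=0.1):
    pad = patch // 2
    m = pad + window
    padded = np.pad(img.astype(float), m, mode="reflect")
    out = np.zeros(img.shape)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + m, j + m
            ref = padded[ci - pad:ci + pad + 1, cj - pad:cj + pad + 1]
            wsum = acc = 0.0
            for di in range(-window, window + 1):
                for dj in range(-window, window + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pad:ni + pad + 1, nj - pad:nj + pad + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch similarity
                    wgt = np.exp(-d2 / (h * h))       # exponential weight
                    wsum += wgt
                    acc += wgt * padded[ni, nj]
            out[i, j] = acc / wsum
    return out

Each output pixel averages an entire search window with patch-comparison weights, so the cost grows with (image size) x (window area) x (patch area); that cost is the performance penalty the thesis sets out to reduce.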
29

Beach, Thomas Henry Outram. "Application acceleration : an investigation of automatic porting methods for application accelerators." Thesis, Cardiff University, 2010. http://orca.cf.ac.uk/55069/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Future HPC systems will contain both large collections of multi-core processors and specialist many-core co-processors. These specialised many-core co-processors are typically classified as Application Accelerators. More specifically, Application Accelerators are devices such as GPUs, CELL processors, FPGAs and custom application-specific integrated circuit devices (ASICs). These devices present new challenges to overcome, including their programming difficulties, their diversity and the lack of a common programming approach between them, and the issue of selecting the most appropriate device for an application. This thesis attempts to tackle these problems by examining the suitability of automatic porting methods. In the course of this research, relevant software, both academic and commercial, has been analysed to determine how it attempts to solve the problems relating to the use of application acceleration devices. A new approach is then constructed; this approach is an Automatic Self-Modifying Application Porting system that is able not only to port code to an acceleration device but, using performance data, to predict the appropriate device for the code being ported. Additionally, this system is able to use the performance data it gathers to modify its own decision-making model and improve its future predictions. Once the system had been developed, a series of applications were trialled and their performance, both in terms of execution time and the accuracy of the system's predictions, was analysed. This analysis has shown that, although the system is not able to flawlessly predict the correct device for an unseen application, it is able to achieve an accuracy of over 80% and, just as importantly, the code it produces is within 15% of that produced by an experienced human programmer. This analysis has also shown that while automatically ported code performs favourably in nearly all cases when compared to a single-core CPU, it outperforms a quad-core CPU in only three out of seven application case studies. From these results, it is also shown that the system is able to utilise this performance data to build a decision model allowing users to determine whether an automatically ported version of their application will provide a performance improvement compared to both CPU types considered. The availability of such a system may prove valuable in allowing a diverse range of users to utilise the performance supplied by many-core devices within next-generation HPC systems.
30

Raikhola, Sagar Singh. "EFFECT OF DIFFERENT BASELINE CORRECTION METHODS ON THE GROUND ACCELERATION SIGNAL." OpenSIUC, 2019. https://opensiuc.lib.siu.edu/theses/2526.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The main objective of this study is to compare the effect of different baseline correction methods on the ground acceleration signal and on the seismic response of structures. Three strong-ground-motion processing methods, namely the Chiu, Akkar and EERL methods, were selected in this study. For each method, a MATLAB code was written and used in processing ground motions recorded at Belmont, IL and Gorkha, Nepal. The processed acceleration, velocity and displacement time histories obtained from the MATLAB code were compared to the time histories provided on the USGS website. Next, the effect of each method on the response spectra was examined for five different damping ratios. A numerical integration scheme called the state-space method was used in computing the time-domain response. Finally, to get an understanding of how the processed accelerations will affect a real structure, a four-story steel frame was modeled using RISA 3D software. The response of the frame was computed using time-history analysis, the resulting story displacements were compared, and implications for structures with different natural periods were discussed.
31

Kawamori, Naoki. "Sprint acceleration performance in team sports : biomechanical characteristics and training methods." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2008. https://ro.ecu.edu.au/theses/224.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Sprinting is a fundamental activity in many team sports such as soccer, rugby, football, field hockey, and basketball. Specifically, the ability to rapidly increase sprint running velocity over short distances, which is often referred to as sprint acceleration ability, is of major importance to team-sport athletes, since sprint efforts during team-sport matches are typically of short duration (e.g., 10-20 m, 2-3 s). The biomechanical characteristics of the acceleration phase of sprinting have previously been studied in track sprinters from a block start, but there is a dearth of research exploring the biomechanical characteristics of sprint acceleration in team-sport athletes from starting positions that are specific to team-sport match situations (e.g., a standing start). In addition, resisted sprint training such as weighted sled towing is a popular training modality that athletes often use in an effort to improve sprint acceleration ability, but its use is largely based on coaches' observation and lacks experimental evidence. In particular, the optimal training load for resisted sprint training is currently unknown. This thesis aimed to fill the research gap in these areas.
32

Marquez, Damian Jose Ignacio. "Multilevel acceleration of neutron transport calculations." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/19731.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Thesis (M.S.)--Nuclear and Radiological Engineering, Georgia Institute of Technology, 2008.
Committee Chair: Stacey, Weston M.; Committee Co-Chair: de Oliveira, Cassiano R.E.; Committee Member: Hertel, Nolan; Committee Member: van Rooijen, Wilfred F.G.
33

Helan, Tomáš. "Možnosti laboratorní přípravy a testování stříkaných betonů." Master's thesis, Vysoké učení technické v Brně. Fakulta stavební, 2013. http://www.nusl.cz/ntk/nusl-225877.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The master's thesis is focused on the possibility of laboratory testing of shotcrete. An important point of the thesis is the comparison of the properties of shotcrete made in the laboratory with a vibrating press against concrete of the same recipe produced by a spraying machine. The influence of the shotcrete recipe and of the type and dosage of the accelerating admixture is also examined.
34

McCartney, Maura Elizabeth. "Occupational Head Protection: Considerations for Test Methods and Use." Thesis, Virginia Tech, 2021. http://hdl.handle.net/10919/103646.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Occupational accidents are a main source of traumatic brain injuries (TBIs), with TBIs accounting for a substantial portion of all work-related deaths. Motor vehicle accidents and falls are consistently leading causes of head injury and fatality across industries. These injuries can have serious long-term consequences on an individual's quality of life and lead to large economic costs within society. This thesis investigated sources of occupational TBI prevention within two industries, construction and professional motorsports. In the last twenty years there have been major safety advancements within these industries, and yet the risk of TBI still exists. There is a need for safety standards that better reflect real-world injury scenarios. First, this thesis considered improvements to construction hard hat safety standards by evaluating the ability of Type 1 and Type 2 hard hats to reduce head injuries due to falls. Hard hats were evaluated over a range of real-world fall heights and three impact locations, using a twin-wire drop tower. Linear acceleration was used to predict injury risks. Type 2 hard hats substantially reduced skull fracture and concussion risk when compared to Type 1, indicating that if more workers wore Type 2 hard hats the risk of severe head injuries in the construction industry would be reduced. Next, this thesis compared real-world motorsport crash simulations and head impact laboratory tests designed to simulate real-world head impacts. Deformation and change in velocity were used to compare the energy managed by each system. The laboratory tests generally involved higher-severity impacts, with higher accelerations, than the simulations, despite managing a similar amount of energy. This indicates that a large amount of the energy involved in the simulations was managed by the surrounding protective systems. The differences between systems create challenges for representing real-world crashes in a laboratory setting. Overall, the comparison in this thesis raises considerations for future helmet testing protocols in order to better match real-world simulations.
Master of Science
Occupational accidents are a main source of traumatic brain injuries (TBIs), with TBIs accounting for a substantial portion of all work-related deaths. Motor vehicle accidents and falls are consistently leading causes of head injury and fatality across industries. These injuries can have serious long-term consequences on an individual's quality of life and lead to large economic costs within society. This thesis investigated sources of occupational TBI prevention within two industries, construction and professional motorsports. In the last twenty years there have been major safety advancements within these industries, and yet the risk of TBI still exists. There is a need for safety standards that better reflect real-world injury scenarios. This thesis considered improvements to construction hard hat safety standards by evaluating the ability of two different hard hat types to reduce head injuries due to falls. It also compared real-world motorsport crash simulations and head impact laboratory tests designed to simulate real-world head impacts. This comparison raises considerations for future helmet testing protocols in order to better represent real-world simulations.
35

Arale, Brännvall Marian. "Accelerating longitudinal spinfluctuation theory for iron at high temperature using a machine learning method." Thesis, Linköpings universitet, Teoretisk Fysik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-170314.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
In the development of materials, the understanding of their properties is crucial. For magnetic materials, magnetism is an apparent property that needs to be accounted for. There are multiple factors explaining the phenomenon of magnetism, one being the effect of vibrations of the atoms on longitudinal spin fluctuations. This effect can be investigated by simulations, using density functional theory, and calculating energy landscapes. Through such simulations, the energy landscapes have been found to depend on the magnetic background and the positions of the atoms. However, when simulating a supercell of many atoms, calculating energy landscapes for all atoms consumes many hours on a supercomputer. In this thesis, the possibility of using machine learning models to accelerate the approximation of energy landscapes is investigated. The material under investigation is body-centered cubic iron in the paramagnetic state at 1043 K. Machine learning enables statistical predictions to be made on new data based on patterns found in a previous set of data. Kernel ridge regression is used as the machine learning method. An important issue when training a machine learning model is the representation of the data in the so-called descriptor (feature vector representation) or, more specifically in this case, how the environment of an atom in a supercell is accounted for and represented properly. Four different descriptors are developed and compared to investigate which one yields the best result and why. Apart from comparing the descriptors, the results obtained with machine learning models are compared to those obtained with other methods of approximating the energy landscapes. The machine learning models are also tested in a combined atomistic spin dynamics and ab initio molecular dynamics simulation (ASD-AIMD), where they were used to approximate energy landscapes and, from those, magnetic moment magnitudes at 1043 K. The results of these simulations are compared to the results from two other cases: one where the magnetic moment magnitudes are set to a constant value and one where they are set to their magnitudes at 0 K. From these investigations it is found that using machine learning methods to approximate the energy landscapes does, to a large degree, decrease the errors compared to the other approximation methods investigated. Some weaknesses of the respective descriptors were detected, and if these are accounted for in future work, the errors could potentially be lowered further.
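A minimal kernel ridge regression fit/predict pair of the kind used to map atomic-environment descriptors to energies is sketched below in Python; the RBF kernel and the hyperparameters are stand-in assumptions, since the thesis's actual subject is the choice of descriptor X.

import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances between descriptor rows, then a Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-6, gamma=0.5):
    # Dual weights alpha solve (K + lam*I) alpha = y.
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

Training requires a single dense solve in the number of reference environments, after which each prediction costs only one kernel row, which is what makes the approach attractive next to repeated density functional theory calculations.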
36

Carrion, Schafer Benjamin. "Acceleration of the discrete element method on a reconfigurable co-processor." Thesis, University of Birmingham, 2003. http://etheses.bham.ac.uk//id/eprint/94/.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Granular materials are important for many different disciplines, e.g. geomechanics, civil engineering and chemical engineering. Many approaches have been used to model their behaviour, but one of the best and most important is the Discrete Element Method (DEM). The DEM was first developed during the 1970s, but its widespread use has been hampered by its extremely computationally demanding nature. The DEM can be run on a parallel computer by farming out different sub-domains onto different processors. However, particles transiting from one sub-domain to another create communication and synchronisation overheads which limit the speed-up achieved by parallel processing. Also, if some cells become much more heavily populated than others, then there will be inefficiencies due to load imbalance between the processors. As a result of these effects, the speed-up achieved by running the DEM on parallel processor computers is far less than linear. This thesis describes work on the acceleration of the DEM using reconfigurable computing. A custom hardware architecture for the DEM has been designed and implemented on a Field Programmable Gate Array (FPGA) mounted on a reconfigurable computing card. The design exploits the low-level parallelism of the DEM by using long, wide computational pipelines that compute many arithmetic operations concurrently. It also exploits the high-level parallelism by overlapping the main computational tasks using domain decomposition techniques. Speed-ups of a factor of at least 30 per FPGA have been achieved for simulations involving 25,000 to 200,000 particles. A multi-FPGA system has been implemented that allows the full overlap of computation with communication, so that an almost linear speed-up can be achieved as the number of FPGAs is increased. The effect of the short-wordlength arithmetic used in the FPGA has been investigated, and the accuracy of the simulations has been found to be acceptable.
37

Flötteröd, Gunnar. "A search acceleration method for optimization problems with transport simulation constraints." Elsevier, 2017. https://publish.fid-move.qucosa.de/id/qucosa%3A72819.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
This work contributes to the rapid approximation of solutions to optimization problems that are constrained by iteratively solved transport simulations. Given an objective function, a set of candidate decision variables and a black-box transport simulation that is solved by iteratively attaining a (deterministic or stochastic) equilibrium, the proposed method approximates the best decision variable out of the candidate set without having to run the transport simulation to convergence for every single candidate decision variable. This method can be inserted into a broad class of optimization algorithms or search heuristics that implement the following logic: (i) Create variations of a given, currently best decision variable, (ii) identify one out of these variations as the new currently best decision variable, and (iii) iterate steps (i) and (ii) until no further improvement can be attained. A probabilistic and an asymptotic performance bound are established and exploited in the formulation of an efficient heuristic that is tailored towards tight computational budgets. The efficiency of the method is substantiated through a comprehensive simulation study with a non-trivial road pricing problem. The method is compatible with a broad range of simulators and requires minimal parametrization.
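In hedged pseudocode form (the function names and the stopping rule here are our own), the (i)-(iii) logic reads roughly as follows; evaluate wraps the objective computed from the transport simulation, and it is inside that call that the proposed method stops short of full equilibration rather than running the simulation to convergence.

def accelerated_search(x0, variations, evaluate, max_rounds=100):
    best, best_val = x0, evaluate(x0)
    for _ in range(max_rounds):
        candidates = variations(best)                     # step (i): create variations
        values = [evaluate(c) for c in candidates]
        k = min(range(len(values)), key=values.__getitem__)
        if values[k] >= best_val:                         # step (iii): no improvement, stop
            return best, best_val
        best, best_val = candidates[k], values[k]         # step (ii): adopt the best candidate
    return best, best_val

Minimisation of the objective is assumed here purely for concreteness.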
38

Cai, HanQin. "Accelerating truncated singular-value decomposition: a fast and provable method for robust principal component analysis." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6068.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Principal component analysis (PCA) is one of the most popular statistical procedures for dimension reduction. A modification of PCA, called robust principal component analysis (RPCA), has been studied to overcome the well-known shortcoming of PCA: sensitivity to outliers and corrupted data points. Earlier works have proved RPCA can be exactly recovered via semidefinite programming. Recently, researchers have provided some provable non-convex solvers for RPCA, based on projected gradient descent or alternating projections, in fully or partially observed settings. Yet, we find the computational complexity of the recent RPCA algorithms can be improved further. We study RPCA in the fully observed setting, which is about separating a low-rank matrix L and a sparse matrix S from their given sum D = L + S. In this thesis, a new non-convex algorithm, dubbed accelerated alternating projections, is introduced for solving RPCA rapidly. The proposed new algorithm significantly improves the computational efficiency of the existing alternating-projections-based algorithm proposed in [1] when updating the estimate of the low-rank factor. The acceleration is achieved by first projecting a matrix onto some low-dimensional subspace before obtaining a new estimate of the low-rank matrix via truncated singular-value decomposition. Essentially, truncated singular-value decomposition (a.k.a. the best low-rank approximation) is replaced by a high-efficiency sub-optimal low-rank approximation, while convergence is retained. An exact recovery guarantee has been established, which shows linear convergence of the proposed algorithm under certain natural assumptions. Empirical performance evaluations establish the advantage of our algorithm over other state-of-the-art algorithms for RPCA. An application experiment on video background subtraction has also been presented.
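For reference, the unaccelerated alternating-projections iteration for D = L + S can be sketched in a few lines of Python; the fixed rank, threshold and iteration count are simplifications (the actual algorithm adapts the threshold per iteration), and the acceleration studied in the thesis replaces the full truncated SVD below with a cheaper subspace-projected approximation.

import numpy as np

def altproj_rpca(D, rank, zeta=0.1, n_iter=50):
    S = np.zeros_like(D, dtype=float)
    for _ in range(n_iter):
        # Project D - S onto the set of rank-`rank` matrices via a truncated SVD.
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Project D - L onto sparse matrices via hard thresholding at level zeta.
        R = D - L
        S = np.where(np.abs(R) > zeta, R, 0.0)
    return L, S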
39

Massa, Julio Cesar. "Acceleration of convergence in solving the eigenvalue problem by matrix iteration using the power method." Thesis, Virginia Polytechnic Institute and State University, 1985. http://hdl.handle.net/10919/101452.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
A modification of the matrix iteration using the power method, in conjunction with Hotelling deflation, for the solution of the problem K.x = ω².M.x is proposed here. The problem can be written in the form D.x = λ.x, and the modification consists of raising the matrix D to an appropriate power p before carrying out the iteration process. The selection of a satisfactory value of p is investigated, based on the spacing between the eigenvalues. The effect of p on the accuracy of the results is also discussed.
M.S.
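A sketch of the described modification for a generic dominant-eigenvalue problem is given below in Python; the Hotelling deflation step for the higher modes is omitted, and the values of p and the tolerance are illustrative. Raising D to the power p improves the per-iteration error contraction from |lambda_2/lambda_1| to |lambda_2/lambda_1|^p.

import numpy as np

def accelerated_power_method(D, p=3, tol=1e-10, max_it=1000):
    Dp = np.linalg.matrix_power(D, p)        # one-time cost of forming D^p
    x = np.random.default_rng(0).standard_normal(D.shape[0])
    lam = 0.0
    for _ in range(max_it):
        y = Dp @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new                      # normalised iterate
        if abs(lam_new - lam) <= tol * lam_new:
            break
        lam = lam_new
    # The Rayleigh quotient recovers the dominant eigenvalue of D itself.
    return (x @ (D @ x)) / (x @ x), x

As the abstract notes, a satisfactory p depends on the spacing between the eigenvalues; the trade-off is the one-time cost of forming D^p and its effect on accuracy.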
40

MARTINS, FABIO JESSEN WERNECK DE ALMEIDA. "METHODS FOR ACCELERATION OF LEARNING PROCESS OF REINFORCEMENT LEARNING NEURO-FUZZY HIERARCHICAL POLITREE MODEL." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2010. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=16421@1.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO
FUNDAÇÃO DE APOIO À PESQUISA DO ESTADO DO RIO DE JANEIRO
Neste trabalho foram desenvolvidos e avaliados métodos com o objetivo de melhorar e acelerar o processo de aprendizado do modelo de Reinforcement Learning Neuro-Fuzzy Hierárquico Politree (RL-NFHP). Este modelo pode ser utilizado para dotar um agente de inteligência através de processo de Aprendizado por Reforço (Reinforcement Learning). O modelo RL-NFHP apresenta as seguintes características: aprendizado automático da estrutura do modelo; auto-ajuste dos parâmetros associados à estrutura; capacidade de aprendizado da ação a ser adotada quando o agente está em um determinado estado do ambiente; possibilidade de lidar com um número maior de entradas do que os sistemas neuro-fuzzy tradicionais; e geração de regras linguísticas com hierarquia. Com intenção de melhorar e acelerar o processo de aprendizado do modelo foram implementadas seis políticas de seleção, sendo uma delas uma inovação deste trabalho (Q-DC-roulette); implementado o método early stopping para determinação automática do fim do treinamento; desenvolvido o eligibility trace cumulativo; criado um método de poda da estrutura, para eliminação de células desnecessárias; além da reescrita do código computacional original. O modelo RL-NFHP modificado foi avaliado em três aplicações: o benchmark Carro na Montanha simulado, conhecido na área de agentes autônomos; uma simulação robótica baseada no robô Khepera; e uma num robô real NXT. Os testes efetuados demonstram que este modelo modificado se ajustou bem a problemas de sistemas de controle e robótica, apresentando boa generalização. Comparado o modelo RL-NFHP modificado com o original, houve aceleração do aprendizado e obtenção de menores modelos treinados.
In this work, methods were developed and evaluated in order to improve and accelerate the learning process of the Reinforcement Learning Neuro-Fuzzy Hierarchical Politree (RL-NFHP) model. This model is employed to provide an agent with intelligence, making it autonomous thanks to its capacity to reason (infer actions) and to learn, acquiring knowledge through interaction with the environment by a Reinforcement Learning process. The RL-NFHP model has the following features: automatic learning of the structure of the model; self-adjustment of the parameters associated with its structure; the ability to learn the action to be taken when the agent is in a particular state of the environment; the ability to handle a larger number of inputs than traditional neuro-fuzzy systems; and the generation of interpretable linguistic rules with hierarchy. With the aim of improving and accelerating the learning process of the model, six action-selection policies were developed, one of them an innovation of this work (Q-DC-roulette); the early stopping method was implemented for automatically determining the end of training; a cumulative eligibility trace was developed; and a method of pruning the structure was created, for removing unnecessary cells; in addition, the original computer code was rewritten. The modified RL-NFHP model was evaluated in three applications: the simulated Mountain-Car benchmark problem, well known in the area of autonomous agents; a simulated application in robotics based on the Khepera robot; and an application on a real NXT robot. The experiments show that this modified model fits problems of control systems and robotics well, with good generalization. Compared with the original, the modified RL-NFHP model accelerated the learning process and produced smaller trained models.
41

Ford, Wesley. "The Advancement of Stable, Efficient and Parallel Acceleration Methods for the Neutron Transport Equation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX105/document.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Dans cet article, nous proposons une nouvelle bibliothèque de techniques non linéaires pour accélérer l’équation de transport en ordonnées discrètes. Deux nouveaux types de méthodes d'accélération non linéaire appelées méthode de rééquilibrage spatialement variable (SVRM) et accélération de matrice de réponse (RMA), respectivement, sont proposées et étudiées. La première méthode, SVRM, est basée sur le calcul de la variation spatiale de premier ordre de l'équation de la balance des neutrons. RMA, est une méthode DP0 qui utilise la connaissance de l'opérateur de transport pour former une relation cohérente. Deux variantes distinctes de RMA, appelées respectivement Explicit-RMA (E-RMA) et Balance (B-RMA), sont dérivées. Les propriétés de convergence des deux méthodes d'accélération sont étudiées pour deux schémas d'itération différents de l'opérateur de transport de la méthode des caractéristiques (MOC) pour une dalle 1D, en utilisant une analyse spectrale et une analyse de Fourier. Sur la base des résultats de la comparaison 1D, seuls les outils RMA et CMFD ont été implémentés dans la bibliothèque. Les performances de RMA sont comparées à celles de CMFD en utilisant les tests 3D C5G7, ZPPR et UH12. Les schémas de résolution parallèles et séquentiels sont considérés. L'analyse des résultats indique que les deux variantes de RMA ont une efficacité et une stabilité améliorées par rapport au CMFD, pour les matériaux à diffusion optique. De plus, le RMA montre une amélioration importante de la stabilité et de l'efficacité lorsque la géométrie est décomposée spatialement. Pour obtenir des performances numériques optimales, une combinaison de RMA et de CMFD est suggérée. Une enquête plus approfondie sur l'utilisation et l'amélioration de la RMA est proposée. De plus, de nombreuses idées pour étendre les fonctionnalités de la bibliothèque sont présentées
In this paper we propose a new library of non-linear techniques for accelerating the discrete-ordinates transport equation. Two new types of nonlinear acceleration methods, called the Spatially Variant Rebalancing Method (SVRM) and Response Matrix Acceleration (RMA), respectively, are proposed and investigated. The first method, SVRM, is based on the computation of the zeroth- and first-order spatial variation of the neutron balance equation. RMA is a DP0 method that uses knowledge of the transport operator to form a consistent relationship. Two distinct variants of RMA, called Explicit-RMA (E-RMA) and Balance (B-RMA), respectively, are derived. The convergence properties of both acceleration methods are investigated for two different iteration schemes of the method of characteristics (MOC) transport operator for a 1D slab, using spectral and Fourier analysis. Based on the results of the 1D comparison, only RMA and CMFD were implemented in the library. The performance of RMA is compared to CMFD using the C5G7, ZPPR, and UH12 3D benchmarks. Both parallel and sequential solving schemes are considered. Analysis of the results indicates that both variants of RMA have improved effectiveness and stability relative to CMFD for optically diffusive materials. Moreover, RMA shows great improvement in stability and effectiveness when the geometry is spatially decomposed. To achieve optimal numerical performance, a combination of RMA and CMFD is suggested. Further investigation into the use and improvement of RMA is proposed. In addition, many ideas for extending the features of the library are presented.
42

GUYOMARC'H, FREDERIC. "Méthodes de Krylov : régularisation de la solution et accélération de la convergence." Rennes 1, 2000. http://www.theses.fr/2000REN10096.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Many scientific computing problems require the solution of linear systems. Recent, high-performance algorithms for solving these systems are based on Krylov methods. Their solution space is a Krylov space, and the solution is then defined by an orthogonality condition known as the Galerkin condition. In a first part, we modify the definition of the solution for the resolution of ill-conditioned systems, introducing a new regularisation technique based on polynomial filters. The strength of this method is that the shape of the filters is not fixed by the method but can be arbitrary, and hence dictated by the specific features of the problem. In the second part, we modify the solution space in order to accelerate convergence. Two techniques are explored. The first makes it possible to recycle a Krylov space used to solve a first equation. The second, based on deflation techniques, seeks to attenuate the harmful effect of the smallest eigenvalues. The latter can, moreover, be refined over the course of solving several systems, to the point of completely eliminating the impact of these small eigenvalues. All these algorithms are implemented and tested on problems arising from image analysis and mechanics. This numerical validation confirms the theoretical results.
43

Wolfram, Heiko, and Wolfram Dötzel. "Stability Analysis of a MEMS Acceleration Sensor." Universitätsbibliothek Chemnitz, 2007. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200700143.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Electrostatic actuation, with its several advantages, is the main actuation principle for micro-electro-mechanical systems (MEMS). One major drawback is the nonlinear behavior, which results in instability, known as the electrostatic pull-in effect. This effect might also push a closed-loop configuration into instability and thus make linear time-invariant control inapplicable to the system. The paper investigates the stability of an acceleration sensor in closed-loop operation with this setting. A simplified controller adjustment gives a first insight into this topic. Practical implementations saturate at the quantizer's full-scale value, which is also considered in the stability analysis. Numerical phase-plane analysis verifies the stability and shows further surprising results.
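For context, the pull-in effect mentioned above is usually introduced through the standard parallel-plate actuator result, a textbook relation rather than anything specific to this paper: with spring constant k, initial gap d_0, electrode area A and permittivity \varepsilon_0, stable travel is limited to one third of the gap and the pull-in voltage is

\[ V_{\mathrm{PI}} = \sqrt{\frac{8\,k\,d_0^{3}}{27\,\varepsilon_0\,A}} \]

Beyond this voltage the electrostatic force grows faster with displacement than the restoring spring force and the plates snap together, which is the open-loop instability that the closed-loop analysis has to contend with.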
44

沖津, 昭慶, Akiyoshi Okitsu, 健治 山下, Kenzi Yamashita, 秀幸 畔上 та Hideyuki Azegami. "回転自由度を考慮した実験的動剛性結合法". 日本機械学会, 1988. http://hdl.handle.net/2237/7270.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
45

Serravite, Daniel H. "Whole Body Periodic Acceleration Reduces Levels of Delayed Onset Muscle Soreness After Eccentric Exercise." Scholarly Repository, 2010. http://scholarlyrepository.miami.edu/oa_dissertations/650.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Context: Several recovery strategies have been used, with limited effectiveness, to reduce the muscle discomfort or pain and the diminished muscle performance following a bout of unaccustomed physical activity, a condition known as delayed onset of muscle soreness (DOMS). Muscle damage in this condition is associated with mechanical disruption of the muscle and connective tissue and inflammation and increased oxidative stress. Low frequency, low intensity, whole body periodic acceleration (WBPA) that increases nitric oxide (NO) release from vascular endothelium into the circulation through increased pulsatile shear stress offers a potential solution. This is because endothelial derived nitric oxide has anti-inflammatory, antioxidant and anti-nociceptive properties. Objective: The purpose of this study was to examine the effects of WBPA on the pain and diminished muscle performance associated with DOMS induced by unaccustomed eccentric arm exercise in young male subjects. Design: Longitudinal. Setting: University Exercise Physiology Laboratory. Participants: Seventeen active men, 23.4 +/- 4.6 yr of age. Intervention: Subjects made six visits to the research facility over a two-week period. On day one, the subject performed a 1RM elbow flexion test and was then randomly assigned to the WBPA or control group. Criterion measurements were taken on Day 2, prior to and immediately following performance of the eccentric exercise protocol (10 sets of 10 repetitions using 120% of 1RM) and after the recovery period. During all subsequent sessions (24, 48, 72, and 96 h) these data were collected before the WBPA or passive recovery was provided. Main Outcome Measures: Isometric strength (MVC), blood markers (CPK, MYO, IL-6, TNF-alpha and Uric Acid), soreness, pain, circumference, and range of motion (ROM). Results: Significantly higher MVC values were seen for the WBPA group across the entire 96 h recovery period. Additionally, within group differences were seen in CPK, MYO, IL-6, soreness, pain, circumference, and ROM showing a smaller impact and more rapid recovery by the WBPA group. Conclusion: Application of WBPA hastens recovery from DOMS after eccentric exercise. Given the lack of other potential mechanisms, these effects appear to be mediated by the increased NO release with WBPA.
46

PEIRETTI, PARADISI BENEDETTA. "Study on Coulomb explosion induced by laser-matter interaction and application to ion acceleration." Doctoral thesis, Politecnico di Torino, 2019. http://hdl.handle.net/11583/2739923.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
47

Rustaey, Abid 1961. "A comparison of conventional acceleration schemes to the method of residual expansion functions." Thesis, The University of Arizona, 1989. http://hdl.handle.net/10150/277176.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The algebraic equations resulting from a finite difference approximation may be solved numerically. A new scheme that appears quite promising is the method of residual expansion functions. In addition to speedy convergence, it is also independent of the number of algebraic equations under consideration, hence enabling us to analyze larger systems with higher accuracies. A factor which plays an important role in convergence of some numerical schemes is the concept of diagonal dominance. Matrices that converge at high rates are indeed the ones that possess a high degree of diagonal dominance. Another attractive feature of the method of residual expansion functions is its accurate convergence with minimal degree of diagonal dominance. Methods such as simultaneous and successive displacements, Chebyshev and projection are also discussed, but unlike the method of residual expansion functions, their convergence rates are strongly dependent on the degree of diagonal dominance.
48

Al-Khayyat, Atheel Nowfal Mohammed Taher. "Accelerating the frequency dependent Finite-Difference Time-Domain method using the spatial filtering and parallel computing techniques." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/accelerating-the-frequency-dependent-finitedifference-timedomain-method-using-the-spatial-filtering-and-parallel-computing-techniques(51ca5493-1c84-4c36-ba31-36a320ebbeed).html.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
Maxwell's equations are among the electromagnetic fundamentals that have driven electrical, optical and communication technologies. In order to solve Maxwell's equations, many numerical techniques have been introduced, each with its own advantages and limitations. One of the most robust, accurate and widely used numerical techniques is the Finite-Difference Time-Domain (FDTD) method. The FDTD method uses central-difference approximations to discretise the partial differential form of Maxwell's equations in both space and time, and Yee's algorithm to obtain the solutions. To guarantee a stable and accurate solution of the partial differential form of Maxwell's equations, the time increment of the FDTD method must be upper bounded by the Courant-Friedrichs-Lewy (CFL) condition; hence the demand for computational resources increases for large-scale electromagnetic problems. The Spatially-Filtered FDTD (SF-FDTD) method is utilised to maintain stability when the CFL condition is altered. In other words, the time increment can be conditionally set larger than the CFL limit by filtering out the unwanted high-spatial-frequency components in the spatial frequency domain to maintain stability. The SF-FDTD method cannot model frequency-dependent media; hence the Frequency Dependent FDTD (FD-FDTD) method is utilised to accurately model frequency-dependent media. However, the FD-FDTD method requires a large amount of computational resources for modelling 3D scenarios due to the limitation of the CFL condition. The contributions of this thesis are as follows. Firstly, the implementation of the 1D, 2D and 3D spatially filtered frequency-dependent FDTD (SF-FD-FDTD) method with the Debye model. Secondly, the application of three absorbing boundary conditions (Mur, Stretched-Mesh HABC, and Complex Frequency Shifted PML) with the SF-FD-FDTD method. Thirdly, investigations of the stability, accuracy and efficiency of the SF-FD-FDTD method with each absorbing boundary condition. Fourthly, the mitigation of the late-time instability of the Huygens subgridding (HSG) method by implementing a spatial filtering algorithm with the 1D and 2D HSG method. Fifthly, the application of the shared-memory architecture with OpenMP to accelerate the 2D SF-FD-FDTD method.
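For reference, the CFL bound referred to throughout is, for the standard 3D Yee scheme on a uniform grid (a textbook relation, stated here for context),

\[ \Delta t \le \frac{1}{c\,\sqrt{1/\Delta x^{2} + 1/\Delta y^{2} + 1/\Delta z^{2}}} \]

where c is the fastest wave speed in the model. Spatial filtering sidesteps this limit by removing the high-spatial-frequency modes that would otherwise grow without bound once \Delta t exceeds the bound.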
49

Kamenngan, Panlop. "Control of fully submerged hydrofoil craft acceleration feedback methods to improve performance in high sea states." Thesis, University of Newcastle Upon Tyne, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.260295.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
50

Vance, Jason W. "Elementary Principal Perceptions of the Tennessee Educator Acceleration Model." Digital Commons @ East Tennessee State University, 2016. https://dc.etsu.edu/etd/3146.

Повний текст джерела
Стилі APA, Harvard, Vancouver, ISO та ін.
Анотація:
The Tennessee Educator Acceleration Model (TEAM) had been in a state of reform since the state was awarded the Race to the Top grant. Few teachers admit that an evaluation influenced them significantly; additionally, few administrators agreed that evaluating a teacher did not significantly affect the teacher or students. The purpose of this qualitative study was to determine the perceptions of building-level principals regarding the effectiveness (i.e., increased teacher participation and quality) and efficiency (i.e., producing the required results) of the TEAM in regard to teacher evaluations. Four elementary school principals from East Tennessee participated in the study. The researcher provided data from this study to inform stakeholders of strengths and weaknesses of the state evaluation model. Additionally, the researcher used the data to provide recommendations for improvements to the TEAM model and to identify the support principals needed to adapt their leadership style to effectively execute TEAM mandates. The research revealed that the principals believed the model was a strong one that was research-based; however, the model could prove to be ineffective in its delivery and inefficient in the follow-through if the proper supports were not in place.
