Dissertations / Theses on the topic 'Multi scale methods'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 50 dissertations / theses for your research on the topic 'Multi scale methods.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Zettervall, Niklas. "Multi-scale methods for stochastic differential equations." Thesis, Umeå universitet, Institutionen för fysik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-53704.

Full text
Abstract:
Standard Monte Carlo methods are used extensively to solve stochastic differential equations. This thesis investigates a Monte Carlo (MC) method called multilevel Monte Carlo that solves the equations on several grids, each with a specific number of grid points. Multilevel MC reduces the computational cost compared to standard MC: for a fixed computational cost, the multilevel method achieves a lower variance than the standard one. Discretization and statistical error calculations are also conducted, and the ability to evaluate these errors alongside the multilevel MC creates a powerful tool for solving the equations numerically. By using the multilevel MC method together with the error calculations, it is possible to determine efficiently how to spend an extended computational budget.
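A minimal sketch of the multilevel idea, assuming Euler-Maruyama discretization of geometric Brownian motion and a refinement factor of two between grids; all parameter values are illustrative, and this is not the thesis code:

```python
import numpy as np

rng = np.random.default_rng(0)

def level_correction(level, n_paths, T=1.0, mu=0.05, sigma=0.2, s0=1.0, m=2):
    """Estimate E[P_l - P_{l-1}] with coupled fine/coarse paths sharing noise."""
    nf = m ** level                      # fine-grid steps on this level
    dt = T / nf
    sf = np.full(n_paths, s0)            # fine path
    sc = np.full(n_paths, s0)            # coarse path (m times larger step)
    dw_c = np.zeros(n_paths)
    for step in range(nf):
        dw = rng.normal(0.0, np.sqrt(dt), n_paths)
        sf += mu * sf * dt + sigma * sf * dw
        dw_c += dw
        if (step + 1) % m == 0:          # advance coarse path every m fine steps
            sc += mu * sc * (m * dt) + sigma * sc * dw_c
            dw_c = np.zeros(n_paths)
    payoff_f = np.maximum(sf - 1.0, 0.0)  # e.g. a call payoff
    if level == 0:
        return payoff_f
    return payoff_f - np.maximum(sc - 1.0, 0.0)

# Telescoping multilevel estimator: fewer samples on the expensive fine levels
estimate = sum(level_correction(l, n_paths=20000 // (2 ** l)).mean()
               for l in range(5))
print(estimate)
```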
APA, Harvard, Vancouver, ISO, and other styles
2

Munafo, Alessandro. "Multi-Scale models and computational methods for aerothermodynamics." PhD thesis, Ecole Centrale Paris, 2014. http://tel.archives-ouvertes.fr/tel-00997437.

Full text
Abstract:
This thesis aimed at developing multi-scale models and computational methods for aerothermodynamics applications. The research on multi-scale models has focused on internal energy excitation and dissociation of molecular gases in atmospheric entry flows. The scope was two-fold: to gain insight into the dynamics of internal energy excitation and dissociation in the hydrodynamic regime and to develop reduced models for Computational Fluid Dynamics applications. The reduced models have been constructed by coarsening the resolution of a detailed rovibrational collisional model developed based on ab-initio data for the N2(1Σg+)-N(4Su) system provided by the Computational Quantum Chemistry Group at NASA Ames Research Center. Different mechanism reduction techniques have been proposed. Their application led to the formulation of conventional macroscopic multi-temperature models and vibrational collisional models, and innovative energy bin models. The accuracy of the reduced models has been assessed by means of a systematic comparison with the predictions of the detailed rovibrational collisional model. Applications considered are inviscid flows behind normal shock waves, within converging-diverging nozzles and around axisymmetric bodies, and viscous flows along the stagnation-line of blunt bodies. The detailed rovibrational collisional model and the reduced models have been coupled to two flow solvers developed from scratch in the FORTRAN 90 programming language (SHOCKING_F90 and SOLVER_FVMCC_F90). The results obtained have shown that the innovative energy bin models are able to reproduce the flow dynamics predicted by the detailed rovibrational collisional model with a noticeable benefit in terms of computing time. The energy bin models are also more accurate than the conventional multi-temperature and vibrational collisional models. The research on computational methods has focused on rarefied flows. The scope was to formulate a deterministic numerical method for solving the Boltzmann equation in the case of multi-component gases with internal energy by accounting for both elastic and inelastic collisions. The numerical method, based on the weighted convolution structure of the Fourier transformed Boltzmann equation, is an extension of an existing spectral-Lagrangian method, valid for a mono-component gas without internal energy. During the development of the method, particular attention has been devoted to ensure the conservation of mass, momentum and energy while evaluating the collision operators. Conservation is enforced through the solution of constrained optimization problems, formulated in a consistent manner with the collisional invariants. The extended spectral-Lagrangian method has been implemented in a parallel computational tool (best; Boltzmann Equation Spectral Solver) written in the C programming language. Applications considered are the time-evolution of an isochoric gaseous system initially set in a non-equilibrium state and the steady flow across a normal shock wave. The accuracy of the proposed numerical method has been assessed by comparing the moments extracted from the velocity distribution function with Direct Simulation Monte Carlo (DSMC) method predictions. In all the cases, an excellent agreement has been found. The computational results obtained for both space homogeneous and space inhomogeneous problems have also shown that the enforcement of conservation is mandatory for obtaining accurate numerical solutions.
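The conservation enforcement described above can be pictured as a constrained least-squares projection of the discrete collision operator onto the space orthogonal to the collisional invariants. The following sketch illustrates the general idea on a made-up one-dimensional velocity grid; it is not the spectral-Lagrangian implementation of the thesis:

```python
import numpy as np

# 1-D velocity grid and a made-up discrete collision-operator evaluation q0
v = np.linspace(-5.0, 5.0, 64)
rng = np.random.default_rng(1)
q0 = np.exp(-v**2) * rng.normal(0.0, 1.0, v.size)   # placeholder values

# Collisional invariants: mass, momentum, energy
C = np.vstack([np.ones_like(v), v, v**2])

# Constrained least squares: argmin ||q - q0||^2  s.t.  C q = 0,
# whose closed form is the orthogonal projection below
q = q0 - C.T @ np.linalg.solve(C @ C.T, C @ q0)

print(np.abs(C @ q).max())   # discrete conservation now holds to round-off
```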
APA, Harvard, Vancouver, ISO, and other styles
3

Holst, Henrik. "Multi-scale methods for wave propagation in heterogeneous media." Licentiate thesis, Stockholm : Datavetenskap och kommunikation, Kungliga Tekniska högskolan, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10511.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Zhen. "Stochastic Simulation Methods for Biochemical Systems with Multi-state and Multi-scale Features." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/19191.

Full text
Abstract:
In this thesis we study stochastic modeling and simulation methods for biochemical systems. The thesis is focused on systems with multi-state and multi-scale features and divided into two parts. In the first part, we propose new algorithms that improve existing multi-state simulation methods. We first compare the well known Gillespie's stochastic simulation algorithm (SSA) with StochSim, an agent-based simulation method. Based on this analysis, we propose a hybrid method that possesses the advantages of both methods. Then we propose two new methods that extend the Network-Free Algorithm (NFA) for rule-based models. Numerical results are provided to show the performance improvement by our new methods. In the second part, we investigate two simulation schemes for the multi-scale feature: Haseltine and Rawlings' hybrid method and the quasi-steady-state stochastic simulation method. We first propose an efficient partitioning strategy for the hybrid method and an efficient way of building stochastic cell cycle models with this new partitioning strategy. Then, to understand conditions where the two simulation methods can be applied, we develop a way to estimate the relaxation time of the fast sub-network and compare it with the firing interval of the slow sub-network. Our analyses are verified by numerical experiments on different realistic biochemical models.
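For reference, Gillespie's direct-method SSA can be written in a few lines; this sketch simulates a toy birth-death process with made-up rate constants and is not the hybrid or network-free code developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

def gillespie_birth_death(x0=10, k_birth=1.0, k_death=0.1, t_end=50.0):
    """Direct-method SSA for 0 -> X (rate k_birth) and X -> 0 (rate k_death*x)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])   # reaction propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)         # exponential time to next reaction
        x += 1 if rng.random() < a[0] / a0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

t, x = gillespie_birth_death()
print(x[-1])   # one stochastic realization; the mean settles near k_birth/k_death
```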
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
5

Joung, Young Soo. "Electric field based fabrication methods for multi-scale structured surfaces." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/92160.

Full text
Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 217-227).
Control of micro/nano scale surface structures and properties is crucial to developing novel functional materials. From an engineering point of view, the development of scalable and economical micro/nano-fabrication methods has been in high demand. In this dissertation, electrophoretic deposition (EPD) and breakdown anodization (BDA) are examined for their potential to produce multi-scale structured surfaces. EPD uses electrophoresis to deposit thin films of nanoparticles, dispersed in suspension, onto charged or porous substrates. Depending upon the dispersion stability, the surface roughness can be modulated in order to affect the resulting wettability. BDA can be utilized to alter surface features by employing instabilities during high voltage anodization, which lead to micro scale topography. Different microporous structures are generated depending on electric potential and electrolyte temperature during BDA. A hybrid method employing EPD and BDA results in hierarchical surface structures with both nano/micro scale features. In this work EPD and BDA are utilized for the development of superhydrophobic and superhydrophilic surfaces; sample applications include anti-wetting fabric, capillarity-driven flow design, and critical heat flux enhancement. In many applications it is critical to understand how moving liquid water droplets will behave when they encounter these modified surfaces. We investigate drop impingement on porous thin films produced by BDA and EPD in order to understand the effects of surface structure and chemical properties on droplet dynamics. Using dimensional analysis we've discovered a novel dimensionless parameter, named the Washburn-Reynolds number, which can predict the droplet impingement modes. Intriguingly, we've also discovered that under certain conditions drop impingement results in gas trapped in the spreading droplet, leading to the generation of aerosol above the droplet when the gas bubbles burst. The Washburn-Reynolds number also largely dictates the aerosol generation process. Our results inform the understanding of dynamic interactions between porous surfaces and liquid drops for applications ranging from droplet microfluidics to aerosol generators. In summary, EPD and BDA provide promising micro- and nano-scale fabrication technologies with reasonable control of surface morphology and properties in a cost-effective, time-effective and scalable manner.
by Young Soo Joung.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
6

Feickert, Aaron James. "Multi-Scale Simulation Methods of Crosslinked Polymer Networks and Degradation." Diss., North Dakota State University, 2018. https://hdl.handle.net/10365/28764.

Full text
Abstract:
Crosslinked thermoset polymers are used heavily in industrial and consumer products, as well as in infrastructure. When used as a protective coating, a thermoset's net-like structure can act as a barrier to protect an underlying substrate from permeation of moisture, salt, or other chemicals that otherwise weaken the coating or lead to substrate corrosion. Understanding how such coatings degrade, both at microscopic and macroscopic scales, is essential for the development and testing of materials for optimal service life. Several numerical and computational techniques are used to analyze the behavior of model crosslinked polymer networks under changing conditions at a succession of scales. Molecular dynamics is used to show the effects of cooling and constraints on cavitation behavior in coarse-grained bulk thermosets, as well as to investigate dynamical behavior under varying degradation conditions. Finite-element analysis is applied to examine strain distributions and loci of failure in several macroscopic coated test panel designs, discussing the effects of flexure and coating stack moduli. Finally, the transport of moisture through model coatings under cycled conditions is examined by lattice Boltzmann numerical techniques, considering several common concentration-dependent diffusivity models used in the literature and suggesting an optimal behavior regime for non-constant diffusivity.
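The concentration-dependent diffusivity models mentioned above can be illustrated with a small finite-difference sketch; one common literature form is exponential, D(c) = D0*exp(beta*c). The lattice Boltzmann machinery of the thesis is not reproduced here, and all values are made up:

```python
import numpy as np

# 1-D moisture uptake through a coating of thickness L with D(c) = D0*exp(beta*c)
L, nx, D0, beta = 1e-4, 101, 1e-12, 2.0
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
c = np.zeros(nx)
c[0] = 1.0                                # saturated outer face

dt = 0.2 * dx**2 / (D0 * np.exp(beta))    # stable step for the stiffest D
for _ in range(20000):
    D = D0 * np.exp(beta * c)
    Df = 0.5 * (D[:-1] + D[1:])           # face-averaged diffusivity
    flux = -Df * np.diff(c) / dx
    c[1:-1] -= dt * np.diff(flux) / dx    # dc/dt = -d(flux)/dx
    c[0] = 1.0
    c[-1] = c[-2]                         # zero-flux (sealed substrate side)

print(c[::20])                            # moisture profile snapshot
```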
APA, Harvard, Vancouver, ISO, and other styles
7

Wei, Jiangong. "Surface Integral Equation Methods for Multi-Scale and Wideband Problems." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1408653442.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sbailò, Luigi [Verfasser]. "Efficient multi-scale sampling methods in statistical physics / Luigi Sbailò." Berlin : Freie Universität Berlin, 2020. http://d-nb.info/1206180722/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Mahler, Nicolas. "Machine learning methods for discrete multi-scale flows: application to finance." PhD thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00749717.

Full text
Abstract:
This research work studies the problem of identifying and predicting the trends of a single financial target variable in a multivariate setting. The machine learning point of view on this problem is presented in chapter I. The efficient market hypothesis, which stands in contradiction to the objective of trend prediction, is first recalled. The different schools of thought in market analysis, which disagree to some extent with the efficient market hypothesis, are reviewed as well. The tenets of the fundamental analysis, the technical analysis and the quantitative analysis are made explicit. We particularly focus on the use of machine learning techniques for computing predictions on time-series. The challenges of dealing with dependent and/or non-stationary features while avoiding the usual traps of overfitting and data snooping are emphasized. Extensions of the classical statistical learning framework, particularly transfer learning, are presented. The main contribution of this chapter is the introduction of a research methodology for developing trend predictive numerical models. It is based on an experimentation protocol, which is made of four interdependent modules. The first module, entitled Data Observation and Modeling Choices, is a preliminary module devoted to the statement of very general modeling choices, hypotheses and objectives. The second module, Database Construction, turns the target and explanatory variables into features and labels in order to train trend predictive numerical models. The purpose of the third module, entitled Model Construction, is the construction of trend predictive numerical models. The fourth and last module, entitled Backtesting and Numerical Results, evaluates the accuracy of the trend predictive numerical models over a "significant" test set via two generic backtesting plans. The first plan computes recognition rates of upward and downward trends. The second plan designs trading rules using predictions made over the test set. Each trading rule yields a profit and loss account (P&L), which is the cumulative earnings over time. These backtesting plans are additionally complemented by interpretation functionalities, which help to analyze the decision mechanism of the numerical models. These functionalities can be measures of feature prediction ability and measures of model and prediction reliability. They decisively contribute to formulating better data hypotheses and enhancing the time-series representation, database and model construction procedures. This is made explicit in chapter IV. Numerical models, aiming at predicting the trends of the target variables introduced in chapter II, are indeed computed for the model construction methods described in chapter III and thoroughly backtested. The switch from one model construction approach to another is particularly motivated. The dramatic influence of the choice of parameters - at each step of the experimentation protocol - on the formulation of conclusion statements is also highlighted. The RNN procedure, which does not require any parameter tuning, has thus been used to reliably study the efficient market hypothesis. New research directions for designing trend predictive models are finally discussed.
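The two backtesting plans can be made concrete with a short sketch: a recognition-rate count for upward and downward trends, and a toy trading rule whose cumulative earnings form the P&L. The return series and the 55%-accurate predictor below are synthetic stand-ins, not the thesis's models:

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.01, 500)          # placeholder daily returns
truth = np.sign(returns)                      # realized up/down trends
pred = np.where(rng.random(500) < 0.55, truth, -truth)   # a 55%-accurate model

# Plan 1: recognition rates of upward and downward trends
up = truth > 0
print("up recognition:", (pred[up] == 1).mean())
print("down recognition:", (pred[~up] == -1).mean())

# Plan 2: trading rule "hold the predicted direction"; P&L cumulates earnings
pnl = np.cumsum(pred * returns)
print("final P&L:", pnl[-1])
```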
APA, Harvard, Vancouver, ISO, and other styles
10

Castronovo, Anna Margherita <1984>. "Techniques and Methods for a multi-scale analysis of neuromuscular fatigue." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2014. http://amsdottorato.unibo.it/6274/7/Castronovo_AnnaMargherita_tesi.pdf.

Full text
Abstract:
This thesis proposes an integrated holistic approach to the study of neuromuscular fatigue in order to encompass all the causes and all the consequences underlying the phenomenon. Starting from the metabolic processes occurring at the cellular level, the reader is guided toward the physiological changes at the motor neuron and motor unit level and from there to the more general biomechanical alterations. In Chapter 1, the various definitions of fatigue spanning several contexts are reported. In Chapter 2, the electrophysiological changes in terms of motor unit behavior and descending neural drive to the muscle are studied extensively, as well as the biomechanical adaptations they induce. In Chapter 3, a study based on the observation of temporal features extracted from sEMG signals is reported, leading to the need for a more robust and reliable indicator during fatiguing tasks. Therefore, in Chapter 4, a novel bi-dimensional parameter is proposed. The study of sEMG-based indicators also opened a window onto the neurophysiological mechanisms underlying fatigue. For this purpose, in Chapter 5, a protocol designed for the analysis of motor unit-related parameters during prolonged fatiguing contractions is presented. In particular, two methodologies have been applied to multichannel sEMG recordings of isometric contractions of the Tibialis Anterior muscle: the state-of-the-art technique for sEMG decomposition and a coherence analysis on MU spike trains. The importance of a multi-scale approach is finally highlighted in the context of the evaluation of cycling performance, where fatigue is one of the limiting factors. In particular, the last chapter of this thesis can be considered as a paradigm: physiological, metabolic, environmental, psychological and biomechanical factors influence the performance of a cyclist, and only when all of these are kept together in a novel integrative way is it possible to derive a clear model and make correct assessments.
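The coherence analysis on motor unit spike trains can be sketched with a standard magnitude-squared coherence estimate; the spike trains below are synthetic, driven by a shared 10 Hz input, and the decomposition pipeline of the thesis is not reproduced:

```python
import numpy as np
from scipy.signal import coherence

fs = 2048                      # sampling rate (Hz), typical for HD-sEMG
t = np.arange(0, 30, 1 / fs)   # 30 s contraction
drive = 0.3 * np.sin(2 * np.pi * 10 * t)    # shared 10 Hz synaptic input

rng = np.random.default_rng(3)
def spike_train(rate_hz):
    """Binarized spike train driven by the common input plus private noise."""
    p = (rate_hz / fs) * (1 + drive + 0.5 * rng.normal(size=t.size))
    return (rng.random(t.size) < np.clip(p, 0, 1)).astype(float)

mu1, mu2 = spike_train(12), spike_train(15)
f, cxy = coherence(mu1, mu2, fs=fs, nperseg=4 * fs)
print(f[np.argmax(cxy[f < 50])], cxy[f < 50].max())  # peak near the shared 10 Hz
```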
APA, Harvard, Vancouver, ISO, and other styles
11

Läthén, Gunnar. "Segmentation Methods for Medical Image Analysis : Blood vessels, multi-scale filtering and level set methods." Licentiate thesis, Linköping University, Center for Medical Image Science and Visualization (CMIV), 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-54181.

Full text
Abstract:

Image segmentation is the problem of partitioning an image into meaningful parts, often consisting of an object and background. As an important part of many imaging applications, e.g. face recognition and tracking of moving cars and people, it is of general interest to design robust and fast segmentation algorithms. However, it is well accepted that there is no general method for solving all segmentation problems. Instead, the algorithms have to be highly adapted to the application in order to achieve good performance. In this thesis, we will study segmentation methods for blood vessels in medical images. The need for accurate segmentation tools in medical applications is driven by the increased capacity of the imaging devices. Common modalities such as CT and MRI generate images which simply cannot be examined manually, due to high resolutions and a large number of image slices. Furthermore, it is very difficult to visualize complex structures in three-dimensional image volumes without cutting away large portions of, perhaps important, data. Tools such as segmentation can aid the medical staff in browsing through such large images by highlighting objects of particular importance. In addition, segmentation in particular can output models of organs, tumors, and other structures for further analysis, quantification or simulation.

We have divided the segmentation of blood vessels into two parts. First, we model the vessels as a collection of lines and edges (linear structures) and use filtering techniques to detect such structures in an image. Second, the output from this filtering is used as input for segmentation tools. Our contributions mainly lie in the design of a multi-scale filtering and integration scheme for detecting vessels of varying widths and the modification of optimization schemes for finding better segmentations than traditional methods do. We validate our ideas on synthetic images mimicking typical blood vessel structures, and show proof-of-concept results on real medical images.
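A generic version of such multi-scale filtering and integration is the Hessian-based line measure below, which keeps the strongest scale-normalized response across scales; it is a sketch of the general idea, not the filter design of the thesis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(img, sigmas=(1, 2, 4, 8)):
    """Max over scales of a simple Hessian-eigenvalue line measure."""
    best = np.zeros_like(img, dtype=float)
    for s in sigmas:
        # scale-normalized second derivatives (gamma = 2 normalization)
        hxx = s**2 * gaussian_filter(img, s, order=(0, 2))
        hyy = s**2 * gaussian_filter(img, s, order=(2, 0))
        hxy = s**2 * gaussian_filter(img, s, order=(1, 1))
        # Hessian eigenvalues; a bright line gives one large negative eigenvalue
        tmp = np.sqrt((hxx - hyy) ** 2 + 4 * hxy**2)
        lam2 = 0.5 * (hxx + hyy - tmp)        # most negative eigenvalue
        best = np.maximum(best, np.maximum(-lam2, 0))
    return best

# Synthetic image: a bright horizontal line of width ~3 pixels
img = np.zeros((64, 64))
img[30:33, :] = 1.0
resp = vesselness(img)
print(resp[31, 32] > resp[10, 32])   # line responds more strongly than background
```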

APA, Harvard, Vancouver, ISO, and other styles
12

Bonis, Ioannis. "Optimisation and control methodologies for large-scale and multi-scale systems." Thesis, University of Manchester, 2011. https://www.research.manchester.ac.uk/portal/en/theses/optimisation-and-control-methodologies-for-largescale-and-multiscale-systems(6c4a4f13-ebae-4d9d-95b7-cca754968d47).html.

Full text
Abstract:
Distributed parameter systems (DPS) comprise an important class of engineering systems ranging from "traditional" such as tubular reactors, to cutting edge processes such as nano-scale coatings. DPS have been studied extensively and significant advances have been noted, enabling their accurate simulation. To this end a variety of tools have been developed. However, extending these advances for systems design is not a trivial task. Rigorous design and operation policies entail systematic procedures for optimisation and control. These tasks are "upper-level" and utilize existing models and simulators. The higher the accuracy of the underlying models, the more the design procedure benefits. However, employing such models in the context of conventional algorithms may lead to inefficient formulations. The optimisation and control of DPS is a challenging task. These systems are typically discretised over a computational mesh, leading to large-scale problems. Handling the resulting large-scale systems may prove to be an intimidating task and requires special methodologies. Furthermore, it is often the case that the underlying physical phenomena span various temporal and spatial scales, thus complicating the analysis. Stiffness may also potentially be exhibited in the (nonlinear) models of such phenomena. The objective of this work is to design reliable and practical procedures for the optimisation and control of DPS. It has been observed in many systems of engineering interest that although they are described by infinite-dimensional Partial Differential Equations (PDEs) resulting in large discretisation problems, their behaviour has a finite number of significant components, as a result of their dissipative nature. This property has been exploited in various systematic model reduction techniques. Of key importance in this work is the identification of a low-dimensional dominant subspace for the system. This subspace is heuristically found to correspond to part of the eigenspectrum of the system and can therefore be identified efficiently using iterative matrix-free techniques. In this light, only low-dimensional Jacobians and Hessian matrices are involved in the formulation of the proposed algorithms, which are projections of the original matrices onto appropriate low-dimensional subspaces, computed efficiently with directional perturbations. The optimisation algorithm presented employs a 2-step projection scheme, firstly onto the dominant subspace of the system (corresponding to the right-most eigenvalues of the linearised system) and secondly onto the subspace of decision variables. This algorithm is inspired by reduced Hessian Sequential Quadratic Programming methods and therefore locates a local optimum of the nonlinear programming problem given by solving a sequence of reduced quadratic programming (QP) subproblems. This optimisation algorithm is appropriate for systems with a relatively small number of decision variables. Inequality constraints can be accommodated following a penalty-based strategy which aggregates all constraints using an appropriate function, or by employing a partial reduction technique in which only equality constraints are considered for the reduction and the inequalities are linearised and passed on to the QP subproblem.
The control algorithm presented is based on the online adaptive construction of low-order linear models used in the context of a linear Model Predictive Control (MPC) algorithm, in which the discrete-time state-space model is recomputed at every sampling time in a receding horizon fashion. Successive linearisation around the current state on the closed-loop trajectory is combined with model reduction, resulting in an efficient procedure for the computation of reduced linearised models, projected onto the dominant subspace of the system. In this case, this subspace corresponds to the eigenvalues of largest magnitude of the discretised dynamical system. Control actions are computed from low-order QP problems solved efficiently online. The optimisation and control algorithms presented may employ input/output simulators (such as commercial packages) extending their use to upper-level tasks. They are also suitable for systems governed by microscopic rules, the equations of which do not exist in closed form. Illustrative case studies are presented, based on tubular reactor models, which exhibit rich parametric behaviour.
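The core computational ingredient, identifying the dominant subspace of a black-box simulator through directional perturbations and an iterative matrix-free eigensolver, can be sketched as follows; the residual function and all dimensions are made up:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

n = 2000
def f(u):
    """Black-box steady residual; a made-up dissipative reaction-diffusion stencil."""
    return np.roll(u, 1) - 2.0 * u + np.roll(u, -1) - 0.1 * u**3

u0 = np.zeros(n)                     # current state on the trajectory
eps = 1e-6

def jacvec(v):
    """Jacobian-vector product by a directional perturbation of the simulator."""
    return (f(u0 + eps * v) - f(u0)) / eps

J = LinearOperator((n, n), matvec=jacvec)
# Right-most eigenvalues span the dominant (slow) subspace of the system
vals, vecs = eigs(J, k=8, which='LR')
V = np.real(vecs)                    # low-dimensional basis (real parts, for the sketch)

# Reduced Jacobian: project the full operator onto the dominant subspace
J_red = V.T @ np.column_stack([jacvec(V[:, i]) for i in range(V.shape[1])])
print(vals.real.round(4))
```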
APA, Harvard, Vancouver, ISO, and other styles
13

Gonella, Stefano. "Homogenization and Bridging Multi-scale Methods for the Dynamic Analysis of Periodic Solids." Diss., Georgia Institute of Technology, 2007. http://hdl.handle.net/1853/16144.

Full text
Abstract:
This work investigates the application of homogenization techniques to the dynamic analysis of periodic solids, with emphasis on lattice structures. The presented analysis is conducted both through a Fourier-based technique and through an alternative approach involving Taylor series expansions directly performed in the spatial domain in conjunction with a finite element formulation of the lattice unit cell. The challenge of increasing the accuracy and the range of applicability of the existing homogenization methods is addressed with various techniques. Among them, a multi-cell homogenization is introduced to extend the region of good approximation of the methods to include the short wavelength limit. The continuous partial differential equations resulting from the homogenization process are also used to estimate equivalent mechanical properties of lattices with various internal configurations. In particular, a detailed investigation is conducted on the in-plane behavior of hexagonal and re-entrant honeycombs, for which both static properties and wave propagation characteristics are retrieved by applying the proposed techniques. The analysis of wave propagation in homogenized media is furthermore investigated by means of the bridging scales method to address the problem of modelling travelling waves in homogenized media with localized discontinuities. This multi-scale approach reduces the computational cost associated with a detailed finite element analysis conducted over the entire domain and yields considerable savings in CPU time. The combined use of homogenization and bridging method is suggested as a powerful tool for fast and accurate wave simulation and its potentials for NDE applications are discussed.
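The motivation for the multi-cell homogenization can be seen in one dimension: the homogenized continuum matches the exact Bloch dispersion of a spring-mass lattice only at long wavelengths. A small sketch with illustrative parameters:

```python
import numpy as np

K, m, a = 1.0, 1.0, 1.0                  # spring stiffness, mass, lattice spacing
k = np.linspace(1e-3, np.pi / a, 200)    # Bloch wavenumbers, irreducible zone

# Exact lattice dispersion of a monatomic spring-mass chain
omega = 2.0 * np.sqrt(K / m) * np.abs(np.sin(0.5 * k * a))

# Homogenized (long-wavelength) continuum: omega = c*k with c = a*sqrt(K/m)
omega_hom = a * np.sqrt(K / m) * k

# The continuum model is accurate only for small k*a (long wavelengths)
err = np.abs(omega - omega_hom) / omega
print(err[k * a < 0.3].max(), err[-1])   # tiny error at low k, large at zone edge
```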
APA, Harvard, Vancouver, ISO, and other styles
14

Brunton, Alan P. "Multi-scale Methods for Omnidirectional Stereo with Application to Real-time Virtual Walkthroughs." Thesis, Université d'Ottawa / University of Ottawa, 2012. http://hdl.handle.net/10393/23552.

Full text
Abstract:
This thesis addresses a number of problems in computer vision, image processing, and geometry processing, and presents novel solutions to these problems. The overarching theme of the techniques presented here is a multi-scale approach, leveraging mathematical tools to represent images and surfaces at different scales, and methods that can be adapted from one type of domain (e.g., the plane) to another (e.g., the sphere). The main problem addressed in this thesis is known as stereo reconstruction: reconstructing the geometry of a scene or object from two or more images of that scene. We develop novel algorithms to do this, which work for both planar and spherical images. By developing a novel way to formulate the notion of disparity for spherical images, we are able to effectively adapt our algorithms from planar to spherical images. Our stereo reconstruction algorithm is based on a novel application of distance transforms to multi-scale matching. We use matching information aggregated over multiple scales, and enforce consistency between these scales using distance transforms. We then show how multiple spherical disparity maps can be efficiently and robustly fused using visibility and other geometric constraints. We then show how the reconstructed point clouds can be used to synthesize a realistic sequence of novel views, images from points of view not captured in the input images, in real-time. Along the way to this result, we address some related problems. For example, multi-scale features can be detected in spherical images by convolving those images with a filterbank, generating an overcomplete spherical wavelet representation of the image from which the multiscale features can be extracted. Convolution of spherical images is much more efficient in the spherical harmonic domain than in the spatial domain. Thus, we develop a GPU implementation for fast spherical harmonic transforms and frequency domain convolutions of spherical images. This tool can also be used to detect multi-scale features on geometric surfaces. When we have a point cloud of a surface of a particular class of object, whether generated by stereo reconstruction or by some other modality, we can use statistics and machine learning to more robustly estimate the surface. If we have at our disposal a database of surfaces of a particular type of object, such as the human face, we can compute statistics over this database to constrain the possible shape a new surface of this type can take. We show how a statistical spherical wavelet shape prior can be used to efficiently and robustly reconstruct a face shape from noisy point cloud data, including stereo data.
APA, Harvard, Vancouver, ISO, and other styles
15

Tong, Jenna Rose. "Towards multi-scale tomography : advances in electron tomography and allied 3D imaging methods." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Steffansson, Hlynur. "Methods and algorithms for integrated multi-scale optimisation of production planning and scheduling." Thesis, Imperial College London, 2007. http://hdl.handle.net/10044/1/7437.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Corbin, Gregor [Verfasser], and Axel [Akademischer Betreuer] Klar. "Numerical methods for multi-scale cell migration models / Gregor Corbin ; Betreuer: Axel Klar." Kaiserslautern : Technische Universität Kaiserslautern, 2020. http://d-nb.info/1222974096/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Millán, Vaquero Ricardo Manuel [Verfasser]. "Visualization methods for analysis of 3D multi-scale medical data / Ricardo Manuel Millán Vaquero." Hannover : Technische Informationsbibliothek (TIB), 2016. http://d-nb.info/111916088X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Ngcobo, Mduduzi Elijah Khulekani. "Resistance to airflow and moisture loss of table grapes inside multi-scale packaging." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80192.

Full text
Abstract:
Thesis (PhD(Agric))--Stellenbosch University, 2013.
Postharvest quality of fresh table grapes is usually preserved through cooling using cold air. However, cooling efficiencies are affected by the multi-scale packaging that is commercially used for handling grapes after harvest. There is usually spatial temperature variability of grapes that often results in undesirable quality variations during postharvest handling and marketing. This heterogeneity of grape berry temperature inside multi-packages is largely due to uneven cold airflow patterns that are caused by airflow resistance through multi-package components. The aims of this study were therefore to conduct an in-depth experimental investigation of the contribution of grape multi-packaging components to total airflow resistance, cooling rates and patterns of grapes inside the different commercially used multi-packages, and to assess the effects of these multi-packages on table grape postharvest quality attributes. A comprehensive study of moisture loss from grapes during postharvest storage and handling, as well as a preliminary investigation of the applicability of computational fluid dynamics (CFD) modeling in predicting the transport phenomena of heat and mass transfer of grapes during cooling and cold storage in multi-packages, were included in this study. The total pressure drop through different table grape packages was measured and the percentage contributions of each package component and of the fruit bulk were determined. The liner films contributed significantly to total pressure drop for all the package combinations studied, ranging from 40.33±1.15% for micro-perforated liner film to 83.34±2.13% for non-perforated liner film. The total pressure drop through the grape bulk (1.40±0.01% to 9.41±1.23%) was the least compared to the different packaging combinations with different levels of liner perforation. The cooling rates of grapes in the 4.5 kg multi-packaging were significantly (P<0.05) slower than those of grapes in 5 kg punnet multi-packaging, where the 4.5 kg box resulted in a seven-eighths cooling time 30.30-46.14% and 12.69-25.00% longer than that of open-top and clamshell punnet multi-packages, respectively. After 35 days in cold storage at -0.5°C, grape bunches in the 5 kg punnet box combination (open-top and clamshell) had a weight loss of 2.01-3.12%, while the bunches in the 4.5 kg box combination had only 1.08% weight loss. During the investigation of the effect of different carton liners on the cooling rate and quality attributes of 'Regal seedless' table grapes in cold storage, the non-perforated liner films maintained relative humidity (RH) close to 100%. This high humidity inside non-perforated liner films resulted in delayed loss of stem quality but significantly (P ≤ 0.05) increased the incidence of SO2 injury and berry drop during storage compared to perforated liners. The perforated liners improved fruit cooling rates but significantly (P ≤ 0.05) reduced RH. The low RH in perforated liners also resulted in an increase in stem dehydration and browning compared to non-perforated liners. The moisture loss rate from grapes packed in non-perforated liner films was significantly (P<0.05) lower than the moisture loss rate from grapes packed in perforated liner films (120 x 2 mm and 36 x 4 mm). The effective moisture diffusivity values for stem parts packed in non-perforated liner films were lower than the values obtained for stem parts stored without packaging liners, and varied from 5.06×10⁻¹⁴ to 1.05×10⁻¹³ m²s⁻¹.
The dehydration rate of stem parts was inversely proportional to the size (diameter) of the stem parts. Dehydration rate of stems exposed (without liners) to circulating cold air was significantly (P<0.05) higher than the dehydration rates of stems packed in non-perforated liner film. Empirical models were successfully applied to describe the dehydration kinetics of the different parts of the stem. The potential of cold storage humidification in reducing grape stem dehydration was investigated. Humidification delayed and reduced the rate of stem dehydration and browning; however, it increased SO2 injury incidence on table grape bunches and caused wetting of the packages. The flow phenomenon during cooling and handling of packed table grapes was also studied using a computational fluid dynamic (CFD) model and validated using experimental results. There was good agreement between measured and predicted results. The result demonstrated clearly the applicability of CFD models to determine optimum table grape packaging and cooling procedures.
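The seven-eighths cooling time quoted above is a standard measure: under the usual exponential (Newtonian) cooling model it equals three half-cooling times. A short sketch with made-up temperatures and time constant:

```python
import numpy as np

# Newtonian cooling: Y(t) = exp(-t/tau), with Y the fractional unaccomplished
# temperature (T - T_air) / (T_0 - T_air).  Illustrative values:
T0, T_air = 22.0, -0.5        # initial fruit and cold-air temperatures (deg C)
tau = 5.0                     # cooling time constant (h), made up

# Seven-eighths cooling: Y = 1/8, so t = tau*ln(8) = 3 * half-cooling time
t78 = tau * np.log(8.0)
t12 = tau * np.log(2.0)
print(t78, 3 * t12, T_air + (T0 - T_air) * np.exp(-t78 / tau))  # last ~ 2.3 C
```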
PPECB and Postharvest Innovation Programme (PHI-2) for their financial support
APA, Harvard, Vancouver, ISO, and other styles
20

Massart, Thierry Jacques. "Multi-scale modeling of damage in masonry structures." Doctoral thesis, Université libre de Bruxelles, 2003. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/211218.

Full text
Abstract:

The conservation of historical heritage structures is an increasing concern for public authorities nowadays. The technical design phase of repair operations for these structures is of prime importance. Such operations usually require an estimation of the residual strength and of the potential structural failure modes of structures to optimize the choice of the repair techniques.

Although rules of thumb and codes are widely used, numerical simulations now start to emerge as valuable tools. Such alternative methods may be useful in this respect only if they are able to account realistically for the possibly complex failure modes of masonry in structural applications.

The mechanical behaviour of masonry is characterized by the properties of its constituents (bricks and mortar joints) and their stacking mode. Structural failure mechanisms are strongly connected to the mesostructure of the material, with strong localization and damage-induced anisotropy.

The currently available numerical tools for this material are mostly based on approaches incorporating only one scale of representation. Mesoscopic models are used in order to study structural details with an explicit representation of the constituents and of their behaviour. The range of applicability of these descriptions is however restricted by computational costs. At the other end of the spectrum, macroscopic descriptions used in structural computations rely on phenomenological constitutive laws representing the collective behaviour of the constituents. As a result, these macroscopic models are difficult to identify and sometimes lead to wrong failure mode predictions.

The purpose of this study is to bridge the gap between mesoscopic and macroscopic representations and to propose a computational methodology for the analysis of plane masonry walls. To overcome the drawbacks of existing approaches, a multi-scale framework is used which makes it possible to include mesoscopic behaviour features in macroscopic descriptions, without the need for an a priori postulated macroscopic constitutive law. First, a mesoscopic constitutive description is defined for the quasi-brittle constituents of the masonry material, the failure of which mainly occurs through stiffness degradation. The mesoscopic description is therefore based on a scalar damage model. Plane stress and generalized plane state assumptions are used at the mesoscopic scale, leading to two-dimensional macroscopic continuum descriptions. Based on periodic homogenization techniques and unit cell computations, it is shown that the identified mesoscopic constitutive setting makes it possible to reproduce the characteristic shape of (anisotropic) failure envelopes observed experimentally. The failure modes corresponding to various macroscopic loading directions are also shown to be correctly captured. The in-plane failure mechanisms are correctly represented by a plane stress description, while the generalized plane state assumption, introducing simplified three-dimensional effects, is shown to be needed to represent out-of-plane failure under biaxial compressive loading. Macroscopic damage-induced anisotropy resulting from the constituents' stacking mode in the material, which is complex to represent properly using macroscopic phenomenological constitutive equations, is here obtained in a natural fashion. The identified mesoscopic description is introduced in a scale transition procedure to infer the macroscopic response of the material. The first-order computational homogenization technique is used for this purpose to extract this response from unit cells. Damage localization eventually appears as a natural outcome of the quasi-brittle nature of the constituents. The onset of macroscopic localization is treated as a material bifurcation phenomenon and is detected from an eigenvalue analysis of the homogenized acoustic tensor obtained from the scale transition procedure, together with a limit point criterion. The macroscopic localization orientations obtained with this type of detection are shown to be strongly related to the underlying mesostructural failure modes in the unit cells.

A well-posed macroscopic description is preserved by embedding localization bands at the macroscopic localization onset, with a width directly deduced from the initial periodicity of the mesostructure of the material. This makes it possible to take into account the finite size of the fracturing zone in the macroscopic description. As a result of mesoscopic damage localization in narrow zones of the order of a mortar joint, the material response computationally deduced from unit cells may exhibit a snap-back behaviour. This precludes the use of such a response in the standard strain-driven multi-scale scheme.

Adaptations of the multi-scale framework required to treat the mesostructural response snap-back are proposed. This multi-scale framework is finally applied to a typical confined shear wall problem, which verifies its ability to represent complex structural failure modes.
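The localization detection step, an eigenvalue analysis of the acoustic tensor, can be sketched for a two-dimensional tangent stiffness: scan band normals n and test the smallest eigenvalue of A(n) = n·C·n. The Rice-type softening tangent below is a generic stand-in for the homogenized tangent of the thesis, with made-up moduli:

```python
import numpy as np

# Plane-strain isotropic elasticity in 2-D, Lame constants (illustrative values)
lam, mu = 6.0e9, 4.0e9
I = np.eye(2)
C_el = (lam * np.einsum('ij,kl->ijkl', I, I)
        + mu * (np.einsum('ik,jl->ijkl', I, I) + np.einsum('il,jk->ijkl', I, I)))

# Rice-type softening tangent C_t = C_el - (C:N)(N:C) / (N:C:N + H),
# with N = m (x) m a damage direction and H < 0 a made-up softening modulus
m = np.array([1.0, 0.0])
N = np.outer(m, m)
s = np.einsum('ijkl,kl->ij', C_el, N)
H = -0.05 * (lam + 2 * mu)
C_t = C_el - np.einsum('ij,kl->ijkl', s, s) / (np.einsum('ij,ij->', s, N) + H)

def min_acoustic_eig(C, n_dirs=361):
    """Smallest eigenvalue of A(n)_jk = n_i C_ijkl n_l over all band normals n."""
    worst = np.inf
    for th in np.linspace(0.0, np.pi, n_dirs):
        n = np.array([np.cos(th), np.sin(th)])
        A = np.einsum('i,ijkl,l->jk', n, C, n)
        worst = min(worst, np.linalg.eigvalsh(A).min())
    return worst

print(min_acoustic_eig(C_el) > 0)   # elastic: elliptic everywhere, no localization
print(min_acoustic_eig(C_t) <= 0)   # softened tangent: localization band detected
```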


Doctorat en sciences appliquées

APA, Harvard, Vancouver, ISO, and other styles
21

Lindell, Hugo. "Methods for optimizing large scale thermal imaging camera placement problems." Thesis, Linköpings universitet, Optimeringslära, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-161946.

Full text
Abstract:
The objective of this thesis is to model and solve the problem of placing thermal imaging cameras for monitoring piles of combustible bio-fuels. The cameras, of different models, can be mounted at discrete heights on poles at fixed positions and at discrete angles, and one seeks camera model and mounting combinations that monitor as much of the piles as possible at as low a cost as possible. Since monitoring all piles may not be possible or desired, due to budget or customer constraints, the solution to the problem is a set of compromises between coverage and cost. We denote such a set of compromises a frontier. In the first part of the thesis a way of modelling the problem is presented. The model uses a discrete formulation where the area to monitor is partitioned into a grid of cells. Further, a pool of candidate camera placements is formed, containing all combinations of camera models and mounting positions. For each camera in this pool, all cells monitored are deduced using ray-casting. Finally, an optimization model is formulated, based on the pool of candidate cameras and their monitoring of the grid. The optimization model has the two objectives of minimizing the cost while maximizing the number of covered cells. In the second part, a number of heuristic optimization algorithms to solve the problem are presented: Greedy Search, Random Greedy Search, Fear Search, Unique Search, Meta-RaPS and Weighted Linear Neighbourhood Search. The performance of these heuristics is evaluated on a couple of test cases from existing real world depots and a few artificial test instances. Evaluation is made by comparing the solution frontiers using various result metrics and graphs. Whenever practically possible, frontiers containing all optimal cost and coverage combinations are calculated using a state-of-the-art solver. Our findings indicate that for the artificial test instances, the state-of-the-art solver is unmatched in solution quality and uses similar execution time to the heuristics. Among the heuristics, Fear Search and Greedy Search were the strongest performing. For the smaller real world instances, the state-of-the-art solver was still unmatched in terms of solution quality, but generating the frontiers in this way was fairly time consuming. By generating the frontiers using Greedy Search or Random Greedy Search we obtained solutions of similar quality to the state-of-the-art solver up to 70-80% coverage using one hundredth and one tenth of the time, respectively. For the larger real world problem instances, generating the frontier using the state-of-the-art solver was extremely time consuming and thus sometimes impracticable. Hence the use of heuristics is often necessary. As for the smaller instances, Greedy Search and Random Greedy Search generated the frontiers with the best quality. Often even better full coverage solutions could be found by the more time consuming Fear Search or Unique Search.
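The Greedy Search heuristic can be sketched as a cost-aware set-cover loop that traces out a cost/coverage frontier; the candidate coverage sets and costs below are randomly generated stand-ins for the ray-casting output, not the thesis code:

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_cand = 500, 60
# Each candidate camera placement covers some subset of grid cells
covers = [set(rng.choice(n_cells, size=rng.integers(20, 80), replace=False))
          for _ in range(n_cand)]
costs = rng.uniform(1.0, 3.0, n_cand)

covered, chosen, frontier = set(), [], []
remaining = set(range(n_cand))
while remaining:
    # pick the candidate with the best newly-covered-cells-per-cost ratio
    best = max(remaining, key=lambda i: len(covers[i] - covered) / costs[i])
    if not covers[best] - covered:
        break                                   # no candidate adds coverage
    covered |= covers[best]
    chosen.append(best)
    remaining.discard(best)
    frontier.append((sum(costs[i] for i in chosen), len(covered) / n_cells))

for cost, cov in frontier[:5]:
    print(f"cost {cost:.1f}: {cov:.0%} of cells covered")
```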
APA, Harvard, Vancouver, ISO, and other styles
22

Singla, Puneet. "Multi-resolution methods for high fidelity modeling and control allocation in large-scale dynamical systems." Texas A&M University, 2005. http://hdl.handle.net/1969.1/3785.

Full text
Abstract:
This dissertation introduces novel methods for solving highly challenging modeling and control problems, motivated by advanced aerospace systems. Adaptable, robust and computationally efficient multi-resolution approximation algorithms based on Radial Basis Function Network and Global-Local Orthogonal Mapping approaches are developed to address various problems associated with the design of large scale dynamical systems. The main feature of the Radial Basis Function Network approach is the unique direction-dependent scaling and rotation of the radial basis function via a novel Directed Connectivity Graph approach. The learning of shaping and rotation parameters for the Radial Basis Functions led to a broadly useful approximation approach that yields global approximations capable of good local approximation for many moderate-dimensioned applications. However, even with these refinements, many applications with high-frequency local input/output variations and a high-dimensional input space remain a challenge and motivate us to investigate an entirely new approach. The Global-Local Orthogonal Mapping method is based upon a novel averaging process that allows construction of a piecewise continuous global family of local least-squares approximations, while retaining the freedom to vary in a general way the resolution (e.g., degrees of freedom) of the local approximations. These approximation methodologies are compatible with a wide variety of disciplines such as continuous function approximation, dynamic system modeling, nonlinear signal processing and time series prediction. Further, related methods are developed for the modeling of dynamical systems nominally described by nonlinear differential equations and to solve for the static and dynamic response of Distributed Parameter Systems in an efficient manner. Finally, a hierarchical control allocation algorithm is presented to solve the control allocation problem for highly over-actuated systems that might arise with the development of embedded systems. The control allocation algorithm makes use of the concept of distribution functions to keep in check the "curse of dimensionality". The studies in the dissertation focus on demonstrating, through analysis, simulation, and design, the applicability and feasibility of these approximation algorithms on a variety of examples. The results from these studies are of direct utility in addressing the "curse of dimensionality" and frequent redundancy of neural network approximation.
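As a point of reference for the approach described above, the following is a minimal sketch of a plain Gaussian RBF network fitted by linear least squares; the dissertation's direction-dependent scaling and rotation via the Directed Connectivity Graph is not reproduced, and all parameter values are placeholders.

```python
# Sketch: a plain Gaussian RBF network fitted by linear least squares.
# Generic baseline only; the direction-dependent shaping of each basis
# function developed in the dissertation is not implemented here.
import numpy as np

def rbf_design(x, centers, width):
    # One isotropic Gaussian basis function per center.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = np.sin(2 * x) + 0.05 * rng.standard_normal(x.size)   # noisy target

centers = np.linspace(-3, 3, 15)                          # fixed grid of centers
Phi = rbf_design(x, centers, width=0.4)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)               # linear-in-weights fit
print("train RMS error:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```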
APA, Harvard, Vancouver, ISO, and other styles
25

Duro, Royo Jorge. "Towards Fabrication Information Modeling (FIM) : workflow and methods for multi-scale trans-disciplinary informed design." Thesis, Massachusetts Institute of Technology, 2015. http://hdl.handle.net/1721.1/101843.

Full text
Abstract:
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2015.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 67-70).
This thesis sets the stage for Fabrication Information Modeling (FIM); a design approach for enabling seamless design-to-production workflows that can derive complex designs fusing advanced digital design technologies associated with analysis, engineering and manufacturing. Present day digital fabrication platforms enable the design and construction of high-resolution and complex material distribution structures. However, virtual-to-physical workflows and their associated software environments are yet to incorporate such capabilities. As preliminary methods towards FIM I have developed four computational strategies for the design and digital construction of custom systems. These methods are presented in this thesis in the context of specific design challenges and include a biologically driven fiber construction algorithm; an anatomically driven shell-to-wearable translation protocol; an environmentally-driven swarm printing system; and a manufacturing-driven hierarchical fabrication platform. I discuss and analyze these four challenges in terms of their capabilities to integrate design across media, disciplines and scales through the concepts of multidimensionality, media-informed computation and trans-disciplinary data in advanced digital design workflows. With FIM I aim to contribute to the field of digital design and fabrication by enabling feedback workflows where materials are designed rather than selected; where the question of how information is passed across spatiotemporal scales is central to design generation itself; where modeling at each level of resolution and representation is based on various methods and carried out by various media or agents within a single environment; and finally, where virtual and physical considerations coexist as equals.
by Jorge Duro Royo.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
26

Li, Anqi. "Possibilities for removal of micropollutants in small-scale wastewater treatment - methods and multi-criteria analysis." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-232112.

Full text
Abstract:
The quality of the world's water resources is facing new challenges; for instance, detectable concentrations of various trace contaminants, known collectively as micropollutants, are being discharged into water bodies from both municipal wastewater treatment plants and on-site wastewater facilities. A project called RedMic aims at identifying and quantifying emissions of micropollutants from on-site wastewater treatment as a basis for providing innovative treatment technologies to reduce potential risks of groundwater and surface water contamination. This thesis work deals with two of the work packages in the RedMic project: a column experiment to test the capability of 10 adsorbents to remove micropollutants, and a multi-criteria analysis to evaluate whether a filter composed of granulated activated carbon (GAC) or ozonation can be used for on-site wastewater treatment facilities. Based on the removal efficiency of dissolved organic carbon (DOC) of the selected adsorbents, two types of activated carbon reduced the DOC concentration in the effluents by up to 90%. Moreover, six other adsorbents also showed good removal efficiency, around 60% in the second sampling. However, the data used in this thesis was only from the initial part of an experiment that continued, and the final results will be published elsewhere. Two system solutions were evaluated with multi-criteria analysis: a sandbed filter with either GAC filtration (1) or ozonation (2). System solution 1 was found to have an advantage compared to system 2.
APA, Harvard, Vancouver, ISO, and other styles
27

Hastings, Robert. "Use of multi-scale phase-based methods to determine optical flow in dynamic scene analysis." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2003. https://ro.ecu.edu.au/theses/1487.

Full text
Abstract:
Estimates of optical flow in images can be made by applying a complex periodic transform to the images and tracking the movement of points of constant phase in the complex output. This approach, however, suffers from the problem that filters of large width give information only about broad-scale image features, whilst those of small spatial extent (high resolution) cannot track fast motion, which causes a feature to move a distance that is large compared to the filter size. A method is presented in which the flow is measured at different scales, using a series of complex filters of decreasing width. The largest filter is used to give a large-scale flow estimate at each image point. Estimates at smaller scales are then carried out by using the previous result as an a priori estimate. Rather than comparing the same region in different images in order to estimate flow, the regions to be compared are displaced from one another by an amount given by the most recent previous flow estimate. This results in an estimate of flow relative to the earlier estimate. The two estimates are then added together to give a new estimate of the absolute displacement. The process is repeated at successively smaller scales. The method can therefore detect small local velocity variations superimposed on the broad-scale flow, even where the magnitude of the absolute displacement is larger than the scope of the smaller filters. Without the assistance of the earlier estimates in 'tuning' the smaller filters in this manner, a smaller filter could fail to capture these velocity variations, because the absolute displacement would carry the feature out of range of the filter during successive frames. The output of the method is a series of scale-dependent flow fields corresponding to different scales, reflecting the fact that motion in the real world is a scale-dependent quantity. Application of the method to some 1-dimensional test images gives good results, with realistic flow values that could be used as an aid to segmentation. Some synthetic 2-dimensional images containing only a small number of well-defined features also yield good results, but the method performs poorly on a random-dot stereogram and on a real-world test image pair selected from the Hamburg Taxi sequence.
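The core phase-tracking step can be illustrated in one dimension: the displacement follows from the temporal phase change of a complex (Gabor) filter response divided by its spatial phase gradient. A minimal sketch with arbitrary filter parameters; the thesis's multi-scale refinement loop is omitted.

```python
# Sketch: 1-D phase-based displacement estimate with a complex Gabor filter.
# A feature shifted between two frames is recovered from the temporal phase
# difference divided by the spatial phase gradient of the filter response.
import numpy as np

def gabor_filter(width, wavelength, n=None):
    n = n or int(6 * width)
    t = np.arange(n) - n // 2
    return np.exp(-t**2 / (2 * width**2)) * np.exp(2j * np.pi * t / wavelength)

def phase_flow(f0, f1, width=8.0, wavelength=12.0):
    g = gabor_filter(width, wavelength)
    r0 = np.convolve(f0, g, mode="same")
    r1 = np.convolve(f1, g, mode="same")
    dphi = np.angle(r1 * np.conj(r0))             # temporal phase change
    dx = np.angle(r0[1:] * np.conj(r0[:-1]))      # spatial phase gradient
    dx = np.append(dx, dx[-1])
    with np.errstate(divide="ignore", invalid="ignore"):
        v = -dphi / dx                            # displacement per frame
    return v

x = np.arange(256, dtype=float)
frame0 = np.exp(-(x - 128) ** 2 / 200)
frame1 = np.exp(-(x - 131) ** 2 / 200)            # same blob shifted by 3 px
print("estimated shift near centre:", phase_flow(frame0, frame1)[128])
```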
APA, Harvard, Vancouver, ISO, and other styles
28

Hemmen, Sascha Michael [Verfasser]. "Ab initio simulations of the P-cluster in nitrogenase and multi-scale methods / Sascha Michael Hemmen." [Clausthal-Zellerfeld] : [Univ.-Bibliothek], 2008. http://d-nb.info/988542870/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Martalo', G. "DIFFERENT SCALE MODELING FOR CROWD DYNAMICS AND MULTI-TEMPERATURE GAS MIXTURES." Doctoral thesis, Università degli Studi di Milano, 2014. http://hdl.handle.net/2434/243643.

Full text
Abstract:
In the first part of this work we propose a model able to correctly reproduce the dynamics of a crowd in bounded domains (for example rooms and corridors) and in the presence of obstacles, and we discuss the emergence of some behaviors induced by panic. Starting from the analysis of a microscopic description for a small crowd, one can deduce mesoscopic and macroscopic models when the number of agents increases and the crowd becomes more comparable to gases and fluids. In the second part we propose multi-temperature models for gas mixtures by means of standard tools of kinetic theory. Some descriptions are proposed also in the presence of chemical reactions and of an internal structure for molecules, to take non-translational degrees of freedom into account. The resulting models are tested on the classical problem of the steady shock wave, and the occurrence of smooth solutions and sub-shocks is discussed for varying parameters.
APA, Harvard, Vancouver, ISO, and other styles
30

Martinez, Alejandro. "Multi-scale studies of particulate-continuum interface systems under axial and torsional loading conditions." Diss., Georgia Institute of Technology, 2015. http://hdl.handle.net/1853/54423.

Full text
Abstract:
The study of the shear behavior of particulate (soil) – continuum (man-made material) interfaces has received significant attention during the last three decades. The historical belief that the particulate – continuum interface represents the weak link in most geotechnical systems has been shown to be incorrect for many situations. Namely, prescribing properties of the continuum material, such as its surface roughness and hardness, can result in interface strengths that are equal to the internal shear strength of the contacting soil mass. This research expands the engineering implications of these findings by studying the response of interface systems under different loading conditions. Specifically, the axial and torsional shear modes are studied in detail. Throughout this thesis it is shown that taking an engineering approach to designing the loading conditions induced in the interface system can result in interface strengths that exceed the previously assumed limiting shear strength of the contacting soil. Fundamental experimental and numerical studies on specimens of different types of sand subjected to torsional and axial interface shear highlighted the inherent differences between these processes. Specifically, micro-scale soil deformation measurements showed that torsional shear induces larger soil deformations than axial shear, as well as complex volume-change tendencies consisting of dilation and contraction in the primary and secondary shear zones. Studies on the global response of torsional and axial shear tests showed that they are affected differently by soil properties such as particle angularity and roughness. This difference in global behavior highlights the benefits of making systems available for geotechnical engineering that transfer load to the contacting soil in different manners. Discrete Element Modeling (DEM) simulations allowed internal information of the specimens to be studied, such as their fabric and shear-induced loading conditions. These findings allowed links to be developed between the measured micro-scale behavior and the observed global-scale response. The understanding of the behavior of torsional and axial interfaces provides a framework for the development of enhanced geotechnical systems and applications. Torsional shear was found to induce larger cyclic contractive tendencies within the contacting soil mass. Therefore, this shear mode is more desirable than conventional axial shear for the study of phenomena that depend on soil contractive behavior, such as liquefaction. A study on the influence of surface roughness form revealed that surfaces with periodic profiles of protruding elements that prevent clogging are capable of mobilizing interface friction angles that are 20 to 60% larger than the soil friction angle. These findings have direct implications for engineering design, since their implementation can result in more resilient and sustainable geotechnical systems.
APA, Harvard, Vancouver, ISO, and other styles
31

Raadsen, Mark. "Aggregation and decomposition methods in traffic assignment: towards consistent and efficient planning models in a multi-scale environment." Thesis, The University of Sydney, 2018. http://hdl.handle.net/2123/18186.

Full text
Abstract:
Transport models adopt a simplified version of reality to model the movement of people within a transport system. This simplification limits the accuracy of any model. This research focuses on developing novel techniques that, depending on the application context, try to maximise the level of simplification given the minimum result accuracy that is required. To do so, we explore both aggregation and decomposition methods. Besides maximising simplification, we also investigate the requirements to ensure consistency between models that operate in the same spatial domain. In this so-called multi-scale setting, it is paramount that differences in results between models can be attributed to a particular set of simplifying assumptions. To date, hardly any efforts have been made to formalise or assess the conditions that need to be satisfied in order to achieve this much-desired consistency. The focus of this work is therefore twofold: (i) to exploit the combination of both model and application characteristics to achieve the best possible result with the least amount of computational burden; (ii) to develop methodology to construct transport model representations in a multi-scale environment following the identified conditions that guarantee consistency between various model granularities.
APA, Harvard, Vancouver, ISO, and other styles
32

Yang, Yishen. "On Rank-invariant Methods for Ordinal Data." Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-53675.

Full text
Abstract:
Data from rating scale assessments have rank-invariant properties only, which means that the data represent an ordering but lack standardized magnitude, inter-categorical distances, and linearity. Even though the judgments are often coded by natural numbers, they are not really metric. The aim of this thesis is to further develop the nonparametric rank-based Svensson methods for paired ordinal data, which are based on the rank-invariant properties only. The thesis consists of five papers. In Paper I the asymptotic properties of the measure of systematic disagreement in paired ordinal data, the Relative Position (RP), and of the difference in RP between groups were studied. Based on the findings of asymptotic normality, two tests for analyses of change within a group and between groups were proposed. In Paper II the asymptotic properties of rank-based measures, e.g. Svensson's measures of systematic disagreement and of additional individual variability, were discussed, and a numerical method for approximation was suggested. In Paper III the asymptotic properties of the measures for paired ordinal data discussed in Paper II were verified by simulations. Furthermore, the Spearman rank-order correlation coefficient (rs) and Svensson's augmented rank-order agreement coefficient (ra) were compared. By demonstrating how they differ and why they differ, it is emphasized that they measure different things. In Paper IV the test proposed in Paper I for comparing systematic changes in paired ordinal data between two groups was compared with other nonparametric tests for group changes, considering different approaches to categorising changes. The simulations reveal that the proposed test works better for small and unbalanced samples. Paper V demonstrates that rank-invariant approaches can also be used in the analysis of ordinal data from multi-item scales, which is an appealing and appropriate alternative to calculating sum scores.
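For reference, the Spearman rs mentioned in Paper III can be computed directly from the paired ratings; a minimal sketch using SciPy (Svensson's ra is a distinct, agreement-oriented coefficient and is not implemented here, and the data below are made up).

```python
# Sketch: Spearman's rank-order correlation from paired ordinal ratings.
# This measures monotonic association, not rater agreement: Svensson's
# augmented rank-order coefficient (ra) discussed in the thesis is distinct.
from scipy.stats import spearmanr

ratings_x = [1, 2, 2, 3, 4, 4, 5]   # ordinal judgments, occasion 1
ratings_y = [1, 1, 2, 3, 3, 4, 5]   # ordinal judgments, occasion 2

rs, pvalue = spearmanr(ratings_x, ratings_y)
print(f"rs = {rs:.3f}, p = {pvalue:.3f}")
```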
APA, Harvard, Vancouver, ISO, and other styles
33

Unger, Robin [Verfasser]. "Multi-scale constitutive modelling of nanoparticle/epoxy nanocomposites : molecular simulation-based methods and experimental validation / Robin Unger." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1214367135/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

HomChaudhuri, Baisravan. "Price-Based Distributed Optimization in Large-Scale Networked Systems." University of Cincinnati / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1377868426.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Selent, Douglas A. "Creating Systems and Applying Large-Scale Methods to Improve Student Remediation in Online Tutoring Systems in Real-time and at Scale." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/308.

Full text
Abstract:
"A common problem shared amongst online tutoring systems is the time-consuming nature of content creation. It has been estimated that an hour of online instruction can take up to 100-300 hours to create. Several systems have created tools to expedite content creation, such as the Cognitive Tutors Authoring Tool (CTAT) and the ASSISTments builder. Although these tools make content creation more efficient, they all still depend on the efforts of a content creator and/or past historical. These tools do not take full advantage of the power of the crowd. These issues and challenges faced by online tutoring systems provide an ideal environment to implement a solution using crowdsourcing. I created the PeerASSIST system to provide a solution to the challenges faced with tutoring content creation. PeerASSIST crowdsources the work students have done on problems inside the ASSISTments online tutoring system and redistributes that work as a form of tutoring to their peers, who are in need of assistance. Multi-objective multi-armed bandit algorithms are used to distribute student work, which balance exploring which work is good and exploiting the best currently known work. These policies are customized to run in a real-world environment with multiple asynchronous reward functions and an infinite number of actions. Inspired by major companies such as Google, Facebook, and Bing, PeerASSIST is also designed as a platform for simultaneous online experimentation in real-time and at scale. Currently over 600 teachers (grades K-12) are requiring students to show their work. Over 300,000 instances of student work have been collected from over 18,000 students across 28,000 problems. From the student work collected, 2,000 instances have been redistributed to over 550 students who needed help over the past few months. I conducted a randomized controlled experiment to evaluate the effectiveness of PeerASSIST on student performance. Other contributions include representing learning maps as Bayesian networks to model student performance, creating a machine-learning algorithm to derive student incorrect processes from their incorrect answer and the inputs of the problem, and applying Bayesian hypothesis testing to A/B experiments. We showed that learning maps can be simplified without practical loss of accuracy and that time series data is necessary to simplify learning maps if the static data is highly correlated. I also created several interventions to evaluate the effectiveness of the buggy messages generated from the machine-learned incorrect processes. The null results of these experiments demonstrate the difficulty of creating a successful tutoring and suggest that other methods of tutoring content creation (i.e. PeerASSIST) should be explored."
APA, Harvard, Vancouver, ISO, and other styles
36

Lepenies, Ingolf G. "Zur hierarchischen und simultanen Multi-Skalen-Analyse von Textilbeton." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1231842928873-71702.

Full text
Abstract:
The present work deals with the simulation and prediction of the effective material behavior of the high-performance composite textile reinforced concrete (TRC) subjected to tension. Based on a hierarchical material model within a multi-scale approach, the load-bearing mechanisms of TRC are modeled on three structural scales. Therewith, the mechanical parameters characterizing the composite material can be deduced indirectly from experimentally determined force-displacement relations obtained from roving pullout tests. These parameters cannot be obtained directly by contemporary measuring techniques. A micro-meso-macro prediction model (MMM-PM) for TRC is developed, predicting the macroscopic material behavior by means of simulations of the microscopic and mesoscopic material behavior. The basis is the qualitative and quantitative identification of the bond properties of the roving-matrix system. The partial impregnation of the rovings and the corresponding varying bond qualities are identified to characterize the bond behavior of rovings in a fine-grained concrete matrix. The huge variety of roving cross-sections is approximated by superellipses on the meso scale. The macroscopic behavior of TRC subjected to tension, including multiple cracking of the matrix material, is correctly predicted on the basis of the micro- and meso-mechanical models. The calibration and verification of the MMM-PM is performed by simulations of roving pullout tests, whereas a first validation is carried out by comparing the numerical predictions with experimental data from tensile tests. The MMM-PM for TRC is applied to tensile tests of structural members made of TRC. Furthermore, a steel-reinforced concrete plate strengthened by a TRC layer is accurately simulated, yielding the macroscopic deflection of the plate, the mesoscopic stress state of the roving and the microscopic stresses of the filaments.
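The superellipse approximation of roving cross-sections mentioned above is easy to parametrise; a minimal sketch with placeholder dimensions follows.

```python
# Sketch: parametrising a roving cross-section as a superellipse
# |x/a|^n + |y/b|^n = 1, as used in the meso-scale model. The parameter
# values here are arbitrary placeholders.
import numpy as np

def superellipse(a, b, n, num=200):
    t = np.linspace(0, 2 * np.pi, num)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2 / n)
    return x, y

x, y = superellipse(a=1.2, b=0.4, n=3.0)   # flat, roughly lens-shaped section
# Enclosed area by the shoelace formula, e.g. for estimating fibre content:
area = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
print(f"cross-section area ~ {area:.3f}")
```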
APA, Harvard, Vancouver, ISO, and other styles
37

Sfantos, Georgios. "Boundary element methods for cohesive-frictional non linear problems : applications to wear, contact and multi-scale damage modelling." Thesis, Imperial College London, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.439265.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Okeson, Trent James. "Camera View Planning for Structure from Motion: Achieving Targeted Inspection Through More Intelligent View Planning Methods." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7060.

Full text
Abstract:
Remote sensors and unmanned aerial vehicles (UAVs) have the potential to dramatically improve infrastructure health monitoring in terms of the accuracy of the information and the frequency of data collection. UAV automation has made significant progress, but that automation is also creating vast amounts of data that need to be processed into actionable information. A key aspect of this work is the optimization (not just automation) of data collection from UAVs for targeted planning of mission objectives. This work investigates the use of camera planning for Structure from Motion for 3D modeling of infrastructure. Included in this thesis is a novel multi-scale view-planning algorithm for autonomous targeted inspection. The method presented reduced the number of photos needed, and therefore the processing time, while maintaining desired accuracies across the test site. A second focus of this work investigates various set covering problem algorithms for selecting the optimal camera set. The trade-offs between solve time and quality of results are explored. The Carousel Greedy algorithm is found to be the best method for solving the problem due to its relatively fast solve times and the high quality of the solutions found. Finally, physical flight tests are used to demonstrate the quality of the method for determining coverage. Each of the set covering problem algorithms is used to create a camera set that achieves 95% coverage. The models from the different camera sets are comparable despite a large amount of variability in the camera sets chosen. While this study focuses on multi-scale view planning for optical sensors, the methods could be extended to other remote sensors, such as aerial LiDAR.
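A sketch of the Carousel Greedy scheme for set covering (after Cerrone, Cerulli and Golden) is given below, as one plausible reading of the algorithm named above; the parameters and candidate format are illustrative assumptions.

```python
# Sketch: Carousel Greedy for set covering. Run classic greedy, trim the
# tail, then repeatedly drop the oldest choice and add the best greedy
# choice, and finally complete to a full cover.

def greedy_cover(universe, sets, partial=()):
    """Classic greedy: repeatedly add the set covering most new elements."""
    sol = list(partial)
    covered = set().union(*(sets[i] for i in sol), set())
    while covered != universe:
        unused = set(sets) - set(sol)
        if not unused:
            break
        best = max(unused, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break
        sol.append(best)
        covered |= sets[best]
    return sol

def carousel_greedy(universe, sets, alpha=2, beta=0.2):
    sol = greedy_cover(universe, sets)
    sol = sol[: len(sol) - int(beta * len(sol))]       # drop tail of greedy solution
    for _ in range(alpha * len(sol)):                  # carousel phase
        if sol:
            sol.pop(0)                                 # remove the oldest choice ...
        covered = set().union(*(sets[i] for i in sol), set())
        best = max(set(sets) - set(sol), key=lambda i: len(sets[i] - covered))
        sol.append(best)                               # ... add the best greedy choice
    return greedy_cover(universe, sets, partial=sol)   # complete to a full cover

views = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1, 6}, 4: {2, 5}}
print(carousel_greedy(set(range(1, 7)), views))
```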
APA, Harvard, Vancouver, ISO, and other styles
39

Lepenies, Ingolf G. "Zur hierarchischen und simultanen Multi-Skalen-Analyse von Textilbeton." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A23636.

Full text
Abstract:
The present work deals with the simulation and prediction of the effective material behavior of the high-performance composite textile reinforced concrete (TRC) subjected to tension. Based on a hierarchical material model within a multi-scale approach, the load-bearing mechanisms of TRC are modeled on three structural scales. Therewith, the mechanical parameters characterizing the composite material can be deduced indirectly from experimentally determined force-displacement relations obtained from roving pullout tests. These parameters cannot be obtained directly by contemporary measuring techniques. A micro-meso-macro prediction model (MMM-PM) for TRC is developed, predicting the macroscopic material behavior by means of simulations of the microscopic and mesoscopic material behavior. The basis is the qualitative and quantitative identification of the bond properties of the roving-matrix system. The partial impregnation of the rovings and the corresponding varying bond qualities are identified to characterize the bond behavior of rovings in a fine-grained concrete matrix. The huge variety of roving cross-sections is approximated by superellipses on the meso scale. The macroscopic behavior of TRC subjected to tension, including multiple cracking of the matrix material, is correctly predicted on the basis of the micro- and meso-mechanical models. The calibration and verification of the MMM-PM is performed by simulations of roving pullout tests, whereas a first validation is carried out by comparing the numerical predictions with experimental data from tensile tests. The MMM-PM for TRC is applied to tensile tests of structural members made of TRC. Furthermore, a steel-reinforced concrete plate strengthened by a TRC layer is accurately simulated, yielding the macroscopic deflection of the plate, the mesoscopic stress state of the roving and the microscopic stresses of the filaments.
APA, Harvard, Vancouver, ISO, and other styles
40

Loison, Arthur. "Unified two-scale Eulerian multi-fluid modeling of separated and dispersed two-phase flows." Electronic Thesis or Diss., Institut polytechnique de Paris, 2024. http://www.theses.fr/2024IPPAX009.

Full text
Abstract:
Liquid-gas two-phase flows are present in numerous industrial applications such as aerospace propulsion, nuclear hydraulics or bubble column reactors in the chemical industry. The simulation of such flows is of primary interest for their understanding and optimization. However, the dynamics of the interface separating the gas from the liquid can be multiscale, which makes simulations of industrial processes computationally too expensive. Some modelling efforts have been devoted to the development of cheaper multi-fluid models adapted to particular interface dynamics regimes, e.g. the separated regime, where the fluids are separated by a single smooth surface, and the disperse regime, where inclusions of one fluid are carried by the other. Attempts at coupling these models have shown some progress towards simulating multiscale flows like atomization, but they usually have physical or mathematical drawbacks. This thesis therefore pursues the goal of proposing a unified two-scale modelling framework, with appropriate numerical methods, adapted to a multiscale interface dynamics that goes from a separated to a disperse regime. The main contributions related to this modelling effort are: 1- the combination of compressible multi-fluid models from the literature, adapted to either the separated or the disperse regime, into a unified two-scale multi-fluid model relying on Hamilton's Stationary Action Principle; 2- the local coupling of the models with an inter-scale mass transfer, both regularizing the large-scale interface and modelling mixed-regime phenomena such as those in primary break-up; 3- the enhancement of the small-scale models for the disperse regimes by adding the dynamics of geometrical quantities for oscillating droplets and pulsating bubbles, built as moments of a kinetic description. From the numerical perspective, finite-volume schemes and relaxation methods are used to solve the system of conservation laws of the models. Eventually, simulations with the open-source finite-volume solver Josiepy demonstrate the regularization properties of the model on a set of well-chosen numerical setups leading to multiscale interface dynamics.
APA, Harvard, Vancouver, ISO, and other styles
41

Sa, Shibasaki Rui. "Lagrangian Decomposition Methods for Large-Scale Fixed-Charge Capacitated Multicommodity Network Design Problem." Thesis, Université Clermont Auvergne‎ (2017-2020), 2020. http://www.theses.fr/2020CLFAC024.

Full text
Abstract:
Typically present in the logistics and telecommunications domains, the Fixed-Charge Multicommodity Capacitated Network Design Problem remains challenging, especially when large-scale contexts are involved. In this particular case, the ability to produce good quality solutions in a reasonable amount of time leans on the availability of efficient algorithms. In that sense, the present thesis proposes Lagrangian approaches that are able to provide relatively sharp bounds for large-scale instances of the problem. The efficiency of the methods depends on the algorithm applied to solve the Lagrangian duals, so we choose between two of the most efficient solvers in the literature: the Volume Algorithm and the Bundle Method, providing a comparison between them. The results showed that the Volume Algorithm is more efficient in the present context, being the one kept for further research. A first Lagrangian heuristic was devised to produce good quality feasible solutions for the problem, obtaining far better results than Cplex for the largest instances. Concerning lower bounds, a Relax-and-Cut algorithm was implemented, embedding sensitivity analysis and constraint scaling, which improved results. The increases in lower bounds attained 11%, but on average they remained under 1%. The Relax-and-Cut algorithm was then included in a Branch-and-Cut scheme, to solve linear programs in each node of the search tree. Moreover, a Feasibility Pump heuristic using the Volume Algorithm as the solver for linear programs was implemented to accelerate the search for good feasible solutions in large-scale cases. The obtained results showed that the proposed scheme is competitive with the best algorithms in the literature, and provides the best results in large-scale contexts. Moreover, a heuristic version of the Branch-and-Cut algorithm based on the Lagrangian Feasibility Pump was tested, providing the best results in general when compared to efficient heuristics in the literature.
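The basic scheme that both the Volume Algorithm and the Bundle Method refine is subgradient ascent on the Lagrangian dual; a minimal sketch on a made-up binary covering problem (not the thesis's network design instances) follows.

```python
# Sketch: plain subgradient ascent on a Lagrangian dual. Toy problem,
# invented for illustration: min c.x s.t. A x >= b, x in {0,1}^n, with
# the coupling constraints A x >= b dualized with multipliers lam >= 0.
import numpy as np

c = np.array([4.0, 3.0, 6.0, 5.0])
A = np.array([[1.0, 1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0]])
b = np.array([2.0, 2.0])

lam = np.zeros(2)
best = -np.inf
for k in range(1, 200):
    red_cost = c - lam @ A                 # reduced costs of the relaxed problem
    x = (red_cost < 0).astype(float)       # inner minimisation is separable
    dual_val = red_cost @ x + lam @ b      # L(lam), a valid lower bound
    best = max(best, dual_val)
    subgrad = b - A @ x                    # subgradient of L at lam
    lam = np.maximum(0.0, lam + (1.0 / k) * subgrad)  # diminishing step, projected
print("best Lagrangian lower bound:", best)
```

The Volume Algorithm keeps this loop but additionally averages the inner solutions x to estimate a primal solution, which is what makes it attractive for the heuristics described above.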
APA, Harvard, Vancouver, ISO, and other styles
42

Omar, Murad Ahmad [Verfasser], Vasilis [Akademischer Betreuer] [Gutachter] Ntziachristos, Thomas [Gutachter] Misgeld, and Jörg [Gutachter] Conradt. "Multi-scale thermoacoustic imaging methods of biological tissues / Murad Ahmad Omar. Betreuer: Vasilis Ntziachristos. Gutachter: Thomas Misgeld ; Jörg Conradt ; Vasilis Ntziachristos." München : Universitätsbibliothek der TU München, 2015. http://d-nb.info/1105646696/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ramos, Jubierre Javier [Verfasser], André [Akademischer Betreuer] [Gutachter] Borrmann, and Christian [Gutachter] Koch. "Consistency preservation methods for multi-scale design of subway infrastructure facilities / Javier Ramos Jubierre ; Gutachter: André Borrmann, Christian Koch ; Betreuer: André Borrmann." München : Universitätsbibliothek der TU München, 2017. http://d-nb.info/1127728598/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hidalga, García-Bermejo Patricio. "Development and validation of a multi-scale and multi-physics methodology for the safety analysis of fast transients in Light Water Reactors." Doctoral thesis, Universitat Politècnica de València, 2021. http://hdl.handle.net/10251/160135.

Full text
Abstract:
Nuclear technology for civil use has generated more safety concerns than many other technologies applied in daily life. The nuclear regulators define the basis of how the safe operation of Nuclear Power Plants is to be carried out. According to these guidelines, a Nuclear Power Plant must analyze an envelope of hypothetical events and deterministically establish whether the acceptance criteria for these events are met. The Deterministic Safety Analysis uses simulation tools that apply the known physics of the behavior of the Nuclear Power Plant to evaluate the evolution of a safety variable and assure that the safety limits will not be exceeded. The development of computer science, numerical methods and the physics involved in the behavior of a Nuclear Power Plant has yielded powerful simulation tools that are capable of predicting the evolution of safety variables with significant accuracy. This allows more realistic simulation scenarios to be considered, instead of conservative approaches that compensate for the lack of knowledge in the applied prediction methods. The so-called Best Estimate simulation tools are capable of analyzing transient events on different scales. Furthermore, they incorporate more detailed analytical models and experimental correlations. A step forward in the Deterministic Safety Analysis intends to combine the Best Estimate simulation tools of the different physics, considering the interaction among them and analyzing the different scales, including more local approaches if necessary. For this purpose, this thesis work presents a multi-scale and multi-physics methodology that uses different physics codes and has the aim of modeling postulated scenarios on different scales, i.e. from system models representing the components of the plant to subchannel models that analyze the behavior of the coolant between the fuel rods. This methodology allows a flow of information in which the output of one scale is used as input on a more detailed scale, yielding a more local analysis of parameters, such as the Critical Power Ratio, which are of great importance for the estimation of safety margins. The development of this methodology has been validated against plant data with the aim of evaluating the scope of the methodology and providing future lines of development. In addition, the main results of the validation and verification performed during the development of the parts of this methodology are presented.
Hidalga García-Bermejo, P. (2020). Development and validation of a multi-scale and multi-physics methodology for the safety analysis of fast transients in Light Water Reactors [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/160135
TESIS
APA, Harvard, Vancouver, ISO, and other styles
45

Zeng, Zhanggui. "Financial Time Series Analysis using Pattern Recognition Methods." University of Sydney, 2008. http://hdl.handle.net/2123/3558.

Full text
Abstract:
Doctor of Philosophy
This thesis is based on research on financial time series analysis using pattern recognition methods. The first part of this research focuses on univariate time series analysis using different pattern recognition methods. First, probabilities of basic patterns are used to represent the features of a section of a time series. This feature can remove noise from the time series by statistical probability. It is experimentally proven that this feature is successful for time series with repeated patterns. Second, a multiscale Gaussian gravity, a pattern relationship measurement that can describe the direction of the pattern relationship, is introduced to pattern clustering. By searching for the Gaussian-gravity-guided nearest neighbour of each pattern, this clustering method can easily determine the boundaries of the clusters. Third, a method is presented by which unsupervised pattern classification can be transformed into multiscale supervised pattern classification by means of multiscale supervisory time series or multiscale filtered time series. The second part of this research focuses on multivariate time series analysis using pattern recognition. A systematic method is proposed to find the independent variables of a group of share prices by time series clustering, principal component analysis, independent component analysis, and object recognition. The number of dependent variables is reduced and the multivariate time series analysis is simplified by time series clustering and principal component analysis. Independent component analysis aims to find the ideal independent variables of the group of shares. Object recognition is expected to recognize those independent variables that are similar to the independent components. This method provides a new clue to understanding the stock market and to modelling a large time series database.
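The PCA-then-ICA reduction described above can be sketched with scikit-learn on synthetic price data; the library calls are standard, but the data, dimensions and mixing are placeholders.

```python
# Sketch: reducing a group of share-price series with PCA, then searching
# for statistically independent drivers with ICA. Synthetic data only.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(1)
t = np.arange(500)
drivers = np.c_[np.sin(t / 20), np.sign(np.sin(t / 7)), rng.standard_normal(500)]
mixing = rng.random((3, 8))                    # 8 shares driven by 3 sources
prices = drivers @ mixing + 0.1 * rng.standard_normal((500, 8))

reduced = PCA(n_components=3).fit_transform(prices)            # fewer variables
independent = FastICA(n_components=3, random_state=0).fit_transform(prices)
print(reduced.shape, independent.shape)                        # (500, 3) twice
```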
APA, Harvard, Vancouver, ISO, and other styles
46

Zeng, Zhanggui. "Financial Time Series Analysis using Pattern Recognition Methods." Thesis, The University of Sydney, 2006. http://hdl.handle.net/2123/3558.

Full text
Abstract:
This thesis is based on research on financial time series analysis using pattern recognition methods. The first part of this research focuses on univariate time series analysis using different pattern recognition methods. First, probabilities of basic patterns are used to represent the features of a section of a time series. This feature can remove noise from the time series by statistical probability. It is experimentally proven that this feature is successful for time series with repeated patterns. Second, a multiscale Gaussian gravity, a pattern relationship measurement that can describe the direction of the pattern relationship, is introduced to pattern clustering. By searching for the Gaussian-gravity-guided nearest neighbour of each pattern, this clustering method can easily determine the boundaries of the clusters. Third, a method is presented by which unsupervised pattern classification can be transformed into multiscale supervised pattern classification by means of multiscale supervisory time series or multiscale filtered time series. The second part of this research focuses on multivariate time series analysis using pattern recognition. A systematic method is proposed to find the independent variables of a group of share prices by time series clustering, principal component analysis, independent component analysis, and object recognition. The number of dependent variables is reduced and the multivariate time series analysis is simplified by time series clustering and principal component analysis. Independent component analysis aims to find the ideal independent variables of the group of shares. Object recognition is expected to recognize those independent variables that are similar to the independent components. This method provides a new clue to understanding the stock market and to modelling a large time series database.
APA, Harvard, Vancouver, ISO, and other styles
47

Giggins, Brent Matthew. "Stochastically Modified Bred Vectors." Thesis, The University of Sydney, 2019. https://hdl.handle.net/2123/21453.

Full text
Abstract:
Bred vectors (BVs) are a computationally efficient method used to generate flow-adapted initial conditions for ensemble forecasting that project onto unstable growing modes. Such ensembles, however, often lack diversity and may collapse to a low-dimensional subspace. We introduce two stochastic methods, tailored for multi-scale systems, to increase the diversity of these BV ensembles that still feature the original method's simplicity and low computational cost. We describe how to create stochastically perturbed bred vectors (SPBVs), which constitute an effective sampling of the invariant measure of the fast dynamics in regions of phase space which are likely to grow. It is shown that SPBVs lead to improved forecast skill over BVs as measured by RMS error, as well as more reliable ensembles as quantified by the error-spread relationship and Talagrand histograms. The approach is dynamically informed and aligns with the unstable subspace as characterised by the covariant Lyapunov vectors, thereby retaining original local dynamical information about the system. We also develop random draw bred vectors (RDBVs), which are overdispersive and not dynamically informed but provide improved forecast skill over the SPBVs. We additionally extend the stochastic method approach to systems without any scale separation. Here, it is shown that the SPBVs are still dynamically informed and generate reliable ensembles provided that they do not destroy the spatial correlations of the perturbation. We illustrate the advantage of SPBVs and RDBVs over BVs in numerical simulations of the single-scale and multi-scale Lorenz-96 model.
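A minimal sketch of the classic breeding cycle on the single-scale Lorenz-96 model follows; the stochastic SPBV/RDBV modifications introduced in the thesis are not reproduced, and the forcing, amplitude and step sizes are arbitrary.

```python
# Sketch: one breeding cycle on the Lorenz-96 model. Perturb the control
# run, evolve both trajectories, rescale the difference to a fixed
# amplitude, repeat: the rescaled difference is the bred vector.
import numpy as np

def lorenz96(x, F=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.01):
    k1 = lorenz96(x)
    k2 = lorenz96(x + 0.5 * dt * k1)
    k3 = lorenz96(x + 0.5 * dt * k2)
    k4 = lorenz96(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(2)
control = rng.standard_normal(40)
for _ in range(1000):                       # spin up onto the attractor
    control = rk4_step(control)

amp = 0.05                                  # breeding amplitude
bv = amp * rng.standard_normal(40)
for _ in range(50):                         # breeding cycles
    perturbed = control + bv
    for _ in range(8):                      # free evolution between rescalings
        control, perturbed = rk4_step(control), rk4_step(perturbed)
    diff = perturbed - control
    bv = amp * diff / np.linalg.norm(diff)  # rescale: the bred vector
print("bred vector norm:", np.linalg.norm(bv))
```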
APA, Harvard, Vancouver, ISO, and other styles
48

Peters, Andreas [Verfasser], and Bettar Ould el Moctar [Akademischer Betreuer]. "Numerical Modelling and Prediction of Cavitation Erosion Using Euler-Euler and Multi-Scale Euler-Lagrange Methods / Andreas Peters ; Betreuer: Bettar Ould el Moctar." Duisburg, 2020. http://d-nb.info/1203066783/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Del Masto, Alessandra. "Transition d’échelle entre fibre végétale et composite UD : propagation de la variabilité et des non-linéarités." Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCD022/document.

Full text
Abstract:
Bien que les matériaux composites renforcés par fibres végétales (PFCs) représentent une solution attractive pour la conception de structures légères, performantes et à faible coût environnemental, leur développement nécessite des études approfondies concernant les mécanismes à la base du comportement non-linéaire en traction exprimé, ainsi que la variabilité des propriétés mécaniques. Compte tenu de leur caractère multi-échelle, ces travaux de thèse visent à contribuer, via une approche numérique, à l’étude de la propagation du comportement à travers les échelles des PFCs. Dans un premier temps, l’étude se focalise sur l’échelle de la fibre : un modèle 3D de comportement de la paroi est d’abord implémenté dans un calcul EF, afin d’établir l’influence de la morphologie de la fibre sur le comportement exprimé. Une fois l’impact non négligeable de la morphologie déterminé, une étude des liens entre morphologie, matériau, ultrastructure et comportement en traction est menée via une analyse de sensibilité dans le cas du lin et du chanvre. La deuxième partie du travail est dédiée à l’échelle du pli de composite. Une nouvelle approche multi-échelle stochastique est développée et implémentée. Elle est basée sur la définition d’un volume élémentaire (VE) à microstructure aléatoire pour décrire le comportement du pli. L’approche est ensuite utilisée pour étudier la sensibilité du comportement du VE aux paramètres nano, micro et mésoscopiques. L’analyse de sensibilité, menée via le développement de la réponse sur la base du chaos polynomial, nous permet ainsi de construire un métamodèle du comportement du pli.
Although plant-fiber reinforced composites (PFCs) represent an attractive solution for the design of lightweight, high-performance and low-environmental-cost structures, their development requires in-depth study of the mechanisms underlying their nonlinear tensile behavior, as well as of the variability of their mechanical properties. Given their multi-scale nature, this thesis aims to contribute, through a numerical approach, to the study of how behavior propagates across the scales of PFCs. The study first focuses on the fiber scale: a 3D model of the cell-wall behavior is implemented in a finite-element (FE) calculation in order to establish the influence of fiber morphology on the tensile behavior. Once the non-negligible impact of morphology has been established, the links between morphology, material, ultrastructure, and tensile behavior are studied via a sensitivity analysis for flax and hemp. The second part of the work is dedicated to the composite ply scale. A new stochastic multi-scale approach is developed and implemented, based on the definition of an elementary volume (EV) with random microstructure to describe the behavior of the ply. The approach is then used to study the sensitivity of the EV behavior to nano-, micro-, and mesoscopic parameters. The sensitivity analysis, conducted by expanding the response on a polynomial chaos basis, allows a metamodel of the tensile behavior of the ply to be constructed.
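As a rough illustration of the variance-based sensitivity analysis this abstract describes, the sketch below estimates first-order Sobol indices with a plain Monte Carlo (Saltelli-style) estimator; the model function and the nano/micro/meso parameter names are invented for the example, and direct sampling stands in for the thesis's polynomial chaos expansion:

```python
import numpy as np

def ply_response(p):
    # Stand-in for the elementary-volume model: maps (nano, micro, meso)
    # parameters to a scalar tensile response (Ishigami test function).
    nano, micro, meso = p[:, 0], p[:, 1], p[:, 2]
    return np.sin(nano) + 7.0 * np.sin(micro) ** 2 + 0.1 * meso ** 4 * np.sin(nano)

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))    # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ply_response(A), ply_response(B)
var = np.var(np.concatenate([yA, yB]))

for i, name in enumerate(["nano", "micro", "meso"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                   # swap in column i from B
    yABi = ply_response(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var  # first-order Sobol index of input i
    print(f"S1[{name}] = {S1:.3f}")
```

A polynomial chaos metamodel would instead fit an orthogonal polynomial expansion to a modest number of model runs, after which the Sobol indices follow analytically from the expansion coefficients.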
APA, Harvard, Vancouver, ISO, and other styles
50

Van Gaalen, Joseph Frank. "Alternative Statistical Methods for Analyzing Geological Phenomena: Bridging the Gap Between Scientific Disciplines." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3424.

Full text
Abstract:
Considering the nature of the scientific community alongside typical economic circumstances, two distinct paths for development emerge. One path involves hypothesis testing and the evolution of strategies linked to iterative advances in equipment. A second, more complicated scenario involves external influences, whether economic, political, or otherwise, such as the 2011 retirement of NASA's Space Shuttle program, which will no doubt influence research in associated fields. The following chapters give an account of two statistical techniques and of their importance to the two relatively unrelated geological fields of coastal geomorphology and groundwater hydrology. The first technique applies a multi-dimensional approach to defining water table response to precipitation in areas where precipitation can reasonably be assumed to be the only recharge. The second technique applies a high-resolution, multi-scalar approach to a geologic setting whose study is most often restricted to either high resolution locally or low resolution regionally. This technique uses time-frequency analysis to characterize cuspate patterns in LIDAR data and is introduced using examples from the Atlantic coast of Florida, United States. It permits the efficient study of beachface landforms over many kilometers of coastline at multiple spatial scales. From a LIDAR image, a beach-parallel spatial series is generated; here, this series is the shore-normal position of a specific elevation (contour line). Well-established time-frequency analysis techniques, wavelet transforms and S-transforms, are then applied to the spatial series. These methods yield results compatible with traditional methods and show that the approach is useful for capturing transitions in cuspate shapes. To apply this new method, a land-based LIDAR study allowing rapid high-resolution surveying is conducted at Melbourne Beach, Florida, and Tairua Beach, New Zealand. Two different terrestrial scanning stations are compared and evaluated during the field investigation. Significant cusp activity is observed at Melbourne Beach, where morphological observations and sediment analysis are used to study beach cusp morphodynamics. Surveys at Melbourne were run ~500 m alongshore and sediment samples were collected intertidally over a five-day period. Beach cusp location within the larger-scale beach morphology is shown to directly influence cusp growth as either predominantly erosional or accretional. Sediment characteristics within the beach cusp morphology are reported coincident with cusp evolution. Variations in particle size distribution kurtosis are exhibited as the cusps evolve; however, no significant correlation is seen between grain size and position between horn and embayment. Toward the end of the study, a storm resulted in beach cusp destruction and increased sediment sorting. For the former, multi-dimensional technique, a new method for improving forecasts of surficial aquifer system water-level changes with rainfall is tested. The results provide a more rigorous analysis of common predictive techniques and compare them with the results of the tested model. They show that linear interpretations of response-to-rainfall data require clarification of how large events distort prediction and of how the binning of data can change the interpretation.
Analyses show that binning groundwater recharge data into daily format, as is typically done, may be useful for quick interpretation, but it only describes how fast the system responds to an event, not the return frequency of such a response. Without a secure grasp of the nonlinear nature of both water table and rainfall data, any binning or isolation of specific data carries a potential for aliasing that must be accounted for in any interpretation. Through the application of a multivariate technique, the new model is shown to be capable of supplanting current linear regression analyses as a more accurate means of prediction. Furthermore, the results show that in the Florida surficial aquifer system, response-to-rainfall ratios exhibit maxima most often linked with the modal stage.
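To make the time-frequency step concrete, the sketch below computes a Morlet continuous wavelet transform of a synthetic shore-parallel spatial series using PyWavelets; the cusp wavelengths, noise level, and 1 m alongshore sampling are invented stand-ins for a LIDAR-derived contour-position series:

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic shore-normal contour position sampled every 1 m alongshore:
# 25 m beach cusps on the first half, 40 m cusps on the second, plus noise.
rng = np.random.default_rng(3)
x = np.arange(1024.0)                          # alongshore distance (m)
series = np.where(x < 512.0,
                  np.sin(2.0 * np.pi * x / 25.0),
                  np.sin(2.0 * np.pi * x / 40.0))
series += 0.2 * rng.standard_normal(x.size)

# Morlet CWT: wavelet power as a function of alongshore position and scale.
scales = np.arange(2, 128)
coeffs, freqs = pywt.cwt(series, scales, "morl", sampling_period=1.0)
power = np.abs(coeffs) ** 2

# Dominant alongshore wavelength (1/frequency, in meters) at each position.
dominant = 1.0 / freqs[np.argmax(power, axis=0)]
print("median wavelength, first half :", np.median(dominant[:512]))
print("median wavelength, second half:", np.median(dominant[512:]))
```

The transition in dominant wavelength near the midpoint of the series is exactly the kind of alongshore change in cuspate shape the method is designed to capture.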
APA, Harvard, Vancouver, ISO, and other styles