
Dissertations / Theses on the topic 'Parameter block'


Consult the top 50 dissertations / theses for your research on the topic 'Parameter block.'


You can also download the full text of each academic publication as a PDF and read its abstract online whenever it is available in the metadata.


1

Norton, Kevin M. "Parameter optimization of seismic isolator models using recursive block-by-block nonlinear transient structural synthesis." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2002. http://library.nps.navy.mil/uhtbin/hyperion-image/02sep%5FNorton.pdf.

2

Bläser, Max [Verfasser]. "Prediction and Parameter Coding for Non-rectangular Block Partitioning / Max Bläser." Düren : Shaker, 2020. http://d-nb.info/1215461690/34.

3

Letchford, Kevin John. "Development of methoxy poly(ethylene glycol)-block-poly(caprolactone) amphiphilic diblock copolymer nanoparticulate formulations for the delivery of paclitaxel." Thesis, University of British Columbia, 2008. http://hdl.handle.net/2429/2487.

Abstract:
The goal of this project was to develop a non-toxic amphiphilic diblock copolymer nanoparticulate drug delivery system that will solubilize paclitaxel (PTX) and retain the drug in plasma. Methoxy poly(ethylene glycol)-block-poly(ε-caprolactone) (MePEG-b-PCL) diblock copolymers loaded with PTX were characterized and their physicochemical properties were correlated with their performance as nanoparticulate drug delivery systems. A series of MePEG-b-PCL was synthesized with PCL blocks ranging from 2-104 repeat units and MePEG blocks of 17, 44 or 114 repeat units. All copolymers were water soluble and formed micelles except MePEG₁₁₄-b-PCL₁₀₄, which was water insoluble and formed nanospheres. Investigation of the effects of block length on the physicochemical properties of the nanoparticles was used to select appropriate copolymers for development as PTX nanoparticles. The critical micelle concentration, pyrene partition coefficient and diameter of nanoparticles were found to be dependent on the PCL block length. Copolymers based on a MePEG molecular weight of 750 g/mol were found to have temperature dependent phase behavior. Relationships between the concentration of micellized drug and the compatibility between the drug and core-forming block, as determined by the Flory-Huggins interaction parameter, and PCL block length were developed. Increases in the compatibility between PCL and the drug, as well as longer PCL block lengths resulted in increased drug solubilization. The physicochemical properties and drug delivery performance characteristics of MePEG₁₁₄-b-PCL₁₉ micelles and MePEG₁₁₄-b-PCL₁₀₄ nanospheres were compared. Nanospheres were larger, had a more viscous core, solubilized more PTX and released it slower, compared to micelles. No difference was seen in the hemocompatibility of the nanoparticles as assessed by plasma coagulation time and erythrocyte hemolysis. Micellar PTX had an in vitro plasma distribution similar to free drug. The majority of micellar PTX associated with the lipoprotein deficient plasma fraction (LPDP). In contrast, nanospheres were capable of retaining more of the encapsulated drug with significantly less PTX partitioning into the LPDP fraction. In conclusion, although both micelles and nanospheres were capable of solubilizing PTX and were hemocompatible, PTX nanospheres may offer the advantage of prolonged blood circulation, based on the in vitro plasma distribution data, which showed that nanospheres retained PTX more effectively.
4

Block, Friederike [Verfasser], Heinz [Akademischer Betreuer] Lauffer, Heinz [Gutachter] Lauffer, and Ulrich [Gutachter] Brandl. "Somatische und paraklinische Parameter im Rahmen einer ambulanten Adipositastherapie bei Kindern und Jugendlichen / Friederike Block ; Gutachter: Heinz Lauffer, Ulrich Brandl ; Betreuer: Heinz Lauffer." Greifswald : Ernst-Moritz-Arndt-Universität, 2018. http://d-nb.info/1159703388/34.

5

Astorga Mejia, Marlem Lucia. "Simplified Performance-Based Analysis for Seismic Slope Displacements." BYU ScholarsArchive, 2016. https://scholarsarchive.byu.edu/etd/5963.

Abstract:
Millions of lives have been lost over the years as a result of the effects of earthquakes. One of these devastating effects is slope failure, more commonly known as landslide. Over the years, seismologists and engineers have teamed up to better record data during an earthquake. As technology has advanced, the data obtained have become more refined, allowing engineers to use the data in their efforts to estimate the effects of earthquakes that have not yet occurred. Several methods have been proposed over time to utilize the earthquake data and estimate slope displacements. A pioneer in the development of methods to estimate slope displacements, Nathan Newmark, proposed what is now called the Newmark sliding block method. This method explained in very simple ways how a mass, in this case a rigid block, would slide over an incline given that the acceleration of the block surpassed the frictional resistance created between the bottom of the block and the surface of the incline. Because many of the assumptions of this method were criticized by scientists over time, modified Newmark sliding block methods were proposed. As the original and modified Newmark sliding block methods were introduced, the need to account for the uncertainty in the way soil would behave under earthquake loading became a big challenge. Deterministic and probabilistic methods have been used to incorporate parameters that account for some of the uncertainty in the analysis. In an attempt to use a probabilistic approach to understanding how slopes might fail, the Pacific Earthquake Engineering Research Center proposed a performance-based earthquake engineering framework that allows decision-makers to use probabilistically generated information to make decisions based on acceptable risk. Previous researchers applied this framework to simplified Newmark sliding block models, but the approach is difficult for engineers to implement in practice because of the numerous probability calculations that are required. The work presented in this thesis provides a solution to the implementation of the performance-based approach by providing a simplified procedure for the performance-based determination of seismic slope displacements using the Rathje and Saygili (2009) and Bray and Travasarou (2007) simplified Newmark sliding block models. This document also includes hazard parameter maps, which are an important part of the simplified procedure, for five states in the United States. A validation of the method is provided, as well as a comparison of the simplified method against other commonly used approaches such as deterministic and pseudo-probabilistic analyses.
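As a minimal illustration of the rigid sliding block idea described above (a generic sketch of the classical Newmark analysis, not the recursive synthesis of entry 1 or the simplified procedure developed in this thesis; the yield acceleration a_y and the synthetic acceleration record are invented for the example), the double integration can be written in a few lines of Python:

    import numpy as np

    def newmark_displacement(accel, dt, a_y):
        """Classical Newmark rigid-block analysis: sliding starts when the
        ground acceleration exceeds the yield acceleration a_y, and stops
        when the relative velocity of the block returns to zero."""
        rel_vel, disp, sliding = 0.0, 0.0, False
        for a in accel:
            if sliding or a > a_y:
                rel_vel += (a - a_y) * dt      # excess acceleration drives sliding
                if rel_vel <= 0.0:             # block re-locks onto the slope
                    rel_vel, sliding = 0.0, False
                else:
                    sliding = True
                    disp += rel_vel * dt       # accumulate downslope displacement
        return disp

    t = np.arange(0.0, 10.0, 0.01)
    accel = 3.0 * np.sin(2.0 * np.pi * t)      # synthetic 1 Hz record, m/s^2
    print(newmark_displacement(accel, dt=0.01, a_y=1.5))

Displacement accumulates only while the relative velocity stays positive, which is the re-locking criterion of the rigid-block idealization.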
6

Pino, Guillaume. "Synthèse et auto-assemblage de copolymères à blocs à paramètre d’interaction de Flory-Huggins élevé." Thesis, Bordeaux, 2021. http://www.theses.fr/2021BORD0098.

Abstract:
This thesis concerns the synthesis of block copolymers (BCPs) with high segregation strength for nanolithography applications. The synthesis route followed is nitroxide-mediated controlled radical polymerization (NMP). Among all the block copolymers synthesized, we chose to focus on the association of 'hydrophilic' and 'hydrophobic' blocks with polystyrene backbones. The hydrophilic poly(3,4-dihydroxystyrene) (PDHS) block proved very interesting in association with the poly(4-tertbutylstyrene) (PtBS) and poly(4-trimethylsilylstyrene) (PTMSS) blocks, forming the PDHS-b-PtBS and PDHS-b-PTMSS block copolymers respectively. Very high χ values, on the order of 0.7 for PDHS-b-PtBS for example, could thus be quantified. The main advantage of the PTMSS block is that it amplifies the contrast during etching. The PDHS block was then replaced by poly(4-methyletherglycerolstyrene) (PMGS) in order to vary the thermo-mechanical characteristics in particular and to study their effect on the self-assembly of PtBS-b-PMGS copolymers. Finally, the specific infiltration of the PDHS block by a metallic precursor demonstrated the formation of a hard mask and a double oxide network for PDHS-b-PtBS and PDHS-b-PTMSS respectively.
7

Čermák, Justin. "Implementace umělé neuronové sítě do obvodu FPGA." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2011. http://www.nusl.cz/ntk/nusl-219363.

Abstract:
This master's thesis describes the design of an efficient artificial neural network in a Virtex-5 series FPGA, making maximum use of the possibilities for parallelization. The theoretical part contains basic information on artificial neural networks, FPGAs and VHDL. The practical part describes the number format used for the variables, the creation of the non-linear function, the principle of computing the individual layers, and the parameter settings available for the generated artificial neural networks.
8

Harun-or-Rashid, S. M. "Cosmological parameters and black holes." Helsinki : University of Helsinki, 2001. http://ethesis.helsinki.fi/julkaisut/mat/fysii/vk/harun-or-rashid/.

9

Teka, Kubrom Hisho. "Parameter estimation of the Black-Scholes-Merton model." Kansas State University, 2013. http://hdl.handle.net/2097/15669.

Abstract:
Master of Science
Department of Statistics
James Neill
In financial mathematics, asset prices for European options are often modeled according to the Black-Scholes-Merton (BSM) model, a stochastic differential equation (SDE) depending on unknown parameters. A derivation of the solution to this SDE is reviewed, resulting in a stochastic process called geometric Brownian motion (GBM) which depends on two unknown real parameters referred to as the drift and volatility. For additional insight, the BSM equation is expressed as a heat equation, which is a partial differential equation (PDE) with well-known properties. For American options, it is established that asset value can be characterized as the solution to an obstacle problem, which is an example of a free boundary PDE problem. One approach for estimating the parameters in the GBM solution to the BSM model can be based on the method of maximum likelihood. This approach is discussed and applied to a dataset involving the weekly closing prices for the Dow Jones Industrial Average between January 2012 and December 2012.
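Because log-returns under GBM are i.i.d. normal with mean (μ − σ²/2)Δt and variance σ²Δt, the maximum likelihood estimates mentioned above are available in closed form. A brief sketch (the prices below are hypothetical, and Δt = 1/52 reflects the weekly sampling):

    import numpy as np

    def gbm_mle(prices, dt):
        """Closed-form MLE for the GBM drift and volatility from log-returns."""
        r = np.diff(np.log(prices))
        sigma_hat = np.sqrt(r.var() / dt)       # var() is the 1/n (ML) variance
        mu_hat = r.mean() / dt + 0.5 * sigma_hat**2
        return mu_hat, sigma_hat

    prices = np.array([12900.0, 12950.5, 13010.2, 12980.7, 13055.1])  # made-up closes
    mu, sigma = gbm_mle(prices, dt=1.0 / 52.0)
    print(f"drift = {mu:.3f}, volatility = {sigma:.3f} (annualized)")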
10

Van Schalkwyk, Francois. "The influence of specimen size on the compression stress block parameters of reinforced concrete." Diss., University of Pretoria, 2017. http://hdl.handle.net/2263/62799.

Abstract:
The stress-strain distribution in flexural compression has been at the forefront of investigation since the early 20th century. The original formulation of the flexural stress-strain distribution, and the subsequent development of the stress block parameters, were based on specimens with a 127 x 203 mm (4 x 8 in.) cross section. The design of reinforced concrete flexural and flexural compression members at the Ultimate Limit State is based on the equilibrium of forces and moments obtained by using these stress block parameters. The calculation procedure entails the determination of the neutral axis depth, which varies depending on the magnitude of the applied action and the section dimensions. If the load action is small, the internal bending moment can be equilibrated with a reduced neutral axis depth; however, current design models do not consider the influence of a reduction in neutral axis depth (specimen size) on the stress block parameters, possibly resulting in an underestimation of the flexural compression capacity. This study aimed to evaluate the influence of specimen size and compressive strength on the stress block parameters of concrete by testing twenty-seven plain concrete specimens in flexural compression. Nine specimens were tested for each specimen size (50 mm, 100 mm, and 200 mm), three for each of the cylinder target strengths of 40 MPa, 65 MPa, and 80 MPa. The stress block parameters, obtained from the stress-strain curves, were compared to the data obtained by previous researchers, and the influence of specimen size on the stress block parameters was evaluated for the different concrete strengths. Along with the size effect in flexural compression, the size effect for cubes and cylinders was also evaluated, and the associated cylinder strength was used to eliminate the size effect from the stress block parameters. A comparison was made of the error between the predicted Moment-Axial force (M-N) interaction diagrams, obtained by using the BS 8110-1 (1997), SANS 0100-1 (2000), ACI-318 (2014), and EN 1992-1-1 (2004) codes, and the actual M-N interaction diagram obtained from the experimental points, and conclusions were drawn regarding their applicability to the design of concrete containing South African materials. Lastly, the flexural stress-strain behaviour was modelled, and a comparison made between the calculated and actual stress block parameters.
Dissertation (MEng)--University of Pretoria, 2017.
Civil Engineering
MEng
Unrestricted
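For context, the stress block parameters discussed in the abstract above are often written in the classical k1, k2, k3 form (a standard parametrization from the early flexural tests, not necessarily the exact notation of this dissertation): for a section of width b, effective depth d and neutral axis depth c,

    C = k_1 k_3 f'_c \, b \, c, \qquad M = C\,(d - k_2 c),

where k_3 f'_c is the peak flexural stress, k_1 the ratio of average to peak stress, and k_2 c the depth of the compressive resultant; design codes replace these with equivalent rectangular stress block constants.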
11

Liljebjörn, Johan, and Hugo Broman. "Mantis The Black-Box Scanner : Finding XSS vulnerabilities through parse errors." Thesis, Blekinge Tekniska Högskola, Institutionen för datavetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-19566.

Abstract:
Background. Penetration testing is a good technique for finding web vulnerabilities. Vulnerability scanners are often used to aid with security testing. The increased scope is becoming more difficult for scanners to handle in a reasonable amount of time. The problem with vulnerability scanners is that they rely on fuzzing to find vulnerabilities, and fuzzing has several drawbacks: it generates a lot of network traffic; scans can be excruciatingly slow; and vulnerability detection is limited when the output string is modified due to filtering or sanitization. Objectives. This thesis aims to investigate whether an XSS vulnerability scanner can be made more scalable than the current state-of-the-art. The idea is to examine how reflected parameters can be detected, and whether a different methodology can be applied to improve the detection of XSS vulnerabilities. The proposed vulnerability scanner is named Mantis. Methods. The research methods used in this thesis are literature review and experiment. In the literature review, we collected information about the investigated problem to help us analyze the identified research gaps. The experiment evaluated the proposed vulnerability scanner against the current state-of-the-art using the OWASP Benchmark dataset. Results. The results show that reflected parameters can be reliably detected using approximate string matching. Using the parameter mapping, it was possible to detect reflected XSS vulnerabilities to a great extent. Mantis had an average scan time of 78 seconds, OWASP ZAP 95 seconds, and Arachni 17 minutes. The dataset had a total of 246 XSS vulnerabilities; Mantis detected the most at 213, Arachni detected 183, and OWASP ZAP 137. None of the scanners had any false positives. Conclusions. Mantis has proven to be an efficient vulnerability scanner for detecting XSS vulnerabilities. Focusing on the set of characters that may lead to the exploitation of XSS has proven to be a great alternative to fuzzing. More testing of Mantis is needed to determine the usability of the vulnerability scanner in a real-world scenario. We believe the scanner has the potential to be a great asset for penetration testers in their work.
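The approximate string matching idea can be illustrated with a short sketch (a generic illustration, not Mantis's actual implementation; the windowing scheme and the 0.8 threshold are assumptions):

    import difflib

    def reflected(payload: str, body: str, threshold: float = 0.8) -> bool:
        """Slide a payload-sized window over the response body and report a
        reflection when the best similarity ratio exceeds the threshold, so a
        parameter is still caught if filtering altered some characters."""
        n = len(payload)
        best = 0.0
        for i in range(max(1, len(body) - n + 1)):
            window = body[i:i + n]
            best = max(best, difflib.SequenceMatcher(None, payload, window).ratio())
        return best >= threshold

    print(reflected("abc123xyz", "<p>Hello abc123xyz</p>"))   # exact reflection: True
    print(reflected("abc123xyz", "<p>nothing similar</p>"))   # no reflection: False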
12

Gajevski, Pavel. "Periodinių sistemų parametrų įvertinimas." Master's thesis, Lithuanian Academic Libraries Network (LABT), 2004. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2004~D_20040611_161732-30868.

Abstract:
This work discusses a block parameter estimation method for linear periodically time-varying systems. The work consists of two parts, theoretical and practical. The theoretical part is based on the description of the model, its creation and its structure; it presents the Markov estimate, i.e. the generalized least squares estimate, and describes the generalized model. The practical part is devoted to carrying out the experiments and describing them, and conclusions are drawn about the performance of the block parameter estimation method. The experiments were carried out using the Matlab program. The results of the experiments are given in tables and plots.
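The Markov estimate referred to above is the standard generalized least squares estimator: for a linear regression model y = Φθ + e with noise covariance matrix Σ,

    \hat{\theta} = \left(\Phi^{\top}\Sigma^{-1}\Phi\right)^{-1}\Phi^{\top}\Sigma^{-1}y,

which reduces to the ordinary least squares estimate when Σ is a multiple of the identity matrix.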
13

Rückert, Nadja, Robert S. Anderssen, and Bernd Hofmann. "Stable Parameter Identification Evaluation of Volatility." Universitätsbibliothek Chemnitz, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:ch1-qucosa-85402.

Abstract:
Using the dual Black-Scholes partial differential equation, Dupire derived an explicit formula, involving the ratio of partial derivatives of the evolving fair value of a European call option (ECO), for recovering information about its variable volatility. Because the prices, as a function of maturity and strike, are only available as discrete noisy observations, the evaluation of Dupire’s formula reduces to being an ill-posed numerical differentiation problem, complicated by the need to take the ratio of derivatives. In order to illustrate the nature of ill-posedness, a simple finite difference scheme is first used to approximate the partial derivatives. A new method is then proposed which reformulates the determination of the volatility, from the partial differential equation defining the fair value of the ECO, as a parameter identification activity. By using the weak formulation of this equation, the problem is localized to a subregion on which the volatility surface can be approximated by a constant or a constant multiplied by some known shape function which models the local shape of the volatility function. The essential regularization is achieved through the localization, the choice of the analytic weight function, and the application of integration-by-parts to the weak formulation to transfer the differentiation of the discrete data to the differentiation of the analytic weight function.
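For reference, the ratio of partial derivatives in Dupire's formula takes the following standard form (assuming a constant interest rate r and no dividends):

    \sigma^2(K,T) = \frac{\partial C/\partial T + rK\,\partial C/\partial K}{\frac{1}{2}K^2\,\partial^2 C/\partial K^2},

which makes the ill-posedness concrete: the discrete, noisy prices C(K,T) must be differentiated twice with respect to strike, and the result then divided by that second derivative.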
14

Liang, Yuli. "Contributions to Estimation and Testing Block Covariance Structures in Multivariate Normal Models." Doctoral thesis, Stockholms universitet, Statistiska institutionen, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-115347.

Abstract:
This thesis concerns inference problems in balanced random effects models with a so-called block circular Toeplitz covariance structure. This class of covariance structures describes the dependency of some specific multivariate two-level data when both compound symmetry and circular symmetry appear simultaneously. We derive two covariance structures under two different invariance restrictions. The obtained covariance structures reflect both circularity and exchangeability present in the data. In particular, estimation in the balanced random effects with block circular covariance matrices is considered. The spectral properties of such patterned covariance matrices are provided. Maximum likelihood estimation is performed through the spectral decomposition of the patterned covariance matrices. Existence of the explicit maximum likelihood estimators is discussed and sufficient conditions for obtaining explicit and unique estimators for the variance-covariance components are derived. Different restricted models are discussed and the corresponding maximum likelihood estimators are presented. This thesis also deals with hypothesis testing of block covariance structures, especially block circular Toeplitz covariance matrices. We consider both so-called external tests and internal tests. In the external tests, various hypotheses about testing block covariance structures, as well as mean structures, are considered, and the internal tests are concerned with testing specific covariance parameters given the block circular Toeplitz structure. Likelihood ratio tests are constructed, and the null distributions of the corresponding test statistics are derived.
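For concreteness (an illustration of the pattern, not an excerpt from the thesis), a 4 x 4 symmetric circular Toeplitz block has the form

    \begin{pmatrix}
    c_0 & c_1 & c_2 & c_1 \\
    c_1 & c_0 & c_1 & c_2 \\
    c_2 & c_1 & c_0 & c_1 \\
    c_1 & c_2 & c_1 & c_0
    \end{pmatrix},

and in the block circular structures studied here such blocks are arranged with compound (exchangeable) symmetry between blocks.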
15

García Quirós, Cecilio. "Waveform modelling of binary black holes in the Advanced LIGO era." Doctoral thesis, Universitat de les Illes Balears, 2020. http://hdl.handle.net/10803/671463.

Abstract:
The focus of this thesis is the development of accurate and computationally efficient waveform models for the description of the signal of non-precessing and precessing black-hole binary systems detected by the LIGO-Virgo detectors. Waveform models play a key role in the detection and parameter estimation of gravitational wave signals. The more accurate these models are, the more signals can be detected, but even more importantly, inaccuracies in the signal description will lead to systematic errors in the estimated parameters of the source. The models presented in this thesis include the description of several subdominant effects, which were not considered in the studies during the first two observing runs O1 and O2 of the LIGO-Virgo interferometers, but which break degeneracies in the signal and generally improve the accuracy of parameter estimation. During the gap between the O2 and O3 runs several research groups incorporated the most important subdominant harmonics into their models; however, we find that the models presented in this thesis improve on the accuracy of several of these models and outperform all of them in computational efficiency. The non-precessing model follows the standard strategy of the phenomenological models of calibrating an analytical ansatz to numerical relativity simulations. I produced a number of these simulations specifically for the calibration of the model, placing them strategically in regions of the parameter space that were poorly populated. I also produced several waveforms in the extreme mass ratio (EMRI) limit, extending the calibration region of the model from mass ratio 1 to 1000. The precessing model, on the other hand, follows the standard technique of twisting up a non-precessing model, extended in this case to a model with subdominant harmonics. The evaluation of the models is then accelerated by incorporating the interpolation technique of "multibanding", originally introduced by Vinciguerra et al. I have extended this technique, adapted it to the two frequency domain models presented in this thesis, and formulated it in a way that makes it applicable to any analytical frequency or time domain model.
16

Lundberg, Martin. "Automatic parameter tuning in localization algorithms." Thesis, Linköpings universitet, Programvara och system, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-158132.

Abstract:
Many algorithms today require a number of parameters to be set in order to perform well in a given application. The tuning of these parameters is often difficult and tedious to do manually, especially when the number of parameters is large. It is also unlikely that a human can find the best possible solution for difficult problems. To be able to automatically find good sets of parameters could both provide better results and save a lot of time. The prominent methods Bayesian optimization and Covariance Matrix Adaptation Evolution Strategy (CMA-ES) are evaluated for automatic parameter tuning in localization algorithms in this work. Both methods are evaluated using a localization algorithm on different datasets and compared in terms of computational time and the precision and recall of the final solutions. This study shows that it is feasible to automatically tune the parameters of localization algorithms using the evaluated methods. In all experiments performed in this work, Bayesian optimization was shown to make the biggest improvements early in the optimization but CMA-ES always passed it and proceeded to reach the best final solutions after some time. This study also shows that automatic parameter tuning is feasible even when using noisy real-world data collected from 3D cameras.
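As a sketch of how such tuning can be set up with CMA-ES (assuming the open-source pycma package; the objective function, parameter vector and evaluation budget below are placeholders, not the pipeline used in the thesis):

    # pip install cma
    import cma

    def localization_error(params):
        """Stand-in objective: in practice this would run the localization
        algorithm with these parameters on a dataset and return a loss
        combining precision and recall; the quadratic bowl is illustrative."""
        return sum((p - 0.3) ** 2 for p in params)

    x0 = [0.5, 0.5, 0.5]   # initial guess for three tunable parameters
    sigma0 = 0.2           # initial step size
    xbest, es = cma.fmin2(localization_error, x0, sigma0,
                          options={"maxfevals": 500})
    print(xbest)

Bayesian optimization would replace this loop with a surrogate model that proposes new parameter sets, which is why it tends to make its biggest gains early, as the abstract notes.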
17

Shoarian-Sattari, Kamal. "Use of vehicle flow parameters as predictors of road traffic accident risk." Thesis, Queen Mary, University of London, 1985. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.391324.

18

Call, John B. "Large-signal characterization and modeling of nonlinear devices using scattering parameters." Thesis, Virginia Tech, 2002. http://hdl.handle.net/10919/35548.

Abstract:
Characterization and modeling of devices at high drive levels often requires specialized equipment and measurement techniques. Many large-signal devices will never have traditional nonlinear models because model development is expensive and time-consuming. Due to the complexity of the device or the size of the application market, nonlinear modeling efforts may not be cost effective. Scattering parameters, widely used for small-signal passive and active device characterization, have received only cursory consideration for large-signal nonlinear device characterization due to technical and theoretical issues. We review the theory of S-parameters, active device characterization, and previous efforts to use S-parameters with large-signal nonlinear devices. A robust, calibrated vector-measurement system is used to obtain device scattering parameters as a function of drive level. The unique measurement system architecture allows meaningful scattering parameter measurements of large-signal nonlinear devices, overcoming limitations reported by previous researchers. A three-port S-parameter device model, with a nonlinear reflection coefficient terminating the third port, can be extracted from scattering parameters measured as a function of drive level. This three-port model provides excellent agreement with device measurements across a wide range of drive conditions. The model is used to simulate load-pull data for various drive levels which are compared to measured data.
Master of Science
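For reference, small-signal S-parameters relate the incident and reflected wave amplitudes a_i and b_i at the device ports; for a two-port,

    \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} =
    \begin{pmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{pmatrix}
    \begin{pmatrix} a_1 \\ a_2 \end{pmatrix},

and the difficulty with large-signal nonlinear devices is that the S_ij no longer stay independent of the drive level |a_1|, which is precisely the dependence the measurement system described above characterizes.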
19

Olinger, Renate Ilse. "Comparing vestibular evoked myogenic potential response parameters in young Black African and Caucasian adults." Diss., University of Pretoria, 2016. http://hdl.handle.net/2263/60408.

Abstract:
Objective: The aim of this study was to compare cervical and ocular vestibular evoked myogenic potentials (cVEMP and oVEMP) in young gender- and age-matched black African and Caucasian male and female adults. Design: A quasi-experimental between-subjects research design was utilised. This study was comparative in nature, thus data was collected in a cross-sectional manner from two age- and gender-matched racial groups, namely black African and Caucasian, and compared. Furthermore, interactions of gender and race were also examined in this research study. Methods: Sixty healthy age- and gender-matched participants (30 black African, 30 Caucasian) between the ages of 18 and 25 years participated in this study. Fifteen males and fifteen females, within one year of the age of their racial participant counterparts, were included in each racial group. Latencies, peak-to-peak amplitudes and asymmetry ratios were analysed for both groups in these tests. Furthermore, auditory brainstem response (ABR) and electromyography (EMG) testing were conducted to investigate whether possible racial differences in VEMP tests could be attributed to differences in neural or muscular function. Results: Black African participants demonstrated significantly shorter latencies of the n23 component of the cVEMP and the p15 component of the oVEMP, as well as larger peak-to-peak amplitude of the oVEMP response. Highly significant differences were found in all EMG measurements between the two racial groups, suggesting that these racial VEMP differences are primarily based on differences in muscular function between black Africans and Caucasians. Significant gender differences were observed in all tests conducted, with females predominantly displaying shorter latencies, while males had larger amplitudes. Conclusions: Young black African adults demonstrated significant differences in both cVEMP and oVEMP responses, namely shorter latencies and larger amplitudes, in comparison to young Caucasian adults. Correlations with differences in EMG measurements suggest that these differences are primarily due to differences in muscular function as opposed to neural function. Future research is required to confirm and expand on these findings.
Dissertation (MCommunication Pathology)--University of Pretoria, 2016.
Speech-Language Pathology and Audiology
MCommunication Pathology
Unrestricted
20

Malejan, Christopher John. "An Assessment of Indoor Infiltration Parameters for Black Carbon from Residential Wood Combustion and the Spectral Dependence of Light Absorption for Organic Carbon." DigitalCommons@CalPoly, 2009. https://digitalcommons.calpoly.edu/theses/207.

Abstract:
Black carbon, a proxy for woodsmoke, was measured indoors and outdoors at an occupied residence in Cambria, CA during the winter months of 2009. The purpose was to investigate the infiltration parameters: air exchange rate, deposition rate, and penetration factor. The second part of this study investigated the light absorption properties of organic carbon from residential wood combustion, the dominant fraction of woodsmoke. To assess woodsmoke variation, in a study conducted parallel to the one presented in this thesis (Ward, 2009), a grid array of personal emission monitors (PEMS) and aethalometers was placed in a small area, approximately one square kilometer, within a community in Cambria, California between the months of November 2008 and March 2009. In that study, PEMS were used to collect particles on filters, which were analyzed for tracers of woodsmoke, including levoglucosan, elemental carbon, and organic carbon. Aethalometers measured black carbon, an indicator of carbon combustion. Additional PEMS and aethalometers were placed inside one residential home to better understand infiltration of woodsmoke. To model the infiltration of woodsmoke, the Lawrence Berkeley National Laboratory Air Infiltration Model was used. The home of interest was chosen such that indoor sources of particulate matter (PM) were minimal. This ensures that all PM measured indoors was from outdoor sources, namely household chimneys. While indoor sources such as indoor fires and resuspension of particles were of concern, homes were chosen to minimize these sources. To investigate the infiltration parameters, four different solution techniques were used. Two of the solution techniques used SOLVER, a Microsoft Excel program, to minimize the sum of squared differences between calculated indoor concentrations and measured indoor concentrations, with all three parameters (air exchange rate, penetration, and deposition) as independent variables. The other two solution techniques used the Air Exchange Rate (AER) model from Lawrence Berkeley National Laboratory (LBNL) (Sherman & Grimsrud, 1980) and then used SOLVER to calculate the deposition rate and penetration factor. Solution techniques 1 and 3, which used SOLVER to find all three parameters, had average penetration factors of 0.94 and 0.97 respectively, while solution techniques 2 and 4, which used the LBNL AER model, had average penetration factors of 0.85 and 0.78 respectively. The deposition rates for solution techniques 1, 2, 3, and 4 were 0.10, 0.07, 0.08, and 0.04 hr^-1 respectively. The air exchange rates varied throughout the study and ranged from 0.1 to 0.7 hr^-1. The average indoor/outdoor ratio was found to be 0.75. The aerosols derived from the study samples were found to have light absorption properties that were heavily spectrally dependent, which is consistent with expectations for wood combustion aerosols. Conversely, traffic-derived aerosols are not heavily spectrally dependent and follow the power law relationship of λ^-1, whereas our samples followed λ^-1.7 across all wavelengths and λ^-2.25 for wavelengths less than 600 nm. The reason for the difference in spectral dependence is the presence of light-absorbing organic carbon in wood smoke that is not found in diesel aerosols. The optical absorbances were also calculated for our samples, and average values were found to be 3 and 1 m^2/g for the 370 and 450 nm wavelengths respectively.
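The three infiltration parameters above enter the usual single-zone mass balance (written here in generic notation):

    \frac{dC_{\mathrm{in}}}{dt} = P\,a\,C_{\mathrm{out}} - (a + k)\,C_{\mathrm{in}},

where a is the air exchange rate, P the penetration factor and k the deposition rate. At steady state C_in/C_out = Pa/(a + k); taking representative values from above (P = 0.85, a = 0.4 hr^-1, k = 0.07 hr^-1) gives roughly 0.72, consistent with the reported average indoor/outdoor ratio of 0.75.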
21

Ogbonna, Emmanuel. "A multi-parameter empirical model for mesophilic anaerobic digestion." Thesis, University of Hertfordshire, 2017. http://hdl.handle.net/2299/17467.

Abstract:
Anaerobic digestion (AD), the process by which bacteria break down organic matter to produce biogas (a renewable energy source) and digestate (a biofertiliser) in the absence of oxygen, proves to be the ideal concept not only for sustainable energy provision but also for effective organic waste management. However, the amount of biogas produced falls short of global demand because of underperformance in the systems implementing the AD process. This underperformance is due to the difficulty of obtaining and maintaining the optimal operating parameters/states for anaerobic bacteria to thrive, in particular attaining the specific critical population number that maximises biogas production. The problem persists as a result of insufficient knowledge of the interactions between the operating parameters and the bacterial community, and of the composition of bacterial groups, which varies with changes in operating parameters such as temperature, substrate and retention time. Without sufficient knowledge of the overall impact of the physico-environmental operating parameters on anaerobic bacterial growth and composition, significant improvement of biogas production may be difficult to attain. To mitigate this problem, this study presents nonlinear multi-parameter system modelling of mesophilic AD. It utilised raw data sets generated from laboratory experimentation on the influence of four operating parameters (temperature, pH, mixing speed and pressure) on biogas and methane production, making this a multiple-input single-output (MISO) system. Due to the nonlinear characteristics of the data, a nonlinear black-box modelling technique is applied. The modelling is performed in MATLAB using a System Identification approach. Two nonlinear model structures, nonlinear autoregressive with exogenous input (NARX) and Hammerstein-Wiener (NLHW), with different nonlinearity estimators and model orders chosen by trial and error, are utilised to estimate the models. The performance of the models is determined by comparing the simulated outputs of the estimated models with the output in the validation data; this validates the estimated models by checking how well the simulated output fits the measured output. The best models for biogas and methane production are chosen by comparing the outputs of the best NARX and NLHW models (each for biogas and methane production) against the validation data, and by using the Akaike information criterion to measure the quality of each model relative to the others. The NLHW models mhw2 and mhws2 are chosen for biogas and methane production, respectively, and represent the behaviour of the production of biogas and methane from mesophilic AD. Among all the candidate models studied, the nonlinear models provide a superior reproduction of the experimental data over the whole analysed period. The models constructed in this study cannot, however, be used for scale-up purposes because they do not satisfy the rules and criteria for applying dimensional analysis to scale-up.
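In the standard system-identification notation (general definitions, not specific to this thesis), the two model structures are

    \text{NARX:}\quad y(t) = F\bigl(y(t-1),\ldots,y(t-n_a),\,u(t-n_k),\ldots,u(t-n_k-n_b+1)\bigr) + e(t)

    \text{Hammerstein-Wiener:}\quad w(t) = f(u(t)),\qquad x(t) = \frac{B(q)}{F(q)}\,w(t),\qquad y(t) = h(x(t)),

so NARX wraps a single nonlinear regression F around lagged inputs and outputs, while Hammerstein-Wiener sandwiches a linear transfer function between static input and output nonlinearities f and h.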
22

Tabassum, Javeria. "Analysis of current methods of flexural design for high strength concrete beams." RMIT University. Civil, Environmental & Chemical Engineering, 2008. http://adt.lib.rmit.edu.au/adt/public/adt-VIT20080725.143153.

Abstract:
A considerable amount of research has been carried out into the properties and structural performance of high strength concrete over more than a few decades. Whilst this research has produced relevant and useful results, there are several properties of high strength concrete, like compressive and tensile strengths, stiffness, durability etc., that need to be evaluated and investigated to determine an accurate representation for the determination of different structural properties of beams made of high strength concrete. For this purpose, an investigation into the behaviour of beams made of higher concrete strengths has been carried out and conclusions drawn for the design of high strength concrete beams in flexure. Experimental data from previous research were considered for the study to establish some understanding of the flexural behaviour of HSC beams. A number of spreadsheets in Excel were developed using the available data, and various graphs were plotted to determine the accuracy of the code provisions for calculating the ultimate moment capacity of beams. A study on the flexural ductility of beams has been carried out using a computer program, FRMPHI, which generates moment-curvature curves for the beams. Ductility has been studied using ductility factors. The influence of ductility on the value of the depth of the neutral axis has been analysed and discussed. A chapter on the short-term deflection of simply supported high strength concrete beams under instantaneous deflections is presented. This chapter includes analysis of the available formulae for calculating deflection, to determine whether they can be adopted for high strength concrete. Extensive ongoing research on the shear strength of beams by several researchers over many years has led to the generation of a large body of knowledge. Although each author has analysed the data by comparing them with existing relationships, the whole body of information has not been analysed to establish statistical significance. In this study, regression analysis on experimental data collected from published research is carried out to establish a relationship between the different parameters affecting the shear strength of beams. The level of significance of the association between parameters influencing shear strength is also discussed.
23

Jamdar, Sunil M. "Study of carbon black characteristics and their relations to the process parameters in flash carbonization of coal." Ohio : Ohio University, 1985. http://www.ohiolink.edu/etd/view.cgi?ohiou1184008550.

24

Sandell, Viktor. "Extraction of Material Parameters for Static and Dynamic Modeling of Carbon Black Filled Natural Rubbers." Thesis, Luleå tekniska universitet, Materialvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-65583.

Abstract:
Volvo Car Corporation (Volvo Cars) develops powertrain mounting systems that use components made up largely of filled rubber materials. The development of such components today relies on external suppliers to design components based on requirements set by Volvo. To reduce costs and lead time in the development process, the possibility of in-house design of such components at Volvo Cars is being investigated. For this to be possible, knowledge must be built concerning modelling the mechanical properties of rubber materials. As part of this, a parameter extraction method for modelling of filled rubber materials intended for finite element use has been developed in this project. Both a simple static model fitting procedure and a more complex dynamic model fitting procedure are detailed. Mechanical testing of four filled natural rubber materials with varying hardness was carried out at the facilities of Volvo Cars, and recommendations have been made regarding the limits of the equipment and the specific test body geometry used. It was found that the lower limit for dynamic testing in regards to displacement amplitude is 0.02 mm. The highest frequency recommended is dependent on the material hardness, but an upper limit of 200 Hz is recommended for the softest material investigated. The upper limit was found to be necessary due to inertia effects in the material. The models used to describe the static behaviour were hyperelastic phenomenological models independent of the second invariant, such as the Yeoh and the linear neo-Hookean models. The dynamic model used the overlay method to capture the rate- and amplitude-dependent properties of filled rubber. A generalized viscoelastic-elastoplastic rheological model using Maxwell and friction elements in parallel with a linear elastic element was presented and used. These were limited to a maximum of five of each element, and no attempt at minimizing this number was made in this work. The dynamic model was fitted to experimental data using a minimization procedure focusing on dynamic modulus and damping at a range of frequencies and strain amplitudes. The proposed fitting procedure is a three-segment loop in which FE simulations of the experimental data are used as both a correction and a validation tool. Model validation showed good correlation of the fitted model to measured data before correction was attempted. The correction step did not improve the model quality, and the reason for this was identified as poor post-processing. The proposed method, together with lessons learned during the course of the project, will be of importance for the future in-house development of rubber components at Volvo Cars.
25

Zappalenti, Artur. "Influência dos parâmetros de usinagem no brochamento de um bloco em ferro fundido." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18145/tde-29102007-100845/.

Abstract:
The objective of the present work is to analyze the influence of some machining parameters on the geometric and surface quality of the workpiece. Broaching is an old machining operation, which developed together with the automobile industry but which can nowadays be found in many industrial segments. The process consists of the progressive removal of material by the action of a multi-edge cutting tool. In a single cycle this process can perform both the roughing and the finishing of the part, by virtue of a suitable combination of tool geometry, which is an important characteristic of the process. In the present work, a finish-broaching operation was analyzed in the machining of blocks made of cast iron. This material was chosen because it is widely used in the manufacturing industry due to its good machinability. The results were obtained from measurements with electronic instruments, and statistical analysis was used to evaluate the degree of influence of the factors on the characteristics studied. The results showed that the factors and their combinations influence each analyzed characteristic differently. In most of the situations analyzed, factors that are individually significant for a given characteristic can also influence the result when combined with one another; this is evidenced by the analysis of the results obtained with the diamond tools.
26

Ruy, Roberto da Silva [UNESP]. "Desenvolvimento e validação geométrica de um sistema para mapeamento com câmeras digitais de médio formato." Universidade Estadual Paulista (UNESP), 2008. http://hdl.handle.net/11449/100268.

Abstract:
In recent years there has been a growing use of digital cameras in Photogrammetry, especially small and medium format models, because of the high cost of large format commercial digital systems and their complex image management, storage and processing. Moreover, calibrated small and medium format cameras can provide quality data, and they have several attractive advantages: wide availability in the market; flexibility in the focusing range; they are small, light and easy to handle; and their cost is substantially lower than that of large format digital systems. On the other hand, these camera models still have some limitations concerning the reliability of the interior geometry and the resolution of the sensors. Case studies have shown, however, that these problems can be overcome and that the full potential of this type of sensor can be exploited for thematic, topographic and cadastral mapping of small and medium areas, with great flexibility compared with conventional aerial and orbital sensors. In this context, the aim of this work was the conception, physical implementation and real-world testing of a digital image acquisition system composed of medium format digital cameras integrated with direct orientation sensors, electronic devices, and hardware and software interfaces. Studies, analyses, algorithms and computational programs for block triangulation with additional parameters using direct georeferencing data were also developed for establishing the interior orientation of the cameras that compose the acquisition system...
27

Wang, Cong. "Evaluation of a least-squares radial basis function approximation method for solving the Black-Scholes equation for option pricing." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-183042.

Abstract:
Radial basis function (RBF) approximation is a new, extremely powerful tool that is promising for high-dimensional problems, such as those arising from the pricing of basket options using the Black-Scholes partial differential equation. The main problem for RBF methods has been ill-conditioning as the RBF shape parameter becomes small, corresponding to flat RBFs. This thesis employs a recently developed method called the RBF-QR method to reduce computational cost by improving the conditioning, thereby allowing for the use of a wider range of shape parameter values. Numerical experiments for the one-dimensional case are presented and a MATLAB implementation is provided. In our thesis, the RBF-QR method performs better than the RBF-Direct method for small shape parameters. Using Chebyshev points, instead of a standard uniform distribution, can increase the accuracy through clustering of the nodes towards the boundary. The least squares formulation for RBF methods is preferable to the collocation approach because it can result in smaller errors for the same number of basis functions.
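A minimal sketch of the least-squares RBF idea with Gaussian basis functions (this is plain RBF-Direct with a fixed shape parameter; the RBF-QR change of basis is considerably more involved and is not reproduced here, and the node counts and shape parameter are illustrative):

    import numpy as np

    def rbf_lsq_fit(centers, x_data, y_data, eps):
        """Least-squares fit with Gaussian RBFs: more data points than basis
        functions gives an overdetermined system, solved here by lstsq."""
        A = np.exp(-(eps * (x_data[:, None] - centers[None, :])) ** 2)
        coeffs, *_ = np.linalg.lstsq(A, y_data, rcond=None)
        return coeffs

    def rbf_eval(centers, coeffs, x, eps):
        return np.exp(-(eps * (x[:, None] - centers[None, :])) ** 2) @ coeffs

    centers = np.cos(np.linspace(0.0, np.pi, 10))   # Chebyshev-clustered nodes in [-1, 1]
    x_data = np.linspace(-1.0, 1.0, 50)
    y_data = np.maximum(x_data, 0.0)                # kink, like a call payoff
    c = rbf_lsq_fit(centers, x_data, y_data, eps=3.0)
    print(np.abs(rbf_eval(centers, c, x_data, eps=3.0) - y_data).max())

The Chebyshev-type clustering of the centres toward the endpoints mirrors the accuracy gain from boundary clustering noted in the abstract.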
APA, Harvard, Vancouver, ISO, and other styles
28

Ramos, Buades Antoni. "Gravitacional waves from generic binary black holes: from numerical simulations to observational results." Doctoral thesis, Universitat de les Illes Balears, 2020. http://hdl.handle.net/10803/671467.

Full text
Abstract:
This thesis gathers all the work done in my last four years of research focused on the production of numerical relativity simulations of generic binary black holes, as well as the analysis of the gravitational waveforms from these simulations and their implications for searches and parameter estimation on those systems. I started by studying the prescription of initial parameters in numerical relativity simulations. A well-known problem in numerical relativity is the difficulty of obtaining simulations of black holes orbiting in quasi-circular orbits, due to inaccuracies of the initial data which cause elliptical orbits with residual eccentricity. The first project of the thesis has been the development of a simple, iterative and computationally efficient procedure to reduce the eccentricity in binary black hole numerical relativity simulations, see Chap. 4. With this method we have produced quasi-circular waveforms with negligible eccentricity, e ∼ O(10⁻⁴), which have been used in our group to generate quasi-circular waveform models. The flexibility of the method permits not only reducing the eccentricity, but also increasing it. Using this fact I have produced a data set of more than 60 numerical relativity simulations with moderate eccentricity e ≲ 0.5. This has been the second project of the thesis, see Chap. 5. Taking this set of simulations, with collaborators I have generated hybrid waveforms for the dominant (2,2) mode between post-Newtonian and numerical relativity waveforms. Moreover, we have estimated the limitations of the current quasi-circular waveform models for estimating the parameters of those sources. We have found that the quasi-circular models which include higher-order modes reduce the bias in some parameters, like the mass ratio and luminosity distance, with respect to models not including higher-order modes. Furthermore, during the Ph.D. I have also studied the limitations of two approximations commonly used by precessing quasi-circular waveform models, see Chap. 6. These two approximations have been analysed using exclusively numerical relativity simulations including higher-order modes. The results confirm the good performance of the approximations for the (2,2) modes, while one observes a clear degradation for higher-order modes for reasons that depend on the considered mode. For instance, the (2,1) modes are found to be very sensitive to asymmetries which the approximations neglect, while the (4,3) and (3,2) modes have mode-mixing in the ringdown part which is not properly taken into account by the simple approximations. Finally, with collaborators I have analysed the sensitivity of two search pipelines, used by the LIGO and Virgo collaborations during the O2 Science Run, to the full gravitational-wave signal of eccentric binary black holes, see Chap. 7. In this preliminary work we have quantified the impact of eccentricity on two search pipelines: a matched-filter algorithm and an unmodeled search algorithm. We have for the first time estimated the sensitivity of both algorithms by injecting eccentric signals computed from numerical relativity simulations. The results show a larger degradation of the sensitivity of the matched-filter algorithm with increasing eccentricity, while the sensitivity of the unmodeled search remains largely unaffected by the increase of eccentricity; we therefore consider the latter a robust tool to detect such eccentric signals.
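The flavour of such an eccentricity-reduction iteration can be conveyed with a toy calculation. The sketch below (the synthetic signal and the update rule are simplified assumptions, not the procedure of Chap. 4) estimates the eccentricity from the oscillatory residual of the orbital frequency, which an iterative scheme would then use to correct the initial momenta and re-run a short evolution.

import numpy as np
from scipy.optimize import curve_fit

# Synthetic orbital-frequency track: slow secular chirp plus an
# eccentricity-induced oscillation near the orbital frequency.
t = np.linspace(0.0, 500.0, 2000)
omega0, e_true = 0.02, 0.01
omega = omega0 * (1 + 1e-4 * t) + 2 * e_true * omega0 * np.cos(omega0 * t + 0.3)

def model(t, a0, a1, amp, freq, phi):
    # secular part + eccentricity oscillation
    return a0 + a1 * t + amp * np.cos(freq * t + phi)

p, _ = curve_fit(model, t, omega, p0=[omega0, 1e-6, 1e-4, omega0, 0.0])
e_est = abs(p[2]) / (2.0 * p[0])      # e ~ oscillation amplitude / (2 omega)
print(f"estimated eccentricity: {e_est:.4f}")
# One would now rescale the initial tangential momentum accordingly and
# iterate until e_est drops below the target (~1e-4 in the thesis).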
APA, Harvard, Vancouver, ISO, and other styles
29

Bryntesen, Rikke Nornes. "Laboratory investigation on salt migration and its effect on the geotechnical strength parameters in quick clay mini-block samples from Dragvoll." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for geologi og bergteknikk, 2014. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-27130.

Full text
Abstract:
Investigations of the effect of salt diffusion as ground improvement of quick clay are important to provide a clear understanding of the method and to validate its potential for commercial use. Previous extensive laboratory investigations have been carried out. Laboratory investigations of salt migration lead to extended storage times for the clay samples, as diffusion in clays is considered a slow process. The objective of the work is to analyze the effect of KCl diffusion in relation to potential weathering and storage effects. The effect is analyzed in the laboratory with regard to geotechnical properties as well as variations in pH and salinity. A literature study of previous findings related to salt migration and storage effects on both geotechnical properties and geochemistry is also included. A series of mini-block samples are submerged in cells containing deaired, ionized water and deaired KCl solution. The samples are stored in the cells for periods of 42 to 102 days. The samples are investigated in the laboratory to provide results on undisturbed and remoulded strength parameters, compressibility and general geotechnical properties; pH and salinity are also recorded. The findings are compared to a detailed depth profile of reference parameters from a previous investigation done on mini-block samples from the same borehole. Tests are also carried out in sections throughout the water- and salt-treated clays, to map changes with regard to the time-dependent diffusion and weathering. The results indicated a general increase in peak undrained shear strength for the samples stored in KCl solution; approximately 50% of the observed general increase is also observed in the clay stored in water. A minor increase of the plasticity limit is seen in both the water- and salt-treated samples. The comparison between sections for each sample shows no clear deviations between the sections. Further, results from the salt-migrated samples confirm the findings from previous investigations. The samples are to some extent affected by weathering, in particular the peak undrained shear strength resulting from triaxial tests. However, the geotechnical properties observed after salt migration are changed to such an extent that the clay shows a completely different behavior. The same distinct change is not observed in the clays only exposed to potential weathering. The comparability between the samples is, however, considered to be relatively low due to inhomogeneity in the soil profile and varying storage time prior to testing and cell installation. Loss of samples during sampling also led to occasionally large differences in depth between the compared results.
APA, Harvard, Vancouver, ISO, and other styles
30

Pfab, Jonathan Francis. "Thermal Analysis of the Detector in the Radiation Budget Instrument (RBI)." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/82035.

Full text
Abstract:
Earth radiation budget instruments are devices designed to study global climate change. These instruments use telescopes embarked on low-earth-orbit satellites to measure Earth-emitted and reflected solar radiation. Radiation is sensed on a delicate gold-black coated detector as temperature changes caused by the radiation absorbed during scans of the earth. This work is part of a larger effort to develop an end-to-end dynamic electro-thermal model, based on first principles, for the next generation of earth radiation budget instruments, the Radiation Budget Instrument (RBI). A primary objective of this effort is to develop a numerical model of the detector to be used on RBI. Specifically, the sensor model converts radiation arriving at the detector, collimated and focused through telescopes, into sensible heat, thereby producing a voltage. A mathematical model characterizing this sensor is developed. Using a MATLAB algorithm, an implicit finite-volume scheme is implemented to determine the model solution. Model parameters are tuned to replicate experimental data using a robust parameter estimation scheme. With these model parameters defined, the electro-thermal sensor model can be used, in conjunction with the remaining components of the end-to-end model, to provide insight for future interpretation of data produced by the RBI.
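As an illustration of the kind of implicit finite-volume scheme mentioned here, the sketch below marches 1-D transient conduction through a thin detector slab with an absorbed radiative flux on one face. The geometry, material properties and flux are invented placeholders, not RBI values.

import numpy as np

N, L = 50, 1e-4                 # cells, slab thickness [m] (assumed)
dx, dt = L / N, 1e-5            # cell size, time step
alpha = 1e-6                    # thermal diffusivity [m^2/s] (assumed)
q_abs = 5e3                     # absorbed radiative flux [W/m^2] (assumed)
rho_c = 2.5e6                   # volumetric heat capacity [J/(m^3 K)]

# Implicit (backward Euler) finite-volume matrix for T_t = alpha T_xx.
r = alpha * dt / dx**2
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 1 + 2 * r
    if i > 0: A[i, i - 1] = -r
    if i < N - 1: A[i, i + 1] = -r
A[0, 0] = A[-1, -1] = 1 + r     # zero-flux (insulated) boundary cells

T = np.zeros(N)                 # temperature rise above ambient [K]
for _ in range(200):            # march in time
    b = T.copy()
    b[0] += dt * q_abs / (rho_c * dx)   # flux absorbed in the surface cell
    T = np.linalg.solve(A, b)
print(f"surface temperature rise: {T[0]:.3e} K")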
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
31

Olof, Skogby Steinholtz. "A Comparative Study of Black-box Optimization Algorithms for Tuning of Hyper-parameters in Deep Neural Networks." Thesis, Luleå tekniska universitet, Institutionen för teknikvetenskap och matematik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-69865.

Full text
Abstract:
Deep neural networks (DNNs) have successfully been applied across various data-intensive applications ranging from computer vision, language modeling and bioinformatics to search engines. Hyper-parameters of a DNN are defined as parameters that remain fixed during model training and heavily influence the DNN performance. Hence, regardless of application, the design phase of constructing a DNN model becomes critical. Framing the selection and tuning of hyper-parameters as an expensive black-box optimization (BBO) problem, obstacles encountered in manual by-hand tuning can be addressed by taking an automated algorithmic approach instead. In this work, the following BBO algorithms: Nelder-Mead Algorithm (NM), Particle Swarm Optimization (PSO), Bayesian Optimization with Gaussian Processes (BO-GP) and Tree-structured Parzen Estimator (TPE), are evaluated side-by-side for two hyper-parameter optimization problem instances. These instances are: Problem 1, incorporating a convolutional neural network, and Problem 2, incorporating a recurrent neural network. A simple Random Search (RS) algorithm acting as a baseline for performance comparison is also included in the experiments. Results in this work show that the TPE algorithm achieves the overall highest performance with respect to mean solution quality and speed of improvement, with a comparatively low trial-to-trial variability, for both Problem 1 and Problem 2. The NM, PSO and BO-GP algorithms are shown capable of outperforming the RS baseline for Problem 1, but fail to do so in Problem 2.
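The experimental setup, treating tuning as expensive BBO, can be sketched with the hyperopt package, which exposes TPE and random search behind a common interface. The two-parameter quadratic objective below is a stand-in for training a DNN, and the search space is an assumption, not the thesis' actual problems.

import numpy as np
from hyperopt import fmin, tpe, rand, hp, Trials

# Stand-in objective: pretend "validation loss" as a function of two
# hyper-parameters (learning rate, dropout). A real study trains a DNN here.
def objective(params):
    lr, drop = params["lr"], params["drop"]
    return (np.log10(lr) + 3) ** 2 + (drop - 0.2) ** 2

space = {
    "lr": hp.loguniform("lr", np.log(1e-5), np.log(1e-1)),
    "drop": hp.uniform("drop", 0.0, 0.8),
}

for algo, name in [(tpe.suggest, "TPE"), (rand.suggest, "Random Search")]:
    trials = Trials()
    best = fmin(objective, space, algo=algo, max_evals=50, trials=trials)
    print(name, "best loss:", min(trials.losses()), "best params:", best)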
APA, Harvard, Vancouver, ISO, and other styles
32

Masipa, Mochaki Deborah. "The effects of a South African Black youth jive on selected biophysical physiological and psycho-social parameters." Thesis, Rhodes University, 1989. http://hdl.handle.net/10962/d1015682.

Full text
Abstract:
This study investigated the effects of a South African Black youth jive on selected biophysical, physiological and psycho-social parameters, using 31 Black youths, males and females (mean age 19.29 yrs), as subjects. All subjects participated in the pre- and post-programme testing protocols (acting as their own control) and in a 7-week jive programme. While the female subjects were significantly (p<0.05) heavier with a greater percentage body fat than their male counterparts, a two-factor analysis of variance revealed no significant changes (p<0.05) in body composition of either sex group. However, significant improvements did occur in the cardio-respiratory parameters of working and recovery heart rates, predicted VO₂ max, and anaerobic capacity. Here, the males exhibited superior cardio-respiratory qualities and performed better in all motor fitness parameters except flexibility, where no significant sex difference occurred. Also, there were significant improvements in all motor fitness tests with the exception of power (as tested in the 18-Item Illinois test). No significant differences occurred between male and female psycho-social responses, with no changes occurring after the 7-week programme. It can be concluded that involvement in the 7-week jive programme improved physiological parameters but failed to bring about alterations in the biophysical and psycho-social domains.
APA, Harvard, Vancouver, ISO, and other styles
33

Fatuzzo, Marco, and Fulvio Melia. "Unseen Progenitors of Luminous High-z Quasars in the Rh = ct Universe." IOP PUBLISHING LTD, 2017. http://hdl.handle.net/10150/625805.

Full text
Abstract:
Quasars at high redshift provide direct information on the mass growth of supermassive black holes (SMBHs) and, in turn, yield important clues about how the universe evolved since the first (Pop III) stars started forming. Yet even basic questions regarding the seeds of these objects and their growth mechanism remain unanswered. The anticipated launch of eROSITA and ATHENA is expected to facilitate observations of high-redshift quasars needed to resolve these issues. In this paper, we compare accretion-based SMBH growth in the concordance ΛCDM model with that in the alternative Friedmann-Robertson-Walker cosmology known as the R_h = ct universe. Previous work has shown that the timeline predicted by the latter can account for the origin and growth of the ≳10⁹ M☉ highest-redshift quasars better than that of the standard model. Here, we significantly advance this comparison by determining the soft X-ray flux that would be observed for Eddington-limited accretion growth as a function of redshift in both cosmologies. Our results indicate that a clear difference emerges between the two in terms of the number of detectable quasars at redshift z ≳ 7, raising the expectation that the next decade will provide the observational data needed to discriminate between these two models based on the number of detected high-redshift quasar progenitors. For example, while the upcoming ATHENA mission is expected to detect ∼0.16 (i.e., essentially zero) quasars at z ∼ 7 in R_h = ct, it should detect ∼160 in ΛCDM, a quantitatively compelling difference.
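The accretion-based growth comparison rests on the standard Eddington-limited e-folding argument; a quick worked sketch follows, where the seed mass, radiative efficiency and available growth time are illustrative numbers, not the paper's.

import numpy as np

# Eddington-limited growth: M(t) = M_seed * exp[(1 - eps)/eps * t / t_Edd],
# with t_Edd = M c^2 / L_Edd, about 0.45 Gyr (the Salpeter timescale).
eps = 0.1            # radiative efficiency (common assumption)
t_edd_gyr = 0.45     # Eddington/Salpeter timescale in Gyr
m_seed = 1e2         # Pop III seed mass in solar masses (assumed)
t = 0.8              # available growth time in Gyr (model dependent)

m_final = m_seed * np.exp((1 - eps) / eps * t / t_edd_gyr)
print(f"final mass after {t} Gyr: {m_final:.2e} Msun")
# ~1e9 Msun quasars at z ~ 7 therefore require either more cosmic time
# (as in R_h = ct), heavier seeds, or super-Eddington phases in LCDM.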
APA, Harvard, Vancouver, ISO, and other styles
34

Sobrinho, José Laurindo de Góis Nóbrega. "The possibility of primordial black hole direct detection." Doctoral thesis, Universidade da Madeira, 2011. http://hdl.handle.net/10400.13/235.

Full text
Abstract:
This thesis explores the possibility of directly detecting blackbody emission from Primordial Black Holes (PBHs). A PBH might form when a cosmological density fluctuation with wavenumber k, that was once stretched to scales much larger than the Hubble radius during inflation, reenters the Hubble radius at some later epoch. By modeling these fluctuations with a running-tilt power-law spectrum (n(k) = n0 + a1(k)n1 + a2(k)n2 + a3(k)n3; n0 = 0.951; n1 = −0.055; n2 and n3 unknown) each pair (n2, n3) gives a different n(k) curve with a maximum value (n+) located at some instant (t+). The (n+, t+) parameter space [(1.20, 10⁻²³ s) to (2.00, 10⁹ s)] has t+ = 10⁻²³ s–10⁹ s and n+ = 1.20–2.00 in order to encompass the formation of PBHs in the mass range 10¹⁵ g–10¹⁰ M☉ (from the ones exploding at present to the most massive known). It was evenly sampled: n+ every 0.02; t+ every order of magnitude. We thus have 41 × 33 = 1353 different cases. However, 820 of these (≈61%) are excluded (because they would provide a PBH population large enough to close the Universe) and we are left with 533 cases for further study. Although only sub-stellar PBHs (≲1 M☉) are hot enough to be detected at large distances, we studied PBHs with 10¹⁵ g–10¹⁰ M☉ and determined how many might have formed and still exist in the Universe. Thus, for each of the 533 (n+, t+) pairs we determined the fraction of the Universe going into PBHs at each epoch (β), the PBH density parameter (Ω_PBH), the PBH number density (n_PBH), the total number of PBHs in the Universe (N), and the distance to the nearest one (d). As a first result, 14% of these (72 cases) give, at least, one PBH within the observable Universe, one-third being sub-stellar and the remaining evenly splitting into stellar, intermediate mass and supermassive. Secondly, we found that the nearest stellar mass PBH might be at ∼32 pc, while the nearest intermediate mass and supermassive PBHs might be 100 and 1000 times farther, respectively. Finally, for 6% of the cases (four in 72) we might have sub-stellar mass PBHs within 1 pc. One of these cases implies a population of 10⁵ PBHs, with a mass of 10¹⁸ g (similar to Halley's comet), within the Oort cloud, which means that the nearest PBH might be as close as 10³ AU. Such a PBH could be directly detected with a probability of 10⁻²¹ (cf. 10⁻³² for low-energy neutrinos). We speculate on this possibility.
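The detectability argument rests on PBH blackbody (Hawking) emission; a quick sketch of the textbook Hawking temperature and geometric-optics blackbody luminosity is given below, with the example masses chosen to match the abstract's extremes (the formulas are standard, the code is an illustration rather than the thesis' calculation).

import numpy as np

hbar, c, G, kB = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
sigma = 5.670e-8                  # Stefan-Boltzmann constant [W m^-2 K^-4]

def hawking_temperature(m_kg):
    # T = hbar c^3 / (8 pi G M kB)
    return hbar * c**3 / (8 * np.pi * G * m_kg * kB)

def hawking_luminosity(m_kg):
    # Blackbody emission from the horizon area, A = 4 pi r_s^2
    r_s = 2 * G * m_kg / c**2
    return 4 * np.pi * r_s**2 * sigma * hawking_temperature(m_kg) ** 4

for m_g in (1e15, 1e18):          # grams: exploding-now vs Oort-cloud case
    m = m_g * 1e-3
    print(f"M = {m_g:.0e} g: T = {hawking_temperature(m):.2e} K, "
          f"L = {hawking_luminosity(m):.2e} W")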
Pedro Manuel Edmond Reis da Silva Augusto
APA, Harvard, Vancouver, ISO, and other styles
35

Österlöf, Rickard. "Modelling of the Fletcher-Gent effect and obtaining hyperelastic parameters for filled elastomers." Licentiate thesis, KTH, MWL Strukturakustik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-151304.

Full text
Abstract:
The strain-amplitude dependency, i.e. the Fletcher-Gent and Payne effects, and the strain-rate dependency of rubber with reinforcing fillers are modelled using a modified boundary surface model and implemented uniaxially. In this thesis, a split of strain instead of stress is utilized, and the storage and loss modulus are captured over two decades of both strain amplitudes and frequencies. In addition, experimental results from bimodal excitation are replicated well, even though material parameters were obtained solely from harmonic excitation. These results are encouraging since the superposition principle is not valid for filled rubber, and real-life operational conditions in general contain several harmonics. This means that formulating constitutive equations in the frequency domain is a cumbersome task, and therefore the derived model is implemented in the time domain. Filled rubber is irreplaceable in several engineering solutions, such as tires, bushings, vibration isolators, seals and tread belts, to name just a few. In certain applications, it is sufficient to model the elastic properties of a component during finite strains. However, Hooke's law is inadequate for this task; instead, hyperelastic material models are used. Finally, the thesis presents a methodology for obtaining the required material parameters utilizing experiments in pure shear, uniaxial tension and the inflation of a rubber membrane. It is argued that the unloading curve rather than the loading curve is more suitable for obtaining these parameters, even at very low strain rates.
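As a toy illustration of obtaining hyperelastic parameters from test data, a least-squares fit of an incompressible Mooney-Rivlin model to uniaxial data can be sketched as below. The synthetic data and parameter values are assumptions; the thesis fits pure shear, uniaxial tension and membrane inflation jointly, preferring the unloading curve.

import numpy as np
from scipy.optimize import curve_fit

def mooney_rivlin_uniaxial(lam, c10, c01):
    # Nominal (engineering) stress for incompressible uniaxial tension
    return 2 * (lam - lam ** -2) * (c10 + c01 / lam)

# Synthetic "unloading curve" data in place of lab measurements [MPa].
lam = np.linspace(1.0, 2.5, 30)
stress = mooney_rivlin_uniaxial(lam, 0.3, 0.05)
stress += np.random.default_rng(1).normal(0, 0.005, lam.size)

(c10, c01), _ = curve_fit(mooney_rivlin_uniaxial, lam, stress, p0=[0.1, 0.1])
print(f"fitted C10 = {c10:.3f} MPa, C01 = {c01:.3f} MPa")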

QC 20140917

APA, Harvard, Vancouver, ISO, and other styles
36

Newsome, Shaun. "Body Mass Index as a Parameter to Evaluate the Prevalence of Hypertension in NH White, NH Black, and Hispanic Americans." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/iph_theses/226.

Full text
Abstract:
Over the past 30 years, obesity has been primarily identified by the body mass index (BMI). Due to its ease of calculation, the BMI has become the most widely used diagnostic tool to identify weight problems. This study examined the association between hypertension and BMI in Whites, Blacks, and Hispanics in the United States. The study's hypothesis was that this relationship is weaker in Blacks than in the other groups. Data for the study came from the 2007-2008 and 2009-2010 National Health and Nutrition Examination Surveys. The association was weaker in Black men than in Whites or Hispanics on a univariate basis, and at most BMI levels on a multivariate basis. For females, it was also weaker in Blacks at most BMI levels on a univariate basis. However, multivariate logistic regression analysis did not indicate that the hypothesis held for Black women when covariates were added to the models.
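The kind of multivariate logistic model used in such NHANES analyses can be sketched as follows; the data are synthetic and the variable names, coefficients and covariates are illustrative assumptions, not the study's actual model.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
bmi = rng.normal(28, 5, n)                       # body mass index
age = rng.uniform(20, 75, n)                     # example covariate
logit = -6 + 0.12 * bmi + 0.04 * age             # assumed true model
hypertension = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([bmi, age]))
fit = sm.Logit(hypertension.astype(float), X).fit(disp=0)
print(fit.params)                                # log-odds per unit BMI, age
print("odds ratio per BMI unit:", np.exp(fit.params[1]))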
APA, Harvard, Vancouver, ISO, and other styles
37

Trias, Cornellana Miquel. "Gravitational wave observation of compact binaries: Detection, parameter estimation and template accuracy." Doctoral thesis, Universitat de les Illes Balears, 2011. http://hdl.handle.net/10803/37402.

Full text
Abstract:
In this PhD thesis the possibility of direct detection of gravitational waves emitted by comparable-mass compact binary objects (black holes, neutron stars, white dwarfs) is studied from the data analysis perspective. In the introductory chapters, a) a detailed and exhaustive description of how to derive the detected strain from the theoretically predicted emitted waveform is given; b) the most commonly used gravitational-wave data analysis tools are derived, with particular attention to the discussion of effective and characteristic amplitudes. The original results of the thesis follow three different research lines: 1) The parameter estimation accuracy (position, masses, spin, cosmological parameters…) achievable for supermassive black hole binary inspiral signals observed with the future space-based interferometric detector LISA has been predicted. 2) A new algorithm, based on Bayesian probability and MCMC techniques, has been developed to search for gravitational-wave signals from stellar-mass binary systems. The algorithm is able to distinguish thousands of overlapping signals in a single observed time series, allowing for individual parameter extraction. 3) It has been defined, mathematically and rigorously, how to compute the validity range (for parameter estimation and detection purposes) of approximated gravitational waveform models, applying it to the particular case of closed-form models.
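Forecasts of this kind typically rest on the Fisher-matrix recipe; a self-contained toy version for a monochromatic signal in white noise is sketched below. Real LISA analyses use frequency-domain inner products weighted by the instrument noise spectrum, which this flat-noise toy omits, so treat it as the recipe only.

import numpy as np

# Toy signal h(t; A, f, phi) = A sin(2 pi f t + phi) in white noise of
# per-sample variance sigma^2; Fisher matrix Gamma_ij = (dh/dtheta_i | dh/dtheta_j).
t = np.linspace(0, 10, 4000)
A, f, phi, sigma = 1.0, 3.0, 0.5, 0.1

def h(A, f, phi):
    return A * np.sin(2 * np.pi * f * t + phi)

eps = 1e-6
params = np.array([A, f, phi])
derivs = []
for i in range(3):                       # central-difference derivatives
    p1, p2 = params.copy(), params.copy()
    p1[i] += eps; p2[i] -= eps
    derivs.append((h(*p1) - h(*p2)) / (2 * eps))

gamma = np.array([[np.sum(di * dj) / sigma**2 for dj in derivs] for di in derivs])
errors = np.sqrt(np.diag(np.linalg.inv(gamma)))  # 1-sigma forecasts
print("1-sigma errors on (A, f, phi):", errors)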
APA, Harvard, Vancouver, ISO, and other styles
38

DeBlasio, Dan, and John Kececioglu. "Core column prediction for protein multiple sequence alignments." BIOMED CENTRAL LTD, 2017. http://hdl.handle.net/10150/623957.

Full text
Abstract:
Background: In a computed protein multiple sequence alignment, the coreness of a column is the fraction of its substitutions that are in so-called core columns of the gold-standard reference alignment of its proteins. In benchmark suites of protein reference alignments, the core columns of the reference alignment are those that can be confidently labeled as correct, usually due to all residues in the column being sufficiently close in the spatial superposition of the known three-dimensional structures of the proteins. Typically the accuracy of a protein multiple sequence alignment that has been computed for a benchmark is only measured with respect to the core columns of the reference alignment. When computing an alignment in practice, however, a reference alignment is not known, so the coreness of its columns can only be predicted. Results: We develop for the first time a predictor of column coreness for protein multiple sequence alignments. This allows us to predict which columns of a computed alignment are core, and hence better estimate the alignment's accuracy. Our approach to predicting coreness is similar to nearest-neighbor classification from machine learning, except we transform nearest-neighbor distances into a coreness prediction via a regression function, and we learn an appropriate distance function through a new optimization formulation that solves a large-scale linear programming problem. We apply our coreness predictor to parameter advising, the task of choosing parameter values for an aligner's scoring function to obtain a more accurate alignment of a specific set of sequences. We show that for this task, our predictor strongly outperforms other column-confidence estimators from the literature, and affords a substantial boost in alignment accuracy.
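The predictor described here combines nearest-neighbor distances with a regression mapping; a minimal stand-in using scikit-learn is sketched below. The feature construction and the learned distance function are the paper's actual contribution and are replaced here by invented features and a plain Euclidean distance.

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
# Toy "column feature vectors" (e.g. residue-class frequencies, gap fraction)
# paired with coreness labels in [0, 1] from reference alignments.
X_train = rng.random((500, 8))
y_train = np.clip(X_train[:, 0] - 0.5 * X_train[:, 1]
                  + rng.normal(0, 0.05, 500), 0, 1)

# Distance-weighted regression over nearest neighbors stands in for the
# learned distance-to-coreness transformation of the paper.
model = KNeighborsRegressor(n_neighbors=15, weights="distance")
model.fit(X_train, y_train)

X_new = rng.random((3, 8))        # columns of a newly computed alignment
print("predicted coreness:", model.predict(X_new))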
APA, Harvard, Vancouver, ISO, and other styles
39

Gong, Wei 1981. "Theoretical investigations of terascale physics." Thesis, University of Oregon, 2009. http://hdl.handle.net/1794/10339.

Full text
Abstract:
xv, 177 p. : ill. A print copy of this thesis is available through the UO Libraries. Search the library catalog for the location and call number.
In this dissertation, three different topics related to terascale physics are explored. First, a new method is suggested to match next-to-leading order (NLO) scattering matrix elements with parton showers. This method is based on the original approach which adds primary parton splittings in Born-level Feynman graphs in order to remove several types of infrared divergent subtractions from the NLO calculation. The original splitting functions are modified so that parton showering has a less severe effect on the jet structure of the generated events. We also examine the Large Hadron Collider phenomenology of quantum black holes in models of TeV scale gravity. Based on a few minimal assumptions, such as the conservation of color charges, interesting signatures are identified that should be readily visible above the Standard Model background. The detailed phenomenology depends heavily on whether one requires a Lorentz invariant, low-energy effective field theory description of black hole processes. Finally, in the calculation of cross sections in high energy collisions at NLO, one option is to perform all of the integrations, including the virtual loop integration, by Monte Carlo numerical integration. A new method is developed to perform the loop integration directly, without introducing Feynman parameters, after suitably deforming the integration contour. Our example is the N-photon scattering amplitude with a massless electron loop. Results for six photons and eight photons are reported.
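The third topic's key trick, deforming the integration contour away from singularities, can be shown in one dimension: for the integral of 1/(x - 0.5 + i0⁺) over [0, 1] the pole lies just below the real axis, so bending the path into the upper half-plane gives a smooth integrand whose numerical value approaches the analytic answer -iπ. The deformation z(x) = x + iλx(1-x) is a common textbook choice, not the dissertation's multi-dimensional construction.

import numpy as np

lam = 0.3                                   # deformation amplitude
x = np.linspace(0.0, 1.0, 20001)
z = x + 1j * lam * x * (1.0 - x)            # contour bent into upper half-plane
dz = 1.0 + 1j * lam * (1.0 - 2.0 * x)       # dz/dx along the contour
integrand = dz / (z - 0.5)                  # pole at 0.5 - i*eps stays below us
# Trapezoid rule along the deformed contour.
val = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
print(val)                                  # ~ -3.1416j, i.e. -i*pi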
Committee in charge: Stephen Hsu, Chairperson, Physics; Graham Kribs, Member, Physics; David Strom, Member, Physics; Davison Soper, Member, Physics; Marina Guenza, Outside Member, Chemistry
APA, Harvard, Vancouver, ISO, and other styles
40

Griesemer, Rebecca Lynn. "Index of Central Obesity as a Parameter to Evaluate Metabolic Syndrome for White, Black, and Hispanic Adults in the United States." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/iph_theses/42.

Full text
Abstract:
Metabolic syndrome is a cluster of disorders including central obesity, hypertension, dyslipidemia, and hyperglycemia. Today's metabolic syndrome definitions identify central obesity by waist circumference (WC) measurements. A recent pilot study suggests that cut-points derived from a waist-to-height ratio (WHtR), or Index of Central Obesity (ICO), is a more accurate measurement of central obesity. This study compared the association between the metabolic syndrome components and central obese parameters (ICO and WC) among the white, black, and Hispanic adults in the United States. The subjects' data was obtained from the 2005-2006 National Health and Nutrition Examination Survey. ICO was highly correlated with metabolic syndrome components among white subjects and the least correlated in Hispanic subjects. Multivariate logistic regression analysis did not indicate that ICO was a better parameter for metabolic syndrome than WC. Other WHtR cut-points may be more sensitive in predicting metabolic syndrome components than the values used in this study.
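The ICO itself is just the waist-to-height ratio; a tiny worked example with the commonly proposed 0.5 cut-point follows (the cut-point and sample values are illustrative, not the cut-points examined in the study).

def index_of_central_obesity(waist_cm: float, height_cm: float) -> float:
    """Waist-to-height ratio (WHtR), also called the ICO."""
    return waist_cm / height_cm

# Example: waist 94 cm, height 175 cm gives ICO ~ 0.537
ico = index_of_central_obesity(94, 175)
print(f"ICO = {ico:.3f}; centrally obese (>= 0.5): {ico >= 0.5}")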
APA, Harvard, Vancouver, ISO, and other styles
41

Griesemer, Rebecca. "Index of central obesity as a parameter to evaluate metabolic syndrome for white, black, and hispanic adults in the United States." unrestricted, 2008. http://etd.gsu.edu/theses/available/etd-07232008-232710/.

Full text
Abstract:
Thesis (M.Ph.)--Georgia State University, 2008.
Title from file title page. Ike Okosun, committee chair; Richard Rothenberg, Rodney Lyn, committee members. Electronic text (73 p.) : digital, PDF file. Description based on contents viewed November 25, 2008. Includes bibliographical references (p. 69-73).
APA, Harvard, Vancouver, ISO, and other styles
42

Grosfils, Aline. "First principles and black box modelling of biological systems." Doctoral thesis, Universite Libre de Bruxelles, 2007. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210677.

Full text
Abstract:
Living cells and their components play a key role in the biotechnology industry. Cell cultures and their products of interest are used in the design of vaccines as well as in the agro-alimentary field. In order to ensure the optimal operation of such bioprocesses, understanding the complex mechanisms which rule them is fundamental. Mathematical models may be helpful to grasp the biological phenomena which intervene in a bioprocess. Moreover, they allow prediction of system behaviour and are frequently used within engineering tools to ensure, for instance, product quality and reproducibility.

Mathematical models of cell cultures may come in various shapes and be phrased with varying degrees of mathematical formalism. Typically, three main model classes are available to describe the nonlinear dynamic behaviour of such biological systems. They consist of macroscopic models which only describe the main phenomena appearing in a culture. Indeed, a high model complexity may lead to long numerical computation time incompatible with engineering tools like software sensors or controllers. The first model class is composed of the first principles or white box models. They consist of the system of mass balances for the main species (biomass, substrates, and products of interest) involved in a reaction scheme, i.e. a set of irreversible reactions which represent the main biological phenomena occurring in the considered culture. Whereas transport phenomena inside and outside the cell culture are often well known, the reaction scheme and associated kinetics are usually a priori unknown, and require special care for their modelling and identification. The second kind of commonly used models belongs to black box modelling. Black boxes consider the system to be modelled in terms of its input and output characteristics. They consist of mathematical function combinations which do not allow any physical interpretation. They are usually used when no a priori information about the system is available. Finally, hybrid or grey box modelling combines the principles of white and black box models. Typically, a hybrid model uses the available prior knowledge while the reaction scheme and/or the kinetics are replaced by a black box, an Artificial Neural Network for instance.
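For illustration, a minimal white-box culture model with Monod kinetics might look as follows; in the hybrid (grey-box) variant described above, the assumed Monod law would be replaced by a neural network trained on data. All parameter values here are invented.

import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y = 0.4, 0.5, 0.5      # invented kinetic parameters

def monod(s):
    # White-box kinetic law; a hybrid model swaps this for an ANN.
    return mu_max * s / (K_s + s)

def mass_balances(t, y):
    x, s = y                         # biomass, substrate [g/L]
    mu = monod(s)
    return [mu * x, -mu * x / Y]     # dX/dt, dS/dt

sol = solve_ivp(mass_balances, (0, 24), [0.1, 10.0])
x_end, s_end = sol.y[:, -1]
print(f"after 24 h: biomass = {x_end:.2f} g/L, substrate = {s_end:.2f} g/L")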

Among these numerous models, which one has to be used to obtain the best possible representation of a bioprocess? We attempt to answer this question in the first part of this work. On the basis of two simulated bioprocesses and a real experimental one, two model kinds are analysed. First principles models whose reaction scheme and kinetics can be determined thanks to systematic procedures are compared with hybrid model structures where neural networks are used to describe the kinetics or the whole reaction term (i.e. kinetics and reaction scheme). The most common artificial neural networks, the MultiLayer Perceptron and the Radial Basis Function network, are tested. In this work, pure black box modelling is however not considered. Indeed, numerous papers already compare different neural networks with hybrid models. The results of these previous studies converge to the same conclusion: hybrid models, which combine the available prior knowledge with the neural network nonlinear mapping capabilities, provide better results.

From this model comparison and the fact that a physical kinetic model structure may be viewed as a combination of basis functions such as a neural network, kinetic model structures allowing biological interpretation should be preferred. This is why the second part of this work is dedicated to the improvement of the general kinetic model structure used in the previous study. Indeed, in spite of its good performance (largely due to the associated systematic identification procedure), this kinetic model, which represents activation and/or inhibition effects by every culture component, suffers from some limitations: it does not explicitly address saturation by a culture component. The structure models this kind of behaviour by an inhibition which compensates a strong activation. Note that the generalization of this kinetic model is a challenging task as physical interpretation has to be improved while a systematic identification procedure has to be maintained.

The last part of this work is devoted to another kind of biological system: proteins. Such macromolecules, which are essential parts of all living organisms and consist of combinations of only 20 different basis molecules called amino acids, are widely used in industry. In order to allow their functioning in non-physiological conditions, industry is open to modifying the protein amino acid sequence. However, substitution of one amino acid by another involves thermodynamic stability changes which may lead to the loss of the protein's biological functionality. Among several theoretical methods predicting stability changes caused by mutations, the PoPMuSiC (Prediction Of Proteins Mutations Stability Changes) program has been developed within the Genomic and Structural Bioinformatics Group of the Université Libre de Bruxelles. This software allows one to predict, in silico, changes in the thermodynamic stability of a given protein under all possible single-site mutations, either in the whole sequence or in a region specified by the user. However, PoPMuSiC suffers from limitations and should be improved using recently developed techniques of protein stability evaluation, like the statistical mean-force potentials of Dehouck et al. (2006). Our work proposes to enhance the performance of PoPMuSiC by combining the new energy functions of Dehouck et al. (2006) with the well-known artificial neural networks, the MultiLayer Perceptron or the Radial Basis Function network. This time, we attempt to obtain physically interpretable models through an appropriate use of the neural networks.


Doctorat en sciences appliquées
info:eu-repo/semantics/nonPublished

APA, Harvard, Vancouver, ISO, and other styles
43

Ruy, Roberto da Silva. "Desenvolvimento e validação geométrica de um sistema para mapeamento com câmeras digitais de médio formato /." Presidente Prudente : [s.n.], 2008. http://hdl.handle.net/11449/100268.

Full text
Abstract:
Advisor: Antonio Maria Garcia Tommaselli
Committee: Edson Aparecido Mitishita
Committee: Jorge Luis Nunes e Silva Brito
Committee: Julio Kiyoshi Hasegawa
Committee: Mauricio Galo
Abstract: In recent years there has been growing use of digital cameras in Photogrammetry, especially professional small- and medium-format models, because large-format digital systems are costly and involve complex image management and post-processing. Moreover, once calibrated, small- and medium-format cameras can provide quality data and offer several advantages: wide availability on the market; flexible focusing range; they are small, light and easy to handle; and they cost substantially less than high-resolution systems. These camera models still have some limitations, however, regarding the reliability of the interior orientation and the resolution of the sensor. Case studies have shown that these problems can be overcome, so that such digital sensors can be used successfully for thematic, topographic and cadastral mapping of small and medium areas, with high flexibility compared with conventional aerial and orbital sensors. In this context, the aim of this work is the design, development and real-world testing of a digital image acquisition system composed of medium-format digital cameras integrated with direct orientation sensors, electronic devices, and hardware and software interfaces. Studies, analyses and computer programs for block triangulation with additional parameters using direct orientation data were also developed to establish the interior orientation of the cameras that make up the acquisition system... (Complete abstract: click electronic access below)
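Self-calibration ("additional parameters") typically augments the collinearity equations with interior-orientation and lens-distortion terms; a minimal sketch of projecting a ground point with a Brown-style radial distortion model follows. The parameter values are invented, and a real bundle adjustment would estimate them by least squares over many images and points.

import numpy as np

def project(X, Xc, R, f, x0, y0, k1, k2):
    """Collinearity projection with radial-distortion additional parameters."""
    Xc_ = R @ (X - Xc)                        # ground point in camera frame
    x = -f * Xc_[0] / Xc_[2]                  # ideal image coordinates
    y = -f * Xc_[1] / Xc_[2]
    r2 = x**2 + y**2
    d = 1 + k1 * r2 + k2 * r2**2              # Brown radial distortion
    return x0 + x * d, y0 + y * d

R = np.eye(3)                                  # level camera (no rotation)
xy = project(np.array([10.0, 20.0, 0.0]),      # ground point [m]
             np.array([0.0, 0.0, 500.0]),      # camera position [m]
             R, f=0.035, x0=0.0, y0=0.0,       # focal length, principal point
             k1=-2e-1, k2=0.0)                 # illustrative distortion terms
print("image coordinates [m]:", xy)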
Doctor
APA, Harvard, Vancouver, ISO, and other styles
44

Vyhnalkova, Renata. "Morphologies and corona compositions in aggregates of mixtures of PS-b-PAA and PS-b-P4VP block copolymers as influenced by controllable assembly parameters." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98513.

Full text
Abstract:
The present study is devoted to the morphologies and corona compositions in aggregates of mixtures of PS-b-PAA and PS-b-P4VP block copolymers as influenced by controllable assembly parameters, such as block length, pH, solvent, water content and the molar ratio of PAA/P4VP. Morphologies and surface properties of the aggregates were investigated by transmission electron microscopy and electrophoretic mobility, respectively. According to the results, the general hypothesis that the external corona is composed of long chains, while short chains form the inner corona of the vesicles, is valid only in mixtures without additives (no acid or base). In the presence of acid or base, the environment during aggregate formation influences the system. Besides the numerical block length, pH and solvent composition, and therefore solubility, determine the morphology and the coil dimension; the numerically longer chains can be made to contract and to go to the inside, while the numerically shorter chains go to the outside.
APA, Harvard, Vancouver, ISO, and other styles
45

Fusaro, Jonathan L. "Estimating Baseline Population Parameters of Urban and Wildland Black Bear Populations Using a DNA-Based Capture -Mark-Recapture Approach in Mono County, California." DigitalCommons@USU, 2014. https://digitalcommons.usu.edu/etd/3706.

Full text
Abstract:
Prior to European settlement, black bears (Ursus americanus) were far less abundant in the state of California. Estimates from statewide harvest data indicate the California black bear population has tripled in the last 3 decades. Bears now inhabit areas where they formerly never occurred (e.g., urban environments), and populations that were historically at low densities are now at high densities. Though harvest data are useful and widely used as an index of black bear population size and demographics statewide, they cannot produce precise estimates of abundance and density at local scales or account for the numerous bears living in non-hunted areas. As the human population continues to expand into wildlife habitat, we are being forced to confront controversial issues about wildlife management and conservation. Habituated bears living in non-hunted, urban areas have been and continue to be a major concern for wildlife managers and the general public. My objective was to develop DNA-based capture-mark-recapture (CMR) survey techniques in wildland and urban environments in Mono County, California to acquire population size and density at local scales from 2010 to 2012. I also compared population density between the urban and wildland environments. To my knowledge, DNA-based CMR surveys for bears have only been implemented in wildland or rural environments. I made numerous modifications to the techniques used during wildland DNA-based CMR surveys to survey bears in an urban environment. I used a higher density of hair-snares than is typical in wildland studies, non-consumable lures, and hair-snares modified for public safety; I included the public throughout the entire process, and surveyed in the urban-wildland interface as well as the city center. These methods were efficient and accurate while maintaining human safety. I determined that there is likely a difference in population density between the urban and wildland environments: population density was 1.6 to 2.5 times higher in the urban study area than in the wildland study area. Considering the negative impacts urban environments can have on wildland bear populations, this is a serious management concern. The densities I found were similar to those found in other urban and wildland black bear populations. The baseline data acquired from this study can be used as part of a long-term monitoring effort. By surveying additional years, population vital rates such as apparent survival, recruitment, movement, and the finite rate of population change can be estimated.
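A CMR abundance estimate from two hair-snare sessions can be sketched with the bias-corrected Lincoln-Petersen (Chapman) estimator; the counts and sampled area below are invented, and the study itself used multi-session models to obtain density and vital rates.

def chapman_estimate(n1: int, n2: int, m2: int) -> float:
    """Bias-corrected Lincoln-Petersen (Chapman) abundance estimator.

    n1: individuals genotyped in session 1 (marked)
    n2: individuals genotyped in session 2
    m2: session-2 individuals already seen in session 1 (recaptures)
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

n_hat = chapman_estimate(n1=28, n2=31, m2=12)
area_km2 = 150.0          # effective sampled area (assumed)
print(f"abundance ~ {n_hat:.0f} bears, "
      f"density ~ {n_hat / area_km2:.2f} bears/km^2")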
APA, Harvard, Vancouver, ISO, and other styles
46

Kleiva, Žilvinas. "Delfinariume laikomų Juodosios jūros delfinų (Tursiops truncatus ponticus) sveikatos tyrimų analizė." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2013. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2013~D_20131104_101530-39609.

Full text
Abstract:
The aim of the thesis is to identify the influence of various factors on the blood indices and breathing function of the Black Sea dolphins (Tursiops truncatus ponticus) that are kept in dolphinariums, and also to analyse the causes of dolphins' diseases and death. The goals of the thesis: 1. Determine the physiological, morphological and biochemical parameters of the dolphins' blood with regard to their age and sex. 2. Determine the influence of pool size on the respiratory rate of dolphin females and calves during the postnatal period. 3. Analyze the diseases affecting dolphins and the frequency of their occurrence. 4. Study morphological and biochemical changes in dolphins' blood parameters with regard to the diseases. 5. Identify the causes of death through pathological, anatomical, histopathological and microbiological examinations. The research has broadened the scientific knowledge about the Black Sea dolphins' (Tursiops truncatus ponticus) physiological characteristics and pathological conditions. It has been found that pool size influences the respiratory rate as well as the behaviour of Black Sea dolphin females and calves. Bearing in mind that there are genetic differences among other Tursiops truncatus dolphins, in the current research healthy Black Sea afalins' physiological, morphological and biochemical blood parameters have been identified with regard to their age and sex. It is the first time in Lithuania when the frequencies of... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
47

Paulin, Carl, and Maja Lindström. "Option pricing models: A comparison between models with constant and stochastic volatilities as well as discontinuity jumps." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-172226.

Full text
Abstract:
The purpose of this thesis is to compare option pricing models. We have investigated the constant volatility models Black-Scholes-Merton (BSM) and Merton’s Jump Diffusion (MJD) as well as the stochastic volatility models Heston and Bates. The data used were option prices from Microsoft, Advanced Micro Devices Inc, Walt Disney Company, and the S&P 500 index. The data was then divided into training and testing sets, where the training data was used for parameter calibration for each model, and the testing data was used for testing the model prices against prices observed on the market. Calibration of the parameters for each model were carried out using the nonlinear least-squares method. By using the calibrated parameters the price was calculated using the method of Carr and Madan. Generally it was found that the stochastic volatility models, Heston and Bates, replicated the market option prices better than both the constant volatility models, MJD and BSM for most data sets. The mean average relative percentage error for Heston and Bates was found to be 2.26% and 2.17%, respectively. Merton and BSM had a mean average relative percentage error of 6.90% and 5.45%, respectively. We therefore suggest that a stochastic volatility model is to be preferred over a constant volatility model for pricing options.
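The calibration loop common to all four models can be sketched for the simplest case, Black-Scholes-Merton, where the volatility is recovered by nonlinear least squares against quotes. The market quotes below are synthetic; a Heston or Bates calibration replaces the closed-form pricer with a Carr-Madan FFT pricer and calibrates more parameters the same way.

import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    # Black-Scholes-Merton European call price
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, r, T = 100.0, 0.02, 0.5
strikes = np.array([80, 90, 100, 110, 120.0])
market = bsm_call(S, strikes, T, r, 0.25) + 0.05   # synthetic quotes

def residuals(p):
    return bsm_call(S, strikes, T, r, p[0]) - market

fit = least_squares(residuals, x0=[0.2], bounds=(1e-4, 5.0))
print(f"calibrated volatility: {fit.x[0]:.4f}")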
Syftet med denna tes är att jämföra prissättningsmodeller för optioner. Vi har undersökt de konstanta volatilitetsmodellerna Black-Scholes-Merton (BSM) och Merton’s Jump Diffusion (MJD) samt de stokastiska volatilitetsmodellerna Heston och Bates. Datat vi använt är optionspriser från Microsoft, Advanced Micro Devices Inc, Walt Disney Company och S&P 500 indexet. Datat delades upp i en träningsmängd och en test- mängd. Träningsdatat användes för parameterkalibrering med hänsyn till varje modell. Testdatat användes för att jämföra modellpriser med priser som observerats på mark- naden. Parameterkalibreringen för varje modell utfördes genom att använda den icke- linjära minsta-kvadratmetoden. Med hjälp av de kalibrerade parametrarna kunde priset räknas ut genom att använda Carr och Madan-metoden. Vi kunde se att de stokastiska volatilitetsmodellerna, Heston och Bates, replikerade marknadens optionspriser bättre än båda de konstanta volatilitetsmodellerna, MJD och BSM för de flesta dataseten. Medelvärdet av det relativa medelvärdesfelet i procent för Heston och Bates beräknades till 2.26% respektive 2.17%. För Merton och BSM beräknades medelvärdet av det relativa medelvärdesfelet i procent till 6.90% respektive 5.45%. Vi anser därför att en stokastisk volatilitetsmodell är att föredra framför en konstant volatilitetsmodell för att prissätta optioner.
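As a concrete, hedged illustration of the calibration step described above, the Python sketch below fits the single BSM volatility parameter to a handful of invented market prices by nonlinear least squares and reports the mean relative percentage error; all tickers and numbers are assumptions, and a Heston or Bates calibration would follow the same pattern with a Carr and Madan FFT pricer in place of the closed-form BSM price.

```python
# Minimal sketch: calibrate BSM volatility by nonlinear least squares.
# Market data below is invented for illustration only.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """Closed-form Black-Scholes-Merton price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Hypothetical training data: spot, rate, strikes, maturities, prices.
S0, r = 100.0, 0.01
strikes = np.array([90.0, 100.0, 110.0])
maturities = np.array([0.5, 0.5, 0.5])
market_prices = np.array([12.4, 5.6, 1.9])

def residuals(params):
    # Difference between model prices and observed prices.
    return bsm_call(S0, strikes, maturities, r, params[0]) - market_prices

fit = least_squares(residuals, x0=[0.2], bounds=(1e-4, 5.0))
sigma_hat = fit.x[0]
rel_err = 100 * np.mean(np.abs(residuals(fit.x)) / market_prices)
print(f"calibrated sigma = {sigma_hat:.4f}, mean relative error = {rel_err:.2f}%")
```

The final line mirrors the error measure used in the thesis: the model is judged by how closely its calibrated prices track prices observed on the market.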
APA, Harvard, Vancouver, ISO, and other styles
48

Kervazo, Christophe. "Optimization framework for large-scale sparse blind source separation." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLS354/document.

Full text
Abstract:
During the last decades, Blind Source Separation (BSS) has become a key analysis tool for multi-valued data. The objective of this thesis is to focus on large-scale settings, in which most classical algorithms show degraded performance. The work is organised around four sub-problems rooted in the large-scale sparse BSS issue: i) the introduction of a mathematically sound, robust sparse BSS algorithm that does not require any relaunch (despite a difficult hyper-parameter choice); ii) a method able to maintain high-quality separations even when a large number of sources must be estimated; iii) the modification of a classical sparse BSS algorithm to make it scalable to large datasets; and iv) an extension to the non-linear sparse BSS problem. The proposed methods are extensively tested on both simulated and realistic experiments to demonstrate their quality, and in-depth interpretations of the results are provided.
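For readers new to the topic, the sketch below illustrates the generic linear sparse BSS setting such work builds on: a GMCA-style alternating update under the mixing model X = AS with sparse sources S. It is a minimal illustration of the generic technique, not the thesis' algorithms, and all parameter values are assumptions.

```python
# Minimal sketch of one generic sparse BSS scheme (GMCA-style):
# alternate a soft-thresholded source update with a least-squares
# mixing matrix update.
import numpy as np

def soft_threshold(x, lam):
    # Promotes sparsity by shrinking small coefficients to zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_bss(X, n_sources, lam=0.1, n_iter=100, seed=0):
    """X: (n_observations, n_samples) mixtures; returns (A, S)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((X.shape[0], n_sources))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        S = soft_threshold(np.linalg.pinv(A) @ X, lam)  # sparse source update
        A = X @ np.linalg.pinv(S)                       # mixing matrix update
        A /= np.linalg.norm(A, axis=0) + 1e-12          # fix scale indeterminacy
    return A, S
```

Soft thresholding enforces sparsity of the sources, the least-squares step refits the mixing matrix, and the column renormalisation removes the scale ambiguity inherent to BSS; the hyper-parameter lam is exactly the kind of delicate choice the thesis' robust algorithms aim to handle.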
APA, Harvard, Vancouver, ISO, and other styles
49

Morganti, Mattia. "Studio di parametri critici per la miscelazione di polveri ad uso farmaceutico." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018.

Find full text
Abstract:
Over the last 15 years, the pharmaceutical industry has come to regard tighter control of manufacturing processes as useful and necessary, in order to improve their efficiency and, at the same time, reduce risks for patients. For these reasons, the US Food and Drug Administration (FDA) has begun to promote the use of so-called Process Analytical Technology (PAT) in drug development and manufacturing. PAT, as promoted by the FDA, is a set of methods and technologies for designing, analysing and controlling manufacturing processes in the pharmaceutical sector, with the aim of better understanding and monitoring the processes themselves. In this context, the basic concepts of Quality by Design (QbD) have proven particularly suitable and applicable: QbD is an approach to the design of systems and plants according to which the quality of the final product is closely tied to the level of quality and knowledge of the raw materials, the formulations and the manufacturing processes. Among the technologies in widespread use within PAT, Near-InfraRed (NIR) spectroscopy has rapidly become a common tool for process control thanks to its versatility, precision and the non-destructive nature of the analysis. This thesis work, carried out during an internship at the automatic machine manufacturer IMA S.p.a. of Ozzano nell'Emilia (Bo), studies the critical parameters in the mixing of powders for pharmaceutical use and the application of a NIR spectrometer for real-time monitoring of mixing processes.
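As a hedged sketch of how in-line NIR spectra are commonly turned into a blending endpoint criterion, the code below computes the moving-block standard deviation (MBSD) of successive spectra: as the powder blend approaches homogeneity, consecutive spectra stop changing and the MBSD levels off at a low value. This is a generic PAT technique, not necessarily the method adopted in the thesis, and the array shapes and block size are assumptions.

```python
# Minimal sketch of moving-block standard deviation (MBSD) for
# NIR-based blend monitoring; shapes and block size are illustrative.
import numpy as np

def mbsd(spectra, block_size=5):
    """spectra: (n_times, n_wavelengths) array of successive NIR scans.
    Returns one MBSD value per block position; a plateau at low values
    suggests the blending endpoint has been reached."""
    n = spectra.shape[0] - block_size + 1
    out = np.empty(n)
    for i in range(n):
        block = spectra[i:i + block_size]
        # Std over time at each wavelength, averaged across wavelengths.
        out[i] = np.mean(np.std(block, axis=0, ddof=1))
    return out
```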
APA, Harvard, Vancouver, ISO, and other styles
50

Cook, Ian. "Voluntary physical activity : measurement and relationship to selected health parameters in rural black South Africans resident in the Limpopo Province, South Africa." Doctoral thesis, University of Cape Town, 2012. http://hdl.handle.net/11427/2747.

Full text
Abstract:
Includes abstract.
Thesis (Ph.D. (Exercise Science and Sports Medicine))--University of Cape Town, 2012.
Includes bibliographical references.
The use of objective measures of physical activity, in addition to or in place of subjective or self-report measures, is being increasingly promoted in physical activity epidemiology research. This thesis investigates methodological issues related to the use of objective measures of physical activity and presents pioneering objectively measured physical activity survey results from a rural South African setting. In this series of studies, we first explored the sources of variance in the objective measure of physical activity (uni-axial accelerometer) as a function of residence and of movement monitor placement. Secondly, we highlighted the importance of Non-Exercise Activity Thermogenesis (NEAT) in a rural African setting and the importance of considering the full spectrum of accelerometer counts when contrasting rural and urban populations. Thirdly, we demonstrated novel approaches to pedometry data from a rural African setting, such that volume-intensity effects could be inferred and, using estimated energy expenditure, whether current physical activity guidelines are met. Finally, we identified that the current recommendations for physical activity and health, applied in a rural African setting, may miss important and possibly health-promoting physical activity.
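As a purely illustrative sketch, not the thesis' method, of how per-minute accelerometer counts are compared against physical activity guidelines, the code below sums minutes above a moderate-intensity cut-point over one week; the cut-point value and the simulated wear data are assumptions.

```python
# Illustrative sketch: weekly moderate-to-vigorous physical activity
# (MVPA) minutes from per-minute accelerometer counts, compared with
# the commonly cited guideline of 150 moderate minutes per week.
import numpy as np

MODERATE_CUTPOINT = 2020     # counts/min; illustrative uniaxial cut-point
GUIDELINE_MIN_PER_WEEK = 150

def weekly_mvpa_minutes(counts_per_min):
    """counts_per_min: 1-D array of one week of per-minute counts."""
    return int(np.sum(counts_per_min >= MODERATE_CUTPOINT))

rng = np.random.default_rng(0)
week = rng.gamma(shape=1.5, scale=300, size=7 * 24 * 60)  # fake wear data
mvpa = weekly_mvpa_minutes(week)
print(f"MVPA minutes: {mvpa}, meets guideline: {mvpa >= GUIDELINE_MIN_PER_WEEK}")
```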
APA, Harvard, Vancouver, ISO, and other styles