Dissertations / Theses on the topic 'Estimation of coefficient of variation'

Consult the top 50 dissertations / theses for your research on the topic 'Estimation of coefficient of variation.'


1

Ali-Adib, Tarif. "Estimation et lois de variation du coefficient de transfert de chaleur surface/ liquide en ébullition pour un liquide alimentaire dans un évaporateur à flot tombant." PhD thesis, AgroParisTech, 2008. http://pastel.archives-ouvertes.fr/pastel-00004544.

Abstract:
The heat transfer coefficient is required to design and size an evaporator used to concentrate a liquid, as commonly encountered in the food industry. The most variable and most uncertain heat transfer coefficient is on the product side, between wall and liquid, denoted "h". It varies both with the thermophysical properties of the processed liquid (ηL, σL, λL, ρL, CpL, ω, ...) and with the process parameters (type of evaporator, φ or Δθ, Γ (δ), P, surface roughness, fouling, etc.), these quantities being defined in the text. But h is also tied to the boiling regime (nucleate or non-nucleate) and, for falling-film evaporators, to the laminar or turbulent flow regime, according to the film Reynolds number Ref. We studied the case of falling-film evaporators, widely used in the food industry to concentrate milk and dairy products, sugar solutions, and fruit and vegetable juices. The objective of our work was to define a reliable and economical method for evaluating a priori the boiling-side heat transfer coefficient h in a falling-film evaporator. The first part of the thesis was devoted to a literature review, which revealed large current uncertainty in the prediction of h from published formulas and proposed descriptive parameters. The second part was to design and build a pilot rig usable to estimate h under known, reproducible steady-state conditions. The third part presents the results and discusses the laws of variation of h as a function of the liquid dry-matter concentration XMS, the boiling temperature of the liquid θL (or P), the heat flux φ (or Δθ), and the liquid mass flow rate per unit tube perimeter Γ, for fixed heating-surface properties (here, a polished stainless steel wall, Rs ≈ 0.8 μm). The effect on h of each variable in isolation, the others being held constant, is discussed; this confirms the importance of the transition from the non-nucleate to the nucleate regime, a transition that varies with the nature of the liquid, its concentration, and the heat flux. We showed that a given product can be modelled over the whole experimental domain, where all parameters may vary simultaneously, with few coefficients, using two types of equations (polynomial and power law). We compared the case of a Newtonian liquid (sugar solution) and a non-Newtonian one (aqueous CMC solution). We also observed the critical wetting flow rate Γcri and its laws of variation. Finally, we demonstrated that the experimental design can be simplified, for Newtonian as well as non-Newtonian liquids, while keeping a satisfactory correlation coefficient in the domain Γ > Γcri; this modelling can serve as a product database for engineering.
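As a concrete illustration of the power-law modelling mentioned above, a minimal sketch fitting h = a·XMS^b·θ^c·Γ^d by log-linear least squares; all data values are hypothetical, not the thesis's measurements:

    import numpy as np

    # Hypothetical measurements (XMS in %, theta in degC, Gamma in kg/(m.s), h in W/(m2.K))
    XMS   = np.array([10., 20., 30., 40., 50.])
    theta = np.array([60., 60., 70., 70., 80.])
    Gamma = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
    h     = np.array([2400., 1900., 1600., 1400., 1300.])

    # Log-linear design matrix: ln h = ln a + b ln XMS + c ln theta + d ln Gamma
    A = np.column_stack([np.ones_like(h), np.log(XMS), np.log(theta), np.log(Gamma)])
    coef, *_ = np.linalg.lstsq(A, np.log(h), rcond=None)
    a, b, c, d = np.exp(coef[0]), coef[1], coef[2], coef[3]
    print(f"h ~ {a:.3g} * XMS^{b:.2f} * theta^{c:.2f} * Gamma^{d:.2f}")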
2

Ali, Adib Tarif. "Estimation et lois de variation du coefficient de transfert de chaleur surface / liquide en ébullition pour un liquide alimentaire dans un évaporateur à flot tombant." Paris, AgroParisTech, 2008. http://pastel.paristech.org/4544/01/2008AGPT0007.pdf.

Abstract:
The heat transfer coefficient value is necessary to calculate the heat exchange surface when designing an evaporator, as currently used to concentrate liquids in the food industry. The boiling heat transfer coefficient on the liquid side (h) is the most uncertain: it depends on the liquid thermophysical properties (ηL, σL, λL, ρL, CpL, ω, ...) as well as on the process conditions (type of evaporator, φ or Δθ, Γ (δ), P, surface roughness, fouling, etc.). Also, h depends on the boiling regime (non-nucleate or nucleate) and, in falling-film evaporators, on the flow regime (laminar or turbulent) according to the film Reynolds number. The objective of our work was to define an economical and robust method to estimate h in a falling-film evaporator, which is common in the food industry for concentrating fruit juice, milk and sugar solutions. The first section of our study was a bibliographic analysis, which revealed the important dispersion among the h values calculated from the formulas cited in the literature. The second section was to design and construct a laboratory-scale falling-film evaporator (pilot) used to estimate h under known, stationary conditions. The third section describes the results and the variation laws of h versus the liquid dry-matter concentration XMS, the boiling temperature θL, the heat flux φ (or temperature difference Δθ) and the mass flow rate per unit of perimeter length Γ (including the critical wetting flow rate for some solutions); the nature of the heating surface was kept constant throughout. We described the effect of each variable on h separately, the other variables being kept constant. We also studied the transition from the non-nucleate to the nucleate regime, which varied with the nature and concentration of the liquid. Finally, we presented experimental models h = f(XMS, θL, φ, Γ) for a Newtonian liquid (sugar solution) and a non-Newtonian solution (CMC) that may be used for industrial evaporator design after validation. We also proposed a method for simplifying the experimental design.
3

Ekesiöö, Anton, and Andreas Ekhamre. "Safety formats for non-linear finite element analyses of reinforced concrete beams loaded to shear failure." Thesis, KTH, Betongbyggnad, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-231087.

Abstract:
Several different methods exist to introduce a level of safety when performing non-linear finite element analysis of a structure. These methods are called safety formats, and they estimate safety by different means and formulas, some of which are discussed further in this thesis. The aim of this master thesis is to evaluate a model uncertainty factor for one safety format method, the estimation of coefficient of variation method (ECOV), since it is suggested for inclusion in the next version of Eurocode. The ECOV method is also compared with the most common and widely used safety format, the partial factor method (PF). The first part of this thesis presents the different safety formats more thoroughly, followed by a theoretical part. The theory part aims to provide a deeper knowledge of the finite element method and non-linear finite element analysis, together with some beam theory that explains the shear mechanisms in different beam types. The study was conducted on six beams in total: three deep beams and three slender beams. The deep beams were tested in a laboratory in the 1970s and the slender beams in the 1990s. All beams failed in shear in the experimental tests. A detailed description of the beams is presented in the thesis. The simulations of the beams were all performed in the FEM program ATENA 2D to obtain close resemblance to the experimental tests. The results of the simulations show that the ECOV method generally yields a higher design capacity than the PF method. For the slender beams both methods gave rather high design capacities, with a mean of about 82% of the experimental capacity. For the deep beams both methods reached low design capacities, with a mean of around 46% of the experimental capacity. The results regarding the model uncertainty factor indicate that its mean value should be around 1.06 for slender beams and around 1.25 for deep beams.
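A minimal sketch of the ECOV calculation as commonly formulated (e.g. Červenka's proposal adopted in the fib Model Code 2010): the coefficient of variation of resistance is estimated from two non-linear analyses, one with mean and one with characteristic material parameters. The resistance values below are hypothetical:

    import math

    R_mean = 500.0   # kN, NLFEA resistance with mean material parameters (hypothetical)
    R_char = 430.0   # kN, NLFEA resistance with characteristic parameters (hypothetical)

    V_R      = math.log(R_mean / R_char) / 1.65    # estimated coefficient of variation of resistance
    gamma_R  = math.exp(0.8 * 3.8 * V_R)           # global safety factor, alpha_R = 0.8, beta = 3.8
    gamma_Rd = 1.06                                # model uncertainty factor (thesis: slender beams)
    R_d = R_mean / (gamma_R * gamma_Rd)            # design resistance
    print(f"V_R = {V_R:.3f}, gamma_R = {gamma_R:.2f}, R_d = {R_d:.1f} kN")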
4

Vathanakhool, Khoollapath. "Estimation de la sécurité des poteaux en béton armé : compte tenu des variations aléatoires de leurs caractéristiques géométriques et mécaniques le long de leur ligne moyenne." Toulouse, INSA, 1987. http://www.theses.fr/1987ISAT0015.

Abstract:
To establish the influence of variations along the column due to the formwork, to the position of the reinforcement and to the material characteristics, a method is developed for computing the probability of failure by Monte Carlo simulation, taking into account the correlation between segments; a statistical definition of the critical segment is given as a function of parameters such as the slenderness and the reinforcement ratio.
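A minimal sketch of a Monte Carlo failure-probability estimate for a generic limit state R < S; the distributions are hypothetical placeholders, whereas the thesis samples geometric and material properties segment by segment along the column:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 1_000_000
    # Hypothetical resistance R (lognormal) and load effect S (normal)
    R = rng.lognormal(mean=np.log(500.0), sigma=0.15, size=N)
    S = rng.normal(300.0, 45.0, size=N)
    pf = np.mean(R < S)                     # fraction of simulated failures
    print(f"Estimated probability of failure: {pf:.2e}")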
5

Pellegrini, Caius Barcellos. "Precisão da estimativa da massa de forragem com discos medidores em pastagem natural." Universidade Federal de Santa Maria, 2006. http://repositorio.ufsm.br/handle/1/10700.

Abstract:
The objective of this work was to evaluate the precision of forage mass (FM) estimates in natural pasture (NP) obtained with measuring discs. The treatments were three disc areas (0.1, 0.2 and 0.3 m2), each combined with three disc weights (5, 10 and 15 kg/m2). The experimental design was completely randomized with 50 replications, in a 3 x 3 factorial arrangement (3 disc areas x 3 disc weights). The results were submitted to regression analysis between disc height and FM determined at each evaluation date, disc area and disc weight; from the fitted models the residual coefficient of variation (CV) was obtained. Subsequently, an analysis of variance was carried out in a randomized complete block design with seven replications of the 3 x 3 factorial across evaluation periods. The relationships between the disc area-weight combinations and the CV of the disc readings were quadratic and positive. The 0.1 m2 disc with a 5 kg/m2 weight presented the smallest CV of the disc readings in the evaluated periods. As the disc area increased, the CV increased for the 5 and 10 kg/m2 weights. The relationships between the weight-area combinations and the CV were linear and positive: the smallest disc weight, 5 kg/m2, combined with the 0.1 m2 area presented the smallest CV, and adding weight (5 or 10 kg/m2) on the same disc area increased the CV in the evaluated periods. The relationship between evaluation date and the CV was linear and positive. The disc with the smallest area (0.1 m2) and weight (5 kg/m2) presented the smallest CV for FM estimation in NP and is therefore the most suitable for evaluating FM in NP. Advancing the evaluation date increased the CV of FM estimates in NP.
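A minimal sketch of the residual CV used as the precision measure here: regress FM on disc height, then express the residual standard error as a percentage of mean FM. All data values are hypothetical:

    import numpy as np

    height = np.array([4.0, 5.5, 6.0, 7.2, 8.1, 9.0])             # disc settling height, cm (hypothetical)
    fm     = np.array([900., 1250., 1300., 1600., 1750., 2000.])  # forage mass, kg DM/ha (hypothetical)

    b, a = np.polyfit(height, fm, 1)                # fitted model: FM = a + b*height
    resid = fm - (a + b * height)
    s = np.sqrt(np.sum(resid**2) / (len(fm) - 2))   # residual standard error
    cv = 100.0 * s / fm.mean()                      # residual CV, %
    print(f"Residual CV = {cv:.1f} %")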
6

Chandler, I. D. "Vertical variation in diffusion coefficient within sediments." Thesis, University of Warwick, 2012. http://wrap.warwick.ac.uk/49612/.

Abstract:
River ecosystems can be strongly influenced by contaminants in the water column, in the pore water and attached to sediment particles. Current models [TGD, 2003] predict exposure to sediments based on equilibrium partitioning between the dissolved and suspended-particle-sorbed phases in the water column, despite numerous studies showing significant direct mass transfer across the sediment-water interface. When exchange across the interface (hyporheic exchange) is included in modelling, the diffusion coefficient is assumed to be constant with depth. The overall aims of this research were to quantify the vertical variation in diffusion coefficient below the sediment-water interface and to assess the use of a modified EROSIMESS-System (erosimeter) in the study of hyporheic exchange. The modified erosimeter and novel fibre-optic fluorometers measuring in-bed concentrations of Rhodamine WT were employed in an experimental investigation. Five glass sphere beds of different diameter (0.15 to 5.0 mm) and five bed shear velocities (0.01 to 0.04 m/s) allowed the vertical variation in diffusion coefficient to be quantified to a depth of 0.134 m below the sediment-water interface. The vertical variation in diffusion coefficient can be described using an exponential function that was found to be consistent for all the parameter combinations tested. This function, combined with the scaling relationship proposed by O'Connor and Harvey [2008], allows a prediction of the diffusion coefficient below the sediment-water interface based on bed shear velocity, roughness height and permeability. 1D numerical diffusion model simulations using the exponential function compare favourably with the experimental data.
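A minimal sketch of an exponential depth-dependence of the kind described; the functional form is generic and the parameter values are placeholders, not the thesis's fitted constants:

    import numpy as np

    def diffusion_profile(z, D0, k):
        """Effective diffusion coefficient at depth z below the sediment-water
        interface, decaying exponentially from its interface value D0."""
        return D0 * np.exp(-k * z)

    z = np.linspace(0.0, 0.134, 5)                 # depth below interface, m (study's max depth)
    print(diffusion_profile(z, D0=1e-6, k=40.0))   # D0 in m2/s, k in 1/m (hypothetical values)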
7

曾達誠 and Tat-shing Tsang. "Statistical inference on the coefficient of variation." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31223503.

8

Tsang, Tat-shing. "Statistical inference on the coefficient of variation /." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B21903980.

9

Jung, Aekyung. "Interval Estimation for the Correlation Coefficient." Digital Archive @ GSU, 2011. http://digitalarchive.gsu.edu/math_theses/109.

Abstract:
The correlation coefficient (CC) is a standard measure of the linear association between two random variables and plays a significant role in much quantitative research. Under a bivariate normal distribution, there are many types of interval estimation for the CC, such as methods based on the z-transformation and on maximum likelihood estimation. However, when the underlying bivariate distribution is unknown, the construction of confidence intervals for the CC is still not well developed. In this thesis, we discuss various interval estimation methods for the CC. We propose a generalized confidence interval and three empirical-likelihood-based non-parametric intervals for the CC. We also conduct extensive simulation studies to compare the new intervals with existing intervals in terms of coverage probability and interval length. Finally, two real examples are used to demonstrate the application of the proposed methods.
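A minimal sketch of the classical z-transformation interval mentioned above (it assumes bivariate normality, which is exactly the limitation the thesis's non-parametric intervals address):

    import numpy as np
    from scipy import stats

    def fisher_z_ci(r, n, level=0.95):
        """Confidence interval for a correlation coefficient via Fisher's
        z-transformation (assumes bivariate normality)."""
        z = np.arctanh(r)
        se = 1.0 / np.sqrt(n - 3)
        zcrit = stats.norm.ppf(0.5 + level / 2)
        return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

    print(fisher_z_ci(r=0.6, n=50))   # roughly (0.39, 0.75)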
10

Achouri, Ali. "Cartes de contrôle pour le coefficient de variation." Nantes, 2014. http://archive.bu.univ-nantes.fr/pollux/show.action?id=7658d471-1a91-4022-9493-9f85b2a06a86.

Abstract:
Statistical Process Control (SPC) is a production-monitoring method based on statistics, whose primary tools are control charts. An indispensable assumption in the development of control charts is that the in-control process parameters μ0 and σ0 are constant. In practice, however, there are many processes for which these parameters are variable; monitoring the coefficient of variation is then an interesting alternative. In this thesis, we systematically propose new control charts for the coefficient of variation that had not yet been treated in the literature: we investigate the Run Length properties of charts with supplementary run rules, VSI (variable sampling interval) charts and VSS (variable sample size) charts for the coefficient of variation when the parameters are known. In addition, a Shewhart-type chart for the coefficient of variation with estimated parameters is proposed. The performance of each chart has been evaluated and the optimal chart parameters systematically computed. An empirical validation of the results has been carried out on existing industrial processes.
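A minimal sketch of Shewhart-type probability limits for the sample coefficient of variation, using the standard link between the sample CV of normal data and the noncentral t distribution (for a positive mean, sqrt(n)*xbar/s follows a noncentral t with n-1 degrees of freedom and noncentrality sqrt(n)/gamma0). This is one common construction, not necessarily the exact design studied in the thesis:

    import numpy as np
    from scipy import stats

    def cv_chart_limits(gamma0, n, alpha=0.0027):
        """Probability limits for the sample CV (s/xbar) of subgroups of size n,
        given the in-control CV gamma0. Uses sqrt(n)/CV ~ nct(n-1, sqrt(n)/gamma0)."""
        df, nc = n - 1, np.sqrt(n) / gamma0
        lcl = np.sqrt(n) / stats.nct.ppf(1 - alpha / 2, df, nc)
        ucl = np.sqrt(n) / stats.nct.ppf(alpha / 2, df, nc)
        return lcl, ucl

    print(cv_chart_limits(gamma0=0.10, n=5))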
11

Panizo, Ríos Diego. "Backscatter coefficient estimation using highly focused ultrasound transducers." Master's thesis, Pontificia Universidad Católica del Perú, 2013. http://tesis.pucp.edu.pe/repositorio/handle/123456789/5339.

Abstract:
The backscatter coefficient (BSC) is an intrinsic property that quantifies the amount of energy reflected by a material as a function of the ultrasound wave frequency. BSCs have been proposed for decades for tissue characterization, along with quantitative ultrasound (QUS) parameters derived from BSCs that have been used to construct images representing how these properties vary spatially. The availability of formulations based on weakly focusing conditions has resulted in widespread use of large focal number transducers for BSC estimation. The use of highly focused transducers offers the possibility of improving the spatial resolution of BSC-based imaging. The model by Chen et al. [1] was developed for estimating BSCs using transducers of arbitrary focal number; however, to date only preliminary experimental validation of this method has been performed. The goals of the present study are to analyze for the first time the accuracy of Chen's method [1] when estimating BSCs using highly focused transducers, through both simulations and experiments, and to analyze the accuracy of the estimation of QUS parameters derived from BSCs (specifically the effective scatterer size (ESD) and concentration (ESC)) applying the Chen et al. [1] model. To achieve these goals, a theoretical model of BSC synthesis based on the method of Chen et al. [1] was derived and used with simulated data. The model considers frequency-dependent diffraction patterns, and the scatterers in the synthetic data replicate the properties of solid spheres. In experiments, data obtained using highly focused transducers from a physical phantom containing glass beads were used; these experimental data were appropriately compensated for attenuation and transmission effects. The accuracy of Chen's method was evaluated by calculating the mean fractional error between the estimated and theoretical BSC curves for both simulations and experiments. The QUS parameters were also estimated and compared with the real, known parameters. BSC and QUS parameter estimates were obtained from regions of interest both at the transducer focus and throughout the transducer focal region. Finally, the sound speed and the transducer focus were varied over appropriate ranges when processing the data for BSC and QUS estimation, in order to assess the robustness of the method to uncertainties in these parameters. The results showed that BSCs and QUS parameters can be accurately estimated using highly focused transducers if the appropriate model is used, with regions of interest not restricted to be centered at the focus but extending over the full -6 dB transducer focal region. It was also verified that well-estimated values of the sound speed and transducer focus are necessary in order to obtain accurate BSC and QUS parameter estimates.
Thesis
12

PACE, MARIA LUCIA. "La diseguaglianza di opportunità in Italia." Doctoral thesis, Università Cattolica del Sacro Cuore, 2017. http://hdl.handle.net/10280/35716.

Abstract:
While the analysis of inequality has been central to economic studies for centuries, in recent years many studies have concentrated on the distinction between inequality of opportunity (IO) and inequality of returns to effort (IE) and have attempted empirical estimates of the two components, e.g. in the US and in Europe. The decomposition of a general inequality index into these two components allows one to analyze the prevalence of fair or unfair income inequality within a country; the opportunity component depends exclusively on exogenous circumstances outside the individual's control and is therefore regarded as "unfair" inequality. This work suggests testing the differences between the two sources of inequality in a simple way, using the ANOVA framework adapted to decompose the coefficient of variation so as to better suit the requirements of an inequality index. The proposed procedure is applied to the Italian Survey on Income and Living Conditions (IT-SILC data, waves 2005 and 2011). The analysis helps identify the circumstances that foster the rise of inequality of opportunity in Italy: father's education, region of residence and gender emerge as the most relevant circumstances determining inequality of opportunity, while the influence of mother's education, starting from a lower level, is increasing over time. The decomposition of an inequality index into two components allows not only analyzing the prevalence of fair or unfair income inequality in a country, but also finding a clearer relation between inequality and growth. Since an analysis of the relation between inequality of opportunity and economic growth was still missing for Italy, this work fills that gap using data from the Bank of Italy's Survey on Household Income and Wealth from 1998 to 2014. We use the coefficient of variation to measure inequality of opportunity at the regional level and study its relation with economic growth using dynamic panel data models estimated through system GMM. Finally, to check whether the coefficient of variation is as good a measure as the entropy index, the estimated panel models are compared under the two different inequality-of-opportunity indices. We evaluate the effect of inequality of opportunity on growth rates of different lengths, from the short term (2 years) to the very long term (10 years). Our results show that, in Italy, inequality of opportunity has a negative effect in the short run but no effect on long-run growth.
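A minimal sketch of the kind of decomposition involved: the squared coefficient of variation splits exactly into a between-group (opportunity) term and a within-group (effort) term once the population is partitioned by circumstance groups. The groups and incomes below are simulated placeholders:

    import numpy as np

    rng = np.random.default_rng(1)
    groups = [rng.lognormal(3.0, 0.4, 400),    # incomes by circumstance group (hypothetical)
              rng.lognormal(3.3, 0.5, 300),
              rng.lognormal(3.6, 0.3, 300)]

    y = np.concatenate(groups)
    mu, cv2 = y.mean(), y.var() / y.mean()**2

    p  = np.array([len(g) for g in groups]) / len(y)   # population shares
    mg = np.array([g.mean() for g in groups])
    vg = np.array([g.var() for g in groups])
    cv2_between = np.sum(p * (mg - mu)**2) / mu**2     # "opportunity" component
    cv2_within  = np.sum(p * vg) / mu**2               # "effort" component
    print(np.isclose(cv2, cv2_between + cv2_within))   # exact variance decomposition / mu^2
    print(f"opportunity share = {cv2_between / cv2:.2f}")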
13

Byars, Beverly J. "Variation of the drag coefficient with wind and wave state." Thesis, Monterey, Calif. : Naval Postgraduate School, 1985. http://catalog.hathitrust.org/api/volumes/oclc/52763691.html.

14

Finger, George. "Estimation of Tangential Momentum Accommodation Coefficient Using Molecular Dynamics Simulation." Doctoral diss., University of Central Florida, 2005. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/3014.

Abstract:
The Tangential Momentum Accommodation Coefficient (TMAC) is used to improve the accuracy of fluid flow calculations in the slip flow regime. Under such conditions (indicated by a Knudsen number greater than 0.001), the continuum assumption that the fluid velocity at a solid surface equals the surface velocity is inaccurate, because relatively significant fluid "slip" occurs at the surface. Prior work has not led to a method to quickly estimate a value for TMAC, so it is frequently assumed. In this work, Molecular Dynamics techniques are used to study the impacts of individual gas atoms upon solid surfaces, to understand how approach velocity, crystal geometry and interatomic forces affect the scattering of the gas atoms, specifically from the perspective of tangential momentum. It is a logical step in the development of a comprehensive technique to estimate total coefficient values for those investigating flows in micro- and nano-channels, or on-orbit spacecraft, where slip flow occurs. TMAC can also help analysis in the transitional and free molecular flow regimes. The gas-solid impacts were modeled using Lennard-Jones potentials. Solid surfaces were modeled approximately 3 atoms wide by 3 atoms deep by 40 or more atoms long, with the crystal surface modeled as Face Centered Cubic (100). The gas was modeled as individual free gas atoms. Gas approach angles were varied from 10 to 70 degrees from the normal. Gas speed was either specified directly or by way of a ratio relationship with the Lennard-Jones energy potential (Energy Ratio). In order to adequately model the trajectories and maintain conservation of energy, very small time steps (on the order of 0.0005τ, where τ is the natural time unit) were used. For each impact the initial and final tangential momenta were determined, and after a series of many impacts a value of TMAC was calculated for those conditions. The modeling was validated with available experimental data for He gas atoms at 1770 m/s impacting Cu over angles ranging from 10° to 70°; the model agreed within 3% of the experimental values and correctly predicted that the coefficient changes with the angle of approach. The Molecular Dynamics results estimate TMAC values from a high of 1.2 to a low of 0.25, generally giving a higher coefficient at the smaller angles. TMAC values above 1.0 indicate backscattering, which has been observed experimentally in numerous instances. The ratio of final to initial momenta, when plotted for a given sequence of gas atoms spaced across a lattice cycle, typically follows a discontinuous curve, with continuous portions indicating forward and back scattering and discontinuous portions indicating multiple bounces. Increasing the Energy Ratio above a value of 5 tends to decrease the coefficient at all angles. Adsorbed layers atop a surface influence the coefficient in a manner similar to their Energy Ratio. The results provide encouragement to develop the model further, so as to be able in the future to evaluate TMAC for gas flows with Maxwell temperature distributions involving numerous impact angles simultaneously.
Ph.D.
Department of Mechanical, Materials and Aerospace Engineering;
Engineering and Computer Science
Mechanical Engineering
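A minimal sketch of the coefficient itself, computed from tangential momenta before and after a series of impacts (the arrays are hypothetical values, not simulation output):

    import numpy as np

    p_in  = np.array([1.00, 0.95, 1.10, 1.02])   # incident tangential momenta (hypothetical units)
    p_out = np.array([0.20, 0.35, -0.10, 0.15])  # reflected tangential momenta (negative = backscatter)

    tmac = (p_in.sum() - p_out.sum()) / p_in.sum()
    print(f"TMAC = {tmac:.2f}")   # 1 = full accommodation, 0 = specular reflection, >1 = backscattering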
15

Forouzanfar, Mohamad. "A Modeling Approach for Coefficient-Free Oscillometric Blood Pressure Estimation." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31213.

Abstract:
Oscillometry is the most common measurement method used in automatic blood pressure (BP) monitors. However, most oscillometric algorithms are without physiological or theoretical foundation and rely on empirically derived coefficients for systolic and diastolic pressure evaluation, which affects the reliability of the technique. In this thesis, the oscillometric BP estimation problem is addressed using a comprehensive modeling approach, based on which coefficient-free estimation of BP becomes possible. A feature-based neural network approach is developed to find an implicit relationship between BP and the oscillometric waveform (OMW). The modeling approach is then extended by developing a mathematical model for the OMW as a function of the arterial blood pressure, cuff pressure, and cuff-arm-artery system parameters. Based on the developed model, the explicit relationship between the OMW and the systolic and diastolic pressures is found, and a new coefficient-free oscillometric BP estimation method using the trust region reflective algorithm is proposed. In order to improve the reliability of BP estimates, the electrocardiogram signal is recorded simultaneously with the OMW as another independent source of information. The electrocardiogram signal is used to identify the true oscillometric pulses and calculate the pulse transit time (PTT). By combining our developed model of oscillometry with an existing model of the pulse wave velocity, a new mathematical model is derived for the PTT during cuff deflation. The derived model is used to study the PTT-cuff pressure dependence, based on which a new coefficient-free BP estimation method is proposed. In order to obtain accurate and robust estimates of BP, the proposed model-based BP estimation methods are fused by computing the weighted arithmetic mean of their estimates. With fusion of the proposed methods, the mean absolute error (MAE) in estimation of systolic and diastolic pressures is 4.40 and 3.00 mmHg, respectively, relative to the Food and Drug Administration-approved Omron monitor. In addition, the proposed feature-based neural network was compared with auscultatory measurements by trained observers, giving MAEs of 6.28 and 5.73 mmHg in estimation of systolic and diastolic pressures, respectively. The proposed models thus show promise toward developing robust BP estimation methods.
16

Amdouni, Asma. "Surveillance statistique du coefficient de variation dans un contexte de petites séries." Nantes, 2015. http://archive.bu.univ-nantes.fr/pollux/show.action?id=dcf36868-32b2-41d6-916b-f9533ee12902.

Abstract:
Statistical Process Control (SPC) is a method of quality control based on statistics and used to monitor production. Monitoring the coefficient of variation is an effective SPC approach when the process mean μ and standard deviation σ are not constant but their ratio is. Until now, research had not investigated monitoring of the coefficient of variation for short production runs. Viewed from this perspective, this thesis proposes new methods to monitor the coefficient of variation for a finite-horizon production and investigates the properties (in terms of the Truncated Run Length) of several control charts for the coefficient of variation in a short-run context with known parameters: the one-sided Shewhart chart, charts with supplementary run rules, and the VSI and VSS charts. The performance of each control chart is evaluated through new statistical performance measures developed specifically for the short-run context, and the optimal chart parameters are systematically computed. An empirical validation of the results is carried out on real industrial processes.
17

Kamanu, Timothy Kevin Kuria. "Location-based estimation of the autoregressive coefficient in ARX(1) models." Thesis, University of the Western Cape, 2006. http://etd.uwc.ac.za/index.php?module=etd&action=viewtitle&id=gen8Srv25Nme4_9551_1186751947.

Abstract:

In recent years, two estimators have been proposed to correct the bias exhibited by the least-squares (LS) estimator of the lagged-dependent-variable (LDV) coefficient in dynamic regression models when the sample is finite. They have been termed 'mean-unbiased' and 'median-unbiased' estimators. Relative to other similar procedures in the literature, the two location-based estimators have the advantage that they offer an exact and uniform methodology for LS estimation of the LDV coefficient in a first-order autoregressive model with or without exogenous regressors, i.e. ARX(1).

However, no attempt has been made to accurately establish and/or compare the statistical properties among these estimators, or relative to those of the LS estimator, when the LDV coefficient is restricted to realistic values. Neither has there been an attempt to compare their performance in terms of mean squared error (MSE) when various forms of the exogenous regressors are considered. Furthermore, only implicit confidence intervals have been given for the 'median-unbiased' estimator; explicit confidence bounds that are directly usable for inference are not available for either estimator. In this study a new estimator of the LDV coefficient is proposed: the 'most-probably-unbiased' estimator. Its performance properties vis-a-vis the existing estimators are determined and compared when the parameter space of the LDV coefficient is restricted. In addition, the following new results are established: (1) an explicit computable form for the density of the LS estimator is derived for the first time, and an efficient method for its numerical evaluation is proposed; (2) the exact bias, mean, median and mode of the distribution of the LS estimator are determined in three specifications of the ARX(1) model; (3) the exact variance and MSE of the LS estimator are determined; (4) the standard errors associated with the determination of the same quantities when simulation rather than numerical integration is used are established, and the methods are compared in terms of computational time and effort; (5) an exact method of evaluating the density of the three estimators is described; (6) their exact bias, mean, variance and MSE are determined and analysed; and finally, (7) a method of obtaining explicit exact confidence intervals from the distribution functions of the estimators is proposed.

The discussion and results show that the estimators are still biased in the usual sense, 'in expectation'. However, the bias is substantially reduced compared to that of the LS estimator. The findings are important in the specification of time-series regression models, point and interval estimation, decision theory, and simulation.
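A stylized sketch of the median-unbiased idea that the location-based estimators build on: simulate the median function of the LS estimator of the autoregressive coefficient over a grid of true values, then invert it at the observed estimate. This illustrates the principle only, not the thesis's exact procedure:

    import numpy as np

    rng = np.random.default_rng(0)

    def ls_ar1(y):
        """LS estimator of rho in y_t = rho*y_{t-1} + e_t."""
        return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

    def median_ls(rho, n=50, reps=1500):
        """Monte Carlo median of the LS estimator at a given true rho."""
        est = np.empty(reps)
        for r in range(reps):
            e = rng.standard_normal(n)
            y = np.empty(n)
            y[0] = e[0]
            for t in range(1, n):
                y[t] = rho * y[t - 1] + e[t]
            est[r] = ls_ar1(y)
        return np.median(est)

    grid = np.linspace(0.0, 0.99, 34)
    med = np.array([median_ls(r) for r in grid])   # increasing median function

    rho_hat_ls = 0.70                              # observed LS estimate (hypothetical)
    rho_mu = np.interp(rho_hat_ls, med, grid)      # median-unbiased estimate by inversion
    print(f"LS = {rho_hat_ls:.2f} -> median-unbiased ~ {rho_mu:.2f}")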
18

Wang, Luqiang. "Contributions to estimation of measures for assessing rater reliability." Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/44053.

Abstract:
Statistics
Ph.D.
Reliability measures have been well studied over many years, beginning with an entire chapter devoted to intraclass correlation in the first edition of Fisher (1925). Such measures have been thoroughly studied for two-factor models. This dissertation, motivated by a medical research problem, extends point and confidence interval estimation of both the intraclass correlation coefficient and the interrater reliability coefficient to models containing three crossed random factors: subjects, raters and occasions. The intraclass correlation coefficient is used when decisions are made on an absolute basis with raters' scores, while the interrater reliability coefficient is defined for decisions made on a relative basis. The estimation is conducted using both ANOVA and MCMC methods, and the results from the two methods are compared. The MCMC method is preferred for analyses of small data sets when ICC values are high. In addition, the bias of the estimator of the intraclass correlation coefficient in the one-way random effects model is evaluated.
Temple University--Theses
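A minimal sketch of the classic two-way ANOVA route to an intraclass correlation (Shrout-Fleiss ICC(2,1), absolute agreement, single rater); the thesis extends this type of computation to three crossed random factors. The rating matrix is hypothetical:

    import numpy as np

    Y = np.array([[9., 2., 5., 8.],      # rows = subjects, cols = raters (hypothetical scores)
                  [6., 1., 3., 2.],
                  [8., 4., 6., 8.],
                  [7., 1., 2., 6.],
                  [10., 5., 6., 9.],
                  [6., 2., 4., 7.]])
    n, k = Y.shape
    m = Y.mean()
    rows, cols = Y.mean(axis=1), Y.mean(axis=0)

    MSR = k * np.sum((rows - m)**2) / (n - 1)                               # subjects
    MSC = n * np.sum((cols - m)**2) / (k - 1)                               # raters
    MSE = np.sum((Y - rows[:, None] - cols[None, :] + m)**2) / ((n - 1) * (k - 1))

    icc21 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)
    print(f"ICC(2,1) = {icc21:.3f}")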
19

Archer, Robert Joseph 1957. "Effects of spacial variation of the thermal coefficient of expansion on optical surfaces." Thesis, The University of Arizona, 1988. http://hdl.handle.net/10150/276887.

Abstract:
The deformation of a mirror's optical surface due to a spatial variation of the coefficient of thermal expansion is examined. Four types of variations of the coefficient of thermal expansion are studied; these represent variations which result from typical manufacturing and/or fabrication processes. Equations describing the deformations resulting from the variations in the coefficient of thermal expansion are derived for some of the cases. Deformations due to more complex variations in the coefficient of thermal expansion are developed empirically using data generated by the finite-element method.
20

Beale, James H. "Internal flow subjected to an axial variation of the external heat transfer coefficient." Thesis, Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/91162.

Abstract:
A theoretical investigation of internal flow subjected to an axial variation of the external convection coefficient is presented. Since the variable boundary condition parameter causes the problem to become nonseparable, conventional techniques do not apply. Instead, the Green's function technique is used to convert the governing partial differential equations into a singular Volterra integral equation for the temperature of the fluid at the wall. The integral equation is solved numerically by the trapezoid rule with the aid of a singularity subtraction procedure. The solution methodology is developed in terms of a fully turbulent flow, which is shown to contain fully laminar and slug flow as special cases. Before examining the results generated by numerical solution of the integral equation, a thorough study is made of each of the building blocks required in the solution procedure. A comparison of the respective dimensionless velocity profiles and dimensionless total diffusivities for each of the flow models is presented. Next, an analysis of the eigenvalue problem for each flow model is presented, with consideration given to the normalized eigenfunctions and the eigenvalues themselves. Finally, the singular nature of the Green's function is examined, showing the effect of the parameters Ho, Re and Pr. The technique is applied to study the heat transfer from a finned tube. A parameter study is presented to examine the effects of the external finning and the flow model. The effect of external finning is examined through specific variations of the external convection coefficient, while the flow model is selected through the velocity profile and eddy diffusivity. In examining turbulent flow, the effects of the parameters Re and Pr are considered.
M.S.
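A minimal sketch of the trapezoid-rule marching scheme for a Volterra integral equation of the second kind, u(x) = f(x) + int_0^x K(x,s) u(s) ds, shown here with a smooth kernel; the thesis additionally handles a singular kernel via singularity subtraction:

    import numpy as np

    def volterra_trapezoid(f, K, x):
        """Solve u(x) = f(x) + int_0^x K(x,s) u(s) ds on a uniform grid x
        by marching forward with the trapezoid rule."""
        h = x[1] - x[0]
        u = np.empty(len(x))
        u[0] = f(x[0])
        for i in range(1, len(x)):
            s = h * (0.5 * K(x[i], x[0]) * u[0]
                     + sum(K(x[i], x[j]) * u[j] for j in range(1, i)))
            u[i] = (f(x[i]) + s) / (1.0 - 0.5 * h * K(x[i], x[i]))
        return u

    x = np.linspace(0.0, 1.0, 101)
    u = volterra_trapezoid(lambda t: 1.0, lambda xi, s: xi - s, x)
    # u = 1 + int_0^x (x-s) u ds has exact solution cosh(x); cosh(1) = 1.5431
    print(u[-1])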
21

Bederman, S. Samuel. "Estimation methods in random coefficient regression for continuous and binary longitudinal data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0004/MQ29203.pdf.

22

Peterson, Eric W. "Tire-Road Friction Coefficient Estimation Using a Multi-scale, Physics-based Model." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/51148.

Abstract:
The interaction between a tire and road surface is of critical importance, as the motion of a car in both transient and steady-state maneuvers is predicated on the friction forces generated at the tire-road interface. A general method for predicting friction coefficients for an arbitrary asphalt pavement surface would be an invaluable engineering tool for designing many vehicle safety and performance features, for tire design, and for improving the asphalt-aggregate mixtures used for pavement surfaces by manipulating texture. General, physics-based methods for predicting friction are incredibly difficult, if not impossible, to realize. However, for the specific case of rubber sliding across a rough surface, the primary physical mechanisms responsible for friction, notably rubber hysteresis, can be modeled. The objective of the subsequent research is to investigate one such physics model, referred to as Persson theory, and implement the constitutive equations in MATLAB® code to be solved numerically. The model uses high-resolution surface measurements, along with some of the physical properties of rubber, as inputs and outputs the kinetic friction coefficient. The Persson model was successfully implemented in MATLAB®, and high-resolution measurements (from optical microscopy and imaging software) were obtained for a variety of surfaces. Friction coefficients were calculated for each surface and compared with measured friction values obtained from British Pendulum testing. The accuracy and feasibility of the Persson model are discussed, and the results are compared with a simpler, semi-empirical indenter model. A brief discussion of the merits and drawbacks of the Persson model is offered, along with recommendations for future research based on the information acquired from the present study.
Master of Science
23

He, Xiaoqi. "Essays in identification and estimation of duration models and varying coefficient models." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/57057.

Abstract:
Chapter 1 studies the identification of a preemption game where the timing decisions are expressed as mixed hitting times (MHT). It considers a preemption game with private information, where agents choose the optimal time to invest, with payoffs driven by geometric Brownian motion. The game delivers the optimal timing of investment based on a threshold rule that depends on both the observed covariates and the unobserved heterogeneity. The timing decision rules specify durations before the irreversible investment as the first time the geometric Brownian motion hits a heterogeneous threshold, which fits the MHT framework. As a result, identification strategies for MHT can be used for a first-stage identification analysis of the model primitives. Estimation results are presented in a Monte Carlo simulation study. Chapter 2 studies the identification of a real options game similar to that of Chapter 1, but with complete information. Because of the multiple-equilibria problems associated with the complete information game, point identification is only achieved for the duopoly case. This simple complete information game delivers two possible kinds of equilibria, and we can partition the parameter space of the unobserved investment cost accordingly. We also show a non-identification result for a three-player case in Appendix B.4. Chapter 3 studies the estimation of a varying coefficient model without a complete data set. We use a nearest-matching method to combine two incomplete samples to obtain a complete data set. We demonstrate that the simple local linear estimator of the varying coefficient model using the combined sample is inconsistent and, in general, converges to its probability limit at a rate slower than the parametric rate. We propose a bias-corrected estimator and investigate its asymptotic properties. In particular, the bias-corrected estimator attains the parametric convergence rate if the number of matching variables is one. Monte Carlo simulation results are consistent with our findings.
Arts, Faculty of
Vancouver School of Economics
Graduate
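A minimal sketch of the mixed-hitting-time building block of Chapter 1: simulate geometric Brownian motion and record the first time each path crosses a threshold (fixed here; heterogeneous in the model). All parameter values are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, x0, thresh = 0.05, 0.2, 1.0, 1.5   # GBM drift/volatility, start, threshold
    dt, T, npaths = 0.01, 50.0, 10_000

    nsteps = int(T / dt)
    hit = np.full(npaths, np.inf)                 # first hitting times (inf = never hit by T)
    x = np.full(npaths, x0)
    for step in range(1, nsteps + 1):
        z = rng.standard_normal(npaths)
        x *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        newly = (x >= thresh) & np.isinf(hit)
        hit[newly] = step * dt
    print(f"P(hit by T) = {np.isfinite(hit).mean():.2f}, "
          f"median hitting time = {np.median(hit[np.isfinite(hit)]):.1f}")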
24

Ollikainen, Kati. "PARAMETER ESTIMATION IN LINEAR REGRESSION." Doctoral diss., University of Central Florida, 2006. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4138.

Abstract:
Today, increasing amounts of data are available for analysis and, often, for resource allocation. One method of analysis is linear regression, which uses the least squares estimation technique to estimate a model's parameters. This research investigated, from a user's perspective, the ability of linear regression to estimate the parameters' confidence intervals at the usual 95% level for medium-sized data sets. A controlled simulation environment with known data characteristics (clean data, bias and/or multicollinearity present) was used to show that underlying problems exist with confidence intervals not including the true parameter (even though the variable was selected). The Elder/Pregibon rule was used for variable selection. A comparison of the bootstrap percentile and BCa confidence intervals was made, as well as an investigation of adjustments to the usual 95% confidence intervals based on the Bonferroni and Scheffé multiple comparison principles. The results show that linear regression has problems capturing the true parameters in the confidence intervals for the sample sizes considered, that the bootstrap intervals perform no better than linear regression, and that the Scheffé method is too wide for any application considered. The Bonferroni adjustment is recommended for larger sample sizes and when the t-value for a selected variable is about 3.35 or higher. For smaller sample sizes, all methods show problems with type II errors resulting from confidence intervals being too wide.
Ph.D.
Department of Industrial Engineering and Management Systems
Engineering and Computer Science
Industrial Engineering and Management Systems
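A minimal sketch of the Bonferroni adjustment recommended above: with m selected coefficients, each interval uses alpha/m in place of alpha. Data are simulated and the OLS formulas are standard:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, m = 60, 3                                    # observations, slope coefficients
    X = np.column_stack([np.ones(n), rng.normal(size=(n, m))])
    beta_true = np.array([1.0, 2.0, 0.0, -1.5])
    y = X @ beta_true + rng.normal(scale=1.0, size=n)

    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])           # residual variance estimate
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

    alpha = 0.05
    for label, a in [("usual 95%", alpha), ("Bonferroni", alpha / m)]:
        t = stats.t.ppf(1 - a / 2, n - X.shape[1])
        print(label, np.column_stack([beta - t * se, beta + t * se]).round(2))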
25

Delattre, Sylvain. "Estimation du coefficient de diffusion d'un processus de diffusion en présence d'erreurs d'arrondi." Paris 6, 1997. http://www.theses.fr/1997PA066297.

Abstract:
Consider a real-valued diffusion process X whose diffusion coefficient depends on an unknown real parameter. We study the estimation of this parameter when a trajectory of X is sampled at a large number n of equally spaced times between time 0 and time 1, and when, in addition, the observations are subject to round-off errors whose order of magnitude α is small. We first describe the behaviour, as n tends to infinity and α tends to 0, of the approximate quadratic variation of X computed from these observations (and of other similar functionals): under the assumption that the sequence α√n admits a (possibly infinite) limit, and after normalization, convergence in probability is proved together with an associated central limit theorem. An essential tool is the fact that if Y is a real random variable with a smooth density, then the fractional part of Y/α is approximately uniformly distributed. Thanks to these results, a consistent sequence of estimators is constructed, converging at the following rate: √n if α√n tends to a finite limit, and 1/α if α√n tends to infinity. In particular, if α√n tends to 0, these estimators are optimal in the class of estimators based on observations without round-off errors. Moreover, when α√n tends to a finite limit, the corresponding sequence of statistical models is locally asymptotically mixed normal (LAMN property), with rate √n. The main difficulty compared to the case without round-off errors is that the observations no longer form a Markov chain.
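A minimal sketch of the rounding effect at the heart of the problem: the realized quadratic variation computed from observations rounded to a grid of size alpha is distorted, and the distortion is governed by how alpha compares with 1/sqrt(n). The path below is a simulated illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n, sigma = 10_000, 0.5
    w = sigma * np.sqrt(1.0 / n) * rng.standard_normal(n)
    x = np.cumsum(w)                          # diffusion path on [0, 1], true QV = sigma^2

    for alpha in [0.0, 0.001, 0.01, 0.05]:    # round-off levels
        xr = np.round(x / alpha) * alpha if alpha > 0 else x
        qv = np.sum(np.diff(xr) ** 2)         # realized quadratic variation of rounded data
        print(f"alpha = {alpha:<6} realized QV = {qv:.4f} (true sigma^2 = {sigma**2})")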
26

Chen, Yixin. "Statistical inference for varying coefficient models." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/17690.

Abstract:
Doctor of Philosophy
Department of Statistics
Weixin Yao
This dissertation contains two projects related to varying coefficient models. The traditional least-squares-based kernel estimates of the varying coefficient model lose some efficiency when the error distribution is not normal. In the first project, we propose a novel adaptive estimation method that can adapt to different error distributions, and we provide an efficient EM algorithm to implement the proposed estimation. The asymptotic properties of the resulting estimator are established. Both simulation studies and real data examples are used to illustrate the finite sample performance of the new estimation procedure. The numerical results show that the gain of the adaptive procedure over least squares estimation can be quite substantial for non-Gaussian errors. In the second project, we propose a unified inference for sparse and dense longitudinal data in time-varying coefficient models. The time-varying coefficient model is a special case of the varying coefficient model and is very useful in longitudinal/panel data analysis. A mixed-effects time-varying coefficient model is considered to account for the within-subject correlation in longitudinal data. We show that when the kernel smoothing method is used to estimate the smooth functions in the time-varying coefficient model for sparse or dense longitudinal data, the asymptotic results in these two situations are essentially different, so a subjective choice between the sparse and dense cases may lead to wrong conclusions for statistical inference. In order to solve this problem, we establish a unified self-normalized central limit theorem, based on which a unified inference is proposed without deciding whether the data are sparse or dense. The effectiveness of the proposed unified inference is demonstrated through a simulation study and a real data application.
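A minimal sketch of the kernel-weighted local linear fit that underlies such estimators, for y_i = β(t_i)·x_i + ε_i: at each grid point a weighted least-squares problem is solved in (β, β'). The data, Gaussian kernel and fixed bandwidth are all assumptions for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    t = np.sort(rng.uniform(0, 1, n))
    x = rng.normal(1.0, 0.5, n)
    beta = np.sin(2 * np.pi * t)                   # true varying coefficient
    y = beta * x + 0.3 * rng.standard_normal(n)

    def local_linear_beta(t0, h=0.08):
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)     # Gaussian kernel weights
        X = np.column_stack([x, x * (t - t0)])     # local model: (b0 + b1*(t - t0)) * x
        coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        return coef[0]

    grid = np.linspace(0.05, 0.95, 10)
    print(np.round([local_linear_beta(g) for g in grid], 2))
    print(np.round(np.sin(2 * np.pi * grid), 2))   # compare with the truth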
APA, Harvard, Vancouver, ISO, and other styles
27

Fike, Gregory Michael. "Using Infrared Thermography to Image the Drying of Polymer Surfaces." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4808.

Full text
Abstract:
During the drying of a surface, liquid evaporation keeps the temperature relatively constant, due to evaporative cooling. As drying nears completion the liquid film begins to break, exposing areas that are no longer cooled through evaporation and that begin to heat. Although this heating can be measured with an infrared (IR) camera, the sensitivity is often not sufficient to recognize the point at which the film breaks. Complicating the measurement is the change in emissivity that commonly occurs as objects dry. The sensitivity and emissivity issues can be addressed by analyzing the temperature in the area of interest and computing the coefficient of variation (COV) of the temperature. This technique is compared to temperature and standard deviation measurements made with an IR camera, and the COV technique is shown to be superior for determining when the liquid film breaks. The film breakage point is found to vary with temperature and material roughness in two industrially significant applications: the drying of wood flakes and the drying of polymer films. Film breakage in wood flakes is related to detrimental finished-quality problems and also to emission problems. The rate at which an adhesive dries affects the roughness of the polymer film and, subsequently, the bond strength. The COV technique is used to predict the roughness of the finished polymer film, and it allows the drying of a liquid film to be visualized in a way that has been previously unreported.
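A minimal sketch of the COV computation described above, with synthetic frames standing in for IR camera data:

    import numpy as np

    rng = np.random.default_rng(2)
    # Synthetic 64x64 "thermal frames" whose spatial spread grows over time,
    # mimicking a breaking liquid film; real input would come from an IR camera.
    frames = [20.0 + rng.normal(0.0, 0.1 + 0.05 * t, size=(64, 64)) for t in range(10)]

    def cov_of_roi(frame, roi=np.s_[16:48, 16:48]):
        region = frame[roi]                    # region of interest
        return region.std() / region.mean()    # coefficient of variation

    cov_series = [cov_of_roi(f) for f in frames]
    print([round(c, 4) for c in cov_series])   # rising COV flags film breakage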
APA, Harvard, Vancouver, ISO, and other styles
28

Manikas, Theodoros. "Robust volatility estimation for multiscale diffusions with zero quadratic variation." Thesis, University of Warwick, 2018. http://wrap.warwick.ac.uk/111074/.

Full text
Abstract:
This thesis is concerned with the problem of volatility estimation in the context of multiscale diffusions. In particular, we consider data that exhibit two widely separated time scales. Fast/slow systems of SDEs that admit a homogenized SDE are employed to model such data. The problem one is confronted with is the mismatch between the multiscale data and the homogenized SDE. In this context, we examine whether, by using the multiscale data, the diffusion coefficient of the homogenized SDE can be estimated. Our proposed estimator consists of subsampling the initial data, retaining only the local extremals, to overcome the issue associated with the underlying model. We provide both theoretical and numerical heuristics suggesting that our proposed estimator, when applied to multiscale data of bounded variation, is asymptotically unbiased for the volatility coefficient of the homogenized SDE. Furthermore, for the particular example of a multiscale Ornstein-Uhlenbeck process, the numerical results indicate that the L2-error of our estimator is very small. Moreover, we illustrate situations where the proposed estimator can also be used for multiscale data with bounded non-zero quadratic variation.
APA, Harvard, Vancouver, ISO, and other styles
29

Kane, David Alan. "Penetration Depth Variation in Atomic Layer Deposition on Multiwalled Carbon Nanotube Forests." BYU ScholarsArchive, 2018. https://scholarsarchive.byu.edu/etd/7124.

Full text
Abstract:
Atomic Layer Deposition (ALD) of Al2O3 on tall multiwalled carbon nanotube forests shows concentration variation with depth in the form of discrete steps. While ALD is capable of extremely conformal deposition in high-aspect-ratio structures, decreasing penetration depth has been observed over multiple thermal ALD cycles on 1.3 mm tall multiwalled carbon nanotube forests. SEM imaging with Energy Dispersive X-ray Spectroscopy elemental analysis shows steps of decreasing intensity corresponding to decreasing concentrations of Al2O3. A study of these steps suggests that they are produced by a combination of diffusion-limited delivery of precursors and increasing precursor adsorption site density as discrete nuclei grow during the ALD process. This conceptual model has been applied to modify literature models for ALD penetration in high-aspect-ratio structures, allowing several parameters to be extracted from the experimental data. The Knudsen diffusion constant for trimethylaluminum (TMA) in these carbon nanotube forests has been found to be 0.3 cm²·s⁻¹. From the profile of the Al2O3 concentration at the steps, the sticking coefficient of TMA on Al2O3 was found to be 0.003.
APA, Harvard, Vancouver, ISO, and other styles
30

Olvera, Astivia Oscar Lorenzo. "On the estimation of the polychoric correlation coefficient via Markov Chain Monte Carlo methods." Thesis, University of British Columbia, 2013. http://hdl.handle.net/2429/44349.

Full text
Abstract:
Bayesian statistics is an alternative approach to traditional frequentist statistics that is rapidly gaining adherents across different scientific fields. Although initially only accessible to statisticians or mathematically sophisticated data analysts, advances in modern computational power are helping to make this new paradigm approachable to the everyday researcher, and this dissemination is helping open doors to problems that have remained unsolvable, or whose solution was extremely complicated, under classical statistics. In spite of this, many researchers in the behavioural or educational sciences are either unaware of this new approach or only vaguely familiar with some of its basic tenets. The primary purpose of this thesis is to take a well-known problem in psychometrics, the estimation of the polychoric correlation coefficient, and solve it using Bayesian statistics through the method developed by Albert (1992). Through the use of computer simulations this method is compared to traditional maximum likelihood estimation across various sample sizes, skewness levels and numbers of discretisation points for the latent variable, highlighting the cases where the Bayesian approach is superior, inferior or equally effective to the maximum likelihood approach. Another issue investigated is a sensitivity analysis of the prior probability distributions, in which a skewed (bivariate log-normal) and a symmetric (bivariate normal) prior are used to calculate the polychoric correlation coefficient on data with varying degrees of skewness, demonstrating to the reader how changing the prior distribution for certain kinds of data helps or hinders the estimation process. The most important results of these studies are discussed, as well as future implications for the use of Bayesian statistics in psychometrics.
APA, Harvard, Vancouver, ISO, and other styles
31

Liu, Yingxue. "Estimation of circadian parameters and investigation in cyanobacteria via semiparametric varying coefficient periodic models." [College Station, Tex.]: Texas A&M University, 2007. http://hdl.handle.net/1969.1/ETD-TAMU-1628.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Li, Xiang. "Maximum rank correlation estimation for generalized varying-coefficient models with unknown monotonic link function." Thesis, University of York, 2016. http://etheses.whiterose.ac.uk/15574/.

Full text
Abstract:
Generalized varying coefficient models (GVCMs) form a family of statistical tools for exploring associations between covariates and response variables in real-world questions. Researchers frequently fit GVCMs with a particular link transformation function, but it is vital to recognize that fitting a model with the wrong link can yield extremely misleading conclusions. This thesis bypasses the actual form of the link function and explores a class of GVCMs whose link functions are only assumed to be monotonic. With monotonicity secured, the thesis makes use of the maximum rank correlation idea and proposes a maximum rank correlation estimation (MRCE) method for GVCMs. In addition to introducing MRCE, the thesis extends the approach to generalized semi-varying coefficient models (GSVCMs), panel data, simulations and empirical studies.
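A minimal sketch of the classical maximum rank correlation objective underlying MRCE (Han's estimator for a simple single-index model, not the thesis' GVCM extension; all values are illustrative):

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(4)
    n = 150
    x = rng.normal(size=(n, 2))
    beta_true = np.array([1.0, -0.5])
    y = np.exp(x @ beta_true) + 0.1 * rng.normal(size=n)   # unknown monotone link

    def rank_corr(beta):
        """Fraction of pairs on which y and the index x @ beta agree in order."""
        s = x @ beta
        return np.mean([(y[i] > y[j]) == (s[i] > s[j])
                        for i, j in combinations(range(n), 2)])

    # The scale of beta is not identified, so fix beta[0] = 1 and search beta[1].
    grid = np.linspace(-2.0, 1.0, 61)
    best = max(grid, key=lambda b: rank_corr(np.array([1.0, b])))
    print(best)   # should land near the true value -0.5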
APA, Harvard, Vancouver, ISO, and other styles
33

Müller, Werner, and Michaela Nettekoven. "A Panel Data Analysis: Research & Development Spillover." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1998. http://epub.wu.ac.at/620/1/document.pdf.

Full text
Abstract:
Panel data analysis has become an important tool in applied econometrics and the respective statistical techniques are well described in several recent textbooks. However, for an analyst using these methods there remains the task of choosing a reasonable model for the behavior of the panel data. Of special importance is the choice between so-called fixed and random coefficient models. This choice can have a crucial effect on the interpretation of the analyzed phenomenon, which is demonstrated by an application on research and development spillover. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
34

Coutin, Laure. "Système cad-lag en observation incomplète : estimation des coefficients du modèle ; application du calcul des variations stochastiques à l'étude de la densité du filtre (existence, régularité, unicité)." Orléans, 1994. http://www.theses.fr/1994ORLE2016.

Full text
Abstract:
This work studies a càdlàg system under incomplete observation. It consists of two parts. The first part is devoted to estimating the coefficients of the signal when it is driven by a Brownian motion and two independent Poisson processes. The variance matrix, the Poisson processes and the jump sizes are estimated from the approximation of the quadratic variation; the drift and the intensities of the Poisson processes are estimated by maximum likelihood. The estimation is validated by showing that the simulated process converges in probability to the process under study. The second part is devoted to a filtering problem in which the signal and observation processes are càdlàg, and more particularly to the existence, uniqueness and regularity of the density of the filter. The intensities of the jump processes are functions of the signal. First, the Zakai and Kushner-Stratonovich equations are established using the reference probability method. The main tool for proving the absolute continuity of the unnormalized filter with respect to Lebesgue measure is the stochastic calculus of variations. With additional hypotheses, this tool then shows that the resulting density takes values in the set of regular functions. Finally, applying results on stochastic partial differential equations from the continuous case to the equation dual to the Zakai equation, between the jumps of the observation, yields the existence of a unique solution to the Zakai equation.
APA, Harvard, Vancouver, ISO, and other styles
35

Neukermans, Griet. "Les particules en suspension dans les eaux côtières turbides : estimation par mesures optique in situ et depuis l'espace." Thesis, Littoral, 2012. http://www.theses.fr/2012DUNK0406/document.

Full text
Abstract:
Particles suspended in seawater include sediments, phytoplankton, zooplankton, bacteria, viruses, and detritus, and are collectively referred to as suspended particulate matter, SPM. In coastal waters, SPM is transported over long distances and in the water column by biological, tide- or wind-driven advection and resuspension processes, thus varying strongly in time and space. These strong dynamics challenge the traditional measurement of the concentration of SPM, [SPM], through filtration of seawater sampled from ships. Estimation of [SPM] from sensors recording optical scattering allows larger temporal or spatial scales to be covered. So-called ocean colour satellites, for example, have been used for the mapping of [SPM] on a global scale since the late 1970s. These polar-orbiting satellites typically provide one image per day for the North Sea area. However, the sampling frequency of these satellites is a serious limitation in coastal waters where [SPM] changes rapidly during the day due to tides and winds. Optical instruments installed on moored platforms or on underwater vehicles can be operated continuously, but their spatial coverage is limited. This work aims to advance in situ and space-based optical techniques for [SPM] retrieval by investigating the natural variability in the relationship between [SPM] and light scattering by particles, and by investigating whether the European geostationary meteorological SEVIRI sensor, which provides imagery every 15 minutes, can be used for the mapping of [SPM] in the southern North Sea. Based on an extensive in situ dataset, we show that [SPM] is best estimated from red light scattered in the back directions (backscattering). Moreover, the relationship between [SPM] and particulate backscattering is driven by the organic/inorganic composition of suspended particles, offering opportunities to improve [SPM] retrieval algorithms. We also show that SEVIRI successfully retrieves [SPM] and related parameters such as turbidity and the vertical light attenuation coefficient in turbid waters. Even though uncertainties are considerable in clear waters, this is a remarkable result for a meteorological sensor designed to monitor clouds and ice, much brighter targets than the sea! On cloud-free days, tidal variability of [SPM] can now be resolved by remote sensing for the first time, offering new opportunities for the monitoring of turbidity and for ecosystem modelling. In June 2010 the first geostationary ocean colour sensor was launched into space, providing hourly multispectral imagery of Korean waters. Other geostationary ocean colour sensors are likely to become operational in the (near?) future over the rest of the world's seas. This work allows us to prepare for the coming of geostationary ocean colour satellites, which are expected to revolutionize optical oceanography.
APA, Harvard, Vancouver, ISO, and other styles
36

Devkota, Jay P. "Variation of Manning’s Roughness Coefficient with Diameter, Discharge, Slope and Depth in Partially Filled HDPE Culverts." Youngstown State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ysu1340991250.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Shields, Kelly Jean. "The Importance of Individual and Population Variation to Human Stature Estimation." The University of Montana, 2007. http://etd.lib.umt.edu/theses/available/etd-05292007-145755/.

Full text
Abstract:
Human stature estimation is a central part of forensic anthropological investigation; it is one of several factors used to identify unknown individuals. The statistical relationship between body length and body segment length allows long bone lengths from an unidentified individual to be used in a linear regression equation to estimate living stature. These linear regression equations are often formulated from a data set of an entirely different population. This research explores the necessity for the unknown individual to be similar on a number of points to the known population from which the equation was built. Populations are highly variable, and one or two equations should not be applied to every population. The sample examined consists of 22 Hispanic males with known stature and long bone lengths, drawn from the Forensic Data Bank. These data were applied to some of the most commonly used equations today, including Trotter and Gleser's Korean War equations, the Hispanic and American White equations from FORDISC, and Genovés' Mesoamerican equations. Statistical analysis revealed the need for more data collection from Central and South American populations; if this were done, a greater number of unknown individuals could be identified and their remains returned to their families.
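A minimal sketch of the regression approach described above, using synthetic long-bone data rather than Forensic Data Bank values:

    import numpy as np

    rng = np.random.default_rng(9)
    femur = rng.normal(45.0, 2.0, 60)                        # femur lengths (cm)
    stature = 2.3 * femur + 62.0 + rng.normal(0, 3.0, 60)    # synthetic statures (cm)

    slope, intercept = np.polyfit(femur, stature, 1)         # least squares fit
    print(round(intercept + slope * 47.0, 1))                # stature for a 47 cm femur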
APA, Harvard, Vancouver, ISO, and other styles
38

Ysusi, Mendoza Carla Mariana. "Estimation of the variation of prices using high-frequency financial data." Thesis, University of Oxford, 2005. http://ora.ox.ac.uk/objects/uuid:1b520271-2a63-428d-b5a0-e7e9c4afdc66.

Full text
Abstract:
When high-frequency data is available, realised variance and realised absolute variation can be calculated from intra-day prices. In the context of a stochastic volatility model, realised variance and realised absolute variation estimate the integrated variance and the integrated spot volatility respectively. A central limit theory enables us to do filtering and smoothing using model-based and model-free approaches in order to improve the precision of these estimators. When the log-price process involves a finite activity jump process, realised variance estimates the quadratic variation of both the continuous and the jump components. Other consistent estimators of integrated variance can be constructed on the basis of realised multipower variation, i.e., realised bipower, tripower and quadpower variation. These objects are robust to jumps in the log-price process; therefore, given adequate asymptotic assumptions, the difference between realised multipower variation and realised variance provides a tool to test for jumps in the process. Realised variance becomes biased in the presence of market microstructure effects, whereas realised bipower, tripower and quadpower variation are more robust in such a situation. Nevertheless there is always a trade-off between bias and variance: bias is due to market microstructure noise when sampling at high frequencies, and variance is due to the asymptotic assumptions when sampling at low frequencies. By subsampling and averaging realised multipower variation this effect can be reduced, thereby allowing for calculations at higher frequencies.
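A minimal sketch of realised variance and realised bipower variation on simulated intra-day returns, illustrating the jump-robustness described above (parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    m = 288                                          # e.g. 5-minute returns in a day
    r = 0.01 * rng.normal(size=m) / np.sqrt(m)       # diffusive (continuous) returns
    r[140] += 0.02                                   # one jump during the day

    rv = np.sum(r ** 2)                              # realised variance (includes the jump)
    bpv = (np.pi / 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # realised bipower variation

    print(rv, bpv, rv - bpv)   # the difference picks up the squared jump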
APA, Harvard, Vancouver, ISO, and other styles
39

Liu, Gang. "Nanostructure morphology variation modeling and estimation for nanomanufacturing process yield improvement." [Tampa, Fla] : University of South Florida, 2009. http://purl.fcla.edu/usf/dc/et/SFE0003185.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Lewis, Robert R. "Similarity Estimation with Non-Transitive LSH." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin162323979030229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Berg, Tobias [Verfasser]. "Non-parametric estimation of the diffusion coefficient of a branching diffusion with immigration / Tobias Berg." Mainz : Universitätsbibliothek Mainz, 2015. http://d-nb.info/1073275140/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Dhoot, Gaurav. "Estimation of eugenol diffusion coefficient in LLDPE using FTIR-ATR flow cell and HPLC techniques." Diss., Connect to online resource - MSU authorized users, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
43

Iacus, Stefano Maria. "Statistique semi-paramétrique pour un processus de diffusion avec coefficient de diffusion petit." Le Mans, 1999. http://www.theses.fr/1999LEMAA003.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Amiri, Saeid. "On the Application of the Bootstrap : Coefficient of Variation, Contingency Table, Information Theory and Ranked Set Sampling." Doctoral thesis, Uppsala universitet, Matematiska institutionen, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-159206.

Full text
Abstract:
This thesis deals with the bootstrap method. Three decades after the seminal paper by Bradley Efron, the horizons of this method still need more exploration. The research presented herein steps into different fields of statistics where the bootstrap method can be utilized as a fundamental statistical tool in almost any application. The thesis considers various statistical problems, explained briefly below. Bootstrap method: a comparison of the parametric and the nonparametric bootstrap of the variance is presented. The bootstrap of ranked set sampling (RSS) is dealt with, along with the wealth of theories and applications on the RSS bootstrap that exist nowadays; moreover, the performance of RSS in resampling is explored. Furthermore, the application of the bootstrap method to inference in contingency table tests is studied. Coefficient of variation: this part shows the capacity of the bootstrap for inference on the coefficient of variation, a task the asymptotic method does not perform very well. Information theory: there are few works on the study of information theory, especially on the inference of entropy; the papers included in this thesis pursue inference on entropy using the bootstrap method.
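A minimal sketch of the nonparametric bootstrap for the coefficient of variation, the estimand discussed above (values are illustrative):

    import numpy as np

    rng = np.random.default_rng(6)
    sample = rng.gamma(shape=4.0, scale=2.0, size=50)    # true CV = 1/sqrt(4) = 0.5

    def cv(x):
        return np.std(x, ddof=1) / np.mean(x)

    boot = np.array([cv(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(2000)])              # nonparametric bootstrap
    lo, hi = np.percentile(boot, [2.5, 97.5])            # percentile bootstrap CI
    print(round(cv(sample), 3), (round(lo, 3), round(hi, 3)))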
APA, Harvard, Vancouver, ISO, and other styles
45

Indralingam, Maheswaran. "Sequential estimation, parameter variation and predictive power of econometric market response models." Thesis, Lancaster University, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.255352.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Riahi, Mohamed Hédi. "Identification de paramètres hydrogéologiques dans un milieu poreux." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066741/document.

Full text
Abstract:
We identify simultaneously the storage and hydraulic transmissivity coefficients in groundwater flow governed by a linear parabolic equation. Both parameters are assumed to be piecewise constant functions in space. The unknowns are the coefficient values as well as the geometry of the zones where these coefficients are constant. The problem is formulated as minimizing a least-squares function measuring the difference between the measurements and the corresponding quantities computed with the current parameter values. The main point of this work is to construct an adaptive parameterization technique guided by refinement indicators. Using refinement indicators, we build the parameterization iteratively, going from a one-zone parameterization to a parameterization with m zones, where m is an optimal value to identify. We distinguish the case where the two parameters share the same parameterization from the case where they have different parameterizations. To improve the resolution of the inverse problem, we incorporate a posteriori error estimates.
APA, Harvard, Vancouver, ISO, and other styles
47

Lee, Wooyong. "Kernel estimation of the drift coefficient of a diffusion process in the presence of measurement error." Thesis, University of British Columbia, 2014. http://hdl.handle.net/2429/46990.

Full text
Abstract:
Diffusion processes, a class of continuous-time stochastic processes, can be used to model time-series data observed at discrete time points. A diffusion process is completely characterized by two functions, called the drift coefficient and the diffusion coefficient. For the nonparametric estimation of these two functions, Bandi and Phillips (2003) proved consistency and asymptotic normality of Nadaraya-Watson kernel estimators of the drift and the diffusion coefficient. In some cases, we observe the time-series data with measurement error; for instance, it is well known that financial time-series data are observed with measurement errors (Zhou, 1996). For nonparametric estimation of the drift and diffusion coefficients in the presence of measurement error, some work has been done on the estimation of integrated volatility, which is the integral of the diffusion coefficient over a fixed period of time, but little work exists on the estimation of the drift and diffusion coefficients themselves. In this thesis, we focus on the estimation of the drift coefficient, and we propose a consistent and asymptotically normal Nadaraya-Watson type kernel estimator of the drift coefficient in the presence of measurement error.
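A minimal sketch of the Nadaraya-Watson drift estimator in the error-free case, in the spirit of Bandi and Phillips (2003); the thesis' measurement-error correction is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(7)
    n, dt = 50_000, 0.01
    theta, sigma = 1.0, 0.5               # Ornstein-Uhlenbeck: dX = -theta*X dt + sigma dW
    X = np.empty(n)
    X[0] = 0.0
    for i in range(n - 1):                # Euler-Maruyama simulation
        X[i + 1] = X[i] - theta * X[i] * dt + sigma * np.sqrt(dt) * rng.normal()

    def drift_hat(x0, h=0.2):
        """Nadaraya-Watson estimate of the drift at x0."""
        w = np.exp(-0.5 * ((X[:-1] - x0) / h) ** 2)     # Gaussian kernel weights
        return np.sum(w * np.diff(X)) / (dt * np.sum(w))

    print([round(drift_hat(x0), 2) for x0 in (-0.5, 0.0, 0.5)])  # near 0.5, 0.0, -0.5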
APA, Harvard, Vancouver, ISO, and other styles
48

Ho, Cheng-Yu, and 何承諭. "Method of Measuring Common-Mode Current Conversion Coefficient for Estimating Variation in Radiated Emission from Printed Circuit Board Components." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/97977793511870205264.

Full text
Abstract:
Doctor of Philosophy
National Sun Yat-sen University
Department of Electrical Engineering
Academic year 101 (2012–2013)
This dissertation proposes a novel measurement method that uses a network analyzer with a bulk current injection (BCI) probe, as used in standard electromagnetic susceptibility (EMS) testing, to estimate far-field radiated emissions from a printed circuit board (PCB). Radiated emission from PCBs is generally very complex and difficult to resolve. The proposed method is used to predict the common-mode radiated emission caused by the DC supply loops on a driver PCB of a thin film transistor-liquid crystal display (TFT-LCD) panel, and the predictions correlate highly with radiated emission measurements obtained for the TFT-LCD panel in a fully anechoic chamber (FAC). The proposed technique also successfully estimates the reduction of a specific peak in the radiated emission spectrum achieved by shielding the DC supply loops. Electromagnetic simulation and equivalent-circuit modeling approaches are developed to confirm the common-mode radiation mechanism. As operating frequencies reach the gigahertz range in RF PCBs, on-PCB microstrip components radiate far more efficiently than at lower frequencies. The proposed method can also be used to measure the common-mode current conversion coefficient of microstrip components in an RF PCB. Based on the proposed measurement method, far-field radiated emissions from microstrip components are obtained that correspond closely to measurements in a FAC. The proposed method also estimates the radiated emission reduction achieved by miniaturizing microstrip bandpass filters (BPFs). Full-wave electromagnetic simulation further demonstrates the effectiveness of the measurement method.
APA, Harvard, Vancouver, ISO, and other styles
49

JHONG, HAO-REN, and 鍾皓仁. "Monitoring the Coefficient of Variation Using a Double Sampling Coefficient of Variation Control Charts." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/hr5fu6.

Full text
Abstract:
Master's degree
National Yunlin University of Science and Technology
Department of Industrial Engineering and Management
Academic year 106 (2017–2018)
In recent years, control charts have been applied in clinical testing and in the agronomic industry, where means and variances are not fixed constants and the standard deviation increases or decreases in proportion to the mean. Conventional CV control charts, which monitor the coefficient of variation directly, are not well suited to detecting small to moderate shifts in quality characteristics. The most common way to increase the detection capability of a control chart is to increase the sample size, but this raises sampling costs and is therefore uneconomical. In this study, a double sampling plan is combined with the CV control chart for monitoring the CV. A genetic algorithm is used to find the chart parameter combination that minimizes the out-of-control average run length (ARL1) and average sample size (ASS1) for detecting process shifts. The results show that the double sampling CV (DS-CV) control chart detects large (100%), moderate (50%) and small (10%, 20%) shifts better than both the standard CV control chart and the variable sample size CV chart when the performance indicator is ARL1; performs comparably to those two charts when the performance indicator is ASS1; and, in terms of ARL1, detects shifts in the CV faster than the variable sample size CV chart.
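A minimal sketch of the double sampling decision logic described above; the limits and sample sizes are made up for illustration and are not the chart parameters obtained by the genetic algorithm:

    import numpy as np

    rng = np.random.default_rng(8)

    def cv(x):
        return np.std(x, ddof=1) / np.mean(x)

    def ds_cv_decision(draw, n1=5, n2=10, central=(0.05, 0.25), ctrl=0.35):
        s1 = draw(n1)
        cv1 = cv(s1)
        if central[0] <= cv1 <= central[1]:
            return "in control"            # first sample is conclusive
        if cv1 > ctrl:
            return "out of control"        # first sample is conclusive
        s2 = draw(n2)                      # inconclusive: take the second sample
        return "in control" if cv(np.concatenate([s1, s2])) <= ctrl else "out of control"

    stable = lambda k: rng.normal(50.0, 5.0, k)     # process CV around 0.10
    shifted = lambda k: rng.normal(50.0, 25.0, k)   # process CV shifted to around 0.50
    print(sum(ds_cv_decision(stable) == "out of control" for _ in range(100)),
          sum(ds_cv_decision(shifted) == "out of control" for _ in range(100)))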
APA, Harvard, Vancouver, ISO, and other styles
50

Chan, You-Ning, and 詹又寧. "Estimation of Retardance Coefficient." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/hjgmvf.

Full text
Abstract:
Master's degree
National Taipei University of Technology
Graduate Institute of Civil and Disaster Prevention Engineering
Academic year 95 (2006–2007)
An Acoustic Doppler Velocimeter (ADV) was adopted in this experimental study to investigate the characteristics of open channel flow over a smooth boundary on which aquatic plants were grown. Flume experiments were used to establish biological and flow parameters for two kinds of aquatic plants. The biological parameters include the plant loading angle, the plant density, the ratio of plant height to water depth, and the plant retardance coefficient; the flow parameters include the Froude number and Manning's coefficient. The objectives of this research were to study how different vegetation types and densities relate water depth to Manning's roughness. The results of the experiment can be used to estimate the roughness of the channel.
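A minimal sketch of the Manning equation underlying the abstract above, which relates mean velocity to the roughness (retardance) coefficient; the numerical values are illustrative:

    def manning_velocity(n, R, S):
        """Mean velocity (m/s) from Manning's equation V = (1/n) R^(2/3) S^(1/2), SI units."""
        return (1.0 / n) * R ** (2.0 / 3.0) * S ** 0.5

    def roughness_from_velocity(V, R, S):
        """Back-compute Manning's n from a measured mean velocity."""
        return R ** (2.0 / 3.0) * S ** 0.5 / V

    V = manning_velocity(n=0.035, R=0.4, S=0.002)   # vegetated-channel-like values
    print(round(V, 3), round(roughness_from_velocity(V, 0.4, 0.002), 3))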
APA, Harvard, Vancouver, ISO, and other styles