To see the other types of publications on this topic, follow the link: Least Square Regression Method.

Dissertations / Theses on the topic 'Least Square Regression Method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Least Square Regression Method.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Kim, Jingu. "Nonnegative matrix and tensor factorizations, least squares problems, and applications." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/42909.

Full text
Abstract:
Nonnegative matrix factorization (NMF) is a useful dimension reduction method that has been investigated and applied in various areas. NMF is considered for high-dimensional data in which each element has a nonnegative value, and it provides a low-rank approximation formed by factors whose elements are also nonnegative. The nonnegativity constraints imposed on the low-rank factors not only enable natural interpretation but also reveal the hidden structure of data. Extending the benefits of NMF to multidimensional arrays, nonnegative tensor factorization (NTF) has been shown to be successful in analyzing complicated data sets. Despite the success, NMF and NTF have been actively developed only in the past decade, and algorithmic strategies for computing NMF and NTF have not been fully studied. In this thesis, computational challenges regarding NMF, NTF, and related least squares problems are addressed. First, efficient algorithms of NMF and NTF are investigated based on a connection from the NMF and the NTF problems to the nonnegativity-constrained least squares (NLS) problems. A key strategy is to observe typical structure of the NLS problems arising in the NMF and the NTF computation and design a fast algorithm utilizing the structure. We propose an accelerated block principal pivoting method to solve the NLS problems, thereby significantly speeding up the NMF and NTF computation. Implementation results with synthetic and real-world data sets validate the efficiency of the proposed method. In addition, a theoretical result on the classical active-set method for rank-deficient NLS problems is presented. Although the block principal pivoting method appears generally more efficient than the active-set method for the NLS problems, it is not applicable for rank-deficient cases. We show that the active-set method with a proper starting vector can actually solve the rank-deficient NLS problems without ever running into rank-deficient least squares problems during iterations. Going beyond the NLS problems, we show that a block principal pivoting strategy can also be applied to l1-regularized linear regression. The l1-regularized linear regression, also known as the Lasso, has been very popular due to its ability to promote sparse solutions. Solving this problem is difficult because the l1-regularization term is not differentiable. A block principal pivoting method and its variant, which overcome a limitation of previous active-set methods, are proposed for this problem with successful experimental results. Finally, a group-sparsity regularization method for NMF is presented. A recent challenge in data analysis for science and engineering is that data are often represented in a structured way. In particular, many data mining tasks have to deal with group-structured prior information, where features or data items are organized into groups. Motivated by an observation that features or data items that belong to a group are expected to share the same sparsity pattern in their latent factor representations, we propose mixed-norm regularization to promote group-level sparsity. Efficient convex optimization methods for dealing with the regularization terms are presented along with computational comparisons between them. Application examples of the proposed method in factor recovery, semi-supervised clustering, and multilingual text analysis are presented.
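A minimal sketch of the alternating nonnegativity-constrained least squares (ANLS) view of NMF described in this abstract, using SciPy's active-set NNLS solver as a simple stand-in for the accelerated block principal pivoting solver proposed in the thesis; the data, rank, and iteration count below are illustrative assumptions only.

```python
# NMF by alternating NLS subproblems, each solved column-wise with SciPy's
# active-set NNLS routine (an illustrative stand-in, not the thesis's solver).
import numpy as np
from scipy.optimize import nnls

def nmf_anls(A, rank, n_iter=50, seed=0):
    """Approximate a nonnegative m x n matrix A as W @ H with W >= 0, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Fix W, solve min ||A[:, j] - W h||_2 s.t. h >= 0 for each column j.
        H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
        # Fix H, solve the symmetric NLS problem for each row of W.
        W = np.column_stack([nnls(H.T, A[i, :])[0] for i in range(m)]).T
    return W, H

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.random((30, 20))
    W, H = nmf_anls(A, rank=5)
    print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```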
APA, Harvard, Vancouver, ISO, and other styles
2

Oravcová, Lenka. "Determinanty cien automobilov." Master's thesis, Vysoká škola ekonomická v Praze, 2015. http://www.nusl.cz/ntk/nusl-205901.

Full text
Abstract:
The aim of the thesis Determinants of car prices is to create an econometric model for predicting the prices of new and used cars. The prediction is based on data provided by the website of a Slovak retailer of new and used cars. The model should detect statistically significant variables and determine their impact on the final price. The first part of this study gives a theoretical description of the automobile industry and of the factors influencing the price of a car. The second part is devoted to developing the predictive model, suitable transformations of the explanatory variables, interpretation of the results, and the classification of car prices in the form of a decision tree.
APA, Harvard, Vancouver, ISO, and other styles
3

Tang, Tian. "Infrared Spectroscopy in Combination with Advanced Statistical Methods for Distinguishing Viral Infected Biological Cells." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/59.

Full text
Abstract:
Fourier Transform Infrared (FTIR) microscopy is a sensitive method for detecting differences in the morphology of biological cells. In this study FTIR spectra were obtained for uninfected cells, and cells infected with two different viruses. The spectra obtained are difficult to discriminate visually. Here we apply advanced statistical methods to the analysis of the spectra, to test if such spectra are useful for diagnosing viral infections in cells. Logistic Regression (LR) and Partial Least Squares Regression (PLSR) were used to build models which allow us to diagnose if spectral differences are related to the infection state of the cells. A three-fold, balanced cross-validation method was applied to estimate the shrinkages of the area under the receiver operating characteristic curve (AUC), and specificities at sensitivities of 95%, 90% and 80%. AUC, sensitivity and specificity were used to gauge the goodness of the discrimination methods. Our statistical results show that the spectra associated with different cellular states are very effectively discriminated. We also find that the overall performance of PLSR is better than that of LR, especially for new data validation. Our analysis supports the idea that FTIR microscopy is a useful tool for detection of viral infections in biological cells.
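A compact sketch of the comparison the abstract describes: logistic regression and PLS regression scored by cross-validated AUC. The spectra below are synthetic stand-ins; the thesis uses measured FTIR spectra, so data shapes and signal strength here are assumptions.

```python
# Compare logistic regression and PLS regression as spectral classifiers
# via cross-validated AUC, on synthetic spectra standing in for FTIR data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 120, 300                      # 120 cells, 300 wavenumbers (assumed)
y = rng.integers(0, 2, size=n)       # 0 = uninfected, 1 = infected
X = rng.normal(size=(n, p)) + 0.4 * y[:, None] * np.linspace(0, 1, p)

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
auc_lr, auc_pls = [], []
for train, test in cv.split(X, y):
    lr = LogisticRegression(max_iter=2000).fit(X[train], y[train])
    auc_lr.append(roc_auc_score(y[test], lr.predict_proba(X[test])[:, 1]))
    pls = PLSRegression(n_components=5).fit(X[train], y[train])
    auc_pls.append(roc_auc_score(y[test], pls.predict(X[test]).ravel()))

print("LR  mean AUC:", np.mean(auc_lr))
print("PLS mean AUC:", np.mean(auc_pls))
```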
APA, Harvard, Vancouver, ISO, and other styles
4

Ulgen, Burcin Emre. "Estimation In The Simple Linear Regression Model With One-fold Nested Error." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/3/12606171/index.pdf.

Full text
Abstract:
In this thesis, estimation in the simple linear regression model with one-fold nested error is studied. To estimate the fixed effect parameters, generalized least squares and maximum likelihood estimation procedures are reviewed. Moreover, the Minimum Norm Quadratic Estimator (MINQE), Almost Unbiased Estimator (AUE) and Restricted Maximum Likelihood Estimator (REML) of the variance of primary units are derived. Also, confidence intervals for the fixed effect parameters and the variance components are studied. Finally, the aforesaid estimation techniques and confidence intervals are applied to real-life data and the results are presented.
APA, Harvard, Vancouver, ISO, and other styles
5

Potůčková, Lenka. "Detekce odlehlých a vlivných pozorování v lineární regresi v rámci metody nejmenších čtverců. Kvalitativní porovnání s postupy založenými na robustní regresi." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-165078.

Full text
Abstract:
This thesis deals with methods for the detection of outliers and influential points based on the method of least squares. The first part of the thesis summarizes the theoretical findings on the method of least squares and on methods for the detection of outliers and influential points based both on the method of least squares and on robust regression. The practical part of this thesis deals with the application of the classic methods for the detection of outliers and influential points to three types of datasets (artificial data, data from specialized literature and real data). The results of the application are subject to a qualitative comparison with the results produced by the methods for the detection of outliers and influential points based on robust regression.
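A small sketch of the two ingredients compared in this thesis: classical least-squares influence diagnostics (Cook's distance, externally studentized residuals) alongside a robust fit. The data, the injected outlier, and the use of a Huber M-estimator as the robust counterpart are illustrative assumptions, not the thesis's exact procedures.

```python
# Classical OLS influence diagnostics next to a robust (Huber) fit,
# on artificial data with one gross outlier injected.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 1.5 + 2.0 * x + rng.normal(scale=1.0, size=x.size)
y[35] += 25.0                          # inject an outlier

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
infl = ols.get_influence()
cooks_d = infl.cooks_distance[0]
student = infl.resid_studentized_external
flagged = np.where((cooks_d > 4 / len(y)) | (np.abs(student) > 3))[0]
print("flagged observations:", flagged)

# Robust M-estimation for comparison with the OLS coefficients.
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()
print("OLS coefficients:", ols.params)
print("RLM coefficients:", rlm.params)
```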
APA, Harvard, Vancouver, ISO, and other styles
6

Ferreira, Wellington Vieira. "Regressão linear simples aplicado na física experimental do ensino médio." Universidade Federal de Goiás, 2017. http://repositorio.bc.ufg.br/tede/handle/tede/7842.

Full text
Abstract:
In this work we present an interdisciplinary proposal between mathematics and physics, based on the mathematical modeling of some basic physics experiments using the Least Squares Method and the QtiPlot software.
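A minimal sketch of the kind of fit the thesis performs graphically in QtiPlot: a straight line adjusted to experimental points by least squares. The measurements below are made-up illustrative values for a uniformly accelerated motion experiment.

```python
# Least squares line fit for a basic kinematics experiment (illustrative data).
import numpy as np

t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])      # time (s)
v = np.array([0.1, 1.2, 2.1, 2.9, 4.2, 5.0])      # measured speed (m/s)

# Model v(t) = v0 + a*t: slope a and intercept v0 by least squares.
a, v0 = np.polyfit(t, v, deg=1)
v_hat = v0 + a * t
r2 = 1 - np.sum((v - v_hat) ** 2) / np.sum((v - v.mean()) ** 2)
print(f"acceleration a = {a:.3f} m/s^2, v0 = {v0:.3f} m/s, R^2 = {r2:.4f}")
```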
APA, Harvard, Vancouver, ISO, and other styles
7

Haubeltova, Libuse. "Case study of Airbnb listings in Berlin : Hedonic pricing approach to measuring demand for tourist accommodation characteristics." Thesis, Högskolan Dalarna, Nationalekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:du-29979.

Full text
Abstract:
The main purpose of this degree project is to reveal Airbnb customers' preferences and quantify the impact of non-market factors on the market price of tourist accommodation in Berlin, Germany. The data, retrieved from Airbnb listings publicly available on Inside Airbnb (2017), was supplemented with an indicator of sharing-economy accommodation derived with a machine learning method in order to distinguish between amateur hosts and professional, business-running hosts. The main aim is to examine consumers' preferences and quantify the marginal effect of "real sharing economy" accommodation and other key variables on the market price. This is accomplished with the hedonic pricing method, which is used to estimate the economic value of a particular attribute. Surprisingly, our data indicate a negative impact of the sharing-economy indicator on price. The set of consumer motivations that determine their valuation of Airbnb listings was identified; a trade-off between comprehensiveness and parsimony of this set was sought in order to build an effective model. Calculation of the proportion of explained variance showed that the price is affected mainly by the number of accommodated persons, degree of privacy, number of bedrooms, cancellation policy, distance from the city centre and the sharing-economy indicator, in decreasing order.
APA, Harvard, Vancouver, ISO, and other styles
8

Luo, Shan. "Advanced Statistical Methodologies in Determining the Observation Time to Discriminate Viruses Using FTIR." Digital Archive @ GSU, 2009. http://digitalarchive.gsu.edu/math_theses/86.

Full text
Abstract:
Fourier transform infrared (FTIR) spectroscopy, a method that uses electromagnetic radiation to detect specific cellular molecular structures, can be used to discriminate different types of cells. The objective is to find the minimum time (a choice among 2, 4 and 6 hours) to record FTIR readings such that different viruses can be discriminated. A new method is adopted for the datasets. Briefly, inner differences are created as the control group, and the Wilcoxon signed rank test is used as a first variable-selection procedure in order to prepare for the next stage of discrimination. In the second stage we propose either the partial least squares (PLS) method or simply taking significant differences as the discriminator. Finally, a k-fold cross-validation method is used to estimate the shrinkages of the goodness measures, such as sensitivity, specificity and area under the ROC curve (AUC). There is no doubt in our mind that 6 hours is enough for discriminating mock from HSV-1 and Coxsackie viruses; adenovirus is an exception.
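A schematic sketch of the two-stage idea in this abstract: screen variables with a Wilcoxon signed-rank test on paired differences, then feed the retained variables to PLS. The paired-difference data, significance threshold and number of PLS components are assumptions for illustration.

```python
# Stage 1: Wilcoxon signed-rank screening of variables; Stage 2: PLS on the
# retained variables. Synthetic paired spectral differences stand in for data.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n_pairs, p = 30, 200
diff = rng.normal(size=(n_pairs, p))
diff[:, :20] += 0.8                    # first 20 variables carry signal

# Keep variables whose paired differences are significantly nonzero.
pvals = np.array([wilcoxon(diff[:, j]).pvalue for j in range(p)])
keep = pvals < 0.01
print("variables retained:", keep.sum())

# PLS on the retained variables against a dummy 0/1 response.
y = rng.integers(0, 2, size=n_pairs)
pls = PLSRegression(n_components=2).fit(diff[:, keep], y)
print("PLS score matrix shape:", pls.transform(diff[:, keep]).shape)
```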
APA, Harvard, Vancouver, ISO, and other styles
9

Muratori, Giacomo. "Application of multivariate statistical methods to the modelling of a flue gas treatment stage in a waste-to-energy plant." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2018. http://amslaurea.unibo.it/17262/.

Full text
Abstract:
Among all the flue gas components produced in waste-to-energy plants, acid airborne pollutants such as SO2 and HCl are subject to the most rigorous emission standards set by the European Parliament. Their removal is thus a key step of the flue gas treatment, mainly achieved with Dry Treatment Systems (DTS), technologies based on the direct injection of dry solid sorbents that remove the acid species from the gas stream with several important advantages and high removal efficiencies. However, the substantial lack of deeper industrial knowledge makes it difficult to determine accurately an optimal operating zone, which is required for the design and operation of these systems. The aim of this study has therefore been to explore, building on essential engineering expertise, some of the solutions that the application of multivariate statistical methods to process data obtained from real plants can offer in order to identify the phenomena which govern dry treatment systems. In particular, a key task of this work has been the search for a general procedure that can be applied to the characterization of any type of DTS system, regardless of the specific duty range or design configuration. This required going beyond the purely mechanical application of the available techniques and made it necessary to tailor, and even redefine, some of the standard procedures in order to guarantee specific and objective results for the studied case. Specifically, in this chemometric analysis, after pre-treatment and quality assessment, the process data obtained from a real working plant were analyzed with basic and advanced techniques in order to characterize the relations among all the available variables. Then, starting from the results of the data analysis, a linear model was produced to predict, with a certain degree of accuracy, the operating conditions of the system.
APA, Harvard, Vancouver, ISO, and other styles
10

Ramalho, Guilherme Matiussi. "Uma abordagem estatística para o modelo do preço spot da energia elétrica no submercado sudeste/centro-oeste brasileiro." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/3/3139/tde-26122014-145848/.

Full text
Abstract:
The objective of this work is the development of a statistical method to study the spot prices of electrical energy in the Southeast/Middle-West (SE-CO) subsystem of the Brazilian National Connected System, using least squares estimation and the likelihood ratio test as tools to build and evaluate the models. Examining the descriptive statistical results of the models, differently from what is observed in the literature, the first observation is that the seasonal component, when analyzed alone, presents results only loosely adherent to the spot price PLD. The influence of energy supply and energy demand as input variables is then evaluated, from which it is concluded that the stored water and the thermoelectric power production are specifically the variables that most influence the spot prices in the studied subsystem. Among the models tested, the one that offered the best results was a mixed model created from the selection of the best input variables of the preliminarily tested models, achieving a coefficient of determination R² of 0.825, a result that can be considered adherent to the spot price. The last part of the work presents an introduction to the spot price prediction model, allowing the analysis of price behavior as the input variables are changed.
APA, Harvard, Vancouver, ISO, and other styles
11

V, Zozulia O., and Radzhabova D. V. "Economic and mathematical model for forecasting the volume of traffic using ms excel." Thesis, National Aviation University, 2021. https://er.nau.edu.ua/handle/NAU/50748.

Full text
Abstract:
In this article we consider forecasting an economic process using pairwise linear regression and the least squares method in MS Excel. The transportation of a company's industrial products is considered: given data on traffic volume for the last 9 months, we determine the estimated traffic volume for the 10th month.
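A minimal sketch of the forecasting step described above, done with NumPy instead of MS Excel's worksheet functions: fit a straight line to nine monthly volumes by least squares and extrapolate to month 10. The traffic volumes are made-up numbers.

```python
# Least squares trend forecast of month 10 from 9 months of traffic volumes.
import numpy as np

months = np.arange(1, 10)                               # months 1..9
volume = np.array([410, 425, 470, 455, 500, 520, 515, 560, 580.0])

b1, b0 = np.polyfit(months, volume, deg=1)              # slope, intercept
forecast_10 = b0 + b1 * 10
print(f"y = {b0:.1f} + {b1:.1f}*t, forecast for month 10: {forecast_10:.1f}")
```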
APA, Harvard, Vancouver, ISO, and other styles
12

Moller, Jurgen Johann. "The implementation of noise addition partial least squares." Thesis, Stellenbosch : University of Stellenbosch, 2009. http://hdl.handle.net/10019.1/3362.

Full text
Abstract:
Thesis (MComm (Statistics and Actuarial Science))--University of Stellenbosch, 2009.
When determining the chemical composition of a specimen, traditional laboratory techniques are often both expensive and time consuming. It is therefore preferable to employ more cost effective spectroscopic techniques such as near infrared (NIR). Traditionally, the calibration problem has been solved by means of multiple linear regression to specify the model between X and Y. Traditional regression techniques, however, quickly fail when using spectroscopic data, as the number of wavelengths can easily be several hundred, often exceeding the number of chemical samples. This scenario, together with the high level of collinearity between wavelengths, will necessarily lead to singularity problems when calculating the regression coefficients. Ways of dealing with the collinearity problem include principal component regression (PCR), ridge regression (RR) and PLS regression. Both PCR and RR require a significant amount of computation when the number of variables is large. PLS overcomes the collinearity problem in a similar way to PCR, by modelling both the chemical and spectral data as functions of common latent variables. The quality of the employed reference method greatly impacts the coefficients of the regression model and therefore the quality of its predictions. With both X and Y subject to random error, the quality of the predictions of Y will be reduced with an increase in the level of noise. Previously conducted research focussed mainly on the effects of noise in X. This paper focuses on a method proposed by Dardenne and Fernández Pierna, called Noise Addition Partial Least Squares (NAPLS), that attempts to deal with the problem of poor reference values. Some aspects of the theory behind PCR, PLS and model selection are discussed. This is then followed by a discussion of the NAPLS algorithm. Both PLS and NAPLS are implemented on various datasets that arise in practice, in order to determine cases where NAPLS will be beneficial over conventional PLS. For each dataset, specific attention is given to the analysis of outliers, influential values and the linearity between X and Y, using graphical techniques. Lastly, the performance of the NAPLS algorithm is evaluated for various
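A schematic stand-in for the central issue of this thesis (noisy reference values), not the NAPLS algorithm itself: perturb the reference values y with increasing noise and observe how the cross-validated PLS prediction error degrades. Data dimensions, noise levels and the number of latent variables are assumptions.

```python
# Illustrate how noise in the reference values Y degrades PLS predictions
# (a schematic motivation for NAPLS, not its implementation).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n, p = 80, 250
X = rng.normal(size=(n, p))
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.5, size=n)

for noise_sd in (0.0, 0.5, 1.0, 2.0):
    y_noisy = y + rng.normal(scale=noise_sd, size=n)   # degrade the reference
    pls = PLSRegression(n_components=5)
    mse = -cross_val_score(pls, X, y_noisy, cv=5,
                           scoring="neg_mean_squared_error").mean()
    print(f"reference noise sd = {noise_sd:.1f}  CV MSE = {mse:.2f}")
```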
APA, Harvard, Vancouver, ISO, and other styles
13

Björkström, Anders. "Regression methods in multidimensional prediction and estimation." Doctoral thesis, Stockholm University, Department of Mathematics, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-7025.

Full text
Abstract:

In regression with near collinear explanatory variables, the least squares predictor has large variance. Ordinary least squares regression (OLSR) often leads to unrealistic regression coefficients. Several regularized regression methods have been proposed as alternatives. Well-known are principal components regression (PCR), ridge regression (RR) and continuum regression (CR). The latter two involve a continuous metaparameter, offering additional flexibility.

For a univariate response variable, CR incorporates OLSR, PLSR, and PCR as special cases, for special values of the metaparameter. CR is also closely related to RR. However, CR can in fact yield regressors that vary discontinuously with the metaparameter. Thus, the relation between CR and RR is not always one-to-one. We develop a new class of regression methods, LSRR, essentially the same as CR, but without discontinuities, and prove that any optimization principle will yield a regressor proportional to a RR, provided only that the principle implies maximizing some function of the regressor's sample correlation coefficient and its sample variance. For a multivariate response vector we demonstrate that a number of well-established regression methods are related, in that they are special cases of basically one general procedure. We try a more general method based on this procedure, with two meta-parameters. In a simulation study we compare this method to ridge regression, multivariate PLSR and repeated univariate PLSR. For most types of data studied, all methods do approximately equally well. There are cases where RR and LSRR yield larger errors than the other methods, and we conclude that one-factor methods are not adequate for situations where more than one latent variable is needed to describe the data. Among those based on latent variables, none of the methods tried is superior to the others in any obvious way.
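A small illustration of the collinearity point made at the start of this abstract: with two nearly collinear regressors, the OLS coefficients become unstable, while ridge regression (one of the regularized alternatives discussed) keeps them close to the true values. The data are simulated for illustration only.

```python
# OLS vs ridge regression under near-collinearity (simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(5)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)        # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 3 * x1 + 3 * x2 + rng.normal(size=n)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)
print("OLS   coefficients:", ols.coef_)     # typically large, opposite-signed
print("Ridge coefficients:", ridge.coef_)   # both close to 3
```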

APA, Harvard, Vancouver, ISO, and other styles
14

Tano, Kent. "Multivariate modelling and monitoring of mineral processes using partial least square regression." Licentiate thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 1996. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-16872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Urbášková, Martina. "Hodnocení vlivu větrných elektráren na krajinný ráz." Master's thesis, Vysoká škola ekonomická v Praze, 2011. http://www.nusl.cz/ntk/nusl-114006.

Full text
Abstract:
The goal of the work is to provide a monetary valuation of changes in the visual aspects of the landscape as a result of the construction of an additional wind turbine in the village of Maletín. The contingent valuation method is selected as a suitable method for achieving this goal. A key element of this method is a carefully compiled questionnaire, on the basis of which the collected data are quantified and evaluated. The representative sample consists of 112 households and the selected payment method is an increase in the monthly electricity bill. The questionnaire shows that 54.3% of households consider the impact of wind turbines on the Maletín landscape to be positive. Less than 74.3% of households agree with the construction of an additional wind turbine, the most common reasons being the grants obtained for the village and the cleaner energy produced by wind turbines. 28.6% of all households living in the village of Maletín agree with the construction of a new wind turbine together with an increase in the monthly bill. The estimation of the change in welfare, i.e. the improvement in the quality of the environment, is based on central values estimated from selected characteristics and by nonparametric estimation. The average household's willingness to pay for the construction of the wind turbine is estimated to be between 77 CZK and 200 CZK per month.
APA, Harvard, Vancouver, ISO, and other styles
16

Sandnes, Pål Grøthe. "Meshfree Least Square-based Finite Difference method in CFD applications." Thesis, Norges teknisk-naturvitenskapelige universitet, Institutt for marin teknikk, 2011. http://urn.kb.se/resolve?urn=urn:nbn:no:ntnu:diva-15454.

Full text
Abstract:
Most commercial computational fluid dynamics (CFD) packages available today are based on the finite volume or finite element method. Both of these methods have been proven robust, efficient and appropriate for complex geometries. However, due to their crucial dependence on a well constructed grid, extensive preliminary work has to be invested in order to obtain satisfying results. During the last decades, several so-called meshfree methods have been proposed with the intention of entirely eliminating the grid dependence. Instead of a grid, meshfree methods use the nodal coordinates directly in order to calculate the spatial derivatives. In this master thesis, the meshfree least square-based finite difference (LSFD) method has been considered. The method has initially been thoroughly derived and tested for a simple Poisson equation. With its promising numerical performance, it has further been applied to the full Navier-Stokes equations, describing fluid motion in a continuous medium. Several numerical methods used to solve the incompressible Navier-Stokes equations have been proposed, and some of them are also presented in this thesis. However, the temporal discretization has finally been done using a 1st order semi-implicit projection method, for which the primitive variables (velocity and pressure) are solved directly. In order to verify the developed meshfree LSFD code, in total four flow problems have been considered. All of these cases are well known due to their benchmarking relevance, and LSFD performs well compared to both earlier observations and theory. Even though the program developed in this thesis only supports two-dimensional, incompressible and laminar flow regimes, the idea of meshfree LSFD is quite general and may very well be applied to more complex flows, including turbulence.
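A minimal sketch of the LSFD idea described above: at a reference node, fit a local quadratic Taylor expansion to scattered neighbouring nodes by least squares and read off derivative estimates. The test function, neighbourhood size and node distribution are illustrative assumptions, not the thesis's solver.

```python
# Meshfree least square-based finite difference (LSFD) derivative estimates
# at a single node, from scattered neighbours (illustrative sketch).
import numpy as np

def lsfd_derivatives(x0, y0, xn, yn, fn, f0):
    """Estimate (f_x, f_y, f_xx, f_xy, f_yy) at (x0, y0) from neighbours."""
    dx, dy = xn - x0, yn - y0
    # Columns follow the 2D Taylor expansion about (x0, y0).
    A = np.column_stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2])
    coeffs, *_ = np.linalg.lstsq(A, fn - f0, rcond=None)
    return coeffs

rng = np.random.default_rng(6)
f = lambda x, y: np.sin(x) * np.cos(y)
x0, y0 = 0.3, 0.2
xn = x0 + 0.05 * rng.normal(size=30)
yn = y0 + 0.05 * rng.normal(size=30)
fx, fy, fxx, fxy, fyy = lsfd_derivatives(x0, y0, xn, yn, f(xn, yn), f(x0, y0))
print("f_x estimate:", fx, " exact:", np.cos(x0) * np.cos(y0))
print("f_y estimate:", fy, " exact:", -np.sin(x0) * np.sin(y0))
```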
APA, Harvard, Vancouver, ISO, and other styles
17

Li, Ying. "A Comparison Study of Principle Component Regression, Partial Least Square Regression and Ridge Regression with Application to FTIR Data." Thesis, Uppsala University, Department of Statistics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-127983.

Full text
Abstract:

The least squares estimator may fail when the number of explanatory variables is relatively large in comparison to the sample size, or if the variables are almost collinear. In such a situation, principal component regression, partial least squares regression and ridge regression are often proposed methods and are widely used in practical data analysis, especially in chemometrics. They provide biased coefficient estimators with relatively smaller variation than the variance of the least squares estimator. In this paper, a brief literature review of PCR, PLS and RR is made from a theoretical perspective. Moreover, a data set is used in order to examine their performance on prediction. The conclusion is that for prediction PCR, PLS and RR provide similar results. Substantial verification would be required for any claims as to the superiority of any of the three biased regression methods.

APA, Harvard, Vancouver, ISO, and other styles
18

Anderson, Cynthia 1962. "A Comparison of Five Robust Regression Methods with Ordinary Least Squares: Relative Efficiency, Bias and Test of the Null Hypothesis." Thesis, University of North Texas, 2001. https://digital.library.unt.edu/ark:/67531/metadc5808/.

Full text
Abstract:
A Monte Carlo simulation was used to generate data for a comparison of five robust regression estimation methods with ordinary least squares (OLS) under 36 different outlier data configurations. Two of the robust estimators, Least Absolute Value (LAV) estimation and MM estimation, are commercially available. Three author-modified variations on MM were also included (MM1, MM2, and MM3). Design parameters that were varied include sample size (n=60 and n=180), number of independent predictor variables (2, 3 and 6), outlier density (0%, 5% and 15%) and outlier location (2x, 2y s; 8x, 8y s; 4x, 8y s; and 8x, 4y s). Criteria on which the regression methods were measured are relative efficiency, bias and a test of the null hypothesis. Results indicated that MM2 was the best performing robust estimator on relative efficiency. The best performing estimator on bias was MM1. The best performing regression method on the test of the null hypothesis was MM2. Overall, the MM-type robust regression methods outperformed OLS and LAV on relative efficiency, bias, and the test of the null hypothesis.
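A sketch in the spirit of this simulation study: contaminate regression data with outliers and compare OLS with a robust M-estimator over repeated samples. A Huber M-estimator stands in for the LAV/MM-type estimators studied in the thesis, and all design values are illustrative assumptions.

```python
# Monte Carlo comparison of OLS and a robust M-estimator under contamination.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
beta = np.array([1.0, 2.0, -1.0])
n, reps, density = 60, 200, 0.15

err_ols, err_rob = [], []
for _ in range(reps):
    X = sm.add_constant(rng.normal(size=(n, 2)))
    y = X @ beta + rng.normal(size=n)
    out = rng.random(n) < density                       # ~15% outliers
    y[out] += 8 * rng.choice([-1, 1], size=out.sum())
    err_ols.append(np.sum((sm.OLS(y, X).fit().params - beta) ** 2))
    err_rob.append(np.sum((sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit().params - beta) ** 2))

print("mean squared coefficient error, OLS   :", np.mean(err_ols))
print("mean squared coefficient error, robust:", np.mean(err_rob))
```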
APA, Harvard, Vancouver, ISO, and other styles
19

Haddad, Khaled. "Design flood estimation for ungauged catchments in Victoria ordinary & generalised least squares methods compared /." View thesis, 2008. http://handle.uws.edu.au:8081/1959.7/30369.

Full text
Abstract:
Thesis (M.Eng. (Hons.)) -- University of Western Sydney, 2008.
A thesis submitted towards the degree of Master of Engineering (Honours) in the University of Western Sydney, College of Health and Science, School of Engineering. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
20

Peiris, Thelge Buddika. "Constrained Statistical Inference in Regression." OpenSIUC, 2014. https://opensiuc.lib.siu.edu/dissertations/934.

Full text
Abstract:
Regression analysis constitutes a large portion of the statistical repertoire in applications. In cases where such analysis is used for exploratory purposes, with no previous knowledge of the structure, one would not wish to impose any constraints on the problem. But in many applications we are interested in a simple parametric model to describe the structure of a system with some prior knowledge of the structure. An important example of this occurs when the experimenter has the strong belief that the regression function changes monotonically in some or all of the predictor variables in a region of interest. The analyses needed for statistical inference under such constraints are nonstandard. The specific aim of this study is to introduce a technique which can be used for statistical inference in a multivariate simple regression with some non-standard constraints.
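A minimal illustration of imposing the kind of constraint described here: sign constraints on the slopes force the fitted mean to be nondecreasing in each predictor. This uses SciPy's bounded least squares on simulated data and is only an illustration of constrained estimation, not the thesis's inference procedure.

```python
# Least squares with nonnegativity constraints on the slopes, vs unconstrained OLS.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(8)
n = 50
X = rng.normal(size=(n, 2))
y = 1.0 + 0.2 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.8, size=n)

A = np.column_stack([np.ones(n), X])
# Intercept unconstrained; both slopes constrained to be >= 0.
res = lsq_linear(A, y, bounds=([-np.inf, 0.0, 0.0], [np.inf, np.inf, np.inf]))
print("constrained estimates:", res.x)
print("unconstrained (OLS)  :", np.linalg.lstsq(A, y, rcond=None)[0])
```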
APA, Harvard, Vancouver, ISO, and other styles
21

Beedell, David C. (David Charles). "The effect of sampling error on the interpretation of a least squares regression relating phosphorus and chlorophyll." Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22720.

Full text
Abstract:
Least squares linear regression is a common tool in ecological research. One of the central assumptions of least squares linear regression is that the independent variable is measured without error. But this variable is measured with error whenever it is a sample mean. The significance of such contraventions is not regularly assessed in ecological studies. A simulation program was made to provide such an assessment. The program requires a hypothetical data set, and using estimates of S² it scatters the hypothetical data to simulate the effect of sampling error. A regression line is drawn through the scattered data, and SSE and r² are measured. This is repeated numerous times (e.g. 1000) to generate probability distributions for r² and SSE. From these distributions it is possible to assess the likelihood of the hypothetical data resulting in a given SSE or r². The method was applied to survey data used in a published TP-CHLa regression (Pace 1984). Beginning with a hypothetical, linear data set (r² = 1), simulated scatter due to sampling exceeded the SSE from the regression through the survey data about 30% of the time. Thus chances are 3 out of 10 that the level of uncertainty found in the surveyed TP-CHLa relationship would be observed if the true relationship were perfectly linear. If this is so, more precise and more comprehensive models will only be possible when better estimates of the means are available. This simulation approach should apply to all least squares regression studies that use sampled means, and should be especially relevant to studies that use log-transformed values.
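A compact sketch of the simulation idea described above: start from a perfectly linear relationship, scatter the x values as if they were sample means measured with error, refit by least squares, and build the distribution of r². All numbers are illustrative, not the survey data used in the thesis.

```python
# Monte Carlo assessment of how sampling error in x degrades r^2 of a
# least squares fit, starting from an exactly linear relationship.
import numpy as np

rng = np.random.default_rng(9)
tp_true = np.linspace(1, 2, 20)            # hypothetical log10 TP values
chl_true = -1.0 + 1.4 * tp_true            # exactly linear (r^2 = 1)
se_of_mean = 0.1                           # assumed sampling error of each mean
n_sim = 1000

r2 = np.empty(n_sim)
for k in range(n_sim):
    tp_obs = tp_true + rng.normal(scale=se_of_mean, size=tp_true.size)
    slope, intercept = np.polyfit(tp_obs, chl_true, deg=1)
    resid = chl_true - (intercept + slope * tp_obs)
    r2[k] = 1 - resid.var() / chl_true.var()

print("median simulated r^2:", np.median(r2))
print("5th percentile      :", np.percentile(r2, 5))
```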
APA, Harvard, Vancouver, ISO, and other styles
22

Li, Yang. "An Empirical Analysis of Family Cost of Children : A Comparison of Ordinary Least Square Regression and Quantile Regression." Thesis, Uppsala University, Department of Statistics, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-126660.

Full text
Abstract:

Quantile regression has several advantageous properties compared with OLS regression: it gives a full picture of the effects of a covariate on the response, and it enjoys robustness and an equivariance property. In this paper, I use survey data from Belgium and apply a linear model to illustrate these advantages of quantile regression. I then use a quantile regression model on the raw data to analyze how family costs differ with the number of children, and apply a Wald test. The result shows that for most family types and living standards, from the lower quantile to the upper quantile, the family cost of children increases with the number of children and the cost of each child is the same. We also found a common pattern: the cost of the second child is significantly higher than the cost of the first child for the nonworking type of family and for families at all living standards, at the upper quantiles (from the 0.75 quantile to the 0.9 quantile) of the conditional distribution.
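A minimal sketch of fitting conditional quantiles of household cost on the number of children with statsmodels' QuantReg, next to the OLS fit of the mean. The data below are simulated with a skewed error, not the Belgian survey used in the thesis.

```python
# OLS (mean) fit versus quantile regression fits at several quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 500
children = rng.integers(1, 4, size=n)
cost = 500 + 300 * children + rng.gamma(shape=2, scale=150, size=n)
df = pd.DataFrame({"cost": cost, "children": children})

ols_fit = smf.ols("cost ~ children", df).fit()
print("OLS slope:", ols_fit.params["children"])
for q in (0.25, 0.5, 0.75, 0.9):
    qr_fit = smf.quantreg("cost ~ children", df).fit(q=q)
    print(f"quantile {q:.2f} slope:", qr_fit.params["children"])
```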

APA, Harvard, Vancouver, ISO, and other styles
23

Filippov, V., and A. Rodionov. "On the justification of the least square method for nonpotential, nonlinear operators." Pontificia Universidad Católica del Perú, 2014. http://repositorio.pucp.edu.pe/index/handle/123456789/97171.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Bahenský, Miloš. "Závislost hodnoty stavebního závodu na velikosti vlastního kapitálu." Doctoral thesis, Vysoké učení technické v Brně. Ústav soudního inženýrství, 2019. http://www.nusl.cz/ntk/nusl-402119.

Full text
Abstract:
The doctoral thesis deals with valuation issues of businesses with construction production in the conditions of the Czech economy. The business valuation issue is, and will always be, highly relevant in a market economy environment, with regard to both methodical and practical approaches. The main aim of the doctoral thesis is to demonstrate this dependence by constructing an empirical regression model that determines the value of a construction enterprise, obtained by a chosen income valuation method, from its equity (the book value of equity at historical cost). The first part of the doctoral thesis is a research study describing the authors' approaches to the current state of knowledge concerning business valuation and aspects of equity, using the principles of system methodology. Based on these findings, a space is defined in which it is possible to propose a solution to a partial problem in terms of selecting the enterprise value category and the associated income valuation methods suitable for an extensive time-series analysis. An integral part of the doctoral thesis is the determination of the sample size of construction enterprises according to the assumptions and limitations of the chosen methodology. The empirical research for data collection is based on the Justice.cz database. Another important part is, in the spirit of system approach principles, the choice and application of a method of the system disciplines to the problem addressed in the doctoral thesis. The result is an empirical regression model which, after subsequent validation in multiple case studies, could also be recommended for wider verification in valuers' practice. The thesis also includes a discussion, in a wider context, of the potential benefits of the doctoral thesis for practical, theoretical and pedagogical use.
APA, Harvard, Vancouver, ISO, and other styles
25

Chen, Xinyu. "Inference in Constrained Linear Regression." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/405.

Full text
Abstract:
Regression analysis constitutes an important part of statistical inference and has great applications in many areas. In some applications, we strongly believe that the regression function changes monotonically with some or all of the predictor variables in a region of interest. Deriving analyses under such constraints is an enormous task. In this work, the restricted prediction interval for the mean of the regression function is constructed when two predictors are present. I use a modified likelihood ratio test (LRT) to construct the prediction intervals.
APA, Harvard, Vancouver, ISO, and other styles
26

Clack, Jhules. "Theoretical Analysis for Moving Least Square Method with Second Order Pseudo-Derivatives and Stabilization." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1418910272.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Erdas, Ozlem. "Modelling And Predicting Binding Affinity Of Pcp-like Compounds Using Machine Learning Methods." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/3/12608792/index.pdf.

Full text
Abstract:
Machine learning methods have been promising tools in science and engineering fields. The use of these methods in chemistry and drug design has advanced since the 1990s. In this study, molecular electrostatic potential (MEP) surfaces of PCP-like compounds are modelled and visualized in order to extract features which are then used in predicting binding affinity. In the modelling, Cartesian coordinates of MEP surface points are mapped onto a spherical self-organizing map. The resulting maps are visualized using the values of the electrostatic potential. These values also provide features for the prediction system. Support vector machines and the partial least squares method are used for predicting the binding affinity of the compounds, and the results are compared.
APA, Harvard, Vancouver, ISO, and other styles
28

Bai, Xiuqin. "Robust mixtures of regression models." Diss., Kansas State University, 2014. http://hdl.handle.net/2097/18683.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Kun Chen and Weixin Yao
This proposal contains two projects that are related to robust mixture models. In the first project, we propose a new robust mixture of regression models (Bai et al., 2012). The existing methods for fitting mixture regression models assume a normal distribution for the error and then estimate the regression parameters by the maximum likelihood estimate (MLE). In this project, we demonstrate that the MLE, like the least squares estimate, is sensitive to outliers and heavy-tailed error distributions. We propose a robust estimation procedure and an EM-type algorithm to estimate the mixture regression models. Using a Monte Carlo simulation study, we demonstrate that the proposed new estimation method is robust and works much better than the MLE when there are outliers or the error distribution has heavy tails. In addition, the proposed robust method works comparably to the MLE when there are no outliers and the error is normal. In the second project, we propose a new robust mixture of linear mixed-effects models. The traditional mixture model with multiple linear mixed effects, assuming a Gaussian distribution for the random and error parts, is sensitive to outliers. We propose a mixture of multiple linear mixed t-distributions to robustify the estimation procedure. An EM algorithm is provided to find the MLE under the assumption of t-distributions for the error terms and random mixed effects. Furthermore, we propose to adaptively choose the degrees of freedom for the t-distribution using profile likelihood. In the simulation study, we demonstrate that our proposed model works comparably to the traditional estimation method when there are no outliers and the errors and random mixed effects are normally distributed, but works much better if there are outliers or the distributions of the errors and random mixed effects have heavy tails.
APA, Harvard, Vancouver, ISO, and other styles
29

Sequeira, Bernardo Pinto Machado Portugal. "American put option pricing : a comparison between neural networks and least-square Monte Carlo method." Master's thesis, Instituto Superior de Economia e Gestão, 2019. http://hdl.handle.net/10400.5/19631.

Full text
Abstract:
Mestrado em Mathematical Finance
This thesis compares two methods for pricing American put options. The methods are the Least-Square Monte Carlo method (LSM) and neural networks, a machine learning method. Two different neural network models were developed: a simple one, Model 1, and a more complex model, Model 2. The study relies on market option prices of 4 large US companies, from December 2018 to March 2019. All methods show good accuracy; however, once calibrated, the neural networks show a much better execution time than the LSM. Both neural network models end up with a lower Root Mean Square Error (RMSE) than the LSM for options of different maturities and strikes. Model 2 substantially outperforms the other models, having an RMSE ca. 40% lower than that of the LSM. The lower RMSE is consistent across all companies, strike levels and maturities.
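A minimal sketch of the Least-Square Monte Carlo (Longstaff-Schwartz) valuation of an American put under geometric Brownian motion, regressing continuation values on a quadratic basis. All market parameters, the basis choice and the path counts are illustrative assumptions, not the calibration used in the thesis.

```python
# Longstaff-Schwartz LSM pricing of an American put (illustrative parameters).
import numpy as np

def american_put_lsm(S0, K, r, sigma, T, n_steps=50, n_paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    disc = np.exp(-r * dt)
    # Simulate GBM paths (columns are times dt, 2dt, ..., T).
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)          # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= disc                              # discount one step back
        itm = K - S[:, t] > 0
        if not itm.any():
            continue
        # Regress discounted continuation values on a quadratic basis in S_t.
        x = S[itm, t]
        basis = np.column_stack([np.ones(x.size), x, x**2])
        coef, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
        continuation = basis @ coef
        exercise = K - x
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]              # exercise where optimal
    return disc * cash.mean()

print("LSM American put price:",
      american_put_lsm(S0=100, K=100, r=0.03, sigma=0.25, T=1.0))
```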
APA, Harvard, Vancouver, ISO, and other styles
30

Tao, Jinxin. "Comparison Between Confidence Intervals of Multiple Linear Regression Model with or without Constraints." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-theses/404.

Full text
Abstract:
Regression analysis is one of the most applied statistical techniques. The statistical inference of a linear regression model with a monotone constraint had been discussed in earlier analyses. A natural question arises when it comes to the difference between the cases with and without the constraint. Although the comparison between confidence intervals of linear regression models with and without restriction for one predictor variable had been considered, this discussion for multiple regression is required. In this thesis, I discuss the comparison of the confidence intervals between a multiple linear regression model with and without constraints.
APA, Harvard, Vancouver, ISO, and other styles
31

Wang, Hailun. "Some Conclusions of Statistical Analysis of the Spectropscopic Evaluation of Cervical Cancer." Digital Archive @ GSU, 2008. http://digitalarchive.gsu.edu/math_theses/58.

Full text
Abstract:
To significantly improve the early detection of cervical precancers and cancers, LightTouch™ is under development by SpectRx Inc. LightTouch™ identifies cancers and precancers quickly by using a spectrometer to analyze light reflected from the cervix. Data from the spectrometer are then used to create an image of the cervix that highlights the location and severity of disease. Our research was conducted to find appropriate models that can be used to generate a map-like image separating diseased tissue from normal tissue and to further diagnose cervical cancerous conditions. After extensive work on explanatory variable search and reduction, logistic regression and partial least squares regression were successfully applied in our modeling process. These models were validated by 60/40 cross-validation and 10-fold cross-validation. Further examination of model performance, such as AUC, and sensitivity and specificity at given thresholds, was conducted.
APA, Harvard, Vancouver, ISO, and other styles
32

Zhang, Zongjun. "Adaptive Robust Regression Approaches in data analysis and their Applications." University of Cincinnati / OhioLINK, 2015. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1445343114.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Скворчевський, Олександр Євгенович, Тетяна Кравцова, and Анастасія Свічкарь. "Економетрична оцінка залежності купівельної спроможності населення України від його доходів." Thesis, Львівська політехніка, 2017. http://repository.kpi.kharkov.ua/handle/KhPI-Press/32787.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Yoldas, Mine. "Predicting The Effect Of Hydrophobicity Surface On Binding Affinity Of Pcp-like Compounds Using Machine Learning Methods." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613215/index.pdf.

Full text
Abstract:
This study aims to predict the binding affinity of PCP-like compounds by means of molecular hydrophobicity. Molecular hydrophobicity is an important property which affects the binding affinity of molecules. The values of molecular hydrophobicity of the molecules are obtained on a three-dimensional coordinate system. Our aim is to reduce the number of points on the hydrophobicity surface of the molecules. This is modeled by using self-organizing maps (SOM) and k-means clustering. The feature sets obtained from SOM and k-means clustering are used in order to predict the binding affinity of the molecules individually. Support vector regression and partial least squares regression are used for prediction.
APA, Harvard, Vancouver, ISO, and other styles
35

Gaspard, Guetchine. "FLOOD LOSS ESTIMATE MODEL: RECASTING FLOOD DISASTER ASSESSMENT AND MITIGATION FOR HAITI, THE CASE OF GONAIVES." OpenSIUC, 2013. https://opensiuc.lib.siu.edu/theses/1236.

Full text
Abstract:
This study aims at developing a model to estimate the flood damage cost caused in Gonaives, Haiti by Hurricane Jeanne in 2004. In order to reach this goal, the influence of income, inundation duration, inundation depth, slope, population density and distance to major roads on the loss costs was investigated. Surveyed data were analyzed using Excel and ArcGIS 10 software. The ordinary least squares and the geographically weighted regression analyses were used to predict flood damage costs. The estimates were then delineated using the Voronoi geostatistical map tool. As a result, the factors account for as much as 83% of the costs. The flood damage cost per household varies between 24,315 and 37,693 Haitian Gourdes (approximately 607.875 through 942.325 U.S. Dollars). Severe damages were spotted in the urban area and in the rural section of Bassin, whereas very low and low losses are essentially found in Labranle. The urban area was more severely affected in comparison with the rural area. Damages in the urban area are estimated at 41,206,869.57 USD against 698,222,174.10 Haitian Gourdes (17,455,554.35 USD) in the rural area. In the urban part, damages were more severe in Raboteau-Jubilée and in Downtown, but Bigot-Parc Vincent had the highest overall damage cost, estimated at 9,729,368.95 USD. The lowest cost, 7,602,040.42 USD, was recorded in Raboteau. Approximately 39.38% of the rural area underwent very low to moderate damages. Bassin was the most severely struck by the 2004 floods, but Bayonnais turned out to have the highest loss cost: 4,988,487.66 USD. Bassin, along with Labranle, had the lowest damage costs, 2,956,131.11 and 2,268,321.41 USD respectively. Based on the findings, we recommend the implementation and diversification of income-generating activities, the maintenance and improvement of drain, sewer and gully cleaning, and the establishment of conservation practices upstream of the watersheds. In addition, the model should be applied and validated using actual official records as reference data. Finally, the use of a calculation-based approach is suggested to determine flood damage costs in order to reduce subjectivity during surveys.
APA, Harvard, Vancouver, ISO, and other styles
36

Wang, Shuo. "An Improved Meta-analysis for Analyzing Cylindrical-type Time Series Data with Applications to Forecasting Problem in Environmental Study." Digital WPI, 2015. https://digitalcommons.wpi.edu/etd-theses/386.

Full text
Abstract:
This thesis provides a case study on how wind direction plays an important role in the amount of rainfall in the village of Somió. The primary goal is to illustrate how a meta-analysis, together with circular data analytic methods, helps in analyzing certain environmental issues. The existing GLS meta-analysis combines the merits of the usual meta-analysis, yielding better precision, and also accounts for covariance among coefficients. But it is quite limited, since information about the covariance among coefficients is not fully utilized. Hence, in my proposed meta-analysis, I take the correlations between adjacent studies into account when employing the GLS meta-analysis. Besides, I also fit a time series linear-circular regression as a comparable model. By comparing the confidence intervals of the parameter estimates, the covariance matrices, AIC, BIC and p-values, I discuss an improvement of the GLS meta-analysis model in its application to a forecasting problem in environmental study.
APA, Harvard, Vancouver, ISO, and other styles
37

Savas, Berkant. "Algorithms in data mining using matrix and tensor methods." Doctoral thesis, Linköpings universitet, Beräkningsvetenskap, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-11597.

Full text
Abstract:
In many fields of science, engineering, and economics, large amounts of data are stored, and these data need to be analyzed in order to extract information for various purposes. Data mining is a general concept covering different tools for performing this kind of analysis, and the development of mathematical models and efficient algorithms is of key importance. In this thesis we discuss algorithms for the reduced rank regression problem and algorithms for the computation of the best multilinear rank approximation of tensors. The first two papers deal with the reduced rank regression problem, which is encountered in the field of state-space subspace system identification. More specifically, the problem is
\[ \min_{\operatorname{rank}(X) = k} \det\bigl[(B - XA)(B - XA)^{\mathsf{T}}\bigr], \]
where $A$ and $B$ are given matrices and we want to find an $X$ satisfying the rank condition that minimizes the determinant. This problem is not properly stated, since it involves implicit assumptions on $A$ and $B$ ensuring that $(B - XA)(B - XA)^{\mathsf{T}}$ is never singular. This deficiency of the determinant criterion is fixed by generalizing the minimization criterion to rank reduction and volume minimization of the objective matrix, where the volume of a matrix is defined as the product of its nonzero singular values. We give an algorithm that solves the generalized problem and identify properties of the input and output signals that cause a singular objective matrix. Classification problems occur in many applications; the task is to determine the label or class of an unknown object. The third paper concerns the classification of handwritten digits in the context of tensors or multidimensional data arrays. Tensor and multilinear algebra is an area attracting more and more attention because of the multidimensional structure of the data collected in various applications. Two classification algorithms are given based on the higher order singular value decomposition (HOSVD). The main algorithm uses the HOSVD to reduce the data by 98--99% before constructing the class models. The models are computed as a set of orthonormal bases spanning the dominant subspaces of the different classes, and an unknown digit is expressed as a linear combination of the basis vectors. The resulting algorithm achieves a classification error of about 5% at a fairly low computational cost. The remaining two papers discuss computational methods for the best multilinear rank approximation problem
\[ \min_{\mathcal{B}} \| \mathcal{A} - \mathcal{B} \|, \]
where $\mathcal{A}$ is a given tensor and we seek the best low multilinear rank approximation tensor $\mathcal{B}$. This is a generalization of the best low rank matrix approximation problem. It is well known that for matrices the solution is given by truncating the singular values in the singular value decomposition (SVD) of the matrix, but for tensors in general the truncated HOSVD does not give an optimal approximation. For example, a third order tensor $\mathcal{B} \in \mathbb{R}^{I \times J \times K}$ with $\operatorname{rank}(\mathcal{B}) = (r_1, r_2, r_3)$ can be written as the product
\[ \mathcal{B} = (X, Y, Z) \cdot \mathcal{C}, \qquad b_{ijk} = \sum_{\lambda,\mu,\nu} x_{i\lambda}\, y_{j\mu}\, z_{k\nu}\, c_{\lambda\mu\nu}, \]
where $\mathcal{C} \in \mathbb{R}^{r_1 \times r_2 \times r_3}$ and $X \in \mathbb{R}^{I \times r_1}$, $Y \in \mathbb{R}^{J \times r_2}$, and $Z \in \mathbb{R}^{K \times r_3}$ are matrices of full column rank. Since it is no restriction to assume that $X$, $Y$, and $Z$ have orthonormal columns, and because of these orthonormality constraints, the approximation problem can be considered as a nonlinear optimization problem defined on a product of Grassmann manifolds.
We introduce novel techniques for multilinear algebraic manipulation that enable both theoretical analysis and algorithmic implementation. These techniques are used to solve the approximation problem with Newton and quasi-Newton methods specifically adapted to operate on products of Grassmann manifolds. The presented algorithms are suited for small, large, and sparse problems and, when applied to difficult problems, they clearly outperform alternating least squares methods, which are standard in the field.
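For readers who want a concrete reference point, the following is a minimal Python/NumPy sketch of the truncated HOSVD, the generally suboptimal starting approximation that Grassmann-based Newton or quasi-Newton algorithms of the kind described above would refine; the function names and the random test tensor are illustrative assumptions, not code from the thesis.

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: arrange the mode-n fibers of T as columns of a matrix.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multiply_mode(T, M, mode):
    # Multiply the tensor T along the given mode by the matrix M.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def truncated_hosvd(T, ranks):
    # Factors: dominant left singular vectors of each unfolding (orthonormal columns).
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    # Core tensor: T multiplied by the transposed factors in every mode.
    core = T
    for m, U in enumerate(factors):
        core = multiply_mode(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    B = core
    for m, U in enumerate(factors):
        B = multiply_mode(B, U, m)
    return B

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 15, 10))
core, factors = truncated_hosvd(A, (5, 4, 3))
B = reconstruct(core, factors)
print("relative approximation error:", np.linalg.norm(A - B) / np.linalg.norm(A))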
APA, Harvard, Vancouver, ISO, and other styles
38

Sousa Neto, Theófilo Machado de. "Ajuste de curvas usando métodos numéricos." Universidade Federal de Goiás, 2018. http://repositorio.bc.ufg.br/tede/handle/tede/8755.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
This work is motivated by the need to discuss mathematical methods capable of fitting curves to experimental data. It presents seven curve-fitting methods: three based on least-squares regression techniques and four based on interpolation techniques. It first gives the definitions that provide the reader with the mathematical foundation underlying the equations. In parallel, it discusses, through examples, the domain of application of the described methods, comparing the various techniques and their estimation errors whenever possible. To show that the techniques discussed here are feasible for use in basic education, it reports an experience of applying one of these methods to solve a basic problem from the discipline of Physics. After presenting, step by step, the method for obtaining soil resistivity, a variable of utmost importance for the design of grounding grids serving power substations, the work concludes by solving this problem with the aid of the curve-fitting techniques studied and proposes including the methods addressed in one of the steps of the soil-resistivity procedure.
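To make the contrast between the two families of methods concrete, here is a hedged Python sketch that fits the same synthetic data once by least squares and once by interpolation; the data, the polynomial degrees, and the evaluation point are invented for illustration and are not taken from the dissertation.

import numpy as np

# Synthetic measurements (illustrative only): noisy samples of an underlying trend.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 11.9])

# Least-squares fit: a degree-1 polynomial minimizing the sum of squared residuals.
fit = np.poly1d(np.polyfit(x, y, deg=1))

# Interpolation: a degree-5 polynomial that passes exactly through all six points.
interp = np.poly1d(np.polyfit(x, y, deg=len(x) - 1))

x_new = 3.5
print("least-squares estimate at x = 3.5:", fit(x_new))
print("interpolating estimate at x = 3.5:", interp(x_new))
print("sum of squared residuals of the least-squares fit:", np.sum((fit(x) - y) ** 2))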
APA, Harvard, Vancouver, ISO, and other styles
39

Critchfield, Brian L. "Statistical Methods For Kinetic Modeling Of Fischer Tropsch Synthesis On A Supported Iron Catalyst." Diss., CLICK HERE for online access, 2006. http://contentdm.lib.byu.edu/ETD/image/etd1670.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Krba, Martin. "Identifikace počítače na základě časových značek paketů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236536.

Full text
Abstract:
The basic way to identify a device in a computer network is by its MAC address and IP address. The main goal of this work is to create an application capable of unambiguously identifying devices in a computer network even when their MAC address or IP address changes. This is done by exploiting tiny deviations in the hardware clock, known as clock skew, which appear in every clock based on a quartz oscillator. Using clock skew is beneficial because it requires no changes on the fingerprinted device and no cooperation from it. The values are obtained by capturing packets that include timestamps. The method has a wide range of applications, for example computer forensics, tracking a device across different access points, or counting devices behind a router with NAT.
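At its core, the clock-skew fingerprint described above is the slope of a line fitted to observed timestamp offsets over time. The sketch below estimates it with an ordinary least-squares fit in Python; note that parts of the literature use a linear-programming bound instead, and the sample offsets here are invented for illustration.

import numpy as np

# (receive time in seconds, observed timestamp offset in milliseconds) pairs,
# as would be extracted from captured packets carrying TCP timestamps.
receive_times = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
offsets_ms = np.array([0.00, 0.31, 0.58, 0.92, 1.21, 1.49])

# Ordinary least-squares line: offset ~ slope * time + intercept.
# The slope, converted to parts per million, is the clock-skew fingerprint.
slope_ms_per_s, intercept = np.polyfit(receive_times, offsets_ms, deg=1)
print(f"estimated clock skew: {slope_ms_per_s * 1000:.1f} ppm")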
APA, Harvard, Vancouver, ISO, and other styles
41

Weng, Lichen. "A Hardware and Software Integrated Approach for Adaptive Thread Management in Multicore Multithreaded Microprocessors." FIU Digital Commons, 2012. http://digitalcommons.fiu.edu/etd/653.

Full text
Abstract:
Multicore multithreaded microprocessors maximize on-chip parallelism for optimal system performance, and their popularity in high-performance computing is growing rapidly. They increase the complexity of resource distribution on a chip by pushing it in two directions: isolation and unification. On one hand, multiple cores are implemented to deliver computation and memory-access resources to more than one thread at the same time; nevertheless, this limits a thread's access to resources in other cores, even when they are in heavy demand. On the other hand, simultaneous multithreaded architectures unify a core's execution resources for concurrently running threads, so threads are strongly affected by inter-thread interference. Moreover, the impact of this complicated distribution is amplified by variation in workload behavior. As a result, the microprocessor requires an adaptive management scheme to schedule threads across cores and coordinate them within cores. In this study, an adaptive thread management scheme integrating both hardware and software approaches was proposed. The instruction fetch policy at the hardware level took responsibility by prioritizing the threads within a core, while the operating system scheduler at the software level dynamically paired threads onto the multiple cores. The tie between them was the proposed online linear model, constructed dynamically for every thread from data-miss statistics by a regression algorithm. Consequently, the hardware part of the proposed scheme proactively granted higher priority to the threads with fewer predicted long-latency loads, expecting them to make better use of the shared execution resources. Meanwhile, the software part was invoked by the model upon significant changes in execution phases and paired threads with different demands onto the same core to minimize competition on the chip. The proposed scheme was compared to its peer designs, and an overall 43% speedup was achieved by the integrated approach over the combination of two baseline policies in hardware and software, respectively. The overhead was examined carefully with regard to power, area, storage, and latency, as well as the relationship between overhead and performance.
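As a hedged illustration of the kind of online linear model mentioned above, the sketch below uses recursive least squares to update a per-thread predictor of long-latency loads from recent cache-miss counts; the feature choice, class name, and numbers are assumptions for illustration, not the scheme implemented in the dissertation.

import numpy as np

class OnlineLinearModel:
    # Recursive least squares: y ~ w . x, updated one sample at a time.
    def __init__(self, n_features, lam=0.99, delta=100.0):
        self.w = np.zeros(n_features)          # model coefficients
        self.P = np.eye(n_features) * delta    # inverse-covariance estimate
        self.lam = lam                         # forgetting factor

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)           # gain vector
        self.w += k * (y - self.w @ x)         # correct by the prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def predict(self, x):
        return self.w @ np.asarray(x, dtype=float)

# Illustrative use: predict long-latency loads per interval from L1/L2 miss counts.
model = OnlineLinearModel(n_features=2)
samples = [([120, 30], 25.0), ([200, 55], 46.0), ([90, 20], 18.0), ([240, 70], 57.0)]
for features, long_latency_loads in samples:
    model.update(features, long_latency_loads)
print("predicted long-latency loads:", model.predict([150, 40]))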
APA, Harvard, Vancouver, ISO, and other styles
42

Castro-Lucas, de Souza Cristina. "Les relations entre l'innovation et la performance internationale pour les activités de service aux entreprises." Thesis, Aix-Marseille 3, 2011. http://www.theses.fr/2011AIX32037.

Full text
Abstract:
This research deals with service innovation and internationalization in the business services sector: how firms perform on international markets and gain an edge through innovation in the service concept or the service process. We tested the relationship between innovation and international performance and assessed the impact of innovation relative to other international advantages. Symmetrically, we also examined how far the internationalization process can be a powerful driver of innovation for service firms. After developing a theoretical model, data were collected through a telephone survey. The target respondents were senior executives of internationalized service companies in France. Out of the 807 companies contacted, 51 usable responses were received. The data were analyzed by structural equation modeling (SEM) using the partial least squares method. The tested model shows that service innovation has a positive influence on international development and that international competence, gained in foreign markets, drives the dynamics of innovation in service companies. The proposed model highlights R&D (organizational) capabilities, relational capabilities, ICT, international competence, service innovation, and international experience as factors that affect the results of internationalized companies, or more specifically, their international performance.
APA, Harvard, Vancouver, ISO, and other styles
43

Christoforo, André Luis. "Influência das irregularidades da forma em peças de madeira na determinação do módulo de elasticidade longitudinal." Universidade de São Paulo, 2007. http://www.teses.usp.br/teses/disponiveis/18/18134/tde-10042008-092846/.

Full text
Abstract:
Currently, the normative documents that deal with determining the stiffness and strength properties of round structural timber elements do not take into account, in their calculations and mathematical models, the influence of the irregularities in the geometry of these elements. The objective of this work is to determine the optimum value of the modulus of elasticity for round structural timber elements through an optimization technique combined with the inverse analysis method, the finite element method, and the least squares method.
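As a hedged illustration of combining inverse analysis with least squares, the sketch below estimates a modulus of elasticity by minimizing the squared mismatch between measured and predicted deflections; a closed-form beam formula stands in for the finite element model used in the thesis, and all numbers are invented.

import numpy as np
from scipy.optimize import least_squares

# Illustrative only: mid-span deflection of a simply supported round beam
# under a central point load, delta = P * L^3 / (48 * E * I).
L = 2.0                      # span (m)
d = 0.15                     # nominal diameter (m)
I = np.pi * d**4 / 64        # second moment of area (m^4)
loads = np.array([1000.0, 2000.0, 3000.0, 4000.0])          # applied loads (N)
measured = np.array([0.00082, 0.00165, 0.00249, 0.00330])   # deflections (m), synthetic

def residuals(params):
    E = params[0] * 1e9      # parameter expressed in GPa for better scaling
    predicted = loads * L**3 / (48.0 * E * I)
    return predicted - measured

result = least_squares(residuals, x0=[5.0])   # initial guess: 5 GPa
print(f"estimated modulus of elasticity: {result.x[0]:.2f} GPa")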
APA, Harvard, Vancouver, ISO, and other styles
44

Levitskaya, T. "The features of construction the empirical description of the drop contour in automation calculations of the surface properties of the melts." Thesis, Sumy State University, 2017. http://essuir.sumdu.edu.ua/handle/123456789/55770.

Full text
Abstract:
This paper considers the automation of calculating the density and surface tension of melts by the sessile (recumbent) drop method. To solve this task, empirical formulas have been derived that analytically describe the numerical solution of Laplace's differential equation for the drop contour. This has made it possible to fully automate the calculation of the thermodynamic characteristics.
APA, Harvard, Vancouver, ISO, and other styles
45

Subedi, Santosh. "Determination of fertility rating (FR) in the 3-PG model for loblolly pine (Pinus taeda L.) plantations in the southeastern United States." Diss., Virginia Tech, 2015. http://hdl.handle.net/10919/52588.

Full text
Abstract:
Soil fertility is an important component of forest ecosystems, yet evaluating it remains one of the least understood aspects of forest science. Phytocentric and geocentric approaches were used to assess soil fertility in loblolly pine plantations throughout their geographic range in the United States. The phytocentric model of soil fertility was constructed from the relationship between site index and aboveground productivity. The geocentric models used physical and chemical properties of the A-horizon and were constructed with two modeling approaches. In the first, ordinary least squares multiple regression was used to model soil fertility, estimated from site index, as a function of the physical and chemical properties of the A-horizon; this approach proved unsuitable because of multicollinearity among the soil variables. In the second, a multivariate modeling approach, partial least squares regression, was used to mitigate the effects of multicollinearity. The best model for quantifying soil fertility from soil physical and chemical properties included N, Ca, Mg, C, and sand percentage as significant predictors. The 3-PG process-based model was evaluated for simulating the response of loblolly pine to changes in soil fertility. Fertility rating (FR) is a parameter in 3-PG that scales soil fertility on a range from 0 to 1. FR values estimated from the phytocentric and geocentric approaches were tested against observed production. The 3-PG predictions of aboveground productivity explained 89% of the variation in observed aboveground productivity using FR derived from site index and 84% using FR derived from the physical and chemical properties of the A-horizon. A response function modeling the dynamics of FR (ΔFR) after a one-time midrotation fertilization with N and P was developed using the Weibull function. The magnitude of ΔFR varied with the intensity of N and the time since fertilizer application. The hypothesis that repeated fertilization with N and P eliminates major nutrient deficiencies in the southeastern US was tested, and a relationship between the baseline fertility rating and the fertilizer response was developed. An inverse relationship was observed between fertilizer response and baseline FR.
Ph. D.
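The switch from ordinary least squares to partial least squares motivated by multicollinearity can be illustrated with a short, hedged sketch using scikit-learn's PLSRegression on synthetic, strongly correlated soil predictors; the variable names and data are assumptions, not the dissertation's data set.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
n = 60

# Synthetic, strongly correlated A-horizon predictors (N, Ca, Mg, C, sand %).
base = rng.standard_normal(n)
X = np.column_stack([
    base + 0.1 * rng.standard_normal(n),   # N
    base + 0.1 * rng.standard_normal(n),   # Ca
    base + 0.1 * rng.standard_normal(n),   # Mg
    base + 0.1 * rng.standard_normal(n),   # C
    rng.standard_normal(n),                # sand %
])
site_index = 20 + 3.0 * base + 0.5 * X[:, 4] + rng.standard_normal(n)

# Multicollinearity inflates OLS coefficient variance; PLS first projects the
# correlated predictors onto a few latent components and regresses on those.
pls = PLSRegression(n_components=2)
pls.fit(X, site_index)
print("R^2 on training data:", pls.score(X, site_index))
print("PLS coefficients:", pls.coef_.ravel())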
APA, Harvard, Vancouver, ISO, and other styles
46

Sun, Ruting (Michelle). "Characterization of the acoustic properties of cementitious materials." Thesis, Loughborough University, 2017. https://dspace.lboro.ac.uk/2134/27308.

Full text
Abstract:
The primary aim of this research was to investigate the fundamental acoustic properties of several cementitious materials, the influence of mix design parameters and constituents, and the effect of the physical and mechanical properties of the cementitious material (concrete/mortar) on its acoustic properties. The main objectives were: to understand the mechanism of sound production in musical instruments and the effects of the material(s) employed on the sound generated; to build upon previous research regarding the selection of the tested physical/mechanical and acoustic properties of cementitious materials; to draw conclusions regarding the effect of different constituents, mix designs, and material properties on the acoustic properties of the material; and to build a model of the relationship between the acoustic properties of a cementitious material and its mix design via its physical/mechanical properties. To meet this aim, the research employed a semi-experimental (semi-analytical) method: two experimental programmes were performed (I and II), and a mathematical optimization technique (the least squares method) was then used to construct an optimized mathematical model matching the experimental data. In Experimental Programme I, six constituents/factors were investigated with respect to their effect on the physical/mechanical and acoustic properties: cementitious additives (fly ash, silica fume, and GGBS), superplasticizer, and basic mix design parameters (w/c ratio and sand grading). Eleven properties were tested for each related mortar type: eight physical/mechanical properties (compressive strength, density, hardness, flexural strength, flexural modulus, elastic modulus, dynamic modulus, and slump) and three acoustic properties (resonant frequency, speed of sound, and quality factor, i.e. internal damping). For each type of mortar, three cubes, three prisms, and three cylinders were produced. In Experimental Programme I, 20 mix designs were investigated, 180 specimens were produced, and 660 test results were recorded. After analysing the results of Experimental Programme I, fly ash (FA), the w/b ratio, and the b/s ratio were selected as the constituents/factors with the greatest influence on the acoustic properties of the material; these were subsequently investigated in detail in Experimental Programme II, where various combinations of FA replacement level, w/b ratio, and b/s ratio (three factors) produced 1122 test results. The relationship between these three factors and the selected eleven properties was then determined. Using regression analysis and the least squares optimization technique, the relationship between the physical/mechanical properties and the acoustic properties was determined as well. Across both experimental programmes, 54 mix designs were investigated in total, 486 specimens were produced and tested, and 1782 test results were recorded. Finally, based upon well-known existing relationships (including the model relating compressive strength and elastic modulus, and the model relating elastic modulus and dynamic modulus) and newly regressed models for FA mortar (the relationship between compressive strength and constituents, which is unique to each mix), optimized objective functions linking the acoustic properties (speed of sound and damping ratio) to the mix design (proportions of constituents) were constructed via the physical/mechanical properties.
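As a hedged sketch of the final modeling step, the snippet below fits a linear least-squares model relating an acoustic property to physical/mechanical properties; the predictors, the underlying relation used to generate the data, and all values are synthetic assumptions rather than results from the thesis.

import numpy as np

rng = np.random.default_rng(7)
n = 40

# Synthetic mortar data: dynamic modulus (GPa) and density (kg/m^3).
dynamic_modulus = rng.uniform(20, 45, n)
density = rng.uniform(2000, 2400, n)
# Speed of sound roughly follows sqrt(E_dyn / rho); add measurement noise.
speed = np.sqrt(dynamic_modulus * 1e9 / density) + rng.normal(0, 30, n)

# Least-squares fit of a linear model: speed ~ b0 + b1 * E_dyn + b2 * density.
A = np.column_stack([np.ones(n), dynamic_modulus, density])
coeffs, residuals, rank, _ = np.linalg.lstsq(A, speed, rcond=None)
print("fitted coefficients [b0, b1, b2]:", coeffs)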
APA, Harvard, Vancouver, ISO, and other styles
47

Wang, Kuo-Lung, and 王國龍. "Least Square Method for Concave Regression." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/14849511950359715572.

Full text
Abstract:
Master's thesis
Tamkang University
Master's Program, Department of Mathematics
98
The search for a simple, smooth, and efficient estimator of a smooth concave regression function is of considerable interest. In this thesis, we describe a least squares method for concave regression in which the regression function is modeled by a Bernstein polynomial. We employ Akaike's information criterion to determine the degree of the Bernstein polynomial, propose an algorithm based on a penalty function method to compute the estimate, and provide a pointwise confidence interval estimator and a prediction interval band for the regression function. The success of this method is demonstrated in simulation studies and in an analysis of real data.
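A minimal, hedged sketch of the core idea, least-squares fitting of a regression function expressed in the Bernstein polynomial basis, is given below; the concavity penalty and the AIC-based degree selection described in the thesis are omitted, and the data and degree are illustrative.

import numpy as np
from scipy.special import comb

def bernstein_basis(x, degree):
    # Design matrix of Bernstein basis polynomials C(n,k) x^k (1-x)^(n-k) on [0, 1].
    k = np.arange(degree + 1)
    return comb(degree, k) * np.power.outer(x, k) * np.power.outer(1 - x, degree - k)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sqrt(x) + rng.normal(0, 0.05, x.size)     # a concave trend plus noise

degree = 5
B = bernstein_basis(x, degree)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)     # unconstrained least squares

# Concavity could be imposed by requiring nonpositive second differences of
# the coefficients, for example through a penalty term; that step is omitted here.
fitted = B @ coef
print("residual sum of squares:", np.sum((y - fitted) ** 2))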
APA, Harvard, Vancouver, ISO, and other styles
48

Lo, Lu-Lee, and 駱如儀. "ON ROBUST FUZZY LEAST-SQUARES METHOD FOR FUZZY LINEAR REGRESSION MODEL." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/05307628210573279983.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Applied Mathematics
92
Since Tanaka et al. proposed a study of linear regression with a fuzzy model in 1982, fuzzy regression analysis has been widely studied and applied in various areas. In general, the analysis of fuzzy regression models can be roughly divided into two categories: one is based on Tanaka's linear-programming approach, and the other is based on the fuzzy least-squares approach. In this paper, a robust fuzzy least-squares algorithm is considered for the estimation of fuzzy linear regression (FLR) models. Numerical comparisons between this fuzzy least-squares method and Tanaka's method for FLR models are then carried out. According to these comparisons, the proposed fuzzy least-squares method is preferable for parameter estimation in FLR models.
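A hedged sketch of the fuzzy least-squares idea for symmetric triangular fuzzy responses is given below: each observation is a (center, spread) pair and the criterion sums squared center and spread mismatches, a simplification in the spirit of Diamond's distance; it is not the robust algorithm of the thesis, and the data are invented.

import numpy as np
from scipy.optimize import minimize

# Crisp inputs with symmetric triangular fuzzy outputs given as (center, spread).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_center = np.array([2.1, 4.2, 5.8, 8.1, 9.9])
y_spread = np.array([0.4, 0.5, 0.7, 0.8, 1.0])

def objective(params):
    a_c, b_c, a_s, b_s = params
    pred_center = a_c + b_c * x
    pred_spread = a_s + b_s * x
    # Squared distance between triangular fuzzy numbers, split into a center
    # mismatch and a spread mismatch (a Diamond-style simplification).
    return (np.sum((y_center - pred_center) ** 2)
            + np.sum((y_spread - pred_spread) ** 2))

res = minimize(objective, x0=[0.0, 1.0, 0.1, 0.1])
print("center model (intercept, slope):", res.x[:2])
print("spread model (intercept, slope):", res.x[2:])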
APA, Harvard, Vancouver, ISO, and other styles
49

Abarin, Taraneh. "Second-order least squares estimation in regression models with application to measurement error problems." 2009. http://hdl.handle.net/1993/3126.

Full text
Abstract:
This thesis studies the second-order least squares (SLS) estimation method in regression models with and without measurement error. Applications of the methodology in general quasi-likelihood and variance function models, censored models, and linear and generalized linear models are examined, and strong consistency and asymptotic normality are established. To overcome the numerical difficulty of minimizing an objective function that involves multiple integrals, a simulation-based SLS estimator is used and its asymptotic properties are studied. The finite-sample performance of the estimators in all of the studied models is investigated through simulation studies.
February 2009
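A minimal, hedged sketch of the second-order least squares idea for a simple linear model with homoscedastic errors is given below: the estimator matches both the first and second conditional moments of the response, here with an identity weighting matrix; the data are simulated, and the thesis's measurement-error and simulation-based extensions are not shown.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(0, 5, n)
beta0, beta1, sigma = 1.0, 2.0, 0.8
y = beta0 + beta1 * x + rng.normal(0, sigma, n)

def sls_objective(theta):
    b0, b1, s2 = theta
    mean = b0 + b1 * x                 # E(y | x)
    second = mean ** 2 + s2            # E(y^2 | x)
    rho = np.column_stack([y - mean, y ** 2 - second])
    # Identity weights: sum over observations of rho_i' W rho_i with W = I.
    return np.sum(rho ** 2)

res = minimize(sls_objective, x0=[0.0, 1.0, 1.0], method="Nelder-Mead")
print("SLS estimates (beta0, beta1, sigma^2):", res.x)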
APA, Harvard, Vancouver, ISO, and other styles
50

Wasser, Thomas E. "Comparison and evaluation of the effect of outliers on ordinary least squares and Theil nonparametric regression with the evaluation of standard error estimates for the Theil nonparametric regression method /." Diss., 1998. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:9914439.

Full text
APA, Harvard, Vancouver, ISO, and other styles
