Academic literature on the topic 'Correlation-regression modeling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Correlation-regression modeling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Correlation-regression modeling"

1

Alexandrova, Natalia, Natalia Klimushkina, Elena Leshina, and Maria Surkova. "Correlation and regression modeling of the grain production cost." BIO Web of Conferences 37 (2021): 00006. http://dx.doi.org/10.1051/bioconf/20213700006.

Full text
Abstract:
The study of the efficiency of the grain economy has shown that its level is determined by production costs. Correlation and regression analysis of the factors of the production cost of 100 kg of grain has revealed that its value is determined, first of all, by the level of intensification of the industry and the yield capacity of grain and leguminous crops. According to the obtained result, in order to lower the production cost of 100 kg of grain, the value of production costs per hectare of area under crops should not exceed 10.5 thousand rubles (the level of intensification), and the yield of grain and leguminous crops should not be less than 17.5 dt/ha. Otherwise, the production cost of 100 kg of grain will be too high, which will lead to a decrease in the profitability of the grain economy.
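The two-factor relationship this abstract describes (production cost per 100 kg as a function of the intensification level and the yield) can be sketched as a small ordinary-least-squares fit. The figures below are deliberately constructed toy data, not the study's, chosen to be exactly linear so the coefficients are recoverable:

```python
# Minimal two-factor OLS sketch: cost ~ b0 + b1*intensification + b2*yield.
# All numbers are invented illustrations, not the study's data.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_ols(x1, x2, y):
    """Return (b0, b1, b2) minimizing the sum of squared residuals,
    via the normal equations X'X b = X'y for X = [1, x1, x2]."""
    n = len(y)
    s12 = sum(a * b for a, b in zip(x1, x2))
    XtX = [
        [n, sum(x1), sum(x2)],
        [sum(x1), sum(a * a for a in x1), s12],
        [sum(x2), s12, sum(b * b for b in x2)],
    ]
    Xty = [sum(y), sum(a * c for a, c in zip(x1, y)), sum(b * c for b, c in zip(x2, y))]
    return solve3(XtX, Xty)

# Hypothetical per-farm data: costs per ha (thousand rubles), yield (dt/ha),
# and cost per 100 kg constructed as exactly 200 + 50*x1 - 10*x2.
intensification = [8.0, 9.5, 10.5, 11.0, 12.5, 13.0]
yield_dt = [19.0, 18.0, 17.5, 16.0, 15.0, 14.0]
cost_100kg = [410.0, 495.0, 550.0, 590.0, 675.0, 710.0]

b0, b1, b2 = fit_ols(intensification, yield_dt, cost_100kg)
pred = b0 + b1 * 10.5 + b2 * 17.5  # cost at the thresholds cited in the abstract
```

Because the toy response is exactly linear in the two factors, the fit recovers the constructed coefficients and the prediction at the abstract's thresholds falls on the same plane.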
2

Chen, S., X. X. Wang, and D. J. Brown. "Sparse incremental regression modeling using correlation criterion with boosting search." IEEE Signal Processing Letters 12, no. 3 (2005): 198–201. http://dx.doi.org/10.1109/lsp.2004.842250.

Full text
3

Ostapenko, Ya, and D. Pastukh. "Correlation-regression analysis of factors affecting inventories of production enterprise." 101, no. 101 (December 30, 2021): 124–29. http://dx.doi.org/10.26565/2311-2379-2021-101-12.

Full text
Abstract:
The article highlights the feasibility of correlation-regression analysis and inventory modeling at a manufacturing enterprise using application software. Inventory modeling helps to optimize inventories and further increase the profitability of the enterprise, while the use of application software speeds up and simplifies the modeling process and strengthens the analytical component. Among the modern variety of applications for statistical and econometric analysis, it is important to choose an effective software product that is simple and easy to use and does not require significant costs. The authors propose the free but high-quality R-Studio, which is easy to use and fast in computation. Using the free R-Studio application as an example, a correlation-regression analysis is carried out and a regression model of inventories is constructed for the manufacturing enterprise PJSC "Novokramatorsk Machine-Building Plant". The model examines the influence on the company's inventories of internal factors, such as net income from sales of products (goods, works, services), net financial result (profit), and accounts payable for goods (works, services), and of external factors: GDP and the dollar exchange rate. According to the simulation results, net income from sales of products (goods, works, services) has the greatest influence among the internal factors; among the external factors, GDP is the most influential. The constructed model is adequate, as evidenced by a significant Fisher criterion and coefficient of determination: 90% of the variation in the inventories of the studied enterprise is explained by the selected factors. The construction of a matrix of correlation coefficients and the correlation analysis confirmed the close relationship between the selected factors and their impact on inventories. The example of PJSC "Novokramatorsk Machine-Building Plant" demonstrates the practical usefulness of inventory modeling using computer programs.
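The correlation-matrix step this abstract mentions can be sketched in a few lines of plain Python (the article's own computations were done in R-Studio; the series below are invented placeholders, not PJSC "Novokramatorsk Machine-Building Plant" figures):

```python
from math import sqrt

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented annual series (units omitted); inventories move with net income.
inventories = [10.0, 12.0, 15.0, 18.0, 22.0]
factors = {
    "net_income": [20.0, 24.0, 30.0, 36.0, 44.0],  # exactly 2x inventories
    "payables":   [5.0, 4.0, 6.0, 5.0, 7.0],
    "gdp":        [100.0, 107.0, 103.0, 115.0, 112.0],
}

# One row of the correlation matrix: each factor against inventories.
corr = {name: pearson(series, inventories) for name, series in factors.items()}
strongest = max(corr, key=lambda k: abs(corr[k]))
```

With these placeholder numbers, the factor most strongly correlated with inventories is net income, mirroring the abstract's conclusion about the dominant internal factor.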
4

Wang, Yadi, Wenbo Zhang, Minghu Fan, et al. "Regression with adaptive lasso and correlation based penalty." Applied Mathematical Modelling 105 (May 2022): 179–96. http://dx.doi.org/10.1016/j.apm.2021.12.016.

Full text
5

Xiang, Liwen, Jie Jin, Jinghui Zhang, Changjiang Ma, Bingnan Xiang, and Wenhua Sun. "Modeling the Correlation Relationship of Aqueous Battery Parameters Based on Regression Analysis." IOP Conference Series: Earth and Environmental Science 898, no. 1 (2021): 012020. http://dx.doi.org/10.1088/1755-1315/898/1/012020.

Full text
Abstract:
This study investigates a new aqueous battery. Based on experimental data, the authors select the charge and discharge capacity, voltage, and current of the battery during the charging and discharging process, establish a correlation model between these parameters through regression analysis and other methods, and derive methods to optimise the performance and power supply capacity of the battery. The work explores a new high-efficiency battery that could replace the ordinary battery and provides new ideas and methods for the development of the battery business.
6

Jabeen, et al. "Regression modeling and correlation analysis spread of COVID-19 data for Pakistan." International Journal of Advanced and Applied Sciences 9, no. 3 (2022): 71–81. http://dx.doi.org/10.21833/ijaas.2022.03.009.

Full text
Abstract:
This study presents a mathematical analysis of the coronavirus spread in Pakistan by analyzing the COVID-19 situation in the provinces, including Gilgit-Baltistan and Azad Jammu and Kashmir, and the federal capital (seven zones) individually. The influence of each province and the Federal Capital territory is then observed over the other territories. By subdividing the associated data into confirmed cases, death cases, and recovery cases, the dependence of the COVID-19 situation in one province on the other provinces is investigated. Since the worsening circumstances in the neighboring countries were considered a catalyst that initiated the outburst in Pakistan, it seemed necessary to have an understanding of the situation in neighboring countries, particularly Iran, India, and Bangladesh. Exploratory data analysis is utilized to understand the behavior of confirmed-case, death-case, and recovery-case data of COVID-19 in Pakistan. Also, an understanding of the pandemic spread during different waves of COVID-19 is obtained. Depending on the individual situation in each of the provinces, a different ARIMA model is expected in each case, and the hunt for the most suitable ARIMA models is an essential part of this study. The time-series data are forecast with the most suitable ARIMA models to observe the influence of one territory over another. Moreover, forecasting for the month of August 2021 is performed and a possible correlation with actual data is determined. Linear, multiple regression, and exponential models have been applied and the best-fitted model is acquired. The information obtained from such analysis can be employed to vary possible parameters and variables in the system to achieve optimal performance.
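Of the model families this abstract lists, the exponential model is the simplest to sketch: fit y = a·e^(bt) by ordinary least squares on log y. The case counts below are fabricated for illustration, not Pakistani data:

```python
from math import log, exp

def fit_exponential(t, y):
    """Fit y = a * exp(b * t) by simple linear regression on log(y),
    a common shortcut for exponential trend models."""
    n = len(t)
    ly = [log(v) for v in y]
    mt, ml = sum(t) / n, sum(ly) / n
    b = sum((ti - mt) * (li - ml) for ti, li in zip(t, ly)) / \
        sum((ti - mt) ** 2 for ti in t)
    a = exp(ml - b * mt)
    return a, b

# Fabricated cumulative case counts growing roughly 30% per step:
days = [0, 1, 2, 3, 4, 5]
cases = [100, 130, 169, 220, 286, 371]

a, b = fit_exponential(days, cases)
forecast_day7 = a * exp(b * 7)  # extrapolate two steps past the sample
```

The fitted growth rate b comes out near ln(1.3), the rate used to fabricate the series, and the day-7 extrapolation continues that trend.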
7

Holomsha, Nataliia, and Olha Holomsha. "Correlation-regression modeling of competitiveness of Ukrainian wheat on the world markets." Ekonomika APK, no. 10 (October 30, 2019): 88–97. http://dx.doi.org/10.32317/2221-1055.201910088.

Full text
8

Litvinova, Maryna, Nataliia Andrieieva, Viktor Zavodyannyi, Sergii Loi, and Olexandr Shtanko. "Application of Multiple Correlation Analysis Method to Modeling the Physical Properties of Crystals (on the Example of Gallium Arsenide)." Eastern-European Journal of Enterprise Technologies 6, no. 12 (102) (2019): 39–45. https://doi.org/10.15587/1729-4061.2019.188512.

Full text
Abstract:
The use of modern applied computer programs expands the possibility of multicomponent statistical analysis in materials science. The procedure for applying the method of multiple correlation and regression analysis for the study and modeling of multifactorial relationships of physical characteristics in crystalline structures is considered. The consideration is carried out using single crystals of undoped gallium arsenide as an example. The statistical analysis involved a complex of seven physical characteristics obtained by non-destructive methods for each of 32 points along the diameter of the crystal plate. The data array is investigated using multiple correlation analysis methods. A computational model of regression analysis is built. Based on it, using the programs Excel, STADIA and SPSS Statistics 17.0, statistical data processing and analytical study of the relationships of all characteristics are carried out. Regression relationships are obtained and analyzed in determining the concentration of the background carbon impurity, residual mechanical stresses, and the concentration of the background silicon impurity. The ability to correctly conduct multiple statistical analysis to model the properties of a GaAs crystal is established. New relationships between the parameters of the GaAs crystal are revealed. It is found that the concentration of the background silicon impurity is related to the vacancy composition of the crystal and the concentration of EL2 centers. It is also found that there is no relationship between the silicon concentration and the value of residual mechanical stresses. These facts and the thermal conditions for the formation of point defects during the growth of a single crystal indicate the absence of a redistribution of background impurities during cooling of an undoped GaAs crystal.
The use of the multiple regression analysis method in materials science makes it possible not only to model multifactor relationships in binary crystals, but also to carry out stochastic modeling of factor systems of variable composition.
9

Kazambayeva, Aigul, Zein Aydynov, Zhanbolat Daukharin, and Railash Turchekenova. "Modeling of State Support for Agriculture." Public Administration and Civil Service 91, no. 4 (2024): 30–43. https://doi.org/10.52123/1994-2370-2024-1277.

Full text
Abstract:
The article examines the impact of state support through subsidies on the outcomes of agricultural production in the Republic of Kazakhstan. The purpose of the study is to analyze the dynamics of subsidies and model the correlation between the subsidy size and gross agricultural output. The study draws on gross crop and livestock production data, as well as on the amount of state support from 2008 to 2022. The analysis showed a significant increase in gross agricultural output and the subsidy size over the specified period, which demonstrates a significant development of the agricultural sector and increased government support. The conclusion of the article also highlights the importance of continuing and expanding government support programs to ensure sustainable growth of the agricultural sector and increase its competitiveness. The article also examines successful cases of state support for agriculture in developed countries, which makes it possible to offer recommendations for optimizing existing support programs in Kazakhstan. Keywords: state support, subsidies, agriculture, gross output, crop production, livestock, correlation, regression analysis.
10

Sheena Smart, P. D., et al. "Regression Tree Based Correlation Technique in Spatial Data Classification." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 10 (2021): 6184–95. http://dx.doi.org/10.17762/turcomat.v12i10.5458.

Full text
Abstract:
Data mining is the process of discovering useful patterns from large geo-spatial datasets with the help of machine learning methods. Machine learning methods play an important role in data-analytics modeling and visualization. Geo-spatial data analysis is a significant task in many application domains, such as environmental science, geographic information science, and social networks. However, existing spatial pattern discovery and prediction techniques fail to predict events accurately with minimum error and time consumption. In this paper, a novel Pearson Correlated Regression Tree-based Affine Projective spatial data Classification (PCRT-APSDC) technique is proposed to improve spatial data classification and minimize error based on the Affine Projective classification technique. The proposed algorithm employs a fuzzy rule procedure that constructs the regression tree. The fuzzy rule is applied to link the inputs (i.e., spatial data) with the outputs (i.e., classification results). The goal is to classify the data into two subsets, the fired region and the non-fired region. Experimental evaluation is carried out using a forest fire dataset with different factors such as classification accuracy, false-positive rate, and classification time. The results confirm that the proposed technique predicts the fired region with higher spatial data classification accuracy and lower classification time and false-positive rate than the state-of-the-art methods.
More sources

Dissertations / Theses on the topic "Correlation-regression modeling"

1

Sim, Nicholas. "Modeling Quantile Dependence." Thesis, Boston College, 2009. http://hdl.handle.net/2345/2467.

Full text
Abstract:
Thesis advisor: Zhijie Xiao. In recent years, quantile regression has achieved increasing prominence as a quantitative method of choice in applied econometric research. The methodology focuses on how the quantile of the dependent variable is influenced by the regressors, thus providing the researcher with much information about variations in the relationship between the covariates. In this dissertation, I consider two quantile regression models where the information set may contain quantiles of the regressors. Such frameworks thus capture the dependence between quantiles - the quantile of the dependent variable and the quantile of the regressors - which I call models of quantile dependence. These models are very useful from the applied researcher's perspective as they are able to further uncover complex dependence behavior and can be easily implemented using statistical packages meant for standard quantile regressions. The first chapter considers an application of the quantile dependence model in empirical finance. One of the most important parameters of interest in risk management is the correlation coefficient between stock returns. Knowing how correlation behaves is especially important in bear markets, as correlations become unstable and increase quickly, so that the benefits of diversification are diminished especially when they are needed most. In this chapter, I argue that it remains a challenge to estimate variations in correlations. In the literature, either a regime-switching model is used, which can only estimate correlation in a finite number of states, or a model based on extreme-value theory is used, which can only estimate correlation between the tails of the returns series. Interpreting the quantile of the stock return as having information about the state of the financial market, this chapter proposes to model the correlation between quantiles of stock returns. For instance, the correlation between the 10th percentiles of stock returns, say the U.S.
and the U.K. returns, reflects correlation when the U.S. and U.K. are in the bearish state. One can also model the correlation between the 60th percentile of one series and the 40th percentile of another, which is not possible using existing tools in the literature. For this purpose, I propose a nonlinear quantile regression where the regressor is a conditional quantile itself, so that the left-hand-side variable is a quantile of one stock return and the regressor is a quantile of the other return. The conditional quantile regressor is an unknown object, hence feasible estimation entails replacing it with the fitted counterpart, which then gives rise to problems in inference. In particular, inference in the presence of generated quantile regressors will be invalid when conventional standard errors are used. However, validity is restored when a correction term is introduced into the regression model. In the empirical section, I investigate the dependence between the quantile of U.S. MSCI returns and the quantile of MSCI returns to eight other countries including Canada and major equity markets in Europe and Asia. Using regression models based on the Gaussian and Student-t copula, I construct correlation surfaces that reflect how the correlations between quantiles of these market returns behave. Generally, the correlations tend to rise gradually when the markets are increasingly bearish, as reflected by the fact that the returns are jointly declining. In addition, correlations tend to rise when markets are increasingly bullish, although the magnitude is smaller than the increase associated with bear markets. The second chapter considers an application of the quantile dependence model in empirical macroeconomics examining the money-output relationship. One area in this line of research focuses on the asymmetric effects of monetary policy on output growth. 
In particular, letting the negative residuals estimated from a money equation represent contractionary monetary policy shocks and the positive residuals represent expansionary shocks, it has been widely established that output growth declines more following a contractionary shock than it increases following an expansionary shock of the same magnitude. However, correctly identifying episodes of contraction and expansion in this manner presupposes that the true monetary innovation has a zero population mean, which is not verifiable. Therefore, I propose interpreting the quantiles of the monetary shocks as having information about the monetary policy stance. For instance, the 10th percentile shock reflects a restrictive stance relative to the 90th percentile shock, and the ranking of shocks is preserved regardless of shifts in the shock's distribution. This idea motivates modeling output growth as a function of the quantiles of monetary shocks. In addition, I consider modeling the quantile of output growth, which will enable policymakers to ascertain whether certain monetary policy objectives, as indexed by quantiles of monetary shocks, will be more effective in particular economic states, as indexed by quantiles of output growth. Therefore, this calls for a unified framework that models the relationship between the quantile of output growth and the quantile of monetary shocks. This framework employs a power series method to estimate quantile dependence. Monte Carlo experiments demonstrate that regressions based on cubic or quartic expansions are able to estimate the quantile dependence relationships well with reasonable bias properties and root-mean-squared errors. 
Hence, using the cubic and quartic regression models with M1 or M2 money supply growth as monetary instruments, I show that the right tail of the output growth distribution is generally more sensitive to M1 money supply shocks, while both tails of the output growth distribution are more sensitive than the center is to M2 money supply shocks, implying that monetary policy is more effective in periods of very low and very high growth rates. In addition, when non-neutral, the influence of monetary policy on output growth is stronger when it is restrictive than when it is expansionary, which is consistent with previous findings on the asymmetric effects of monetary policy on output. Thesis (PhD), Boston College, 2009. Submitted to: Boston College, Graduate School of Arts and Sciences. Discipline: Economics.
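The check (pinball) loss that underlies all of the quantile-regression machinery in this dissertation can be illustrated with a toy minimization; this is a generic sketch of the loss function, not the author's estimator:

```python
def pinball_loss(q, tau, data):
    """Average check-function loss of a candidate tau-quantile q:
    rho_tau(u) = u * (tau - 1{u < 0}) with u = v - q."""
    return sum((tau - (1 if v < q else 0)) * (v - q) for v in data) / len(data)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
tau = 0.1

# Grid search: the minimizer sits in the empirical 10th-percentile region,
# which is the defining property exploited by quantile regression.
candidates = [i / 10 for i in range(5, 105)]
best = min(candidates, key=lambda q: pinball_loss(q, tau, data))
```

For this sample the loss is flat and minimal on the interval [1, 2], exactly the set of empirical 0.1-quantiles, which is why minimizing the check loss over a linear model yields conditional quantiles.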
2

Kureshy, Imran A. 1965. "Credit derivatives : market dimensions, correlation with equity and implied option volatility, regression modeling and statistical price risk." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/17896.

Full text
Abstract:
Thesis (M.B.A.), Massachusetts Institute of Technology, Sloan School of Management, 2004. Includes bibliographical references (leaves 50-51). This research thesis explores the market dimensions of credit derivatives, including the prevalent product structures, leading participants, market applications, and the issues confronting this relatively new product. We find the market continues to experience significant growth, particularly in single-name default swaps. This growth is fueled in part by increased participation of hedge funds and applications beyond risk management as an acceptable trading instrument. As this market continues to grow, it must address the need for specialized technology infrastructure to support continued growth and consistency in documentation to ensure confidence. We then set out to explore the relationship between CDS spreads and the equity markets. We find a strong correlation with implied option volatility. While this relationship does not suggest causation, the magnitude of these relationships should assist market participants in developing effective trading and portfolio management strategies. We also explore the volatility of default spreads and find that there is wide disparity in volatility among common credit ratings. This leads to a suggestion that market participants may be able to reduce spread volatility and earn enhanced risk-adjusted yields by constructing credit portfolios based on spread-widening risk rather than default risk. Finally, since the focus of this thesis report was market application, we take the analyses and develop a regression model that serves as a quick and easy-to-use reference tool for credit derivative market participants. The data sample for this research paper spans 97 issuers across 19 industries and 10 credit rating levels, including non-investment grade. The sample period covers daily observations between September 20, 2002 and December 31, 2003. By Imran A. Kureshy, M.B.A.
3

Apanasovich, Tatiyana Vladimirovna. "Testing for spatial correlation and semiparametric spatial modeling of binary outcomes with application to aberrant crypt foci in colon carcinogenesis experiments." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/2674.

Full text
Abstract:
In an experiment to understand colon carcinogenesis, all animals were exposed to a carcinogen while half the animals were also exposed to radiation. Spatially, we measured the existence of aberrant crypt foci (ACF), namely morphologically changed colonic crypts that are known to be precursors of colon cancer development. The biological question of interest is whether the locations of these ACFs are spatially correlated: if so, this indicates that damage to the colon due to carcinogens and radiation is localized. Statistically, the data take the form of binary outcomes (corresponding to the existence of an ACF) on a regular grid. We develop score-type methods based upon the Matérn and conditional autoregression (CAR) correlation models to test for the spatial correlation in such data, while allowing for nonstationarity. Because of a technical peculiarity of the score-type test, we also develop robust versions of the method. The methods are compared to a generalization of Moran's test for continuous outcomes, and are shown via simulation to have the potential for increased power. When applied to our data, the methods indicate the existence of spatial correlation, and hence indicate localization of damage. Assuming that there are correlations in the locations of the ACF, the questions are how great are these correlations, and whether the correlation structures differ when an animal is exposed to radiation. To understand the extent of the correlation, we cast the problem as a spatial binary regression, where binary responses arise from an underlying Gaussian latent process. We model these marginal probabilities of ACF semiparametrically, using fixed-knot penalized regression splines and single-index models. We fit the models using pairwise pseudolikelihood methods. Assuming that the underlying latent process is strongly mixing, known to be the case for many Gaussian processes, we prove asymptotic normality of the methods.
The penalized regression splines have penalty parameters that must converge to zero asymptotically: we derive rates for these parameters that do and do not lead to an asymptotic bias, and we derive the optimal rate of convergence for them. Finally, we apply the methods to the data from our experiment.
4

Bajracharya, Dinesh. "Econometric Modeling vs Artificial Neural Networks : A Sales Forecasting Comparison." Thesis, Högskolan i Borås, Institutionen Handels- och IT-högskolan, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:hb:diva-20400.

Full text
Abstract:
Econometric and predictive modeling techniques are two popular forecasting techniques. Both of these techniques have their own advantages and disadvantages. In this thesis, some econometric models are considered and compared to predictive models using sales data for five products from ICA, a Swedish retail wholesaler. The econometric models considered are the regression model, exponential smoothing, and the ARIMA model. The predictive models considered are the artificial neural network (ANN) and an ensemble of neural networks. The evaluation metrics used for the comparison are MAPE, WMAPE, MAE, RMSE, and linear correlation. The result of this thesis shows that the artificial neural network is more accurate in forecasting product sales, but it does not differ much from linear regression in terms of accuracy. Therefore the linear regression model, which has the advantage of being comprehensible, can be used as an alternative to the artificial neural network. The results also show that the use of several metrics contributes to evaluating models for forecasting sales. Program: Master's Programme in Informatics (Magisterutbildning i informatik).
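The evaluation metrics this thesis lists (MAPE, WMAPE, MAE, RMSE) have one-line definitions; a plain-Python sketch with made-up actual/forecast sales values:

```python
def mae(actual, pred):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean squared error."""
    return (sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)) ** 0.5

def mape(actual, pred):
    """Mean absolute percentage error; undefined if any actual value is 0."""
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def wmape(actual, pred):
    """Weighted MAPE: total absolute error over total actual volume."""
    return 100 * sum(abs(a - p) for a, p in zip(actual, pred)) / sum(abs(a) for a in actual)

# Made-up weekly sales (actual) vs. a model's forecasts:
actual = [100.0, 200.0, 50.0, 150.0]
pred = [110.0, 190.0, 60.0, 140.0]
```

On these numbers MAE and RMSE are both 10, while WMAPE (8%) is lower than MAPE (about 10.4%) because WMAPE down-weights the large percentage error on the low-volume week, which is one reason to report several metrics side by side.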
5

Bonnaire, Fils Prony. "Modeling Travel Time and Reliability on Urban Arterials for Recurrent Conditions." Scholar Commons, 2012. http://scholarcommons.usf.edu/etd/3984.

Full text
Abstract:
Travel time reliability is defined as the consistency or dependability of travel times during a specified period under stated conditions, and it can be used for evaluating the performance of traffic networks based on the LOS (Level of Service) of the HCM (Highway Capacity Manual). Travel time reliability is also one of the best understood measures for road users to perceive current traffic conditions; it helps them make smart decisions on route choices and hence avoid unnecessary delays (Liu & Ma, 2009). Therefore, travel time reliability on urban arterials has become a major concern for daily commuters, business owners, urban transportation planners, traffic engineers, and MPO (Metropolitan Planning Organization) members, as congestion has grown substantially over the past thirty (30) years in urban areas of every size. Many studies have been conducted in the past on travel time reliability without a full analysis or explanation of the fundamental traffic and geometric components of the corridors. However, a generalized model which captures the different factors that influence travel time reliability, such as posted speed, access density, arterial length, traffic conditions, signalized intersection spacing, roadway and intersection geometrics, and signal control settings, is still lacking. Specifically, these factors need to be weighted according to their impacts. Using a linear regression model, this dissertation has identified 10 factors that influence travel time reliability on urban arterials. Reliability is measured in terms of a travel time threshold, which represents the addition of extra time (buffer or cushion time) to the average travel time when most travelers are planning trips to ensure on-time arrival. "Reliable" segments are those on which the travel time threshold is equal to or lower than the sum of the buffer time and the average travel time.
After validation, many scenarios are developed to evaluate the influencing factors and determine appropriate travel time reliability. The linear regression model will help (1) evaluate strategies and tactics to satisfy the travel time reliability requirements of users of the roadway network, those engaged in person transport in urban areas; (2) monitor the performance of the road network; (3) evaluate future options; and (4) provide guidance on transportation planning, roadway design, traffic design, and traffic operations features.
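The reliability rule stated in the abstract (a segment is "reliable" when the travel time threshold does not exceed the average travel time plus the buffer time) can be sketched numerically. The times and threshold below are hypothetical, and using the 95th percentile to define buffer time is a common but assumed convention, not necessarily the dissertation's:

```python
def percentile(values, p):
    """Linearly interpolated p-th percentile (0 <= p <= 100)."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def buffer_time(times, p=95):
    """Extra time beyond the average needed for on-time arrival p% of the time."""
    avg = sum(times) / len(times)
    return percentile(times, p) - avg

# Hypothetical observed travel times (minutes) on one arterial segment:
times = [12, 13, 12, 14, 15, 13, 12, 20, 13, 14]
avg = sum(times) / len(times)
threshold = 16.0  # assumed planning threshold, minutes
reliable = threshold <= avg + buffer_time(times)
```

One occasional 20-minute run inflates the buffer well above the 13.8-minute average, illustrating why reliability measures react to the tail of the travel-time distribution rather than its mean.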
6

Anikėnienė, Asta. "Research and modeling of the recent vertical movements of the Earth's Crust on the basis of geodetic measurements (samples on Lithuanian territory)." Doctoral thesis, Lithuanian Academic Libraries Network (LABT), 2009. http://vddb.library.lt/obj/LT-eLABa-0001:E.02~2009~D_20090309_141501-24272.

Full text
Abstract:
The thesis deals with the studies on the velocities of the recent vertical movements of the Earth's Crust by applying correlation, regression, and multi-criteria analysis of geo-parameters of the territory. The objective of the research involves the regularities of the recent vertical movements of the Earth's Crust, their relationship with the geo-parameters of the territory, models for forecasting the movements, and a methodology for compiling maps of the vertical movements of the Earth's Crust. The experimental subject matter is the territory of Lithuania. The major task of the thesis is to work out a method for the estimation and modelling of the recent vertical movements of the Earth's Crust measured by geodetic methods, and to implement the suggested method for compiling the map of the recent vertical movements of the Earth's Crust within the territory of Lithuania. In order to achieve the determined target, the following tasks were solved: 1) the values of the measured recent vertical movements of the Earth's Crust were determined from the data of repeated levelling; 2) the regularities of the change of the recent vertical movements of the Earth's Crust were examined; 3) the relationship between the measured recent vertical movements of the Earth's Crust and the geo-parameters of the territory was investigated and determined; 4) the possibilities of applying regression models for forecasting the recent vertical movements of the Earth's Crust were analysed and recommendations for applying them in compiling maps of vertical Earth's Crust movements were prepared; 5) the regression forecasting models of the recent vertical movements of the Earth's Crust were evaluated by applying a multi-criteria analysis methodology; 6) applying the proposed methodology, a map of the recent vertical movements of the Earth's Crust of the territory of Lithuania was compiled... [to full text]
APA, Harvard, Vancouver, ISO, and other styles
7

González, Rojas Victor Manuel. "Análisis conjunto de múltiples tablas de datos mixtos mediante PLS." Doctoral thesis, Universitat Politècnica de Catalunya, 2014. http://hdl.handle.net/10803/284659.

Full text
Abstract:
The fundamental content of this thesis corresponds to the development of the GNM-NIPALS, GNM-PLS2 and GNM-RGCCA methods, used to quantify qualitative variables starting from the first k components given by the appropriate methods in the analysis of J matrices of mixed data. These methods, denominated GNM-PLS (General Non Metric Partial Least Squares), are an extension of the NM-PLS methods, which take only the first principal component in the quantification function. The transformation of the qualitative variables is done through optimization processes, usually maximizing covariance or correlation functions, taking advantage of the flexibility of the PLS algorithms and preserving the properties of group membership and order where they exist; the metric variables likewise keep their original state, except for standardization. GNM-NIPALS was created to treat a single (J = 1) mixed data matrix through quantification of the qualitative variables via PCA-type reconstruction, starting from an aggregated function of k components. GNM-PLS2 relates two (J = 2) mixed data sets Y~X through PLS regression, quantifying the qualitative variables of one space with the aggregated function of the first H PLS components of the other space, obtained through cross-validation under PLS2 regression. When the endogenous matrix Y contains only one response variable, the method is denominated GNM-PLS1. Finally, in order to analyze more than two blocks (J > 2) of mixed data Y~X1+...+XJ through their latent variables (LV), GNM-RGCCA was created, based on the RGCCA (Regularized Generalized Canonical Correlation Analysis) method; it modifies the PLS-PM algorithm by implementing the new mode A and specifies the covariance or correlation maximization functions associated with the process. The quantification of the qualitative variables in each block Xj is done through the inner function Zj = Σk ejk Yk, which has dimension J owing to the aggregation of the outer estimates Yj. Both Zj and Yj estimate the component ξj associated with the j-th block.
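The NM-/GNM-PLS machinery described above builds on the NIPALS iteration. As a rough, generic illustration of that building block (a minimal first-component NIPALS on synthetic data, not the authors' quantification algorithm):

```python
import numpy as np

def nipals_first_component(X, n_iter=500, tol=1e-12):
    """NIPALS power iteration for the first principal component of a
    column-centered matrix: scores t and unit loadings p with X ~ t p'."""
    X = X - X.mean(axis=0)
    t = X[:, 0].copy()                    # initialize scores with a column
    for _ in range(n_iter):
        p = X.T @ t / (t @ t)             # loadings: regress columns on t
        p /= np.linalg.norm(p)            # normalize loadings
        t_new = X @ p                     # updated scores
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return t, p

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
t, p = nipals_first_component(X)   # p matches the first right singular
                                   # vector of centered X up to sign
```

The GNM methods extend this idea by alternating such component extraction with an optimal scaling step for the qualitative columns; that step is not shown here.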
APA, Harvard, Vancouver, ISO, and other styles
8

Garay, Aldo William Medina. "Modelos de regressão para dados censurados sob distribuições simétricas." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/45/45133/tde-15062014-000915/.

Full text
Abstract:
This work presents classical and Bayesian approaches to linear models with censored observations, a new area of research with great potential for applications. Here, we replace the conventional use of the normal distribution for the errors with a more flexible family of distributions, which deals more appropriately with censored observations in the presence of outliers. This family is obtained through an easily constructed mechanism and has as special cases the Student t, Pearson type VII, slash, contaminated normal and, obviously, the normal distributions. For the case of correlated and censored responses, we propose a robust linear regression model based on Student's t distribution and develop an EM-type algorithm that depends on the first two moments of the truncated Student's t distribution.
APA, Harvard, Vancouver, ISO, and other styles
9

Alexander, Ödlund Lindholm. "The Salience of Issues in Parliamentary Debates : Its Development and Relation to the Support of the Sweden Democrats." Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-167610.

Full text
Abstract:
The aim of this study was to analyze the salience of issue dimensions raised by the established parties in Swedish parliamentary debates during the rise of the Sweden Democrats (SD). Structural topic modeling was used to construct a measure of issue salience, examining the full body of speeches in the Swedish parliament between September 2006 and December 2019. Trend analysis revealed a realignment from a focus on socio-economic to socio-cultural issues in Swedish politics. Cross-correlation analyses had conflicting results, indicating a weak positive relationship between the salience of issues and the support of SD – but low predictive ability; they also showed that changes in the support of SD led (preceded) changes in the salience of issues in parliament. The ramification of socio-cultural issues being the most salient is that so-called radical right-wing populist parties (RRPs), or neo-nationalist parties, have a greater opportunity to gain support. It can make voters more inclined to base their voting decisions on socio-cultural issues, which favors parties that fight for and are trustworthy on those issues – giving them more valence in the eyes of the voters.
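The cross-correlation analysis reported above pairs one series against lagged copies of the other to see which leads. A minimal sketch on synthetic data (assuming a simple Pearson-correlation-per-lag definition, not the study's exact estimator):

```python
import numpy as np

def cross_correlation(x, y, max_lag):
    """Pearson correlation between x[t] and y[t + lag] for each lag;
    a positive peak lag means changes in x precede changes in y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ccf = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        ccf[lag] = float(np.corrcoef(a, b)[0, 1])
    return ccf

# Synthetic check: y is x delayed by 2 steps, so the peak is at lag = +2.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 2)
ccf = cross_correlation(x, y, max_lag=5)
best_lag = max(ccf, key=ccf.get)
```

A lead-lag structure like the one the study found (SD support preceding salience shifts) would show up as the peak correlation sitting at a nonzero lag.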
APA, Harvard, Vancouver, ISO, and other styles
10

Ledolter, Johannes. "Multi-Unit Longitudinal Models with Random Coefficients and Patterned Correlation Structure: Modelling Issues." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/432/1/document.pdf.

Full text
Abstract:
The class of models studied in this paper, multi-unit longitudinal models, combines both the cross-sectional and the longitudinal aspects of observations. Many empirical investigations involve the analysis of data structures that are both cross-sectional (observations are taken on several units at a specific time period or at a specific location) and longitudinal (observations on the same unit are taken over time or space). Multi-unit longitudinal data structures arise in economics and business, where panels of subjects are studied over time; in biostatistics, where groups of patients on different treatments are observed over time; and in situations where data are taken over time and space. Modelling issues in multi-unit longitudinal models with random coefficients and patterned correlation structure are illustrated in the context of two data sets. The first data set deals with short time series data on annual death rates and alcohol consumption for twenty-five European countries. The second data set deals with glaciologic time series data on snow temperature at 14 different locations within a small glacier in the Austrian Alps. A practical model building approach, consisting of model specification, estimation, and diagnostic checking, is outlined. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Correlation-regression modeling"

1

Whittaker, Tiffany A., and Randall E. Schumacker. "Correlation and Regression Methods." In A Beginner's Guide to Structural Equation Modeling, 5th ed. Routledge, 2022. http://dx.doi.org/10.4324/9781003044017-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rebelo, Efigénio, Patrícia Oom do Valle, and Rui Nunes. "Testing Serial Correlation Using the Gauss–Newton Regression." In New Advances in Statistical Modeling and Applications. Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-05323-3_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pollock, D. A., R. F. Gunst, K. Kim, and W. R. Schucany. "A Comparison of Least Squares Linear Regression and Measurement Error Modeling of Warm / Cold Multipole Correlation in SSC Prototype Dipole Magnets." In Supercollider 5. Springer US, 1994. http://dx.doi.org/10.1007/978-1-4615-2439-7_168.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Casaburo, Alessandro, Giuseppe Petrone, Elena Ciappi, Francesco Franco, and Sergio De Rosa. "Data-Driven Symbolic Regression for the Modelling of Correlation Functions of Wall Pressure Fluctuations Under Turbulent Boundary Layers." In Flinovia—Flow Induced Noise and Vibration Issues and Aspects—IV. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-73935-4_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

"Bivariate Linear Regression and Correlation." In Regression Modeling. Chapman and Hall/CRC, 2009. http://dx.doi.org/10.1201/9781420091984-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

"Bivariate Linear Regression and Correlation." In Regression Modeling. Chapman and Hall/CRC, 2009. http://dx.doi.org/10.1201/9781420091984-c2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

"Correlation Analysis in Multiple Regression." In Handbook of Regression and Modeling. Chapman and Hall/CRC, 2006. http://dx.doi.org/10.1201/9781420017380-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

"Correlation Analysis in Multiple Regression." In Handbook of Regression and Modeling. Chapman and Hall/CRC, 2006. http://dx.doi.org/10.1201/9781420017380.ch5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

"Special Problems in Simple Linear Regression: Serial Correlation and Curve Fitting." In Handbook of Regression and Modeling. Chapman and Hall/CRC, 2006. http://dx.doi.org/10.1201/9781420017380-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

"Correlation Analysis and Linear Regression : Assessing the Covariability of Two Quantitative Properties." In Understanding Statistical Analysis and Modeling. SAGE Publications, Inc, 2018. http://dx.doi.org/10.4135/9781071878866.n14.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Correlation-regression modeling"

1

Woollam, Richard C., Ethan Proudlove, Michael Jones, Harvey M. Thompson, and Richard Barker. "Symbolic Regression Based Surrogate Modelling of a High-Fidelity Multiphysics CO2 Corrosion Model." In CONFERENCE 2025. AMPP, 2025. https://doi.org/10.5006/c2025-00433.

Full text
Abstract:
Abstract A high-fidelity mechanistic carbon steel/carbon dioxide (CO2) corrosion prediction model developed at the University of Leeds has been implemented in the multi-physics (MP) software package COMSOL. The model integrates the bulk solution equilibria, interfacial mass transport and solution reactions, and surface anodic and cathodic electrochemical processes. In this work, the high-fidelity model is randomly sampled in the five input dimensions (bulk pH, temperature, CO2 partial pressure, piping velocity and diameter) using a Beta distribution, B(α, β), with the parameters α and β set to ½ in order to oversample the extremes of the input range. The high-fidelity MP model was sampled approximately 225,000 times and the data refined using a Deep Neural Network (DNN) as described by Proudlove et al. The output from the high-fidelity model went through three rounds of refinement via the DNN to produce the final dataset. After data refinement and 5-fold cross-validation, the data were further analyzed using a symbolic regression (SR) genetic algorithm. Each generation from the SR was plotted on a Pareto optimization plot, displaying the relationship between model complexity and the fit quality metric, (1 − R²). From the Pareto front, showing the trade-off between maximum quality of fit and minimal model complexity, the 'best' mathematical expression representing the data was chosen. Additionally, the mathematical expressions generated were reviewed subjectively for 'physical reasonableness' and for 'ease of implementation in a typical spreadsheet'. The outputs from the three models were compared in a 3-dimensional correlation diagram, with excellent agreement between all models, R² > 0.99.
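The Pareto-front step, trading model complexity against (1 − R²), can be sketched generically (the candidate values below are invented, not the paper's):

```python
def pareto_front(models):
    """Keep (complexity, one_minus_r2) pairs that no other candidate
    beats (or ties) on both axes: the complexity/fit trade-off front."""
    front = []
    for c, e in models:
        dominated = any(c2 <= c and e2 <= e and (c2, e2) != (c, e)
                        for c2, e2 in models)
        if not dominated:
            front.append((c, e))
    return sorted(front)

# Invented (complexity, 1 - R^2) pairs for candidate expressions:
candidates = [(3, 0.20), (5, 0.08), (5, 0.15), (9, 0.05), (12, 0.05)]
front = pareto_front(candidates)
# (5, 0.15) is dominated by (5, 0.08); (12, 0.05) by (9, 0.05)
```

The 'best' expression is then picked from the front by the subjective criteria the authors describe (physical reasonableness, ease of spreadsheet implementation).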
APA, Harvard, Vancouver, ISO, and other styles
2

Ton Alias, Mohd Aswadi, Nurul Asni Mohamed, M. Iskandar Bakeri, et al. "Application of Artificial Intelligence in Corrosion Management." In CONFERENCE 2024. AMPP, 2024. https://doi.org/10.5006/c2024-20711.

Full text
Abstract:
Abstract A corrosion management system encompassing the various stages of an asset's life, from design and construction through operation and decommissioning, remains the key focus in ensuring the integrity and safe operation of the asset. A corrosion study is conducted during the initial design phase, followed by multiple reviews during the operational stage as part of the overall corrosion management process. These studies aim to identify all damage mechanisms that can be present, both non-age-related and age-related. Currently in the oil and gas industry, corrosion rate predictions for age-related mechanisms are generated via mathematical equations or correlations derived from laboratory testing and analyses, which may not be representative of actual operating conditions. These predictions impose limitations with regard to utilizing inputs produced from big data. Applying artificial intelligence to predict corrosion rates offers advantages: real, high-frequency data streams from IoT sensors are analyzed via machine learning algorithms, providing predictions based on the historical experience of a specific asset. Data preprocessing is an important step in machine learning that involves transforming raw data from various parameters so that issues owing to incompleteness, inconsistency, and/or lack of appropriate representation of trends are resolved, arriving at a data set in an understandable format. Feature engineering is then performed to analyze parameter correlations and obtain the most suitable combination of features and data characteristics. For corrosion rate prediction, supervised learning algorithms are applicable, such as logistic regression, naive Bayes, support vector machines, artificial neural networks, and random forests. The final step of the machine learning modelling is model validation: the predicted corrosion rates are verified against actual thickness measurements at site.
To date, we have covered 30 process units, including different trains, and 120 corrosion groups selected from a total of about 3,800 corrosion groups for the whole facility. 700 customized machine learning models were developed. Success is defined as the highest accuracy (>80%) with an optimum model run time. Recent validation has shown the ability to predict an anomaly in a future trend which coincides with an actual increase in corrosion rate.
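As a generic illustration of the preprocessing and validation steps described above (standardizing raw inputs, then scoring predictions against site thickness measurements within a tolerance; the tolerance and all numbers below are assumptions, not the paper's):

```python
import numpy as np

def standardize(X):
    """Zero-mean, unit-variance columns; constant columns are left at zero."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd == 0, 1.0, sd)

def within_tolerance_accuracy(predicted, measured, tol=0.2):
    """Share of predicted corrosion rates within +/- tol relative error
    of the measured rates (a simple stand-in for model validation)."""
    predicted = np.asarray(predicted, float)
    measured = np.asarray(measured, float)
    rel_err = np.abs(predicted - measured) / np.abs(measured)
    return float(np.mean(rel_err <= tol))

pred = [0.10, 0.22, 0.31, 0.50]   # hypothetical predicted rates, mm/y
meas = [0.11, 0.20, 0.30, 0.90]   # hypothetical measured rates, mm/y
acc = within_tolerance_accuracy(pred, meas)   # 3 of 4 within 20%
```

An accuracy threshold like the paper's >80% success criterion would then be applied to `acc` per model.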
APA, Harvard, Vancouver, ISO, and other styles
3

Chertov, M., and V. Antsiferova. "SALES FORECAST MODELING BASED ON CORRELATION AND REGRESSION METHODS." In Modern aspects of modeling systems and processes. FSBE Institution of Higher Education Voronezh State University of Forestry and Technologies named after G.F. Morozov, 2021. http://dx.doi.org/10.34220/mamsp_185-191.

Full text
Abstract:
The presented article explores methods of correlation and regression analysis, develops an algorithm for obtaining the initial data of interest, and presents a calculation algorithm for the law of change in sales volumes. Based on the presented calculations, the errors of the correlation coefficients and the results of the regression analysis were computed.
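The correlation-and-regression forecasting scheme the authors describe can be sketched with ordinary least squares on toy monthly sales data (all numbers invented):

```python
import numpy as np

def fit_trend(x, y):
    """Least-squares slope and intercept plus Pearson r for the trend."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    r = float(np.corrcoef(x, y)[0, 1])
    return slope, intercept, r

months = [1, 2, 3, 4, 5, 6]
sales = [100, 112, 119, 131, 140, 152]        # units sold per month
slope, intercept, r = fit_trend(months, sales)
forecast_month7 = slope * 7 + intercept       # one-step-ahead forecast
```

The correlation coefficient r here plays the role the article assigns it: a check that the linear law of change is strong enough to justify extrapolating the trend.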
APA, Harvard, Vancouver, ISO, and other styles
4

Petrov, B., and V. Antsiferova. "RESEARCH AND DEVELOPMENT OF SALES FORECAST MODELS BASED ON CORRELATION AND REGRESSION." In CHALLENGING ISSUES IN SYSTEMS MODELING AND PROCESSES. FSBE Institution of Higher Education Voronezh State University of Forestry and Technologies named after G.F. Morozov, 2025. https://doi.org/10.58168/cismp2024_122-125.

Full text
Abstract:
The paper discusses the issues of forecasting financial activity in the context of the strategic development of the organization. It also presents in detail one of the economic and mathematical algorithms used to forecast the financial results of the organization.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Xinyao, Jianbo Yi, Meng Hu, and Binghe Feng. "Research on Parameter Correlation and Regression Modeling of Shear Strength Indicators." In ADMIT 2023: 2023 2nd International Conference on Algorithms, Data Mining, and Information Technology. ACM, 2023. http://dx.doi.org/10.1145/3625403.3625420.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Díaz-Labrador, Anabel, Álvaro Michelena, Héctor Quintián, and José-Luis Calvo-Rolle. "Intelligent Modeling of a Solar Photovoltaic System Located in a Bioclimatic Home." In VII Congreso XoveTIC: impulsando el talento científico. Servizo de Publicacións. Universidade da Coruña, 2024. https://doi.org/10.17979/spudc.9788497498913.21.

Full text
Abstract:
The use of renewable energy is expanding globally, driven by the need to reduce greenhouse gas emissions and mitigate climate change. This study focuses on modelling the electrical power generated by photovoltaic panels in a bioclimatic home, analyzing the performance of linear regression and multilayer perceptron models, while considering atmospheric factors such as solar radiation and ambient temperature. The process includes a correlation analysis to select the most relevant variables, followed by dataset preprocessing techniques. Finally, performance metrics of the models are evaluated, which indicate a strong correlation between solar radiation and the power generated, resulting in robust regression models.
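The correlation-based variable selection step described in the abstract can be sketched as follows (synthetic radiation and temperature data; the 0.5 threshold is an assumption, not the authors' value):

```python
import numpy as np

def select_features(X, y, names, threshold=0.5):
    """Keep the inputs whose absolute Pearson correlation with the
    target exceeds the threshold."""
    kept = []
    for j, name in enumerate(names):
        r = np.corrcoef(X[:, j], y)[0, 1]
        if abs(r) >= threshold:
            kept.append(name)
    return kept

rng = np.random.default_rng(1)
n = 300
radiation = rng.uniform(0.0, 1000.0, n)     # W/m^2
temperature = rng.uniform(5.0, 35.0, n)     # deg C, independent of power here
power = 0.15 * radiation + rng.normal(0.0, 20.0, n)   # power tracks radiation
X = np.column_stack([radiation, temperature])
kept = select_features(X, power, ["radiation", "temperature"])
```

With this synthetic setup only radiation survives the cut, mirroring the strong radiation-power correlation the study reports.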
APA, Harvard, Vancouver, ISO, and other styles
7

Yoo, Yeuntae, Seokheon Cho, Sung-Geun Song, and Ramash R. Rao. "Forecast Error Modeling for Microgrid Operation Considering Correlation among Distributed Generators Using Gaussian Process Regression." In 2022 IEEE Conference on Technologies for Sustainability (SusTech). IEEE, 2022. http://dx.doi.org/10.1109/sustech53338.2022.9794262.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Fachao, and Liqun Chen. "Research on the Correlation Between China's Meat Non-staple Food Prices Based on Univariate Regression." In Proceedings of the International Conference on Information Economy, Data Modeling and Cloud Computing, ICIDC 2022, 17-19 June 2022, Qingdao, China. EAI, 2022. http://dx.doi.org/10.4108/eai.17-6-2022.2322723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Walker, Stephen S. "Modeling and estimation of OH− absorption in single-mode optical fibers." In OSA Annual Meeting. Optica Publishing Group, 1985. http://dx.doi.org/10.1364/oam.1985.thc7.

Full text
Abstract:
Understanding the structure of the OH− absorption peaks at 1.25 and 1.39 µm is necessary for effective long-wavelength lightguide system design and production. This paper reports the use of a nonlinear regression algorithm, based on the Marquardt method, to estimate the amplitudes, center wavelengths, and halfwidths of a twelve-parameter four-component Gaussian OH− peak model at 1.39 µm and a six-parameter two-component Gaussian OH− peak model at 1.25 µm. These models, combined with the Marquardt regression, provide accurate estimation of OH− absorption, and the analysis is much faster than previously reported techniques involving measurement of infrared and Raman spectra. Numerical values for the Gaussian center wavelengths and halfwidths are tabulated, and correlation analysis and linear regression are used to determine the interrelationship of the six component amplitudes. Simplified OH− models incorporating the estimated parameter values are proposed. The OH− peak equations are expressed in terms of the measured maximum loss of the 1.39-µm peak. A comparison of closeness of fit is made between the twelve- and six-parameter models, the simplified models, and the measured maximum-loss model. The simplified models are shown to provide an accurate representation of the OH− absorption profiles for even a limited set of measurement data.
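A toy version of the Marquardt-method peak estimation, fitting a single Gaussian component on synthetic data rather than the paper's four- and two-component models, might look like:

```python
import numpy as np

def gaussian(x, A, c, w):
    """One Gaussian component: amplitude A, center c, width w."""
    return A * np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def lm_fit_gaussian(x, y, p0, n_iter=100, lam=1e-3):
    """Levenberg-Marquardt estimation of (A, c, w) from peak data."""
    A, c, w = p0
    for _ in range(n_iter):
        f = gaussian(x, A, c, w)
        r = y - f
        J = np.column_stack([
            f / A,                       # df/dA
            f * (x - c) / w ** 2,        # df/dc
            f * (x - c) ** 2 / w ** 3,   # df/dw
        ])
        H = J.T @ J + lam * np.eye(3)    # damped normal equations
        dA, dc, dw = np.linalg.solve(H, J.T @ r)
        trial = (A + dA, c + dc, w + dw)
        if np.sum((y - gaussian(x, *trial)) ** 2) < np.sum(r ** 2):
            A, c, w = trial              # accept step, relax damping
            lam *= 0.5
        else:
            lam *= 2.0                   # reject step, increase damping
    return A, c, w

x = np.linspace(1.30, 1.48, 200)          # wavelength, micrometers
y = gaussian(x, 2.0, 1.39, 0.01)          # synthetic 1.39-um peak
A, c, w = lm_fit_gaussian(x, y, p0=(1.5, 1.385, 0.015))
```

The paper's multi-component fits extend this to sums of Gaussians with twelve or six parameters; the damping update shown here is one common variant of the Marquardt scheme, not necessarily the author's.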
APA, Harvard, Vancouver, ISO, and other styles
10

Statsenko, Ekaterina. "ANALYSIS OF THE GROWING PROCESS OF SOY BEAN." In XIV International Scientific Conference "System Analysis in Medicine". Far Eastern Scientific Center of Physiology and Pathology of Respiration, 2020. http://dx.doi.org/10.12737/conferencearticle_5fe01d9d640096.59446052.

Full text
Abstract:
Prediction of technological processes in various sectors of the food industry using mathematical modeling is becoming increasingly important. The paper describes the effect of various factors on grain weight after germination, using correlation-regression analysis. Based on the established relationship, this modeling method makes it possible to predict the weight of the grain after germination under various conditions.
APA, Harvard, Vancouver, ISO, and other styles
