
Dissertations / Theses on the topic 'Pearson correlation coefficient (r)'

Consult the top 24 dissertations / theses for your research on the topic 'Pearson correlation coefficient (r).'


1

Lima, Leonardo da Silva e. "Centralidades em redes espaciais urbanas e localização de atividades econômicas." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/122524.

Full text
Abstract:
In recent years, the study of properties of urban spatial networks known as centralities has frequently been used to describe socio-economic phenomena associated with the form of the city. Authors have suggested that centralities are able to describe the urban spatial structure (KRAFTA, 1994; ANAS et al., 1998) and that, through the study of centralities, it is therefore possible to recognize the spaces that concentrate the most flows, those with the highest land-rent values, the safest ones, among other aspects that appear to be directly related to the urban phenomenon. The hypothesis of this research is that centralities in urban spatial networks play a fundamental role in the formation of the urban spatial structure and in the way the city's land uses are organized. This research therefore investigates which centrality model, processed over different ways of describing urban space as a network, presents results most strongly correlated with the spatial distribution of urban economic activities. Five centrality models are evaluated, applied to different urban spatial networks, in order to verify which of them shows the highest correlation with the occurrence of economic activities. To carry out this exercise, the models are applied to three types of urban spatial networks (axial, nodal and street-segment networks), derived from the spatial configuration of three Brazilian cities, processed geometrically and topologically. The centrality models applied are known as Reach centrality (SEVTSUK, 2010), Straightness centrality (PORTA et al., 2009, 2011), Betweenness centrality (FREEMAN, 1977), Planar Betweenness centrality (KRAFTA, 1994) and Closeness centrality (INGRAM, 1971).
The Pearson correlation coefficient (r) is used as a tool to describe which centrality model, associated with which type of spatial representation and which mode of distance processing, best correlates with the distribution of urban economic activities in these cities. The evidence found in this research suggests that the Reach, Straightness and Planar Betweenness centrality models stand out in comparison with the other models processed. Moreover, the most relevant Pearson correlation (r) values were obtained when the centrality models were processed considering the geometry of the network formed by street segments, indicating that a more disaggregated type of spatial representation, processed geometrically, is better able to produce the best correlation values for understanding the urban phenomenon studied.
In recent years, the study of urban spatial networks has often been used to describe urban phenomena associated with the shape of the city. Research has suggested that centralities are able to describe the urban spatial structure (KRAFTA, 1994; ANAS et al., 1998), making it possible to recognize the spaces that concentrate the most flows, those with the highest land values, the safest ones, among other aspects related to the urban phenomenon. The hypothesis of this research is that centrality in urban spatial networks plays a key role in the urban spatial structure and in the way land uses are organized. Thus, some measures of centrality in urban spatial networks should be more closely associated with the economic activities occurring in the city. The research evaluates five measures of centrality applied to three types of urban spatial networks (axial map, node map and segment map). We use five models of centrality in urban spatial networks, known as reach (SEVTSUK, MEKONNEN, 2012), straightness (PORTA et al., 2006b), betweenness (FREEMAN, 1977), planar betweenness (KRAFTA, 1994) and closeness (INGRAM, 1971), in order to determine which is most highly correlated with the occurrence of economic activities. The relationships between these measures of centrality and the locations of economic activities are examined in three Brazilian cities, using the Pearson correlation coefficient (r). The highest correlation between the centrality results and the location of economic activities suggests which centrality measure, way of describing urban space as a network, and distance-processing method (Euclidean or topological) is most associated with the occurrence of these activities in the city. The results indicate that Reach, Straightness and Planar Betweenness are the most outstanding centrality models.
In addition, the most relevant Pearson correlation coefficients (r) were obtained when the centrality models were processed considering Euclidean paths in the street-segment network, suggesting that this type of spatial network and distance-processing method generates centralities with more significant correlation values for the urban phenomenon studied.
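The correlation exercise this abstract describes can be sketched in a few lines: compute a centrality measure on a network and correlate it with per-segment activity counts. The following is a minimal sketch, not the thesis method; the toy street-segment graph, the closeness measure chosen, and the shop counts are all invented for illustration.

```python
from collections import deque

def closeness(adj, v):
    # Closeness centrality of node v: (n - 1) / sum of BFS (topological) distances
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return (len(dist) - 1) / sum(dist.values())

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical street-segment graph (nodes = segments, edges = adjacency)
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
centrality = [closeness(adj, v) for v in sorted(adj)]
activities = [1, 5, 3, 4, 1]   # hypothetical counts of economic activities per segment
r = pearson_r(centrality, activities)
```

A high r here would play the role the thesis assigns to it: ranking which centrality/representation pair tracks the activity distribution best.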
APA, Harvard, Vancouver, ISO, and other styles
2

Le, Trang Thi, Doan Dang Phan, Bao Dang Khoa Huynh, Van Tho Le, and Van Tu Nguyen. "Phytoplankton diversity and its relation to the physicochemical parameters in main water bodies of Vinh Long province, Vietnam." Technische Universität Dresden, 2019. https://tud.qucosa.de/id/qucosa%3A70829.

Full text
Abstract:
Phytoplankton samples were collected in 2016 during the dry and rainy seasons at nine sampling sites in Vinh Long province, Vietnam. Some basic environmental parameters such as temperature, pH, dissolved oxygen, nitrate and phosphate were measured, and a total of 209 phytoplankton species (six phyla, 96 genera) were identified. The phylum with the greatest number of species was Bacillariophyta (82 species), followed by Chlorophyta (61 species), Cyanophyta (39 species), Euglenophyta (21 species), Chrysophyta (three species) and Dinophyta (three species). Phytoplankton density ranged from 4,128 to 123,029 cells/liter. The dominant algae recorded in the study area include Microcystis aeruginosa, Merismopedia glauca, Oscillatoria perornata, Jaaginema sp., Planktothrix agardhii, Coscinodiscus subtilis and Melosira granulata. In particular, Microcystis aeruginosa was the density-dominant species at the largest number of sampling sites during the dry-season survey, and this species is classified in a group producing toxins harmful to the environment. Surface water quality, according to QCVN 08:2015/BTNMT, was classified as Column A1 for pH and nitrate, Column B1 for dissolved oxygen, and Column B2 for phosphate. Phytoplankton community structure and environmental factors changed substantially between the dry and rainy seasons. The Pearson correlation coefficient (r) was used for the correlation analysis. The results indicated that the number of phytoplankton species had a significantly positive correlation with pH, dissolved oxygen and nitrate in the rainy season. Phytoplankton abundance was uncorrelated with environmental factors in both seasons.
Phytoplankton samples were collected in 2016 (dry and rainy seasons) at nine sites in Vinh Long province, Vietnam. Some environmental parameters such as temperature, pH, dissolved oxygen, nitrate and phosphate were measured in the field. A total of 209 phytoplankton species were recorded (six phyla, 96 genera). The highest number of species belonged to the diatoms (82 species), followed by green algae (61 species), blue-green algae (39 species), euglenoids (21 species), golden algae (3 species) and dinoflagellates (3 species). Phytoplankton density ranged from 4,128 to 123,029 cells/liter. The dominant species recorded in the study area included Microcystis aeruginosa, Merismopedia glauca, Oscillatoria perornata, Jaaginema sp., Planktothrix agardhii, Coscinodiscus subtilis and Melosira granulata. Among them, Microcystis aeruginosa was dominant at the largest number of sampling sites during the dry-season survey, and this species is classified in the group producing toxins harmful to the environment. Surface water quality according to QCVN 08:2015/BTNMT was classified as Column A1 for pH and nitrate, Column B1 for dissolved oxygen, and Column B2 for phosphate. Phytoplankton community structure and environmental factors changed substantially between the rainy and dry seasons. The Pearson correlation coefficient (r) was used for the analysis. The results showed that the number of phytoplankton species had a statistically significant positive correlation with pH, dissolved oxygen and nitrate in the rainy season. Phytoplankton density was uncorrelated with environmental factors in both seasons.
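The significance claim behind a statement like "significantly positive correlation" is usually checked with Student's t on r with n - 2 degrees of freedom. A minimal sketch follows; the nine-site pH and species-richness values are invented for illustration, not the study's data.

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def t_statistic(r, n):
    # t = r * sqrt((n - 2) / (1 - r^2)), compared with the critical t for n - 2 d.f.
    return r * math.sqrt((n - 2) / (1 - r * r))

# Hypothetical rainy-season values for nine sampling sites
ph = [6.8, 7.0, 7.1, 7.3, 7.4, 7.6, 7.7, 7.9, 8.0]
species = [30, 34, 33, 38, 40, 41, 45, 44, 48]
r = pearson_r(ph, species)
t = t_statistic(r, len(ph))
# With 7 d.f. the two-tailed 5% critical value is about 2.365,
# so t above that indicates a statistically significant correlation.
significant = t > 2.365
```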
3

Kalaitzis, Angelos. "Bitcoin - Monero analysis: Pearson and Spearman correlation coefficients of cryptocurrencies." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-41402.

Full text
Abstract:
In this thesis, an analysis of Bitcoin and Monero prices and volatility is conducted with respect to the S&P 500 and the VIX index. Using Python, we computed correlation coefficients of nine cryptocurrencies with two different approaches, Pearson and Spearman, over the period July 2016 to July 2018. The Pearson correlation coefficient was also computed for each year separately (July 2016 to July 2017 and July 2017 to July 2018). It was concluded that in 2016 the correlation between the selected cryptocurrencies was very weak, almost none; in 2017 the correlation increased and became moderately positive; and in 2018 almost all of the cryptocurrencies were highly correlated. For example, from January until July of 2018, the Bitcoin-Monero correlation was 0.86 and the Bitcoin-Ethereum correlation was 0.82.
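The two coefficients the thesis computes differ in what they measure: Pearson tracks linear co-movement of the price levels, while Spearman tracks co-movement of their ranks. A minimal pure-Python sketch on hypothetical price series (the values below are invented, not the thesis data):

```python
def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ranks(xs):
    # Ranks 1..n (assumes no ties, which holds for the sample data below)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for position, i in enumerate(order):
        r[i] = position + 1.0
    return r

def spearman_rho(x, y):
    # Spearman's rho is Pearson's r computed on the ranks
    return pearson_r(ranks(x), ranks(y))

# Hypothetical daily closes for two coins
btc = [6500, 6700, 6400, 7000, 7300, 7100, 7600]
xmr = [130, 138, 126, 145, 150, 148, 160]
p = pearson_r(btc, xmr)
s = spearman_rho(btc, xmr)
```

With real data and ties, the rank function would need tie-averaged (midrank) handling.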
4

Bergamaschi, Denise Pimentel. "Correlação intraclasse de Pearson para pares repetidos: comparação entre dois estimadores." Universidade de São Paulo, 1999. http://www.teses.usp.br/teses/disponiveis/6/6132/tde-01102014-105050/.

Full text
Abstract:
Objective. To compare, theoretically and empirically, two estimators of Pearson's product-moment intraclass correlation coefficient for repeated pairs, pI. The first is the "natural" estimator, obtained through the Pearson product-moment correlation for members of the same class (rI); the second is obtained as a function of variance components (icc). Methods. Theoretical and empirical comparison of the parameters and estimators. The theoretical comparison involves two definitions of the intraclass correlation coefficient pI as a measure of reliability (*), for the case of two replicates, as well as a presentation of the analysis-of-variance technique and the definition and interpretation of the estimators rI and icc. The empirical comparison is carried out through a Monte Carlo simulation study generating pairs of values correlated according to the pairwise Pearson product-moment intraclass correlation coefficient. The pairs of values follow a bivariate Normal distribution, with sample size and intraclass correlation previously fixed at n = 15, 30 and 45 and pI = {0; 0.15; 0.30; 0.45; 0.60; 0.75; 0.90}. Results. Comparing the bias and the mean square error of the estimators, as well as the widths of the confidence intervals, the bias of icc was always smaller than the bias of rI; the same held for the mean square error. Conclusions. The icc is a better estimator, especially for small n (for example 15). For larger values of n (30 or more), the estimators produce results that are equal to the second decimal place.
Objective. This thesis presents and compares, theoretically and empirically, two estimators of the intraclass correlation coefficient pI, defined as Pearson's pairwise intraclass correlation coefficient. The first is the "natural" estimator, obtained by Pearson's product-moment correlation for members of one class (rI), while the second is obtained as a function of variance components (icc). Methods. Theoretical and empirical comparisons of the parameters and estimators are performed. The theoretical comparison involves two definitions of the intraclass correlation coefficient pI as a measure of reliability (*) for two repeated measurements in the same class, the presentation of the analysis-of-variance technique, and the definition and interpretation of the estimators rI and icc. The empirical comparison was carried out by means of a Monte Carlo simulation study of pairs of correlated values according to Pearson's pairwise correlation. The pairs of values follow a bivariate normal distribution, with correlation values and sample sizes previously fixed: n = 15, 30 and 45 and pI = {0; 0.15; 0.30; 0.45; 0.60; 0.75; 0.90}. Results. Bias and mean square error of the estimators were compared, as well as the widths of the confidence intervals. The comparison shows that the bias of icc is always smaller than that of rI; this also applies to the mean square error. Conclusions. The icc is a better estimator, especially for n less than or equal to 15. For larger sample sizes (n = 30 or more), the estimators produce results that are equal to the second decimal place. (*) Formula
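For the two-replicate case, both estimators can be written down directly. A minimal sketch with invented pair data: rI enters each pair in both orders and applies the ordinary Pearson formula, while icc is built from the one-way ANOVA mean squares (for k = 2 replicates, icc = (MSB - MSW) / (MSB + MSW)).

```python
def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def r_pairwise(pairs):
    # "Natural" estimator rI: Pearson r over each pair entered in both orders
    xs = [a for a, b in pairs] + [b for a, b in pairs]
    ys = [b for a, b in pairs] + [a for a, b in pairs]
    return pearson_r(xs, ys)

def icc_anova(pairs):
    # Variance-components estimator: (MSB - MSW) / (MSB + MSW) for k = 2 replicates
    n, k = len(pairs), 2
    grand = sum(a + b for a, b in pairs) / (n * k)
    msb = k * sum(((a + b) / 2 - grand) ** 2 for a, b in pairs) / (n - 1)
    msw = sum((a - b) ** 2 / 2 for a, b in pairs) / n  # within-pair SS over n(k-1) d.f.
    return (msb - msw) / (msb + msw)

# Hypothetical repeated measurements (two replicates per subject)
pairs = [(10, 11), (12, 12), (9, 10), (15, 14), (11, 12), (13, 13)]
ri = r_pairwise(pairs)
icc = icc_anova(pairs)
```

On this small sample the two estimates already differ in the second decimal place, which is the kind of gap the thesis quantifies by simulation.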
5

Truong, Thi Kim Tien. "Grandes déviations précises pour des statistiques de test." Thesis, Orléans, 2018. http://www.theses.fr/2018ORLE2057/document.

Full text
Abstract:
This thesis concerns the study of sharp large deviations for two test statistics: Pearson's empirical correlation coefficient and the Moran statistic. The first two chapters review sharp large deviations and the Laplace method, which are used in the sequel. We then study sharp large deviations for empirical Pearson coefficients, defined by $r_n=\sum_{i=1}^n(X_i-\bar X_n)(Y_i-\bar Y_n)/\sqrt{\sum_{i=1}^n(X_i-\bar X_n)^2 \sum_{i=1}^n(Y_i-\bar Y_n)^2}$ or, when the means are known, $\tilde r_n=\sum_{i=1}^n(X_i-\mathbb E(X))(Y_i-\mathbb E(Y))/\sqrt{\sum_{i=1}^n(X_i-\mathbb E(X))^2 \sum_{i=1}^n(Y_i-\mathbb E(Y))^2}$. Our framework is that of samples (Xi, Yi) having a spherical or a Gaussian distribution. In each case, the proof scheme follows that of Bercu et al. We then consider the Moran statistic $T_n=\frac{1}{n}\sum_{i=1}^n\log\frac{X_i}{\bar X_n}+\gamma$, where γ is the Euler constant. Finally, the appendix is devoted to proofs of technical results.
This thesis focuses on the study of sharp large deviations (SLD) for two test statistics: Pearson's empirical correlation coefficient and the Moran statistic. The first two chapters recall general results on SLD principles and Laplace's method used in the sequel. We then study the SLD of empirical Pearson coefficients, namely $r_n=\sum_{i=1}^n(X_i-\bar X_n)(Y_i-\bar Y_n)/\sqrt{\sum_{i=1}^n(X_i-\bar X_n)^2 \sum_{i=1}^n(Y_i-\bar Y_n)^2}$ and, when the means are known, $\tilde r_n=\sum_{i=1}^n(X_i-\mathbb E(X))(Y_i-\mathbb E(Y))/\sqrt{\sum_{i=1}^n(X_i-\mathbb E(X))^2 \sum_{i=1}^n(Y_i-\mathbb E(Y))^2}$. Our framework covers two cases of random samples (Xi, Yi): spherical distributions and Gaussian distributions. In each case, we follow the scheme of Bercu et al. Next, we state SLD for the Moran statistic $T_n=\frac{1}{n}\sum_{i=1}^n\log\frac{X_i}{\bar X_n}+\gamma$, where γ is the Euler constant. Finally, the appendix is devoted to some technical results.
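Both statistics above are easy to evaluate numerically. A minimal sketch computing $r_n$ and the Moran statistic $T_n$ as defined in the abstract; the sample used is a deterministic stand-in for an Exp(1) draw (quantiles at midpoints), for which $T_n$ should sit near 0.

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def pearson_rn(x, y):
    # Empirical Pearson coefficient r_n (sample means subtracted)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def moran_t(x):
    # Moran statistic T_n = (1/n) * sum(log(X_i / mean(X))) + gamma
    n = len(x)
    xbar = sum(x) / n
    return sum(math.log(xi / xbar) for xi in x) / n + GAMMA

# Deterministic stand-in for an Exp(1) sample: quantiles at cell midpoints
n = 200
sample = [-math.log(1 - (i + 0.5) / n) for i in range(n)]
t_n = moran_t(sample)  # close to 0 for exponential data
```

This reflects the usual use of $T_n$ as a test of exponentiality: values far from 0 are evidence against the exponential hypothesis, and the large-deviation results quantify those tail probabilities.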
6

Johansson, Emilia. "Factors controlling the sorption of Cs, Ni and U in soil : A statistical analysis with experimental sorption data of caesium, nickel and uranium in soils from the Laxemar area." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-281938.

Full text
Abstract:
In the fall of 2006, soils from three small valleys in the Laxemar/Oskarshamn area were sampled. A total of eight composite samples were characterized for a number of soil parameters that are important for geochemical sorption and were later also used in batch sorption experiments. Solid/liquid partition coefficients (Kd values) were then determined for seven radionuclides in each of the eight samples. To contribute to the interpretation of the sorption results together with the soil characterizations, this study aims to describe the sorption behavior of the radionuclides caesium, nickel and uranium and to discern which parameters could provide a basis for estimating the strength of sorption of radionuclides in general. The methodology included quantitative methods such as compilation of chemical equilibrium diagrams with the software Hydra/Medusa and correlation analyses using the statistical software SPSS Statistics. Based on the speciation diagrams of each radionuclide and the identified important linear and non-linear relationships of the Kd values with a number of soil parameters, the following soil and soil-solution properties were found to have controlled the sorption of Cs, Ni and U, respectively, in the Laxemar soils. Cs: the specific surface area of the soil coupled to the clay content. Ni: the cation exchange capacity, alkaline solution pH, soil organic matter and dissolved organic matter. U: the cation exchange capacity, soil organic matter, dissolved organic matter, dissolved carbonate and alkaline solution pH. The soil that showed the strongest sorption varied between the nuclides, which can be related to the individual sorption behavior of caesium, nickel and uranium, as well as the different physicochemical properties of the soils.
The parameters that should be prioritized in characterizations of soil samples are identified as: solution pH, the cation exchange capacity, the specific surface area of the soil, soil organic matter and soil texture (clay content).
In order to make decisions related to hypothetical future contamination from repositories of radioactive waste, it is crucial to understand the mobility of radioactive elements in the environment. Sorption is one of the most important chemical mechanisms that can reduce the spread of radionuclides in water/soil/rock systems, where the nuclides partition between the liquid phase and the surfaces of solid particles in these systems. Partition coefficients (Kd values) are generally used as a quantitative measure of sorption, where a high Kd value means that a larger proportion of the substance in question is bound to the solid phase. In the fall of 2006, soil samples were taken from three valleys in Laxemar/Oskarshamn. A total of eight soil samples were characterized for a number of soil parameters that are important for geochemical sorption and were later used in batch experiments together with a natural groundwater. Partition coefficients (Kd values) were determined for seven radionuclides (Cs, Eu, I, Ni, Np, Sr and U) for each of the eight soil samples. To contribute to the interpretation of the sorption results together with the properties of the soil samples, this study aims to describe the sorption behavior of the radionuclides caesium, nickel and uranium, and to discern which parameters can serve as a basis for estimating the sorption strength of radionuclides in general. To achieve this aim, the study has the following objectives: identify the soil and soil-solution properties that control the sorption of Cs, Ni and U, respectively, in the eight Laxemar samples; determine which Laxemar soil sample sorbs the three radionuclides most strongly; and identify the soil parameters that should be prioritized in soil characterizations, based on their influence on sorption, in order to estimate Kd values with only limited information about a soil system. The methodology included quantitative methods such as compilation of chemical equilibrium diagrams with the software Hydra/Medusa and correlation analyses using the statistical software SPSS Statistics.
The chemical equilibrium diagrams helped describe the speciation of each nuclide as a function of pH, and the correlation analyses helped identify linear relationships between pairs of variables, e.g. between Kd and soil parameters. Based on the speciation diagrams of each radionuclide and the identified important linear and non-linear relationships between the Kd values and a number of soil parameters, the following properties of the soils and the soil solution were found to mainly control the sorption of Cs, Ni and U, respectively, in the eight Laxemar soils. For caesium, it is the specific surface area of the soil coupled to the clay content; for nickel, the cation exchange capacity, organic matter, alkaline pH values and dissolved organic matter. The sorption of uranium was found to be controlled by the cation exchange capacity, organic matter, dissolved organic matter, alkaline pH values and dissolved carbonates. The soil that showed the strongest sorption varied between the three nuclides, which can be related to the individual sorption behavior of the nuclides in soil and the different physical and chemical properties of the soils. The parameters that should be prioritized when characterizing soil samples were identified as: pH, the cation exchange capacity, the specific surface area of the soil, the amount of organic matter, and soil texture (clay content).
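The kind of relationship screening described above, covering both linear and non-linear dependence, can be sketched by pairing Pearson's r on raw and log-transformed values with Spearman's rank correlation. All numbers below are invented illustrations, not the Laxemar data.

```python
import math

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def ranks(xs):
    # Ranks 1..n (assumes no ties, which holds for the sample data below)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for position, i in enumerate(order):
        r[i] = position + 1.0
    return r

def spearman_rho(x, y):
    # Spearman's rho: Pearson's r on the ranks, sensitive to any monotonic relation
    return pearson_r(ranks(x), ranks(y))

# Hypothetical Cs Kd values (L/kg) versus specific surface area (m2/g) for eight soils
ssa = [5, 8, 12, 15, 20, 25, 30, 40]
kd = [40, 70, 120, 160, 260, 350, 480, 800]
r_linear = pearson_r(ssa, kd)                        # linear relationship
r_loglog = pearson_r([math.log10(v) for v in ssa],
                     [math.log10(v) for v in kd])    # power-law relationship
rho = spearman_rho(ssa, kd)                          # monotonic relationship
```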
7

Venter, Philip van Zyl. "A supercritical R-744 heat transfer simulation implementing various Nusselt number correlations / Philip van Zyl Venter." Thesis, North-West University, 2010. http://hdl.handle.net/10394/4234.

Full text
Abstract:
During the past decade research has shown that global warming may have disastrous effects on our planet. In order to limit the damage that the human race seems to be causing, it was acknowledged that substances with a high global warming potential (GWP) should be phased out. In due time R-134a, with a GWP of 1300, will probably be phased out to make way for nature-friendly refrigerants with a lower GWP. One of these contenders is carbon dioxide, R-744, with a GWP of 1. The literature revealed that various Nusselt number (Nu) correlations have been developed to predict the convection heat transfer coefficients of supercritical R-744 in cooling. No proof could be found that any of the reported correlations accurately predicts Nusselt numbers (Nus) and the resulting convection heat transfer coefficients of supercritical R-744 in cooling. Although a number of Nu correlations may be used for R-744, eight different correlations were chosen to be compared in a theoretical simulation program, forming the first part of this study. A water-to-transcritical-R-744 tube-in-tube heat exchanger was simulated. Although the results emphasise the importance of finding a more suitable Nu correlation for cooling supercritical R-744, no explicit conclusions could be made regarding the accuracy of any of the correlations used in this study. For the second part of this study, experimental data found in the literature were used to evaluate the accuracy of the different correlations. Convection heat transfer coefficients, temperatures, pressures and tube diameter were employed for the calculation of experimental Nusselt numbers (Nuexp). The theoretical Nu and Nuexp were then plotted against the length of the heat exchanger for different pressures. It was observed that both Nuexp and Nu increase progressively to a maximal value and then decline as the tube length increases.
From these results it was possible to group correlations according to the general patterns of their Nu variation over the tube length. Graphs of Nuexp against Nus calculated according to the Gnielinski correlation generally followed a linear regression, with R2 > 0.9, when the temperature is equal to or above the pseudocritical temperature. From these data a new correlation, Correlation I, based on average gradients and intercepts, was formulated. Then a modification of the Haaland friction factor was used with the Gnielinski correlation to yield a second correlation, namely Correlation II. A third and more advanced correlation, Correlation III, was then formulated by employing graphs where gradients and y-intercepts were plotted against pressure. From these data a new parameter, namely the turning-point pressure ratio of cooling supercritical R-744, was defined. It was concluded that the employed Nu correlations underpredict Nu values (by a minimum of 0.3% and a maximum of 81.6%). However, two of the correlations consistently overpredicted Nus at greater tube lengths, i.e. below pseudocritical temperatures. It was also concluded that Correlation III is more accurate than both Correlations I and II, as well as the existing correlations found in the literature and employed in this study. Correlation III Nus for cooling supercritical R-744 may only be valid for a diameter in the order of the experimental diameter of 7.73 mm, temperatures equal to or above the pseudocritical temperature, and pressures ranging from 7.5 to 8.8 MPa.
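Two of the ingredients named above, the Gnielinski correlation and the Haaland friction factor, have standard textbook forms that can be combined directly. A minimal sketch (smooth tube; the Re, Pr, thermal conductivity and the use of the 7.73 mm diameter below are illustrative assumptions, not the thesis conditions):

```python
import math

def haaland_friction(re, rel_roughness=0.0):
    # Haaland explicit approximation of the Darcy friction factor
    return (-1.8 * math.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / re)) ** -2

def gnielinski_nu(re, pr, f=None):
    # Gnielinski correlation, valid for roughly 3e3 < Re < 5e6 and 0.5 < Pr < 2000
    if f is None:
        f = haaland_friction(re)
    return ((f / 8) * (re - 1000) * pr
            / (1 + 12.7 * math.sqrt(f / 8) * (pr ** (2 / 3) - 1)))

# Illustrative single-phase conditions for supercritical CO2 (R-744) in cooling
re, pr = 1.0e5, 3.0
nu = gnielinski_nu(re, pr)

# Convection coefficient h = Nu * k / D, with an assumed k (W/m.K)
# and the experimental tube diameter of 7.73 mm
h = nu * 0.08 / 7.73e-3
```

Swapping in a modified friction factor, as the thesis does for Correlation II, only requires passing a different f to gnielinski_nu.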
Thesis (M.Ing. (Mechanical Engineering))--North-West University, Potchefstroom Campus, 2010.
8

Kasianenko, Stanislav. "Predicting Software Defectiveness by Mining Software Repositories." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-78729.

Full text
Abstract:
One of the important aims of the continuous software development process is to localize and remove all existing program bugs as fast as possible. This goal is closely related to software engineering and defectiveness estimation. Many big companies started to store source code in software repositories as the latter grew in popularity. These repositories usually include static source code as well as detailed data on defects in software units, which allows analyzing all the data without interrupting the programming process. The main problem of large, complex software is the impossibility of controlling everything manually, while the price of an error can be very high. This might result in developers missing defects at the testing stage and in increased maintenance cost. The general research goal is to find a way of predicting future software defectiveness with high precision. Reducing maintenance and development costs will help reduce the time-to-market and increase software quality. To address the problem of estimating residual defects, an approach was developed to predict the residual defectiveness of software by means of machine learning. As the primary machine learning algorithm, a regression decision tree was chosen as a simple and reliable solution. Data for this tree are extracted from a static source code repository and divided into two parts: software metrics and defect data. Software metrics are computed from the static code, and defect data are extracted from reported issues in the repository. In addition to already reported bugs, the defect data are augmented with unreported bugs found in the "discussions" section of the repository and parsed by a natural language processor. Metrics were filtered by applying a correlation algorithm to remove those not related to the defect data. The remaining metrics were weighted so that the most correlated combination could be used as a training set for the decision tree.
As a result, the built decision tree model forecasts defectiveness with an 89% success rate for the particular product. The experiment was conducted on a Java project in a GitHub repository and predicts the number of possible bugs in a single file (Java class). The experiment resulted in a method for predicting possible defectiveness from the static code of a single large (more than 1000 files) software version.
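The correlation filter described above is straightforward to sketch; the tree itself is shown here only as a depth-1 regression stump rather than a full decision tree. All metric values and defect counts below are invented for illustration.

```python
def pearson_r(x, y):
    # Pearson product-moment correlation coefficient
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def filter_metrics(metrics, defects, threshold=0.3):
    # Keep only metric columns whose |r| against defect counts reaches the threshold
    return {name: column for name, column in metrics.items()
            if abs(pearson_r(column, defects)) >= threshold}

def stump(x, y, split):
    # Depth-1 regression tree: predict the mean defect count on each side of the split
    left = [yi for xi, yi in zip(x, y) if xi <= split]
    right = [yi for xi, yi in zip(x, y) if xi > split]
    return lambda value: (sum(left) / len(left) if value <= split
                          else sum(right) / len(right))

# Hypothetical per-file metrics and reported defect counts
metrics = {
    "loc": [100, 250, 400, 600, 800, 1000],
    "n_authors": [1, 2, 2, 3, 4, 5],
    "comment_ratio": [0.3, 0.1, 0.25, 0.2, 0.15, 0.28],
}
defects = [0, 1, 2, 4, 6, 9]
kept = filter_metrics(metrics, defects)           # drops the weakly correlated metric
predict = stump(metrics["loc"], defects, split=500)
```

A production version would grow the tree recursively, choosing each split to minimize squared error, but the filtering-then-splitting pipeline is the same.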
9

Fernandes, Catarina Marques. "Liderança de empoderamento e trabalho digno." Master's thesis, Universidade de Évora, 2018. http://hdl.handle.net/10174/24511.

Full text
Abstract:
The concept of decent work was legitimized by the International Labour Organization in 1999, seeking to respond to questions in the scope of international labour-related policies. Due to recent changes in the organizational context, empowering leadership has gained prominence in research and practice. The aim of the present study is to analyze the relationship between decent work and empowering leadership and how the dimensions of the two concepts are associated. Data were collected through two questionnaires, the Decent Work Questionnaire and the Empowering Leadership Questionnaire, applied to 901 Portuguese workers. The data were analyzed through Pearson's correlation coefficient; the results indicated that the correlations are generally high and that the decent-work dimension Fundamental principles and values at work and the empowering-leadership dimensions Participation in decision making, Coaching and Showing concern/interacting with the team present the highest correlations. These results demonstrate the association between decent work and empowering leadership, suggesting that they are strongly related, though distinct, concepts.
Empowering Leadership and Decent Work. Abstract: The concept of decent work was legitimized by the International Labor Organization in 1999, seeking to address issues in international labor-related policies. Due to recent changes in the organizational context, empowering leadership has gained prominence in research and practice. The aim of the present study is to analyze the relationship between decent work and empowering leadership and how the dimensions of the two concepts are associated. Data were collected through two questionnaires, the Decent Work Questionnaire and the Empowering Leadership Questionnaire, applied to 901 Portuguese workers. The data were analyzed using Pearson's correlation coefficient. The results indicated that the correlations are generally high and that the decent-work dimension Principles and fundamental values at work and the empowering-leadership dimensions Participation in decision making, Coaching and Demonstration of concern/interaction with the team show the highest correlations. These results demonstrate the association between decent work and empowering leadership, suggesting that the concepts are strongly related, although distinct from each other.
10

Siqueira, Lucas Alfredo. "Titulador automático baseado em filmes digitais para determinação de dureza e alcalinidade total em águas minerais." Universidade Federal da Paraíba, 2016. http://tede.biblioteca.ufpb.br:8080/handle/tede/9013.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Total hardness and total alkalinity are important physico-chemical parameters for the evaluation of water quality and are determined by volumetric analytical methods. In these methods the end point of the titration is hard to detect because the color transition inherent to each method is difficult to see. To circumvent this problem, a new automatic method for detecting the titration end point is proposed here for the determination of total hardness and total alkalinity in mineral water samples. The proposed flow-batch titrator consists of a peristaltic pump, five three-way solenoid valves, a magnetic stirrer, an electronic actuator, an Arduino MEGA 2560TM board, a mixing chamber and a webcam. The webcam records a digital movie (DM) during the addition of the titrant to the mixing chamber, registering the color variations resulting from the chemical reactions between titrant and sample within the chamber. While the DM is recorded, it is decompiled into frames ordered sequentially at a constant rate of 30 frames per second (FPS). The first frame is used as a reference to define a region of interest (RI) of 48 × 50 pixels, whose R-channel values are used to calculate Pearson's correlation coefficient (r): r is calculated between the R values of the initial frame and those of each subsequent frame. The titration curves are plotted in real time using the values of r (ordinate axis) and the total opening time of the titrant valve (abscissa axis). The end point is estimated by the second-derivative method. Software written in the ActionScript 3.0 language manages all analytical steps and data treatment in real time. The feasibility of the method was attested by its application to the analysis of natural water samples. The results were compared with classical titration and did not present statistically significant differences when the paired t-test was applied at the 95% confidence level.
The proposed method is able to process about 71 samples per hour, and its precision was confirmed by overall relative standard deviation (RSD) values, always lower than 2.4% for total hardness and 1.4% for total alkalinity.
A dureza total e a alcalinidade total são importantes parâmetros físico-químicos para avaliação da qualidade de águas e são determinados por métodos volumétricos de análise. Estes métodos apresentam difícil detecção do ponto final da titulação devido à dificuldade de visualização das transições de cores inerentes a cada um deles. Para contornar este problema, foi proposta neste trabalho uma nova metodologia automática para a detecção do ponto final nas determinações de dureza total e alcalinidade total em águas. O titulador em fluxo-batelada proposto é composto de uma bomba peristáltica, cinco válvulas solenoides de três vias, um agitador magnético, um acionador de válvulas, uma placa Arduíno MEGA 2560TM, uma câmara de mistura e uma webcam. O programa de gerenciamento e controle do titulador foi escrito em linguagem ActionScript 3.0. A webcam grava o filme digital durante a adição do titulante na câmara de mistura, registrando as variações de cor decorrentes das reações químicas entre titulante e amostra no interior de câmara. Enquanto o filme é gravado, este é decomposto em quadros ordenados sequencialmente a uma taxa constante de 30 quadros por segundo (FPS). O primeiro quadro é utilizado como referência para definir uma região de interesse (RI) com 48 x 50 pixels, na qual seus valores R, G e B são utilizados para calcular os valores de coeficiente de correlação de Pearson (r). O valor de r é calculado entre os valores de R do quadro inicial e de cada quadro subsequente. As curvas de titulação são obtidas em tempo real usando os valores de r (ordenadas) e o tempo total de abertura da válvula de titulante (abscissas). O ponto final é estimado pelo método de segunda derivada. O método foi aplicado na análise de águas minerais e os resultados foram comparados com a titulação clássica, não apresentando diferenças estatisticamente significativas com aplicação do teste t pareado a 95% de confiança. 
O método proposto foi capaz de processar até 71 amostras por hora e a sua precisão foi confirmada pelos valores de desvio padrão relativos (DPR) globais, sempre inferiores as 2,4% para dureza total e 1,4% para alcalinidade total.
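The frame-correlation strategy described in this abstract (Pearson's r between the R-channel values of the first frame and each subsequent frame, with the end point located by the second-derivative method) can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual code: the function names and the synthetic r series in the comments are invented for the example.

```python
import math


def pearson_r(x, y):
    # Pearson correlation between two equal-length numeric sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den


def titration_curve(reference_frame, frames):
    # r between the R-channel values of the reference frame and
    # those of each subsequent frame of the digital movie
    return [pearson_r(reference_frame, f) for f in frames]


def endpoint_time(times, r_values):
    # crude second-derivative estimate on an evenly spaced series;
    # the end point is taken where |d2r/dt2| is largest
    d2 = [r_values[i - 1] - 2 * r_values[i] + r_values[i + 1]
          for i in range(1, len(r_values) - 1)]
    k = max(range(len(d2)), key=lambda i: abs(d2[i]))
    return times[k + 1]
```

On a sigmoid-shaped r series such as `[1.0, 0.99, 0.95, 0.5, 0.1, 0.05]`, the largest curvature falls at the steep drop, which is where the color transition (and hence the end point) occurs.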
APA, Harvard, Vancouver, ISO, and other styles
11

Öberg, Elin. "L’influence de l’âge de début d’acquisition et de l’input linguistique sur l’apprentissage du FLE : Une étude empirique d’étudiants suédois du lycée et de l’université au niveau A2." Thesis, Stockholms universitet, Romanska och klassiska institutionen, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-196563.

Full text
Abstract:
In the light of recent findings regarding age and cumulative language exposure in the domain of Second Language Acquisition, the present study examines how starting age and linguistic input influence Swedish learners of French in a formal instructional setting. In contrast to natural settings, research suggests that a younger starting age in formal settings does not result in more advanced long-term competences in the target language. For the benefits associated with a younger age to be triggered, significant amounts of rich linguistic input need to be obtained by the learner on a daily basis. To test the validity of these findings, two groups with different starting ages were asked to fill in a questionnaire about their age and language contact and to perform a grammar and vocabulary test. A correlation analysis showed that an older starting age did in fact have a statistically significant relationship with higher test results, and that participants who reported more frequent self-regulatory habits of studying French also performed better than those with little to no extracurricular exposure. However, a regression analysis did not confirm these correlations; instead, it found that other variables, such as motivation and the group to which the participants belonged, carry much more significance than starting age and the amount of received input alone.
APA, Harvard, Vancouver, ISO, and other styles
12

Ozbal, Gozde. "A Content Boosted Collaborative Filtering Approach For Movie Recommendation Based On Local &." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610984/index.pdf.

Full text
Abstract:
Recently, it has become more and more difficult for existing web-based systems to locate or retrieve relevant information of any kind, due to the rapid growth of the World Wide Web (WWW) in terms of the information space and the number of users in that space. However, in today's world, many systems and approaches make it possible for users to be guided by the recommendations these systems provide about new items such as articles, news, books, music, and movies. Nevertheless, many traditional recommender systems fail when the data to be used throughout the recommendation process is sparse; in other words, when there is an inadequate number of items or users in the system, unsuccessful recommendations are produced. Within this thesis work, ReMovender, a web-based movie recommendation system that uses a content boosted collaborative filtering approach, will be presented. ReMovender combines the local/global similarity and missing data prediction techniques in order to handle the previously mentioned sparseness problem effectively. Besides, by taking the content information of the movies into consideration during the item similarity calculations, the goal of making more successful and realistic predictions is achieved.
APA, Harvard, Vancouver, ISO, and other styles
13

Kovařík, Tomáš. "Řízení poslechových testů pro subjektivní hodnocení kvality audio signálu." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219467.

Full text
Abstract:
The aim of this thesis was to perform listening tests. Appropriate methods were selected for these tests, the tests were carried out, and the data were analyzed statistically. An interval scale was then compiled from the results of the first test, and average SNR values for the background noises were determined in the second listening test.
APA, Harvard, Vancouver, ISO, and other styles
14

Mihulka, Tomáš. "Evoluční optimalizace analogových obvodů." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2017. http://www.nusl.cz/ntk/nusl-363843.

Full text
Abstract:
The aim of this work was to create a system for the optimisation of specific analog circuits by evolution using multiple fitness functions. A set of experiments was run and the results analyzed to evaluate the feasibility of evolutionary optimisation of analog circuits. A prerequisite for this goal is the study and choice of particular types of analog circuits and evolutionary algorithms. Within the scope of this work, amplifiers and oscillators were chosen as the target circuits, and genetic algorithms and evolutionary strategies as the evolutionary algorithms. The motivation for this work is the ongoing effort to automate the design and optimisation of analog circuits, where evolutionary optimisation is one of the options.
APA, Harvard, Vancouver, ISO, and other styles
15

Watanabe, Jorge. "Métodos geoestatísticos de co-estimativas: estudo do efeito da correlação entre variáveis na precisão dos resultados." Universidade de São Paulo, 2008. http://www.teses.usp.br/teses/disponiveis/44/44137/tde-14082008-165227/.

Full text
Abstract:
Esta dissertação de mestrado apresenta os resultados de uma investigação sobre os métodos de co-estimativa comumente utilizados em geoestatística. Estes métodos são: cokrigagem ordinária; cokrigagem colocalizada e krigagem com deriva externa. Além disso, a krigagem ordinária foi considerada apenas a título de ilustração como esse método trabalha quando a variável primária estiver pobremente amostrada. Como sabemos, os métodos de co-estimativa dependem de uma variável secundária amostrada sobre o domínio a ser estimado. Adicionalmente, esta variável deveria apresentar correlação linear com a variável principal ou variável primária. Geralmente, a variável primária é pobremente amostrada enquanto a variável secundária é conhecida sobre todo o domínio a ser estimado. Por exemplo, em exploração petrolífera, a variável primária é a porosidade medida em amostras de rocha retiradas de testemunhos e a variável secundária é a amplitude sísmica derivada de processamento de dados de reflexão sísmica. É importante mencionar que a variável primária e a variável secundária devem apresentar algum grau de correlação. Contudo, nós não sabemos como eles funcionam dependendo do grau de correlação. Esta é a questão. Assim, testamos os métodos de co-estimativa para vários conjuntos de dados apresentando diferentes graus de correlação. Na verdade, esses conjuntos de dados foram gerados em computador baseado em algoritmos de transformação de dados. Cinco valores de correlação foram considerados neste estudo: 0,993, 0,870, 0,752, 0,588 e 0,461. A cokrigagem colocalizada foi o melhor método entre todos testados. Este método tem um filtro interno que é aplicado no cálculo do peso da variável secundária, que por sua vez depende do coeficiente de correlação. De fato, quanto maior o coeficiente de correlação, maior é o peso da variável secundária. Então isso significa que este método funciona mesmo quando o coeficiente de correlação entre a variável primária e a variável secundária é baixo. 
Este é o resultado mais impressionante desta pesquisa.
This master's dissertation presents the results of a survey into co-estimation methods commonly used in geostatistics. These methods are ordinary cokriging, collocated cokriging and kriging with an external drift. Besides that, ordinary kriging was considered just to illustrate how it works when the primary variable is poorly sampled. As we know, co-estimation methods depend on a secondary variable sampled over the estimation domain. Moreover, this secondary variable should present linear correlation with the main, or primary, variable. Usually the primary variable is poorly sampled whereas the secondary variable is known over the whole estimation domain. For instance, in oil exploration the primary variable is porosity as measured on rock samples gathered from drill holes, and the secondary variable is seismic amplitude derived from processing seismic reflection data. It is important to mention that the primary and secondary variables must present some degree of correlation. However, we do not know how these methods perform depending on the correlation coefficient. That is the question. Thus, we have tested the co-estimation methods on several data sets presenting different degrees of correlation. These data sets were generated in computer based on data transform algorithms. Five correlation values were considered in this study: 0.993, 0.870, 0.752, 0.588 and 0.461. Collocated simple cokriging was the best method among all those tested. This method has an internal filter applied to compute the weight of the secondary variable, which in turn depends on the correlation coefficient: in fact, the greater the correlation coefficient, the greater the weight of the secondary variable. This means the method works even when the correlation coefficient between the primary and secondary variables is low. This is the most impressive result to come out of this research.
APA, Harvard, Vancouver, ISO, and other styles
16

Gaspar, Willians Cesar Rocha. "A correlação entre jornada de trabalho e produtividade: uma perspectiva macroeconômica entre países." reponame:Repositório Institucional do FGV, 2017. http://hdl.handle.net/10438/19961.

Full text
Abstract:
Submitted by Willians Gaspar (willians.gaspar@fgv.br) on 2018-01-22T16:33:59Z No. of bitstreams: 1 A Correlação entre Jornada de Trabalho e Produtividade - Uma Perspectiva Macroeconômica entre Países.pdf: 1651221 bytes, checksum: 10a95ba6074b04f5e4e0f6d88a9bf7b6 (MD5)
Approved for entry into archive by Janete de Oliveira Feitosa (janete.feitosa@fgv.br) on 2018-01-24T12:00:40Z (GMT).
Made available in DSpace on 2018-01-29T18:55:15Z (GMT). Previous issue date: 2017-12-19
This research has as its general objective to identify the variables or contributing factors that can subsidize the discussion about reducing the working day. As a specific objective, it proposes to verify how these same variables affect productivity. For both objectives the macroeconomic aspects of the countries analyzed are considered. The criterion for selecting these countries is based on the "ranking" of the OECD and World Bank databases for the year 2013, considering the set of major world economies, which together represent 65.22% of global GDP. The data extracted refer to the "Gross Domestic Product - GDP at (PPP) - Purchasing Power Parity", which consists of the Gross Domestic Product, in international dollars, with a view to comparing these economies by purchasing power parity (PPP). Other sources of information were considered as objects of analysis and observation, including the statistical series of secondary data from the International Labour Office (ILO), the International Monetary Fund (IMF), the United Nations (UNDP), the Brazilian Institute of Geography and Statistics (IBGE), the Department of Statistics and Socioeconomic Studies (DIEESE) and the Institute of Applied Economic Research (IPEA). The research was conducted at the macroeconomic level of the countries, with a longitudinal temporal cut between the years 2007 and 2013, in order to observe the behavior of these economies, including during the period of the 2008 global crisis. In this sense, the evolution of the historical GDP series was assessed, revealing the size of each economy, along with GDP per capita, which captures wealth relative to the population. Finally, we consider the labor productivity factor itself, which deals with the relationship between GDP, the number of people and the number of hours worked in the period.
Design/Methodology/Approach – The method is qualitative research of the exploratory type, supported by quantitative correlational analysis; the statistical design is directed at verifying the degree of association between the variables working day and labor productivity, that is, calculating and interpreting the degree of correlation between these two variables. Findings – In the final conclusion of the study, it is inferred, based on the theoretical framework and the analysis of the statistical data, whether the reduction in the working day contributes to changes in productivity indexes, and which other variables must be considered in this discussion. Research limitations – Aspects of national culture, climatic conditions and the segregation of nations by percentage share of agriculture, industry and services in their economies were not considered, which would have allowed comparative analysis by subgroups. In addition, the sample set is restricted both in the number of countries and in the relatively short period between 2007 and 2013, which was also marked by an atypical event, the global economic crisis of 2008. Practical contributions – Governments, organizations and workers may rethink the possible economic and social benefits through public policies that allow greater flexibility in working hours, focusing on competitive advantages and the balance of the relation between labor and capital, observing legal aspects, productivity, quality of life, unit costs and job creation.
Esta pesquisa tem como objetivo geral identificar as variáveis ou fatores contribuintes para subsidiar a discussão sobre redução da Jornada de Trabalho. Como objetivo específico, o que se propõe é verificar como essas mesmas variáveis afetam a Produtividade. Para ambos os objetivos são considerados os aspectos macroeconômicos dos países analisados. O critério para seleção desses países se fundamenta no “ranking” da base de dados da Organização para a Cooperação e Desenvolvimento Econômico – OCDE e do Banco Mundial, ano base 2013, considerando-se o conjunto das maiores economias mundiais, que, juntas, representam 65,22% do PIB global. Os dados extraídos são referentes ao “Gross Domestic Product – GDP at (PPP) - Purchasing Power Parity”, que consiste no Produto Interno Bruto, em dólares internacionais, com vistas à possibilidade comparativa destas economias pela paridade do poder de compra (PPC). Outras fontes de informações foram consideradas como objetos de análise e observações, incluindo-se as séries estatísticas de dados secundários do Instituto Internacional do Trabalho (OIT), do Fundo Monetário Internacional (FMI), das Nações Unidas (UNDP), do Instituto Brasileiro de Geografia e Economia (IBGE), do Departamento Intersindical de Estatística e Estudos Socioeconômicos (DIEESE) e do Instituto de Pesquisa Econômica e Aplicada (IPEA). A pesquisa foi conduzida no nível macroeconômico dos países, com corte temporal longitudinal entre os anos de 2007 a 2013, com o objetivo de observar-se o comportamento dessas economias, inclusive durante o período da crise mundial de 2008. Nesse sentido, foi avaliada a evolução da série histórica do PIB, como reveladora do tamanho da economia, o PIB per capita, que captura a riqueza em relação à população. Por último, considera-se o fator produtividade do trabalho propriamente dito, que trata da relação entre o PIB, o número de pessoas e o número de horas trabalhadas no período. 
Quanto ao método, trata-se de pesquisa qualitativa do tipo exploratória, subsidiada por análise quantitativa correlacional, sendo o delineamento estatístico direcionado para a verificação do grau de associação entre as varáveis: Jornada de trabalho e Produtividade do trabalho; ou seja, cálculo e interpretação do grau de correlação entre essas duas variáveis. Na conclusão final do trabalho, infere-se com base no referencial teórico e na análise dos dados estatísticos, se a redução na jornada de trabalho contribui para alterações nos índices de produtividade, e assim como outras variáveis são consideradas nesta discussão. Não foram considerados aspectos da cultura nacional, condições climáticas e segregação das nações por percentual de participação respectivamente em agricultura, indústria, e serviços, na composição de suas economias, visando realizar análise comparativa por subgrupos. Além disto o conjunto amostral é restrito, tanto em número de países, quanto em relação ao período, relativamente curto, entre 2007 e 2013, além de ter sido marcado por fato atípico como a crise econômica mundial de 2008. Á governos, organizações e trabalhadores para repensarem os eventuais benefícios econômicos e sociais, através de políticas públicas que permitam maior flexibilização das jornadas de trabalho, com foco nas vantagens competitivas e no equilíbrio da relação entre mão de obra e capital, observando os aspectos legais, a produtividade, a qualidade de vida, os custos unitários e a geração de empregos
APA, Harvard, Vancouver, ISO, and other styles
17

Lee, Wan Yi, and 李宛懌. "Is Pearson Sample Correlation Coefficient Always Feasible To Test For Correlations?" Thesis, 2016. http://ndltd.ncl.edu.tw/handle/43295031226491363248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Lei, Cheng. "Student performance prediction based on course grade correlation." Thesis, 2019. http://hdl.handle.net/1828/10654.

Full text
Abstract:
This research explored the relationship between an earlier-year technical course and a later-year technical course for students who graduated between 2010 and 2015 with the degree of bachelor of engineering. The research focuses only on courses in the Electrical Engineering program at the University of Victoria. Three approaches based on the two major factors, coefficient and enrolment, were established to select the course grade predictor: Max(Pearson Coefficient), Max(Enrolment), and Max(Pi), a combination of the two factors. The prediction algorithm used is linear regression, and the prediction results were evaluated by Mean Absolute Error (MAE) and prediction precision. The results show that most course pairs could not be used to reliably predict a student's performance in one course from performance in another. However, since fourth-year courses are specialization-related and generally have relatively small enrolments, those course pairs with fourth-year CourseYs and acceptable MAE and prediction precision could serve as early references and advice for students selecting a specialization direction in their first or second academic year.
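The predictor-selection and regression pipeline described above can be sketched minimally as follows. The Max(Pearson Coefficient) rule and the linear-regression predictor match the abstract; the course names and grade vectors in the test are hypothetical, invented purely for illustration.

```python
import math


def pearson_r(x, y):
    # Pearson correlation between two equal-length grade vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den


def fit_line(x, y):
    # ordinary least-squares fit y = a + b*x (the prediction model)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y)) /
         sum((a - mx) ** 2 for a in x))
    return my - b * mx, b


def pick_predictor(candidate_grades, target_grades):
    # Max(Pearson Coefficient) rule: choose the earlier course whose
    # grades correlate most strongly with the later course
    return max(candidate_grades,
               key=lambda course: pearson_r(candidate_grades[course],
                                            target_grades))
```

Once a predictor course is chosen, `fit_line` gives the slope and intercept used to predict the later course's grade, and MAE over held-out students would evaluate it.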
Graduate
APA, Harvard, Vancouver, ISO, and other styles
19

Chen, Chin-Han, and 陳勁含. "Constructing Molecular Phylogeny by Pearson's Correlation Coefficient and Molecular Phylogenetic Analysis of PRMT Super Family." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/70958272384702013640.

Full text
Abstract:
Master's
Chung Shan Medical University
Department of Biomedical Sciences, Master's Program
101
The evolutionary relationships of all living organisms can be viewed in a phylogenetic tree, and many methods have been developed to evaluate evolutionary relationships. However, multiple sequence alignment (MSA) must be performed before applying those methods, and several studies have shown that the order in which sequences are added to an MSA can significantly affect the end result. We therefore want to find whether another method can produce more reliable results; our goal is to construct a unique and reasonable tree-building method better than the others. Here we propose a novel approach to replace the MSA process: we combine pair-wise sequence alignment (BLAST) and Pearson's correlation coefficient (PCC) to model the interactive relationships of the compared sequences, and the relationships are then clustered by a hierarchical clustering (HC) method. The results show that our method indeed mitigates the problems that MSA can introduce; it also has better clustering ability than the conventional methods and can produce a more reasonable tree. We subsequently use our method to perform a phylogenetic analysis of the protein arginine methyltransferase (PRMT) families. In addition, we investigate whether there is a way to identify the pattern of each PRMT family, which would allow fast classification of an unknown sequence.
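The BLAST-plus-PCC-plus-hierarchical-clustering pipeline can be illustrated with a minimal sketch. Hypothetical per-sequence score profiles stand in for actual BLAST output, and single linkage stands in for whatever linkage criterion the thesis used; none of the names below come from the work itself.

```python
import math


def pcc(x, y):
    # Pearson's correlation coefficient between two score profiles
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den


def cluster_by_pcc(profiles, n_clusters):
    # naive single-linkage agglomerative clustering in which the
    # similarity of two sequences is the PCC of their (BLAST-derived)
    # score profiles; profiles maps sequence name -> score vector
    clusters = [[name] for name in profiles]

    def link(c1, c2):
        return max(pcc(profiles[a], profiles[b]) for a in c1 for b in c2)

    while len(clusters) > n_clusters:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: link(clusters[p[0]], clusters[p[1]]))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

Two sequences whose similarity profiles rise and fall together (high PCC) merge early; a sequence with an anticorrelated profile stays in its own cluster, mirroring how the method separates families.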
APA, Harvard, Vancouver, ISO, and other styles
20

Lin, Jian-Fa, and 林建發. "Transitive Pearson Product-Moment Correlation Coefficient Based Particle Swarm Optimization on Applying Hyperspectral Image Dimension Reduction." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/5ruft5.

Full text
Abstract:
Master's
National Taipei University of Technology
Graduate Institute of Electrical Engineering
105
In recent years, advances in satellite remote sensing technology and its widening applications have increased the number of hyperspectral imaging bands and the size of the datasets. To prevent erroneous data and noisy bands from degrading classification accuracy, hyperspectral image processing is used to select representative bands from the spectrum; reducing the data complexity is thus an essential procedure. This thesis proposes the Transitive Pearson Product-Moment Correlation Coefficient (TPMCC), which adjusts the correlation coefficient between two bands using similar neighboring bands that meet specified conditions, improving band selection and effectively achieving dimensionality reduction. A previous study used Particle Swarm Optimization (PSO), which clusters the correlation coefficient matrix generated from the original hyperspectral image into cluster modules in feature space and then chooses representative bands to reduce dimensionality. However, for images with many classes, the correlation coefficient matrix groups the bands of each class inefficiently; in addition, PSO is easily disturbed and may fail to find a suitable universal correlation coefficient matrix. In this dissertation, Salinas's AVIRIS and Washington DC Mall's HYDICE remote sensing images are used in the experiments. The experimental results show that the TPMCC algorithm is more effective than the plain Pearson Product-Moment Correlation Coefficient, improving the dimension reduction rate and reducing the number of selected bands while achieving good classification results.
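The baseline that TPMCC builds on, a band-to-band Pearson correlation matrix over the spectral bands, can be sketched as follows. The transitive adjustment itself is specific to the thesis and is not reproduced here; the tiny three-band input in the test is invented for illustration.

```python
import math


def pearson_r(x, y):
    # Pearson correlation between two flattened per-band pixel vectors
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den


def band_correlation_matrix(bands):
    # bands: one flattened pixel vector per spectral band; returns the
    # symmetric band-to-band Pearson correlation matrix that a
    # clustering step (e.g. PSO) would then group into band clusters
    n = len(bands)
    m = [[1.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            m[i][j] = m[j][i] = pearson_r(bands[i], bands[j])
    return m
```

Highly correlated bands (entries near 1) are redundant and can be represented by a single band, which is the core idea behind correlation-based dimensionality reduction.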
APA, Harvard, Vancouver, ISO, and other styles
21

Říha, Samuel. "Parciální a podmíněné korelační koeficienty." Master's thesis, 2015. http://www.nusl.cz/ntk/nusl-350851.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Vondra, Jan. "Vliv vybraných kondičních faktorů na výkonnost ve vodním slalomu." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-342055.

Full text
Abstract:
Title: Influence of selected conditional factors on performance in white water slalom. Aims: The aim of the study was to investigate the relationship between selected specific movement abilities, examined with a modified test battery, and the performance of athletes in white water slalom. Methods: Field measurements were used, applying the modified test battery. A GPS module was used to determine the distances of the partial tests in the battery, and times were measured manually. To determine the statistical correlation between the modified battery and the performance ability of the competitors, two different correlation coefficients and regression analysis were used. For the rank order in the tests and the race, the nonparametric Spearman correlation coefficient was used; the statistical significance of the relationship between the times measured in the tests and the final times in the nomination races was determined with the Pearson correlation coefficient. Results: A relationship was considered statistically significant when r ≥ 0.8. Spearman's correlation coefficient: in the test at 40 m, the following correlation coefficients were obtained: nomination races rs = 0.380952, Czech cup rs = 0.595238. In the test at 80 m, the following correlation coefficients were obtained: nomination races rs = 0.857143,...
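The two coefficients this study contrasts can be computed without any statistics library, as in the hedged sketch below. Spearman's coefficient is just Pearson's r applied to ranks; the tie-free ranking here is a simplification (race times rarely tie), and the sample values in the test are invented.

```python
import math


def pearson_r(x, y):
    # Pearson correlation between two equal-length sequences
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den


def spearman_rho(x, y):
    # rank the observations (assumes no ties) and take the
    # Pearson correlation of the ranks
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r
    return pearson_r(ranks(x), ranks(y))
```

For a monotone but nonlinear relationship, Spearman's rho is exactly 1 while Pearson's r falls below 1, which is why rank order (test vs. race placement) and raw times (test vs. final time) call for different coefficients.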
APA, Harvard, Vancouver, ISO, and other styles
23

KŘÍŽOVÁ, Tereza. "Měření návštěvnosti." Master's thesis, 2019. http://www.nusl.cz/ntk/nusl-394863.

Full text
Abstract:
The objective of this thesis is to demonstrate the possibility of using data from electronic revenue records (EET) to measure visit rates and to develop recommendations for their use in tourism. The thesis focuses on the tourism sector; concepts and related terminology are explained. Described in this thesis are sources of information about visitors, visitor profiles, the decision-making process about visits, and selected technologies used to measure visit rates. Reasons, problems and classifications related to measuring visit rates are included as well. The practical part examines the use of information from electronic revenue records for the purpose of measuring the number of visitors, based on the calculation of Pearson's correlation coefficients. The principle of how EET functions is explained in the thesis. A significant part of the work is the analysis of daily and monthly revenues from electronic records in the lodging sector in the regions of the Czech Republic. Based on this analysis, 6 groups are determined in which daily seasonality develops in a specific way. An important part is also the calculation of the average cost of accommodation in the regions, which identifies certain economic impacts of tourism. The thesis concludes with summarized recommendations for the use of data from EET.
APA, Harvard, Vancouver, ISO, and other styles
24

Bárta, Vít. "Komparace konsolidace demokracie na území bývalého východního bloku." Master's thesis, 2016. http://www.nusl.cz/ntk/nusl-353713.

Full text
Abstract:
The aim of this thesis is to numerically evaluate democratic consolidation in the Eastern European countries of the former Eastern Bloc, to compare these countries with each other, and to decide which of them can be considered consolidated democracies. A secondary aim is to find which factors supported this consolidation or at least correlate with it. The theoretical basis of this work is Wolfgang Merkel's theory of democratic consolidation, which divides democratic consolidation into four levels: constitutional consolidation, representative consolidation, behavioral consolidation, and democratic consolidation of the political culture. Each level of democratic consolidation is expressed numerically, using Bertelsmann transformation index data, separately for all states at two-year intervals from 2005 to 2015; on that basis, overall democratic consolidation is calculated. We can therefore compare countries with each other and over time. The correlation between the factors supporting consolidation and overall democratic consolidation is expressed by the Pearson correlation coefficient. This work contributes by creating and describing a method that can be used to express democratic consolidation numerically for any state from 2005 to 2015 without the author's subjective influence. Another benefit is...
APA, Harvard, Vancouver, ISO, and other styles
