Academic literature on the topic 'Statistical Data Interpretation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Statistical Data Interpretation.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Statistical Data Interpretation"

1

Balaji, SM. "Data interpretation and statistical significance." Indian Journal of Dental Research 30, no. 2 (2019): 163. http://dx.doi.org/10.4103/ijdr.ijdr_363_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Latypova, T. R., and N. Yu Stepanova. "Application Statistical Models for Interpretation Toxicological Data." IOP Conference Series: Earth and Environmental Science 988, no. 4 (February 1, 2022): 042030. http://dx.doi.org/10.1088/1755-1315/988/4/042030.

Full text
Abstract:
The results of bioassays on infusoria of 75 samples of bottom sediments from 6 water bodies of the Middle Volga region were analyzed using traditional nonparametric methods and two statistical models, the generalized linear mixed model (GLMM) and the cumulative link mixed model (CLMM). Interpreting biotesting results with nonparametric methods is often ambiguous because toxicological data frequently do not follow a normal distribution. The GLMM and CLMM models allow analysis of data that do not follow a normal distribution and made it possible to clarify the toxicity level of a number of ambiguous samples, which, after processing by the model algorithms, acquired the status of either definitely toxic or definitely non-toxic.
APA, Harvard, Vancouver, ISO, and other styles
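The modelling idea described in the abstract above can be sketched in Python. The snippet below is a minimal, illustrative analogue only: it fits a Poisson regression with clustering by water body via generalized estimating equations in statsmodels, since a true GLMM or the ordinal CLMM used in the paper would typically require dedicated mixed-model tooling. The simulated data and all column names are assumptions, not the paper's data.

```python
# Minimal sketch: clustered, non-normal bioassay counts analysed with a
# Poisson GEE (a rough analogue of the GLMM idea; the CLMM used in the
# paper would need ordinal-regression tooling). Columns and data are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 75
df = pd.DataFrame({
    "water_body": rng.integers(0, 6, n),   # 6 water bodies
    "treated": rng.integers(0, 2, n),      # control vs. sediment extract
})
# surviving infusoria out of 30 per test vessel (non-normal count response)
df["surviving"] = rng.poisson(lam=np.where(df["treated"] == 1, 18, 26))

model = smf.gee("surviving ~ treated", groups="water_body",
                data=df, family=sm.families.Poisson())
print(model.fit().summary())
```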
3

Papaioannou, Agelos, Vasili Simeonov, Panagiotis Plageras, Eleni Dovriki, and Thomas Spanos. "Multivariate statistical interpretation of laboratory clinical data." Open Medicine 2, no. 3 (September 1, 2007): 319–34. http://dx.doi.org/10.2478/s11536-007-0035-1.

Full text
Abstract:
Laboratory aids are extensively used in the diagnosis of diseases, in preventive medicine, and as management tools. Reference values of clinically healthy people serve as a guide to the clinician in evaluating biochemical parameters. Twenty-one biochemical parameters of healthy persons were determined using standard methods of analysis, and cluster analysis and principal components analysis were applied to these data. The application of a typical classification approach such as cluster analysis showed that the 21 clinical parameters form four major groups of similarity, which correspond to the authors' assumption of several summarizing patterns of clinical parameters, such as an "enzyme," "major component excretion," "general health state," and "blood specific" pattern. These patterns also appear in the subsets obtained by separating the general dataset into "male," "female," "young," and "adult" healthy groups. The results obtained from principal components analysis additionally supported this assumption. The intelligent data analysis of the clinical parameter dataset has shown that when a complex system is treated as a multivariate one, the information about the system substantially increases. All these results support the idea that a general health indicator could probably be constructed, taking into account the existing classification groups in the list of clinical parameters.
APA, Harvard, Vancouver, ISO, and other styles
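As a rough illustration of the two techniques named in the abstract above (hierarchical cluster analysis and principal components analysis), the following Python sketch applies them to a standardized matrix of clinical parameters. The simulated data, subject count, and four-cluster cut are assumptions used only to show the workflow.

```python
# Minimal sketch: cluster the 21 parameters by similarity, then cross-check
# the structure with PCA; all data here are simulated stand-ins.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 21))            # 200 healthy subjects x 21 parameters
X = StandardScaler().fit_transform(X)     # z-scores before multivariate analysis

# hierarchical clustering of the 21 parameters (columns) by profile similarity
Z = linkage(X.T, method="ward")
param_groups = fcluster(Z, t=4, criterion="maxclust")   # e.g. four patterns
print("parameter group labels:", param_groups)

# PCA on the same matrix to cross-check the grouping structure
pca = PCA(n_components=4).fit(X)
print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
```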
4

Chowdhury, Shamsul, Ove Wigertz, and Bo Sundgren. "A Knowledge-based System for Data Analysis and Interpretation." Methods of Information in Medicine 28, no. 01 (January 1989): 6–13. http://dx.doi.org/10.1055/s-0038-1635541.

Full text
Abstract:
Traditionally, statistical packages are employed to derive or infer facts about a Universe of Discourse through data analysis and interpretation. It is analysis that serves to transform data into information. Statistical packages provide users with relatively easy-to-use and powerful mechanics of data analysis, but to date they do not provide much help with the design and strategy of the analysis. As such, there is a risk that these packages will be misused by statistically inexperienced users. We propose the use of knowledge-based interfaces to support this category of users in statistical evaluations. This paper discusses our experiences from the implementation of a knowledge-based system called MAXITAB, which provides guidance in the processes of data analysis and interpretation and has been programmed as an interface to the statistical package MINITAB.
APA, Harvard, Vancouver, ISO, and other styles
5

Howel, D., R. P. Hirsch, and R. K. Riegelman. "Statistical First Aid: Interpretation of Health Research Data." Biometrics 50, no. 4 (December 1994): 1231. http://dx.doi.org/10.2307/2533472.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Simeonov, V., I. Stanimirova, and S. Tsakovski. "Multivariate statistical interpretation of coastal sediment monitoring data." Fresenius' Journal of Analytical Chemistry 370, no. 6 (July 1, 2001): 719–22. http://dx.doi.org/10.1007/s002160100863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Thompson, W. D. "Statistical criteria in the interpretation of epidemiologic data." American Journal of Public Health 77, no. 2 (February 1987): 191–94. http://dx.doi.org/10.2105/ajph.77.2.191.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Hurich, C. A., and A. Kocurko. "Statistical approaches to interpretation of seismic reflection data." Tectonophysics 329, no. 1-4 (December 2000): 251–67. http://dx.doi.org/10.1016/s0040-1951(00)00198-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lee, Dong Kyu. "Data transformation: a focus on the interpretation." Korean Journal of Anesthesiology 73, no. 6 (December 1, 2020): 503–8. http://dx.doi.org/10.4097/kja.20137.

Full text
Abstract:
Several assumptions, such as normality, a linear relationship, and homoscedasticity, are frequently required by parametric statistical analysis methods. Data collected from clinical situations or experiments often violate these assumptions. Variable transformation provides an opportunity to make such data available for parametric statistical analysis without statistical errors. The purpose of variable transformation is to enable parametric statistical analysis; its final goal is correct interpretation of results obtained with transformed variables. Variable transformation usually changes the original characteristics of the variables and the nature of their units, so back-transformation is crucial for interpreting the estimated results. This article introduces general concepts of variable transformation, focusing mainly on logarithmic transformation. Back-transformation and other important considerations are also described herein.
APA, Harvard, Vancouver, ISO, and other styles
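The log-transform / back-transform workflow described in this abstract can be illustrated with a short Python sketch: the analysis is done on the log scale, and the estimate and its confidence limits are exponentiated back, yielding a geometric mean. The simulated data are an assumption for illustration.

```python
# Minimal sketch: analyse a right-skewed variable on the log scale, then
# back-transform the estimate and its confidence limits.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.lognormal(mean=1.0, sigma=0.8, size=60)   # skewed measurements

logx = np.log(x)                                   # transform
m, se = logx.mean(), logx.std(ddof=1) / np.sqrt(len(logx))
ci_log = stats.t.interval(0.95, df=len(logx) - 1, loc=m, scale=se)

geo_mean = np.exp(m)                               # back-transform the estimate
ci = np.exp(ci_log)                                # back-transform the limits
print(f"geometric mean = {geo_mean:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```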
10

Queiroz, Tamires, Carlos Monteiro, Liliane Carvalho, and Karen François. "Interpretation of Statistical Data: The Importance of Affective Expressions." Statistics Education Research Journal 16, no. 1 (May 31, 2017): 163–80. http://dx.doi.org/10.52041/serj.v16i1.222.

Full text
Abstract:
In recent years, research on the teaching and learning of statistics has emphasized that the interpretation of data is a complex process involving cognitive and technical aspects. However, it is also a human activity that involves contextual and affective aspects. This view is in line with research on affectivity and cognition. While affective aspects are recognized as important for the interpretation of data, they have not been sufficiently discussed in the literature. This paper examines topics from an empirical study that investigated the influence of affective expressions during the interpretation of statistical data by final-year undergraduate students of statistics and pedagogy. These two university courses have different curricular components, which are related to specific goals in the students' future professional careers. The results suggest that, despite the differing academic backgrounds of the two groups, affective expressions were the most frequent category used by participants during the interpretation of the research assignments. First published May 2017 in the Statistics Education Research Journal Archives.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Statistical Data Interpretation"

1

Senan, Campos Oriol. "Statistical tools for classification, interpretation and prediction of biological data." Doctoral thesis, Universitat Rovira i Virgili, 2017. http://hdl.handle.net/10803/458361.

Full text
Abstract:
Omics technologies promise a new systemic approach to biology. We already know the genome of many species, and an approximate number of proteins, but we do not know how many metabolites are present in an organism or in a biological sample. Even with the technique best suited to metabolite identification, usually only 20–30 metabolites are identified after hard manual work. To address this, we developed CliqueMS, an algorithm that tackles one of the main bottlenecks in the annotation of untargeted metabolomic experiments: the correct grouping and annotation of the multiple signals produced by a single metabolite. In another investigation of the thesis, we explore how to combine several sources of omics data; in a practical application we study the therapeutic effect of hibiscus by analyzing the metabolic and gene-expression responses after its ingestion. The last investigation tackles a complex biological process, thrombosis. We study how the interpretation and prediction of platelet deposition, the trigger of thrombosis, change when different computational models are used. With these models we demonstrate that platelet deposition can be predicted from platelet concentration, the vessel tissue, and a few other variables, showing a possible clinical application: a model fitted to data from a group of patients can be used to predict a variable that is very difficult to measure, such as platelet deposition, in a new patient.
APA, Harvard, Vancouver, ISO, and other styles
2

Patrick, Ellis. "Statistical methods for the analysis and interpretation of RNA-Seq data." Thesis, The University of Sydney, 2013. http://hdl.handle.net/2123/10438.

Full text
Abstract:
In the post-genomic era, sequencing technologies have become a vital tool in the global analysis of biological systems. RNA-Seq, the sequencing of messenger RNA, in particular has the potential to answer many diverse and interesting questions about the inner workings of cells. Despite the decreasing cost of sequencing data, the majority of RNA-Seq experiments are still suffering from low replication numbers. The statistical methodology for dealing with low replicate RNA-Seq experiments is still in its infancy and has room for further development. Incorporating additional information from publicly accessible databases may provide a plausible avenue to overcome the shortcomings of low replication. Not only could this additional information improve on the ability to find statistically significant signal but this signal should also be more biologically interpretable. This thesis is separated into three distinct statistical problems that arise when processing and analysing RNA-Seq data. Firstly, the use of experimental data to customise gene annotations is proposed. When customised annotations are used to summarise read counts, the corresponding measures of transcript abundance include more information than alternate summarisation approaches and offer improved concordance with qRT-PCR data. A moderation methodology that exploits external estimates of variation is then developed to address the issue of small sample differential expression analysis. This approach performs favourably against existing approaches when comparing gene rankings and sensitivity. With the aim of identifying groups of miRNA-mRNA regulatory relationships, a framework for integrating various databases of prior knowledge with small sample miRNA-Seq and mRNA-Seq data is then outlined. This framework appears to identify more signal than simpler approaches and also provides highly interpretable models of miRNA-mRNA regulation. To conclude, a small sample miRNA-Seq and mRNA-Seq experiment is presented that seeks to discover miRNA-mRNA regulatory relationships associated with loss of Notch2 function and its links to neurodegeneration. This experiment is used to illustrate the methodologies developed in this thesis.
APA, Harvard, Vancouver, ISO, and other styles
3

Chamma, Ahmad. "Statistical interpretation of high-dimensional complex prediction models for biomedical data." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG028.

Full text
Abstract:
Modern large health datasets represent population characteristics in multiple modalities, including brain imaging and socio-demographic data. These large cohorts make it possible to predict and understand individual outcomes, leading to promising results in the epidemiological context of forecasting the occurrence of diseases, health outcomes, or other events of interest. As data collection expands into different scientific domains, such as brain imaging and genomic analysis, variables are related by complex, possibly non-linear dependencies, along with high degrees of correlation. As a result, popular models such as linear and tree-based techniques are no longer effective in such high-dimensional settings. Powerful non-linear machine learning algorithms, such as Random Forests (RFs) and Deep Neural Networks (DNNs), have become important tools for characterizing inter-individual differences and predicting biomedical outcomes, such as brain age. Explaining the decision process of machine learning algorithms is crucial both to improve the performance of a model and to aid human understanding. This can be achieved by assessing the importance of variables. Traditionally, scientists have favored simple, transparent models such as linear regression, where the importance of variables can be easily measured by coefficients. However, with the use of more advanced methods, direct access to the internal structure has become limited and/or uninterpretable from a human perspective. As a result, these methods are often referred to as "black box" methods. Standard approaches based on Permutation Importance (PI) assess the importance of a variable by measuring the decrease in the loss score when the variable of interest is replaced by its permuted version. While these approaches increase the transparency of black box models and provide statistical validity, they can produce unreliable importance assessments when variables are correlated. The goal of this work is to overcome the limitations of standard permutation importance by integrating conditional schemes. We therefore investigate two model-agnostic frameworks, Conditional Permutation Importance (CPI) and Block-Based Conditional Permutation Importance (BCPI), which effectively account for correlations between covariates and overcome the limitations of PI. We present two new algorithms designed to handle situations with correlated variables, whether grouped or ungrouped. Our theoretical and empirical results show that CPI provides computationally efficient and theoretically sound methods for evaluating individual variables. The CPI framework guarantees type-I error control and produces a concise selection of significant variables in large datasets. BCPI presents a strategy for managing both individual and grouped variables. It integrates statistical clustering and uses prior knowledge of grouping to adapt the DNN architecture using stacking techniques. This framework is robust and maintains type-I error control even in scenarios with highly correlated groups of variables, and it performs well on various benchmarks. Empirical evaluations of our methods on several biomedical datasets showed good face validity. Our methods have also been applied to multimodal brain data in addition to socio-demographics, paving the way for new discoveries and advances in the targeted areas. The CPI and BCPI frameworks are proposed as replacements for conventional permutation-based methods; they provide improved interpretability and reliability in estimating variable importance for high-performance machine learning models.
APA, Harvard, Vancouver, ISO, and other styles
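For orientation, the snippet below shows standard permutation importance (PI) with scikit-learn, the baseline that the thesis's CPI and BCPI frameworks extend to handle correlated variables. It is not an implementation of CPI itself, and the dataset and model choices are assumptions.

```python
# Minimal sketch of plain permutation importance (the baseline method).
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pi = permutation_importance(model, X_te, y_te, n_repeats=30, random_state=0)

for j, (mean, std) in enumerate(zip(pi.importances_mean, pi.importances_std)):
    print(f"feature {j}: importance = {mean:.3f} +/- {std:.3f}")
# Note: with strongly correlated features, plain PI can be misleading,
# which is exactly the limitation the conditional (CPI/BCPI) schemes address.
```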
4

Li, Bin. "Statistical learning and predictive modeling in data mining." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1155058111.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhang, Lu. "Analysis and Interpretation of Complex Lipidomic Data Using Bioinformatic Approaches." Thesis, Boston College, 2012. http://hdl.handle.net/2345/2656.

Full text
Abstract:
Thesis advisor: Jeffrey H. Chuang
The field of lipidomics has progressed rapidly since its inception only a decade ago. Technological revolutions in mass spectrometry, chromatography, and computational biology now enable high-throughput, high-accuracy quantification of the cellular lipidome. One significant improvement of these technologies is that lipids can now be identified and quantified as individual molecular species. Lipidomics provides an additional layer of information to genomics and proteomics and opens a new opportunity for furthering our understanding of cellular signaling networks and physiology, which has broad therapeutic value. As with other 'omics sciences, these new technologies are producing vast amounts of lipidomic data, which require sophisticated statistical and computational approaches for analysis and interpretation. However, computational tools for utilizing such data are sparse. The complexity of lipid metabolic systems and the fact that lipid enzymes remain poorly understood also present challenges to computational lipidomics. The focus of my dissertation has been the development of novel computational methods for the systematic study of lipid metabolism in cellular function and human disease using lipidomic data. In this dissertation, I first present a mathematical model describing the cardiolipin molecular species distribution in steady state and its relationship with fatty acid chain composition. Knowledge of this relationship facilitates determination of isomeric species for complex lipids, providing more detailed information beyond the current limits of mass spectrometry technology. I also correlate lipid species profiles with diseases and predict potential therapeutics. Second, I present statistical studies of the mechanisms influencing phosphatidylcholine and phosphatidylethanolamine molecular architectures, including a statistical approach to examine the dependence of sn1 and sn2 acyl chain regulatory mechanisms. Third, I describe a novel network inference approach and illustrate a dynamic model of ethanolamine glycerophospholipid acyl chain remodeling. The model is the first that accurately and robustly describes lipid species changes in pulse-chase experiments. A key outcome is that the deacylation and reacylation rates of individual acyl chains can be determined, and the resulting rates explain the well-known prevalence of sn1 saturated chains and sn2 unsaturated chains. Lastly, I summarize the work and remark on future directions for lipidomics.
Thesis (PhD) — Boston College, 2012
Submitted to: Boston College. Graduate School of Arts and Sciences
Discipline: Biology
APA, Harvard, Vancouver, ISO, and other styles
6

Knox, Kathryn M. G. "Statistical interpretation of a veterinary hospital database : from data to decision support." Thesis, University of Glasgow, 1998. http://theses.gla.ac.uk/6735/.

Full text
Abstract:
Research was undertaken to investigate whether data maintained within a veterinary hospital database could be exploited such that important medical information could be realised. At the University of Glasgow Veterinary School (GUVS), a computerised hospital database system, which had maintained biochemistry and pathology data for a number of years, was upgraded and expanded to enable recording of signalment, historical and clinical data for referral cases. Following familiarisation with the computerised database, clinical diagnosis and biochemistry data pertaining to 740 equine cases were extracted. Graphical presentation of the results obtained for each of 18 biochemistry parameters investigated indicated that the distributions of the data were variable. This had important implications with respect to the statistical techniques which were subsequently applied, and also to the appropriateness of the reference range method currently used for interpretation of clinical biochemistry data. A percentile analysis was performed for each of the biochemistry parameters; data were grouped into ten appropriate percentile band intervals; and the corresponding diagnoses tabulated and ranked according to frequency. Adoption of a Bayesian method enabled determination of how many times more likely a diagnosis was than before the biochemistry parameter concentration had been ascertained. The likelihood ratio was termed the "Biochemical Factor". Consequently, a measurement on a parameter, such as urea, could be classified on the percentile scale, and a diagnosis, such as hepatopathy, judged to be less or many times more likely, based on the numerical evaluation of the Biochemical Factor. One issue associated with the interrogation of the equine cases was that the diagnoses were clinical in origin, and, because they may have been made with the assistance of biochemistry data, this may have yielded biased results. Although this was considered unlikely to have affected the findings to a large extent, a database containing biochemistry and post mortem diagnosis data for cattle was also assessed.
APA, Harvard, Vancouver, ISO, and other styles
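The "Biochemical Factor" described in this abstract is a likelihood ratio, and the calculation can be sketched in a few lines of Python. The simulated records, column names, and diagnosis labels below are assumptions used only to illustrate the percentile-band calculation.

```python
# Minimal sketch: how much more (or less) likely a diagnosis becomes once a
# biochemistry result falls in a given percentile band. All data are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 740
df = pd.DataFrame({
    "urea": rng.gamma(shape=4.0, scale=2.0, size=n),
    "diagnosis": rng.choice(["hepatopathy", "colic", "other"], size=n,
                            p=[0.15, 0.35, 0.50]),
})

# ten percentile bands for the parameter of interest
df["band"] = pd.qcut(df["urea"], q=10, labels=False)

def biochemical_factor(data, diagnosis, band):
    """P(band | diagnosis) / P(band) = P(diagnosis | band) / P(diagnosis)."""
    p_band = (data["band"] == band).mean()
    p_band_given_dx = (data.loc[data["diagnosis"] == diagnosis, "band"] == band).mean()
    return p_band_given_dx / p_band

print(biochemical_factor(df, "hepatopathy", band=9))   # top decile of urea values
```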
7

Leek, Jeffrey Tullis. "Surrogate variable analysis /." Thesis, Connect to this title online; UW restricted, 2007. http://hdl.handle.net/1773/9586.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dinh, Phillip V. "Some methods for the analysis of skewed data /." Thesis, Connect to this title online; UW restricted, 2006. http://hdl.handle.net/1773/9546.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sloan, Lauren Elizabeth. "Methods for analysis of missing data using simulated longitudinal data with a binary outcome." Oklahoma City : [s.n.], 2005.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Mwewa, Chilufya. "Statistical interpretation of exotics monojet data in search of an invisibly decaying Higgs Boson." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/9216.

Full text
Abstract:
Includes bibliographical references.
Following the recent discovery of a Standard Model Higgs-like particle at the Large Hadron Collider, this study searches for evidence of invisible decays of this particle. Assuming that this is the Standard Model Higgs boson, its decay to invisible particles is not expected to be measurable in the current data. However, it could have a large contribution from its decay to stable non-Standard Model particles such as the hypothetical dark matter particles. This study corresponds to 4.7 fb⁻¹ of 7 TeV proton-proton collisions and 20.3 fb⁻¹ of 8 TeV proton-proton collisions. At the time of thesis submission, the 8 TeV results were not unblinded by the ATLAS Collaboration, so toy-data are presented here to demonstrate the procedure. The performance of the statistical framework to be used in the combination of the 7 TeV data with the real 8 TeV data is assessed and is found to perform very well. The results are interpreted to set 95% confidence level limits on the branching ratio to invisible particles of the newly discovered Higgs-like particle at a mass of 125 GeV. Limits are also set on the production cross section × branching ratio of additional Higgs-like particles that decay invisibly in the mass range 115 GeV to 300 GeV. In the combination of the 7 TeV data and 8 TeV toy-data, an expected (observed) upper limit of 0.89 (0.59) is set on the branching ratio to invisible particles of a 125 GeV Higgs boson. In the mass range 115 to 300 GeV, no excess beyond the Standard Model expectation is observed.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Statistical Data Interpretation"

1

Keller, Herbert, and Ch. Trendelenburg, eds. Data Presentation / Interpretation. Berlin: W. de Gruyter, 1989.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cook, W. Rupert. Canadian statistical data: An introduction to their interpretation. 2nd ed. Sainte-Foy, Quebec: Presses de l'Université du Québec, 1994.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cook, W. Rupert. Canadian statistical data: An introduction to sources and interpretation. [Canada]: Micromedia Ltd., 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Cook, W. Rupert. Canadian statistical data: An introduction to source and interpretation. Toronto: Micromedia, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Cook, W. Rupert. Canadian statistical data: An introduction to sources and interpretation. Toronto: Micromedia, 1986.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Riegelman, Richard K., ed. Statistical first aid: Interpretation of health research data. Boston: Blackwell Scientific Publications, 1992.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Guy, Christopher S., Michael L. Brown, and American Fisheries Society, eds. Analysis and interpretation of freshwater fisheries data. Bethesda, Md.: American Fisheries Society, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Gibbons, Robert D., ed. Longitudinal data analysis. Hoboken, NJ: J. Wiley, 2006.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Taylor, John K. Statistical techniques for data analysis. Chelsea, Mich: Lewis Publishers, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Cihon, Cheryl, ed. Statistical techniques for data analysis. 2nd ed. Boca Raton: Chapman & Hall/CRC, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Statistical Data Interpretation"

1

Russ, John C., and Robert T. Dehoff. "Statistical Interpretation of Data." In Practical Stereology, 149–81. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-1233-2_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gaylor, David W. "Statistical Interpretation of Toxicity Data." In Toxic Substances and Human Risk, 77–91. New York, NY: Springer US, 1987. http://dx.doi.org/10.1007/978-1-4684-5290-7_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Politser, P. E. "Chapter 2.1. How to Make Laboratory Information More Informative: Psychological and Statistical Considerations." In Data Presentation / Interpretation, edited by H. Keller and Ch Trendelenburg, 11–32. Berlin, Boston: De Gruyter, 1989. http://dx.doi.org/10.1515/9783110869880-005.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ahlberg, Ernst, Ola Spjuth, Catrin Hasselgren, and Lars Carlsson. "Interpretation of Conformal Prediction Classification Models." In Statistical Learning and Data Sciences, 323–34. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-17091-6_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Hryniewicz, Olgierd. "Possibilistic interpretation of fuzzy statistical tests." In Statistical Modeling, Analysis and Management of Fuzzy Data, 226–38. Heidelberg: Physica-Verlag HD, 2002. http://dx.doi.org/10.1007/978-3-7908-1800-0_15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Svatoňová, Hana, and Radovan Šikl. "Cognitive Aspects of Interpretation of Image Data." In Mathematical-Statistical Models and Qualitative Theories for Economic and Social Sciences, 161–75. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-54819-7_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Avseth, Per, Tapan Mukerji, Gary Mavko, and Ezequiel Gonzalez. "Integrating statistical rock physics and sedimentology for quantitative seismic interpretation." In Subsurface Hydrology: Data Integration for Properties and Processes, 45–60. Washington, D. C.: American Geophysical Union, 2007. http://dx.doi.org/10.1029/171gm06.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Dan, Yuzhou Luo, Nan Singhasemanon, and Kean S. Goh. "Quantitative Interpretation of Surface Water Monitoring Data Using Physical and Statistical Models." In Pesticides in Surface Water: Monitoring, Modeling, Risk Assessment, and Management, 377–89. Washington, DC: American Chemical Society, 2019. http://dx.doi.org/10.1021/bk-2019-1308.ch019.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Guthrie, William F., Hung-kung Liu, Andrew L. Rukhin, Blaza Toman, Jack C. M. Wang, and Nien-fan Zhang. "Three Statistical Paradigms for the Assessment and Interpretation of Measurement Uncertainty." In Data Modeling for Metrology and Testing in Measurement Science, 1–45. Boston: Birkhäuser Boston, 2008. http://dx.doi.org/10.1007/978-0-8176-4804-6_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Arakawa, Kazuharu, and Masaru Tomita. "Merging Multiple Omics Datasets In Silico: Statistical Analyses and Data Interpretation." In Methods in Molecular Biology, 459–70. Totowa, NJ: Humana Press, 2013. http://dx.doi.org/10.1007/978-1-62703-299-5_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Statistical Data Interpretation"

1

He, Xinwei. "Statistical Interpretation and Modeling Analysis of Multidimensional Complicated Computer Data." In 2021 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). IEEE, 2021. http://dx.doi.org/10.1109/icpics52425.2021.9524118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cichecki, P., E. Gulski, J. J. Smit, R. Jongen, and F. Petzold. "Interpretation of MV power cables PD diagnostic data using statistical analysis." In 2008 IEEE International Symposium on Electrical Insulation. IEEE, 2008. http://dx.doi.org/10.1109/elinsl.2008.4570266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wagner, Frank R., Anne Hildenbrand, Laurent Gallais, Hassan Akhouayri, Mireille Commandre, and Jean-Yves Natoli. "Statistical interpretation of S-on-1 data and the damage initiation mechanism." In Boulder Damage Symposium XL Annual Symposium on Optical Materials for High Power Lasers, edited by Gregory J. Exarhos, Detlev Ristau, M. J. Soileau, and Christopher J. Stolz. SPIE, 2008. http://dx.doi.org/10.1117/12.804422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Gite, Priti, and A. S. Sindekar. "Interpretation of sweep frequency response data (SFRA) using graphical and statistical technique." In 2017 International Conference of Electronics, Communication and Aerospace Technology (ICECA). IEEE, 2017. http://dx.doi.org/10.1109/iceca.2017.8212810.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Michael, Nikolaos A., Christian Scheibe, and Neil W. Craigie. "Automations in Chemostratigraphy: Toward Robust Chemical Data Analysis and Interpretation." In SPE Middle East Oil & Gas Show and Conference. SPE, 2021. http://dx.doi.org/10.2118/204892-ms.

Full text
Abstract:
Elemental chemostratigraphy has become an established stratigraphic correlation technique over the last 15 years. Geochemical data are generated from rock samples (e.g., ditch cuttings, cores, or hand specimens) for up to c. 50 elements in the range Na–U of the periodic table using various analytical techniques. The data are commonly displayed and interpreted as ratios, indices, and proxy values plotted in profile form against depth. The large number of possible combinations of the determined elements (more than a thousand) makes it time-consuming to identify the meaningful variations that yield correlative chemostratigraphic boundaries and zones between wells. It also means that 30–40% of the information, which may be crucial to understanding the geological processes, is not used for the correlations. Automation and artificial intelligence (AI) are envisaged as likely solutions to this challenge. Statistical and machine learning techniques are tested as a first step toward automating a workflow that defines (chemo-)stratigraphic boundaries and identifies geological formations. The workflow commences with a quality check of the input data and then applies principal component analysis (PCA) as a multivariate statistical method. PCA is used to minimize the number of elements/ratios plotted in profile form, whilst simultaneously identifying multidimensional relationships between them. A statistical boundary-picking method is then applied to define chemostratigraphic zones, whose reliability is determined using quartile analysis, which tests the overlap of chemical signals across these statistical boundaries. Machine learning via discriminant function analysis (DFA) has been developed to predict the placement of correlative boundaries between adjacent sections/wells. The proposed workflow has been tested on various geological formations and areas in Saudi Arabia. The chemostratigraphic correlations proposed using this workflow broadly correspond to those defined in the standard workflow by experienced chemostratigraphers, while interpretation times and subjectivity are reduced. While machine learning via DFA is still being researched, early results of the workflow are very encouraging. A user-friendly software application with workflows and algorithms, ultimately leading to automation of the process, is under development.
APA, Harvard, Vancouver, ISO, and other styles
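A minimal Python sketch of the workflow outlined in the abstract above: PCA to condense many elemental variables, followed by a discriminant analysis (scikit-learn's linear discriminant analysis standing in for DFA) that predicts zone membership in a new well. The synthetic data and the number of components are assumptions.

```python
# Minimal sketch: PCA for dimension reduction, then a discriminant classifier
# to assign chemostratigraphic zones; all data here are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 50))             # 400 samples x 50 element ratios
zones = rng.integers(0, 4, size=400)       # zones defined in a key well
X[zones == 2] += 1.5                       # give one zone a distinct signature

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    LinearDiscriminantAnalysis())
clf.fit(X, zones)

X_new = rng.normal(size=(5, 50)) + 1.5     # cuttings from an adjacent well
print("predicted zones:", clf.predict(X_new))
```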
6

Chan, Vivian. "Promoting statistical literacy among students." In Statistics education for Progress: Youth and Official Statistics. International Association for Statistical Education, 2013. http://dx.doi.org/10.52041/srap.13701.

Full text
Abstract:
In this knowledge-based era, many decisions with a possible impact on people and the environment are backed by statistical considerations. The Census and Statistics Department (C&SD), as the central statistical office in Hong Kong, has been playing an active role in promoting the proper and effective application and interpretation of statistics among students, the future pillars of our society. Over the years, C&SD has adopted a variety of means to reach out to students with a view to equipping our future generation with the necessary statistical knowledge and skills in an increasingly data-centric world. In particular, continuous efforts have been made to foster statistical education, including facilitating easy access to official statistics, organising talks and visits for students, and collaborating with the local statistical community. This paper discusses in detail how C&SD promotes and enhances the statistical literacy of students in Hong Kong.
APA, Harvard, Vancouver, ISO, and other styles
7

Isphording, Wayne C. "Faulty Statistical Interpretation of Geochemical Data: Results Can Be Fatal in a Court-of-Law!" In 68th Annual GSA Southeastern Section Meeting - 2019. Geological Society of America, 2019. http://dx.doi.org/10.1130/abs/2019se-326385.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Johnston, Carol. "Statistical Analysis of Fatigue Test Data." In ASME 2017 36th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/omae2017-62212.

Full text
Abstract:
The offshore environment contains many sources of cyclic loading. Standard design S-N curves, such as those in DNVGL-RP-C203, are usually assigned to ensure a particular design life can be achieved for a particular set of anticipated loading conditions. Girth welds are often the ‘weak link’ in terms of fatigue strength and so it is important to show that girth welds made using new procedures for new projects that are intended to be used in fatigue sensitive risers or flowlines do indeed have the required fatigue performance. Alternatively, designers of new subsea connectors, used for example in tendons for tension leg platforms, mooring applications or well-heads which will experience cyclic loading in service, also wish to verify the fatigue performance of their new designs. Often operators require contractors to carry out resonance fatigue tests on representative girth welds in order to show that girth welds made using new procedures qualify to the required design S-N curve. Operators and contractors must then interpret the results, which is not necessarily straightforward if the fatigue lives are lower than expected. Many factors influence a component’s fatigue strength so there is usually scatter in results obtained when a number of fatigue tests are carried out on real, production standard components. This scatter means that it is important first to carry out the right number of tests in order to obtain a reasonable understanding of the component’s fatigue strength, and then to interpret the fatigue test results properly. A working knowledge of statistics is necessary for both specifying the test programme and interpreting the test results and there is often confusion over various aspects of test specification and interpretation. This paper describes relevant statistical concepts in a way that is accessible to non-experts and that can be used, practically, by designers. The paper illustrates the statistical analysis of test data with examples of the ‘target life’ approach (that is now included in BS7608:2014 + A1) and the equivalent approach in DNVGL-RP-C203, which uses the stress modification factor. It gives practical examples to designers of a pragmatic method that can be used when specifying test programmes and interpreting the results obtained from tests carried out during qualification programmes, which for example, aim to determine whether girth welds made using a new procedure qualify to a particular design curve. It will help designers who are tasked with specifying test programmes to choose a reasonable number of test specimens and stress ranges, and to understand the outcome when results have been obtained.
APA, Harvard, Vancouver, ISO, and other styles
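The kind of statistical treatment the paper discusses can be sketched as follows: fit a mean S-N curve by regressing log N on log S and derive a design curve as the mean minus two standard deviations of log N, the usual convention in BS 7608-style analysis. The test data below are invented for illustration and do not represent any real weld qualification results.

```python
# Minimal sketch: mean S-N curve and a mean-minus-two-standard-deviations
# design curve fitted to (invented) fatigue test results.
import numpy as np

stress = np.array([120, 140, 160, 180, 200, 220], dtype=float)   # MPa
cycles = np.array([2.1e6, 9.5e5, 4.3e5, 2.4e5, 1.2e5, 7.0e4])    # cycles to failure

logS, logN = np.log10(stress), np.log10(cycles)
slope, intercept = np.polyfit(logS, logN, deg=1)     # mean curve: logN = b + m*logS
resid = logN - (intercept + slope * logS)
sd = resid.std(ddof=2)                               # scatter about the mean curve

design_intercept = intercept - 2.0 * sd              # design curve offset
print(f"mean curve:   logN = {intercept:.2f} {slope:+.2f} * logS")
print(f"design curve: logN = {design_intercept:.2f} {slope:+.2f} * logS")
```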
9

Rekik, Karim, Abdelkabir Bouyghf, Olfa Zened, and Tanya Kontsedal. "Augmented Learning Parameter Advisor for Wellbore Domain Interpretations." In ADIPEC. SPE, 2023. http://dx.doi.org/10.2118/216491-ms.

Full text
Abstract:
The Parameter Advisor introduces an AI-powered solution for automating the selection of optimal parameter values in wellbore data interpretation. The aim is to reduce the effort and time required for accurate interpretations. The software leverages machine learning algorithms, a comprehensive knowledge base, and collaboration among experts to enhance the interpretation process. The overall approach includes data gathering, quality control, and validation. Relevant data are collected and stored in a cloud storage system. The software applies statistical techniques and unsupervised learning algorithms to ensure accuracy and identify patterns in the data. Once the database is established, the software provides recommendations for future analyses based on past interpretations and expert knowledge. Tests conducted in the Groningen and Casabe fields showed 92% accuracy compared to manual interpretation, and the execution time for a shale volume interpretation was reduced by 64%. Collaborative studies with AkerBP in the Valhall field demonstrated an interpretation time reduction of approximately 70%. This study presents a novel approach in the petroleum industry by automating parameter initiation using machine learning and cloud computing. It improves the speed, accuracy, and efficiency of wellbore data interpretation. The software's ability to recommend optimal parameter values based on previous interpretations and expert knowledge contributes to better decision-making. The findings emphasize the effectiveness of machine learning in automating interpretation tasks and enabling non-experts to interpret data accurately. In summary, the proposed software streamlines the wellbore data interpretation process, reduces errors, and saves time. It enhances collaboration among experts, captures expert knowledge, and improves decision-making. The solution adds valuable insights to the petroleum industry by showcasing the power of machine learning in interpretation tasks and demonstrating its potential for transforming the field.
APA, Harvard, Vancouver, ISO, and other styles
10

Gattuso, Linda, and Marc Bourdeau. "Data analysis or how high school students “read” statistics." In Statistics Education and the Communication of Statistics. International Association for Statistical Education, 2005. http://dx.doi.org/10.52041/srap.05303.

Full text
Abstract:
In most countries, statistics is included in the mathematics curriculum and taught by mathematics teachers. This leads to students learning statistical concepts as mathematical ones, with more emphasis placed on being able to compute different measures (e.g., mean, median, standard deviation) than on their meaning and use. Moreover, in Quebec, the high school curriculum favours a scattered presentation of statistical concepts: tables and simple graphical representations are seen in the first year; averages, medians and histograms in the third; position measures in the fourth; and some aspects of correlation and standard deviation in the fifth. Some elements of probability are seen in the second year. But "statistics requires a different kind of thinking" (Cobb & Moore, 1997). Is it possible, by having students compute statistical measures, to foster the development of statistical thinking and prepare them to draw conclusions from different data sets, all important abilities for "reading" statistics and an essential part of communication? This study attempts to determine whether high school graduates develop the ability to interpret effectively the use and meaning of statistics (i.e., develop a "statistical way of thinking"). To this purpose we investigated (a) whether the mode of data representation, namely (1) a list of data, (2) a graphical display, or (3) the principal location and dispersion parameters (mean, median, quartiles, standard deviation, etc.), influences the students' answers; (b) whether students take the context of the data into account in their analysis; and (c) whether students' reasoning reveals "statistical thinking" as described in McGatha, Cobb and McClain (1998). A multiplicative argument combined with use of the context in which the data are presented is preferable to an argument using only one point, only one measure (usually the mean), or only arithmetic reasoning. A questionnaire with seven items asking students to choose from two or three samples and to justify the choice was presented to 141 fifth-year high school students in the three modes of presentation mentioned above. The results show that almost one third of the students revealed correct statistical thinking and another 41% took the whole sample into account. The majority of the explanations were linked to the context. However, for some students more difficult tasks seemed to trigger a more global interpretation.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Statistical Data Interpretation"

1

Teskey, D. J. Statistical interpretation of aeromagnetic data. Natural Resources Canada/ESS/Scientific and Technical Publishing Services, 1990. http://dx.doi.org/10.4095/128046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Detcheva, Albena K., and Vasil D. Simeonov. Multivariate Statistical Interpretation of a Data Set of Medieval Glass Fragments. "Prof. Marin Drinov" Publishing House of Bulgarian Academy of Sciences, January 2010. http://dx.doi.org/10.7546/crabs.2020.01.05.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Stetzenbach, Linda, Lauren Nemnich, and Davor Novosel. Statistical Analysis and Interpretation of Building Characterization, Indoor Environmental Quality Monitoring and Energy Usage Data from Office Buildings and Classrooms in the United States. Office of Scientific and Technical Information (OSTI), August 2009. http://dx.doi.org/10.2172/1004553.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Moeyaert, Mariola. Introduction to Meta-Analysis. Instats Inc., 2023. http://dx.doi.org/10.61700/9egp6tqy3koga469.

Full text
Abstract:
This seminar provides hands-on instruction covering the process of conducting a meta-analysis, from the planning stage through the selection of appropriate statistical techniques, the issues involved in analyzing data, and the interpretation of results. During the seminar you will learn how to use meta-analytic techniques to combine evidence across different research studies; integrate multiple studies into a single statistical framework; obtain precise and generalizable estimates of effect sizes; and explain differences arising from conflicting study results. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
APA, Harvard, Vancouver, ISO, and other styles
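The core meta-analytic calculation covered in the seminar, inverse-variance pooling of study effect sizes with a heterogeneity check, can be sketched in a few lines of Python; the effect sizes and variances below are invented for illustration.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling plus Cochran's Q.
import numpy as np
from scipy import stats

effects = np.array([0.30, 0.55, 0.12, 0.41])       # standardized mean differences
variances = np.array([0.04, 0.09, 0.02, 0.06])     # their sampling variances

w = 1.0 / variances                                 # fixed-effect weights
pooled = np.sum(w * effects) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci = pooled - 1.96 * se, pooled + 1.96 * se

q = np.sum(w * (effects - pooled) ** 2)             # Cochran's Q for heterogeneity
p_het = stats.chi2.sf(q, df=len(effects) - 1)
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), "
      f"Q = {q:.2f} (p = {p_het:.3f})")
```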
5

Moeyaert, Mariola. Introduction to Meta-Analysis. Instats Inc., 2023. http://dx.doi.org/10.61700/z1ui6nlaom67q469.

Full text
Abstract:
This seminar provides hands-on instruction covering the process of conducting a meta-analysis, from the planning stage through the selection of appropriate statistical techniques, the issues involved in analyzing data, and the interpretation of results. During the seminar you will learn how to use meta-analytic techniques to combine evidence across different research studies; integrate multiple studies into a single statistical framework; obtain precise and generalizable estimates of effect sizes; and explain differences arising from conflicting study results. An official Instats certificate of completion is provided at the conclusion of the seminar. For European PhD students, each seminar offers 2 ECTS Equivalent points.
APA, Harvard, Vancouver, ISO, and other styles
6

Panchenko, Liubov, and Andrii Khomiak. Education Statistics: Looking for Case-Study for Modeling. [s.n.], November 2020. http://dx.doi.org/10.31812/123456789/4461.

Full text
Abstract:
The article deals with the problem of using modeling in social statistics courses. Modeling allows the student-researcher to build one-dimensional and multidimensional models of the phenomena and processes being studied. Social statistics course programs from foreign universities (University of Arkansas; Athabasca University; HSE University, Russia; McMaster University, Canada) are analyzed. The article provides an example of using an education data set, the Guardian UK university ranking, in a social statistics course. Examples of research questions are given, and data analysis for these questions is performed (correlation, hypothesis testing, discriminant analysis). During the research, a discriminant model was built with the modified Guardian score as the group variable and nine predictors (course satisfaction, teaching quality, feedback, staff-student ratio, money spent on each student, and others). Students' satisfaction with feedback was found to be significantly lower than their satisfaction with teaching. The article notes that modeling and statistical analysis should be accompanied by a meaningful interpretation of the results. In this example, we discussed with students the essence of university ratings, the purpose of the Guardian ranking, the operationalization and measurement of concepts such as satisfaction with teaching and feedback, ways to use statistics in education, data sources, and so on. Ways of using these education data in group and individual student work are suggested.
APA, Harvard, Vancouver, ISO, and other styles
7

Kent, Jonathan, and Caroline Wallbank. The use of hypothesis testing in transport research. TRL, February 2021. http://dx.doi.org/10.58446/rrzh8247.

Full text
Abstract:
Hypothesis testing is a well-used statistical method for evaluating whether a proposition is true or false. A fundamental part of the testing procedure is the calculation and interpretation of a p-value, which represents the probability of a set of data being observed under the assumption that the proposition is true. This null hypothesis is then rejected if the p-value is less than a certain threshold, usually 0.05. In recent years, some members of the scientific community have called into question the validity of the hypothesis testing approach because it places so much emphasis on whether or not a value is above or below an arbitrary threshold. We think that hypothesis testing is still a valid method, but it is important that, as well as the p-value, additional information such as effect sizes is taken into account when interpreting results. In addition, there are alternative approaches, such as equivalence testing or Bayesian hypothesis testing, which should be considered in certain circumstances.
APA, Harvard, Vancouver, ISO, and other styles
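The report's central point, that a p-value should be read alongside an effect size rather than as a bare threshold decision, can be illustrated with a short Python sketch; the simulated "before/after" data are an assumption.

```python
# Minimal sketch: report the p-value together with a standardized effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
before = rng.normal(50.0, 8.0, size=40)     # e.g. outcomes at untreated sites
after = rng.normal(47.0, 8.0, size=40)      # e.g. outcomes after a treatment

t, p = stats.ttest_ind(before, after)

# Cohen's d as a simple standardized effect size
pooled_sd = np.sqrt((before.var(ddof=1) + after.var(ddof=1)) / 2.0)
d = (before.mean() - after.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
# A p-value just under 0.05 with a trivial d is a much weaker finding than the
# binary "significant / not significant" label suggests.
```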
8

Alwan, Iktimal, Dennis D. Spencer, and Rafeed Alkawadri. Comparison of Machine Learning Algorithms in Sensorimotor Functional Mapping. Progress in Neurobiology, December 2023. http://dx.doi.org/10.60124/j.pneuro.2023.30.03.

Full text
Abstract:
Objective: To compare the performance of popular machine learning (ML) algorithms in mapping the sensorimotor (SM) cortex and identifying the anterior lip of the central sulcus (CS). Methods: We evaluated support vector machines (SVMs), random forest (RF), decision trees (DT), single layer perceptron (SLP), and multilayer perceptron (MLP) against standard logistic regression (LR) for identifying the SM cortex, employing validated features from six minutes of NREM sleep icEEG data and applying standard common hyperparameters and 10-fold cross-validation. Each algorithm was tested using vetted features selected on the basis of statistical significance in classical univariate analysis (p < 0.05) and an extended set of 17 features representing power/coherence in different frequency bands, entropy, and interelectrode-based distance. The analysis was performed before and after weight adjustment for imbalanced data (w). Results: 7 subjects and 376 contacts were included. Before optimization, ML algorithms performed comparably using conventional features (median CS accuracy: 0.89, IQR [0.88-0.9]). After optimization, neural networks outperformed the others in terms of accuracy (MLP: 0.86), area under the curve (AUC) (SLPw, MLPw, MLP: 0.91), recall (SLPw: 0.82, MLPw: 0.81), precision (SLPw: 0.84), and F1-scores (SLPw: 0.82). SVM achieved the best specificity. Extending the number of features and adjusting the weights improved recall, precision, and F1-scores by 48.27%, 27.15%, and 39.15%, respectively, with gains or no significant losses in specificity and AUC across CS and Function (correlation r = 0.71 between the two clinical scenarios across all performance metrics, p < 0.001). Interpretation: Computational passive sensorimotor mapping is feasible and reliable. Feature extension and weight adjustments improve performance and counterbalance the accuracy paradox. Optimized neural networks outperform other ML algorithms even in binary classification tasks. The best-performing models and the MATLAB® routine employed in signal processing are available to the public at (Link 1).
APA, Harvard, Vancouver, ISO, and other styles
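A minimal sketch of the comparison framework the study describes: several standard classifiers evaluated with 10-fold cross-validation on a common feature matrix. The synthetic dataset below merely mimics the reported dimensions (376 contacts, 17 features, imbalanced classes); it is not the study's icEEG data, and the hyperparameters are illustrative.

```python
# Minimal sketch: compare standard classifiers with 10-fold cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=376, n_features=17, weights=[0.8, 0.2],
                           random_state=0)   # imbalanced, synthetic stand-in

models = {
    "LR": LogisticRegression(max_iter=1000, class_weight="balanced"),
    "SVM": SVC(class_weight="balanced"),
    "RF": RandomForestClassifier(class_weight="balanced", random_state=0),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    auc = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc")
    print(f"{name}: mean AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```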
9

Randolph, KaDonna C. Descriptive statistics of tree crown condition in the Southern United States and impacts on data analysis and interpretation. Asheville, NC: U.S. Department of Agriculture, Forest Service, Southern Research Station, 2006. http://dx.doi.org/10.2737/srs-gtr-94.

Full text
APA, Harvard, Vancouver, ISO, and other styles