To see the other types of publications on this topic, follow the link: FDI statistics.

Dissertations / Theses on the topic 'FDI statistics'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 34 dissertations / theses for your research on the topic 'FDI statistics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses across a wide variety of disciplines and organise your bibliography correctly.

1

Iyer, Vishwanath. "An adaptive single-step FDR controlling procedure." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/75410.

Full text
Abstract:
Statistics, Ph.D.
This research is focused on identifying a single-step procedure that, upon adapting to the data through estimating the unknown parameters, would asymptotically control the False Discovery Rate (FDR) when testing a large number of hypotheses simultaneously, and on exploring some of the characteristics of this procedure.
Temple University--Theses
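As a rough illustration of the plug-in idea behind adaptive FDR control (not the thesis's exact procedure), the sketch below estimates the proportion of true nulls with a Storey-type estimator and then picks a single common rejection cutoff so that the estimated FDR stays below the target level; all names are illustrative.

```python
import numpy as np

def adaptive_single_step_fdr(pvals, alpha=0.05, lam=0.5):
    """Plug-in single-step FDR control: estimate the proportion of true
    nulls pi0, then reject every p-value below one common cutoff chosen
    so the estimated FDR is at most alpha."""
    p = np.asarray(pvals)
    m = len(p)
    # Storey-type estimate of the proportion of true null hypotheses
    pi0 = min(1.0, (np.sum(p > lam) + 1) / (m * (1 - lam)))
    # candidate cutoffs: the observed p-values in increasing order
    t = np.sort(p)
    # estimated FDR at cutoff t[i]: pi0 * m * t[i] / (number rejected)
    fdr_hat = pi0 * m * t / np.arange(1, m + 1)
    ok = fdr_hat <= alpha
    cutoff = t[ok].max() if ok.any() else 0.0
    return p <= cutoff, cutoff
```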
APA, Harvard, Vancouver, ISO, and other styles
2

Vidal Puig, Santiago. "FAULT DIAGNOSIS TOOLS IN MULTIVARIATE STATISTICAL PROCESS AND QUALITY CONTROL." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/61292.

Full text
Abstract:
[EN] Accurate diagnosis of faults, both sensor faults and real process faults, has become more and more important for process monitoring (to minimize downtime, increase the safety of plant operation and reduce manufacturing cost). Quick and correct fault diagnosis is required in order to put our processes or products back on track before safety or quality is compromised. In the study and comparison of fault diagnosis methodologies, this thesis distinguishes between two different scenarios: methods for multivariate statistical quality control (MSQC) and methods for latent-based multivariate statistical process control (Lb-MSPC). In the first part of the thesis the state of the art on fault diagnosis and identification (FDI) is introduced. The second part of the thesis is devoted to fault diagnosis in multivariate statistical quality control (MSQC). The rationale of the most widely used methods for fault diagnosis in supervised scenarios, the requirements for their implementation, their strong points, their drawbacks and their relationships are discussed. The performance of the methods is compared using different performance indices on two different process data sets and in simulations. New variants and methods to improve diagnosis performance in MSQC are also proposed. The third part of the thesis is devoted to fault diagnosis in latent-based multivariate statistical process control (Lb-MSPC). The rationale of the most widely used methods for fault diagnosis in supervised Lb-MSPC is described and one of our proposals, the fingerprint contribution plot (FCP), is introduced. Finally, the thesis presents and compares the performance of these diagnosis methods in Lb-MSPC. The diagnosis results on two process data sets are compared using a new strategy based on the overall sensitivity and specificity.
Vidal Puig, S. (2016). FAULT DIAGNOSIS TOOLS IN MULTIVARIATE STATISTICAL PROCESS AND QUALITY CONTROL [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/61292
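The thesis's FCP method is not reproduced here, but the classical SPE contribution decomposition that such latent-based diagnosis builds on is standard and compact; a minimal sketch, assuming a PCA model fitted to in-control data and illustrative names throughout:

```python
import numpy as np

def spe_contributions(X_ref, x_new, n_components=2):
    """Classical contribution plot for the squared prediction error (SPE/Q):
    fit a PCA model on in-control data X_ref, project a new observation,
    and attribute the residual to individual variables."""
    mu = X_ref.mean(axis=0)
    sd = X_ref.std(axis=0, ddof=1)
    Z = (X_ref - mu) / sd
    # principal directions from the SVD of the scaled reference data
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                  # loadings, shape (p, k)
    z = (x_new - mu) / sd
    residual = z - P @ (P.T @ z)             # part not explained by the model
    spe = float(residual @ residual)         # overall Q statistic
    return spe, residual**2                  # per-variable contributions
```

Plotting the per-variable contributions as a bar chart is what points the engineer toward the variables responsible for an SPE alarm.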
APA, Harvard, Vancouver, ISO, and other styles
3

Banerjee, Bhramori. "Multiple Testing in the Presence of Correlations." Diss., Temple University Libraries, 2011. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/210207.

Full text
Abstract:
Statistics, Ph.D.
Simultaneous testing of multiple null hypotheses has now become an integral part of the statistical analysis of data arising from modern scientific investigations. Often the test statistics in such multiple testing problems are correlated. The research in this dissertation is motivated by the scope for improving or extending existing methods to incorporate correlation in the data. Sarkar (2008) proposes controlling the pairwise false discovery rate (Pairwise-FDR), which inherently takes into account the dependence among the p-values, making it more robust, less conservative and more powerful under dependence than the usual notion of FDR. In this dissertation, we further investigate the performance of Pairwise-FDR under a dependent mixture model. In particular, we consider a step-up method to control the Pairwise-FDR under this model, assuming that the correlation between any two p-values is the same (exchangeable). We also suggest improving this method by incorporating an estimate of the number of pairs of true null hypotheses developed under this model. Efron (2007, Journal of the American Statistical Association 102, 93-103) proposed a novel approach to incorporating dependence among the null p-values into a multiple testing method controlling false discoveries. In this dissertation, we investigate the scope of utilizing this approach by proposing alternative versions of adaptive Bonferroni and BH methods which estimate the number of true null hypotheses from the empirical null distribution introduced by Efron. These newer adaptive procedures are numerically shown to perform better than existing adaptive Bonferroni or BH methods over a wider range of dependence. A gene expression microarray data set is used to highlight the difference in results obtained upon applying the proposed and other adaptive BH methods. Another approach to addressing the presence of correlation is motivated by the scope for utilizing the dependence structure of the data towards further improving some multiple testing methods while maintaining control of some error rate. The dependence structure of the data is incorporated using pairwise weights. In this dissertation we propose a weighted version of the pairwise FDR (Sarkar, 2008) using pairwise weights and a method controlling the weighted Pairwise-FDR. We discuss the application of such weighted procedures and suggest some weighting schemes that generate pairwise weights.
Temple University--Theses
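Efron's empirical-null idea, which the adaptive procedures above build on, replaces the theoretical N(0,1) null for z-scores with a normal law fitted to the central bulk of the data. A crude central-matching sketch (Efron's papers use maximum-likelihood and analytic fits; this version is only illustrative):

```python
import numpy as np
from scipy import stats

def empirical_null_pvalues(z):
    """Estimate the null center and spread from the bulk of the z-scores
    (most of which are assumed null), then recompute two-sided p-values
    against N(delta, sigma^2) instead of the theoretical N(0, 1)."""
    delta = np.median(z)                                            # null center
    sigma = (np.percentile(z, 75) - np.percentile(z, 25)) / 1.349   # robust SD
    p = 2 * stats.norm.sf(np.abs((z - delta) / sigma))
    return p, delta, sigma
```

The re-centered p-values can then be fed to an adaptive Bonferroni or BH procedure in place of the theoretical ones.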
APA, Harvard, Vancouver, ISO, and other styles
4

Bennett, Paul J. "Fault detection on an experimental aircraft fuel rig using a Kalman filter based FDI screen." Thesis, Loughborough University, 2010. https://dspace.lboro.ac.uk/2134/8506.

Full text
Abstract:
Reliability is an important issue across industry. This is due to a number of drivers, such as the requirement for high safety levels within industries such as aviation, the need for mission success with military equipment, or the avoidance of monetary losses (due to unplanned outages) within the process and many other industries. The application of fault detection and identification helps to identify the presence of faults in order to improve mission success or increase the up-time of plant equipment. Implementation of such systems can take the form of pattern recognition, statistical and geometric classifiers, soft computing methods or complex model-based methods. This study deals with the latter, and focuses on a specific type of model, the Kalman filter. The Kalman filter is an observer which estimates the states of a system, i.e. the physical variables, based upon its current state and knowledge of its inputs. This relies upon a mathematical model of the system in order to predict the outputs of the system at any given time. Feedback from the plant corrects minor deviations between the system and the Kalman filter model. Comparison between this prediction of outputs and the real output provides the indication of the presence of a fault. On systems with several inputs and outputs, banks of these filters can be used in order to detect and isolate the various faults that occur in the process and its sensors and actuators. The thesis examines the application of the diagnostic techniques to a laboratory-scale aircraft fuel system test-rig. The first stage of the research project required the development of a mathematical model of the fuel rig. Test data acquired by experiment is used to validate the system model against the fuel rig. This nonlinear model is then simplified to create several linear state-space models of the fuel rig. These linear models are then used to develop the Kalman filter Fault Detection and Identification (FDI) system by appropriate tuning of the Kalman filter gains, careful choice of residual thresholds to determine fault condition boundaries, and logic to identify the location of the fault. Additional performance enhancements are also achieved by statistical evaluation of the residual signal produced and by automatic threshold calculation. The results demonstrate the positive capture of a fault condition and identification of its location in an aircraft fuel system test-rig. The types of fault captured are hard faults, such as sensor malfunction and actuator failure, which produce large deviations of the residual signals, and softer faults, such as performance degradation and fluid leaks in the tanks and pipes. Faults of a smaller magnitude are captured very well, albeit within a larger time range. The performance of the fault detection and identification was further improved by statistically evaluating the residual signal and by the development of automatic threshold determination. Identification of the location of the fault is managed by mapping the possible fault permutations to the Kalman filter behaviour, providing full discrimination between any faults present. Overall, the Kalman filter based FDI developed provided positive results in capturing and identifying a system fault on the test-rig.
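A bare-bones sketch of the residual-generation step described above, assuming generic linear state-space matrices rather than the fuel-rig model itself: on a healthy system the normalised innovations hover around zero, and a fault is flagged when they leave a tuned threshold band.

```python
import numpy as np

def kalman_residuals(A, C, Q, R, y_seq, x0, P0):
    """Innovation (residual) sequence of a linear Kalman filter; in a
    residual-based FDI scheme a fault is flagged when the innovations
    leave a threshold band around zero."""
    x, P = x0, P0
    residuals = []
    for y in y_seq:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # innovation: measured output minus predicted output
        r = y - C @ x
        S = C @ P @ C.T + R
        residuals.append(r / np.sqrt(np.diag(S)))   # normalised residual
        # update
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ r
        P = (np.eye(len(x)) - K @ C) @ P
    return np.array(residuals)

# a fault is declared when |residual| exceeds a tuned threshold, e.g. 3.0
```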
APA, Harvard, Vancouver, ISO, and other styles
5

Fan, Zhi Lei. "FDR control and a Cramér moderate deviation theorem for Hotelling's T²-statistic." Thesis, University of Macau, 2017. http://umaclib3.umac.mo/record=b3691388.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Woodworth, Michael Allen. "Fire Hazard Assessment for Highway Bridges with Thermal Mechanical Modeling." Diss., Virginia Tech, 2013. http://hdl.handle.net/10919/23683.

Full text
Abstract:
Bridges are critical pieces of infrastructure important to public safety and welfare. Fires have the potential to damage bridges and have been responsible for taking many bridges out of service. The hazard fire poses to bridges is a little-studied risk, unlike more common threats such as impact, scour and earthquake. Information on the rate of occurrence of bridge fires and on the mechanisms of structural response of bridges subjected to fire is vital to policy makers seeking to address the hazard rationally. The investigation presented developed frequency statistics of bridge fire incidents from several sources of vehicle accident and fire statistics. To further investigate the fire hazard, a computational model integrating the simulation of large fires with the simulation of bridge superstructure mechanical response was created. The simulation was used to perform a parametric study of fire size and location to investigate the relationship between these parameters and damage to the bridge superstructure. The statistics investigation resulted in an observed rate of fires due to vehicle accidents of approximately 175 per year. Approximately one of these per year was the result of a tanker truck carrying a flammable liquid, leading to extensive superstructure damage. The simulation showed that a tanker fire resulted in permanent damage to the bridge by several measures, whereas the effects of a bus fire were minimal. The simulations also demonstrated the mechanisms of bridge response; the importance of girder temperature in that response; and the differences in the response to a tanker fire that can lead to collapse.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
7

Budé, Lucas Michel. "On the improvement of students' conceptual understanding in statistics education." Maastricht : Maastricht : Universiteit Maastricht ; University Library, Universiteit Maastricht [host], 2007. http://arno.unimaas.nl/show.cgi?fid=9200.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kröger, Viktor. "Classification in Functional Data Analysis : Applications on Motion Data." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184963.

Full text
Abstract:
Anterior cruciate knee ligament injuries are common and well known, especially amongst athletes. These injuries often require surgeries and long rehabilitation programs, and can lead to function loss and re-injuries (Marshall et al., 1977). This work aims to explore the possibility of applying supervised classification to knee functionality, using different types of models and testing different divisions of classes. The data used is gathered through a performance test, where individuals perform one-leg hops with motion sensors attached to their bodies. The obtained data represents position over time, and is considered functional data. With functional data analysis (FDA), a process can be analysed as a continuous function of time, instead of being reduced to finite data points. FDA includes many useful tools, but also some challenges. A functional observation can, for example, be differentiated, a handy tool not found in the multivariate tool-box. The speed and acceleration can then be calculated from the obtained data. How to define "similarity" is, on the other hand, not as obvious as with point data. In this work, an FDA approach is taken to classifying knee kinematic data from a long-term follow-up study on knee ligament injuries. This work studies kernel functional classifiers and k-nearest neighbours models, and performs significance tests on the model accuracy using re-sampling methods. Additionally, depending on how similarity is defined, the models can distinguish different features of the data. Attempts at utilising more information through ensemble methods do not exceed the single models they are built from. Further, it is shown that classification on optimised sub-domains can be superior to classifiers using the full domain in terms of predictive power.
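For intuition, a k-nearest-neighbours classifier for discretised curves under an L2 distance between functions is easy to sketch; this is a generic FDA baseline with illustrative names, not the exact models of the thesis.

```python
import numpy as np
from collections import Counter

def l2_distance(f, g, t):
    """Approximate the L2 distance between two curves sampled on grid t."""
    return np.sqrt(np.trapz((f - g) ** 2, t))

def knn_functional(train_curves, train_labels, new_curve, t, k=5):
    """Classify a new curve by majority vote among its k nearest
    training curves under the functional L2 distance."""
    d = [l2_distance(c, new_curve, t) for c in train_curves]
    nearest = np.argsort(d)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Swapping the raw curves for their derivatives changes which features of the movement the classifier is sensitive to, which is exactly the "definition of similarity" issue the abstract raises.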
APA, Harvard, Vancouver, ISO, and other styles
9

Paranagama, Dilan C. "Correlation and variance stabilization in the two group comparison case in high dimensional data under dependencies." Diss., Kansas State University, 2011. http://hdl.handle.net/2097/13132.

Full text
Abstract:
Doctor of Philosophy
Department of Statistics
Gary L. Gadbury
Multiple testing research has undergone renewed focus in recent years as advances in high-throughput technologies have produced data on unprecedented scales. Much of the focus has been on false discovery rates (FDR) and related quantities that are estimated (or controlled for) in large-scale multiple testing situations. Recent papers by Efron have directly addressed this issue and incorporated measures to account for high-dimensional correlation structure when estimating false discovery rates and when estimating a density. Other authors have also proposed methods to control or estimate FDR under dependencies with certain assumptions. However, not much attention is given in the literature to the stability of the results obtained under dependencies. This work begins by demonstrating the effect of dependence structure on the variance of the number of discoveries and the false discovery proportion (FDP). The variance of the number of discoveries is derived, along with the density of a test statistic conditioned on the status (reject or fail to reject) of a different, correlated test. A closed-form solution for the correlation between test statistics is also derived. This correlation is a combination of correlations and variances of the data within the groups being compared. It is shown that these correlations among the test statistics affect the conditional density and alter the threshold for significance of a correlated test, causing instability in the results. The concept of performing tests within networks, Conditional Network Testing (CNT), is introduced. This method is based on the conditional density mentioned above and uses the correlation between test statistics to construct networks. A method to simulate realistic data with preserved dependence structures is also presented. CNT is evaluated using simple simulations and the proposed simulation method. In addition, existing methods that control false discovery rates are applied to t-tests and CNT for performance comparison. It is shown that the false discovery proportion and type I error proportions are smaller when using CNT than when using t-tests and, in general, results are more stable when applied to CNT. Finally, applications and steps to further improve CNT are discussed.
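A small simulation, not taken from the dissertation, illustrates its starting point: equicorrelation among null test statistics inflates the variance of the number of discoveries far beyond the binomial value seen under independence.

```python
import numpy as np

def discoveries_variance(rho, m=1000, n_sim=2000, seed=0):
    """Simulate m equicorrelated null z-statistics (pairwise correlation rho,
    induced by a shared Gaussian factor) and return the variance of the
    number of discoveries at the two-sided 5% level. Under independence this
    is about m * 0.05 * 0.95; positive dependence inflates it dramatically."""
    rng = np.random.default_rng(seed)
    zcrit = 1.96
    R = np.empty(n_sim)
    for i in range(n_sim):
        shared = rng.standard_normal()  # common factor inducing correlation
        z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.standard_normal(m)
        R[i] = np.sum(np.abs(z) > zcrit)
    return R.var()

# compare discoveries_variance(0.0) with discoveries_variance(0.5)
```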
APA, Harvard, Vancouver, ISO, and other styles
10

Pavlicova, Martina. "Thresholding FMRI images." The Ohio State University, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=osu1097769474.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Mahieu, Ronaldus Johannes. "Financial market volatility statistical models and empirical analysis /." Maastricht : Maastricht : Universitaire Pers Maastricht ; University Library, Maastricht University [Host], 1995. http://arno.unimaas.nl/show.cgi?fid=8347.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Clements, Nicolle. "Multiple Testing in Grouped Dependent Data." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/253695.

Full text
Abstract:
Statistics, Ph.D.
This dissertation is focused on multiple testing procedures to be used on data that are naturally grouped or possess a spatial structure. We propose a 'Two-Stage' procedure to control the False Discovery Rate (FDR) in situations where one-sided hypothesis testing is appropriate, such as astronomical source detection. Similarly, we propose a 'Three-Stage' procedure to control the mixed directional False Discovery Rate (mdFDR) in situations where two-sided hypothesis testing is appropriate, such as vegetation monitoring in remote-sensing NDVI data. The Two- and Three-Stage procedures have provable FDR/mdFDR control under certain dependence situations. We also present adaptive versions, which are examined in simulation studies. The 'stages' refer to testing hypotheses both group-wise and individually, motivated by the belief that the dependencies among the p-values associated with spatially oriented hypotheses occur more locally than globally. Thus, these staged procedures test hypotheses in groups that incorporate the local, unknown dependencies of neighboring p-values. If a group is found significant, the individual p-values within that group are investigated further. For the vegetation monitoring data, we extend the investigation by providing spatio-temporal models and forecasts for some regions where significant change was detected through the multiple testing procedure.
Temple University--Theses
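A simplified two-stage screen in the spirit of the procedure (the dissertation's exact group-level combination and error-rate accounting differ): Stage 1 tests groups using BH on Simes-combined group p-values, and Stage 2 runs BH within the selected groups only.

```python
import numpy as np

def bh_reject(pvals, alpha):
    """Benjamini-Hochberg step-up: return a boolean rejection mask."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

def two_stage_group_bh(groups, alpha=0.05):
    """Stage 1: screen groups via BH applied to Simes-combined p-values.
    Stage 2: run BH within each selected group. `groups` is a list of
    1-D arrays of p-values for spatially neighboring hypotheses."""
    simes = [np.min(np.sort(g) * len(g) / np.arange(1, len(g) + 1))
             for g in groups]
    keep = bh_reject(simes, alpha)
    return [bh_reject(g, alpha) if k else np.zeros(len(g), bool)
            for g, k in zip(groups, keep)]
```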
APA, Harvard, Vancouver, ISO, and other styles
13

Albers, W. "De ongrijpbare zekerheid." Maastricht : Maastricht : Maastricht University ; University Library, Maastricht University [Host], 1985. http://arno.unimaas.nl/show.cgi?fid=12839.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Strijbosch, Leonardus Wilhelmus Gerardus. "Experimental design and statistical evaluation of limiting dilution assays." [Maastricht : Maastricht : Rijksuniversiteit Limburg] ; University Library, Maastricht University [Host], 1989. http://arno.unimaas.nl/show.cgi?fid=5454.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Lemmens, Paulus Hermanus Hubertus Marie. "Measurement and distribution of alcohol consumption." [Maastricht : Maastricht : Rijksuniversiteit Limburg] ; University Library, Maastricht University [Host], 1991. http://arno.unimaas.nl/show.cgi?fid=5641.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Chen, Jingjing. "Multiplicity Adjustments in Adaptive Design." Diss., Temple University Libraries, 2012. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/165453.

Full text
Abstract:
Statistics, Ph.D.
There are a number of statistical methods available for adaptive designs, among which the combination method of Bauer and Kohne (1994) is well known and widely used. In this work, we revisit the Bauer-Kohne method in three ways: overall FWER control for a single hypothesis in a two-stage adaptive design, overall FWER control for two hypotheses in a two-stage adaptive design, and overall FDR control for multiple hypotheses in a two-stage adaptive design. We first take the Bauer-Kohne method in a more direct manner to allow more flexibility in the choice of the early rejection and acceptance boundaries as well as the second-stage critical value based on the chosen combination function. Our goal is not to develop a new method, but to focus primarily on developing a comprehensive understanding of two-stage designs. Rather than tying up the early rejection and acceptance boundaries by taking the second-stage critical value to be the same as that of the level-α combination test, as done in the original Bauer-Kohne method, we allow the second-stage critical value to be determined from prefixed early rejection and acceptance boundaries. An explicit formula is derived for the overall Type I error probability to determine the second-stage critical value from these stopping boundaries, not only for Fisher's combination function but also for other types of combination function. Tables of critical values corresponding to several different choices of early rejection and acceptance boundaries and these combination functions are presented. A dataset from a clinical study is used to apply the different methods based on directly computed second-stage critical values from prefixed stopping boundaries, and the outcomes are discussed in relation to those produced by the original Bauer-Kohne method. We then extend the Bauer-Kohne method to the two-hypothesis setting and propose a stepwise combination method for a two-stage adaptive design. In particular, we modify Holm's step-down procedure (1979) and suggest a step-down combination method to control the overall FWER at a desired level α. In many scientific studies requiring simultaneous testing of multiple null hypotheses, it is often necessary to carry out the multiple testing in two stages, deciding which of the hypotheses can be rejected or accepted at the first stage and which should be followed up for further testing by combining their p-values from both stages. Unfortunately, no multiple testing procedure is yet available to perform this task meeting pre-specified boundaries on the first-stage p-values in terms of the false discovery rate (FDR) while maintaining control of the overall FDR at a desired level. Our third goal in this work is to present two procedures, extending the classical Benjamini-Hochberg (BH) procedure and its adaptive version incorporating an estimate of the number of true null hypotheses from the single-stage to the two-stage setting. These procedures are theoretically proved to control the overall FDR when the pairs of first- and second-stage p-values are independent and those corresponding to the null hypotheses are identically distributed as a pair (p1, p2) satisfying the p-clud property of Brannath, Posch and Bauer (2002, Journal of the American Statistical Association, 97, 236-244). We consider two types of combination function, Fisher's and Simes', and present explicit formulas involving these functions for carrying out the proposed procedures based on pre-determined critical values or through estimated FDRs. Simulations were carried out to compare the proposed methods with the classical BH procedure using first-stage data only and full data from both stages, respectively. Our simulation studies indicate that the proposed procedures can offer significant power improvement over the single-stage BH procedure based on the first-stage data, at least under independence, and can continue to control the FDR under some dependence situations. Application of the proposed procedures to a real gene expression data set produces more discoveries than the single-stage BH procedure using the first-stage data as well as the full data.
Temple University--Theses
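The Fisher combination at the heart of the Bauer-Kohne method is compact enough to state directly: under the null, minus twice the sum of the log stage-wise p-values is chi-square with 4 degrees of freedom. A minimal sketch, with the stage-wise decision logic summarised in comments:

```python
import numpy as np
from scipy import stats

def fisher_combination(p1, p2):
    """Fisher's combination of two stage-wise p-values: under the null,
    -2*(log p1 + log p2) follows a chi-square distribution with 4 df."""
    stat = -2.0 * (np.log(p1) + np.log(p2))
    return stats.chi2.sf(stat, df=4)

# Two-stage decision with early-stopping boundaries alpha1 < alpha0:
# reject at stage 1 if p1 <= alpha1, accept if p1 > alpha0; otherwise
# continue and reject iff fisher_combination(p1, p2) <= c, where the
# critical value c is chosen so the overall Type I error hits the target.
```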
APA, Harvard, Vancouver, ISO, and other styles
17

Hagerf, Alexander. "Complexity in Statistical Relational Learning : A Study on Learning Bayesian Logic Programs." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-170160.

Full text
Abstract:
Most work done within machine learning today uses statistical methods which assume that the data is independently and identically distributed. However, the problem domains that we face in the real world are often much more complicated and present both complex relational/logical parts and parts with uncertainty. Statistical relational learning (SRL) is a sub-field of machine learning and A.I. that tries to overcome these limitations by combining relational and statistical learning, and it has become a big research area in recent years. This thesis presents SRL further and specifically introduces, tests and reviews one of its implementations, namely Bayesian logic programs.
APA, Harvard, Vancouver, ISO, and other styles
18

Angelici, L. "STATISTICAL METHODS TO ASSESS THE SUSCEPTIBILITY TO PARTICULATE MATTER AND HEALTH EFFECTS MEDIATED BY MICRORNAS CARRIED IN PLASMA EXTRACELLULAR VESICLES." Doctoral thesis, Università degli Studi di Milano, 2015. http://hdl.handle.net/2434/369892.

Full text
Abstract:
Air pollution exposure is a major problem worldwide and has been linked to many diseases. PM10 is one component of air pollution and comprises a mixture of compounds. Several studies suggest that PM produces significant effects on the respiratory and cardiovascular systems, in relation to acute as well as chronic exposure. This process has been extensively studied, but to date it is not yet fully understood. Ambient particles have been shown to produce a strong inflammatory reaction, and besides pro-inflammatory mediators, cell-derived membrane Extracellular Vesicles (EVs) are also released. EVs (particularly microvesicles) might be the ideal candidate to mediate the effects of air pollution, since they could potentially transfer miRNAs, after internalization within target cells through surface-expressed ligands, enabling intercellular communication in the body. Another gap in our current knowledge regarding PM-related health effects is the identification of susceptible subjects. Recent research findings point to obesity as a susceptibility factor for the adverse effects of PM exposure, partly due to an increase in particle absorption. According to these findings, our hypothesis is that EVs might be the ideal candidate mechanism to mediate the effects of air pollution, since they could potentially be produced by the respiratory system, reach the systemic circulation and lead to the development of endothelial dysfunction. Moreover, EVs, after internalization within target cells through surface-expressed ligands, may transfer miRNAs enabling intercellular communication in the body. Finally, obese individuals might represent one of the best populations in which to investigate the effects of environmental air particles on several molecular mechanisms and, as a final objective, on cardiovascular and respiratory parameters. The main purpose of this research project is to develop the appropriate statistical methodology to address the following specific aims:
- Aim 1. Determine whether exposure to air particles and PM-associated metals can modify EVs in plasma in terms of miRNA content.
- Aim 2. Determine whether the changes found in EVs (Aim 1) are associated with respiratory, cardiac and inflammatory outcomes such as: single-breath carbon monoxide diffusing capacity (DLco), forced expiratory volume in the 1st second (FEV1), forced vital capacity (FVC), heart rate, systolic blood pressure (SBP), diastolic blood pressure (DBP), C-reactive protein (CRP), and fibrinogen.
- Aim 3. Investigate the potential role of miRNAs as mediators of the effect of PM10 exposure on the respiratory, cardiac and inflammatory outcomes listed in Aim 2.
We used a cross-sectional study investigating the effects of particulate air pollution on a population of susceptible overweight/obese subjects recruited in the Lombardy Region, Italy. The study population will include 2000 overweight/obese subjects (a BMI between 25 and 29.9 is considered overweight; an adult with a BMI of 30 or higher is considered obese), recruited at the Center for Obesity and Weight Control (Department of Environmental and Occupational Health, University of Milan and IRCCS Fondazione Ca' Granda - Ospedale Maggiore Policlinico). We follow a two-stage, split-sample study design. The first (discovery) stage involves genome-wide miRNA expression profiling, by means of OpenArray technology, among 1000 of the aforementioned 2000 participants (the first 1000 subjects consecutively recruited at the Center for Obesity and Weight Control). The second (replication) stage involves a replication analysis of the top 10 miRNAs resulting from the first stage. At December 31, 2013 (first stage) we had recruited 1303 subjects, 87% of whom live in the province of Milan. At April 2015 we had recruited a total of 1786 evaluable subjects. Due to technical problems the replication data were not available for statistical analysis at the time of the layout of the thesis. Different normalization strategies for the miRNA expression data were evaluated and compared on different sets of miRNAs: endogenous U6, the global mean, and the mean of the 4 most stable miRNAs. The performance of the different normalization strategies was assessed by (1) evaluating their ability to reduce the experimentally induced (technical) variation, and (2) determining their power to extract true biological variation. We showed that for large-scale miRNA expression profiling the global mean normalization strategy outperforms the other strategies in terms of:
- better reduction of technical variation: a lower percentage of miRNAs differentially expressed before and after FDR adjustment, and a lower fold-change range;
- more accurate appreciation of biological changes: a higher percentage of miRNAs differentially expressed before and after FDR adjustment, and a higher fold-change range.
PM10 exposure assessment is based on daily PM10 concentration estimates from the FARM model (the flexible air quality regional model), a three-dimensional Eulerian grid model for dispersion, transformation and deposition of particulates, capable of simulating PM10 concentration. By means of ArcGIS software the residential address of each subject was georeferenced and the resulting map was superimposed on the map of the FARM model. In this way each subject was attributed: (a) the estimated daily exposure of the cell containing their residential address; (b) the exposure of the cell containing the address of the Center for Obesity and Work; (c) the daily average exposure for Milan, calculated as the average of the 22 cells that fall within the city boundaries. Since each OpenArray run simultaneously analysed up to 4 OpenArray plates, identified by a barcode, for a total of 12 samples (3 per plate), a hierarchical data structure with three levels could be identified: sample level (level 1), barcode level (level 2) and run level (level 3). In order to verify the association between miRNA expression and PM10 we developed a three-level hierarchical linear model (HLM) using the MIXED procedure in SAS. The following top 10 miRNAs were identified: miR_106a_002169, miR_152_000475, miR_181a_2__002317, miR_218_000521, miR_27b_000409, miR_30d_000420, miR_652_002352, miR_92a_000431, miR_25_000403, miR_375_000564. Simple mediation models were applied in order to investigate the role of miRNA expression as a potential mediator of the effect of PM10 on respiratory, cardiac and inflammatory outcomes such as: DLco, FEV1, FVC, heart rate, SBP, DBP, CRP, and fibrinogen. 95% bias-corrected bootstrap confidence intervals for the indirect effect were estimated. Finally, multiple parallel mediation models were applied in order to investigate the role of a set of miRNAs, identified by means of the simple mediation models, as a potential set of parallel mediators of the effect of PM10 on respiratory, cardiac and inflammatory outcomes. A significant indirect effect of PM10 was found on:
- DLcoRapp, through the mediators mir_106a_002169, mir_152_000475 and mir_218_000521 expression;
- FEV1Rapp, through the mediators mir_27b_000409, mir_30d_000420, mir_92a_000431, mir_181a_2_002317 and mir_218_000521 expression;
- FVCRapp, through the mediators mir_27b_000409, mir_92a_000431 and mir_181a_2_002317 expression;
- heart rate, through the mediator mir_218_000521 expression;
- systolic blood pressure, through the mediator mir_92a_000431 expression;
- CRP, through the mediators mir_106a_002169 and mir_652_002352 expression;
- fibrinogen, through the mediator mir_375_000564 expression.
Finally, the total indirect effect of PM10 exposure, obtained by summing the indirect effects across all mediators, is statistically different from zero:
- on DLcoRapp (mediators mir_106a_002169, mir_152_000475 and mir_218_000521);
- on FEV1Rapp (mediators mir_27b_000409, mir_30d_000420, mir_92a_000431, mir_181a_2_002317 and mir_218_000521);
- on FVCRapp (mediators mir_27b_000409, mir_92a_000431 and mir_181a_2_002317);
- on CRP (mediators mir_106a_002169 and mir_652_002352).
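The simple mediation models referred to above estimate an indirect effect as the product of two regression slopes, with a bootstrap confidence interval; a percentile-bootstrap sketch under hypothetical variable names (the thesis uses bias-corrected intervals, a refinement of this):

```python
import numpy as np

def indirect_effect_ci(x, m, y, n_boot=2000, seed=0):
    """Indirect effect of exposure x on outcome y through mediator m,
    estimated as a*b: a is the slope of m ~ x, b the slope of y ~ m
    adjusting for x. Returns a 95% percentile bootstrap interval."""
    rng = np.random.default_rng(seed)
    n = len(x)
    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                  # m = a*x + const
        X = np.column_stack([np.ones(n), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]  # coefficient on m
        return a * b
    boots = [ab(rng.integers(0, n, n)) for _ in range(n_boot)]
    return np.percentile(boots, [2.5, 97.5])
```

An interval excluding zero is the evidence of mediation reported in the abstract's lists.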
APA, Harvard, Vancouver, ISO, and other styles
19

London, Kevin Ian. "Cerebral F18-FDG PET CT in Children: Patterns during Normal Childhood and Clinical Application of Statistical Parametric Mapping." Thesis, The University of Sydney, 2015. http://hdl.handle.net/2123/13962.

Full text
Abstract:
The first aim was to recruit and analyse a high-quality dataset of cerebral FDG PET CT scans in neurologically normal children. Using qualitative, semi-quantitative and statistical parametric mapping (SPM) techniques, the results showed that a pattern of FDG uptake similar to adults does not occur by one year of age, as was previously believed; rather, regional FDG uptake changes throughout childhood, driven by differing age-related regional rates of increase in FDG uptake. The second aim was to use this normal dataset in the clinical analysis of cerebral FDG PET CT scans in children with epilepsy and Neurofibromatosis type 1 (NF1). The normal dataset was validated for single-subject-versus-group SPM analysis and was highly specific in identifying the epileptogenic focus likely to result in a good post-operative outcome in children with epilepsy. Qualitative, semi-quantitative and group-versus-group SPM analyses were applied to FDG PET CT scans in children with NF1. The results showed reduced metabolism in the thalami and medial temporal lobes compared to neurologically normal children. This thesis has produced novel findings that advance the understanding of childhood brain development and has developed SPM techniques that can be applied to cerebral FDG PET CT scans in children with neurological disorders.
APA, Harvard, Vancouver, ISO, and other styles
20

Houtman, Martijn. "Nonparametric consumer and producer analysis." [Maastricht : Maastricht : Rijksuniversiteit Limburg] ; University Library, Maastricht University [Host], 1995. http://arno.unimaas.nl/show.cgi?fid=5770.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Xu, Haiyan. "Using the partitioning principle to control generalized familywise error rate." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1127161016.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xiii, 104 p.; also includes graphics (some col.). Includes bibliographical references (p. 101-104). Available online via OhioLINK's ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
22

Liu, Fang. "New Results on the False Discovery Rate." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/96718.

Full text
Abstract:
Statistics, Ph.D.
The false discovery rate (FDR) introduced by Benjamini and Hochberg (1995) is perhaps the most standard error-controlling measure used in a wide variety of applications involving multiple hypothesis testing. There are two approaches to controlling the FDR - the fixed error rate approach of Benjamini and Hochberg (BH, 1995), where a rejection region is determined with the FDR below a fixed level, and the estimation-based approach of Storey (2002), where the FDR is estimated for a fixed rejection region before it is controlled. In this proposal, we concentrate on both approaches and propose new, improved versions of some FDR-controlling procedures available in the literature. A number of adaptive procedures have been put forward in the literature, each attempting to improve the method of Benjamini and Hochberg (1995), the BH method, by incorporating into it an estimate of the number of true null hypotheses. Among these, the method of Benjamini, Krieger and Yekutieli (2006), the BKY method, has been receiving a lot of attention recently. In this proposal, a variant of the BKY method is proposed by considering a different estimate of the number of true null hypotheses, which often outperforms the BKY method in terms of FDR control and power. Storey's (2002) estimation-based approach to controlling the FDR was developed from a class of conservatively biased point estimates of the FDR under a mixture model for the underlying p-values and a fixed rejection threshold for each null hypothesis. An alternative class of point estimates of the FDR with uniformly smaller conservative bias is proposed under the same setup. Numerical evidence is provided to show that the mean squared error (MSE) is also often smaller for this new class of estimates. Compared to Storey's (2002), the present class provides a more powerful estimation-based approach to controlling the FDR.
Temple University--Theses
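For reference, the BKY method discussed here is easy to state: a conservative first BH pass estimates the number of true nulls, and a second BH pass runs at a correspondingly inflated level. A sketch of the textbook version (not the variant proposed in this dissertation):

```python
import numpy as np

def bh_reject(pvals, alpha):
    """Benjamini-Hochberg step-up at level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

def bky_adaptive(pvals, alpha=0.05):
    """BKY two-stage procedure: a first BH pass at level alpha/(1+alpha)
    yields r1 rejections; m0 = m - r1 estimates the number of true nulls,
    and the second pass runs BH at the correspondingly adjusted level."""
    m = len(pvals)
    stage1 = bh_reject(pvals, alpha / (1 + alpha))
    r1 = stage1.sum()
    if r1 == 0 or r1 == m:          # nothing, or everything, rejected
        return stage1
    m0_hat = m - r1
    return bh_reject(pvals, alpha / (1 + alpha) * m / m0_hat)
```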
APA, Harvard, Vancouver, ISO, and other styles
23

Boschi, Filippo. "Imaging quantitativo cerebrale in pazienti affetti da patologie neurodegenerative con traccianti PiB e FDG marcati con 18-F." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017. http://amslaurea.unibo.it/13506/.

Full text
Abstract:
The number of cases of neurodegenerative pathologies affecting the central nervous system in the form of dementia increases year by year. There is therefore a need to identify the presence of the disease not only qualitatively, i.e. through the physician's diagnosis alone, but also quantitatively, by developing an analysis method that accounts for inter-individual variability while remaining applicable to most of the population. In this thesis work, two methods of statistical analysis of cerebral PET images, called SPM and SSP Z-score, were used in order to obtain 3D statistical maps that highlight, at a fixed probability level, the affected brain areas. Both statistical analyses are based on a comparison of the patient with a group of healthy subjects. The images processed in this work come from the database of the Ospedale di Parma, acquired on the same PET-CT scanner with two different tracers, FDG and fluorine-labelled PiB. The two tracers reveal an abnormal cerebral state through two different indicators: FDG makes the blood flow within the brain visible, which is linked to metabolism; PiB, instead, can detect deposits of beta-amyloid, a protein that, by binding to brain tissue, forms plaques capable of blocking connections between neurons. Both tracers are labelled with the radionuclide 18F. The analyses were carried out mainly on patients with Alzheimer's disease, using the two analysis methods introduced above. Two groups of patients with initial mild cognitive impairment were also compared using SPM: in some cases the condition evolved over time into Alzheimer's disease, while in others no changes were observed. Finally, using an SPM approach, two different templates were evaluated to determine which could be considered better for quantitative analysis.
APA, Harvard, Vancouver, ISO, and other styles
24

Nunzi, Edoardo. "Effetto di alcuni fattori di processo sulla risposta statica di provini realizzati mediante FDM." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2019.

Find full text
Abstract:
This work analyses the effect of some printing parameters of the Fused Deposition Modeling (FDM) technology on the mechanical properties of ABS TITAN X. The thesis focused in particular on the printing parameters: layer thickness (0.2 mm and 0.3 mm), number of contour lines (1, 4 or 7), and raster orientation angle (90°, +180°/-180°, +45°/-45°). Combining all the printing options yields 18 specimen types, for a total of 90 specimens studied with the tensile test. As required by the established ASTM D638 standard, each type considered in the laboratory was replicated on 5 specimens. Printing the specimens, carried out with a self-built Delta Kossel printer, was one phase of the final thesis work. Printing and breaking the specimens were followed by an initial analytical processing of the experimental data, aimed at determining the stress-strain curves and all the quantities connected to the tensile test, in particular: tensile strength, elastic modulus, yield strength and strain at failure. These four quantities were then analysed statistically through 3-factor and 2-factor ANOVA models with replications: this made it possible to establish the significance of the examined printing factors. The work is organised in three parts presenting: an overview of 3D printing, materials and methods, and the processing and analysis of the statistical results.
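The significance testing described above is a factorial ANOVA with replications; a minimal sketch with made-up tensile values and illustrative column names, using two of the three printing factors:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical table of tensile results: one row per specimen (values invented)
df = pd.DataFrame({
    "thickness": ["0.2"] * 6 + ["0.3"] * 6,          # layer thickness, mm
    "contours":  ["1", "1", "4", "4", "7", "7"] * 2, # number of contour lines
    "strength":  [32.1, 31.4, 34.8, 35.2, 36.0, 36.5,
                  30.2, 29.8, 33.1, 33.6, 34.9, 35.3],  # MPa
})

# two-way ANOVA with interaction: does each factor affect tensile strength?
model = smf.ols("strength ~ C(thickness) * C(contours)", data=df).fit()
print(anova_lm(model, typ=2))
```

The 3-factor version used in the thesis adds the raster orientation angle as a third categorical term.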
APA, Harvard, Vancouver, ISO, and other styles
25

Wahlstrand, Björn. "Wave Transport and Chaos in Two-Dimensional Cavities." Thesis, Linköping University, Linköping University, The Swedish Institute for Disability Research, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-16492.

Full text
Abstract:
This thesis focuses on chaotic stationary waves, both quantum mechanical and classical. In particular we study different statistical properties of these waves, such as energy transport, intensity (or density) and stress tensor components. The methods used to model these waves are also investigated, and some limitations and peculiarities are pointed out.
APA, Harvard, Vancouver, ISO, and other styles
26

Stephens, Nathan Wallace. "A Comparison of Microarray Analyses: A Mixed Models Approach Versus the Significance Analysis of Microarrays." BYU ScholarsArchive, 2006. https://scholarsarchive.byu.edu/etd/1115.

Full text
Abstract:
DNA microarrays are a relatively new technology for assessing the expression levels of thousands of genes simultaneously. Researchers hope to find genes that are differentially expressed by hybridizing cDNA from known treatment sources with various genes spotted on the microarrays. The large number of tests involved in analyzing microarrays has raised new questions in multiple testing. Several approaches for identifying differentially expressed genes have been proposed. This paper considers two: (1) a mixed models approach, and (2) the Significance Analysis of Microarrays.
APA, Harvard, Vancouver, ISO, and other styles
27

Bellon, Ludovic. "Exploring nano-mechanics through thermal fluctuations." Habilitation à diriger des recherches, Ecole normale supérieure de lyon - ENS LYON, 2010. http://tel.archives-ouvertes.fr/tel-00541336.

Full text
Abstract:
This mémoire presents my current research interests in micro- and nano-mechanics in a comprehensive manuscript. Our experimental device is first presented: this atomic force microscope, designed and realized in the Laboratoire de Physique de l'ENS Lyon, is based on a quadrature phase differential interferometer. It features a very high resolution (down to 10 fm/rtHz) in the measurement of deflexion, down to low frequencies and over a huge input range. The dual output of the interferometer requires specific handling to interface with common scanning probe microscope controllers. We developed analog circuitry to handle static (contact mode) and dynamic (tapping mode) operation, and we demonstrate its performance by imaging a simple calibration sample. As a first application, we used the high sensitivity of our interferometer to study the mechanical behavior of micro-cantilevers from their fluctuations. The keystone of the analysis is the Fluctuation-Dissipation Theorem (FDT), relating the thermal noise spectrum to the dissipative part of the response. We apply this strategy to confront Sader's model for viscous dissipation with measurements on raw silicon cantilevers in air, demonstrating an excellent agreement. When a gold coating is added, the thermal noise is strongly modified, presenting a 1/f-like trend at low frequencies: we show that this behavior is due to viscoelastic damping, and we provide a quantitative phenomenological model. We also characterize the mechanical properties of cantilevers (stiffness and elastic moduli) from a mapping of the thermal noise on their surface. This analysis validates the description of the system in terms of its normal modes of oscillation, in an Euler-Bernoulli framework for flexion and in a Saint-Venant approach for torsion, but points toward a refined model for the dispersion relation of torsional modes. Finally, we present peeling experiments on a single-wall carbon nanotube attached to the cantilever tip. It is pushed against a flat substrate, and we measure the quasi-static force as well as the dynamic stiffness using an analysis of the thermal noise during this process. The most striking feature of these two observables is a force plateau over a large range of compression, the value of which is substrate dependent. We use the Elastica to describe the shape of the nanotube, and a simple energy of adhesion per unit length Ea to describe the interaction with the substrate. We analytically derive a complete description of the expected behavior in the limit of long nanotubes. The analysis of the experimental data within this simple framework naturally leads to every quantity of interest in the problem: the force plateau is a direct measurement of the energy of adhesion Ea for each substrate, and we easily determine the mechanical properties of the nanotube itself.
APA, Harvard, Vancouver, ISO, and other styles
28

Hughes, Steven. "The use of performance assessments and force-time curve analysis to measure mental and physical fatigue." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2023. https://ro.ecu.edu.au/theses/2643.

Full text
Abstract:
As management of fatigue and recovery is a key objective of modern sport, athlete monitoring is now commonplace. With assessments such as perceptual questionnaires, countermovement jumps (CMJ), squat jumps (SJ), isometric mid-thigh pulls (IMTP), and postural sway having been used with varied success, the use of emerging analysis techniques may improve sensitivity. For example, while vertical jump assessment traditionally employs analysis of discrete individual metrics calculated from a location on the force-time curve, complete force-time curve analysis may provide greater sensitivity by retaining information commonly discarded in discrete analysis. Furthermore, with mental fatigue reportedly impairing cognitive and physical performance, identifying an objective, practical assessment of mental fatigue would provide practitioners with a more comprehensive assessment of athlete fatigue. Study 1 investigated whether several performance assessments (perceptual questionnaire, cognitive assessment, postural sway, CMJ, SJ, IMTP) could model fatigue, improving assessment over stand-alone analysis at multiple time points. The findings supported the use of perceptual questionnaires, CMJ, SJ and maximal cycling sprints, with perceptual questionnaires and cycling sprints demonstrating impairment at 24 and 48 hr. However, when attempting to model 24 and 48 hr power outputs from metrics at earlier time points, only CMJ height explained power output. Study 2 sought to determine whether a complete time-series analysis of biomechanical data through Statistical Parametric Mapping (SPM) could better detect fatigue than traditional discrete methods. Two-way repeated measures ANOVA was performed on relative force-time curves (SPM analysis) and peak relative force (discrete analysis) taken from the same data set, with SPM analysis alone detecting fatigue. Study 3 assessed the ability of four cognitive performance assessments, and a time-series analysis of CMJ force through functional principal components analysis (fPCA), to objectively measure mental fatigue after 60 min of a mentally fatiguing task. Despite post hoc testing not reaching significance, a significant interaction effect in reaction task linear mixed model (LMM) analysis and medium effect sizes suggested that the reaction task alone was likely impaired by 60 min of cognitive tasks. Furthermore, time-series analysis explained 95.8% of the total variation in force-time curves, with LMM analysis reporting a significant difference after mental fatigue. Utilising performance assessment tasks, Study 4 explored mental and physical fatigue responses across four conditions consisting of varying distributions of physical and cognitive loads: futsal (physical+cognitive), a workload-matched treadmill session (physical), time-matched soccer video gameplay (cognitive) and a control. The findings indicate that perceptual questionnaires and SJ were impaired by the physical workloads of match play and treadmill sessions, while CMJ was impaired by match play but not treadmill fatigue. No significant differences were observed in IMTP or reaction tasks in any condition. This research suggests that reaction tasks were ineffective at measuring mental fatigue after sporting competition and video gameplay. This thesis supports perceptual feedback and jump testing for subjective and objective assessment of both physical and mental fatigue. Furthermore, discrete and time-series jump analysis can be used to assess specific performance outcomes or execution of movements.
Finally, the cognitive reaction task likely provides an objective measure of severe mental fatigue with the brevity of the aforementioned assessments promoting practicality in sports.
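As a hedged illustration of the difference between discrete and complete-curve analysis described above, the sketch below runs a naive pointwise paired t-test over synthetic time-normalised force-time curves. This is not the thesis's code: a true SPM analysis corrects the significance threshold via random field theory (commonly with the spm1d package), whereas this sketch uses an uncorrected pointwise threshold purely to show the idea of testing the whole curve rather than a single peak value.

```python
# Illustrative sketch of an SPM-style pointwise comparison of
# time-normalised CMJ force-time curves, pre vs. post fatigue.
# Synthetic data; a full SPM analysis would replace the naive
# threshold below with a random-field-theory critical value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_athletes, n_points = 12, 101          # curves time-normalised to 0-100%

t_axis = np.linspace(0, 1, n_points)
base = 20 * np.sin(np.pi * t_axis)      # idealised relative force curve (N/kg)
pre  = base + rng.normal(0, 1.5, (n_athletes, n_points))
post = base * 0.93 + rng.normal(0, 1.5, (n_athletes, n_points))  # fatigued

# Paired t-statistic at every node of the curve (not just at the peak)
t_stat, p_val = stats.ttest_rel(pre, post, axis=0)

# Naive pointwise threshold (uncorrected); SPM controls the family-wise
# error across the whole curve instead of testing each node in isolation
sig_nodes = np.flatnonzero(p_val < 0.05)
print(f"suprathreshold nodes (% of movement): {sig_nodes}")
```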
APA, Harvard, Vancouver, ISO, and other styles
29

Chatelain, Megan E. "Minority Representations in Crime Drama: An Examination of Roles, Identity, and Power." Scholarly Commons, 2020. https://scholarlycommons.pacific.edu/uop_etds/3716.

Full text
Abstract:
The storytelling ability of television can be observed in any genre. Crime drama offers a unique perspective because victims and offenders change every episode, reinforcing stereotypes with each new character. In other words, the more victims and criminals the audience observes, the more likely the show creates the perception of a mean world. Based on previous literature, three research questions emerged, asking to what extent Criminal Minds portrays crime accurately compared with the Federal Bureau of Investigation's Uniform Crime Report (UCR) and the Behavioral Analysis Unit's (BAU-4) report on serial murderers, and how those portrayals changed over the fifteen years of the show. A content analysis was conducted through the lens of cultivation theory, coding 324 episodes, which produced a sample of 354 different cases to answer the research questions. Two additional coders focused on the first, middle, and last episodes of each season (N=45) for reliability. The key findings are low levels of realism relative to the UCR and high levels of realism relative to the BAU-4 statistics. Mean-world syndrome was found to be highly likely to be cultivated in heavy viewers. Finally, roles for minority groups did improve over time for Black and Brown bodies, yet Asian bodies saw only a very small increase in representation, and LGBT characters were nearly nonexistent. The findings indicate that there is still not enough space in television for minority roles and that the show perpetuated stereotypes. Additional implications and themes include a lack of discourse on violence and the erasure of sexual assault victims.
APA, Harvard, Vancouver, ISO, and other styles
30

Chen, Pei-Nong, and 陳培穠. "Constructing A FDC Framework with Statistical Models Embedded for Semiconductor Fabrications." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/70936724311068529728.

Full text
Abstract:
碩士<br>國立清華大學<br>工業工程與工程管理學系<br>93<br>Semiconductor fabrication involves highly complex and lengthy processes in which a large number of variables are interrelated. Process control and monitoring are necessary in a semiconductor fab to ensure the yield and thus the profitability of huge investments. This study aims to develop a framework for Fault Detection and Classification (FDC) and statistical models to be embedded in the FDC system to monitor the semiconductor fabrication process in which multi-faults may exist. The objective of the proposed framework is to structure the process operation from a large number of correlated variables, to detect faults, diagnose them by clustering and classifying the abnormal events, eliminate the cause of the faults, and then improve the performance of the process. The information and knowledge are discovered and extracted by the multivariate statistical methods and data mining approaches form the historical process data, and can be an aid of fault diagnosis and recovery. Simple rules can be generated to classify and predict the wafers. Also, an empirical study is conducted in an advanced 300mm DRAM fab for validation. The results showed the practical viability of this approach. The developed model can be embedded in a Fault Detection System for process or equipment monitoring. As a result of routine monitoring process and the extracted information from process data, critical variables of the process are identified, and the faults can be removed, process excursion is decreased. Moreover, the monitoring process are simplified by reducing the number of variables in the system, fewer key variables are monitored by the FDC instead of all the variables in the system.
APA, Harvard, Vancouver, ISO, and other styles
31

Hossain, Ahmed. "Contribution to Statistical Techniques for Identifying Differentially Expressed Genes in Microarray Data." Thesis, 2011. http://hdl.handle.net/1807/29749.

Full text
Abstract:
With the development of DNA microarray technology, scientists can now measure the expression levels of thousands of genes (features or genomic biomarkers) simultaneously in a single experiment. Robust and accurate gene selection methods are required to identify differentially expressed genes across different samples for disease diagnosis or prognosis. The problem of identifying significantly differentially expressed genes can be stated as follows: given gene expression measurements from an experiment with two (or more) conditions, find a subset of all genes having significantly different expression levels across these conditions. Analysis of genomic data is challenging due to the high dimensionality of the data and low sample sizes. Several mathematical and statistical methods currently exist to identify significantly differentially expressed genes; they typically focus on gene-by-gene analysis within a parametric hypothesis testing framework. In this study, we propose three flexible procedures for analyzing microarray data. The first is a parametric method based on a flexible distribution, the Generalized Logistic Distribution of Type II (GLDII), for which an approximate likelihood ratio test (ALRT) is developed. Though the method performs gene-by-gene analysis, the ALRT with the GLDII distributional assumption appears to provide a favourable fit to microarray data. The second method proposes a test statistic for testing whether the area under the receiver operating characteristic curve (AUC) for each gene is greater than 0.5, allowing different variances for each gene. This method is computationally less intensive and can identify genes that are reasonably stable with satisfactory prediction performance. The third method is based on comparing the AUCs for a pair of genes and is designed for selecting highly correlated genes in microarray datasets: we propose a nonparametric procedure for selecting genes whose expression levels are correlated with that of a "seed" gene in microarray experiments. The test proposed by DeLong et al. (1988) is the conventional nonparametric procedure for comparing correlated AUCs; it uses a consistent variance estimator and relies on the asymptotic normality of the AUC estimator. Our proposed method incorporates DeLong's variance estimation technique in comparing pairs of genes and can identify genes with biologically sound implications. In this thesis, we focus on the primary step in the gene selection process, namely, ranking genes with respect to a statistical measure of differential expression. We assess the proposed approaches through extensive simulation studies and demonstrate the methods on real datasets. The simulation study indicates that the parametric method performs well across all settings of variance, sample size and treatment effect; importantly, the method is found to be less sensitive to contamination by noise. The proposed nonparametric methods do not involve complicated formulas and do not require advanced programming skills, and both can identify a large fraction of truly differentially expressed (DE) genes, especially when the data have large sample sizes or contain outliers. We conclude that the proposed methods offer good choices of analytical tools for identifying DE genes for further biological and clinical analysis.
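As a hedged illustration of the AUC-based screening idea (not the thesis's own test statistic, which allows gene-specific variances), the sketch below ranks genes by their estimated AUC using the standard identity AUC = U/(n1·n2) from the Mann-Whitney U test, applied gene by gene to a synthetic expression matrix.

```python
# Illustrative sketch: rank genes by estimated AUC via the Mann-Whitney U
# identity AUC = U / (n1 * n2), screening for genes with AUC > 0.5.
# Synthetic data; not the test statistic proposed in the thesis.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)
n_genes, n1, n2 = 1000, 15, 15
ctrl = rng.normal(0, 1, (n_genes, n1))
case = rng.normal(0, 1, (n_genes, n2))
case[:50] += 1.0                      # first 50 genes truly differentially expressed

auc = np.empty(n_genes)
pval = np.empty(n_genes)
for g in range(n_genes):
    u, p = mannwhitneyu(case[g], ctrl[g], alternative="greater")
    auc[g] = u / (n1 * n2)            # AUC estimate for gene g
    pval[g] = p

top = np.argsort(-auc)[:10]           # ten highest-AUC genes
print("top genes by AUC:", top, np.round(auc[top], 2))
```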
APA, Harvard, Vancouver, ISO, and other styles
32

Wang, Xuan. "Statistical Methods for the Analysis of Mass Spectrometry-based Proteomics Data." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-10777.

Full text
Abstract:
Proteomics serves an important role in the systems-level understanding of biological function. Mass spectrometry (MS) has become the tool of choice for identifying and quantifying the proteome of an organism. In the most widely used bottom-up approach to MS-based high-throughput quantitative proteomics, complex mixtures of proteins are first subjected to enzymatic cleavage, and the resulting peptide products are separated based on chemical or physical properties and then analyzed using a mass spectrometer. The three fundamental challenges in the analysis of bottom-up MS-based proteomics are: (i) identifying the proteins present in a sample; (ii) aligning different samples on elution (retention) time, mass, peak area (intensity), and so on; and (iii) quantifying the abundance levels of the identified proteins after alignment. Each of these challenges requires knowledge of the biological and technological context that gives rise to the observed data, as well as the application of sound statistical principles for estimation and inference. In this dissertation, we present a set of statistical methods for bottom-up proteomics addressing protein identification, alignment and quantification. We describe a fully Bayesian hierarchical modeling approach to peptide and protein identification on the basis of MS/MS fragmentation patterns in a unified framework. Our major contribution is to allow for dependence among the list of top candidate peptide-spectrum matches (PSMs), which we accomplish with a Bayesian multiple-component mixture model incorporating decoy search results and joint estimation of the accuracy of a list of peptide identifications for each MS/MS fragmentation spectrum. We also propose an objective criterion for evaluating the False Discovery Rate (FDR) associated with a list of identifications at both the peptide and protein levels, which results in more accurate FDR estimates than existing methods like PeptideProphet. Several alignment algorithms have been developed using different warping functions; however, all the existing alignment approaches lack a useful metric for scoring an alignment between two data sets and hence lack a quantitative score for how good an alignment is. Our alignment approach uses "anchor points" to align all the individual scans in the target sample and provides a framework to quantify the alignment, that is, assigning a p-value to a set of aligned LC-MS runs to assess the correctness of the alignment. After alignment using our algorithm, the p-values from the Wilcoxon signed-rank test on elution (retention) time, M/Z and peak area become non-significant. Quantitative mass spectrometry-based proteomics involves statistical inference on protein abundance, based on the intensities of each protein's associated spectral peaks. However, typical mass spectrometry-based proteomics data sets have substantial proportions of missing observations, due at least in part to censoring of low intensities, which complicates intensity-based differential expression analysis. We outline a statistical method for protein differential expression based on a simple Binomial likelihood. By modeling peak intensities as binary, in terms of "presence/absence", we enable the selection of proteins not typically amenable to quantitative analysis, e.g., "one-state" proteins that are present in one condition but absent in another. In addition, we present an analysis protocol that combines quantitative and presence/absence analysis of a given data set in a principled way, resulting in a single list of selected proteins with a single associated FDR.
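The presence/absence idea lends itself to a compact illustration. The sketch below codes each protein's peak as detected or not per run, compares two conditions with a per-protein Fisher's exact test, and controls the FDR over the resulting list with Benjamini-Hochberg. The dissertation's own model is a Binomial likelihood within a larger protocol, so this is only an approximate stand-in on synthetic data (and `false_discovery_control` requires SciPy 1.11 or newer).

```python
# Sketch of presence/absence differential analysis in the spirit described
# above: peaks coded present/absent per run, a per-protein Fisher exact test
# between conditions, Benjamini-Hochberg FDR over the protein list.
# Synthetic data; not the dissertation's code.
import numpy as np
from scipy.stats import fisher_exact, false_discovery_control  # SciPy >= 1.11

rng = np.random.default_rng(3)
n_prot, n_runs = 300, 10
pres_a = rng.binomial(1, 0.8, (n_prot, n_runs))    # condition A detections
pres_b = rng.binomial(1, 0.8, (n_prot, n_runs))
pres_b[:20] = rng.binomial(1, 0.1, (20, n_runs))   # 20 "one-state" proteins

pvals = np.empty(n_prot)
for i in range(n_prot):
    a, b = pres_a[i].sum(), pres_b[i].sum()
    table = [[a, n_runs - a], [b, n_runs - b]]     # present/absent by condition
    pvals[i] = fisher_exact(table)[1]

qvals = false_discovery_control(pvals)             # Benjamini-Hochberg by default
print("selected proteins at FDR 5%:", np.flatnonzero(qvals < 0.05))
```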
APA, Harvard, Vancouver, ISO, and other styles
33

Yadav, Puneet. "Increasing the speed and efficiency of search in FBI/CODIS DNA database: through multivariate statistical clustering approach and development of a similarity ranking scheme." 2001. http://etd.utk.edu/2001/YadavPuneet.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

(10716540), Emily A. Kerstiens. "NEW BIOINFORMATIC METHODS OF BACTERIOPHAGE PROTEIN STUDY." Thesis, 2021.

Find full text
Abstract:
<p>Bacteriophages are viruses that infect and kill bacteria. They are the most abundant organism on the planet and the largest source of untapped genetic information. Every year, more bacteriophages are isolated from the environment, purified, and sequenced. Once sequenced, their genomes are annotated to determine the location and putative function of each gene expressed by the phage. Phages have been used in the past for genetic engineering and new research is being done into how they can be used for the treatment of disease, water safety, agriculture, and food safety. </p> <p>Despite the influx of sequenced bacteriophages, a majority of the genes annotated are hypothetical proteins, also known as No Known Function (NKF) proteins. They are expressed by the phages, but research has not identified a possible function. Wet lab research into the functions of the hundreds of NKF phages genes would be costly and could take years. Bioinformatics methods could be used to determine putative functions and functional categories for these hypothetical proteins. A new bioinformatics method using algorithms such as Domain Assignments, Hidden Markov Models, Structure Prediction, Sub-Cellular Localization, and iterative algorithms is proposed here. This new method was tested on the bacteriophage genome PotatoSplit and dropped the number of NKF genes from 57 to 40. A total of 17 new functions were found. The functional class was identified for an additional six proteins, though no specific functions were named. Structure Prediction and Simulations were tested with a focus on two NKF proteins within lytic phages and both returned possible functional categories with high confidence.</p> <p>Additionally, this research focuses on the possibility of phage therapy and FDA regulation. A database of phage proteins was built and tested using R Statistical Analysis to determine proteins significant to phage infecting <i>M. tuberculosis</i> and to the lytic cycle of phages. The statistical methods were also tested on both pharmaceutical products recalled by the FDA between 2012 and 2018 to determine ingredients/manufacturing steps that could affect product quality and on the FDA Adverse Event Reporting System (FAERS) data to determine if AERs could be used to judge the quality of a product. Many significant excipients/manufacturing steps were identified and used to score products on their quality. The AERs were evaluated on two case studies with mixed results. </p>
APA, Harvard, Vancouver, ISO, and other styles
