
Dissertations / Theses on the topic 'Statistical set'


Consult the top 50 dissertations / theses for your research on the topic 'Statistical set.'


1

Marchant, Alexander. "Set representation by statistical properties." Thesis, Goldsmiths College (University of London), 2011. http://research.gold.ac.uk/6518/.

Abstract:
This thesis has investigated the apparent ability of the visual system to represent a set of similar objects with a summary description instead of information about the individual items themselves (Ariely, 2001; Chong and Treisman, 2005a). Summary descriptions can be based on set sizes that are beyond the capacity of focussed attention, leading to the proposal that a distributed attention mechanism, statistical processing, underlies this process (Chong and Treisman, 2003, 2005a, 2005b; Chong et al., 2008; Treisman, 2006). However, the conclusion that summary descriptions are formed by a mechanism involving distributed attention has been questioned on the basis of parsimony, and a proposal for the role of focussed attention strategies in producing these summary descriptions has been made (Myczek & Simons, 2008; Simons & Myczek, 2008; see also De Fockert & Marchant, 2008). The aim of this thesis was to further elucidate the process of set representation by statistical properties, exploring the evidence that the summary description is given preferential representational status over individual items (Chapter 2), that summary descriptions can be produced within the known capacity limits of focussed attention (Chapter 3), that the results found in these experiments are not affected by the development of a prototypical average across the experimental session (Chapter 4), and that similar summary descriptions may also be rapidly extracted from more complex stimuli (Chapter 5). These findings are discussed in the context of current average size perception theory, and a dual process view of set representation by statistical properties is briefly outlined. The dual process view combines focussed attention, when stimulus complexity is low and/or cognitive resources are available, with distributed attention, when stimulus complexity is high and/or cognitive resources are restricted.
Finally, a selection of further studies and research areas that follow from the current research and the dual process view are briefly detailed.
2

Crafford, Gretel. "Statistical analysis of grouped data." Thesis, University of Pretoria, 2007. http://hdl.handle.net/2263/25968.

Abstract:
The maximum likelihood (ML) estimation procedure of Matthews and Crowther (1995: A maximum likelihood estimation procedure when modelling in terms of constraints. South African Statistical Journal, 29, 29-51) is utilized to fit a continuous distribution to a grouped data set. This grouped data set may be a single frequency distribution or various frequency distributions that arise from a cross classification of several factors in a multifactor design. It will also be shown how to fit a bivariate normal distribution to a two-way contingency table where the two underlying continuous variables are jointly normally distributed. This thesis is organized in three parts, each playing a vital role in the explanation of analysing grouped data with the ML estimation procedure of Matthews and Crowther. In Part I the ML estimation procedure of Matthews and Crowther is formulated. This procedure plays an integral role and is implemented in all three parts of the thesis. In Part I the exponential distribution is fitted to a grouped data set to explain the technique. Two different formulations of the constraints are employed in the ML estimation procedure and provide identical results. The justification of the method is further motivated by a simulation study. Similar to the exponential distribution, the estimation of the normal distribution is also explained in detail. Part I is summarized in Chapter 5, where a general method is outlined to fit continuous distributions to a grouped data set. Distributions such as the Weibull, the log-logistic and the Pareto distributions can be fitted very effectively by formulating the vector of constraints in terms of a linear model. In Part II it is explained how to model a grouped response variable in a multifactor design. This multifactor design arises from a cross classification of the various factors or independent variables to be analysed.
The cross classification of the factors results in a total of T cells, each containing a frequency distribution. Distribution fitting is done simultaneously in each of the T cells of the multifactor design. Distribution fitting is also done under the additional constraints that the parameters of the underlying continuous distributions satisfy a certain structure or design. The effect of the factors on the grouped response variable may be evaluated from this fitted design. Applications of a single-factor and a two-factor model are considered to demonstrate the versatility of the technique. A two-way contingency table where the two variables have an underlying bivariate normal distribution is considered in Part III. The estimation of the bivariate normal distribution reveals the complete underlying continuous structure between the two variables. The ML estimate of the correlation coefficient ρ is used to great effect to describe the relationship between the two variables. Apart from an application, a simulation study is also provided to support the proposed method.
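The abstract's core step, fitting a continuous distribution to binned frequencies by maximum likelihood, can be sketched generically. The snippet below is not the Matthews-Crowther constrained procedure; it is a plain multinomial ML fit of an exponential distribution to grouped data, and the bin edges, counts, and optimizer bounds are all invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical grouped data: bin edges and observed frequencies
edges = np.array([0.0, 1.0, 2.0, 3.0, 5.0, np.inf])
freq = np.array([180, 105, 62, 38, 15])

def neg_log_lik(lam):
    # Cell probabilities under Exp(lam): F(b) - F(a), with F(x) = 1 - exp(-lam*x)
    cdf = 1.0 - np.exp(-lam * edges)
    cdf[-1] = 1.0  # upper tail closes the last cell
    p = np.diff(cdf)
    # Multinomial log-likelihood up to a constant
    return -np.sum(freq * np.log(p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded")
lam_hat = res.x  # ML estimate of the exponential rate
```

The same multinomial-likelihood skeleton applies to any distribution with a closed-form CDF; the thesis's contribution is to express such fits through constraints rather than direct optimization.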
Thesis (PhD (Mathematical Statistics))--University of Pretoria, 2007.
3

Tam, Yuk-ching (譚玉貞). "Some practical issues in estimation based on a ranked set sample." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221683.

4

Li, Tao, and N. Balakrishnan (supervisor). "Ordered ranked set samples and applications to statistical inference." *McMaster only, 2005.

5

Frey, Jesse C. "Inference procedures based on order statistics." Connect to this title online, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1122565389.

Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 148 p.; also includes graphics. Includes bibliographical references (p. 146-148). Available online via OhioLINK's ETD Center
6

Cotellesso, Paul. "Statistical and Fuzzy Set Modeling for the Risk Analysis for Critical Infrastructure Protection." The Ohio State University, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=osu1250427229.

7

Martin, Russell Andrew. "Paths, sampling, and Markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.

8

Arendt, Christopher D. "Adaptive Pareto Set Estimation for Stochastic Mixed Variable Design Problems." Ft. Belvoir : Defense Technical Information Center, 2009. http://handle.dtic.mil/100.2/ADA499860.

9

Wynn, Troy Alden. "Statistical Analysis of the USU Lidar Data Set with Reference to Mesospheric Solar Response and Cooling Rate Calculation, with Analysis of Statistical Issues Affecting the Regression Coefficients." DigitalCommons@USU, 2010. https://digitalcommons.usu.edu/etd/797.

Abstract:
Though the least squares technique has many advantages, its possible limitations as applied in the atmospheric sciences have not yet been fully explored in the literature. The assumption that the atmosphere responds either in phase or out of phase to the solar input is ubiquitous. However, our analysis found this assumption to be incorrect. If not properly addressed, the possible consequences are bias in the linear trend coefficient and attenuation of the solar response coefficient. Using USU Rayleigh lidar temperature data, we found a significant phase offset to the solar input in the temperatures that varies by ±5 years depending on altitude. In addition to introducing a phase offset into the linear regression model, we argue that separating what we identify as the solar-noise is to be preferred because (1) the solar-noise can contain important physical information, (2) its omission could lead to spurious conclusions about the significance of the solar-proxy coefficient, and (3) its omission could also bias the solar proxy coefficient. We also argue that the Mt. Pinatubo eruption caused a positive temperature perturbation in our early mesopause temperatures, exerting leverage on the linear trend coefficient. In the upper mesosphere, we found a linear cooling trend of greater than -1.5 K/year, which is possibly exaggerated because of leverage from the earlier temperatures and/or collinearity. In the middle mesosphere we found a cooling trend of -1 K/year to near zero. We use the autocorrelation coefficient of the model residuals as a physical parameter. The autocorrelation can provide information about how strongly current temperatures are affected by prior temperatures or how quickly a physical process is occurring. The amplitudes and phases of the annual oscillation in our data compare favorably with those from the OHP and CEL French lidars, as well as the HALOE satellite instrument measurements.
The semiannual climatology from the USU temperatures is similar to that from the HALOE temperatures. We also found that our semiannual and annual amplitudes and phases compare favorably with those from the HALOE, OHP, and CPC data.
10

Wright, Christopher M. "Using Statistical Methods to Determine Geolocation Via Twitter." TopSCHOLAR®, 2014. http://digitalcommons.wku.edu/theses/1372.

Abstract:
With the ever-expanding usage of social media websites such as Twitter, it is possible to use statistical inquiries to estimate the geographic location of a person using solely the content of their tweets. In a 2010 study, Zhiyuan Cheng was able to detect the location of a Twitter user to within 100 miles of their actual location 51% of the time. While this may seem like a significant result, the study was done while Twitter was still finding its footing: in 2010, Twitter had 75 million unique registered users, whereas as of March 2013 it has around 500 million. In this thesis, my own dataset was collected and, using Excel macros, my results are compared with Cheng's to see whether the results have changed over the three years since his study. If Cheng's 51% can be matched using a simpler methodology, this could have a significant impact on Homeland Security and cyber-security measures.
11

Baird, Denis Andrew. "Statistical and bioinformatics approaches for discovering pathogenic single nucleotide variants in idiopathic early on-set nephrotic syndrome using exome sequencing." Thesis, University of Bristol, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.723503.

12

Wang, Jiachao. "Bayesian analysis for quantification of individual rat and human behavioural patterns during attentional set-shifting tasks." Thesis, University of St Andrews, 2018. http://hdl.handle.net/10023/14843.

Abstract:
Attentional set-shifting tasks, consisting of multiple stages of discrimination learning, have been widely used in animals and humans to investigate behavioural flexibility. However, there are several learning criteria (e.g., 6-correct-choices-in-a-row, or 10-out-of-12-correct) by which a subject might be judged to have learned a discrimination. Furthermore, the current frequentist approach does not provide a detailed analysis of individual performance. In this PhD study, a large set of archival data of rats performing a 7-stage intra-dimensional/extra-dimensional (ID/ED) attentional set-shifting task was analysed, using a novel Bayesian analytical approach, to estimate each rat's learning processes over its trials within the task. The analysis showed that the Bayesian learning criterion may be an appropriate alternative to the frequentist n-correct-in-a-row criterion for studying performance. The individual analysis of rats' behaviour using the Bayesian model also suggested that the rats responded according to a number of irrelevant spatial and perceptual information sources before the correct stimulus-reward association was established. The efficacy of the Bayesian analysis of individual subjects' behaviour and the appropriateness of the Bayesian learning criterion were also supported by the analysis of simulated data in which the behavioural choices in the task were generated by known rules. Additionally, the efficacy was also supported by analysis of human behaviour during an analogous human 7-stage attentional set-shifting task, where participants' detailed learning processes were collected based on their trial-by-trial oral report. Further, an extended Bayesian approach, which considers the effects of feedback (correct vs incorrect) after each response in the task, can even help infer whether individual human participants have formed an attentional set, which is crucial when applying the set-shifting task to an evaluation of cognitive flexibility.
Overall, this study demonstrates that the Bayesian approach can yield additional information not available to the conventional frequentist approach. Future work could include refining the rat Bayesian model and the development of an adaptive trial design.
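The contrast the abstract draws between a frequentist run criterion and a Bayesian learning criterion can be pictured with a Beta-Bernoulli sketch. The choice sequence, uniform prior, and 0.95 threshold below are invented; this is not the thesis's actual model:

```python
from scipy.stats import beta

# Toy choice sequence: 1 = correct, 0 = incorrect (invented data)
choices = [0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1]

def bayes_learned(choices, prior=(1, 1), threshold=0.95):
    """Trial at which P(p_correct > 0.5 | data) first exceeds the threshold."""
    a, b = prior
    for i, c in enumerate(choices, start=1):
        a, b = a + c, b + (1 - c)             # Beta posterior update
        if beta.sf(0.5, a, b) > threshold:    # posterior mass above chance
            return i
    return None

def run_criterion(choices, n=6):
    """Trial at which the last n choices are all correct (frequentist rule)."""
    for i in range(n, len(choices) + 1):
        if all(choices[i - n:i]):
            return i
    return None
```

On this sequence the Bayesian criterion declares learning at trial 12, while the 6-in-a-row rule is never satisfied, which is the kind of divergence between criteria the thesis examines.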
13

Morgan, Nathaniel Ray. "A New Liquid-Vapor Phase Transition Technique for the Level Set Method." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6895.

Abstract:
The level set method offers a simple and robust approach to modeling liquid-vapor interfaces that arise in boiling and condensing flows. The current liquid-vapor phase-transition techniques used with the level set method are not able to account for different thermal conductivities and specific heats in each respective phase, nor are they able to accurately account for latent heat absorption and release. This paper presents a new level set based technique for liquid-vapor phase-transition that accounts for different material properties in each respective phase, such as thermal conductivity and specific heat, while maintaining the interface at the saturation temperature. The phase-transition technique is built on the ghost fluid framework coupled with the standard level set method. A new technique is presented for constructing ghost nodes that implicitly captures the immersed boundary conditions and is second order accurate. The method is tested against analytical solutions, and it is used to model film boiling. The new phase-transition technique will greatly assist efforts to accurately capture the physics of boiling and condensing flows. In addition to presenting a new phase transition technique, a coupled level set volume of fluid advection scheme is developed for phase transition flows. The new scheme resolves the mass loss problem associated with the level set method, and the method provides an easy way to accurately calculate the curvature of an interface, which can be difficult with the volume of fluid method. A film boiling simulation is performed to illustrate the superior performance of the coupled level set volume of fluid approach over the level set method and the volume of fluid method.
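A minimal sketch of the ghost-node idea mentioned above, reduced to one dimension: locate the zero level set by linear interpolation, then build a ghost temperature so the profile passes through the saturation temperature exactly at the interface. The grid, temperatures, and interface position are invented; the thesis's actual construction is multidimensional:

```python
import numpy as np

# 1D toy grid: phi is a signed distance, negative in liquid, positive in vapor.
x = np.linspace(0.0, 1.0, 11)
phi = x - 0.47                       # zero crossing between x = 0.4 and x = 0.5

# Locate the interface by linear interpolation of the zero level set
i = np.flatnonzero(np.sign(phi[:-1]) != np.sign(phi[1:]))[0]
theta = phi[i] / (phi[i] - phi[i + 1])          # fractional crossing position
x_if = x[i] + theta * (x[i + 1] - x[i])

# Build a liquid-side ghost value at node i+1 so that the linear temperature
# profile hits T_sat exactly at the interface (the basic ghost-fluid idea).
T_sat, T_liq = 373.15, 370.0                    # invented temperatures
T_ghost = (T_sat - (1.0 - theta) * T_liq) / theta
```

Solving for `T_ghost` this way implicitly imposes the immersed Dirichlet condition T = T_sat at the interface, which is the role the ghost nodes play in the method described above.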
14

Grover, Piyush. "Finding and exploiting structure in complex systems via geometric and statistical methods." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28019.

Abstract:
The dynamics of a complex system can be understood by analyzing the phase space structure of that system. We apply geometric and statistical techniques to two Hamiltonian systems to find and exploit structure in the phase space that helps us get qualitative and quantitative results about the phase space transport. While the structure can be revealed by the study of invariant manifolds of fixed points and periodic orbits in the first system, there do not exist any fixed points (and hence invariant manifolds) in the second system. The use of statistical (or measure theoretic) and topological methods reveals the phase space structure even in the absence of fixed points or stable and unstable invariant manifolds. The first problem we study is the four-body problem in the context of a spacecraft in the presence of a planet and two of its moons, where we exploit the phase space structure of the problem to devise an intelligent control strategy to achieve mission objectives. We use a family of analytically derived controlled Keplerian Maps in the Patched-Three-Body framework to design fuel efficient trajectories with realistic flight times. These maps approximate the dynamics of the Planar Circular Restricted Three Body Problem (PCR3BP) and we patch solutions in two different PCR3BPs to form the desired trajectories in the four body system. The second problem we study concerns phase space mixing in a two-dimensional time dependent Stokes flow system. Topological analysis of the braiding of periodic points has been recently used to find lower bounds on the complexity of the flow via the Thurston-Nielsen classification theorem (TNCT). We extend this framework by demonstrating that in a perturbed system with no apparent periodic points, the almost-invariant sets computed using a transfer operator approach are the natural objects on which to pin the TNCT.
Ph. D.
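The transfer-operator approach mentioned in the abstract is often discretized with Ulam's method: partition the domain into boxes, estimate box-to-box transition probabilities by sampling, and read almost-invariant sets off the second eigenvector. The map, box count, and leak size below are invented and unrelated to the Stokes flow studied in the thesis:

```python
import numpy as np

def toy_map(x):
    # Doubling dynamics confined to each half of [0, 1), plus a small shift
    # that leaks a little mass across x = 0.5 (invented two-well example).
    y = (2.0 * x) % 0.5 + (0.5 if x >= 0.5 else 0.0)
    return (y + 0.01) % 1.0

n, samples = 50, 200
P = np.zeros((n, n))                 # Ulam transition matrix
for i in range(n):
    xs = (i + (np.arange(samples) + 0.5) / samples) / n   # test points in box i
    for x in xs:
        P[i, min(int(toy_map(x) * n), n - 1)] += 1.0 / samples

vals, vecs = np.linalg.eig(P.T)
order = np.argsort(-vals.real)
second = vecs[:, order[1]].real
# The sign pattern of `second` splits the boxes into two almost-invariant sets.
```

The leading eigenvalue is 1 (row-stochastic matrix); a second eigenvalue close to 1 signals slow transport between the two halves, and the second eigenvector changes sign between them.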
15

Okada, Daigo. "Decomposition of a set of distributions in extended exponential family form for distinguishing multiple oligo-dimensional marker expression profiles of single-cell populations and visualizing their dynamics." Doctoral thesis, Kyoto University, 2021. http://hdl.handle.net/2433/263569.

16

Fetisova, Ekaterina. "Towards a flexible statistical modelling by latent factors for evaluation of simulated responses to climate forcings." Doctoral thesis, Stockholms universitet, Matematiska institutionen, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-148208.

Abstract:
In this thesis, using the principles of confirmatory factor analysis (CFA) and the cause-effect concept associated with structural equation modelling (SEM), a new flexible statistical framework for evaluation of climate model simulations against observational data is suggested. The design of the framework also makes it possible to investigate the magnitude of the influence of different forcings on the temperature as well as to investigate a general causal latent structure of temperature data. In terms of the questions of interest, the framework suggested here can be viewed as a natural extension of the statistical approach of 'optimal fingerprinting', employed in many Detection and Attribution (D&A) studies. Its flexibility means that it can be applied under different circumstances concerning such aspects as the availability of simulated data, the number of forcings in question, the climate-relevant properties of these forcings, and the properties of the climate model under study, in particular, those concerning the reconstructions of forcings and their implementation. It should also be added that although the framework involves the near-surface temperature as a climate variable of interest and focuses on the time period covering approximately the last millennium prior to the industrialisation period, the statistical models included in the framework can in principle be generalised to any period in the geological past, provided that simulations and proxy data on any continuous climate variable are available. Within the confines of this thesis, the performance of some CFA- and SEM-models is evaluated in pseudo-proxy experiments, in which the true unobservable temperature series is replaced by temperature data from a selected climate model simulation.
The results indicated that depending on the climate model and the region under consideration, the underlying latent structure of temperature data can be of varying complexity, thereby rendering our statistical framework, serving as a basis for a wide range of CFA- and SEM-models, a powerful and flexible tool. Thanks to these properties, its application ultimately may contribute to an increased confidence in the conclusions about the ability of the climate model in question to simulate observed climate changes.

At the time of the doctoral defense, the following papers were unpublished: Paper 2: Manuscript. Paper 3: Manuscript.

17

Bui, Minh Thanh. "Statistical modeling, level-set and ensemble learning for automatic segmentation of 3D high-frequency ultrasound data : towards expedited quantitative ultrasound in lymph nodes from cancer patients." Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066146/document.

Abstract:
This work investigates approaches to obtain automatic segmentation of three media (i.e., lymph node parenchyma, perinodal fat and normal saline) in lymph node (LN) envelope data to expedite quantitative ultrasound (QUS) in dissected LNs from cancer patients. A statistical modeling study identified a two-parameter gamma distribution as the best model for data from the three media based on its high fitting accuracy, its analytically less-complex probability density function (PDF), and closed-form expressions for its parameter estimation. Two novel level-set segmentation methods that made use of localized statistics of envelope data to handle data inhomogeneities caused by attenuation and focusing effects were developed. The first, local region-based gamma distribution fitting (LRGDF), employed the gamma PDFs to model speckle statistics of envelope data in local regions at a controllable scale using a smooth function with a compact support. The second, statistical transverse-slice-based level-set (STS-LS), used gamma PDFs to locally model speckle statistics in consecutive transverse slices. A novel method was then designed and evaluated to automatically initialize the LRGDF and STS-LS methods using random forest classification with new proposed features. Methods developed in this research provided accurate, automatic and efficient segmentation results on simulated envelope data and data acquired for LNs from colorectal- and breast-cancer patients as compared with manual expert segmentation. Results also demonstrated that accurate QUS estimates are maintained when automatic segmentation is applied to evaluate excised LN data
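The abstract notes that the gamma model was chosen partly for its simple parameter estimation. A minimal moment-based version is easy to sketch; the draws below are simulated, not ultrasound envelope data, and the thesis may use a different estimator:

```python
import numpy as np

# Simulated stand-in for envelope samples (shape and scale are invented)
rng = np.random.default_rng(1)
x = rng.gamma(shape=3.0, scale=2.0, size=20_000)

# Method-of-moments fit of the two-parameter gamma distribution:
# mean = k * theta, variance = k * theta**2
m, v = x.mean(), x.var()
k_hat = m**2 / v       # shape estimate
theta_hat = v / m      # scale estimate
```

With closed-form estimates like these, per-region fitting inside a level-set segmentation loop stays cheap, which is the practical point the abstract makes about choosing the gamma model.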
18

Lu, Yingzhou. "Multi-omics Data Integration for Identifying Disease Specific Biological Pathways." Thesis, Virginia Tech, 2018. http://hdl.handle.net/10919/83467.

Abstract:
Pathway analysis is an important task for gaining novel insights into the molecular architecture of many complex diseases. With the advancement of new sequencing technologies, a large amount of quantitative gene expression data has been continuously acquired. Emerging omics data sets such as proteomics have facilitated the investigation of disease-relevant pathways. Although much work has previously been done to explore single omics data, little work has been reported using multi-omics data integration, mainly due to methodological and technological limitations. While a single omics data set can provide useful information about the underlying biological processes, multi-omics data integration gives a much more comprehensive view of the cause-effect processes responsible for diseases and their subtypes. This project investigates the combination of miRNAseq, proteomics, and RNAseq data on seven types of muscular dystrophies and a control group. These unique multi-omics data sets provide us with the opportunity to identify disease-specific and most relevant biological pathways. We first perform the t-test and the OVEPUG test separately to define the differentially expressed genes in the protein and mRNA data sets. In the multi-omics data sets, miRNA also plays a significant role in muscle development by regulating its target genes in the mRNA data set. To exploit the relationship between miRNA and gene expression, we consult the commonly used target library TargetScan to collect all paired miRNA-mRNA and miRNA-protein co-expression pairs. Next, by conducting statistical analyses such as Pearson's correlation coefficient or the t-test, we measure the biologically expected correlation of each gene with its upstream miRNAs and identify those showing negative correlation among the aforementioned miRNA-mRNA and miRNA-protein pairs.
Furthermore, we identify and assess the most relevant disease-specific pathways by inputting the differentially expressed genes and the negatively correlated genes into the gene-set libraries, and further characterize these prioritized marker subsets using IPA (Ingenuity Pathway Analysis) or KEGG. We then use Fisher's method to combine the p-values derived from separate gene sets into a joint significance test assessing common pathway relevance. In conclusion, we find all negatively correlated miRNA-mRNA and miRNA-protein pairs and identify several pathophysiological pathways related to muscular dystrophies by gene set enrichment analysis. This novel multi-omics data integration study and subsequent pathway identification will shed new light on pathophysiological processes in muscular dystrophies, improve our understanding of the molecular pathophysiology of muscle disorders, and in the long term help prevent and treat disease.
Master of Science
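The final combination step described in the abstract above, Fisher's method for pooling p-values across gene-set tests, is standard and easy to sketch (the example p-values are invented):

```python
import math
from scipy.stats import chi2

def fisher_combine(pvalues):
    """Fisher's method: -2 * sum(log p_i) ~ chi-square with 2k df under H0."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    return chi2.sf(stat, df=2 * len(pvalues))

# Three hypothetical per-gene-set p-values combined into one
combined = fisher_combine([0.04, 0.10, 0.03])
```

Note the combined p-value (about 0.006 here) can be smaller than any individual one, which is why Fisher's method suits aggregating weak but consistent pathway signals.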
19

Plaß, Julia, and Thomas Augustin (supervisor). "Statistical modelling of categorical data under ontic and epistemic imprecision : contributions to power set based analyses, cautious likelihood inference and (non-)testability of coarsening mechanisms." München: Universitätsbibliothek der Ludwig-Maximilians-Universität, 2018. http://d-nb.info/116087624X/34.

20

Anxionnat, Adrien. "Segmentation of high frequency 3D ultrasound images for skin disease characterization." Thesis, KTH, Teknisk informationsvetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-209203.

Abstract:
This work is rooted in a need for dermatologists to explore skin characteristics in depth. The influence of skin disease such as acne on dermal tissues is still a complex task to assess. Among the possibilities, high frequency ultrasound imaging is a paradigm shift to probe and characterize the upper and deep dermis. For this purpose, a cohort of 58 high-frequency 3D images has been acquired by the French laboratory Pierre Fabre in order to study acne vulgaris disease. This common skin disorder is a societal challenge and burden affecting late adolescents across the world. The medical protocol developed by Pierre Fabre was to screen a lesion every day during 9 days for different patients with ultrasound imaging. The provided data features skin epidermis and dermis structure with a fantastic resolution. The strategy we used to study these data can be explained in three steps. First, the epidermis surface is detected among artifacts and noise thanks to a robust level-set algorithm. Secondly, acne spots are located on the resulting height map and associated to each other across the data by computing and thresholding a local variance. And eventually, potential inflammatory dermal cavities related to each lesion are geometrically and statistically characterized in order to assess the evolution of the disease. The results present an automatic algorithm which permits dermatologists to screen acne vulgaris lesions and to characterize them in a complete data set. It can hence be a powerful toolbox to assess the efficiency of a treatment.
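The second step described above, locating lesions by thresholding a local variance of the height map, might look like this sketch. The window size, threshold, and synthetic height map are invented for illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic epidermis height map with one "lesion" bump (invented data)
rng = np.random.default_rng(2)
height = rng.normal(0.0, 0.05, size=(64, 64))
height[20:28, 30:38] += 1.0

def local_variance(img, size=7):
    """Var = E[x^2] - E[x]^2 computed over a sliding window."""
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return mean_sq - mean * mean

var_map = local_variance(height)
mask = var_map > 0.05          # candidate lesion pixels (threshold invented)
```

The variance peaks on the bump's rim, where the window straddles the height step, so the mask outlines lesion boundaries rather than flat background or the flat bump interior.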
APA, Harvard, Vancouver, ISO, and other styles
21

Ghafouri, Soheila. "Två synsätt på elevers lärande av ämnet statistik : En studie av elever i årskurs 7." Thesis, Stockholms universitet, Institutionen för matematikämnets och naturvetenskapsämnenas didaktik, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-105995.

Full text
Abstract:
The purpose of this paper is to create increased understanding of how pupils learn statistics. This includes gaining insight into pupils' use of their own experience and group experience to help them get a better understanding of statistical problem solving. The study's research questions concern how pupils learn to work with data in tables and diagrams and how pupils learn to work with measures. The theoretical framework consists of two approaches to studying learning. One approach is based on pupils' cognitive conditions, called set-befores, and the pupils' previous experiences, called met-befores. The second starting point is the pragmatic mindset that focuses on the language game: how pupils learn during meetings between pupils and between pupils and teachers. The survey was conducted by using structured observations of pupils' statistical problem solving and the discourse that went on in the classroom. One teacher and that teacher's pupils were observed during six sessions with small groups of Year 7 pupils, who in turn were part of two larger groups. The result showed that pupils were able to identify, understand and interpret statistical data by seeing patterns, similarities and differences. The participants' learning was affected by the language they used. Pupils were able to recreate images using reflective thought experiments during the meetings. The discussions helped the participants to get started with their thoughts and to give those thoughts some structure in developing and understanding the relationships between different diagrams. The teacher and the group helped the pupils to learn to interpret data while working. It was easier if pupils used the correct words when they had to argue. Proper use of words from the statistical register, when pupils worked with measures of center, also helped the pupils to develop cognitively. The pupils who could use the statistical register were also more easily understood and respected by the group.
The purpose of this paper is to create an increased understanding of how pupils solve statistical tasks and learn statistics. This also includes gaining insight into pupils' use of their own experience and the group's experience as an aid to a better understanding of statistical problem solving. The study's research questions concern how pupils learn to work with data in tables and diagrams and how pupils learn to work with measures of central tendency. The theoretical framework consists of two perspectives on learning. One perspective starts from the pupils' cognitive preconditions, set-befores, and the pupils' previous experiences, met-befores. The other starting point is the pragmatic mindset that focuses on the language game: how pupils learn during meetings between pupils and between pupils and the teacher. The survey was carried out using structured observation studies of the pupils' statistical problem solving and the discourses that went on in the classroom. The observations covered one teacher and that teacher's pupils, who were observed during six lessons with small groups of Year 7 pupils, who in turn belonged to two larger groups. The results show that the pupils could identify, understand and interpret statistical tasks by seeing patterns, similarities and differences. The participants' learning was affected by the language and the language game that went on. The pupils could recreate images with the help of reflective thought experiments during the meetings. The meetings helped the participants to get started with their thoughts, give them structure, and develop and understand the relationships between different diagrams. The teacher and the group helped the pupils learn to interpret data during the work. It was easier to argue when the pupils used the correct words from the statistical register. Correct use of words from the statistical register, for example when the pupils worked with measures of central tendency, also helped the pupils to develop cognitively.
The pupils who could use the statistical register were also more easily understood and respected by the group.
APA, Harvard, Vancouver, ISO, and other styles
22

Nahhas, Ramzi William. "Ranked set sampling : ranking error models, cost, and optimal set size /." The Ohio State University, 1999. http://rave.ohiolink.edu/etdc/view?acc_num=osu1488187049542056.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Chalamandaris, Alexandros-Georgios. "EVIDENCE-BASED HEALTH PROMOTION: EXPLORING THE EVOLUTION OF THE EFFECTIVENESS OF SCHOOL-BASED ANTI-BULLYING INTERVENTIONS OVER TIME." Doctoral thesis, Universite Libre de Bruxelles, 2018. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/269925.

Full text
Abstract:
The objectives of this thesis were to explore how effectiveness of school-based anti-bullying interventions (SBABI) evolves over time and to assess the possibility to predict the medium-term or long-term effectiveness of SBABIs on the basis of their short-term effectiveness. The first step included a literature review in order to understand the study designs and evaluation techniques that researches used to assess the effectiveness. This literature review described the methodologies based on which researchers collected evidence and concluded on the effectiveness of their SBABIs. In order to address the thesis objectives, a collaborative project was established, named SET-Bullying (“Statistical modelling of the Effectiveness of school based anti-bullying interventions and Time”). The above-mentioned literature review was used to identify potentially eligible studies. After addressing a call for collaboration to the corresponding authors of these studies, this project included data from two of them, the DFE-SHEFFIELD study from United Kingdom and the RESPEKT study from Norway. Both of these studies have used pupil self-reported frequencies on being bullied and bullying others as an effectiveness measure, but using different instruments to elicit this information. Thus, the subsequent step of this thesis was to harmonize the data from these studies using polychoric principal components analysis, in order to be able to perform the same analysis with the data from both studies. The data from both studies were analysed using mixed effect models in order to take into account the hierarchical (i.e. the responses of pupils from the same school may be more correlated with each other as opposed to the responses of pupils from different schools) and the longitudinal structure (i.e. same pupils are more likely to respond in a similar way in the repeated measurements of each studies) of the data. 
With regard to the primary objective of the thesis, it was observed that effectiveness (where it is observed) may evolve either in a linear fashion or a “delayed effect” may be observed. This refers to a minimal evolution of effectiveness over the first study measurements and a sharper evolution at the later study measurements. This finding is only hypothesis generating at this point. Would this be confirmed in future studies, it will have important implication of the design, implementation and evaluations of SBABIs. About the secondary objective of this thesis, there were some preliminary findings of the possibility to predict the medium-term or long-term effectiveness based on the short-term effectiveness. However, these predictions in some cases seemed to be very variable. Future research should focus on how to make these predictions more accurate in order that this allows for dynamic and adaptable delivery of SBABIs.
Doctorat en Santé Publique
info:eu-repo/semantics/nonPublished
APA, Harvard, Vancouver, ISO, and other styles
24

Dixon, Mark J. "Statistical analysis of extreme sea levels." Thesis, Lancaster University, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.296884.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Lafont, Thibault. "Statistical vibroacoustics : study of SEA assumptions." Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0003/document.

Full text
Abstract:
Statistical Energy Analysis (SEA) is a statistical approach to vibroacoustics that describes complex systems in terms of exchanges of vibrational and acoustical energies. In the mid and high frequency ranges, this method is an alternative to deterministic methods (whose computational cost is driven by the large number of modes and degrees of freedom, and by the unicity of the solution). Nevertheless, its use requires knowing and respecting strong assumptions which limit its domain of application. In this dissertation, the foundations of SEA are examined in order to discuss each assumption. Diffuse field, equipartition of modal energy, weak coupling, the influence of non-resonant modes and rain-on-the-roof excitation are the five assumptions addressed. On the basis of simple examples (coupled oscillators, coupled plates), the equivalences and their influence on the quality of the results are studied, in order to help clarify the assumptions needed to apply SEA and to delimit its domain of validity.
Statistical energy analysis is a statistical approach to vibroacoustics which allows complex systems to be described in terms of vibrational or acoustical energies. In the high frequency range, this method constitutes an alternative that bypasses the problems which can occur when applying deterministic methods (computation cost due to the large number of modes, the large number of degrees of freedom and the unicity of the solution). But SEA rests on numerous assumptions which are sometimes forgotten or misunderstood. In this thesis, the foundations of SEA are examined in order to discuss each assumption. Diffuse field, modal energy equipartition, weak coupling, the influence of non-resonant modes and rain-on-the-roof excitation are the five assumptions examined. Based on simple examples (coupled oscillators, coupled plates), the possible equivalences and their influence on the quality of the results are discussed, to contribute to the clarification of the useful SEA assumptions and to mark out its domain of validity.
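The energy bookkeeping that SEA performs can be illustrated with the textbook two-subsystem power balance in a single frequency band (the coupled-oscillator and coupled-plate setting both abstracts mention). The numerical values used below are illustrative assumptions, not results from the thesis.

```python
import numpy as np

def sea_two_subsystems(omega, eta1, eta2, eta12, eta21, P1, P2=0.0):
    """Solve the steady-state SEA power balance
        P_i = omega * ((eta_i + eta_ij) * E_i - eta_ji * E_j)
    for the subsystem energies (E1, E2) in one frequency band.
    eta_i: damping loss factors; eta_ij: coupling loss factors."""
    A = omega * np.array([[eta1 + eta12, -eta21],
                          [-eta12, eta2 + eta21]])
    return np.linalg.solve(A, np.array([P1, P2]))
```

At the solution the injected power is fully dissipated, P1 = omega * (eta1 * E1 + eta2 * E2), which is a useful sanity check on any SEA model.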
APA, Harvard, Vancouver, ISO, and other styles
26

Fendrich, Samuel. "From axiomatization to generalization of set theory." Thesis, London School of Economics and Political Science (University of London), 1987. http://etheses.lse.ac.uk/3272/.

Full text
Abstract:
The thesis examines the philosophical and foundational significance of Cohen's Independence results. A distinction is made between the mathematical and logical analyses of the "set" concept. It is argued that topos theory is the natural generalization of the mathematical theory of sets and is the appropriate foundational response to the problems raised by Cohen's results. The thesis is divided into three parts. The first is a discussion of the relationship between "informal" mathematical theories and their formal axiomatic realizations, this relationship being singularly problematic in the case of set theory. The second part deals with the development of the set concept within the mathematical approach, in particular Skolem's reformulation of Zermelo's notion of "definite properties". In the third part an account is given of the emergence and development of topos theory. Then the considerations of the first two parts are applied to demonstrate that the shift to topos theory, specifically in its guise of LST (local set theory), is the appropriate next step in the evolution of the concept of set, within the mathematical approach, in the light of the significance of Cohen's Independence results.
APA, Harvard, Vancouver, ISO, and other styles
27

Parnell, Andrew Christopher. "The statistical analysis of former sea level." Thesis, University of Sheffield, 2005. http://etheses.whiterose.ac.uk/10284/.

Full text
Abstract:
This thesis provides the first template for estimating relative sea level curves and their associated uncertainties. More specifically, the thesis estimates the changing state of sea level in the Humber estuary, UK, over the course of the Holocene. These estimates are obtained through Bayesian methods involving Gaussian processes. Part of the task involves collating data sources from both archaeologists and geologists which have been collected during frequent study of the region. A portion of the thesis is devoted to studying the nature of the data, and the adjustment of the archaeological information so it can be used in a format suitable for estimating former sea level. The Gaussian processes are used to model sea-level change via a correlation function which assumes that data points close together in time and space should be at a similar elevation. This assumption is relaxed by incorporating non-stationary correlation functions and aspects of anisotropy. A sequence of models are fitted using Markov chain Monte Carlo. The resultant curves do not pre-suppose a functional form, and give a comprehensive framework for accounting for their uncertainty. A further complication is introduced as the temporal explanatory variables are stochastic: they arise as radiocarbon dates which require statistical calibration. The resulting posterior date densities are irregular and multi-modal. The spatio-temporal Gaussian process 2 model takes account of such irregularities via Monte Carlo simulation. The resultant sea-level curves are scrutinised at a number of locations around the Humber over a selection of time periods. It is hoped that they can provide insight into other areas of sea-level research, and into a broader palaeoclimate framework.
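The modelling idea described above, a Gaussian process whose correlation function makes data points close together in time and space take similar elevations, can be sketched with a stationary squared-exponential covariance. The thesis also uses non-stationary and anisotropic variants; the length scales and function names below are illustrative assumptions.

```python
import numpy as np

def sq_exp_cov(X1, X2, len_t, len_s, sigma2=1.0):
    """Squared-exponential covariance over (time, space) coordinates:
    correlation decays with separation in time and in space."""
    d = (X1[:, None, :] - X2[None, :, :]) / np.array([len_t, len_s])
    return sigma2 * np.exp(-0.5 * (d ** 2).sum(axis=-1))

def gp_posterior_mean(X, y, X_new, len_t, len_s, noise=1e-6):
    """Posterior mean of a zero-mean GP at new (time, space) points,
    given observed elevations y at points X."""
    K = sq_exp_cov(X, X, len_t, len_s) + noise * np.eye(len(X))
    K_star = sq_exp_cov(X_new, X, len_t, len_s)
    return K_star @ np.linalg.solve(K, y)
```

The posterior mean interpolates the data without presupposing a functional form for the sea-level curve, which is the property the thesis exploits.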
APA, Harvard, Vancouver, ISO, and other styles
28

Bohn, Lora L. "A nonparametric approach for ranked-set samples from two populations /." The Ohio State University, 1992. http://rave.ohiolink.edu/etdc/view?acc_num=osu1487775034177341.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Al-Olimat, Hussein S. "Knowledge-Enabled Entity Extraction." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1578100367105233.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Kartsonaki, Christiana. "Some aspects of complex statistical dependencies." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:878f4fcf-30de-4cbb-93fe-a8645cd13ba0.

Full text
Abstract:
In the first part parametric models for which the likelihood is intractable are discussed. A method for fitting such models when simulation from the model is possible is presented, which gives estimates that are linear functions of a possibly large set of candidate features. A combination of simulations based on a fractional design and sets of discriminant analyses is used to find an optimal estimate of the parameter vector and its covariance matrix. The procedure is an alternative to Approximate Bayesian Computation and Indirect Inference methods. A way of assessing goodness of fit is briefly described. In the second part the aim is to give a relationship between the effect of one or more explanatory variables on the response when adjusting for an intermediate variable and when not. This relationship is examined mainly for the cases in which the response depends on the two variables via a logistic regression or a proportional hazards model. Some of the theoretical results are illustrated using a set of data on prostate cancer. Then matched pairs with binary outcomes are discussed, for which two methods of analysis are described and compared.
APA, Harvard, Vancouver, ISO, and other styles
31

Gehly, Steve. "Estimation of geosynchronous space objects using finite set statistics filtering methods." Thesis, University of Colorado at Boulder, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10195335.

Full text
Abstract:

The use of near Earth space has increased dramatically in the past few decades, and operational satellites are an integral part of modern society. The increased presence in space has led to an increase in the amount of orbital debris, which poses a growing threat to current and future space missions. Characterization of the debris environment is crucial to our continued use of high value orbit regimes such as the geosynchronous (GEO) belt. Objects in GEO pose unique challenges, by virtue of being densely spaced and tracked by a limited number of sensors in short observation windows. This research examines the use of a new class of multitarget filters to approach the problem of orbit determination for the large number of objects present. The filters make use of a recently developed mathematical toolbox derived from point process theory known as Finite Set Statistics (FISST). Details of implementing FISST-derived filters are discussed, and a qualitative and quantitative comparison between FISST and traditional multitarget estimators demonstrates the suitability of the new methods for space object estimation. Specific challenges in the areas of sensor allocation and initial orbit determination are addressed in the framework. The sensor allocation scheme makes use of information gain functionals as formulated for FISST to efficiently collect measurements on the full multitarget system. Results from a simulated network of three ground stations tracking a large catalog of geosynchronous objects demonstrate improved performance as compared to simpler, non-information theoretic tasking schemes. Further studies incorporate an initial orbit determination technique to initiate new tracks in the multitarget filter. Together with a sensor allocation scheme designed to search for new targets and maintain knowledge of the existing catalog, the method comprises a solution to the search-detect-track problem. 
Simulation results for a single sensor case show that the problem can be solved for multiple objects with no a priori information, even in the presence of missed detections and false measurements. Collectively, this research seeks to advance the capabilities of FISST-derived filters for use in the estimation of geosynchronous space objects; additional directions for future research are presented in the conclusion.

APA, Harvard, Vancouver, ISO, and other styles
32

Connelly, Terence. "Structural vibration transmission in ships using statistical energy analysis." Thesis, Heriot-Watt University, 1999. http://hdl.handle.net/10399/1234.

Full text
Abstract:
This thesis presents the results of an investigation into the application of statistical energy analysis (SEA) to predict structure-borne noise transmission in ship structures. The first three chapters introduce the problems of noise and vibration in ships; the previous research on the application of SEA to ships; and the basic theory of SEA and the experimental measurement techniques and procedures used to gather data. The main body of this thesis presents a wave transmission model for the hull frame joint which is commonly encountered on the hull, bulkheads and deck plates of ship structures. The wave model allows the transmission coefficients to be calculated for hull frame joints, which can be used in the coupling loss factor equations of SEA models. The joint model has been verified against measured data taken on simple two-subsystem single-joint laboratory structures and a large complex 38-plate test structure with multiple joints intended to represent a 1/10th scale model of a hull section. In addition to the laboratory structures, the SEA modelling of sections of a ship is presented for a large ribbed deck plate, a section of the ship superstructure and a section of the ship's hull. The results from the SEA models are compared with measured attenuation data taken on the respective ship sections. A large amount of damping data has been gathered on the test and ship structures, and an equation for the internal damping of steel based on data gathered by other researchers has been verified. It has been shown in this thesis that SEA can be applied to ships. Better agreement is found with real structures, in contrast to the poor results presented for SEA when applied to simple one-dimensional structures. The level of detail of the model is important, as a coarse model yields better predictions of vibration level.
As with all models, the results are sensitive to the damping level, and it is necessary to include bending, longitudinal and transverse wave types in any SEA model to obtain the best prediction. It was also found that the flange plates can be neglected from the frame joint model without compromising the accuracy.
APA, Harvard, Vancouver, ISO, and other styles
33

Bashir, Hussam. "Calculation of Wave Propagation for Statistical Energy Analysis Models." Thesis, Uppsala universitet, Tillämpad mekanik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-267928.

Full text
Abstract:
This thesis investigates the problems of applying Statistical Energy Analysis (SEA) to models that include solid volumes. Three wave types (Rayleigh waves, pressure waves and shear waves) are important to SEA, and the mathematics behind them is explained here. The transmission coefficients between the wave types are needed for energy transfer in SEA analysis, and different approaches to solving the properties of wave propagation on a solid volume are discussed. For one of the propagation problems, a solution found in Momoi [6] is discussed, while the other problem remains unsolved due to the analytical difficulties involved.
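Of the three wave types mentioned, the Rayleigh wave is the one whose speed has no elementary closed form: it is the root of Rayleigh's characteristic equation in terms of the pressure speed cp and shear speed cs, and can be found numerically. A minimal bisection sketch, not code from the thesis:

```python
import math

def rayleigh_speed(cp, cs):
    """Rayleigh wave speed: root in (0, cs) of
    (2 - c^2/cs^2)^2 - 4*sqrt(1 - c^2/cp^2)*sqrt(1 - c^2/cs^2) = 0.
    The residual is negative near c = 0 and positive just below cs,
    so bisection brackets the physical root."""
    def f(c):
        x = (c / cs) ** 2
        y = (c / cp) ** 2
        return (2.0 - x) ** 2 - 4.0 * math.sqrt(1.0 - y) * math.sqrt(1.0 - x)
    lo, hi = 1e-6 * cs, cs * (1.0 - 1e-12)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For a Poisson solid (cp = sqrt(3)*cs) this returns the classical value c_R ≈ 0.9194*cs.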
APA, Harvard, Vancouver, ISO, and other styles
34

Alexandridis, Roxana Antoanela. "Minimum disparity inference for discrete ranked set sampling data." Connect to resource, 2005. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1126033164.

Full text
Abstract:
Thesis (Ph. D.)--Ohio State University, 2005.
Title from first page of PDF file. Document formatted into pages; contains xi, 124 p.; also includes graphics. Includes bibliographical references (p. 121-124). Available online via OhioLINK's ETD Center
APA, Harvard, Vancouver, ISO, and other styles
35

Tarazona, Campos Sonia. "Statistical methods for transcriptomics: From microarrays to RNA-seq." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48485.

Full text
Abstract:
La transcriptómica estudia el nivel de expresión de los genes en distintas condiciones experimentales para tratar de identificar los genes asociados a un fenotipo dado así como las relaciones de regulación entre distintos genes. Los datos ómicos se caracterizan por contener información de miles de variables en una muestra con pocas observaciones. Las tecnologías de alto rendimiento más comunes para medir el nivel de expresión de miles de genes simultáneamente son los microarrays y, más recientemente, la secuenciación de RNA (RNA-seq). Este trabajo de tesis versará sobre la evaluación, adaptación y desarrollo de modelos estadísticos para el análisis de datos de expresión génica, tanto si ha sido estimada mediante microarrays o bien con RNA-seq. El estudio se abordará con herramientas univariantes y multivariantes, así como con métodos tanto univariantes como multivariantes.
Tarazona Campos, S. (2014). Statistical methods for transcriptomics: From microarrays to RNA-seq [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48485
TESIS
Premiado
APA, Harvard, Vancouver, ISO, and other styles
36

Stupnikov, Aleksei. "Statistical models for RNA-seq data analysis of cancer." Thesis, Queen's University Belfast, 2017. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.728670.

Full text
Abstract:
In our research we addressed several major points related to RNA-seq-based models for cancer. The first chapter reviews various genomics technologies from the pre-NGS era and the most commonly used NGS platforms, as well as recently developed methods. From here the main concepts of differential expression for the SAGE technology and RNA-seq were considered, going on to discuss several of the most widely used methods in the field. In the third chapter we formulated the biological problem, that is, reproducibility and robustness of RNA-seq differential expression analysis, and made some general observations on count distributions of cancer-related RNA-seq data as well as the impact of sequencing depth alterations on the data. In chapter five we employed this robustness approach to rank the performance of existing differential gene expression (DGE) models and studied the effects of subsampling, in terms of library size and number of samples, on the outcome of a DGE analysis. In addition, in this chapter we introduced samExploreR - an R package that allows one to implement the sequencing-depth-altering simulations quickly and efficiently. Building on this work we applied the concept of subsampling to Quadratic - a candidate compound discovery framework based on connectivity mapping - and explored its robustness and reproducibility for various datasets. Finally, in chapter seven we explored how integrating information from different RNA-seq based approaches may affect the resulting outcome of the analysis and studied the robustness of those methods. The approaches adopted in this body of work allowed us to introduce the procedure of subsampling as a quality control measure that can allow an inference of quality when applied to datasets in research and clinical procedures.
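The sequencing-depth alteration underlying this subsampling approach can be emulated by binomial thinning of a feature count vector, keeping each read independently with a fixed probability. This is a generic sketch of the idea, not the samExploreR implementation:

```python
import random

def thin_counts(counts, fraction, seed=0):
    """Emulate a shallower sequencing run: keep each read of each
    feature independently with probability `fraction`."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(c) if rng.random() < fraction)
            for c in counts]
```

Re-running a differential expression pipeline on thinned counts and comparing the resulting gene lists is the kind of robustness check the thesis proposes as a quality control measure.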
APA, Harvard, Vancouver, ISO, and other styles
37

Tam, Yuk-ching. "Some practical issues in estimation based on a ranked set sample /." Hong Kong : University of Hong Kong, 1999. http://sunzi.lib.hku.hk/hkuto/record.jsp?B20897169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Stark, Gregory V. "mperfect ranking models and their use in the evaluation of ranked-set sampling procedures /." The Ohio State University, 2001. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486398528559461.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Lambert, Richard M. "Comparing Performance of Gene Set Test Methods Using Biologically Relevant Simulated Data." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7377.

Full text
Abstract:
Today we know that there are many genetically driven diseases and health conditions. These problems often manifest only when a set of genes are either active or inactive. Recent technology allows us to measure the activity level of genes in cells, which we call gene expression. It is of great interest to society to be able to statistically compare the gene expression of a large number of genes between two or more groups. For example, we may want to compare the gene expression of a group of cancer patients with a group of non-cancer patients to better understand the genetic causes of that particular cancer. Understanding these genetic causes could potentially lead to improved treatment options. Initially, gene expression was tested on a per-gene level for statistical difference. In more recent years, it has been determined that grouping genes together by biological processes into gene sets and comparing groups at the gene set level probably makes more sense biologically. A number of gene set test methods have since been developed. It is critically important that we know if these gene set test methods are accurate. In this research, we compare the accuracy of a group of popular gene set test methods across a range of biologically realistic scenarios. In order to measure accuracy, we need to know whether each gene set is differentially expressed or not. Since this is not possible in real gene expression data, we use simulated data. We develop a simulation framework that generates gene expression data that is representative of actual gene expression data and use it to test each gene set method over a range of biologically relevant scenarios. We then compare the power and false discovery rate of each method across these scenarios.
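The two evaluation criteria named at the end, power and false discovery rate, reduce to simple counts once the simulated truth and a method's calls are known. A minimal sketch (function and variable names are illustrative):

```python
def power_and_fdr(truth, called):
    """Given booleans marking truly differential gene sets (`truth`)
    and the gene sets a method declares significant (`called`),
    return empirical power (TP rate) and false discovery rate."""
    tp = sum(t and c for t, c in zip(truth, called))
    fp = sum((not t) and c for t, c in zip(truth, called))
    fn = sum(t and (not c) for t, c in zip(truth, called))
    power = tp / (tp + fn) if (tp + fn) else 0.0
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return power, fdr
```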
APA, Harvard, Vancouver, ISO, and other styles
40

Malmberg, Hannes. "Random Choice over a Continuous Set of Options." Licentiate thesis, Stockholms universitet, Matematiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-89917.

Full text
Abstract:
Random choice theory has traditionally modeled choices over a finite number of options. This thesis generalizes the literature by studying the limiting behavior of choice models as the number of options approaches a continuum. The thesis uses the theory of random fields, extreme value theory and point processes to calculate this limiting behavior. For a number of distributional assumptions, we can give analytic expressions for the limiting probability distribution of the characteristics of the best choice. In addition, we also outline a straightforward extension to our theory which would significantly relax the distributional assumptions needed to derive analytical results. Some examples from commuting research are discussed to illustrate potential applications of the theory.
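A concrete instance of the finite-options starting point: when each option's utility is its deterministic value plus an i.i.d. standard Gumbel taste shock, the probability of being the best choice is multinomial logit, exp(v_i)/sum_j exp(v_j), and simulation recovers it. This is a standard random-utility illustration, not a result specific to the thesis:

```python
import math
import random

def simulate_choice_freq(values, n_trials=20000, seed=0):
    """Monte Carlo frequency with which each option has the highest
    utility value + standard Gumbel noise."""
    rng = random.Random(seed)
    counts = [0] * len(values)
    for _ in range(n_trials):
        # inverse-CDF draw of standard Gumbel noise: -log(-log(U))
        utils = [v - math.log(-math.log(rng.random())) for v in values]
        counts[utils.index(max(utils))] += 1
    return [c / n_trials for c in counts]
```

Extreme value theory is what lets this kind of best-choice calculation pass from a finite option set to a continuum.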
APA, Harvard, Vancouver, ISO, and other styles
41

Sroka, Christopher J. "Extending Ranked Set Sampling to Survey Methodology." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1218543909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Teichman, Jason A. "Automated Sea State Classification from Parameterization of Survey Observations and Wave-Generated Displacement Data." ScholarWorks@UNO, 2016. http://scholarworks.uno.edu/td/2199.

Full text
Abstract:
Sea state is a subjective quantity whose accuracy depends on an observer’s ability to translate local wind waves into numerical scales. It provides an analytical tool for estimating the impact of the sea on data quality and operational safety. Tasks dependent on the characteristics of local sea surface conditions often require accurate and immediate assessment. An attempt to automate sea state classification using eleven years of ship motion and sea state observation data is made using parametric modeling of distribution-based confidence and tolerance intervals and a probabilistic model using sea state frequencies. Models utilizing distribution intervals are not able to exactly convert ship motion data into various sea states scales with significant accuracy. Model averages compared to sea state tolerances do provide improved statistical accuracy but the results are limited to trend assessment. The probabilistic model provides better prediction potential than interval-based models, but is spatially and temporally dependent.
APA, Harvard, Vancouver, ISO, and other styles
43

Ritchie, M. A. "Statistical analysis of coherent monostatic and bistatic radar sea clutter." Thesis, University College London (University of London), 2013. http://discovery.ucl.ac.uk/1397655/.

Full text
Abstract:
Radar sea clutter analysis has been an important area of radar research for many years. Very limited research has been carried out on coherent monostatic sea clutter analysis and even less on bistatic sea clutter. This has left a significant gap in the global scientific knowledge within this area. This thesis describes research carried out to analyse, quantify and model coherent sea clutter statistics from multiple radar sources. The ultimate goal of the research is to improve maritime radars' ability to compensate for clutter and achieve effective detection of targets on or over the sea surface. The first analyses used monostatic data gathered during the flight trials of the Thales Searchwater 2000 AEW radar. A further sea clutter trials database from CSIR was then used to investigate the variation of clutter statistics with look angle and grazing angle. Finally, simultaneous monostatic and bistatic sea clutter data recorded in South Africa using the S-band UCL radar system NetRAD were analysed. No simultaneous monostatic and bistatic coherent analysis has ever been reported before in the open literature. The datasets recorded included multiple bistatic angles at both horizontal and vertical polarisations. Throughout the analysis real data have been compared to accepted theoretic models of sea clutter. An additional metric of comparison was investigated relating to the area of information theoretic techniques. Information theory is a significant subject area, and some concepts from it have been applied in this research. In summary, this research has produced quantifiable and novel results on the characteristics of sea clutter statistics as a function of Doppler. Analysis has been carried out on a wide range of monostatic and bistatic data. The results of this research will be extremely valuable in developing sea clutter suppression algorithms and thus improving detection performance in future maritime radar designs.
APA, Harvard, Vancouver, ISO, and other styles
44

Fang, Zhou. "Reweighting methods in high dimensional regression." Thesis, University of Oxford, 2012. http://ora.ox.ac.uk/objects/uuid:26f8541a-9e2d-466a-84aa-e6850c4baba9.

Full text
Abstract:
In this thesis, we focus on the application of covariate reweighting with Lasso-style methods for regression in high dimensions, particularly where p ≥ n. We apply a particular focus to the case of sparse regression under a priori grouping structures. In such problems, even in the linear case, accurate estimation is difficult. Various authors have suggested ideas such as the Group Lasso and the Sparse Group Lasso, based on convex penalties, or alternatively methods like the Group Bridge, which rely on convergence under repetition to some local minimum of a concave penalised likelihood. We propose in this thesis a methodology that uses concave penalties to inspire a procedure whereupon we compute weights from an initial estimate, and then do a single second reweighted Lasso. This procedure -- the Co-adaptive Lasso -- obtains excellent results in empirical experiments, and we present some theoretical prediction and estimation error bounds. Further, several extensions and variants of the procedure are discussed and studied. In particular, we propose a Lasso-style method of doing additive isotonic regression in high dimensions, the Liso algorithm, and enhance it using the Co-adaptive methodology. We also propose a method of producing rules-based regression estimates for high dimensional non-parametric regression, that often outperforms the current leading method, the RuleFit algorithm. We also discuss extensions involving robust statistics applied to weight computation, repeating the algorithm, and online computation.
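The generic two-stage idea (initial fit, then a single reweighted Lasso) can be sketched as follows. This is a minimal illustration of a reweighted Lasso in the adaptive-Lasso style, not the thesis's exact Co-adaptive procedure; the weight formula, penalty level, and data are all invented for the example:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 100                      # high-dimensional: p > n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 3.0                      # sparse true signal
y = X @ beta + 0.5 * rng.standard_normal(n)

# Stage 1: ordinary Lasso gives an initial estimate.
init = Lasso(alpha=0.1).fit(X, y)

# Stage 2: weights from the initial fit (here 1/(|b| + eps), a common
# adaptive-Lasso choice). Rescaling column j by 1/w_j is equivalent to
# solving a Lasso with weighted penalty sum_j w_j |b_j|.
eps = 1e-3
w = 1.0 / (np.abs(init.coef_) + eps)
Xw = X / w                          # column j divided by w_j
refit = Lasso(alpha=0.1).fit(Xw, y)
coef = refit.coef_ / w              # map back to the original scale

print(np.flatnonzero(np.abs(coef) > 0.5))
```

Large initial coefficients get small weights (light penalty) and small ones get heavy weights, so the second pass shrinks spurious variables much harder than genuine signal.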
APA, Harvard, Vancouver, ISO, and other styles
45

Robinson, Michael E. "Statistics for offshore extremes." Thesis, Lancaster University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387465.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Shen, Shihao. "Statistical methods for deep sequencing data." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/5059.

Full text
Abstract:
Ultra-deep RNA sequencing has become a powerful approach for genome-wide analysis of pre-mRNA alternative splicing. We develop MATS (Multivariate Analysis of Transcript Splicing), a Bayesian statistical framework for flexible hypothesis testing of differential alternative splicing patterns on RNA-Seq data. MATS uses a multivariate uniform prior to model the between-sample correlation in exon splicing patterns, and a Markov chain Monte Carlo (MCMC) method coupled with a simulation-based adaptive sampling procedure to calculate the P value and false discovery rate (FDR) of differential alternative splicing. Importantly, the MATS approach is applicable to almost any type of null hypotheses of interest, providing the flexibility to identify differential alternative splicing events that match a given user-defined pattern. We evaluated the performance of MATS using simulated and real RNA-Seq data sets. In the RNA-Seq analysis of alternative splicing events regulated by the epithelial-specific splicing factor ESRP1, we obtained a high RT-PCR validation rate of 86% for differential alternative splicing events with a MATS FDR of < 10%. Additionally, over the full list of RT-PCR tested exons, the MATS FDR estimates matched well with the experimental validation rate. Our results demonstrate that MATS is an effective and flexible approach for detecting differential alternative splicing from RNA-Seq data.
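MATS reports a P value and FDR per splicing event; a standard way to convert a list of P values into FDR estimates (not MATS's own Bayesian machinery, just the generic Benjamini-Hochberg step, shown here with made-up P values) is:

```python
import numpy as np

def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted P values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest P value downwards
    q = np.minimum.accumulate(ranked[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(q, 0, 1)
    return out

qv = bh_fdr([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216])
print(qv.round(3))
```

Events with q-value below a chosen threshold (e.g. 0.10, matching the abstract's FDR < 10% cut) would then be called differentially spliced.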
APA, Harvard, Vancouver, ISO, and other styles
47

Benfenati, Francesco Maria. "Statistical analysis of oceanographic extreme events." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/19885/.

Full text
Abstract:
Extreme sea conditions can have a strong impact on navigation and/or on the success of rescue operations. Statistical techniques are crucial for quantifying the occurrence of extreme events and for monitoring changes in their frequency and intensity. Extreme events "live" in the tail of a probability density function (PDF), so it is important to study the PDF at points several standard deviations away from the mean. Significant wave height (SWH) is the parameter usually used to assess the intensity of sea states. Analysing extremes in the tail of a distribution requires long time series to obtain reasonable estimates of their intensity and frequency. Observational data (i.e. historical buoy records) are often unavailable, so numerical wave reconstructions are used instead, with the advantage that extreme-event analysis becomes possible over a wide area. This thesis carries out a preliminary analysis of the spatial variation of extreme SWH values in the Mediterranean. Hourly data from the Med-MFC model (from the CMEMS portal) are used: a numerical wave reconstruction for the Mediterranean based on the "WAM Cycle 4.5.4" model, covering the period 2006-2018 with a spatial resolution of 0.042° (~4 km). In particular, we consider 11 years of data (2007 to 2017), focusing on the Ionian Sea and Iberian Sea regions. The PDF of SWH is followed rather well by a 2-parameter Weibull curve both in winter (January) and in summer (July), with shortcomings at the peak and in the tail of the distribution. By comparison, the 3-parameter Exponentiated Weibull curve appears more appropriate, although no method was found to demonstrate that it is a better fit. Finally, a risk-estimation method is proposed, based on the daily return period of waves higher than a given threshold value considered dangerous.
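Fitting a 2-parameter Weibull to SWH data and converting a tail probability into a return period can be sketched as below. This is a generic illustration on synthetic data, not the thesis's Med-MFC analysis; the shape, scale, and 4 m threshold are invented for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic significant-wave-height sample (metres); parameters are
# invented for illustration, not values from the Med-MFC dataset.
swh = stats.weibull_min.rvs(c=1.6, scale=1.2, size=20_000, random_state=rng)

# Fit a 2-parameter Weibull by fixing the location parameter at zero.
c_hat, loc, scale_hat = stats.weibull_min.fit(swh, floc=0)

# Exceedance probability of a 4 m threshold, and the corresponding
# return period in hours for hourly data.
p_exceed = stats.weibull_min.sf(4.0, c_hat, loc=0, scale=scale_hat)
return_period_h = 1.0 / p_exceed
print(c_hat, scale_hat, return_period_h)
```

The same survival-function call evaluated at a dangerous threshold is the basic ingredient of the return-period risk estimate described in the abstract.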
APA, Harvard, Vancouver, ISO, and other styles
48

Garrigues, Laurent. "Statistical analysis and forecasting of sea ice conditions in Canadian waters." Thesis, McGill University, 2001. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=19621.

Full text
Abstract:
Historical data of sea ice concentration in Canadian waters are analysed using projection methods (Principal Component Analysis, Singular Value Decomposition, Canonical Correlation Analysis and Projection on Latent Structures) to identify the main patterns of evolution in the sea ice cover. Three different areas of interest are studied: (1) the Gulf of St Lawrence, (2) the Beaufort Sea and (3) the Labrador Sea down to the east coast of Newfoundland. Forcing parameters that drive the evolution of the sea ice cover, such as surface air temperature and wind field, are also analysed in order to explain some of the variability observed in the sea ice field. Only qualitative correlations have been identified, essentially because of the singular nature of the sea ice concentration itself and the accuracy of available data. However, several statistical models based on identified patterns have been developed, showing forecasting skills far better than those of the persistence assumption, which currently remains one of the best 'models' available. Forecasts are tested over periods of time ranging from a few days to several weeks. Some of these models constitute innovative approaches in the context of statistical sea ice forecasting. Some other models have been developed using a probabilistic approach. These models provide forecasts in terms of sea ice severity (low-medium-high), which is often accurate enough for navigation purposes for the three areas of interest. Forecasting skills of these models are also better than the persistence assumption. Finally, an existing dynamic sea-ice model has been adapted and used to predict sea ice conditions in the Gulf of St Lawrence during the winter season 1992-1993. Simulations provided by this model are compared to the forecasts of different statistical models over the same period of time.
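The core projection step (PCA of a space-time field, often called EOF analysis in geophysics) reduces to an SVD of the anomaly matrix. The sketch below uses a toy field with one planted pattern; all numbers are invented for illustration and none come from the thesis's datasets:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy space-time field: 120 monthly maps over 200 grid points, with one
# dominant planted pattern plus noise.
t, s = 120, 200
pattern = np.sin(np.linspace(0, np.pi, s))          # spatial structure
amplitude = np.sin(2 * np.pi * np.arange(t) / 12.0) # annual cycle
field = np.outer(amplitude, pattern) + 0.1 * rng.standard_normal((t, s))

# EOF analysis = PCA of the anomaly matrix via SVD.
anom = field - field.mean(axis=0)
U, sv, Vt = np.linalg.svd(anom, full_matrices=False)
explained = sv**2 / (sv**2).sum()   # variance fraction per mode
eof1 = Vt[0]                        # leading spatial pattern
pc1 = U[:, 0] * sv[0]               # its time series (principal component)
print(explained[0])
```

The leading modes and their time series are the "main patterns of evolution" on which statistical forecast models of the kind described here can then be built.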
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Dao-Peng. "Statistical power for RNA-seq data to detect two epigenetic phenomena." The Ohio State University, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=osu1357248975.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Gemayel, Nader M. "Bayesian Nonparametric Models for Ranked Set Sampling." The Ohio State University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=osu1271420479.

Full text
APA, Harvard, Vancouver, ISO, and other styles