Dissertations / Theses on the topic 'Statistics'

Consult the top 50 dissertations / theses for your research on the topic 'Statistics.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Richards, Andrew. "New Species Tree Inference Methods Under the Multispecies Coalescent Model." The Ohio State University, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=osu1618507147603501.

2

黃式鈞 and Sik-kwan Francis Wong. "Outcome of a web-based statistic laboratory for teaching and learning of medical statistics." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2009. http://hub.hku.hk/bib/B43251687.

3

Wong, Sik-kwan Francis. "Outcome of a web-based statistic laboratory for teaching and learning of medical statistics." Click to view the E-thesis via HKUTO, 2009. http://sunzi.lib.hku.hk/hkuto/record/B43251687.

4

Johnson, Eric P. "Composite strength statistics from fiber strength statistics." Thesis, Monterey, California. Naval Postgraduate School, 1991. http://hdl.handle.net/10945/26420.

Abstract:
Utilization of composites in critical design applications requires an extensive engineering experience data base which is generally lacking, especially for rapidly developing constituent fibers. As a supplement, an accurate reliability theory can be applied in design. This investigation is part of a research effort to develop a probabilistic model of composite reliability capable of using data produced in small laboratory test samples to predict the behavior of large structures with respect to their actual dimensions. This work included testing of composite strength, which was then used in exploring the methodology of predicting composite reliability from the parent single-filament fiber strength statistics. This required testing of a coordinated set of test samples which consisted of a composite and its parent fibers. Previously collected fiber strength statistics from two different production spools were used in conjunction with the current effort. This investigation established that, for a well-made composite, the Local Load Sharing Model of reliability prediction exhibited outstanding correlation with experimental data and was sufficiently sensitive to predict deficient composite strength due to a specific fiber spool with an abnormally weak lower tail. In addition, it provided an upper bound on the composite reliability. This investigation is unique in that it used a coordinated set of data with an unambiguous genesis of parent fiber and subsequent composite. The findings of this investigation are also definitive in that six orders of extrapolation of size in reliability prediction have been verified.
5

Wang, Baoyong. "Fractionation Statistics." Thèse, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31001.

Abstract:
Paralog reduction, the loss of duplicate genes after whole genome duplication (WGD), is a pervasive process. Whether this loss proceeds gene by gene or through deletion of multi-gene DNA segments is controversial, as is the question of fractionation bias, namely whether one homeologous chromosome is more vulnerable to gene deletion than the other. As a null hypothesis, we first assume deletion events, on one homeolog only, excise a geometrically distributed number of genes with unknown mean mu, and these events combine to produce deleted runs of length l, distributed approximately as a negative binomial with unknown parameter r, itself a random variable with distribution pi(.). A biologically more realistic model requires deletion events on both homeologs distributed as a truncated geometric. We simulate the distribution of run lengths l in both models, as well as the underlying pi(r), as a function of mu, and show how sampling l allows us to estimate mu. We apply this to data on a total of 15 genomes descended from 6 distinct WGD events and show how to correct the bias towards shorter runs caused by genome rearrangements. Because of the difficulty in deriving pi(.) analytically, we develop a deterministic recurrence to calculate each pi(r) as a function of mu and the proportion of unreduced paralog pairs. This is based on a computing formula containing nested sums. The parameter mu can be estimated based on run lengths of single-copy regions. We then reduce the computing formulae, at least in the one-sided case, to closed form. This virtually eliminates computing time due to highly nested summations. We formulate a continuous version of the fractionation process, deleting line segments of exponentially distributed lengths in analogy to geometrically distributed numbers of genes. We derive nested integrals and discover that the number of previously deleted regions to be skipped by a new deletion event is exactly geometrically distributed.
We undertook a large simulation experiment to show how to discriminate between the gene-by-gene duplicate deletion model and the deletion of a geometrically distributed number of genes. This revealed the importance of the effects of genome size N, the mean of the geometric distribution, the progress towards completion of the fractionation process, and whether the data are based on runs of deleted genes or undeleted genes.
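The one-sided null model sketched in the abstract above, with deletion events that each excise a geometrically distributed number of contiguous genes and merge into longer deleted runs, is easy to simulate. The following is an illustrative sketch, not code from the thesis; the genome size, event count, and mean mu = 3 are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_runs(n_genes=10_000, mu=3.0, n_events=1_000):
    """One-sided null model: each deletion event excises a
    geometrically distributed number of contiguous genes
    (mean mu) starting at a uniformly chosen position."""
    deleted = np.zeros(n_genes, dtype=bool)
    p = 1.0 / mu                      # geometric success probability
    for _ in range(n_events):
        length = rng.geometric(p)     # support {1, 2, ...}, mean 1/p
        start = rng.integers(0, n_genes)
        deleted[start:start + length] = True
    # collect lengths of maximal runs of deleted genes
    runs, count = [], 0
    for d in deleted:
        if d:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return np.array(runs)

runs = simulate_runs()
print(len(runs), runs.mean())
```

Because overlapping and adjacent events merge, observed runs tend to be longer than a single event's mean, which is the kind of effect the run-length distribution in the thesis has to account for.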
6

Chou, Chihong. "Fractional statistics." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/31030.

7

Schneider, William Ray. "The Relationship Between Statistics Self-Efficacy, Statistics Anxiety, and Performance in an Introductory Graduate Statistics Course." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3335.

Abstract:
The purpose of this study was to determine the relationship between statistics self-efficacy, statistics anxiety, and performance in introductory graduate statistics courses. The study design compared two statistics self-efficacy measures developed by Finney and Schraw (2003), a statistics anxiety measure developed by Cruise and Wilkins (1980), and a course performance measure. To view self-efficacy from two perspectives, the Current Statistics Self-Efficacy (CSSE) assessed student confidence in their ability to complete specific statistics tasks in the present, whereas Self-Efficacy to Learn Statistics (SELS) assessed student confidence in their ability to learn statistics in the future. The performance measure was the combined average of the midterm and final exam scores only, excluding grades from other course activities. The instruments were distributed to four sections of an introductory graduate statistics course (N=88) in a College of Education at a large metropolitan university during the first week of the semester during Fall 2009 and Spring 2010. Both of the statistics self-efficacy measures revealed a low to moderate inverse relationship with statistics anxiety and a low to moderate direct relationship with each other. In this study there was no correlation between statistics anxiety (CSCS), statistics self-efficacy (CSSE and SELS), and course performance. There was high internal reliability for each instrument's items, making the instruments suitable for use with graduate students. However, none of the instruments' results were significant in relation to course performance with graduate students in this sample. Unlike prior research involving undergraduate-level statistics students that has reported a relationship between the CSSE and SELS, the present study, involving graduate students, did not find any significant correlation with performance. Additional research is suggested to investigate the reasons for the differences between the studies.
8

Thayne, Jeffrey L. "Making Statistics Matter: Using Self-data to Improve Statistics Learning." DigitalCommons@USU, 2016. https://digitalcommons.usu.edu/etd/5214.

Abstract:
Research has demonstrated that well into their undergraduate and even graduate education, learners often struggle to understand basic statistical concepts, fail to see their relevance in their personal and professional lives, and often treat them as little more than mere mathematics exercises. This study explored ways to help learners in an undergraduate learning context to treat statistical inquiry as mattering in a practical research context, by inviting them to ask questions about and analyze large, real, messy datasets that they have collected about their own personal lives (i.e., self-data). This study examined the conditions under which such an intervention might (and might not) successfully lead to a greater sense of the relevance of statistics to undergraduate learners.
9

Lindblad, Niclas. "Subversion Statistics Tool." Thesis, Linköping University, Department of Computer and Information Science, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-15792.

Abstract:

At Linköping University, version control systems are rarely used in courses. Instead, programming lab assignments are stored, for example, in individual students' UNIX accounts. This leads to problems both when programming in a group and when something goes wrong. The lab supervisor also has much poorer insight into how specific groups' work is progressing, and many groups may receive help too late because of this. Version control systems will most likely be used much more in the future.

Reading version logs to follow up on groups is an awkward and time-consuming job and gives a poor overview. This thesis describes a tool to assist the lab supervisor in courses where version control systems are used. The report focuses on the approach and the design, but also on the problems that arose.

The result of the thesis is a web application that shows statistics for all lab groups in a specific course, both textually and graphically. The web application strives to behave like an ordinary, easy-to-use desktop application. This tool gives the lab supervisor a better overview of individual students' work, and may also help bring cheating to attention.

10

Karlslätt, David. "Improved Statistics Handling." Thesis, Linköping University, Department of Computer and Information Science, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-18238.

Abstract:

Ericsson is a global provider of telecommunications systems equipment and related services for mobile and fixed network operators. 

3Gsim is a tool used by Ericsson in tests of the 3G RNC node.

In order to validate the tests, statistics are constantly gathered within 3Gsim, and users can use telnet to access the statistics via some system-specific 3Gsim commands.

The statistics can be retrieved but are unstructured to the human eye and need parsing and arranging to be readable.

The statistics handler that is implemented during this thesis provides a possibility for users of 3Gsim to present information that favors their personal interest.

The implementation can produce one prototype output document which contains the most common statistics needed by the 3Gsim user. A main focus of this final thesis has been to simplify content and format control for the user as much as possible.

Presenting and structuring information now comes down to simple text editing and rids the user of the time-consuming work of updating and recompiling the entire application.

Earlier, scripts written in Perl, an imperative language, were used for presenting the statistics. These scripts were often difficult to comprehend since there were many different authors with inadequate experience and knowledge.

The new statistics handler has been written in Java, a high-level object-oriented language which should better suit the users and developers of 3Gsim.

11

Ghoudi, Kilani. "Multivariate randomness statistics." Thesis, University of Ottawa (Canada), 1993. http://dx.doi.org/10.20381/ruor-17165.

Abstract:
During the startup phase of a production process, while statistics on the product quality are being collected, it is useful to establish that the process is under control. Small samples of sizes n_i, i = 1, …, q, are taken periodically for q periods. We shall assume each measurement is multivariate. A process is under control or on-target if all the observations are deemed to be independent and identically distributed. Let F_i represent the empirical distribution function of the ith sample. Let F̄ represent the empirical distribution function of all observations. Following Lehmann (1951) we propose statistics of the form ∑_{i=1}^{q} ∫_{−∞}^{∞} [F_i(s) − F̄(s)]² dF̄(s). The asymptotics of nonparametric q-sample Cramér–von Mises statistics were studied in Kiefer (1959). The emphasis there, however, is on the case where n_i → ∞ while q stays fixed. Here we study the asymptotics of a family of randomness statistics that includes the above. These asymptotics are in the quality control situation (i.e., q → ∞ while the n_i stay fixed). Such statistics can be used in many situations; in fact one can use randomness statistics in any situation where the problem amounts to a test of homoscedasticity or homogeneity of a collection of observations. We give two such applications. First we show how such statistics can be used in nonparametric regression. Second we illustrate the application to retrospective quality control.
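For a concrete, univariate illustration of this type of statistic (the sum over samples of the squared difference between each sample's empirical distribution function and the pooled one, integrated against the pooled empirical distribution), here is a sketch on simulated data; it is not code from the thesis, which treats the multivariate case:

```python
import numpy as np

rng = np.random.default_rng(0)

def randomness_statistic(samples):
    """q-sample Cramer-von Mises-type statistic:
    sum_i n_i * Integral (F_i(s) - Fbar(s))^2 dFbar(s),
    with the integral taken as an average over the pooled sample."""
    pooled = np.sort(np.concatenate(samples))
    Fbar = np.searchsorted(pooled, pooled, side="right") / len(pooled)
    total = 0.0
    for x in samples:
        x = np.sort(x)
        Fi = np.searchsorted(x, pooled, side="right") / len(x)
        total += len(x) * np.mean((Fi - Fbar) ** 2)
    return total

# under control: all small samples from the same distribution
on_target = [rng.normal(size=5) for _ in range(40)]
# off target: a few samples shifted away from the rest
off_target = on_target[:-5] + [rng.normal(loc=2.0, size=5) for _ in range(5)]
print(randomness_statistic(on_target), randomness_statistic(off_target))
```

The statistic is markedly larger for the off-target collection, which is the behavior a retrospective quality-control test exploits.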
12

Underhill, Les, and Dave Bradfield. "INTROSTAT (Statistics textbook)." Thesis, University of Cape Town, 2013. https://vula.uct.ac.za/access/content/group/23066897-bf3d-4a8d-9637-049c04424e24/IntroStat-%20Dr%20Underhill/.

Abstract:
IntroStat was designed to meet the needs of students, primarily those in business, commerce and management, for a course in applied statistics. IntroStat is designed as a lecture-book. One of the aims is to maximize the time spent explaining concepts and doing examples. The book is commonly used as part of first-year courses in statistics.
13

Johnson, Earl E. "A Statistics Primer." Digital Commons @ East Tennessee State University, 2011. https://dc.etsu.edu/etsu-works/1728.

14

Ammons, Mark Joseph. "Comparison of career statistics and season statistics in major league baseball." Click here to access dissertation, 2008. http://www.georgiasouthern.edu/etd/archive/fall2007/mark_j_ammons/Ammons_mark_j_200801_ms.pdf.

Abstract:
Thesis (M.S.)--Georgia Southern University, 2008.
"A dissertation submitted to the Graduate Faculty of Georgia Southern University in partial fulfillment of the requirements for the degree Master of Science." Under the direction of Pat Humphrey. ETD. Electronic version approved: May 2008. Includes bibliographical references (p. 79-80) and appendices.
15

D'ANGELO, Nicoletta. "Local methods for complex spatio-temporal point processes." Doctoral thesis, Università degli Studi di Palermo, 2022. https://hdl.handle.net/10447/574349.

16

Fontdecaba, Rigat Sara. "Contributions to industrial statistics." Doctoral thesis, Universitat Politècnica de Catalunya, 2015. http://hdl.handle.net/10803/336105.

Abstract:
This thesis is about statistics' contributions to industry. It is an article compendium comprising four articles divided into two blocks: (i) two contributions for a water supply company, and (ii) significance of the effects in Design of Experiments. In the first block, great emphasis is placed on how research design and statistics can be applied to various real problems that a water company raises, and it aims to convince water management companies that statistics can be very useful for improving their services. The article "A methodology to model water demand based on the identification of homogeneous client segments. Application to the city of Barcelona" makes a comprehensive review of all the steps carried out in developing a mathematical model to forecast future water demand. It pays attention to how to learn more about the influence of socioeconomic factors on customers' consumption, in order to detect segments of customers with homogeneous habits and so objectively explain the behavior of the demand. The second article, also related to water demand management, "An Approach to disaggregating total household water consumption into major end-uses", describes the procedure to assign water consumption to microcomponents (taps, showers, cisterns, washing machines and dishwashers) on the basis of the readings of water consumption from the water meter. The main idea, in order to determine which of the devices has caused the consumption, is to treat the consumption of each device as a stochastic process. In the second block of the thesis, a better way to judge the significance of effects in unreplicated factorial experiments is described. The article "Proposal of a Single Critical Value for the Lenth Method" analyzes the many analytical procedures that have been proposed for identifying significant effects in unreplicated two-level factorial designs; many of them are based on the original Lenth method, and the article explains and tries to overcome the problems that it presents.
The article proposes a new strategy for choosing the critical values to better differentiate the inert from the active factors. The last article, "Analysing DOE with Statistical Software Packages: Controversies and Proposals", reviews the statistical software with DOE capabilities most important and commonly used in industry (JMP, Minitab, SigmaXL, StatGraphics and Statistica) and evaluates how well each resolves the problem of analyzing the significance of effects in unreplicated factorial designs.
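For context, Lenth's method, which the articles in this second block build on, derives a pseudo standard error (PSE) from the effect estimates of an unreplicated two-level design: s0 = 1.5 × median|effect|, and the median is then recomputed over the effects smaller in magnitude than 2.5 × s0. A minimal sketch with made-up effect values:

```python
import numpy as np

def lenth_pse(effects):
    """Lenth's pseudo standard error for an unreplicated
    two-level factorial: s0 = 1.5 * median|effect|, then PSE is
    1.5 * median of the effects within 2.5 * s0 of zero."""
    effects = np.asarray(effects, dtype=float)
    s0 = 1.5 * np.median(np.abs(effects))
    trimmed = np.abs(effects)[np.abs(effects) < 2.5 * s0]
    return 1.5 * np.median(trimmed)

# 15 effects from a saturated 2^4 design: two clearly active, rest noise
effects = [21.5, -14.3, 0.8, -1.1, 0.4, 1.9, -0.6, 0.2,
           1.2, -0.9, 0.5, -1.4, 0.7, 0.1, -0.3]
pse = lenth_pse(effects)
print(pse)  # 1.05
```

Effects exceeding roughly t × PSE (with Lenth's d = m/3 degrees of freedom for the t quantile) are flagged as active; with these made-up values only the first two stand out.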
17

Adriannse, Robert. "Adaptive local statistics filtering." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp04/mq21530.pdf.

18

Leung, Bartholomew Ping Kei. "Contributions to industrial statistics." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape10/PQDD_0013/NQ41618.pdf.

19

Teng, Yunlong, and Yingrui Zhao. "Statistics in Ella Mathematics." Thesis, Högskolan i Halmstad, Sektionen för Informationsvetenskap, Data– och Elektroteknik (IDE), 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-21475.

Abstract:
"Ella Mathematics" is a web-based e-learning system which aims to improve elementary school students’ mathematics learning in Sweden. Such an e-learning tool has been partially completed in May 2012, except descriptive statistics module summarizing students’ performance in the learning process. This project report presents and describes the design and implementation of such descriptive statistics module, which intends to allow students to check their own grades and learning progress; teachers to check and compare students’ grades and progress, as well as parents to compare their children’s grades and learning progress with the average grade and progress of other students. To better understand and design such functionalities, different mathematical e-learning systems were investigated. Another contribution of this project relates to the evaluation and redesign of the existing database model of the “Ella Mathematics” system. The redesign improved performance and reduced data redundancy.
20

Robinson, Michael E. "Statistics for offshore extremes." Thesis, Lancaster University, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.387465.

21

Jahangir, Mohammed. "Coherent radar clutter statistics." Thesis, University College London (University of London), 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.313422.

22

Downie, Timothy Ross. "Wavelet methods in statistics." Thesis, University of Bristol, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.389339.

23

Koloydenko, Alexey. "Modeling natural microimage statistics." Thesis, Royal Holloway, University of London, 2000. http://repository.royalholloway.ac.uk/items/6ade349d-d5d0-1bcf-4bc1-dc382d440027/9/.

Abstract:
A large collection of digital images of natural scenes provides a database for analyzing and modeling small scene patches (e.g., 2 × 2) referred to as natural microimages. A pivotal finding is the stability of the empirical microimage distribution across scene samples and with respect to scaling. With a view toward potential applications (e.g. classification, clutter modeling, segmentation), we present a hierarchy of microimage probability models which capture essential local image statistics. Tools from information theory, algebraic geometry and of course statistical hypothesis testing are employed to assess the "match" between candidate models and the empirical distribution. Geometric symmetries play a key role in the model selection process. One central result is that the microimage distribution exhibits reflection and rotation symmetry and is well-represented by a Gibbs law with only pairwise interactions. However, the acceptance of the up-down reflection symmetry hypothesis is borderline and intensity inversion symmetry is rejected. Finally, possible extensions to larger patches via entropy maximization and to patch classification via vector quantization are briefly discussed.
24

Gustar, Andrew. "Statistics in historical musicology." Thesis, Open University, 2014. http://oro.open.ac.uk/41851/.

Abstract:
Statistical techniques are well established in many historical disciplines and are used extensively in music analysis, music perception, and performance studies. However, statisticians have largely ignored the many music catalogues, databases, dictionaries, encyclopedias, lists and other datasets compiled by institutions and individuals over the last few centuries. Such datasets present fascinating historical snapshots of the musical world, and statistical analysis of them can reveal much about the changing characteristics of the population of musical works and their composers, and about the datasets and their compilers. In this thesis, statistical methodologies have been applied to several case studies covering, among other things, music publishing and recording, composers’ migration patterns, nineteenth-century biographical dictionaries, and trends in key and time signatures. These case studies illustrate the insights to be gained from quantitative techniques; the statistical characteristics of the populations of works and composers; the limitations of the predominantly qualitative approach to historical musicology; and some practical and theoretical issues associated with applying statistical techniques to musical datasets. Quantitative methods have much to offer historical musicology, revealing new insights, quantifying and contextualising existing information, providing a measure of the quality of historical sources, revealing the biases inherent in music historiography, and giving a collective voice to the many minor and obscure works and composers that have historically formed the vast majority of musical activity but who have been largely absent from the received history of music.
25

Hothorn, Torsten, and Achim Zeileis. "Generalized Maximally Selected Statistics." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 2007. http://epub.wu.ac.at/1252/1/document.pdf.

Abstract:
Maximally selected statistics for the estimation of simple cutpoint models are embedded into a generalized conceptual framework based on conditional inference procedures. This powerful framework contains most of the published procedures in this area as special cases, such as maximally selected chi-squared and rank statistics, but also allows for direct construction of new test procedures for less standard test problems. As an application, a novel maximally selected rank statistic is derived from this framework for a censored response partitioned with respect to two ordered categorical covariates and potential interactions. This new test is employed to search for a high-risk group of rectal cancer patients treated with a neo-adjuvant chemoradiotherapy. Moreover, a new efficient algorithm for the evaluation of the asymptotic distribution for a large class of maximally selected statistics is given enabling the fast evaluation of a large number of cutpoints.
Series: Research Report Series / Department of Statistics and Mathematics
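The core cutpoint-search idea behind maximally selected statistics can be illustrated in a few lines: scan every cutpoint of an ordered covariate, compute a standardized two-sample statistic at each, and keep the maximum. This simplified sketch on simulated data omits the conditional-inference calibration of the maximum that the paper's framework provides:

```python
import numpy as np

rng = np.random.default_rng(1)

def maximally_selected_stat(x, y):
    """Scan all cutpoints of the ordered covariate x and return
    the maximal absolute two-sample z-statistic for the response y,
    together with the best cutpoint."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    n, best, best_cut = len(y), -np.inf, None
    var = y.var()
    for k in range(1, n):             # split after position k
        if x[k - 1] == x[k]:
            continue                   # tied covariate values: not a valid cutpoint
        se = np.sqrt(var * (1 / k + 1 / (n - k)))
        z = abs(y[:k].mean() - y[k:].mean()) / se
        if z > best:
            best, best_cut = z, (x[k - 1] + x[k]) / 2
    return best, best_cut

# toy data: response rate jumps when the covariate crosses 0.6
x = rng.uniform(size=200)
y = (rng.uniform(size=200) < np.where(x > 0.6, 0.7, 0.2)).astype(float)
stat, cut = maximally_selected_stat(x, y)
print(stat, cut)
```

Because the maximum over many candidate cutpoints is selected, its null distribution is not that of a single z-statistic; that selection effect is exactly the calibration problem the generalized framework addresses.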
26

Dean, Caroline Elizabeth. "Statistics for electronic resources." Master's thesis, University of Cape Town, 2008. http://hdl.handle.net/11427/14704.

Abstract:
Includes bibliographical references (leaves 67-74).
Electronic resources represent a large portion of many libraries' information resources in the current climate of hybrid libraries where print and electronic formats coexist. Since the dramatic uptake of electronic resources in libraries during the 1990s the topic of usage statistics has been on librarians' lips. The expectations that librarians had of being able to compare resources based on usage statistics were soon dashed as it became apparent that electronic resource providers were not measuring usage uniformly. Given the initial disappointments that librarians had in terms of electronic resource usage statistics, the author set out to find the reasons why librarians were keeping statistics for electronic resources, which statistics they were keeping for electronic resources, and what were the issues and concerns with regard to statistics for electronic resources. To get an international answer to these questions a literature review was undertaken. The South African point of view was sought through an e-mail survey that was sent out to the 23 South African academic libraries that form the South African National Library and Information Consortium (SANLiC). A 65% response rate was recorded. The international and South African answers to the three questions were very similar. The study found that the reasons why librarians keep electronic resources statistics were to "assess the value of different online products/services"; to "make better-informed purchasing decisions"; to "plan infrastructure and allocation of resources"; and to "support internal marketing and promotion of library services". The study also found that the statistics that librarians were keeping are: sessions, searches, documents downloaded, turnaways, location of use, number of electronic resources, expenditure and virtual visits. The number of virtual visits was kept by international libraries but no South African libraries reported keeping this information.
The concerns that were raised by both international and South African libraries were found to be about: the continued lack of standardisation; the time-consuming nature of data collection; the reliability of the usage data; the fact that the data need to be looked at in context; the management of the data; and how to count electronic resources. Clear definitions of the latter are essential. A concern raised in South Africa but not in the international literature is that there exists a lack of understanding amongst some South African librarians of the basic concepts of electronic resources usage statistics. The author concludes with a suggestion that the CHELSA Measures for Quality be implemented so that librarians can see that the collection of usage data for electronic resources has some purpose. Once this is in place one or more training events under the auspices of SANLiC should be organised in order to train librarians in the best practice of electronic resource usage statistics.
27

Niedermaier, Andrew Gerard. "Statistics on wreath products." Diss., [La Jolla] : University of California, San Diego, 2009. http://wwwlib.umi.com/cr/ucsd/fullcit?p3355638.

Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2009.
Title from first page of PDF file (viewed June 23, 2009). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 170-172).
28

Attarha, Mouna. "Summary statistics in vision." Diss., University of Iowa, 2015. https://ir.uiowa.edu/etd/1535.

Abstract:
It is said that our visual experience is a ‘Grand Illusion’. Our brains can only process a fraction of the total information available in the natural world, and yet our subjective impression of that world appears richly detailed and complete. The apparent disparity between our conscious experience of the visual landscape and the precision of our internal representation has suggested to some that our brains are equipped with specialized mechanisms that surmount the inherent limitations of our perceptual and cognitive systems. One proposed set of mechanisms, called summary statistics, processes information in a scene by representing the regularities that are often shared among groups of similar items in terms of descriptive statistics. For example, snowflakes blowing in the wind may be represented in terms of their mean direction and speed. Prevailing views hold that summary statistics may underlie all aspects of our subjective visual experience, inasmuch as such representations are thought to form automatically across multiple visual fields, exhaustively summarizing all available visual features regardless of attention. We challenge this view by showing that summary statistics are mediated by limited-capacity processes and therefore cannot unfold independently across multiple areas of the visual field. We also show that summary statistics require attention and thus cannot account for our sense of visual completeness outside attended visual space. In light of this evidence, we suggest that the application of summary representations to daily perceptual life has been overstated for the past decade. Indeed, many observations interpreted in terms of summary statistics can be accounted for by alternative cognitive processes, such as visual working memory.
29

Thayne, Jeffrey L. "Making statistics matter| Self-data as a possible means to improve statistics learning." Thesis, Utah State University, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10250713.

Abstract:

Research has demonstrated that well into their undergraduate and even graduate education, learners often struggle to understand basic statistical concepts, fail to see their relevance in their personal and professional lives, and often treat them as little more than mere mathematics exercises. Undergraduate learners often see statistical concepts as means to passing exams, completing required courses, and moving on with their degree, and not as instruments of inquiry that can illuminate their world in new and useful ways.

This study explored ways to help learners in an undergraduate learning context treat statistical inquiry as mattering in a practical research context, by inviting them to ask questions about and analyze large, real, messy datasets that they had collected about their own personal lives (i.e., self-data). This study examined the conditions under which such an intervention might (and might not) successfully lead to a greater sense of the relevance of statistics for undergraduate learners. The goal was to place learners in a context where their relationship with data analysis could more closely mimic that of disciplinary professionals than that of students with homework; that is, where they were illuminating something about their world that concerned them for reasons beyond the limited concerns of the classroom.

The study revealed five themes in the experiences of learners working with self-data that highlight contexts in which data analysis can be made to matter to learners (and how self-data can make that more likely): learners must be able to form expectations of the data, whether based on their own experiences or external benchmarks; the data should have variation to account for; the learners should treat the ups and downs of the data as more or less preferable in some way; the data should address or relate to ongoing projects or concerns of the learner; and finally, learners should be able to investigate quantitative or qualitative covariates of their data. In addition, narrative analysis revealed that learners using self-data treated data analysis not as a mere classroom exercise, but as an exercise in inquiry, with an invested engagement that mimicked (in some ways) that of a disciplinary professional.

APA, Harvard, Vancouver, ISO, and other styles
30

Melbourne, Davayne A. "A New method for Testing Normality based upon a Characterization of the Normal Distribution." FIU Digital Commons, 2014. http://digitalcommons.fiu.edu/etd/1248.

Full text
Abstract:
The purposes of the thesis were to review some of the existing methods for testing normality and to investigate the use of generated data combined with observed data to test for normality. This approach to testing for normality contrasts with the existing methods, which are derived from observed data only. The proposed test of normality follows a characterization theorem by Bernstein (1941) and uses a test statistic D*, the average of Hoeffding's D-statistic between linear combinations of the observed and generated data. Overall, the proposed method showed considerable potential and achieved adequate power for many of the alternative distributions investigated. The simulation results revealed that the power of the test was comparable to some of the most commonly used methods of testing for normality. The test is performed with the use of a computer-based statistical package and in general takes longer to run than some of the existing methods of testing for normality.
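The D* construction described in the abstract can be sketched in a few lines. The code below is a hedged illustration, not Melbourne's actual implementation: it computes the classical Hoeffding D statistic from ranks (assuming no ties and n of at least 5), then averages D between the linear combinations x + y and x - y, where y is generated normal data, following Bernstein's characterization that x + y and x - y are independent exactly when the data are normal. The function names (`hoeffding_d`, `d_star`) and the number of repetitions are illustrative assumptions.

```python
import numpy as np

def hoeffding_d(x, y):
    """Classical Hoeffding D statistic for dependence (assumes no ties, n >= 5)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = np.argsort(np.argsort(x)) + 1  # ranks 1..n of x
    s = np.argsort(np.argsort(y)) + 1  # ranks 1..n of y
    # c[i] = number of points j with x_j < x_i and y_j < y_i
    c = np.array([np.sum((x < x[i]) & (y < y[i])) for i in range(n)])
    d1 = np.sum(c * (c - 1))
    d2 = np.sum((r - 1) * (r - 2) * (s - 1) * (s - 2))
    d3 = np.sum((r - 2) * (s - 2) * c)
    return 30.0 * ((n - 2) * (n - 3) * d1 + d2 - 2 * (n - 2) * d3) / (
        n * (n - 1) * (n - 2) * (n - 3) * (n - 4))

def d_star(x, n_rep=20, seed=None):
    """Average D between x + y and x - y for generated normal y.
    If x is normal, x + y and x - y are independent, so D* should be near 0."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    ds = []
    for _ in range(n_rep):
        y = rng.normal(x.mean(), x.std(ddof=1), size=len(x))
        ds.append(hoeffding_d(x + y, x - y))
    return float(np.mean(ds))

rng = np.random.default_rng(1)
print(d_star(rng.normal(size=80), seed=2))       # typically near 0 for normal data
print(d_star(rng.exponential(size=80), seed=2))  # typically larger for non-normal data
```

In practice the null distribution of D* would be calibrated by simulation, which is why the abstract notes the method is slower than analytic tests.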
APA, Harvard, Vancouver, ISO, and other styles
31

Reimer, Sean. "The Practicality of Statistics: Why Money as Expected Value Does Not Make Statistics Practical." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/997.

Full text
Abstract:
This thesis covers the uncertainty of empirical prediction. As opposed to objectivity, I discuss the practicality of statistics, with practicality defined as "useful" in an unbiased sense, in relation to something in the external world that we care about. We want our model of prediction to give us unbiased inference while also being able to speak about something we care about. For the reasons explained, the inherent uncertainty of statistics undermines unbiased inference for many methods. Bayesian statistics, by valuing hypotheses, is more plausible but ultimately cannot arrive at an unbiased inference. I posit the value theory of money as a concept that might allow us to derive unbiased inferences while still concerning something we care about. However, money is of instrumental value, ultimately being worth less than an object of “transcendental value,” which I define as something worth more than money, since money’s purpose is to help us achieve “transcendental value” under the value theory. Ultimately, as long as an individual has faith in a given hypothesis, it will be worth more than any hypothesis valued with money. From there we undermine statistics’ practicality: without the concept of money we have no way of valuing hypotheses unbiasedly, and uncertainty undermines the “objective” inferences we might otherwise have been able to make.
APA, Harvard, Vancouver, ISO, and other styles
32

Yeung, Conson. "Fracture statistics of brittle materials /." View the Table of Contents & Abstract, 2005. http://sunzi.lib.hku.hk/hkuto/record/B31490323.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Grossmann, Steffen. "Statistics of optimal sequence alignments." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968907466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Laird, Anne Marie. "Observed statistics of extreme waves." Thesis, Monterey, Calif. : Naval Postgraduate School, 2006. http://bosun.nps.edu/uhtbin/hyperion.exe/06Dec%5FLaird.pdf.

Full text
Abstract:
Thesis (M.S. in Physical Oceanography)--Naval Postgraduate School, December 2006.
Thesis Advisor(s): Thomas H. C. Herbers. "December 2006." Includes bibliographical references (p. 49-50). Also available in print.
APA, Harvard, Vancouver, ISO, and other styles
35

Mroczkowski, Piotr. "Identity Verification using Keyboard Statistics." Thesis, Linköping University, Department of Electrical Engineering, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-2265.

Full text
Abstract:

In the age of a networking revolution, when the Internet has changed not only the way we see computing, but also the whole society, we constantly face new challenges in the area of user verification. It is often the case that the login-id password pair does not provide a sufficient level of security. Other, more sophisticated techniques are used: one-time passwords, smart cards or biometric identity verification. The biometric approach is considered to be one of the most secure ways of authentication.

On the other hand, many biometric methods require additional hardware in order to sample the corresponding biometric feature, which increases the costs and the complexity of implementation. There is, however, one biometric technique that does not demand any additional hardware: user identification based on keyboard statistics. This thesis is focused on this method of authentication.

The keyboard statistics approach is based on the user’s unique typing rhythm. Not only what the user types, but also how she/he types is important. This report describes the statistical analysis of typing samples which were collected from 20 volunteers, as well as the implementation and testing of the identity verification system, which uses the characteristics examined in the experimental stage.
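As a hedged illustration of the kind of typing-rhythm features this abstract describes (not the author's actual system), the sketch below derives dwell times (how long each key is held) and flight times (gaps between consecutive key presses) from hypothetical timestamp data, and accepts or rejects a sample by its mean absolute z-score against a stored profile. All names, timestamps, and the threshold value are assumptions.

```python
import numpy as np

def timing_features(key_down, key_up):
    """Dwell times (key held down) and flight times (release to next press)."""
    key_down, key_up = np.asarray(key_down, float), np.asarray(key_up, float)
    dwell = key_up - key_down
    flight = key_down[1:] - key_up[:-1]
    return np.concatenate([dwell, flight])

def verify(profile_mean, profile_std, sample, threshold=2.0):
    """Accept if the sample's mean absolute z-score vs. the stored profile is small."""
    z = np.abs((np.asarray(sample) - profile_mean) / profile_std)
    return float(z.mean()) < threshold

# Hypothetical key-down/key-up timestamps (seconds) for a 4-key phrase:
down = [0.00, 0.20, 0.45, 0.70]
up = [0.08, 0.31, 0.52, 0.81]
feats = timing_features(down, up)  # 4 dwell times + 3 flight times
print(verify(feats, np.full(len(feats), feats), np.full(len(feats), 0.05)))
```

A real system, as the abstract notes, would build the profile statistics from many enrollment samples per user.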

APA, Harvard, Vancouver, ISO, and other styles
36

Bedwell, Mike. "Rescuing Statistics from the Mathematicians." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-79428.

Full text
Abstract:
Drawing on some 30 years’ experience in the UK and Central Europe, the author offers four assertions, three about education generally and the fourth that of the title. There the case is argued that statistics is a branch of logic, and should therefore be taught by experts in such subjects as philosophy and law, not exclusively by mathematicians. Education in both statistics and these other subjects would profit in consequence.
APA, Harvard, Vancouver, ISO, and other styles
37

Gleim, Alexander [Verfasser]. "Essays in Statistics / Alexander Gleim." Bonn : Universitäts- und Landesbibliothek Bonn, 2016. http://d-nb.info/1096329824/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hörmann, Wolfgang, and Gerhard Derflinger. "Fast Generation of Order Statistics." Department of Statistics and Mathematics, Abt. f. Angewandte Statistik u. Datenverarbeitung, WU Vienna University of Economics and Business, 2001. http://epub.wu.ac.at/1052/1/document.pdf.

Full text
Abstract:
Generating a single order statistic without generating the full sample can be an important task for simulations. If the density and the CDF of the distribution are given it is no problem to compute the density of the order statistic. In the main theorem it is shown that the concavity properties of that density depend directly on the distribution itself. Especially for log-concave distributions all order statistics have log-concave distributions themselves. So recently suggested automatic transformed density rejection algorithms can be used to generate single order statistics. This idea leads to very fast generators. For example for the normal and gamma distribution the suggested new algorithms are between 10 and 60 times faster than the algorithms suggested in the literature. (author's abstract)
Series: Preprint Series / Department of Applied Statistics and Data Processing
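The abstract's starting point, that the density of a single order statistic is known in closed form, also yields a simple generator by inversion: the k-th order statistic of n uniforms is Beta(k, n-k+1) distributed, so X_(k) = F^{-1}(U_(k)). The sketch below uses this standard beta-inversion route rather than the authors' transformed-density-rejection algorithms; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def single_order_statistic(k, n, ppf, size=1, seed=None):
    """Sample the k-th smallest of n i.i.d. draws without generating all n values:
    U_(k) ~ Beta(k, n - k + 1), then X_(k) = F^{-1}(U_(k))."""
    rng = np.random.default_rng(seed)
    u = rng.beta(k, n - k + 1, size=size)
    return ppf(u)

# Example: the maximum of n = 10 uniforms, whose mean is 10/11.
samples = single_order_statistic(10, 10, ppf=lambda u: u, size=200_000, seed=0)
print(samples.mean())  # close to 10/11 ≈ 0.909
```

For non-uniform distributions one substitutes the corresponding quantile function for `ppf`; the rejection-based generators discussed in the paper avoid that inversion, which is what makes them faster for distributions with expensive quantile functions.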
APA, Harvard, Vancouver, ISO, and other styles
39

Downie, Alan Stewart. "Efficiency of statistics of stereology." Thesis, Imperial College London, 1991. http://hdl.handle.net/10044/1/46750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Durmén, Blunt Tina. "Personalized visualization of blog statistics." Thesis, Linköpings universitet, Medie- och Informationsteknik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-92842.

Full text
Abstract:
This report documents the research, implementation and results of a master's thesis in Media Technology and Engineering at Linköping University. The aim of the project was to develop a personalized visualization application for blog statistics, to be implemented on a web-based community for blog authors. The purpose of the application is to provide users with a tool to explore statistics connected to their own blog. Based on a literature study in usability and information visualization, the application design was developed and implemented. The implementation resulted in a JavaScript-based application, BlogVis, that allows users to compare their own blog statistics with others', as well as compare periods of time in the statistical history of the blog.
APA, Harvard, Vancouver, ISO, and other styles
41

Burkschat, Marco. "Estimation with generalized order statistics /." Aachen : Mainz, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016030518&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Wang, Ke. "On concomitants of order statistics." Columbus, Ohio : Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1202154248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Cumyn, Lucy A. "Pedagogical reflection in statistics instruction." Thesis, McGill University, 2008. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115594.

Full text
Abstract:
Today, education is arguably one of the most important facets used to prepare and train students for the future. Society expects that students will acquire the requisite knowledge and competence in their respective fields to prepare them to successfully navigate the demands of today's competitive markets. This expectation has consequences on teachers at all levels of education across many domains. Teachers have a significant role: to prepare students for the future. Competent teachers spend a great deal of time reflecting on their own practices and beliefs, reviewing their teaching goals and evaluating if students have met these goals effectively. The process of reflection in teaching is vital in the preparation and training of students.
The purpose of this dissertation therefore was to investigate how statistics professors reflect on their practice. The research questions were designed to elicit what statistics teachers thought about before giving their courses and before giving two of their classes (hypothesis testing, t-tests). Post-class evaluation interviews were conducted to determine where professors thought they were effective and whether they considered a need for change based on student understanding. More specifically, the questions asked: 1) What are the main themes in teacher reflection? 2) How is the content of reflection similar or different between statistics teachers? 3) How is the content of teacher reflection defined in statistics?
The design was based on a grounded theory approach whereby data collection consisted solely of interviews conducted throughout the semester: one pre-course interview and two sets of pre-class and post-class interviews. There were 13 participants in total. Participants were either statistics teachers from Quebec Cegeps or university professors. Participants were from the following departments: anthropology, economics, psychology, sociology, education, math, and biology. The analyses dealt with three data sources: pre class reflection, in class reflection, and post class reflection.
Data analysis focused on defining the main themes of teacher reflection that emerged from the data, identifying the content of reflection between and within participants in terms of similarities or differences. The pre-course interview revealed five main themes: the course (logistics), the teacher as 'self', teaching approaches (what do they say they do in the classroom?), teaching and learning influences, and evaluation of teaching.
The pre- and post-class interviews addressed class planning. What issues did the professors foresee students having in understanding hypothesis testing and t-tests? What changes would they make the next time they taught these concepts? Results showed that professor reflection centered on three main categories: the class, the student, and the teacher. For the class category, some professors reviewed lecture notes, some added examples that emphasized authentic statistical problems, and others did no preparation. Student-related themes addressed issues students had with understanding statistical content, learning-associated difficulties, and student affect. The last category, the teacher, looked at self-evaluation, in-class strategies, methods of promoting and gauging student understanding, and decisions made in class and for future classes. Recommendations for future research include examining the role of experience in professors' level of reflection as well as defining the process of decision making and its role in reflection.
APA, Harvard, Vancouver, ISO, and other styles
44

Omar, Yasser Revez. "Particle statistics in quantum information." Thesis, University of Oxford, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.396407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

張益軍 and Yijun Zhang. "Pulsar statistics in our galaxy." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B31225585.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

楊光俊 and Conson Yeung. "Fracture statistics of brittle materials." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2005. http://hub.hku.hk/bib/B45015211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Siegert, Stefan. "Rank statistics of forecast ensembles." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-102152.

Full text
Abstract:
Ensembles are today routinely applied to estimate uncertainty in numerical predictions of complex systems such as the weather. Instead of initializing a single numerical forecast, using only the best guess of the present state as initial conditions, a collection (an ensemble) of forecasts whose members start from slightly different initial conditions is calculated. By varying the initial conditions within their error bars, the sensitivity of the resulting forecasts to these measurement errors can be accounted for. The ensemble approach can also be applied to estimate forecast errors that are due to insufficiently known model parameters by varying these parameters between ensemble members. An important (and difficult) question in ensemble weather forecasting is how well an ensemble of forecasts reproduces the actual forecast uncertainty. A widely used criterion to assess the quality of forecast ensembles is statistical consistency, which demands that the ensemble members and the corresponding measurement (the "verification") behave like random independent draws from the same underlying probability distribution. Since this forecast distribution is generally unknown, such an analysis is nontrivial. An established criterion to assess statistical consistency of a historical archive of scalar ensembles and verifications is uniformity of the verification rank: if the verification falls between the (k-1)-st and k-th largest ensemble member it is said to have rank k. Statistical consistency implies that the average frequency of occurrence should be the same for each rank. A central result of the present thesis is that, in a statistically consistent K-member ensemble, the (K+1)-dimensional vector of rank probabilities is a random vector that is uniformly distributed on the K-dimensional probability simplex. This behavior is universal for all possible forecast distributions.
It thus provides a way to describe forecast ensembles in a nonparametric way, without making any assumptions about the statistical behavior of the ensemble data. The physical details of the forecast model are eliminated, and the notion of statistical consistency is captured in an elementary way. Two applications of this result to ensemble analysis are presented. Ensemble stratification, the partitioning of an archive of ensemble forecasts into subsets using a discriminating criterion, is considered in the light of the above result. It is shown that certain stratification criteria can make the individual subsets of ensembles appear statistically inconsistent, even though the unstratified ensemble is statistically consistent. This effect is explained by considering statistical fluctuations of rank probabilities. A new hypothesis test is developed to assess statistical consistency of stratified ensembles while taking these potentially misleading stratification effects into account. The distribution of rank probabilities is further used to study the predictability of outliers, which are defined as events where the verification falls outside the range of the ensemble, being either smaller than the smallest, or larger than the largest ensemble member. It is shown that these events are better predictable than by a naive benchmark prediction, which unconditionally issues the average outlier frequency of 2/(K+1) as a forecast. Predictability of outlier events, quantified in terms of probabilistic skill scores and receiver operating characteristics (ROC), is shown to be universal in a hypothetical forecast ensemble. An empirical study shows that in an operational temperature forecast ensemble, outliers are likewise predictable, and that the corresponding predictability measures agree with the analytically calculated ones.
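The verification rank defined in the abstract is straightforward to simulate. The sketch below is an illustration under the stated assumption of a statistically consistent ensemble (members and verification drawn independently from the same distribution), not Siegert's analysis: it computes rank frequencies and checks that each rank occurs with frequency near 1/(K+1), with outliers near the benchmark 2/(K+1). The ensemble size and distribution are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
K = 9                 # ensemble members
n_forecasts = 100_000

# Statistically consistent ensemble: members and verification are
# independent draws from the same distribution (here standard normal).
ensemble = rng.normal(size=(n_forecasts, K))
verification = rng.normal(size=n_forecasts)

# Rank of the verification among the K members: 1 .. K+1.
ranks = 1 + np.sum(ensemble < verification[:, None], axis=1)
freq = np.bincount(ranks, minlength=K + 2)[1:] / n_forecasts

print(freq)          # each of the K+1 entries near 1/(K+1) = 0.1
outlier_freq = freq[0] + freq[-1]
print(outlier_freq)  # near 2/(K+1) = 0.2
```

Rank 1 means the verification fell below all members and rank K+1 above all of them; these two bins together are the "outlier" events whose predictability the thesis studies.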
APA, Harvard, Vancouver, ISO, and other styles
48

Luo, Man. "Data mining and classical statistics." Virtual Press, 2004. http://liblink.bsu.edu/uhtbin/catkey/1304657.

Full text
Abstract:
This study introduces an overview of data mining. It suggests that methods derived from classical statistics are an integrated part of data mining. However, there are substantial differences between these two areas. Classical statistical models and non-statistical models used in data mining, such as regression trees and artificial neural networks, are presented to emphasize their unique approaches to extract information from data. In summation, this research provides some background to data mining and the role of classical statistics played in it.
Department of Mathematical Sciences
APA, Harvard, Vancouver, ISO, and other styles
49

Teytaud, Fabien. "Introduction of statistics in optimization." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00655731.

Full text
Abstract:
In this thesis we study two optimization fields. In the first part, we study the use of evolutionary algorithms for solving derivative-free optimization problems in continuous spaces. In the second part we are interested in multistage optimization, where we have to make decisions in a discrete environment with a finite horizon and a large number of states; here we use in particular Monte-Carlo Tree Search algorithms. In the first part, we work on evolutionary algorithms in a parallel context, when a large number of processors are available. We start by presenting some state-of-the-art evolutionary algorithms, and then show that these algorithms are not well designed for parallel optimization. Because these algorithms are population-based, they should be well suited to parallelization, but experiments show that their results are far from the theoretical bounds. In order to resolve this discrepancy, we propose rules (such as a new selection ratio or a faster decrease of the step-size) to improve the evolutionary algorithms. Experiments on several evolutionary algorithms show that these algorithms reach the theoretical speedup with the help of the new rules. Concerning the work on multistage optimization, we start by presenting some state-of-the-art algorithms (Min-Max, Alpha-Beta, Monte-Carlo Tree Search, Nested Monte-Carlo). After that, we show the generality of the Monte-Carlo Tree Search algorithm by successfully applying it to the game of Havannah. The application has been a real success: today, every Havannah program uses Monte-Carlo Tree Search algorithms instead of the classical Alpha-Beta. Next, we study more precisely the Monte-Carlo part of the Monte-Carlo Tree Search algorithm. Three generic rules are proposed in order to improve this Monte-Carlo policy, and experiments are done to show the efficiency of these rules.
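As a hedged, minimal example of the kind of evolutionary algorithm the first part discusses, here is a textbook serial (1+1)-ES with 1/5th-success-rule step-size adaptation, not the thesis's parallel variants or its proposed rules. The adaptation constants (1.22 and the exponent -0.25) are conventional illustrative choices.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, iters=2000, seed=None):
    """Minimal (1+1)-ES with 1/5th-success-rule step-size adaptation."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = f(x)
    for _ in range(iters):
        child = x + sigma * rng.standard_normal(len(x))
        fc = f(child)
        if fc <= fx:                 # success: accept the child, enlarge the step
            x, fx = child, fc
            sigma *= 1.22
        else:                        # failure: shrink the step
            sigma *= 1.22 ** -0.25   # balances out at ~1/5 success rate
    return x, fx

# Minimize the 5-dimensional sphere function from the point of all ones.
x_best, f_best = one_plus_one_es(lambda v: float(np.sum(v ** 2)),
                                 x0=np.ones(5), seed=0)
print(f_best)  # far smaller than the starting value f = 5.0
```

In the parallel setting the thesis studies, lambda offspring per generation would be evaluated concurrently, and it is the selection ratio and step-size decrease for large lambda that its new rules adjust.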
APA, Harvard, Vancouver, ISO, and other styles
50

Maruri-Aguilar, Hugo. "Algebraic statistics in experimental design." Thesis, University of Warwick, 2007. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.441697.

Full text
APA, Harvard, Vancouver, ISO, and other styles