
Dissertations / Theses on the topic 'Statistical sampling by attributes'



Consult the top 50 dissertations / theses for your research on the topic 'Statistical sampling by attributes.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Tʻang, Min. "Extention of evaluating the operating characteristics for dependent mixed variables-attributes sampling plans to large first sample size /." Online version of thesis, 1991. http://hdl.handle.net/1850/11208.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Bil, Miroslav. "Uvolnění nakupovaných dílů do sériové výroby bez vstupní kontroly [Release of purchased parts into serial production without incoming inspection]." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2015. http://www.nusl.cz/ntk/nusl-232071.

Full text
Abstract:
The theoretical part of this diploma thesis describes methods of incoming inspection, with an emphasis on statistical sampling by attributes. The practical part analyses the supplier qualification process and the incoming-inspection system at the Kollmorgen Company. The outcome of the thesis is a system of sampling plans, a review of the qualification process, and a parameter for monitoring parts released into serial production.
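The incoming-inspection methods described above rely on acceptance sampling by attributes, whose central object is the operating-characteristic (OC) curve of a single sampling plan (n, c): accept the lot when a random sample of n items contains at most c defectives. A minimal sketch (the plan parameters are illustrative, not taken from the thesis):

```python
from math import comb

def oc_probability(n, c, p):
    """P(accept lot) under a single sampling plan by attributes (n, c):
    accept when at most c defectives appear in a random sample of n items,
    each item defective independently with probability p (binomial CDF)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# OC curve of an illustrative plan: sample n = 80 items, acceptance number c = 2.
oc_curve = {p: oc_probability(80, 2, p) for p in (0.01, 0.02, 0.05, 0.10)}
```

Plotting P(accept) against the lot fraction defective p gives the OC curve that a sampling-plan designer balances against producer's and consumer's risks.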
APA, Harvard, Vancouver, ISO, and other styles
3

Majerek, Pavel. "Dynamizace vstupní kontroly za pomocí statistické přijímky srovnáváním [Making incoming inspection dynamic using statistical acceptance sampling by attributes]." Master's thesis, Vysoké učení technické v Brně. Fakulta strojního inženýrství, 2012. http://www.nusl.cz/ntk/nusl-230184.

Full text
Abstract:
In its theoretical part, this master's thesis describes inspection sampling methods, with emphasis on sampling procedures for inspection by attributes. In the practical part, the incoming-inspection system at the Siemens motors Frenstat plant is analysed. The outcome of the thesis is an incoming-inspection directive created in accordance with the requirements of the organization's management and based on the ISO 2859 standards.
APA, Harvard, Vancouver, ISO, and other styles
4

Araujo, Aurelia Aparecida de. "Double-Sampling Control Charts for Attributes." Pontifícia Universidade Católica do Rio de Janeiro, 2005. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=6938@1.

Full text
Abstract:
Funding: CONSELHO NACIONAL DE DESENVOLVIMENTO CIENTÍFICO E TECNOLÓGICO (CNPq).
In this thesis, the incorporation of the double-sampling strategy, already used in lot inspection, into the np control chart (the control chart for the number nonconforming) is proposed, with the purpose of improving its efficiency: reducing the out-of-control average run length (ARL1) without increasing the average sample size (ASS) or reducing the in-control average run length (ARL0). Alternatively, this scheme can be used to reduce the np chart's sampling costs, since, to attain the same ARL1 as a single-sampling np chart, the double-sampling chart requires a smaller average sample size. For a number of values of p0 (the in-control defective rate of the process) and p1 (the out-of-control defective rate), the optimal chart designs were obtained, namely the designs that minimize ARL1 subject to a maximum-ASS and a minimum-ARL0 constraint. Optimal designs were obtained for several values of these constraints. The design consists of two sample sizes, for the first and second stages, and a set of limits for the chart. For each optimal design, the value of ARL1 was also computed for a range of p1 values besides the one for which the design's ARL1 was minimized. A performance comparison was carried out between the proposed scheme and the classical (single-sampling) np chart, the CuSum np scheme, the EWMA np control chart, and the VSS np chart (the adaptive, variable-sample-size control chart). For the comparison, optimal designs for each scheme were obtained under the same constraints and the same values of p0 and p1. An additional contribution of this thesis is therefore the performance analysis and optimization of the np CuSum, EWMA and VSS schemes. The final result is the indication of the most efficient process-control scheme for each situation. 
The double-sampling np control chart proposed and developed here proved to be, in general, the most efficient scheme for detecting large and moderate increases in the process fraction defective, being surpassed only by the VSS chart in cases where p0, the (average) sample size and the increase in p0 (the p1/p0 ratio) are all small.
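The double-sampling logic of the abstract can be made concrete: a first sample either clears the process, signals, or triggers a second sample, and ARL = 1/P(signal) per plotted point. The sketch below is a generic illustration; the chart limits and sample sizes are ours, not the optimal designs derived in the thesis:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def ds_np_chart(n1, n2, warn, upper1, upper2, p):
    """Signal probability, ARL and average sample size (ASS) of a
    double-sampling np chart.  With d1 defectives in a first sample of n1:
      d1 <= warn    -> in control, stop;
      d1 >  upper1  -> signal, stop;
      otherwise     -> draw n2 more items and signal if d1 + d2 > upper2."""
    p_signal = sum(binom_pmf(k, n1, p) for k in range(upper1 + 1, n1 + 1))
    p_second = sum(binom_pmf(k, n1, p) for k in range(warn + 1, upper1 + 1))
    for d1 in range(warn + 1, upper1 + 1):
        tail = sum(binom_pmf(k, n2, p)
                   for k in range(max(upper2 - d1 + 1, 0), n2 + 1))
        p_signal += binom_pmf(d1, n1, p) * tail
    return p_signal, 1.0 / p_signal, n1 + n2 * p_second

# Illustrative design: in control at p0 = 0.01, shifted process at p1 = 0.05.
_, arl0, ass0 = ds_np_chart(50, 50, 1, 4, 6, 0.01)
_, arl1, _ = ds_np_chart(50, 50, 1, 4, 6, 0.05)
```

Sweeping (n1, n2, warn, upper1, upper2) to minimise ARL1 subject to ASS and ARL0 constraints reproduces, in miniature, the optimisation the thesis performs.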
APA, Harvard, Vancouver, ISO, and other styles
5

Curram, J. B. "Computer-aided design in sampling inspection by attributes." Thesis, University of Kent, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.384480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gwet, Jean-Philippe. "Robust statistical inference in survey sampling." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq22168.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nourmohammadi, Mohammad. "Statistical inference with randomized nomination sampling." Elsevier B.V, 2014. http://hdl.handle.net/1993/30150.

Full text
Abstract:
In this dissertation, we develop several new inference procedures based on randomized nomination sampling (RNS). The first problem we consider is that of constructing distribution-free confidence intervals for quantiles of finite populations. The required algorithms for computing coverage probabilities of the proposed confidence intervals are presented. The second problem we address is that of constructing nonparametric confidence intervals for infinite populations. We describe the procedures for constructing confidence intervals and compare the constructed confidence intervals in the RNS setting, under both perfect and imperfect ranking scenarios, with their simple random sampling (SRS) counterparts. Recommendations for choosing the design parameters are made to achieve shorter confidence intervals than their SRS counterparts. The third problem we investigate is the construction of tolerance intervals using the RNS technique. We describe the procedures for constructing one- and two-sided RNS tolerance intervals and investigate the sample sizes required to achieve tolerance intervals that contain the determined proportions of the underlying population. We also investigate the efficiency of RNS-based tolerance intervals compared with their corresponding intervals based on SRS. A new method for estimating ranking error probabilities is proposed. The final problem we consider is that of parametric inference based on RNS. We introduce the different data types associated with the different situations one might encounter using the RNS design and provide the maximum likelihood (ML) and method of moments (MM) estimators of the parameters in two classes of distributions: proportional hazard rate (PHR) and proportional reverse hazard rate (PRHR) models.
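The distribution-free quantile intervals that the dissertation generalises to RNS rest on a classical order-statistic identity, sketched here for the plain SRS case the abstract compares against (the RNS versions modify this construction):

```python
from math import comb

def quantile_ci_coverage(n, r, s, p):
    """Exact coverage of [X_(r), X_(s)] (order statistics of an SRS of size
    n from a continuous distribution) as a confidence interval for the
    p-quantile: coverage = P(r <= B <= s - 1), with B ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, s))

# For n = 20, the interval [X_(6), X_(15)] covers the median with
# probability about 0.959, regardless of the underlying distribution.
coverage = quantile_ci_coverage(20, 6, 15, 0.5)
```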
APA, Harvard, Vancouver, ISO, and other styles
8

Gwet, J. P. (Jean Philippe) Carleton University Dissertation Mathematics and Statistics. "Robust statistical inference in survey sampling." Ottawa, 1997.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baldock, Robert John Nicholas. "Classical statistical mechanics with nested sampling." Thesis, University of Cambridge, 2015. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.709164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Chalom, Edmond. "Statistical image sequence segmentation using multidimensional attributes." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/34335.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (leaves 192-202).
by Edmond Chalom.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
11

Bryan, Paul David. "Accelerating microarchitectural simulation via statistical sampling principles." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/47715.

Full text
Abstract:
The design and evaluation of computer systems rely heavily upon simulation. Simulation is also a major bottleneck in the iterative design process. Applications that may be executed natively on physical systems in a matter of minutes may take weeks or months to simulate. As designs incorporate increasingly higher numbers of processor cores, it is expected that the times required to simulate future systems will become an even greater issue. Simulation exhibits a tradeoff between speed and accuracy. By basing experimental procedures upon known statistical methods, the simulation of systems may be dramatically accelerated while retaining reliable methods to estimate error. This thesis focuses on the acceleration of simulation through statistical processes. The first two techniques discussed in this thesis focus on accelerating single-threaded simulation via cluster sampling. Cluster sampling extracts multiple groups of contiguous population elements to form a sample. This thesis introduces techniques to reduce the sampling and non-sampling bias components, which must be reduced for sample measurements to be reliable. Non-sampling bias is reduced through the Reverse State Reconstruction algorithm, which removes ineffectual instructions from the skipped instruction stream between simulated clusters. Sampling bias is reduced via the Single Pass Sampling Regimen Design Process, which guides the user towards selected representative sampling regimens. Unfortunately, the extension of cluster sampling to multi-threaded architectures is non-trivial and raises many interesting challenges; approaches to overcoming them are discussed. This thesis also introduces thread skew, a useful metric that quantitatively measures the non-sampling bias associated with divergent thread progressions at the beginning of a sampling unit.
Finally, the Barrier Interval Simulation method is discussed as a technique to dramatically decrease the simulation times of certain classes of multi-threaded programs. It segments a program into discrete intervals, separated by barriers, which are leveraged to avoid many of the challenges that prevent multi-threaded sampling.
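Cluster sampling as used in the thesis can be illustrated generically: draw groups of contiguous population elements, average within each cluster, and derive a standard error from the between-cluster variance. A toy sketch on synthetic data (the population and statistic are ours, not the thesis's simulation infrastructure):

```python
import random
import statistics

def cluster_sample_mean(population, n_clusters, cluster_len, seed=0):
    """Estimate the population mean from n_clusters randomly placed groups
    of cluster_len contiguous elements; the standard error is computed from
    the between-cluster variance of the per-cluster means."""
    rng = random.Random(seed)
    starts = [rng.randrange(len(population) - cluster_len)
              for _ in range(n_clusters)]
    means = [statistics.fmean(population[s:s + cluster_len]) for s in starts]
    return statistics.fmean(means), statistics.stdev(means) / n_clusters ** 0.5

# Synthetic "instruction stream" statistic with true mean 49.5.
population = [i % 100 for i in range(100_000)]
est, se = cluster_sample_mean(population, n_clusters=30, cluster_len=50)
```

Because contiguous elements are correlated, the between-cluster variance (not the element-level variance) is the right basis for the error estimate — the same reason sampling bias needs care in sampled simulation.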
APA, Harvard, Vancouver, ISO, and other styles
12

Eng, Frida. "Non-Uniform Sampling in Statistical Signal Processing." Doctoral thesis, Linköping : Department of Electrical Engineering, Linköpings universitet, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-8480.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Paulse, Bradley. "The statistical analysis of complex sampling data." University of the Western Cape, 2018. http://hdl.handle.net/11394/6754.

Full text
Abstract:
Magister Scientiae - MSc
Most standard statistical techniques illustrated in textbooks assume that the data are collected from a simple random sample (SRS) and hence are independently and identically distributed (i.i.d.). In reality, data are often sourced through complex sampling (CS) designs, with a combination of stratification and clustering at different levels of the design. Consequently, CS data are not i.i.d., and sampling weights, developed over the different stages of the design, are calculated and included in the analysis to account for the sampling design. Logistic regression is often employed in modelling survey data, since the response under investigation typically has a dichotomous outcome. Furthermore, since the logistic regression model has no homogeneity or normality assumptions, it is appealing when modelling a dichotomous response from survey data. This research compares the estimates of the logistic regression model parameters when the CS design is accounted for, i.e. weighting is present, with those obtained when the data are modelled under an SRS design, i.e. without weighting. In addition, the standard errors of the estimators are obtained using three different variance techniques, viz. Taylor series linearization, the jackknife and the bootstrap. The different estimated standard errors are used in the calculation of the standard (asymptotic) interval, which is compared to the bootstrap percentile interval in terms of interval coverage probability. A further level of comparison contrasts estimates obtained using only design weights with those obtained using calibrated and integrated sampling weights. The simulation study is based on the Income and Expenditure Survey (IES) of 2005/2006. The results showed that the estimators generally performed better when weighting was used than when the design was ignored, i.e. under the assumption of SRS, with the results for the Taylor series linearization being more stable.
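A design-weighted logistic regression of the kind compared in the thesis can be sketched with a weighted IRLS fit, in which each unit's score and information contributions are scaled by its sampling weight. This is a minimal numpy-only illustration on toy data (the simulated weights and coefficients are ours; a real survey analysis would pair the point estimates with linearization or replication variance estimation as the abstract describes):

```python
import numpy as np

def weighted_logit(X, y, w, n_iter=25):
    """Fit logistic regression by weighted IRLS (Newton steps): every
    observation's score and information contributions are multiplied by
    its survey weight w."""
    X = np.column_stack([np.ones(len(y)), X])        # prepend an intercept
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        work = w * mu * (1.0 - mu)                   # weighted IRLS weights
        grad = X.T @ (w * (y - mu))
        hess = (X * work[:, None]).T @ X
        beta = beta + np.linalg.solve(hess, grad)
    return beta

# Toy data: true intercept 0.5, true slope 1.2, artificial design weights.
rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-(0.5 + 1.2 * x)))).astype(float)
w = rng.uniform(0.5, 2.0, size=500)
beta = weighted_logit(x, y, w)
```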
APA, Harvard, Vancouver, ISO, and other styles
14

Habegger, Jerrell Wayne. "An internal auditing innovation decision: statistical sampling." Diss., Virginia Polytechnic Institute and State University, 1988. http://hdl.handle.net/10919/53522.

Full text
Abstract:
In planning an effective and efficient audit examination, the auditor has to choose appropriate auditing technologies and procedures. This audit choice problem has been explored from several perspectives. However, it has not been viewed as an innovation process. This dissertation reports the results of an innovation decision study in internal auditing. Hypotheses of associations between the internal auditor’s decision to use statistical sampling and the perceived characteristics of statistical sampling are derived from Rogers’ Innovation Diffusion model (Everett Rogers, Diffusion of Innovations, 1983). Additional hypotheses relating the decision to use statistical sampling to personal and organizational characteristics are derived from the innovation adoption and implementation research literature. Data for this study were gathered by mailing a questionnaire to a sample of internal audit directors. Incorporated into the questionnaire are several scales for measuring (1) innovation attributes, (2) professionalism, (3) professional and organizational commitment, (4) management support for innovation, and (5) creativity decision style. The usable response rate was 32.5% (n= 260). The primary finding of this study is that the extent of use of attributes, dollar unit, and variables sampling techniques is positively associated with the respondents’ perceptions of their relative advantage, trialability, compatibility, and observability, and negatively associated with the techniques’ perceived complexity. A secondary finding is that there is no overall association between the extent of use of statistical sampling by the internal auditors and their (1) professionalism, (2) professional and organizational commitment, (3) decision style, and (4) organizational support for innovation. 
Further exploration using multiple regression and logistic regression analyses indicate that several of the personal and organizational characteristics add to the ability of the regression models to explain the extent of use of statistical sampling. Evidence that organization types do have an effect upon the innovation decision process is presented. The study concludes by discussing its implications for understanding the innovation decision process of internal auditors, for designing and managing future innovation processes in auditing, and for further research into audit choice problems and innovation decisions of auditors and accountants.
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
15

Ayam, Rufus Tekoh. "Audit sampling: A qualitative study on the role of statistical and non-statistical sampling approaches on audit practices in Sweden." Thesis, Umeå universitet, Handelshögskolan vid Umeå universitet, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-45258.

Full text
Abstract:
PURPOSE: The two approaches to audit sampling, statistical and nonstatistical, have been examined in this study. The overall purpose of the study is to explore the extent to which statistical and nonstatistical sampling approaches are currently utilized by independent auditors in auditing practice. The study also seeks to achieve two additional purposes: the first is to find out whether auditors utilize different sampling techniques when auditing SMEs (small and medium-sized enterprises) and big companies, and the second is to identify some common selection methods used by auditors for selecting statistical or nonstatistical audit samples.   METHOD: The population investigated consists of professional auditors residing in Umeå, Sweden. Data for the study were collected through semi-structured interviews, and convenience sampling, a non-probability sampling technique, was used to select respondents. An interview guide was sent to respondents in advance so that they could prepare, both mentally and psychologically, prior to each scheduled interview. The semi-structured interview technique was adopted because it was a suitable approach for extracting valuable information and in-depth explanations from auditors about the current extent of the use of statistical and nonstatistical audit sampling in auditing practice. Ultimately, the selected respondents participated actively and thoroughly expressed their views on and experiences of audit sampling, statistical audit sampling, and nonstatistical audit sampling.   RESULTS: Both statistical and nonstatistical audit sampling were found to be used most often by auditors when auditing the financial statements of big companies, whereas nonstatistical audit sampling is most often used when auditing SMEs. 
Both statistical and nonstatistical sampling are therefore in dominant use by auditors in Sweden. Audit samples are selected through the random selection method and the systematic selection method when using statistical audit sampling, while for nonstatistical audit sampling items are selected by professional judgment. However, auditors in Sweden lean towards the random selection method for statistical audit sampling and towards professional judgment for nonstatistical audit sampling. The main reasons auditors use both statistical and nonstatistical audit sampling are to minimize risk and to guarantee a high-quality audit. The conclusion of the study is that auditors in Sweden use both statistical and nonstatistical audit sampling techniques when auditing big companies, use nonstatistical audit sampling when auditing SMEs, and select samples using the random and systematic selection methods for statistical audit sampling, while for nonstatistical audit sampling items are selected within the parameters of their professional judgment.
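The two statistical selection methods the respondents favour, random selection and systematic selection, can be sketched in a few lines (the invoice population and sample size below are illustrative, not from the study):

```python
import random

def random_selection(items, n, seed=None):
    """Simple random sampling without replacement: every subset of n items
    is equally likely to be chosen."""
    return random.Random(seed).sample(items, n)

def systematic_selection(items, n, seed=None):
    """Systematic selection: pick a random start within the first interval,
    then take every k-th item, where k = population size // sample size."""
    k = len(items) // n
    start = random.Random(seed).randrange(k)
    return [items[start + i * k] for i in range(n)]

invoices = list(range(1, 1201))                      # a toy invoice register
sample_random = random_selection(invoices, 10, seed=7)
sample_systematic = systematic_selection(invoices, 10, seed=7)
```

Systematic selection spreads the sample evenly across the (sorted) population, which is why it pairs naturally with statistical audit sampling, while purely judgmental selection has no such probability mechanism.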
APA, Harvard, Vancouver, ISO, and other styles
16

Pollard, John. "Adaptive distance sampling." Thesis, University of St Andrews, 2002. http://hdl.handle.net/10023/15176.

Full text
Abstract:
We investigate mechanisms to improve efficiency for line and point transect surveys of clustered populations by combining distance methods with adaptive sampling. In adaptive sampling, survey effort is increased when areas of high animal density are located, thereby increasing the number of observations. We begin by building on existing adaptive sampling techniques to create both point and line transect adaptive estimators; these are then extended to allow the inclusion of covariates in the detection function estimator. However, the methods are limited, as the total effort required cannot be forecast at the start of a survey, and so a new fixed-total-effort adaptive approach is developed. A key difference in the new method is that it does not require the calculation of the inclusion probabilities typically used by existing adaptive estimators. The fixed-effort method is primarily aimed at line transect sampling, but point transect derivations are also provided. We evaluate the new methodology by computer simulation, and report on surveys of harbour porpoise in the Gulf of Maine, in which the approach was compared with conventional line transect sampling. Line transect simulation results for a clustered population showed up to a 6% improvement in the adaptive density variance estimate over the conventional, whilst when there was no clustering the adaptive estimate was 1% less efficient than the conventional. For the harbour porpoise survey, the coefficients of variation of the adaptive density estimates showed improvements of 8% for individual porpoise density and 14% for school density over the conventional estimates. The primary benefit of the fixed-effort method is the potential to improve survey coverage, allowing a survey to complete within a fixed time and effort; an important feature if expensive survey resources are involved, such as an aircraft, crew and observers.
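Conventional line-transect estimation, the baseline against which the adaptive approach is compared, reduces to D = n / (2 * L * ESW), where ESW is the effective strip half-width implied by the fitted detection function. A sketch with a half-normal detection function and an assumed (rather than fitted) scale parameter:

```python
from math import pi, sqrt

def line_transect_density(n_detected, line_length, sigma):
    """Line-transect density estimate D = n / (2 * L * ESW), with a
    half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)) whose
    effective strip half-width is ESW = sigma * sqrt(pi / 2)."""
    esw = sigma * sqrt(pi / 2.0)
    return n_detected / (2.0 * line_length * esw)

# Illustrative survey: 64 detections over 120 km of trackline, sigma = 0.4 km.
density = line_transect_density(64, 120.0, 0.4)   # animals per square km
```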
APA, Harvard, Vancouver, ISO, and other styles
17

Moffa, Giuseppina. "Some computational advances in sampling based statistical inference." Thesis, University of Bristol, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.535203.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Aziz, Wilker Ferreira. "Exact sampling and optimisation in statistical machine translation." Thesis, University of Wolverhampton, 2014. http://hdl.handle.net/2436/314591.

Full text
Abstract:
In Statistical Machine Translation (SMT), inference needs to be performed over a high-complexity discrete distribution defined by the intersection between a translation hypergraph and a target language model. This distribution is too complex to be represented exactly, and one typically resorts to approximation techniques either to perform optimisation (the task of searching for the optimum translation) or sampling (the task of finding a subset of translations that is statistically representative of the goal distribution). Beam-search is an example of an approximate optimisation technique, where maximisation is performed over a heuristically pruned representation of the goal distribution. For inference tasks other than optimisation, rather than finding a single optimum, one is really interested in obtaining a set of probabilistic samples from the distribution. This is the case in training, where one wishes to obtain unbiased estimates of expectations in order to fit the parameters of a model. Samples are also necessary in consensus decoding, where one chooses from a sample of likely translations the one that minimises a loss function. Due to the additional computational challenges posed by sampling, n-best lists, a by-product of optimisation, are typically used as a biased approximation to true probabilistic samples. A more direct procedure is to attempt to directly draw samples from the underlying distribution rather than rely on n-best list approximations. Markov Chain Monte Carlo (MCMC) methods, such as Gibbs sampling, offer a way to overcome the tractability issues in sampling; however, their convergence properties are hard to assess. That is, it is difficult to know when, if ever, an MCMC sampler is producing samples that are compatible with the goal distribution. 
Rejection sampling, a Monte Carlo (MC) method, is more fundamental and natural; it offers strong guarantees, such as unbiased samples, but is typically hard to design for distributions of the kind addressed in SMT, rendering it an intractable method. A recent technique that stresses a unified view between the two types of inference tasks discussed here, optimisation and sampling, is the OS* approach. OS* can be seen as a cross between Adaptive Rejection Sampling (an MC method) and A* optimisation. In this view, the intractable goal distribution is upper-bounded by a simpler (thus tractable) proxy distribution, which is then incrementally refined to be closer to the goal until the maximum is found, or until the sampling performance exceeds a certain level. This thesis introduces an approach to exact optimisation and exact sampling in SMT by addressing the tractability issues associated with the intersection between the translation hypergraph and the language model. The two forms of inference are handled in a unified framework based on the OS* approach. In short, an intractable goal distribution, over which one wishes to perform inference, is upper-bounded by tractable proposal distributions. A proposal represents a relaxed version of the complete space of weighted translation derivations, where relaxation happens with respect to the incorporation of the language model. These proposals give an optimistic view on the true model and allow for easier and faster search using standard dynamic programming techniques. In the OS* approach, such proposals are used to perform a form of adaptive rejection sampling. In rejection sampling, samples are drawn from a proposal distribution and accepted or rejected as a function of the mismatch between the proposal and the goal. The technique is adaptive in that rejected samples are used to motivate a refinement of the upper-bound proposal that brings it closer to the goal, improving the rate of acceptance. 
Optimisation can be connected to an extreme form of sampling, thus the framework introduced here suits both exact optimisation and exact sampling. Exact optimisation means that the global maximum is found with a certificate of optimality. Exact sampling means that unbiased samples are independently drawn from the goal distribution. We show that by using this approach exact inference is feasible using only a fraction of the time and space that would be required by a full intersection, without recourse to pruning techniques that only provide approximate solutions. We also show that the vast majority of the entries (n-grams) in a language model can be summarised by shorter and optimistic entries. This means that the computational complexity of our approach is less sensitive to the order of the language model distribution than a full intersection would be. Particularly in the case of sampling, we show that it is possible to draw exact samples compatible with distributions which incorporate a high-order language model component from proxy distributions that are much simpler. In this thesis, exact inference is performed in the context of both hierarchical and phrase-based models of translation, the latter characterising a problem that is NP-complete in nature.
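The rejection-sampling core of this approach can be sketched in isolation: draw from a tractable proposal that upper-bounds the goal distribution and accept with probability goal(x) / (bound * proposal(x)). Accepted draws are exact, unbiased samples; in the adaptive scheme the rejected draws would additionally drive refinement of the bound, which this toy sketch omits:

```python
import random

def rejection_sample(goal_pdf, draw_proposal, proposal_pdf, bound, n, seed=0):
    """Draw n exact samples from goal_pdf by rejection sampling; requires
    goal_pdf(x) <= bound * proposal_pdf(x) everywhere.  Accepted draws are
    unbiased samples; rejections could also be used to refine the bound."""
    rng = random.Random(seed)
    out, rejected = [], 0
    while len(out) < n:
        x = draw_proposal(rng)
        if rng.random() < goal_pdf(x) / (bound * proposal_pdf(x)):
            out.append(x)
        else:
            rejected += 1
    return out, rejected

# Toy goal: triangular density f(x) = 2x on [0, 1]; uniform proposal, bound 2.
samples, rejected = rejection_sample(
    goal_pdf=lambda x: 2.0 * x,
    draw_proposal=lambda rng: rng.random(),
    proposal_pdf=lambda x: 1.0,
    bound=2.0,
    n=5000,
)
```

The tighter the bound, the higher the acceptance rate — which is exactly what the incremental refinement of the proposal buys in the thesis's framework.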
APA, Harvard, Vancouver, ISO, and other styles
19

Shipitsyn, Aleksey. "Statistical Learning with Imbalanced Data." Thesis, Linköpings universitet, Filosofiska fakulteten, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-139168.

Full text
Abstract:
In this thesis several sampling methods for Statistical Learning with imbalanced data have been implemented and evaluated with a new metric, imbalanced accuracy. Several modifications and new algorithms have been proposed for intelligent sampling: Border links, Clean Border Undersampling, One-Sided Undersampling Modified, DBSCAN Undersampling, Class Adjusted Jittering, Hierarchical Cluster Based Oversampling, DBSCAN Oversampling, Fitted Distribution Oversampling, Random Linear Combinations Oversampling, Center Repulsion Oversampling. A set of requirements on a satisfactory performance metric for imbalanced learning have been formulated and a new metric for evaluating classification performance has been developed accordingly. The new metric is based on a combination of the worst class accuracy and geometric mean. In the testing framework nonparametric Friedman's test and post hoc Nemenyi’s test have been used to assess the performance of classifiers, sampling algorithms, combinations of classifiers and sampling algorithms on several data sets. A new approach of detecting algorithms with dominating and dominated performance has been proposed with a new way of visualizing the results in a network. 
From experiments on simulated and several real data sets we conclude that: i) different classifiers are not equally sensitive to sampling algorithms, ii) sampling algorithms have different performance within specific classifiers, iii) oversampling algorithms perform better than undersampling algorithms, iv) Random Oversampling and Random Undersampling outperform many well-known sampling algorithms, v) our proposed algorithms Hierarchical Cluster Based Oversampling, DBSCAN Oversampling with FDO, and Class Adjusted Jittering perform much better than other algorithms, vi) a few good combinations of a classifier and sampling algorithm may boost classification performance, while a few bad combinations may spoil the performance, but the majority of combinations are not significantly different in performance.
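The simplest baseline and the metric above can be sketched. Random oversampling duplicates minority rows until the classes balance, and the combined metric below blends worst-class accuracy with the geometric mean of per-class accuracies; this is one plausible reading of the combination the abstract describes, and the thesis's exact definition may differ:

```python
import random
from math import prod

def random_oversample(X, y, seed=0):
    """Random oversampling: duplicate randomly chosen minority-class rows
    until every class reaches the majority-class size."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xo, yo = [], []
    for cls, rows in by_class.items():
        rows = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        Xo.extend(rows)
        yo.extend([cls] * len(rows))
    return Xo, yo

def imbalanced_accuracy(y_true, y_pred, alpha=0.5):
    """Blend of worst-class accuracy and the geometric mean of per-class
    accuracies: high only when *every* class is predicted well."""
    acc = []
    for cls in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == cls]
        acc.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return alpha * min(acc) + (1 - alpha) * prod(acc) ** (1.0 / len(acc))
```

Note that a classifier predicting only the majority class scores zero under this metric, whereas plain accuracy would reward it on imbalanced data.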
APA, Harvard, Vancouver, ISO, and other styles
20

Hochwald, Cathy. "Statistical application of seismic attributes Cooper/Eromanga Basin, South Australia /." Title page, contents and abstract only, 1995. http://web4.library.adelaide.edu.au/theses/09SB/09sbh685.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
21

Nguyen, Trang. "Comparison of Sampling-Based Algorithms for Multisensor Distributed Target Tracking." ScholarWorks@UNO, 2003. http://scholarworks.uno.edu/td/20.

Full text
Abstract:
Nonlinear filtering is certainly very important in estimation since most real-world problems are nonlinear. Recently, considerable progress in nonlinear filtering theory has been made in the area of sampling-based methods, including both random (Monte Carlo) and deterministic (quasi-Monte Carlo) sampling, and their combination. This work considers the problem of tracking a maneuvering target in a multisensor environment. A novel scheme for distributed tracking is employed that utilizes a nonlinear target model and estimates from local (sensor-based) estimators. The resulting estimation problem is highly nonlinear and thus quite challenging. In order to evaluate the performance capabilities of the architecture considered, advanced sampling-based nonlinear filters are implemented: particle filter (PF), unscented Kalman filter (UKF), and unscented particle filter (UPF). Results from extensive Monte Carlo simulations using different configurations of these algorithms are obtained to compare their effectiveness for solving the distributed target tracking problem.
APA, Harvard, Vancouver, ISO, and other styles
22

Wan, Choi-ying. "Statistical analysis for capture-recapture experiments in discrete time." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22753217.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Martin, Russell Andrew. "Paths, sampling, and markov chain decomposition." Diss., Georgia Institute of Technology, 2001. http://hdl.handle.net/1853/29383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Sbailò, Luigi [Verfasser]. "Efficient multi-scale sampling methods in statistical physics / Luigi Sbailò." Berlin : Freie Universität Berlin, 2020. http://d-nb.info/1206180722/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Dowd, Andrew Vernon 1962. "Statistical considerations of modified two bit sampling for astronomical correlators." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/277840.

Full text
Abstract:
When applied to radio astronomy instrumentation, practical limitations on circuitry restrict digital correlators to extremely coarse quantization. This paper examines non-linear effects of modified two-bit quantization on astronomical correlators. Equations are presented to correct quantization bias in estimates. The degradation due to quantization is given and plotted. The necessary tolerance in threshold levels is established to avoid systematic errors in power spectrum measurement. An alternative method of measuring power is derived that reduces parameter sensitivity.
APA, Harvard, Vancouver, ISO, and other styles
26

Xu, Fei. "Correlation-aware statistical methods for sampling-based group by estimates." [Gainesville, Fla.] : University of Florida, 2009. http://purl.fcla.edu/fcla/etd/UFE0024750.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Lu, Tsui-Shan Zhou Haibo. "Statistical inferences for outcome dependent sampling design with multivariate outcomes." Chapel Hill, N.C. : University of North Carolina at Chapel Hill, 2009. http://dc.lib.unc.edu/u?/etd,2447.

Full text
Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2009.
Title from electronic title page (viewed Sep. 3, 2009). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Biostatistics, Gillings School of Global Public Health." Discipline: Biostatistics; Department/School: Public Health.
APA, Harvard, Vancouver, ISO, and other styles
28

Girardi, Benur A. "Bulk sampling : some strategies for improving quality control in chemical industries." Thesis, City University London, 1993. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.359184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

林達明 and Daming Lin. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B3123429X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ignatieva, Ekaterina. "Adaptive Bayesian sampling with application to 'bubbles'." Connect to e-thesis, 2008. http://theses.gla.ac.uk/356/.

Full text
Abstract:
Thesis (MSc(R)) - University of Glasgow, 2008.
MSc(R) thesis submitted to the Department of Mathematics, Faculty of Information and Mathematical Sciences, University of Glasgow, 2008. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
31

Kanyongo, Gibbs Y. "Using Large-Scale Datasets to Teach Abstract Statistical Concepts: Sampling Distribution." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-82613.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

尹再英 and Choi-ying Wan. "Statistical analysis for capture-recapture experiments in discrete time." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31225287.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Wang, Yan, and 王艷. "Statistical inference for capture-recapture studies in continuoustime." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2001. http://hub.hku.hk/bib/B31243721.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Kanchinadam, Krishna M. "DataMapX a tool for cross-mapping entities and attributes between bioinformatics databases /." Fairfax, VA : George Mason University, 2008. http://hdl.handle.net/1920/3135.

Full text
Abstract:
Thesis (M.S.)--George Mason University, 2008.
Vita: p. 29. Thesis director: Jennifer Weller. Submitted in partial fulfillment of the requirements for the degree of Master of Science in Bioinformatics. Title from PDF t.p. (viewed July 7, 2008). Includes bibliographical references (p. 28). Also issued in print.
APA, Harvard, Vancouver, ISO, and other styles
35

Pan, Zhiwei. "Statistical learning algorithms : multi-class classification and regression with non-i.i.d. sampling /." access full-text access abstract and table of contents, 2009. http://libweb.cityu.edu.hk/cgi-bin/ezdb/thesis.pl?phd-ma-b30082316f.pdf.

Full text
Abstract:
Thesis (Ph.D.)--City University of Hong Kong, 2009.
"Submitted to Department of Mathematics in partial fulfillment of the requirements for the degree of Doctor of Philosophy." Includes bibliographical references (leaves [65]-75)
APA, Harvard, Vancouver, ISO, and other styles
36

Lipson, Kay, and klipson@swin edu au. "The role of the sampling distribution in developing understanding of statistical inference." Swinburne University of Technology, 2000. http://adt.lib.swin.edu.au./public/adt-VSWT20050711.161903.

Full text
Abstract:
There has been widespread concern expressed by members of the statistics education community in the past few years about the lack of any real understanding demonstrated by many students completing courses in introductory statistics. This deficiency in understanding has been particularly noted in the area of inferential statistics, where students, particularly those studying statistics as a service course, have been inclined to view statistical inference as a set of unrelated recipes. As such, these students have developed skills that have little practical application and are easily forgotten. This thesis is concerned with the development of understanding in statistical inference for beginning students of statistics at the post-secondary level. This involves consideration of the nature of understanding in introductory statistical inference, and how understanding can be measured in the context of statistical inference. In particular, the study has examined the role of the sampling distribution in the students' schemas for statistical inference, and its relationship to both conceptual and procedural understanding. The results of the study have shown that, as anticipated, students will construct highly individual schemas for statistical inference but that the degree of integration of the concept of sampling distribution within this schema is indicative of the level of development of conceptual understanding in that student. The results of the study have practical implications for the teaching of courses in introductory statistics, in terms of content, delivery and assessment.
APA, Harvard, Vancouver, ISO, and other styles
37

Butler, Ruth Catherine. "An exploration of the statistical consequences of sub-sampling for species identification." Thesis, University of Reading, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Argyraki, Ariadni. "Estimation of measurement uncertainty in the sampling of contaminated land." Thesis, Imperial College London, 1997. http://hdl.handle.net/10044/1/8489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Ayala, Christian A. "Acceptance-Rejection Sampling with Hierarchical Models." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1162.

Full text
Abstract:
Hierarchical models provide a flexible way of modeling complex behavior. However, the complicated interdependencies among the parameters in the hierarchy make training such models difficult. MCMC methods have been widely used for this purpose, but can often only approximate the necessary distributions. Acceptance-rejection sampling allows for perfect simulation from these often unnormalized distributions by drawing from another distribution over the same support. The efficacy of acceptance-rejection sampling is explored through application to a small dataset which has been widely used for evaluating different methods for inference on hierarchical models. A particular algorithm is developed to draw variates from the posterior distribution. The algorithm is both verified and validated, and then finally applied to the given data, with comparisons to the results of different methods.
APA, Harvard, Vancouver, ISO, and other styles
40

Wang, Yan. "Statistical inference for capture-recapture studies in continuous time /." Hong Kong : University of Hong Kong, 2001. http://sunzi.lib.hku.hk/hkuto/record.jsp?B23501765.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Jinn, Nicole Mee-Hyaang. "Toward Error-Statistical Principles of Evidence in Statistical Inference." Thesis, Virginia Tech, 2014. http://hdl.handle.net/10919/48420.

Full text
Abstract:
The context for this research is statistical inference, the process of making predictions or inferences about a population from observation and analyses of a sample. In this context, many researchers want to grasp what inferences can be made that are valid, in the sense of being able to uphold or justify by argument or evidence. Another pressing question among users of statistical methods is: how can spurious relationships be distinguished from genuine ones? Underlying both of these issues is the concept of evidence. In response to these (and similar) questions, two questions I work on in this essay are: (1) what is a genuine principle of evidence? and (2) do error probabilities have more than a long-run role? Concisely, I propose that felicitous genuine principles of evidence should provide concrete guidelines on precisely how to examine error probabilities, with respect to a test's aptitude for unmasking pertinent errors, which leads to establishing sound interpretations of results from statistical techniques. The starting point for my definition of genuine principles of evidence is Allan Birnbaum's confidence concept, an attempt to control misleading interpretations. However, Birnbaum's confidence concept is inadequate for interpreting statistical evidence, because using only pre-data error probabilities would not pick up on a test's ability to detect a discrepancy of interest (e.g., "even if the discrepancy exists") with respect to the actual outcome. Instead, I argue that Deborah Mayo's severity assessment is the most suitable characterization of evidence based on my definition of genuine principles of evidence.
Master of Arts
APA, Harvard, Vancouver, ISO, and other styles
42

Mohamed, Nuri Eltabit [Verfasser], Rainer [Akademischer Betreuer] Schwabe, and Waltraud [Akademischer Betreuer] Kahle. "Statistical analysis in multivariate sampling / Nuri Eltabit Mohamed. Betreuer: Rainer Schwabe ; Waltraud Kahle." Magdeburg : Universitätsbibliothek, 2011. http://d-nb.info/1047558963/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Lin, Daming. "Reliability growth models and reliability acceptance sampling plans from a Bayesian viewpoint /." Hong Kong : University of Hong Kong, 1995. http://sunzi.lib.hku.hk/hkuto/record.jsp?B13999618.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Xi, Liqun, and 奚李群. "Estimating population size for capture-recapture/removal models with heterogeneity and auxiliary information." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B29957783.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

譚玉貞 and Yuk-ching Tam. "Some practical issues in estimation based on a ranked set sample." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221683.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Collopy, Ethan Richard. "Data mining temporal and indefinite relations with numerical dependencies." Thesis, University College London (University of London), 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Wu, Qin. "Reliable techniques for survey with sensitive question." HKBU Institutional Repository, 2013. http://repository.hkbu.edu.hk/etd_ra/1496.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Oe, Bianca Madoka Shimizu. "Statistical inference in complex networks." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/55/55134/tde-28032017-095426/.

Full text
Abstract:
Complex network theory has been extensively used to understand various natural and artificial phenomena made of interconnected parts. This representation enables the study of dynamical processes running on complex systems, such as epidemics and rumor spreading. The evolution of these dynamical processes is influenced by the organization of the network. The size of some real-world networks makes it prohibitive to analyse the whole network computationally, so it is necessary to represent it by a set of topological measures or to reduce its size by means of sampling. In addition, most networks are samples of larger networks whose structure may not be captured and thus needs to be inferred from samples. In this work, we study both problems: the influence of the structure of the network on spreading processes and the effects of sampling on the structure of the network. Our results suggest that it is possible to predict the final fraction of infected individuals and the final fraction of individuals that came across a rumor by modeling them with a beta regression model and using topological measures as regressors. The most influential measure in both cases is the average search information, which quantifies the ease or difficulty of navigating through a network. We have also shown that the structure of a sampled network differs from the original network and that the type of change depends on the sampling method. Finally, we apply four sampling methods to study the behaviour of the epidemic threshold of a network when sampled at different sampling rates, and found that breadth-first search sampling is the most appropriate method among those studied for estimating the epidemic threshold.
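The epidemic-threshold estimation in the abstract can be illustrated with the standard mean-field result that the SIS epidemic threshold is the reciprocal of the adjacency matrix's largest eigenvalue. The BFS sampler below is a hypothetical minimal version for undirected graphs, not the thesis's implementation:

```python
from collections import deque
import numpy as np

def epidemic_threshold(adj):
    """Approximate SIS epidemic threshold as 1 / lambda_max(A),
    the reciprocal of the adjacency spectral radius (mean-field result)."""
    return 1.0 / np.linalg.eigvalsh(adj).max()

def bfs_sample(adj, start, n_nodes):
    """Breadth-first-search node sample: take the first n_nodes reached
    from `start` and return the induced subgraph's adjacency matrix."""
    seen, order, q = {start}, [start], deque([start])
    while q and len(order) < n_nodes:
        u = q.popleft()
        for v in np.nonzero(adj[u])[0]:
            if v not in seen:
                seen.add(v)
                order.append(v)
                q.append(v)
                if len(order) == n_nodes:
                    break
    idx = np.array(order[:n_nodes])
    return adj[np.ix_(idx, idx)]
```

Comparing `epidemic_threshold(adj)` with `epidemic_threshold(bfs_sample(adj, s, k))` at several sampling rates k reproduces, in miniature, the kind of comparison the thesis performs across sampling methods.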
APA, Harvard, Vancouver, ISO, and other styles
49

Kim, Keunpyo. "Process Monitoring with Multivariate Data:Varying Sample Sizes and Linear Profiles." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/29741.

Full text
Abstract:
Multivariate control charts are used to monitor a process when more than one quality variable associated with the process is being observed. The multivariate exponentially weighted moving average (MEWMA) control chart is one of the most commonly recommended tools for multivariate process monitoring. The standard practice, when using the MEWMA control chart, is to take samples of fixed size at regular sampling intervals for each variable. In the first part of this dissertation, MEWMA control charts based on sequential sampling schemes with two possible stages are investigated. When sequential sampling with two possible stages is used, observations at a sampling point are taken in two groups, and the number of groups actually taken is a random variable that depends on the data. The basic idea is that sampling starts with a small initial group of observations, and no additional sampling is done at this point if there is no indication of a problem with the process. But if there is some indication of a problem with the process then an additional group of observations is taken at this sampling point. The performance of the sequential sampling (SS) MEWMA control chart is compared to the performance of standard control charts. It is shown that the SS MEWMA chart is substantially more efficient in detecting changes in the process mean vector than standard control charts that do not use sequential sampling. Also the situation is considered where different variables may have different measurement costs. MEWMA control charts with unequal sample sizes based on differing measurement costs are investigated in order to improve the performance of process monitoring. Sequential sampling plans are applied to MEWMA control charts with unequal sample sizes and compared to the standard MEWMA control charts with a fixed sample size. The steady-state average time to signal (SSATS) is computed using simulation and compared for some selected sets of sample sizes.
When different variables have significantly different measurement costs, using unequal sample sizes can be more cost effective than using the same fixed sample size for each variable. In the second part of this dissertation, control chart methods are proposed for process monitoring when the quality of a process or product is characterized by a linear function. In the historical analysis of Phase I data, methods including the use of a bivariate T² chart to check for stability of the regression coefficients in conjunction with a univariate Shewhart chart to check for stability of the variation about the regression line are recommended. The use of three univariate control charts in Phase II is recommended. These three charts are used to monitor the Y-intercept, the slope, and the variance of the deviations about the regression line, respectively. A simulation study shows that this type of Phase II method can detect sustained shifts in the parameters better than competing methods in terms of average run length (ARL) performance. The monitoring of linear profiles is also related to the control charting of regression-adjusted variables and other methods.
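The MEWMA charting statistic underlying the abstract can be sketched directly: smooth the mean-centered observation vectors with weight λ and chart the resulting quadratic form. This is a minimal sketch of the standard fixed-sample-size MEWMA, not the sequential-sampling variant the dissertation studies:

```python
import numpy as np

def mewma_statistics(X, sigma, lam=0.1):
    """Compute the MEWMA statistic T_i^2 = z_i' S_i^{-1} z_i for a
    sequence of mean-centered observation vectors X (n x p), where
    z_i = lam*x_i + (1-lam)*z_{i-1} and S_i is the exact (finite-i)
    covariance of z_i:  S_i = lam*(1-(1-lam)^{2i})/(2-lam) * sigma."""
    z = np.zeros(X.shape[1])
    stats = []
    for i, x in enumerate(X, start=1):
        z = lam * x + (1 - lam) * z
        cov_z = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * i)) * sigma
        stats.append(z @ np.linalg.solve(cov_z, z))
    return np.array(stats)
```

A signal is raised when T_i^2 exceeds a control limit chosen for the desired in-control average run length; a sustained shift in the mean vector drives the smoothed z_i, and hence T_i^2, steadily upward.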
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
50

Shen, Gang. "Bayesian predictive inference under informative sampling and transformation." Link to electronic thesis, 2004. http://www.wpi.edu/Pubs/ETD/Available/etd-0429104-142754/.

Full text
Abstract:
Thesis (M.S.) -- Worcester Polytechnic Institute.
Keywords: Ignorable Model; Transformation; Poisson Sampling; PPS Sampling; Gibbs Sampler; Inclusion Probabilities; Selection Bias; Nonignorable Model; Bayesian Inference. Includes bibliographical references (p. 34-35).
APA, Harvard, Vancouver, ISO, and other styles