Academic literature on the topic 'Non-probability sampling'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Non-probability sampling.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Non-probability sampling"

1

Uprichard, Emma. "Sampling: bridging probability and non-probability designs." International Journal of Social Research Methodology 16, no. 1 (January 2013): 1–11. http://dx.doi.org/10.1080/13645579.2011.633391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Raina, Sunil Kumar. "External validity & non-probability sampling." Indian Journal of Medical Research 141, no. 4 (2015): 487. http://dx.doi.org/10.4103/0971-5916.159311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yang, Keming, and Ahmad Banamah. "Quota Sampling as an Alternative to Probability Sampling? An Experimental Study." Sociological Research Online 19, no. 1 (February 2014): 56–66. http://dx.doi.org/10.5153/sro.3199.

Full text
Abstract:
In spite of the establishment of probability sampling methods since the 1930s, non-probability sampling methods have remained popular among many commercial and polling agents, and they have also survived the embarrassment from a few incorrect predictions in American presidential elections. The increase of costs and the decline of response rates for administering probability samples have led some survey researchers to search for a non-probability sampling method as an alternative to probability sampling. In this study we aim to test whether results from a quota sample, believed to be the non-probability sampling method that is the closest in representativeness to probability sampling, are statistically equivalent to those from a probability sample. Further, we pay special attention to the effects of the following two factors for understanding the difference between the two sampling methods: the survey's topic and the response rate. An experimental survey on social capital was conducted in a student society in Northeast England. The results suggest that the survey topic influences who responded and that the response rate was associated with the sample means as well. For these reasons, we do not think quota sampling should be taken as an acceptable alternative to probability sampling.
APA, Harvard, Vancouver, ISO, and other styles
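The comparison described in this abstract can be illustrated with a short simulation. The sketch below is a hypothetical toy example, not the authors' design: it draws a simple random sample and a quota sample from a synthetic population in which willingness to respond is correlated with the outcome, then compares the two sample means. All names, sizes, and distributions are assumptions made for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic population of 10,000 "students" in two groups with different
# mean scores on some attitude scale (all numbers are illustrative).
N = 10_000
group = rng.choice(["A", "B"], size=N, p=[0.6, 0.4])
score = np.where(group == "A", rng.normal(50, 10, N), rng.normal(60, 10, N))

# Willingness to respond is (artificially) correlated with the score, so the
# most accessible respondents are not representative of their group.
willingness = score + rng.normal(0, 5, N)

n = 200

# Probability sample: simple random sampling without replacement.
srs_idx = rng.choice(N, size=n, replace=False)

# Quota sample: fixed quotas per group, filled with the most willing
# (i.e. easiest to reach) members of each group.
quota = {"A": int(0.6 * n), "B": n - int(0.6 * n)}
quota_idx = np.concatenate([
    np.where(group == g)[0][np.argsort(-willingness[group == g])[:k]]
    for g, k in quota.items()
])

t, p = stats.ttest_ind(score[srs_idx], score[quota_idx], equal_var=False)
print(f"SRS mean {score[srs_idx].mean():.2f} | quota mean {score[quota_idx].mean():.2f} "
      f"| Welch t = {t:.2f}, p = {p:.3f}")
```

In this toy population the quota estimate is pulled away from the true mean because accessibility is correlated with the outcome, which mirrors the concern raised in the abstract.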
4

Berzofsky, Marcus, Rick Williams, and Paul Biemer. "Combining Probability and Non-Probability Sampling Methods: Model-Aided Sampling and the O*NET Data Collection Program." Survey Practice 2, no. 6 (September 1, 2009): 1–6. http://dx.doi.org/10.29115/sp-2009-0028.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Berndt, Andrea E. "Sampling Methods." Journal of Human Lactation 36, no. 2 (March 10, 2020): 224–26. http://dx.doi.org/10.1177/0890334420906850.

Full text
Abstract:
Knowledge of sampling methods is essential to design quality research. Critical questions are provided to help researchers choose a sampling method. This article reviews probability and non-probability sampling methods, lists and defines specific sampling techniques, and provides pros and cons for consideration. In addition, issues related to sampling methods are described to highlight potential problems.
APA, Harvard, Vancouver, ISO, and other styles
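One advantage of probability sampling highlighted in reviews like this one is that the sampling error of an estimate can itself be quantified from the design. The snippet below is a minimal, self-contained sketch on made-up data (the population, sample size, and function name are assumptions, not taken from the article): a simple random sample without replacement with a design-based standard error and a normal-approximation 95% confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population of N measurements (illustrative values only).
N = 5_000
population = rng.gamma(shape=2.0, scale=3.0, size=N)

def srs_estimate(y_pop, n, rng):
    """Mean from a simple random sample without replacement, with the
    finite population correction in the standard error."""
    sample = rng.choice(y_pop, size=n, replace=False)
    mean = sample.mean()
    fpc = 1.0 - n / len(y_pop)                 # finite population correction
    se = np.sqrt(fpc * sample.var(ddof=1) / n)
    return mean, se, (mean - 1.96 * se, mean + 1.96 * se)

mean, se, ci = srs_estimate(population, n=200, rng=rng)
print(f"true mean {population.mean():.3f} | estimate {mean:.3f} "
      f"| SE {se:.3f} | 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```

No comparable design-based error statement is available for a convenience or quota sample, which is the core disadvantage of non-probability methods discussed in such reviews.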
6

Kim, Kyu-Seong. "A Study of Non-probability Sampling Methodology in Sample Surveys." Survey Research 18, no. 1 (February 28, 2017): 1–29. http://dx.doi.org/10.20997/sr.18.1.1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schillewaert, Niels, Fred Langerak, and Tim Duhamel. "Non-Probability Sampling for WWW Surveys: A Comparison of Methods." Market Research Society. Journal 40, no. 4 (July 1998): 1–13. http://dx.doi.org/10.1177/147078539804000403.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Tansey, Oisín. "Process Tracing and Elite Interviewing: A Case for Non-probability Sampling." PS: Political Science & Politics 40, no. 04 (October 2007): 765–72. http://dx.doi.org/10.1017/s1049096507071211.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Baker, R., J. M. Brick, N. A. Bates, M. Battaglia, M. P. Couper, J. A. Dever, K. J. Gile, and R. Tourangeau. "Summary Report of the AAPOR Task Force on Non-probability Sampling." Journal of Survey Statistics and Methodology 1, no. 2 (September 26, 2013): 90–143. http://dx.doi.org/10.1093/jssam/smt008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Franco, Francesco, and Anteo Di Napoli. "Metodi di campionamento negli studi epidemiologici." Giornale di Tecniche Nefrologiche e Dialitiche 31, no. 3 (August 28, 2019): 171–74. http://dx.doi.org/10.1177/0394936219869152.

Full text
Abstract:
Sampling methods in epidemiological studies. Sampling allows researchers to obtain information about a population through data obtained from a subset of the population, with a saving in terms of costs and workload compared to a study based on the entire population. Sampling allows the collection of high-quality information, provided that the sample size is large enough to detect a true association between exposure and outcome. There are two types of sampling methods: probability and non-probability sampling. In probability sampling, the subset of the population is extracted randomly from all eligible individuals; because all subjects have a chance of being chosen, this method allows researchers to generalize the findings of their study. In non-probability sampling, some individuals have no chance of being selected, because researchers do not extract the sample from all eligible subjects of a population; the sample is probably non-representative and the effect of sampling error cannot be estimated, so the study produces non-generalizable results. Examples of probability sampling methods are simple random sampling, systematic sampling, stratified sampling, and cluster sampling. Examples of non-probability sampling methods are convenience sampling and judgement sampling.
APA, Harvard, Vancouver, ISO, and other styles
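As a companion to the list of methods in the abstract above, here is a minimal sketch of how several of the named selection schemes can be expressed on a synthetic, labelled population. The stratum labels, sizes, and proportional allocation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 1_000, 50
units = np.arange(N)                                  # labels of the population units
stratum = rng.choice(["urban", "rural"], size=N, p=[0.7, 0.3])

# Simple random sampling: every unit has the same inclusion probability n/N.
simple = rng.choice(units, size=n, replace=False)

# Systematic sampling: one random start, then every k-th unit.
k = N // n
systematic = units[rng.integers(k)::k][:n]

# Stratified sampling: proportional allocation, SRS within each stratum.
stratified = np.concatenate([
    rng.choice(units[stratum == s],
               size=max(1, int(round(n * np.mean(stratum == s)))), replace=False)
    for s in np.unique(stratum)
])

# Convenience sampling (non-probability): the first n units encountered;
# units that never "walk past" the researcher have zero chance of selection.
convenience = units[:n]

print(len(simple), len(systematic), len(stratified), len(convenience))
```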
More sources

Dissertations / Theses on the topic "Non-probability sampling"

1

Grafström, Anton. "On unequal probability sampling designs." Doctoral thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-33701.

Full text
Abstract:
The main objective in sampling is to select a sample from a population in order to estimate some unknown population parameter, usually a total or a mean of some interesting variable. When the units in the population do not have the same probability of being included in a sample, it is called unequal probability sampling. The inclusion probabilities are usually chosen to be proportional to some auxiliary variable that is known for all units in the population. When unequal probability sampling is applicable, it generally gives much better estimates than sampling with equal probabilities. This thesis consists of six papers that treat unequal probability sampling from a finite population of units. A random sample is selected according to some specified random mechanism called the sampling design. For unequal probability sampling there exist many different sampling designs. The choice of sampling design is important since it determines the properties of the estimator that is used. The main focus of this thesis is on evaluating and comparing different designs. Often it is preferable to select samples of a fixed size and hence the focus is on such designs. It is also important that a design has a simple and efficient implementation in order to be used in practice by statisticians. Some effort has been made to improve the implementation of some designs. In Paper II, two new implementations are presented for the Sampford design. In general a sampling design should also have a high level of randomization. A measure of the level of randomization is entropy. In Paper IV, eight designs are compared with respect to their entropy. A design called adjusted conditional Poisson has maximum entropy, but it is shown that several other designs are very close in terms of entropy. A specific situation called real time sampling is treated in Paper III, where a new design called correlated Poisson sampling is evaluated. In real time sampling the units pass the sampler one by one. Since each unit only passes once, the sampler must directly decide for each unit whether or not it should be sampled. The correlated Poisson design is shown to have much better properties than traditional methods such as Poisson sampling and systematic sampling.
APA, Harvard, Vancouver, ISO, and other styles
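To make the idea of unequal probability sampling concrete, the sketch below uses the simplest such design, Poisson sampling with inclusion probabilities proportional to an auxiliary variable, together with the Horvitz-Thompson estimator. It is a generic illustration with invented data, not an implementation of the fixed-size designs (Sampford, conditional Poisson, correlated Poisson) studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Finite population with an auxiliary size variable x known for every unit
# and a study variable y roughly proportional to x (illustrative data).
N = 2_000
x = rng.lognormal(mean=0.0, sigma=0.7, size=N)
y = 5.0 * x * rng.normal(1.0, 0.2, size=N)

# Inclusion probabilities proportional to x, scaled to an expected sample
# size of n and capped at 1.
n = 100
pi = np.minimum(1.0, n * x / x.sum())

# Poisson sampling: independent Bernoulli draws. It is easy to implement,
# but the realized sample size is random; the fixed-size designs discussed
# in the thesis are designed to avoid exactly this drawback.
selected = rng.random(N) < pi

# Horvitz-Thompson estimator of the population total: each sampled unit is
# weighted by the inverse of its inclusion probability.
ht_total = np.sum(y[selected] / pi[selected])
print(f"true total {y.sum():.1f} | HT estimate {ht_total:.1f} "
      f"| realized sample size {int(selected.sum())}")
```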
2

Amirichimeh, Reza. "Simulation and analytic evaluation of false alarm probability of a non-linear detector." Thesis, The University of Arizona, 1991. http://hdl.handle.net/10150/277966.

Full text
Abstract:
One would like to evaluate and compare complex digital communication systems based upon their overall bit error rate. Unfortunately, analytical expressions for bit error rate for even simple communication systems are notoriously difficult to evaluate accurately. Therefore, communication engineers often resort to simulation techniques to evaluate these error probabilities. In this thesis importance sampling techniques (variations of standard Monte Carlo methods) are studied in relation to both linear and non-linear detectors. Quick simulation, an importance sampling method based upon the asymptotics of the error estimator, is studied in detail. The simulated error probabilities are compared to values obtained by numerically inverting Laplace Transform expressions for these quantities.
APA, Harvard, Vancouver, ISO, and other styles
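The importance sampling idea described in this abstract can be demonstrated on a deliberately simple stand-in problem: estimating the small exceedance probability of Gaussian noise above a threshold, where the exact answer is known. This is only a sketch of the general technique (mean-shifted proposal plus likelihood-ratio weights), not the thesis's non-linear detector or its quick-simulation estimator.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy "false alarm": standard-normal noise exceeds a threshold t.
t = 4.0
true_p = stats.norm.sf(t)          # exact Gaussian tail probability
n = 100_000

# Crude Monte Carlo: hardly any samples ever exceed t, so the estimate is noisy.
x = rng.standard_normal(n)
crude = np.mean(x > t)

# Importance sampling: draw from a proposal centered at the threshold and
# reweight each draw by the likelihood ratio f(z) / g(z).
z = rng.normal(loc=t, scale=1.0, size=n)
w = stats.norm.pdf(z) / stats.norm.pdf(z, loc=t, scale=1.0)
is_est = np.mean((z > t) * w)

print(f"exact {true_p:.3e} | crude MC {crude:.3e} | importance sampling {is_est:.3e}")
```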
3

Dahlan, Kinda. "Between Us and Them: Deconstructing Ideologies behind the Portrayal of Saudi Women in Canadian Media." Thèse, Université d'Ottawa / University of Ottawa, 2011. http://hdl.handle.net/10393/20145.

Full text
Abstract:
The purpose of this study is to investigate binary discourses of self and other constructed by Canadian media in the representation of Saudi women. One of the modest aims of this research is to expound on the status of centralized media coverage in Canada. Drawing on Hegel’s model of dialectics, as framed by Edward Said’s Orientalism (1978) and David Nikkel’s conception of a moderate postmodernism, this research also aims at contributing to the ongoing modern-postmodern discussion by delineating and examining the ways in which dialectical analysis can aid in the deconstruction of metanarratives in Western culture. Utilizing a qualitative research design that employs multidimensional modes of textual analysis, the thesis examined the changes in the portrayal of Saudi women through a non-probability sample of 88 Canadian newspaper articles selected from the Toronto Star, Globe and Mail, and National Post between 2001 and 2009. One major finding was that the metanarratives guiding these representations did not change significantly despite changes in narratives brought about by several major political events. The thesis also examined the ideological influences framing these depictions and whether the changes they have undergone were self-reifying in nature. The research further highlighted the implications of assessing the ontological identities of Saudi women vis-à-vis a Western framework of values.
APA, Harvard, Vancouver, ISO, and other styles
4

Chabridon, Vincent. "Analyse de sensibilité fiabiliste avec prise en compte d'incertitudes sur le modèle probabiliste - Application aux systèmes aérospatiaux." Thesis, Université Clermont Auvergne‎ (2017-2020), 2018. http://www.theses.fr/2018CLFAC054/document.

Full text
Abstract:
Aerospace systems are complex engineering systems for which reliability has to be guaranteed at an early design phase, especially regarding the potential tremendous damage and costs that could be induced by any failure. Moreover, the management of various sources of uncertainties, either impacting the behavior of systems (“aleatory” uncertainty due to natural variability of physical phenomena) and/or their modeling and simulation (“epistemic” uncertainty due to lack of knowledge and modeling choices), is a cornerstone for reliability assessment of those systems. Thus, uncertainty quantification and its underlying methodology consists of several phases. Firstly, one needs to model and propagate uncertainties through the computer model, which is considered as a “black-box”. Secondly, a relevant quantity of interest regarding the goal of the study, e.g., a failure probability here, has to be estimated. For highly-safe systems, the failure probability which is sought is very low and may be costly to estimate. Thirdly, a sensitivity analysis of the quantity of interest can be set up in order to better identify and rank the influential sources of uncertainties in input. Therefore, the probabilistic modeling of input variables (epistemic uncertainty) might strongly influence the value of the failure probability estimate obtained during the reliability analysis. A deeper investigation into the robustness of the probability estimate regarding such a type of uncertainty has to be conducted. This thesis addresses the problem of taking probabilistic modeling uncertainty of the stochastic inputs into account. Within the probabilistic framework, a “bi-level” input uncertainty has to be modeled and propagated all along the different steps of the uncertainty quantification methodology. In this thesis, the uncertainties are modeled within a Bayesian framework in which the lack of knowledge about the distribution parameters is characterized by the choice of a prior probability density function. During a first phase, after the propagation of the bi-level input uncertainty, the predictive failure probability is estimated and used as the current reliability measure instead of the standard failure probability. Then, during a second phase, a local reliability-oriented sensitivity analysis based on the use of score functions is carried out to study the impact of the hyper-parameterization of the prior on the predictive failure probability estimate. Finally, in a last step, a global reliability-oriented sensitivity analysis based on Sobol indices on the indicator function adapted to the bi-level input uncertainty is proposed. All the proposed methodologies are tested and challenged on a representative industrial aerospace test case simulating the fallout of an expendable space launcher.
APA, Harvard, Vancouver, ISO, and other styles
5

Ahn, Jae Youn. "Non-parametric inference of risk measures." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2808.

Full text
Abstract:
Responding to the changes in the insurance environment of the past decade, insurance regulators globally have been revamping the valuation and capital regulations. This thesis is concerned with the design and analysis of statistical inference procedures that are used to implement these new and upcoming insurance regulations, and their analysis in a more general setting toward lending further insights into their performance in practical situations. The quantitative measure of risk that is used in these new and upcoming regulations is the risk measure known as the Tail Value-at-Risk (T-VaR). In implementing these regulations, insurance companies often have to estimate the T-VaR of product portfolios from the output of a simulation of their cash flows. The distributions for the underlying economic variables are either estimated or prescribed by regulations. In this situation the computational complexity of estimating the T-VaR arises due to the complexity in determining the portfolio cash flows for a given realization of economic variables. A technique that has proved promising in such settings is that of importance sampling. While the asymptotic behavior of the natural non-parametric estimator of T-VaR under importance sampling has been conjectured, the literature has lacked an honest result. The main goal of the first part of the thesis is to give a precise weak convergence result describing the asymptotic behavior of this estimator under importance sampling. Our method also establishes such a result for the natural non-parametric estimator for the Value-at-Risk, another popular risk measure, under weaker assumptions than those used in the literature. We also report on a simulation study conducted to examine the quality of these asymptotic approximations in small samples. The Haezendonck-Goovaerts class of risk measures corresponds to a premium principle that is a multiplicative analog of the zero utility principle, and is thus of significant academic interest. From a practical point of view, our interest in this class of risk measures arose primarily from the fact that the T-VaR is, in a sense, a minimal member of the class. Hence, a study of the natural non-parametric estimator for these risk measures will lend further insights into the statistical inference for the T-VaR. Analysis of the asymptotic behavior of the generalized estimator has proved elusive, largely due to the fact that, unlike the T-VaR, it lacks a closed form expression. Our main goal in the second part of this thesis is to study the asymptotic behavior of this estimator. In order to conduct a simulation study, we needed an efficient algorithm to compute the Haezendonck-Goovaerts risk measure with precise error bounds. The lack of such an algorithm has clearly been noticed in the literature, and has impeded the quality of simulation results. In this part we also design and analyze an algorithm for computing these risk measures. In the process of doing so, we also derive some fundamental bounds on the solutions to the optimization problem underlying these risk measures. We have also implemented our algorithm in the R software environment and included its source code in the Appendix.
APA, Harvard, Vancouver, ISO, and other styles
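For readers unfamiliar with the T-VaR, the "natural non-parametric estimator" mentioned above is easy to sketch: the empirical quantile gives the VaR, and the average of the losses at or beyond it gives the T-VaR (expected shortfall). The toy loss model below is an assumption for illustration; the thesis studies the behavior of such estimators under importance sampling, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated portfolio losses from a heavy-tailed toy model (not an actual
# insurance cash-flow simulation).
losses = rng.lognormal(mean=0.0, sigma=1.0, size=50_000)
alpha = 0.99

def var_tvar(sample, level):
    """Natural non-parametric estimators: VaR as the empirical quantile,
    T-VaR (expected shortfall) as the mean of losses at or beyond it."""
    q = np.quantile(sample, level)
    return q, sample[sample >= q].mean()

var_hat, tvar_hat = var_tvar(losses, alpha)
print(f"VaR_{alpha:.2f} ~ {var_hat:.3f} | T-VaR_{alpha:.2f} ~ {tvar_hat:.3f}")
```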
6

Vestin, Albin, and Gustav Strandberg. "Evaluation of Target Tracking Using Multiple Sensors and Non-Causal Algorithms." Thesis, Linköpings universitet, Reglerteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-160020.

Full text
Abstract:
Today, the main research field for the automotive industry is to find solutions for active safety. In order to perceive the surrounding environment, tracking nearby traffic objects plays an important role. Validation of the tracking performance is often done in staged traffic scenarios, where additional sensors, mounted on the vehicles, are used to obtain their true positions and velocities. The difficulty of evaluating the tracking performance complicates its development. An alternative approach studied in this thesis, is to record sequences and use non-causal algorithms, such as smoothing, instead of filtering to estimate the true target states. With this method, validation data for online, causal, target tracking algorithms can be obtained for all traffic scenarios without the need of extra sensors. We investigate how non-causal algorithms affects the target tracking performance using multiple sensors and dynamic models of different complexity. This is done to evaluate real-time methods against estimates obtained from non-causal filtering. Two different measurement units, a monocular camera and a LIDAR sensor, and two dynamic models are evaluated and compared using both causal and non-causal methods. The system is tested in two single object scenarios where ground truth is available and in three multi object scenarios without ground truth. Results from the two single object scenarios shows that tracking using only a monocular camera performs poorly since it is unable to measure the distance to objects. Here, a complementary LIDAR sensor improves the tracking performance significantly. The dynamic models are shown to have a small impact on the tracking performance, while the non-causal application gives a distinct improvement when tracking objects at large distances. Since the sequence can be reversed, the non-causal estimates are propagated from more certain states when the target is closer to the ego vehicle. For multiple object tracking, we find that correct associations between measurements and tracks are crucial for improving the tracking performance with non-causal algorithms.
APA, Harvard, Vancouver, ISO, and other styles
7

Cooper, Cynthia. "Developing a basis for characterizing precision of estimates produced from non-probability samples on continuous domains." Thesis, 2006. http://hdl.handle.net/1957/1025.

Full text
Abstract:
Graduation date: 2006
This research addresses sample process variance estimation on continuous domains and for non-probability samples in particular. The motivation for the research is a scenario in which a program has collected non-probability samples for which there is interest in characterizing how much an extrapolation to the domain would vary given similarly arranged collections of observations. This research does not address the risk of bias and a key assumption is that the observations could represent the response on the domain of interest. This excludes any hot-spot monitoring programs. The research is presented as a collection of three manuscripts. The first (to be published in Environmetrics (2006)) reviews and compares model- and design-based approaches for sampling and estimation in the context of continuous domains and promotes a model-assisted sample-process variance estimator. The next two manuscripts are written to be companion papers. With the objective of quantifying uncertainty of an estimator based on a non-probability sample, the proposed approach is to first characterize a class of sets of locations that are similarly arranged to the collection of locations in the non-probability sample, and then to predict variability of an estimate over that class of sets using the covariance structure indicated by the non-probability sample (assuming the covariance structure is indicative of the covariance structure on the study region). The first of the companion papers discusses characterizing classes of similarly arranged sets with the specification of a metric density. Goodness-of-fit tests are demonstrated on several types of patterns (dispersed, random and clustered) and on a non-probability collection of locations surveyed by Oregon Department of Fish & Wildlife on the Alsea River basin in Oregon. The second paper addresses predicting the variability of an estimate over sets in a class of sets (using a Monte Carlo process on a simulated response with appropriate covariance structure).
APA, Harvard, Vancouver, ISO, and other styles
8

Bai, Xuezheng. "Beyond Merton's utopia: Effects of non-normality and dependence on the precision of variance estimators using high-frequency financial data." 2000. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:9990525.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Non-probability sampling"

1

Sampling threatened and endangered species with non-constant occurrence and detectability: A sensitivity analysis of power when sampling low-occurrence populations with varying probability parameters. Arcata, California: Humboldt State University, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Coolen, A. C. C., A. Annibale, and E. S. Roberts. Random graph ensembles. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198709893.003.0003.

Full text
Abstract:
This chapter presents some theoretical tools for defining random graph ensembles systematically via soft or hard topological constraints, and works through some properties of the Erdős–Rényi random graph ensemble, the simplest non-trivial random graph ensemble, in which links appear between two nodes with a fixed probability p. The chapter sets out the central representation of graph generation as the result of a discrete-time Markovian stochastic process. This unites the two flavours of graph generation approaches, because they can be viewed as simply moving forwards or backwards through this representation. It is possible to define a random graph by an algorithm and then calculate the associated stationary probability. The alternative approach is to specify sampling weights and then to construct an algorithm that will have these weights as the stationary probabilities upon convergence.
APA, Harvard, Vancouver, ISO, and other styles
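A minimal sketch of the ensemble this abstract refers to: sampling an Erdős–Rényi graph G(n, p) and checking a couple of its ensemble properties. The sizes and seed are arbitrary choices; the Markov-chain generation machinery discussed in the chapter is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(5)

def erdos_renyi(n, p, rng):
    """Sample an undirected Erdős–Rényi graph G(n, p): each of the n(n-1)/2
    possible links is present independently with probability p."""
    upper = np.triu(rng.random((n, n)) < p, k=1)   # upper triangle, no self-loops
    return (upper | upper.T).astype(int)           # symmetric adjacency matrix

n, p = 200, 0.05
A = erdos_renyi(n, p, rng)

# In this ensemble a particular graph with L links has probability
# p**L * (1 - p)**(n * (n - 1) // 2 - L); the expected mean degree is (n - 1) * p.
L = int(A.sum()) // 2
print(f"links {L} | mean degree {A.sum(axis=1).mean():.2f} | expected {(n - 1) * p:.2f}")
```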
3

Ray, Sumantra (Shumone), Sue Fitzpatrick, Rajna Golubic, Susan Fisher, and Sarah Gibbings, eds. Navigating research methods: basic concepts in biostatistics and epidemiology. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199608478.003.0002.

Full text
Abstract:
This chapter provides an overview of the basic concepts in biostatistics and epidemiology. Section 1: Basic concepts in biostatistics. The concepts in biostatistics include: 1. descriptive statistical methods (which comprise frequency distributions, distribution shapes, and measures of central tendency and dispersion); and 2. inferential statistics, which are applied to make inferences about a population from sample data. Non-probability and probability sampling methods are outlined. This section provides a simple explanation of the complex concepts of significance tests and confidence intervals and their corresponding interpretation. Correlation and regression methods used to describe the association between two quantitative variables are also explained. This section also provides an overview of when to use which statistical test given the type of data and the nature of the research question. Section 2: Basic concepts in epidemiology. This section begins with the definitions of normality. Next, the interpretation of diagnostic tests and clinical prediction is explained, and the definitions of sensitivity, specificity, positive predictive value, and negative predictive value are provided. The relationship between these four constructs is discussed. The application of these concepts to treatment and prevention is also discussed.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Non-probability sampling"

1

Ayhan, H. Öztaş. "Non-probability Sampling Survey Methods." In International Encyclopedia of Statistical Science, 979–82. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-04898-2_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Galloway, Alison. "Non-Probability Sampling." In Encyclopedia of Social Measurement, 859–64. Elsevier, 2005. http://dx.doi.org/10.1016/b0-12-369398-5/00382-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

"Non-probability Sampling." In Encyclopedia of Quality of Life and Well-Being Research, 4374. Dordrecht: Springer Netherlands, 2014. http://dx.doi.org/10.1007/978-94-007-0753-5_102758.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Clark, Tom, Liam Foster, and Alan Bryman. "10. Sampling." In How to do your Social Research Project or Dissertation, 161–82. Oxford University Press, 2019. http://dx.doi.org/10.1093/hepl/9780198811060.003.0010.

Full text
Abstract:
Whether the research project adopts a quantitative, qualitative, or mixed strategy, there is little point in asking a few non-random people a few non-random questions as the student has no idea what those answers might indicate, or whether they might apply in other situations. Therefore, the student needs to think carefully about his or her sampling strategy and justify this in the dissertation. This chapter explains the key principles of probability and non-probability sampling and explores why ‘who’ is asked is just as important as ‘what’ is asked. It discusses the two key stages of sampling: defining the appropriate population for study and developing strategies for recruiting the sample.
APA, Harvard, Vancouver, ISO, and other styles
5

"Qualitative Sampling Methods." In Data Analysis and Methods of Qualitative Research, 99–120. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8549-8.ch005.

Full text
Abstract:
The chapter takes readers through five key types of non-probability procedures and is divided into six sections. The first section addresses purposive sampling and its seven variants: maximum variation, homogeneous, total population, expert, critical case study, deviant case, and typical case sampling. The second section covers the quota sampling procedure and its differences from stratified and cluster sampling. The convenience sampling procedure is discussed in Section 3, snowball sampling in Section 4, and self-selective sampling in Section 5. The chapter concludes with a question-and-answer section that summarizes the entire chapter.
APA, Harvard, Vancouver, ISO, and other styles
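Snowball sampling, one of the procedures covered in this chapter, is easy to sketch as a referral process over a contact network. Everything below (the network, the seed participants, the wave and referral limits) is an invented illustration rather than anything from the chapter.

```python
import random
from collections import deque

random.seed(4)

# Hypothetical contact network: each participant can name a few peers.
contacts = {i: random.sample(range(100), k=3) for i in range(100)}

def snowball(seeds, waves, per_wave):
    """Start from a few seed participants and, for a fixed number of waves,
    recruit up to `per_wave` referrals from each person already sampled."""
    sampled, frontier = set(seeds), deque(seeds)
    for _ in range(waves):
        next_frontier = deque()
        while frontier:
            person = frontier.popleft()
            for referral in contacts[person][:per_wave]:
                if referral not in sampled:
                    sampled.add(referral)
                    next_frontier.append(referral)
        frontier = next_frontier
    return sampled

sample = snowball(seeds=[0, 1, 2], waves=3, per_wave=2)
print(f"sample size after 3 waves: {len(sample)}")
```

Because recruitment follows the network rather than a sampling frame, inclusion probabilities are unknown, which is exactly what makes this a non-probability procedure.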
6

Delgado, Jorge M., Antonio Abel R. Henriques, and Raimundo M. Delgado. "Structural Non-Linear Models and Simulation Techniques." In Advances in Systems Analysis, Software Engineering, and High Performance Computing, 540–84. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-8823-0.ch018.

Full text
Abstract:
Advances in computer technology nowadays allow the use of powerful computational models to describe the non-linear structural behavior of reinforced concrete (RC) structures. However, their use in structural analysis and design is not easy to combine with the partial safety factor criteria presented in international civil engineering codes. To minimize these difficulties, a method for the safety verification of RC structures based on a probabilistic approach is proposed. This method consists of the application of non-linear structural numerical models and simulation methods. To reduce computational cost, the Latin Hypercube sampling method was adopted, providing a constrained sampling scheme instead of general random sampling as in the Monte Carlo method. The proposed methodology makes it possible to calculate the probability of failure of RC structures and to evaluate the accuracy of any design criteria, in particular the accuracy of simplified structural design rules such as those proposed in civil engineering codes.
APA, Harvard, Vancouver, ISO, and other styles
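The Latin Hypercube scheme mentioned in this abstract can be sketched generically in a few lines. This is a textbook-style construction on the unit hypercube with arbitrary sizes, not the authors' reliability code; for a real analysis each column would still be mapped through the inverse CDF of the corresponding input variable, and the load/resistance distributions below are made-up choices.

```python
import numpy as np
from scipy import stats

def latin_hypercube(n, d, rng):
    """Basic Latin Hypercube sample of n points in d dimensions on [0, 1)^d:
    each margin is split into n equal strata, every stratum is used exactly
    once, and the strata are paired at random across dimensions."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n   # one point per stratum
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])                 # random pairing
    return u

rng = np.random.default_rng(8)
u = latin_hypercube(n=1_000, d=2, rng=rng)

# Map the uniform columns through inverse CDFs of (made-up) input variables:
# a load and a resistance, here both normal for simplicity.
load = stats.norm(100.0, 15.0).ppf(u[:, 0])
resistance = stats.norm(130.0, 20.0).ppf(u[:, 1])

# Crude estimate of the probability of failure (load exceeding resistance).
print(f"estimated P(failure) = {np.mean(load > resistance):.3f}")
```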
7

Delgado, Jorge M., Antonio Abel R. Henriques, and Raimundo M. Delgado. "Structural Non-Linear Models and Simulation Techniques." In Civil and Environmental Engineering, 369–406. IGI Global, 2016. http://dx.doi.org/10.4018/978-1-4666-9619-8.ch015.

Full text
Abstract:
Advances in computer technology nowadays allow the use of powerful computational models to describe the non-linear structural behavior of reinforced concrete (RC) structures. However, their use in structural analysis and design is not easy to combine with the partial safety factor criteria presented in international civil engineering codes. To minimize these difficulties, a method for the safety verification of RC structures based on a probabilistic approach is proposed. This method consists of the application of non-linear structural numerical models and simulation methods. To reduce computational cost, the Latin Hypercube sampling method was adopted, providing a constrained sampling scheme instead of general random sampling as in the Monte Carlo method. The proposed methodology makes it possible to calculate the probability of failure of RC structures and to evaluate the accuracy of any design criteria, in particular the accuracy of simplified structural design rules such as those proposed in civil engineering codes.
APA, Harvard, Vancouver, ISO, and other styles
8

Ricci, Edmund M., Ernesto A. Pretto, and Knut Ole Sundnes. "Construct a Sampling Plan (Step 5)." In Disaster Evaluation Research, edited by Edmund M. Ricci, Ernesto A. Pretto, and Knut Ole Sundnes, 97–110. Oxford University Press, 2019. http://dx.doi.org/10.1093/med/9780198796862.003.0009.

Full text
Abstract:
In disaster studies it is necessary to obtain information from several groups of those involved and affected by the disaster and from various types of medical and administrative documents. We have suggested in Chapter 8 that information be obtained from, at a minimum, (1) survivors/victims/families; (2) professional responders and coordinators (both public safety and EMS/medical); (3) officials of governmental and non-governmental organizations; (4) medical records; and (5) administrative documents. This typically involves accessing a large number of individuals and reports. It is therefore almost always necessary to select a sample from each group and source. When possible some form of random (probability) sampling should be used; a different type of sampling called ‘purposive’ may be employed for key informants. ‘Convenience samples’ are not generally used in scientific evaluation studies due to their great potential for introducing bias into the data. Preparing and implementing a scientific sample design will prove to be one of the most challenging aspects of disaster evaluation studies. It will usually be necessary to consult with an individual who has statistical expertise when preparing the sampling plan; therefore, in this chapter we present some basic concepts in sampling and then conclude with four descriptions of sample designs used in past evaluation studies.
APA, Harvard, Vancouver, ISO, and other styles
9

"Measuring people – variables, samples and the qualitative critique Variables; Operational definitions of psychological constructs; Reliability and validity; Samples; Probability based sampling methods; Non-probability based sampling methods; Purposive sampling; Introducing the qualitative/quantitative debate." In Research Methods and Statistics in Psychology, 38–65. Routledge, 2013. http://dx.doi.org/10.4324/9780203769669-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Messinger, Adam M. "How do we Know What we Know?" In LGBTQ Intimate Partner Violence. University of California Press, 2017. http://dx.doi.org/10.1525/california/9780520286054.003.0002.

Full text
Abstract:
This chapter details the challenges in studying LGBTQ IPV and tips for improving future research. It begins by examining differing epistemologies (i.e., beliefs about what research is and is not capable of learning) and how this has informed the types of research scholars have been willing to conduct. Given that the vast majority of LGBTQ IPV research thus far has involved survey methodologies—be it qualitative or quantitative, entailing interviews or questionnaires—the chapter then moves into two of the key hurdles in conducting surveys on this topic. First, IPV measurement issues are addressed, including differentiating IPV from other types of interpersonal crimes, sensitivity versus specificity in IPV survey measurement, not omitting or merging distinct types of IPV, choosing an appropriate time frame in which IPV may have occurred, adequately distinguishing victimization from perpetration, and adequately designing IPV measures for LGBTQ populations. Second, population issues are unpackaged, including how to define the LGBTQ population, challenges in probability and non-probability sampling, and a series of additional sampling issues. The chapter concludes with implications for future policy, practice, and research.
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Non-probability sampling"

1

Yun, Kimin, and Jin Young Choi. "Robust and fast moving object detection in a non-stationary camera via foreground probability based sampling." In 2015 IEEE International Conference on Image Processing (ICIP). IEEE, 2015. http://dx.doi.org/10.1109/icip.2015.7351738.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Singh, Amandeep, Zissimos P. Mourelatos, and Efstratios Nikolaidis. "An Importance Sampling Approach for Time-Dependent Reliability." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47200.

Full text
Abstract:
Reliability is an important engineering requirement for consistently delivering acceptable product performance through time. Reliability usually degrades with time, increasing the lifecycle cost due to potential warranty costs, repairs, and loss of market share. Reliability is the probability that the system will perform its intended function successfully for a specified time. In this article, we consider the first-passage reliability, which accounts for the first-time failure of non-repairable systems. Methods are available that provide an upper bound on the true reliability, but this bound may overestimate the true value considerably. The traditional Monte Carlo simulation is accurate but computationally expensive. A computationally efficient importance sampling technique is presented to calculate the cumulative probability of failure for random dynamic systems excited by a stationary input random process. Time series modeling is used to characterize the input random process. A detailed example demonstrates the accuracy and efficiency of the proposed importance sampling method over traditional Monte Carlo simulation.
APA, Harvard, Vancouver, ISO, and other styles
3

Mallahzadeh, H., Y. Wang, M. K. Abu Husain, N. I. Mohd Zaki, and G. Najafian. "Efficient Derivation of the Probability Distribution of Extreme Responses due to Random Wave Loading From the Probability Distribution of Extreme Surface Elevations." In ASME 2013 32nd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/omae2013-10917.

Full text
Abstract:
Offshore structures are exposed to random wave loading in the ocean environment and hence the probability distribution of the extreme values of their response to wave loading is required for their safe and economical design. Due to nonlinearity of the drag component of Morison’s wave loading and also due to intermittency of wave loading on members in the splash zone, the response is often non-Gaussian; therefore, simple techniques for derivation of the probability distribution of extreme responses are not available. To this end, the conventional Monte Carlo time simulation technique is frequently used for predicting the probability distribution of the extreme responses. However, this technique suffers from excessive sampling variability and hence a large number of simulated response records are required to reduce the sampling variability to acceptable levels. This paper takes advantage of the correlation between extreme responses and their corresponding extreme surface elevations to derive the probability distribution of the extreme responses accurately and efficiently, i.e. without the need for extensive simulations.
APA, Harvard, Vancouver, ISO, and other styles
4

Tsembelis, Konstantinos, Seyun Eom, John Jin, and Christopher Cole. "Effects of Non-Normal Input Distributions and Sampling Region on Monte Carlo Results." In ASME 2018 Pressure Vessels and Piping Conference. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/pvp2018-84767.

Full text
Abstract:
In order to address the risks associated with the operation of ageing pressure boundary components, many assessments incorporate probabilistic analysis tools for alleviating excessive conservatism of deterministic methodologies. In general, deterministic techniques utilize conservative bounding values for all critical parameters. Recently, various Probabilistic Fracture Mechanics (PFM) codes have been employed to identify governing parameters which could affect licensing basis margins of pressure retaining components. Moreover, these codes are used to calculate a probability of failure in order to estimate potential risks under operating and design loading conditions for the pressure retaining components experiencing plausible and active degradation mechanisms. Probabilistic approaches typically invoke the Monte-Carlo (MC) method where a set of critical input variables are randomly distributed and inserted in deterministic computer models. Estimates of results from probabilistic assessments are then compared against various assessment criteria. During the PVP-2016 conference, we investigated the assumption of normality of the Monte Carlo results utilizing a non-linear system function. In this paper, we extend the study by employing non-normal input distributions and investigating the effects of sampling region on the system function.
APA, Harvard, Vancouver, ISO, and other styles
5

Najafian, G. "A New Method of Moments for Reducing the Sampling Variability of Estimated Values of Probability Distribution Parameters." In ASME 2007 26th International Conference on Offshore Mechanics and Arctic Engineering. ASMEDC, 2007. http://dx.doi.org/10.1115/omae2007-29530.

Full text
Abstract:
Offshore structures are exposed to random wave loading in the ocean environment and hence the probability distribution of their response to wave loading is the minimum requirement for probabilistic analysis of these structures. Even if the structural system can be assumed to be linear, due to nonlinearity of the wave loading mechanism and also due to the intermittency of wave loading on members in the splash zone, the response is often non-Gaussian. The method of moments is frequently used to determine the parameters of an adopted probability model from a simulated or measured record of response. However, when higher order moments (e.g. 3rd or 4th order moments) are required for estimation of the distribution parameters, the estimated values show considerable scatter due to high sampling variability. In this paper, a more efficient form of the method of moments is introduced, which will lead to more accurate estimates of the distribution parameters by reducing their sampling variability.
APA, Harvard, Vancouver, ISO, and other styles
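The classical method of moments that this paper sets out to improve can be illustrated with a deliberately simple case: matching the first two sample moments of a record to a two-parameter Gamma model. The data and the choice of model are assumptions for illustration; the paper's concern is the large sampling variability that appears when third- or fourth-order moments are needed, which this toy example does not exercise.

```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic "response" record; in practice this would come from a simulated
# or measured structural response time series.
data = rng.gamma(shape=2.5, scale=1.8, size=5_000)

# Method of moments for a Gamma(k, theta) model: equate the sample mean and
# variance to the theoretical mean k * theta and variance k * theta**2.
m1 = data.mean()
m2 = data.var(ddof=1)
theta_hat = m2 / m1
k_hat = m1 / theta_hat
print(f"method-of-moments estimates: k = {k_hat:.3f}, theta = {theta_hat:.3f}")
```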
6

Abu Husain, M. K., N. I. Mohd Zaki, and G. Najafian. "Prediction of Extreme Values of Offshore Structural Response by an Efficient Time Simulation Technique." In ASME 2014 33rd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/omae2014-23126.

Full text
Abstract:
Offshore structures are exposed to random wave loading in the ocean environment and hence the probability distribution of the extreme values of their response to wave loading is required for their safe and economical design. Due to nonlinearity of the drag component of Morison’s wave loading and also due to intermittency of wave loading on members in the splash zone, the response is often non-Gaussian; therefore, simple techniques for derivation of the probability distribution of extreme responses are not available. To this end, the conventional Monte Carlo simulation technique (CTS) is frequently used for predicting the probability distribution of the extreme values of response. However, this technique suffers from excessive sampling variability and hence a large number of simulated extreme response values (hundreds of simulated response records) are required to reduce the sampling variability to acceptable levels. In this paper, the efficiency of an alternative technique in comparison with the conventional simulation technique is investigated.
APA, Harvard, Vancouver, ISO, and other styles
7

Mallahzadeh, H., Y. Wang, M. K. Abu Husain, N. I. Mohd Zaki, and G. Najafian. "Accurate Estimation of the 100-Year Responses From the Probability Distribution of Extreme Surface Elevations." In ASME 2014 33rd International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2014. http://dx.doi.org/10.1115/omae2014-24589.

Full text
Abstract:
Accurate estimation of the 100-year responses (derived from the long-term distribution of extreme responses) is required for the safe and economical design of offshore structures. However, due to nonlinearity of the drag component of Morison’s wave loading and also due to intermittency of wave loading on members in the splash zone, the response is often non-Gaussian; therefore, simple techniques for derivation of the probability distribution of extreme responses are not available. To this end, conventional Monte Carlo time simulation technique could be used for predicting the long-term probability distribution of the extreme responses. However, this technique suffers from excessive sampling variability and hence a large number of simulated response records are required to reduce the sampling variability to acceptable levels. This paper takes advantage of the correlation between extreme responses and their corresponding extreme surface elevations to derive the values of the 100-year responses without the need for extensive simulations. It is demonstrated that the technique could be used for both quasi-static and dynamic responses.
APA, Harvard, Vancouver, ISO, and other styles
8

Andrade, Matheus Guedes de, Franklin De Lima Marquezino, and Daniel Ratton Figueiredo. "Characterizing the Relationship Between Unitary Quantum Walks and Non-Homogeneous Random Walks." In Concurso de Teses e Dissertações da SBC. Sociedade Brasileira de Computação, 2021. http://dx.doi.org/10.5753/ctd.2021.15756.

Full text
Abstract:
Quantum walks on graphs are ubiquitous in quantum computing, finding a myriad of applications. Likewise, random walks on graphs are a fundamental building block for a large number of algorithms with diverse applications. While the relationship between quantum and random walks has recently been discussed in specific scenarios, this work establishes a formal equivalence between the two processes on arbitrary finite graphs under general conditions for shift and coin operators. It requires empowering random walks with time heterogeneity, where the transition probability of the walker is non-uniform and time dependent. The equivalence is obtained by equating the probability of measuring the quantum walk on a given node of the graph with the probability that the random walk is at that same node, for all nodes and time steps. The first result establishes a procedure for a stochastic matrix sequence to induce a random walk that yields the exact same vertex probability distribution sequence as any given quantum walk, including the scenario with multiple interfering walkers. The second result establishes a similar procedure in the opposite direction: given any random walk, a time-dependent quantum walk with the exact same vertex probability distribution is constructed. Interestingly, the matrices constructed by the first procedure allow for a different simulation approach for quantum walks, in which node samples respect neighbor locality and convergence is guaranteed by the law of large numbers, enabling efficient (polynomial-time) sampling of quantum graph trajectories. Furthermore, the complexity of constructing this sequence of matrices is discussed in the general case.
APA, Harvard, Vancouver, ISO, and other styles
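The non-homogeneous (time-dependent) random walks that the paper works with can be sketched generically: a different stochastic matrix drives the walker at each step, and the vertex distribution can be obtained either exactly or by sampling trajectories. The matrices below are random placeholders, not matrices induced by any quantum walk as in the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(6)

def random_stochastic_matrix(n, rng):
    """A row-stochastic transition matrix: non-negative rows summing to 1."""
    m = rng.random((n, n))
    return m / m.sum(axis=1, keepdims=True)

n, steps = 5, 4
# Time-inhomogeneous walk: a different transition matrix at every step.
P_sequence = [random_stochastic_matrix(n, rng) for _ in range(steps)]

# Exact evolution of the vertex probability distribution ...
dist = np.zeros(n)
dist[0] = 1.0                         # walker starts at vertex 0
for P in P_sequence:
    dist = dist @ P

# ... checked against sampled trajectories (law of large numbers).
trials = 50_000
end_counts = np.zeros(n)
for _ in range(trials):
    v = 0
    for P in P_sequence:
        v = rng.choice(n, p=P[v])
    end_counts[v] += 1

print("exact distribution   :", np.round(dist, 3))
print("sampled trajectories :", np.round(end_counts / trials, 3))
```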
9

Hernandez-Solis, Augusto, Christian Ekberg, Arvid Ødegård Jensen, Christophe Demaziere, and Ulf Bredolt. "Statistical Uncertainty Analyses of Void Fraction Predictions Using Two Different Sampling Strategies: Latin Hypercube and Simple Random Sampling." In 18th International Conference on Nuclear Engineering. ASMEDC, 2010. http://dx.doi.org/10.1115/icone18-30096.

Full text
Abstract:
In recent years, more realistic safety analysis of nuclear reactors has been based on best estimate (BE) computer codes. Because their predictions are unavoidably affected by conceptual, aleatory, and experimental sources of uncertainty, an uncertainty analysis is needed if useful conclusions are to be obtained from BE codes. In this paper, statistical uncertainty analyses of cross-sectional averaged void fraction calculations using the POLCA-T system code, based on the BWR Full-Size Fine-Mesh Bundle Test (BFBT) benchmark, are presented by means of two different sampling strategies: Latin Hypercube Sampling (LHS) and Simple Random Sampling (SRS). LHS has the property of densely stratifying across the range of each input probability distribution, allowing much better coverage of the input uncertainties than SRS. The aim here is to compare both uncertainty analyses on the BWR assembly void axial profile prediction in steady state, and on the transient void fraction prediction at a given axial level during a simulated re-circulation pump trip scenario. It is shown that the replicated void fraction mean (in either steady-state or transient conditions) has less variability when using LHS than SRS for the same number of calculations (i.e., the same input space sample size), even if the resulting void fraction axial profiles are non-monotonic. It is also shown that the void fraction uncertainty limits achieved with SRS by running 458 calculations (the sample size required to cover 95% of 8 uncertain input parameters with 95% confidence) are matched by LHS with only 100 calculations. These are clear indications of the advantages of using LHS.
APA, Harvard, Vancouver, ISO, and other styles
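The variance-reduction behaviour reported in this abstract can be reproduced in miniature by replacing the thermal-hydraulic calculation with a cheap toy function of three uniform inputs. Everything here (the function, sizes, and replication counts) is an illustrative assumption; only the LHS-versus-SRS comparison mechanism matches the paper's idea.

```python
import numpy as np

rng = np.random.default_rng(9)

def lhs(n, d, rng):
    """One Latin Hypercube sample on [0, 1)^d: one point per stratum per margin,
    strata paired at random across dimensions."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

# Toy stand-in for a code output depending on 3 uncertain (uniform) inputs.
def model(u):
    return np.sum(u ** 2, axis=1)

n, d, reps = 100, 3, 500
means_srs = [model(rng.random((n, d))).mean() for _ in range(reps)]
means_lhs = [model(lhs(n, d, rng)).mean() for _ in range(reps)]

# LHS stratifies every margin, so the replicated mean varies much less than
# under simple random sampling for the same sample size.
print(f"std of replicated mean, SRS: {np.std(means_srs):.4f}")
print(f"std of replicated mean, LHS: {np.std(means_lhs):.4f}")
```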
10

Manjunatha, Hemanth, Jida Huang, Binbin Zhang, and Rahul Rai. "A Sequential Sampling Algorithm for Multi-Stage Static Coverage Problems." In ASME 2016 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/detc2016-60305.

Full text
Abstract:
In many system-engineering problems (such as surveillance, environmental monitoring, and cooperative task performance) it is critical to allocate limited resources optimally. The static coverage problem is an important class of resource allocation problem that focuses on covering an area of interest so that activities in that area can be detected or monitored with higher probability. In many practical settings (primarily due to financial constraints), a system designer has to allocate resources in multiple stages. In each stage, the system designer can assign a fixed number of resources (agents). In the multi-stage formulation, the agents' locations for the next stage depend on all the agents' locations in the previous stages. Such multi-stage static coverage problems are non-trivial to solve. In this paper, we propose a robust and efficient sequential sampling algorithm to solve the multi-stage static coverage problem in the presence of probabilistic resource intensity allocation maps (RIAMs). The agents' locations are determined by formulating this problem as an optimization problem at each successive stage. Three different objective functions are compared and discussed in terms of decreasing the L2 difference and the Sequential Minimum Energy Design (SMED) criterion. It is shown that using the SMED objective function leads to a better approximation of the RIAMs. Two heuristic algorithms, cuckoo search and pattern search, are used as optimizers. Numerical functions and real-life applications are provided to demonstrate the robustness and efficiency of the proposed approach.
APA, Harvard, Vancouver, ISO, and other styles