Academic literature on the topic 'Insensitivity to sample size bias'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Insensitivity to sample size bias.'


Journal articles on the topic "Insensitivity to sample size bias"

1

Kang, Mijung, and Min Jae Park. "Employees’ judgment and decision making in the banking industry." International Journal of Bank Marketing 37, no. 1 (2019): 382–400. http://dx.doi.org/10.1108/ijbm-04-2018-0111.

Abstract:
Purpose Heuristics are used in the judgment and decision-making process of bank employees; however, discussions and research on the type or range of judgmental heuristics are very difficult to find throughout the world. In light of this, the purpose of this paper is to empirically analyze what types of heuristics are used in bank employees’ judgment and decision-making processes and the extent to which those types of heuristics prevent rational decision making due to the systematic biases they generate. In particular, this study aims to conduct empirical research based on various scenarios related to the banking industry. Design/methodology/approach To examine the heuristics in decision-making circumstances and the level of subsequent biases, the present study narrowed the scope of research to the three main types of heuristics introduced by Tversky and Kahneman (1974), namely, representativeness heuristics, availability heuristics and anchoring and adjustment heuristics. To analyze the bank employees’ decision making, this study specifically investigated the level of decision-making heuristics and the level of bias by focusing on these three types of heuristics. This study targeted bank employees who either sell financial products or are engaged in customer service work at a real/physical bank. Findings For representativeness heuristics, this study found bank employees’ judgment of probability was influenced by biases, such as insensitivity to prior probability, insensitivity to sample size, misconception of chance and insensitivity to predictability. Regarding availability heuristics, it found that bank employees judge the probability of events based on the ease of recalling an event instead of the actual frequency of the event, and so they fall prey to systematic biases. 
Finally, regarding anchoring and adjustment heuristics, this study found that employees fall prey to judgment biases as they judge the probability of conjunctive and disjunctive events based on anchoring and insufficient adjustment. Originality/value Although people who are well trained in statistics can avoid rudimentary errors, they fall prey to biased judgment at a level similar to those without statistical training when it comes to more complicated and ambiguous issues. This clearly indicates that it is risky to assume that financial experts will be more rational than the general public in making the various judgments required in the policy-making process. To conclude, it is imperative to recognize the existence of heuristics-based systematic biases in the judgment and decision-making process and, furthermore, to reinforce education and training to improve bank employees’ rational choice and judgment ability.
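The insensitivity-to-sample-size bias that Kang and Park test traces back to Tversky and Kahneman's hospital problem: respondents expect a small and a large maternity ward to record days with more than 60% boys about equally often. A short simulation (a sketch, not taken from the paper; the ward sizes of 15 and 45 births per day follow the classic problem) shows why the smaller ward records far more such days:

```python
import random

random.seed(0)

def extreme_day_rate(births_per_day, days=20_000, threshold=0.60):
    """Fraction of simulated days on which more than `threshold` of the
    births are boys, with each birth independently a boy w.p. 0.5."""
    extreme = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > threshold:
            extreme += 1
    return extreme / days

small = extreme_day_rate(15)   # small hospital: about 15 births a day
large = extreme_day_rate(45)   # large hospital: about 45 births a day
# Proportions from small samples fluctuate more, so the small hospital
# records ">60% boys" days about twice as often -- the statistical
# regularity that people insensitive to sample size fail to anticipate.
print(small, large)
```

People who judge the two wards as equally likely to see extreme days are, in effect, treating a 15-birth sample as just as informative as a 45-birth one.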
2

Reagan, Robert Timothy. "Variations on a seminal demonstration of people's insensitivity to sample size." Organizational Behavior and Human Decision Processes 43, no. 1 (1989): 52–57. http://dx.doi.org/10.1016/0749-5978(89)90057-5.

3

Shiffler, Ronald E., and Arthur J. Adams. "A Correction for Biasing Effects of Pilot Sample Size on Sample Size Determination." Journal of Marketing Research 24, no. 3 (1987): 319–21. http://dx.doi.org/10.1177/002224378702400309.

Abstract:
When a pilot-study variance is used to estimate σ² in the sample size formula, the resulting sample size n̂ is a random variable. The authors investigate the theoretical behavior of n̂. Though n̂ is more likely to underachieve than overachieve the unbiased n, correction factors to balance the bias are provided.
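The mechanism Shiffler and Adams analyze can be illustrated numerically. Assuming the standard formula n = z²σ²/E² and a normal pilot sample (the z, σ, E, and pilot-size values below are illustrative, not taken from the paper), a simulation shows n̂ falling short of the unbiased n more often than it exceeds it:

```python
import random
import statistics

random.seed(1)

Z, SIGMA, E = 1.96, 10.0, 2.0        # 95% z-value, true sd, target half-width
true_n = (Z * SIGMA / E) ** 2        # n when sigma is known: ~96

PILOT_SIZE, TRIALS = 10, 20_000
under = 0
for _ in range(TRIALS):
    pilot = [random.gauss(0.0, SIGMA) for _ in range(PILOT_SIZE)]
    s2 = statistics.variance(pilot)  # unbiased for sigma^2 ...
    n_hat = Z ** 2 * s2 / E ** 2     # ... but n_hat inherits its skew
    if n_hat < true_n:
        under += 1

# The sampling distribution of s2 is right-skewed, so its median sits
# below sigma^2: n_hat underachieves the unbiased n more often than not.
under_rate = under / TRIALS
print(round(under_rate, 3))
```

This is exactly the asymmetry the paper's correction factors are designed to balance.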
4

Smith, Andrew R., and Paul C. Price. "Sample size bias in the estimation of means." Psychonomic Bulletin & Review 17, no. 4 (2010): 499–503. http://dx.doi.org/10.3758/pbr.17.4.499.

5

Price, Paul C., Nicole M. Kimura, Andrew R. Smith, and Lindsay D. Marshall. "Sample size bias in judgments of perceptual averages." Journal of Experimental Psychology: Learning, Memory, and Cognition 40, no. 5 (2014): 1321–31. http://dx.doi.org/10.1037/a0036576.

6

Gunthorpe, Deborah, and Haim Levy. "Rational expectation, firm size, and sample selection bias." Economics Letters 37, no. 4 (1991): 429–32. http://dx.doi.org/10.1016/0165-1765(91)90082-v.

7

Posch, Martin, Florian Klinglmueller, Franz König, and Frank Miller. "Estimation after blinded sample size reassessment." Statistical Methods in Medical Research 27, no. 6 (2016): 1830–46. http://dx.doi.org/10.1177/0962280216670424.

Abstract:
Blinded sample size reassessment is a popular means to control the power in clinical trials if no reliable information on nuisance parameters is available in the planning phase. We investigate how sample size reassessment based on blinded interim data affects the properties of point estimates and confidence intervals for parallel group superiority trials comparing the means of a normal endpoint. We evaluate the properties of two standard reassessment rules that are based on the sample size formula of the z-test, derive the worst case reassessment rule that maximizes the absolute mean bias and obtain an upper bound for the mean bias of the treatment effect estimate.
8

Anderson, Richard B., and Beth M. Hartzler. "Belief bias in the perception of sample size adequacy." Thinking & Reasoning 20, no. 3 (2013): 297–314. http://dx.doi.org/10.1080/13546783.2013.787121.

9

Smith, Andrew R., Shanon Rule, and Paul C. Price. "Sample size bias in retrospective estimates of average duration." Acta Psychologica 176 (May 2017): 39–46. http://dx.doi.org/10.1016/j.actpsy.2017.03.008.

10

Doganer, Adem. "Different Approaches to Reducing Bias in Classification of Medical Data by Ensemble Learning Methods." International Journal of Big Data and Analytics in Healthcare 6, no. 2 (2021): 15–30. http://dx.doi.org/10.4018/ijbdah.20210701.oa2.

Abstract:
In this study, different models were created to reduce bias by ensemble learning methods. Reducing the bias error will improve the classification performance. In order to increase the classification performance, the most appropriate ensemble learning method and ideal sample size were investigated. Bias values and learning performances of different ensemble learning methods were compared. The AdaBoost ensemble learning method provided the lowest bias value with an n = 250 sample size, while the Stacking ensemble learning method provided the lowest bias value with the n = 500, 750, 1000, 2000, 4000, 6000, 8000, 10000, and 20000 sample sizes. When the learning performances were compared, the AdaBoost ensemble learning method and RBF classifier achieved the best performance with the n = 250 sample size (ACC = 0.956, AUC = 0.987). The AdaBoost ensemble learning method and REPTree classifier achieved the best performance with the n = 20000 sample size (ACC = 0.990, AUC = 0.999). In conclusion, for reduction of bias, methods based on stacking displayed a higher performance compared to other methods.

Dissertations / Theses on the topic "Insensitivity to sample size bias"

1

AlKhars, Mohammed. "Decision Makers’ Cognitive Biases in Operations Management: An Experimental Study." Thesis, University of North Texas, 2016. https://digital.library.unt.edu/ark:/67531/metadc849675/.

Abstract:
Behavioral operations management (BOM) has gained popularity in the last two decades. The main theme in this stream of research is to include human behavior in operations management (OM) models to increase the effectiveness of such models. BOM is classified into four areas: cognitive psychology, social psychology, group dynamics, and system dynamics (Bendoly et al. 2010). This dissertation focuses on the first area, cognitive psychology, which is further classified into heuristics and biases. Tversky and Kahneman (1974) discussed 3 heuristics and 13 cognitive biases that decision makers commonly face. This dissertation studies 6 cognitive biases under the representativeness heuristic. The model in this dissertation states that the cognitive reflection of the individual (Frederick 2005) and training about cognitive biases in the form of warnings (Kaufmann and Michel 2009) will help decision makers make less biased decisions. The 6 cognitive biases investigated are insensitivity to prior probability, insensitivity to sample size, misconception of chance, insensitivity to predictability, the illusion of validity, and misconception of regression. Six scenarios in OM contexts were used, each corresponding to one cognitive bias. An experimental design was used as the research tool. To see the impact of training, one group of participants received the scenarios without training and the other group received them with training. The training consisted of a brief description of each cognitive bias as well as an example of it. Cognitive reflection was operationalized using the cognitive reflection test (CRT). The survey was distributed to students at the University of North Texas (UNT), and logistic regression was employed to analyze the data. The research shows that participants exhibit the cognitive biases proposed by Tversky and Kahneman. Moreover, CRT is a significant predictor of cognitive bias in two scenarios. Finally, providing training in the form of warnings helps participants make more rational decisions in 4 scenarios. This means that although cognitive biases are inherent in the human mind, corporate management has tools to educate its managers and professionals about such biases, helping companies make more rational decisions.
2

Haardoerfer, Regine. "Power and Bias in Hierarchical Linear Growth Models: More Measurements for Fewer People." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/eps_diss/57.

Abstract:
Hierarchical linear modeling (HLM) sample size recommendations are mostly made with traditional group-design research in mind, as HLM has been used almost exclusively in group-design studies. Single-case research can benefit from utilizing hierarchical linear growth modeling, but sample size recommendations for growth modeling with HLM are scarce and generally do not consider the sample size combinations typical in single-case research. The purpose of this Monte Carlo simulation study was to expand sample size research in hierarchical linear growth modeling to suit single-case designs by testing larger level-1 sample sizes (N1), ranging from 10 to 80, and smaller level-2 sample sizes (N2), from 5 to 35, under the presence of autocorrelation to investigate bias and power. Estimates for the fixed effects were good for all tested sample-size combinations, irrespective of the strength of the predictor-outcome correlations or the level of autocorrelation. Such low sample sizes, however, especially in the presence of autocorrelation, produced neither good estimates of the variances nor adequate power rates. Power rates were at least adequate for conditions in which N2 = 20 and N1 = 80 or N2 = 25 and N1 = 50 when the squared autocorrelation was .25. Conditions with lower autocorrelation provided adequate or high power with N2 = 15 and N1 = 50. In addition, conditions with high autocorrelation produced less than perfect power rates to detect the level-1 variance.
3

Liv, Per. "Efficient strategies for collecting posture data using observation and direct measurement." Doctoral thesis, Umeå universitet, Yrkes- och miljömedicin, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-59132.

Abstract:
Relationships between occupational physical exposures and risks of contracting musculoskeletal disorders are still not well understood; exposure-response relationships are scarce in the musculoskeletal epidemiology literature, and many epidemiological studies, including intervention studies, fail to reach conclusive results. Insufficient exposure assessment has been pointed out as a possible explanation for this deficiency. One important aspect of assessing exposure is the selected measurement strategy; this includes issues related to the number of data required to give sufficient information, and to allocation of measurement efforts, both over time and between subjects, in order to achieve precise and accurate exposure estimates. These issues have been discussed mainly in the occupational hygiene literature considering chemical exposures, while the corresponding literature on biomechanical exposure is sparse. The overall aim of the present thesis was to increase knowledge on the relationship between data collection design and the resulting precision and accuracy of biomechanical exposure assessments, represented in this thesis by upper arm postures during work, data which have been shown to be relevant to disorder risk. Four papers are included in the thesis. In papers I and II, non-parametric bootstrapping was used to investigate the statistical efficiency of different strategies for distributing upper arm elevation measurements between and within working days into different numbers of measurement periods of differing durations. Paper I compared the different measurement strategies with respect to the eventual precision of estimated mean exposure level. The results showed that it was more efficient to use a higher number of shorter measurement periods spread across a working day than a smaller number of longer uninterrupted measurement periods, in particular if the total sample covered only a small part of the working day.
Paper II evaluated sampling strategies for the purpose of determining posture variance components with respect to the accuracy and precision of the eventual variance component estimators. The paper showed that variance component estimators may be both biased and imprecise when based on sampling from small parts of working days, and that errors were larger with continuous sampling periods. The results suggest that larger posture samples than are conventionally used in ergonomics research and practice may be needed to achieve trustworthy estimates of variance components. Papers III and IV focused on method development. Paper III examined procedures for estimating statistical power when testing for a group difference in postures assessed by observation. Power determination was based either on a traditional analytical power analysis or on parametric bootstrapping, both of which accounted for methodological variance introduced by the observers to the exposure data. The study showed that repeated observations of the same video recordings may be an efficient way of increasing the power in an observation-based study, and that observations can be distributed between several observers without loss in power, provided that all observers contribute data to both of the compared groups, and that the statistical analysis model acknowledges observer variability. Paper IV discussed calibration of an inferior exposure assessment method against a superior “golden standard” method, with a particular emphasis on calibration of observed posture data against postures determined by inclinometry. The paper developed equations for bias correction of results obtained using the inferior instrument through calibration, as well as for determining the additional uncertainty of the eventual exposure value introduced through calibration. 
In conclusion, the results of the present thesis emphasize the importance of carefully selecting a measurement strategy on the basis of statistically well informed decisions. It is common in the literature that postural exposure is assessed from one continuous measurement collected over only a small part of a working day. In paper I, this was shown to be highly inefficient compared to spreading out the corresponding sample time across the entire working day, and the inefficiency was also obvious when assessing variance components, as shown in paper II. The thesis also shows how a well thought-out strategy for observation-based exposure assessment can reduce the effects of measurement error, both for random methodological variance (paper III) and systematic observation errors (bias) (paper IV).
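The central finding of paper I, that spreading a fixed amount of measurement time across the day beats one continuous block, can be illustrated with a toy simulation (the exposure profile, drift, and noise below are invented for illustration, not the thesis data):

```python
import random
import statistics

random.seed(4)

# A hypothetical working day: 480 one-minute exposure readings with a slow
# drift over the day, so exposure is not uniform in time.
day = [30 + 20 * (t / 480) + random.gauss(0, 5) for t in range(480)]
true_mean = statistics.mean(day)

def continuous_sample():
    """One uninterrupted 60-minute measurement period."""
    start = random.randrange(480 - 60)
    return day[start:start + 60]

def spread_sample():
    """Four 15-minute periods, one in each quarter of the day."""
    out = []
    for q in range(4):
        start = q * 120 + random.randrange(120 - 15)
        out.extend(day[start:start + 15])
    return out

TRIALS = 5000
err_cont = [statistics.mean(continuous_sample()) - true_mean for _ in range(TRIALS)]
err_spread = [statistics.mean(spread_sample()) - true_mean for _ in range(TRIALS)]

# Same 60 minutes of total sampling, but the spread design tracks the drift
# and yields a far less variable estimate of the day's mean exposure.
rmse_cont = (sum(e * e for e in err_cont) / TRIALS) ** 0.5
rmse_spread = (sum(e * e for e in err_spread) / TRIALS) ** 0.5
print(round(rmse_cont, 2), round(rmse_spread, 2))
```

Whenever exposure varies systematically within the day, a single continuous period can only see one part of that variation, which is why its error is dominated by where the period happens to fall.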

Books on the topic "Insensitivity to sample size bias"

1

Michael, Kennedy. The influence of sample size, effect size, and percentage of DIF items on the performance of the Mantel-Haenszel and logistic regression DIF identification procedures. 1994.

2

Gelman, Andrew, and Deborah Nolan. Statistical inference. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198785699.003.0009.

Abstract:
This chapter begins with a very successful demonstration that illustrates many of the general principles of statistical inference, including estimation, bias, and the concept of the sampling distribution. Students each take a “random” sample of differently sized candies, weigh them, and estimate the total weight of all the candies. Then various demonstrations and examples are presented that take the students on the transition from probability to hypothesis testing, confidence intervals, and more advanced concepts such as statistical power and multiple comparisons. These activities include the use of an inflatable globe, a short-term memory test, the first digits of street addresses, and simulated student IQs.
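The candy-weighing demonstration works because a grabbed “random” handful is effectively size-biased sampling: big candies are more likely to be picked. A small simulation (the bag contents and weights are invented for illustration) contrasts a true simple random sample with a weight-proportional “handful”:

```python
import random

random.seed(2)

# A hypothetical bag: 75 small candies (2 g) and 25 large ones (10 g).
weights = [2.0] * 75 + [10.0] * 25
true_total = sum(weights)              # 400 g

def estimate_total(sample):
    """Estimate the bag total as mean sample weight x number of candies."""
    return sum(sample) / len(sample) * len(weights)

SAMPLE, TRIALS = 5, 20_000
srs_est, grab_est = [], []
for _ in range(TRIALS):
    # Simple random sample: every candy equally likely to be drawn.
    srs_est.append(estimate_total(random.sample(weights, SAMPLE)))
    # "Handful" sample: selection chance proportional to weight, mimicking
    # the way large candies are easier to grab.
    grab_est.append(estimate_total(random.choices(weights, weights=weights, k=SAMPLE)))

srs_mean = sum(srs_est) / TRIALS       # close to the true 400 g: unbiased
grab_mean = sum(grab_est) / TRIALS     # well above 400 g: size-biased
print(round(srs_mean, 1), round(grab_mean, 1))
```

The handful estimator is biased no matter how many students average their results, which is the point the classroom demonstration drives home.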
3

Gelman, Andrew, and Deborah Nolan. Student activities in survey sampling. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198785699.003.0018.

Abstract:
This chapter outlines some of our more effective demonstrations for teaching sampling. Part I of this book contains many activities related to sampling that we also use in our more advanced courses on the subject (e.g., see chapter 6 for an activity on estimating family size, and chapter 9 for a candy weighing activity). This chapter describes additional student activities that we have developed for the advanced undergraduate survey sampling class. These include provocative questionnaires to demonstrate question bias and statistical literacy packets for dissecting news stories about surveys. In addition, this chapter contains sample handouts used for teaching particular topics, techniques for encouraging student participation, and materials to organize student projects on complex surveys.

Book chapters on the topic "Insensitivity to sample size bias"

1

Liefbroer, Aart C., and Mioara Zoutewelle-Terovan. "Meta-Analysis and Meta-Regression: An Alternative to Multilevel Analysis When the Number of Countries Is Small." In Social Background and the Demographic Life Course: Cross-National Comparisons. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-67345-1_6.

Abstract:
Hierarchically nested data structures are often analyzed by means of multilevel techniques. A common situation in cross-national comparative research is data on two levels, with information on individuals at level 1 and on countries at level 2. However, when dealing with few level-2 units (e.g. countries), results from multilevel models may be unreliable due to estimation bias (e.g. underestimated standard errors, unreliable country-level variance estimates). This chapter provides a discussion on multilevel modeling inaccuracies when using a small level-2 sample size, as well as a list of available alternative analytic tools for analyzing such data. However, as in practice many of these alternatives remain unfeasible in testing hypotheses central to cross-national comparative research, the aim of this chapter is to propose and illustrate a new technique – the 2-step meta-analytic approach – reliable in the analysis of nested data with few level-2 units. In addition, this method is highly infographic and accessible to the average social scientist (not skilled in advanced simulation techniques).
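In a 2-step meta-analytic approach, step 1 estimates the effect separately within each country and step 2 pools those estimates, typically by inverse-variance weighting. A minimal fixed-effect pooling sketch (the country estimates and standard errors below are made-up values for illustration):

```python
def pool_fixed_effect(estimates, std_errors):
    """Step-2 fixed-effect pooling: inverse-variance weighted average of
    the per-country step-1 estimates, with its pooled standard error."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Three hypothetical country-level regression coefficients with their s.e.'s:
b, se = pool_fixed_effect([0.30, 0.10, 0.25], [0.05, 0.10, 0.08])
print(round(b, 3), round(se, 3))   # precisely estimated countries dominate
```

Because each country contributes only a single summary estimate, no country-level variance components need to be estimated, which is what makes the approach workable with few level-2 units.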
2

Jennions, Michael D., Christopher J. Lortie, Michael S. Rosenberg, and Hannah R. Rothstein. "Publication and Related Biases." In Handbook of Meta-analysis in Ecology and Evolution. Princeton University Press, 2013. http://dx.doi.org/10.23943/princeton/9780691137285.003.0014.

Abstract:
This chapter discusses the increased occurrence of publication bias in the scientific literature. Publication bias is associated with the inaccurate representation of the merit of a hypothesis or idea. A strict definition is that it occurs when the published literature reports results that systematically differ from those of all studies and statistical tests conducted; the result is that false conclusions are drawn. The chapter presents five main approaches used either to detect potential narrow-sense publication bias or to assess how sensitive the results of a meta-analysis are to the possible exclusion of studies. These include funnel plots, tests for relationships between effect size and sample size using nonparametric correlation or regression, the trim and fill method, fail-safe numbers, and model selection.
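The file-drawer mechanism behind narrow-sense publication bias is easy to simulate. In the sketch below (the true effect, the sample sizes, and the publish-only-if-significant rule are all assumptions for illustration), the published record overstates a small true effect:

```python
import random
import statistics

random.seed(3)

TRUE_D = 0.2                           # small true standardized effect

def one_study(n):
    """Simulate a two-group study with n per arm; return the observed mean
    difference and whether a two-sided z-test calls it significant."""
    treat = [random.gauss(TRUE_D, 1.0) for _ in range(n)]
    ctrl = [random.gauss(0.0, 1.0) for _ in range(n)]
    d = statistics.mean(treat) - statistics.mean(ctrl)
    se = (2.0 / n) ** 0.5              # sd assumed known (= 1) for simplicity
    return d, abs(d) / se > 1.96

all_d, published = [], []
for _ in range(4000):
    n = random.choice([10, 20, 40, 80, 160])
    d, significant = one_study(n)
    all_d.append(d)
    if significant:                    # the file drawer: only "p < .05" appears
        published.append(d)

mean_all = statistics.mean(all_d)      # close to the true 0.2
mean_pub = statistics.mean(published)  # markedly inflated
print(round(mean_all, 2), round(mean_pub, 2))
```

Because small studies need a huge observed effect to reach significance, the inflation is strongest at small sample sizes, which is exactly the effect-size-versus-sample-size relationship that funnel plots and the correlation tests named above are designed to reveal.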
3

Haute, Emilie van. "Sampling Techniques." In Research Methods in the Social Sciences: An A-Z of key concepts. Oxford University Press, 2021. http://dx.doi.org/10.1093/hepl/9780198850298.003.0057.

Abstract:
This chapter assesses sampling techniques. Researchers may restrict their data collection to a sample of a population for convenience or necessity if they lack the time and resources to collect data for the entire population. A sample is thus any subset of units collected from a population. Research sampling techniques refer to the case selection strategy: the process and methods used to select a subset of units from a population. While sampling techniques reduce the costs of data collection, they induce a loss in comprehensiveness and accuracy compared to working on the entire population, and the data collected are subject to errors or bias. Two main decisions determine the margin of error and whether the results of a sample study can be generalized and applied to the entire population with accuracy: the choice of sample type and the sample size.
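The link between sample size and margin of error can be made concrete. Assuming a simple random sample and the conventional normal-approximation formula for a proportion (this sketch is illustrative, not taken from the chapter):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

def required_n(moe, p=0.5, z=1.96):
    """Smallest sample size giving at most `moe` margin of error."""
    return math.ceil(z ** 2 * p * (1 - p) / moe ** 2)

# The familiar "plus or minus 3 points" of a national poll needs roughly
# a thousand respondents -- regardless of how large the population is.
print(round(margin_of_error(1068), 3))
print(required_n(0.03))
```

Note that n appears under a square root: halving the margin of error requires quadrupling the sample, which is why precision gains become expensive quickly.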
4

Djannatian, Minou, Clarissa Valim, Andre Brunoni, and Felipe Fregni. "Observational Studies." In Critical Thinking in Clinical Research, edited by Raquel Ajub Moyses, Valeriam Angelim, Scott Evans, Rui Imamura, and Felipe Fregni. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199324491.003.0016.

Abstract:
This chapter on observational studies provides an understanding of the main concepts in epidemiology, introduces common study designs, such as cross-sectional, case-control, and cohort studies, and outlines their importance for clinical research. The hallmark of epidemiological research is that it observes unexposed and exposed individuals under “real-life conditions” without intervening itself. The chapter emphasizes the important role of bias and confounding in interpreting results from such studies and explains how bias and confounding can be controlled. It furthermore discusses specific aspects of sample size determination that are relevant to observational studies. The chapter concludes with a brief review of the special nature of surgical research.
5

Chniguir, Mounira, Asma Sghaier, Mohamed Soufeljil, and Zouhayer Mighri. "The Degree of Home Bias in the Holding of Share Portfolio." In Foreign Direct Investments. IGI Global, 2020. http://dx.doi.org/10.4018/978-1-7998-2448-0.ch056.

Abstract:
The objective of this paper is to measure the degree of home bias in portfolio holdings and to identify its determining factors. Following intuitive reasoning, the authors selected a number of factors likely to have an impact on home bias, and developed an international CAPM (capital asset pricing model). This model is estimated for 20 countries using cross-section econometrics. The authors' results show that all countries record a high level of home bias in their portfolio holdings. To study whether the home bias of newly emerging markets and that of developed markets react differently to the determining factors, the authors estimated the model both jointly for all markets and separately for the developed and the newly emerging ones. When the sample is classified in this way, the results permit an important conclusion: the volatility of the exchange rate is statistically significant for the newly emerging economies at the 1% level, while it is hardly noticeable for the developed countries. This means that this variable deters American investors from investing in the former countries. The same holds for both the joint-variance and size variables.
6

Murad, M. Hassan, and Qian Shi. "Biostatistics 1: Basic Concepts." In Mayo Clinic Preventive Medicine and Public Health Board Review. Oxford University Press, 2010. http://dx.doi.org/10.1093/med/9780199743018.003.0001.

Abstract:
Chapter 1 reviews basic concepts of biostatistics. Topics include descriptive data, probability and odds, estimation and sampling error, hypothesis testing, and power and sample size calculations. The discussion of descriptive data includes types of data (discrete vs continuous and nominal vs ordinal), central tendency (mean, median, and mode), skewed distributions, and measures of dispersion (range, variance, standard deviation). Probability and odds are broken down into laws of probability, odds, odds ratio, relative risk, and probability distribution. The examination of estimation and sampling error covers concepts such as random error, bias, standard error, point estimation, and interval estimation.
7

Lippert, Susan K. "Social Issues in the Administration of Information Systems Survey Research." In Computing Information Technology. IGI Global, 2003. http://dx.doi.org/10.4018/978-1-93177-752-0.ch005.

Abstract:
Survey responses differ between direct paper and pencil (manual) administration and Internet-based (electronic) survey data collection methods. Social dynamics (issues) play an important role in influencing respondent participation. A review of the existing literature suggests that the medium and administration context affect differences in instrument performance parameters, i.e., response rate, participation ease, attractiveness of survey, novelty effect, administrative costs, response flexibility, response time, population size, sample bias, instrument validity, the management of non-response data, and response error. This chapter attempts to identify, describe and map the differences between survey data collection media as a function of selected social variables.
8

Norcross, John C., Thomas P. Hogan, Gerald P. Koocher, and Lauren A. Maggio. "Appraising Research Reports." In Clinician's Guide to Evidence-Based Practices. Oxford University Press, 2016. http://dx.doi.org/10.1093/med:psych/9780190621933.003.0007.

Abstract:
Assessing and interpreting research reports involves examination of individual studies as well as summaries of many studies. Summaries may be conveyed in narrative reviews or, more typically, in meta-analyses. This chapter reviews how researchers conduct a meta-analysis and report the results, especially by means of forest plots, which incorporate measures of effect size and their confidence intervals. A meta-analysis may also use moderator analyses or meta-regressions to identify important influences on the results. Critical appraisal of a study requires careful attention to the details of the sample used, the independent variable (treatment), dependent variable (outcome measure), the comparison groups, and the relation between the stated conclusions and the actual results. The CONSORT flow diagram provides a context for interpreting the sample and comparison groups. Finally, users must be alert to possible artifacts of publication bias.
9

Esteban Garcia-Robledo, Juan, Alejandro Ruíz-Patiño, Carolina Sotelo, et al. "Diagnosis and Management of Radiation Necrosis in Patients with Brain Metastases and Primary Tumors." In CNS Malignancies [Working Title]. IntechOpen, 2021. http://dx.doi.org/10.5772/intechopen.96824.

Abstract:
The incidence of radiation necrosis has increased secondary to combined modality therapy for brain tumors and stereotactic radiosurgery. The pathology of progressive brain radiation necrosis (RN) primarily includes inflammation and angiogenesis in which cytokines, chemokines, and vascular endothelial growth factors are upregulated. Combined multiparametric imaging, including lesional metabolism, spectroscopy, and blood flow, could enhance diagnostic accuracy compared with a single imaging study. Nevertheless, a substantial risk of bias restricts firm conclusions about the best imaging technique for diagnosing brain RN. Bevacizumab shows promising results of improving radiographic edema and post-gadolinium enhancement with associated symptomatic improvement. However, this was based on small double-blinded randomized controlled trials, which introduces a high risk of bias due to the small sample size despite the high-quality trial design. Edaravone combined with corticosteroids also resulted in a more significant reduction in radiographic edema than corticosteroids alone but had no impact on reducing the enhancing lesion. There is a great need for further prospective randomized controlled trials (RCTs) to treat brain RN.
10

Lee, P. J. "Estimating Mature Plays." In Statistical Methods for Estimating Petroleum Resources, edited by Jo Anne DeGraffenreid. Oxford University Press, 2008. http://dx.doi.org/10.1093/oso/9780195331905.003.0009.

Abstract:
A key objective in petroleum resource evaluation is to estimate oil and gas pool size (or field size) or oil and gas joint probability distributions for a particular population or play. The pool-size distribution, together with the number-of-pools distribution in a play can then be used to predict quantities such as the total remaining potential, the individual pool sizes, and the sizes of the largest undiscovered pools. These resource estimates provide the fundamental information upon which petroleum economic analyses and the planning of exploration strategies can be based. The estimation of these types of pool-size distributions is a difficult task, however, because of the inherent sampling bias associated with exploration data. In many plays, larger pools tend to be discovered during the earlier phases of exploration. In addition, a combination of attributes, such as reservoir depth and distance to transportation center, often influences the order of discovery. Thus exploration data cannot be considered a random sample from the population. As stated by Drew et al. (1988), the form and specific parameters of the parent field-size distribution cannot be inferred with any confidence from the observed distribution. The biased nature of discovery data resulting from selective exploration decision making must be taken into account when making predictions about undiscovered oil and gas resources in a play. If this problem can be overcome, then the estimation of population mean, variance, and correlation among variables can be achieved. The objective of this chapter is to explain the characterization of the discovery process by statistical formulation. To account for sampling bias, Kaufman et al. (1975) and Barouch and Kaufman (1977) used the successive sampling process of the superpopulation probabilistic model (discovery process model) to estimate the mean and variance of a given play. 
Here we shall discuss how to use superpopulation probabilistic models to estimate pool-size distribution. The models to be discussed include the lognormal (LDSCV), nonparametric (NDSCV), lognormal/nonparametric–Poisson (BDSCV), and the bivariate lognormal, multivariate (MDSCV) discovery process methods. Their background, applications, and limitations will be illustrated by using play data sets from the Western Canada Sedimentary Basin as well as simulated populations.
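The size-biased discovery sampling that motivates these models can be sketched in a few lines. This is a minimal Python simulation, not the LDSCV/NDSCV estimators themselves; the lognormal parent distribution and the sample sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
# superpopulation of pool sizes: a lognormal "parent" field-size distribution
pool_sizes = rng.lognormal(mean=3.0, sigma=1.0, size=5000)

# successive sampling without replacement, with probability proportional to
# size: larger pools tend to be discovered first, as in early exploration
order = rng.choice(len(pool_sizes), size=len(pool_sizes), replace=False,
                   p=pool_sizes / pool_sizes.sum())
discoveries = pool_sizes[order[:200]]   # the first 200 "discoveries"

# the discovered pools over-represent large sizes, so the observed
# distribution cannot be read as a random sample of the parent population
print(discoveries.mean(), pool_sizes.mean())
```

Under this sampling scheme the mean of the discovered pools is far above the population mean, which is exactly the bias the discovery process models are built to undo.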
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Insensitivity to sample size bias"

1

Rahman, Foyzur, Daryl Posnett, Israel Herraiz, and Premkumar Devanbu. "Sample size vs. bias in defect prediction." In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering (ESEC/FSE 2013). ACM Press, 2013. http://dx.doi.org/10.1145/2491411.2491418.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Turek, Steven, and Sam Anand. "A Hull Normal Approach for Determining the Size of Cylindrical Features." In ASME 2007 International Manufacturing Science and Engineering Conference. ASMEDC, 2007. http://dx.doi.org/10.1115/msec2007-31206.

Full text
Abstract:
In coordinate metrology, discrete data is sampled from a continuous form to assess the manufactured feature’s deviation from its design specifications. Although coordinate measuring machines have a high degree of accuracy, the unsampled portion of the manufactured object cannot be completely described. The definition of cylindrical size for an external feature as specified by ASME Y14.5.1M-1994 [1,2] matches the analytical definition of a minimum circumscribing cylinder (MCC) when Rule #1 is applied. Even though the MCC is a logical analysis technique for size determination, it is highly sensitive to the sampling method and any uncertainties encountered in that process. Determining the least-sum-of-squares solution is an alternative method commonly utilized in size determination. However, the least-squares formulation seeks an optimal solution not based on the cylindrical size definition [1,2], and hence has been shown to be biased [6,7]. This research presents a novel Hull Normal method for size determination of cylindrical bosses. The goal of the proposed method is to recreate the sampled surface using computational geometry methods and determine the cylinder’s axis and radius based upon the reconstructed surface. Through varying the random sample size of data from an actual measured part, repetitive analyses resulted in the Hull Normal method having a lower bias and distributions that were skewed towards the true value of the radius.
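The sampling sensitivity of the MCC relative to the stability of the least-squares radius can be illustrated with a simplified 2-D sketch. This assumes a known axis and hypothetical form-error noise, unlike the paper's Hull Normal surface reconstruction:

```python
import numpy as np

rng = np.random.default_rng(2)
true_r = 10.0                              # nominal radius of the boss
theta = rng.uniform(0.0, 2.0 * np.pi, 1000)
r = true_r + rng.normal(0.0, 0.01, 1000)   # form error + measurement noise
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def mcc_radius(points):
    # circumscribing radius about the (here known) axis: max point distance
    return np.linalg.norm(points, axis=1).max()

def ls_radius(points):
    # least-squares radius about the same axis: mean point distance
    return np.linalg.norm(points, axis=1).mean()

small = pts[:10]   # sparse CMM-style sample, a subset of the dense scan
# the MCC estimate can only grow as points are added, so it depends strongly
# on the sampling plan; the least-squares radius is far more stable
print(mcc_radius(small), mcc_radius(pts), ls_radius(small), ls_radius(pts))
```

Because the sparse sample is a subset of the dense one, its MCC radius is never larger, which is the sample-size sensitivity the abstract describes; the least-squares radius stays near the nominal value but answers a different (averaged) size question.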
APA, Harvard, Vancouver, ISO, and other styles
3

Xiong, Haoyi, Wei Cheng, Yanjie Fu, Wenqing Hu, Jiang Bian, and Zhishan Guo. "De-biasing Covariance-Regularized Discriminant Analysis." In Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18). International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/401.

Full text
Abstract:
Fisher's Linear Discriminant Analysis (FLD) is a well-known technique for linear classification, feature extraction and dimension reduction. The empirical FLD relies on two key estimations from the data -- the mean vector for each class and the (inverse) covariance matrix. To improve the accuracy of FLD under the High Dimension Low Sample Size (HDLSS) settings, Covariance-Regularized FLD (CRLD) has been proposed to use shrunken covariance estimators, such as Graphical Lasso, to strike a balance between biases and variances. Though CRLD could obtain better classification accuracy, it usually incurs bias and converges to the optimal result with a slower asymptotic rate. Inspired by the recent progress in de-biased Lasso, we propose a novel FLD classifier, DBLD, which improves classification accuracy of CRLD through de-biasing. Theoretical analysis shows that DBLD possesses better asymptotic properties than CRLD. We conduct experiments on both synthetic datasets and real application datasets to confirm the correctness of our theoretical analysis and demonstrate the superiority of DBLD over classical FLD, CRLD and other downstream competitors under HDLSS settings.
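A minimal sketch of covariance-regularized FLD under HDLSS follows. A simple ridge-type shrinkage stands in for the Graphical Lasso estimator and for the de-biasing step of DBLD, and all dimensions and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 50, 20                        # HDLSS: dimension exceeds per-class samples
mu1 = np.full(p, 0.5)
X0 = rng.normal(0.0, 1.0, (n, p))          # class 0
X1 = rng.normal(0.0, 1.0, (n, p)) + mu1    # class 1, shifted mean

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Xc = np.vstack([X0 - m0, X1 - m1])
S = Xc.T @ Xc / (2 * n - 2)          # pooled covariance: singular when p > n

lam = 0.5                            # shrinkage intensity (hypothetical choice)
S_reg = (1 - lam) * S + lam * np.eye(p)   # shrunken, invertible covariance

w = np.linalg.solve(S_reg, m1 - m0)  # regularized FLD direction
b = -0.5 * w @ (m0 + m1)             # decision threshold for equal priors

pred0 = (X0 @ w + b > 0)             # should mostly be False (class 0)
pred1 = (X1 @ w + b > 0)             # should mostly be True  (class 1)
acc = np.concatenate([~pred0, pred1]).mean()
```

Shrinkage makes the covariance invertible at the cost of the bias the paper sets out to remove; DBLD's contribution is the correction applied on top of an estimator like this.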
APA, Harvard, Vancouver, ISO, and other styles
4

Spring, Ryan, and Anshumali Shrivastava. "Mutual Information Estimation using LSH Sampling." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/389.

Full text
Abstract:
Learning representations in an unsupervised or self-supervised manner is a growing area of research. Current approaches in representation learning seek to maximize the mutual information between the learned representation and original data. One of the most popular ways to estimate mutual information (MI) is based on Noise Contrastive Estimation (NCE). This MI estimate exhibits low variance, but it is upper-bounded by log(N), where N is the number of samples. In an ideal scenario, we would use the entire dataset to get the most accurate estimate. However, using such a large number of samples is computationally prohibitive. Our proposed solution is to decouple the upper-bound for the MI estimate from the sample size. Instead, we estimate the partition function of the NCE loss function for the entire dataset using importance sampling (IS). In this paper, we use locality-sensitive hashing (LSH) as an adaptive sampler and propose an unbiased estimator that accurately approximates the partition function in sub-linear (near-constant) time. The samples are correlated and non-normalized, but the derived estimator is unbiased without any assumptions. We show that our LSH sampling estimate provides a superior bias-variance trade-off when compared to other state-of-the-art approaches.
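The log(N) ceiling on the NCE estimate follows directly from the log-softmax term being non-positive, which a short NumPy sketch makes concrete. This uses a dot-product critic on toy data, not the paper's LSH-based importance sampler:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 256, 8
x = rng.normal(size=(N, d))
y = x + 0.1 * rng.normal(size=(N, d))     # strongly dependent positive pairs

scores = x @ y.T                           # critic f(x_i, y_j): dot product
scores -= scores.max(axis=1, keepdims=True)           # numerical stability
log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
info_nce = log_softmax.diagonal().mean() + np.log(N)  # NCE estimate of MI

# the estimate can never exceed log(N), no matter how dependent x and y are
print(info_nce, np.log(N))
```

Even with near-deterministic pairs the estimate saturates at log(N) ≈ 5.55 nats here, which is why the paper decouples the bound from the batch size by estimating the partition function over the whole dataset.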
APA, Harvard, Vancouver, ISO, and other styles
5

Nutsugah, Redeemer, Patrick Mensah, Stephen Akwaboa, and Michael Martin. "Pressure and Thermophysical Property of Gas Dependence of Effective Thermal Conductivity of a Porous Silica Insulator." In ASME 2015 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/imece2015-53011.

Full text
Abstract:
The thermal conductivity of a high-temperature calcium silicate block insulation product was measured in gaseous environments at pressures up to 100 bar at room temperature. The thermal conductivity of the porous material was tested in nitrogen, argon, and carbon dioxide gaseous environments. These tests were performed in a newly-constructed pressure chamber integrated with a thermal conductivity testing device. A standardized testing method was employed in the design of the apparatus. The test method used was based on the ASTM c177, the guarded-hot-plate method [1]. Tests performed in a carbon dioxide pressure medium have produced data with thermal conductivity as a nonlinear function of pressure. The results of tests conducted using nitrogen and argon show that the variations of thermal conductivity of the porous silica insulating material are linear functions of pressure and specific heat (Cv) of the fill gas. Tests performed in a nitrogen gaseous environment have relatively higher thermal conductivity values than thermal conductivity values at corresponding pressures in an argon gaseous environment. This trend is attributable to the higher thermophysical property values of nitrogen than those of argon. This observation suggests that the thermophysical properties of the fill gas have a significant effect on the effective thermal conductivity of the porous material. Thermal conductivity data collected in both nitrogen and argon pressure media have coefficients of determination (r2) of 0.9955 and 0.9956, respectively. An exponential function fitted to the carbon dioxide data produced a coefficient of determination of 0.9175. A precision study for the newly-constructed steady-state thermal conductivity measuring apparatus was performed in atmospheric air. With a standard deviation of 0.00076 W/m · K and a mean thermal conductivity value of 0.07294 W/m · K, a 95% confidence interval was assumed for a sample space size of 13 for the baseline tests in air. 
This produced a precision error of ±0.00046 W/m · K (±0.63%), a mean bias error of ±0.00955 W/m · K (±13.09%), and a mean steady-state error of ±1.67%. Hence, the total uncertainty in the mean thermal conductivity value of the baseline tests in atmospheric air could be reported as 0.07294 W/m · K ± 13.22% with 95% confidence. The result of the precision study is indicative of the reliability of the apparatus. The single-sample precision uncertainty in thermal conductivity values at varying pressures in the various fill gases were estimated based on the standard deviation of the repeated tests in atmospheric air as 0.001166 W/m · K.
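The reported precision error and total uncertainty can be reproduced from the quoted statistics. In this Python sketch, the root-sum-square combination of the three error terms is an assumption about how the authors totaled them:

```python
import math

n = 13            # baseline tests in atmospheric air
s = 0.00076       # sample standard deviation, W/(m·K)
k_mean = 0.07294  # mean thermal conductivity, W/(m·K)
t_95 = 2.179      # two-sided Student t, 95% confidence, n - 1 = 12 dof

precision = t_95 * s / math.sqrt(n)          # precision (random) error
precision_pct = 100.0 * precision / k_mean   # ≈ ±0.63%

bias_pct, steady_pct = 13.09, 1.67
total_pct = math.sqrt(precision_pct**2 + bias_pct**2 + steady_pct**2)
# root-sum-square combination reproduces the reported total of ≈ ±13.2%
print(precision, precision_pct, total_pct)
```

The precision term comes out at ±0.00046 W/(m·K), matching the abstract, and the combined figure lands within rounding of the reported ±13.22%.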
APA, Harvard, Vancouver, ISO, and other styles
6

Breedlove, Evan L., Mark T. Gibson, Aaron T. Hedegaard, and Emilie L. Rexeisen. "Evaluation of Dynamic Mechanical Test Methods." In ASME 2016 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/imece2016-65742.

Full text
Abstract:
Dynamic mechanical properties are critical in the evaluation of materials with viscoelastic behavior. Various techniques, including dynamic mechanical analysis (DMA), rheology, nanoindentation, and others have been developed for this purpose and typically report complex modulus. Each of these techniques has strengths and weaknesses depending on sample geometry and length scale, mechanical properties, and skill of the user. In many industry applications, techniques may also be blindly applied according to a standard procedure without optimization for a specific sample. This can pose challenges for correct characterization of novel materials, and some techniques are more robust to agnostic application than others. A relative assessment of dynamic mechanical techniques is important when considering the appropriate technique to use to characterize a material. It also has bearing on organizations with limited resources that must strategically select one or two capabilities to meet as broad a set of materials as possible. The purpose of this study was to evaluate the measurement characteristics (e.g., precision and bias) of a selection of six dynamic mechanical test methods on a range of polymeric materials. Such a comprehensive comparison of dynamic mechanical testing methods was not identified in the literature. We also considered other technical characteristics of the techniques that influence their usability and strategic value to a laboratory and introduce a novel use of the House of Quality method to systematically compare measurement techniques. The selected methods spanned a range of length scales, frequency ranges, and prevalence of use. DMA, rheology, and oscillatory loading using a servohydraulic tensile tester were evaluated as traditional bulk techniques. Determination of complex modulus by beam vibration was also considered as a bulk technique. At a small length scale, both an oscillatory nanoindentation method and AFM were evaluated. 
Each method was employed to evaluate samples of polycarbonate, polypropylene, amorphous PET, and semi-crystalline PET. A measurement systems analysis (MSA) based on the ANOVA methods outlined in ASTM E2782 was conducted using storage modulus data obtained at 1 Hz. Additional correlations over a range of frequencies were tested between rheology/DMA and the remaining methods. Note that no attempts were made to optimize data collection for the test specimens. Rather, typical test methods were applied in order to simulate the type of results that would be expected in typical industrial characterization of materials. Data indicated low levels of repeatability error (<5%) for DMA, rheology, and nanoindentation. Biases were material dependent, indicating nonlinearity in the measurement systems. Nanoindentation and AFM results differed from the other techniques for PET samples, where anisotropy is believed to have affected in-plane versus out-of-plane measurements. Tensile-tester based results were generally poor and were determined to be related to the controllability of the actuator relative to the size of test specimens. The vibrations-based test method showed good agreement with time-temperature superposition determined properties from DMA. This result is particularly interesting since the vibrations technique directly accesses higher frequency responses and does not rely on time-temperature superposition, which is not suitable for all materials. MSA results were subsequently evaluated along with other technical attributes of the instruments using the House of Quality method. Technical attributes were weighted against a set of “user demands” that reflect the qualitative expectations often placed on measurement systems. Based on this analysis, we determined that DMA and rheology provide the broadest capability while remaining robust and easy to use. 
Other techniques, such as nanoindentation and vibrations, have unique qualities that fulfill niche applications where DMA and rheology are not suitable. This analysis provides an industry-relevant evaluation of measurement techniques and demonstrates a framework for evaluating the capabilities of analytical equipment relative to organizational needs.
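An ANOVA-style repeatability estimate of the kind ASTM E2782 describes can be sketched as follows. The NumPy example uses invented storage-modulus readings, not the study's data, and reduces the MSA to its simplest one-way form:

```python
import numpy as np

# hypothetical storage-modulus readings (GPa) at 1 Hz: one row per material,
# three repeated measurements each, on a single instrument
data = np.array([
    [2.30, 2.28, 2.31],   # polycarbonate
    [1.55, 1.58, 1.54],   # polypropylene
    [2.90, 2.87, 2.92],   # amorphous PET
])

# one-way ANOVA: within-material mean square estimates repeatability variance
resid = data - data.mean(axis=1, keepdims=True)
ms_within = (resid**2).sum() / (data.size - data.shape[0])
repeatability = np.sqrt(ms_within)                 # equipment variation
repeatability_pct = 100.0 * repeatability / data.mean()
print(repeatability_pct)
```

With tight repeats like these the repeatability error is well under the 5% level the study reports for DMA, rheology, and nanoindentation; material-dependent bias would show up as a shift between instruments, not in this within-instrument term.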
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Insensitivity to sample size bias"

1

McDonagh, Marian, Andrea C. Skelly, Amy Hermesch, et al. Cervical Ripening in the Outpatient Setting. Agency for Healthcare Research and Quality (AHRQ), 2021. http://dx.doi.org/10.23970/ahrqepccer238.

Full text
Abstract:
Objectives. To assess the comparative effectiveness and potential harms of cervical ripening in the outpatient setting (vs. inpatient, vs. other outpatient intervention) and of fetal surveillance when a prostaglandin is used for cervical ripening. Data sources. Electronic databases (Ovid® MEDLINE®, Embase®, CINAHL®, Cochrane Central Register of Controlled Trials, and Cochrane Database of Systematic Reviews) to July 2020; reference lists; and a Federal Register notice. Review methods. Using predefined criteria and dual review, we selected randomized controlled trials (RCTs) and cohort studies of cervical ripening comparing prostaglandins and mechanical methods in outpatient versus inpatient settings; one outpatient method versus another (including placebo or expectant management); and different methods/protocols for fetal surveillance in cervical ripening using prostaglandins. When data from similar study designs, populations, and outcomes were available, random effects using profile likelihood meta-analyses were conducted. Inconsistency (using I2) and small sample size bias (publication bias, if ≥10 studies) were assessed. Strength of evidence (SOE) was assessed. All review methods followed Agency for Healthcare Research and Quality Evidence-based Practice Center methods guidance. Results. We included 30 RCTs and 10 cohort studies (73% fair quality) involving 9,618 women. The evidence is most applicable to women aged 25 to 30 years with singleton, vertex presentation and low-risk pregnancies. No studies on fetal surveillance were found. The frequency of cesarean delivery (2 RCTs, 4 cohort studies) or suspected neonatal sepsis (2 RCTs) was not significantly different using outpatient versus inpatient dinoprostone for cervical ripening (SOE: low). 
In comparisons of outpatient versus inpatient single-balloon catheters (3 RCTs, 2 cohort studies), differences between groups on cesarean delivery, birth trauma (e.g., cephalohematoma), and uterine infection were small and not statistically significant (SOE: low), and while shoulder dystocia occurred less frequently in the outpatient group (1 RCT; 3% vs. 11%), the difference was not statistically significant (SOE: low). In comparing outpatient catheters and inpatient dinoprostone (1 double-balloon and 1 single-balloon RCT), the difference between groups for both cesarean delivery and postpartum hemorrhage was small and not statistically significant (SOE: low). Evidence on other outcomes in these comparisons and for misoprostol, double-balloon catheters, and hygroscopic dilators was insufficient to draw conclusions. In head to head comparisons in the outpatient setting, the frequency of cesarean delivery was not significantly different between 2.5 mg and 5 mg dinoprostone gel, or latex and silicone single-balloon catheters (1 RCT each, SOE: low). Differences between prostaglandins and placebo for cervical ripening were small and not significantly different for cesarean delivery (12 RCTs), shoulder dystocia (3 RCTs), or uterine infection (7 RCTs) (SOE: low). These findings did not change according to the specific prostaglandin, route of administration, study quality, or gestational age. Small, nonsignificant differences in the frequency of cesarean delivery (6 RCTs) and uterine infection (3 RCTs) were also found between dinoprostone and either membrane sweeping or expectant management (SOE: low). These findings did not change according to the specific prostaglandin or study quality. Evidence on other comparisons (e.g., single-balloon catheter vs. dinoprostone) or other outcomes was insufficient. For all comparisons, there was insufficient evidence on other important outcomes such as perinatal mortality and time from admission to vaginal birth. 
Limitations of the evidence include the quantity, quality, and sample sizes of trials for specific interventions, particularly rare harm outcomes. Conclusions. In women with low-risk pregnancies, the risk of cesarean delivery and fetal, neonatal, or maternal harms using either dinoprostone or single-balloon catheters was not significantly different for cervical ripening in the outpatient versus inpatient setting, and similar when compared with placebo, expectant management, or membrane sweeping in the outpatient setting. This evidence is low strength, and future studies are needed to confirm these findings.
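The I² inconsistency measure used in the review can be computed from per-study effect estimates and standard errors. This NumPy sketch uses invented numbers, not the review's data:

```python
import numpy as np

# hypothetical per-study effects (log risk ratios) and standard errors
effects = np.array([0.5, -0.3, 0.6, 0.0, 0.4])
se = np.array([0.10, 0.12, 0.15, 0.08, 0.20])

w = 1.0 / se**2                             # inverse-variance weights
pooled = (w * effects).sum() / w.sum()      # fixed-effect pooled estimate
Q = (w * (effects - pooled)**2).sum()       # Cochran's Q statistic
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0         # % of variation due to heterogeneity
print(round(I2, 1))
```

I² near 0% means study-to-study variation is consistent with chance; values this high (≈90%) would signal substantial inconsistency and weaken the strength-of-evidence grading.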
APA, Harvard, Vancouver, ISO, and other styles