
Journal articles on the topic 'Statistical size effect'


Consult the top 50 journal articles for your research on the topic 'Statistical size effect.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Sorić, Branko. "Statistical “Discoveries” and Effect-Size Estimation." Journal of the American Statistical Association 84, no. 406 (June 1989): 608. http://dx.doi.org/10.2307/2289950.

2

Sorić, Branko. "Statistical “Discoveries” and Effect-Size Estimation." Journal of the American Statistical Association 84, no. 406 (June 1989): 608–10. http://dx.doi.org/10.1080/01621459.1989.10478811.

3

Kim, Hae-Young. "Statistical notes for clinical researchers: effect size." Restorative Dentistry & Endodontics 40, no. 4 (2015): 328. http://dx.doi.org/10.5395/rde.2015.40.4.328.

4

Aigner, Roman, Sebastian Pomberger, Martin Leitner, and Michael Stoschka. "On the Statistical Size Effect of Cast Aluminium." Materials 12, no. 10 (May 14, 2019): 1578. http://dx.doi.org/10.3390/ma12101578.

Abstract:
Manufacturing-process-based imperfections can reduce the theoretical fatigue strength since they can be considered as pre-existent microcracks. The statistical distribution of fatigue fracture initiating defect sizes also varies with the highly-stressed volume, since the probability of a larger highly-stressed volume to inherit a potentially critical defect is elevated. This fact is widely known by the scientific community as the statistical size effect. The assessment of this effect within this paper is based on the statistical distribution of defect sizes in a reference volume V0 compared to an arbitrarily enlarged volume Vα. By implementation of the crack resistance curve in the Kitagawa–Takahashi diagram, a fatigue assessment model, based on the volume-dependent probability of occurrence of inhomogeneities, is set up, leading to a multidimensional fatigue assessment map. It is shown that state-of-the-art methodologies for the evaluation of the statistical size effect can lead to noticeable over-sizing in fatigue design of approximately 10%. On the other hand, the presented approach, which links the statistically based distribution of defect sizes in an arbitrary highly-stressed volume to a crack-resistance-dependent Kitagawa–Takahashi diagram, leads to a more accurate fatigue design with a maximal conservative deviation of 5% from the experimental validation data. Therefore, the introduced fatigue assessment map improves fatigue design considering the statistical size effect of lightweight aluminium cast alloys.
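As a rough illustration of the weakest-link reasoning behind the statistical size effect discussed in this abstract, the sketch below applies the classical Weibull volume-scaling relation rather than the paper's Kitagawa–Takahashi-based model; the reference strength, reference volume, and Weibull modulus are hypothetical example values.

```python
# Classical weakest-link (Weibull) scaling of strength with highly-stressed volume.
# Generic illustration of the statistical size effect, not the paper's
# Kitagawa-Takahashi-based model; sigma0, V0 and the Weibull modulus m are
# hypothetical example values.

def weibull_size_scaling(V, V0=1.0, sigma0=100.0, m=20.0):
    """Predicted strength of a volume V, given reference strength sigma0
    at reference volume V0 and Weibull modulus m."""
    return sigma0 * (V0 / V) ** (1.0 / m)

if __name__ == "__main__":
    for alpha in (1, 2, 5, 10):   # volume enlargement factors V_alpha / V_0
        s = weibull_size_scaling(V=alpha)
        print(f"V = {alpha:>2} x V0  ->  predicted strength = {s:5.1f} MPa")
```

Under this simplified model, enlarging the highly-stressed volume lowers the predicted strength by a few percent, which is the qualitative trend the paper quantifies with its defect-based assessment map.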
5

Lininger, Monica R., and Bryan L. Riemann. "Statistical Primer for Athletic Trainers: Understanding the Role of Statistical Power in Comparative Athletic Training Research." Journal of Athletic Training 53, no. 7 (July 1, 2018): 716–19. http://dx.doi.org/10.4085/1062-6050-284-17.

Abstract:
Objective: To describe the concept of statistical power as related to comparative interventions and how various factors, including sample size, affect statistical power. Background: Having a sufficiently sized sample for a study is necessary for an investigation to demonstrate that an effective treatment is statistically superior. Many researchers fail to conduct and report a priori sample-size estimates, which then makes it difficult to interpret nonsignificant results and causes the clinician to question the planning of the research design. Description: Statistical power is the probability of statistically detecting a treatment effect when one truly exists. The α level, a measure of differences between groups, the variability of the data, and the sample size all affect statistical power. Recommendations: Authors should conduct and provide the results of a priori sample-size estimations in the literature. This will assist clinicians in determining whether the lack of a statistically significant treatment effect is due to an underpowered study or to a treatment's actually having no effect.
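To make the sample-size point concrete, here is a minimal normal-approximation sketch (not the authors' procedure) of how power grows with the number of participants per group; the standardized effect size and α below are illustrative assumptions.

```python
# Normal-approximation sketch of how sample size drives statistical power for a
# two-group comparison of means. Effect size d and alpha are illustrative values,
# not taken from the article.
from scipy.stats import norm

def power_two_sample(n_per_group, d=0.5, alpha=0.05):
    """Approximate power of a two-sided two-sample test with n per group,
    standardized effect size d, and significance level alpha."""
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit) + norm.cdf(-noncentrality - z_crit)

for n in (10, 20, 50, 100):
    print(f"n = {n:>3} per group -> power ~ {power_two_sample(n):.2f}")
```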
6

Heidel, R. Eric. "Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size." Scientifica 2016 (2016): 1–5. http://dx.doi.org/10.1155/2016/8920418.

Abstract:
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
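A companion sketch, again using a normal approximation and illustrative values rather than anything from the article, solves the a priori question in the other direction: given a standardized effect size, α, and a target power, how many participants per group are needed?

```python
# Normal-approximation a priori sample size for a two-sided two-sample test.
# Effect sizes, alpha, and power are illustrative assumptions.
from math import ceil
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Required sample size per group for standardized effect size d."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

for d in (0.2, 0.5, 0.8):   # Cohen's conventional small/medium/large d
    print(f"d = {d}: ~{n_per_group(d)} per group for 80% power")
```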
7

Valladares-Neto, José. "Effect size: a statistical basis for clinical practice." Revista Odonto Ciência 33, no. 1 (December 30, 2018): 84. http://dx.doi.org/10.15448/1980-6523.2018.1.29437.

Abstract:
OBJECTIVE: Effect size (ES) is the statistical measure which quantifies the strength of a phenomenon and is commonly applied to observational and interventional studies. The aim of this review was to describe the conceptual basis of this measure, including its application, calculation and interpretation. RESULTS: As well as being used to detect the magnitude of the difference between groups, to verify the strength of association between predictor and outcome variables, and to calculate sample size and power, ES is also used in meta-analysis. ES formulas can be divided into these categories: I – Difference between groups, II – Strength of association, III – Risk estimation, and IV – Multivariate data. The d value was originally considered small (0.20 ≤ d ≤ 0.49), medium (0.50 ≤ d ≤ 0.79) or large (d ≥ 0.80); however, these cut-off limits are not consensual and could be contextualized according to a specific field of knowledge. In general, a larger score implies that a larger difference was detected. CONCLUSION: The ES report, in conjunction with the confidence interval and P value, aims to strengthen interpretation and prevent the misinterpretation of data, and thus leads to clinical decisions being based on scientific evidence studies.
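A minimal sketch of the most common case, the standardized difference between two groups: it computes Cohen's d from made-up data and maps it onto the conventional (and, as the review stresses, non-consensual) cut-offs.

```python
# Cohen's d for two independent groups, with the conventional interpretation labels.
# The sample data are simulated for illustration only.
import numpy as np

def cohens_d(x, y):
    """Cohen's d using a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

def label(d):
    d = abs(d)
    if d < 0.20:
        return "negligible"
    if d < 0.50:
        return "small"
    if d < 0.80:
        return "medium"
    return "large"

rng = np.random.default_rng(0)
treatment = rng.normal(1.6, 1.0, 40)   # hypothetical outcome scores
control = rng.normal(1.0, 1.0, 40)
d = cohens_d(treatment, control)
print(f"d = {d:.2f} ({label(d)})")
```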
8

Sheth, Bhavin, and Jasmine Patel. "Human Perception of Statistical Significance and Effect Size." Journal of Vision 15, no. 12 (September 1, 2015): 337. http://dx.doi.org/10.1167/15.12.337.

9

Bates, Barry T., Janet S. Dufek, and Howard P. Davis. "The effect of trial size on statistical power." Medicine & Science in Sports & Exercise 24, no. 9 (September 1992): 1059–65. http://dx.doi.org/10.1249/00005768-199209000-00017.

10

Clark-Carter, David. "Effect Size and Statistical Power in Psychological Research." Irish Journal of Psychology 28, no. 1-2 (January 2007): 3–12. http://dx.doi.org/10.1080/03033910.2007.10446244.

11

Tomaszewski, Tomasz. "Statistical Size Effect in Fatigue Properties for Mini-Specimens." Materials 13, no. 10 (May 22, 2020): 2384. http://dx.doi.org/10.3390/ma13102384.

Abstract:
The study verifies the sensitivity of selected construction materials (S235JR structural steel and 1.4301 stainless steel) to the statistical size effect. The P–S–N curves were determined experimentally under high-cycle fatigue conditions for two specimen sizes (mini-specimen and standard specimen). The results were analyzed using a probabilistic model of the three-parameter Weibull cumulative distribution function. The analysis included the evaluation of the technological process effects on the results based on the material microstructure near the surface layer and the macro-fractography. The differences in the susceptibility to the size effect validated the applicability of the test method to mini-specimens and showed different populations of the distribution of critical material defects.
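For orientation, the probabilistic model named above is the three-parameter Weibull cumulative distribution function; the sketch below evaluates it as a failure probability at a few stress amplitudes, with location, scale, and shape parameters invented for illustration rather than fitted to the article's data.

```python
# Three-parameter Weibull cumulative distribution function of the kind used for
# probabilistic P-S-N evaluation. The threshold (location), scale, and shape
# parameters are invented example values, not fitted to the article's data.
import numpy as np

def weibull3_cdf(s, threshold, scale, shape):
    """Failure probability at stress amplitude s (MPa)."""
    s = np.asarray(s, dtype=float)
    z = np.clip((s - threshold) / scale, 0.0, None)
    return 1.0 - np.exp(-z ** shape)

stress = np.array([200.0, 220.0, 240.0, 260.0])
print(weibull3_cdf(stress, threshold=180.0, scale=60.0, shape=2.5))
```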
12

Makkonen, M. "Statistical size effect in the fatigue limit of steel." International Journal of Fatigue 23, no. 5 (May 2001): 395–402. http://dx.doi.org/10.1016/s0142-1123(01)00003-2.

13

Wilkerson, Matt, and Mary R. Olson. "Misconceptions About Sample Size, Statistical Significance, and Treatment Effect." Journal of Psychology 131, no. 6 (November 1997): 627–31. http://dx.doi.org/10.1080/00223989709603844.

14

Li, Zheng, and Hartmut Pasternak. "Statistical size effect of flexural members in steel structures." Journal of Constructional Steel Research 144 (May 2018): 176–85. http://dx.doi.org/10.1016/j.jcsr.2018.01.025.

15

Nelson, Matthew S., Alese Wooditch, and Lisa M. Dario. "Sample size, effect size, and statistical power: a replication study of Weisburd’s paradox." Journal of Experimental Criminology 11, no. 1 (July 30, 2014): 141–63. http://dx.doi.org/10.1007/s11292-014-9212-9.

16

Oberreiter, Matthias, Sebastian Pomberger, Martin Leitner, and Michael Stoschka. "Validation Study on the Statistical Size Effect in Cast Aluminium." Metals 10, no. 6 (May 27, 2020): 710. http://dx.doi.org/10.3390/met10060710.

Abstract:
Imperfections due to the manufacturing process can significantly affect the local fatigue strength of the bulk material in cast aluminium alloys. Most components possess several sections of varying microstructure, where each of them may inherit a different highly-stressed volume (HSV). Even in cases of homogeneous local casting conditions, the statistical distribution parameters of failure-causing defect sizes change significantly, since for a larger highly-stressed volume the probability of enlarged critical defects is elevated. This impact of differing highly-stressed volume is commonly referred to as the statistical size effect. In this paper, the study of the statistical size effect on cast material considering partial highly-stressed volumes is based on the comparison of a reference volume V0 and an arbitrarily enlarged, but disconnected, volume Vα utilizing another specimen geometry. Thus, the behaviour of disconnected highly-stressed volumes within one component in terms of fatigue strength and resulting defect distributions can be assessed. The experimental results show that doubling of the highly-stressed volume leads to a decrease in fatigue strength of 5% and shifts the defect distribution towards larger defect sizes. The highly-stressed volume is numerically determined, and the applicable element size is obtained by a parametric study. Finally, the validation with a previously developed fatigue strength assessment model by R. Aigner et al. leads to a conservative fatigue design with a deviation of only about 0.3% for cast aluminium alloy.
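A back-of-the-envelope check, assuming pure weakest-link (Weibull) scaling, which is only a simplification of the defect-based model validated in the paper: the reported 5% drop in fatigue strength for a doubled highly-stressed volume corresponds to an effective Weibull modulus of roughly 13.5.

```python
# What effective Weibull modulus m reproduces a 5% strength drop when the
# highly-stressed volume is doubled, assuming sigma2/sigma1 = (V1/V2)**(1/m)?
# This weakest-link assumption is a simplification, not the paper's model.
from math import log

volume_ratio = 2.0        # V2 / V1 (doubled highly-stressed volume)
strength_ratio = 0.95     # 5% drop in fatigue strength, as reported

m = log(volume_ratio) / log(1.0 / strength_ratio)
print(f"effective Weibull modulus m ~ {m:.1f}")   # ~ 13.5
```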
17

Bažant, Zdeněk P. "Probability distribution of energetic-statistical size effect in quasibrittle fracture." Probabilistic Engineering Mechanics 19, no. 4 (October 2004): 307–19. http://dx.doi.org/10.1016/j.probengmech.2003.09.003.

18

Tabiei, Ala, and Jianmin Sun. "Statistical aspects of strength size effect of laminated composite materials." Composite Structures 46, no. 3 (November 1999): 209–16. http://dx.doi.org/10.1016/s0263-8223(99)00056-2.

19

Dufek, Janet S., Barry T. Bates, and Howard P. Davis. "The effect of trial size and variability on statistical power." Medicine & Science in Sports & Exercise 27, no. 2 (February 1995): 288–95. http://dx.doi.org/10.1249/00005768-199502000-00021.

20

Cutter, Gary. "Effect size or statistical significance, where to put your money." Multiple Sclerosis and Related Disorders 38 (February 2020): 101490. http://dx.doi.org/10.1016/j.msard.2019.101490.

21

Téllez, Arnoldo, Cirilo H. García, and Victor Corral-Verdugo. "Effect size, confidence intervals and statistical power in psychological research." Psychology in Russia: State of the Art 8, no. 3 (2015): 27–46. http://dx.doi.org/10.11621/pir.2015.0303.

22

Bažant, Zdeněk P., and Yunping Xi. "Statistical Size Effect in Quasi‐Brittle Structures: II. Nonlocal Theory." Journal of Engineering Mechanics 117, no. 11 (November 1991): 2623–40. http://dx.doi.org/10.1061/(asce)0733-9399(1991)117:11(2623).

23

Miočević, Milica, Holly P. O’Rourke, David P. MacKinnon, and Hendricks C. Brown. "Statistical properties of four effect-size measures for mediation models." Behavior Research Methods 50, no. 1 (March 24, 2017): 285–301. http://dx.doi.org/10.3758/s13428-017-0870-1.

24

Vedula, S. Swaroop. "Effect Size Estimation as an Essential Component of Statistical Analysis." Archives of Surgery 145, no. 4 (April 1, 2010): 401. http://dx.doi.org/10.1001/archsurg.2010.33.

25

Nassar, Yahya H. "Effect Size as a Complementally Statistical Procedure of Testing Hypotheses." Journal of Educational & Psychological Sciences 07, no. 02 (June 1, 2006): 35–60. http://dx.doi.org/10.12785/jeps/070202.

26

Le, Jia-Liang, and Bing Xue. "Energetic-statistical size effect in fracture of bimaterial hybrid structures." Engineering Fracture Mechanics 111 (October 2013): 106–15. http://dx.doi.org/10.1016/j.engfracmech.2013.09.008.

27

Bird, Kevin D., and Wayne Hall. "Statistical Power in Psychiatric Research." Australian & New Zealand Journal of Psychiatry 20, no. 2 (June 1986): 189–200. http://dx.doi.org/10.3109/00048678609161331.

Abstract:
Statistical power is neglected in much psychiatric research, with the consequence that many studies do not provide a reasonable chance of detecting differences between groups if they exist in the population. This paper attempts to improve current practice by providing an introduction to the essential quantities required for performing a power analysis (sample size, effect size, type 1 and type 2 error rates). We provide simplified tables for estimating the sample size required to detect a specified size of effect with a type 1 error rate of α and a type 2 error rate of β, and for estimating the power provided by a given sample size for detecting a specified size of effect with a type 1 error rate of α. We show how to modify these tables to perform power analyses for multiple comparisons in univariate and some multivariate designs. Power analyses for each of these types of design are illustrated by examples.
28

James Hung, H. M., Lu Cui, Sue-Jane Wang, and John Lawrence. "Adaptive Statistical Analysis Following Sample Size Modification Based on Interim Review of Effect Size." Journal of Biopharmaceutical Statistics 15, no. 4 (July 1, 2005): 693–706. http://dx.doi.org/10.1081/bip-200062855.

29

Owens, Max M., Alexandra Potter, Courtland S. Hyatt, Matthew Albaugh, Wesley K. Thompson, Terry Jernigan, Dekang Yuan, Sage Hahn, Nicholas Allgaier, and Hugh Garavan. "Recalibrating expectations about effect size: A multi-method survey of effect sizes in the ABCD study." PLOS ONE 16, no. 9 (September 23, 2021): e0257535. http://dx.doi.org/10.1371/journal.pone.0257535.

Abstract:
Effect sizes are commonly interpreted using heuristics established by Cohen (e.g., small: r = .1, medium r = .3, large r = .5), despite mounting evidence that these guidelines are mis-calibrated to the effects typically found in psychological research. This study’s aims were to 1) describe the distribution of effect sizes across multiple instruments, 2) consider factors qualifying the effect size distribution, and 3) identify examples as benchmarks for various effect sizes. For aim one, effect size distributions were illustrated from a large, diverse sample of 9/10-year-old children. This was done by conducting Pearson’s correlations among 161 variables representing constructs from all questionnaires and tasks from the Adolescent Brain and Cognitive Development Study® baseline data. To achieve aim two, factors qualifying this distribution were tested by comparing the distributions of effect size among various modifications of the aim one analyses. These modified analytic strategies included comparisons of effect size distributions for different types of variables, for analyses using statistical thresholds, and for analyses using several covariate strategies. In aim one analyses, the median in-sample effect size was .03, and values at the first and third quartiles were .01 and .07. In aim two analyses, effects were smaller for associations across instruments, content domains, and reporters, as well as when covarying for sociodemographic factors. Effect sizes were larger when thresholding for statistical significance. In analyses intended to mimic conditions used in “real-world” analysis of ABCD data, the median in-sample effect size was .05, and values at the first and third quartiles were .03 and .09. To achieve aim three, examples for varying effect sizes are reported from the ABCD dataset as benchmarks for future work in the dataset. In summary, this report finds that empirically determined effect sizes from a notably large dataset are smaller than would be expected based on existing heuristics.
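The following sketch mimics the kind of analysis described above, computing all pairwise Pearson correlations among a set of variables and summarizing the distribution of their absolute values; the data are simulated stand-ins, not ABCD data.

```python
# Distribution of pairwise Pearson correlations across many variables,
# summarized by quartiles of |r|. Simulated data, not the ABCD dataset.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame(rng.normal(size=(1000, 20)),
                  columns=[f"var{i}" for i in range(20)])   # stand-in variables

corr = df.corr(method="pearson").to_numpy()
iu = np.triu_indices_from(corr, k=1)          # unique variable pairs
effects = np.abs(corr[iu])

q1, median, q3 = np.percentile(effects, [25, 50, 75])
print(f"|r|: Q1 = {q1:.3f}, median = {median:.3f}, Q3 = {q3:.3f}")
```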
30

Gómez-Benito, Juana, M. Dolores Hidalgo, and José-Luis Padilla. "Efficacy of Effect Size Measures in Logistic Regression." Methodology 5, no. 1 (January 2009): 18–25. http://dx.doi.org/10.1027/1614-2241.5.1.18.

Abstract:
Statistical techniques based on logistic regression (LR) are adequate for the detection of differential item functioning (DIF) in dichotomous items. Nevertheless, they return more false positives (FPs) than do other DIF detection techniques. This paper compares the efficacy of DIF detection using the LR significance test and the estimation of effect size that these procedures provide using Nagelkerke's R2. The variables manipulated were different conditions of sample size, focal and reference group sample size ratio, amount of DIF, test length and percentage of test items with DIF. In addition, examinee responses were generated to simulate both uniform and nonuniform DIF (symmetric and asymmetric). In all cases, dichotomous response tests were used. The results show that the use of R2 as a strategy for detecting DIF obtained lower correct detection percentages than those obtained from significance tests. Moreover, the LR significance test showed adequate control of FP rates, close to the nominal 5%, although the rate was slightly higher than nominal when the sample size was smaller. However, when the effect size measure was used to detect DIF, the FP rates were lower and <1% for a wide range of conditions. In addition, a statistically significant main effect of the sample size variable was obtained. Thus, the FP percentages were higher when the sample size was small (100/100). The results obtained indicate that the use of R2 as a measure of effect size together with the statistical significance test reduces the rate of FPs.
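A minimal sketch of the logistic-regression DIF procedure with Nagelkerke's R2 as effect size, assuming simulated item responses (this is not the authors' simulation design): fit a model with the matching score only, fit one that adds group and interaction terms, and take the difference in Nagelkerke's R2.

```python
# Logistic-regression DIF effect size via the change in Nagelkerke's R2 between
# nested models. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

def nagelkerke_r2(fit):
    """Nagelkerke's R2 from a fitted statsmodels Logit result."""
    n = fit.nobs
    r2_cs = 1.0 - np.exp(2.0 * (fit.llnull - fit.llf) / n)
    return r2_cs / (1.0 - np.exp(2.0 * fit.llnull / n))

rng = np.random.default_rng(1)
n = 1000
score = rng.normal(size=n)                     # total test score (matching variable)
group = rng.integers(0, 2, size=n)             # 0 = reference, 1 = focal
logit_p = -0.2 + 1.2 * score + 0.5 * group     # uniform DIF built into the item
item = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X0 = sm.add_constant(np.column_stack([score]))                  # no DIF terms
X1 = sm.add_constant(np.column_stack([score, group, score * group]))
fit0 = sm.Logit(item, X0).fit(disp=0)
fit1 = sm.Logit(item, X1).fit(disp=0)

delta_r2 = nagelkerke_r2(fit1) - nagelkerke_r2(fit0)
print(f"DIF effect size (Nagelkerke delta R2) = {delta_r2:.3f}")
```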
31

He, Xi Xi, and Shan Wu. "Experimental Study on Size Effect in Strength of Pervious Concrete and its Associated Factor." Advanced Materials Research 634-638 (January 2013): 2684–92. http://dx.doi.org/10.4028/www.scientific.net/amr.634-638.2684.

Abstract:
Based on the test results of compressive strength and splitting strength of three kinds of cubic specimens of pervious concrete whose side lengths are 100 mm, 150 mm and 200 mm respectively, the size effect on strength and its associated impact factors, which include porosity and particle size of coarse aggregate, were analyzed. In the test, the water–cement ratio of every group of concrete mix proportion is constant. The main results are as follows: (1) the size effect on strength of pervious concrete is greater than that of ordinary concrete; (2) the size effect on splitting strength is greater than that on cubic compressive strength; (3) the size effect on splitting strength significantly increases with the increase of the aggregate size; (4) the Weibull modulus m obtained in the statistical test for compressive strength equals 9, which should be more than twice the value for tensile strength; (5) the size effect on strength of concrete is related to its statistical dispersion, that is, the size effect is more obvious when the dispersion coefficient Cv is greater; (6) Weibull's statistical size effect can be used to describe the size effect on strength indicators of concrete, and theoretical values of Weibull's statistical size effect derived from the experiment agree with the test results well; (7) the abnormal trends of size effect are related to the abnormal changes of the dispersion coefficient.
32

Shieh, Gwowen. "Effect size, statistical power, and sample size for assessing interactions between categorical and continuous variables." British Journal of Mathematical and Statistical Psychology 72, no. 1 (November 23, 2018): 136–54. http://dx.doi.org/10.1111/bmsp.12147.

33

Bažant, Zdeněk P., Yong Zhou, Drahomír Novák, and Isaac M. Daniel. "Size Effect on Flexural Strength of Fiber-Composite Laminates." Journal of Engineering Materials and Technology 126, no. 1 (January 1, 2004): 29–37. http://dx.doi.org/10.1115/1.1631031.

Abstract:
The size effect on the flexural strength (or modulus of rupture) of fiber-polymer laminate beams failing at fracture initiation is analyzed. A generalized energetic-statistical size effect law recently developed on the basis of a probabilistic nonlocal theory is introduced. This law represents asymptotic matching of three limits: (1) the power-law size effect of the classical Weibull theory, approached for infinite structure size; (2) the deterministic-energetic size effect law based on the deterministic nonlocal theory, approached for vanishing structure size; and (3) approach to the same law at any structure size when the Weibull modulus tends to infinity. The limited test data that exist are used to verify this formula and examine the closeness of fit. The results show that the new energetic-statistical size effect theory can match the existing flexural strength data better than the classical statistical Weibull theory, and that the optimum size effect fits with Weibull theory are incompatible with a realistic coefficient of variation of scatter in strength tests of various types of laminates. As for the energetic-statistical theory, its support remains entirely theoretical because the existing test data do not reveal any improvement of fit over its special case, the purely energetic theory—probably because the size range of the data is not broad enough or the scatter is too high, or both.
34

Capraro, Robert M. "Statistical Significance, Effect Size Reporting, and Confidence Intervals: Best Reporting Strategies." Journal for Research in Mathematics Education 35, no. 1 (January 1, 2004): 57. http://dx.doi.org/10.2307/30034803.

35

Tsurui, Akira, Hiroshi Ishikawa, Akihiro Utsumi, and Akira Sako. "The size effect on statistical properties of fatigue crack propagation process." Journal of the Society of Materials Science, Japan 35, no. 393 (1986): 578–82. http://dx.doi.org/10.2472/jsms.35.578.

36

Bosco, Frank A., Kulraj Singh, James G. Field, and Charles A. Pierce. "Effect-Size Magnitude Benchmarks: Implications for Scientific Progress and Statistical Inferences." Academy of Management Proceedings 2013, no. 1 (January 2013): 16542. http://dx.doi.org/10.5465/ambpp.2013.16542abstract.

37

McClung, D. M., and C. P. Borstad. "Probability distribution of energetic-statistical strength size effect in alpine snow." Probabilistic Engineering Mechanics 29 (July 2012): 53–63. http://dx.doi.org/10.1016/j.probengmech.2011.08.009.

38

Devaney, Thomas A. "Statistical Significance, Effect Size, and Replication: What Do the Journals Say?" Journal of Experimental Education 69, no. 3 (January 2001): 310–20. http://dx.doi.org/10.1080/00220970109599490.

39

Hawkins, D., E. Gallacher, and M. Gammell. "Statistical power, effect size and animal welfare: recommendations for good practice." Animal Welfare 22, no. 3 (August 1, 2013): 339–44. http://dx.doi.org/10.7120/09627286.22.3.339.

40

Cook, J. A., and J. Ranstam. "Statistical analyses that provide an effect size are to be preferred." British Journal of Surgery 103, no. 10 (August 24, 2016): 1365. http://dx.doi.org/10.1002/bjs.10237.

41

Leitner, Martin, Michael Vormwald, and Heikki Remes. "Statistical size effect on multiaxial fatigue strength of notched steel components." International Journal of Fatigue 104 (November 2017): 322–33. http://dx.doi.org/10.1016/j.ijfatigue.2017.08.002.

42

Wei, Xiaoding, Tobin Filleter, and Horacio D. Espinosa. "Statistical shear lag model – Unraveling the size effect in hierarchical composites." Acta Biomaterialia 18 (May 2015): 206–12. http://dx.doi.org/10.1016/j.actbio.2015.01.040.

43

Norouzian, Reza. "Sample Size Planning in Quantitative L2 Research." Studies in Second Language Acquisition 42, no. 4 (April 6, 2020): 849–70. http://dx.doi.org/10.1017/s0272263120000017.

Abstract:
Researchers are traditionally advised to plan for their required sample size such that achieving a sufficient level of statistical power is ensured (Cohen, 1988). While this method helps distinguish statistically significant effects from nonsignificant ones, it does not help achieve the higher goal of accurately estimating the actual size of those effects in an intended study. Adopting an open-science approach, this article presents an alternative approach, accuracy in effect size estimation (AESE), to sample size planning that ensures that researchers obtain adequately narrow confidence intervals (CIs) for their effect sizes of interest, thereby ensuring accuracy in estimating the actual size of those effects. Specifically, I (a) compare the underpinnings of power-analytic and AESE methods, (b) provide a practical definition of narrow CIs, (c) apply the AESE method to various research studies from the L2 literature, and (d) offer several flexible R programs to implement the methods discussed in this article.
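A small sketch of the AESE idea for a Pearson correlation, assuming the Fisher-z approximation: choose n so that the 95% confidence interval is adequately narrow. The target half-widths are illustrative; the article's own procedures are provided as R programs.

```python
# Accuracy-in-estimation planning for a correlation: pick n so the CI half-width
# on the Fisher-z scale does not exceed a target value. Targets are illustrative.
from math import ceil
from scipy.stats import norm

def n_for_halfwidth(halfwidth_z, conf=0.95):
    """Sample size giving a CI half-width of `halfwidth_z` on the Fisher-z scale,
    using SE(z) = 1 / sqrt(n - 3)."""
    z_crit = norm.ppf(1 - (1 - conf) / 2)
    return ceil((z_crit / halfwidth_z) ** 2 + 3)

for hw in (0.20, 0.10, 0.05):
    print(f"target half-width {hw:.2f} (z-scale) -> n = {n_for_halfwidth(hw)}")
```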
44

Rubin, Donald B. "Meta-Analysis: Literature Synthesis or Effect-Size Surface Estimation?" Journal of Educational Statistics 17, no. 4 (December 1992): 363–74. http://dx.doi.org/10.3102/10769986017004363.

Abstract:
A traditional meta-analysis can be thought of as a literature synthesis, in which a collection of observed studies is analyzed to obtain summary judgments about overall significance and size of effects. Many aspects of the current set of statistical tools for meta-analysis are highly useful—for example, the development of clear and concise effect-size indicators with associated standard errors. I am less happy, however, with more esoteric statistical techniques and their implied objects of estimation (i.e., their estimands) which are tied to the conceptualization of average effect sizes, weighted or otherwise, in a population of studies. In contrast to these average effect sizes of literature synthesis, I believe that the proper estimand is an effect-size surface, which is a function only of scientifically relevant factors, and which can only be estimated by extrapolating a response surface of observed effect sizes to a region of ideal studies. This effect-size surface perspective is presented and contrasted with the literature synthesis perspective. The presentation is entirely conceptual. Moreover, it is designed to be provocative, thereby prodding researchers to rethink traditional meta-analysis and ideally stimulating meta-analysts to attempt effect-surface estimations.
45

Feingold, Alan. "Effect of Parameterization on Statistical Power and Effect Size Estimation in Latent Growth Modeling." Structural Equation Modeling: A Multidisciplinary Journal 28, no. 4 (March 23, 2021): 609–21. http://dx.doi.org/10.1080/10705511.2021.1878895.

46

Morris, Richard W. "Testing Statistical Hypotheses about Rat Liver Foci." Toxicologic Pathology 17, no. 4_part_1 (April 1989): 569–78. http://dx.doi.org/10.1177/0192623389017004103.

Abstract:
Tests of statistical hypotheses concerning treatment effect on the development of hepatocellular foci can be carried out directly on two-dimensional observations made on histologic sections or on estimates of the density and volume of foci in three dimensions. Inferences about differences in the density or size of foci from tests based on two-dimensional observations, however, can be misleading. This is because both the number of focus cross-sections observed in a tissue section and the percent area occupied by foci can be expressed in terms of the number of foci per unit volume of liver tissue and the mean focus size. As a consequence, a treatment difference may be caused by a difference in the density of foci, their average size, or both. Of more serious concern is the possibility that failure to detect a treatment effect may occur not only when there is no treatment effect but also when the density and size of foci differ between treatments in such a way that their product is unchanged. This can happen if the effect of treatment is to increase the number of foci and decrease their average size, or vice versa. A similar difficulty of interpretation is associated with hypothesis tests based on average focus cross-section area. Tests based on estimates of the number of foci per unit volume and mean focus volume allow direct inference about the quantities of interest, but these estimates are unstable because they have large variances. Empirical estimates of statistical power for the Wilcoxon rank sum test and the t-test from data on control rats suggest power may be limited in experiments with group sizes of ten and low observed numbers of focus cross-sections. If hypothesis tests based on estimates of the density and size of foci are to form the basis for a bioassay, then the power of statistical tests used to identify treatment effects should be investigated.
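The confounding described above can be illustrated with the standard stereological identities for roughly spherical foci, namely that the expected number of focus profiles per unit section area is the numerical density times the mean diameter, while the area fraction equals the volume fraction; the treatment values below are invented.

```python
# Standard stereological identities for roughly spherical foci of one size:
# profiles per unit area N_A = N_V * mean_diameter, and area fraction = volume
# fraction. Two hypothetical treatments with different density and size of foci
# give the same expected profile count, illustrating the confounding described above.
from math import pi

def two_d_observations(n_v, mean_diameter):
    """Expected profiles per mm^2 and area fraction for foci of one size."""
    n_a = n_v * mean_diameter                         # profiles per unit area
    volume_fraction = n_v * (pi / 6.0) * mean_diameter ** 3
    return n_a, volume_fraction

# Treatment A: many small foci; treatment B: fewer, larger foci (made-up numbers).
for name, n_v, d in [("A", 8.0, 0.05), ("B", 4.0, 0.10)]:
    n_a, vv = two_d_observations(n_v, d)
    print(f"{name}: N_A = {n_a:.2f}/mm^2, area fraction = {vv:.5f}")
```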
47

Korneenkov, A. A., and I. V. Fanta. "Estimation of the effect size of clinical intervention in otorhinolaryngology." Russian Otorhinolaryngology 19, no. 2 (2020): 42–50. http://dx.doi.org/10.18692/1810-4800-2020-2-42-50.

Abstract:
The article discusses the concepts of measures of the effect of clinical interventions, quantitative methods for their calculation and interpretation, and their importance for making medical decisions. Algorithms for calculating effect measures are described for different clinical trial endpoints represented by quantitative (numerical) or binary types of variables, and for different types of effect size indicator (absolute, relative effect size, or clinical effectiveness indicator). It is shown that in the context of assessing the effect of therapeutic interventions and clinical efficacy in general, measuring the size of the effect provides a valuable tool for data analysis. Evaluation and interpretation of the effect of a therapeutic modality only on the basis of the level of significance p obtained by testing statistical hypotheses, without specifying the size of the effect, is not sufficient to understand the importance of the effect for clinical practice. To obtain an adequate quantitative assessment of the effect and its interpretation, the concept of effect size provides a convenient system of methods that is widely used. To illustrate the calculation and interpretation of effect size, published data from clinical studies of the effectiveness of local anesthesia in reducing pain after septoplasty were used. It is shown how, using the presented technique, it is possible to efficiently calculate and easily interpret measures of the effect of the application of local anesthesia. All calculations were performed in the statistical program R.
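A minimal sketch of the absolute and relative effect measures discussed for a binary endpoint, computed from made-up 2×2 counts rather than the study's septoplasty data.

```python
# Absolute risk reduction, relative risk, and number needed to treat for a binary
# endpoint. The counts are invented for illustration.
def binary_effect_measures(events_treat, n_treat, events_ctrl, n_ctrl):
    """Risk difference, relative risk, and number needed to treat."""
    risk_t = events_treat / n_treat
    risk_c = events_ctrl / n_ctrl
    rd = risk_c - risk_t                      # absolute risk reduction
    rr = risk_t / risk_c                      # relative risk
    nnt = float("inf") if rd == 0 else 1.0 / rd
    return rd, rr, nnt

rd, rr, nnt = binary_effect_measures(events_treat=12, n_treat=60,
                                     events_ctrl=24, n_ctrl=60)
print(f"absolute risk reduction = {rd:.2f}, relative risk = {rr:.2f}, NNT = {nnt:.1f}")
```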
48

van den Bergh, Don, Julia M. Haaf, Alexander Ly, Jeffrey N. Rouder, and Eric-Jan Wagenmakers. "A Cautionary Note on Estimating Effect Size." Advances in Methods and Practices in Psychological Science 4, no. 1 (January 2021): 251524592199203. http://dx.doi.org/10.1177/2515245921992035.

Abstract:
An increasingly popular approach to statistical inference is to focus on the estimation of effect size. Yet this approach is implicitly based on the assumption that there is an effect while ignoring the null hypothesis that the effect is absent. We demonstrate how this common null-hypothesis neglect may result in effect size estimates that are overly optimistic. As an alternative to the current approach, a spike-and-slab model explicitly incorporates the plausibility of the null hypothesis into the estimation process. We illustrate the implications of this approach and provide an empirical example.
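A toy version of the point being made, assuming a simple conjugate spike-and-slab model with invented numbers (the article's own model is richer): the model-averaged effect-size estimate shrinks the effect-assumed estimate by the posterior probability that the effect exists.

```python
# Toy spike-and-slab calculation: the model-averaged effect-size estimate is the
# conditional (effect-assumed) estimate weighted by P(effect exists | data).
# Observed effect, its SE, and the slab prior SD are invented example values.
from math import exp, sqrt, pi

def normal_pdf(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

d_hat, se, tau = 0.30, 0.15, 0.50   # observed effect, its SE, slab prior SD
prior_h1 = 0.5                      # prior probability that the effect is non-zero

# Marginal likelihoods under the spike (delta = 0) and the slab (delta ~ N(0, tau^2)).
m0 = normal_pdf(d_hat, 0.0, se)
m1 = normal_pdf(d_hat, 0.0, sqrt(se**2 + tau**2))
post_h1 = prior_h1 * m1 / (prior_h1 * m1 + (1 - prior_h1) * m0)

slab_mean = d_hat * tau**2 / (tau**2 + se**2)   # posterior mean given the effect exists
model_averaged = post_h1 * slab_mean            # the spike contributes 0

print(f"P(effect exists | data) = {post_h1:.2f}")
print(f"conditional estimate: {slab_mean:.2f}, model-averaged: {model_averaged:.2f}")
```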
49

Henson, Robin K. "Effect-Size Measures and Meta-Analytic Thinking in Counseling Psychology Research." Counseling Psychologist 34, no. 5 (September 2006): 601–29. http://dx.doi.org/10.1177/0011000005283558.

Abstract:
Effect sizes are critical to result interpretation and synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret the effect sizes that counseling psychologists use most frequently. To provide context, the author presents a brief history of statistical significance tests. Second, the author discusses the difference between statistical, practical, and clinical significance. Third, the author reviews and graphically demonstrates two common types of effect sizes, commenting on multivariate and corrected effect sizes. Fourth, the author emphasizes meta-analytic thinking and the potential role of confidence intervals around effect sizes. Finally, the author gives a hypothetical example of how to report and potentially interpret some effect sizes.
50

Borracci, Raúl A., Eduardo B. Arribalzaga, and Jorge Thierer. "Training in statistical analysis reduces the framing effect on medical students and residents in Argentina." Journal of Educational Evaluation for Health Professions 17 (September 1, 2020): 25. http://dx.doi.org/10.3352/jeehp.2020.17.25.

Abstract:
Purpose: The framing effect refers to a phenomenon whereby, when the same problem is presented using different representations of information, people make significant changes in their decisions. It aimed to explore whether the framing effect could be reduced in medical students and residents by teaching them the statistical concepts of effect size, probability, and sampling to be used in the medical decision-making process. Methods: Ninety-five second-year medical students and 100 second-year medical residents of Austral University and Buenos Aires University, Argentina were invited to participate in the study between March and June 2017. A questionnaire was developed to assess the different types of framing effects in medical situations. After an initial administration of the survey, students and residents were taught statistical concepts including effect size, probability, and sampling during two individual independent official biostatistics courses. After these interventions, the same questionnaire was randomly applied again, and pre- and post-intervention outcomes were compared for students and residents. Results: Almost every type of framing effect was reproduced either in the students or in the resident population. After teaching medical students and residents the analytical process behind statistical notions, a significant reduction in sample-size, risky-choice, pseudo-certainty, number-size, attribute, goal, and probabilistic formulation framing effects was observed. Conclusions: Decision-making of medical students and residents in simulated medical situations may be affected by different frame descriptions, and these framing effects can be partially reduced by training individuals in probability analysis and statistical sampling methods.