Dissertations / Theses on the topic 'Statistical size effect'

Consult the top 50 dissertations / theses for your research on the topic 'Statistical size effect.'

1

Lindh, Johan. "Common language effect size : A valuable step towards a more comprehensible presentation of statistical information?" Thesis, Stockholms universitet, Psykologiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-166438.

Full text
Abstract:
To help address the knowledge gap between science and practice, this study explores the potential benefits of using a more pedagogical effect size estimate when presenting statistical relationships. Traditional presentation has shown limitations, a major downside being that scientific findings are misinterpreted or misunderstood even by professionals. This study explores the possible effects of the non-traditional effect size estimate Common Language Effect Size (CLES) on different training outcomes for HR professionals, as well as the possible effect of cognitive system preference on training outcomes. Results show no overall effect of CLES on either training outcomes or cognitive system preference, although a significant positive effect of CLES on training outcome is found at the subfactor level. The results can be interpreted as indicating that non-traditional effect size estimates have a limited effect on training outcomes. This small but valuable piece toward bridging the knowledge gap is discussed.
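A minimal sketch (in Python, using NumPy and SciPy) of how the common language effect size discussed above can be computed from two independent samples; the sample data are made up, and the parametric version assumes normal distributions as in McGraw and Wong's formulation.

import numpy as np
from scipy import stats

def cles(treatment, control):
    # Nonparametric CLES: probability that a randomly drawn treatment score
    # exceeds a randomly drawn control score (ties counted as one half).
    treatment, control = np.asarray(treatment, float), np.asarray(control, float)
    greater = (treatment[:, None] > control[None, :]).mean()
    ties = (treatment[:, None] == control[None, :]).mean()
    return greater + 0.5 * ties

def cles_normal(mean_t, mean_c, sd_t, sd_c):
    # Parametric CLES assuming two independent normal distributions.
    return stats.norm.cdf((mean_t - mean_c) / np.sqrt(sd_t**2 + sd_c**2))

rng = np.random.default_rng(0)
print(cles(rng.normal(0.5, 1, 200), rng.normal(0.0, 1, 200)))  # roughly 0.64
print(cles_normal(0.5, 0.0, 1.0, 1.0))                         # about 0.638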
2

Moracz, Kelle. "Comprehension and Interpretation of Common Language Effect Size Displays." Bowling Green State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1573756511230833.

Full text
3

Coe, Robert, and Soto César Merino. "Effect Size: A guide for researchers and users." Pontificia Universidad Católica del Perú, 2003. http://repositorio.pucp.edu.pe/index/handle/123456789/100341.

Full text
Abstract:
The present article describes a method to quantify the magnitude of the difference between two measurements and/or the degree of the effect of a variable on a criterion, called the effect size measure, d. Its use in research and applied contexts provides quite descriptive complementary information, improving the interpretation of results obtained by traditional methods that emphasize statistical significance. There are several ways of interpreting d, and an example taken from an experimental study is presented to clarify the necessary concepts and calculations. The statistic is not robust to some conditions that can distort its interpretation, such as non-normality of the data; alternative methods to the d statistic are mentioned. We end with some conclusions cautioning about its appropriate use.
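As a companion to the abstract above, a minimal Python sketch of the effect size measure d (standardized mean difference with a pooled standard deviation); the function name and example scores are illustrative only, not taken from the article.

import numpy as np

def cohens_d(x, y):
    # d = (mean_x - mean_y) / pooled standard deviation
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

print(cohens_d([5.1, 6.0, 5.5, 6.2], [4.2, 4.9, 5.0, 4.4]))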
4

Xia, Yang. "A robust statistical method for determining material properties and indentation size effect using instrumented indentation testing." Thesis, Compiègne, 2014. http://www.theses.fr/2014COMP1982/document.

Full text
Abstract:
Instrumented indentation is a practical and powerful tool for probing the mechanical properties of materials at small scales. However, several sources of error (surface roughness, the indentation size effect, determination of the first contact point, etc.) affect instrumented indentation testing (e.g. the low reproducibility of the indentation curves) and lead to inaccuracies in the determination of the mechanical properties of the analyzed materials. An original approach for the accurate characterization of the mechanical properties of materials is developed in this thesis. It is based on a statistical analysis of the indentation curves that takes into account the error in determining the first contact point and the effects of surface roughness. The approach minimizes the distance (defined as the initial contact depth error) between the experimental indentation curves and those simulated with Bernhard's model, in order to generate a "unique" representative curve for the whole set of experimental curves. The proposed method computes the macro-hardness and the Young's modulus of the material from this representative curve while accounting for the errors due to surface roughness and the indentation size effect at shallow penetration depths. The robustness of the method is demonstrated by its application to different groups of specimens, i.e. materials with various mechanical properties, different surface preparation methods (polishing, sandblasting) and different indenter tips generating different local stress states. A quantitative link between surface roughness and the standard deviation of the initial contact depth error is established through a multi-scale analysis of the surface roughness. The proposed method thus characterizes the mechanical properties of materials without resorting to surface preparation, which may alter those properties (e.g. generation of residual stresses, surface contamination).
5

Li, Zheng [Verfasser], and Hartmut [Gutachter] Pasternak. "Statistical size effect in steel structure and corresponding influence on structural reliability / Zheng Li ; Gutachter: Hartmut Pasternak." Cottbus : BTU Cottbus - Senftenberg, 2018. http://d-nb.info/1164445286/34.

Full text
6

Bell, M. L., M. H. Fiero, H. M. Dhillon, V. J. Bray, and J. L. Vardy. "Statistical controversies in cancer research: using standardized effect size graphs to enhance interpretability of cancer-related clinical trials with patient-reported outcomes." Oxford University Press, 2017. http://hdl.handle.net/10150/626025.

Full text
Abstract:
Patient-reported outcomes (PROs) are becoming increasingly important in cancer studies, particularly with the emphasis on patient-centered outcomes research. However, multiple PROs, using different scales, with different directions of favorability are often used within a trial, making interpretation difficult. To enhance interpretability, we propose the use of a standardized effect size graph, which shows all PROs from a study in the same figure, on the same scale. Plotting standardized effects with their 95% confidence intervals (CIs) on a single graph that clearly shows the null value conveys a comprehensive picture of trial results. We demonstrate how to create such a graph using data from a randomized controlled trial that measured 12 PROs at two time points. The 24 effect sizes and CIs are shown on one graph and clearly indicate that the intervention is effective and sustained.
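A minimal sketch of the kind of standardized effect size graph described above (Python with matplotlib); the PRO labels, effect values, and confidence limits are hypothetical.

import numpy as np
import matplotlib.pyplot as plt

labels = ["Fatigue", "Anxiety", "Depression", "Quality of life"]   # hypothetical PROs
effects = np.array([0.42, 0.31, 0.18, 0.05])                       # standardized effects
ci_low = np.array([0.12, 0.02, -0.10, -0.24])
ci_high = np.array([0.72, 0.60, 0.46, 0.34])

y = np.arange(len(labels))
plt.errorbar(effects, y, xerr=[effects - ci_low, ci_high - effects], fmt="o", capsize=3)
plt.axvline(0.0, linestyle="--", color="grey")   # null value clearly marked
plt.yticks(y, labels)
plt.xlabel("Standardized effect size (95% CI)")
plt.tight_layout()
plt.show()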
7

Subbiah, Sathyan. "Some Investigations of Scaling Effects in Micro-Cutting." Diss., Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/13938.

Full text
Abstract:
The scaling of specific cutting energy is studied when micro-cutting ductile metals. A unified framework for understanding the scaling in specific cutting energy is first presented by viewing the cutting force as a combination of constant, increasing, and decreasing force components, the independent variable being the uncut chip thickness. Then, an attempt is made to isolate the constant force component by performing high rake angle orthogonal cutting experiments on OFHC Copper. The data shows a trend towards a constant cutting force component as the rake angle is increased. In order to understand the source of this constant force component the chip-root is investigated. By quickly stopping the spindle at low cutting speeds, the chip is frozen and the chip-workpiece interface is examined in a scanning electron microscope. Evidence of ductile tearing ahead of the cutting tool is seen at low and high rake angles. At higher cutting speeds a quick-stop device is used to obtain chip-roots. These experiments also clearly indicate evidence of ductile fracture ahead of the cutting tool in both OFHC Copper and Al-2024 T3. To model the cutting process with ductile fracture leading to material separation the finite element method is used. The model is implemented in a commercial finite element software using the explicit formulation. Material separation is modeled via element failure. The model is then validated using the measured cutting and thrust forces and used to study the energy consumed in cutting. As the thickness of layer removed is reduced the energy consumed in material separation becomes important. Simulations also show that the stress state ahead of the tool is favorable for ductile fracture to occur. Ductile fracture in three locations in an interface zone at the chip root is seen while cutting with edge radius tool. A hypothesis is advanced wherein an element gets wrapped around the tool edge and is stretched in two directions leading to fracture. The numerical model is then used to study the difference in stress state and energy consumption between a sharp tool and a tool with a non-zero edge radius.
8

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
9

Martinez, Silas G. "Aggression and boxing performance: Testing the channeling hypothesis with multiple statistical methodologies." Wright State University / OhioLINK, 2017. http://rave.ohiolink.edu/etdc/view?acc_num=wright1491929510847969.

Full text
10

Rypl, Rostislav, Miroslav Vořechovský, Britta Sköck-Hartmann, Rostislav Chudoba, and Thomas Gries. "Effect of twist, fineness, loading rate and length on tensile behavior of multifilament yarn." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2009. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1244041881719-95100.

Full text
Abstract:
The idea underlying the present study was to apply twisting in order to introduce different levels of transverse pressure. The modified structure affected both the bonding level and the evolution of the damage in the yarn. In order to isolate this effect in a broader context, additional parameters were included in the experiment design, namely effects of loading rate, specimen length and filament diameter (directly linked to the fineness of the yarn). These factors have been studied in various contexts by several authors. Some related studies on involved factors will be briefly reviewed.
11

Schäfer, Thomas, and Marcus A. Schwarz. "The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases." Frontiers Media SA, 2019. https://monarch.qucosa.de/id/qucosa%3A33749.

Full text
Abstract:
Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = 0.36) were much larger than effects from the latter (median r = 0.16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.
12

Liley, Albert James. "Statistical co-analysis of high-dimensional association studies." Thesis, University of Cambridge, 2017. https://www.repository.cam.ac.uk/handle/1810/270628.

Full text
Abstract:
Modern medical practice and science involve complex phenotypic definitions. Understanding patterns of association across this range of phenotypes requires co-analysis of high-dimensional association studies in order to characterise shared and distinct elements. In this thesis I address several problems in this area, with a general linking aim of making more efficient use of available data. The main application of these methods is in the analysis of genome-wide association studies (GWAS) and similar studies. Firstly, I developed methodology for a Bayesian conditional false discovery rate (cFDR) for levering GWAS results using summary statistics from a related disease. I extended an existing method to enable a shared control design, increasing power and applicability, and developed an approximate bound on false-discovery rate (FDR) for the procedure. Using the new method I identified several new variant-disease associations. I then developed a second application of shared control design in the context of study replication, enabling improvement in power at the cost of changing the spectrum of sensitivity to systematic errors in study cohorts. This has application in studies on rare diseases or in between-case analyses. I then developed a method for partially characterising heterogeneity within a disease by modelling the bivariate distribution of case-control and within-case effect sizes. Using an adaptation of a likelihood-ratio test, this allows an assessment to be made of whether disease heterogeneity corresponds to differences in disease pathology. I applied this method to a range of simulated and real datasets, enabling insight into the cause of heterogeneity in autoantibody positivity in type 1 diabetes (T1D). Finally, I investigated the relation of subtypes of juvenile idiopathic arthritis (JIA) to adult diseases, using modified genetic risk scores and linear discriminants in a penalised regression framework. The contribution of this thesis is in a range of methodological developments in the analysis of high-dimensional association study comparison. Methods such as these will have wide application in the analysis of GWAS and similar areas, particularly in the development of stratified medicine.
13

Awuor, Risper Akelo. "Effect of Unequal Sample Sizes on the Power of DIF Detection: An IRT-Based Monte Carlo Study with SIBTEST and Mantel-Haenszel Procedures." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28321.

Full text
Abstract:
This simulation study focused on determining the effect of unequal sample sizes on the statistical power of the SIBTEST and Mantel-Haenszel procedures for detection of DIF of moderate and large magnitudes. Item parameters were estimated by, and generated with, the 2PLM using WinGen2 (Han, 2006). MULTISIM was used to simulate ability estimates and to generate response data that were analyzed by SIBTEST. The SIBTEST procedure with regression correction was used to calculate the DIF statistics, namely the DIF effect size and the statistical significance of the bias. The older SIBTEST was used to calculate the DIF statistics for the M-H procedure. SAS provided the environment in which the ability parameters were simulated, response data were generated, and DIF analyses were conducted. Test items were observed to determine if a priori manipulated items demonstrated DIF. The study results indicated that with unequal samples in any ratio, M-H had better Type I error rate control than SIBTEST. The results also indicated that not only the ratios, but also the sample size and the magnitude of DIF, influenced the error rate behavior of SIBTEST and M-H. With small samples and moderate DIF magnitude, Type II errors were committed by both M-H and SIBTEST when the reference-to-focal group sample size ratio was 1:.10, due to low observed statistical power and inflated Type I error rates.
Ph. D.
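For orientation, a minimal Python sketch of the Mantel-Haenszel common odds ratio that underlies the M-H DIF statistic, with the ETS delta transformation included for reference; the stratified 2x2 tables below are hypothetical, and the sketch does not reproduce the SIBTEST procedure.

import numpy as np

def mantel_haenszel_dif(strata):
    # strata: list of 2x2 tables per matched ability level,
    # [[ref_correct, ref_incorrect], [foc_correct, foc_incorrect]]
    num = den = 0.0
    for (a, b), (c, d) in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    alpha_mh = num / den
    return alpha_mh, -2.35 * np.log(alpha_mh)   # common odds ratio and MH D-DIF (delta scale)

strata = [[[40, 10], [30, 20]], [[25, 25], [20, 30]]]
print(mantel_haenszel_dif(strata))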
14

Hollmann, Christian. "Die Übertragbarkeit von Schwingfestigkeitseigenschaften im Örtlichen Konzept." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2004. http://nbn-resolving.de/urn:nbn:de:swb:14-1089643211437-01340.

Full text
Abstract:
The local strain concept serves to estimate the service life of cyclically loaded structures. Here the fatigue strength of the unnotched material coupon is taken as the basis of the calculation for notched components. This represents a distinct simplification of the complex fatigue process, because it disregards relevant influences that determine strength and durability; as a result, the reliability of the calculation is not yet satisfactory. The present investigation first classifies the reasons for this problem of transferability between the notched component and the unnotched specimen. A simple but flexible multiplicative approach is used to describe the technological, statistical and stress-gradient effects. To consider the technological influences and the statistical size effect numerically, known relationships and procedures are adopted and adapted to the methodology of the concept. To capture gradient effects, a new stress-relief concept was developed: gradient effects were isolated from a comprehensive database of experimental results, and a statistical analysis identified the relevant variables that govern the stress relief. From these, an equation is derived that enables the gradient effect on fatigue strength to be computed. The developed stress-relief concept allows a component-related strain(-parameter)-life curve to be estimated. Lifetime predictions based on it are far more reliable than those based on materials data alone, as verified by a check against an extensive and independent database.
15

Noronha, José M. B. "Statistical mechanics of ideal quantum gases : finite size effects." Thesis, University of Newcastle Upon Tyne, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.247828.

Full text
16

Tse, Kwok Ho. "Sample size calculation : influence of confounding and interaction effects /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?MATH%202006%20TSE.

Full text
17

Sutherland, Leigh Stuart. "An investigation into composites size effects using statistically designed experiments." Thesis, University of Southampton, 1997. https://eprints.soton.ac.uk/378727/.

Full text
18

Hembree, David. "The robustness of confidence intervals for effect size in one way designs with respect to departures from normality." Kansas State University, 2012. http://hdl.handle.net/2097/13676.

Full text
Abstract:
Master of Science
Department of Statistics
Paul Nelson
Effect size is a concept that was developed to bridge the gap between practical and statistical significance. In the context of completely randomized one way designs, the setting considered here, inference for effect size has only been developed under normality. This report is a simulation study investigating the robustness of nominal 0.95 confidence intervals for effect size with respect to departures from normality in terms of their coverage rates and lengths. In addition to the normal distribution, data are generated from four non-normal distributions: logistic, double exponential, extreme value, and uniform. The report discovers that the coverage rates of the logistic, double exponential, and extreme value distributions drop as effect size increases, while, as expected, the coverage rate of the normal distribution remains very steady at 0.95. In an interesting turn of events, the uniform distribution produced higher than 0.95 coverage rates, which increased with effect size. Overall, in the scope of the settings considered, normal theory confidence intervals for effect size are robust for small effect size and not robust for large effect size. Since the magnitude of effect size is typically not known, researchers are advised to investigate the assumption of normality before constructing normal theory confidence intervals for effect size.
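A minimal Python sketch of the kind of coverage simulation described above, checking an approximate normal-theory 95% confidence interval for a two-group standardized mean difference under a double exponential (Laplace) distribution; the interval uses a textbook large-sample standard error and is not necessarily the construction used in the report.

import numpy as np

rng = np.random.default_rng(1)

def coverage(true_d, n=25, reps=5000, dist="laplace"):
    draw = {
        "normal": rng.standard_normal,
        "laplace": lambda size: rng.laplace(0.0, 1.0 / np.sqrt(2.0), size),  # unit variance
    }[dist]
    hits = 0
    for _ in range(reps):
        x = draw(n) + true_d          # shifted group
        y = draw(n)
        sp = np.sqrt(((n - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (2 * n - 2))
        d = (x.mean() - y.mean()) / sp
        se = np.sqrt(2.0 / n + d**2 / (4.0 * n))   # large-sample SE of d (Hedges & Olkin)
        hits += (d - 1.96 * se <= true_d <= d + 1.96 * se)
    return hits / reps

for d in (0.2, 0.8):
    print(d, coverage(d))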
19

Wilson, Celia M. "Attenuation of the Squared Canonical Correlation Coefficient Under Varying Estimates of Score Reliability." Thesis, University of North Texas, 2010. https://digital.library.unt.edu/ark:/67531/metadc30528/.

Full text
Abstract:
Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability. Monte Carlo simulation methodology was used to fulfill the purpose of this study. Initially, data populations with various manipulated conditions were generated (N = 100,000). Subsequently, 500 random samples were drawn with replacement from each population, and the data were subjected to canonical correlation analyses. The canonical correlation results were then analyzed using descriptive statistics and an ANOVA design to determine under which condition(s) the squared canonical correlation coefficient was most attenuated when compared to population Rc2 values. This information was used to determine what effect, if any, the different conditions considered in this study had on Rc2. The results from this Monte Carlo investigation clearly illustrated the importance of score reliability when interpreting study results. As evidenced by the outcomes presented, the more measurement error (lower reliability) present in the variables included in an analysis, the more attenuation experienced by the effect size(s) produced in the analysis, in this case Rc2. The results also demonstrated the roles that between- and within-set correlation, variable set size, and sample size played in the attenuation of the squared canonical correlation coefficient.
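As a rough intuition for why lower score reliability attenuates the observed effect, a minimal sketch of the classical bivariate attenuation formula; the canonical-correlation case studied in the dissertation is more involved, so this is illustrative only.

def attenuated_r2(true_r2, rel_x, rel_y):
    # Classical attenuation: observed r = true r * sqrt(rel_x * rel_y),
    # so the squared coefficient shrinks by the product of the reliabilities.
    return true_r2 * rel_x * rel_y

print(attenuated_r2(0.50, 0.80, 0.70))   # 0.28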
20

SUN, SHUYAN. "A Comprehensive Review of Effect Size Reporting and Interpreting Practices in Academic Journals in Education and Psychology." University of Cincinnati / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1216868724.

Full text
21

Lim, Sze-Wah. "Competing population : effects of diverse preferences and a finite-size scaling theory of dynamical transitions /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?PHYS%202006%20LIM.

Full text
22

Weisberger, Andrea Godwin. "Empirical Benchmarks for Interpreting Effect Sizes in Child Counseling Research." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984168/.

Full text
Abstract:
The goal of this study was to establish empirical benchmarks for Cohen's d in child counseling research. After initial review of over 1,200 child intervention research studies published from 1990 to 2016, 41 randomized clinical trials were identified in which intervention and control groups were compared with children 3-12 years old (N = 3,586). Upon identification or calculation of a Cohen's d for each study, I calculated a weighted mean d by multiplying the effect size of each study by the number of participants in that study then dividing by total number of effect sizes. The weighted mean accounted for study sample size and served as the suggested medium effect size benchmark. Results indicated effect size is impacted in large part by type of reporter, with parents apparently most sensitive to improvement and yielding higher effect sizes overall; teachers relatively less sensitive, perhaps due to difficulty observing change in a classroom setting; and children self-reporting lowest levels of improvement, perhaps reflecting a lack of sufficient measures of child development. Suggested medium benchmarks for Cohen's d in the child counseling literature are .70 for parent report, .50 for teacher report, and .36 for child self-report. Small and large benchmarks are suggested based on the use of standard deviations of the mean Cohen's d for each reporter.
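A minimal Python sketch of a sample-size-weighted mean of Cohen's d values, of the kind used above to derive a medium benchmark; note that the weighting below divides by the total sample size (the usual convention), and the study values are hypothetical.

import numpy as np

def weighted_mean_d(effect_sizes, sample_sizes):
    d = np.asarray(effect_sizes, float)
    n = np.asarray(sample_sizes, float)
    return float(np.sum(d * n) / np.sum(n))   # each d weighted by its study's N

# hypothetical parent-report effects from three trials
print(weighted_mean_d([0.55, 0.80, 0.70], [60, 120, 90]))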
23

Arran, Matthew Iain. "Avalanching on dunes and its effects : size statistics, stratification, & seismic surveys." Thesis, University of Cambridge, 2018. https://www.repository.cam.ac.uk/handle/1810/278773.

Full text
Abstract:
Geophysical research has long been interdisciplinary, with many phenomena on the Earth's surface involving multiple, linked processes that are best understood using a combination of techniques. This is particularly true in the case of grain flows on sand dunes, in which the sedimentary stratification with which geologists are concerned arises from the granular processes investigated by physicists and engineers, and the water permeation that interests hydrologists and soil scientists determines the seismic velocities of concern to exploration geophysicists. In this dissertation, I describe four projects conducted for the degree of Doctor of Philosophy, using a combination of laboratory experimentation, fieldwork, numerical simulation, and mathematical modelling to link avalanching on dunes to its effects on stratification, on the permeation of water, and on seismic surveys. Firstly, I describe experiments on erodible, unbounded, grain piles in a channel, slowly supplied with additional grains, and I demonstrate that the behaviour of the consequent, discrete avalanches alternates between two regimes, typified by their size statistics. Reconciling the `self-organised criticality' that several authors have predicted for such a system with the hysteretic behaviour that others have observed, the system exhibits quasi-periodic, system-spanning avalanches in one regime, while in the other avalanches pass at irregular intervals and have a power-law size distribution. Secondly, I link this power-law size distribution to the strata emplaced by avalanches on bounded grain piles. A low inflow rate of grains into an experimental channel develops a pile, composed of strata in which blue-dyed, coarser grains overlie finer grains. Associating stopped avalanche fronts with the `trapped kinks' described by previous authors, I show that, in sufficiently large grain piles, mean stratum width increases linearly with distance downslope. This implies the possibility of interpreting paleodune height from the strata of aeolian sandstones, and makes predictions for the structure of avalanche-associated strata within active dunes. Thirdly, I discuss investigations of these strata within active, Qatari barchan dunes, using dye-infiltration to image strata in the field and extracting samples across individual strata with sub-centimetre resolution. Downslope increases in mean stratum width are evident, while measurements of particle size distributions demonstrate preferential permeation of water along substrata composed of finer particles, explaining the strata-associated, localised regions of high water content discovered by other work on the same dunes. Finally, I consider the effect of these within-dune variations in water content on seismic surveys for oil and gas. Having used high performance computing to simulate elastic wave propagation in the vicinity of an isolated, barchan sand dune, I demonstrate that such a dune acts as a resonator, absorbing energy from Rayleigh waves and reemitting it over an extensive period of time. I derive and validate a mathematical framework that uses bulk properties of the dune to predict quantitative properties of the emitted waves, and I demonstrate the importance of internal variations in seismic velocity, resulting from variations in water content.
24

Guan, Tianyuan. "Sample Size Calculations in Simple Linear Regression: A New Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627667392849137.

Full text
25

Denbleyker, John Nickolas. "Comparing trend and gap statistics across tests: distributional change using ordinal methods and bayesian inference." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/2856.

Full text
Abstract:
The shortcomings of the proportion above cut (PAC) statistic used so prominently in the educational landscape render it a very problematic measure for making correct inferences with student test data. The limitations of PAC-based statistics are more pronounced with cross-test comparisons due to their dependency on cut-score locations. A better alternative is using mean-based statistics that can translate to parametric effect-size measures. However, these statistics can be problematic as well. When Gaussian assumptions are not met, reasonable transformations of a score scale produce non-monotonic outcomes. The present study develops a distribution-wide approach to summarize trend, gap, and gap trend (TGGT) measures. This approach counters the limitations of PAC-based measures and mean-based statistics in addition to addressing TGGT-related statistics in a manner more closely tied to both the data and questions regarding student achievement. This distribution-wide approach encompasses visual graphics such as percentile trend displays and probability-probability plots fashioned after Receiver Operating Characteristic (ROC) curve methodology. The latter is framed as the P-P plot framework that was proposed by Ho (2008) as a way to examine trends and gaps with more consideration given to questions of scale and policy decisions. The extension in this study involves three main components: (1) incorporating Bayesian inference, (2) using a multivariate structure for longitudinal data, and (3) accounting for measurement error at the individual level. The analysis is based on mathematical assessment data comprising Grade 3 to Grade 7 from a large Midwestern school district. Findings suggest that PP-based effect sizes provide a useful framework to measure aggregate test score change and achievement gaps. The distribution-wide perspective adds insight by examining both visually and numerically how trends and gaps are affected throughout the score distribution. Two notable findings using the PP-based effect sizes were (1) achievement gaps were very similar between the Focal and Audit test, and (2) trend measures were significantly larger for the Audit test. Additionally, measurement error corrections using the multivariate Bayesian CTT approach had effect sizes disattenuated from those based on observed scores. Also, the ordinal-based effect size statistics were generally larger than their parametric-based counterparts, and this disattenuation was practically equivalent to that seen by accounting for measurement error. Finally, the rank-based estimator of P(X>Y) via estimated true scores had smaller standard errors than its parametric-based counterpart.
26

Lee, Michelle Oi San. "Sample size calculation for testing an interaction effect in a logistic regression under measurement error model /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?MATH%202003%20LEE.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003.
Includes bibliographical references (leaves 66-67). Also available in electronic version. Access restricted to campus users.
27

Van, Duker Heather L. "The Effect of Birth Order on Infant Injury." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1306.

Full text
Abstract:
Pediatric injury is both common and expensive. Finding ways to prevent pediatric injury is a major public health concern. Many studies have investigated various aspects of pediatric injury, and some suggest that birth order may be an important risk factor for pediatric injury. This study further examined the relationship of birth order with pediatric injury, specifically studying the association of birth order with emergency department-attended infant injury while adjusting for other important family and individual covariates. Data for analysis included Utah birth certificate, death certificate, and hospital emergency department datasets, which were probabilistically linked to obtain complete demographic and injury information for infants born in 1999—2002. Three groups of risk factors were defined for analysis: maternal demographics, maternal risk behaviors, and infant demographics. Two outcome variables were defined for this study, “injury event” and “severe injury event.” Data was analyzed using generalized estimating equations (GEE). Birth order was associated with infant injury events and severe infant injury events. Birth order 4th or greater had the greatest effect for both injury outcomes. Additionally, several maternal characteristics were associated with infant injury events and severe infant injury events. In particular, maternal age and maternal smoking behavior were associated with increased infant injury risk. This study identified two targeted populations that are well-suited to injury prevention efforts: infants born to mothers who smoke, and infants born to mothers who are young and have many other children.
28

Granado, Elvalicia A. "Comparing Three Effect Sizes for Latent Class Analysis." Thesis, University of North Texas, 2015. https://digital.library.unt.edu/ark:/67531/metadc822835/.

Full text
Abstract:
Traditional latent class analysis (LCA) considers entropy R2 as the only measure of effect size. However, entropy may not always be reliable, a low boundary is not agreed upon, and good separation is limited to values of greater than .80. As applications of LCA grow in popularity, it is imperative to use additional sources to quantify LCA classification accuracy. Greater classification accuracy helps to ensure that the profile of the latent classes reflect the profile of the true underlying subgroups. This Monte Carlo study compared the quantification of classification accuracy and confidence intervals of three effect sizes, entropy R2, I-index, and Cohen’s d. Study conditions included total sample size, number of dichotomous indicators, latent class membership probabilities (γ), conditional item-response probabilities (ρ), variance ratio, sample size ratio, and distribution types for a 2-class model. Overall, entropy R2 and I-index showed the best accuracy and standard error, along with the smallest confidence interval widths. Results showed that I-index only performed well for a few cases.
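For reference, a minimal Python sketch of the relative entropy statistic (entropy R2) computed from posterior class-membership probabilities, assuming the standard formulation; the I-index and the LCA-specific Cohen's d compared in the study are not reproduced here.

import numpy as np

def entropy_r2(posteriors):
    # posteriors: (N, K) array of posterior class-membership probabilities
    p = np.clip(np.asarray(posteriors, float), 1e-12, 1.0)
    n, k = p.shape
    return 1.0 - np.sum(-p * np.log(p)) / (n * np.log(k))

# well-separated posteriors yield a value near 1 (this example prints about 0.8)
print(entropy_r2([[0.98, 0.02], [0.03, 0.97], [0.95, 0.05]]))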
29

Ashby, Darren. "The effect of the real world side impacts on occupant injuries : a finite element and statistical approach." Thesis, Coventry University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.325792.

Full text
30

Janse, Sarah A. "INFERENCE USING BHATTACHARYYA DISTANCE TO MODEL INTERACTION EFFECTS WHEN THE NUMBER OF PREDICTORS FAR EXCEEDS THE SAMPLE SIZE." UKnowledge, 2017. https://uknowledge.uky.edu/statistics_etds/30.

Full text
Abstract:
In recent years, statistical analyses, algorithms, and modeling of big data have been constrained due to computational complexity. Further, the added complexity of relationships among response and explanatory variables, such as higher-order interaction effects, makes identifying predictors using standard statistical techniques difficult. These difficulties are only exacerbated in the case of small sample sizes in some studies. Recent analyses have targeted the identification of interaction effects in big data, but the development of methods to identify higher-order interaction effects has been limited by computational concerns. One recently studied method is the Feasible Solutions Algorithm (FSA), a fast, flexible method that aims to find a set of statistically optimal models via a stochastic search algorithm. Although FSA has shown promise, its current limits include that the user must choose the number of times to run the algorithm. Here, statistical guidance is provided for this number of iterations by deriving a lower bound on the probability of obtaining the statistically optimal model in a given number of iterations of FSA. Moreover, logistic regression is severely limited when two predictors can perfectly separate the two outcomes. In the case of small sample sizes, this occurs quite often by chance, especially in the case of a large number of predictors. Bhattacharyya distance (B-distance) is proposed as an alternative method to address this limitation. However, little is known about the theoretical properties or distribution of B-distance. Thus, properties and the distribution of this distance measure are derived here. A hypothesis test and confidence interval are developed and tested on both simulated and real data.
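A minimal Python sketch of the Bhattacharyya distance between two multivariate normal distributions, using the standard closed form; the hypothesis test and confidence interval developed in the thesis are not reproduced here.

import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    # D_B = 1/8 (mu1-mu2)' S^-1 (mu1-mu2) + 1/2 ln( det(S) / sqrt(det(S1) det(S2)) ),
    # with S = (S1 + S2) / 2
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.atleast_2d(cov1), np.atleast_2d(cov2)
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

print(bhattacharyya_gaussian([0, 0], np.eye(2), [1, 1], np.eye(2)))   # 0.25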
31

Kim, Sung Won [Verfasser], and Martin [Akademischer Betreuer] Schumacher. "Statistical models for multivariate longitudinal data with application to the development of side effects during radiation therapy." Freiburg : Universität, 2017. http://d-nb.info/115849579X/34.

Full text
32

Demiral, Murat. "Enhanced gradient crystal-plasticity study of size effects in B.C.C. metal." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/11634.

Full text
Abstract:
Owing to continuous miniaturization, many modern high-technology applications such as medical and optical devices, thermal barrier coatings, electronics, micro- and nano-electro mechanical systems (MEMS and NEMS), the gems industry and semiconductors increasingly use components with sizes down to a few micrometers and even smaller. Understanding their deformation mechanisms and assessing their mechanical performance help to achieve new insights or design new material systems with superior properties through controlled microstructure at the appropriate scales. However, a fundamental understanding of mechanical response in surface-dominated structures, different from their bulk behaviours, is still elusive. In this thesis, the size effect in a single-crystal Ti alloy (Ti15V3Cr3Al3Sn) is investigated. To achieve this, nanoindentation and micropillar (with a square cross-section) compression tests were carried out in collaboration with Swiss Federal Laboratories for Materials Testing and Research (EMPA), Switzerland. Three-dimensional finite element models of compression and indentation with an implicit time-integration scheme incorporating a strain-gradient crystal-plasticity (SGCP) theory were developed to accurately represent deformation of the studied body-centered cubic metallic material. An appropriate hardening model was implemented to account for strain-hardening of the active slip systems, determined experimentally. The optimized set of parameters characterizing the deformation behaviour of the Ti alloy was obtained based on a direct comparison of simulations and the experiments. An enhanced model based on the SGCP theory (EMSGCP), accounting for an initial microstructure of samples in terms of different types of dislocations (statistically stored and geometrically necessary dislocations), was suggested and used in the numerical analysis. This meso-scale continuum theory bridges the gap between the discrete-dislocation dynamics theory, where simulations are performed at strain rates several orders of magnitude higher than those in experiments, and the classical continuum-plasticity theory, which cannot explain the dependence of mechanical response on a specimen's size since there is no length scale in its constitutive description. A case study was performed using a cylindrical pillar to examine, on the one hand, accuracy of the proposed EMSGCP theory and, on the other hand, its universality for different pillar geometries. An extensive numerical study of the size effect in micron-size pillars was also implemented. In addition, the anisotropic character of surface topographies around indents along different crystallographic orientations of single crystals obtained in numerical simulations was compared to experimental findings. The size effect in nano-indentation was studied numerically. The differences in the observed hardness values for various indenter types were investigated using the developed EMSGCP theory.
33

Rypl, Rostislav. "Tahová pevnost vláknitých svazků a kompozitů." Doctoral thesis, Vysoké učení technické v Brně. Fakulta stavební, 2016. http://www.nusl.cz/ntk/nusl-355616.

Full text
Abstract:
Technical textiles play a highly important role in today's material engineering. In fibrous composites, which are applied in a number of industrial branches ranging from aviation to civil engineering, technical textiles are used as the reinforcing or toughening constituent. With a growing number of production facilities for fibrous materials, the need for standardized and reproducible quality control procedures becomes urgent. The present thesis addresses the issue of the tensile strength of high-modulus multifilament yarns from both the theoretical and the experimental point of view. In both these aspects, novel approaches are introduced. Regarding the theoretical strength of fibrous yarns, a model for the length-dependent tensile strength is formulated, which distinguishes three asymptotes of the mean strength size effect curve. The transition between the model of independent parallel fibers, applicable at smaller gauge lengths, and the chain-of-bundles model, applicable at longer gauge lengths, is emphasized in particular. It is found that the transition depends on the stress transfer or anchorage length of the filaments and can be identified experimentally by means of standard tensile tests at different gauge lengths. In the experimental part of the thesis, the issue of stress concentration in the clamping is addressed. High-modulus yarns with brittle filaments are very sensitive to stress concentrations when loaded in tension, making the use of traditional tensile test methods difficult. A novel clamp adapter for the Statimat 4U yarn tensile test machine (producer: Textechno GmbH) has been developed and a prototype has been built. A test series comparing yarn strengths measured with the clamp adapter and with commonly used test methods has been performed and the results are discussed. Furthermore, they are compared with theoretical values using Daniels' statistical fiber-bundle model.
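As background to the length-dependent strength discussed above, a minimal Python sketch of the classical weakest-link (chain) scaling of mean strength for Weibull-distributed filament strength; this corresponds only to the long-length asymptote of the size effect curve treated in the thesis, and the parameters are hypothetical.

import numpy as np
from scipy.special import gamma

def mean_weibull_strength(length, scale, shape, ref_length=1.0):
    # Chain of independent links: mean strength scales as length^(-1/shape)
    return scale * (ref_length / length) ** (1.0 / shape) * gamma(1.0 + 1.0 / shape)

for L in (0.01, 0.1, 1.0, 10.0):   # gauge lengths in arbitrary units
    print(L, mean_weibull_strength(L, scale=2000.0, shape=5.0))   # hypothetical MPa values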
34

Rault, Claire. "Effets de site, endommagement et érosion des pentes dans les zones épicentrales des chaînes de montagnes actives." Thesis, Paris Sciences et Lettres (ComUE), 2019. http://www.theses.fr/2019PSLEE006/document.

Full text
Abstract:
Landslides are a major natural hazard that causes significant damage and casualties. Earthquakes are one of their main triggers in active mountain belts. In the epicentral area, the passage of seismic waves disturbs the local stress field, which can cause the slope stability threshold to be exceeded. The probability of co-seismic slope failure therefore depends on complex interactions between the ground motion and the geology and geometry of the slope. Few seismic data are available on mountain slopes, and the resolution of ground-motion models is generally low, yet ground motion can vary strongly from one ridge to another because of site effects. We document site effects across topography and show the complexity of slope responses to earthquakes using a seismic network set across a Taiwanese ridge. Six broadband seismometers were deployed along the profile of this roughly 3 km wide ridge. From March 2015 to June 2016, more than 2200 earthquakes (magnitude Ml>3 and hypocentral distance<200km) were recorded. Although the sites are within a few hundred meters of one another, they all show different characteristic responses arising from a complex combination of the geology and topography of each site. At the intermediate frequencies corresponding to ground-motion wavelengths that could affect slope stability, ground-motion amplification is mostly related to the local geology, and the topographic effect appears relatively negligible, as attested by standard indicators measured at the stations (PGA, PGV, Arias intensity, SSR). However, the duration of strong ground motion at the ridge crests and at the slope toe seems to be related to possible resonance effects and surface-wave generation due to the geometry of the topography. The strong contribution of geology to co-seismic landslide triggering is demonstrated by analyzing the position of landslides along hillslopes for the events triggered by the Northridge earthquake (Mw 6.7, 1994, USA), the Chi-Chi earthquake (Mw 7.6, 1999, Taiwan), and the Wenchuan earthquake (Mw 7.9, 2008, China). Although co-seismic landslides are statistically located higher on hillslopes than rainfall-induced landslides, we show that this tendency is strongly modulated by geology. Depending on how strongly geological structures such as faults or lithological contrasts in a watershed attract failures, slope failure occurs more or less far upslope, where the failure probability is highest. Slope mechanical properties are poorly constrained in mountain areas. Their geotechnical parameters are usually estimated from geological maps, but even for the same lithology they can differ strongly from one basin to another. Considering a simple friction model for seismic slope stability, we propose to invert Coulomb-related parameters from the slope distributions of the landslides triggered by the Northridge, Chi-Chi and Wenchuan earthquakes. The spatial variation of these parameters appears consistent, to first order, with the lithology and soil depth.
35

Mason, Brenden James. "The Effects of Options Markets on the Underlying Markets: Quasi-Experimental Evidence." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/503097.

Full text
Abstract:
Economics
Ph.D.
This dissertation consists of three essays in applied financial economics. The unifying theme is the use of financial regulation as quasi-experiments to understand the interrelationship between derivatives and the underlying assets. The first two essays use different quasi-experimental econometric techniques to answer the same research question: how does option listing affect the return volatility of the underlying stock? This question is difficult to answer empirically because being listed on an options exchange is not random. Volatility is one of the dimensions along which the options exchanges make their listing decisions. This selection bias confounds any causal effect that option listing may have. What is more, the options exchanges may list along unobservable dimensions. Such omitted variable bias can also confound any causal effect of option listing. My first essay overcomes these two biases by exploiting the exogenous variation in option listing that is created by the SEC-imposed option listing standards. Specifically, the SEC mandates that a stock must meet certain criteria in the underlying market before it can trade on an options exchange. For example, a stock needs to trade a total of 2.4 million shares over the previous 12 months before it can be listed. Since 2.4 million is an arbitrary number, stocks that are “just above” the 2.4 million threshold will be identical to stocks that are “just below” it, the sole difference being their probability of option listing. Accordingly, I use the 2.4 million threshold as an instrument for option listing in a fuzzy regression discontinuity design. I find that option listing causes a modest decrease in underlying volatility, a result that corroborates many previous empirical studies. My second essay attempts to estimate the effect of option listing for stocks that are “far away from” the 2.4 million threshold. I overcome the aforementioned omitted variable bias by fully exploiting the panel nature of the data. I control for the unobserved heterogeneity across stocks by implementing a two-way fixed effects model. Unlike most previous studies, I control for individual-level fixed effects at the firm level rather than at the industry level. My results show that option listing is associated with a decrease in volatility. Importantly, these results are only statistically significant in a model with firm-level fixed effects; they are insignificant with industry-level fixed effects. My third essay is a policy evaluation of the SEC’s Penny Pilot Program, a mandated decrease of the option tick size for various equity options classes. Several financial professionals claimed that this decrease would drive institutional investors out of the exchange-traded options market, channeling them into the opaque, over-the-counter (OTC) options market. I empirically test an implication of this hypothesis: if institutional investors have fled the exchange-traded options market for the OTC market, then it may take longer for information to be impounded into a stock’s price. Using the `price delay’ measure of Hou and Moskowitz (2005), I test whether stocks become less price efficient as a result of being included in the Penny Pilot Program. I perform this test using firm-level fixed effects on all classes that were included in the program. I confirm these results with synthetic control experiments for the classes included in Phase I of the Penny Pilot Program. 
Generally, I find no change in price efficiency of the underlying stocks, which suggests that the decrease in option tick size did not materially erode the price discovery that takes place in the exchange-traded equity options market. I also find evidence that the decrease in option tick size caused an increase in short selling for the piloted stocks.
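The identification strategy in the first essay lends itself to a compact illustration. The sketch below, with hypothetical variable names (volume, listed, volatility) and simulated data, shows a fuzzy regression discontinuity estimated by manual two-stage least squares, with an above-threshold indicator at 2.4 million shares instrumenting option listing. It is a minimal illustration of the design, not the dissertation's code, and the second-stage standard errors are not corrected for the generated regressor.

```python
# Illustrative sketch (not the dissertation's code): a fuzzy regression
# discontinuity estimated by two-stage least squares. The column names
# volume, listed, and volatility are hypothetical; the 2.4M-share listing
# threshold defines the instrument.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fuzzy_rd_2sls(df, cutoff=2.4e6, bandwidth=1.0e6):
    """Estimate the local effect of option listing on volatility."""
    # Keep observations close to the threshold and center the running variable.
    local = df[(df["volume"] - cutoff).abs() <= bandwidth].copy()
    local["run"] = local["volume"] - cutoff
    local["above"] = (local["run"] >= 0).astype(float)   # instrument

    # First stage: listing on the instrument and the running variable,
    # allowing different slopes on each side of the threshold.
    X1 = sm.add_constant(local[["above", "run"]].assign(run_x_above=local["run"] * local["above"]))
    first = sm.OLS(local["listed"], X1).fit()
    local["listed_hat"] = first.fittedvalues

    # Second stage: volatility on predicted listing plus the same controls.
    # (Standard errors here are not adjusted for the first stage.)
    X2 = sm.add_constant(local[["listed_hat", "run"]].assign(run_x_above=local["run"] * local["above"]))
    second = sm.OLS(local["volatility"], X2).fit()
    return first, second

# Example with simulated data (replace with a real stock panel):
rng = np.random.default_rng(0)
n = 5000
volume = rng.uniform(1.0e6, 4.0e6, n)
listed = (rng.uniform(size=n) < 0.2 + 0.5 * (volume >= 2.4e6)).astype(float)
volatility = 0.4 - 0.05 * listed + rng.normal(0, 0.05, n)
df = pd.DataFrame({"volume": volume, "listed": listed, "volatility": volatility})
_, second_stage = fuzzy_rd_2sls(df)
print(second_stage.params["listed_hat"])   # local effect of listing on volatility
```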
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
36

Turkoz, Ibrahim. "BLINDED EVALUATIONS OF EFFECT SIZES IN CLINICAL TRIALS: COMPARISONS BETWEEN BAYESIAN AND EM ANALYSES." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/234528.

Full text
Abstract:
Statistics
Ph.D.
Clinical trials are major and costly undertakings for researchers. Planning a clinical trial involves careful selection of the primary and secondary efficacy endpoints. The 2010 draft FDA guidance on adaptive designs acknowledges possible study design modifications, such as selection and/or order of secondary endpoints, in addition to sample size re-estimation. It is essential for the integrity of a double-blind clinical trial that the individual treatment allocation of patients remains unknown. Methods have been proposed for re-estimating the sample size of clinical trials, without unblinding treatment arms, for both categorical and continuous outcomes. Procedures that allow a blinded estimation of the treatment effect, using knowledge of trial operational characteristics, have been suggested in the literature. Clinical trials are designed to evaluate effects of one or more treatments on multiple primary and secondary endpoints. The multiplicity issues when there is more than one endpoint require careful consideration for controlling the Type I error rate. A wide variety of multiplicity approaches are available to ensure that the probability of making a Type I error is controlled within acceptable pre-specified bounds. The widely used fixed sequence gate-keeping procedures require prospective ordering of null hypotheses for secondary endpoints. This prospective ordering is often based on a number of untested assumptions about expected treatment differences, the assumed population variance, and estimated dropout rates. We wish to update the ordering of the null hypotheses based on estimating standardized treatment effects. We show how to do so while the study is ongoing, without unblinding the treatments, without losing the validity of the testing procedure, and while maintaining the integrity of the trial. Our simulations show that we can reliably order the standardized treatment effect, also known as the signal-to-noise ratio, even though we are unable to estimate the unstandardized treatment effect. In order to estimate the treatment difference in a blinded setting, we must define a latent variable substituting for the unknown treatment assignment. Approaches that employ the EM algorithm to estimate treatment differences in blinded settings do not provide reliable conclusions about ordering the null hypotheses. We developed Bayesian approaches that enable us to order secondary null hypotheses. These approaches are based on posterior estimation of signal-to-noise ratios. We demonstrate with simulation studies that our Bayesian algorithms perform better than existing EM algorithm counterparts for ordering effect sizes. Introducing informative priors for the latent variables, in settings where the EM algorithm has been used, typically improves the accuracy of parameter estimation in effect size ordering. We illustrate our method with a secondary analysis of a longitudinal study of depression.
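To make the blinded-estimation idea concrete, the sketch below implements a generic EM algorithm for a two-component normal mixture with a known 1:1 allocation ratio, treating the treatment label as the latent variable and returning the estimated standardized effect (signal-to-noise ratio). It is a textbook illustration under assumed normality and equal allocation, not the dissertation's EM or Bayesian procedures.

```python
# Illustrative sketch of blinded effect-size estimation with the EM algorithm:
# pooled outcomes are modeled as a two-component normal mixture with mixing
# weights fixed at 0.5 and the treatment label as the latent variable.
import numpy as np
from scipy.stats import norm

def blinded_em(y, n_iter=500, tol=1e-8):
    mu1, mu2 = np.percentile(y, 25), np.percentile(y, 75)  # crude starting values
    sigma = np.std(y)
    for _ in range(n_iter):
        # E-step: posterior probability of belonging to arm 1.
        d1 = norm.pdf(y, mu1, sigma)
        d2 = norm.pdf(y, mu2, sigma)
        w1 = d1 / (d1 + d2)
        # M-step: update the component means and the common standard deviation.
        mu1_new = np.sum(w1 * y) / np.sum(w1)
        mu2_new = np.sum((1 - w1) * y) / np.sum(1 - w1)
        var = np.sum(w1 * (y - mu1_new) ** 2 + (1 - w1) * (y - mu2_new) ** 2) / len(y)
        sigma_new = np.sqrt(var)
        converged = abs(mu1_new - mu1) + abs(mu2_new - mu2) + abs(sigma_new - sigma) < tol
        mu1, mu2, sigma = mu1_new, mu2_new, sigma_new
        if converged:
            break
    # The sign of the difference is not identified without unblinding, which is
    # why only the magnitude of the standardized effect can be used for ordering.
    return abs(mu1 - mu2) / sigma

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(0.5, 1.0, 200)])
print(blinded_em(y))   # estimate of the standardized difference (true value 0.5)
```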
Temple University--Theses
APA, Harvard, Vancouver, ISO, and other styles
37

Campbell, Marcus James. "A Simulation Method for Studying Effects of Site-Specific Clutter on SAR-GMTI Performance." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1525351957036997.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Hess, Melinda Rae. "Effect Sizes, Significance Tests, and Confidence Intervals: Assessing the Influence and Impact of Research Reporting Protocol and Practice." Scholar Commons, 2003. https://scholarcommons.usf.edu/etd/1390.

Full text
Abstract:
This study addresses research reporting practices and protocols by bridging the gap from the theoretical and conceptual debates typically found in the literature to more realistic applications using data from published research. Specifically, the practice of using findings of statistical analysis as the primary, and often only, basis for results and conclusions of research is investigated by computing effect sizes and confidence intervals and considering how their use might impact the strength of inferences and conclusions reported. Using a sample of published manuscripts from three peer-reviewed journals, central quantitative findings were expressed as dichotomous hypothesis test results, point estimates of effect sizes, and confidence intervals. Studies using three different types of statistical analyses were considered for inclusion: t-tests, regression, and Analysis of Variance (ANOVA). The differences in the substantive interpretations of results from these accomplished and published studies were then examined as a function of these different analytical approaches. Both quantitative and qualitative techniques were used to examine the findings. General descriptive statistical techniques were employed to capture the magnitude of studies and analyses that might have different interpretations if alternative methods of reporting findings were used in addition to traditional tests of statistical significance. Qualitative methods were then used to gain a sense of the impact on the wording used in the research conclusions of these other forms of reporting findings. It was discovered that non-significant results were more likely to require supporting evidence of effect size than significant results. Regardless of the outcome of significance tests, the addition of information from confidence intervals tended to heavily impact the findings resulting from those tests. The results were interpreted in terms of improving the reporting practices in applied research. Issues that were noted in this study relevant to the primary focus are discussed in general, with implications for future research. Recommendations are made regarding editorial and publishing practices, both for primary researchers and editors.
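As an illustration of the kind of re-expression performed in the study, the sketch below converts a reported independent-samples t statistic into Cohen's d with an approximate 95% confidence interval, using a common large-sample approximation for the variance of d; the numbers are hypothetical and the formula is a standard approximation, not necessarily the one used in the dissertation.

```python
# Sketch: re-expressing a reported independent-samples t-test as an effect size
# with an approximate 95% confidence interval. The variance formula for d is a
# common large-sample approximation, not an exact method.
import math

def d_from_t(t, n1, n2):
    d = t * math.sqrt(1.0 / n1 + 1.0 / n2)                      # Cohen's d from the t statistic
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2.0 * (n1 + n2))  # approximate sampling variance
    half = 1.959964 * math.sqrt(var_d)                          # half-width of a 95% interval
    return d, (d - half, d + half)

# A reported result such as t(58) = 2.10 with n1 = n2 = 30 (hypothetical numbers):
d, ci = d_from_t(2.10, 30, 30)
print(f"d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
# A 'significant' test can still correspond to an interval spanning a wide range
# of effect magnitudes, which is the reporting issue the study examines.
```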
APA, Harvard, Vancouver, ISO, and other styles
39

Book, Emil, and Linus Ekelöf. "A Multiple Linear Regression Model To Assess The Effects of Macroeconomic Factors On Small and Medium-Sized Enterprises." Thesis, KTH, Matematisk statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-254298.

Full text
Abstract:
Small and medium-sized enterprises (SMEs) have long been considered the backbone of any country's economy for their contribution to growth and prosperity. It is therefore of great importance that governments and legislators adopt policies that optimise the success of SMEs. Several years of strong growth and recent concerns about an impending recession have made this topic even more relevant, since small companies will have greater difficulty withstanding such an event. This thesis focuses on the effects of macroeconomic factors on SMEs in Sweden, using multiple linear regression. Data were collected monthly over a 10-year period, from 2009 to 2019. The end result was a five-variable model with a coefficient of determination of 98%.
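A minimal sketch of the kind of model described above is given below, with hypothetical macroeconomic regressors and simulated monthly data; the actual variables and data used in the thesis differ.

```python
# Minimal sketch of a multiple linear regression of an SME performance index on
# macroeconomic factors. Variable names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 120   # ten years of monthly observations
df = pd.DataFrame({
    "gdp_growth": rng.normal(0.2, 0.1, n),
    "inflation": rng.normal(0.15, 0.05, n),
    "interest_rate": rng.normal(1.0, 0.3, n),
    "exchange_rate": rng.normal(9.5, 0.4, n),
    "unemployment": rng.normal(7.0, 0.5, n),
})
# Hypothetical response: an index of SME activity.
df["sme_index"] = (100 + 8 * df["gdp_growth"] - 5 * df["inflation"]
                   - 3 * df["interest_rate"] + rng.normal(0, 0.5, n))

X = sm.add_constant(df.drop(columns="sme_index"))
model = sm.OLS(df["sme_index"], X).fit()
print(model.rsquared)     # coefficient of determination
print(model.summary())    # coefficients, t statistics, and a condition number for collinearity
```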
APA, Harvard, Vancouver, ISO, and other styles
40

Hansson, Helena. "Driving and restraining forces for economic and technical efficiency in dairy farms : what are the effects of technology and management? /." Uppsala : Dept. of Economics, Swedish University of Agricultural Sciences, 2007. http://epsilon.slu.se/2007108.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Bailey, Claire Elizabeth. "Three papers on side effects and modern contraceptive use among women in Ghana." Thesis, University of Southampton, 2009. https://eprints.soton.ac.uk/166579/.

Full text
Abstract:
This thesis investigates the issue of side effects and how they may act as a barrier to the use of modern contraceptive methods among women in Ghana. Three papers are presented, each addressing the issue using different sources of data and different methodologies. The disparate nature of the data sources and techniques used provides each paper with its own perspective on the research question, and each paper gives a unique insight into the topic. The aim of the first paper is to use a qualitative focus group methodology to explore in depth the way individuals perceive information about family planning. The study seeks to better define what is meant by the term 'fear of side effects' in this particular social context and to determine from what information and from what sources this fear is constructed. Overall, the findings of this study show that fear of side effects does act as a significant barrier to the use of temporary methods, and these fears result mainly from a large amount of negative information regarding side effects being passed through the social network. However, the events being recounted cannot be dismissed as myth or rumour, as they are most often based on real experiences. The second paper uses monthly data on contraceptive use and the experience of side effects from the calendar section of a longitudinal survey of women in Southern Ghana. Using life tables and a multi-level logistic discrete-time hazards model, this study analyses contraceptive discontinuation and how it relates to the concurrent self-reported experience of side effects. The results show that experiencing side effects is associated with a higher probability of discontinuation of the method and that counselling from health workers is extremely important in minimizing discontinuation rates. The third paper uses a sub-sample of women who are not current contraceptive users from the 2003 GDHS. The study uses multiple logistic regression to determine the association between exposure to family planning information, through mass media and interpersonal channels, and the probability that a respondent will cite fear of side effects as her main reason for not intending to use a contraceptive method in the future. The results show that the only family planning communication variable with a significant effect is receiving a message from a health worker, which increases the odds that fear of side effects is the main reason for not intending to use a method in the future. Overall, the socio-economic characteristics of those not intending to use a method in the future due to a fear of side effects are more similar to those of current users than to those of women not intending to use a method for other reasons.
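The second paper's discrete-time hazards approach can be sketched as a logistic regression on woman-month records, as below; the column names and simulated data are hypothetical, and the paper's multilevel specification would additionally include a random intercept for each woman.

```python
# Sketch of a discrete-time hazards analysis of contraceptive discontinuation:
# each row is one woman-month of method use, and a logistic regression of the
# monthly discontinuation indicator on duration and covariates approximates the
# hazard. Data and column names are simulated/hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for woman in range(300):
    side = 0
    for month in range(1, 25):
        side = max(side, int(rng.uniform() < 0.10))           # side effects reported so far
        hazard = 1.0 / (1.0 + np.exp(-(-3.0 + 0.9 * side)))   # true monthly discontinuation hazard
        event = int(rng.uniform() < hazard)
        rows.append({"woman": woman, "duration": month,
                     "side_effects": side, "discontinued": event})
        if event:
            break
person_months = pd.DataFrame(rows)

model = smf.logit("discontinued ~ duration + side_effects", data=person_months).fit(disp=False)
# Exponentiated coefficient: odds ratio of discontinuing in a given month
# when side effects have been reported.
print(np.exp(model.params["side_effects"]))
```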
APA, Harvard, Vancouver, ISO, and other styles
42

Liu, Tsunglin. "Physics and bioinformatics of RNA." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1141407392.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Leach, Lesley Ann Freeny. "Bias and Precision of the Squared Canonical Correlation Coefficient under Nonnormal Data Conditions." Thesis, University of North Texas, 2006. https://digital.library.unt.edu/ark:/67531/metadc5361/.

Full text
Abstract:
This dissertation: (a) investigated the degree to which the squared canonical correlation coefficient is biased in multivariate nonnormal distributions and (b) identified formulae that adjust the squared canonical correlation coefficient (Rc2) such that it most closely approximates the true population effect under normal and nonnormal data conditions. Five conditions were manipulated in a fully-crossed design to determine the degree of bias associated with Rc2: distribution shape, variable sets, sample size to variable ratios, and within- and between-set correlations. Very few of the condition combinations produced acceptable amounts of bias in Rc2, but those that did were all found with first function results. The sample size to variable ratio (n:v) was determined to have the greatest impact on the bias associated with Rc2 for the first, second, and third functions. The variable set condition also affected the accuracy of Rc2, but for the second and third functions only. The kurtosis levels of the marginal distributions (b2) and the between- and within-set correlations demonstrated little or no impact on the bias associated with Rc2. Therefore, it is recommended that researchers use n:v ratios of at least 10:1 in canonical analyses, although greater n:v ratios have the potential to produce even less bias. Furthermore, because it was determined that b2 did not impact the accuracy of Rc2, one can be somewhat confident that, with marginal distributions possessing homogeneous kurtosis levels ranging anywhere from -1 to 8, Rc2 will likely be as accurate as that resulting from a normal distribution. Because the majority of Rc2 estimates were extremely biased, it is recommended that all Rc2 effects, regardless of the function from which they result, be adjusted using an appropriate adjustment formula. If no rationale exists for the use of another formula, the Rozeboom-2 would likely be a safe choice given that it produced the greatest number of unbiased Rc2 estimates for the greatest number of condition combinations in this study.
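The small-sample bias documented here is easy to reproduce by simulation: with two completely independent variable sets, the population squared canonical correlation is zero, yet the sample value can be large when the n:v ratio is small. The sketch below computes the first squared canonical correlation directly from the data and, purely for illustration, applies a regression-style shrinkage adjustment; it is not the Rozeboom-2 formula evaluated in the dissertation.

```python
# Simulation sketch: bias of the first squared canonical correlation (Rc^2) when
# the two variable sets are independent (population Rc^2 = 0), for a small and a
# large sample-size-to-variable ratio. The shrinkage line is an illustrative
# regression-style adjustment only, not the Rozeboom-2 formula.
import numpy as np

def first_sq_canonical_corr(X, Y):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    def inv_sqrt(S):                                  # inverse symmetric square root
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T
    M = inv_sqrt(Xc.T @ Xc) @ (Xc.T @ Yc) @ inv_sqrt(Yc.T @ Yc)
    return np.linalg.svd(M, compute_uv=False)[0] ** 2  # largest squared canonical correlation

rng = np.random.default_rng(5)
p = q = 5
for n in (50, 500):                                   # n:v of 5:1 versus 50:1 with v = p + q = 10
    rc2 = np.mean([first_sq_canonical_corr(rng.normal(size=(n, p)),
                                           rng.normal(size=(n, q)))
                   for _ in range(200)])
    adjusted = 1 - (1 - rc2) * (n - 1) / (n - p - q - 1)   # illustrative shrinkage only
    print(f"n = {n}: mean sample Rc^2 = {rc2:.3f}, shrunken = {adjusted:.3f} (population value 0)")
```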
APA, Harvard, Vancouver, ISO, and other styles
44

Hess, Melinda Rae. "Effect sizes, significance tests, and confidence intervals [electronic resource] : assessing the influence and impact of research reporting protocol and practice / by Melinda Rae Hess." [Tampa, Fla.] : University of South Florida, 2003. http://purl.fcla.edu/fcla/etd/SFE0000148.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Kassiou, Evgenia. "Investigating the decrease of groundwater levels and the effect of fracture zone on recovery time: A case study of decrease in groundwater levels in a tunnel construction site in Vinsta, Stockholm." Thesis, KTH, Hållbar utveckling, miljövetenskap och teknik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-223619.

Full text
Abstract:
Groundwater is one of the main natural resources worldwide. Groundwater exists in aquifers below the earth's surface and provides water for various purposes such as supply to households and businesses, public supply, drinking water, irrigation and agriculture. Sweden is also highly dependent on groundwater. As stated in the list of 16 Environmental Quality Objectives established by the Swedish Parliament, "groundwater must assure a safe and sustainable supply of drinking water, as well as promoting viable habitats for plants and animals in lakes and watercourses". However, groundwater and aquifer resources in general are vulnerable to various human activities that are harmful in terms of both volume and quality. The present thesis aims to investigate how groundwater responds to such human activities at large scale, such as a tunnel construction, and at small scale, such as the construction of a geothermal plant. The area under study is investigated through spatial analysis using ArcGIS; the groundwater levels are monitored and further analyzed statistically by implementing a Modified Double Mass Statistical Analysis; and a 3D numerical model is built in COMSOL Multiphysics in order to simulate possible drawdown caused by human intervention in the natural environment. The 3D model was used to evaluate the drawdown, and different scenarios were implemented with the aim of determining the model's sensitivity to fracture parameters. Since characterizing fractures in the rock mass often requires extensive investigation and time- and cost-consuming techniques, the model contains an overall uncertainty concerning the location and properties of the fracture formations in the area. The different scenarios involve variation of the fracture zone width, and the behavior of the top soil layer is investigated in terms of recovery after drawdown. The results indicated a connection to human activities, supported by the statistical analysis. The numerical model also showed that the fracture properties are connected to the recovery time of the groundwater levels after a drawdown is observed. A wider fracture zone implied more time needed for the groundwater levels to return to their initial values, under the assumption that the source of recharge is precipitation. On the other hand, a narrow fracture zone was associated with greater drawdown than the wider-zone scenario, but also with an earlier recovery of the groundwater levels. The type of soil layer and its vulnerability to human activities can vary greatly in terms of volume loss, which can pose a hazard to existing infrastructure at the ground surface. The present study can prove useful in pre-studies of drilling projects of any scale. There is a strong connection between fracture formations and the recovery of groundwater levels, and such models can therefore support innovative planning techniques before a project begins.
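The logic of a double-mass check can be sketched in a few lines: cumulative level change in an observation well is plotted against cumulative precipitation at a reference station, and a break in slope around the start of construction points to an influence other than climate. The example below uses synthetic data and the classical double-mass curve, not the exact modified procedure applied in the thesis.

```python
# Sketch of a classical double-mass check: cumulative groundwater-level change
# against cumulative precipitation, with a slope comparison before and after a
# suspected break point (the start of construction).
import numpy as np

rng = np.random.default_rng(4)
months = 120
precip = rng.gamma(2.0, 30.0, months)                  # synthetic monthly precipitation [mm]
recharge = 0.02 * precip + rng.normal(0, 0.2, months)  # level response driven by precipitation
recharge[72:] -= 0.8                                   # extra drawdown after month 72 (construction)

cum_p = np.cumsum(precip)
cum_r = np.cumsum(recharge)

# Slope of the double-mass line before and after the suspected break.
pre = np.polyfit(cum_p[:72], cum_r[:72], 1)[0]
post = np.polyfit(cum_p[72:], cum_r[72:], 1)[0]
print(f"slope before: {pre:.4f}, slope after: {post:.4f}")
# A clear drop in slope after the break point is the signal such an analysis
# looks for before attributing level changes to the construction works.
```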
APA, Harvard, Vancouver, ISO, and other styles
46

EDWARDS, KARLA ROBERTA LISA. "Site-Specific Point Positioning and GPS Code Multipath Parameterization and Prediction." The Ohio State University, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=osu1300860715.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Gavrilovic, Nenad. "VIBRATION-BASED HEALTH MONITORING OF ROTATING SYSTEMS WITH GYROSCOPIC EFFECT." DigitalCommons@CalPoly, 2015. https://digitalcommons.calpoly.edu/theses/1358.

Full text
Abstract:
This thesis focuses on the simulation of the gyroscopic effect using the software MSC Adams. A simple shaft-disk system was created, and parameters of the system were changed in order to study the influence of the gyroscopic effect. It was shown that increasing bearing stiffness reduces the precession motion. Furthermore, it was shown that the gyroscopic effect vanishes if the disk of the system is placed symmetrically on the shaft, which reduces the system to a Jeffcott rotor. The second objective of this study was to analyze different defects in a simple fixed-axis gear set. In particular, a cracked shaft, a cracked pinion and a chipped pinion, as well as a healthy gear system, were created and tested in Adams. The contact force between the two gears was monitored, and the 2D and 3D frequency spectra, as well as the Wavelet Transform, were plotted in order to compare the individual defects. It was shown that the Wavelet Transform is a powerful tool, capable of identifying a cracked gear under non-constant speed. The last part of this study included fault detection with statistical methods as well as with the Sideband Energy Ratio (SER). The time-domain signals of the individual faults were used to compare the mean, the standard deviation and the root mean square. Furthermore, the noise profile in the frequency spectrum was tracked with statistical methods using the mean and the standard deviation. It was demonstrated that it is possible to identify a cracked gear, as well as a chipped gear, with statistical methods. However, a cracked shaft could not be identified. The results also show that SER was only capable of identifying major defects in a gear system, such as a chipped tooth.
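The last two diagnostics mentioned above can be sketched on a synthetic gear-mesh signal: simple time-domain statistics (mean, standard deviation, RMS) and a Sideband Energy Ratio computed from the spectrum. The SER definition below (the first six sideband pairs around the mesh frequency divided by the mesh amplitude) is one common convention and may differ in detail from the thesis.

```python
# Sketch: time-domain statistics and a Sideband Energy Ratio (SER) for a healthy
# and a fault-modulated gear-mesh signal. Frequencies and modulation depth are
# illustrative only.
import numpy as np

fs = 10_000.0                      # sample rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
f_mesh, f_shaft = 600.0, 20.0      # gear-mesh frequency and shaft (sideband spacing) frequency
healthy = np.sin(2 * np.pi * f_mesh * t)
# A local fault modulates the mesh tone once per revolution, creating sidebands.
faulty = (1 + 0.4 * np.sin(2 * np.pi * f_shaft * t)) * np.sin(2 * np.pi * f_mesh * t)

def features(x):
    return {"mean": np.mean(x), "std": np.std(x), "rms": np.sqrt(np.mean(x ** 2))}

def sideband_energy_ratio(x, fs, f_mesh, f_side, n_sidebands=6):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    amp = lambda f: spec[np.argmin(np.abs(freqs - f))]          # amplitude at nearest bin
    sidebands = sum(amp(f_mesh + k * f_side) + amp(f_mesh - k * f_side)
                    for k in range(1, n_sidebands + 1))
    return sidebands / amp(f_mesh)

for name, x in [("healthy", healthy), ("faulty", faulty)]:
    print(name, features(x), "SER =", round(sideband_energy_ratio(x, fs, f_mesh, f_shaft), 3))
```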
APA, Harvard, Vancouver, ISO, and other styles
48

Lavoie, J. André. "Scaling Effects on Damage Development, Strength, and Stress-Rupture Life on Laminated Composites in Tension." Diss., Virginia Tech, 1997. http://hdl.handle.net/10919/30510.

Full text
Abstract:
The damage development and strength of ply-level scaled carbon/epoxy composite laminates having a stacking sequence of [+Tn/-Tn/90(2n)]s, where the constraint ply angle, T, was 0, 15, 30, 45, 60, and 75 degrees, and size was scaled as n = 1, 2, 3, and 4, is reported in Part I. X-radiography was used to monitor damage development. First-ply failure stress and tensile strength were recorded. First-ply failure of the midplane 90 deg. plies depended on the stiffness of the constraint plies and on size. All 24 cases were predicted using Zhang's shear-lag model and data generated from cross-ply tests. Laminate strength was controlled by the initiation of a triangular-shaped local delamination of the surface angle plies. This delamination was predicted using O'Brien's strain energy release rate model for delamination of surface angle plies. For each ply angle, the smallest laminate was used to predict delamination (and strength) of the other sizes. The in-situ tensile strength of the 0 deg. plies within different cross-ply and quasi-isotropic laminates of varying size and stacking sequence is reported in Part II. No size effect was observed in the strength of 0 deg. plies for those lay-ups having failure confined to the gauge section. Laminates exhibiting a size-strength relationship had grip region failures for the larger sizes. A statistically significant set of 3-point bend tests of unidirectional beams was used to provide parameters for a Weibull model, to re-examine the relationship between the ultimate strength of 0 deg. plies and specimen volume. The maximum stress in the 0 deg. plies in bending and the tensile strength of the 0 deg. plies (from valid tests only) were the same. Weibull theory predicted a loss of strength which was not observed in the experiments. An effort to model the durability and life of quasi-isotropic E-glass/913 epoxy composite laminates under steady load and in an acidic environment is reported in Part III. Stress-rupture tests of unidirectional coupons immersed in a weak hydrochloric acid solution were conducted to determine their stress-life response. Creep tests were conducted on unidirectional coupons parallel and transverse to the fibers, and on ±45° layups, to characterize the lamina stress- and time-dependent compliances. These data were used in a composite stress-rupture life model, based on the critical element modeling philosophy of Reifsnider, to predict the life of two ply-level thickness-scaled quasi-isotropic laminates.
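Part II's re-examination of the Weibull argument rests on the weakest-link scaling relation, which predicts that mean strength falls as the stressed volume grows. The sketch below evaluates that prediction for illustrative parameters only; the Weibull modulus shown is not the value fitted in the dissertation.

```python
# Sketch of the weakest-link (two-parameter Weibull) size effect: for a Weibull
# modulus m, the predicted strength ratio of two specimens is the inverse ratio
# of their stressed volumes raised to 1/m. Numbers are illustrative only.
import numpy as np

def weibull_strength_ratio(v1, v2, m):
    """Predicted strength ratio sigma1/sigma2 for stressed volumes v1 and v2."""
    return (v2 / v1) ** (1.0 / m)

m = 20.0                      # illustrative Weibull modulus for a unidirectional ply
for scale in (2, 3, 4):       # ply-level scaling n = 2, 3, 4 relative to n = 1
    drop = 1.0 - weibull_strength_ratio(scale, 1.0, m)
    print(f"volume x{scale}: predicted strength loss {100 * drop:.1f}%")
# The tests in Part II found no such strength loss for valid gauge-section
# failures, which is why the Weibull prediction was judged not to hold.
```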
Ph. D.
APA, Harvard, Vancouver, ISO, and other styles
49

Terning, Fredrik, Anna Ahl, and Sofie Söderström. "Datorbaserad rapportering av biverkningar och symptom vid cytostatikabehandlad avancerad bröstcancer." Thesis, Uppsala University, Department of Public Health and Caring Sciences, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-112554.

Full text
Abstract:

The aim of the study is to describe the symptoms and side effects that women with advanced breast cancer undergoing chemotherapy reported in a computerized reporting system before seeing their oncologist; to examine satisfaction with this system; to see whether there is a difference between older and younger women; and to examine the women's views on what could be improved in the follow-up of symptoms/side effects and in the support from the oncologist. This is a quantitative, descriptive cross-sectional study based on the reporting system's database and a questionnaire survey.

The side effects fatigue, pain and peripheral neuropathy were reported most frequently. The time required for reporting was considered by most to be short or very short, and the majority found the questionnaire fairly easy to very easy to use, regardless of computer experience. The oncologist was considered, to a high or very high extent, to be a support in managing symptoms and side effects by two thirds of the respondents. Half felt that reported side effects and symptoms were attended to by the oncologist to a high or very high extent.

The survey confirms what earlier research has shown about computerized reporting systems in healthcare: that they are functional regardless of age and that there is interest in using them to a greater extent. Because of the small sample and the relatively large drop-out in the questionnaire study, no firm conclusions can be drawn, but the survey suggests a need to develop the reporting system further.
APA, Harvard, Vancouver, ISO, and other styles
50

Ashour, Ashraf F., and Ilker F. Kara. "Size effect on shear strength of FRP reinforced concrete beams." 2013. http://hdl.handle.net/10454/7606.

Full text
Abstract:
This paper presents test results of six concrete beams reinforced with longitudinal carbon fiber reinforced polymer (CFRP) bars and without vertical shear reinforcement. All beams were tested under a two-point loading system to investigate the shear behavior of CFRP reinforced concrete beams. Beam depth and amount of CFRP reinforcement were the main parameters investigated. All beams failed due to a sudden diagonal shear crack at almost 45°. A simplified, empirical expression for the shear capacity of FRP reinforced concrete members accounting for the most influential parameters is developed based on the design-by-testing approach, using a large database of 134 specimens collected from the literature, including the beams tested in this study. The equations of six existing design standards for the shear capacity of FRP reinforced concrete beams have also been evaluated using the large database collected. The existing shear design methods for FRP reinforced concrete beams give either conservative or unsafe predictions for many specimens in the database, and their accuracy is mostly dependent on the effective depth and type of FRP reinforcement. On the other hand, the proposed equation provides reasonably accurate shear capacity predictions for a wide range of FRP reinforced concrete beams.
APA, Harvard, Vancouver, ISO, and other styles