
Dissertations / Theses on the topic 'Sample size effect'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Sample size effect.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Guan, Tianyuan. "Sample Size Calculations in Simple Linear Regression: A New Approach." University of Cincinnati / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1627667392849137.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Price, Tonia. "A faster simulation approach to sample size determination for random effect models." Thesis, University of Bristol, 2017. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.730872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Xiangrong. "Effect of Sample Size on IRT Equating of Uni-Dimensional Tests in Common Item Non-Equivalent Group Design: A Monte Carlo Simulation Study." Diss., Virginia Tech, 2012. http://hdl.handle.net/10919/37555.

Full text
Abstract:
Test equating is important to large-scale testing programs for two reasons: strict test security is a key concern for high-stakes tests, and the fairness of test equating matters to test takers. The question of sample size adequacy often arises in test equating, yet most recommendations in the existing literature are based on classical test equating, and very few studies have systematically investigated the minimal sample size that leads to reasonably accurate equating results based on item response theory (IRT). The main purpose of this study was to examine the minimal sample size required for the desired IRT equating accuracy in the common-item nonequivalent groups design under various conditions. Accuracy was determined by examining the relative magnitude of six accuracy statistics. Two IRT equating methods were carried out on simulated tests with combinations of test length, test format, group ability difference, similarity of form difficulty, and parameter estimation method, for 14 sample sizes, using Monte Carlo simulations with 1,000 replications per cell. Observed score equating and true score equating were compared to the criterion equating to obtain the accuracy statistics. The results suggest that different sample size requirements exist for different test lengths, test formats, and parameter estimation methods. Additionally, the results show the following: first, the results for true score equating and observed score equating are very similar. Second, the longer test yields less accurate equating than the shorter one at the same sample size level, and the gap widens as the sample size decreases. Third, the concurrent parameter estimation method produced less equating error than separate estimation at the same sample size level, and the difference increases as the sample size decreases. Fourth, conditions with different group ability have larger and less stable error compared to the base case and to conditions with different test difficulty, especially when using the separate parameter estimation method with sample sizes below 750. Last, the mixed-format test is equated more accurately than the single-format one at the same sample size level.
APA, Harvard, Vancouver, ISO, and other styles
4

Cong, Danni. "The effect of sample size re-estimation on type I error rates when comparing two binomial proportions." Kansas State University, 2016. http://hdl.handle.net/2097/34504.

Full text
Abstract:
Estimation of sample size is a critical step in the design of clinical trials. A trial with an inadequate sample size may not produce a statistically significant result. On the other hand, an unnecessarily large sample size increases the expenditure of resources and may raise an ethical problem by exposing an unnecessary number of human subjects to an inferior treatment. A poor estimate of the necessary sample size is often due to the limited information available at the planning stage. Hence, mid-trial adjustment of the sample size has recently become a popular strategy. In this work, we introduce two methods for sample size re-estimation in trials with a binary endpoint that utilize the interim information collected from the trial: a blinded method and a partially unblinded method. The blinded method recalculates the sample size based on the first stage's overall event proportion, while the partially unblinded method performs the calculation based only on the control event proportion from the first stage. We performed simulation studies with different combinations of expected proportions based on fixed ratios of response rates; equal sample sizes per group were considered. The study shows that for both methods the type I error rates were preserved satisfactorily.
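As a rough illustration of the two re-estimation strategies described above, the sketch below recalculates the per-group sample size from the standard normal-approximation formula for comparing two proportions; the function names and the use of a fixed design-stage response-rate ratio are illustrative assumptions, not details taken from the thesis.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-proportion z-test (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar)) + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

def reestimate_blinded(pooled_stage1, design_ratio, **kw):
    """Blinded: only the pooled stage-1 event proportion is used; the treated-to-control
    response-rate ratio assumed at the design stage splits it into two proportions."""
    p_control = 2 * pooled_stage1 / (1 + design_ratio)
    return n_per_group(p_control, design_ratio * p_control, **kw)

def reestimate_partially_unblinded(control_stage1, design_ratio, **kw):
    """Partially unblinded: the stage-1 control-arm proportion is used directly."""
    return n_per_group(control_stage1, design_ratio * control_stage1, **kw)

print(reestimate_blinded(0.30, 1.5), reestimate_partially_unblinded(0.24, 1.5))
```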
APA, Harvard, Vancouver, ISO, and other styles
5

Hadley, Patrick. "The performance of the Mantel-Haenszel and logistic regression dif detection procedures across sample size and effect size: A Monte Carlo study." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/10019.

Full text
Abstract:
In recent years, public attention has become focused on the issue of test and item bias in standardized tests. Since the 1980s, the Mantel-Haenszel (Holland & Thayer, 1986) and logistic regression procedures (Swaminathan & Rogers, 1990) have been developed to detect item bias, or differential item functioning (dif). In this study the effectiveness of the MH and LR procedures was compared under a variety of conditions, using simulated data. The ability of the MH and LR to detect dif was tested at sample sizes of 100/100, 200/200, 400/400, 600/600, and 800/800. The simulated test had 66 items: the first 33 items with item discrimination ("a") set at 0.80, the second 33 items with "a" set at 1.20. The pseudo-guessing parameter ("c") was 0.15 for all items. The item difficulty ("b") parameter ranged from -2.00 to 2.00 in increments of 0.125 for the first 33 items, and again for the second 33 items. Both the MH and LRU detected dif with a high degree of success whenever the sample size was large (600 or more), especially when effect size, no matter how measured, was also large. The LRU outperformed the MH marginally under almost every condition of the study. However, the LRU also had a higher false-positive rate than the MH, a finding consistent with previous studies (Pang et al., 1994; Tian et al., 1994a, 1994b). Since the "a" and "b" parameters which underlie the computation of the three measures of effect size used in the study are not always determinable in data derived from real-world test administrations, it may be that Δ_MH is the best available measure of effect size for real-world test items. (Abstract shortened by UMI.)
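For readers unfamiliar with the Δ_MH effect-size measure mentioned above, the sketch below shows the usual computation of the Mantel-Haenszel common odds ratio across score strata and its transformation to the ETS delta metric; the data layout and counts are hypothetical illustrations, not material from the study.

```python
import math

def mantel_haenszel_delta(strata):
    """strata: iterable of 2x2 tables ((A, B), (C, D)) per total-score level,
    where A, B = reference group correct/incorrect and C, D = focal group correct/incorrect."""
    num = sum(A * D / (A + B + C + D) for (A, B), (C, D) in strata)
    den = sum(B * C / (A + B + C + D) for (A, B), (C, D) in strata)
    alpha_mh = num / den               # MH common odds ratio across strata
    return -2.35 * math.log(alpha_mh)  # ETS delta metric (MH D-DIF)

# Hypothetical counts for three ability strata
tables = [((40, 10), (30, 20)), ((55, 15), (45, 25)), ((70, 5), (60, 15))]
print(round(mantel_haenszel_delta(tables), 2))
```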
APA, Harvard, Vancouver, ISO, and other styles
6

Lee, Michelle Oi San. "Sample size calculation for testing an interaction effect in a logistic regression under measurement error model /." View Abstract or Full-Text, 2003. http://library.ust.hk/cgi/db/thesis.pl?MATH%202003%20LEE.

Full text
Abstract:
Thesis (M. Phil.)--Hong Kong University of Science and Technology, 2003. Includes bibliographical references (leaves 66-67). Also available in electronic version. Access restricted to campus users.
APA, Harvard, Vancouver, ISO, and other styles
7

Senteney, Michael H. "A Monte Carlo Study to Determine Sample Size for Multiple Comparison Procedures in ANOVA." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou160433478343909.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kennedy, Michael. "The influence of sample size, effect size, and percentage of DIF items on the performance of the Mantel-Haenszel and logistic regression DIF identification procedures." Thesis, University of Ottawa (Canada), 1994. http://hdl.handle.net/10393/6884.

Full text
Abstract:
The frequent use of standardized tests for admission, advancement, and accreditation has increased public awareness of measurement issues, in particular, test and item bias. The logistic regression (LR) and Mantel-Haenszel (MH) procedures are relatively new methods of detecting item bias or differential item functioning (DIF) in tests. In only a few studies has the performance of these two procedures been compared. In the present study, sample size, effect size, and percentage of DIF items in the test were manipulated in order to compare detection rates of uniform DIF by the LR and MH procedures. Simulated data, with known amounts of DIF, were used to evaluate the effects of these variables on DIF detection rates. In detecting uniform DIF, the LR procedure had a slight advantage over the MH procedure at the cost of increased false positive rates. P-value difference was definitely a more accurate measure of the amount of DIF than b value difference. (Abstract shortened by UMI.)
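The logistic regression procedure referred to above (Swaminathan & Rogers, 1990) tests for uniform DIF by checking whether group membership improves a logistic model of item responses once the matching total score is controlled for. The following minimal sketch, using entirely simulated and hypothetical data, illustrates that likelihood-ratio comparison with statsmodels; it is a generic illustration, not the study's code.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def lr_uniform_dif(item_correct, total_score, group):
    """Likelihood-ratio test for uniform DIF on one item:
    logit(item) ~ score  versus  logit(item) ~ score + group."""
    base = sm.add_constant(np.column_stack([total_score]))
    full = sm.add_constant(np.column_stack([total_score, group]))
    m0 = sm.Logit(item_correct, base).fit(disp=0)
    m1 = sm.Logit(item_correct, full).fit(disp=0)
    g2 = 2 * (m1.llf - m0.llf)   # chi-square statistic with 1 df
    return g2, chi2.sf(g2, 1)

# Hypothetical data: 400 examinees, two groups of 200, with built-in uniform DIF
rng = np.random.default_rng(0)
score = rng.integers(10, 60, 400)
group = np.repeat([0, 1], 200)
p = 1 / (1 + np.exp(-(0.08 * (score - 35) - 0.5 * group)))
item = rng.binomial(1, p)
print(lr_uniform_dif(item, score, group))
```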
APA, Harvard, Vancouver, ISO, and other styles
9

Schäfer, Thomas, and Marcus A. Schwarz. "The Meaningfulness of Effect Sizes in Psychological Research: Differences Between Sub-Disciplines and the Impact of Potential Biases." Frontiers Media SA, 2019. https://monarch.qucosa.de/id/qucosa%3A33749.

Full text
Abstract:
Effect sizes are the currency of psychological research. They quantify the results of a study to answer the research question and are used to calculate statistical power. The interpretation of effect sizes—when is an effect small, medium, or large?—has been guided by the recommendations Jacob Cohen gave in his pioneering writings starting in 1962: Either compare an effect with the effects found in past research or use certain conventional benchmarks. The present analysis shows that neither of these recommendations is currently applicable. From past publications without pre-registration, 900 effects were randomly drawn and compared with 93 effects from publications with pre-registration, revealing a large difference: Effects from the former (median r = 0.36) were much larger than effects from the latter (median r = 0.16). That is, certain biases, such as publication bias or questionable research practices, have caused a dramatic inflation in published effects, making it difficult to compare an actual effect with the real population effects (as these are unknown). In addition, there were very large differences in the mean effects between psychological sub-disciplines and between different study designs, making it impossible to apply any global benchmarks. Many more pre-registered studies are needed in the future to derive a reliable picture of real population effects.
APA, Harvard, Vancouver, ISO, and other styles
10

Awuor, Risper Akelo. "Effect of Unequal Sample Sizes on the Power of DIF Detection: An IRT-Based Monte Carlo Study with SIBTEST and Mantel-Haenszel Procedures." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/28321.

Full text
Abstract:
This simulation study focused on determining the effect of unequal sample sizes on the statistical power of the SIBTEST and Mantel-Haenszel procedures for detecting DIF of moderate and large magnitudes. Item parameters were generated and estimated with the 2PLM using WinGen2 (Han, 2006). MULTISIM was used to simulate ability estimates and to generate response data that were analyzed by SIBTEST. The SIBTEST procedure with regression correction was used to calculate the DIF statistics, namely the DIF effect size and the statistical significance of the bias. The older SIBTEST was used to calculate the DIF statistics for the M-H procedure. SAS provided the environment in which the ability parameters were simulated, the response data were generated, and the DIF analyses were conducted. Test items were examined to determine whether the a priori manipulated items demonstrated DIF. The results indicated that, with unequal samples in any ratio, M-H had better Type I error rate control than SIBTEST. They also indicated that not only the ratios but also the sample size and the magnitude of DIF influenced the error rate behavior of SIBTEST and M-H. With small samples and moderate DIF magnitude, both M-H and SIBTEST committed Type II errors when the reference-to-focal-group sample size ratio was 1:0.10, owing to low observed statistical power and inflated Type I error rates.
APA, Harvard, Vancouver, ISO, and other styles
11

Markari, Adrian. "Investigation on the self-healing capabilities of asphaltic materials using neutron imaging." Licentiate thesis, KTH, Bro- och stålbyggnad, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-291114.

Full text
Abstract:
Bitumen acts as a binding agent in asphalt mixtures, where it binds the aggregates together. It is known for its potential to heal small cracks and recover its mechanical properties under the right conditions. Although this self-healing property is known, there is currently a lack of knowledge about the mechanisms that drive the process. To optimize the use of this material in pavement design, the healing ability should be better understood and controlled. In this work, it is investigated how neutron imaging can be used to increase the understanding of the mechanisms behind self-healing in bitumen. As a first step, the sample size requirement set by the measurement technique was determined. In order to detect microcracks in bitumen with this technique, the sample must be sufficiently small to allow neutron transmission. On the other hand, samples that are too small would complicate the structural analysis of the material, since less information could be obtained. Bitumen samples with different dimensions were scanned with neutrons to determine the maximum sample thickness. This work was followed by evaluating the healing capability of fractured bitumen and mastic samples using time-series neutron tomography. The studied samples had varying combinations of hydrated lime (HL) filler concentration, crack volume, and contact area between the broken pieces. The data acquired from the time-series tomography scans were analyzed using a three-dimensional analysis procedure including denoising, segmentation, and volume measurements. From the volumetric analysis, it appeared that the initial crack size and crack shape have a large impact on the healing rate. Bitumen and mastic with 20% and 30% filler content showed similar healing behavior for relatively small crack volumes. When the HL content of the mastic is increased, the healing rate decreases exponentially, with a drastic decrease after reaching a filler content of about 30%.
APA, Harvard, Vancouver, ISO, and other styles
12

Norfleet, David M. "Sample size effects related to nickel, titanium and nickel-titanium at the micron size scale." The Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc_num=osu1187038020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Tse, Kwok Ho. "Sample size calculation : influence of confounding and interaction effects /." View abstract or full-text, 2006. http://library.ust.hk/cgi/db/thesis.pl?MATH%202006%20TSE.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Gilkey, Justin Michael. "The Effects of Sample Size on Measures of Subjective Correlation." Bowling Green State University / OhioLINK, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1211901739.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Norfleet, David Matthew. "Sample size effects related to nickel, titanium and nickel-titanium at the micron size scale." Columbus, Ohio : Ohio State University, 2007. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1187038020.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Liv, Per. "Efficient strategies for collecting posture data using observation and direct measurement." Doctoral thesis, Umeå universitet, Yrkes- och miljömedicin, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-59132.

Full text
Abstract:
Relationships between occupational physical exposures and risks of contracting musculoskeletal disorders are still not well understood; exposure-response relationships are scarce in the musculoskeletal epidemiology literature, and many epidemiological studies, including intervention studies, fail to reach conclusive results. Insufficient exposure assessment has been pointed out as a possible explanation for this deficiency. One important aspect of assessing exposure is the selected measurement strategy; this includes issues related to the necessary number of data required to give sufficient information, and to allocation of measurement efforts, both over time and between subjects in order to achieve precise and accurate exposure estimates. These issues have been discussed mainly in the occupational hygiene literature considering chemical exposures, while the corresponding literature on biomechanical exposure is sparse. The overall aim of the present thesis was to increase knowledge on the relationship between data collection design and the resulting precision and accuracy of biomechanical exposure assessments, represented in this thesis by upper arm postures during work, data which have been shown to be relevant to disorder risk. Four papers are included in the thesis. In papers I and II, non-parametric bootstrapping was used to investigate the statistical efficiency of different strategies for distributing upper arm elevation measurements between and within working days into different numbers of measurement periods of differing durations. Paper I compared the different measurement strategies with respect to the eventual precision of estimated mean exposure level. The results showed that it was more efficient to use a higher number of shorter measurement periods spread across a working day than to use a smaller number for longer uninterrupted measurement periods, in particular if the total sample covered only a small part of the working day. Paper II evaluated sampling strategies for the purpose of determining posture variance components with respect to the accuracy and precision of the eventual variance component estimators. The paper showed that variance component estimators may be both biased and imprecise when based on sampling from small parts of working days, and that errors were larger with continuous sampling periods. The results suggest that larger posture samples than are conventionally used in ergonomics research and practice may be needed to achieve trustworthy estimates of variance components. Papers III and IV focused on method development. Paper III examined procedures for estimating statistical power when testing for a group difference in postures assessed by observation. Power determination was based either on a traditional analytical power analysis or on parametric bootstrapping, both of which accounted for methodological variance introduced by the observers to the exposure data. The study showed that repeated observations of the same video recordings may be an efficient way of increasing the power in an observation-based study, and that observations can be distributed between several observers without loss in power, provided that all observers contribute data to both of the compared groups, and that the statistical analysis model acknowledges observer variability. 
Paper IV discussed calibration of an inferior exposure assessment method against a superior “golden standard” method, with a particular emphasis on calibration of observed posture data against postures determined by inclinometry. The paper developed equations for bias correction of results obtained using the inferior instrument through calibration, as well as for determining the additional uncertainty of the eventual exposure value introduced through calibration. In conclusion, the results of the present thesis emphasize the importance of carefully selecting a measurement strategy on the basis of statistically well informed decisions. It is common in the literature that postural exposure is assessed from one continuous measurement collected over only a small part of a working day. In paper I, this was shown to be highly inefficient compared to spreading out the corresponding sample time across the entire working day, and the inefficiency was also obvious when assessing variance components, as shown in paper II. The thesis also shows how a well thought-out strategy for observation-based exposure assessment can reduce the effects of measurement error, both for random methodological variance (paper III) and systematic observation errors (bias) (paper IV).
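The comparison of measurement strategies examined in papers I and II can be illustrated with a small resampling sketch. The synthetic full-shift record below and the allocation of one hour of sampling into few long versus many short periods are invented assumptions, intended only to show how the precision of the estimated daily mean can be compared between strategies; this is not the thesis's bootstrap code or data.

```python
import numpy as np

rng = np.random.default_rng(42)
seconds = 8 * 3600
# Hypothetical full-shift record of upper-arm elevation (degrees), with a within-day trend
trend = 15 * np.sin(np.linspace(0, 3 * np.pi, seconds))
record = np.clip(rng.normal(35, 12, seconds) + trend, 0, 180)

def se_of_daily_mean(record, n_periods, period_len, reps=2000):
    """Spread of the estimated daily mean when sampling n_periods random
    contiguous periods of period_len seconds from the full-shift record."""
    max_start = len(record) - period_len
    estimates = np.empty(reps)
    for i in range(reps):
        starts = rng.integers(0, max_start, n_periods)
        estimates[i] = np.mean([record[s:s + period_len].mean() for s in starts])
    return estimates.std(ddof=1)

# Same total sampling time (60 minutes), allocated differently
print("1 x 60 min:", round(se_of_daily_mean(record, 1, 3600), 2))
print("6 x 10 min:", round(se_of_daily_mean(record, 6, 600), 2))
```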
APA, Harvard, Vancouver, ISO, and other styles
17

Janse, Sarah A. "INFERENCE USING BHATTACHARYYA DISTANCE TO MODEL INTERACTION EFFECTS WHEN THE NUMBER OF PREDICTORS FAR EXCEEDS THE SAMPLE SIZE." UKnowledge, 2017. https://uknowledge.uky.edu/statistics_etds/30.

Full text
Abstract:
In recent years, statistical analyses, algorithms, and modeling of big data have been constrained by computational complexity. Further, the added complexity of relationships among response and explanatory variables, such as higher-order interaction effects, makes identifying predictors using standard statistical techniques difficult. These difficulties are only exacerbated in the case of small sample sizes in some studies. Recent analyses have targeted the identification of interaction effects in big data, but the development of methods to identify higher-order interaction effects has been limited by computational concerns. One recently studied method is the Feasible Solutions Algorithm (FSA), a fast, flexible method that aims to find a set of statistically optimal models via a stochastic search algorithm. Although FSA has shown promise, its current limits include that the user must choose the number of times to run the algorithm. Here, statistical guidance is provided for this number of iterations by deriving a lower bound on the probability of obtaining the statistically optimal model in a given number of iterations of FSA. Moreover, logistic regression is severely limited when two predictors can perfectly separate the two outcomes. In the case of small sample sizes, this occurs quite often by chance, especially in the case of a large number of predictors. Bhattacharyya distance is proposed as an alternative method to address this limitation. However, little is known about the theoretical properties or distribution of B-distance. Thus, properties and the distribution of this distance measure are derived here. A hypothesis test and confidence interval are developed and tested on both simulated and real data.
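For reference, the Bhattacharyya distance between two multivariate normal populations, which the dissertation builds on, has the closed form implemented in this sketch. It is the standard textbook formula, not the author's code; the example inputs are arbitrary.

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between N(mu1, cov1) and N(mu2, cov2)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2
    diff = mu1 - mu2
    term1 = diff @ np.linalg.solve(cov, diff) / 8            # mean-separation term
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))       # covariance-mismatch term
    return term1 + term2

print(bhattacharyya_distance([0, 0], np.eye(2), [1, 1], 2 * np.eye(2)))
```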
APA, Harvard, Vancouver, ISO, and other styles
18

Wittwer, Rico. "Raumstrukturelle Einflüsse auf das Verkehrsverhalten - Nutzbarkeit der Ergebnisse großräumiger und lokaler Haushaltsbefragungen für makroskopische Verkehrsplanungsmodelle." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1201084936499-97472.

Full text
Abstract:
For travel demand modelling, planners have very differentiated modelling approaches at their disposal, an essential distinguishing criterion being the object of the modelling. The present work focuses on macroscopic transport planning models. It examines in what form the results of large-scale and local household travel surveys can be used in modelling tasks efficiently and in a mutually complementary way. At the centre of the empirical data analysis is the question of whether differences in key modelling-relevant indicators, differentiated by spatial type, can be statistically demonstrated and are meaningful for planning practice. Against this background, the effect of the complex sampling designs of MiD 2002 and SrV 2003 on the variance of the parameter estimates is also taken into account. A multi-stage evaluation algorithm developed in this work, which adequately addresses the significance-relevance problem, forms the basis of the hypothesis testing. It combines the standard approach (significance tests) with normatively set effect sizes and an estimator-based approach (confidence intervals). The approach achieves particularly high transparency and decision consistency because the hypothesis testing is carried out on the basis of two independently collected study groups (MiD, SrV). The intensive work with the MiD and SrV data sets yields a wealth of insights for further improving the survey instrument "Mobilität in Städten – SrV". In preparation for the new edition of the survey series due in 2008, the author believes this work provides a substantial impulse for the further development of the methodology.
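A minimal sketch of the kind of significance-plus-relevance check described above, combining a significance test, a normatively set minimum relevant effect, and a confidence interval, might look as follows. The Welch t-test, the conservative degrees of freedom, and the thresholds are illustrative choices and not the multi-stage algorithm developed in the thesis.

```python
import numpy as np
from scipy.stats import ttest_ind, t as t_dist

def significant_and_relevant(x, y, min_relevant_diff, alpha=0.05):
    """Flag a group difference as (a) statistically significant and (b) practically
    relevant, i.e. its confidence interval lies entirely beyond a normative threshold."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    stat, p = ttest_ind(x, y, equal_var=False)          # Welch t-test
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    df = min(len(x), len(y)) - 1                         # conservative df for the CI
    half = t_dist.ppf(1 - alpha / 2, df) * se
    ci = (diff - half, diff + half)
    relevant = ci[0] >= min_relevant_diff or ci[1] <= -min_relevant_diff
    return p < alpha, relevant, ci

rng = np.random.default_rng(3)
print(significant_and_relevant(rng.normal(10.0, 2, 200), rng.normal(9.2, 2, 220), 0.5))
```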
APA, Harvard, Vancouver, ISO, and other styles
19

Lynch, Edward. "The Effects of Irrelevant Information and Minor Errors in Client Documents on Assessments of Misstatement Risk and Sample Size." VCU Scholars Compass, 2017. http://scholarscompass.vcu.edu/etd/4959.

Full text
Abstract:
This dissertation consists of three studies. The first study conducts a 2 by 2 experiment to examine how auditors are influenced by the presence of irrelevant information and minor errors (i.e., "dirty documents") when reviewing audit evidence produced by the client. This study tasks 97 public accountants with reviewing audit evidence and finds some evidence that dirty documents influence an auditor's assessment of the likelihood of account misstatement and of the appropriate sample size. In order to demonstrate the usefulness of eye-tracking and to help generate potential research topics, the second study reviews extant literature in other disciplines where eye-tracking technology is applied to various judgment and decision-making contexts. This study suggests how eye-tracking can enhance extant accounting research. Illustrative examples of promising research opportunities (extending extant research) are provided. In addition, this study identifies how eye-tracking can be applied to more contemporary decision-making and educational circumstances. The third study extends the first experiment through the use of eye-tracking technology. It uses the same 2 by 2 experiment as the first study, but in this case records the eye movements of 43 auditing students while they review the audit evidence. The eye-tracking technology provides additional detail on the specific evidence participants focus on during their review. This study finds that participants focus their attention differently depending on whether irrelevant information or minor errors are present.
APA, Harvard, Vancouver, ISO, and other styles
20

Wittwer, Rico. "Raumstrukturelle Einflüsse auf das Verkehrsverhalten - Nutzbarkeit der Ergebnisse großräumiger und lokaler Haushaltsbefragungen für makroskopische Verkehrsplanungsmodelle." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A24043.

Full text
Abstract:
For travel demand modelling, planners have very differentiated modelling approaches at their disposal, an essential distinguishing criterion being the object of the modelling. The present work focuses on macroscopic transport planning models. It examines in what form the results of large-scale and local household travel surveys can be used in modelling tasks efficiently and in a mutually complementary way. At the centre of the empirical data analysis is the question of whether differences in key modelling-relevant indicators, differentiated by spatial type, can be statistically demonstrated and are meaningful for planning practice. Against this background, the effect of the complex sampling designs of MiD 2002 and SrV 2003 on the variance of the parameter estimates is also taken into account. A multi-stage evaluation algorithm developed in this work, which adequately addresses the significance-relevance problem, forms the basis of the hypothesis testing. It combines the standard approach (significance tests) with normatively set effect sizes and an estimator-based approach (confidence intervals). The approach achieves particularly high transparency and decision consistency because the hypothesis testing is carried out on the basis of two independently collected study groups (MiD, SrV). The intensive work with the MiD and SrV data sets yields a wealth of insights for further improving the survey instrument "Mobilität in Städten – SrV". In preparation for the new edition of the survey series due in 2008, the author believes this work provides a substantial impulse for the further development of the methodology.
APA, Harvard, Vancouver, ISO, and other styles
21

LOUREIRO, LORENA DRUMOND. "EFFECT OF THE NUMBER AND SIZE OF THE INITIAL SAMPLES ON THE PERFORMANCE OF THE S CHART AND ON THE PROCESS CAPABILITY ESTIMATE." PONTIFÍCIA UNIVERSIDADE CATÓLICA DO RIO DE JANEIRO, 2008. http://www.maxwell.vrac.puc-rio.br/Busca_etds.php?strSecao=resultado&nrSeq=11959@1.

Full text
Abstract:
The false-alarm probability, a, and the power of process control charts depend on their control limits, which, in turn, depend on estimates of the process parameters. Process capability analyses also depend on those estimates. This dissertation initially presents a review (to our knowledge, the first in Portuguese) of the main research articles about the effect of the estimation errors of the process parameters upon a. All the works reviewed aim to determine, based on the probability distribution of the process parameter estimates (which is a function of the number of initial samples, m, and of their size, n), the expected value of a or, equivalently, the expected value of the marginal distribution of the number of samples until a false alarm occurs. Our approach is different: to obtain (parameterized by m and n) the distribution of a and its percentiles or, equivalently, the distribution of the expected number of samples until a false alarm, in order to provide guidance on the number of initial samples to be used before definitively setting the control limits of the charts. The analysis was conducted for the S chart. The influence of the estimation errors on the power of the S chart was also examined. Finally, the distribution of the estimation errors of the process capability index Cp was obtained, parameterized by m and n, to provide guidance on the number of initial samples required to ensure, with a specified confidence level, a specified accuracy of that estimate.
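A Monte Carlo sketch of the central idea, obtaining the distribution (rather than only the expected value) of the conditional false-alarm probability of a 3-sigma S chart whose limits come from m initial samples of size n, could look like the following. It assumes a normal in-control process and standard S-chart constants, and is not the author's implementation.

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import gammaln

def c4(n):
    """Unbiasing constant for the sample standard deviation, E[S] = c4 * sigma."""
    return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

def alpha_distribution(m, n, reps=10_000, rng=np.random.default_rng(1)):
    """Distribution of the conditional false-alarm probability of a 3-sigma S chart
    whose limits use sigma estimated as S-bar / c4 over m initial subgroups of size n."""
    c = c4(n)
    alphas = np.empty(reps)
    for i in range(reps):
        data = rng.standard_normal((m, n))            # in-control process, sigma = 1
        sigma_hat = data.std(axis=1, ddof=1).mean() / c
        ucl = sigma_hat * (c + 3 * np.sqrt(1 - c * c))
        lcl = max(0.0, sigma_hat * (c - 3 * np.sqrt(1 - c * c)))
        # For a future in-control subgroup, (n-1) S^2 / sigma^2 ~ chi-square(n-1)
        alphas[i] = chi2.sf((n - 1) * ucl**2, n - 1) + chi2.cdf((n - 1) * lcl**2, n - 1)
    return alphas

a = alpha_distribution(m=25, n=5)
print(np.percentile(a, [5, 50, 95]))
```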
APA, Harvard, Vancouver, ISO, and other styles
22

Basterfield, Candice. "The cognitive rehabilitation of a sample of children living with HIV : a specific focus on the cognitive rehabilitation of sustained attention." Thesis, Rhodes University, 2015. http://hdl.handle.net/10962/d1017881.

Full text
Abstract:
Pharmacological interventions to treat Human Immunodeficiency Virus (HIV) with antiretrovirals (ARVs) have dramatically improved the survival rates of HIV-positive children maturing into adulthood. However, HIV-associated neurocognitive decline still persists in the era of ARVs. Within the framework of brain plasticity, a number of researchers have begun to assess the feasibility of cognitive rehabilitation therapy as a complement to ARVs to reverse neurocognitive decline resulting from HIV (e.g., Becker et al., 2012). Only one study has been conducted in South Africa, by Zondo & Mulder (2014), assessing the efficacy of cognitive rehabilitation in a paediatric sample. The current research builds on the above-mentioned study by implementing an experimental approach to examine the effect of cognitive rehabilitation in a sample of both HIV-positive and HIV-negative children. Five HIV-positive and six HIV-negative children were assigned to either an experimental or a control group. The experimental group underwent two months of cognitive rehabilitation therapy remediating sustained attention, whereas the control group took part in placebo activities. Sustained attention measures were taken before and after the intervention training sessions, using a sustained attention subtest from the Test of Everyday Attention for Children (TEA-CH). A Mann-Whitney U test revealed that the experimental group (Mdn = 38.50) did not differ significantly from the control group (Mdn = 37.00) after the cognitive rehabilitation intervention, U = 12.00, z = -.55, p = .66, r = -.17. However, a Wilcoxon signed-rank test found a significant improvement from pretest scores (Mdn = 31.00) to posttest scores (Mdn = 38.00) following the rehabilitation for the HIV-positive participants in the sample, T = 15.00, z = -2.02, p = .04, r = -.90. This raises the possibility that cognitive rehabilitation could be used as a low-cost intervention in underdeveloped contexts.
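The two nonparametric tests reported above are available in scipy; the short sketch below, with invented scores purely for illustration (not the study's data), shows how the between-group and pre/post comparisons would typically be run.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical sustained-attention scores, for illustration only
experimental_pre = [28, 31, 33, 30, 35]
experimental_post = [36, 38, 40, 37, 41]
control_post = [35, 37, 38, 36, 39, 40]

# Between-group comparison at post-test (independent samples)
u_stat, u_p = mannwhitneyu(experimental_post, control_post, alternative="two-sided")

# Within-group pre/post comparison (paired samples)
w_stat, w_p = wilcoxon(experimental_pre, experimental_post)

print(f"Mann-Whitney U = {u_stat}, p = {u_p:.3f}")
print(f"Wilcoxon T = {w_stat}, p = {w_p:.3f}")
```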
APA, Harvard, Vancouver, ISO, and other styles
23

Hütter, Geralf [Verfasser], Samuel [Gutachter] Forest, Björn [Gutachter] Kiefer, and Paul [Gutachter] Steinmann. "A theory for the homogenisation towards micromorphic media and its application to size effects and damage / Geralf Hütter ; Gutachter: Samuel Forest, Björn Kiefer, Paul Steinmann." Freiberg : Technische Universität Bergakademie Freiberg, 2019. http://d-nb.info/1228469253/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Silva, Wandermon Corrêa. "Sorte versus habilidade na análise de desempenho de fundos de investimento em ações no Brasil." Universidade Federal do Ceará, 2012. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=10979.

Full text
Abstract:
This dissertation aims to contribute to the mainstream of Asset Pricing Theory by analyzing the performance of stock mutual funds in Brazil, using a panel of 75 ANBIMA Active Ibovespa mutual funds that survived over the period from January 1998 to December 2008, and identifying those whose result is simply due to good or bad luck and those whose result is due to the skill, or lack of skill, of their managers. Following the methodology developed in Fama and French (1992, 1993), we built two factors, zero-cost equal-weighted portfolios composed of funds, able to accommodate the size and accumulated-gain effects observed for these assets, which are used in several applications of an extended version of the Capital Asset Pricing Model (CAPM). Both effects, which seem to play a relevant role given the CAPM's inability to price large funds and funds with very high or very low relative performance, are partially accommodated when the factors are added, and the factors are jointly significant in 50% of the 75 funds analyzed. The main evidence obtained from individual time-series regressions is corroborated by panel estimation with random effects, in which both factors appear essential for explaining the returns of stock mutual funds in Brazil.
To analyze the performance of the funds, the methodology developed in Fama and French (2010) was used, in which the cross-section of investment fund performance is modeled by bootstrap techniques. For most of the funds that showed significant outperformance based on the alphas estimated in the individual regressions, the performance was attributed to chance. In the proposed factor model, only three funds genuinely outperformed due to the ability of their managers, all of them linked to private financial institutions. The factor model proved to be more accurate in characterizing the randomness of performance.
APA, Harvard, Vancouver, ISO, and other styles
25

Novakovic, Ana M. "Longitudinal Models for Quantifying Disease and Therapeutic Response in Multiple Sclerosis." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316562.

Full text
Abstract:
Treatment of patients with multiple sclerosis (MS) and development of new therapies have been challenging due to the disease complexity and slow progression, and the limited sensitivity of available clinical outcomes. Modeling and simulation has become an increasingly important component in drug development and in post-marketing optimization of use of medication. This thesis focuses on development of pharmacometric models for characterization and quantification of the relationships between drug exposure, biomarkers and clinical endpoints in relapse-remitting MS (RRMS) following cladribine treatment. A population pharmacokinetic model of cladribine and its main metabolite, 2-chloroadenine, was developed using plasma and urine data. The renal clearance of cladribine was close to half of total elimination, and was found to be a linear function of creatinine clearance (CRCL). Exposure-response models could quantify a clear effect of cladribine tablets on absolute lymphocyte count (ALC), burden of disease (BoD), expanded disability status scale (EDSS) and relapse rate (RR) endpoints. Moreover, they gave insight into disease progression of RRMS. This thesis further demonstrates how integrated modeling framework allows an understanding of the interplay between ALC and clinical efficacy endpoints. ALC was found to be a promising predictor of RR. Moreover, ALC and BoD were identified as predictors of EDSS time-course. This enables the understanding of the behavior of the key outcomes necessary for the successful development of long-awaited MS therapies, as well as how these outcomes correlate with each other. The item response theory (IRT) methodology, an alternative approach for analysing composite scores, enabled to quantify the information content of the individual EDSS components, which could help improve this scale. In addition, IRT also proved capable of increasing the detection power of potential drug effects in clinical trials, which may enhance drug development efficiency. The developed nonlinear mixed-effects models offer a platform for the quantitative understanding of the biomarker(s)/clinical endpoint relationship, disease progression and therapeutic response in RRMS by integrating a significant amount of knowledge and data.
APA, Harvard, Vancouver, ISO, and other styles
26

Naumann, Thomas E. "K-Ar Age Values of Bulk Soil Samples and Clay Fractions: Effects of Acid Extraction and Implications for the Origin of Micaceous Clay in Savannah River Site Soils, South Carolina, USA." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/geosciences_theses/27.

Full text
Abstract:
Understanding how natural Cs, Rb, and K have been redistributed in Savannah River Site (SRS) soils during pedogenesis is important to understanding how radiocesium released to the soils will behave over the long term. In this effort, it is important to distinguish K that has participated in mineral-water reactions from that still residing in primary silicate structures, particularly in the clay fraction. The impact of different degrees of acid extraction on K and radiogenic Ar in bulk soil and in clay from five SRS soil samples has been determined. Strong treatment (50% HNO3, three hours, 100°C) releases K from primary minerals, as shown also by a concomitant release of radiogenic Ar, but a more moderate treatment (6% HNO3, three hours, 80°C) does not release K. K in the clay fraction is mostly nonexchangeable K in remnants of primary mica, and clay K-Ar age values near 300 Ma indicate the mica originated in the Appalachian mountain belt.
APA, Harvard, Vancouver, ISO, and other styles
27

Vong, Camille. "Model-Based Optimization of Clinical Trial Designs." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.

Full text
Abstract:
Overall attrition rates in the drug development pipeline have made it necessary to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit to usher new agents more rapidly and successfully to marketing approval. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped Power method made it possible to rapidly generate multiple hypotheses and to adequately compute the corresponding sample size within 1% of the time usually necessary in more traditional model-based power assessment. Allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by hematological toxicity as the main dose-limiting toxicity. In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis, several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can elevate the probability of a successful trial and reinforce ethical conduct, which is paramount.
APA, Harvard, Vancouver, ISO, and other styles
28

Marcinkowska, Anna. "Exploratory study of market entry strategies for digital payment platforms." Thesis, Linköpings universitet, Institutionen för ekonomisk och industriell utveckling, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-147994.

Full text
Abstract:
The digital payment industry has become one of the fastest evolving markets in the world, but in the wake of its rapid advancement, an ever-increasing gap between academic theory and the actual reality of this market continues to widen, especially when it comes to entry theory. It is widely acknowledged that the world is moving towards an ever more homogeneous economy, but despite the fact that payment preferences differ greatly from country to country, research on this subject continues to revolve mainly around localized efforts. As historical inequalities between poor and rich societies continue to dissipate, learning from nations at the forefront of technological advancement increases the likelihood that the developed strategy becomes applicable to an increased number of countries. By selecting a nation most conducive to technological growth, the purpose of this report is to map the present dynamics of its digital payment industry using both recent and traditional market entry theory. However, studies geared towards globalized strategy formulation cannot be assumed to have guaranteed access to internal company data at all times. In order to facilitate such studies, the level of dependency on primary data required for conducting such research needs to be understood first, which is why the work in this report is constrained strictly to data of a secondary nature. This serves not only to further map the characteristics of this market, but also to see how open the market is to public inspection. Ultimately, the academic contribution is a road map for adapting currently available market entry theory to the rapidly evolving conditions of the digital payment industry from a global perspective and, where that falls short, to explore avenues for further research towards this end goal.
APA, Harvard, Vancouver, ISO, and other styles
29

Comas, Hervada Bruna. "Downward flame front spread in thin solid fuels: theory and experiments." Doctoral thesis, Universitat de Girona, 2014. http://hdl.handle.net/10803/276957.

Full text
Abstract:
Flame spread over solid samples has been studied from many points of view, as it is key to fire safety, yet it is a complex phenomenon that involves processes occurring in both the solid and the gas phases. This Ph.D. thesis studies flame spread over thin solid samples in processes more complex than the classical cases where a flame spreads horizontally or downward over a vertical solid sample. In particular, the thesis deals with three different situations: the effect of the sides on the vertically downward flame spread over a thin solid; the effects of having several parallel samples burning simultaneously; and flame spread over horizontal and downward-inclined samples. For all these situations, a complete experimental study is carried out and a model that explains the obtained results is developed.
APA, Harvard, Vancouver, ISO, and other styles
30

Moreira, Ana Sofia Pereira. "Study of modifications induced by thermal and oxidative treatment in oligo and polysaccharides of coffee by mass spectrometry." Doctoral thesis, Universidade de Aveiro, 2016. http://hdl.handle.net/10773/17074.

Full text
Abstract:
Polysaccharides are the major components of green and roasted coffee beans and of the coffee brew. The most abundant ones are galactomannans, followed by arabinogalactans. During the roasting process, galactomannans and arabinogalactans undergo structural modifications that are far from completely elucidated, owing to their diversity and to the complexity of the compounds formed. During roasting, galactomannans and arabinogalactans react with proteins, chlorogenic acids, and sucrose, originating high-molecular-weight brown compounds containing nitrogen, known as melanoidins. Several biological activities and beneficial health effects have been attributed to coffee melanoidins.
However, their exact structures and the mechanisms involved in their formation remain unknown, as well as the structure-biological activity relationship. The use of model systems and mass spectrometry analysis allow to obtain an overall view and, simultaneously, detailed, of the structural modifications in coffee polysaccharides promoted by roasting, contributing to the elucidation of the structures and formation mechanisms of melanoidins. Based on this thesis, oligosaccharides structurally related to the backbone of galactomannans, (β1→4)-D-mannotriose, and the side chains of arabinogalactans, (α1→5)-Larabinotriose, alone or in mixtures with 5-O-caffeoylquinic acid, the most abundant chlorogenic acid in green coffee beans, and dipeptides composed by tyrosine and leucine, used as models of proteins, were submitted to dry thermal treatments, mimicking the coffee roasting process. The oxidation induced by hydroxyl radicals (HO•) was also studied, since these radicals seem to be involved in the modification of the polysaccharides during roasting. The identification of the structural modifications induced by thermal and oxidative treatment of the model compounds was performed mostly by mass spectrometry-based analytical strategies, but also using liquid chromatography. Gas chromatography was used in the analysis of neutral sugars and glycosidic linkages. To validate the conclusions achieved with the model compounds, coffee polysaccharide samples obtained from spent coffee grounds and instant coffee were also analysed. The results obtained from the model oligosaccharides when submitted to thermal treatment (dry) or oxidation induced by HO• (in solution) indicate the occurrence of depolymerization, which is in line with previous studies reporting the depolymerization of coffee galactomannans and arabinogalactans during roasting. Compounds resulting from sugar ring cleavage were also formed during thermal treatment and oxidative treatment of Ara3. On the other hand, the dry thermal treatment of the model oligosaccharides (alone or when mixed) promoted the formation of oligosaccharides with a higher degree of polymerization, and also polysaccharides with new type of glycosidic linkages, evidencing the occurrence of polymerization via non-enzymatic transglycosylation reactions induced by dry thermal treatment. The transglycosylation reactions induced by dry thermal treatment can occur between sugar residues from the same origin, but also of different origins, with formation of hybrid structures, containing arabinose and mannose in the case of the model compounds used. The results obtained from spent coffee grounds and instant coffee samples suggest the presence of hybrid polysaccharides in these processed coffee samples, corroborating the occurrence of transglycosylation during the roasting process. Furthermore, the study of mixtures containing different proportions of each model oligosaccharide, mimicking coffee bean regions with distinct polysaccharide composition, subjected to different periods of thermal treatment, allowed to infer that different hybrid and non-hybrid structures may be formed from arabinogalactans and galactomannans, depending on their distribution in the bean cell walls and on roasting conditions. These results may explain the heterogeneity of melanoidins structures formed during coffee roasting. 
The results obtained from model mixtures containing an oligosaccharide (Ara3 or Man3) and 5-CQA and subjected to dry thermal treatment, as well as samples derived from spent coffee grounds, showed the formation of hybrid compounds composed by CQA molecules covalently linked to a variable number of sugar residues. Moreover, the results obtained from the mixture containing Man3 and 5-CQA showed that CQA acts as catalyst of transglycosylation reactions. On the other hand, in the model mixtures containing a peptide, even if containing 5-CQA and subjected to the same treatment, it was observed a decrease in the extent of transglycosylation reactions. This outcome can explain the low extent of non-enzymatic transglycosylation reactions during roasting in coffee bean regions enriched in proteins, although polysaccharides are the major components of the coffee beans. The decrease of transglycosylation reactions in the presence of peptides/proteins can be related with the preferential reactivity of reducing residues with the amino groups of peptides/proteins by Maillard reaction, decreasing the number of reducing residues available to be directly involved in the transglycosylation reactions. In addition to the compounds already described, a diversity of other compounds were formed from model systems, namely dehydrated derivatives formed during dry thermal treatment. In conclusion, the identification of the structural modifications in coffee polysaccharides promoted by roasting pave the way to the understanding of the mechanisms of formation of melanoidins and structure-activity relationship of these compounds.
APA, Harvard, Vancouver, ISO, and other styles
31

Chu, An-ting, and 朱安婷. "The Sample Size Study of Moderated Mediation Effect." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/60975448202619310076.

Full text
Abstract:
Master's thesis, National University of Kaohsiung, Institute of Statistics (ROC year 100). Researchers in the social sciences, education, and psychology often study how and why variables are related; mediator and moderator variables were developed for this purpose. A moderated mediation effect combines mediation and moderation: a mediator transmits the effect of an independent variable on a dependent variable, and the size of that mediation effect depends on the value of a moderator. Several test methods for moderated mediation effects exist, and many researchers prefer regression-based tests. The most common approach uses the product of coefficients and assumes that the product is normally distributed, typically via the first-order multivariate delta method (Sobel, 1982, 1986). In fact, the moderated mediation effect is not normally distributed, so we recommend the bootstrapping method when the sample size is not large. After introducing mediation and moderation models, we extend the five moderated mediation models studied by Preacher et al. (2007) and use five testing methods to determine the sample size required, for different coefficient combinations, to reach adequate statistical power. We also provide guidelines to help researchers studying moderated mediation models determine sample size.
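As a rough illustration of the bootstrap approach recommended above (not the thesis's own code), the sketch below simulates one data set from a simple first-stage moderated mediation model, estimates the conditional indirect effect by ordinary least squares, and builds a percentile bootstrap confidence interval. The coefficient values, the moderator value w0, and the helper name conditional_indirect are hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one data set (hypothetical coefficient values, for illustration only)
n = 200
X = rng.normal(size=n)
W = rng.normal(size=n)                                        # moderator
M = 0.4 * X + 0.2 * W + 0.3 * X * W + rng.normal(size=n)      # a-path moderated by W
Y = 0.1 * X + 0.5 * M + rng.normal(size=n)                    # b-path

def conditional_indirect(X, W, M, Y, w0):
    """Indirect effect of X on Y through M when the moderator equals w0."""
    Xm = np.column_stack([np.ones_like(X), X, W, X * W])      # M ~ 1 + X + W + X:W
    a = np.linalg.lstsq(Xm, M, rcond=None)[0]
    Xy = np.column_stack([np.ones_like(X), X, M])             # Y ~ 1 + X + M
    b = np.linalg.lstsq(Xy, Y, rcond=None)[0]
    return (a[1] + a[3] * w0) * b[2]

# Percentile bootstrap confidence interval for the conditional indirect effect
w0 = 1.0            # evaluate the effect at moderator value 1 (e.g., roughly +1 SD)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(conditional_indirect(X[idx], W[idx], M[idx], Y[idx], w0))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimate = {conditional_indirect(X, W, M, Y, w0):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```

Wrapping such a simulation in an outer loop over many generated data sets, and counting how often the interval excludes zero, gives the empirical power that a sample size recommendation would be based on.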
APA, Harvard, Vancouver, ISO, and other styles
32

Lin, Yu-Hsin, and 林煜鑫. "Sample Size for Hypothesis Testing of Standardized Effect between Groups." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/86v3g4.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (ROC year 103). The standardized effect, known as the effect size, is a quantitative measure of a phenomenon's strength and gives substantive meaning to statistical results; reporting effect sizes is therefore considered good practice when presenting research findings in many fields. We examine the effect size observed in a sample through hypothesis testing: given an effect size under the null hypothesis, a significance level, and a desired power, we determine the optimal equal sample size per group and the optimal total sample size needed to decide between retaining and rejecting the null hypothesis about the effect size. The study considers Cohen's f and the root-mean-square standardized effect, finds the optimal sample size for each effect size measure under all combinations of conditions, and compares the optimal sample sizes across the same combinations of conditions.
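A minimal sketch of the kind of calculation described here, assuming the standard noncentral F formulation for a balanced one-way ANOVA with Cohen's f (noncentrality lambda = N times f squared); the function names and the example values (f = 0.25, four groups, 80% power, alpha = 0.05) are illustrative, not taken from the thesis.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(f_effect, k, n_per_group, alpha=0.05):
    """Power of the one-way ANOVA F test for Cohen's f with k groups of size n."""
    N = k * n_per_group
    dfn, dfd = k - 1, N - k
    lam = f_effect ** 2 * N                       # noncentrality parameter
    fcrit = f_dist.ppf(1 - alpha, dfn, dfd)
    return 1 - ncf.cdf(fcrit, dfn, dfd, lam)

def n_per_group_needed(f_effect, k, power=0.80, alpha=0.05):
    """Smallest equal group size reaching the target power."""
    n = 2
    while anova_power(f_effect, k, n, alpha) < power:
        n += 1
    return n

# Example: medium effect (f = 0.25), 4 groups, 80% power at alpha = 0.05
print(n_per_group_needed(0.25, k=4))
```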
APA, Harvard, Vancouver, ISO, and other styles
33

Su, Shih-Ming, and 蘇士鳴. "Sample Size for Precise Confidence Interval of Standardized Effect in Groups." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/3p49qx.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (ROC year 103). This thesis provides sample size calculations for precise confidence intervals of the standardized effect between groups. Planning sample size from the confidence interval perspective conveys more information about the population than a significance test alone: an adequate sample size narrows the confidence interval, so the estimate of the population effect is expected to be accurate. With modern statistical software, researchers can obtain the upper and lower limits of the noncentrality parameter and, from the resulting confidence interval of the effect size, calculate an appropriate sample size for different population configurations at the required confidence level.
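The noncentrality-inversion step mentioned here can be sketched as follows. This is a generic illustration of the confidence interval inversion principle for an observed F statistic, not the thesis's own procedure; the example F value, the degrees of freedom, and the search bracket (0 to 1e4) are assumptions.

```python
from scipy.stats import ncf
from scipy.optimize import brentq

def ncp_confidence_interval(f_obs, dfn, dfd, level=0.95):
    """Confidence interval for the noncentrality parameter of an observed F statistic,
    obtained by inverting the noncentral F CDF (the CI inversion principle)."""
    lo_target, hi_target = (1 + level) / 2, (1 - level) / 2
    def bound(target):
        g = lambda nc: ncf.cdf(f_obs, dfn, dfd, nc) - target
        if g(0.0) * g(1e4) > 0:            # no sign change: limit truncated at zero
            return 0.0
        return brentq(g, 0.0, 1e4)
    return bound(lo_target), bound(hi_target)

# Example: F = 4.2 with 3 and 76 degrees of freedom (k = 4 groups, N = 80)
dfn, dfd, N = 3, 76, 80
lam_lo, lam_hi = ncp_confidence_interval(4.2, dfn, dfd)
# Convert the noncentrality limits to limits for Cohen's f (lambda = N * f^2)
print((lam_lo / N) ** 0.5, (lam_hi / N) ** 0.5)
```

A sample size routine in this spirit would repeat the calculation over candidate values of N until the implied interval for the effect size is narrower than the target width.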
APA, Harvard, Vancouver, ISO, and other styles
34

Yang, Ya-Zheng, and 楊亞政. "Sample Size and Power Calculations of One Way Random Effect Model." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/58529462351737015317.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (ROC year 101). In traditional analysis of variance the intraclass correlation coefficient (ICC) is easily ignored, which can inflate the within-group error variance; the one-way random effects ANOVA model measures within-group similarity more appropriately. Sample size is essential for inference about the population and is the focus of this study. Using a hypothesis testing approach based on the F distribution, the required sample size is calculated for a fixed significance level and power, and combining this calculation with a cost function yields the optimal sample size. The results are verified against previous studies through iterative computation in the statistical software SAS.
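A small sketch of the F-based power calculation described above for the balanced one-way random effects model, using the standard result that MSB/MSW is (1 + n*theta) times a central F variate with theta = rho/(1 - rho). The function names, the default rho0 = 0, and the example values are illustrative assumptions; a cost function could be layered on top to pick the cheapest (k, n) combination.

```python
from scipy.stats import f as f_dist

def icc_power(k, n, rho1, rho0=0.0, alpha=0.05):
    """Power of the one-way random effects F test of H0: ICC = rho0 vs H1: ICC = rho1,
    with k groups and n observations per group (balanced design)."""
    dfn, dfd = k - 1, k * (n - 1)
    theta0, theta1 = rho0 / (1 - rho0), rho1 / (1 - rho1)
    fcrit = f_dist.ppf(1 - alpha, dfn, dfd)
    # Under the true ICC, MSB/MSW is (1 + n*theta) times a central F variate
    return f_dist.sf(fcrit * (1 + n * theta0) / (1 + n * theta1), dfn, dfd)

def groups_needed(n, rho1, rho0=0.0, power=0.80, alpha=0.05):
    """Smallest number of groups k reaching the target power for a fixed group size n."""
    k = 2
    while icc_power(k, n, rho1, rho0, alpha) < power:
        k += 1
    return k

print(groups_needed(n=5, rho1=0.30))   # groups needed to detect ICC = 0.30 with 5 per group
```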
APA, Harvard, Vancouver, ISO, and other styles
35

Wu, Jo-han, and 吳若涵. "The sample size effect and deformation mechanism of single crystal aluminum." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/51871490941770380921.

Full text
Abstract:
Master's thesis, National Sun Yat-sen University, Department of Materials and Optoelectronic Science (ROC year 103). In recent years an important issue in nanotechnology has been the size effect in nanoscale materials: nanoscale materials possess mechanical properties that differ markedly from those of bulk materials. The size effect applies not only to nanocrystalline materials but also to nano-structured single crystals, and the strength size effect has been studied in terms of the Hall-Petch relationship; the strength of a single crystal increases as the sample size decreases toward the nanoscale. Among the metallic elements with a face-centered cubic (FCC) structure, aluminum (Al) was selected for this study because of its low cost and many excellent physical and chemical properties, as well as its low twin boundary energy, high specific strength, and high stacking fault energy. The mechanical properties and deformation behavior of single crystal aluminum with different sizes and loading directions were investigated through tension and compression tests. The yield stresses for mini-scale tension, micro-scale compression, and nano-scale tension loaded along the [111] direction are 60 MPa, 180 MPa, and 0.75 GPa, respectively, and the yield stress for micro-scale compression loaded along the [110] direction is 116 MPa. The size effect and the orientation effect in single crystal aluminum were thereby revealed and rationalized.
APA, Harvard, Vancouver, ISO, and other styles
36

Chou, Shih-Gang, and 周仕罡. "Sample Size for Hypothesis Testing of Standardized Multivariate Effect between Groups." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/23741977171513614281.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (ROC year 103). Sample size is a concern in many fields of research: substantial human resources and time can be saved when researchers know how to determine the optimum sample size for their studies. The effect size is a quantitative measure of the strength of association or of the difference between populations; reporting it gives a study more practical meaning and makes its importance easier to judge. The form of the effect size varies with the statistical method and the field. To reflect realistic conditions, we use a MANOVA model and Monte Carlo simulation, and within the framework of null-hypothesis significance testing we discuss the optimal sample size per group given an accepted significance level and power.
APA, Harvard, Vancouver, ISO, and other styles
37

Sun, You-Jia, and 孫佑嘉. "Sample Size Requirement for Confidence Interval of Standardized Effect in One-Way MANOVA." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/15894725371411102387.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (ROC year 103). When planning a study, the confidence interval is expected to be narrow, and a sufficient sample size is necessary to keep the interval narrow and the estimate accurate. This thesis provides sample size calculations for precise confidence intervals of the standardized effect in one-way MANOVA. The approximate multivariate sampling distribution is produced by stochastic simulation, and the upper and lower limits of the noncentrality parameter are obtained through the confidence interval inversion principle. The confidence interval of the effect size is then used to calculate an appropriate sample size for different combinations of population parameters at the required confidence level.
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Ying-pu, and 林應璞. "Investigation of the Effect of Training Sample Size on Performance of 2D and 2.5D Face Recognition." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/15741076427990963964.

Full text
Abstract:
Master's thesis, National Cheng Kung University, Department of Systems and Naval Mechatronic Engineering (ROC year 97). The purpose of this thesis is to investigate the effect of training sample size on the performance of 2D and 2.5D face recognition. The recognition methods combine feature extraction (Haar wavelet transform, principal component analysis, and improved principal component analysis) with classification (Euclidean distance, the nearest feature line method, and linear discriminant analysis) techniques. The thesis seeks a suitable recognition method and characterizes the relationship between training sample size and face recognition rate. In 2D face recognition, a facial image is first captured by a CCD camera and an image pre-processing technique is applied to extract the facial region. In 2.5D face recognition, the facial model is built using the photometric stereo method (PSM) to obtain depth and pixel values; since the 2.5D face model is constructed in a dark room, the pixel values are not affected by the ambient light intensity, so the combination of depth and pixel values is used as the feature vector. Simulations of 2D face recognition based on the ORL (Olivetti Research Lab), GIT (Georgia Institute of Technology), CIT (California Institute of Technology), ESSEX (University of Essex), UMIST (University of Manchester Institute of Science and Technology), and the author's own databases are performed to derive the relationship between training sample size and recognition rate. The combination of improved principal component analysis and Euclidean distance achieves the best recognition rate. When the training sample size is between 13 and 17, the recognition rate exceeds 85%; increasing the training sample size above 18 raises the recognition rate only slightly while increasing the recognition time. For a large-scale database (ESSEX), the recognition rate exceeds 92% when the training sample size is between 13 and 19, so the recognition rate is stable for large databases, and increasing the training sample size to 25 does not significantly improve it. Thus, enlarging the training sample does not necessarily yield better recognition rates. The simulation of 2.5D face recognition is based on the author's own database with the same recognition method (improved principal component analysis with Euclidean distance). When the training sample size is between 14 and 17, the recognition rate is above 84% and stable; increasing it above 17 raises the recognition rate to 93.93%, but further increases do not improve the rate significantly and lengthen the recognition time. Finally, the 2D and 2.5D face recognition algorithms are integrated into a real-time face recognition system using the Instrument Control Toolbox and the Graphical User Interface in the Matlab environment. Index terms: training sample size, photometric stereo method, Haar wavelet transform, principal component analysis, improved principal component analysis, Euclidean distance, linear discriminant analysis, nearest feature line.
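To make the qualitative finding concrete, the sketch below reproduces the general pattern (recognition rate rising with training sample size and then saturating) with a PCA plus nearest-neighbour (Euclidean) pipeline on scikit-learn's built-in digits data as a stand-in for a face database. The dataset, the 15 principal components, and the per-class training sizes are assumptions for illustration, not the thesis's databases or settings.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# PCA feature extraction + 1-nearest-neighbour (Euclidean) classification,
# evaluated for several training-set sizes per class
digits = load_digits()                      # 10 classes, roughly 180 images per class
X, y = digits.data, digits.target

for n_train_per_class in (2, 5, 10, 15, 20):
    rng = np.random.default_rng(0)
    train_idx, test_idx = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.where(y == c)[0])
        train_idx.extend(idx[:n_train_per_class])
        test_idx.extend(idx[n_train_per_class:])
    model = make_pipeline(PCA(n_components=15), KNeighborsClassifier(n_neighbors=1))
    model.fit(X[train_idx], y[train_idx])
    rate = model.score(X[test_idx], y[test_idx])
    print(f"{n_train_per_class:2d} training images per class -> recognition rate {rate:.3f}")
```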
APA, Harvard, Vancouver, ISO, and other styles
39

"Determining Appropriate Sample Sizes and Their Effects on Key Parameters in Longitudinal Three-Level Models." Doctoral diss., 2016. http://hdl.handle.net/2286/R.I.40260.

Full text
Abstract:
Through a two-study simulation design with different design conditions (the level 1 (L1) sample size was set to 3, the level 2 (L2) sample size ranged from 10 to 75, the level 3 (L3) sample size ranged from 30 to 150, the intraclass correlation (ICC) ranged from 0.10 to 0.50, and model complexity ranged from one to three predictors), this study intends to provide general guidelines about adequate sample sizes at all three levels under varying ICC conditions for a viable three-level HLM analysis (e.g., reasonably unbiased and accurate parameter estimates). The data-generating parameters were obtained from a large-scale longitudinal data set from North Carolina, provided by the National Center on Assessment and Accountability for Special Education (NCAASE). I discuss ranges of sample sizes that are inadequate or adequate for convergence, absolute bias, relative bias, root mean squared error (RMSE), and coverage of individual parameter estimates. The study, with the help of a detailed two-part simulation design covering various sample sizes, model complexities, and ICCs, provides several options for adequate sample sizes under different conditions, and emphasizes that adequate sample sizes at L1, L2, and L3 can be adjusted according to the parameter estimates of interest and to different acceptable ranges of absolute bias, relative bias, RMSE, and coverage. Under different model complexity and ICC conditions, the study helps researchers identify the L1, L2, or L3 sample size as the source of variation in absolute bias, relative bias, RMSE, or coverage proportions for a given parameter estimate, assisting them in selecting adequate sample sizes for a three-level HLM analysis. A limitation of the study is the use of a single distribution for the dependent and explanatory variables; different types of distributions and their effects might lead to different sample size recommendations. Dissertation/Thesis. Doctoral Dissertation, Educational Psychology, 2016.
APA, Harvard, Vancouver, ISO, and other styles
40

TABASSO, MYRIAM. "Spatial regression in large datasets: problem set solution." Doctoral thesis, 2014. http://hdl.handle.net/11573/918515.

Full text
Abstract:
In this dissertation we investigate how to combine data mining methods with traditional spatial autoregressive models in the context of large spatial datasets. We first consider the numerical difficulties of handling massive datasets with the usual approaches, maximum likelihood estimation for spatial models and spatial two-stage least squares. We then conduct a Monte Carlo simulation experiment to compare the accuracy and computational complexity of decomposition and approximation techniques for computing the Jacobian term in spatial models over various regular lattice structures. In particular, we consider one of the most common spatial econometric models, the spatial lag (SAR, spatial autoregressive) model. We also provide new evidence on the double effect these methods have on computational complexity: the influence of a "size effect" and of a "sparsity effect". To overcome this computational problem, we propose a data mining methodology based on CART (Classification and Regression Trees) that explicitly models spatial autocorrelation in the pseudo-residuals in order to remove this effect and improve accuracy, with significant savings in computational complexity across a wide range of spatial datasets, both real and simulated.
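The Jacobian term that drives the computational cost here is ln|I - rho*W| in the SAR log-likelihood. The sketch below, a generic illustration rather than the dissertation's code, builds a row-standardised rook-contiguity matrix for a small regular lattice and compares a dense eigenvalue evaluation of the log-determinant with a sparse LU decomposition; the lattice size and the value of rho are arbitrary example choices.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import splu

def rook_weights(r, c):
    """Row-standardised rook-contiguity weight matrix for an r x c regular lattice."""
    n = r * c
    W = sparse.lil_matrix((n, n))
    for i in range(r):
        for j in range(c):
            k = i * c + j
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < r and 0 <= jj < c:
                    W[k, ii * c + jj] = 1.0
    W = W.tocsr()
    return sparse.diags(1.0 / np.asarray(W.sum(axis=1)).ravel()) @ W

rho, W = 0.5, rook_weights(30, 30)          # 900 observations on a 30 x 30 lattice
n = W.shape[0]

# Exact log|I - rho*W| from the eigenvalues of W (dense; feasible only for moderate n)
eig = np.linalg.eigvals(W.toarray())
logdet_eig = np.sum(np.log(1.0 - rho * eig)).real

# Sparse LU decomposition of (I - rho*W): log-determinant from the factor diagonals
A = sparse.identity(n, format="csc") - rho * W.tocsc()
lu = splu(A)
logdet_lu = np.sum(np.log(np.abs(lu.U.diagonal()))) + np.sum(np.log(np.abs(lu.L.diagonal())))

print(logdet_eig, logdet_lu)
```

For large, sparse lattices the LU route scales far better than the dense eigenvalue route, which is the "size effect" versus "sparsity effect" trade-off discussed in the abstract.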
APA, Harvard, Vancouver, ISO, and other styles
41

Kim, Insuk. "Tests of independence in 2 by 2 contingency tables : an examination of type I error and statistical power related to sample size, marginal distribution, and effect size." 2001. http://purl.galileo.usg.edu/uga%5Fetd/kim%5Fin-suk%5F200112%5Fma.

Full text
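This entry has no abstract in the listing, but its title describes a Monte Carlo examination of type I error and statistical power for tests of independence in 2 by 2 tables across sample sizes, marginal distributions, and effect sizes. A generic sketch of that kind of simulation, with invented cell probabilities and settings rather than the thesis's actual conditions, is given below.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(7)

def rejection_rate(cell_probs, n, reps=5000, alpha=0.05):
    """Monte Carlo rejection rate of the Pearson chi-square test of independence
    for 2x2 tables drawn from the given cell probabilities with sample size n."""
    rejections, valid = 0, 0
    for _ in range(reps):
        table = rng.multinomial(n, cell_probs).reshape(2, 2)
        if (table.sum(axis=0) == 0).any() or (table.sum(axis=1) == 0).any():
            continue                      # test undefined when a margin is empty
        _, p, _, _ = chi2_contingency(table, correction=False)
        valid += 1
        rejections += p < alpha
    return rejections / valid

# Independent cells (odds ratio 1): the rejection rate estimates the type I error
null_probs = np.outer([0.5, 0.5], [0.3, 0.7]).ravel()
# Associated cells: the rejection rate estimates power for this effect size and n
alt_probs = np.array([0.25, 0.25, 0.05, 0.45])

for n in (20, 50, 100, 200):
    print(n, rejection_rate(null_probs, n), rejection_rate(alt_probs, n))
```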
APA, Harvard, Vancouver, ISO, and other styles
42

Yu, Tzu-Hao, and 尤子豪. "Effects of sample size and whether coarse roots are sampled on the performance of the Root Bundle Model Weibull." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/r8k5n3.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Soil and Water Conservation (ROC year 106). Naturally regenerated forests cover more than 30% of the area of Taiwan, and many of them have high species diversity. To better quantify forest effects on slope stability, as many roots as possible should be sampled for every tree species in a forest, yet excavating root samples is time-consuming and labor-intensive. Root pull-out resistance could be measured for more species if the root sample size could be reduced without compromising estimation precision. Via regression approaches, the Root Bundle Model Weibull (RBMw) uses pull-out resistance data from single roots to estimate the reinforcement that root bundles provide to the soil mass. Regression approaches tend to be sensitive to extreme values, which in the present study are the pull-out forces of the coarsest roots, and finding coarse roots is time-consuming, especially for tree species of small stature and low abundance. The aims of the present study are to identify whether the precision of RBMw root reinforcement estimates is compromised by reducing the root sample size or by missing the coarsest roots. The study site is located in the 25-ha Lien-hua-chih plot in central Taiwan. The three dominant species (Randia cochinchinensis, Schefflera octophylla, Cryptocarya chinensis) were chosen for single-root pull-out tests, and root properties (pull-out force, displacement, and root density in the soil profile) were collected for modeling the reinforcement of root bundles. Fifty single-root pull-out tests were performed for each of the three species. Results derived from the 50-root samples were used as reference data to evaluate the effects of reduced sample size (40 and 30 roots) and of missing the coarsest roots. Under similar conditions (DBH and distance from the stem), root reinforcement is highest for Randia cochinchinensis and lowest for Schefflera octophylla. Random sampling without replacement shows that the estimation errors become significant when the coarsest roots are missed. For the estimation of single-root pull-out force, a sample of only 30 roots still provides estimates that do not differ significantly from those of the 50-root sample; for the estimation of root bundle reinforcement, however, the results of the 30- and 40-root samples differ significantly from those of the 50-root sample. The effect of missing the coarsest roots is more pronounced than the effect of sample size. When using the RBMw to estimate the reinforcement of root bundles of a given species, it is better to increase both the sample size and the diameter of the coarsest sampled roots; under limited sample sizes, sampling coarse roots still provides root reinforcement estimates with a certain degree of precision.
APA, Harvard, Vancouver, ISO, and other styles
43

Chan, Hsuan-Wei, and 詹玄維. "Balance the Effect of Forecast in Different Sample Sizes." Thesis, 2014. http://ndltd.ncl.edu.tw/handle/73225213928606335763.

Full text
Abstract:
Master's thesis, National Pingtung University of Education, Department of Applied Mathematics (ROC year 102). The goal of this research is to forecast the next-day closing price of TCC stock using data collected from January 2010 to January 2014. Monte Carlo simulations with a geometric Brownian motion model and a historical volatility model are used to forecast the stock price, and the deviations between the estimated and actual values are compared across different sample sizes using RMSE, MAE, and MAPE as accuracy measures. A program written in Excel 2010 assists with data collection and calculation. The results show that different sample sizes behave differently in different market environments, but, generally speaking, a sample size of 50 performs better than the others.
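A minimal sketch of the forecasting scheme described above, using numpy instead of Excel: drift and volatility are estimated from a rolling window of daily log returns (the historical volatility), one-day-ahead prices are simulated under geometric Brownian motion, and forecasts are scored with RMSE, MAE, and MAPE. The synthetic price series, the window lengths, and the path count are assumptions; the actual TCC closing prices would replace the generated series.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_next_close(prices, window, n_paths=10000):
    """One-day-ahead GBM forecast: drift and volatility come from the last
    `window` daily log returns, averaged over Monte Carlo paths (dt = 1 day)."""
    r = np.diff(np.log(prices[-(window + 1):]))
    mu, sigma = r.mean(), r.std(ddof=1)
    z = rng.standard_normal(n_paths)
    paths = prices[-1] * np.exp((mu - 0.5 * sigma**2) + sigma * z)
    return paths.mean()

def rmse(e): return np.sqrt(np.mean(e**2))
def mae(e):  return np.mean(np.abs(e))
def mape(e, actual): return np.mean(np.abs(e / actual)) * 100

# Hypothetical price series standing in for the real closing prices
prices = 40 * np.exp(np.cumsum(rng.normal(0.0002, 0.015, 1000)))

# Walk forward: forecast each day from the preceding `window` days, compare windows
for window in (20, 50, 100):
    preds = np.array([forecast_next_close(prices[:t], window) for t in range(200, 1000)])
    actual = prices[200:1000]
    err = preds - actual
    print(window, rmse(err), mae(err), mape(err, actual))
```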
APA, Harvard, Vancouver, ISO, and other styles
44

Lai, Yen-Huei, and 賴炎暉. "Effects of Sample Size on Various Metallic Glass Micropillars in Microcompression." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/53846581694925960292.

Full text
Abstract:
Doctoral dissertation, National Sun Yat-sen University, Department of Materials and Optoelectronic Science (ROC year 98). Over the past decades, bulk metallic glasses (BMGs) have attracted extensive interest because of their unique properties, such as good corrosion resistance, a large elastic limit, and high strength and hardness. With the advent of micro-electro-mechanical systems (MEMS) and other microscale devices, the fundamental properties of micrometer-sized BMGs have become increasingly important. This study presents a methodology for performing uniaxial compression tests on BMGs with micron-sized dimensions. Micropillars with diameters of 3.8, 1, and 0.7 μm are successfully fabricated from Mg65Cu25Gd10 and Zr63.8Ni16.2Cu15Al5 BMGs using a focused ion beam and then tested in microcompression at room temperature and strain rates from 1 × 10⁻⁴ to 1 × 10⁻² s⁻¹. Microcompression tests on the Mg- and Zr-based BMG pillars show an obvious sample size effect, with the yield strength increasing as the sample diameter decreases. The strength increase can be rationalized by Weibull statistics for brittle materials, and the Weibull moduli of the Mg- and Zr-based BMGs are estimated to be about 35 and 60, respectively; the higher Weibull modulus of the Zr-based BMG is consistent with the more ductile nature of this system. In addition, high-temperature microcompression tests are performed to investigate the deformation behavior of micron-sized Au49Ag5.5Pd2.3Cu26.9Si16.3 BMG pillars from room temperature up to the glass transition temperature (~400 K). For the 1 μm Au-based BMG pillars, a transition from inhomogeneous to homogeneous flow is clearly observed at or near the glass transition temperature; specifically, the flow transition temperature is about 393 K at the strain rate of 1 × 10⁻² s⁻¹. For the 3.8 μm Au-based BMG pillars, microcompression tests at 395.9-401.2 K are used to investigate the homogeneous deformation behavior. The strength decreases with increasing temperature and decreasing strain rate, and the plastic flow behavior can be described by a shear transformation zone model; the activation energy and the size of the basic flow unit are deduced and compare favorably with theory.
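The Weibull modulus quoted above is typically obtained by fitting the two-parameter Weibull distribution to a set of measured strengths. A small sketch of that fit, with hypothetical strength values and median-rank plotting positions, is given below; it is a generic estimator, not the dissertation's data or code.

```python
import numpy as np

def weibull_modulus(strengths):
    """Estimate the Weibull modulus m by linear regression of
    ln(-ln(1 - F_i)) on ln(sigma_i), using median-rank plotting positions."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)      # median-rank estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - F))
    m, intercept = np.polyfit(x, y, 1)               # slope = Weibull modulus
    sigma0 = np.exp(-intercept / m)                  # characteristic strength
    return m, sigma0

# Hypothetical yield strengths (MPa) of nominally identical micropillars
strengths = [820, 845, 860, 905, 915, 930, 950, 980, 1010, 1040]
m, sigma0 = weibull_modulus(strengths)
print(f"Weibull modulus ~ {m:.1f}, characteristic strength ~ {sigma0:.0f} MPa")
```

A higher fitted modulus corresponds to a narrower scatter of strengths, which is why the more ductile Zr-based glass above shows the larger value.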
APA, Harvard, Vancouver, ISO, and other styles
45

Chuang, Wen-Shuo, and 莊文碩. "Deformation mechanisms and sample size effects of Mg-Zn-Y 18R LPSO." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/59uk37.

Full text
Abstract:
Doctoral dissertation, National Sun Yat-sen University, Department of Materials and Optoelectronic Science (ROC year 106). With the long-period stacking ordered (LPSO) second phase, novel Mg-Zn-Y based alloys with excellent mechanical properties have been developed. Because the LPSO phase plays an important role in strengthening, it is urgent to clarify the deformation mechanisms within the LPSO phase. Until now, much research has been done on the LPSO structure and on the deformation mechanism along its growth direction, [11-20], but all at the millimeter scale. However, because the grain size of the LPSO phase is typically 30 to 150 μm, the testing sample sizes must be reduced in order to examine LPSO single crystals clearly. Moreover, the mechanical properties and deformation mechanisms along different crystal orientations have not been clarified. In this study, the yield strength and the deformation mechanisms of the Mg-Zn-Y 18R LPSO structure with two different orientations, (11-20) and (0001), are systematically examined and analyzed at the micrometer scale by micropillar compression, and the basic mechanical properties are measured by nanoindentation. For compression along [11-20], different deformation behavior is observed for different sample sizes: prismatic slip is observed in the smaller micropillars, while deformation kinks are observed in the larger micropillars. The origin of the deformation kink is found to be prismatic dislocation slip. According to Frank's rule, the interaction between prismatic dislocations produces stair-rod dislocations, which form the kink boundaries and nucleate the basal dislocations that induce the deformation kinks. This indicates that the yield mechanism in the Mg-Zn-Y 18R LPSO single crystal compressed along [11-20] is the same across sample sizes, as confirmed both by TEM analysis and by fitting the ln-ln d curves. The different deformation behaviors are caused by the sample size effect: with larger sample sizes, the probability of forming stair-rod dislocations becomes higher, so the probability of forming deformation kinks also increases. These results are compared with those reported in the literature at the millimeter scale. For compression along [0001], since few studies have examined deformation along this direction, we attempt to work out the full deformation mechanism. The basal dislocations activate first, causing bending of the micropillars. The bending leads to the formation of prismatic screw dislocations, which form a slip band at 45 degrees to the [0001] loading direction. The bending is considered to produce a shear stress along [0001] that induces the nucleation of prismatic dislocations at 45 degrees to the loading direction, forming the slip band that causes the first pop-in in the stress-strain curves. Our results show that the deformation behavior is the same for the different sample sizes. The sample size effect on the first pop-in stress is also examined using the ln-ln d fitting. Interestingly, the first pop-in stress of the 3.8 μm micropillar rises to the level of the 2.7 μm micropillar, because pre-existing dislocations influence the slip of basal dislocations and delay the formation of the slip band. Both sets of experiments show the trend of higher flow stresses at smaller sample sizes because of the sample size effect. 
Comparing the slopes of the ln-ln d curves (or, equivalently, the Weibull moduli) with data in the literature shows that both yielding under compression along [11-20] and the first pop-in under compression along [0001] are caused by prismatic slip, consistent with our transmission electron microscopy analysis.
APA, Harvard, Vancouver, ISO, and other styles
46

Wang, Lifei. "Species Distribution Modeling: Implications of Modeling Approaches, Biotic Effects, Sample Size, and Detection Limit." Thesis, 2013. http://hdl.handle.net/1807/43754.

Full text
Abstract:
When we develop and use species distribution models to predict species' current or potential distributions, we are faced with trade-offs between model generality, precision, and realism. It is important to know how to improve and validate model generality while maintaining good model precision and realism. However, it is difficult for ecologists to evaluate species distribution models using field-sampled data alone because the true species response function to environmental or ecological factors is unknown. Species distribution models should be able to approximate the true characteristics and distributions of species if ecologists want to use them as reliable tools. Simulated data provide the advantage of knowing the true species-environment relationships and of controlling the causal factors of interest, giving insight into the effects of these factors on model performance. I used a case study on Bythotrephes longimanus distributions from several hundred Ontario lakes and a simulation study to explore the effects on model performance caused by several factors: the choice of predictor variables, the model evaluation methods, the quantity and quality of the data used for developing models, and the strengths and weaknesses of different species distribution models. Linear discriminant analysis, multiple logistic regression, random forests, and artificial neural networks were compared in both studies. Results based on field data sampled from lakes indicated that the predictive performance of the four models was more variable when they were developed on abiotic (physical and chemical) conditions alone, whereas the generality of these models improved when biotic (relevant species) information was included. With simulated data, although the overall performance of random forests and artificial neural networks was better than that of linear discriminant analysis and multiple logistic regression, linear discriminant analysis and multiple logistic regression had relatively good and stable model sensitivity at different sample size and detection limit levels, which may be useful for predicting species presences when data are limited. Random forests performed consistently well at different sample size levels but were more sensitive to a high detection limit. The performance of artificial neural networks was affected by both sample size and detection limit, and it was more sensitive to small sample sizes.
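A compact sketch of the kind of model comparison described here, using scikit-learn implementations of the four classifiers on synthetic presence/absence data where the true generating process is known. The data generator, class imbalance, training sizes, and model settings are illustrative assumptions, and AUC is used as the single performance score.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "logistic": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural net": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
}

# Synthetic presence/absence data standing in for lake records (true process known)
X, y = make_classification(n_samples=3000, n_features=8, n_informative=5,
                           weights=[0.7, 0.3], random_state=1)

for n in (100, 300, 1000):                       # training sample sizes to compare
    Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=n, stratify=y, random_state=2)
    aucs = {}
    for name, model in models.items():
        model.fit(Xtr, ytr)
        aucs[name] = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
    print(n, {k: round(v, 3) for k, v in aucs.items()})
```

Repeating such a loop over many simulated data sets, and additionally degrading the response to mimic detection limits, would give the kind of sensitivity comparison reported in the abstract.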
APA, Harvard, Vancouver, ISO, and other styles
47

Jiang, Hong-You, and 江鴻猷. "Effects of sample size on accuracy of MaxEnt : A case study of Fagus hayatae." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/6f79ms.

Full text
Abstract:
Master's thesis, National Chung Hsing University, Department of Forestry (ROC year 102). The accuracy of species distribution modeling (SDM) is affected not only by the species, the modeling method and scale, and the environmental predictors, but also by sample size. In this study, we extracted occurrence data of Fagus hayatae with 1 to 8,014 occurrence points, together with 8 environmental predictors, to evaluate the effect of sample size on MaxEnt through the AUC, TSS, and Kappa indices. The results show that, with a 40 m arithmetic series of sampling scales, the accuracy of MaxEnt increased with sample size, accompanied by decreasing uncertainty, until the AUC, TSS, and Kappa indices reached their maximum values. However, based on the AUC index, exaggerated sample sizes such as 2,008 and 8,014 points led to a slight reduction in MaxEnt accuracy. On the other hand, some cases using small sample sizes (fewer than 10 points) also performed well, indicating that a small sample drawn under appropriate sampling conditions can support reliable modeling; for modeling the distribution of F. hayatae, a spatially unbiased sample of 35 to 200 occurrence points was appropriate. AUC, TSS, and Kappa were inconsistent in evaluating SDM performance, because Kappa is sensitive to prevalence and is therefore not suitable for evaluating the effect of sample size on SDM accuracy. Jackknife analysis and principal component analysis were used to assess the importance of the eight environmental variables in the model; annual precipitation, precipitation seasonality, temperature seasonality, and the warmth index had greater influence than the other variables, confirming that heat and moisture factors have an important impact on F. hayatae. The predicted habitat suitability map shows that some areas where F. hayatae was found in past decades have disappeared, possibly through competition or climate change, or could not be rediscovered because of the limited population size. Further investigation is needed to clarify the decline in the distribution of F. hayatae, probably using meteorological data to explore environmental changes in these areas.
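Since the choice among AUC, TSS, and Kappa matters here, the sketch below shows how TSS and Cohen's kappa are computed from a 2 by 2 confusion matrix for thresholded presence/absence predictions, making the prevalence dependence of kappa explicit; the toy observation and prediction vectors are invented for illustration.

```python
import numpy as np

def confusion(obs, pred):
    """2x2 confusion counts for binary presence/absence data."""
    obs, pred = np.asarray(obs, bool), np.asarray(pred, bool)
    tp = np.sum(obs & pred); fn = np.sum(obs & ~pred)
    fp = np.sum(~obs & pred); tn = np.sum(~obs & ~pred)
    return tp, fp, fn, tn

def tss(obs, pred):
    """True skill statistic = sensitivity + specificity - 1 (prevalence independent)."""
    tp, fp, fn, tn = confusion(obs, pred)
    return tp / (tp + fn) + tn / (tn + fp) - 1

def kappa(obs, pred):
    """Cohen's kappa: agreement corrected for chance (depends on prevalence)."""
    tp, fp, fn, tn = confusion(obs, pred)
    n = tp + fp + fn + tn
    po = (tp + tn) / n
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n**2
    return (po - pe) / (1 - pe)

obs  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]     # toy presence/absence observations
pred = [1, 1, 0, 0, 0, 1, 0, 0, 1, 0]     # thresholded model predictions
print(tss(obs, pred), kappa(obs, pred))
```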
APA, Harvard, Vancouver, ISO, and other styles
48

Ye, Fan. "Investigating the Effects of Sample Size, Model Misspecification, and Underreporting in Crash Data on Three Commonly Used Traffic Crash Severity Models." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-05-9360.

Full text
Abstract:
Numerous studies have documented the application of crash severity models to explore the relationship between crash severity and its contributing factors. These studies show that a large amount of work has been conducted on this topic, usually focused on different types of models; however, only a limited amount of research has compared the performance of different crash severity models. Additionally, three major issues related to the modeling process for crash severity analysis have not been sufficiently explored: sample size, model misspecification, and underreporting in crash data. Therefore, in this research, three commonly used traffic crash severity models, the multinomial logit model (MNL), the ordered probit model (OP), and the mixed logit model (ML), were studied with respect to the effects of sample size, model misspecification, and underreporting in crash data, via a Monte Carlo approach using simulated and observed crash data. The results on sample size effects for the three models are consistent with prior expectations in that small sample sizes significantly affect the development of crash severity models, no matter which model type is used. Among the three models, the ML model was found to require the largest sample size, while the OP model required the smallest; the sample size requirement for the MNL model is intermediate between the other two. In addition, when the sample size is sufficient, the model misspecification analysis leads to the following suggestions: to decrease the bias and variability of estimated parameters, logit models should be selected over probit models, and more general and flexible models, such as those allowing randomness in the parameters (i.e., the ML model), should be preferred. Another important finding was that the analysis of underreported data showed that none of the three models is immune to the underreporting issue. To minimize bias and reduce the variability of the model, fatal crashes should be set as the baseline severity for the MNL and ML models, while for the OP model the crash severity should be ranked from fatal to property-damage-only (PDO) in descending order. Furthermore, when full or partial information about the unreported rates for each severity level is known, treating the crash data as outcome-based samples in model estimation, via the Weighted Exogenous Sample Maximum Likelihood Estimator (WESMLE), dramatically improves the estimation for all three models compared with the results produced by the standard Maximum Likelihood Estimator (MLE).
APA, Harvard, Vancouver, ISO, and other styles
49

Hung, Yu-Yan, and 洪于硯. "The power and sample size calculation for one-way random effects model - The Unbalanced Case." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/6w2t6w.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Management Science (ROC year 101). In experimental design, sample size plays an important role in the reliability and confidence level of an experiment. Under cost and resource constraints, how can we determine the optimal sample size required to estimate characteristics of the population? In this research, we study the power of tests of the intraclass correlation coefficient for the unbalanced case of the one-way random effects model.
APA, Harvard, Vancouver, ISO, and other styles
50

Grimsey, Ian M., S. W. Booth, Roberts Sarra N. Campbell, and Adrian C. Williams. "Quantitative Analysis Of Mannitol Polymorphs - X-Ray Powder Diffractometry. Exploring Preferred Orientation Effects." 2009. http://hdl.handle.net/10454/3288.

Full text
Abstract:
Mannitol is a polymorphic pharmaceutical excipient which commonly exists in three forms: alpha, beta, and delta. Each polymorph has a needle-like morphology, which can give preferred orientation effects when analysed by X-ray powder diffractometry (XRPD), thus creating difficulties for quantitative XRPD assessments. The occurrence of preferred orientation may be demonstrated by sample rotation, and its effects on the X-ray data can be minimised by reducing the particle size. Using two particle size ranges (<125 and 125–500 µm), binary mixtures of beta and delta mannitol were prepared and the delta component was quantified. Samples were assayed in either a static or a rotating sampling accessory. Rotation and reducing the particle size range to <125 µm halved the limits of detection and quantitation to 1 and 3.6%, respectively. Numerous potential sources of assay error were investigated; sample packing and mixing errors contributed the greatest source of variation. However, rotation of the samples reduced the majority of the assay errors examined for both particle size ranges. This study shows that coupling sample rotation with a particle size reduction minimises preferred orientation effects on assay accuracy, allowing discrimination of two very similar polymorphs at around the 1% level.
APA, Harvard, Vancouver, ISO, and other styles