
Dissertations / Theses on the topic 'Mixed models design'


Consult the top 50 dissertations / theses for your research on the topic 'Mixed models design.'


1

Schmelter, Thomas. "Experimental design for mixed models with application to population pharmacokinetic studies." [S.l.] : [s.n.], 2007. http://deposit.ddb.de/cgi-bin/dokserv?idn=98529650X.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Yang, Xiao. "Optimal Design of Single Factor cDNA Microarray experiments and Mixed Models for Gene Expression Data." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26379.

Full text
Abstract:
Microarray experiments are used to perform gene expression profiling on a large scale. E- and A-optimality of mixed designs were established for experiments with up to 26 different varieties and with the restriction that the number of arrays available is equal to the number of varieties. Because the incomplete block design (IBD) setting only allows for a single blocking factor (arrays), the search for optimal designs was extended to the Row-Column Design (RCD) setting with blocking factors dye (row) and array (column). Relative efficiencies of these designs were further compared under analysis of variance (ANOVA) models. We also compared the performance of classification analysis for the interwoven loop and the replicated reference designs under four scenarios. The replicated reference design was favored when gene-specific sample variation was large, but the interwoven loop design was preferred for large variation among biological replicates. We applied mixed model methodology to the detection and estimation of gene differential expression. For identification of differential gene expression, we favor contrasts which include both variety main effects and variety-by-gene interactions. In terms of t-statistics for these contrasts, we examined the equivalence between the one- and two-step analyses under both fixed and mixed effects models. We analytically established conditions for equivalence under fixed and mixed models. We investigated the approximation error of the two-step analysis in situations where equivalence does not hold. The significant difference between the one- and two-step mixed effects models was further illustrated through Monte Carlo simulation and three case studies. We implemented the one-step analysis for mixed models with the ASREML software.
Ph. D.
3

Nyberg, Joakim. "Practical Optimal Experimental Design in Drug Development and Drug Treatment using Nonlinear Mixed Effects Models." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-160481.

Full text
Abstract:
The cost of releasing a new drug on the market has increased rapidly in the last decade. The reasons for this increase vary with the drug, but the need to make correct decisions earlier in the drug development process and to maximize the information gained throughout the process is evident. Optimal experimental design (OD) describes the procedure of maximizing relevant information in drug development and drug treatment processes. While various optimization criteria can be considered in OD, the most common is to optimize the design for precise estimation of the unknown model parameters in an upcoming study. To date, OD has mainly been used to optimize the independent variables, e.g. sample times, but it can be used for any design variable in a study. This thesis addresses the OD of multiple continuous or discrete design variables for nonlinear mixed effects models. The methodology for optimizing different types of models, with either continuous or discrete data, is presented and the benefits of OD for such models are shown. A software tool for optimizing these models in parallel is developed and three OD examples are demonstrated: 1) optimization of an intravenous glucose tolerance test, resulting in a reduction in the number of samples by a third; 2) optimization of drug compound screening experiments, resulting in the estimation of nonlinear kinetics; and 3) an individual dose-finding study for the treatment of children with ciclosporin before kidney transplantation, resulting in a reduction in the number of blood samples to ~27% of the original number and an 83% reduction in the study duration. This thesis uses examples and methodology to show that studies in drug development and drug treatment can be optimized using nonlinear mixed effects OD. This provides a tool that can lower the cost and increase the overall efficiency of drug development and drug treatment.
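The sample-time optimization summarized in this abstract can be sketched numerically. The following is a minimal illustration, not the thesis's method: it assumes a simple one-compartment intravenous bolus model with made-up parameter values, computes the Fisher information by finite differences, and picks the pair of candidate sample times that maximizes its determinant (D-optimality).

```python
import itertools
import numpy as np

def conc(t, cl, v, dose=100.0):
    """One-compartment IV bolus model: C(t) = (dose/V) * exp(-(CL/V) * t)."""
    return (dose / v) * np.exp(-(cl / v) * t)

def fim(times, cl=2.0, v=10.0, sigma=0.5):
    """Fisher information for (CL, V) under additive normal error,
    using central finite-difference sensitivities."""
    eps = 1e-6
    F = np.zeros((2, 2))
    for t in times:
        dc_dcl = (conc(t, cl + eps, v) - conc(t, cl - eps, v)) / (2 * eps)
        dc_dv = (conc(t, cl, v + eps) - conc(t, cl, v - eps)) / (2 * eps)
        g = np.array([dc_dcl, dc_dv])
        F += np.outer(g, g) / sigma**2
    return F

# D-optimal choice of 2 sample times from a candidate grid:
candidates = [0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0]
best = max(itertools.combinations(candidates, 2),
           key=lambda ts: np.linalg.det(fim(ts)))
print(best)
```

Real NLMEM optimal design, e.g. as implemented in tools such as PopED, additionally handles between-subject variability and richer error models; this sketch covers only the fixed-effects part.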
4

Kankipati, Sunder Rajan. "Macro Model Generation for Synthesis of Analog and Mixed Signal Circuits." University of Cincinnati / OhioLINK, 2004. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1077297705.

Full text
5

Ernest, Charles Steven, II. "Benefits of Non-Linear Mixed Effect Modeling and Optimal Design : Pre-Clinical and Clinical Study Applications." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-209247.

Full text
Abstract:
Despite the growing promise of pharmaceutical research, inferior experimentation or interpretation of data can inhibit breakthrough molecules from finding their way out of research institutions and reaching patients. This thesis provides evidence that better characterization of pre-clinical and clinical data can be accomplished using non-linear mixed effect modeling (NLMEM) and that more effective experiments can be conducted using optimal design (OD). To demonstrate the applicability of NLMEM and OD in pre-clinical applications, in vitro ligand binding studies were examined. NLMEMs were used to evaluate the precision and accuracy of ligand binding parameter estimation from different ligand binding experiments using sequential (NLR) and simultaneous non-linear regression (SNLR). SNLR provided superior resolution of parameter estimation in both precision and accuracy compared to NLR. OD of these ligand binding experiments for one- and two-binding-site systems, including commonly encountered experimental errors, was performed using D- and ED-optimality. OD demonstrated that reducing the number of samples, measurement times, and separate ligand concentrations provides robust parameter estimation and more efficient and cost-effective experimentation. To demonstrate the applicability of NLMEM and OD in clinical applications, a phase advanced sleep study formed the basis of this investigation. A mixed-effect Markov-chain model, based on transition probabilities as multinomial logistic functions of polysomnography data in phase advanced subjects, was developed, and the sleep architecture of this population was compared with that of insomniac patients. The NLMEM was sufficiently robust for describing the data characteristics in phase advanced subjects and, in contrast to aggregated clinical endpoints, which provide an overall assessment of sleep behavior over the night, described the dynamic behavior of the sleep process.
OD of a dichotomous, non-homogeneous, Markov-chain phase advanced sleep NLMEM was performed using D-optimality by computing the Fisher Information Matrix for each Markov component. The D-optimal designs improved the precision of parameter estimates, leading to more efficient designs by optimizing the doses and the number of subjects in each dose group. This thesis provides examples of how studies in drug development can be optimized using NLMEM and OD. This provides a tool that can lower the cost and increase the overall efficiency of drug development.

My name should be listed as "Charles Steven Ernest II" on cover.

6

Dlangamandla, Nkosikho. "Design of integrated processes for a second generation biorefinery using mixed agricultural waste." Thesis, Cape Peninsula University of Technology, 2018. http://hdl.handle.net/20.500.11838/2843.

Full text
Abstract:
Thesis (Doctor of Engineering in Chemical Engineering)--Cape Peninsula University of Technology, 2018.
Lignocellulosic biomass (agro-waste) has been recommended as the most promising feedstock for the production of bioalcohols in the biofuel industry. Furthermore, agro-waste is well known as the most abundant organic matter in the agricultural and forestry product processing industry. However, the challenge with utilizing agro-waste as a feedstock is its highly recalcitrant structure, which limits hydrolysis to convert the holocelluloses into fermentable sugars. Conventional pre-treatment methods such as dilute acid, alkaline, thermal, hot water and enzymatic have been used in previous studies. The challenge with these conventional methods is the generation of residual toxicants during the pre-treatment process, which inhibit a high bioalcohol yield by reducing the microbial population's (fermenter's) ability to be metabolically proficient during fermentation. Numerous studies have sought to improve engineered strains, which have been shown to reduce the inhibition and toxicity of the bioalcohols or by-products produced during pre-treatment, while enhancing bioalcohol production. In the present study (chapter 5), common conventional methods were evaluated for the pre-treatment of mixed agro-waste (>45 µm to <100 µm) constituted by Citrus sinensis and Malus domestica peels, corn cobs from Zea mays and Quercus robur (oak) yard waste, without a pre-rinsing step, at a ratio of 1:1 and 25% (w/w) for each waste material, focusing on hot water pre-treatment followed by dilute acid (H2SO4) pre-treatment. Cellulases were then used to further hydrolyse the pre-treated agro-waste in a single-pot (batch) multi-reaction process. Total reducible sugar (TRS) concentrations of 0.12, 1.43 and 3.22 g/L were achieved with hot water, dilute acid and cellulase hydrolysis as sequential pre-treatment steps, respectively, in a single-pot multi-reaction system.
Furthermore, a commercial strain was used to ascertain low (C1 to C3) and high carbon content (C4+) bioalcohol production under aerobic conditions. Multiple bioproducts were obtained within 48 to 72 h, including bioethanol and 3-methyl-1-butanol, which were the major products of this study. However, undesirable bio-compounds, such as phenolics, were detected post-fermentation. Since multiple process units characterised by chemical usage and high energy intensity have been utilized to overcome delignification and cellulolysis, a sustainable, environmentally benign pre-treatment process was proposed using N. mirabilis "monkey cup" fluids (extracts), also intended to reduce fermenter inhibitors from the delignification of mixed agro-waste; a process with minimal thermo-physical chemical inputs, for which a single-pot multi-reaction system strategy was used. Nepenthes mirabilis extracts, shown to have ligninolytic, cellulolytic and xylanolytic activities, were used as an enzyme cocktail to pre-treat mixed agro-waste, with cellulases subsequently applied for further hydrolysis to increase TRS production from the agro-waste. N. mirabilis pod extracts were determined to contain carboxylesterases (529.41±30.50 U/L), β-glucosidases (251.94±11.48 U/L) and xylanases (36.09±18.04 U/L), constituting an enzymatic cocktail with a significant potential for the reduction of total residual phenolic compounds (TRPCs). Furthermore, the results indicated that the maximum concentration of TRS obtainable was 310±5.19 mg/L within 168 h, while the TRPCs were reduced from 6.25±0.18 to 4.26±0.09 mg/L, which was lower than that observed when conventional methods were used. Overall, N. mirabilis extracts were demonstrated to have the ability to support biocatalytic processes for the conversion of agro-waste to produce fermentable TRS in a single unit facilitating multiple reactions, with minimised interference with cellulase hydrolysis. Therefore, the digestive enzymes in N. mirabilis pods can be used in an integrated system for a second generation biorefinery.
7

Rasch, Dieter, Thomas Rusch, Marie Simeckova, Klaus D. Kubinger, Karl Moder, and Petr Simecek. "Tests of additivity in mixed and fixed effect two-way ANOVA models with single sub-class numbers." Springer, 2009. http://dx.doi.org/10.1007/s00362-009-0254-4.

Full text
Abstract:
In variety testing as well as in psychological assessment, the situation occurs that in a two-way ANOVA-type model with only one replication per cell, analysis is done under the assumption of no interaction between the two factors. Tests for this situation are known only for fixed factors and normally distributed outcomes. In the following we will present five additivity tests and apply them to fixed and mixed models and to quantitative as well as to Bernoulli distributed data. We consider their performance via simulation studies with respect to the type-I-risk and power. Furthermore, two new approaches will be presented, one being a modification of Tukey's test and the other being a new experimental design to test for interactions.
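The classical starting point among such additivity tests is Tukey's one-degree-of-freedom test, which one of the paper's new approaches modifies. As a hedged illustration (data and layout invented here), a minimal numpy/scipy sketch for a two-way table with a single observation per cell:

```python
import numpy as np
from scipy import stats

def tukey_additivity(y):
    """Tukey's one-degree-of-freedom test for non-additivity in a
    two-way table with a single observation per cell.
    Returns the F statistic and its p-value."""
    y = np.asarray(y, dtype=float)
    r, c = y.shape
    a = y.mean(axis=1) - y.mean()          # row effects
    b = y.mean(axis=0) - y.mean()          # column effects
    # Sum of squares for the multiplicative (a_i * b_j) interaction component:
    ss_nonadd = (a @ y @ b) ** 2 / ((a @ a) * (b @ b))
    resid = y - y.mean() - a[:, None] - b[None, :]
    ss_rem = (resid ** 2).sum() - ss_nonadd
    df_rem = (r - 1) * (c - 1) - 1
    F = ss_nonadd / (ss_rem / df_rem)
    return F, stats.f.sf(F, 1, df_rem)

# Illustrative 5x4 table generated from a purely additive model plus noise:
rng = np.random.default_rng(1)
table = rng.normal(size=(5, 4)) + np.arange(5)[:, None] + np.arange(4)
F, p = tukey_additivity(table)
print(F, p)
```

Note that this is only the fixed-effects, normal-outcome case; the paper's point is precisely that mixed models and Bernoulli outcomes require the additional tests it studies.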
8

Strömberg, Eric. "Applied Adaptive Optimal Design and Novel Optimization Algorithms for Practical Use." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-308452.

Full text
Abstract:
The costs of developing new pharmaceuticals have increased dramatically during the past decades. Contributing to these increased expenses are the increasingly extensive and more complex clinical trials required to generate sufficient evidence regarding the safety and efficacy of the drugs. It is therefore of great importance to improve the effectiveness of the clinical phases by increasing the information gained throughout the process, so that the correct decision may be made as early as possible. Optimal Design (OD) methodology using the Fisher Information Matrix (FIM) based on Nonlinear Mixed Effect Models (NLMEM) has been proven to serve as a useful tool for making more informed decisions throughout the clinical investigation. The calculation of the FIM for NLMEM does however lack an analytic solution and is commonly approximated by linearization of the NLMEM. Furthermore, two structural assumptions of the FIM are available: a full FIM, and a block-diagonal FIM which assumes that the fixed effects are independent of the random effects in the NLMEM. Once the FIM has been derived, it can be transformed into a scalar optimality criterion for comparing designs. The optimality criterion may be considered local, if based on single point values of the parameters, or global (robust), if formed for a prior distribution of the parameters. Regardless of design criterion, FIM approximation or structural assumption, the design will be based on the prior information regarding the model and parameters, and is thus sensitive to misspecification in the design stage. Model based adaptive optimal design (MBAOD) has however been shown to be less sensitive to such misspecification. The aim of this thesis is to further the understanding and practical use of standard OD and MBAOD.
This is to be achieved by: (i) investigating how two common FIM approximations and the structural assumptions may affect the optimized design, (ii) reducing the runtimes of complex design optimizations by implementing a low-level parallelization of the FIM calculation, (iii) further developing and demonstrating a framework for performing MBAOD, and (iv) investigating the potential advantages of using a global optimality criterion in the already robust MBAOD.
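The parallelization in (ii) rests on the fact that the population FIM is a sum of independent per-group contributions, so those contributions can be evaluated concurrently and then summed. A hedged toy sketch of that decomposition (threads here rather than the lower-level scheme a real tool would use; the model and all parameter values are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def elemental_fim(times, theta=(2.0, 10.0), sigma=0.5):
    """FIM contribution of one design group for the illustrative model
    C(t) = (100/V) * exp(-(CL/V) * t) with additive error,
    using finite-difference sensitivities."""
    cl, v = theta
    eps = 1e-6
    def c(t, cl, v):
        return (100.0 / v) * np.exp(-(cl / v) * t)
    F = np.zeros((2, 2))
    for t in times:
        g = np.array([(c(t, cl + eps, v) - c(t, cl - eps, v)) / (2 * eps),
                      (c(t, cl, v + eps) - c(t, cl, v - eps)) / (2 * eps)])
        F += np.outer(g, g) / sigma**2
    return F

# Three design groups with different sampling schedules:
groups = [[0.5, 2.0, 8.0], [1.0, 4.0, 12.0], [0.25, 6.0, 24.0]]

# Evaluate the independent per-group contributions in parallel,
# then sum them into the population FIM:
with ThreadPoolExecutor() as pool:
    total_fim = sum(pool.map(elemental_fim, groups), np.zeros((2, 2)))
print(np.linalg.det(total_fim))
```

The parallel and serial results are identical by construction; the payoff appears only when each elemental FIM is expensive, as for linearized NLMEM FIMs.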
9

Berhe, Leakemariam. "Statistical modeling and design in forestry : The case of single tree models." Doctoral thesis, Umeå : Umeå University, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-1663.

Full text
10

Pell, David Andrew. "Statistical models for estimating the intake of nutrients and foods from complex survey data." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/286334.

Full text
Abstract:
Background: The consequences of poor nutrition are well known and of wide concern. Governments and public health agencies utilise food and diet surveillance data to make decisions that lead to improvements in nutrition. These surveys often utilise complex sample designs for efficient data collection. There are several challenges in the statistical analysis of dietary intake data collected using complex survey designs, which have not been fully addressed by current methods. Firstly, the shape of the distribution of intake can be highly skewed, due to the presence of outlier observations and a large proportion of zero observations arising from the inability of the food diary to capture consumption within the period of observation. Secondly, dietary data are subject to variability arising from day-to-day individual variation in food consumption and from measurement error, which must be accounted for in the estimation procedure for correct inferences. Thirdly, the complex sample design needs to be incorporated into the estimation procedure to allow extrapolation of the results to the target population. This thesis aims to develop novel statistical methods to address these challenges, applied to the analysis of iron intake data from the UK National Diet and Nutrition Survey Rolling Programme (NDNS RP) and UK national prescription data for iron deficiency medication. Methods: 1) To assess the nutritional status of particular population groups, a two-part model with a generalised gamma (GG) distribution was developed for intakes that show high frequencies of zero observations. The two-part model accommodated the sources of variation of dietary intake with a random intercept in each component, and these intercepts could be correlated to allow a correlation between the probability of consuming and the amount consumed.
2) To identify population groups at risk of low nutrient intakes, a linear quantile mixed-effects model was developed to model quantiles of the distribution of intake as a function of explanatory variables. The proposed approach was illustrated by comparing the quantiles of iron intake with the Lower Reference Nutrient Intake (LRNI) recommendations using the NDNS RP. This thesis extended the estimation procedures of both the two-part model with GG distribution and the linear quantile mixed-effects model to incorporate the complex sample design in three steps: the likelihood function was multiplied by the sample weights; bootstrap methods were used for the estimation of the variance; and finally, the variance estimation of the model parameters was stratified by the survey strata. 3) To evaluate the allocation of resources to alleviate nutritional deficiencies, a linear quantile mixed-effects model was used to analyse the distribution of expenditure on iron deficiency medication across health boards in the UK. Expenditure is likely to depend on the iron status of the region; therefore, for a fair comparison among health boards, iron status was estimated using the method developed in objective 2) and used in the specification of the median amount spent. Each health board is formed by a set of general practices (GPs); therefore, a random intercept was used to induce correlation between the expenditures of two GPs from the same health board. Finally, the approaches in objectives 1) and 2) were compared with the traditional approach based on weighted linear regression modelling used in the NDNS RP reports. All analyses were implemented using SAS and R. Results: The two-part model with GG distribution, fitted to the amount of iron consumed from selected episodically consumed foods, showed that females tended to have greater odds of consuming iron from these foods but consumed smaller amounts.
As age group increased, consumption tended to increase relative to the reference group, though the odds of consumption varied. Iron consumption also appeared to be dependent on National Statistics Socio-Economic Classification (NS-SEC) group, with lower social groups consuming less, in general. The quantiles of iron intake estimated using the linear quantile mixed-effects model showed that more than 25% of females aged 11-50y are below the LRNI, and that 11-18y girls are the group at highest risk of deficiency in the UK. Predictions of spending on iron medication in the UK based on the linear quantile mixed-effects model showed that areas of higher iron intake resulted in lower spending on treating iron deficiency. In a geographical display of expenditure, Northern Ireland featured the lowest amount spent. Comparing the results from the methods proposed here showed that using the traditional approach based on weighted regression analysis could result in spurious associations. Discussion: This thesis developed novel approaches to the analysis of dietary complex survey data to address three important objectives of diet surveillance, namely the estimation of mean food intake by population groups, the identification of groups at high risk of nutrient deficiency, and the allocation of resources to alleviate nutrient deficiencies. The methods provided models of good fit to dietary data, accounted for the sources of data variability and extended the estimation procedures to incorporate the complex sample survey design. The use of a GG distribution for modelling intake is an important improvement over existing methods, as it includes many distributions with different shapes and its domain takes non-negative values. The two-part model accommodated the sources of data variation of dietary intake with a random intercept in each component, which could be correlated to allow a correlation between the probability of consuming and the amount consumed.
This also improves existing approaches that assume a zero correlation. The linear quantile mixed-effects model utilises the asymmetric Laplace distribution which can also accommodate many different distributional shapes, and likelihood-based estimation is robust to model misspecification. This method is an important improvement over existing methods used in nutritional research as it explicitly models the quantiles in terms of explanatory variables using a novel quantile regression model with random effects. The application of these models to UK national data confirmed the association of poorer diets and lower social class, identified the group of 11-50y females as a group at high risk of iron deficiency, and highlighted Northern Ireland as the region with the lowest expenditure on iron prescriptions.
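The two-part structure described above, a model for the probability of consuming combined with a model for the amount consumed, can be sketched in a deliberately simplified form. The illustration below ignores the random effects, survey weights and generalised gamma distribution of the actual thesis, using plain logistic regression plus a gamma fit on simulated data:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # a standardized covariate, e.g. age
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))      # true probability of consuming
consumed = rng.random(n) < p
amount = np.where(consumed, rng.gamma(2.0, np.exp(0.3 * x)), 0.0)

# Part 1: logistic regression for P(amount > 0), fitted by maximum likelihood.
def nll(beta):
    eta = beta[0] + beta[1] * x
    return np.sum(np.logaddexp(0.0, eta) - consumed * eta)

beta_hat = optimize.minimize(nll, np.zeros(2)).x

# Part 2: a gamma distribution fitted to the positive amounts only.
shape, loc, scale = stats.gamma.fit(amount[amount > 0], floc=0)

# Overall mean intake combines both parts: E[Y] = P(Y > 0) * E[Y | Y > 0].
mean_intake = consumed.mean() * shape * scale
print(beta_hat, round(mean_intake, 2))
```

In the thesis the two parts additionally share correlated random intercepts and the likelihood is weighted by the survey design; this sketch shows only the basic two-part decomposition.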
11

Mathews, Kai Monet. "Transformative Models in K-12 Education: The Impact of a Blended Universal Design for Learning Intervention. An Experimental Mixed Methods Study." Thesis, University of San Diego, 2016. http://pqdtopen.proquest.com/#viewpdf?dispub=10128128.

Full text
Abstract:

Accountability measures, by way of standardized curriculum and assessments, have played a large part in the attempt to ensure that students from all backgrounds receive equal access to quality education. However, the inherent disadvantage of a standardized system is the implied assumption that all students come in with the same knowledge, learn at the same pace, and learn the same way. In the wake of an increasingly diverse K-12 population, educational researchers, learning theorists, and practitioners agree that the concept of the average student is, in fact, a myth. Students come to school with different needs, norms, interests, cultural behavior, knowledge, motivations, and skill sets. In order for education to properly address the issue of equity, the issue of learner variance must first be attended to.

In 2010, the U.S. Department of Education released its educational plan encouraging teachers to address student variance through more inclusive learning environments. The report highlighted Blended Learning (BL) and Universal Design for Learning (UDL) as promising practices in enabling, motivating, and inspiring all students to achieve regardless of background, language, or disability. Research suggests that the combination of these two approaches could lead to transformative teaching practices that dramatically impact student learning. However, the efficacy of such a model has yet to be tested.

This study tested the efficacy of a Blended Universal Design for Learning (BUDL) model in improving student outcomes. An experimental design was used to explore the impact of a two-week BUDL intervention in an accelerated 7th-grade math class. The effect on student achievement, engagement, and perception was measured. Both quantitative and qualitative data were collected. Though the results of the study were not statistically significant, possible positive associations between a BUDL intervention and student achievement, engagement, and perception emerged. Considerations for clinical significance, suggestions for improvement of the BUDL model, and implications for future research are discussed.

12

Ueckert, Sebastian. "Novel Pharmacometric Methods for Design and Analysis of Disease Progression Studies." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-216537.

Full text
Abstract:
With societies aging all around the world, the global burden of degenerative diseases is expected to increase exponentially. From the perspective of drug development, degenerative diseases represent an especially challenging class. Clinical trials, in this context often termed disease progression studies, are long, costly, require many individuals, and have low success rates. Therefore, it is crucial to use informative study designs and to analyze the obtained trial data efficiently. The development of novel approaches to facilitate both the design and the analysis of disease progression studies was the aim of this thesis. This aim was pursued in three stages: (i) the characterization and extension of pharmacometric software, (ii) the development of new methodology around statistical power, and (iii) the demonstration of application benefits. The optimal design software PopED was extended to simplify the application of optimal design methodology when planning a disease progression study. The performance of non-linear mixed effect estimation algorithms for trial data analysis was evaluated in terms of bias, precision, robustness with respect to initial estimates, and runtime. A novel statistic allowing for explicit optimization of study design for statistical power was derived and found to perform superior to existing methods. Monte-Carlo power studies were accelerated through application of parametric power estimation, delivering full power versus sample size curves from a few hundred Monte-Carlo samples. Optimal design and an explicit optimization for statistical power were applied to the planning of a study in Alzheimer's disease, resulting in a 30% smaller study size when targeting 80% power. The analysis of ADAS-cog score data was improved through application of item response theory, yielding a more exact description of the assessment score, an increased statistical power and an enhanced insight into the assessment properties.
In conclusion, this thesis presents novel pharmacometric methods that can help address the challenges of designing and planning disease progression studies.
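The parametric power estimation mentioned above rests on a simple idea: run a modest number of Monte-Carlo significance tests at one design, estimate a noncentrality parameter from the resulting test statistics, and then compute the whole power-versus-sample-size curve analytically instead of re-simulating at every sample size. A hedged one-sample t-test sketch (the thesis works with likelihood-ratio statistics of NLMEMs, which this toy example does not attempt):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
d_true, n_pilot, n_mc = 0.5, 20, 300

# A few hundred Monte-Carlo trials at a single sample size...
t_stats = np.array([
    stats.ttest_1samp(rng.normal(d_true, 1.0, n_pilot), 0.0).statistic
    for _ in range(n_mc)])

# ...give an estimate of the noncentrality parameter (the mean of a
# noncentral t statistic approximates its noncentrality for moderate df),
delta_hat = t_stats.mean()
d_hat = delta_hat / np.sqrt(n_pilot)     # implied standardized effect size

# ...which then yields the full power-versus-N curve analytically:
def power(n, alpha=0.05):
    crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return stats.nct.sf(crit, df=n - 1, nc=d_hat * np.sqrt(n))

for n in (10, 20, 40, 80):
    print(n, round(power(n), 3))
```

The saving is that the Monte-Carlo effort is spent once, at a single sample size, rather than once per point on the power curve.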
13

Freeman, Laura J. "Statistical Methods for Reliability Data from Designed Experiments." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/37729.

Full text
Abstract:
Product reliability is an important characteristic for all manufacturers, engineers and consumers. Industrial statisticians have been planning experiments for years to improve product quality and reliability. However, rarely do experts in the field of reliability have expertise in design of experiments (DOE) and the implications that experimental protocol have on data analysis. Additionally, statisticians who focus on DOE rarely work with reliability data. As a result, analysis methods for lifetime data for experimental designs that are more complex than a completely randomized design are extremely limited. This dissertation provides two new analysis methods for reliability data from life tests. We focus on data from a sub-sampling experimental design. The new analysis methods are illustrated on a popular reliability data set, which contains sub-sampling. Monte Carlo simulation studies evaluate the capabilities of the new modeling methods. Additionally, Monte Carlo simulation studies highlight the principles of experimental design in a reliability context. The dissertation provides multiple methods for statistical inference for the new analysis methods. Finally, implications for the reliability field are discussed, especially in future applications of the new analysis methods.
Ph. D.
14

Kristoffersson, Anders. "Study Design and Dose Regimen Evaluation of Antibiotics based on Pharmacokinetic and Pharmacodynamic Modelling." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-264798.

Full text
Abstract:
The current excessive use and abuse of antibiotics have resulted in increasing bacterial resistance to common treatment options, which threatens to deprive us of a pillar of modern medicine. In this work, methods to optimize the use of existing antibiotics and to help the development of new antibiotics were developed and applied. Semi-mechanistic pharmacokinetic-pharmacodynamic (PKPD) models were developed to describe the time course of the dynamic effect and interaction of combinations of antibiotics. The models were applied to illustrate that colistin combined with a high dose of meropenem may overcome meropenem-resistant P. aeruginosa infections. The results from an in vivo dose finding study of meropenem were successfully predicted by the meropenem PKPD model in combination with a murine PK model, which supports model based dosage selection. However, the traditional PK/PD index based dose selection was predicted to have poor extrapolation properties from pre-clinical to clinical settings, and across patient populations. The precision of the model parameters, and hence of the model predictions, is dependent on the experimental design. A limited study design is dictated by cost and, for in vivo studies, ethical reasons. In this work, optimal design (OD) was demonstrated to be able to reduce the experimental effort in time-kill curve experiments and was utilized to suggest the experimental design for identification and estimation of an interaction between antibiotics. OD methods to handle inter-occasion variability (IOV) in the optimization of individual PK parameter estimates were proposed. The strategy was applied in the design of a sparse sampling schedule that aims to estimate individual exposures of colistin in a multi-centre clinical study. Plasma concentration samples from the first 100 patients have been analysed and indicate that the performance of the design is close to that predicted.
The methods described in this thesis holds promise to facilitate the development of new antibiotics and to improve the use of existing antibiotics.
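The semi-mechanistic time-kill models mentioned above pair a bacterial growth term with a concentration-driven kill term. A minimal sketch of that idea, a logistic growth term minus an Emax kill term integrated by Euler steps; all parameter values here are illustrative, not the thesis's estimates:

```python
import math

def time_kill(conc, hours=24.0, dt=0.01, kg=0.7, kmax=2.0,
              ec50=1.0, log_b0=6.0, log_bmax=9.0):
    """Euler simulation of a minimal static-concentration time-kill model:
    dB/dt = kg*B*(1 - B/Bmax) - kmax*C/(C + EC50)*B.
    Returns log10 CFU/mL after `hours`, floored at 1 CFU/mL.
    Parameters are made-up illustrative values."""
    b, cap = 10.0 ** log_b0, 10.0 ** log_bmax
    for _ in range(int(hours / dt)):
        kill = kmax * conc / (conc + ec50)       # Emax kill rate at this concentration
        b += dt * (kg * b * (1.0 - b / cap) - kill * b)
        b = max(b, 1.0)                          # floor at 1 CFU/mL
    return math.log10(b)
```

With these toy parameters a drug-free control grows toward the 10^9 capacity, while a concentration well above EC50 drives the count down to the floor.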
APA, Harvard, Vancouver, ISO, and other styles
15

Pagadarai, Srikanth. "Wireless Communications and Spectrum Characterization in Impaired Channel Environments." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/33.

Full text
Abstract:
The demand for sophisticated wireless applications capable of conveying information content represented in various forms such as voice, data, audio and video is ever increasing. In order to support such applications, either additional wireless spectrum is needed or advanced signal processing techniques must be employed by next-generation wireless communication systems. An immediate observation regarding the first option is that radio frequency spectrum is a limited natural resource. Moreover, since existing spectrum allocation policies of several national regulatory agencies such as the Federal Communications Commission (FCC) restrict spectrum access to licensed entities only, it has been identified that most of the licensed spectrum is inefficiently utilized across time and frequency. To facilitate greater spectral efficiency, many national regulatory agencies are considering a paradigm shift in spectrum allocation by allowing unlicensed users to temporarily borrow unused spectral resources. This concept is referred to as dynamic spectrum access (DSA). Although several spectrum measurement campaigns have been reported in the published literature for quantitatively assessing the available vacant spectrum, certain aspects of spectrum utilization still need a deeper understanding. First, we examine two complementary approaches to the problem of characterizing the usage of licensed bands. In the first approach, a linear mixed-effects regression model is proposed, where the variations in percentage spectrum occupancy and activity period of the licensed user are described as a function of certain independent regressor variables. The second approach is based on the creation of a geo-location database consisting of the licensed transmitters in a specific geographical region and identifying the coverage areas that affect the available secondary channels.
Both of these approaches are based on energy spectral density data samples collected across numerous frequency bands at several locations in the United States. We then study the mutual interference effects in a coexistence scenario consisting of licensed and unlicensed users. We numerically evaluate the impact of interference as a function of certain receiver characteristics. Specifically, we consider the unlicensed user to utilize OFDM or NOFDM symbols, since the appropriate subcarriers can be turned off to facilitate non-contiguous spectrum utilization. Finally, it has been demonstrated that multiple-input multiple-output (MIMO) antennas yield significant throughput gains while requiring no increase in transmit power or bandwidth. However, the separation of spectrally overlapping signals is a challenging task that involves the estimation of the channel. We provide results concerning channel and symbol estimation in the scenario described above. In particular, we focus on the MIMO-OFDM transmission scheme and derive capacity lower bounds due to imperfect channel estimation.
APA, Harvard, Vancouver, ISO, and other styles
16

Pokhilko, Victoria V. "Statistical Designs for Network A/B Testing." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/6101.

Full text
Abstract:
A/B testing refers to the statistical procedure of experimental design and analysis to compare two treatments, A and B, applied to different testing subjects. It is widely used by technology companies such as Facebook, LinkedIn, and Netflix to compare different algorithms, web designs, and other online products and services. The subjects participating in these online A/B testing experiments are users who are connected in social networks of different scales. Two connected subjects are similar in terms of their social behaviors, education, financial background, and other demographic aspects. Hence, it is only natural to assume that their reactions to online products and services are related to their network adjacency. In this research, we propose to use the conditional autoregressive model (CAR) to represent the network structure and include the network effects in the estimation and inference of the treatment effect. The following statistical designs are presented: a D-optimal design, a re-randomization experimental design approach, and a covariate-assisted Bayesian sequential design for network A/B testing. The effectiveness of the proposed methods is shown through numerical results with synthetic networks and real social networks.
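The abstract's premise that network adjacency induces correlated responses can be illustrated with a toy simulation. The sketch below uses a ring network with neighbour-smoothed baselines as a crude stand-in for the CAR dependence (it is not the thesis's model), and shows that the difference-in-means estimator still recovers the treatment effect on average under randomization:

```python
import random
import statistics

def network_ab_estimate(n=200, tau=1.0, rho=0.5, seed=0):
    """Difference-in-means estimate of a treatment effect `tau` on a
    ring network whose subjects share correlated baselines with their
    neighbours, a crude stand-in for CAR-type network dependence."""
    rng = random.Random(seed)
    base = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # mix each baseline with its ring neighbours so adjacency implies similarity
    smooth = [(1 - rho) * base[i] + rho * 0.5 * (base[i - 1] + base[(i + 1) % n])
              for i in range(n)]
    treat = [rng.random() < 0.5 for _ in range(n)]    # complete randomization
    y = [smooth[i] + (tau if treat[i] else 0.0) + rng.gauss(0.0, 0.5)
         for i in range(n)]
    arm_a = [yi for yi, t in zip(y, treat) if t]
    arm_b = [yi for yi, t in zip(y, treat) if not t]
    return statistics.mean(arm_a) - statistics.mean(arm_b)

# averaging over many randomizations recovers tau despite network dependence
avg = statistics.mean(network_ab_estimate(seed=s) for s in range(100))
```

The network dependence inflates the variance of any single estimate; the design work surveyed in the abstract is about choosing the randomization to control exactly that.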
APA, Harvard, Vancouver, ISO, and other styles
17

Santos, Alessandra dos. "Design and analysis of sugarcane breeding experiments: a case study." Universidade de São Paulo, 2017. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-06102017-103933/.

Full text
Abstract:
One purpose of breeding programs is the selection of the best test lines. The accuracy of selection can be improved by using an optimal design and models that fit the data well. Achieving this is not easy, especially in large experiments which assess more than one hundred lines without the possibility of replication, due to limited material and area and high costs. Thus, estimation of the large number of parameters in the complex variance structure relies on the limited number of replicated check varieties. The main objectives of this thesis were to model 21 sugarcane trials provided by "Centro de Tecnologia Canavieira" (CTC, a Brazilian sugarcane company) and to evaluate the design employed, which uses a large number of unreplicated test lines (new varieties) and systematically replicated check (commercial) lines. The linear mixed model was used to identify the three major components of spatial variation in the plot errors and the competition effects at the genetic and residual levels. The test lines were assumed to have random effects and the check lines fixed effects, because they came from different processes. Single and joint analyses were developed because the trials could be grouped into two types: (i) one longitudinal data set (two cuts) and (ii) five regional groups of experiments (each group a region with three sites). In a study of alternative designs, a fixed trial size was assumed to evaluate the efficiency of the type of unreplicated design employed in these 21 trials compared with spatially optimized unreplicated and p-rep designs with checks and a spatially optimized p-rep design. To investigate models and designs, four simulation studies assessed mainly (i) the fitted model under competition effects at the genetic level, (ii) the accuracy of estimation in separate versus joint analyses, (iii) the relation between sugarcane lodging and negative residual correlation, and (iv) design efficiency.
In conclusion, the main information obtained from the simulation studies was: the number of times the fitting algorithm converged; the variance parameter estimates; the correlations between the direct genetic EBLUPs and the true direct genetic effects; the accuracy of selection, or average similarity, where similarity was measured as the percentage of the 30 test lines with the highest direct genetic EBLUPs that are among the true (generated) 30 best test lines; and the heritability estimates or the genetic gain.
APA, Harvard, Vancouver, ISO, and other styles
18

Paulenas, Viviane Panariello. "Análise de experimentos em látice quadrado no melhoramento vegetal utilizando modelos mistos." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-29112016-151907/.

Full text
Abstract:
Experiments conducted in the lattice design are quite common in plant breeding, in which several genetic materials are compared, especially in the early stages of the program, aiming to explore the available genetic variability more intensively. In situations of space and financial constraints these designs stand out for allowing the comparison of all progenies being tested, whether or not installed in the same block. The aim of the study was the evaluation of maize (Zea mays L.) progeny tests in different environments for grain yield in t.ha-1. Two hundred and fifty-six progenies were tested in four experimental stations in the city of Piracicaba, in different agricultural years. Grain production data obtained in the different environments were analyzed individually and jointly in order to verify the presence of genotype × environment interaction. The 16 × 16 square lattice design was used, with two replications in each location. Two experimental approaches were compared: one considering the partially balanced incomplete block structure of the lattice, and another in which each replication of the lattice was analyzed as if it were a complete block. One way to analyze experimental structures like this is with mixed models, by adding random-effect factors and using restricted maximum likelihood (REML) to estimate the variance components associated with such factors with less bias. Besides the variance components, EBLUPs (empirical best linear unbiased predictors) were also calculated, and from them the correlation between the different environments was checked, as well as the percentage of selected progenies, comparing the results obtained by the two approaches to the data set. Statistical analyses were implemented in the open-source software R, using the statistical package lme4.
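The EBLUPs mentioned above are shrinkage predictors. A minimal sketch of that shrinkage for a one-way random-effects model (the simplest balanced case, not the full lattice analysis; all numbers below are hypothetical):

```python
def eblups(progeny_means, grand_mean, var_g, var_e, reps):
    """EBLUPs for a balanced one-way random-effects model: each progeny
    mean is shrunk toward the grand mean by the factor
    h = var_g / (var_g + var_e / reps),
    a textbook illustration of the shrinkage behind lme4's ranef(),
    not the thesis's full lattice analysis."""
    h = var_g / (var_g + var_e / reps)
    return [h * (m - grand_mean) for m in progeny_means]

# hypothetical numbers: 3 progeny means, var_g = 1, var_e = 2, 2 reps -> h = 0.5
u = eblups([5.0, 6.0, 7.0], 6.0, 1.0, 2.0, 2)   # [-0.5, 0.0, 0.5]
```

The shrinkage factor h is also the heritability of a progeny mean, which is why ranking on EBLUPs rather than raw means changes which progenies are selected when h is low.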
APA, Harvard, Vancouver, ISO, and other styles
19

Allen, Brandon. "Identifying the Effectiveness of Pre-Listening Activities for Students of Chinese Mandarin." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2666.

Full text
Abstract:
Listening has proved to be a difficult skill to teach in the language classroom. Research has shown that pre-listening activities, or those activities done with students prior to listening, can have an effect on listening comprehension outcomes. This research addressed the effectiveness of two types of pre-listening activities: top-down and bottom-up. Volunteers from intermediate level courses taught at Brigham Young University were divided into two treatment groups and a control group. The treatment groups followed a mixed models design by each going through a top-down and bottom-up pre-listening activity, followed by listening to a passage in Mandarin Chinese and taking a multiple-choice test. The bottom-up activity chosen for this research was a vocabulary preview activity, with an advance organizer being chosen for the top-down activity. Results showed both treatment groups significantly outperformed the control group for both the top-down and bottom-up activities (p=0.0123 and p=0.0181 respectively). No significant difference existed in scores between top-down and bottom-up activities (p=0.9456). It was determined that both the vocabulary activity and the advance organizer helped to increase the listening comprehension of intermediate level students of Mandarin Chinese.
APA, Harvard, Vancouver, ISO, and other styles
20

Vong, Camille. "Model-Based Optimization of Clinical Trial Designs." Doctoral thesis, Uppsala universitet, Institutionen för farmaceutisk biovetenskap, 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-233445.

Full text
Abstract:
High attrition rates in the drug development pipeline have been recognized as a reason to shift gears towards new methodologies that allow earlier and correct decisions, and the optimal use of all information accrued throughout the process. The quantitative science of pharmacometrics, using pharmacokinetic-pharmacodynamic models, was identified as one of the strategies core to this renaissance. Coupled with Optimal Design (OD), they together constitute an attractive toolkit to usher new agents more rapidly and successfully to marketing approval. The general aim of this thesis was to investigate how the use of novel pharmacometric methodologies can improve the design and analysis of clinical trials within drug development. The implementation of a Monte-Carlo Mapped Power method made it possible to rapidly generate multiple hypotheses and to adequately compute the corresponding sample size within 1% of the time usually necessary in more traditional model-based power assessment. Allowing statistical inference across all available data and the integration of mechanistic interpretation of the models, the performance of this new methodology in proof-of-concept and dose-finding trials highlighted the possibility of drastically reducing the number of healthy volunteers and patients exposed to experimental drugs. This thesis furthermore addressed the benefits of OD in planning trials with bioanalytical limits and toxicity constraints, through the development of novel optimality criteria that foremost pinpoint information and safety aspects. The use of these methodologies showed better estimation properties and robustness for the ensuing data analysis and reduced the number of patients exposed to severe toxicity by 7-fold. Finally, predictive tools for maximum tolerated dose selection in Phase I oncology trials were explored for a combination therapy characterized by a main dose-limiting hematological toxicity.
In this example, Bayesian and model-based approaches provided the incentive for a paradigm change away from the traditional rule-based "3+3" design algorithm. Throughout this thesis several examples have shown the possibility of streamlining clinical trials with more model-based design and analysis support. Ultimately, efficient use of the data can elevate the probability of a successful trial and strengthen ethical conduct.
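For contrast with the Monte-Carlo Mapped Power method above, the traditional power assessment it accelerates amounts to brute-force simulation: generate many trials, analyze each, and count rejections. A stripped-down sketch of that baseline for a two-arm z-test (the settings are illustrative only, and this is not the MCMP algorithm itself):

```python
import math
import random

def mc_power(n_per_arm, effect, sd, n_sim=2000, seed=1):
    """Brute-force Monte Carlo power for a two-arm comparison using a
    z-test at two-sided alpha = 0.05: simulate trials, count rejections.
    This is the slow baseline that methods like MCMP aim to shortcut."""
    rng = random.Random(seed)
    z_crit = 1.96                                  # two-sided 5% critical value
    se = sd * math.sqrt(2.0 / n_per_arm)           # SE of the mean difference
    hits = 0
    for _ in range(n_sim):
        a = sum(rng.gauss(effect, sd) for _ in range(n_per_arm)) / n_per_arm
        b = sum(rng.gauss(0.0, sd) for _ in range(n_per_arm)) / n_per_arm
        if abs((a - b) / se) > z_crit:
            hits += 1
    return hits / n_sim

power = mc_power(64, 0.5, 1.0)   # ~0.80 for a 0.5-SD effect with 64 per arm
```

Sample-size selection then means repeating this over a grid of n values, which is exactly the cost that motivates the mapped-power shortcut described in the abstract.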
APA, Harvard, Vancouver, ISO, and other styles
21

Hecht, Martin. "Optimierung von Messinstrumenten im Large-scale Assessment." Doctoral thesis, Humboldt-Universität zu Berlin, Lebenswissenschaftliche Fakultät, 2015. http://dx.doi.org/10.18452/17270.

Full text
Abstract:
Measurement instruments are essential elements in the acquisition of knowledge in scientific research. Special features of measurement instruments in large-scale assessments of student achievement are their frequent reconstruction and the usage of different test versions. Here, threats for the accuracy and validity of the measurement may emerge. To minimize such threats, (a) sources for potential bias of measurement and (b) strategies to optimize measuring instruments should be explored. Therefore, the present dissertation investigates several specific topics within these two research areas.
APA, Harvard, Vancouver, ISO, and other styles
22

Cheldelin, Brent. "Design for mixed model production of complex products /." May be available electronically:, 2007. http://proquest.umi.com/login?COPT=REJTPTU1MTUmSU5UPTAmVkVSPTI=&clientId=12498.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Fisher, John Sheridan. "Application of model driven architecture design methodologies to mixed-signal system design projects." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1143218375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Båtelsson, Niklas, and Simon Alfredsson. "Assembly system design - : Case study of a mixed model production." Thesis, KTH, Industriell produktion, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-103276.

Full text
Abstract:
The report, which is part of the course "MG202X Examensarbete", has been written for the department of Industrial Production at the Royal Institute of Technology (KTH) under the guidance of Antonio Maffei. The work has focused on creating an assembly system at a production facility of Schneider Electric in Nyköping. The authors have divided the report into a literature review covering Lean production and assembly systems, an analysis of the initial state, and a solution. The literature review presents three separate parts which form the framework of the analysis. The first part regards assembly systems and describes different design alternatives and the losses that can be found in an assembly system. The second part covers Lean production, where selected parts of the philosophy are described. The last part of the literature review treats the design of the workstation with regard to ergonomics and part presentation. The analysis at Schneider Electric was conducted over a three-month period and included time studies, observations and interviews. To analyze the initial state, a model for estimating assembly times and workload was needed. An in-depth understanding of the initial state was the foundation for creating an adapted and accepted assembly system. The work resulted in two suggested assembly systems. One system contains only one workstation and is to be used for a simple assembly process. The second system is to be used for more complex products and has a higher capacity, as it contains three workstations. Since that system contains three separate workstations, the assembly process had to be divided, which was done by weighing a logical split against the balancing of the system. Both systems used a continuous supply system for components.
APA, Harvard, Vancouver, ISO, and other styles
25

Tavahodi, Mana. "Mixed model predictive control with energy function design for power system." Queensland University of Technology, 2007. http://eprints.qut.edu.au/16374/.

Full text
Abstract:
For reliable service, a power system must remain stable and capable of withstanding a wide range of disturbances, especially in large interconnected systems. In the last decade and a half, and in particular after the famous 1965 blackout in New York, U.S.A., considerable research effort has gone into the stability investigation of power systems. To deal with the requirements of real power systems, various stabilizing control techniques have been developed over the last decade. Conventional control engineering approaches are unable to effectively deal with system complexity, nonlinearities, parameter variations and uncertainties. This dissertation presents a non-linear control technique which relies on prediction of the large power system's behaviour. One example of a large modern power system formed by interconnecting the power systems of various states is the South-Eastern Australian power network, made up of the power systems of Queensland, New South Wales, Victoria and South Australia. Model Predictive Control (MPC) for the total power system has been shown to be successful in addressing many large-scale nonlinear control problems. However, for application to the high-order problems of power systems, and given the fast control response required, total MPC is still expensive and is structured for centralized control. This thesis develops an MPC algorithm to control the field currents of generators, incorporating them in a decentralized overall control scheme. MPC decisions are based on optimizing the control action in accordance with the predictions of an identified power system model so that the desired response is obtained. Energy-function-based design provides good control for direct-influence items such as SVCs (Static Var Compensators), FACTS (Flexible AC Transmission System) devices or series compensators, and can be used to define the desired flux for a generator. The approach in this thesis is to use the designed flux for best system control as a reference for MPC.
Given even a simple model of the relation between the input control signal and the resulting machine flux, MPC can be used to find the control sequence which will start the correct tracking. The continual recalculation of short-time optimal control, applying only the initial control value at each step, provides a form of feedback control for the desired tracking task in a manner which retains the nonlinearity of the model.
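The recipe described above (optimize a predicted trajectory, apply only the first control value, then recompute) can be sketched for a scalar unstable plant; the plant and cost weights below are toy numbers, not the thesis's generator flux model:

```python
def mpc_step(x, ref, horizon=5, a=1.1, b=0.5, lam=0.05):
    """One receding-horizon step for the scalar plant x[k+1] = a*x[k] + b*u.
    Minimizes sum_k (x_k - ref)^2 + lam*u^2 over a constant input u held
    for `horizon` steps (closed form, since the cost is quadratic in u),
    and returns that first move. Toy illustration of the MPC recipe."""
    num, den, s = 0.0, lam, 0.0
    for k in range(1, horizon + 1):
        s = a * s + 1.0                       # s_k = 1 + a + ... + a**(k-1)
        num += b * s * (ref - a ** k * x)     # gradient terms: x_k = a**k*x + b*u*s_k
        den += (b * s) ** 2
    return num / den

# closed loop: the open-loop-unstable plant (a = 1.1) is driven to ref = 1.0
x = 0.0
for _ in range(30):
    x = 1.1 * x + 0.5 * mpc_step(x, 1.0)
```

Re-solving the horizon problem at every step is what turns the open-loop optimization into feedback: here the loop settles near the reference even though the plant alone is unstable, with a small steady-state offset from the control penalty.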
APA, Harvard, Vancouver, ISO, and other styles
26

Arledge, Lauren Habenicht. "Wind-Abilities: A Mixed-Use Model for Thoughtful Wind Farm Design." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/78246.

Full text
Abstract:
Globally, wind power is leading the renewable energy revolution. While carbon neutral and cost-effective, wind energy infrastructure is immobile and has the potential to profoundly change land use and the visible landscape. As wind technology takes its place as a key contributor to the US energy grid, it becomes clear that these types of projects will come into greater contact with areas occupied by humans, and eventually with wilderness and other more natural areas. This increased visibility and close proximity necessitates the development of future wind farm sites that afford opportunities for auxiliary uses while maintaining their intrinsic value as energy producers. In short, it is important for wind farms to be versatile because land is a finite resource and because over time, increasing numbers of these sites will occupy our landscapes. In the Eastern US, the majority of onshore wind resources suitable for energy development are found along ridge lines in the Appalachian mountains. These mountains are ancient focal points in the landscape, and subsequently host myriad sites of historic, recreational, and scenic significance. In the future, these windswept ridges will likely become targets for wind energy development. This thesis demonstrates a methodology for the thoughtful siting and design of future wind projects in the Appalachian mountains. Opportunities for offsite views, diversified trail experiences, and planned timber harvests are realized by locating a seven-turbine wind park adjacent to the Appalachian Trail in Cherokee National Forest in Carter county, Tennessee. The proposed wind park demonstrates the sound possibility of thoughtfully integrating wind infrastructure along Appalachian ridges in conjunction with forestry and recreation opportunities, such as hiking and camping. 
The design is a wind park rather than a wind farm because in addition to its inherent function as a production landscape, it is also a place that is open to the general public for recreational use.
Master of Landscape Architecture
APA, Harvard, Vancouver, ISO, and other styles
27

Smith, Pieter R. "A computerized search methodology for the design of mixed model assembly systems." Master's thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-02162010-020023/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Norrell, Jeffery Lee. "A mixed mode thermal/fluids model for improvements in SLS part quality, machine design, and process design /." Digital version accessible at:, 1999. http://wwwlib.umi.com/cr/utexas/main.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Qiu, Chen. "A study of covariance structure selection for split-plot designs analyzed using mixed models." Kansas State University, 2014. http://hdl.handle.net/2097/18129.

Full text
Abstract:
Master of Science
Department of Statistics
Christopher I. Vahl
In the classic split-plot design, where whole plots follow a completely randomized design, the conventional analysis approach assumes a compound symmetry (CS) covariance structure for the errors of observation. However, this assumption is often not true. In this report, we examine using different covariance models in PROC MIXED in the SAS system, which are widely used in repeated measures analysis, to model the covariance structure in split-plot data for which the simple compound symmetry assumption does not hold. The comparison of the covariance structure models in PROC MIXED and the conventional split-plot model is illustrated through a simulation study. In the example analyzed, the heterogeneous compound symmetry (CSH) covariance model has the smallest values of the Akaike and Schwarz Bayesian information criterion fit statistics and is therefore the best model for our example data.
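The smaller-is-better comparison of fit statistics described above can be sketched directly. The log-likelihoods and parameter counts below are made-up numbers shaped to mirror the report's outcome (CSH winning), not values from the actual simulation:

```python
def aic(loglik, n_params):
    """Akaike information criterion: -2*logLik + 2*k, smaller is better."""
    return -2.0 * loglik + 2.0 * n_params

def best_structure(fits):
    """Pick the covariance structure with the smallest AIC.
    `fits` maps a structure name to (log-likelihood, number of
    covariance parameters), as read off a fit-statistics table."""
    return min(fits, key=lambda name: aic(*fits[name]))

# hypothetical fits: CS (2 params), CSH (5), unstructured UN (10)
fits = {"CS": (-520.4, 2), "CSH": (-512.1, 5), "UN": (-510.0, 10)}
winner = best_structure(fits)   # "CSH": better fit than CS, fewer params than UN
```

The comparison shows the usual trade-off the report exploits: UN fits best in raw likelihood but pays for its 10 parameters, while CSH balances fit and parsimony.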
APA, Harvard, Vancouver, ISO, and other styles
30

Wolfram, Heiko. "Model Building, Control Design and Practical Implementation of a High Precision, High Dynamical MEMS Acceleration Sensor." Universitätsbibliothek Chemnitz, 2005. http://nbn-resolving.de/urn:nbn:de:swb:ch1-200501921.

Full text
Abstract:
This paper presents the whole process of building up a high-precision, highly dynamic MEMS acceleration sensor. The first samples have achieved a resolution of better than 500 micro-g and a bandwidth of more than 200 Hz. The sensor fabrication technology is covered briefly in the paper. A theoretical model is built from the physical principles of the complete sensor system, consisting of the MEMS sensor, the charge amplifier and the PWM driver for the sensor element. The mathematical modeling also covers problems during startup. A reduced-order model of the entire system is used to design a robust control with the Mixed-Sensitivity H-infinity Approach. Since the system has an unstable pole, imposed by the electrostatic field, and time delay, caused by A/D-D/A conversion delay and DSP computing time, limitations for the control design are given. The theoretical model might be inaccurate or incomplete, because the parameters for the theoretical model building vary from sample to sample or might not be known. A new identification scheme for open- or closed-loop operation is deployed to obtain the parameters of the mechanical system and the voltage-dependent gains directly from the samples. The focus of this paper is the complete system development and identification process, including practical tests in a TI TMS320C3000 DSP environment.
APA, Harvard, Vancouver, ISO, and other styles
31

Sercundes, Ricardo Klein. "Análise de dados longitudinais: uma aplicação na avaliação do conforto animal." Universidade de São Paulo, 2014. http://www.teses.usp.br/teses/disponiveis/11/11134/tde-20032014-153059/.

Full text
Abstract:
In tropical and subtropical regions, the high intensity of solar radiation combined with high temperature and humidity creates conditions of discomfort inside commercial poultry houses, affecting the health and production of broiler flocks. Accordingly, this work evaluated animal comfort data from reduced-scale poultry houses built with different types of roof tiles (ceramic and fibre cement) and liners (A and B). Linear mixed models were used to study the comfort indices specific enthalpy (h) and black globe temperature and humidity index (GTHI). Model building involved choosing random effects, fixed effects and covariance structures using graphical and analytical techniques. To select the models that best fitted the data, likelihood ratio tests, the Wald-F test and the AIC and BIC information criteria were used in a top-down selection method. For the specific enthalpy variable, there was no difference among the treatments evaluated, all being represented by a parabola with a maximum of 50.68 kJ per kg of dry air at 13h 51min. For the GTHI variable, there was an interaction between the factors tested, with the combination of ceramic tile and liner B performing best, reaching a maximum of 74.08 at 14h 21min. The diagnostic analyses confirmed the good fit of the models. It was expected that the different comfort indices would lead to equivalent conclusions; however, this was not observed.
In tropical and subtropical regions, the high intensity of solar radiation associated with high temperature and humidity causes discomfort inside commercial poultry houses, which affects animal health and the production of broiler batches. Therefore, this work's goal is to analyse animal comfort data from small-scale poultry houses built with different types of tiles (ceramic and cement) and liners (A and B). Linear mixed models were used to study two thermal comfort indexes: specific enthalpy (h) and black globe temperature and humidity (GTHI). Model building involved choosing fixed and random effects and covariance structures using graphical and analytical techniques. To select the best-fitting model, likelihood ratio tests were used, as well as Wald-F tests and the AIC and BIC criteria, in a top-down selection method. For the specific enthalpy variable, there was no significant difference among the treatments, and all were represented by a single curve with a peak of 50.68 kJ per kg of dry air at 13h 51min. For the GTHI variable, there was a significant interaction effect between the factors, and the combination of ceramic tile and liner B provided the best performance, with a maximum of 74.08 at 14h 21min. The diagnostic tests confirmed that the models were well fitted. It was expected that the different comfort indexes would generate equivalent conclusions; however, this was not observed.
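The reported maxima (e.g. 50.68 kJ per kg of dry air at 13h 51min, i.e. 13.85 h) come from locating the vertex of a quadratic fitted over hour-of-day; a minimal sketch with synthetic readings, not the thesis's actual measurements:

```python
import numpy as np

# Synthetic comfort-index readings over hour-of-day (illustrative only)
hours = np.array([8.0, 10.0, 12.0, 14.0, 16.0, 18.0])
index = -0.5 * (hours - 13.85) ** 2 + 50.68  # peak planted at 13.85 h

a, b, c = np.polyfit(hours, index, 2)  # fit index = a*t^2 + b*t + c
t_peak = -b / (2.0 * a)                # vertex of the parabola
peak = np.polyval([a, b, c], t_peak)
print(round(t_peak, 2), round(peak, 2))  # 13.85 50.68
```

In the thesis the quadratic time trend sits inside a linear mixed model, but the vertex arithmetic for the reported peak is the same.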
APA, Harvard, Vancouver, ISO, and other styles
32

Thompson-Sellers, Ingrid N. "What Informs Practice and What is Valued in Corporate Instructional Design? A Mixed Methods Study." Digital Archive @ GSU, 2012. http://digitalarchive.gsu.edu/msit_diss/89.

Full text
Abstract:
This study used a two-phased explanatory mixed-methods design to explore in depth what factors are perceived by Instructional Design and Technology (IDT) professionals as impacting instructional design practice, how these factors are valued in the field, and what differences in perspective exist between IDT managers and non-managers. For phase 1 of the study, one hundred and sixteen corporate IDT professionals (managers and non-managers) responded to a web-based survey that was designed and developed from: (a) the results of an exploratory study of the practices of corporate instructional designers, (b) the results of an extensive literature review of theory and practice in the field of IDT, and (c) other survey instruments developed, validated and used in prior studies. Analysis of the data collected in phase 1 resulted in the development of an Evaluation Model for IDT Practice that was used as a framework to answer the research questions. Quantitative analysis included the use of Hotelling's T2 inferential statistic to test for mean differences between managers' and non-managers' perceptions of formally and informally trained groups of IDT personnel. A chi-squared test of independence and correlation analysis were used to determine the nature and extent of the relationship between the type of training and the professional status of the participants. For phase 2 of the study, semi-structured interviews were conducted with selected participants and analyzed using the constant comparative method in order to help validate the findings from phase 1. Ensuing analysis of the survey data determined that both managers and non-managers generally agreed that both formal and on-the-job training was valuable, and that their peers who were formally and informally trained were competent instructional designers. The qualitative phase of the study and a closer examination of effect sizes suggested the potential for some variation in perceptions.
In addition, a statistically significant correlation showed that IDT managers who completed the survey were more likely to be formally trained. Recommendations based on the results included future studies with a larger, more diverse population; future studies to refine the Evaluation Model for IDT practice; and that academic ID programs work more closely with practitioners when designing and delivering their curricula.
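The Hotelling's T2 statistic used in phase 1 compares two groups on several outcomes at once; a minimal two-sample sketch with pooled covariance and made-up data, not the study's survey results:

```python
import numpy as np

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 with pooled sample covariance."""
    X = np.asarray(X, float).reshape(len(X), -1)
    Y = np.asarray(Y, float).reshape(len(Y), -1)
    n1, n2 = len(X), len(Y)
    d = X.mean(axis=0) - Y.mean(axis=0)          # difference of mean vectors
    Sp = ((n1 - 1) * np.atleast_2d(np.cov(X.T)) +
          (n2 - 1) * np.atleast_2d(np.cov(Y.T))) / (n1 + n2 - 2)
    return float(n1 * n2 / (n1 + n2) * d @ np.linalg.solve(Sp, d))

# Sanity check: in one dimension T^2 collapses to the squared t statistic
print(hotelling_t2([1, 2, 3, 4], [2, 4, 6, 8]))  # 3.0
```

With p survey scales per respondent the same call takes n-by-p arrays, which is the multivariate comparison the study ran on perception items.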
APA, Harvard, Vancouver, ISO, and other styles
33

Olabode, John A. "Analysis of the performance of an optimization model for time-shiftable electrical load scheduling under uncertainty." Thesis, Monterey, California: Naval Postgraduate School, 2016. http://hdl.handle.net/10945/51591.

Full text
Abstract:
Approved for public release; distribution is unlimited
To ensure sufficient capacity to handle unexpected demands for electric power, decision makers often overestimate expeditionary power requirements. As a result, limited resources are often used inefficiently, with more generators purchased and more renewable energy sources procured than are needed to run power systems on the battlefield. Improving the efficiency of expeditionary power units requires better management of load requirements on the power grids and, where possible, shifting those loads to a more economical time of day. We analyze the performance of a previously developed optimization model for scheduling time-shiftable electrical loads in an expeditionary power grid in two experiments. One experiment uses model data similar to the original baseline data, in which expected demand and expected renewable production remain constant throughout the day. The second experiment introduces unscheduled demand and realistic fluctuations in the power production and demand distributions that more closely reflect actual data. Our major findings show that the composition of the grid's power production affects which uncertain factors influence fuel consumption, and that uncertainty in the energy grid system does not always increase fuel consumption by a large amount. We also discover that the generators running the most do not always have the best load factor on the grid, even when optimally scheduled.
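The time-shiftable scheduling idea can be reduced to a toy linear program: move a fixed amount of deferrable energy into the cheapest periods subject to per-period capacity. This is a generic sketch with made-up prices and limits, not the thesis's model:

```python
from scipy.optimize import linprog

price = [3.0, 1.0, 2.0]      # cost per kWh in three periods of the day
total_energy = 10.0          # deferrable load that must be served in total
capacity = [6.0, 6.0, 6.0]   # max kWh that can be scheduled in each period

res = linprog(c=price,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[total_energy],
              bounds=[(0.0, cap) for cap in capacity])
print(res.x, res.fun)  # fills the cheapest period first; total cost 14.0
```

Uncertainty enters when `price`, demand, or renewable output become random, which is exactly what the second experiment above perturbs.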
Lieutenant Commander, United States Navy
APA, Harvard, Vancouver, ISO, and other styles
34

Alvarez, Genesis Barbie. "Control Design for a Microgrid in Normal and Resiliency Modes of a Distribution System." Thesis, Virginia Tech, 2019. http://hdl.handle.net/10919/94627.

Full text
Abstract:
As inverter-based distributed energy resources (DERs) such as photovoltaic (PV) systems and battery energy storage systems (BESS) penetrate the distribution system, new challenges arise regarding how to utilize these devices to improve power quality. Previously, PV systems were required to disconnect from the grid during a large disturbance, but smart inverters are now required to have dynamically controlled functions that allow them to remain connected to the grid. Monitoring power flow at the point of common coupling is one of the many functions the controller should perform. Smart inverters can inject active power to pick up critical load or inject reactive power to regulate voltage within the electric grid. In this context, this thesis focuses on high-level and local control designs that incorporate DERs. Different controllers are implemented to stabilize the microgrid in islanding and resiliency modes. The microgrid can be used as a resiliency source when the distribution system is unavailable. An average model in the D-Q frame is derived to analyze the inherent dynamics of the current controller at the point of common coupling (PCC). The space vector approach is applied to design the voltage and frequency controller. Secondly, using inverters for Volt/VAR control (VVC) can provide a faster response for voltage regulation than traditional voltage regulation devices. Another objective of this research is to demonstrate how smart inverters and capacitor banks in the system can be used to eliminate voltage deviation. A mixed-integer quadratic program (MIQP) is formulated to determine the amount of reactive power that should be injected or absorbed at the appropriate nodes by the inverters. The Big M method is used to address the nonconvex problem. This contribution can be used by distribution operators to minimize the voltage deviation in the system.
Master of Science
Reliable power supply from the electric grid is an essential part of modern life, yet this critical infrastructure is vulnerable to cascading failures and natural disasters. One way to improve power system resilience is through microgrids. A microgrid is a small network of interconnected loads and distributed energy resources (DERs) such as microturbines, wind power, solar power, or traditional internal combustion engines. A microgrid can operate either connected to or disconnected from the grid. This research emphasizes the potential use of a microgrid as a resiliency source during grid restoration to pick up critical load. Controllers are designed to pick up critical loads (e.g., hospitals, street lights and military bases) from the distribution system in case the electric grid is unavailable. The case study includes the design of a microgrid that is being tested for the feasibility of actual integration with the electric grid. Once the grid is restored, synchronization between the microgrid and the electric grid must be conducted. Synchronization is a crucial task: abnormal synchronization can cause a disturbance in the system, damage equipment, and lead to additional system outages. This thesis develops various controllers to conduct proper synchronization. Interconnecting inverter-based distributed energy resources (DERs) such as photovoltaics and battery storage within the distribution system makes it possible to use these electronic devices to improve power quality. This research focuses on using them to improve the voltage profile within the distribution system and the frequency within the microgrid.
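The Big M device mentioned above couples a binary switching decision (capacitor bank in or out) to a continuous injection; a linearized toy version (absolute voltage deviation instead of the quadratic objective, invented sensitivities) using SciPy's MILP solver:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Variables x = [q, b, t]: inverter VAR injection q, capacitor state b in {0, 1},
# and t = |V - Vref|. Linearized voltage model (invented): V = 0.95 + 0.05*q + 0.03*b.
M = 2.0                              # Big M: q may be nonzero only when b = 1
c = np.array([0.0, 0.001, 1.0])      # minimize deviation plus a tiny switching cost
A = np.array([[ 0.05,  0.03, -1.0],  #  V - Vref <= t
              [-0.05, -0.03, -1.0],  #  Vref - V <= t
              [ 1.00,    -M,  0.0]]) #  q - M*b  <= 0   (the Big M link)
ub = np.array([0.05, -0.05, 0.0])

res = milp(c=c,
           constraints=LinearConstraint(A, np.full(3, -np.inf), ub),
           integrality=np.array([0, 1, 0]),
           bounds=Bounds([0.0, 0.0, 0.0], [2.0, 1.0, np.inf]))
q, b, t = res.x
print(round(b), round(t, 6))  # capacitor switched in, deviation driven to zero
```

The thesis's formulation is a quadratic (MIQP) over many nodes; this sketch only shows how Big M turns the on/off coupling into linear constraints.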
APA, Harvard, Vancouver, ISO, and other styles
35

Di, Pace Brian S. "Site- and Location-Adjusted Approaches to Adaptive Allocation Clinical Trial Designs." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/5706.

Full text
Abstract:
Response-Adaptive (RA) designs are used to adaptively allocate patients in clinical trials. These methods have been generalized to Covariate-Adjusted Response-Adaptive (CARA) designs, which adjust treatment assignments for a set of covariates while maintaining features of the RA designs. Challenges may arise in multi-center trials if differential treatment responses and/or effects exist among sites. We propose Site-Adjusted Response-Adaptive (SARA) approaches to account for inter-center variability in treatment response and/or effectiveness, including either a fixed site effect or both random site and treatment-by-site interaction effects to calculate conditional probabilities. These success probabilities are used to update assignment probabilities for allocating patients between treatment groups as subjects accrue. Both frequentist and Bayesian models are considered. Treatment differences could also be attributed to differences in social determinants of health (SDH), which often manifest, especially if unmeasured, as spatial heterogeneity in the patient population. In these cases, patient residential location can be used as a proxy for these difficult-to-measure SDH. We propose the Location-Adjusted Response-Adaptive (LARA) approach to account for location-based variability in both treatment response and/or effectiveness. A Bayesian low-rank kriging model interpolates spatially-varying joint treatment random effects to calculate the conditional probabilities of success, utilizing patient outcomes, treatment assignments and residential information. We compare the proposed methods with several existing allocation strategies that ignore site for a variety of scenarios in which treatment success probabilities vary.
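The core response-adaptive loop described above, updating success probabilities as outcomes accrue and skewing allocation toward the better-performing arm, can be sketched with a generic Beta-Bernoulli (Thompson-style) rule. This is not the SARA or LARA model, and the response rates are invented:

```python
import random

random.seed(7)
true_p = [0.3, 0.6]   # unknown true success rates of the two arms (invented)
succ = [0, 0]
fail = [0, 0]

for _ in range(500):
    # Draw one success probability per arm from its Beta posterior,
    # then allocate the next patient to the arm with the larger draw.
    draws = [random.betavariate(succ[a] + 1, fail[a] + 1) for a in (0, 1)]
    arm = 0 if draws[0] > draws[1] else 1
    if random.random() < true_p[arm]:
        succ[arm] += 1
    else:
        fail[arm] += 1

n0, n1 = succ[0] + fail[0], succ[1] + fail[1]
print(n0, n1)  # allocation drifts toward the better arm
```

The site- and location-adjusted designs replace the simple Beta posteriors with conditional success probabilities from mixed or kriging models, but the accrual loop has the same shape.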
APA, Harvard, Vancouver, ISO, and other styles
36

Metta, Haritha. "A MULTI-STAGE DECISION SUPPORT MODEL FOR COORDINATED SUSTAINABLE PRODUCT AND SUPPLY CHAIN DESIGN." UKnowledge, 2011. http://uknowledge.uky.edu/gradschool_diss/137.

Full text
Abstract:
In this research, a decision support model for coordinating sustainable product and supply chain design decisions is developed using a multi-stage hierarchical approach. The model evaluates alternative product designs and their corresponding supply chain configurations to identify the product design and supply chain configuration that maximize the economic, environmental and societal benefits. The model takes a total life-cycle approach and incorporates closed-loop flow across multiple product life cycles. In the first stage, a mixed-integer linear programming model is developed to select, for each product design, an optimal supply chain configuration that maximizes profit. In the subsequent stages, economic, environmental and societal multiple life-cycle analysis models are developed to assess the economic, environmental and societal performance of each product design and its optimal supply chain configuration, identifying the product design with the highest sustainability benefits. The decision support model is applied to an example problem to illustrate the procedure for identifying the best sustainable design. The model is then applied to a real-world refrigerator case to identify the refrigerator design that maximizes economic, environmental and societal benefits. Further, sensitivity analysis is performed on the optimization model to study closed-loop supply chain behavior under various situations. The results indicated that both product and supply chain design criteria significantly influence the performance of the supply chain, and they provided insights into closed-loop supply chain models and their behavior under various situations.
Decision support models such as this can help a company identify the designs that bring the highest sustainability benefits, give managers a holistic view of the impact of their design decisions on supply chain performance, and point out areas for improvement.
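The multi-stage idea, first picking each design's best-profit configuration and then ranking designs on combined economic, environmental and societal scores, can be sketched with toy data (all numbers and weights invented):

```python
# Stage-1 input: profit of each supply chain configuration per product design,
# plus normalized environmental and societal scores per design (invented).
designs = {
    "D1": {"configs": {"C1": 120.0, "C2": 140.0}, "env": 0.55, "social": 0.60},
    "D2": {"configs": {"C1": 130.0, "C2": 125.0}, "env": 0.80, "social": 0.70},
}
weights = {"profit": 0.5, "env": 0.3, "social": 0.2}
max_profit = max(p for d in designs.values() for p in d["configs"].values())

best = None
for name, d in designs.items():
    # Stage 1: best-profit configuration for this design
    config, profit = max(d["configs"].items(), key=lambda kv: kv[1])
    # Subsequent stages: weighted sustainability score on normalized profit
    score = (weights["profit"] * profit / max_profit
             + weights["env"] * d["env"] + weights["social"] * d["social"])
    if best is None or score > best[3]:
        best = (name, config, profit, score)

print(best[:3])  # D2 wins despite lower profit, on sustainability grounds
```

In the dissertation stage 1 is a mixed-integer linear program rather than a lookup, but the hierarchy (optimize profit per design, then compare sustainability across designs) is the same.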
APA, Harvard, Vancouver, ISO, and other styles
37

Cole, James Jacob. "Assessing Nonlinear Relationships through Rich Stimulus Sampling in Repeated-Measures Designs." OpenSIUC, 2018. https://opensiuc.lib.siu.edu/dissertations/1587.

Full text
Abstract:
Explaining a phenomenon often requires identification of an underlying relationship between two variables. However, it is common practice in psychological research to sample only a few values of an independent variable. Young, Cole, and Sutherland (2012) showed that this practice can impair model selection in between-subjects designs. The current study extends that line of research to within-subjects designs. In two Monte Carlo simulations, model discrimination under systematic sampling of 2, 3, or 4 levels of the independent variable was compared with that under random uniform sampling and sampling from a Halton sequence. The number of subjects, number of observations per subject, effect size, and between-subjects parameter variance in the simulated experiments were also manipulated. Random sampling outperformed the other methods in model discrimination, with only small, function-specific costs to parameter estimation. Halton sampling also produced good results but was less consistent. The systematic sampling methods were generally rank-ordered by the number of levels they sampled.
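Halton sampling, one of the three level-selection schemes compared above, fills the unit interval with a low-discrepancy radical-inverse sequence; a minimal sketch:

```python
def halton(i, base=2):
    """i-th element (i >= 1) of the Halton sequence: radical inverse of i."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

# The first base-2 points split the largest remaining gaps rather than
# clustering the way independent uniform draws can.
print([halton(i) for i in range(1, 8)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```

Rescaling these points to the range of the independent variable gives stimulus levels that cover the function's shape more evenly than a 2- or 3-level systematic grid.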
APA, Harvard, Vancouver, ISO, and other styles
38

Mielke, Tobias [Verfasser], and Rainer [Akademischer Betreuer] Schwabe. "Approximations of the Fisher information for the construction of efficient experimental designs in nonlinear mixed effects models / Tobias Mielke. Betreuer: Rainer Schwabe." Magdeburg : Universitätsbibliothek, 2011. http://d-nb.info/1051445477/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Maurer, Simon. "Analysis and coordination of mixed-criticality cyber-physical systems." Thesis, University of Hertfordshire, 2018. http://hdl.handle.net/2299/21094.

Full text
Abstract:
A Cyber-physical System (CPS) can be described as a network of interlinked, concurrent computational components that interact with the physical world. Such a system is usually reactive in nature and must satisfy strict timing requirements to guarantee correct behaviour. The components can be of mixed criticality, which implies different progress and communication models depending on whether a component's focus lies on predictability or resource efficiency. In this dissertation I present a novel approach that bridges the gap between stream-processing models and Labelled Transition Systems (LTSs). The former offer powerful tools to describe concurrent systems of, usually simple, components, while the latter allow complex, reactive components and their mutual interaction to be described. To bridge the two domains I introduce a novel LTS, the Synchronous Interface Automaton (SIA), which models the interaction protocol of a process via its interface and allows simple processes to be composed incrementally into more complex ones while preserving the system properties. Exploiting these properties, I introduce an analysis to identify permanent blocking situations in a network of composed processes. SIAs are wrapped by the novel component-based coordination model Process Network with Synchronous Communication (PNSC), which describes a network of concurrent processes where multiple communication models and the co-existence and interaction of heterogeneous processes are supported through well-defined interfaces. The work presented in this dissertation follows a holistic approach spanning from the theory of the underlying model to its instantiation as a novel coordination language, called Streamix. The language uses network operators to compose networks of concurrent processes in a structured and hierarchical way.
The work is validated by a prototype implementation of a compiler and a Run-time System (RTS) that compiles a Streamix program and executes it on a platform with support for ISO C, POSIX threads, and a Linux operating system.
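The blocking analysis described above can be illustrated, in a drastically reduced form, as reachability over the synchronous product of two toy automata. This is not the SIA formalism itself, just a hypothetical sketch: each process maps a state to its (channel, direction, next-state) transitions, a joint step requires a matching send (!) / receive (?) pair on the same channel, and any reachable joint state with no successor is reported as a permanent block:

```python
from collections import deque

def compose(p1, p2, init):
    """Breadth-first search over the synchronous product; collect stuck states."""
    seen, frontier, deadlocks = {init}, deque([init]), []
    while frontier:
        s1, s2 = frontier.popleft()
        succs = []
        for ch1, d1, n1 in p1.get(s1, []):
            for ch2, d2, n2 in p2.get(s2, []):
                if ch1 == ch2 and d1 != d2:   # one sends, the other receives
                    succs.append((n1, n2))
        if not succs:
            deadlocks.append((s1, s2))        # permanent block: no joint step
        for s in succs:
            if s not in seen:
                seen.add(s)
                frontier.append(s)
    return deadlocks

# P1 sends on a then waits to receive b; P2 receives a but then also
# waits to receive b: the protocols mismatch and the pair gets stuck.
P1 = {"s0": [("a", "!", "s1")], "s1": [("b", "?", "s0")]}
P2 = {"t0": [("a", "?", "t1")], "t1": [("b", "?", "t0")]}
print(compose(P1, P2, ("s0", "t0")))  # [('s1', 't1')]
```

The real analysis works compositionally on SIAs rather than on an explicit product, but the failure mode it detects is the same kind of reachable stuck state.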
APA, Harvard, Vancouver, ISO, and other styles
40

Tebaldi, Enrico. "SIMAID : a rapid development methodology for the design of acyclic, bufferless, multi-process and mixed model agile production facilities for spaceframe vehicles." Thesis, University of Warwick, 2001. http://wrap.warwick.ac.uk/3069/.

Full text
Abstract:
The facility layout problem (FL) is a non-linear, NP-complete problem whose complexity is derived from the vast solution space generated by multiple variables and interdependent factors. For reconfigurable, agile facilities the problem is compounded by parallelism (simultaneity of operations) and scheduling issues. Previous work has either concentrated on conventional (linear or branched) facility layout design, or has not considered the issues of agile, reconfigurable facilities and scheduling. This work is the first comprehensive methodology incorporating the design and scheduling of parallel cellular facilities for the purpose of easy and rapid reconfiguration in the increasingly demanding world of agile manufacturing. A novel three-stage algorithm is described for the design of acyclic (asynchronous), bufferless, parallel, multi-process and mixed-model production facilities for spaceframe-based vehicles. Data input begins with vehicle part processing and volume requirements from multiple models and includes time, budget and space constraints. The algorithm consists of a powerful combination of a guided cell formation stage, iterative solution improvement searches and design stage scheduling. The improvement iterations utilise a modified (rules-based) Tabu search applied to a constant-flow group technology, while the design stage scheduling is done by the use of genetic algorithms. The objective-based solution optimisation direction is not random but guided, based on measurement criteria from simulation. The end product is the selection and graphic presentation of the best solution out of a database of feasible ones. The case is presented in the form of an executable program and three real world industrial examples are included. The results provide evidence that good solutions can be found to this new type and size of heavily constrained problem within a reasonable amount of time.
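The modified, rules-based tabu search is specific to the methodology above; a generic tabu search on a tiny quadratic-assignment-style layout (invented flow and distance matrices) illustrates the core mechanic of forbidding recently used moves while allowing aspiration:

```python
import itertools

flow = [[0, 3, 0, 2],   # material flow between four cells (invented)
        [3, 0, 1, 0],
        [0, 1, 0, 4],
        [2, 0, 4, 0]]
dist = [[0, 1, 2, 3],   # distances between four locations on a line
        [1, 0, 1, 2],
        [2, 1, 0, 1],
        [3, 2, 1, 0]]

def cost(perm):
    # total handling cost: flow between cells times distance of their locations
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(4) for j in range(4))

def tabu_search(start, iters=30, tenure=4):
    current, best = list(start), list(start)
    tabu = []                                  # recently swapped index pairs
    for _ in range(iters):
        moves = []
        for i, j in itertools.combinations(range(4), 2):
            cand = current[:]
            cand[i], cand[j] = cand[j], cand[i]
            # aspiration: a tabu move is allowed if it beats the best so far
            if (i, j) not in tabu or cost(cand) < cost(best):
                moves.append((cost(cand), (i, j), cand))
        c, move, cand = min(moves)             # best admissible neighbour
        current = cand
        tabu = (tabu + [move])[-tenure:]
        if c < cost(best):
            best = cand[:]
    return best

best = tabu_search([0, 1, 2, 3])
print(best, cost(best))
```

The dissertation's version guides moves with rules and simulation-based measurements instead of a plain swap neighbourhood, and couples the search with cell formation and GA-based scheduling stages.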
APA, Harvard, Vancouver, ISO, and other styles
41

Gupta, Patel Salin. "MECHANISMS AND THERMODYNAMICS OF THE INFLUENCE OF SOLUTION-STATE INTERACTIONS BETWEEN HPMC AND SURFACTANTS ON MIXED ADSORPTION ONTO MODEL NANOPARTICLES." UKnowledge, 2019. https://uknowledge.uky.edu/pharmacy_etds/103.

Full text
Abstract:
Nanoparticulate drug delivery systems (NDDS) such as nanocrystals, nanosuspensions, solid-lipid nanoparticles often formulated for the bioavailability enhancement of poorly soluble drug candidates are stabilized by a mixture of excipients including surfactants and polymers. Most literature studies have focused on the interaction of excipients with the NDDS surfaces while ignoring the interaction of excipients in solution and the extent to which the solution-state interactions influence the affinity and capacity of adsorption. Mechanisms by which excipients stabilize NDDS and how this information can be utilized by formulators a priori to make a rational selection of excipients is not known. The goals of this dissertation work were (a) to determine the energetics of interactions between HPMC and model surfactants and the extent to which these solution-state interactions modulate the adsorption of these excipients onto solid surfaces, (b) to determine and characterize the structures of various aggregate species formed by the interaction between hydroxypropyl methylcellulose (HPMC) and model surfactants (nonionic and ionic) in solution-state, and (c) to extend these quantitative relationships to interpret probable mechanisms of mixed adsorption of excipients onto the model NDDS surface. A unique approach utilizing fluorescence, solution calorimetry and adsorption isotherms was applied to tease apart the effect of solution state interactions of polymer and surfactant on the extent of simultaneous adsorption of the two excipients on a model surface. The onset of aggregation and changes in aggregate structures were quantified by a fluorescence probe approach with successive addition of surfactant. In the presence of HPMC, the structures of the aggregates formed were much smaller with an aggregation number (Nagg) of 34 as compared to micelles (Nagg ~ 68) formed in the absence of HPMC. 
The strength of polymer-surfactant interactions was determined to be a function of ionic strength and hydrophobicity of surfactant. The nature of these structures was characterized using their solubilization power for a hydrophobic probe molecule. This was determined to be approximately 35% higher in the polymer-surfactant aggregates as compared to micelles alone and was attributed to a significant increase in the number of aggregates formed and the increased hydrophobic microenvironment within these aggregates at a given concentration of surfactant. The energetics of the adsorption of SDS, HPMC, and SDS-HPMC aggregate onto nanosuspensions of silica, which is the model solid surface were quantified. A strong adsorption enthalpy of 1.25 kJ/mol was determined for SDS adsorption onto silica in the presence of HPMC as compared to the negligible adsorption enthalpy of 0.1 kJ/mol for SDS alone on the silica surface. The solution depletion and HPMC/ELSD methods showed a marked increase in the adsorption of SDS onto silica in the presence of HPMC. However, at high SDS concentrations, a significant decrease in the adsorbed amount of HPMC onto silica was determined. This was further corroborated by the adsorption enthalpy that showed that the silica-HPMC-SDS aggregation process became less endothermic upon addition of SDS. This suggested that the decrease in adsorption of HPMC onto silica at high SDS concentrations was due to competitive adsorption of SDS-HPMC aggregates wherein SDS is displaced/desorbed from silica in the presence of HPMC. At low SDS concentrations, an increase in adsorption of SDS was due to cooperative adsorption wherein SDS is preferentially adsorbed onto silica in the presence of HPMC. This adsorption behavior confirmed the hypothesis that the solution-state interactions between pharmaceutical excipients such as polymers and surfactants would significantly impact the affinity and capacity of adsorption of these excipients on NDDS surfaces.
APA, Harvard, Vancouver, ISO, and other styles
42

Alharbi, Abdulmajeed A. "Investigating Survey Response Rates and Analytic Choice of Survey Results fromUniversity Faculty in Saudi Arabia." Ohio University / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1585051418774214.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Ray, Sharon N. E. "Evaluating the Efficacy of the Developing Algebraic Literacy Model: Preparing Special Educators to Implement Effective Mathematics Practices." Scholar Commons, 2008. https://scholarcommons.usf.edu/etd/466.

Full text
Abstract:
For students with learning disabilities, positive academic achievement outcomes are a chief area of concern for educators across the country. This achievement emphasis has become particularly important over the last several years because of the No Child Left Behind legislation. The content area of mathematics, especially in the higher order thinking arena of algebra, has been of particular concern for student progress. While most educational research in algebra has been targeted towards remedial efforts at the high school level, early intervention in the foundational skills of algebraic thinking at the elementary level needs consideration for students who would benefit from early exposure to algebraic ideas. A key aspect of students' instruction with algebraic concepts at any level is the degree and type of preparation their teachers have received with this content. Using a mixed methods design, the current researcher investigated the usage of the Developing Algebraic Literacy (DAL) framework with preservice special education teacher candidates in an integrated practicum and coursework experience. Multiple survey measures were given at pre-, mid-, and post- junctures to assess teacher candidates' attitudes about mathematics, feelings of efficacy when teaching mathematics, and content knowledge surrounding mathematics. An instructional knowledge exam and fidelity checks were completed to evaluate teacher candidates' acquisition and application of algebraic instructional skills. Focus groups, case studies, and final project analyses were used to discern descriptive information about teacher candidates' experience while engaging in work with the DAL framework. Results indicated an increase in preservice teachers' attitudes towards mathematics instruction, feelings of efficacy in teaching mathematics, and in the content knowledge surrounding mathematics instruction. 
Instructional knowledge also increased across preservice teacher candidates, but their ability to apply this knowledge varied, depending on the number of sessions they spent working with students at their practicum site. Further findings indicate the desire of preservice teachers to increase the length and number of student sessions within the DAL experience, as well as the need for increased levels of instructional support to enhance their own experience. This study provides preliminary support for utilizing the DAL instructional framework within preservice teacher preparation experiences for future special educators.
APA, Harvard, Vancouver, ISO, and other styles
44

Bourgos, Paraskevas. "Rigorous Design Flow for Programming Manycore Platforms." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENM012/document.

Full text
Abstract:
The objective of the work presented in this thesis is to address a fundamental question: how can embedded applications be programmed rigorously and efficiently on many-core platforms? This problem raises several challenges: 1) developing a rigorous model-based approach to guarantee correctness; 2) "marrying" the physical model and the computational model, that is, integrating functional and non-functional aspects; 3) adaptability. To tackle these challenges, we developed a rigorous design flow around the BIP language. This design flow allows design-space exploration, processing at different levels of abstraction for both the platform and the application, code generation, and deployment on many-core platforms. The method relies on source-to-source transformations of BIP models that are correct-by-construction. We illustrate this design flow with the modeling and deployment of several applications on two different platforms. The first platform considered is MPARM, a virtual platform based on ARM processors and structured in clusters, each containing several cores. For this platform, we considered the following applications: Cholesky factorization, MPEG-2 decoding, MJPEG decoding, the Fast Fourier Transform, and a demosaicing algorithm. The second platform is P2012/STHORM, a many-core platform based on multiple clusters capable of efficient power management. The application considered on P2012/STHORM is the HMAX algorithm. Experimental results show the merits of the design flow, notably rapid performance analysis as well as system-level modeling, code generation, and deployment
The advent of many-core platforms is nowadays challenging our capabilities for efficient and predictable design. To meet this challenge, designers need methods and tools for guaranteeing essential properties and determining tradeoffs between performance and efficient resource management. In the process of designing a mixed software/hardware system, functional constraints and also extra-functional specifications should be taken into account as an essential part for the design of embedded systems. The impact of design choices on the overall behavior of the system should also be analyzed. This implies a deep understanding of the interaction between application software and the underlying execution platform. We present a rigorous model-based design flow for building parallel applications running on top of many-core platforms. The flow is based on the BIP - Behavior, Interaction, Priority - component framework and its associated toolbox. The method allows generation of a correct-by-construction mixed hardware/software system model for manycore platforms from an application software and a mapping. It is based on source-to-source correct-by-construction transformations of BIP models. It provides full support for modeling application software and validation of its functional correctness, modeling and performance analysis of system-level models, code generation and deployment on target many-core platforms. Our design flow is illustrated through the modeling and deployment of various software applications on two different hardware platforms; MPARM and platform P2012/STHORM. MPARM is a virtual ARM-based multi-cluster manycore platform, configured by the number of clusters, the number of ARM cores per cluster, and their interconnections. On MPARM, the software applications considered are the Cholesky factorization, the MPEG-2 decoding, the MJPEG decoding, the Fast Fourier Transform and the Demosaicing algorithm. 
Platform 2012 (P2012/STHORM) is a power-efficient many-core computing fabric, which is highly modular and based on multiple clusters capable of aggressive fine-grained power management. As a case study on P2012/STHORM, we used the HMAX algorithm. Experimental results show the merits of the design flow, notably performance analysis as well as correct-by-construction system-level modeling, code generation and efficient deployment.
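The mapping step mentioned above, pairing application tasks with platform cores before code generation, can be caricatured in a few lines of Python. This is purely an illustrative sketch, not the BIP toolbox's actual API; the task names, load figures and greedy policy are all invented:

```python
# Illustrative sketch only: BIP is a full component framework with formal
# semantics; this toy greedy mapper merely mimics the "mapping" input that
# the design flow combines with the application model. All names here are
# hypothetical and not part of the BIP toolbox.

def map_tasks_to_cores(task_loads, num_cores):
    """Greedily assign each task to the currently least-loaded core."""
    core_load = [0.0] * num_cores
    mapping = {}
    # Place heavy tasks first so the greedy choice balances better.
    for task, load in sorted(task_loads.items(), key=lambda kv: -kv[1]):
        core = min(range(num_cores), key=lambda c: core_load[c])
        mapping[task] = core
        core_load[core] += load
    return mapping, core_load

# Toy application: five pipeline stages mapped onto a 2-core cluster.
tasks = {"dct": 4.0, "quant": 1.0, "vlc": 2.0, "motion": 5.0, "io": 1.5}
mapping, loads = map_tasks_to_cores(tasks, 2)
print(mapping, loads)
```

In the real flow, the mapping, together with the application and platform models, feeds the source-to-source transformations that produce the system model.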
APA, Harvard, Vancouver, ISO, and other styles
45

Bailey, Brittney E. "Data analysis and multiple imputation for two-level nested designs." The Ohio State University, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=osu1531822703002162.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Paakkola, Dennis, and Robin Rännar. "Ökad användarberedskap för digitala miljösimuleringar : Kravställning,utveckling och utvärdering av digital prototyp för användarintroduktion." Thesis, Mittuniversitetet, Institutionen för data- och systemvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:miun:diva-38021.

Full text
Abstract:
Digital environmental simulations can be performed with different techniques and the most common technologies are virtual reality, augmented reality and mixed reality. Digital environmental simulations have proven to be effective in practicing surgery, industrial activities and for military exercises. Previous studies have shown that technology habits are a factor that affects whether digital environmental simulations can be used effectively. Thus, the purpose of the study was to investigate how users can be introduced to digital environmental simulations. To achieve the purpose, the following question needed to be answered: How can a digital prototype be designed to introduce users to digital environmental simulations based on user needs? The study was based on design science as a research strategy, which meant that the study was carried out in three phases: development of requirements, development and evaluation of a digital prototype. The requirements were produced through qualitative data collection in the form of semi-structured interviews. The interview questions were developed using a theoretical framework on digital competence. The interviews resulted in a requirement specification containing 15 user stories that were prioritized. Based on the requirement specification, a digital prototype was developed in the development environment Unity. The evaluation of the digital prototype was carried out in two stages, where the first was to evaluate internally and the second step was to evaluate externally. The external evaluation was conducted with respondents who carried out a user test of the digital prototype, which resulted in proposals for further development. But it also resulted in users having increased knowledge and ability to see opportunities with digital environmental simulations. The conclusion is that users can be introduced to digital environmental simulations through a digital prototype designed based on user needs.
Digital environmental simulations can be performed with different techniques, and the most common are virtual reality, augmented reality and mixed reality. Digital environmental simulations have proven effective for practicing surgery, industrial procedures and military exercises. Earlier studies have shown that technology habits are a factor that affects whether digital environmental simulations can be used effectively. The purpose of the study was therefore to investigate how users can be introduced to digital environmental simulations. To achieve this purpose, the following research question had to be answered: How can a digital prototype be designed to introduce users to digital environmental simulations based on user needs? The study used design science as its research strategy, which meant that it was carried out in three phases: requirements elicitation, development, and evaluation of a digital prototype. Requirements were elicited through qualitative data collection in the form of semi-structured interviews. The interview questions were developed using a theoretical framework on digital competence. The interviews resulted in a requirement specification containing 15 prioritized user stories. Based on this specification, a digital prototype was developed in the Unity development environment. The evaluation of the digital prototype was carried out in two steps, first internally and then externally. The external evaluation was conducted with respondents who performed a user test of the digital prototype, which resulted in proposals for further development. But it also resulted in users gaining increased knowledge and ability to see opportunities with digital environmental simulations. The conclusion is that users can be introduced to digital environmental simulations through a digital prototype designed based on user needs.
APA, Harvard, Vancouver, ISO, and other styles
47

Pinto, Taborga Carola. "A methodology and a mathematical model for reducing greenhouse gas emissions through the suppply chain redesign." Doctoral thesis, Universitat Politècnica de Catalunya, 2018. http://hdl.handle.net/10803/620787.

Full text
Abstract:
Virtually the entire scientific, political, business, and social community is aware of the importance of climate change. Countries adhering to the Kyoto Protocol have taken up the challenge of reducing carbon emissions, implementing national policies that include the introduction of carbon emissions trading programs, voluntary programs, taxes on carbon emissions and energy efficiency standards. In this context, the business world must be able to generate a carbon reduction strategy to ensure long-term success, considering also that customers (and investors) are ever more interested in the well-being of the environment, and increasingly demand that their suppliers be eco-friendly. This thesis has addressed the problem of designing (or redesigning) the supply chain to reduce carbon emissions in an economically viable and, as far as possible, optimal way. The thesis addresses the problem by designing a complete and formalized methodology, which also includes a mathematical model to determine the best decisions to take. The research begins, as usual, with a review of the basic terminology, standards and the scientific literature related to the topic. From the review of the literature, it has been concluded that, although there are authors who propose models related to the design of the supply chain including carbon reduction, there is a lack of formalized methodologies that can be applied to real cases. The methodology consists of 4 stages: 1) The creation of a corporate carbon strategy; 2) The alignment with strategic financial planning; 3) The development of a mathematical model; and 4) The implementation and tracking. In the first stage, a six-step guide is developed to create a corporate carbon strategy. The steps are: 1) Determine the type of emission; 2) Boundaries definition; 3) Planning and performance information; 4) Identify carbon reduction opportunities; 5) Determine carbon reduction goals; 6) Participating in programs and carbon markets.
In the second stage, the corporate carbon strategy is evaluated from a financial point of view and integrated into the strategic planning. In the third stage, a Mixed Integer Linear Programming (MILP) model is proposed to obtain a plan for the supply chain redesign, so that: 1) the carbon reduction targets are achieved; 2) the strategic financial plan is taken into account; 3) all the real possibilities are contemplated to redesign the supply chain; and 4) a solution is achieved to optimize the economic results of the company. The carbon reduction methodology, including the mathematical model, has been applied to three case studies that are useful for adjusting some elements and for its validation. The first case study corresponds to a company that operates in the Home and Personal Care sector in Brazil, where the system of taxes is more complex than in other countries and illustrates how the mathematical model can be adapted to any context. The second case study deals with a multinational company which operates in the Foods sector in Spain and requires a redesign of the supply chain to improve its product cost. Finally, the third case used a company in the U.S. to show the effect of the scope definition on the carbon strategy. In the three cases, the solution of the mathematical model maximizes the net profit, whilst the carbon reduction target is achieved. Therefore, the carbon reduction methodology is useful for achieving economic and environmental benefits, as well as providing benefits related to the improvement of the corporate image, strengthening of brands and avoiding possible carbon tax risks. In conclusion, the carbon reduction methodology proposed in this thesis was developed to support companies that want to generate a competitive advantage and sustainable development. In addition, it was designed to be flexible enough to adapt to the needs of each business and facilitate its execution in the business world.
Virtually the entire scientific, political, business and social community is aware of the importance of the environmental challenge related to greenhouse gas (GHG) emissions. Countries adhering to the Kyoto Protocol have taken up the challenge of reducing GHG emissions, implementing policies that include emissions trading programs, voluntary programs, taxes on GHG emissions and energy efficiency standards. In this context, the business world must be able to generate a GHG reduction strategy to ensure long-term success, considering also that customers are ever more interested in the well-being of the environment. This thesis has addressed the problem of designing (or redesigning) the supply chain as a way to reduce GHG emissions in an economically viable and, as far as possible, optimal way. The thesis addresses the problem by designing a complete and formalized methodology, which also includes a mathematical model to determine the best decisions to take. From the literature review, it has been concluded that, although there are authors who propose models related to supply chain design that include GHG reduction, there are no works proposing a complete and sufficiently formalized methodology that can be applied in practice. The methodology consists of 4 stages: 1) The creation of a corporate GHG reduction strategy; 2) The alignment with strategic financial planning; 3) The development of a mathematical model; and 4) The implementation and tracking.
In the first stage, a six-step guide is developed to create a corporate GHG reduction strategy; the steps are: 1) Determine the type of emission; 2) Define the scope; 3) Establish the measurement baseline; 4) Identify GHG reduction opportunities; 5) Set the targets; 6) Plan participation in GHG reduction programs. In the second stage, the corporate strategy proposed above is evaluated from a financial point of view and integrated into strategic planning. In the third stage, a Mixed Integer Linear Programming model is proposed to obtain a plan for the supply chain redesign, so that: 1) the GHG reduction targets are achieved; 2) the strategic financial plan is taken into account; 3) all the real possibilities for redesigning the supply chain are considered; and 4) the economic results of the company are optimized. The methodology, including the mathematical program, has been tested on three case studies. The first case study corresponds to a multinational in the home and personal care sector operating in Brazil, where the mathematical model was adapted to integrate tax benefits. The second case deals with a multinational in the food sector based in Spain that requires a supply chain redesign to improve its production cost. Finally, the third case uses a company in the metal sector based in the U.S. to illustrate the importance of defining corporate boundaries and responsibilities. In the three case studies, the mathematical model maximizes the net profit while achieving the GHG reduction target. Therefore, the methodology is useful for achieving economic and environmental benefits, in addition to providing benefits related to the improvement of the corporate image, the strengthening of brands and the avoidance of possible carbon tax risks.
In conclusion, the proposed methodology was developed so that its implementation can generate a competitive advantage for companies and growth grounded in environmental sustainability; likewise, it was designed to be flexible enough to adapt to the needs of each business
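The scale of the thesis's MILP is far beyond a snippet, but the shape of the decision problem (binary open/close decisions, production quantities, a profit objective and an emissions cap) can be illustrated with a toy model solved by brute-force enumeration. All plants, costs and the cap below are hypothetical, not taken from the case studies:

```python
from itertools import product

# Toy stand-in for the thesis's MILP (the real model covers taxes, financial
# planning and full network redesign). Plants, costs and the emissions cap
# below are invented for illustration only.
plants = {
    "old_plant": {"fixed_cost": 10, "unit_profit": 3.0, "unit_co2": 2.0, "cap": 10},
    "new_plant": {"fixed_cost": 25, "unit_profit": 3.0, "unit_co2": 0.5, "cap": 10},
}
DEMAND = 12          # units that must be produced in total
CO2_BUDGET = 14.0    # emissions cap imposed by the carbon strategy

best = None
names = list(plants)
# Enumerate open/close decisions and integer production levels (toy sizes only).
for opens in product([0, 1], repeat=len(names)):
    ranges = [range(plants[n]["cap"] + 1) if o else range(1)
              for n, o in zip(names, opens)]
    for qty in product(*ranges):
        if sum(qty) != DEMAND:
            continue
        co2 = sum(q * plants[n]["unit_co2"] for n, q in zip(names, qty))
        if co2 > CO2_BUDGET:
            continue
        profit = sum(q * plants[n]["unit_profit"] - o * plants[n]["fixed_cost"]
                     for n, o, q in zip(names, opens, qty))
        if best is None or profit > best[0]:
            best = (profit, dict(zip(names, qty)), co2)

print(best)
```

A real instance would hand the same structure (binaries, flows, profit objective, emissions constraint) to a MILP solver rather than enumerating.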
APA, Harvard, Vancouver, ISO, and other styles
48

Shah, Aditya Arunkumar. "Combining mathematical programming and SysML for component sizing as applied to hydraulic systems." Thesis, Georgia Institute of Technology, 2010. http://hdl.handle.net/1853/33890.

Full text
Abstract:
In this research, the focus is on improving a designer's capability to determine near-optimal sizes of components for a given system architecture. Component sizing is a hard problem to solve because of the presence of competing objectives, requirements from multiple disciplines, and the need for finding a solution quickly for the architecture being considered. In current approaches, designers rely on heuristics and iterate over the multiple objectives and requirements until a satisfactory solution is found. To improve on this state of practice, this research introduces advances in the following two areas: a) formulating a component sizing problem in a manner that is convenient to designers, and b) solving the component sizing problem in an efficient manner so that all of the imposed requirements are satisfied simultaneously and the solution obtained is mathematically optimal. In particular, an acausal, algebraic, equation-based, declarative modeling approach is taken to solve component sizing problems efficiently. This is because global optimization algorithms exist for algebraic models and the computation time is considerably less as compared to the optimization of dynamic simulations. In this thesis, the mathematical programming language known as GAMS (General Algebraic Modeling System) and its associated global optimization solvers are used to solve component sizing problems efficiently. Mathematical programming languages such as GAMS are not convenient for formulating component sizing problems, and therefore the Systems Modeling Language developed by the Object Management Group (OMG SysML) is used to formally capture and organize models related to component sizing into libraries that can be reused to compose new models quickly by connecting them together.
Model transformations are then used to generate low-level mathematical programming models in GAMS that can be solved using commercial off-the-shelf solvers such as BARON (Branch and Reduce Optimization Navigator) to determine the component sizes that satisfy the requirements and objectives imposed on the system. This framework is illustrated by applying it to an example application for sizing a hydraulic log splitter.
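The abstract's point about algebraic models being cheap to solve can be made concrete with a deliberately tiny sizing example. The real framework generates GAMS models solved by BARON; the sketch below instead solves a single hydraulic cylinder sizing problem in closed form, with all numbers hypothetical rather than taken from the log splitter case study:

```python
import math

# Much-simplified stand-in for the algebraic sizing models the thesis writes
# in GAMS and solves with BARON. Pressure, required force and flow values
# are hypothetical, not from the log splitter case study.
P_SUPPLY = 20e6      # pump pressure [Pa]
F_REQUIRED = 50e3    # force the cylinder must deliver [N]
Q_PUMP = 1.0e-4      # pump flow [m^3/s]
V_MAX = 0.25         # maximum allowed rod speed [m/s]

# Force constraint: p * A >= F  ->  lower bound on piston area.
area_force = F_REQUIRED / P_SUPPLY
# Speed constraint: v = Q / A <= v_max  ->  another lower bound on area.
area_speed = Q_PUMP / V_MAX
# The binding (largest) lower bound sizes the piston; bore from A = pi*d^2/4.
area = max(area_force, area_speed)
bore = math.sqrt(4.0 * area / math.pi)
print(f"piston area {area * 1e4:.2f} cm^2, bore {bore * 1e3:.1f} mm")
```

With several coupled components and nonlinear couplings, this closed-form shortcut disappears, which is where an algebraic modeling language plus a global solver earns its keep.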
APA, Harvard, Vancouver, ISO, and other styles
49

Oesterle, Jonathan. "Holistic approach to designing hybrid assembly lines A comparative study of Multi-Objective Algorithms for the Assembly Line Balancing and Equipment Selection Problem under consideration of Product Design Alternatives Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives Hybrid Multi-objective Optimization Method for Solving Simultaneously the line Balancing, Equipment and Buffer Sizing Problems for Hybrid Assembly Systems Comparison of Multiobjective Algorithms for the Assembly Line Balancing Design Problem Efficient multi-objective optimization method for the mixed-model-line assembly line design problem Detaillierungsgrad von Simulationsmodellen Rechnergestützte Austaktung einer Mixed-Model Line. Der Weg zur optimalen Austaktung." Thesis, Troyes, 2017. http://www.theses.fr/2017TROY0012.

Full text
Abstract:
The objective of the work presented in this thesis is the formulation and resolution of two multi-objective optimization problems. These decision problems, tied to a holistic approach, aim to select the best product/assembly-line configuration from a set of product designs and resources. For the first problem, a cost model was developed to capture the complex interdependencies between the selection of a product design and the characteristics of the resources. An empirical study is proposed that compares, according to several multi-objective quality indicators, different resolution methods, including genetic algorithms, ant colony optimization, particle swarm optimization, bat algorithms, cuckoo search and flower-pollination algorithms. Several dominance rules and a problem-specific local search were applied to the most promising resolution methods. For the second problem, which also addresses buffer sizing, the resolution methods are coupled with a discrete-event simulation model whose primary function is to evaluate the values of the different objective functions. The holistic approach associated with the two problems was validated on two industrial cases
The work presented in this thesis concerns the formulation and the resolution of two holistic multi-objective optimization problems associated with the selection of the best product and hybrid assembly line configuration out of a set of product, process and resource alternatives. Regarding the first problem, a cost model was developed in order to translate the complex interdependencies between the selection of specific product designs, processes and resource characteristics. An empirical study is proposed, which aims at comparing, according to several multi-objective quality indicators, various resolution methods, including variants of evolutionary algorithms, ant colony optimization, particle swarm optimization, bat algorithms, cuckoo search algorithms, and flower-pollination algorithms. Several dominance rules and a problem-specific local search were applied to the most promising resolution methods. Regarding the second problem, which also considers buffer sizing, the developed algorithms were coupled with a discrete-event simulation model, whose primary function is to evaluate the value of the various objective functions. The resolution frameworks for both problems were validated through two industrial case studies
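The dominance rules referred to in both abstracts rest on the standard Pareto-dominance test. As a minimal sketch (minimization is assumed for every objective, and the candidate designs below are invented):

```python
# Illustrative sketch of the Pareto-dominance test underlying the dominance
# rules compared in the thesis (minimization assumed for every objective).

def dominates(a, b):
    """True if vector a is at least as good as b everywhere and better once."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy objective vectors: (cycle time, equipment cost) for candidate line designs.
designs = [(10, 5), (8, 7), (12, 4), (9, 6), (10, 6)]
front = pareto_front(designs)
print(front)
```

A multi-objective algorithm such as those compared in the thesis repeatedly applies this filter to its population; additional dominance rules prune candidates before the full comparison.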
APA, Harvard, Vancouver, ISO, and other styles
50

Kasam, Alisha. "Conceptual design of a breed & burn molten salt reactor." Thesis, University of Cambridge, 2019. https://www.repository.cam.ac.uk/handle/1810/289755.

Full text
Abstract:
A breed-and-burn molten salt reactor (BBMSR) concept is proposed to address the Generation IV fuel cycle sustainability objective in a once-through cycle with low enrichment and no reprocessing. The BBMSR uses separate fuel and coolant molten salts, with the fuel contained in assemblies of individual tubes that can be shuffled and reclad periodically to enable high burnup. In this dual-salt configuration, the BBMSR may overcome several limitations of previous breed-and-burn (B&B) designs to achieve high uranium utilisation with a simple, passively safe design. A central challenge in design of the BBMSR fuel is balancing the neutronic requirement of a large fuel volume fraction for B&B mode with the thermal-hydraulic requirements for safe and economically competitive reactor operation. Natural convection of liquid fuel within the tubes aids heat transfer to the coolant, and a systematic approach is developed to efficiently model this complex effect. Computational fluid dynamics modelling is performed to characterise the unique physics of the system and produce a new heat transfer correlation, which is used alongside established correlations in a numerical model. A design framework is built around this numerical model to iteratively search for the limiting power density of a given fuel and channel geometry, applying several defined temperature and operational constraints. It is found that the trade-offs between power density, core pressure drop, and pumping power are lessened by directing the flow of coolant downwards through the channel. Fuel configurations that satisfy both neutronic and thermal-hydraulic objectives are identified for natural, 5% enriched, and 20% enriched uranium feed fuel. B&B operation is achievable in the natural and 5% enriched versions, with power densities of 73 W/cm³ and 86 W/cm³, and theoretical uranium utilisations of 300 MWd/kgU_NAT and 25.5 MWd/kgU_NAT, respectively.
Using 20% enriched feed fuel relaxes neutronic constraints so a wider range of fuel configurations can be considered, but there is a strong inverse correlation between power density and uranium utilisation. The fuel design study demonstrates the flexibility of the BBMSR concept to operate along a spectrum of modes ranging from high fuel utilisation at moderate power density using natural uranium feed fuel, to high power density and moderate utilisation using 20% uranium enrichment.
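The iterative search for the limiting power density can be sketched with a toy one-dimensional model: assume, purely for illustration and with invented coefficients, that peak fuel temperature grows linearly with power density, then bisect for the largest power density that respects the temperature limit:

```python
# Toy version of the design framework's limiting-power-density search:
# a made-up linear thermal model (inlet temperature plus coolant and film
# rises proportional to power density) checked against a fuel temperature
# limit by bisection. All coefficients are invented, not from the thesis.
T_INLET = 600.0      # coolant inlet temperature [C]
K_COOLANT = 1.5      # coolant heat-up per unit power density [C per W/cm^3]
K_FILM = 2.0         # fuel-to-coolant film rise per unit power density
T_LIMIT = 900.0      # maximum allowed fuel salt temperature [C]

def peak_fuel_temp(q):
    """Peak fuel temperature for volumetric power density q [W/cm^3]."""
    return T_INLET + (K_COOLANT + K_FILM) * q

def limiting_power_density(lo=0.0, hi=1000.0, tol=1e-6):
    """Largest q with peak_fuel_temp(q) <= T_LIMIT, found by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if peak_fuel_temp(mid) <= T_LIMIT:
            lo = mid
        else:
            hi = mid
    return lo

q_max = limiting_power_density()
print(f"limiting power density ~ {q_max:.2f} W/cm^3")
```

The thesis's framework does the same outer search, but evaluates each candidate power density with the full numerical thermal-hydraulic model (including the natural-convection correlation) instead of a linear formula.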
APA, Harvard, Vancouver, ISO, and other styles