
Dissertations / Theses on the topic 'Two continuum model'


Consult the top 50 dissertations / theses for your research on the topic 'Two continuum model.'


1

Laurie, Henri De Guise. "The general continuum model for structured populations, with two case studies in plant ecology." Doctoral thesis, University of Cape Town, 1994. http://hdl.handle.net/11427/18243.

Abstract:
Bibliography: p. 129-143.
The broad aim of this thesis is to investigate the formulation and usefulness of a very general model for plant population dynamics. In chapter 1, the goal of generality is discussed, particularly in the light of the lack of interaction between field and experimental population studies on the one hand and theoretical population dynamics on the other hand. A distinction is made between descriptive and axiomatic theories, and it is suggested that they serve different purposes. The advantages of a rigorous framework are pointed out and the basic elements of the continuum approach are introduced. In chapter 2, the model is proposed, the existence and uniqueness of solutions to its equations is proved, and an algorithm for numerically approximating transient solutions is discussed. The question of generality is addressed in two places, and it is argued that the basic framework presented here is in principle adequate to model the processes of plant population dynamics in full detail, though the existence proof cannot accommodate all possible models. In particular, models with time lags are excluded. Further limitations of the existence proof in terms of constitutive relations are pointed out. In consequence, the theory here presented does not fully exploit the possibilities for generality inherent in the basic equations. In chapter 3, the question of what data would allow identification of factors determining somatic growth and mortality is investigated computationally. It is shown that using only the average size is insufficient. A class of models which includes all possible combinations of three types of size dependence in somatic growth and mortality is formulated.
Qualitative parameter estimation for the various models yields size distributions that can be classified into the following biologically meaningful groups: group (i) has no models that use dependence on relative size; group (ii) has all the models in which somatic growth depends on relative size; group (iii) has the models where only mortality depends on relative size. Thus it appears that size distribution may be used to distinguish various forms of size dependence in somatic growth and mortality. In chapter 4, a lottery model criterion for coexistence of plants with disjoint generations is developed, which is shown to require relative density dependence. Computer simulations aiming to initiate the use of exploratory calculations in studies of coexisting serotinous proteoids in fynbos indicate that the aspect of plant population dynamics most sensitive to density dependence is seed production, then somatic growth, while mortality is least sensitive to density dependence.
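As an editorial aside, the transport-type balance law underlying continuum models of size-structured populations can be approximated numerically along the lines the abstract mentions. The following is a minimal illustrative sketch, not the author's algorithm: a first-order upwind scheme for dn/dt + d(g(s)n)/ds = -mu(s)n with a renewal condition at s = 0, where the growth rate g, mortality mu, initial distribution and recruitment rate are all invented for illustration.

```python
import numpy as np

def simulate_size_structured(ns=200, nt=400, smax=1.0, tmax=1.0, birth=1.0):
    """First-order upwind scheme for dn/dt + d(g(s) n)/ds = -mu(s) n with a
    renewal condition at s = 0. The growth rate g, mortality mu, initial
    distribution and recruitment rate are illustrative assumptions."""
    g = lambda s: 0.1 * (1.0 - s)              # somatic growth rate (assumed)
    mu = lambda s: 0.05 + 0.1 * s              # mortality rate (assumed)
    ds, dt = smax / ns, tmax / nt
    s = np.linspace(0.0, smax, ns + 1)
    n = np.exp(-(((s - 0.2) / 0.05) ** 2))     # initial size distribution
    for _ in range(nt):
        flux = g(s) * n                        # advective flux g(s) n
        new = n.copy()
        new[1:] -= dt / ds * (flux[1:] - flux[:-1])   # upwind (g >= 0 here)
        new -= dt * mu(s) * n                  # explicit mortality sink
        new[0] = birth / g(0.0)                # renewal: g(0) n(0, t) = birth
        n = new
    return s, n
```

With these parameters the CFL number is well below one, so the scheme keeps the size distribution finite and non-negative.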
2

Miller, Ryan Michael. "Continuum Modeling of Liquid-Solid Suspensions for Nonviscometric Flows." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4864.

Abstract:
A suspension flow model based on the "suspension balance" approach has been developed. This work modifies the model to allow the solution of suspension flows under general flow conditions. This requires the development of a frame-invariant constitutive model for the particle stress which can take into account the spatially-varying local kinematic conditions. The mass and momentum balances for the bulk suspension and particle phase are solved numerically using a finite volume method. The particle stress is based upon the computed rate of strain and the local kinematic conditions. A nonlocal stress contribution corrects the continuum approximation of the particle phase for finite particle size effects. Local kinematic conditions are accounted for through the local ratio of rotation to extension in the flow field. The coordinates for the stress definition are the local principal axes of the rate of strain field. The developed model is applied to a range of problems. (i) Axially-developing conduit flows are computed using both the full two-dimensional solution and the more computationally efficient "marching" method. The model predictions are compared to experimental results for cross-stream particle concentration profiles and axial development lengths. (ii) Model predictions are compared to experiments for wide-gap circular Couette flow of a concentrated suspension in a shear-thinning liquid. With minor modification, the suspension flow model predicts the major trends and results observed in this flow. (iii) Comparisons are made to experiments for an axisymmetric contraction-expansion. Model predictions for a two-dimensional planar contraction flow test the influence of model formulation. The variation of the magnitude of an isotropic particle normal stress with local kinematic conditions and anisotropy in the in-plane normal stresses are both explored. The formulation of the particle phase stress is found to have significant effects on the solid fraction and velocity.
(iv) Finally, for a rectangular piston-driven flow and an obstructed channel flow, a "computational suspension dynamics" study explores the effect of particle migration on the bulk flow field, system pressure drop and particle phase composition.
3

Bhamare, Sagar D. "High Cycle Fatigue Simulation using Extended Space-Time Finite Element Method Coupled with Continuum Damage Mechanics." University of Cincinnati / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1352490187.

4

Mottet, Laetitia. "Simulations of heat and mass transfer within the capillary evaporator of a two-phase loop." Thesis, Toulouse, INPT, 2016. http://www.theses.fr/2016INPT0012/document.

Abstract:
The thermal control of electronic devices embedded in spacecraft is often carried out by capillary two-phase loop systems (Loop Heat Pipe (LHP) or Capillary Pumped Loop (CPL)). This thesis focuses on LHP evaporators. They mostly consist of a metallic casing, a porous wick and vapour grooves. The porous medium is initially saturated with liquid. The heat load is applied at the external surface of the casing, inducing the vaporisation of the liquid within the wick. The vapour is then evacuated through the vapour grooves. A unit cell of the evaporator is studied and corresponds to our computational domain. A so-called 3D mixed pore network model has been developed in order to study the heat and mass transfers. Pressure and temperature fields are computed from macroscopic equations, while the capillarity is managed using the classical pore network approach. The main advantage of such a formulation is to obtain the liquid-vapour phase distribution within the porous medium pore space. The work highlights that a two-phase zone (characterised by the coexistence of the liquid and the vapour) exists for a large range of fluxes when vaporisation takes place within the capillary structure. This two-phase zone is located right under the casing and is positively correlated with the best evaporator thermal performances. This result differs from the often-made assumption of a dry region under the casing. Three different groove locations are tested. This investigation highlights that evaporator thermal performances are the best over a large range of fluxes for grooves manufactured at the external surface of the wick. In complement, a parametric study is performed to highlight parameters which positively impact the evaporator thermal performances. Finally, a biporous/bidispersed wick, i.e. a wick with a bimodal pore/throat size distribution, is studied.
The liquid-vapour phase distribution within the capillary structure is different from the one for a monoporous structure due to preferential vapour paths created by the large throats and pores. Moreover, the thermal analysis shows that such a porous medium considerably reduces the evaporator wall temperature and increases the evaporator thermal performances. A second model is developed based on a continuum approach. This method uses the IMPES (IMplicit Pressure Explicit Saturation) algorithm coupled with heat transfer with phase change. Results are in good agreement with those predicted by the mixed pore network model. The continuum model, requiring less computing time, should allow larger subdomains of the evaporator to be considered.
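The IMPES (IMplicit Pressure Explicit Saturation) scheme named in this abstract alternates an implicit pressure solve with an explicit saturation update. A minimal sketch for a 1D incompressible two-phase column, with quadratic relative-permeability curves, left-end injection and a fixed-pressure right end all chosen purely for illustration (this is not the thesis's evaporator model):

```python
import numpy as np

def impes_step(sw, dx, dt, q_in=0.3, muw=1.0, mun=5.0):
    """One IMPES cycle on a 1D incompressible two-phase column.

    sw: water saturation per cell. Quadratic (Corey-type) relative
    permeabilities, left-end injection and the p = 0 right boundary are
    all illustrative assumptions."""
    krw, krn = sw**2, (1.0 - sw)**2
    lam_w, lam_n = krw / muw, krn / mun
    lam_t = lam_w + lam_n                         # total mobility per cell
    n = sw.size
    lam_f = 0.5 * (lam_t[:-1] + lam_t[1:])        # face mobilities
    # implicit pressure solve: total-volume conservation, injection at the
    # left, Dirichlet p = 0 at the right (via a half-cell ghost term)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i > 0:
            A[i, i] += lam_f[i - 1] / dx**2
            A[i, i - 1] -= lam_f[i - 1] / dx**2
        if i < n - 1:
            A[i, i] += lam_f[i] / dx**2
            A[i, i + 1] -= lam_f[i] / dx**2
    b[0] = q_in / dx                              # injection source term
    A[-1, -1] += 2.0 * lam_t[-1] / dx**2          # p = 0 ghost at right end
    p = np.linalg.solve(A, b)
    # explicit saturation update with upwinded fractional flow
    v = -lam_f * (p[1:] - p[:-1]) / dx            # total face velocity
    fw = lam_w / lam_t                            # fractional flow of water
    flux = v * np.where(v >= 0.0, fw[:-1], fw[1:])
    sw_new = sw.copy()
    sw_new[0] += dt / dx * q_in                   # injected water
    sw_new[:-1] -= dt / dx * flux
    sw_new[1:] += dt / dx * flux
    v_out = 2.0 * lam_t[-1] * p[-1] / dx          # outflow at right boundary
    sw_new[-1] -= dt / dx * v_out * fw[-1]
    return np.clip(sw_new, 0.0, 1.0), p
```

Each call solves pressure implicitly for the current saturation field, then advances saturation one explicit step; repeated calls march the front forward.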
5

Fischer, Joern. "Beyond fragmentation: Lizard distribution patterns in two production landscapes and their implications for conceptual landscape models." The Australian National University. Centre for Resource and Environmental Studies, 2004. http://thesis.anu.edu.au./public/adt-ANU20060718.150101.

Abstract:
Fauna conservation outside protected areas can make an important complementary contribution to conservation within reserves. This thesis aimed to contribute new information and analytical frameworks to the science of fauna conservation in human-modified landscapes. Two approaches were used: (1) empirical data collection and analysis, and (2) the discussion and development of conceptual landscape models.

Empirical work focused on lizard distribution patterns in two production landscapes in southeastern Australia. Lizards were targeted because ectotherms are frequently neglected by conservation biologists. The “Nanangroe grazing landscape” was used for sheep and cattle grazing. In this landscape, approximately 85% of pre-European woodland cover had been cleared, and understorey vegetation was sparse. Lizards were surveyed at 16 landscape units, which were stratified by aspect, topographic position and amount of tree cover. Each landscape unit contained three sites, and each site contained three plots. Regression modelling showed that different species responded differently to their environment. For example, the four-fingered skink (Carlia tetradactyla) and Boulenger’s skink (Morethia boulengeri) were more likely to occur at woodland sites with northerly aspects, whereas the striped skink (Ctenotus robustus) and olive legless lizard (Delma inornata) were more likely to inhabit sites with a simple microhabitat structure. Statistical analysis further showed that the habitat attributes that lizards were related to varied continuously through space, and over different spatial scales. For example, invertebrate abundance (a proxy for food availability) varied most strongly over tens of metres, whereas the amount of grass cover varied most strongly over hundreds to thousands of metres. Thus, work at Nanangroe revealed spatially complex patterns of lizard occurrence and habitat variables.

The “Tumut plantation landscape” was a spatial mosaic of native eucalypt (Eucalyptus) forest patches embedded within a plantation of the introduced radiata pine (Pinus radiata). In this landscape, thirty sites were surveyed for lizards. Sites were stratified by forest type and patch size, and included eucalypt patches, pine sites, and extensive areas of eucalypt forest adjacent to the plantation. Regression modelling showed that lizard species responded to various habitat attributes, including elevation, the amount of eucalypt forest within 1 km of a site, invertebrate abundance and ground cover. Variables related to habitat fragmentation often were significant predictors of lizard occurrence. However, work at Tumut suggested that important additional insights into lizard distribution patterns could be obtained by considering variables related to food and shelter resources, and climatic conditions.

The Nanangroe and Tumut landscapes were in close proximity, but together spanned an altitudinal gradient of 900 m. An investigation of changes in lizard community composition with altitude showed that (1) only one species was common to Nanangroe and Tumut, (2) different species had different altitudinal preferences, and (3) ecologically similar species replaced one another with increasing altitude. These results highlighted that even in highly modified landscapes, natural gradients (such as climate) can play an important role in shaping animal assemblage composition and species distribution patterns.

Empirical work suggested that, in some landscapes, the frequently used “fragmentation model” is a relatively weak conceptual basis for the study of animal distribution patterns. The fragmentation model implicitly assumes that “habitat patches” can be defined unequivocally across many species, and that patches are located within a relatively inhospitable matrix. Where these assumptions are breached, conservation guidelines arising from the fragmentation model may be too simplified. In spatially complex production landscapes, it may be more appropriate to maintain habitat heterogeneity at multiple spatial scales than to focus solely on the management of large, pre-defined patches.

Given the potential limitations of the fragmentation model, a new, more holistic landscape model was developed. The “continuum model” was derived from continuum theory as developed for plant ecology. The continuum model recognises (1) spatial continua of environmental variables, and (2) species’ individualistic responses to these variables. For animals, key environmental variables may be related to the availability of food, shelter, sufficient space, and suitable climatic conditions. Unlike the fragmentation model, the continuum model is inherently process-based and thus may help to bridge the perceived gap between patterns and processes in landscape ecology.

Three general conclusions arise from this thesis: 1. Some heterogeneous production landscapes support many native species, and therefore represent important conservation opportunities. 2. In some modified landscapes, the fragmentation model does not capture the complexity of animal distribution patterns. In those landscapes, conservation recommendations derived from the fragmentation model may be overly simplistic. 3. The continuum model may be a useful extension of the fragmentation model. It provides a process-based conceptual basis for empirical work on animal distribution patterns.
6

辛樹豪 and Shu-ho Sun. "A two-dimensional continuum approach to facility location problems." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31223394.

7

Olsen, Tyler J. (Tyler John). "The two-way street between discrete and continuum models of particle systems." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/120258.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 175-184).
Many systems exhibit behavior across multiple length scales. When modeling the behavior of such systems, simplifying assumptions are commonly made to reduce model complexity while still capturing system behavior accurately at a length scale of interest. However, it can frequently be advantageous to explicitly incorporate information about a smaller length scale. We present two examples from diverse fields using this approach. First, we propose a model to describe the evolution of a flowing, microstructured suspension of conductive particles, which are being considered for use in large-scale energy storage technologies. In such a suspension, the microstructure of the contact network between particles gives rise to macroscopic electrical conductivity. Developing this model consists of two phases: 1) developing a discrete model for the conductivity of a simplified network, and 2) embedding the discrete model into the framework of modern continuum mechanics. The resulting model takes the form of a tensorial evolution law, like those typically seen in continuum constitutive relationships. The model has been validated experimentally and is able to predict both steady-state and transient conductivity more accurately than pre-existing models in the literature. The second application that we consider is the simulation of many-rigid-body systems. Treating stiff, elastic bodies in contact as perfectly rigid, an approach commonly referred to as Contact Dynamics (CD), simplifies some aspects of their behavior and can alleviate considerable computational burden. However, in many cases the neglect of elasticity results in indeterminate systems, a problem that prevents CD from being used in many real-world applications. We show that information from elasticity can be re-introduced as a compatibility condition while retaining the assumption of perfect rigidity. This preserves the computational advantages of an optimization-based CD method.
The new method is exact in the absence of friction and shows improved force calculation for frictional granular systems.
by Tyler John Olsen.
Ph. D.
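Optimization-based Contact Dynamics methods of the kind this abstract builds on typically reduce frictionless contact to a linear complementarity problem, 0 <= lam, A lam + b >= 0, with lam orthogonal to A lam + b, commonly solved by projected Gauss-Seidel. A generic sketch of that solver (not the thesis's method; the two-contact example below is invented):

```python
import numpy as np

def projected_gauss_seidel(A, b, iters=200):
    """Solve the frictionless-contact linear complementarity problem
    0 <= lam  perp  A @ lam + b >= 0 by projected Gauss-Seidel, a standard
    workhorse in optimization-based Contact Dynamics; A is assumed to have
    a positive diagonal (e.g. a Delassus operator)."""
    lam = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            # residual excluding the diagonal contribution of lam[i]
            r = b[i] + A[i] @ lam - A[i, i] * lam[i]
            # minimize over lam[i], then project onto lam[i] >= 0
            lam[i] = max(0.0, -r / A[i, i])
    return lam
```

For a positive-definite A the sweep converges to contact impulses lam that are zero at separating contacts and positive at active ones.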
8

Terriberry, Timothy B. "Continuous medial models in two-sample statistics of shape." Chapel Hill, N.C.: University of North Carolina at Chapel Hill, 2006. http://dc.lib.unc.edu/u?/etd,579.

Abstract:
Thesis (Ph. D.)--University of North Carolina at Chapel Hill, 2006.
Title from electronic title page (viewed Oct. 10, 2007). "... in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science." Discipline: Computer Science; Department/School: Computer Science.
9

Collins, Sean E. "Comparing hypotheses proposed by two conceptual models for stream ecology." University of Cincinnati / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1396532770.

10

Ayana, Haimanot, and Sarah Al-Swej. "A review of two financial market models: the Black–Scholes–Merton and the Continuous-time Markov chain models." Thesis, Mälardalens högskola, Akademin för utbildning, kultur och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-55417.

Abstract:
The objective of this thesis is to review two popular mathematical models of the financial derivatives market: the classical Black–Scholes–Merton (BSM) model and the Continuous-time Markov chain (CTMC) model. We study the CTMC model as presented by the mathematician Ragnar Norberg. The thesis demonstrates how the fundamental results of financial engineering work in both models. To review the two models, we consider the construction of the main financial market components and the approach used for pricing contingent claims. In addition, the steps used in solving the first-order partial differential equations in both models are explained. The main similarity between the models is that the financial market components are the same, their contingent claims are similar, and the driving processes of both models have the Markov property. One observed difference is that the driving process in the BSM model is Brownian motion, while in the CTMC model it is a Markov chain. We believe that the thesis can motivate other students and researchers to undertake a deeper and more advanced comparative study of the two models.
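For reference, the Black–Scholes–Merton model mentioned above admits the closed-form European call price S0*N(d1) - K*exp(-rT)*N(d2). A small self-contained implementation of that standard formula (not code from the thesis):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s0, k, r, sigma, t):
    """Black-Scholes-Merton price of a European call option."""
    d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s0 * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)
```

For the textbook case S0 = K = 100, r = 5%, sigma = 20%, T = 1 year, this gives a call price of about 10.45.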
11

Crooks, Matthew Stuart. "Application of an elasto-plastic continuum model to problems in geophysics." Thesis, University of Manchester, 2014. https://www.research.manchester.ac.uk/portal/en/theses/application-of-an-elastoplastic-continuum-model-to-problems-in-geophysics(56bc2269-3eb2-47f9-8482-b62e8e053b76).html.

Abstract:
A model for stress and strain accumulation in strike slip earthquake faults is presented in which a finite width cuboidal fault region is embedded between two cuboidal tectonic plates. Elasto-plastic continuum constitutive equations model the gouge in the fault and the tectonic plates are linear elastic solids obeying the generalised Hooke's law. The model predicts a velocity field which is comparable to surface deformations. The plastic behaviour of the fault material allows the velocities in the tectonic plate to increase to values which are independent of the distance from the fault. Both of the non-trivial stress and strain components accumulate most significantly in the vicinity of the fault. The release of these strains during a dynamic earthquake event would produce the most severe deformations at the fault which is consistent with observations and the notion of an epicenter. The accumulations in the model, however, are at depths larger than would be expected. Plastic strains build up most significantly at the base of the fault which is in yield for the longest length of time but additionally is subject to larger temperatures which makes the material more ductile. The speed of propagation of the elasto-plastic boundary is calculated and its acceleration towards the surface of the fault may be indicative of a dynamic earthquake type event.
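Elasto-plastic constitutive updates of the general kind described in this abstract are commonly implemented with an elastic-predictor / plastic-corrector (return-mapping) step. A one-dimensional, elastic-perfectly-plastic sketch with illustrative modulus and yield stress, not the thesis's calibrated fault-gouge model:

```python
def stress_update(strain_increments, E=200e3, sigma_y=250.0):
    """Elastic-predictor / plastic-corrector (return mapping) for a 1D
    elastic-perfectly-plastic material. Modulus E (MPa) and yield stress
    sigma_y (MPa) are illustrative values only."""
    sigma, eps_p, history = 0.0, 0.0, []
    for de in strain_increments:
        trial = sigma + E * de                  # elastic predictor
        if abs(trial) <= sigma_y:
            sigma = trial                       # purely elastic step
        else:
            sign = 1.0 if trial > 0.0 else -1.0
            eps_p += sign * (abs(trial) - sigma_y) / E   # plastic flow
            sigma = sign * sigma_y              # return to the yield surface
        history.append(sigma)
    return history, eps_p
```

Driving the model past yield caps the stress at sigma_y while plastic strain keeps accumulating, the same qualitative behaviour the abstract attributes to the fault gouge.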
12

Alexander, Roger Kirk. "A Tractable Cross-Nested Logit Model For Evaluating Two-Way Interconnection Competition With Multiple Network Subscription." Diss., Economics, George Washington University, 2004. http://hdl.handle.net/1961/119.

Abstract:
Degree awarded (2004): PhDEc, Economics, George Washington University
This research introduces a new theoretical framework for the analysis of access pricing (the prices that networks charge each other for the completion of calls) and the modeling of network interconnection competition. Prior work on two-way access by Armstrong (1998), Laffont, Rey and Tirole (1998), and Carter and Wright (1999), among others, builds on a two-network Hotelling (1929) differentiated-competition model applied to network interconnection. The current research develops an alternative approach based on a cross-nested logit (CNL) discrete/continuous consumer choice model with a constant elasticity of substitution (CES) calling utility specification. A principal contribution of the new modeling framework is that, in addition to being able to analyze interconnection competition among multiple networks, it is designed to incorporate multiple network subscription, where consumers may simultaneously subscribe to more than one type of access network. By introducing multiple-network subscription and usage substitution for users subscribed to multiple networks, the analysis allows more general assessments to be made of the impact of access pricing schemes on the degree of competition between interconnected networks. The model is also not restricted to assumptions of homogeneity in calling on the differentiated networks but can incorporate call differentiation according to network type. The model is applied to evaluate the effects of dual network subscription and asymmetric network competition and to assess multi-network competition in an environment served by two mobile networks and a fixed, wireline network.
While confirming the results of prior single-network subscription analysis, a central finding of the research is that although network competition is intensified when dual network subscription occurs, negotiated access charges between connected networks continue to serve as an instrument of collusion, even in cases of non-linear (two-part) consumer tariffs.
Advisory Committee: John Kwoka, Christopher Snyder (Chair), Sumit Joshi
13

Sahin, Serkan. "Language Modeling For Turkish Continuous Speech Recognition." Master's thesis, METU, 2003. http://etd.lib.metu.edu.tr/upload/2/1223254/index.pdf.

Abstract:
This study aims to build a new language model for Turkish continuous speech recognition. Turkish is a very productive language in terms of word forms because of its agglutinative nature. For languages like Turkish, the vocabulary size is far from acceptable: from only one simple stem, thousands of new words can be generated using inflectional and derivational suffixes. In this work, words are parsed into their stems and endings. First, we treat endings as words and obtain bigram probabilities over stems and endings. Then, bigram probabilities are obtained using only the stems. Single-pass recognition was performed using the bigram probabilities. Second, two-pass recognition was performed: the previous bigram probabilities were used to create word lattices, trigram probabilities were obtained from a larger text, and one-best results were obtained using the word lattices and trigram probabilities. All work was done in the Hidden Markov Model Toolkit (HTK) environment, except parsing and network transforming.
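The stem-and-ending bigram estimation described in the abstract can be illustrated with a toy maximum-likelihood model with add-alpha smoothing; the segmentation into stems and endings is assumed to come from a morphological parser, and the two-sentence corpus below is invented:

```python
from collections import Counter

def bigram_probs(sentences, alpha=1.0):
    """Add-alpha-smoothed bigram probabilities over a corpus whose tokens
    are stems and endings; the segmentation itself (e.g. "evde" -> "ev",
    "+de") is assumed to come from a morphological parser."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        toks = ["<s>"] + sent + ["</s>"]
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            unigrams[a] += 1
            bigrams[(a, b)] += 1
    v = len(vocab)
    def p(word, prev):                     # P(word | prev)
        return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * v)
    return p

# invented two-sentence corpus of stem/ending tokens
p = bigram_probs([["ev", "+de", "kal", "+dı"], ["ev", "+e", "git", "+ti"]])
```

An ending seen after a given stem, such as "+de" after "ev", receives a higher conditional probability than an unseen one, which is the information the recognizer's lattices exploit.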
14

Robacker, Thomas C. "Comparison of Two Parameter Estimation Techniques for Stochastic Models." Digital Commons @ East Tennessee State University, 2015. https://dc.etsu.edu/etd/2567.

Abstract:
Parameter estimation techniques have been successfully and extensively applied to deterministic models based on ordinary differential equations but are in early development for stochastic models. In this thesis, we first investigate using parameter estimation techniques for a deterministic model to approximate parameters in a corresponding stochastic model. The basis behind this approach lies in the Kurtz limit theorem which implies that for large populations, the realizations of the stochastic model converge to the deterministic model. We show for two example models that this approach often fails to estimate parameters well when the population size is small. We then develop a new method, the MCR method, which is unique to stochastic models and provides significantly better estimates and smaller confidence intervals for parameter values. Initial analysis of the new MCR method indicates that this method might be a viable method for parameter estimation for continuous time Markov chain models.
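The Kurtz limit the abstract invokes says that, for large populations, realizations of a density-dependent Markov chain track the deterministic ODE. A toy illustration (not one of the thesis's example models): an exact Gillespie simulation of an immigration-death process whose deterministic limit dx/dt = lam - mu*x has equilibrium lam/mu:

```python
import random

def gillespie_immigration_death(lam=50.0, mu=1.0, t_end=50.0, seed=0):
    """Exact stochastic simulation (Gillespie) of an immigration-death
    process: immigration at rate lam, per-capita death at rate mu. The
    deterministic limit dx/dt = lam - mu*x has equilibrium lam/mu, and for
    large lam realizations hover near it. Toy model for illustration."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while t < t_end:
        total_rate = lam + mu * x
        t += rng.expovariate(total_rate)    # exponential waiting time
        if rng.random() < lam / total_rate:
            x += 1                          # immigration event
        else:
            x -= 1                          # death event
    return x
```

With lam/mu = 50 the endpoint fluctuates around 50; shrinking lam makes the relative fluctuations grow, which is the small-population regime where the abstract reports deterministic calibration failing.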
15

Ozpamukcu, Serkan. "An Assessment of a Two-Echelon Inventory System against Alternative Systems." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613949/index.pdf.

Abstract:
In this study, we focus on a real-life problem that involves a single item used in military operations. The items in use fail according to a Poisson process and lead times are deterministic. Four alternative inventory control models are developed. Among these models, a two-echelon system consisting of a depot in the upper and several bases in the lower echelon is the one currently operated. This system is compared to a single-echelon system that consists of several bases. The comparison reveals the importance of the holding cost incurred for items in transit between the depot and the bases, which is ignored in most studies in the literature. Both the two- and single-echelon models are also extended to include repair ability. A continuous-review base-stock policy is used for all models. Exact models are formulated. The results are obtained under various lead time, unit cost and demand parameters. Results of the four different settings are compared and the findings are reported.
16

Gur, Sourav. "Atomistic to Continuum Multiscale and Multiphysics Simulation of NiTi Shape Memory Alloy." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/625589.

Abstract:
Shape memory alloys (SMAs) are materials that show reversible, thermo-elastic, diffusionless, displacive (solid to solid) phase transformation, due to the application of temperature and/ or stress (/strain). Among different SMAs, NiTi is a popular one. NiTi shows reversible phase transformation, the shape memory effect (SME), where irreversible deformations are recovered upon heating, and superelasticity (SE), where large strains imposed at high enough temperatures are fully recovered. Phase transformation process in NiTi SMA is a very complex process that involves the competition between developed internal strain and phonon dispersion instability. In NiTi SMA, phase transformation occurs over a wide range of temperature and/ or stress (strain) which involves, evolution of different crystalline phases (cubic austenite i.e. B2, different monoclinic variant of martensite i.e. B19', and orthorhombic B19 or BCO structures). Further, it is observed from experimental and computational studies that the evolution kinetics and growth rate of different phases in NiTi SMA vary significantly over a wide spectrum of spatio-temporal scales, especially with length scales. At nano-meter length scale, phase transformation temperatures, critical transformation stress (or strain) and phase fraction evolution change significantly with sample or simulation cell size and grain size. Even, below a critical length scale, the phase transformation process stops. All these aspects make NiTi SMA very interesting to the science and engineering research community and in this context, the present focuses on the following aspects. At first this study address the stability, evolution and growth kinetics of different phases (B2 and variants of B19'), at different length scales, starting from the atomic level and ending at the continuum macroscopic level. 
The effects of simulation cell size, grain size, and the presence of free surfaces and grain boundaries on the phase transformation process (transformation temperature, phase fraction evolution kinetics due to temperature) are also demonstrated herein. Next, to couple and transfer the statistical information of the length-scale-dependent phase transformation process, multiscale/multiphysics methods are used. Here, the computational difficulty stems from the fact that the representative governing equations (i.e. different sub-methods such as molecular dynamics simulations, phase field simulations and continuum-level constitutive/material models) are only valid, or can only be implemented, over a limited range of spatio-temporal scales. Therefore, in the present study, a wavelet-based multiscale coupling method is used, where simulation results (phase fraction evolution kinetics) from different sub-methods are linked in a concurrent multiscale coupling fashion. These multiscale/multiphysics simulation results are then used to develop/modify the macro/continuum-scale thermo-mechanical constitutive relations for NiTi SMA. Finally, the improved material model is used to model new devices, such as thermal diodes and smart dampers.
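As a toy illustration of passing coarse-grained information upward between scales (the data and function below are invented; the dissertation's wavelet-based coupling is far more elaborate), a noisy fine-scale phase-fraction curve can be reduced with Haar averaging so that only its smooth part reaches a continuum-level model:

```python
import numpy as np

def haar_coarsen(signal, levels=1):
    """One-dimensional Haar averaging: each level halves the resolution,
    passing only the smooth (approximation) part upward."""
    out = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if out.size % 2:                      # pad to even length if needed
            out = np.append(out, out[-1])
        out = 0.5 * (out[0::2] + out[1::2])   # pairwise averages
    return out

# Hypothetical fine-scale martensite phase fraction vs. temperature,
# e.g. sampled from an atomistic simulation: a sigmoid with noise.
T = np.linspace(250.0, 350.0, 64)             # K
rng = np.random.default_rng(0)
fine = 1.0 / (1.0 + np.exp((T - 300.0) / 5.0)) + 0.02 * rng.standard_normal(T.size)

coarse = haar_coarsen(fine, levels=3)         # 64 -> 8 points for the continuum model
print(coarse.size, round(float(coarse.mean()), 3))
```

Because only averages are kept, the mean phase fraction is preserved across the handoff while the atomic-scale noise is filtered out.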
APA, Harvard, Vancouver, ISO, and other styles
17

Xing, Dongyuan. "Bayesian Inference on Longitudinal Semi-continuous Substance Abuse/Dependence Symptoms Data." Scholar Commons, 2015. http://scholarcommons.usf.edu/etd/5802.

Full text
Abstract:
Substance use data such as alcohol drinking often contain a high proportion of zeros. In studies examining alcohol consumption in college students, for instance, many students may not drink in the studied period, resulting in a number of zeros. Zero-inflated continuous data, also called semi-continuous data, typically consist of a mixture of a degenerate distribution at the origin (zero) and a right-skewed, continuous distribution for the positive values. Ignoring the extreme non-normality in semi-continuous data may lead to substantially biased estimates and inference. Longitudinal or repeated measures of semi-continuous data present special challenges in statistical inference because of the correlation among the repeated measures on the same subject. Linear mixed-effects models (LMMs) with a normality assumption, routinely used to analyze correlated continuous outcomes, are inapplicable for analyzing semi-continuous outcomes. Data transformation such as log transformation is typically used to correct the non-normality in data. However, log-transformed data, after the addition of a small constant to handle zeros, may not successfully approximate the normal distribution due to the spike caused by the zeros in the original observations. In addition, data transformation should be avoided because: (i) transforming usually provides reduced information on the underlying data generation mechanism; (ii) data transformation causes difficulty in the interpretation of the transformed scale; and (iii) it may cause re-transformation bias. Two-part mixed-effects models, with one component modeling the probability of being zero and one modeling the intensity of nonzero values, have been developed over the last ten years to analyze longitudinal semi-continuous data. However, log transformation is still needed for the right-skewed nonzero continuous values in the two-part modeling.
In this research, we developed Bayesian hierarchical models in which the extreme non-normality in the longitudinal semi-continuous data, caused by the spike at zero and right skewness, was accommodated using skew-elliptical (SE) distributions, and all of the inferences were carried out through a Bayesian approach via Markov chain Monte Carlo (MCMC). The substance abuse/dependence data, including alcohol abuse/dependence symptoms (AADS) data and marijuana abuse/dependence symptoms (MADS) data from a longitudinal observational study, were used to illustrate the proposed models and methods. This dissertation explored three topics. First, we presented a one-part LMM with skew-normal (SN) distribution under a Bayesian framework and applied it to the AADS data. The association between AADS and the serotonin transporter gene polymorphism (5-HTTLPR) and baseline covariates was analyzed. The results from the proposed model were compared with those from LMMs with normal, Gamma and LN distributional assumptions. Simulation studies were conducted to evaluate the performance of the proposed models. We concluded that the LMM with SN distribution not only provides the best model fit based on the Deviance Information Criterion (DIC), but also offers a more intuitive and convenient interpretation of results, because it models the original scale of the response variable. Second, we proposed a flexible two-part mixed-effects model with skew distributions, including skew-t (ST) and SN distributions, for the right-skewed nonzero values in Part II of the model under a Bayesian framework. The proposed model is illustrated with the longitudinal AADS data, and the results from models with ST, SN and normal distributions were compared under different random-effects structures. Simulation studies were conducted to evaluate the performance of the proposed models. Third, multivariate (bivariate) correlated semi-continuous data are also commonly encountered in clinical research.
For instance, the alcohol use and marijuana use may be observed in the same subject and there might be underlying common factors to cause the dependence of alcohol and marijuana uses. There is very limited literature on multivariate analysis of semi-continuous data. We proposed a Bayesian approach to analyze bivariate semi-continuous outcomes by jointly modeling a logistic mixed-effects model on zero-inflation in either response and a bivariate linear mixed-effects model (BLMM) on the positive values through a correlated random-effects structure. Multivariate skew distributions including ST and SN distributions were used to relax the normality assumption in BLMM. The proposed models were illustrated with an application to the longitudinal AADS and MADS data. A simulation study was conducted to evaluate the performance of the proposed models.
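The two-part decomposition that runs through this dissertation can be illustrated with a short simulation. The sketch below is illustrative only (simple moment estimates, not the Bayesian MCMC machinery of the proposed models): the binary part estimates the probability of a non-zero value, the continuous part fits the positives on the log scale, and the two combine into a marginal mean.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated semi-continuous outcome: ~40% structural zeros, positives log-normal.
n = 2000
is_zero = rng.random(n) < 0.4
y = np.where(is_zero, 0.0, rng.lognormal(mean=1.0, sigma=0.5, size=n))

# Part I (binary): probability of a non-zero response.
p_nonzero = np.mean(y > 0)

# Part II (continuous): log-normal fit to the positive values only.
logpos = np.log(y[y > 0])
mu_hat, sigma_hat = logpos.mean(), logpos.std(ddof=1)

# The two parts combine into the overall (marginal) mean:
# E[Y] = P(Y > 0) * E[Y | Y > 0], with log-normal mean exp(mu + sigma^2/2).
marginal_mean = p_nonzero * np.exp(mu_hat + 0.5 * sigma_hat**2)
print(round(float(p_nonzero), 3), round(float(marginal_mean), 3))
```

Fitting a single normal or log-transformed model to `y` would be distorted by the spike at zero; the two-part split handles each component on its own terms.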
APA, Harvard, Vancouver, ISO, and other styles
18

Rustand, Denis. "Modèles conjoints pour un biomarqueur semi-continu et un événement terminal avec application aux essais cliniques en cancérologie." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0252.

Full text
Abstract:
Assessing the effectiveness of cancer treatments in clinical trials raises multiple methodological problems that need to be properly addressed in order to produce a reliable estimate of treatment effects. The purpose of this research project is to propose a new modeling strategy within the joint modeling framework to study simultaneously the evolution of tumor size (biomarker) and the risk of death (terminal event). An excess of zero values characterizes the distribution of the tumor size measurements, corresponding to patients who respond well to a treatment and whose tumors shrink completely. The two-part model has been proposed with the idea of decomposing the distribution of the biomarker into a binary outcome (zero values vs. positive values) and a continuous outcome, both outcomes usually being modeled with mixed-effects regression models. We developed a two-part joint model for which the binary part captures the effect of covariates on the probability of a zero value of the biomarker, while the continuous part gives the effect of covariates either on the expected value of the biomarker among positives (conditional form) or on the marginal expected value of the biomarker (marginal form), each answering different clinical questions of interest. We established through simulations that the model provides unbiased parameter estimates, and compared it with alternative approaches such as ignoring the zero excess by not decomposing the biomarker's distribution, or treating zeros as censored values (i.e., too small to be measured). We show how the two-part approach is more appropriate in the presence of true zeros (i.e., not censored). This new model allows both the tumor size repeated measurements and the survival times to be used to compare several treatment lines, which could impact final clinical decisions. We illustrated these developments on real data from randomized cancer clinical trials.
Finally, we extended the frequentist estimation that we implemented in the R package frailtypack to a Bayesian framework within the R package INLA, in order to reduce the computation time and solve convergence issues when dealing with more complex correlation structures. The software and code for both the frequentist and Bayesian estimations of this new model are freely available, to ensure that these tools are easily disseminated to epidemiologists, statisticians and biomedical researchers. Semi-continuous distributions are common in biomedical research, e.g., when quantifying exposure or measuring symptoms of a disease, notably in genomics (microbiome, epigenetics), so the proposed work could lead to a wide spectrum of applications beyond cancer research.
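The distinction between the conditional and the marginal form of the two-part model can be made concrete with a small simulation. The sketch below is an illustration only (data and function names are hypothetical, and it is not the thesis's joint model, which also involves survival times): a treatment that mainly raises the probability of zero tumor size changes the marginal mean while barely moving the conditional mean among positives.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_arm(n, p_zero, mu):
    """Hypothetical tumor-size measurements for one treatment arm:
    a point mass at zero (complete response) plus log-normal positives."""
    zero = rng.random(n) < p_zero
    return np.where(zero, 0.0, rng.lognormal(mu, 0.4, n))

control = simulate_arm(1000, p_zero=0.10, mu=1.0)
treated = simulate_arm(1000, p_zero=0.35, mu=1.0)   # more complete responses

for name, y in [("control", control), ("treated", treated)]:
    conditional = y[y > 0].mean()   # mean tumor size among non-zero values
    marginal = y.mean()             # mean over the whole arm, zeros included
    print(name, round(float(conditional), 2), round(float(marginal), 2))
```

Here the conditional means of the two arms are nearly equal, yet the marginal means differ clearly: only the marginal form captures a treatment effect that acts through complete responses.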
APA, Harvard, Vancouver, ISO, and other styles
19

Camacho, Torregrosa Francisco Javier. "DEVELOPMENT AND CALIBRATION OF A GLOBAL GEOMETRIC DESIGN CONSISTENCY MODEL FOR TWO-LANE RURAL HIGHWAYS, BASED ON THE USE OF CONTINUOUS OPERATING SPEED PROFILES." Doctoral thesis, Universitat Politècnica de València, 2015. http://hdl.handle.net/10251/48543.

Full text
Abstract:
Road safety is one of the most important problems in our society, causing hundreds of fatalities every year worldwide. A road accident may be caused by several concurrent factors, the most common being human and infrastructure factors. The interaction between them is important too and has been studied in depth for years, leading to a better knowledge of the driving task. In several cases, however, these advances are still not included in road guidelines. Some of these advances are centered on explaining the underlying cognitive processes of the driving task; others are related to the analysis of drivers' response or a better estimation of road crashes. The concept of design consistency is related to all of them. Road design consistency is how well a road alignment fits drivers' expectancies; drivers are surprised at inconsistent roads, which present a higher crash risk potential. This PhD presents a new, operating-speed-based global consistency model. It is based on the analysis of more than 150 two-lane rural homogeneous road segments of the Valencian Region (Spain). The final consistency parameter was selected as the combination of operational parameters that best estimated the number of crashes. Several innovative auxiliary tools were developed for this process; one example is a new tool for recreating the horizontal alignment of two-lane rural roads by means of an analytic-heuristic process. A new procedure for determining homogeneous road segments was also developed, as well as some expressions to accurately determine the most adequate design speed. The consistency model can be integrated into safety performance functions in order to estimate the number of road crashes. Finally, all innovations are combined into a new road design methodology. This methodology aims to complement the existing guidelines, providing a continuous approach to road safety and giving engineers tools to estimate how safe their road designs are.
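As a toy illustration of operating-speed-based consistency measures (the numbers and the index below are invented; they are not the calibrated consistency parameter of this thesis), a continuous speed profile can be summarized by its dispersion and by the accumulated speed drops along the segment:

```python
import numpy as np

# Hypothetical continuous operating-speed profile (km/h), sampled at regular
# intervals along a homogeneous two-lane rural segment.
speed = np.array([92, 90, 85, 72, 68, 70, 80, 88, 91, 93, 86, 75, 71, 78, 87], float)

v_avg = speed.mean()                # average operating speed
v_disp = speed.std(ddof=0)          # dispersion along the profile
consistency_index = v_disp / v_avg  # illustrative: lower = more consistent

# Accumulated local speed drops are another simple inconsistency indicator.
drops = np.clip(-np.diff(speed), 0, None).sum()
print(round(float(consistency_index), 3), float(drops))
```

A segment forcing repeated decelerations (sharp curves after fast tangents) scores worse on both measures, which is the intuition behind operating-speed-based consistency models.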
Camacho Torregrosa, FJ. (2015). DEVELOPMENT AND CALIBRATION OF A GLOBAL GEOMETRIC DESIGN CONSISTENCY MODEL FOR TWO-LANE RURAL HIGHWAYS, BASED ON THE USE OF CONTINUOUS OPERATING SPEED PROFILES [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/48543
TESIS
APA, Harvard, Vancouver, ISO, and other styles
20

Condessa, Janaína. "A motivação dos alunos para continuar seus estudos em música." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/32473.

Full text
Abstract:
These results may contribute to the pedagogical improvement of music teachers, as well as prompting reflection and offering insights that can improve students' motivation to learn music, both inside and outside school.
This research is about motivation to learn music. Its general aim was to investigate the interaction between the individual and environmental factors which motivate students to continue their studies in music outside school. According to the literature, individual factors refer to students' beliefs, perceptions and personal characteristics, whilst environmental factors are related to life experiences in particular places and at particular times, as well as to interactions with others. The specific aims of this research were to verify the role of the environment (parents, family, teachers, peers and school context) and to investigate the individual factors (goals and self-concept about abilities) involved in students' choice to continue learning music outside school. The theoretical framework adopted was the model of motivation in music (Hallam, 2002, 2005, 2006), because it considers the interactions between individual and environmental factors. The method chosen was the interview study, with middle school students who had attended music classes at school since the first cycle of fundamental education and opted to study music outside school after the fifth grade. Based on both the education and the music education literature, the rationale of this research is the possibility of understanding not only the different factors involved in the interaction between the individual and the environment during musical learning, but also the way these factors motivate the student to continue studying music. The results of this research revealed a close link between individual and environmental factors. Moreover, the data showed the relation between motivation to learn music inside school and motivation to continue music outside school.
APA, Harvard, Vancouver, ISO, and other styles
21

Giral, Castillón Roberto. "Síntesis de estructuras multiplicadoras de tensión basadas en células convertidoras continua-continua de tipo conmutado." Doctoral thesis, Universitat Politècnica de Catalunya, 1999. http://hdl.handle.net/10803/6329.

Full text
Abstract:
One of the most important fields of Power Electronics is that of switched power converters, which, owing to their high energy efficiency, small size, and their capabilities for power-factor regulation and voltage step-up, among others, are present in a great number of the power-supply stages of today's electronic equipment.
Technological improvements in areas such as circuit integration have allowed significant reductions in equipment size (for instance, in computers). However, this size-reduction process, which moreover usually comes together with stricter specifications regarding cost, efficiency, safety and performance in general, has not taken place to the same extent in the power-supply stages. The study of switched converters is therefore a field in need of research and development efforts.
For powers above 25 W, and especially above 150 W, one of the strategies used to improve converter performance is so-called "interleaving", defined as the parallel connection of N identical converters with their control signals phase-shifted uniformly over the switching period.
With the main objective of minimizing the output-voltage and input-current ripples, this thesis studies particular cases of interleaving in DC-DC converter structures that use the boost converter as the basic cell and whose output voltages are, ideally and operating in continuous conduction mode, positive integer multiples of the input voltage, hence the term "voltage multipliers" in the thesis title. The voltage-regulation possibilities offered by some of the case studies are then analyzed, at the cost of increased ripple.
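The ripple-reduction benefit of interleaving can be illustrated numerically. The sketch below is a simplified illustration (a normalized triangular ripple at a fixed duty cycle, not a circuit simulation of the thesis's multiplier structures): summing N identical ripple waveforms shifted by 1/N of the switching period shrinks the peak-to-peak input ripple as N grows.

```python
import numpy as np

def ripple(t, duty=0.3):
    """Normalized inductor-current ripple of one boost cell at duty cycle D:
    rises during D*T, falls during (1-D)*T, mean removed."""
    phase = t % 1.0
    return np.where(phase < duty, phase / duty, (1.0 - phase) / (1.0 - duty)) - 0.5

t = np.linspace(0.0, 1.0, 1000, endpoint=False)

def total_ripple(n):
    """Peak-to-peak input ripple of n identical cells interleaved by T/n."""
    total = sum(ripple(t + k / n) for k in range(n))
    return float(total.max() - total.min())

for n in (1, 2, 4):
    print(n, round(total_ripple(n), 3))
```

With a duty cycle that is not a multiple of 1/N the cancellation is partial rather than perfect, but the peak-to-peak ripple still drops monotonically with N, while the effective ripple frequency rises by the same factor.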
APA, Harvard, Vancouver, ISO, and other styles
22

Shalookh, Othman H. Zinkaah. "Behaviour of continuous concrete deep beams reinforced with GFRP bars." Thesis, University of Bradford, 2019. http://hdl.handle.net/10454/18381.

Full text
APA, Harvard, Vancouver, ISO, and other styles
23

Mills, Elizabeth Dastrup. "Adjusting for covariates in zero-inflated gamma and zero-inflated log-normal models for semicontinuous data." Diss., University of Iowa, 2013. https://ir.uiowa.edu/etd/2583.

Full text
Abstract:
Semicontinuous data consist of a combination of a point-mass at zero and a positively skewed distribution. This type of non-negative data distribution is found in data from many fields, but presents unique challenges for analysis. Specifically, these data cannot be analyzed using positive distributions, but distributions that are unbounded are also likely a poor fit. Two-part models incorporate both the zero values from semicontinuous data and the positive continuous values. In this dissertation, we compare zero-inflated gamma (ZIG) and zero-inflated log-normal (ZILN) two-part models. For both of these models, the probability that an outcome is non-zero is modeled via logistic regression. Then the distribution of the non-zero outcomes is modeled via gamma regression with a log-link for ZIG regression and via log-normal regression for ZILN. In this dissertation we propose tests which combine the two parts of the ZIG and ZILN models in meaningful ways for performing a two-group comparison. Then we compare these tests in terms of observed Type 1 error rates and power levels under both correctly specified and misspecified ZIG and ZILN models. Tests falling under two main hypotheses are examined. First, we look at two-part tests which come from a two-part hypothesis of no difference between the two groups in terms of the probability of non-zero values and in terms of the mean of the non-zero values. The second type of tests are mean-based tests. These combine the two parts of the model in ways related to the overall group means of the semicontinuous variable. When not adjusting for covariates, two tests are developed based on a difference of means (DM) and a ratio of means (RM). When adjusting for covariates, tests using mean-based hypotheses are developed which marginalize over the values of the adjusting covariates.
Under the adjusting framework, two ratio of means statistics are proposed and examined, an average of the subject specific ratio of means (RMSS) and a ratio of the marginal group means (RMMAR). Simulations are used to compare Type 1 error and power for these tests and standard two group comparison tests. Simulation results show that when ZIG and ZILN models are misspecified and the coefficient of variation (CoV) and/or sample size is large, there are differences in Type 1 error and power results between the misspecified and correctly specified models. Specifically, when ZILN data with high CoV or sample size are analyzed as ZIG, Type 1 error rates are prohibitively high. On the other hand, when ZIG data are analyzed as ZILN under these scenarios, power levels are much lower for ZILN analyses than for ZIG analyses. Examination of Q-Q plots show, however, that in these settings, distinguishing between ZIG and ZILN data can be relatively straightforward. When the coefficient of variation is small it is harder to distinguish between ZIG and ZILN models, but the differences between Type 1 error rates and power levels for misspecified or correctly specified models is also slight. Finally, we use the proposed methods to analyze a data set involving Parkinson's disease (PD) and driving. A number of these methods show that PD subjects exhibit poorer lane keeping ability than control subjects.
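The unadjusted mean-based statistics described above (DM and RM) combine the two parts of the model through the overall group means. A minimal illustration on simulated zero-inflated gamma data (the parameter values are chosen arbitrarily and are not from the dissertation's simulation studies):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_zig(n, p_nonzero, shape, scale):
    """Zero-inflated gamma sample: zeros with prob 1-p, gamma positives."""
    nonzero = rng.random(n) < p_nonzero
    return np.where(nonzero, rng.gamma(shape, scale, n), 0.0)

# Overall mean of a ZIG variable is p_nonzero * (shape * scale).
g1 = simulate_zig(3000, p_nonzero=0.6, shape=2.0, scale=1.0)   # mean 0.6 * 2 = 1.2
g2 = simulate_zig(3000, p_nonzero=0.8, shape=2.0, scale=1.0)   # mean 0.8 * 2 = 1.6

# Mean-based two-group comparisons combining both parts of the model:
dm = g2.mean() - g1.mean()        # difference of overall means (DM)
rm = g2.mean() / g1.mean()        # ratio of overall means (RM)
print(round(float(dm), 2), round(float(rm), 2))
```

Note that the two groups here differ only in the zero-probability part, yet both mean-based statistics detect the difference, since the overall mean folds both parts together.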
APA, Harvard, Vancouver, ISO, and other styles
24

Huynh, Martin, and Fernando Valarino. "An analysis of continuous consistency models in real time peer-to-peer fighting games." Thesis, Högskolan Kristianstad, Fakulteten för naturvetenskap, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:hkr:diva-19404.

Full text
Abstract:
This study analyses different methods of maintaining a consistent state between two peers in a real time fighting game played over a network. Current methods of state management are explored in a comprehensive literature review, which establishes a baseline knowledge and theoretical comparison of use cases for the two most common models: delay and rollback. These results were then further explored by a practical case study where a test fighting game was created in Unity3D that implemented both delay and rollback networking. Networking strategies were tested by a group of ten users under different simulated network conditions and their experiences were documented using a Likert-style questionnaire for each stage of testing. Based on user feedback it was found that the implemented rollback strategy provided an overall better user experience. Rollback was found to be more responsive and stable than the delay implementation as network latency was increased, suggesting that rollback is also more fault tolerant than delay.
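The rollback strategy compared in this study can be sketched in a few lines. The toy session below is illustrative only (class and method names are invented; real implementations also bound the rollback window and hash states for desync detection): it predicts the remote player's input, and when a late input contradicts the prediction, restores that frame's snapshot and resimulates to the present.

```python
def step(state, local, remote):
    """Deterministic game step: each player's position moves by their input."""
    return (state[0] + local, state[1] + remote)

class RollbackSession:
    def __init__(self):
        self.state = (0, 0)
        self.frame = 0
        self.snapshots = {0: (0, 0)}
        self.inputs = {}              # frame -> (local, predicted_remote)

    def advance(self, local, predicted_remote=0):
        self.inputs[self.frame] = (local, predicted_remote)
        self.state = step(self.state, local, predicted_remote)
        self.frame += 1
        self.snapshots[self.frame] = self.state

    def receive_remote(self, frame, actual_remote):
        """Late remote input: roll back to `frame` and replay to the present."""
        local, predicted = self.inputs[frame]
        if predicted == actual_remote:
            return                    # prediction was right, nothing to do
        self.inputs[frame] = (local, actual_remote)
        self.state = self.snapshots[frame]
        for f in range(frame, self.frame):
            self.state = step(self.state, *self.inputs[f])
            self.snapshots[f + 1] = self.state

s = RollbackSession()
s.advance(local=1)                    # frames 0-2 predict remote input 0
s.advance(local=1)
s.advance(local=1)
s.receive_remote(frame=0, actual_remote=2)   # real input arrives 3 frames late
print(s.state)                        # -> (3, 2): both players' moves applied
```

Unlike delay-based netcode, local inputs are applied immediately, which is why rollback stays responsive as latency grows; the cost is the resimulation work on a misprediction.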
APA, Harvard, Vancouver, ISO, and other styles
25

Fernández, López Adriana. "Learning of meaningful visual representations for continuous lip-reading." Doctoral thesis, Universitat Pompeu Fabra, 2021. http://hdl.handle.net/10803/671206.

Full text
Abstract:
In the last decades, there has been an increased interest in decoding speech exclusively using visual cues, i.e. mimicking the human capability to perform lip-reading, leading to Automatic Lip-Reading (ALR) systems. However, it is well known that access to speech through the visual channel is subject to many limitations when compared to the audio channel: it has been argued that humans can actually read around 30% of the information from the lips, with the rest filled in from context. Thus, one of the main challenges in ALR resides in the visual ambiguities that arise at the word level, highlighting that not all sounds that we hear can be easily distinguished by observing the lips. In the literature, early ALR systems addressed simple recognition tasks such as alphabet or digit recognition, but progressively shifted to more complex and realistic settings, leading to several recent systems that target continuous lip-reading. To a large extent, these advances have been possible thanks to the construction of powerful systems based on deep learning architectures that have quickly started to replace traditional systems. Although the recognition rates for continuous lip-reading may appear modest in comparison to those achieved by audio-based systems, the field has undeniably made a step forward. Interestingly, an analogous effect can be observed when humans try to decode speech: given sufficiently clean signals, most people can effortlessly decode the audio channel but would struggle to perform lip-reading, since the ambiguity of the visual cues makes the use of further context necessary to decode the message. In this thesis, we explore the appropriate modeling of visual representations with the aim of improving continuous lip-reading. To this end, we present different data-driven mechanisms to handle the main challenges in lip-reading related to the ambiguities and the speaker dependency of visual cues.
Our results highlight the benefits of a proper encoding of the visual channel, for which the most useful features are those that encode corresponding lip positions in a similar way, independently of the speaker. This fact opens the door to i) lip-reading in many different languages without requiring large-scale datasets, and ii) increasing the contribution of the visual channel in audio-visual speech systems. On the other hand, our experiments identify a tendency to focus on the modeling of temporal context as the key to advance the field, where there is a need for ALR models that are trained on datasets comprising large speech variability at several context levels. In this thesis, we show that both proper modeling of visual representations and the ability to retain context at several levels are necessary conditions to build successful lip-reading systems.
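The observation that the most useful features encode corresponding lip positions similarly across speakers suggests simple per-speaker normalization as a baseline. The sketch below is illustrative only (synthetic landmark features, not the thesis's learned representations): z-scoring each speaker's features removes speaker-specific offset and scale, leaving a speaker-independent encoding of the same articulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lip-landmark features for two speakers articulating the same
# sequence: same underlying shape, but speaker-specific offset and scale
# (different anatomy, camera distance, etc.).
base = rng.standard_normal((50, 8))      # 50 frames, 8 landmark features
spk_a = 1.0 * base + 0.0
spk_b = 2.5 * base + 4.0

def per_speaker_normalize(x):
    """Z-score each speaker's features so corresponding lip positions are
    encoded similarly regardless of who is speaking."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

na, nb = per_speaker_normalize(spk_a), per_speaker_normalize(spk_b)
print(round(float(np.abs(na - nb).max()), 6))   # ~0.0: identical after normalization
```

In practice learned representations go far beyond affine normalization, but the example shows why speaker-invariant lip encodings make cross-speaker (and cross-language) transfer plausible.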
APA, Harvard, Vancouver, ISO, and other styles
26

Belgacem, Najib. "Modélisation mixte continue-réseau de pores des transferts diphasiques cathodiques d'une pile à combustible PEMFC." Phd thesis, Toulouse, INPT, 2016. http://oatao.univ-toulouse.fr/17731/1/BELGACEM_Najib.pdf.

Full text
Abstract:
This thesis contributes to the study of water transport within PEMFC fuel cells, a key aspect of this technology. A numerical simulation approach is developed by coupling a pore-network model in the diffusion medium (DM), a mixed continuum/pore-network approach in the microporous layer (MPL), and a compartment model in the active layer. The approach accounts for coupled heat and water transfer, notably through the modelling of phase-change phenomena (evaporation and condensation) in the DM and the MPL. In a first part, we study the case where water migrates through the MPL-DM assembly directly in the liquid phase. The impact of gas-phase pressure variation on the liquid-phase distribution is studied, as is the optimal thickness of the MPL. In a second part, we study situations where water forms by condensation in the diffusion layer. We first study the impact of the properties of the diffusion layer and of the MPL on the condensation diagram. We then analyse the impact of liquid-water formation on the local current distribution. Finally, the impact of wettability on the condensation patterns is studied; this last study is seen as a first step towards the study of degradation mechanisms in the condensation regime.
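In pore-network models of this kind, the invasion of liquid water into a hydrophobic diffusion layer is commonly governed by the Young-Laplace entry pressure of each pore throat. A minimal sketch of that criterion, with generic water/GDL values assumed rather than taken from the thesis:

```python
from math import cos, radians

def capillary_entry_pressure(radius_m, surface_tension=0.0728, contact_angle_deg=110.0):
    """Young-Laplace entry pressure (Pa) of a cylindrical pore throat.
    A hydrophobic medium (contact angle > 90 deg) gives a positive entry
    pressure that liquid water must exceed to invade the throat."""
    return -2.0 * surface_tension * cos(radians(contact_angle_deg)) / radius_m

# In a pore-network invasion step, liquid enters the available throat with
# the lowest entry pressure first (invasion percolation): the widest throat.
throats = {"t1": 5e-6, "t2": 12e-6, "t3": 8e-6}  # throat radii in metres
next_throat = min(throats, key=lambda t: capillary_entry_pressure(throats[t]))
```

With these assumed values the widest throat (`t2`) is invaded first, which is the basic mechanism behind the preferential liquid pathways such models predict.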
APA, Harvard, Vancouver, ISO, and other styles
27

Fox, Clayton D. L. "Modeling Simplified Reaction Mechanisms using Continuous Thermodynamics for Hydrocarbon Fuels." Thesis, Université d'Ottawa / University of Ottawa, 2018. http://hdl.handle.net/10393/37554.

Full text
Abstract:
Commercial fuels are mixtures with large numbers of components. Continuous thermodynamics is a technique for modelling fuel mixtures using a probability density function rather than dealing with each discrete component. The mean and standard deviation of the distribution are then used to model the chemical reactions of the mixture. This thesis develops the necessary theory to apply continuous thermodynamics to the oxidation reactions of hydrocarbon fuels. The theory is applied to three simplified models of hydrocarbon oxidation: a global one-step reaction, a two-step reaction with CO as the intermediate product, and the four-step reaction of Müller et al. (1992), which contains a high- and a low-temperature branch. These are all greatly simplified models of the complex reaction kinetics of hydrocarbons, and in this thesis they are applied specifically to n-paraffin hydrocarbons in the range from n-heptane to n-hexadecane. The model is tested numerically on a simple constant-pressure homogeneous ignition problem using Cantera and compared to simplified and detailed mechanisms for n-heptane. The continuous thermodynamics models are able not only to predict ignition delay times and the development of temperature and species concentrations with time, but also changes in the mixture composition as the reaction proceeds, as represented by the mean and standard deviation of the distribution function. Continuous thermodynamics is therefore shown to be a useful tool for reactions of multicomponent mixtures, and an alternative to the "surrogate fuel" approach often used at present.
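The core idea, a mixture characterised only by the mean and standard deviation of a distribution over molar mass, can be sketched with a shifted gamma distribution, a common choice in continuous thermodynamics. The parameter values below (mean, standard deviation, distribution origin) are illustrative assumptions, not the thesis's fitted values:

```python
import math

def gamma_params(mean, std, origin=90.0):
    """Shape/scale of a three-parameter (shifted) gamma distribution
    with the given mean and standard deviation, starting at `origin`."""
    beta = std**2 / (mean - origin)     # scale
    alpha = (mean - origin) / beta      # shape
    return alpha, beta

def gamma_pdf(I, alpha, beta, origin=90.0):
    """PDF over molar mass I (g/mol) for the shifted gamma distribution."""
    x = I - origin
    return x**(alpha - 1) * math.exp(-x / beta) / (math.gamma(alpha) * beta**alpha)

# An n-paraffin mixture centred (illustratively) near n-decane
alpha, beta = gamma_params(mean=170.0, std=30.0)
# The PDF integrates to ~1 over the molar-mass range (unit-spaced sum)
total = sum(gamma_pdf(I, alpha, beta) for I in range(91, 600))
```

As the global reaction consumes heavier species preferentially, the model updates only `mean` and `std` rather than hundreds of individual species concentrations, which is the computational saving the abstract describes.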
APA, Harvard, Vancouver, ISO, and other styles
28

Mohd, Damanhuri Nor Alisa. "The numerical approximation to solutions for the double-slip and double-spin model for the deformation and flow of granular materials." Thesis, University of Manchester, 2017. https://www.research.manchester.ac.uk/portal/en/theses/the-numerical-approximation-to-solutions-for-the-doubleslip-and-doublespin-model-for-the-deformation-and-flow-of-granular-materials(9986ac45-e48c-4061-a299-a80b2e665c3e).html.

Full text
Abstract:
The aim of this thesis is to develop a numerical method to find approximations to solutions of the double-slip and double-spin model for the deformation and flow of granular materials. The model incorporates the physical and kinematic concepts of yield, shearing motion on slip lines, dilatation and average grain rotation. The equations governing the model comprise a set of five first-order partial differential equations for the five dependent variables: two stress variables, two velocity components and the density. For steady-state flows the model is hyperbolic, and the characteristic directions and the relations along the characteristics are presented. The numerical approximation for the rate of working of the stresses is also presented. The model is then applied to a number of granular flow problems using the numerical method.
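For a Coulomb-type granular material, the two characteristic (slip-line) families are classically inclined at ±(45° − φ/2) to the major principal stress direction, where φ is the internal friction angle. A one-line sketch of that standard relation, not of the thesis's full five-equation system:

```python
def slip_line_angles_deg(friction_angle_deg):
    """Inclination (degrees) of the two slip-line/characteristic families
    to the major principal stress direction for a Coulomb material:
    +/-(45 - phi/2). This is the classical double-slip kinematic picture."""
    half = 45.0 - friction_angle_deg / 2.0
    return +half, -half

# A typical sand with phi = 30 deg gives slip lines at +/-30 deg;
# the frictionless limit (phi = 0) recovers the +/-45 deg of metal plasticity.
angles = slip_line_angles_deg(30.0)
```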
APA, Harvard, Vancouver, ISO, and other styles
29

Khatab, Mahmoud A. T. "Behaviour of continuously supported self-compacting concrete deep beams." Thesis, University of Bradford, 2016. http://hdl.handle.net/10454/14628.

Full text
Abstract:
The present research investigates the structural behaviour of continuously supported deep beams made with SCC. A series of tests on eight reinforced two-span continuous deep beams made with SCC was performed. The main parameters investigated were the shear span-to-depth ratio, the amount and configuration of web reinforcement, and the main longitudinal reinforcement ratio. All beams failed due to a major diagonal crack forming between the applied mid-span load and the intermediate support, separating the beam into two blocks: the first rotated around the end support, leaving the rest of the beam fixed on the other two supports. The amount and configuration of web reinforcement had a major effect on the shear capacity of SCC continuous deep beams. The shear provisions of ACI 318M-11 reasonably predicted the load capacity of SCC continuous deep beams. The strut-and-tie model recommended by different design codes gave conservative results for all SCC continuous deep beams; the ACI Building Code (ACI 318M-11) predictions were more accurate than those of EC2 and the Canadian Code (CSA23.3-04). The proposed effectiveness-factor equations for the strut-and-tie model showed accurate predictions compared to the experimental results. The different effectiveness-factor equations used in upper-bound analysis can reasonably be applied to predict the load capacity of continuously supported SCC deep beams, even though they were proposed for normal concrete (NC). The proposed three-dimensional FE model accurately predicted the failure modes, load capacity and load-deflection response of the beams tested.
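For a single strut, the strut-and-tie checks mentioned above reduce to scaling the concrete strength by an effectiveness factor. A minimal sketch of that generic form ν·f'c·A; the 0.75 factor and the dimensions are assumptions for illustration, not values from the codes compared in the thesis:

```python
def strut_capacity_kN(fc_MPa, width_mm, thickness_mm, effectiveness=0.75):
    """Nominal capacity (kN) of a concrete strut in a strut-and-tie model.
    The effectiveness factor reduces the cylinder strength to account for
    cracking and transverse tension in the strut (value assumed here)."""
    area_mm2 = width_mm * thickness_mm
    return effectiveness * fc_MPa * area_mm2 / 1000.0  # N -> kN

# e.g. a 150 mm x 200 mm strut in 40 MPa concrete
cap = strut_capacity_kN(40.0, 150.0, 200.0, effectiveness=0.75)
```

The code comparisons in the abstract amount to different choices of this effectiveness factor (and of the strut geometry rules) applied to the same basic expression.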
APA, Harvard, Vancouver, ISO, and other styles
30

Peña, Monferrer Carlos. "Computational fluid dynamics multiscale modelling of bubbly flow. A critical study and new developments on volume of fluid, discrete element and two-fluid methods." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/90493.

Full text
Abstract:
The study and modelling of two-phase flows, even the simplest ones such as bubbly flow, remains a challenge that requires exploring the physical phenomena at different spatial and temporal resolution levels. CFD (Computational Fluid Dynamics) is a widespread and promising modelling tool, but nowadays there is no single approach or method that predicts the dynamics of these systems at the different resolution levels with sufficient precision. The inherent difficulty of the events occurring in this flow, mainly those related to the interface between phases, means that low or intermediate resolution level approaches such as system codes (RELAP, TRACE, ...) or 3D TFM (Two-Fluid Model) have significant issues in reproducing acceptable results, unless well-known scenarios and global values are considered. Conversely, high resolution level methods such as the Interfacial Tracking Method (ITM) or Volume Of Fluid (VOF) require a computational effort that makes their use in complex systems unfeasible. In this thesis, an open-source simulation framework has been designed and developed using the OpenFOAM library to analyse cases from the microscale to the macroscale. The different approaches, and the information required by each of them, have been studied for bubbly flow. In the first part, the dynamics of single bubbles are examined at a high resolution level through VOF. This technique has allowed accurate results to be obtained for bubble formation, terminal velocity, path, wake and the instabilities produced by the wake. However, this approach is impractical for real scenarios with more than a few dozen bubbles. As an alternative, this thesis proposes a CFD Discrete Element Method (CFD-DEM) technique, in which each bubble is represented discretely. A novel solver for bubbly flow has been developed in this thesis.
This includes a large number of improvements necessary to reproduce bubble-bubble and bubble-wall interactions, turbulence, the velocity seen by the bubbles, the momentum and mass exchange terms over the cells, and bubble expansion, among others. New implementations, such as an algorithm to seed the bubbles into the system, have also been incorporated. As a result, this new solver gives more accurate results than those available to date. Following the decrease in resolution level, and therefore in the required computational resources, a 3D TFM has been developed with a population balance equation solved by an implementation of the Quadrature Method Of Moments (QMOM). The solver is implemented with the same closure models as the CFD-DEM in order to analyse the loss of information due to the averaging of the instantaneous Navier-Stokes equations. The analysis of the CFD-DEM results reveals the discrepancies introduced by assuming averaged values and homogeneous flow in the models of the classical TFM formulation. Finally, as the lowest resolution level approach, the system code RELAP5/MOD3 is used to model the bubbly flow regime. The code has been modified to properly reproduce the two-phase flow characteristics in vertical pipes, comparing the performance of drag-term calculations based on drift-velocity and drag-coefficient approaches.
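A typical CFD-DEM closure of the kind discussed computes the drag on each discrete bubble from a drag-coefficient correlation. A sketch using the standard Schiller-Naumann correlation with water-like properties; the thesis's actual closure models may differ:

```python
import math

def schiller_naumann_cd(Re):
    """Standard Schiller-Naumann drag coefficient, a common closure for
    dispersed particles/bubbles at moderate Reynolds numbers."""
    if Re < 1e-12:
        return 0.0
    return 24.0 / Re * (1.0 + 0.15 * Re**0.687) if Re < 1000.0 else 0.44

def drag_force(rho_l, d_b, u_rel, mu_l=1.0e-3):
    """Drag force (N) on a single bubble of diameter d_b (m) moving at
    u_rel (m/s) relative to the liquid (water-like density/viscosity assumed).
    Signed: the force opposes the relative motion of the liquid seen by
    the bubble, i.e. follows the sign of u_rel here."""
    Re = rho_l * abs(u_rel) * d_b / mu_l
    frontal_area = math.pi * d_b**2 / 4.0
    return 0.5 * rho_l * schiller_naumann_cd(Re) * frontal_area * abs(u_rel) * u_rel
```

In a CFD-DEM step this force is applied to each bubble and its reaction distributed back to the carrier-fluid cells, which is the momentum-exchange term the abstract mentions.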
Peña Monferrer, C. (2017). Computational fluid dynamics multiscale modelling of bubbly flow. A critical study and new developments on volume of fluid, discrete element and two-fluid methods [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90493
APA, Harvard, Vancouver, ISO, and other styles
31

Zápeca, Jan. "Spínaný zdroj s digitální řídící smyčkou." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2012. http://www.nusl.cz/ntk/nusl-219759.

Full text
Abstract:
The diploma thesis describes how a forward converter works. It presents the function of the forward converter with a demagnetising winding and of the two-switch forward converter, and describes the behaviour of continuous and discontinuous current mode. The thesis explains the reasons for implementing feedback and presents the basic types of compensation. The project deals with the AC analysis of a two-switch forward converter with continuous peak-current-mode control. The analog-prototyping method is used for the digital control design. The function of the converter was tested in the laboratory, and the laboratory results were compared with the theoretical and simulation results.
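The ideal CCM transfer relation of the forward converter described above is Vout = Vin · D · (Ns/Np). A minimal sketch; the component values are assumed for illustration:

```python
def forward_vout_ccm(vin, duty, turns_ratio):
    """Ideal output voltage of a forward converter in continuous
    conduction mode (CCM): Vout = Vin * D * (Ns/Np)."""
    return vin * duty * turns_ratio

# With a 1:1 demagnetising winding (single-switch variant), the duty
# cycle must stay below 0.5 so the core can fully reset each period;
# the two-switch variant shares the same D < 0.5 reset constraint.
vout = forward_vout_ccm(vin=48.0, duty=0.4, turns_ratio=0.25)
```

The digital control loop discussed in the thesis regulates `duty` so that `vout` tracks its reference despite line and load changes.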
APA, Harvard, Vancouver, ISO, and other styles
32

McKay, Ian Ross. "Assessing orientations to cultural difference of the faculty of a university foundation programme in the Gulf Cooperation Council : a mixed-methods approach informed by the Intercultural Development Continuum and using the Intercultural Development Inventory." Thesis, University of Exeter, 2013. http://hdl.handle.net/10871/13781.

Full text
Abstract:
This study examined the orientations to cultural difference of sojourner educators in the Foundation Program at Qatar University to determine if orientations were correlated with select demographic and experiential variables, including gender, age, time overseas, education level, formative region, ethnic minority status, job position, length of time in Qatar, intercultural marriage, default language, formal teacher training, and overseas development organization experience. This study used a sequential mixed-method design. Perceived and Developmental Orientations were measured using the Intercultural Development Inventory© (V.3), which produced a measure of each respondent’s orientation to cultural difference. Focus group interviews were conducted to engage participants in explaining and interpreting the findings. Five focus groups of three to six participants each were conducted. Most of the teachers were found to operate from within the transitional orientation of Minimization, although individual scores ranged from Denial to Adaptation. On average, the educators were found to overestimate their orientations by 31 points. A positive correlation between orientation and formative region was found, with participants from North America showing the highest orientation. Statistically significant differences emerged for orientations when comparing Middle East and North African (MENA) and North American formative regions. Formative region was found to account for 4.8% of the variance in orientation and is a significant fit of the data. Focus group participants speculated that (a) core differences regarding multiculturalism in MENA and North American cultures help explain the results, (b) aspects of the workplace culture and both the broader MENA and local Qatari culture encourage a sense of exclusion, and (c) external events further complicate cross-cultural relations.
The study findings add to the literature by providing baseline orientation data on sojourner educators in post-secondary education in the GCC region, and by confirming some of the findings of similar studies. The study provides practitioners with suggestions for staffing and professional development. Future research should focus on the measurement of orientations in broader samples of educators, changes in orientation over time in Qatar and other cultural contexts, differences in orientation among short-term vs. long-term expatriates, the impact of employment systems and societal structures on orientations in sojourner educators, the impact of educator orientation to cultural difference on student achievement, and the design of effective cross-cultural professional development for educators.
APA, Harvard, Vancouver, ISO, and other styles
33

Abbas, Ghulam. "Analysis, modelling, design and implementation of fast-response digital controllers for high-frequency low-power switching converters." Thesis, Lyon, INSA, 2012. http://www.theses.fr/2012ISAL0055.

Full text
Abstract:
The objective of the thesis is to design discrete compensators which counteract the nonlinearities introduced by various elements in the digital control loop while delivering high dynamic performance, fast time-to-market and scalability. Excellent line and load transient response, a measure of the system's response speed, with minimal achievable voltage deviation and fast voltage recovery time for a given power stage, can be achieved through discrete compensators designed on the basis of linear and nonlinear control techniques. To achieve a stable and fast response, the thesis proposes two approaches. The first is to use linear control techniques to design the discrete compensator while keeping the bandwidth as high as possible: it is a well-known fact that the higher the bandwidth, the faster the transient response. Achieving higher bandwidth through linear control techniques alone sometimes becomes tricky, and all such situations are highlighted in the thesis. The second is to hybridize linear control techniques with nonlinear ones, such as fuzzy-logic or neural-network-based control. Simulation results verify that hybrid nonlinear-linear controllers have better dynamic performance than purely linear controllers under changes of operating point. Along with the two methodologies described above, the thesis also investigates the pole-zero cancellation (PZC) technique, in which the poles and zeros of the compensator are placed so that they cancel the effect of the poles or zeros of the buck converter, boosting the phase margin at the required bandwidth. Some modifications of digital controllers based on classical control techniques are also suggested to improve the dynamic performance. The thesis highlights the nonlinearities which degrade performance, proposes a cost-effective solution that achieves good performance, and demystifies the digital control system.
A graphical user interface is introduced and demonstrated for the design of a synchronous buck converter. In summary, this thesis mainly describes the analysis, design, simulation, optimization, implementation and cost effectiveness of digital controllers, with particular focus on the analysis and optimization of the dynamic performance of a high-frequency low-power DC-DC buck converter working in continuous conduction mode (CCM) at a switching frequency of 1 MHz, using linear and nonlinear control techniques in a sequential and comprehensive way.
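A discrete compensator of the kind discussed reduces to a short difference equation executed every switching period. A minimal PI sketch in which the compensator zero (set by the ratio b1/b0) would, under the PZC idea, be placed at the plant's dominant pole; the gains here are illustrative, not taken from the thesis:

```python
class DigitalPI:
    """Minimal discrete PI compensator:
        u[k] = u[k-1] + b0*e[k] + b1*e[k-1]
    i.e. C(z) = (b0*z + b1) / (z - 1); its zero sits at z = -b1/b0."""
    def __init__(self, b0, b1):
        self.b0, self.b1 = b0, b1
        self.u_prev = 0.0
        self.e_prev = 0.0

    def step(self, error):
        u = self.u_prev + self.b0 * error + self.b1 * self.e_prev
        self.u_prev, self.e_prev = u, error
        return u

ctrl = DigitalPI(b0=0.5, b1=-0.4)   # zero at z = 0.8 (illustrative)
# A sustained error integrates: each step increases the control output.
outs = [ctrl.step(1.0) for _ in range(3)]
```

The integrator pole at z = 1 removes steady-state error; the hybrid schemes in the thesis would reshape or gate such a linear law with fuzzy or neural logic under large transients.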
APA, Harvard, Vancouver, ISO, and other styles
34

Oueslati, Zied. "Modèle de comportement pour la modélisation du thermoformage de feuilles plastiques multicouches." Thesis, Compiègne, 2015. http://www.theses.fr/2015COMP2213.

Full text
Abstract:
Thermoplastic PolyOlefin (TPO) materials have attracted great interest for automotive applications. The mechanical characteristics of these materials are in good agreement with the environmental and economic context of the last decade. In fact, beyond their cost and recyclability, they allow important weight gains, excellent design flexibility, and high quality in terms of appearance as well as tactile and olfactory perception. The aim of this study was to model the behaviour of new TPO sheets for thermoforming applications. The studied material can reach very high stretch ranges (up to 800%) and was found to be transversely isotropic. In order to properly predict the thickness distribution of the final thermoformed parts, uniaxial tensile tests were performed along the longitudinal, transverse and diagonal directions, at 5 different temperatures from ambient to 120°C. A new transversely isotropic hyperelastic model was developed using User Subroutines in the Abaqus software. The material parameters at each temperature were identified using inverse methods, and good results were obtained. The identification procedure proved difficult because of the high sensitivity of the material parameters and the instability problems at high stretch ranges. 3D displacement-field measurement techniques were finally conducted and associated with a thermoforming test in order to validate the material parameter identification procedure.
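As a baseline for the kind of hyperelastic modelling described, the incompressible neo-Hookean uniaxial response is the standard isotropic starting point; the thesis's model adds a transverse (anisotropy) direction on top of such a base, and the shear modulus below is an assumed illustrative value:

```python
def neo_hookean_uniaxial(stretch, mu=1.5e6):
    """Nominal (first Piola-Kirchhoff) stress for an incompressible
    neo-Hookean solid in uniaxial tension: P = mu * (L - L**-2),
    where L is the stretch ratio. Isotropic baseline only; mu (Pa) is
    an illustrative assumption, not a fitted TPO parameter."""
    return mu * (stretch - stretch**-2)

# Up to stretch 9, i.e. the 800% elongation range reported for the TPO sheets
stresses = [neo_hookean_uniaxial(l) for l in (1.0, 2.0, 9.0)]
```

Inverse identification of the kind the abstract describes would fit `mu` (and the additional anisotropic parameters) so that such a stress-stretch curve matches the tensile tests in each material direction and at each temperature.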
APA, Harvard, Vancouver, ISO, and other styles
35

Kakarla, Svnp Sri Hari Santosh. "Modélisation de la multi-fissuration des matériaux quasi-fragiles par couplage d’un modèle d’endommagement anisotrope microplan et d’une formulation des discontinuités fortes dans la méthode des éléments finis enrichis." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN013.

Full text
Abstract:
The performance aspects of large-scale civil engineering structures such as containment facilities, namely durability, serviceability and structural safety, are assessed from time to time to avert catastrophes. Under extreme loading, different cracking mechanisms interact with one another, ultimately leading to failure, which creates the need for regulatory measures. To achieve this, it is essential to predict quantities such as crack opening displacements, crack spacing and tortuosity. The purpose of this thesis is to develop numerical tools to model multiple intersecting cracks, in particular the complete strain localization process from the onset of damage to the initiation and propagation of multiple cracks. Two main ingredients are used: the microplane model describes the anisotropic damage phase, and the Embedded Finite Element Method (EFEM) introduces cracks as multiple strong discontinuities in the damaged continuum. First, the standard EFEM is extended to the context of multiple cracks. Then, the microplane microdamage model is formulated in a thermodynamic framework using simple constitutive laws. Finally, these two approaches are coupled using a transition methodology. The proposed methodologies are illustrated using several elementary and structural test cases that involve complex stress-strain states.
36

Klang, Johanna, and Susanne Jönsson. "Att arbeta med förebyggande förändring på producerande företag." Thesis, Linnéuniversitetet, Institutionen för maskinteknik (MT), 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-26413.

Full text
Abstract:
Change is a state that everyone experiences throughout their life, both privately and at work. Companies must change in order to keep their competitiveness. Resistance to change can be considered the largest single threat to a successful implementation of a strategy in a company. One way to deal with this resistance is to apply the participative change model and to allay fears and insecurity. The purpose of this thesis was to achieve a better understanding of how companies work with productivity improvements and whether they experience any resistance when making these changes. The interviewed companies identify the implementation of their own production system, with a strong focus on Kaizen (continuous improvement), as their single largest change. During our work we encountered a psychological and scientific theory, the Four Rooms of Change, which is considered a powerful aid during all changes.
37

Eliasson, Björn. "Voice Activity Detection and Noise Estimation for Teleconference Phones." Thesis, Umeå universitet, Institutionen för matematik och matematisk statistik, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-108395.

Full text
Abstract:
If communicating via a teleconference phone, the desired transmitted signal (speech) needs to be crystal clear so that all participants experience good communication. However, many environmental conditions contaminate the signal with background noise, i.e. sounds not of interest for communication purposes, which impedes the ability to communicate. Noise can be removed from the signal if it is known, and so this work has evaluated different ways of estimating the characteristics of the background noise. Focus was put on using speech detection to define the noise, i.e. the non-speech part of the signal, but other methods not solely reliant on speech detection but rather on characteristics of the noisy speech signal were included. The implemented techniques were compared against the current solution used by the teleconference phone in two ways: first for their speech detection ability, and second for their ability to correctly estimate the noise characteristics. The evaluation was based on simulations of the methods' performance in various noise conditions, ranging from harsh to mild environments. It was shown that the proposed method, as implemented in this study, improved on the existing solution in terms of speech detection ability, and that its noise estimate improved in certain conditions. It was also concluded that using the proposed method would enable two sources of noise estimation compared to the current single estimation source, and it was suggested to investigate how utilizing two noise estimators could affect performance.
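A minimal sketch of the speech-detection-based noise estimation idea: an energy-threshold voice activity detector (VAD) marks frames as speech or noise, and only noise frames update a recursively smoothed noise power estimate. This is an illustrative toy, not the thesis's actual algorithm, and all parameter values are assumptions.

```python
import numpy as np

def vad_noise_estimate(frames, alpha=0.95, threshold_db=6.0):
    """Frame-wise energy VAD: frames whose energy is within `threshold_db`
    of the running noise floor are treated as noise and used to update an
    exponentially smoothed noise power estimate."""
    noise_power = np.mean(frames[0] ** 2)    # bootstrap from first frame
    labels = []
    for frame in frames:
        power = np.mean(frame ** 2)
        is_speech = 10 * np.log10(power / (noise_power + 1e-12)) > threshold_db
        if not is_speech:                    # only non-speech updates the floor
            noise_power = alpha * noise_power + (1 - alpha) * power
        labels.append(is_speech)
    return np.array(labels), noise_power

# Illustration: quiet noise frames with a loud "speech" burst in the middle
rng = np.random.default_rng(1)
frames = [rng.normal(0, 0.01, 160) for _ in range(50)]
for i in range(20, 30):
    frames[i] = frames[i] + 0.5 * np.sin(0.3 * np.arange(160))
labels, noise_power = vad_noise_estimate(np.array(frames))
print(labels[20:30].all(), labels[:20].any())
```

More robust estimators (e.g. minimum statistics) track the noise floor even during speech, which is why the thesis also considers methods not solely reliant on speech detection.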
38

Von, Pfeil Karl. "A two-fluid continuum model for structure evolution in electro- and magnetorheological fluids." 2002. http://catalog.hathitrust.org/api/volumes/oclc/50264137.html.

Full text
Abstract:
Thesis (M.S.)--University of Wisconsin--Madison, 2002.
Typescript. Includes bibliographical references (leaves 107-116).
39

Lu, Hsiu-Chen, and 盧脩塵. "Displacement Analysis of a Two-Wire-Driven Continuum Robot with a Contacted Obstacle based on Neural Network Model." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/53vy75.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Mechanical Engineering
Academic year 106 (2017)
Continuum robots have become more and more popular in recent years, and displacement analysis is necessary to find the position of a continuum robot. In robot control, there is much research on path planning and on how to bypass obstacles. Many methods, such as geometric models and beam-theory models, can be used to analyze the displacement of continuum robots, and these methods can also solve the kinematics of a continuum robot with end loads. However, no research addresses what happens when a continuum robot contacts an obstacle and keeps moving. The purpose of this thesis is therefore to analyze the displacement of a continuum robot in contact with an obstacle. Geometric models are too simple, while the beam-theory models are either too complicated or cannot solve this problem. A neural network model is therefore chosen to analyze the displacement of a continuum robot with a contacted obstacle, although there is no prior research on displacement analysis using a neural network model. With this approach, the user only needs to decide on the input data and the target data, and then perform experiments to collect data. These data are used to train the neural network model, producing a trained model that outputs the position and orientation of the robot's end-effector in a short time.
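The learned displacement analysis can be sketched under strong simplifying assumptions: a hypothetical planar constant-curvature forward model generates training data (standing in for the thesis's experiments), and a small one-hidden-layer network learns the mapping from actuation input to tip position. None of the numbers below come from the thesis.

```python
import numpy as np

L = 100.0  # segment length in mm (assumed)

def tip_position(theta):
    """Tip (x, y) of a planar constant-curvature segment bent by angle theta (theta > 0)."""
    return np.stack([(L / theta) * (1 - np.cos(theta)),
                     (L / theta) * np.sin(theta)], axis=1)

rng = np.random.default_rng(0)
theta = rng.uniform(0.05, 1.5, (2000, 1))     # proxy for wire displacement
target = tip_position(theta[:, 0]) / L        # normalize outputs to ~[0, 1]

# One-hidden-layer network trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 2)); b2 = np.zeros(2)
lr, losses = 0.1, []
for _ in range(4000):
    h = np.tanh(theta @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - target
    losses.append(np.mean(err ** 2))
    g = 2 * err / len(theta)                  # backward pass (MSE gradient)
    dW2 = h.T @ g; db2 = g.sum(axis=0)
    dh = (g @ W2.T) * (1 - h ** 2)
    dW1 = theta.T @ dh; db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2
print(f"training MSE: {losses[0]:.4f} -> {losses[-1]:.6f}")
```

In the thesis, the training pairs come from physical experiments with a contacted obstacle rather than an analytical forward model, which is exactly what lets the network cover cases geometric and beam-theory models cannot.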
40

Torres, Estrella. "Measuring Mental Health in Children with Disabilities : The use of the two continua model." Thesis, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-53665.

Full text
Abstract:
Mental health has traditionally been described as the absence of mental problems, with those problems equated to impairments, conflating disability with mental illness. This unfounded conviction is being replaced by a positive mental health approach that recognizes them as distinct constructs. The two continua model is the first model to show with empirical support that the presence of mental problems does not entail a lack of positive mental health. In the midst of this transformation, disabled children's voices are being acknowledged as an often-ignored presence, as the United Nations Convention on the Rights of Persons with Disabilities pushes for their recognition. This systematic review aims to explore which instruments are being used to measure the mental health of children with disabilities, and to assess how they compare to the Mental Health Continuum Short Form (MHC-SF), which emerges as the operationalization of positive mental health in the two continua model. Five databases were explored and eight articles were chosen, from which nine questionnaires were analysed and quality-assessed with the COSMIN checklist. Of those, two instruments focused on mental problems (SDQ and ChYMH); two surveys had items taken and adapted to measure flourishing (NSCH 2016/2011-2012 and L&H-YP 2011); three instruments targeted quality of life of children with a disability (Kidslife, CPQoL-Teens and Kidscreen); one was a newly developed subjective mental health questionnaire for children with intellectual disability (WellSEQ); and the last was the MHC-SF itself. Results show the emotional wellbeing dimension to be the most widely used, but positive functioning is misrepresented, often measured through external factors. There is a tendency towards the traditional deficit-based formulation of items; despite that, there are good-quality instruments that cater to children with disabilities through self-report measures (CPQoL-Teens, WellSEQ and Kidscreen), although severe ID comorbidities are excluded.
The use of digital resources in administration offers a promising path towards large-scale surveys of children with cognitive and motor impairments, all the more so given that school is the usual place of administration, without acknowledging that children with chronic health conditions present higher rates of absenteeism.
41

Farrokhpanah, Amirsaman. "Applying Contact Angle to a Two-dimensional Smoothed Particle Hydrodynamics (SPH) model on a Graphics Processing Unit (GPU) Platform." Thesis, 2012. http://hdl.handle.net/1807/33416.

Full text
Abstract:
A parallel, GPU-compatible, Lagrangian mesh-free particle solver for multiphase fluid flow based on the SPH scheme is developed and used to capture the interface evolution during droplet impact. Surface tension is modeled employing the multiphase scheme of Hu et al. (2006). In order to precisely simulate wetting phenomena, a method based on the work of Šikalo et al. (2005) is used jointly with the model proposed by Afkhami et al. (2009) to ensure accurate dynamic contact angle calculations. Accurate predictions were obtained for the droplet contact angle during spreading. A two-dimensional analytical model is developed as an extension of the work of Chandra et al. (1991), and results obtained from the solver agree well with this analytical model. The effects of memory management techniques, along with a variety of task-assignment algorithms on the GPU, are studied. GPU speedups of up to 120 times over a single-processor CPU were obtained.
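As a small illustration of the SPH machinery such a solver rests on, the standard 2-D cubic spline smoothing kernel can be checked numerically for its unit-integral (normalization) property. This is a generic SPH ingredient, not code from the thesis.

```python
import numpy as np

def cubic_spline_2d(r, h):
    """Standard 2-D cubic spline SPH kernel with smoothing length h
    (support radius 2h, normalization 10 / (7 pi h^2))."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Numerically integrate the kernel over its support on a fine grid
h = 0.1
x = np.linspace(-2 * h, 2 * h, 401)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]
integral = cubic_spline_2d(np.hypot(X, Y), h).sum() * dx * dx
print(integral)   # ≈ 1
```

Kernel normalization is what makes SPH field interpolation consistent; multiphase schemes like Hu et al.'s build their density and surface-tension estimates on top of such kernels.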
42

James, Martin. "Turbulence and pattern formation in continuum models for active matter." Doctoral thesis, 2020. http://hdl.handle.net/21.11130/00-1735-0000-0005-131C-7.

Full text
43

Tseng, Hsien-Hsiu, and 曾賢秀. "Motivation for Consumer's Continuous Usage of Mobile APP : Two-Factor Model Perspective." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/2tx3rm.

Full text
Abstract:
Master's thesis
Soochow University
Department of Information Management
Academic year 104 (2015)
Following the popularity of smart mobile devices, the rapid rise and diversification of application programs (apps) of every kind has led the trend and become integrated into daily life. However, what factors affect whether users download an app and continue using it? This research investigated, by questionnaire, the factors behind users' continued use of apps, classifying them from the perspective of the two-factor model. With 203 valid participants, the results showed that "curiosity", "interpersonal", "perceived price", "perceived enjoyment", "utility value", "social value", "relieve stress", "performance improvement", "brand", "custom" and "habit" belong to the motivation factors, while "compatibility", "satisfaction", "perceived usefulness", "ease", "functional value", "quality", "trust", "localization", "perceived enjoyment" and "security" belong to the hygiene factors. We anticipate that these results will serve as a significant reference for app development and the future market.
44

Lo, Ship-Peng, and 羅仕鵬. "A Study on Two-Dimension Cutting Models of Continuous Chip and Discontinuous Chip." Thesis, 1997. http://ndltd.ncl.edu.tw/handle/26780474806089749081.

Full text
Abstract:
Doctoral dissertation
National Taiwan Institute of Technology
Graduate Institute of Mechanical Engineering
Academic year 85 (1996)
The large-deformation finite element theory, the updated Lagrangian formulation (ULF) and incremental principles were used in this work to develop a two-dimensional thermo-elastic-plastic analytical model. In this model, the tool moves forward step by step from the initial tool-workpiece contact until a steady cutting force forms. First, based on the above model, the machining of OFC copper by a diamond tool with zero rake angle was analyzed; the key point is to observe the effects of both cutting speed and temperature on the workpiece material. Along the predefined cutting tool path, a geometrical chip separation criterion was used to separate the workpiece nodes into chip nodes and machined workpiece nodes. The second stage examines the amounts of elastic deformation and cratering of a cutting tool subjected to high cutting force and stress at the chip-tool interface; under the condition of low cutting speed and no heat transfer, the tool is treated as an elastic material. An iterative mathematical model of the chip-tool interface was developed, and three kinds of tools were applied to machine a mild steel workpiece. Using this iterative model, the effects of different tools with zero rake angle on the cutting process were investigated. The study of incipient discontinuous chip formation in 6-4 copper is the focus of the final stage. The initial fracture position of the chip was predicted using the accumulated strain energy density, and the growth orientation of the fracture was found using the direction of maximum strain energy density. As a result, the discontinuous chip configuration, the cutting force, and the stress and strain distributions can be obtained.
45

LIN, ZHONG-SHENG, and 林中聖. "An algorithm to the regression quantile and its application to the estimation of continuous two-phase regression model." Thesis, 1990. http://ndltd.ncl.edu.tw/handle/73143005336508395395.

Full text
46

(8115878), Matthew T. Moore. "Numerical Simulation of a Continuous Caster." Thesis, 2019.

Find full text
Abstract:
Heat transfer and solidification models were developed for use in a numerical model of a continuous caster to provide a means of predicting how the developing shell reacts under variable operating conditions. Measurement data for the operating conditions leading up to a breakout occurrence were provided by an industrial collaborator and were used to define the model boundary conditions. Steady-state and transient simulations were conducted using boundary conditions defined from time-averaged measurement data. The predicted shell profiles demonstrated good agreement with thickness measurements of a breakout shell segment recovered from the quarter-width location. Further examination of the results against measurement data suggests the pseudo-steady assumption may be inadequate for modeling the shell and flow field transition period following sudden changes in casting speed. An adaptive mesh refinement procedure was established to increase refinement in areas of predicted shell growth and to remove excess refinement from regions containing only liquid. A control algorithm was developed and employed to automate the refinement procedure in a proof-of-concept simulation. The use of adaptive mesh refinement was found to decrease the total simulation time by approximately 11% relative to the control simulation, which used a static mesh.
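A toy version of the shell solidification idea: a 1-D explicit conduction scheme with illustrative property values, tracking where the steel has dropped below the solidus. This is far simpler than the caster model described above (no latent heat, no flow, assumed material data), and is meant only to show the basic mechanism.

```python
import numpy as np

# Illustrative 1-D explicit (FTCS) conduction sketch of a cooling steel shell
alpha = 5e-6                                  # thermal diffusivity, m^2/s (assumed)
T_init, T_wall, T_sol = 1550.0, 1100.0, 1495.0  # degC, assumed values
nx, L = 101, 0.02                             # 2 cm domain
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                      # satisfies FTCS stability (<= 0.5 dx^2/alpha)

T = np.full(nx, T_init)
T[0] = T_wall                                 # chilled mold face
for _ in range(2000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = T_wall, T_init              # fixed boundary temperatures
shell = dx * np.argmax(T > T_sol)             # depth of first node above solidus
print(f"shell thickness ≈ {shell * 1000:.1f} mm")
```

The thesis's model adds, among other things, latent heat release, casting-speed-dependent boundary conditions, and adaptive refinement around the moving solidification front.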
47

"Structural equation models with continuous and polytomous variables: comparisons on the bayesian and the two-stage partition approaches." 2003. http://library.cuhk.edu.hk/record=b5891707.

Full text
Abstract:
Chung Po-Yi.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2003.
Includes bibliographical references (leaves 33-34).
Abstracts in English and Chinese.
Chapter 1 --- Introduction --- p.1
Chapter 2 --- Bayesian Approach --- p.4
Chapter 2.1 --- Model Description --- p.5
Chapter 2.2 --- Identification --- p.6
Chapter 2.3 --- Bayesian Analysis of the Model --- p.8
Chapter 2.3.1 --- Posterior Analysis --- p.8
Chapter 2.3.2 --- The Gibbs Sampler --- p.9
Chapter 2.3.3 --- Conditional Distributions --- p.10
Chapter 2.4 --- Bayesian Estimation --- p.13
Chapter 3 --- Two-stage Partition Approach --- p.15
Chapter 3.1 --- First Stage: PRELIS --- p.15
Chapter 3.2 --- Second Stage: LISREL --- p.17
Chapter 3.2.1 --- Model Description --- p.17
Chapter 3.2.2 --- Identification --- p.17
Chapter 3.2.3 --- LISREL Analysis of the Model --- p.18
Chapter 4 --- Comparison --- p.19
Chapter 4.1 --- Simulation Studies --- p.19
Chapter 4.2 --- Real Data Studies --- p.28
Chapter 5 --- Conclusion & Discussion --- p.30
Chapter A --- Tables for the Two Approaches --- p.35
Chapter B --- Manifest variables in the ICPSR examples --- p.51
Chapter C --- PRELIS & LISREL Scripts for Simulation Studies --- p.52
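The Gibbs sampler of chapter 2.3.2 can be illustrated on the simplest non-trivial target, a bivariate normal with correlation rho, where each full conditional distribution is itself a univariate normal. This is a generic textbook illustration, not the thesis's SEM sampler.

```python
import numpy as np

# Gibbs sampling from a standard bivariate normal with correlation rho:
# x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2)
rho = 0.8
rng = np.random.default_rng(0)
x = y = 0.0
samples = []
for i in range(20000):
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    if i >= 2000:                 # discard burn-in
        samples.append((x, y))
samples = np.array(samples)
print(np.corrcoef(samples.T)[0, 1])   # ≈ 0.8
```

In the thesis, the same alternating-conditional idea is applied to the latent variables and structural parameters of the SEM, whose joint posterior has no closed form.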
48

Pereira, Armanda Sofia Carvalho Santos. "Non-traditional university students at university: an explanatory model of the intention to continue studying." Master's thesis, 2012. http://hdl.handle.net/1822/21145.

Full text
Abstract:
Master's dissertation in Psychology (specialization in School and Educational Psychology)
Portugal is widening the participation of new groups of people in higher education, resulting in a growing presence of mature students at university. Although academic achievement is believed to be an important factor in students' decision to continue studying at university, research on this topic is limited. The current study analyzed the relationship between academic achievement and the intention of non-traditional students (NTS) to continue studying at a public university (N = 327). The data were analyzed by fitting a path model in which first-year students' intention to continue studying is significantly determined by their academic achievement, and this achievement is partly determined by students' entry option at university, high school GPA and age. The findings supported the feasibility of the model and suggested profitable directions regarding the retention of NTS at university. The data suggest that universities should reflect upon the academic support that NTS receive to continue their studies.
49

Sundar, Arun. "A 3-Bit Current Mode Quantizer for Continuous Time Delta Sigma Analog-to-Digital Converters." Thesis, 2011. http://hdl.handle.net/1969.1/ETD-TAMU-2011-12-10515.

Full text
Abstract:
The summing amplifier and the quantizer form two of the most critical blocks in a continuous time delta sigma (CT ΔΣ) analog-to-digital converter (ADC). Most of the conventional CT ΔΣ ADC designs incorporate a voltage summing amplifier and a voltage-mode quantizer. The high gain-bandwidth (GBW) requirement of the voltage summing amplifier increases the overall power consumption of the CT ΔΣ ADC. In this work, a novel method of performing the operations of summing and quantization is proposed. A current-mode summing stage is proposed in the place of a voltage summing amplifier. The summed signal, which is available in current domain, is then quantized with a 3-bit current mode flash ADC. This current mode summing approach offers considerable power reduction of about 80% compared to conventional solutions [2]. The total static power consumption of the summing stage and the quantizer is 5.3mW. The circuits were designed in IBM 90nm process. The static and dynamic characteristics of the quantizer are analyzed. The impact of process and temperature variation and mismatch tolerance as well as the impact of jitter, in the presence of an out-of-band blocker signal, on the performance of the quantizer is also studied.
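An idealized software model of a 3-bit flash quantizer clarifies the thermometer-coding principle behind the abstract's current-mode flash ADC: the input is compared against seven equally spaced reference levels and the number of tripped comparators is the output code. All circuit non-idealities (mismatch, jitter, metastability) studied in the thesis are ignored here, and the full-scale value is an assumption.

```python
import numpy as np

def flash_3bit(i_in, i_fs=1.0):
    """Ideal 3-bit flash quantizer: 7 comparators against a ladder of
    equally spaced reference currents; the thermometer code is summed
    into a 0..7 digital word."""
    refs = (np.arange(1, 8) / 8.0) * i_fs   # 7 reference levels
    return int(np.sum(i_in > refs))         # thermometer-to-binary

# Sweep the input through the full-scale range: codes step from 0 to 7
codes = [flash_3bit(i) for i in np.linspace(0, 0.99, 8)]
print(codes)
```

In the actual circuit the comparisons happen on mirrored currents rather than numbers, which is what removes the need for the power-hungry voltage summing amplifier.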
50

Petráčková, Denisa. "Využití gelově-založených proteomových technik při analýze genové exprese u prokaryotních a eukaryotních modelů." Doctoral thesis, 2011. http://www.nusl.cz/ntk/nusl-312065.

Full text
Abstract:
This PhD thesis demonstrated the applicability of a gel-based proteomic separation tool, 2-D electrophoresis, in three independent projects. Supplemented with results obtained using different techniques, the proteomic studies enabled global imaging of the proteomes in the studied biological systems. Comparing total proteomes of E. coli, 61 protein changes were identified and connected with the development of the bacterial population in the presence of an antibiotic compound, erythromycin. This classic proteomic approach included sample extraction and optimization of its 2-D separation, followed by 2-D gel analysis and protein identification by MS methods. A disadvantage of this work was the enormously large amount of data to be analyzed by computer. For the study of the membrane proteome of B. subtilis during pH-induced stress, on the other hand, a modification of the isolation techniques for membrane and membrane-associated proteins was first required to improve the subsequent protein separation by 2-D electrophoresis. The optimization of protein extraction included changes in the detergents used for protein solubilization and a prolongation of the time periods in the solubilization protocol. Five relevant protein changes were then described that play a role in the bacterial response to pH stress. The proteins were...