Dissertations / Theses on the topic 'Penalty parameter and scaling parameter'

Consult the top 21 dissertations / theses for your research on the topic 'Penalty parameter and scaling parameter.'

1

Adamu-Lema, Fikru. "Scaling and intrinsic parameter fluctuations in nano-CMOS devices." Thesis, University of Glasgow, 2005. http://theses.gla.ac.uk/7086/.

Abstract:
The core of this thesis is a thorough investigation of the scaling properties of conventional nano-CMOS MOSFETs, their physical and operational limitations, and intrinsic parameter fluctuations. To support this investigation, a well-calibrated real MOSFET with a 35 nm physical gate length, fabricated by Toshiba, was used as a reference transistor. Before scaling to shorter channel lengths, the simulators were calibrated against the experimentally measured characteristics of the reference device. Comprehensive numerical simulators were then used to design the next five generations of transistors, corresponding to the technology nodes of the latest International Technology Roadmap for Semiconductors (ITRS). The scaling of field effect transistors is one of the most widely studied concepts in semiconductor technology. The emphases of such studies have varied over the years, dictated by the dominant issues faced by the microelectronics industry. The research presented in this thesis focuses on the present state of the scaling of conventional MOSFETs and its projections over the next 15 years. The electrical properties of conventional MOSFETs scaled to channel lengths of 35, 25, 18, 13, and 9 nm have been investigated: threshold voltage (VT), subthreshold slope (S) and on/off currents (Ion, Ioff). In addition, the channel doping profile and the corresponding carrier mobility in each generation of transistors have been studied and compared. The concern of limited solid solubility of dopants in silicon is also addressed, along with the problem of high channel doping concentrations in scaled devices. Another important issue associated with the scaling of conventional MOSFETs is intrinsic parameter fluctuations (IPF) due to discrete random dopants in the inversion layer and the effects of gate Line Edge Roughness (LER). 
The variations of the three important MOSFET parameters (Ioff, VT and Ion) induced by random discrete dopants and LER have been comprehensively studied in the thesis. Finally, one of the promising emerging CMOS transistor architectures, the Ultra Thin Body (UTB) SOI MOSFET, which is expected to replace the conventional MOSFET, has been investigated from the scaling point of view.
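The generational scaling the abstract describes can be illustrated with the classical constant-field (Dennard) scaling rules. This is a textbook sketch under illustrative parameter values, not the calibrated device designs of the thesis:

```python
def constant_field_scaling(gate_length, supply_voltage, channel_doping, k):
    """Classical constant-field scaling: dimensions and voltages shrink
    by 1/k while channel doping rises by k, keeping the internal
    electric field roughly constant across generations."""
    return {
        "gate_length": gate_length / k,
        "supply_voltage": supply_voltage / k,
        "channel_doping": channel_doping * k,
    }

# Scaling a 35 nm gate length toward the 25 nm node (k = 35/25 = 1.4);
# the 0.85 V supply and 1e18 cm^-3 doping are illustrative assumptions.
next_node = constant_field_scaling(35e-9, 0.85, 1e18, 35 / 25)
print(next_node["gate_length"])
```

Iterating this map reproduces the 35, 25, 18, 13, 9 nm sequence of nodes studied in the thesis (each step shrinks the gate length by roughly 1.4).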
2

Charonko, John James. "A Nondimensional Scaling Parameter for Predicting Pressure Wave Reflection in Stented Arteries." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/31906.

Abstract:
Coronary stents have become a very popular treatment for cardiovascular disease, historically the leading cause of death in the United States. Stents, while successful in the short term, are subject to high failure rates (up to 24% in the first six months) due to wall regrowth and clotting, probably caused by a combination of abnormal mechanical stresses and disruption of the arterial blood flow. The goal of this research was to develop recommendations concerning ways in which stent design might be improved, focusing on the problem of pressure wave reflections. A one-dimensional finite-difference model was developed to predict these reflections, and the effects of variations in stent and vessel properties were examined, including stent stiffness, length, and compliance transition region, as well as vessel radius and wall thickness. The model was solved using a combination of Weighted Essentially Non-Oscillatory (WENO) and Runge-Kutta methods. Over 100 cases were tested. Results showed that reasonable variations in these parameters could induce changes in reflection magnitude of up to ±50%. It was also discovered that the relationship between each of these properties and the resulting wave reflection could be described simply, and that the effect of all of them together could in fact be captured by a single non-dimensional parameter. This parameter was titled 'Stent Authority', and several variations were proposed. It is believed this parameter is a novel way of relating the energy imposed upon the arterial wall by the stent to the fraction of the incident pressure energy which is reflected from the stented region.
Master of Science
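The wave reflections studied here arise from the impedance mismatch between the compliant native artery and the stiffer stented segment. As a minimal sketch, here is the standard one-dimensional linear-wave reflection coefficient at such a junction; this is the textbook result, not the thesis's "Stent Authority" parameter, whose exact form the abstract does not give:

```python
def reflection_coefficient(Z1, Z2):
    """Pressure-wave reflection at a junction between vessel segments of
    characteristic impedance Z1 (proximal, native artery) and Z2
    (distal, stented segment): R = (Z2 - Z1) / (Z2 + Z1)."""
    return (Z2 - Z1) / (Z2 + Z1)

# A stiffer stented segment has higher impedance, so part of the
# incident pressure wave is reflected back upstream (illustrative values).
print(reflection_coefficient(1.0, 3.0))  # -> 0.5
```

A matched junction (Z1 = Z2) reflects nothing, which is why smoothing the compliance transition region, one of the parameters varied in the thesis, reduces reflections.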
3

Das, Narendra Narayan. "Modeling and application of soil moisture at varying spatial scales with parameter scaling." [College Station, Tex.]: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2877.

4

Kayhan, Belgin. "Parameter Estimation in Generalized Partial Linear Models with Tikhonov Regularization." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612530/index.pdf.

Abstract:
Regression analysis refers to techniques for modeling and analyzing several variables in statistical learning. There are various types of regression models. In our study, we analyzed Generalized Partial Linear Models (GPLMs), which decompose the input variables into two sets and additively combine a classical linear model with a nonlinear model part. By separating the linear model from the nonlinear one, Tikhonov regularization, an inverse-problem method, was applied to the nonlinear submodel separately, within the entire GPLM. Such a representation of the submodels provides both better accuracy and better stability (regularity) under noise in the data. We aim to smooth the nonparametric part of the GPLM by using a modified form of Multivariate Adaptive Regression Splines (MARS), which is very useful for high-dimensional problems and does not impose any specific relationship between the predictor and dependent variables. Instead, it estimates the contribution of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the dependent variable. The MARS algorithm has two steps: the forward and backward stepwise algorithms. In the first, the model is built by adding basis functions until a maximum level of complexity is reached. The backward stepwise algorithm then removes the least significant basis functions from the model. In this study, we propose to use a penalized residual sum of squares (PRSS) instead of the backward stepwise algorithm, and we construct the PRSS for MARS as a Tikhonov regularization problem. We also provide numerical examples with two data sets: one has interaction effects and the other does not. As well as studying the regularization of the nonparametric part, we also discuss, theoretically, the regularization of the parametric part. 
Furthermore, we compare Infinite Kernel Learning (IKL) and Tikhonov regularization on two data sets, which differ in the (non-)homogeneity of the data. The thesis concludes with an outlook on future research.
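The Tikhonov regularization at the core of this approach has a simple closed form for a quadratic penalty. The sketch below is generic ridge-style Tikhonov least squares, not the thesis's PRSS construction for MARS basis functions; the data and parameter values are illustrative assumptions:

```python
import numpy as np

def tikhonov_solve(X, y, alpha=1.0, L=None):
    """Closed-form minimizer of ||X b - y||^2 + alpha * ||L b||^2,
    i.e. b = (X'X + alpha L'L)^{-1} X'y.

    L = identity gives ordinary ridge regression; a difference matrix
    instead penalizes roughness, which is the smoothing role the
    penalized residual sum of squares plays for the spline part."""
    n_features = X.shape[1]
    if L is None:
        L = np.eye(n_features)
    return np.linalg.solve(X.T @ X + alpha * (L.T @ L), X.T @ y)

# Illustrative regression problem: 50 noisy observations of 3 coefficients.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.01 * rng.normal(size=50)
est = tikhonov_solve(X, y, alpha=0.1)
print(np.allclose(est, beta_true, atol=0.1))  # -> True
```

Larger `alpha` shrinks the solution harder toward zero (or toward smoothness, if `L` is a difference operator), trading bias for stability under noise, which is exactly the accuracy/regularity trade-off the abstract describes.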
5

Bharadwaj, Shashank. "Investigation of oxide thickness dependence of Fowler-Nordheim parameter B." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000251.

6

VanDerwerken, Douglas Nielsen. "Variable Selection and Parameter Estimation Using a Continuous and Differentiable Approximation to the L0 Penalty Function." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2486.

Abstract:
L0 penalized likelihood procedures like Mallows' Cp, AIC, and BIC directly penalize the number of variables included in a regression model. This is a straightforward approach to the problem of overfitting, and these methods are now part of every statistician's repertoire. However, these procedures have been shown to sometimes produce unstable parameter estimates as a result of the L0 penalty's discontinuity at zero. One proposed alternative, seamless-L0 (SELO), utilizes a continuous penalty function that mimics L0 and allows for stable estimates. Like other similar methods (e.g. LASSO and SCAD), SELO produces sparse solutions because the penalty function is non-differentiable at the origin. Because these penalized likelihoods are singular (non-differentiable) at zero, there is no closed-form solution for the extremum of the objective function. We propose a continuous and everywhere-differentiable penalty function that can have arbitrarily steep slope in a neighborhood of zero, thus mimicking the L0 penalty, while allowing a nearly closed-form solution for the beta-hat vector. Because our function is not singular at zero, beta-hat will have no zero-valued components, although some will have been shrunk arbitrarily close to zero. The BIC-selected tuning parameter used in the shrinkage step is therefore also employed to perform zero-thresholding. We call the resulting vector of coefficients the ShrinkSet estimator. It is comparable to SELO in terms of model performance (selecting the truly nonzero coefficients, overall MSE, etc.), but we believe it to be more intuitive and simpler to compute. We provide strong evidence that the estimator enjoys favorable asymptotic properties, including the oracle property.
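The key idea, a smooth penalty that approximates the discontinuous L0 count, can be sketched with a generic surrogate of the same flavor. The form below is an illustrative assumption, not the thesis's actual ShrinkSet penalty, whose exact expression the abstract does not give:

```python
import numpy as np

def smooth_l0_penalty(beta, lam=1.0, eps=1e-3):
    """Continuous, everywhere-differentiable surrogate for the L0 penalty.

    p(b) = lam * b^2 / (b^2 + eps) tends to lam * 1{b != 0} as eps -> 0,
    so summing over coefficients approximates lam times the number of
    nonzero coefficients while staying smooth at the origin (unlike L0,
    LASSO, or SCAD, which are singular there)."""
    beta = np.asarray(beta, dtype=float)
    return float(np.sum(lam * beta**2 / (beta**2 + eps)))

def smooth_l0_grad(beta, lam=1.0, eps=1e-3):
    """Gradient of the surrogate; well-defined (and zero) at the origin,
    so penalized least squares can be solved by smooth optimization
    rather than the combinatorial search L0 requires."""
    beta = np.asarray(beta, dtype=float)
    return 2.0 * lam * eps * beta / (beta**2 + eps)**2

# Two clearly nonzero coefficients and one zero coefficient: the penalty
# is close to 2 * lam, i.e. it "counts" the nonzero terms.
print(round(smooth_l0_penalty([5.0, -3.0, 0.0]), 3))  # -> 2.0
```

Because the surrogate never forces coefficients exactly to zero, a separate thresholding step (BIC-tuned in the thesis) is still needed to produce a sparse model, exactly as the abstract describes.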
7

Jurczyk, Michael Ulrich. "Shape based stereovision assistance in rehabilitation robotics." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001084.

8

Pellicer, Alborch Klaus [Verfasser], Stefan [Akademischer Betreuer] Junne, Frank [Gutachter] Delvigne, Alain [Gutachter] Sourabié, and Peter [Gutachter] Neubauer. "Cocci chain length distribution as control parameter in scaling lactic acid fermentations / Klaus Pellicer Alborch ; Gutachter: Frank Delvigne, Alain Sourabié, Peter Neubauer ; Betreuer: Stefan Junne." Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1220355917/34.

9

Morgenstern, Yvonne. "Analyse und Konzeption von Messstrategien zur Erfassung der bodenhydraulischen Variabilität." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1204887346375-13520.

Abstract:
The consideration of the spatial variability of unsaturated soil hydraulic characteristics remains an unsolved problem in the modelling of water and matter transport in the vadose zone. This can mainly be explained by the rather cumbersome measurement of this variability, which is both time-consuming and cost-intensive. The presented thesis analyses various measurement strategies which aim at describing the soil-hydraulic heterogeneity by a small number of proxy parameters that are easily measurable and still have a soil-physical meaning. The developed approach uses a similarity concept which groups soils into similar soil hydraulic classes. Within a class, the variability of the retention and hydraulic conductivity curves can be reduced to a single free parameter (the scaling parameter). The analysis of the correlation between the soil parameters and the scaling parameters ultimately indicates which soil parameters can be used to describe the soil hydraulic variability in a given area. This investigation forms the basis for the further development of a stochastic model which can integrate the soil-hydraulic variability into the modelling of soil water transport at the field scale. 
Three data sets, covering different scales, were used in the application of the developed concept. The results show that the depth development of the soil-hydraulic variability in a soil profile can be explained by a single soil parameter; in contrast, the horizontal variability of the soil-hydraulic properties could not be explained with the given data sets. First model applications for a soil profile showed that including the variability of the soil parameters bulk density and clay fraction in the water transport simulations reproduces the variability of the soil hydraulics and thus the dynamics of the soil water content at the investigated profile.
10

Morgenstern, Yvonne. "Analyse und Konzeption von Messstrategien zur Erfassung der bodenhydraulischen Variabilität." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A24111.

Abstract:
The consideration of the spatial variability of unsaturated soil hydraulic characteristics remains an unsolved problem in the modelling of water and matter transport in the vadose zone. This can mainly be explained by the rather cumbersome measurement of this variability, which is both time-consuming and cost-intensive. The presented thesis analyses various measurement strategies which aim at describing the soil-hydraulic heterogeneity by a small number of proxy parameters that are easily measurable and still have a soil-physical meaning. The developed approach uses a similarity concept which groups soils into similar soil hydraulic classes. Within a class, the variability of the retention and hydraulic conductivity curves can be reduced to a single free parameter (the scaling parameter). The analysis of the correlation between the soil parameters and the scaling parameters ultimately indicates which soil parameters can be used to describe the soil hydraulic variability in a given area. This investigation forms the basis for the further development of a stochastic model which can integrate the soil-hydraulic variability into the modelling of soil water transport at the field scale. 
Three data sets, covering different scales, were used in the application of the developed concept. The results show that the depth development of the soil-hydraulic variability in a soil profile can be explained by a single soil parameter; in contrast, the horizontal variability of the soil-hydraulic properties could not be explained with the given data sets. First model applications for a soil profile showed that including the variability of the soil parameters bulk density and clay fraction in the water transport simulations reproduces the variability of the soil hydraulics and thus the dynamics of the soil water content at the investigated profile.
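The similarity concept, reducing a family of retention curves within one class to a reference curve plus one scaling parameter per soil, can be sketched numerically. The van Genuchten curve form and all parameter values below are illustrative assumptions, not the thesis's data:

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, a, n):
    """van Genuchten retention curve: volumetric water content as a
    function of pressure head h (h < 0, here in cm)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (a * np.abs(h))**n)**m

def scale_pressure_head(h, alpha_i):
    """Similar-media scaling: soil i in a class maps onto the reference
    curve via h_ref = alpha_i * h_i, so a single factor alpha_i captures
    that soil's deviation from the class reference."""
    return alpha_i * h

# Two 'similar' soils whose van Genuchten a-parameters differ by a factor
# alpha_i collapse onto the same curve once pressure heads are scaled.
h = -np.logspace(0, 3, 5)              # pressure heads from -1 to -1000 cm
alpha_i = 2.0
theta_ref = van_genuchten_theta(h, 0.05, 0.40, a=0.02, n=1.8)
theta_soil = van_genuchten_theta(scale_pressure_head(h, 1.0 / alpha_i),
                                 0.05, 0.40, a=0.02 * alpha_i, n=1.8)
print(np.allclose(theta_ref, theta_soil))  # -> True
```

In practice the thesis works in the opposite direction: it asks which easily measured soil parameters (e.g. bulk density, clay fraction) predict each soil's scaling factor.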
11

Hungenahalli, Shivanna Bharath. "Musculoskeletal Modeling of Ballet." Thesis, Linköpings universitet, Mekanik och hållfasthetslära, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-171924.

Abstract:
This thesis describes the workflow for simulating motion-capture data in the AnyBody Modeling System. The motion-capture data used are ballet movements performed by dancers of the Östgöta ballet and dance academy: the arabesque on demi-pointe, performed by two dancers, and the pirouette, performed by one dancer. The movements were recorded by placing markers on the dancers' bodies, and the resulting motion-capture data served as input to the AnyBody Modeling System for a musculoskeletal simulation. The musculoskeletal modeling involved creating a custom Qualisys marker protocol for the markers placed on the dancers, implementing that protocol on a human model in the AnyBody Modeling System using the AnyBody Managed Model Repository, and obtaining the kinematics from the motion capture. To best fit the human model to each dancer's anthropometry, the model was scaled, environmental conditions such as force plates were provided, and the marker positions were optimized through parameter identification. From the kinematics of the motion-capture data, the inverse dynamics were then simulated in the AnyBody Modeling System. The simulations yield many parameters that characterize the dancers: results such as the center of mass, the center of pressure, muscle activation, and topple angle are presented and discussed. Moreover, the models of the two dancers are compared, and conclusions are drawn about body balance, effort level, and the muscles activated during the ballet movements.
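One of the reported balance quantities, the whole-body center of mass, is the mass-weighted average of the body-segment centers. The sketch below is the generic computation with made-up segment data, not an AnyBody Modeling System API call:

```python
import numpy as np

def center_of_mass(segment_positions, segment_masses):
    """Whole-body centre of mass: the mass-weighted average of the
    segment centres (the quantity used to assess a dancer's balance
    relative to the base of support)."""
    p = np.asarray(segment_positions, dtype=float)   # (n_segments, 3)
    m = np.asarray(segment_masses, dtype=float)      # (n_segments,)
    return (m[:, None] * p).sum(axis=0) / m.sum()

# Two equal-mass segments: the COM is their midpoint (illustrative data).
com = center_of_mass([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]], [1.0, 1.0])
print(com)
```

Comparing this point's ground projection with the center of pressure from the force plates is the standard way to reason about balance and topple angle during a pose such as the arabesque.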
12

Ould-Kaddour, Latifa. "Etude par diffusion de la lumiere des systemes ternaires : polymere-polymere-solvant." Strasbourg 1, 1988. http://www.theses.fr/1988STR13006.

Abstract:
Experimental study, by light scattering, of several ternary systems comprising two different polymers in solution. The analysis of the intensity scattered at zero wave vector made it possible to characterize the thermodynamic properties, and in particular the interaction parameter between the two polymers.
13

Ye, Wenfeng. "Numerical methods for the simulation of shear wave propagation in nearly incompressible medium - Application in elastography." Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEI046.

Abstract:
Transient elastography is a medical characterization technology that estimates the stiffness of biological soft tissues. By imaging the transient propagation of a shear wave in the tissue, one can deduce the shear modulus µ. In the last decade, this technique has been used successfully to study various pathologies, particularly fibrosis and cancers. However, numerous factors such as wave reflection, boundary conditions and pre-stress disturb elastography measurements, and the quality of the mechanical characterization of the tissue can be altered. Moreover, tissues exhibit more complex mechanical properties, including viscosity, nonlinearity and anisotropy, whose characterization can improve the diagnostic value of elastography. Finite element (FE) simulations of wave propagation appear promising, since they make it possible to study the influence of intrinsic and extrinsic mechanical parameters on propagation speeds and thus to allow the identification of complex mechanical properties in real measurement cases. In this work, we develop an FE model for the propagation of nonlinear waves in soft tissues. The numerical models are validated against elastography experiments taken from the literature, and then used to evaluate the identifiability of the parameters of a nonlinear constitutive model in elastography, i.e., Landau's law. 
By measuring finite-amplitude waves and low-amplitude waves in pre-deformed states, a practical and robust method is proposed to identify the nonlinearity of homogeneous tissues using elastography experiments. The computational cost is also studied in this work. The quasi-incompressibility of biological tissues makes the compressional wave speed extremely high, which limits the time step of a simulation formulated in explicit dynamics. To deal with this difficulty, different numerical methods are presented in which the time step is controlled by the shear wave speed instead of the compressional wave speed. Various numerical examples are tested in the context of dynamic elastography; the methods are shown to be accurate for these problems, and a significant reduction of the CPU time is obtained.
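Two standard relations underlie this abstract: the elastography inversion µ = ρc² and the CFL limit on explicit time steps. The sketch below illustrates both with textbook soft-tissue values (illustrative assumptions, not the thesis's numerical schemes):

```python
def shear_modulus(c_s, rho=1000.0):
    """Elastography inversion mu = rho * c_s**2: the imaged shear-wave
    speed c_s (m/s) and tissue density rho (kg/m^3, ~1000 for soft
    tissue) give the shear modulus mu in Pa."""
    return rho * c_s**2

def explicit_time_step(h, wave_speed, cfl=0.9):
    """CFL-limited time step of an explicit solver, dt = cfl * h / c.
    In nearly incompressible tissue the compressional speed (~1500 m/s),
    not the shear speed (~1-10 m/s), sets this limit."""
    return cfl * h / wave_speed

print(shear_modulus(2.0))  # a 2 m/s shear wave implies mu = 4 kPa -> 4000.0

# Letting the shear speed govern the step, as the thesis's methods do,
# allows a far larger time step on the same 1 mm mesh:
ratio = explicit_time_step(1e-3, 2.0) / explicit_time_step(1e-3, 1500.0)
print(round(ratio))  # -> 750
```

That factor of several hundred between the two admissible time steps is precisely the CPU-time saving motivating the schemes controlled by the shear wave speed.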
14

Pérez, Pellitero Javier. "Improvement of Monte Carlo algorithms and intermolecular potentials for the modelling of alkanols, ethers, thiophenes and aromatics." Doctoral thesis, Universitat Rovira i Virgili, 2007. http://hdl.handle.net/10803/8550.

Abstract:
Durante la última década y paralelamente al incremento de la velocidad de computación, las técnicas de simulación molecular se han erigido como una importante herramienta para la predicción de propiedades físicas de sistemas de interés industrial. Estas propiedades resultan esenciales en las industrias química y petroquímica a la hora de diseñar, optimizar, simular o controlar procesos. El actual coste moderado de computadoras potentes hace que la simulación molecular se convierta en una excelente opción para proporcionar predicciones de dichas propiedades. En particular, la capacidad predictiva de estas técnicas resulta muy importante cuando en los sistemas de interés toman parte compuestos tóxicos o condiciones extremas de temperatura o presión debido a la dificultad que entraña la experimentación a dichas condiciones. La simulación molecular proporciona una alternativa a los modelos termofísicos utilizados habitualmente en la industria como es el caso de las ecuaciones de estado, modelos de coeficientes de actividad o teorías de estados correspondientes, que resultan inadecuados al intentar reproducir propiedades complejas de fluidos como es el caso de las de fluidos que presentan enlaces de hidrógeno, polímeros, etc. En particular, los métodos de Monte Carlo (MC) constituyen, junto a la dinámica molecular, una de las técnicas de simulación molecular más adecuadas para el cálculo de propiedades termofísicas. Aunque, por contra del caso de la dinámica molecular, los métodos de Monte Carlo no proporcionan información acerca del proceso molecular o las trayectorias moleculares, éstos se centran en el estudio de propiedades de equilibrio y constituyen una herramienta, en general, más eficiente para el cálculo del equilibrio de fases o la consideración de sistemas que presenten elevados tiempos de relajación debido a su bajos coeficientes de difusión y altas viscosidades. 
Los objetivos de esta tesis se centran en el desarrollo y la mejora tanto de algoritmos de simulación como de potenciales intermoleculares, factor considerado clave para el desarrollo de las técnicas de simulación de Monte Carlo. En particular, en cuanto a los algoritmos de simulación, la localización de puntos críticos de una manera precisa ha constituido un problema para los métodos habitualmente utilizados en el cálculo de equlibrio de fases, como es el método del colectivo de GIBBS. La aparición de fuertes fluctuaciones de densidad en la región crítica hace imposible obtener datos de simulación en dicha región, debido al hecho de que las simulaciones son llevadas a cabo en una caja de simulación de longitud finita que es superada por la longitud de correlación. Con el fin de proporcionar una ruta adecuada para la localización de puntos críticos tanto de componentes puros como mezclas binarias, la primera parte de esta tesis está dedicada al desarrollo y aplicación de métodos adecuados que permitan superar las dificultades encontradas en el caso de los métodos convencionales. Con este fin se combinan estudios de escalado del tamaño de sitema con técnicas de "Histogram Reweighting" (HR). La aplicación de estos métodos se ha mostrado recientemente como mucho mejor fundamentada y precisa para el cálculo de puntos críticos de sistemas sencillos como es el caso del fluido de LennardJones (LJ). En esta tesis, estas técnicas han sido combinadas con el objetivo de extender su aplicación a mezclas reales de interés industrial. Previamente a su aplicación a dichas mezclas reales, el fluido de LennardJones, capaz de reproducir el comportamiento de fluidos sencillos como es el caso de argón o metano, ha sido tomado como referencia en un paso preliminar. 
A partir de simulaciones realizadas en el colectivo gran canónico y recombinadas mediante la mencionada técnica de "Histogram Reweighting" se han obtenido los diagramas de fases tanto de fluidos puros como de mezclas binarias. A su vez se han localizado con una gran precisión los puntos críticos de dichos sistemas mediante las técnicas de escalado del tamaño de sistema. Con el fin de extender la aplicación de dichas técnicas a sistemas multicomponente, se han introducido modificaciones a los métodos de HR evitando la construcción de histogramas y el consecuente uso de recursos de memoria. Además, se ha introducido una metodología alternativa, conocida como el cálculo del cumulante de cuarto orden o parámetro de Binder, con el fin de hacer más directa la localización del punto crítico. En particular, se proponen dos posibilidades, en primer lugar la intersección del parámetro de Binder para dos tamaños de sistema diferentes, o la intersección del parámetro de Binder con el valor conocido de la correspondiente clase de universalidad combinado con estudios de escalado. Por otro lado, y en un segundo frente, la segunda parte de esta tesis está dedicada al desarrollo de potenciales intermoleculares capaces de describir las energías inter e intramoleculares de las moléculas involucradas en las simulaciones. En la última década se han desarrolldo diferentes modelos de potenciales para una gran variedad de compuestos. Uno de los más comunmente utilizados para representar hidrocarburos y otras moléculas flexibles es el de átomos unidos, donde cada grupo químico es representado por un potencial del tipo de LennardJones. El uso de este tipo de potencial resulta en una significativa disminución del tiempo de cálculo cuando se compara con modelos que consideran la presencia explícita de la totalidad de los átomos. 
In particular, the work carried out in this thesis focuses on the development of anisotropic united-atom (AUA) potentials, characterized by a displacement of the Lennard-Jones centers toward the hydrogens of each group, so that this distance becomes a third adjustable parameter alongside the two Lennard-Jones parameters.<br/>In the second part of this thesis, AUA4-type potentials have been developed for different families of compounds of industrial interest, namely thiophenes, alkanols and ethers. In the case of the thiophenes this interest stems from increasingly demanding environmental restrictions that require the removal of sulfur-containing compounds, hence the growing need for thermodynamic properties of this family of compounds, for which only a limited amount of experimental thermodynamic data exists. In order to make it possible to obtain such data through molecular simulation, we have extended the AUA4 intermolecular potential to this family of compounds. Secondly, the use of oxygenated compounds in the field of biofuels has awakened an important interest in these compounds within the petrochemical industry. In particular, the alcohols most widely used in the production of biofuels are methanol and ethanol. As with the thiophenes, we have extended the AUA4 potential to this family of compounds by parameterizing the hydroxyl group and including a set of electrostatic charges optimized to reproduce as closely as possible the electrostatic potential created by a reference molecule in vacuum.
Finally, and analogously to the case of the alkanols, the last chapter of this thesis focuses on the development of an AUA4 potential capable of quantitatively reproducing the coexistence properties of the family of ethers, compounds that are widely used as solvents.<br>In parallel with the increase in computer speed over the last decade, molecular simulation techniques have emerged as important tools to predict physical properties of systems of industrial interest. These properties are essential in the chemical and petrochemical industries in order to perform process design, optimization, simulation and process control. The now moderate cost of powerful computers makes molecular simulation an excellent tool for providing predictions of such properties. In particular, the predictive capability of molecular simulation techniques becomes very important when dealing with extreme conditions of temperature and pressure, as well as when toxic compounds are involved in the systems to be studied, since experimentation under such conditions is difficult and expensive.<br/>Consequently, alternative approaches must be considered in order to obtain the required properties. The chemical and petrochemical industries have made intensive use of thermophysical models, including equations of state, activity coefficient models and corresponding-states theories. These models have the advantage of providing good approximations with minimal computational effort. However, they are often inadequate when only a limited amount of information is available to determine the necessary parameters, or when trying to reproduce complex fluid properties such as those of hydrogen-bonding molecules, polymers, etc.
In addition, there is no way for dynamical properties to be estimated in a consistent manner.<br/>In this thesis, the HR and FSS techniques are combined with the main goal of extending these methodologies to the calculation of the vapor-liquid equilibrium and critical points of real mixtures. Before applying them to real mixtures of industrial interest, the Lennard-Jones fluid has been taken as a reference model in a preliminary step. In this case, the predictions are affected only by the omnipresent statistical errors, and not by the accuracy of the model chosen to reproduce the behavior of the real molecules or of the interatomic potential used to calculate the configurational energy of the system.<br/>The simulations have been performed in the grand canonical ensemble (GCMC) using the GIBBS code. Liquid-vapor coexistence curves have been obtained from HR techniques for pure fluids and binary mixtures, while critical parameters were obtained from FSS in order to close the phase envelope of the phase diagrams. To extend the calculations to multicomponent systems, modifications to the conventional HR techniques have been introduced that avoid the construction of histograms and the consequent need for large memory resources. In addition, an alternative methodology, known as the fourth-order cumulant or Binder parameter, has been implemented to make the location of the critical point more straightforward. In particular, we propose two possibilities: either the intersection of the Binder parameter for two different system sizes, or the intersection of the Binder parameter with the known value for the system's universality class combined with an FSS study.
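The fourth-order (Binder) cumulant mentioned above has a simple closed form, U4 = 1 − ⟨m⁴⟩/(3⟨m²⟩²), where m is the order parameter. A minimal sketch (an illustration of the quantity, not the thesis code; the sample distributions below are the textbook limiting cases):

```python
import numpy as np

def binder_cumulant(m):
    """Fourth-order (Binder) cumulant of an order-parameter sample:
    U4 = 1 - <m^4> / (3 <m^2>^2)."""
    m = np.asarray(m, dtype=float)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

# Two-peaked distribution m = +/-1 (ordered-phase limit): U4 = 2/3.
print(binder_cumulant([1.0, -1.0]))

# Zero-mean Gaussian fluctuations (disordered-phase limit): U4 -> 0,
# since <m^4> = 3 <m^2>^2 for a Gaussian.
rng = np.random.default_rng(0)
print(binder_cumulant(rng.normal(size=200_000)))
```

The crossing of U4 curves for different box sizes locates the critical point, since at criticality U4 takes a value characteristic of the universality class, independent of system size.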
The development of transferable potential models able to describe the inter- and intramolecular energies of the molecules involved in the simulations constitutes an important part of improving Monte Carlo techniques. In the last decade, potential models, also referred to as force fields, have been developed for a wide range of compounds. One of the most common approaches for modeling hydrocarbons and other flexible molecules is the united-atoms model, where each chemical group is represented by one Lennard-Jones center. This scheme results in a significant reduction of the computational time compared to all-atoms models, since the number of pair interactions goes as the square of the number of sites. Improvements on the standard united-atoms model, in which a 6-12 Lennard-Jones center of force is typically placed on top of the most significant atom, have been proposed. For instance, the AUA model displaces the Lennard-Jones centers of force towards the hydrogen atoms, turning the displacement distance into a third adjustable parameter. In this thesis we have developed AUA4 intermolecular potentials for three different families of compounds. The family of ethers is of great importance due to their applications as solvents. The other two families, thiophenes and alkanols, play important roles in the oil and gas industry: thiophenes because of current and future environmental restrictions, and alkanols because of the ever greater importance and presence of biofuels in this industry.
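To make the AUA idea concrete, the following toy sketch evaluates a 6-12 Lennard-Jones interaction between two united-atom sites whose force centers are displaced a distance delta from the heavy atom toward its hydrogens. The epsilon, sigma and delta values are placeholders for illustration, not the fitted AUA4 parameters from the thesis:

```python
import numpy as np

def lj(r, eps, sigma):
    """6-12 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def aua_center(heavy_atom, toward_hydrogens, delta):
    """AUA site: shift the LJ force center a distance delta from the
    heavy atom toward the (unit) mean direction of its hydrogens.
    delta is the third adjustable parameter alongside eps and sigma."""
    d = np.asarray(toward_hydrogens, dtype=float)
    return np.asarray(heavy_atom, dtype=float) + delta * d / np.linalg.norm(d)

# Illustrative CH2...CH2 pair; parameter values are assumptions.
eps, sigma, delta = 0.86, 3.46, 0.38          # kJ/mol, Angstrom, Angstrom
s1 = aua_center([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], delta)
s2 = aua_center([4.5, 0.0, 0.0], [0.0, -1.0, 0.0], delta)
u = lj(np.linalg.norm(s1 - s2), eps, sigma)   # attractive at this spacing
```

The displacement changes the effective site-site distance, which is why delta can be fitted together with eps and sigma against coexistence data.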
APA, Harvard, Vancouver, ISO, and other styles
15

"Approximate Multi-Parameter Inverse Scattering Using Pseudodifferential Scaling." Thesis, 2011. http://hdl.handle.net/1911/70367.

Full text
Abstract:
I propose a computationally efficient method to approximate the inverse of the normal operator arising in the multi-parameter linearized inverse problem for reflection seismology in two and three spatial dimensions. Solving the inverse problem using direct matrix methods like Gaussian elimination is computationally infeasible; indeed, even applying the normal operator requires solving large-scale PDE problems. However, under certain conditions, the normal operator is a matrix of pseudodifferential operators. This manuscript shows how to generalize Cramer's rule to approximate the inverse of such a matrix. Approximating the solution to the normal equations proceeds in two steps. First, a series of applications of the normal operator to specific permutations of the right-hand side yields a phase-space scaling of the solution; phase-space scalings are scalings in both physical space and Fourier space. Second, a correction for the phase-space scaling is computed, requiring one more application of the normal operator. The cost of approximating the inverse is a few applications of the normal operator (one for one parameter, two for two parameters, six for three parameters). The approximate inverse is an adequately accurate solution to the linearized inverse problem when it fits the data to a prescribed precision. Otherwise, the approximate inverse of the normal operator may be used to precondition Krylov subspace methods in order to refine the data fit. I validate the method on a linearized version of the Marmousi model for constant-density acoustics for the one-parameter problem. For the two-parameter problem, the inversion of a variable-density acoustics layered model corroborates the success of the proposed method; this example also details the various steps of the method.
I also apply the method to a 1D section of the Marmousi model to test its behavior on complex two-parameter layered models.
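The core trick of recovering a scaling from one extra operator application can be illustrated in a drastically simplified setting where the normal operator really is a positive pointwise scaling (a toy sketch only; the thesis works with genuine pseudodifferential operators acting in phase space, not with diagonal matrices):

```python
import numpy as np

def inverse_by_scaling(apply_N, m, eps=1e-12):
    """If N acts approximately as a positive scaling, (N m)_i ~ d_i m_i,
    then the componentwise ratio m_i^2 / (N m)_i ~ m_i / d_i approximates
    (N^{-1} m)_i at the cost of a single application of N."""
    return m * m / (apply_N(m) + eps)

# Toy case: N is an exact positive diagonal scaling, so the
# approximation is (up to the eps regularization) exact.
d = np.array([2.0, 5.0, 0.5])
apply_N = lambda v: d * v
m = np.array([4.0, 10.0, 3.0])
x = inverse_by_scaling(apply_N, m)   # approximately m / d = [2, 2, 6]
```

When the operator is only approximately a scaling, the result is a preconditioner rather than an exact inverse, which is the role it plays in the Krylov refinement mentioned above.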
APA, Harvard, Vancouver, ISO, and other styles
16

Wu, Wen-Hong, and 吳文弘. "Parameter Free Penalty Strategies and Gene Loss Free Crossover Scheme in Constrained Genetic Search." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/67448182831362651091.

Full text
Abstract:
Ph.D.<br>National Taiwan University of Science and Technology<br>Department of Mechanical Engineering<br>92<br>Genetic algorithms have been applied to many optimization problems with success, and the most common strategy for handling constraints is the use of penalty strategies. The success of a genetic search depends highly on the appropriate selection of many operational parameters, especially those associated with penalty strategies, which are often determined through trial and error or by experience. Furthermore, the gene loss problem that often exists in binary-coded genetic algorithms (BGA) becomes worse when a penalty strategy is involved. This dissertation aims to devise adaptive and parameter-free penalty strategies, the first- and second-generation self-organizing adaptive penalty strategies (SOAPS and SOAPS-II), to avoid the agonizing selection of penalty parameters and to increase the reliability of genetic searches. In this work, a hybrid-coded crossover strategy (HCC) is also proposed to increase the effectiveness of attaining the global optimum in genetic searches with small population sizes. By combining the advantages of the crossover strategies of real-coded genetic algorithms (RGA), HCC can significantly reduce the gene loss phenomenon in BGA, make the BGA search robust, and reduce the sensitivity to parameter selection. All combinations of the proposed strategies are tested and compared with combinations of other known penalty strategies on mathematical and engineering optimization problems, with favorable results. The test results also show that the combinations of HCC with SOAPS and of HCC with SOAPS-II consistently outperform the other strategy combinations.
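For context, the static exterior penalty formulation whose parameter tuning SOAPS is designed to eliminate looks as follows (a generic textbook sketch, not the dissertation's SOAPS code; the multiplier r is exactly the parameter otherwise chosen by trial and error):

```python
def penalized_objective(f, constraints, x, r):
    """Static exterior penalty: objective plus r times the summed
    squared violations of constraints g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + r * violation

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized_objective(f, [g], 2.0, 100.0))   # feasible point: 4.0
print(penalized_objective(f, [g], 0.5, 100.0))   # infeasible point: 25.25
```

Too small an r lets infeasible points win the selection step; too large an r flattens the feasible region's fitness differences, which is what motivates adaptive, parameter-free alternatives.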
APA, Harvard, Vancouver, ISO, and other styles
17

CHEN, GUO-CHENG, and 陳國成. "A study of the scaling parameter in the multiple scattering Hartree-Fock-Slater method for molecules." Thesis, 1989. http://ndltd.ncl.edu.tw/handle/11202170702052098337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

O'Neil, Timothy Paul. "Maintenance of Vertical Scales Under Conditions of Item Parameter Drift and Rasch Model-data Misfit." 2010. https://scholarworks.umass.edu/open_access_dissertations/239.

Full text
Abstract:
With scant research to draw upon with respect to the maintenance of vertical scales over time, decisions around the creation and performance of vertical scales necessarily suffer from the lack of information. Undetected item parameter drift (IPD) presents one of the greatest threats to scale maintenance within an item response theory (IRT) framework. There is also still an outstanding question as to the utility of the Rasch model as a viable underlying framework for establishing and maintaining vertical scales; even so, this model is currently used for scaling many state assessment systems. Most criticisms of the Rasch model in this context have not involved simulation, and most have not acknowledged conditions under which the model may function sufficiently well to justify its use in vertical scaling. To address these questions, vertical scales were created from real data using the Rasch and 3PL models. Ability estimates were then generated to simulate a second (Time 2) administration. These simulated data were placed onto the base vertical scales using a horizontal scaling approach and a mean-mean transformation. To examine the effects of IPD on vertical scale maintenance, several conditions of IPD were simulated to occur within each set of linking items. To evaluate the viability of using the Rasch model within a vertical scaling context, data were generated and calibrated at Time 2 within each model (Rasch and 3PL) as well as across models (Rasch data generation/3PL calibration, and vice versa). Results pertaining to the first question demonstrate that the effect of IPD on vertical scale maintenance is directly related to the percentage of drifting linking items and to the magnitude and direction of the drift.
With respect to the viability of using the Rasch model within a vertical scaling context, results suggest that the Rasch model is perfectly viable within a vertical scaling context in which the model is appropriate for the data. It is also clearly evident that where data involve varying discrimination and guessing, use of the Rasch model is inappropriate.
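The two models being compared reduce to the same item response function: the 3PL probability of a correct response is P(θ) = c + (1 − c)/(1 + e^{−a(θ−b)}), of which the Rasch model is the special case a = 1, c = 0. A sketch for illustration (the dissertation's calibrations are not reproduced here):

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response: discrimination a,
    difficulty b, pseudo-guessing lower asymptote c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_rasch(theta, b):
    """Rasch model: 3PL with a = 1 and c = 0."""
    return p_3pl(theta, 1.0, b, 0.0)

# At theta = b the Rasch probability is exactly 0.5, while with
# guessing c = 0.2 it is 0.2 + 0.8/2 = 0.6 -- a lower asymptote
# the Rasch model cannot represent, whatever its difficulty estimate.
print(p_rasch(0.0, 0.0), p_3pl(0.0, 1.2, 0.0, 0.2))
```

This is the mechanism behind the cross-model misfit result: data generated with varying a and nonzero c cannot be matched by any choice of the single Rasch parameter b.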
APA, Harvard, Vancouver, ISO, and other styles
19

Hansen, Cayen [Verfasser]. "Lokale Metronidazol-Applikation als Ergänzung zum subgingivalen Scaling : Auswertung klinischer und mikrobiologischer Parameter über 1 Jahr / vorgelegt von Cayen Hansen." 2003. http://d-nb.info/972781404/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Sutradhar, Jagannath. "Transport, localization and entanglement in disordered and interacting systems: From real space to Fock space." Thesis, 2021. https://etd.iisc.ac.in/handle/2005/5592.

Full text
Abstract:
In this thesis, we explore some of the exciting physics of condensed matter systems that arises from imperfection, or disorder, and from interactions among the constituent particles. Disorder and interaction play essential roles in phenomena like transport (e.g., electrical current), localization (e.g., confinement of electrons within only a small part of a system), and entanglement (a correlation among the constituent particles). These three properties are the main focus of the thesis, which consists of six chapters. In the first chapter, we introduce a few landmarks in the field to set the stage and give an overview of the works presented in the thesis. In the second chapter, we consider quasi-disordered, or quasiperiodic, systems in one, two, and three dimensions, where the quasi-disorder is deterministic but non-repeating throughout the lattice. Metal-insulator transitions in these systems are probed by calculating conductances and their change with system size; more specifically, we look at the systems from the perspective of single-parameter scaling theory. In the third chapter, we consider both disordered and quasi-disordered systems with interactions. These systems show transitions from thermal to many-body localized phases, and we study them in Fock space, which is a natural description for an interacting system. We exploit the Fock-space structure to calculate the propagator, or Green's function, in an iterative way so as to push the system sizes accessible to exact calculations. We define a length scale in Fock space that can detect the phase transition and distinguish between the disordered and the quasi-disordered systems. In the fourth chapter, motivated by an experiment, we study the electrical current, and the noise therein, in a disordered quantum Hall system in the proximity of a superconductor.
To our surprise, the quantum Hall conductance plateau in this system comes with noise in the current, as also observed in the experiment, and the calculated quantities match the observed values quite well. In the fifth chapter, we study the entanglement entropy of an interacting fermionic system using a new saddle-point approximation, similar to a mean-field approximation, based on a newly developed path-integral approach for calculating the entanglement entropy. In the last chapter, we conclude by summarizing the important findings of the works presented in the thesis, along with some future directions.
APA, Harvard, Vancouver, ISO, and other styles
21

Zehtabian, Shohre. "Development of new scenario decomposition techniques for linear and nonlinear stochastic programming." Thèse, 2016. http://hdl.handle.net/1866/16182.

Full text
Abstract:
In the literature on optimization problems under uncertainty, a common approach for dealing with two- and multi-stage problems is to use scenario analysis. To do so, the uncertainty of some data in the problem is modeled by stage-specific random vectors with finite supports. Each realization is called a scenario. By using scenarios, it is possible to study smaller versions (subproblems) of the underlying problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods in multi-stage stochastic programming. In spite of its full decomposition over scenarios, the efficiency of progressive hedging is highly sensitive to some practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods, and design a novel adaptive strategy that aims to better follow the algorithm's progress. Numerical experiments on linear multistage stochastic test problems suggest that most of the existing techniques may exhibit premature convergence to a sub-optimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears to be robust and efficient, converging to optimality in all our experiments and being the fastest in most of them. For the question of handling the quadratic term, we review some existing techniques and suggest replacing the quadratic term with a linear one. Although this method has yet to be tested, we have the intuition that it will reduce some numerical and theoretical difficulties of progressive hedging in linear problems.
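On a toy two-stage problem, the progressive hedging iteration with a fixed penalty parameter rho can be sketched as follows (illustrative only; the thesis studies adaptive rho strategies on much larger linear multistage problems, and the closed-form scenario solve below holds only for this quadratic toy objective):

```python
import numpy as np

def progressive_hedging(d, prob, rho=1.0, iters=200):
    """PH for min E[(x - d_s)^2] with a single first-stage variable.
    Each scenario subproblem argmin (x - d_s)^2 + w_s*x + rho/2*(x - xbar)^2
    has the closed-form minimizer used below; the multipliers w_s enforce
    the nonanticipativity constraint x_s = xbar."""
    d, prob = np.asarray(d, dtype=float), np.asarray(prob, dtype=float)
    w = np.zeros_like(d)
    xbar = prob @ d
    for _ in range(iters):
        x = (2.0 * d - w + rho * xbar) / (2.0 + rho)   # scenario solves
        xbar = prob @ x                                 # implementable policy
        w += rho * (x - xbar)                           # multiplier update
    return xbar

# The probability-weighted mean of d minimizes E[(x - d_s)^2],
# so the iterates should hedge toward 2.0 here.
print(progressive_hedging([1.0, 3.0], [0.5, 0.5]))
```

The penalty parameter rho controls how strongly scenario solutions are pulled toward the implementable policy xbar each iteration, which is exactly why its choice governs the speed and quality of convergence discussed above.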
APA, Harvard, Vancouver, ISO, and other styles