Academic literature on the topic 'Penalty parameter and scaling parameter'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Penalty parameter and scaling parameter.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Penalty parameter and scaling parameter"

1

Gao, Xiangyu, Xian Zhang, and Yantao Wang. "A Simple Exact Penalty Function Method for Optimal Control Problem with Continuous Inequality Constraints." Abstract and Applied Analysis 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/752854.

Full text
Abstract:
We consider an optimal control problem subject to the terminal state equality constraint and continuous inequality constraints on the control and the state. By using the control parametrization method used in conjunction with a time scaling transform, the constrained optimal control problem is approximated by an optimal parameter selection problem with the terminal state equality constraint and continuous inequality constraints on the control and the state. On this basis, a simple exact penalty function method is used to transform the constrained optimal parameter selection problem into a sequence of approximate unconstrained optimal control problems. It is shown that, if the penalty parameter is sufficiently large, the locally optimal solutions of these approximate unconstrained optimal control problems converge to the solution of the original optimal control problem. Finally, numerical simulations on two examples demonstrate the effectiveness of the proposed method.
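The core mechanism in this abstract — replacing a constrained problem with an unconstrained one whose minimizers coincide once the penalty parameter is large enough — can be sketched on a toy problem (the objective, constraint, and penalty weights below are illustrative assumptions, not the paper's formulation):

```python
def f(x):
    return (x - 2.0) ** 2          # objective; unconstrained minimum at x = 2

def g(x):
    return x - 1.0                 # constraint g(x) <= 0, i.e. x <= 1

def penalized(x, sigma):
    # Exact (non-smooth) penalty: feasible points are unchanged,
    # infeasible points pay sigma times the violation.
    return f(x) + sigma * max(0.0, g(x))

def minimize_1d(func, lo=-5.0, hi=5.0, steps=100001):
    # Dense grid search -- crude but dependency-free.
    best_x, best_v = lo, func(lo)
    for i in range(1, steps):
        x = lo + (hi - lo) * i / (steps - 1)
        v = func(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# With a small penalty parameter the minimizer is still infeasible;
# with a sufficiently large one it moves onto the constraint boundary.
x_small = minimize_1d(lambda x: penalized(x, sigma=0.5))
x_large = minimize_1d(lambda x: penalized(x, sigma=10.0))
print(round(x_small, 2), round(x_large, 2))
```

For σ = 0.5 the penalized minimizer still sits in the infeasible region (x ≈ 1.75); for σ = 10 it lands on the boundary x = 1, illustrating the "sufficiently large penalty parameter" condition the abstract proves for the control problem.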
APA, Harvard, Vancouver, ISO, and other styles
2

Hothazie, Mihai-Vladut, Georgiana Ichim, and Mihai-Victor Pricop. "Development and validation of constraints handling in a Differential Evolution optimizer." INCAS BULLETIN 12, no. 1 (2020): 59–66. http://dx.doi.org/10.13111/2066-8201.2020.12.1.6.

Full text
Abstract:
Research work requires independent, portable optimization tools for many applications, most often for problems where derivability of objective functions is not satisfied. Differential evolution optimization represents an alternative to the more complex, encryption based genetic algorithms. Various packages are available as freeware, but they lack constraints handling, while constrained optimizations packages are commercially available. However, the literature devoted to constraints treatment is significant and the current work is devoted to the implementation of such an optimizer, to be applied in low-fidelity optimization processes. The parameter free penalty scheme is adopted for implementation, and the code is validated against the CEC2006 benchmark test problems and compared with the genetic algorithm in MATLAB. Our paper underlines the implementation of constrained differential evolution by varying two parameters, a predefined parameter for feasibility and the scaling factor, to ensure the convergence of the solution.
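The two knobs the abstract highlights — a penalty for infeasibility and the differential-evolution scaling factor F — can be seen in a minimal, dependency-free sketch (the test problem, the fixed penalty weight, and all parameter values are illustrative; the paper's scheme is parameter-free, which this static penalty deliberately simplifies):

```python
import random

random.seed(1)

def objective(x):
    return x[0] ** 2 + x[1] ** 2

def violation(x):
    # Constraint x0 + x1 >= 1, written as max(0, 1 - (x0 + x1)).
    return max(0.0, 1.0 - (x[0] + x[1]))

def fitness(x, penalty=1e3):
    # Static penalty on constraint violation (an illustrative
    # simplification of the parameter-free scheme in the paper).
    return objective(x) + penalty * violation(x)

def diff_evolution(n=30, dims=2, F=0.7, CR=0.9, gens=300, lo=-2.0, hi=2.0):
    pop = [[random.uniform(lo, hi) for _ in range(dims)] for _ in range(n)]
    for _ in range(gens):
        for i in range(n):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Mutation: difference vector weighted by the scaling factor F.
            trial = [a[k] + F * (b[k] - c[k]) for k in range(dims)]
            # Binomial crossover with rate CR.
            trial = [t if random.random() < CR else x
                     for t, x in zip(trial, pop[i])]
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial
    return min(pop, key=fitness)

best = diff_evolution()
print(best, objective(best))   # converges near (0.5, 0.5), objective ~0.5
```

The constrained optimum of this toy problem is (0.5, 0.5); tuning F trades exploration against convergence speed, which is the balance the paper's validation against the CEC2006 benchmarks examines.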
3

Dong, Zhaonan, and Alexandre Ern. "Hybrid high-order method for singularly perturbed fourth-order problems on curved domains." ESAIM: Mathematical Modelling and Numerical Analysis 55, no. 6 (2021): 3091–114. http://dx.doi.org/10.1051/m2an/2021081.

Full text
Abstract:
We propose a novel hybrid high-order method (HHO) to approximate singularly perturbed fourth-order PDEs on domains with a possibly curved boundary. The two key ideas in devising the method are the use of a Nitsche-type boundary penalty technique to weakly enforce the boundary conditions and a scaling of the weighting parameter in the stabilization operator that compares the singular perturbation parameter to the square of the local mesh size. With these ideas in hand, we derive stability and optimal error estimates over the whole range of values for the singular perturbation parameter, including the zero value for which a second-order elliptic problem is recovered. Numerical experiments illustrate the theoretical analysis.
4

Qin, Shaopeng, Gaofeng Wei, Zheng Liu, and Xuehui Shen. "Elastodynamic Analysis of Functionally Graded Beams and Plates Based on Meshless RKPM." International Journal of Applied Mechanics 13, no. 04 (2021): 2150043. http://dx.doi.org/10.1142/s1758825121500435.

Full text
Abstract:
In this paper, the reproducing kernel particle method (RKPM) is innovatively extended to the elastodynamic analysis of functionally graded material (FGM). The elastodynamics governing equations of FGM are solved by using the RKPM. The penalty factor method is used to impose the displacement boundary conditions, and the Newmark-β method is used to discretize the time. The influence of the penalty factor and the scaling parameter is discussed, and the stability and convergence of the RKPM are analyzed. Finally, the correctness of the meshless RKPM in solving the elastodynamics of FGM is verified by numerical examples of functionally graded beams and plates.
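The penalty factor method for displacement (essential) boundary conditions mentioned here has the same algebra in any Galerkin or finite-difference setting: add a large factor α to the boundary equations instead of eliminating them. A minimal 1-D sketch (finite differences rather than RKPM, with an invented grid size and α, solving -u'' = 1 on [0, 1] with u(0) = u(1) = 0):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (dense, for small systems).
    n = len(b)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

n, alpha = 21, 1e8            # grid points and penalty factor (illustrative)
h = 1.0 / (n - 1)
# Central-difference stiffness matrix for -u'' = 1 on [0, 1].
A = [[0.0] * n for _ in range(n)]
b = [1.0] * n
for i in range(1, n - 1):
    A[i][i - 1], A[i][i], A[i][i + 1] = -1 / h**2, 2 / h**2, -1 / h**2
# Penalty enforcement of u(0) = u(1) = 0: a stiff "spring" on the
# boundary rows instead of eliminating them from the system.
for i in (0, n - 1):
    A[i][i] += alpha
    b[i] = alpha * 0.0        # penalty factor times prescribed boundary value
u = solve(A, b)
print(max(u))                 # exact maximum of x(1-x)/2 is 0.125
```

Too small an α enforces the boundary condition poorly; too large a value degrades the conditioning of the system — the trade-off behind the abstract's discussion of the penalty factor's influence.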
5

Rajput, Abhishek, Alessandro Roggero, and Nathan Wiebe. "Hybridized Methods for Quantum Simulation in the Interaction Picture." Quantum 6 (August 17, 2022): 780. http://dx.doi.org/10.22331/q-2022-08-17-780.

Full text
Abstract:
Conventional methods of quantum simulation involve trade-offs that limit their applicability to specific contexts where their use is optimal. In particular, the interaction picture simulation has been found to provide substantial asymptotic advantages for some Hamiltonians, but incurs prohibitive constant factors and is incompatible with methods like qubitization. We provide a framework that allows different simulation methods to be hybridized and thereby improve performance for interaction picture simulations over known algorithms. These approaches show asymptotic improvements over the individual methods that comprise them and further make interaction picture simulation methods practical in the near term. Physical applications of these hybridized methods yield a gate complexity scaling as log²Λ in the electric cutoff Λ for the Schwinger Model and independent of the electron density for collective neutrino oscillations, outperforming the scaling for all current algorithms with these parameters. For the general problem of Hamiltonian simulation subject to dynamical constraints, these methods yield a query complexity independent of the penalty parameter λ used to impose an energy cost on time-evolution into an unphysical subspace.
6

Liu, Zheng, Gaofeng Wei, Zhiming Wang, and Jinwei Qiao. "The Meshfree Analysis of Geometrically Nonlinear Problem Based on Radial Basis Reproducing Kernel Particle Method." International Journal of Applied Mechanics 12, no. 04 (2020): 2050044. http://dx.doi.org/10.1142/s1758825120500441.

Full text
Abstract:
Based on the reproducing kernel particle method (RKPM) and the radial basis function (RBF), the radial basis reproducing kernel particle method (RRKPM) is presented for solving geometrically nonlinear problems. The advantages of the presented method are that it can eliminate the negative effect of diverse kernel functions on the computational accuracy and has greater computational accuracy and better convergence than the RKPM. Using the weak form of Galerkin integration and the Total Lagrangian (T.L.) formulation, the correlation formulae of the RRKPM for geometrically nonlinear problems are obtained. The Newton–Raphson (N-R) iterative method is utilized in the process of numerical solution. Moreover, the penalty factor, the scaling parameter, the shape parameter of the RBF and the loading step number are discussed. To prove the validity of the proposed method, several numerical examples are simulated and compared to finite element method (FEM) solutions.
7

Pająk, G. "Planning of Collision-Free Trajectory for Mobile Manipulators." International Journal of Applied Mechanics and Engineering 18, no. 2 (2013): 475–89. http://dx.doi.org/10.2478/ijame-2013-0028.

Full text
Abstract:
A method of planning sub-optimal trajectory for a mobile manipulator working in the environment including obstacles is presented. The path of the end-effector is defined as a curve that can be parameterized by any scaling parameter, the reference trajectory of a mobile platform is not needed. Constraints connected with the existence of mechanical limits for a given manipulator configuration, collision avoidance conditions and control constraints are considered. The motion of the mobile manipulator is planned in order to maximize the manipulability measure, thus to avoid manipulator singularities. The method is based on a penalty function approach and a redundancy resolution at the acceleration level. A computer example involving a mobile manipulator consisting of a nonholonomic platform and a SCARA type holonomic manipulator operating in a two-dimensional task space is also presented.
8

Talman, Richard. "Scaling behavior of circular colliders dominated by synchrotron radiation." International Journal of Modern Physics A 30, no. 23 (2015): 1544003. http://dx.doi.org/10.1142/s0217751x15440030.

Full text
Abstract:
The scaling formulas in this paper — many of which involve approximation — apply primarily to electron colliders like CEPC or FCC-ee. The more abstract “radiation dominated” phrase in the title is intended to encourage use of the formulas — though admittedly less precisely — to proton colliders like SPPC, for which synchrotron radiation begins to dominate the design in spite of the large proton mass. Optimizing a facility having an electron–positron Higgs factory, followed decades later by a p, p collider in the same tunnel, is a formidable task. The CEPC design study constitutes an initial “constrained parameter” collider design. Here the constrained parameters include tunnel circumference, cell lengths, phase advance per cell, etc. This approach is valuable, if the constrained parameters are self-consistent and close to optimal. Jumping directly to detailed design makes it possible to develop reliable, objective cost estimates on a rapid time scale. A scaling law formulation is intended to contribute to a “ground-up” stage in the design of future circular colliders. In this more abstract approach, scaling formulas can be used to investigate ways in which the design can be better optimized. Equally important, by solving the lattice matching equations in closed form, as contrasted with running computer programs such as MAD, one can obtain better intuition concerning the fundamental parametric dependencies. The ground-up approach is made especially appropriate by the seemingly impossible task of simultaneous optimization of tunnel circumference for both electrons and protons. The fact that both colliders will be radiation dominated actually simplifies the simultaneous optimization task. 
All GeV scale electron accelerators are “synchrotron radiation dominated”, meaning that all beam distributions evolve within a fraction of a second to an equilibrium state in which “heating” due to radiation fluctuations is canceled by the “cooling” in RF cavities that restore the lost energy. To the contrary, until now, the large proton to electron mass ratio has caused synchrotron radiation to be negligible in proton accelerators. The LHC beam energy has still been low enough that synchrotron radiation has little effect on beam dynamics; but the thermodynamic penalty in cooling the superconducting magnets has still made it essential for the radiated power not to be dissipated at liquid helium temperatures. Achieving this has been a significant challenge. For the next generation p, p collider this will be even more true. Furthermore, the radiation will affect beam distributions on time scales measured in minutes, for example causing the beams to be flattened, wider than they are high. In this regime scaling relations previously valid only for electrons will be applicable also to protons.
9

Weisenthal, Samuel J., Caroline Quill, et al. "2416." Journal of Clinical and Translational Science 1, S1 (2017): 17–18. http://dx.doi.org/10.1017/cts.2017.75.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: Our objective was to develop and evaluate a machine learning pipeline that uses electronic health record (EHR) data to predict acute kidney injury (AKI) during rehospitalization for patients who did not have an AKI episode in their most recent hospitalization. METHODS/STUDY POPULATION: The protocol under which this study falls was given exempt status by our institutional review board. The fully deidentified data set, containing all adult hospital admissions during a 2-year period, is a combination of administrative, laboratory, and pharmaceutical information. The administrative data set includes International Classification of Diseases, 9th Revision (ICD-9) diagnosis and procedure codes, Current Procedural Terminology, 4th Edition (CPT-4) procedure codes, diagnosis-related grouping (DRG) codes, locations visited in the hospital, discharge disposition, insurance, marital status, gender, age, ethnicity, and total length of stay. The laboratory data set includes bicarbonate, chloride, calcium, anion gap, phosphate, glomerular filtration rate, creatinine, urea nitrogen, albumin, total protein, liver function enzymes, and hemoglobin A1c. The pharmacy data set includes, for each medication, a description, pharmacologic class and subclass, and therapeutic class. Data preprocessing was performed using Python library Pandas (McKinney, 2011). Top-level binary representation (Singh, 2015) was used for diagnosis and procedure codes. Categorical variables were transformed via 1-hot encoding. Previous admissions were collapsed using rules informed by domain expertise (eg, the most recent age or sum of assigned diagnosis codes were retained as elements in the feature vector). We excluded any patient without at least 1 rehospitalization during the time window. We excluded any admission with or without AKI where AKI was also present in the most recent hospitalization. 
For comparison, we do not exclude such admissions for an identical experiment in which we considered any AKI event as a positive sample (regardless of AKI presence in the most recent hospitalization). We defined an AKI event as an assignment of any of the acute kidney failure (AKF) ICD-9 codes [584.5, AKF with lesion of tubular necrosis, 584.6, AKF with lesion of renal cortical necrosis, 584.7, AKF with lesion of renal medullary (papillary) necrosis, 584.8, AKF with other specified pathological lesion in kidney, or 584.9, AKF, unspecified]. Since diagnosis codes are believed to be specific but not sensitive for AKI (Waikar, 2006), we supplemented them using creatinine for patients who had laboratory values. Diagnosis was made according to the Kidney Disease: Improving Global Outcomes (KDIGO) Practice Guidelines (AKI defined as a 1.5-fold or greater increase in serum creatinine from baseline within 7 d or 0.3 mg/dL or greater increase in serum creatinine within 48 h). We report preliminary model discrimination via area under the receiver operating characteristic curve (AUC) using k-fold cross validation grouped by patient identifier (to ensure that admissions from the same patient would not appear in the training and validation set). It was confirmed that the prevalence of positive samples in the entire data set was maintained in each fold. Python library Sci-kit Learn (Pedregosa, 2011) was used for pipeline development, which consisted of imputation, scaling, and hyper-parameter tuning for penalized (l1 and l2 norm) logistic regression, random forest, and multilayer perceptron classifiers. All experiments were stored in IPython (Pérez, 2007) notebooks for easy viewing and result reproduction. RESULTS/ANTICIPATED RESULTS: There were 107,036 adult patients that accounted for 199,545 admissions during a 2-year window. Per admission, there were at most 54 ICD-9 diagnoses, 38 ICD-9 procedures, 314 CPT-4 procedures, and 25 hospital locations visited. 
The admissions were 55% female, the average age was 46±standard deviation 20, and average length of stay was 2.5±8.0 days. We excluded 2360 admissions that involved an AKI event that directly followed an admission with an AKI event and 4130 admissions that did not involve an AKI event but directly followed an admission with an AKI event. In total, there were 4561 (5.3%) positive samples (AKI during rehospitalization without AKI in the previous stay) generated by 3699 unique patients and 81,458 negative samples (non-AKI during rehospitalization without AKI in the previous stay) generated by 31,831 unique patients. When using any AKI event as a positive sample (regardless of whether or not AKI was in the most recent stay), the prevalence was 7.3% (6921 positive samples generated by 4395 unique patients and 85,588 negative samples generated by 33,287 unique patients). Best results were achieved with a code precision of 3 digits for which we had a total of 4556 features per patient. Fitted hyper-parameters corresponding to each classifier were logistic regression with l1 penalty C as 2×10−3; logistic regression with l2 penalty C as 1×10−6; random forest number of estimators as 100, maximum depth as 3, minimum samples per leaf as 50, minimum samples per split as 10, and entropy as the splitting criterion; and multilayer perceptron l2 regularization parameter α as 15, architecture as 1 hidden layer with 5 units, and learning rate as 0.001. Five-fold stratified cross validation on the development set yielded AUC for logistic regression with l1 penalty average 0.830±0.006, logistic regression with l2 penalty 0.796±0.007, random forest 0.828±0.007, and multilayer perceptron 0.841±0.005. In an identical experiment for which an AKI event was considered a positive sample regardless of AKI presence in the most recent stay, we had 4592 features per sample with the same code precision. 
Five-fold stratified cross validation on the development set with identical settings for the hyper-parameters yielded AUC for logistic regression with l1 penalty average 0.850±0.004, logistic regression with l2 penalty 0.819±0.006, random forest 0.853±0.004, and multilayer perceptron 0.853±0.006. DISCUSSION/SIGNIFICANCE OF IMPACT: Our objective was to investigate the feasibility of using machine learning methods on EHR data to provide a personalized risk assessment for “unexpected” AKI in rehospitalized patients. Preliminary model discrimination was good, suggesting that this approach is feasible. Such a model could aid clinicians to recognize AKI risk in unsuspicious patients. The authors recognize several limitations. Since our data set corresponds to a time-window sample, patients with high frequency of hospital utilization are likely overrepresented. Similarly, our data set contains records from only 1 hospital network. Although we supplement with laboratory-based diagnosis, using diagnosis codes as labels is problematic as numerous reports suggest low sensitivity of codes for AKI. Future work includes calibration analysis, incremental updating (“online learning”), and a representation learning-based (“deep learning”) extension of the model.
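The role of the penalty parameter the pipeline tunes (C in the reported logistic regressions is the inverse of the regularization strength) can be sketched without any ML libraries — a one-feature L2-penalized logistic regression fitted by gradient descent on synthetic data (the data, learning rate, and λ values are illustrative assumptions, not the study's EHR features):

```python
import math, random

random.seed(0)
# Tiny synthetic cohort: one informative feature, binary outcome.
X = [[random.gauss(0, 1)] for _ in range(200)]
y = [1 if x[0] + random.gauss(0, 0.5) > 0 else 0 for x in X]

def fit_logistic(X, y, lam, lr=0.1, epochs=500):
    # Gradient descent on the L2-penalized negative log-likelihood.
    w, b = 0.0, 0.0
    n = len(y)
    for _ in range(epochs):
        gw = gb = 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(w * xi[0] + b)))
            gw += (p - yi) * xi[0]
            gb += (p - yi)
        w -= lr * (gw / n + lam * w)   # the penalty term shrinks the weight
        b -= lr * (gb / n)
    return w, b

w_weak, _ = fit_logistic(X, y, lam=0.001)
w_strong, _ = fit_logistic(X, y, lam=1.0)
print(w_weak, w_strong)
```

Increasing λ shrinks the fitted weight toward zero; that shrinkage is exactly the lever the hyper-parameter tuning over C exercises in the reported experiments.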
10

Ji, Xia, Jiguang Sun, and Yang Yang. "Optimal penalty parameter for C0 IPDG." Applied Mathematics Letters 37 (November 2014): 112–17. http://dx.doi.org/10.1016/j.aml.2014.06.001.

Full text
More sources

Dissertations / Theses on the topic "Penalty parameter and scaling parameter"

1

Adamu-Lema, Fikru. "Scaling and intrinsic parameter fluctuations in nano-CMOS devices." Thesis, University of Glasgow, 2005. http://theses.gla.ac.uk/7086/.

Full text
Abstract:
The core of this thesis is a thorough investigation of the scaling properties of conventional nano-CMOS MOSFETs, their physical and operational limitations and intrinsic parameter fluctuations. To support this investigation a well calibrated 35 nm physical gate length real MOSFET fabricated by Toshiba was used as a reference transistor. Prior to the start of scaling to shorter channel lengths, the simulators were calibrated against the experimentally measured characteristics of the reference device. Comprehensive numerical simulators were then used for designing the next five generations of transistors that correspond to the technology nodes of the latest International Technology Roadmap for Semiconductors (ITRS). The scaling of field effect transistors is one of the most widely studied concepts in semiconductor technology. The emphases of such studies have varied over the years, being dictated by the dominant issues faced by the microelectronics industry. The research presented in this thesis is focused on the present state of the scaling of conventional MOSFETs and its projections during the next 15 years. The electrical properties of conventional MOSFETs: threshold voltage (VT), subthreshold slope (S) and on-off currents (Ion, Ioff), which are scaled to channel lengths of 35, 25, 18, 13, and 9 nm, have been investigated. In addition, the channel doping profile and the corresponding carrier mobility in each generation of transistors have also been studied and compared. The concern of limited solid solubility of dopants in silicon is also addressed along with the problem of high channel doping concentrations in scaled devices. The other important issue associated with the scaling of conventional MOSFETs is the intrinsic parameter fluctuations (IPF) due to discrete random dopants in the inversion layer and the effects of gate Line Edge Roughness (LER). 
The variations of the three important MOSFET parameters (Ioff, VT and Ion), induced by random discrete dopants and LER, have been comprehensively studied in the thesis. Finally, one of the promising emerging CMOS transistor architectures, the Ultra Thin Body (UTB) SOI MOSFET, which is expected to replace the conventional MOSFET, has been investigated from the scaling point of view.
2

Charonko, John James. "A Nondimensional Scaling Parameter for Predicting Pressure Wave Reflection in Stented Arteries." Thesis, Virginia Tech, 2005. http://hdl.handle.net/10919/31906.

Full text
Abstract:
Coronary stents have become a very popular treatment for cardiovascular disease, historically the leading cause of death in the United States. Stents, while successful in the short term, are subject to high failure rates (up to 24% in the first six months) due to wall regrowth and clotting, probably due to a combination of abnormal mechanical stresses and disruption of the arterial blood flow. The goal of this research was to develop recommendations concerning ways in which stent design might be improved, focusing on the problem of pressure wave reflections. A one-dimensional finite-difference model was developed to predict these reflections, and effects of variations in stent and vessel properties were examined, including stent stiffness, length, and compliance transition region, as well as vessel radius and wall thickness. The model was solved using a combination of Weighted Essentially Non-Oscillatory (WENO) and Runge-Kutta methods. Over 100 cases were tested. Results showed that reasonable variations in these parameters could induce changes in reflection magnitude of up to ±50%. It was also discovered that the relationship between each of these properties and the resulting wave reflection could be described simply, and the effect of all of them together could in fact be encompassed by a single non-dimensional parameter. This parameter was titled "Stent Authority," and several variations were proposed. It is believed this parameter is a novel way of relating the energy imposed upon the arterial wall by the stent to the fraction of the incident pressure energy which is reflected from the stented region.
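While the thesis derives its reflections from a full 1-D finite-difference model, the basic mechanism — an impedance (compliance) mismatch at the stented segment reflecting part of the incident pressure wave — follows the textbook step-impedance relation (this simple formula is background, not the thesis's model or its Stent Authority parameter):

```python
def reflection_coefficient(z1, z2):
    # Pressure-wave reflection at a step from characteristic impedance
    # z1 (native artery) to z2 (stiffer stented segment):
    #   R = (z2 - z1) / (z2 + z1)
    return (z2 - z1) / (z2 + z1)

# A stiffer stent raises the wave speed and hence the impedance;
# tripling the impedance reflects half the incident pressure amplitude.
print(reflection_coefficient(1.0, 1.0))   # matched segment: no reflection
print(reflection_coefficient(1.0, 3.0))   # mismatch: R = 0.5
```

This is why softening the stent or grading its compliance transition region, as studied in the thesis, reduces the reflected fraction.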
3

Das, Narendra Narayan. "Modeling and application of soil moisture at varying spatial scales with parameter scaling." College Station, Tex.: Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2877.

Full text
4

Kayhan, Belgin. "Parameter Estimation in Generalized Partial Linear Models with Tikhonov Regularization." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612530/index.pdf.

Full text
Abstract:
Regression analysis refers to techniques for modeling and analyzing several variables in statistical learning. There are various types of regression models. In our study, we analyzed Generalized Partial Linear Models (GPLMs), which decompose the input variables into two sets and additively combine classical linear models with a nonlinear model part. By separating linear models from nonlinear ones, an inverse problem method, Tikhonov regularization, was applied to the nonlinear submodels separately, within the entire GPLM. Such a particular representation of submodels provides both a better accuracy and a better stability (regularity) under noise in the data. We aim to smooth the nonparametric part of the GPLM by using a modified form of Multivariate Adaptive Regression Splines (MARS), which is very useful for high-dimensional problems and does not impose any specific relationship between the predictor and dependent variables. Instead, it can estimate the contribution of the basis functions so that both the additive and interaction effects of the predictors are allowed to determine the dependent variable. The MARS algorithm has two steps: the forward and backward stepwise algorithms. In the first one, the model is built by adding basis functions until a maximum level of complexity is reached. On the other hand, the backward stepwise algorithm starts with removing the least significant basis functions from the model. In this study, we propose to use a penalized residual sum of squares (PRSS) instead of the backward stepwise algorithm and construct the PRSS for MARS as a Tikhonov regularization problem. Besides, we provide numerical examples with two data sets; one has interactions and the other does not. As well as studying the regularization of the nonparametric part, we also discuss theoretically the regularization of the parametric part. 
Furthermore, we make a comparison between Infinite Kernel Learning (IKL) and Tikhonov regularization by using two data sets, with the difference consisting in the (non-)homogeneity of the data set. The thesis concludes with an outlook on future research.
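Tikhonov regularization as used in the thesis reduces, in its simplest linear form, to ridge regression; the effect of the regularization (penalty) parameter λ can be shown in closed form for a single predictor (the data below are made up for illustration):

```python
def ridge_1d(xs, ys, lam):
    # Closed-form Tikhonov (ridge) estimate for one predictor without
    # intercept: beta = sum(x*y) / (sum(x^2) + lambda), where lambda is
    # the regularization (penalty) parameter.
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]   # roughly y = 2x with noise
print(ridge_1d(xs, ys, 0.0), ridge_1d(xs, ys, 30.0))
```

λ = 0 recovers the least-squares slope; a larger λ shrinks it toward zero, trading bias for stability — the regularity-under-noise property the abstract emphasizes.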
5

Bharadwaj, Shashank. "Investigation of oxide thickness dependence of Fowler-Nordheim parameter B." [Tampa, Fla.] : University of South Florida, 2004. http://purl.fcla.edu/fcla/etd/SFE0000251.

Full text
6

VanDerwerken, Douglas Nielsen. "Variable Selection and Parameter Estimation Using a Continuous and Differentiable Approximation to the L0 Penalty Function." BYU ScholarsArchive, 2011. https://scholarsarchive.byu.edu/etd/2486.

Full text
Abstract:
L0 penalized likelihood procedures like Mallows' Cp, AIC, and BIC directly penalize for the number of variables included in a regression model. This is a straightforward approach to the problem of overfitting, and these methods are now part of every statistician's repertoire. However, these procedures have been shown to sometimes result in unstable parameter estimates as a result of the L0 penalty's discontinuity at zero. One proposed alternative, seamless-L0 (SELO), utilizes a continuous penalty function that mimics L0 and allows for stable estimates. Like other similar methods (e.g. LASSO and SCAD), SELO produces sparse solutions because the penalty function is non-differentiable at the origin. Because these penalized likelihoods are singular (non-differentiable) at zero, there is no closed-form solution for the extremum of the objective function. We propose a continuous and everywhere-differentiable penalty function that can have arbitrarily steep slope in a neighborhood near zero, thus mimicking the L0 penalty, but allowing for a nearly closed-form solution for the beta-hat vector. Because our function is not singular at zero, beta-hat will have no zero-valued components, although some will have been shrunk arbitrarily close thereto. We employ a BIC-selected tuning parameter used in the shrinkage step to perform zero-thresholding as well. We call the resulting vector of coefficients the ShrinkSet estimator. It is comparable to SELO in terms of model performance (selecting the truly nonzero coefficients, overall MSE, etc.), but we believe it to be more intuitive and simpler to compute. We provide strong evidence that the estimator enjoys favorable asymptotic properties, including the oracle property.
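A generic smooth surrogate of the kind the abstract describes — continuous, everywhere differentiable, arbitrarily steep near zero — is β²/(β² + ε) (this particular function is an illustrative stand-in, not the thesis's ShrinkSet penalty):

```python
def l0(beta):
    # The discontinuous L0 penalty: a unit cost for any nonzero coefficient.
    return 0.0 if beta == 0 else 1.0

def smooth_l0(beta, eps=1e-3):
    # Continuous, everywhere-differentiable surrogate: beta^2 / (beta^2 + eps).
    # As eps -> 0 it approaches the 0/1 indicator of the L0 penalty while
    # keeping a well-defined gradient at the origin.
    return beta * beta / (beta * beta + eps)

for b in (0.0, 0.01, 0.1, 1.0):
    print(b, l0(b), round(smooth_l0(b), 4))
```

Shrinking ε steepens the surrogate near the origin, so it approaches the 0/1 cost of L0 while remaining differentiable everywhere — which is what permits the nearly closed-form estimator the abstract describes, at the cost of components that are shrunk close to, but never exactly to, zero.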
7

Jurczyk, Michael Ulrich. "Shape based stereovision assistance in rehabilitation robotics." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001084.

Full text
8

Pellicer, Alborch Klaus [Verfasser], Stefan [Akademischer Betreuer] Junne, Frank [Gutachter] Delvigne, Alain [Gutachter] Sourabié, and Peter [Gutachter] Neubauer. "Cocci chain length distribution as control parameter in scaling lactic acid fermentations / Klaus Pellicer Alborch ; Gutachter: Frank Delvigne, Alain Sourabié, Peter Neubauer ; Betreuer: Stefan Junne." Berlin : Technische Universität Berlin, 2020. http://d-nb.info/1220355917/34.

Full text
9

Morgenstern, Yvonne. "Analyse und Konzeption von Messstrategien zur Erfassung der bodenhydraulischen Variabilität." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2008. http://nbn-resolving.de/urn:nbn:de:bsz:14-ds-1204887346375-13520.

Full text
Abstract:
The consideration of the spatial variability of the unsaturated soil hydraulic characteristics still remains an unsolved problem in the modelling of water and matter transport in the vadose zone. This can be mainly explained by the rather cumbersome measurement of this variability, which is both time-consuming and cost-intensive. The presented thesis analyses various measurement strategies which aim at the description of the soil-hydraulic heterogeneity by a small number of proxy parameters, which should be easily measurable and still have a soil-physical meaning. The developed approach uses a similarity concept, which groups soils into similar soil hydraulic classes. Within a class, the variability of the retention and hydraulic conductivity curves can be explained by a single parameter (scaling parameter). The analysis of the correlation between the soil parameters and the scaling parameters can eventually indicate which soil parameters can be used for describing the soil hydraulic variability in a given area. This investigation forms the basis for the further development of a stochastic model, which can integrate the soil-hydraulic variability in the modelling of soil water transport. Three data sets, all covering different scales, were subsequently used in the application of the developed concept. The results show that the depth development of the soil-hydraulic variability in a soil profile can be explained by a single soil parameter. Contrarily, the explanation of the horizontal variability of the soil-hydraulic properties was not possible with the given data sets. First model applications for a soil profile showed that including the variability of the soil parameters bulk density and clay fraction in the water transport simulations could reproduce the soil-hydraulic variability and thus the dynamics of the soil water content at the investigated profile.
APA, Harvard, Vancouver, ISO, and other styles
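The one-parameter scaling concept in this abstract resembles similar-media (Miller–Miller-type) scaling, in which one reference retention and conductivity curve plus a per-soil scaling factor reproduces a whole class. A minimal hypothetical sketch follows; the reference values and scaling factors are invented for illustration, not taken from the thesis:

```python
# Illustrative sketch of a similar-media (Miller-Miller type) scaling concept:
# within a class, each soil's hydraulic curves derive from one reference curve
# and a single scaling parameter alpha. All numbers below are invented.

def h_scaled(h_ref, alpha):
    """Pressure head of a similar soil: h_i = h_ref / alpha_i."""
    return h_ref / alpha

def K_scaled(K_ref, alpha):
    """Hydraulic conductivity of a similar soil: K_i = alpha_i**2 * K_ref."""
    return alpha ** 2 * K_ref

# Hypothetical reference values at some fixed water content theta:
h_ref, K_ref = -100.0, 1e-6   # cm, cm/s
for alpha in (0.5, 1.0, 2.0):  # one free parameter per soil in the class
    print(alpha, h_scaled(h_ref, alpha), K_scaled(K_ref, alpha))
```

The whole within-class variability is thus carried by the distribution of the single parameter `alpha`, which is what makes correlating it with easily measured soil parameters attractive.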
10

Morgenstern, Yvonne. "Analyse und Konzeption von Messstrategien zur Erfassung der bodenhydraulischen Variabilität." Doctoral thesis, Technische Universität Dresden, 2007. https://tud.qucosa.de/id/qucosa%3A24111.

Full text
Abstract:
The consideration of the spatial variability of unsaturated soil hydraulic characteristics remains an unsolved problem in the modelling of water and matter transport in the vadose zone. This is mainly because measuring this variability is cumbersome, both time-consuming and cost-intensive. This thesis analyses various measurement strategies that aim to describe soil-hydraulic heterogeneity with a small number of proxy parameters that are easily measurable and still have a soil-physical meaning. The developed approach uses a similarity concept that groups soils into hydraulically similar classes. Within a class, the variability of the retention and hydraulic conductivity curves can be reduced to a single free parameter (scaling parameter). The analysis of the correlation between soil parameters and scaling parameters eventually indicates which soil parameters can describe the soil-hydraulic variability in a given area. This investigation forms the basis for the further development of a stochastic model that integrates soil-hydraulic variability into the modelling of soil water transport at the field scale. The concept was applied to three data sets covering different scales. The results show that the depth development of soil-hydraulic variability within a soil profile can be explained by a single soil parameter, whereas the horizontal variability of the soil-hydraulic properties could not be explained with the given data sets. First model applications for a soil profile showed that including the variability of the soil parameters bulk density and clay fraction in the water transport simulations reproduces the variability of the soil hydraulics and thus the dynamics of the soil water content at the investigated profile.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Penalty parameter and scaling parameter"

1

Nardmann, Tobias. Physics-Based Compact Modeling and Parameter Extraction for Inp Heterojunction Bipolar Transistors with Special Emphasis on Material-Specific Physical Effects and Geometry Scaling. Books on Demand GmbH, 2017.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Brezin, Edouard, and Sinobu Hikami. Beta ensembles. Edited by Gernot Akemann, Jinho Baik, and Philippe Di Francesco. Oxford University Press, 2018. http://dx.doi.org/10.1093/oxfordhb/9780198744191.013.20.

Full text
Abstract:
This article deals with beta ensembles. Classical random matrix ensembles contain a parameter β, taking on the values 1, 2, and 4. This parameter, which relates to the underlying symmetry, appears as a repulsion s^β between neighbouring eigenvalues for small s. β may be regarded as a continuous positive parameter on the basis of different viewpoints of the eigenvalue probability density function for the classical random matrix ensembles - as the Boltzmann factor for a log-gas or the squared ground state wave function of a quantum many-body system. The article first considers log-gas systems before discussing the Fokker-Planck equation and the Calogero-Sutherland system. It then describes the random matrix realization of the β-generalization of the circular ensemble and concludes with an analysis of stochastic differential equations resulting from the case of the bulk scaling limit of the β-generalization of the Gaussian ensemble.
APA, Harvard, Vancouver, ISO, and other styles
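The log-gas form referenced in this abstract can be written, in one common normalization for the Gaussian case (our paraphrase, not the chapter's own notation):

```latex
% Joint eigenvalue density of the Gaussian beta ensemble in log-gas form.
% beta = 1, 2, 4 recover the orthogonal, unitary, and symplectic ensembles;
% the Vandermonde-type factor produces the ~ s^beta repulsion between
% neighbouring eigenvalues at small separation s.
p_\beta(\lambda_1,\dots,\lambda_N) \;\propto\;
  \prod_{1\le i<j\le N} |\lambda_i-\lambda_j|^{\beta}\,
  \exp\!\left(-\frac{\beta}{2}\sum_{i=1}^{N}\lambda_i^{2}\right)
```

Reading the product term as exp(β Σ log|λ_i−λ_j|) makes the log-gas interpretation explicit: β plays the role of an inverse temperature for a gas of logarithmically repelling charges.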
3

Street, Brian. The Calderón-Zygmund Theory II: Maximal Hypoellipticity. Princeton University Press, 2017. http://dx.doi.org/10.23943/princeton/9780691162515.003.0002.

Full text
Abstract:
This chapter remains in the single-parameter case and turns to the case when the metric is a Carnot–Carathéodory (or sub-Riemannian) metric. It defines a class of singular integral operators adapted to this metric. The chapter has two major themes. The first is a more general reprise of the trichotomy described in Chapter 1 (Theorem 2.0.29). The second theme is a generalization of the fact that Euclidean singular integral operators are closely related to elliptic partial differential equations. The chapter also introduces a quantitative version of the classical Frobenius theorem from differential geometry. This “quantitative Frobenius theorem” can be thought of as yielding “scaling maps” which are well adapted to the Carnot–Carathéodory geometry, and is of central use throughout the rest of the monograph.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Penalty parameter and scaling parameter"

1

Kaplan, A., and R. Tichatschke. "Proximal Penalty Method for Ill-Posed Parabolic Optimal Control Problems." In Control and Estimation of Distributed Parameter Systems. Birkhäuser Basel, 1998. http://dx.doi.org/10.1007/978-3-0348-8849-3_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ohtsuki, Tomi, and Keith Slevin. "Corrections to Single Parameter Scaling at the Anderson Transition." In Anderson Localization and Its Ramifications. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/978-3-540-45202-7_3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Yalçin, Kübra, Serhat Karaçam, and Tuğba Selcen Navruz. "Improved Performance of Adaptive UKF SLAM with Scaling Parameter." In Engineering Cyber-Physical Systems and Critical Infrastructures. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-09753-9_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Haddar, Maroua, S. Caglar Baslamisli, Fakher Chaari, and Mohamed Haddar. "On-line Adaptive Scaling Parameter in Active Disturbance Rejection Controller." In Applied Condition Monitoring. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-96181-1_7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Scardua, Leonardo Azevedo, and José Jaime da Cruz. "Adaptively Tuning the Scaling Parameter of the Unscented Kalman Filter." In Lecture Notes in Electrical Engineering. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-10380-8_41.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Fraix-Burnet, Didier. "Evolution as a Confounding Parameter in Scaling Relations for Galaxies." In Lecture Notes in Statistics. Springer New York, 2012. http://dx.doi.org/10.1007/978-1-4614-3520-4_49.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dose, V., W. Von Der Linden, and A. Garrett. "Bayesian Parameter Estimation of Nuclear Fusion Confinement Time Scaling Laws." In Maximum Entropy and Bayesian Methods. Springer Netherlands, 1996. http://dx.doi.org/10.1007/978-94-011-5430-7_29.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Huckestein, Bodo. "One-Parameter Scaling of the Localization Length in High Magnetic Fields." In Quantum Coherence in Mesoscopic Systems. Springer US, 1991. http://dx.doi.org/10.1007/978-1-4899-3698-1_30.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Garen, W., B. Meyerer, S. Udagawa, and K. Maeno. "Shock waves in mini-tubes: influence of the scaling parameter S." In Shock Waves. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-540-85181-3_110.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Lee, Chang-Ock, and Eun-Hee Park. "A Domain Decomposition Method Based on Augmented Lagrangian with an Optimized Penalty Parameter." In Lecture Notes in Computational Science and Engineering. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-18827-0_58.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Penalty parameter and scaling parameter"

1

Sun, Jiyuan, Haibo Yu, and Jianjun Zhao. "Generating Adversarial Examples Using Parameter-Free Penalty Method." In 2024 IEEE 24th International Conference on Software Quality, Reliability, and Security Companion (QRS-C). IEEE, 2024. http://dx.doi.org/10.1109/qrs-c63300.2024.00043.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Chang, Ernie, Matteo Paltenghi, Yang Li, et al. "Scaling Parameter-Constrained Language Models with Quality Data." In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track. Association for Computational Linguistics, 2024. http://dx.doi.org/10.18653/v1/2024.emnlp-industry.8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Guney, Evren, Joachim Ehrenthal, and Thomas Hanne. "Quantum Approaches to the 0/1 Multi-Knapsack Problem: QUBO Formulation, Penalty Parameter Characterization and Analysis." In Workshop on Quantum Artificial Intelligence and Optimization 2025. SCITEPRESS - Science and Technology Publications, 2025. https://doi.org/10.5220/0013387700003890.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Zhou, Yijia. "A Novel Approach for Updating the Penalty Parameter of Alternating Direction Method for the L1-norm Problem." In 2024 International Conference on New Trends in Computational Intelligence (NTCI). IEEE, 2024. https://doi.org/10.1109/ntci64025.2024.10776407.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Tanaka, Yasunari, Kazutaka Hara, Jun-Ichi Kani, and Tomoaki Yoshida. "Fiber Parameter Independent Zero-dispersion Wavelength Estimation for Penalty- and Equalizer-free 60-km Transmission at 60 Gbps PAM4 signal." In 2024 IEEE Opto-Electronics and Communications Conference (OECC). IEEE, 2024. https://doi.org/10.1109/oecc54135.2024.10975502.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Kantan, Prithvi Ravi, Sofia Dahl, and Erika G. Spaich. "Adapting Audio Mixing Principles and Tools to Parameter Mapping Sonification Design." In ICAD 2024: The 29th International Conference on Auditory Display. International Community for Auditory Display, 2024. http://dx.doi.org/10.21785/icad2024.034.

Full text
Abstract:
Designing a parameter mapping sonification (PMSon) involves defining a mapping function that determines how data variables affect audio signal parameters. The mapping function is represented using mathematical notation and/or characterized in terms of scaling, transfer function and polarity; both approaches manifest in software platforms for PMSon design. Math notation is not always directly relatable to complex design requirements, and simple characterizations lack generality and may be ambiguous - both issues hamper mapping function design, conceptualization, and dissemination. We seek to address them through knowledge transfer from audio mixing, a mature craft with strong parallels to PMSon design. For mixing, it was a versatile and universally applicable technological platform (the multitrack mixer) that supported the development of mixing technique, concepts, and recent formalizations thereof, laying the foundation for modern audio production. We posit that a PMSon design platform that adapts the essential elements of the mixer can similarly reinforce PMSon by supporting a mapping function representation directly tied to the design process. We define the correspondence between mixing and PMSon design, outline specifics of mixer functionality adaptation, and demonstrate the resulting capabilities with our proof-of-principle platform Mix-N-Map that is currently pending user testing. We believe a general PMSon framework explicitly rooted in audio mixing can potentially advance theory and practice to the benefit of PMSon designers and users alike.
APA, Harvard, Vancouver, ISO, and other styles
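To make the scaling / transfer-function / polarity characterization of a mapping function concrete, here is a minimal hypothetical sketch; the function name, parameter ranges, and the power-law transfer are our invention, not the paper's Mix-N-Map tool:

```python
# Illustrative sketch of a parameter-mapping-sonification mapping function
# characterized by scaling range, transfer function, and polarity.
# All names and ranges below are hypothetical.

def map_parameter(x, in_lo, in_hi, out_lo, out_hi, exponent=1.0, invert=False):
    """Map a data value x onto an audio-signal parameter value."""
    t = (x - in_lo) / (in_hi - in_lo)      # normalize data to [0, 1]
    t = min(max(t, 0.0), 1.0)              # clip out-of-range data
    if invert:                             # polarity
        t = 1.0 - t
    t = t ** exponent                      # transfer function (power law)
    return out_lo + t * (out_hi - out_lo)  # scale to the audio-parameter range

# e.g. map a heart rate of 110 bpm (range 40-180) onto pitch 220-880 Hz:
print(map_parameter(110.0, 40, 180, 220.0, 880.0))  # -> 550.0
```

In the mixer analogy the paper draws, such a function would sit per-channel, much like a gain/EQ stage, with polarity and transfer shape exposed as the designer-facing controls.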
7

Sun, Wei, Xiangwei Kong, Dequan He, and Xingang You. "Information Security Game Analysis with Penalty Parameter." In 2008 International Symposium on Electronic Commerce and Security. IEEE, 2008. http://dx.doi.org/10.1109/isecs.2008.149.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sun, Wei, Xiangwei Kong, Dequan He, and Xingang You. "Information Security Investment Game with Penalty Parameter." In 2008 3rd International Conference on Innovative Computing Information and Control. IEEE, 2008. http://dx.doi.org/10.1109/icicic.2008.319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Zhong, Erheng, Yue Shi, Nathan Liu, and Suju Rajan. "Scaling Factorization Machines with Parameter Server." In CIKM'16: ACM Conference on Information and Knowledge Management. ACM, 2016. http://dx.doi.org/10.1145/2983323.2983364.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kumar, N. "One-parameter scaling: Some open questions." In Ordering disorder: Prospect and retrospect in condensed matter physics. AIP, 1992. http://dx.doi.org/10.1063/1.44717.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Penalty parameter and scaling parameter"

1

McKenna, S. A., and S. J. Altman. Geostatistical simulation, parameter development and property scaling for GWTT-95. Office of Scientific and Technical Information (OSTI), 1996. http://dx.doi.org/10.2172/200687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wadlinger, E. A. Parameter scaling to produce different charged-particle beam-transport systems having identical equations of motion. Office of Scientific and Technical Information (OSTI), 1988. http://dx.doi.org/10.2172/5303564.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Zhang, Z. F., Andy L. Ward, and Glendon W. Gee. Estimating Field-Scale Hydraulic Parameters of Heterogeneous Soils Using A Combination of Parameter Scaling and Inverse Methods. Office of Scientific and Technical Information (OSTI), 2002. http://dx.doi.org/10.2172/15002667.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Kott, Phillip S. The Role of Weights in Regression Modeling and Imputation. RTI Press, 2022. http://dx.doi.org/10.3768/rtipress.2022.mr.0047.2203.

Full text
Abstract:
When fitting observations from a complex survey, the standard regression model assumes that the expected value of the difference between the dependent variable and its model-based prediction is zero, regardless of the values of the explanatory variables. A rarely failing extended regression model assumes only that the model error is uncorrelated with the model’s explanatory variables. When the standard model holds, it is possible to create alternative analysis weights that retain the consistency of the model-parameter estimates while increasing their efficiency by scaling the inverse-probability weights by an appropriately chosen function of the explanatory variables. When a regression model is used to impute for missing item values in a complex survey and when item missingness is a function of the explanatory variables of the regression model and not the item value itself, near unbiasedness of an estimated item mean requires that either the standard regression model for the item in the population holds or the analysis weights incorporate a correctly specified and consistently estimated probability of item response. By estimating the parameters of the probability of item response with a calibration equation, one can sometimes account for item missingness that is (partially) a function of the item value itself.
APA, Harvard, Vancouver, ISO, and other styles
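The report's point that rescaling inverse-probability weights by a function of the explanatory variables preserves consistency of the model-parameter estimates (when the standard model holds) can be illustrated with a minimal weighted-least-squares sketch; the data and weights below are invented:

```python
# Illustrative sketch: weighted least squares for the simple model y = a + b*x.
# Rescaling the analysis weights by a function of the explanatory variable
# should leave the estimates consistent under the standard regression model.
# Data and weights are invented for illustration.

def wls(x, y, w):
    """Closed-form weighted least squares: returns (intercept, slope)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw  # weighted mean of x
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw  # weighted mean of y
    b = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y)) \
        / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return ybar - b * xbar, b

x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 7.8]                              # roughly y = 2x
w = [1.0, 1.0, 1.0, 1.0]                              # base (inverse-probability) weights
w2 = [wi * (1 + 0.5 * xi) for wi, xi in zip(w, x)]    # scaled by f(x)
print(wls(x, y, w))    # estimates with base weights
print(wls(x, y, w2))   # different weights, similar estimates under the model
```

The efficiency gain the report discusses comes from choosing the scaling function f(x) well; this sketch only shows that the estimates stay close, not which scaling is optimal.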
5

Lillard, Scott. DTPH56-15-H-CAP02 Understanding and Mitigating the Threat of AC Induced Corrosion on Buried Pipelines. Pipeline Research Council International, Inc. (PRCI), 2017. http://dx.doi.org/10.55274/r0011875.

Full text
Abstract:
Explores new methods for assessing the threat of AC corrosion on buried pipelines. The results from this project will improve indirect inspection methods for assessing the impact of induced AC currents on pipeline corrosion rates and could be used for national and international standards. To accomplish this goal, the project has three thrust areas: laboratory studies, industrial test facility benchmarking, and in-service pipeline validation. Previous work in our lab has shown that the magnitude of the interfacial capacitance of the corroding metal is a key parameter in determining the AC corrosion rate. As such, we will investigate the interfacial capacitance that develops on pipeline steel as a function of corrosion product build-up (scaling) and soil properties such as resistivity, mineral content, and pH. In addition, we will conduct exploratory studies to determine the susceptibility of pipeline steel to environmental fracture during exposure to AC. Results from these tests will be benchmarked in experiments conducted in industrial pipeline testing facilities at Mears Integrity and Marathon Petroleum. Finally, we will validate the project by collecting indirect inspection data on an in-service pipeline in a transmission line right-of-way owned by Marathon. These data will be used as input to an AC Risk Algorithm to prioritize direct inspection of the pipeline. If permissible, the section of the pipeline identified as being at the greatest risk will be assessed using direct inspection.
APA, Harvard, Vancouver, ISO, and other styles
6

Stewart, Jonathan, Grace Parker, Joseph Harmon, et al. Expert Panel Recommendations for Ergodic Site Amplification in Central and Eastern North America. Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA, 2017. http://dx.doi.org/10.55461/tzsy8988.

Full text
Abstract:
The U.S. Geological Survey (USGS) national seismic hazard maps have historically been produced for a reference site condition of VS30 = 760 m/sec (where VS30 is time averaged shear wave velocity in the upper 30 m of the site). The resulting ground motions are modified for five site classes (A-E) using site amplification factors for peak acceleration and ranges of short- and long-oscillator periods. As a result of Project 17 recommendations, this practice is being revised: (1) maps will be produced for a range of site conditions (as represented by VS30 ) instead of a single reference condition; and (2) the use of site factors for period ranges is being replaced with period-specific factors over the period range of interest (approximately 0.1 to 10 sec). Since the development of the current framework for site amplification factors in 1992, the technical basis for the site factors used in conjunction with the USGS hazard maps has remained essentially unchanged, with only one modification (in 2014). The approach has been to constrain site amplification for low-to-moderate levels of ground shaking using inference from observed ground motions (approximately linear site response), and to use ground response simulations (recently combined with observations) to constrain nonlinear site response. Both the linear and nonlinear site response has been based on data and geologic conditions in the western U.S. (an active tectonic region). This project and a large amount of previous and contemporaneous related research (e.g., NGA-East Geotechnical Working Group for site response) has sought to provide an improved basis for the evaluation of ergodic site amplification in central and eastern North America (CENA). The term ‘ergodic’ in this context refers to regionally-appropriate, but not site-specific, site amplification models (i.e., models are appropriate for CENA generally, but would be expected to have bias for any particular site). 
The specific scope of this project was to review and synthesize relevant research results so as to provide recommendations to the USGS for the modeling of ergodic site amplification in CENA for application in the next version of USGS maps. The panel assembled for this project recommends a model provided as three terms that are additive in natural logarithmic units. Two describe linear site amplification. One of these describes VS30-scaling relative to a 760 m/sec reference, is largely empirical, and has several distinct attributes relative to models for active tectonic regions. The second linear term adjusts site amplification from the 760 m/sec reference to the CENA reference condition (used with NGA-East ground motion models) of VS = 3000 m/sec; this second term is simulation-based. The panel is also recommending a nonlinear model, which is described in a companion report [Hashash et al. 2017a]. All median model components are accompanied by models for epistemic uncertainty. The models provided in this report are recommended for application by the USGS and other entities. The models are considered applicable for VS30 = 200–2000 m/sec site conditions and oscillator periods of 0.08–5 sec. Finally, it should be understood that as ergodic models, they lack attributes that may be important for specific sites, such as resonances at site periods. Site-specific analyses are recommended to capture such effects for significant projects and for any site condition with VS30 < 200 m/sec. We recommend that future site response models for hazard applications consider a two-parameter formulation that includes a measure of site period in addition to site stiffness.
APA, Harvard, Vancouver, ISO, and other styles
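The three-term structure described in this abstract can be sketched schematically in natural-log units (the symbols are our illustrative paraphrase, not the report's own notation):

```latex
% Schematic of the recommended ergodic site-amplification model:
% three terms, additive in natural-log units.
\ln F_{\mathrm{site}} \;=\;
    \underbrace{F_{V}(V_{S30})}_{\text{empirical }V_{S30}\text{-scaling vs.\ }760~\mathrm{m/s}}
  + \underbrace{F_{760\to 3000}}_{\text{simulation-based reference adjustment}}
  + \underbrace{F_{\mathrm{nl}}}_{\text{nonlinear term (companion report)}}
```

Additivity in log units means the terms multiply in linear amplitude, so each can be developed and assigned epistemic uncertainty independently.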
7

Multiple Engine Faults Detection Using Variational Mode Decomposition and GA-K-means. SAE International, 2022. http://dx.doi.org/10.4271/2022-01-0616.

Full text
Abstract:
As a critical power source, the diesel engine is widely used in various situations. Diesel engine failure may lead to serious property losses and even accidents. Fault detection can improve the safety of diesel engines and reduce economic loss. The surface vibration signal is often used in non-disassembly fault diagnosis because of its convenient measurement and stability. This paper proposes a novel method for engine fault detection based on vibration signals using variational mode decomposition (VMD), K-means, and a genetic algorithm. The mode number of VMD dramatically affects the accuracy of extracting signal components. Therefore, a method based on the spectral energy distribution is proposed to determine this parameter, and the quadratic penalty term is optimized according to the SNR. The results show that the optimized VMD can adaptively extract the vibration signal components of the diesel engine. In practical fault diagnosis, it is difficult to obtain labeled data. A clustering algorithm can complete the classification without labeled data but is limited by low accuracy. In this paper, the optimized VMD is used to decompose and standardize the vibration signal. Then a correlation-based feature selection method is applied to obtain a reduced feature set. Finally, the results are input into a classifier combining K-means and a genetic algorithm (GA). By introducing and optimizing the genetic algorithm, the number of classes can be selected automatically, and the accuracy is significantly improved. This method can carry out adaptive multiple-fault detection on a diesel engine without labeled data. Compared with many supervised learning algorithms, the proposed method also achieves high accuracy.
APA, Harvard, Vancouver, ISO, and other styles
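As an illustration of the unsupervised classification step described in this abstract, here is a minimal 1-D K-means (Lloyd's algorithm) sketch; the feature values are invented, and the paper's GA-based automatic selection of the number of clusters is not reproduced here:

```python
# Minimal 1-D K-means (Lloyd's algorithm) sketch of the clustering step.
# Feature values are invented; the cited paper additionally uses a genetic
# algorithm to choose the number of clusters automatically.

def kmeans_1d(data, centers, iters=20):
    """Cluster scalar features around the given initial centers."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in data:  # assign each point to its nearest center
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[i].append(x)
        # move each center to the mean of its group (keep it if group is empty)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

features = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]  # e.g. hypothetical per-mode energies
print(kmeans_1d(features, [0.0, 1.0]))
```

In the paper's pipeline this step would run on the correlation-selected, dimensionality-reduced VMD features rather than on raw scalars.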