
Journal articles on the topic 'Regularized approaches'


Consult the top 50 journal articles for your research on the topic 'Regularized approaches.'


1

Suresh, G. V., and E. V. Srinivasa Reddy. "Uncertain Data Analysis with Regularized XGBoost." Webology 19, no. 1 (2022): 3722–40. http://dx.doi.org/10.14704/web/v19i1/web19245.

Full text
Abstract:
Uncertainty is a ubiquitous element in available knowledge about the real world. Data sampling error, obsolete sources, network latency, and transmission error are all factors that contribute to the uncertainty. These kinds of uncertainty have to be handled cautiously, or else the classification results could be unreliable or even erroneous. There are numerous methodologies developed to comprehend and control uncertainty in data. There are many faces for uncertainty i.e., inconsistency, imprecision, ambiguity, incompleteness, vagueness, unpredictability, noise, and unreliability. Missing infor
APA, Harvard, Vancouver, ISO, and other styles
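The entry above concerns XGBoost's built-in regularization; a minimal sketch of fitting a classifier with explicit L1/L2 penalties is shown below. The synthetic data and parameter values are illustrative assumptions, not the paper's setup.

```python
# Illustrative only: regularized XGBoost on synthetic data.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# reg_alpha (L1) and reg_lambda (L2) penalize leaf weights and damp noisy splits.
model = XGBClassifier(
    n_estimators=200,
    max_depth=3,
    learning_rate=0.1,
    reg_alpha=1.0,     # L1 penalty on leaf weights
    reg_lambda=5.0,    # L2 penalty on leaf weights
    subsample=0.8,
    eval_metric="logloss",
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```
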
2

Alexos, Antonios, Ian Domingo, and Pierre Baldi. "Improving Deep Learning Speed and Performance Through Synaptic Neural Balance." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15339–46. https://doi.org/10.1609/aaai.v39i15.33683.

Full text
Abstract:
We present a theory of synaptic neural balance and show experimentally that synaptic neural balance can improve deep learning speed and accuracy, even in data-scarce environments. Given an additive cost function (regularizer) of the synaptic weights, a neuron is said to be in balance if the total cost of its incoming weights is equal to the total cost of its outgoing weights. For large classes of networks, activation functions, and regularizers, neurons can be balanced fully or partially using scaling operations that do not change their functionality. Furthermore, these balancing operations
APA, Harvard, Vancouver, ISO, and other styles
3

Taniguchi, Michiaki, and Volker Tresp. "Averaging Regularized Estimators." Neural Computation 9, no. 5 (1997): 1163–78. http://dx.doi.org/10.1162/neco.1997.9.5.1163.

Full text
Abstract:
We compare the performance of averaged regularized estimators. We show that the improvement in performance that can be achieved by averaging depends critically on the degree of regularization which is used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting, and variance-based bagging. In any of the averaging methods, the greatest degree of improvement—if compared to the individual estimators—is achieved if no or only a small degree of regularization is used. Here, variance-based weighting and variance-based
APA, Harvard, Vancouver, ISO, and other styles
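The entry above compares averaging schemes for regularized estimators. Below is a minimal sketch, under assumed synthetic data, of simple versus variance-based averaging of lightly regularized ridge estimators; the inverse-variance weights here are one crude proxy, not the paper's exact scheme.

```python
# Sketch: average ridge estimators trained on bootstrap resamples, with
# simple and (approximate) variance-based weights.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=1.0, size=200)
X_test = rng.normal(size=(100, 10))
y_test = X_test @ w_true

preds = []
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    est = Ridge(alpha=0.1).fit(X[idx], y[idx])     # small regularization
    preds.append(est.predict(X_test))
preds = np.array(preds)                            # (n_estimators, n_test)

simple_avg = preds.mean(axis=0)

# Weight each estimator by the inverse of its spread around the ensemble mean.
ens_mean = preds.mean(axis=0)
est_var = ((preds - ens_mean) ** 2).mean(axis=1) + 1e-12
weights = (1.0 / est_var) / (1.0 / est_var).sum()
weighted_avg = weights @ preds

for name, p in [("simple average", simple_avg), ("variance-weighted", weighted_avg)]:
    print(name, "MSE:", np.mean((p - y_test) ** 2))
```
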
4

Luft, Daniel, and Volker Schulz. "Simultaneous shape and mesh quality optimization using pre-shape calculus." Control and Cybernetics 50, no. 4 (2021): 473–520. http://dx.doi.org/10.2478/candc-2021-0028.

Full text
Abstract:
Computational meshes arising from shape optimization routines commonly suffer from a decrease in mesh quality or even destruction of the mesh. In this work, we provide an approach to regularize general shape optimization problems to increase both shape and volume mesh quality. For this, we employ pre-shape calculus as established in Luft and Schulz (2021). Existence of regularized solutions is guaranteed. Further, consistency of modified pre-shape gradient systems is established. We present pre-shape gradient system modifications, which permit simultaneous shape optimization with mesh q
APA, Harvard, Vancouver, ISO, and other styles
5

Ebadat, Afrooz, Giulio Bottegal, Damiano Varagnolo, Bo Wahlberg, and Karl H. Johansson. "Regularized Deconvolution-Based Approaches for Estimating Room Occupancies." IEEE Transactions on Automation Science and Engineering 12, no. 4 (2015): 1157–68. http://dx.doi.org/10.1109/tase.2015.2471305.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Feng, Hesen, Lihong Ma, and Jing Tian. "A Dynamic Convolution Kernel Generation Method Based on Regularized Pattern for Image Super-Resolution." Sensors 22, no. 11 (2022): 4231. http://dx.doi.org/10.3390/s22114231.

Full text
Abstract:
Image super-resolution aims to reconstruct a high-resolution image from its low-resolution counterparts. Conventional image super-resolution approaches share the same spatial convolution kernel for the whole image in the upscaling modules, which neglect the specificity of content information in different positions of the image. In view of this, this paper proposes a regularized pattern method to represent spatially variant structural features in an image and further exploits a dynamic convolution kernel generation method to match the regularized pattern and improve image reconstruction perform
APA, Harvard, Vancouver, ISO, and other styles
7

Robitzsch, Alexander. "Implementation Aspects in Regularized Structural Equation Models." Algorithms 16, no. 9 (2023): 446. http://dx.doi.org/10.3390/a16090446.

Full text
Abstract:
This article reviews several implementation aspects in estimating regularized single-group and multiple-group structural equation models (SEM). It is demonstrated that approximate estimation approaches that rely on a differentiable approximation of non-differentiable penalty functions perform similarly to the coordinate descent optimization approach of regularized SEMs. Furthermore, using a fixed regularization parameter can sometimes be superior to an optimal regularization parameter selected by the Bayesian information criterion when it comes to the estimation of structural parameters. Moreo
APA, Harvard, Vancouver, ISO, and other styles
8

Robitzsch, Alexander. "Comparing Robust Linking and Regularized Estimation for Linking Two Groups in the 1PL and 2PL Models in the Presence of Sparse Uniform Differential Item Functioning." Stats 6, no. 1 (2023): 192–208. http://dx.doi.org/10.3390/stats6010012.

Full text
Abstract:
In the social sciences, the performance of two groups is frequently compared based on a cognitive test involving binary items. Item response models are often utilized for comparing the two groups. However, the presence of differential item functioning (DIF) can impact group comparisons. In order to avoid the biased estimation of groups, appropriate statistical methods for handling differential item functioning are required. This article compares the performance-regularized estimation and several robust linking approaches in three simulation studies that address the one-parameter logistic (1PL)
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Hong, Dong Lai Hao, and Xiang Yang Liu. "A Precoding Strategy for Massive MIMO System." Applied Mechanics and Materials 568-570 (June 2014): 1278–81. http://dx.doi.org/10.4028/www.scientific.net/amm.568-570.1278.

Full text
Abstract:
The computational complexity of precoding increases with the system dimensions in massive multiple-input multiple-output systems. A precoding scheme based on truncated polynomial expansion is proposed, and its hardware implementation is described to show the advantages of the algorithm over conventional regularized zero-forcing precoding. Finally, under different channel conditions, the simulation results show that the average achievable rate approaches that of regularized zero-forcing precoding as the expansion order grows, while the polynomial order does not need to scale with
APA, Harvard, Vancouver, ISO, and other styles
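As background for the entry above, here is a hedged sketch of regularized zero-forcing precoding and a truncated polynomial (Neumann-series) approximation of the matrix inverse it requires. This is not the paper's algorithm; the antenna/user counts, regularization factor, and the convergence scaling are assumptions chosen only to make the demo run.

```python
# Sketch: RZF precoder W = H^H (H H^H + alpha I)^{-1}, plus a truncated
# polynomial approximation of the inverse.
import numpy as np

rng = np.random.default_rng(2)
M, K = 64, 8                                   # base-station antennas, users
H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
alpha = K / 10.0                               # regularization (roughly K / SNR)
A = H @ H.conj().T + alpha * np.eye(K)

W_rzf = H.conj().T @ np.linalg.inv(A)          # exact RZF precoder

# Polynomial approximation of A^{-1}: sum_{l<J} (I - A/c)^l / c, with c taken
# from the exact eigenvalue range purely so the illustration converges.
eigs = np.linalg.eigvalsh(A)
c = (eigs.max() + eigs.min()) / 2
B = np.eye(K) - A / c
A_inv_poly = sum(np.linalg.matrix_power(B, l) for l in range(8)) / c
W_poly = H.conj().T @ A_inv_poly

print("relative precoder error:",
      np.linalg.norm(W_poly - W_rzf) / np.linalg.norm(W_rzf))
```
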
10

Leen, Todd K. "From Data Distributions to Regularization in Invariant Learning." Neural Computation 7, no. 5 (1995): 974–81. http://dx.doi.org/10.1162/neco.1995.7.5.974.

Full text
Abstract:
Ideally pattern recognition machines provide constant output when the inputs are transformed under a group G of desired invariances. These invariances can be achieved by enhancing the training data to include examples of inputs transformed by elements of G, while leaving the corresponding targets unchanged. Alternatively the cost function for training can include a regularization term that penalizes changes in the output when the input is transformed under the group. This paper relates the two approaches, showing precisely the sense in which the regularized cost function approximates the resul
APA, Harvard, Vancouver, ISO, and other styles
11

Kanzawa, Yuchi. "Entropy-Regularized Fuzzy Clustering for Non-Euclidean Relational Data and Indefinite Kernel Data." Journal of Advanced Computational Intelligence and Intelligent Informatics 16, no. 7 (2012): 784–92. http://dx.doi.org/10.20965/jaciii.2012.p0784.

Full text
Abstract:
In this paper, an entropy-regularized fuzzy clustering approach for non-Euclidean relational data and indefinite kernel data is developed that has not previously been discussed. It is important because relational data and kernel data are not always Euclidean and positive semi-definite, respectively. It is theoretically determined that an entropy-regularized approach for both non-Euclidean relational data and indefinite kernel data can be applied without using a β-spread transformation, and that two other options make the clustering results crisp for both data types. These results are in contra
APA, Harvard, Vancouver, ISO, and other styles
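For orientation, the sketch below shows plain entropy-regularized fuzzy c-means on ordinary vector data; the paper's actual contribution concerns non-Euclidean relational and indefinite kernel data, which is not reproduced here. The cluster count, regularization weight, and data are assumptions.

```python
# Entropy-regularized fuzzy c-means: memberships follow a softmax of negative
# squared distances, centers are membership-weighted means.
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(50, 2)) for m in (0.0, 2.0, 4.0)])
C, lam, n_iter = 3, 0.5, 50

V = X[rng.choice(len(X), C, replace=False)]             # initial centers
for _ in range(n_iter):
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) # squared distances
    U = np.exp(-d2 / lam)
    U /= U.sum(axis=1, keepdims=True)                   # entropy-regularized memberships
    V = (U.T @ X) / U.sum(axis=0)[:, None]              # weighted means as centers

print(np.round(V, 2))
```
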
12

Feng, Huijie, Chunpeng Wu, Guoyang Chen, Weifeng Zhang, and Yang Ning. "Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3858–65. http://dx.doi.org/10.1609/aaai.v34i04.5798.

Full text
Abstract:
Recently smoothing deep neural network based classifiers via isotropic Gaussian perturbation is shown to be an effective and scalable way to provide state-of-the-art probabilistic robustness guarantee against ℓ2 norm bounded adversarial perturbations. However, how to train a good base classifier that is accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier. It is computationally efficient and can
APA, Harvard, Vancouver, ISO, and other styles
13

van Erp, Sara. "Bayesian Regularized SEM: Current Capabilities and Constraints." Psych 5, no. 3 (2023): 814–35. http://dx.doi.org/10.3390/psych5030054.

Full text
Abstract:
An important challenge in statistical modeling is to balance how well our model explains the phenomenon under investigation with the parsimony of this explanation. In structural equation modeling (SEM), penalization approaches that add a penalty term to the estimation procedure have been proposed to achieve this balance. An alternative to the classical penalization approach is Bayesian regularized SEM in which the prior distribution serves as the penalty function. Many different shrinkage priors exist, enabling great flexibility in terms of shrinkage behavior. As a result, different types of s
APA, Harvard, Vancouver, ISO, and other styles
14

Koné, N’Golo. "Regularized Maximum Diversification Investment Strategy." Econometrics 9, no. 1 (2020): 1. http://dx.doi.org/10.3390/econometrics9010001.

Full text
Abstract:
The maximum diversification portfolio has been shown in the literature to depend on the vector of asset volatilities and the inverse of the asset return covariance matrix. In practice, these two quantities need to be replaced by their sample statistics. The estimation error associated with the use of these sample statistics may be amplified due to (near) singularity of the covariance matrix, in financial markets with many assets. This, in turn, may lead to the selection of portfolios that are far from the optimal regarding standard portfolio performance measures of the financial
APA, Harvard, Vancouver, ISO, and other styles
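A minimal sketch of the quantities the entry above discusses: maximum-diversification weights proportional to the inverse covariance times the volatility vector, computed once with the raw sample covariance and once with a shrunk covariance. The data and the Ledoit-Wolf shrinkage choice are assumptions, not the paper's regularization.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.default_rng(4)
T, N = 120, 60                           # few observations relative to assets
returns = rng.normal(scale=0.02, size=(T, N))

def max_div_weights(cov):
    sigma = np.sqrt(np.diag(cov))        # asset volatilities
    w = np.linalg.solve(cov, sigma)      # proportional to inv(cov) @ sigma
    return w / w.sum()

w_sample = max_div_weights(np.cov(returns, rowvar=False))
w_shrunk = max_div_weights(LedoitWolf().fit(returns).covariance_)

print("sample-covariance weights: min %.3f max %.3f" % (w_sample.min(), w_sample.max()))
print("shrunk-covariance weights: min %.3f max %.3f" % (w_shrunk.min(), w_shrunk.max()))
```

Comparing the two weight ranges illustrates how an ill-conditioned sample covariance produces more extreme positions than a regularized one.
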
15

Chahboun, Souhaila, and Mohamed Maaroufi. "Principal Component Analysis and Machine Learning Approaches for Photovoltaic Power Prediction: A Comparative Study." Applied Sciences 11, no. 17 (2021): 7943. http://dx.doi.org/10.3390/app11177943.

Full text
Abstract:
Nowadays, in the context of the industrial revolution 4.0, considerable volumes of data are being generated continuously from intelligent sensors and connected objects. The proper understanding and use of these amounts of data are crucial levers of performance and innovation. Machine learning is the technology that allows the full potential of big datasets to be exploited. As a branch of artificial intelligence, it enables us to discover patterns and make predictions from data based on statistics, data mining, and predictive analysis. The key goal of this study was to use machine learning appr
APA, Harvard, Vancouver, ISO, and other styles
16

Lao, Qicheng, Xiang Jiang, and Mohammad Havaei. "Hypothesis Disparity Regularized Mutual Information Maximization." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 8243–51. http://dx.doi.org/10.1609/aaai.v35i9.17003.

Full text
Abstract:
We propose a hypothesis disparity regularized mutual information maximization (HDMI) approach to tackle unsupervised hypothesis transfer---as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA)---where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner. In contrast to the prevalent HTL and UDA approaches that typically use a single hypothesis, HDMI employs multiple hypotheses to leverage the underlying distributions of the source and target hypotheses. To bette
APA, Harvard, Vancouver, ISO, and other styles
17

Herrera, Roberto H., Sergey Fomel, and Mirko van der Baan. "Automatic approaches for seismic to well tying." Interpretation 2, no. 2 (2014): SD9–SD17. http://dx.doi.org/10.1190/int-2013-0130.1.

Full text
Abstract:
Tying the synthetic trace to the actual seismic trace at the well location is a labor-intensive task that relies on the interpreter’s experience and the similarity metric used. The traditional seismic to well tie suffers from subjectivity by visually matching major events and using global crosscorrelation to measure the quality of that tying. We compared two automatic techniques that will decrease the subjectivity in the entire process. First, we evaluated the dynamic time warping method, and then, we used the local similarity attribute based on regularized shaping filters. These two methods p
APA, Harvard, Vancouver, ISO, and other styles
18

Schmid, Matthias, Olaf Gefeller, Elisabeth Waldmann, Andreas Mayr, and Tobias Hepp. "Approaches to Regularized Regression – A Comparison between Gradient Boosting and the Lasso." Methods of Information in Medicine 55, no. 05 (2016): 422–30. http://dx.doi.org/10.3414/me16-01-0033.

Full text
Abstract:
Background: Penalization and regularization techniques for statistical modeling have attracted increasing attention in biomedical research due to their advantages in the presence of high-dimensional data. A special focus lies on algorithms that incorporate automatic variable selection like the least absolute shrinkage and selection operator (lasso) or statistical boosting techniques. Objectives: Focusing on the linear regression framework, this article compares the two most-common techniques for this task, the lasso and gradient boosting, both from a methodological and a practical perspective. Metho
APA, Harvard, Vancouver, ISO, and other styles
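The entry above compares the lasso with gradient boosting for regularized regression. A minimal sketch of the two approaches on synthetic high-dimensional data follows; the data and all settings are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=200, n_informative=10,
                       noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

lasso = LassoCV(cv=5).fit(X_tr, y_tr)                    # L1 shrinkage + selection
boost = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                  max_depth=2).fit(X_tr, y_tr)   # statistical boosting

print("lasso R^2:", lasso.score(X_te, y_te),
      "| selected features:", np.sum(lasso.coef_ != 0))
print("boosting R^2:", boost.score(X_te, y_te))
```
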
19

Iosifidis, Alexandros, Anastasios Tefas, and Ioannis Pitas. "Human Action Recognition Based on Multi-View Regularized Extreme Learning Machine." International Journal on Artificial Intelligence Tools 24, no. 05 (2015): 1540020. http://dx.doi.org/10.1142/s0218213015400205.

Full text
Abstract:
In this paper, we employ multiple Single-hidden Layer Feedforward Neural Networks for multi-view action recognition. We propose an extension of the Extreme Learning Machine algorithm that is able to exploit multiple action representations and scatter information in the corresponding ELM spaces for the calculation of the networks’ parameters and the determination of optimized network combination weights. The proposed algorithm is evaluated by using two state-of-the-art action video representation approaches on five publicly available action recognition databases designed for different applicati
APA, Harvard, Vancouver, ISO, and other styles
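As a single-view illustration of the regularized extreme learning machine used in the entry above (the multi-view combination is not reproduced), here is a sketch with assumed data, hidden-layer size, and regularization constant.

```python
# Regularized ELM: random hidden layer, ridge solution for the output weights.
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 20))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                                  # one-hot targets

L, C = 200, 10.0                                  # hidden neurons, regularization
W = rng.normal(size=(20, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                            # random hidden-layer mapping

# Regularized least squares: beta = (H^T H + I/C)^{-1} H^T T
beta = np.linalg.solve(H.T @ H + np.eye(L) / C, H.T @ T)
pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```
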
20

Pantoja, N. R., and H. Rago. "Distributional Sources in General Relativity: Two Point-Like Examples Revisited." International Journal of Modern Physics D 11, no. 09 (2002): 1479–99. http://dx.doi.org/10.1142/s021827180200213x.

Full text
Abstract:
A regularization procedure that allows one to relate singularities of curvature to those of the Einstein tensor, without some of the shortcomings of previous approaches, is proposed. This regularization is obtained by requiring that (i) the density associated with the Einstein tensor of the regularized metric, rather than the Einstein tensor itself, be a distribution and (ii) the regularized metric be a continuous metric with a discontinuous extrinsic curvature across a non-null hypersurface of codimension one. In this paper, the curvature and Einstein te
APA, Harvard, Vancouver, ISO, and other styles
21

Guo, Zheng-Chu, and Yiming Ying. "Guaranteed Classification via Regularized Similarity Learning." Neural Computation 26, no. 3 (2014): 497–522. http://dx.doi.org/10.1162/neco_a_00556.

Full text
Abstract:
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Although many approaches to similarity metric learning have been proposed, there has been little theoretical study of the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establi
APA, Harvard, Vancouver, ISO, and other styles
22

Ward, Eric J., Kristin Marshall, and Mark D. Scheuerell. "Regularizing priors for Bayesian VAR applications to large ecological datasets." PeerJ 10 (November 8, 2022): e14332. http://dx.doi.org/10.7717/peerj.14332.

Full text
Abstract:
Using multi-species time series data has long been of interest for estimating inter-specific interactions with vector autoregressive models (VAR) and state space VAR models (VARSS); these methods are also described in the ecological literature as multivariate autoregressive models (MAR, MARSS). To date, most studies have used these approaches on relatively small food webs where the total number of interactions to be estimated is relatively small. However, as the number of species or functional groups increases, the length of the time series must also increase to provide enough degrees of freed
APA, Harvard, Vancouver, ISO, and other styles
23

Stevens, Abby, Rebecca Willett, Antonios Mamalakis, et al. "Graph-Guided Regularized Regression of Pacific Ocean Climate Variables to Increase Predictive Skill of Southwestern U.S. Winter Precipitation." Journal of Climate 34, no. 2 (2021): 737–54. http://dx.doi.org/10.1175/jcli-d-20-0079.1.

Full text
Abstract:
Understanding the physical drivers of seasonal hydroclimatic variability and improving predictive skill remains a challenge with important socioeconomic and environmental implications for many regions around the world. Physics-based deterministic models show limited ability to predict precipitation as the lead time increases, due to imperfect representation of physical processes and incomplete knowledge of initial conditions. Similarly, statistical methods drawing upon established climate teleconnections have low prediction skill due to the complex nature of the climate system. Recentl
APA, Harvard, Vancouver, ISO, and other styles
24

Ahrens, Achim, Christian B. Hansen, and Mark E. Schaffer. "lassopack: Model selection and prediction with regularized regression in Stata." Stata Journal: Promoting communications on statistics and Stata 20, no. 1 (2020): 176–235. http://dx.doi.org/10.1177/1536867x20909697.

Full text
Abstract:
In this article, we introduce lassopack, a suite of programs for regularized regression in Stata. lassopack implements lasso, square-root lasso, elastic net, ridge regression, adaptive lasso, and postestimation ordinary least squares. The methods are suitable for the high-dimensional setting, where the number of predictors p may be large and possibly greater than the number of observations, n. We offer three approaches for selecting the penalization (“tuning”) parameters: information criteria (implemented in lasso2), K-fold cross-validation and h-step-ahead rolling cross-validation for cross-s
APA, Harvard, Vancouver, ISO, and other styles
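The entry above describes a Stata suite; as a rough Python analogue of two of the tuning-parameter strategies it mentions (information criteria and K-fold cross-validation), not the lassopack commands themselves, consider the sketch below with assumed synthetic data.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV, LassoLarsIC

X, y = make_regression(n_samples=300, n_features=100, n_informative=8,
                       noise=3.0, random_state=1)

bic_lasso = LassoLarsIC(criterion="bic").fit(X, y)    # information-criterion tuning
cv_lasso = LassoCV(cv=10).fit(X, y)                   # 10-fold cross-validation

print("BIC-selected alpha:", bic_lasso.alpha_,
      "| nonzero coefficients:", np.sum(bic_lasso.coef_ != 0))
print("CV-selected alpha:", cv_lasso.alpha_,
      "| nonzero coefficients:", np.sum(cv_lasso.coef_ != 0))
```
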
25

Khattab, Mahmoud M., Akram M. Zeki, Ali A. Alwan, Belgacem Bouallegue, Safaa S. Matter, and Abdelmoty M. Ahmed. "Regularized Multiframe Super-Resolution Image Reconstruction Using Linear and Nonlinear Filters." Journal of Electrical and Computer Engineering 2021 (December 18, 2021): 1–16. http://dx.doi.org/10.1155/2021/8309910.

Full text
Abstract:
The primary goal of the multiframe super-resolution image reconstruction is to produce an image with a higher resolution by integrating information extracted from a set of corresponding images with low resolution, which is used in various fields. However, super-resolution image reconstruction approaches are typically affected by annoying restorative artifacts, including blurring, noise, and staircasing effect. Accordingly, it is always difficult to balance between smoothness and edge preservation. In this paper, we intend to enhance the efficiency of multiframe super-resolution image reconstru
APA, Harvard, Vancouver, ISO, and other styles
26

Jain, Subit K., Deepak Kumar, Manoj Thakur, and Rajendra K. Ray. "Proximal Support Vector Machine-Based Hybrid Approach for Edge Detection in Noisy Images." Journal of Intelligent Systems 29, no. 1 (2019): 1315–28. http://dx.doi.org/10.1515/jisys-2017-0566.

Full text
Abstract:
We propose a novel edge detector in the presence of Gaussian noise with the use of proximal support vector machine (PSVM). The edges of a noisy image are detected using a two-stage architecture: smoothing of image is first performed using regularized anisotropic diffusion, followed by the classification using PSVM, termed as regularized anisotropic diffusion-based PSVM (RAD-PSVM) method. In this process, a feature vector is formed for a pixel using the denoised coefficient’s class and the local orientations to detect edges in all possible directions in images. From the experiments, co
APA, Harvard, Vancouver, ISO, and other styles
27

Park, Minsu, Tae-Hun Kim, Eun-Seok Cho, Heebal Kim, and Hee-Seok Oh. "Genomic Selection for Adjacent Genetic Markers of Yorkshire Pigs Using Regularized Regression Approaches." Asian-Australasian Journal of Animal Sciences 27, no. 12 (2014): 1678–83. http://dx.doi.org/10.5713/ajas.2014.14236.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Kothapalli, Pavan Kumar Varma, et al. "A Linear Regularized Normalized Model for Dyslexia and ADHD Prediction Using Learning Approaches." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 11 (2023): 560–71. http://dx.doi.org/10.17762/ijritcc.v11i11.9994.

Full text
Abstract:
A learning disability called dyslexia typically affects school-age kids. Children have trouble spelling, reading, and writing words. Children who experience this problem often struggle with negative emotions, rage, frustration, and low self-esteem. Consequently, a dyslexia predictor system is required to assist children in overcoming the risk. There are many current ways of predicting dyslexia; however, they fail to deliver sufficiently high prediction accuracy. Also, this work concentrates on another disorder known as Attention-Deficit Hyperactivity Disorder (ADHD). The prediction process is more challe
APA, Harvard, Vancouver, ISO, and other styles
29

Bröcker, Jochen. "Regularized Logistic Models for Probabilistic Forecasting and Diagnostics." Monthly Weather Review 138, no. 2 (2010): 592–604. http://dx.doi.org/10.1175/2009mwr3126.1.

Full text
Abstract:
Logistic models are studied as a tool to convert dynamical forecast information (deterministic and ensemble) into probability forecasts. A logistic model is obtained by setting the logarithmic odds ratio equal to a linear combination of the inputs. As with any statistical model, logistic models will suffer from overfitting if the number of inputs is comparable to the number of forecast instances. Computational approaches to avoid overfitting by regularization are discussed, and efficient techniques for model assessment and selection are presented. A logit version of the lasso (origina
APA, Harvard, Vancouver, ISO, and other styles
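A minimal sketch of an L1-regularized ("lasso-type") logistic model of the kind the entry above applies to probability forecasts; the ensemble-forecast inputs are replaced here by synthetic predictors, which is an assumption for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 30))            # stand-in for ensemble-derived predictors
p_true = 1 / (1 + np.exp(-(1.5 * X[:, 0] - X[:, 1])))
y = rng.binomial(1, p_true)

# C is the inverse regularization strength; smaller C means stronger shrinkage,
# which guards against overfitting when predictors rival forecast instances.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
print("nonzero coefficients:", np.sum(model.coef_ != 0))
print("forecast probabilities (first 5):",
      np.round(model.predict_proba(X[:5])[:, 1], 3))
```
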
30

Patel, Pratham, Dhruv Parmar, and Gaurav Kulkarni. "Temporal Regularized Matrix Factorization for High-Dimensional Time Series Forecasting." International Journal of Scientific Research in Engineering and Management 09, no. 03 (2025): 1–9. https://doi.org/10.55041/ijsrem43415.

Full text
Abstract:
Time series forecasting plays a critical role in numerous domains, including finance, economics, climatology, and retail. The ability to predict future values based on historical patterns enables better decision-making, resource allocation, and risk management. Traditional approaches to time series forecasting include statistical methods such as autoregressive integrated moving average (ARIMA) models, exponential smoothing, and vector autoregression (VAR)[1]
APA, Harvard, Vancouver, ISO, and other styles
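To make the idea in the entry above concrete, here is a sketch of the kind of objective behind temporal regularized matrix factorization: reconstruct Y as F X while penalizing each latent series for deviating from an autoregressive prediction of its own past. The shapes, lag order, and weights are illustrative assumptions, not the paper's settings, and only the loss is computed (no solver).

```python
import numpy as np

rng = np.random.default_rng(7)
N, T, k, L = 20, 100, 4, 2           # series, time steps, latent factors, AR lags
Y = rng.normal(size=(N, T))
F = rng.normal(size=(N, k))          # factor loadings
X = rng.normal(size=(k, T))          # latent time series
W = rng.normal(size=(k, L)) * 0.1    # per-factor AR coefficients
lam_f, lam_x = 0.1, 1.0

def trmf_loss(Y, F, X, W):
    recon = np.sum((Y - F @ X) ** 2)                       # reconstruction term
    ar_pred = sum(W[:, l - 1:l] * X[:, L - l:-l] for l in range(1, L + 1))
    temporal = np.sum((X[:, L:] - ar_pred) ** 2)           # AR regularizer on latents
    return recon + lam_x * temporal + lam_f * np.sum(F ** 2)

print("objective:", trmf_loss(Y, F, X, W))
```
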
31

Anastasiadis, Johannes, and Michael Heizmann. "GAN-regularized augmentation strategy for spectral datasets." tm - Technisches Messen 89, no. 4 (2022): 278–88. http://dx.doi.org/10.1515/teme-2021-0109.

Full text
Abstract:
Artificial neural networks are used in various fields including spectral unmixing, which is used to determine the proportions of substances involved in a mixture, and achieve promising results. This is especially true if there is a non-linear relationship between the spectra of mixtures and the spectra of the substances involved (pure spectra). To achieve sufficient results, neural networks need lots of representative training data. We present a method that extends existing training data for spectral unmixing consisting of spectra of mixtures by learning the mixing characteristic usin
APA, Harvard, Vancouver, ISO, and other styles
32

Thibault, Alexis, Lénaïc Chizat, Charles Dossal, and Nicolas Papadakis. "Overrelaxed Sinkhorn–Knopp Algorithm for Regularized Optimal Transport." Algorithms 14, no. 5 (2021): 143. http://dx.doi.org/10.3390/a14050143.

Full text
Abstract:
This article describes a set of methods for quickly computing the solution to the regularized optimal transport problem. It generalizes and improves upon the widely used iterative Bregman projections algorithm (or Sinkhorn–Knopp algorithm). We first proposed to rely on regularized nonlinear acceleration schemes. In practice, such approaches lead to fast algorithms, but their global convergence is not ensured. Hence, we next proposed a new algorithm with convergence guarantees. The idea is to overrelax the Bregman projection operators, allowing for faster convergence. We proposed a simple metho
APA, Harvard, Vancouver, ISO, and other styles
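For reference, the baseline Sinkhorn-Knopp iterations for entropically regularized optimal transport are sketched below; the paper's overrelaxed variant is not reproduced. The cost matrix, marginals, and regularization strength are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, eps = 50, 60, 0.05
a = np.full(n, 1.0 / n)                              # source marginal
b = np.full(m, 1.0 / m)                              # target marginal
C = (rng.random((n, 1)) - rng.random((1, m))) ** 2   # squared-distance cost

K = np.exp(-C / eps)                                 # Gibbs kernel
u, v = np.ones(n), np.ones(m)
for _ in range(500):                                 # alternate the two scalings
    u = a / (K @ v)
    v = b / (K.T @ u)

P = u[:, None] * K * v[None, :]                      # regularized transport plan
print("row-marginal error:", np.abs(P.sum(axis=1) - a).max())
print("regularized transport cost:", np.sum(P * C))
```
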
33

Nakhaeinezhadfard, Mohammadreza, Aidan Scannell, and Joni Pajarinen. "Entropy Regularized Task Representation Learning for Offline Meta-Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 18 (2025): 19616–23. https://doi.org/10.1609/aaai.v39i18.34160.

Full text
Abstract:
Offline meta-reinforcement learning aims to equip agents with the ability to rapidly adapt to new tasks by training on data from a set of different tasks. Context-based approaches utilize a history of state-action-reward transitions – referred to as the context – to infer a representation of the current task, and then condition the agent, i.e., the policy and value function, on this task representation. Intuitively, the better the task representation captures the underlying tasks, the better the agent can generalize to new tasks. Unfortunately, context-based approaches suffer from distribution
APA, Harvard, Vancouver, ISO, and other styles
34

Dalla Libera, Alberto, Ruggero Carli, and Gianluigi Pillonetto. "Kernel-based methods for Volterra series identification." Automatica 129 (May 8, 2021): 1–11. https://doi.org/10.1016/j.automatica.2021.109686.

Full text
Abstract:
Volterra series approximate a broad range of nonlinear systems. Their identification is challenging due to the curse of dimensionality: the number of model parameters grows exponentially with the complexity of the input–output response. This fact limits the applicability of such models and has stimulated recently much research on regularized solutions. Along this line, we propose two new strategies that use kernel-based methods. First, we introduce the multiplicative polynomial kernel (MPK). Compared to the standard polynomial kernel, the MPK is equipped with a richer set of hyperparamet
APA, Harvard, Vancouver, ISO, and other styles
35

Authier, Matthieu, Anders Galatius, Anita Gilles, and Jérôme Spitz. "Of power and despair in cetacean conservation: estimation and detection of trend in abundance with noisy and short time-series." PeerJ 8 (August 7, 2020): e9436. http://dx.doi.org/10.7717/peerj.9436.

Full text
Abstract:
Many conservation instruments rely on detecting and estimating a population decline in a target species to take action. Trend estimation is difficult because of small sample size and relatively large uncertainty in abundance/density estimates of many wild populations of animals. Focusing on cetaceans, we performed a prospective analysis to estimate power, type-I, sign (type-S) and magnitude (type-M) error rates of detecting a decline in short time-series of abundance estimates with different signal-to-noise ratio. We contrasted results from both unregularized (classical) and regularized approa
APA, Harvard, Vancouver, ISO, and other styles
36

Chen, Jiangjie, Qiaoben Bao, Changzhi Sun, et al. "LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 10 (2022): 10482–91. http://dx.doi.org/10.1609/aaai.v36i10.21291.

Full text
Abstract:
Given a natural language statement, how to verify its veracity against a large-scale textual knowledge source like Wikipedia? Most existing neural models make predictions without giving clues about which part of a false claim goes wrong. In this paper, we propose LOREN, an approach for interpretable fact verification. We decompose the verification of the whole claim at phrase-level, where the veracity of the phrases serves as explanations and can be aggregated into the final verdict according to logical rules. The key insight of LOREN is to represent claim phrase veracity as three-valued laten
APA, Harvard, Vancouver, ISO, and other styles
37

Fang, Qiang, Wenzhuo Zhang, and Xitong Wang. "Visual Navigation Using Inverse Reinforcement Learning and an Extreme Learning Machine." Electronics 10, no. 16 (2021): 1997. http://dx.doi.org/10.3390/electronics10161997.

Full text
Abstract:
In this paper, we focus on the challenges of training efficiency, the designation of reward functions, and generalization in reinforcement learning for visual navigation and propose a regularized extreme learning machine-based inverse reinforcement learning approach (RELM-IRL) to improve the navigation performance. Our contributions are mainly three-fold: First, a framework combining extreme learning machine with inverse reinforcement learning is presented. This framework can improve the sample efficiency and obtain the reward function directly from the image information observed by the agent
APA, Harvard, Vancouver, ISO, and other styles
38

Wang, Bingyuan, Yao Zhang, Dongyuan Liu, et al. "Sparsity-regularized approaches to directly reconstructing hemodynamic response in brain functional diffuse optical tomography." Applied Optics 58, no. 4 (2019): 863. http://dx.doi.org/10.1364/ao.58.000863.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Hepp, Tobias, Matthias Schmid, Olaf Gefeller, Elisabeth Waldmann, and Andreas Mayr. "Addendum to: Approaches to Regularized Regression – A Comparison between Gradient Boosting and the Lasso." Methods of Information in Medicine 58, no. 01 (2019): 060. http://dx.doi.org/10.1055/s-0038-1669389.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Voronin, Sergey, Dylan Mikesell, and Guust Nolet. "Compression approaches for the regularized solutions of linear systems from large-scale inverse problems." GEM - International Journal on Geomathematics 6, no. 2 (2015): 251–94. http://dx.doi.org/10.1007/s13137-015-0073-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Abdulsamad, Hany, Oleg Arenz, Jan Peters, and Gerhard Neumann. "State-Regularized Policy Search for Linearized Dynamical Systems." Proceedings of the International Conference on Automated Planning and Scheduling 27 (June 5, 2017): 419–24. http://dx.doi.org/10.1609/icaps.v27i1.13853.

Full text
Abstract:
Trajectory-Centric Reinforcement Learning and Trajectory Optimization methods optimize a sequence of feedback-controllers by taking advantage of local approximations of model dynamics and cost functions. Stability of the policy update is a major issue for these methods, rendering them hard to apply for highly nonlinear systems. Recent approaches combine classical Stochastic Optimal Control methods with information-theoretic bounds to control the step-size of the policy update and could even be used to train nonlinear deep control policies. These methods bound the relative entropy between the n
APA, Harvard, Vancouver, ISO, and other styles
42

Chen, Jiqiang, Jie Wan, and Litao Ma. "Regularized Discrete Optimal Transport for Class-Imbalanced Classifications." Mathematics 12, no. 4 (2024): 524. http://dx.doi.org/10.3390/math12040524.

Full text
Abstract:
Imbalanced class data are commonly observed in pattern analysis, machine learning, and various real-world applications. Conventional approaches often resort to resampling techniques in order to address the imbalance, which inevitably alter the original data distribution. This paper proposes a novel classification method that leverages optimal transport for handling imbalanced data. Specifically, we establish a transport plan between training and testing data without modifying the original data distribution, drawing upon the principles of optimal transport theory. Additionally, we introduce a n
APA, Harvard, Vancouver, ISO, and other styles
43

Zhang, Hong, Dong Lai Hao, and Xiang Yang Liu. "A Precoding Algorithm Based on Truncated Polynomial Expansion for Massive MIMO System." Advanced Materials Research 945-949 (June 2014): 2315–18. http://dx.doi.org/10.4028/www.scientific.net/amr.945-949.2315.

Full text
Abstract:
A precoding algorithm based on truncated polynomial expansion is proposed for massive multiple-input multiple-output systems. Using random matrix theory, the optimal precoding weight coefficients are derived to trade off system throughput against precoding complexity. Finally, under different channel conditions, the simulation results show that the average achievable rate approaches that of regularized zero-forcing precoding as the expansion order grows, and the scheme outperforms the TPE scheme without optimization.
APA, Harvard, Vancouver, ISO, and other styles
44

Yang, Pengcheng, Boxing Chen, Pei Zhang, and Xu Sun. "Visual Agreement Regularized Training for Multi-Modal Machine Translation." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 05 (2020): 9418–25. http://dx.doi.org/10.1609/aaai.v34i05.6484.

Full text
Abstract:
Multi-modal machine translation aims at translating the source sentence into a different language in the presence of the paired image. Previous work suggests that additional visual information only provides dispensable help to translation, which is needed in several very special cases such as translating ambiguous words. To make better use of visual information, this work presents visual agreement regularized training. The proposed approach jointly trains the source-to-target and target-to-source translation models and encourages them to share the same focus on the visual information when gene
APA, Harvard, Vancouver, ISO, and other styles
45

Larsen, Christopher J., Christoph Ortner, and Endre Süli. "Existence of Solutions to a Regularized Model of Dynamic Fracture." Mathematical Models and Methods in Applied Sciences 20, no. 07 (2010): 1021–48. http://dx.doi.org/10.1142/s0218202510004520.

Full text
Abstract:
Existence and convergence results are proved for a regularized model of dynamic brittle fracture based on the Ambrosio–Tortorelli approximation. We show that the sequence of solutions to the time-discrete elastodynamics, proposed by Bourdin, Larsen & Richardson as a semidiscrete numerical model for dynamic fracture, converges, as the time-step approaches zero, to a solution of the natural time-continuous elastodynamics model, and that this solution satisfies an energy balance. We emphasize that these models do not specify crack paths a priori, but predict them, including such complicated b
APA, Harvard, Vancouver, ISO, and other styles
46

Jahan, Sohana, Moriyam Akter, Sifta Yeasmin, and Farhana Ahmed Simi. "Facial Expression Identification using Regularized Supervised Distance Preserving Projection." Dhaka University Journal of Science 69, no. 2 (2021): 70–75. http://dx.doi.org/10.3329/dujs.v69i2.56485.

Full text
Abstract:
With the rapid development of computer vision and artificial intelligence, facial expression recognition has become a reliable and key technology of advanced human-computer interaction. Nowadays, there has been a growing interest in improving expression recognition techniques. In most cases, an automatic recognition system’s efficiency depends on the represented facial expression features. Even the best classifier may fail to achieve a good recognition rate if inadequate features are provided. Therefore, feature extraction is a crucial step of the facial expression recognition proces
APA, Harvard, Vancouver, ISO, and other styles
47

Robitzsch, Alexander. "Model-Robust Estimation of Multiple-Group Structural Equation Models." Algorithms 16, no. 4 (2023): 210. http://dx.doi.org/10.3390/a16040210.

Full text
Abstract:
Structural equation models (SEM) are widely used in the social sciences. They model the relationships between latent variables in structural models, while defining the latent variables by observed variables in measurement models. Frequently, it is of interest to compare particular parameters in an SEM as a function of a discrete grouping variable. Multiple-group SEM is employed to compare structural relationships between groups. In this article, estimation approaches for the multiple-group are reviewed. We focus on comparing different estimation strategies in the presence of local model misspe
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Wen-Sheng, Pong Chi Yuen, Jian Huang, and Bin Fang. "Two-Step Single Parameter Regularization Fisher Discriminant Method for Face Recognition." International Journal of Pattern Recognition and Artificial Intelligence 20, no. 02 (2006): 189–207. http://dx.doi.org/10.1142/s0218001406004600.

Full text
Abstract:
In face recognition tasks, Fisher discriminant analysis (FDA) is one of the promising methods for dimensionality reduction and discriminant feature extraction. The objective of FDA is to find an optimal projection matrix, which maximizes the between-class distance and simultaneously minimizes the within-class distance. The main limitation of traditional FDA is the so-called Small Sample Size (3S) problem: it causes the within-class scatter matrix to be singular, so traditional FDA cannot be applied directly for pattern classification. To overcome the 3S problem, this paper proposes a novel
APA, Harvard, Vancouver, ISO, and other styles
49

Badwaik, Nisha, and Vijay Bagdi. "An Supervised Method for Detection Malware by Using Machine Learning Algorithm." International Journal of Engineering Sciences & Research Technology 5, no. 12 (2016): 287–90. https://doi.org/10.5281/zenodo.192894.

Full text
Abstract:
With the explosive increase in mobile applications, more and more threats, viruses, and benign programs are migrating from traditional PCs to mobile devices. The information these devices hold and the access they grant make them attractive targets for malicious entities. We therefore propose a probabilistic discriminative model based on regularized logistic regression for Android malware detection from decompiled source code. Many approaches for detecting Android malware have been proposed, using permission analysis, source code analysis, or dynamic analysis. In this survey paper, we us
APA, Harvard, Vancouver, ISO, and other styles
50

Bender, Philipp, Dirk Honecker, Mathias Bersweiler, et al. "Robust approaches for model-free small-angle scattering data analysis." Journal of Applied Crystallography 55, no. 3 (2022): 586–91. http://dx.doi.org/10.1107/s1600576722004356.

Full text
Abstract:
The small-angle neutron scattering data of nanostructured magnetic samples contain information regarding their chemical and magnetic properties. Often, the first step to access characteristic magnetic and structural length scales is a model-free investigation. However, due to measurement uncertainties and a restricted q range, a direct Fourier transform usually fails and results in ambiguous distributions. To circumvent these problems, different methods have been introduced to derive regularized, more stable correlation functions, with the indirect Fourier transform being the most prominent ap
APA, Harvard, Vancouver, ISO, and other styles