
Journal articles on the topic 'Penalty function'

Consult the top 50 journal articles for your research on the topic 'Penalty function.'

1

Nguyen, Binh Thanh, Yanqin Bai, Xin Yan, and Touna Yang. "Perturbed smoothing approach to the lower order exact penalty functions for nonlinear inequality constrained optimization." Tamkang Journal of Mathematics 50, no. 1 (2018): 37–60. http://dx.doi.org/10.5556/j.tkjm.50.2019.2625.

Abstract:
In this paper, we propose two new smoothing approximations to the lower order exact penalty functions for nonlinear optimization problems with inequality constraints. Error estimations between the smoothed penalty function and the nonsmooth penalty function are investigated. By using these new smooth penalty functions, a nonlinear optimization problem with inequality constraints is converted into a sequence of minimizations of a continuously differentiable function. Then, based on each of the smoothed penalty functions, we develop an algorithm for finding an approximate optimal solution of the
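As a rough illustration of the approach in entry 1, the sketch below smooths the lower order penalty term max(0, g(x))^k with 0 < k < 1 and increases the penalty parameter over a few rounds. The smoothing formula, toy problem, and solver choice are assumptions for illustration only, not the paper's scheme.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_plus(t, eps):
    # smooth stand-in for max(0, t): 0.5 * (t + sqrt(t^2 + eps^2))
    return 0.5 * (t + np.sqrt(t * t + eps * eps))

def penalized(x, f, gs, rho, k=0.5, eps=1e-6):
    # smoothed lower order penalty: f(x) + rho * sum_i smooth_plus(g_i(x))^k
    return f(x) + rho * sum(smooth_plus(g(x), eps) ** k for g in gs)

# toy problem: min (x0-2)^2 + (x1-1)^2  subject to  x0 + x1 - 2 <= 0
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
gs = [lambda x: x[0] + x[1] - 2.0]

x = np.zeros(2)
for rho in (1.0, 10.0, 100.0):   # grow the penalty parameter, warm-starting
    x = minimize(lambda y: penalized(y, f, gs, rho), x,
                 method="Nelder-Mead").x
print(x)  # close to the constrained optimum (1.5, 0.5)
```

Warm-starting each round from the previous minimizer is what keeps the increasing-parameter loop cheap in practice.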
2

Xu, Xinsheng, Zhiqing Meng, Jianwu Sun, and Rui Shen. "A penalty function method based on smoothing lower order penalty function." Journal of Computational and Applied Mathematics 235, no. 14 (2011): 4047–58. http://dx.doi.org/10.1016/j.cam.2011.02.031.

3

Agarwal, Vivek, Andrei V. Gribok, and Mongi A. Abidi. "Image restoration using L1 norm penalty function." Inverse Problems in Science and Engineering 15, no. 8 (2007): 785–809. http://dx.doi.org/10.1080/17415970600971987.

4

Meng, Zhiqing, Rui Shen, Chuangyin Dang, and Min Jiang. "Augmented Lagrangian Objective Penalty Function." Numerical Functional Analysis and Optimization 36, no. 11 (2015): 1471–92. http://dx.doi.org/10.1080/01630563.2015.1070864.

5

Feiring, Bruce R., Don T. Phillips, and Gary L. Hogg. "Penalty function techniques: A tutorial." Computers & Industrial Engineering 9, no. 4 (1985): 307–26. http://dx.doi.org/10.1016/0360-8352(85)90019-1.

6

Huyer, Waltraud, and Arnold Neumaier. "A New Exact Penalty Function." SIAM Journal on Optimization 13, no. 4 (2003): 1141–58. http://dx.doi.org/10.1137/s1052623401390537.

7

Shandiz, Roohollah Aliakbari, and Emran Tohidi. "Decrease of the Penalty Parameter in Differentiable Penalty Function Methods." Theoretical Economics Letters 01, no. 01 (2011): 8–14. http://dx.doi.org/10.4236/tel.2011.11003.

8

Zheng, Ying, Zhiqing Meng, and Rui Shen. "An M-Objective Penalty Function Algorithm Under Big Penalty Parameters." Journal of Systems Science and Complexity 29, no. 2 (2015): 455–71. http://dx.doi.org/10.1007/s11424-015-3204-3.

9

Ruan, Jiechang, Wenguang Yu, Ke Song, Yihan Sun, Yujuan Huang, and Xinliang Yu. "A Note on a Generalized Gerber–Shiu Discounted Penalty Function for a Compound Poisson Risk Model." Mathematics 7, no. 10 (2019): 891. http://dx.doi.org/10.3390/math7100891.

Abstract:
In this paper, we propose a new generalized Gerber–Shiu discounted penalty function for a compound Poisson risk model, which can be used to study the moments of the ruin time. First, by taking derivatives with respect to the original Gerber–Shiu discounted penalty function, we construct a relation between the original Gerber–Shiu discounted penalty function and our new generalized Gerber–Shiu discounted penalty function. Next, we use Laplace transform to derive a defective renewal equation for the generalized Gerber–Shiu discounted penalty function, and give a recursive method for solving the
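For readers new to the object studied in entry 9, a Monte Carlo sketch of a plain (non-generalized) Gerber–Shiu expected discounted penalty E[e^(-δT) w(deficit); ruin] for the compound Poisson model may help fix ideas. All parameters below, and the choice w(deficit) = deficit, are illustrative assumptions.

```python
import math
import random

def gerber_shiu_mc(u=5.0, c=1.5, lam=1.0, mean_claim=1.0,
                   delta=0.05, horizon=100.0, n=10000, seed=1):
    """Estimate E[exp(-delta*T) * |U(T)| ; T < horizon] for the compound
    Poisson surplus U(t) = u + c*t - S(t), with penalty = deficit at ruin."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)                    # next claim arrival
            if t > horizon:                              # treat as survival
                break
            claims += rng.expovariate(1.0 / mean_claim)  # Exp claim size
            surplus = u + c * t - claims
            if surplus < 0.0:                            # ruin occurred
                total += math.exp(-delta * t) * (-surplus)
                break
    return total / n

est = gerber_shiu_mc()
print(round(est, 3))
```

Ruin can only occur at claim instants in this model, so checking the surplus only after each claim is sufficient.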
10

Shen, Rui, Zhiqing Meng, and Min Jiang. "Smoothing Partially Exact Penalty Function of Biconvex Programming." Asia-Pacific Journal of Operational Research 37, no. 04 (2020): 2040018. http://dx.doi.org/10.1142/s0217595920400187.

Abstract:
In this paper, a smoothing partial exact penalty function of biconvex programming is studied. First, concepts of partial KKT point, partial optimum point, partial KKT condition, partial Slater constraint qualification and partial exactness are defined for biconvex programming. It is proved that the partial KKT point is equal to the partial optimum point under the condition of partial Slater constraint qualification and the penalty function of biconvex programming is partially exact if partial KKT condition holds. We prove the error bounds properties between smoothing penalty function and penal
11

Kittisuwan, Pichid. "Analytical and Simple Form of Shrinkage Functions for Non-Convex Penalty Functions in Fused Lasso Algorithm." International Journal on Artificial Intelligence Tools 29, no. 06 (2020): 2050020. http://dx.doi.org/10.1142/s0218213020500207.

Abstract:
In some circumstances, the performance of machine learning (ML) tasks is based on the quality of the signal (data) processed in these tasks. Therefore, pre-processing techniques, such as reconstruction and denoising methods, are important in ML tasks. Among reconstruction (estimation) methods, the fused lasso algorithm with a non-convex penalty function is efficient when the signal is corrupted by additive white Gaussian noise (AWGN). Therefore, this paper proposes new shrinkage functions for non-convex penalty functions, modified arctangent and exponential mod
12

Gosain, Anjana, and Kavita Sachdeva. "Handling Constraints Using Penalty Functions in Materialized View Selection." International Journal of Natural Computing Research 8, no. 2 (2019): 1–17. http://dx.doi.org/10.4018/ijncr.2019040101.

Abstract:
Materialized view selection (MVS) plays a vital role in making decisions efficiently in a data warehouse. This problem is an NP-hard constrained optimization problem. The authors have handled both the space and maintenance cost constraints using penalty functions. Three penalty function methods, i.e. static, dynamic and adaptive penalty functions, have been used for handling constraints, and the Backtracking Search Optimization algorithm (BSA) has been used for optimizing the total query processing cost. Experiments were conducted comparing the static, dynamic and adaptive penalty functions on varyi
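Entry 12's distinction between static and dynamic penalty terms can be sketched abstractly. The toy objective, constraint, coefficients, and the crude (1+1) random search below are hypothetical stand-ins, not the paper's BSA/materialized-view setting.

```python
import random

def violation(x, limit=1.0):
    # amount by which the toy constraint x[0] + x[1] <= limit is violated
    return max(0.0, x[0] + x[1] - limit)

def static_penalty(x, rho=100.0):
    return rho * violation(x) ** 2                 # fixed coefficient

def dynamic_penalty(x, gen, c=0.5, alpha=2.0):
    return (c * gen) ** alpha * violation(x) ** 2  # stiffens as search proceeds

def fitness(x, gen, mode="dynamic"):
    cost = (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2   # stand-in "query cost"
    return cost + (dynamic_penalty(x, gen) if mode == "dynamic"
                   else static_penalty(x))

random.seed(0)
best = [random.uniform(0.0, 2.0), random.uniform(0.0, 2.0)]
for gen in range(1, 300):          # crude (1+1) random search, not BSA
    cand = [b + random.gauss(0.0, 0.1) for b in best]
    if fitness(cand, gen) < fitness(best, gen):
        best = cand
print(best, violation(best))
```

The dynamic variant tolerates infeasible candidates early in the search and drives the final population toward feasibility as the coefficient grows.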
13

Govindhasamy, Vanaja, and Ganesan Kandasamy. "An interior penalty function method for solving fuzzy nonlinear programming problems." Bulletin of Electrical Engineering and Informatics 13, no. 4 (2024): 2320–30. http://dx.doi.org/10.11591/eei.v13i4.7047.

Abstract:
In this article, we investigate a fuzzy interior penalty function method for solving fuzzy nonlinear programming problems (FNLPP) based on a new fuzzy arithmetic, unconstrained optimization, and fuzzy ranking on the parametric form of triangular fuzzy numbers (TFN). The main objective of this paper is to solve constrained fuzzy nonlinear programming problems using interior penalty functions (IPF) by converting them into unconstrained optimization problems. We prove an important lemma and a convergence theorem for the interior penalty function method. Interior penalty function techniques favor s
14

Stetsyuk, Petro, Andreas Fischer, and Olha Khomiak. "Maximum Penalty Function in Linear Programming." Physico-mathematical modelling and informational technologies, no. 33 (September 6, 2021): 156–60. http://dx.doi.org/10.15407/fmmit2021.33.156.

Abstract:
A linear program can be equivalently reformulated as an unconstrained nonsmooth minimization problem, whose objective is the sum of the original objective and a penalty function with a sufficiently large penalty parameter. The article presents two methods for choosing this parameter. The first one applies to linear programs with usual linear inequality constraints. Then, we use a corresponding theorem by N.Z. Shor on the equivalence of a convex program to an unconstrained nonsmooth minimization problem. The second method is for linear programs of a special type. This means that all inequalitie
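The reformulation behind entry 14 is easy to demonstrate on a toy linear program. The data, penalty parameter, and brute-force grid below are illustrative; the article's actual contribution is how to choose the parameter, which is not reproduced here.

```python
# Generic sketch: min c.x s.t. A x <= b becomes the unconstrained nonsmooth
# problem min c.x + rho * sum_i max(0, (A x - b)_i) for rho large enough.
c = [-1.0, -2.0]
A = [[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]]
b = [3.0, 2.0, 0.0, 0.0]       # x0 + x1 <= 3, x0 <= 2, x >= 0

def exact_penalty(x, rho=50.0):
    obj = sum(ci * xi for ci, xi in zip(c, x))
    viol = sum(max(0.0, sum(aij * xj for aij, xj in zip(row, x)) - bi)
               for row, bi in zip(A, b))
    return obj + rho * viol

# brute force over a grid: once rho exceeds the largest dual multiplier,
# the unconstrained minimizer coincides with the LP optimum (0, 3)
grid = [i * 0.5 - 1.0 for i in range(11)]     # -1.0, -0.5, ..., 4.0
best = min(((u, v) for u in grid for v in grid), key=exact_penalty)
print(best)
```

Note the penalty term is piecewise linear, so the reformulated problem is nonsmooth and calls for subgradient-type methods in general.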
15

Mongeau, Marcel, and Annick Sartenaer. "Automatic decrease of the penalty parameter in exact penalty function methods." European Journal of Operational Research 83, no. 3 (1995): 686–99. http://dx.doi.org/10.1016/0377-2217(93)e0339-y.

16

Pantoja, J. F. A. De O., and D. Q. Mayne. "Exact penalty function algorithm with simple updating of the penalty parameter." Journal of Optimization Theory and Applications 69, no. 3 (1991): 441–67. http://dx.doi.org/10.1007/bf00940684.

17

Knypiński, Łukasz. "Adaptation of the penalty function method to genetic algorithm in electromagnetic devices designing." COMPEL - The international journal for computation and mathematics in electrical and electronic engineering 38, no. 4 (2019): 1285–94. http://dx.doi.org/10.1108/compel-01-2019-0010.

Abstract:
Purpose: The purpose of this paper is to elaborate an effective method of adapting the external penalty function to the genetic algorithm. Design/methodology/approach: In the case of solving optimization tasks with constraints using the external penalty function, the penalty term can have a larger value than the primary objective function. The sigmoidal transformation is introduced to solve this problem. A new method of determining the value of the penalty coefficient in subsequent iterations, associated with the changing penalty, has been proposed. The proposed approach has been applied to t
18

Ding, Yi, Tian Jiang Wang, and Xian Fu. "The Research of Penalty Functions Based on Neural Networks." Applied Mechanics and Materials 63-64 (June 2011): 205–8. http://dx.doi.org/10.4028/www.scientific.net/amm.63-64.205.

Abstract:
Penalty functions are introduced in negative correlation learning for finding a neural network in an ensemble. It is based on the average output of the ensemble. The idea of a penalty function based on the average output is to make each individual network have a different output value from that of the ensemble on the same input. Experiments on a classification task show how negative correlation learning generates a neural network with penalty functions.
19

Duan, Yaqiong, and Shujun Lian. "Smoothing Approximation to the Square-Root Exact Penalty Function." Journal of Systems Science and Information 4, no. 1 (2016): 87–96. http://dx.doi.org/10.1515/jssi-2016-0087.

Abstract:
In this paper, a smoothing approximation to the square-root exact penalty function is devised for inequality constrained optimization. It is shown that an approximately optimal solution of the smoothed penalty problem is an approximately optimal solution of the original problem. An algorithm based on the new smoothed penalty functions is proposed and shown to be convergent under mild conditions. Three numerical examples show that the algorithm is efficient.
20

Jiang, Min, Zhiqing Meng, and Rui Shen. "Partial Exactness for the Penalty Function of Biconvex Programming." Entropy 23, no. 2 (2021): 132. http://dx.doi.org/10.3390/e23020132.

Abstract:
Biconvex programming (or inequality constrained biconvex optimization) is an important model in solving many engineering optimization problems in areas like machine learning and signal and information processing. In this paper, the partial exactness of the partial optimum for the penalty function of biconvex programming is studied. The penalty function is partially exact if the partial Karush–Kuhn–Tucker (KKT) condition is true. The sufficient and necessary partially local stability condition used to determine whether the penalty function is partially exact for a partial optimum solution is al
21

Wang, Jiayu, and Houchun Wang. "On the Expected Discounted Penalty Function Using Physics-Informed Neural Network." Journal of Mathematics 2023 (December 28, 2023): 1–16. http://dx.doi.org/10.1155/2023/9950023.

Abstract:
We study the expected discounted penalty at ruin under a stochastic discount rate for the compound Poisson risk model with a threshold dividend strategy. The discount rate is modeled by a Poisson process and a standard Brownian motion. By applying the differentiation method and total expectation formula, we obtain an integrodifferential equation for the expected discounted penalty function. From this integrodifferential equation, a renewal equation and an asymptotic formula satisfied by the expected discounted penalty function are derived. In order to solve the integrodifferential equation, we
22

Shen, Rui, Zhiqing Meng, Chuangyin Dang, and Min Jiang. "Algorithm of Barrier Objective Penalty Function." Numerical Functional Analysis and Optimization 38, no. 11 (2017): 1473–89. http://dx.doi.org/10.1080/01630563.2017.1338732.

23

Yang, X. Q., and Y. Y. Zhou. "Second-Order Analysis of Penalty Function." Journal of Optimization Theory and Applications 146, no. 2 (2010): 445–61. http://dx.doi.org/10.1007/s10957-010-9666-5.

24

Fletcher, Roger, and Sven Leyffer. "Nonlinear programming without a penalty function." Mathematical Programming 91, no. 2 (2002): 239–69. http://dx.doi.org/10.1007/s101070100244.

25

Fiacco, Anthony V. "Perturbed variations of penalty function methods." Annals of Operations Research 27, no. 1 (1990): 371–80. http://dx.doi.org/10.1007/bf02055202.

26

YILMAZ, Nurullah, and Hatice ÖĞÜT. "An exact penalty function approach for inequality constrained optimization problems based on a new smoothing technique." Communications Faculty Of Science University of Ankara Series A1Mathematics and Statistics 72, no. 3 (2023): 761–77. http://dx.doi.org/10.31801/cfsuasmas.1150659.

Abstract:
Exact penalty methods are one of the effective tools for solving nonlinear programming problems with inequality constraints. In this study, a new class of exact penalty functions is defined and a new family of smoothing techniques for exact penalty functions is introduced. Error estimations are presented among the original non-smooth exact penalty and smoothed exact penalty problems. It is proved that an optimal solution of the smoothed penalty problem is an optimal solution of the original problem. A smoothing penalty algorithm based on the new smoothing technique is proposed and the convergence of
27

Zhang, Liansheng, and Zhijian Huang. "On the L1 exact penalty function with locally Lipschitz functions." Acta Mathematicae Applicatae Sinica 4, no. 2 (1988): 145–53. http://dx.doi.org/10.1007/bf02006063.

28

Lian, Shujun, and Jinli Han. "Smoothing Approximation to the Square-Order Exact Penalty Functions for Constrained Optimization." Journal of Applied Mathematics 2013 (2013): 1–7. http://dx.doi.org/10.1155/2013/568316.

Abstract:
A method is proposed to smooth the square-order exact penalty function for inequality constrained optimization. It is shown that, under some conditions, an approximately optimal solution of the original problem can be obtained by searching an approximately optimal solution of the smoothed penalty problem. An algorithm based on the smoothed penalty functions is given. The algorithm is shown to be convergent under mild conditions. Two numerical examples show that the algorithm seems efficient.
29

Burachik, Regina S., and C. Yalçın Kaya. "An augmented penalty function method with penalty parameter updates for nonconvex optimization." Nonlinear Analysis: Theory, Methods & Applications 75, no. 3 (2012): 1158–67. http://dx.doi.org/10.1016/j.na.2011.03.013.

30

Clevenhaus, Anna, Matthias Ehrhardt, Michael Günther, and Daniel Ševčovič. "Pricing American Options with a Non-Constant Penalty Parameter." Journal of Risk and Financial Management 13, no. 6 (2020): 124. http://dx.doi.org/10.3390/jrfm13060124.

Abstract:
As the American early exercise results in a free boundary problem, in this article we add a penalty term to obtain a partial differential equation, and we also focus on an improved definition of the penalty term for American options. We replace the constant penalty parameter with a time-dependent function. The novelty and advantage of our approach consists in introducing a bounded, time-dependent penalty function, enabling us to construct an efficient, stable, and adaptive numerical approximation scheme, while in contrast, the existing standard approach to the penalisation of the American put
31

Shen, Jie, Fang-Fang Guo, and Na Xu. "A Nonconvex Proximal Bundle Method for Nonsmooth Constrained Optimization." Complexity 2024 (February 6, 2024): 1–11. http://dx.doi.org/10.1155/2024/5720769.

Abstract:
An implementable algorithm for solving nonsmooth nonconvex constrained optimization is proposed by combining bundle ideas, proximity control, and the exact penalty function. We construct two kinds of approximations to nonconvex objective function; these two approximations correspond to the convex and concave behaviors of the objective function at the current point, which captures precisely the characteristic of the objective function. The penalty coefficients are increased only a finite number of times under the conditions of Slater constraint qualification and the boundedness of the constrain
32

Han, Jinyang, Hao Yuan, Weichao Li, Liang Zhou, Chen Deng, and Ming Yan. "FCS-MPC Based on Dimension Unification Cost Function." Energies 17, no. 11 (2024): 2479. http://dx.doi.org/10.3390/en17112479.

Abstract:
Finite Control Set Model Predictive Control (FCS-MPC) has the ability to achieve multi-objective optimization, but there are still many challenges. The key to realizing multi-objective optimization in FCS-MPC lies in the design of the cost function. However, the different dimensions of penalty terms in the cost function often lead to difficulties in designing weighting coefficients. Incorrect weighting coefficients may result in truncation errors in calculations of DSPs and FPGAs, thereby affecting the algorithm’s control performance. Therefore, this article focuses on a system driving an indu
33

Lv, Yibing. "An exact penalty function approach for solving the linear bilevel multiobjective programming problem." Filomat 29, no. 4 (2015): 773–79. http://dx.doi.org/10.2298/fil1504773l.

Abstract:
In this paper, a new penalty function approach is proposed for the linear bilevel multiobjective programming problem. Using the optimality conditions of the lower level problem, we transform the linear bilevel multiobjective programming problem into the corresponding linear multiobjective programming problem with a complementarity constraint. The complementarity constraint is appended to the upper level objectives with a penalty. Then, via an exact penalty method, we give an existence theorem for Pareto optimal solutions and propose an algorithm for the linear bilevel multiobjective programming proble
34

Wang, Lei, Wenle Song, and Kai Sun. "Energy Scheduling of PV–ES Inverters Based on Particle Swarm Optimization Using a Non-Linear Penalty Function." Electronics 14, no. 11 (2025): 2272. https://doi.org/10.3390/electronics14112272.

Abstract:
The photovoltaic (PV) energy storage (ES) inverter is an effective way to address the problems of energy shortage and environmental pollution. However, when considering constraints such as economic benefits and power supply reliability, the energy optimization and dispatching of this PV–ES system pose great challenges. This paper proposes an optimization method based on the combination of the particle swarm algorithm and a non-linear penalty function to dispatch the energy of a household PV–ES inverter. Based on the established optimization model of the PV–ES inverter system, compared with the st
35

Zhu, Huiming, Ya Huang, Xiangqun Yang, and Jieming Zhou. "On the Expected Discounted Penalty Function for the Classical Risk Model with Potentially Delayed Claims and Random Incomes." Journal of Applied Mathematics 2014 (2014): 1–12. http://dx.doi.org/10.1155/2014/717269.

Abstract:
We focus on the expected discounted penalty function of a compound Poisson risk model with random incomes and potentially delayed claims. It is assumed that each main claim will produce a by-claim with a certain probability and that the occurrence of the by-claim may be delayed depending on the associated main claim amount. In addition, the premium number process is assumed to be a Poisson process. We derive the integral equation satisfied by the expected discounted penalty function. Given that the premium size is exponentially distributed, the explicit expression for the Laplace transform of the expected d
36

Yu, Chunxiao, Dinghui Jing, Chang Fu, and Yanfang Yang. "A Kind of FM-BEM Penalty Function Method for a 3D Elastic Frictional Contact Nonlinear System." Journal of Mathematics 2021 (January 13, 2021): 1–11. http://dx.doi.org/10.1155/2021/6626647.

Abstract:
In this paper, a kind of node_face frictional contact FM-BEM penalty function method is presented for 3D elastic frictional contact nonlinear problems. According to the principle of minimum potential energy, nonpenetrating constraints are introduced into the elastic frictional contact system as a penalty term. By using the least square method and penalty function method, an optimization mathematical model and a mathematical programming model with a penalty factor are established for the node_face frictional contact nonlinear system. For the two models, a penalty optimization IGMRES (m) algorit
37

Sohn, Dongkyu, Shingo Mabu, Kotaro Hirasawa, and Jinglu Hu. "Optimization Method RasID-GA for Numerical Constrained Optimization Problems." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 5 (2007): 469–77. http://dx.doi.org/10.20965/jaciii.2007.p0469.

Abstract:
This paper proposes Adaptive Random search with Intensification and Diversification combined with Genetic Algorithm (RasID-GA) for constrained optimization. In the previous work, we proposed RasID-GA which combines the best properties of RasID and Genetic Algorithm for unconstrained optimization problems. In general, it is very difficult to find an optimal solution for constrained optimization problems because their feasible solution space is very limited and they should consider the objective functions and constraint conditions. The conventional constrained optimization methods usually use pe
38

ANTCZAK, TADEUSZ. "THE l1 PENALTY FUNCTION METHOD FOR NONCONVEX DIFFERENTIABLE OPTIMIZATION PROBLEMS WITH INEQUALITY CONSTRAINTS." Asia-Pacific Journal of Operational Research 27, no. 05 (2010): 559–76. http://dx.doi.org/10.1142/s0217595910002855.

Abstract:
In this paper, some new results on the l1 exact penalty function method are presented. A simple optimality characterization is given for the nonconvex differentiable optimization problems with inequality constraints via the l1 exact penalty function method. The equivalence between sets of optimal solutions in the original mathematical programming problem and its associated exact penalized optimization problem is established under suitable r-invexity assumption. The penalty parameter is given, above which this equivalence holds. Furthermore, the equivalence between a saddle point in the conside
39

HU, YIBO. "HYBRID-FITNESS FUNCTION EVOLUTIONARY ALGORITHM BASED ON SIMPLEX CROSSOVER AND PSO MUTATION FOR CONSTRAINED OPTIMIZATION PROBLEMS." International Journal of Pattern Recognition and Artificial Intelligence 23, no. 01 (2009): 115–27. http://dx.doi.org/10.1142/s0218001409007004.

Abstract:
For constrained optimization problems, evolutionary algorithms often utilize a penalty function to deal with constraints, even though it is difficult to control the penalty parameters. To overcome this shortcoming, this paper first presents a new penalty function which has no parameter and can effectively handle constraints, after which a hybrid-fitness function integrating this penalty function into the objective function is designed. The new fitness function can properly evaluate not only feasible solutions but also infeasible ones, and distinguish any feasible solution from an infeasible one. Meanwhil
40

Lian, Shujun, Sitong Meng, and Yiju Wang. "An Objective Penalty Function-Based Method for Inequality Constrained Minimization Problem." Mathematical Problems in Engineering 2018 (June 6, 2018): 1–7. http://dx.doi.org/10.1155/2018/7484256.

Abstract:
For inequality constrained minimization problem, we first propose a new exact nonsmooth objective penalty function and then apply a smooth technique to the penalty function to make it smooth. It is shown that any minimizer of the smoothing objective penalty function is an approximated solution of the original problem. Based on this, we develop a solution method for the inequality constrained minimization problem and prove its global convergence. Numerical experiments are provided to show the efficiency of the proposed method.
41

Zhang, Zhimin, Hailiang Yang, and Hu Yang. "On the absolute ruin in a MAP risk model with debit interest." Advances in Applied Probability 43, no. 1 (2011): 77–96. http://dx.doi.org/10.1239/aap/1300198513.

Abstract:
In this paper we consider a risk model where claims arrive according to a Markovian arrival process (MAP). When the surplus becomes negative or the insurer is in deficit, the insurer could borrow money at a constant debit interest rate to repay the claims. We derive the integro-differential equations satisfied by the discounted penalty functions and discuss the solutions. A matrix renewal equation is obtained for the discounted penalty function provided that the initial surplus is nonnegative. Based on this matrix renewal equation, we present some asymptotic formulae for the discounted penalty
42

Zhang, Zhimin, Hailiang Yang, and Hu Yang. "On the absolute ruin in a MAP risk model with debit interest." Advances in Applied Probability 43, no. 01 (2011): 77–96. http://dx.doi.org/10.1017/s0001867800004699.

Abstract:
In this paper we consider a risk model where claims arrive according to a Markovian arrival process (MAP). When the surplus becomes negative or the insurer is in deficit, the insurer could borrow money at a constant debit interest rate to repay the claims. We derive the integro-differential equations satisfied by the discounted penalty functions and discuss the solutions. A matrix renewal equation is obtained for the discounted penalty function provided that the initial surplus is nonnegative. Based on this matrix renewal equation, we present some asymptotic formulae for the discounted penalty
43

Prajapati, Raju, Om Prakash Dubey, and Ranjit Pradhan. "ON NON-QUADRATIC PENALTY FUNCTION FOR NON-LINEAR PROGRAMMING PROBLEM WITH EQUALITY CONSTRAINTS." International Journal of Students' Research in Technology & Management 7, no. 1 (2019): 23–28. http://dx.doi.org/10.18510/ijsrtm.2019.715.

Abstract:
Purpose: The present paper focuses on the Non-Linear Programming Problem (NLPP) with equality constraints. NLPP with constraints could be solved by penalty or barrier methods.
Methodology: We apply the penalty method to the NLPP with equality constraints only. The non-quadratic penalty method is considered for this purpose. We consider a transcendental, i.e. exponential, function for imposing the penalty due to the constraint violation. The unconstrained NLPP obtained in this way is then processed for further solution. An improved version of evolutionary and famous meta-heuristic Particl
44

Setiono, Rudy. "A Penalty-Function Approach for Pruning Feedforward Neural Networks." Neural Computation 9, no. 1 (1997): 185–204. http://dx.doi.org/10.1162/neco.1997.9.1.185.

Abstract:
This article proposes the use of a penalty function for pruning feedforward neural networks by weight elimination. The penalty function proposed consists of two terms. The first term discourages the use of unnecessary connections, and the second term prevents the weights of the connections from taking excessively large values. Simple criteria for eliminating weights from the network are also given. The effectiveness of this penalty function is tested on three well-known problems: the contiguity problem, the parity problems, and the monks problems. The resulting pruned networks obtaine
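The two-term penalty described in entry 44 can be sketched directly. The saturating-quadratic-plus-weight-decay form and the coefficients below are illustrative assumptions in the spirit of the description, not the article's own values.

```python
import numpy as np

def pruning_penalty(w, eps1=1e-1, eps2=1e-4, beta=10.0):
    w = np.asarray(w, dtype=float)
    # term 1: saturating penalty that pushes small weights toward zero so
    # their connections can be eliminated (for large beta it roughly
    # counts the nonzero weights)
    term1 = np.sum(beta * w ** 2 / (1.0 + beta * w ** 2))
    # term 2: classic weight decay, keeps surviving weights bounded
    term2 = np.sum(w ** 2)
    return eps1 * term1 + eps2 * term2

# a near-zero weight contributes almost nothing and is a candidate for
# elimination; a large weight is charged by both terms
print(pruning_penalty([0.0, 0.01, 2.0]))
```

Added to the training loss, such a penalty drives unimportant weights toward zero, after which a simple magnitude threshold can remove them.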
45

Knypiński, Łukasz, Krzysztof Kowalski, and Lech Nowak. "Constrained optimization using penalty function method combined with genetic algorithm." ITM Web of Conferences 19 (2018): 01037. http://dx.doi.org/10.1051/itmconf/20181901037.

Abstract:
In the paper, the way of adapting the penalty function method to the genetic algorithm is presented. In the case of application of the external penalty function, the penalty term may exceed the value of the primary objective function. This means that the value of the modified objective function is negative, while in a genetic algorithm the fitness must be of positive value, especially when the selection procedure utilizes the roulette method. The sigmoidal transformation is applied to solve this problem. The computer software is developed in the Delphi environment. The proposed approach is
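The sigmoidal trick mentioned in entries 17 and 45, keeping roulette-wheel fitness positive even when the penalised objective goes negative, can be sketched as follows. The toy objective and the scaling constant are illustrative assumptions.

```python
import math

def penalized_objective(x, rho=10.0):
    # toy minimization objective plus external penalty; can be negative
    return (x - 1.0) ** 2 - 5.0 + rho * max(0.0, x - 2.0) ** 2

def roulette_fitness(x, a=0.5):
    # sigmoidal transformation: maps any objective value into (0, 1),
    # decreasing, so lower objectives receive higher positive fitness
    return 1.0 / (1.0 + math.exp(a * penalized_objective(x)))

# fitness stays positive even where the raw penalised objective is negative
print(penalized_objective(1.0), roulette_fitness(1.0))
```

Because roulette selection draws individuals with probability proportional to fitness, a strictly positive, monotone transform like this preserves the ranking while making the selection probabilities well defined.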
46

Gerber, Hans U., X. Sheldon Lin, and Hailiang Yang. "A Note on the Dividends-Penalty Identity and the Optimal Dividend Barrier." ASTIN Bulletin 36, no. 02 (2006): 489–503. http://dx.doi.org/10.2143/ast.36.2.2017931.

Full text
Abstract:
For a general class of risk models, the dividends-penalty identity is derived by probabilistic reasoning. This identity is the key for understanding and determining the optimal dividend barrier, which maximizes the difference between the expected present value of all dividends until ruin and the expected discounted value of a penalty at ruin (which is typically a function of the deficit at ruin). As an illustration, the optimal barrier is calculated in two classical models, for different penalty functions and a variety of parameter values.
APA, Harvard, Vancouver, ISO, and other styles
47

Gerber, Hans U., X. Sheldon Lin, and Hailiang Yang. "A Note on the Dividends-Penalty Identity and the Optimal Dividend Barrier." ASTIN Bulletin 36, no. 2 (2006): 489–503. http://dx.doi.org/10.1017/s0515036100014604.

Full text
Abstract:
For a general class of risk models, the dividends-penalty identity is derived by probabilistic reasoning. This identity is the key for understanding and determining the optimal dividend barrier, which maximizes the difference between the expected present value of all dividends until ruin and the expected discounted value of a penalty at ruin (which is typically a function of the deficit at ruin). As an illustration, the optimal barrier is calculated in two classical models, for different penalty functions and a variety of parameter values.
APA, Harvard, Vancouver, ISO, and other styles
48

Tyler, David E., and Mengxi Yi. "Lassoing eigenvalues." Biometrika 107, no. 2 (2020): 397–414. http://dx.doi.org/10.1093/biomet/asz076.

Full text
Abstract:
The properties of penalized sample covariance matrices depend on the choice of the penalty function. In this paper, we introduce a class of nonsmooth penalty functions for the sample covariance matrix and demonstrate how their use results in a grouping of the estimated eigenvalues. We refer to the proposed method as lassoing eigenvalues, or the elasso.
APA, Harvard, Vancouver, ISO, and other styles
49

Prajapati, Raju, and Om Prakash Dubey. "ANALYSING THE IMPACT OF PENALTY CONSTANT ON PENALTY FUNCTION THROUGH PARTICLE SWARM OPTIMIZATION." International Journal of Students' Research in Technology & Management 6, no. 2 (2018): 01–06. http://dx.doi.org/10.18510/ijsrtm.2018.621.

Full text
Abstract:
Non-Linear Programming Problems (NLPP) are tedious to solve compared to Linear Programming Problems (LPP). The present paper analyzes the impact of the penalty constant on the penalty function used to solve NLPP with inequality constraints. An improved version of the well-known metaheuristic Particle Swarm Optimization (PSO) is used for this purpose. The Scilab programming language is used for computation. The impact of the penalty constant is studied on five test problems. Different values of the penalty constant are taken to prepare the unconstrained NL…
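The exterior penalty construction the abstract studies can be sketched directly. The penalty constant r below is the quantity whose impact the paper's experiments vary; the toy problem and the quadratic penalty form are illustrative assumptions.

```python
def penalized(f, constraints, r):
    """Exterior quadratic penalty: turns an inequality-constrained NLPP
    into an unconstrained one. Each constraint is written g(x) <= 0, and
    a violation contributes r * max(0, g(x))**2 to the objective, so
    feasible points pay no penalty at all.
    """
    def F(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + r * violation
    return F

# Toy problem: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# With a large r, the unconstrained minimizer of F is driven toward
# the constrained optimum x = 1.
F = penalized(lambda x: x * x, [lambda x: 1.0 - x], r=1000.0)
```

Too small an r leaves the minimizer infeasible; too large an r makes the unconstrained problem ill-conditioned, which is exactly the trade-off such a study measures.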
APA, Harvard, Vancouver, ISO, and other styles
50

Yu, Xinghuo, and Baolin Wu. "An Adaptive Penalty Function Method for Constrained Optimization with Evolutionary Programming." Journal of Advanced Computational Intelligence and Intelligent Informatics 4, no. 2 (2000): 164–70. http://dx.doi.org/10.20965/jaciii.2000.p0164.

Full text
Abstract:
In this paper, we propose a novel adaptive penalty function method for constrained optimization problems using the evolutionary programming technique. This method incorporates an adaptive tuning algorithm that adjusts the penalty parameters according to the population landscape so that it allows fast escape from a local optimum and quick convergence toward a global optimum. The method is simple and computationally effective in the sense that only very few penalty parameters are needed for tuning. Simulation results of five well-known benchmark problems are presented to show the performance of …
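One common way such population-based adaptation works can be sketched as a single update rule. This is a generic illustration in the spirit of the abstract, not the paper's specific algorithm: the target feasibility ratio and the multiplicative factor are assumptions.

```python
def adapt_penalty(r, population, is_feasible, target=0.5, factor=2.0):
    """One adaptive-penalty update driven by the population landscape.

    If too few individuals satisfy the constraints, the penalty
    parameter is raised so infeasible regions become less attractive;
    if too many are feasible, it is lowered so the search can probe
    the constraint boundary. Target ratio and factor are illustrative.
    """
    feasible_ratio = sum(1 for x in population if is_feasible(x)) / len(population)
    if feasible_ratio < target:
        return r * factor   # infeasible-heavy population: penalize harder
    if feasible_ratio > target:
        return r / factor   # feasible-heavy population: relax the penalty
    return r

# With constraint x >= 1, a mostly infeasible population raises r,
# a mostly feasible one lowers it.
r_up = adapt_penalty(1.0, [0.0, 0.0, 0.0, 2.0], lambda x: x >= 1.0)
r_down = adapt_penalty(1.0, [2.0, 2.0, 2.0, 0.0], lambda x: x >= 1.0)
```

Calling this once per generation keeps the single tunable parameter r tracking the population, rather than requiring a schedule fixed in advance.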
APA, Harvard, Vancouver, ISO, and other styles