Journal articles on the topic "Lipschitz neural network"

Listed below are the top 50 journal articles for research on the topic "Lipschitz neural network". Abstracts are included where present in the publication metadata.

1

Zhu, Zelong, Chunna Zhao, and Yaqun Huang. "Fractional order Lipschitz recurrent neural network with attention for long time series prediction." Journal of Physics: Conference Series 2813, no. 1 (2024): 012015. http://dx.doi.org/10.1088/1742-6596/2813/1/012015.

Abstract:
Time series data prediction holds a significant importance in various applications. In this study, we specifically concentrate on long-time series data prediction. Recurrent Neural Networks are widely recognized as a fundamental neural network architecture for processing effectively time-series data. Recurrent Neural Network models encounter the gradient disappearance or gradient explosion challenge in long series data. To resolve the gradient problem and improve accuracy, the Fractional Order Lipschitz Recurrent Neural Network (FOLRNN) model is proposed to predict long time series in
2

Zhang, Huan, Pengchuan Zhang, and Cho-Jui Hsieh. "RecurJac: An Efficient Recursive Algorithm for Bounding Jacobian Matrix of Neural Networks and Its Applications." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5757–64. http://dx.doi.org/10.1609/aaai.v33i01.33015757.

Abstract:
The Jacobian matrix (or the gradient for single-output networks) is directly related to many important properties of neural networks, such as the function landscape, stationary points, (local) Lipschitz constants and robustness to adversarial attacks. In this paper, we propose a recursive algorithm, RecurJac, to compute both upper and lower bounds for each element in the Jacobian matrix of a neural network with respect to network’s input, and the network can contain a wide range of activation functions. As a byproduct, we can efficiently obtain a (local) Lipschitz constant, which plays a cruci
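As a toy illustration of the quantities this entry bounds (not the RecurJac algorithm itself), sampled Jacobian spectral norms give a lower bound on a ReLU network's Lipschitz constant, while the product of layer spectral norms gives a usually looser upper bound. All shapes and weights below are made up for the sketch.

```python
import numpy as np

# Two-layer ReLU network f(x) = W2 @ relu(W1 @ x) with arbitrary toy weights.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((4, 16))

def jacobian(x):
    # d relu(z)/dz is a 0/1 mask; the chain rule gives W2 @ diag(mask) @ W1.
    mask = (W1 @ x > 0).astype(float)
    return W2 @ (mask[:, None] * W1)

# Lower bound: largest Jacobian spectral norm over random input samples.
lower = max(np.linalg.norm(jacobian(rng.standard_normal(8)), 2)
            for _ in range(1000))
# Upper bound: product of layer spectral norms (ReLU is 1-Lipschitz).
upper = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
print(f"lower bound ~ {lower:.2f}, upper bound = {upper:.2f}")
```

The gap between the two bounds is exactly what tighter methods such as the one in this entry aim to close.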
3

Araujo, Alexandre, Benjamin Negrevergne, Yann Chevaleyre, and Jamal Atif. "On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 8 (2021): 6661–69. http://dx.doi.org/10.1609/aaai.v35i8.16824.

Abstract:
This paper tackles the problem of Lipschitz regularization of Convolutional Neural Networks. Lipschitz regularity is now established as a key property of modern deep learning with implications in training stability, generalization, robustness against adversarial examples, etc. However, computing the exact value of the Lipschitz constant of a neural network is known to be NP-hard. Recent attempts from the literature introduce upper bounds to approximate this constant that are either efficient but loose or accurate but computationally expensive. In this work, by leveraging the theory of Toeplitz
4

Xu, Yuhui, Wenrui Dai, Yingyong Qi, Junni Zou, and Hongkai Xiong. "Iterative Deep Neural Network Quantization With Lipschitz Constraint." IEEE Transactions on Multimedia 22, no. 7 (2020): 1874–88. http://dx.doi.org/10.1109/tmm.2019.2949857.

5

Mohammad, Ibtihal J. "Neural Networks of the Rational r-th Powers of the Multivariate Bernstein Operators." BASRA JOURNAL OF SCIENCE 40, no. 2 (2022): 258–73. http://dx.doi.org/10.29072/basjs.20220201.

Abstract:
In this study, a novel neural network for the multivariance Bernstein operators' rational powers was developed. A positive integer is required by these networks. In the space of all real-valued continuous functions, the pointwise and uniform approximation theorems are introduced and examined first. After that, the Lipschitz space is used to study two key theorems. Additionally, some numerical examples are provided to demonstrate how well these neural networks approximate two test functions. The numerical outcomes demonstrate that as input grows, the neural network provides a better approximati
6

Mohammad, Ibtihal J., and Ali J. Mohammad. "Neural Network of Multivariate Square Rational Bernstein Operators with Positive Integer Parameter." European Journal of Pure and Applied Mathematics 15, no. 3 (2022): 1189–200. http://dx.doi.org/10.29020/nybg.ejpam.v15i3.4425.

Abstract:
This research is defined a new neural network (NN) that depends upon a positive integer parameter using the multivariate square rational Bernstein polynomials. Some theorems for this network are proved, such as the pointwise and the uniform approximation theorems. Firstly, the absolute moment for a function that belongs to Lipschitz space is defined to estimate the order of the NN. Secondly, some numerical applications for this NN are given by taking two test functions. Finally, the numerical results for this network are compared with the classical neural networks (NNs). The results turn out t
7

Liu, Kanglin, and Guoping Qiu. "Lipschitz constrained GANs via boundedness and continuity." Neural Computing and Applications 32, no. 24 (2020): 18271–83. http://dx.doi.org/10.1007/s00521-020-04954-z.

Abstract:
One of the challenges in the study of generative adversarial networks (GANs) is the difficulty of its performance control. Lipschitz constraint is essential in guaranteeing training stability for GANs. Although heuristic methods such as weight clipping, gradient penalty and spectral normalization have been proposed to enforce Lipschitz constraint, it is still difficult to achieve a solution that is both practically effective and theoretically provably satisfying a Lipschitz constraint. In this paper, we introduce the boundedness and continuity (BC) conditions to enforce the Lipschitz c
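Of the heuristics named in this abstract, spectral normalization is the most mechanical: rescale each weight matrix by its largest singular value so the linear layer is at most 1-Lipschitz. A minimal NumPy sketch (not the paper's BC conditions; iteration count and shapes are arbitrary choices) using power iteration:

```python
import numpy as np

def spectral_normalize(W, n_iter=100):
    """Rescale W by an estimate of its largest singular value (power iteration)."""
    u = np.ones(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimate of the top singular value
    return W / sigma

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 64))
W_sn = spectral_normalize(W)
# The normalized matrix has spectral norm ~1, i.e. a 1-Lipschitz linear map.
print(np.linalg.norm(W_sn, 2))
```

In practice the power-iteration vectors are cached across training steps so a single iteration per step suffices.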
8

Othmani, S., N. E. Tatar, and A. Khemmoudj. "Asymptotic behavior of a BAM neural network with delays of distributed type." Mathematical Modelling of Natural Phenomena 16 (2021): 29. http://dx.doi.org/10.1051/mmnp/2021023.

Abstract:
In this paper, we examine a Bidirectional Associative Memory neural network model with distributed delays. Using a result due to Cid [J. Math. Anal. Appl. 281 (2003) 264–275], we were able to prove an exponential stability result in the case when the standard Lipschitz continuity condition is violated. Indeed, we deal with activation functions which may not be Lipschitz continuous. Therefore, the standard Halanay inequality is not applicable. We will use a nonlinear version of this inequality. At the end, the obtained differential inequality which should imply the exponential stability appears
9

Xia, Youshen. "An Extended Projection Neural Network for Constrained Optimization." Neural Computation 16, no. 4 (2004): 863–83. http://dx.doi.org/10.1162/089976604322860730.

Abstract:
Recently, a projection neural network has been shown to be a promising computational model for solving variational inequality problems with box constraints. This letter presents an extended projection neural network for solving monotone variational inequality problems with linear and nonlinear constraints. In particular, the proposed neural network can include the projection neural network as a special case. Compared with the modified projection-type methods for solving constrained monotone variational inequality problems, the proposed neural network has a lower complexity and is suitable for
10

Li, Peiluan, Yuejing Lu, Changjin Xu, and Jing Ren. "Bifurcation Phenomenon and Control Technique in Fractional BAM Neural Network Models Concerning Delays." Fractal and Fractional 7, no. 1 (2022): 7. http://dx.doi.org/10.3390/fractalfract7010007.

Abstract:
In this current study, we formulate a kind of new fractional BAM neural network model concerning five neurons and time delays. First, we explore the existence and uniqueness of the solution of the formulated fractional delay BAM neural network models via the Lipschitz condition. Second, we study the boundedness of the solution to the formulated fractional delayed BAM neural network models using a proper function. Third, we set up a novel sufficient criterion on the onset of the Hopf bifurcation stability of the formulated fractional BAM neural network models by virtue of the stability criterio
11

Bian, Wei, and Xiaojun Chen. "Smoothing Neural Network for Constrained Non-Lipschitz Optimization With Applications." IEEE Transactions on Neural Networks and Learning Systems 23, no. 3 (2012): 399–411. http://dx.doi.org/10.1109/tnnls.2011.2181867.

12

Chen, Xin, Yujuan Si, Zhanyuan Zhang, Wenke Yang, and Jianchao Feng. "Improving Adversarial Robustness of ECG Classification Based on Lipschitz Constraints and Channel Activation Suppression." Sensors 24, no. 9 (2024): 2954. http://dx.doi.org/10.3390/s24092954.

Abstract:
Deep neural networks (DNNs) are increasingly important in the medical diagnosis of electrocardiogram (ECG) signals. However, research has shown that DNNs are highly vulnerable to adversarial examples, which can be created by carefully crafted perturbations. This vulnerability can lead to potential medical accidents. This poses new challenges for the application of DNNs in the medical diagnosis of ECG signals. This paper proposes a novel network Channel Activation Suppression with Lipschitz Constraints Net (CASLCNet), which employs the Channel-wise Activation Suppressing (CAS) strategy to dynam
13

Zhang, Chi, Wenjie Ruan, and Peipei Xu. "Reachability Analysis of Neural Network Control Systems." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 12 (2023): 15287–95. http://dx.doi.org/10.1609/aaai.v37i12.26783.

Abstract:
Neural network controllers (NNCs) have shown great promise in autonomous and cyber-physical systems. Despite the various verification approaches for neural networks, the safety analysis of NNCs remains an open problem. Existing verification approaches for neural network control systems (NNCSs) either can only work on a limited type of activation functions, or result in non-trivial over-approximation errors with time evolving. This paper proposes a verification framework for NNCS based on Lipschitzian optimisation, called DeepNNC. We first prove the Lipschitz continuity of closed-loop NNCSs by
14

Yu, Hongshan, Jinzhu Peng, and Yandong Tang. "Identification of Nonlinear Dynamic Systems Using Hammerstein-Type Neural Network." Mathematical Problems in Engineering 2014 (2014): 1–9. http://dx.doi.org/10.1155/2014/959507.

Abstract:
Hammerstein model has been popularly applied to identify the nonlinear systems. In this paper, a Hammerstein-type neural network (HTNN) is derived to formulate the well-known Hammerstein model. The HTNN consists of a nonlinear static gain in cascade with a linear dynamic part. First, the Lipschitz criterion for order determination is derived. Second, the backpropagation algorithm for updating the network weights is presented, and the stability analysis is also drawn. Finally, simulation results show that HTNN identification approach demonstrated identification performances.
15

Yu, Xin, Lingzhen Wu, Mian Xie, et al. "Smoothing Neural Network for Non-Lipschitz Optimization with Linear Inequality Constraints." Chinese Journal of Electronics 30, no. 4 (2021): 634–43. http://dx.doi.org/10.1049/cje.2021.05.005.

16

Zhao, Chunna, Junjie Ye, Zelong Zhu, and Yaqun Huang. "FLRNN-FGA: Fractional-Order Lipschitz Recurrent Neural Network with Frequency-Domain Gated Attention Mechanism for Time Series Forecasting." Fractal and Fractional 8, no. 7 (2024): 433. http://dx.doi.org/10.3390/fractalfract8070433.

Abstract:
Time series forecasting has played an important role in different industries, including economics, energy, weather, and healthcare. RNN-based methods have shown promising potential due to their strong ability to model the interaction of time and variables. However, they are prone to gradient issues like gradient explosion and vanishing gradients. And the prediction accuracy is not high. To address the above issues, this paper proposes a Fractional-order Lipschitz Recurrent Neural Network with a Frequency-domain Gated Attention mechanism (FLRNN-FGA). There are three major components: the Fracti
17

Liang, Youwei, and Dong Huang. "Large Norms of CNN Layers Do Not Hurt Adversarial Robustness." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 10 (2021): 8565–73. http://dx.doi.org/10.1609/aaai.v35i10.17039.

Abstract:
Since the Lipschitz properties of convolutional neural networks (CNNs) are widely considered to be related to adversarial robustness, we theoretically characterize the L-1 norm and L-infinity norm of 2D multi-channel convolutional layers and provide efficient methods to compute the exact L-1 norm and L-infinity norm. Based on our theorem, we propose a novel regularization method termed norm decay, which can effectively reduce the norms of convolutional layers and fully-connected layers. Experiments show that norm-regularization methods, including norm decay, weight decay, and singular value cl
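For a plain fully-connected layer, the L-1 and L-infinity operator norms this entry characterizes for convolutional layers reduce to textbook formulas: maximum absolute column sum and maximum absolute row sum. A quick check with toy weights (shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((5, 7))  # weight matrix of a toy dense layer

l1_norm = np.abs(W).sum(axis=0).max()    # induced L-1 norm: max column sum
linf_norm = np.abs(W).sum(axis=1).max()  # induced L-inf norm: max row sum

# These agree with NumPy's induced matrix norms for ord=1 and ord=inf.
assert np.isclose(l1_norm, np.linalg.norm(W, 1))
assert np.isclose(linf_norm, np.linalg.norm(W, np.inf))
```

The contribution of the paper is doing the analogous computation exactly for 2D multi-channel convolutions, where the matrix is structured and never materialized.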
18

Zhuo, Li’an, Baochang Zhang, Chen Chen, Qixiang Ye, Jianzhuang Liu, and David Doermann. "Calibrated Stochastic Gradient Descent for Convolutional Neural Networks." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9348–55. http://dx.doi.org/10.1609/aaai.v33i01.33019348.

Abstract:
In stochastic gradient descent (SGD) and its variants, the optimized gradient estimators may be as expensive to compute as the true gradient in many scenarios. This paper introduces a calibrated stochastic gradient descent (CSGD) algorithm for deep neural network optimization. A theorem is developed to prove that an unbiased estimator for the network variables can be obtained in a probabilistic way based on the Lipschitz hypothesis. Our work is significantly distinct from existing gradient optimization methods, by providing a theoretical framework for unbiased variable estimation in the deep l
19

Lippl, Samuel, Benjamin Peters, and Nikolaus Kriegeskorte. "Can neural networks benefit from objectives that encourage iterative convergent computations? A case study of ResNets and object classification." PLOS ONE 19, no. 3 (2024): e0293440. http://dx.doi.org/10.1371/journal.pone.0293440.

Abstract:
Recent work has suggested that feedforward residual neural networks (ResNets) approximate iterative recurrent computations. Iterative computations are useful in many domains, so they might provide good solutions for neural networks to learn. However, principled methods for measuring and manipulating iterative convergence in neural networks remain lacking. Here we address this gap by 1) quantifying the degree to which ResNets learn iterative solutions and 2) introducing a regularization approach that encourages the learning of iterative solutions. Iterative methods are characterized by two prop
20

Feyzdar, Mahdi, Ahmad Reza Vali, and Valiollah Babaeipour. "Identification and Optimization of Recombinant E. coli Fed-Batch Fermentation Producing γ-Interferon Protein." International Journal of Chemical Reactor Engineering 11, no. 1 (2013): 123–34. http://dx.doi.org/10.1515/ijcre-2012-0081.

Abstract:
A novel approach to identification of fed-batch cultivation of E. coli BL21 (DE3) has been presented. The process has been identified in the system that is designed for maximum production of γ-interferon protein. Dynamic order of the process has been determined by Lipschitz test. Multilayer Perceptron neural network has been used to process identification by experimental data. The optimal brain surgeon method is used to reduce the model complexity that can be easily implemented. Validation results base on autocorrelation function of the residuals, show good performance of neural netwo
21

Stamova, Ivanka, Trayan Stamov, and Gani Stamov. "Lipschitz stability analysis of fractional-order impulsive delayed reaction-diffusion neural network models." Chaos, Solitons & Fractals 162 (September 2022): 112474. http://dx.doi.org/10.1016/j.chaos.2022.112474.

22

Chen, Yu-Wen, Ming-Li Chiang, and Li-Chen Fu. "Adaptive Formation Control for Multiple Quadrotors with Nonlinear Uncertainties Using Lipschitz Neural Network." IFAC-PapersOnLine 56, no. 2 (2023): 8714–19. http://dx.doi.org/10.1016/j.ifacol.2023.10.053.

23

Li, Wenjing, Wei Bian, and Xiaoping Xue. "Projected Neural Network for a Class of Non-Lipschitz Optimization Problems With Linear Constraints." IEEE Transactions on Neural Networks and Learning Systems 31, no. 9 (2020): 3361–73. http://dx.doi.org/10.1109/tnnls.2019.2944388.

24

Akrour, Riad, Asma Atamna, and Jan Peters. "Convex optimization with an interpolation-based projection and its application to deep learning." Machine Learning 110, no. 8 (2021): 2267–89. http://dx.doi.org/10.1007/s10994-021-06037-z.

Abstract:
Convex optimizers have known many applications as differentiable layers within deep neural architectures. One application of these convex layers is to project points into a convex set. However, both forward and backward passes of these convex layers are significantly more expensive to compute than those of a typical neural network. We investigate in this paper whether an inexact, but cheaper projection, can drive a descent algorithm to an optimum. Specifically, we propose an interpolation-based projection that is computationally cheap and easy to compute given a convex, domain defining
25

Humphries, Usa, Grienggrai Rajchakit, Pramet Kaewmesri, et al. "Global Stability Analysis of Fractional-Order Quaternion-Valued Bidirectional Associative Memory Neural Networks." Mathematics 8, no. 5 (2020): 801. http://dx.doi.org/10.3390/math8050801.

Abstract:
We study the global asymptotic stability problem with respect to the fractional-order quaternion-valued bidirectional associative memory neural network (FQVBAMNN) models in this paper. Whether the real and imaginary parts of quaternion-valued activation functions are expressed implicitly or explicitly, they are considered to meet the global Lipschitz condition in the quaternion field. New sufficient conditions are derived by applying the principle of homeomorphism, Lyapunov fractional-order method and linear matrix inequality (LMI) approach for the two cases of activation functions. The result
26

Bensidhoum, Tarek, Farah Bouakrif, and Michel Zasadzinski. "Iterative learning radial basis function neural networks control for unknown multi input multi output nonlinear systems with unknown control direction." Transactions of the Institute of Measurement and Control 41, no. 12 (2019): 3452–67. http://dx.doi.org/10.1177/0142331219826659.

Abstract:
In this paper, an iterative learning radial basis function neural-networks (RBF NN) control algorithm is developed for a class of unknown multi input multi output (MIMO) nonlinear systems with unknown control directions. The proposed control scheme is very simple in the sense that we use just a P-type iterative learning control (ILC) updating law in which an RBF neural network term is added to approximate the unknown nonlinear function, and an adaptive law for the weights of RBF neural network is proposed. We chose the RBF NN because it has universal approximation capabilities and can approxim
27

Zhang, Fan, Heng-You Lan, and Hai-Yang Xu. "Generalized Hukuhara Weak Solutions for a Class of Coupled Systems of Fuzzy Fractional Order Partial Differential Equations without Lipschitz Conditions." Mathematics 10, no. 21 (2022): 4033. http://dx.doi.org/10.3390/math10214033.

Abstract:
As is known to all, Lipschitz condition, which is very important to guarantee existence and uniqueness of solution for differential equations, is not frequently satisfied in real-world problems. In this paper, without the Lipschitz condition, we intend to explore a kind of novel coupled systems of fuzzy Caputo Generalized Hukuhara type (in short, gH-type) fractional partial differential equations. First and foremost, based on a series of notions of relative compactness in fuzzy number spaces, and using Schauder fixed point theorem in Banach semilinear spaces, it is naturally to prove existence
28

Laurel, Jacob, Rem Yang, Shubham Ugare, Robert Nagel, Gagandeep Singh, and Sasa Misailovic. "A general construction for abstract interpretation of higher-order automatic differentiation." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (2022): 1007–35. http://dx.doi.org/10.1145/3563324.

Abstract:
We present a novel, general construction to abstractly interpret higher-order automatic differentiation (AD). Our construction allows one to instantiate an abstract interpreter for computing derivatives up to a chosen order. Furthermore, since our construction reduces the problem of abstractly reasoning about derivatives to abstractly reasoning about real-valued straight-line programs, it can be instantiated with almost any numerical abstract domain, both relational and non-relational. We formally establish the soundness of this construction. We implement our technique by instantiating our con
29

Tatar, Nasser-Eddine. "Long Time Behavior for a System of Differential Equations with Non-Lipschitzian Nonlinearities." Advances in Artificial Neural Systems 2014 (September 14, 2014): 1–7. http://dx.doi.org/10.1155/2014/252674.

Abstract:
We consider a general system of nonlinear ordinary differential equations of first order. The nonlinearities involve distributed delays in addition to the states. In turn, the distributed delays involve nonlinear functions of the different variables and states. An explicit bound for solutions is obtained under some rather reasonable conditions. Several special cases of this system may be found in neural network theory. As a direct application of our result it is shown how to obtain global existence and, more importantly, convergence to zero at an exponential rate in a certain norm. All these n
30

Li, Jia, Cong Fang, and Zhouchen Lin. "Lifted Proximal Operator Machines." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4181–88. http://dx.doi.org/10.1609/aaai.v33i01.33014181.

Abstract:
We propose a new optimization method for training feedforward neural networks. By rewriting the activation function as an equivalent proximal operator, we approximate a feedforward neural network by adding the proximal operators to the objective function as penalties, hence we call the lifted proximal operator machine (LPOM). LPOM is block multiconvex in all layer-wise weights and activations. This allows us to use block coordinate descent to update the layer-wise weights and activations. Most notably, we only use the mapping of the activation function itself, rather than its derivative, thus
31

Cantarini, Marco, Lucian Coroianu, Danilo Costarelli, Sorin G. Gal, and Gianluca Vinti. "Inverse Result of Approximation for the Max-Product Neural Network Operators of the Kantorovich Type and Their Saturation Order." Mathematics 10, no. 1 (2021): 63. http://dx.doi.org/10.3390/math10010063.

Abstract:
In this paper, we consider the max-product neural network operators of the Kantorovich type based on certain linear combinations of sigmoidal and ReLU activation functions. In general, it is well-known that max-product type operators have applications in problems related to probability and fuzzy theory, involving both real and interval/set valued functions. In particular, here we face inverse approximation problems for the above family of sub-linear operators. We first establish their saturation order for a certain class of functions; i.e., we show that if a continuous and non-decreasing funct
32

Zhao, Liquan, and Yan Liu. "Spectral Normalization for Domain Adaptation." Information 11, no. 2 (2020): 68. http://dx.doi.org/10.3390/info11020068.

Abstract:
The transfer learning method is used to extend our existing model to more difficult scenarios, thereby accelerating the training process and improving learning performance. The conditional adversarial domain adaptation method proposed in 2018 is a particular type of transfer learning. It uses the domain discriminator to identify which images the extracted features belong to. The features are obtained from the feature extraction network. The stability of the domain discriminator directly affects the classification accuracy. Here, we propose a new algorithm to improve the predictive accuracy. Fi
33

Pantoja-Garcia, Luis, Vicente Parra-Vega, Rodolfo Garcia-Rodriguez, and Carlos Ernesto Vázquez-García. "A Novel Actor—Critic Motor Reinforcement Learning for Continuum Soft Robots." Robotics 12, no. 5 (2023): 141. http://dx.doi.org/10.3390/robotics12050141.

Abstract:
Reinforcement learning (RL) is explored for motor control of a novel pneumatic-driven soft robot modeled after continuum media with a varying density. This model complies with closed-form Lagrangian dynamics, which fulfills the fundamental structural property of passivity, among others. Then, the question arises of how to synthesize a passivity-based RL model to control the unknown continuum soft robot dynamics to exploit its input–output energy properties advantageously throughout a reward-based neural network controller. Thus, we propose a continuous-time Actor–Critic scheme for tracking tas
34

Van, Mien. "Higher-order terminal sliding mode controller for fault accommodation of Lipschitz second-order nonlinear systems using fuzzy neural network." Applied Soft Computing 104 (June 2021): 107186. http://dx.doi.org/10.1016/j.asoc.2021.107186.

35

Jiao, Yulin, Feng Xiao, Wenjuan Zhang, Shujuan Huang, Hao Lu, and Zhaoting Lu. "Image Inpainting based on Gated Convolution and spectral Normalization." Frontiers in Computing and Intelligent Systems 6, no. 2 (2023): 96–100. http://dx.doi.org/10.54097/wkezn917.

Abstract:
Traditional image inpainting methods based on deep learning have the problem of insufficient discrimination between the missing area and the global area information in the feature extraction of image inpainting tasks because of the characteristics of the model constructed by the convolutional layer. At the same time, the traditional generative adversarial network often has problems such as training difficulties and model collapse in the training process. To solve the above problems and improve the repair effect of the model, this paper proposes a dual discriminator image inpainting model based
36

Li, Cuiying, Rui Wu, and Ranzhuo Ma. "Existence of solutions for Caputo fractional iterative equations under several boundary value conditions." AIMS Mathematics 8, no. 1 (2022): 317–39. http://dx.doi.org/10.3934/math.2023015.

Abstract:
In this paper, we investigate the existence and uniqueness of solutions for nonlinear quadratic iterative equations in the sense of the Caputo fractional derivative with different boundary conditions. Under a one-sided-Lipschitz condition on the nonlinear term, the existence and uniqueness of a solution for the boundary value problems of Caputo fractional iterative equations with arbitrary order is demonstrated by applying the Leray-Schauder fixed point theorem and topological degree theory, where the solution for the case of fractional order greater than 1 is monotoni
37

Tong, Qingbin, Feiyu Lu, Ziwei Feng, et al. "A Novel Method for Fault Diagnosis of Bearings with Small and Imbalanced Data Based on Generative Adversarial Networks." Applied Sciences 12, no. 14 (2022): 7346. http://dx.doi.org/10.3390/app12147346.

Abstract:
The data-driven intelligent fault diagnosis method of rolling bearings has strict requirements regarding the number and balance of fault samples. However, in practical engineering application scenarios, mechanical equipment is usually in a normal state, and small and imbalanced (S & I) fault samples are common, which seriously reduces the accuracy and stability of the fault diagnosis model. To solve this problem, an auxiliary classifier generative adversarial network with spectral normalization (ACGAN-SN) is proposed in this paper. First, a generation module based on a deconvolution layer
38

Pauli, Patricia, Anne Koch, Julian Berberich, Paul Kohler, and Frank Allgower. "Training Robust Neural Networks Using Lipschitz Bounds." IEEE Control Systems Letters 6 (2022): 121–26. http://dx.doi.org/10.1109/lcsys.2021.3050444.

39

Negrini, Elisa, Giovanna Citti, and Luca Capogna. "System identification through Lipschitz regularized deep neural networks." Journal of Computational Physics 444 (November 2021): 110549. http://dx.doi.org/10.1016/j.jcp.2021.110549.

40

Zou, Dongmian, Radu Balan, and Maneesh Singh. "On Lipschitz Bounds of General Convolutional Neural Networks." IEEE Transactions on Information Theory 66, no. 3 (2020): 1738–59. http://dx.doi.org/10.1109/tit.2019.2961812.

Full text
APA, Harvard, Vancouver, ISO and other styles
41

Laurel, Jacob, Rem Yang, Gagandeep Singh, and Sasa Misailovic. "A dual number abstraction for static analysis of Clarke Jacobians." Proceedings of the ACM on Programming Languages 6, POPL (2022): 1–30. http://dx.doi.org/10.1145/3498718.

Full text
Abstract (summary):
We present a novel abstraction for bounding the Clarke Jacobian of a Lipschitz continuous, but not necessarily differentiable, function over a local input region. To do so, we leverage a novel abstract domain built upon dual numbers, adapted to soundly over-approximate all first derivatives needed to compute the Clarke Jacobian. We formally prove that our novel forward-mode dual interval evaluation produces a sound, interval domain-based over-approximation of the true Clarke Jacobian for a given input region. Due to the generality of our formalism, we can compute and analyze interval Clarke Jacobians…
APA, Harvard, Vancouver, ISO and other styles
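The forward-mode dual-number evaluation underlying this abstraction can be illustrated in a much-simplified, point-valued form. The paper's actual domain uses interval-valued duals to soundly cover nondifferentiable points; this sketch only shows how a derivative part propagates through arithmetic alongside the primal value:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # primal value
    eps: float  # derivative (tangent) part

    def __add__(self, other):
        return Dual(self.val + other.val, self.eps + other.eps)

    def __mul__(self, other):
        # Product rule propagates the derivative part.
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)

def f(x):
    # f(x) = x^2 + x, so f'(x) = 2x + 1.
    return x * x + x

d = f(Dual(3.0, 1.0))  # seed eps = 1 to obtain df/dx
# d.val = 12.0 (= f(3)), d.eps = 7.0 (= f'(3))
```

Replacing the `float` fields with intervals, and defining the operations to over-approximate all subgradients at kinks, is the step that turns this into a sound Clarke-Jacobian analysis.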
42

García Cabello, Julia. "Mathematical Neural Networks." Axioms 11, no. 2 (2022): 80. http://dx.doi.org/10.3390/axioms11020080.

Full text
Abstract (summary):
ANNs succeed in several tasks for real scenarios due to their high learning abilities. This paper focuses on theoretical aspects of ANNs to enhance the capacity of implementing those modifications that make ANNs absorb the defining features of each scenario. This work may also be encompassed within the trend devoted to providing mathematical explanations of ANN performance, with special attention to activation functions. The base algorithm has been mathematically decoded to analyse the required features of activation functions regarding their impact on the training process and on the applicability…
APA, Harvard, Vancouver, ISO and other styles
43

Ma, Shuo, and Yanmei Kang. "Exponential synchronization of delayed neutral-type neural networks with Lévy noise under non-Lipschitz condition." Communications in Nonlinear Science and Numerical Simulation 57 (April 2018): 372–87. http://dx.doi.org/10.1016/j.cnsns.2017.10.012.

Full text
APA, Harvard, Vancouver, ISO and other styles
44

Neumayer, Sebastian, Alexis Goujon, Pakshal Bohra, and Michael Unser. "Approximation of Lipschitz Functions Using Deep Spline Neural Networks." SIAM Journal on Mathematics of Data Science 5, no. 2 (2023): 306–22. http://dx.doi.org/10.1137/22m1504573.

Full text
APA, Harvard, Vancouver, ISO and other styles
45

Song, Xueli, and Jigen Peng. "Global Asymptotic Stability of Impulsive CNNs with Proportional Delays and Partially Lipschitz Activation Functions." Abstract and Applied Analysis 2014 (2014): 1–11. http://dx.doi.org/10.1155/2014/832892.

Full text
Abstract (summary):
This paper researches global asymptotic stability of impulsive cellular neural networks with proportional delays and partially Lipschitz activation functions. Firstly, by means of the transformation v_i(t) = u_i(e^t), the impulsive cellular neural networks with proportional delays are transformed into impulsive cellular neural networks with variable coefficients and constant delays. Secondly, we provide novel criteria for the uniqueness and exponential stability of the equilibrium point of the latter by relative nonlinear measure and prove that the exponential stability of the equilibrium point of…
APA, Harvard, Vancouver, ISO and other styles
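The change of variables mentioned in the abstract can be written out explicitly. This is a sketch under the usual proportional-delay setup with delay factor 0 < q < 1, following the abstract's notation:

```latex
% Change of variables: t = e^s, with v_i(s) := u_i(e^s).
% A proportional delay then becomes a constant delay \tau = -\ln q > 0:
u_j(q t) \;=\; u_j(q e^s) \;=\; u_j\!\bigl(e^{\,s + \ln q}\bigr)
         \;=\; v_j(s + \ln q), \qquad 0 < q < 1,
% while the chain rule introduces a variable coefficient e^s:
\frac{d v_i}{d s}(s) \;=\; e^s \, \frac{d u_i}{d t}(e^s).
```

This is exactly why the transformed system has "variable coefficients and constant delays": the delay q becomes the fixed lag -ln q, at the price of the time-dependent factor e^s.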
46

Han, Fangfang, Bin Liu, Junchao Zhu, and Baofeng Zhang. "Algorithm Design for Edge Detection of High-Speed Moving Target Image under Noisy Environment." Sensors 19, no. 2 (2019): 343. http://dx.doi.org/10.3390/s19020343.

Full text
Abstract (summary):
For some measurement and detection applications based on video (sequence images), if the exposure time of the camera is not suited to the motion speed of the photographed target, blurred edges are produced in the image, and poor lighting conditions aggravate this edge-blur phenomenon. In particular, the presence of noise in industrial field environments makes the extraction of blurred edges an even more difficult problem when analyzing the posture of a high-speed moving target. Because noise and edges are both high-frequency information, it is difficult to make trade-offs…
APA, Harvard, Vancouver, ISO and other styles
47

Becktor, Jonathan, Frederik Schöller, Evangelos Boukas, Mogens Blanke, and Lazaros Nalpantidis. "Lipschitz Constrained Neural Networks for Robust Object Detection at Sea." IOP Conference Series: Materials Science and Engineering 929 (November 27, 2020): 012023. http://dx.doi.org/10.1088/1757-899x/929/1/012023.

Full text
APA, Harvard, Vancouver, ISO and other styles
48

Aziznejad, Shayan, Harshit Gupta, Joaquim Campos, and Michael Unser. "Deep Neural Networks With Trainable Activations and Controlled Lipschitz Constant." IEEE Transactions on Signal Processing 68 (2020): 4688–99. http://dx.doi.org/10.1109/tsp.2020.3014611.

Full text
APA, Harvard, Vancouver, ISO and other styles
49

Delaney, Blaise, Nicole Schulte, Gregory Ciezarek, Niklas Nolte, Mike Williams, and Johannes Albrecht. "Applications of Lipschitz neural networks to the Run 3 LHCb trigger system." EPJ Web of Conferences 295 (2024): 09005. http://dx.doi.org/10.1051/epjconf/202429509005.

Full text
Abstract (summary):
The operating conditions defining the current data taking campaign at the Large Hadron Collider, known as Run 3, present unparalleled challenges for the real-time data acquisition workflow of the LHCb experiment at CERN. To address the anticipated surge in luminosity and consequent event rate, the LHCb experiment is transitioning to a fully software-based trigger system. This evolution necessitated innovations in hardware configurations, software paradigms, and algorithmic design. A significant advancement is the integration of monotonic Lipschitz Neural Networks into the LHCb trigger system.
APA, Harvard, Vancouver, ISO and other styles
50

Mallat, Stéphane, Sixin Zhang, and Gaspar Rochette. "Phase harmonic correlations and convolutional neural networks." Information and Inference: A Journal of the IMA 9, no. 3 (2019): 721–47. http://dx.doi.org/10.1093/imaiai/iaz019.

Full text
Abstract (summary):
A major issue in harmonic analysis is to capture the phase dependence of frequency representations, which carries important signal properties. It seems that convolutional neural networks have found a way. Over time-series and images, convolutional networks often learn a first layer of filters that are well localized in the frequency domain, with different phases. We show that a rectifier then acts as a filter on the phase of the resulting coefficients. It computes signal descriptors that are local in space, frequency and phase. The nonlinear phase filter becomes a multiplicative operator…
APA, Harvard, Vancouver, ISO and other styles