Journal articles on the topic "Generalization bound"

Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles

See the 50 best journal articles for research on the topic "Generalization bound".

Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online if it is available in the metadata.

Browse journal articles from a wide range of scientific fields and compile an accurate bibliography.

1

Cohn, David, and Gerald Tesauro. "How Tight Are the Vapnik-Chervonenkis Bounds?" Neural Computation 4, no. 2 (1992): 249–69. http://dx.doi.org/10.1162/neco.1992.4.2.249.

Abstract:
We describe a series of numerical experiments that measure the average generalization capability of neural networks trained on a variety of simple functions. These experiments are designed to test the relationship between average generalization performance and the worst-case bounds obtained from formal learning theory using the Vapnik-Chervonenkis (VC) dimension (Blumer et al. 1989; Haussler et al. 1990). Recent statistical learning theories (Tishby et al. 1989; Schwartz et al. 1990) suggest that surpassing these bounds might be possible if the spectrum of possible generalizations has a “gap”
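For context, a typical worst-case VC bound of the kind these experiments probe states that, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, every hypothesis $h$ from a class of VC dimension $d$ satisfies (in one commonly quoted form; the constants vary between statements) $R(h) \le \hat{R}(h) + \sqrt{\big(d(\ln(2n/d)+1) + \ln(4/\delta)\big)/n}$, so the guaranteed gap shrinks only at roughly the rate $\sqrt{d/n}$, which is the worst-case rate the measured average-case generalization is compared against.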
2

Pereira, Rajesh, and Mohammad Ali Vali. "Generalizations of the Cauchy and Fujiwara Bounds for Products of Zeros of a Polynomial." Electronic Journal of Linear Algebra 31 (February 5, 2016): 565–71. http://dx.doi.org/10.13001/1081-3810.3333.

Abstract:
The Cauchy bound is one of the best known upper bounds for the modulus of the zeros of a polynomial. The Fujiwara bound is another useful upper bound for the modulus of the zeros of a polynomial. In this paper, compound matrices are used to derive a generalization of both the Cauchy bound and the Fujiwara bound. This generalization yields upper bounds for the modulus of the product of $m$ zeros of the polynomial.
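For reference, the two classical results being generalized can be stated as follows (standard formulations, not the paper's compound-matrix version): every zero $z$ of $p(z) = a_n z^n + a_{n-1} z^{n-1} + \dots + a_0$ with $a_n \neq 0$ satisfies the Cauchy bound $|z| \le 1 + \max_{0 \le k \le n-1} |a_k/a_n|$ and the Fujiwara bound $|z| \le 2 \max\{ |a_{n-1}/a_n|, |a_{n-2}/a_n|^{1/2}, \dots, |a_1/a_n|^{1/(n-1)}, |a_0/(2a_n)|^{1/n} \}$.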
3

Nedovic, M. "Norm bounds for the inverse for generalized Nekrasov matrices in point-wise and block case." Filomat 35, no. 8 (2021): 2705–14. http://dx.doi.org/10.2298/fil2108705n.

Abstract:
Lower-semi-Nekrasov matrices represent a generalization of Nekrasov matrices. For the inverse of lower-semi-Nekrasov matrices, a max-norm bound is proposed. Numerical examples are given to illustrate that the new norm bound can give tighter results compared to already known bounds when applied to Nekrasov matrices. We also present new max-norm bounds for the inverse of lower-semi-Nekrasov matrices in the block case, considering two types of block generalizations, and illustrate the results with numerical examples.
4

Liu, Tongliang, Dacheng Tao, and Dong Xu. "Dimensionality-Dependent Generalization Bounds for k-Dimensional Coding Schemes." Neural Computation 28, no. 10 (2016): 2213–49. http://dx.doi.org/10.1162/neco_a_00872.

Abstract:
The k-dimensional coding schemes refer to a collection of methods that attempt to represent data using a set of representative k-dimensional vectors and include nonnegative matrix factorization, dictionary learning, sparse coding, k-means clustering, and vector quantization as special cases. Previous generalization bounds for the reconstruction error of the k-dimensional coding schemes are mainly dimensionality-independent. A major advantage of these bounds is that they can be used to analyze the generalization error when data are mapped into an infinite- or high-dimensional feature space. How
5

Rubab, Faiza, Hira Nabi, and Asif R. Khan. "GENERALIZATION AND REFINEMENTS OF JENSEN INEQUALITY." Journal of Mathematical Analysis 12, no. 5 (2021): 1–27. http://dx.doi.org/10.54379/jma-2021-5-1.

Abstract:
We give generalizations and refinements of the Jensen and Jensen–Mercer inequalities by using weights which satisfy the conditions of the Jensen and Jensen–Steffensen inequalities. We also give some refinements of the discrete and integral versions of the generalized Jensen–Mercer inequality, which are shown to improve the upper bound for Jensen's difference given in [32]. Applications of our work include new bounds for some important inequalities used in information theory, as well as generalizations of the relations among means.
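In their usual finite forms, the inequalities being refined read: for a convex function $f$ and weights $w_i \ge 0$ with $\sum_i w_i = 1$, Jensen's inequality gives $f\big(\sum_i w_i x_i\big) \le \sum_i w_i f(x_i)$, and the Jensen–Mercer inequality gives, for $x_i \in [a,b]$, $f\big(a + b - \sum_i w_i x_i\big) \le f(a) + f(b) - \sum_i w_i f(x_i)$; the Jensen–Steffensen conditions mentioned above allow such conclusions under weaker, possibly sign-changing, weight assumptions.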
6

Nedovic, M., and Lj Cvetkovic. "Norm bounds for the inverse and error bounds for linear complementarity problems for {P1,P2}-Nekrasov matrices." Filomat 35, no. 1 (2021): 239–50. http://dx.doi.org/10.2298/fil2101239n.

Abstract:
{P1,P2}-Nekrasov matrices represent a generalization of Nekrasov matrices via permutations. In this paper, we obtain an error bound for linear complementarity problems for {P1,P2}-Nekrasov matrices. Numerical examples are given to illustrate that the new error bound can give tighter results compared to already known bounds when applied to Nekrasov matrices. We also present new max-norm bounds for the inverse of {P1,P2}-Nekrasov matrices in the block case, considering two different types of block generalizations. Numerical examples show that the new norm bounds for the block case can give tighter
7

赵, 帆. "A Generalization of Hoffman’s Bound." Advances in Applied Mathematics 14, no. 03 (2025): 123–31. https://doi.org/10.12677/aam.2025.143098.

8

Han, Xinyu, Yi Zhao, and Michael Small. "A tighter generalization bound for reservoir computing." Chaos: An Interdisciplinary Journal of Nonlinear Science 32, no. 4 (2022): 043115. http://dx.doi.org/10.1063/5.0082258.

Abstract:
While reservoir computing (RC) has demonstrated astonishing performance in many practical scenarios, the understanding of its capability for generalization on previously unseen data is limited. To address this issue, we propose a novel generalization bound for RC based on the empirical Rademacher complexity under the probably approximately correct learning framework. Note that the generalization bound for the RC is derived in terms of the model hyperparameters. For this reason, it can explore the dependencies of the generalization bound for RC on its hyperparameters. Compared with the existing
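For orientation, a generic Rademacher-complexity bound of the kind built on here states that for a class $\mathcal{F}$ of $[0,1]$-valued functions and an i.i.d. sample $Z_1,\dots,Z_n$, with probability at least $1-\delta$ every $f \in \mathcal{F}$ satisfies $\mathbb{E}[f(Z)] \le \frac{1}{n}\sum_{i=1}^n f(Z_i) + 2\mathfrak{R}_n(\mathcal{F}) + \sqrt{\ln(1/\delta)/(2n)}$ (the constants depend on the exact statement and on whether the empirical or the expected Rademacher complexity $\mathfrak{R}_n$ is used); the contribution above is to express such a bound for reservoir computing in terms of the model hyperparameters.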
9

Gassner, Niklas, Marcus Greferath, Joachim Rosenthal, and Violetta Weger. "Bounds for Coding Theory over Rings." Entropy 24, no. 10 (2022): 1473. http://dx.doi.org/10.3390/e24101473.

Abstract:
Coding theory where the alphabet is identified with the elements of a ring or a module has become an important research topic over the last 30 years. It has been well established that, with the generalization of the algebraic structure to rings, there is a need to also generalize the underlying metric beyond the usual Hamming weight used in traditional coding theory over finite fields. This paper introduces a generalization of the weight introduced by Shi, Wu and Krotov, called overweight. Additionally, this weight can be seen as a generalization of the Lee weight on the integers modulo 4 and
10

Chen, Jun, Hong Chen, Bin Gu, Guodong Liu, Yingjie Wang, and Weifu Li. "Error Analysis Affected by Heavy-Tailed Gradients for Non-Convex Pairwise Stochastic Gradient Descent." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 15 (2025): 15803–11. https://doi.org/10.1609/aaai.v39i15.33735.

Abstract:
In recent years, there have been a growing number of works studying the generalization properties of stochastic gradient descent (SGD) from the perspective of algorithmic stability. However, few of them devote to simultaneously studying the generalization and optimization for the non-convex setting, especially pairwise SGD with heavy-tailed gradient noise. This paper considers the impact of the heavy-tailed gradient noise obeying sub-Weibull distribution on the stability-based learning guarantees for non-convex pairwise SGD by investigating its generalization and optimization jointly. Specific
11

Abou–Moustafa, Karim, and Csaba Szepesvári. "An Exponential Tail Bound for the Deleted Estimate." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3143–50. http://dx.doi.org/10.1609/aaai.v33i01.33013143.

Abstract:
There is accumulating evidence in the literature that the stability of a learning algorithm is a key characteristic that permits it to generalize. Despite various insightful results in this direction, there seems to be an overlooked dichotomy in the type of stability-based generalization bounds we have in the literature. On one hand, the literature seems to suggest that exponential generalization bounds for the estimated risk, which are optimal, can only be obtained through stringent, distribution-independent and computationally intractable notions of stability such as uniform
12

Masegosa, Andres R., and Luis A. Ortega. "PAC-Chernoff Bounds: Understanding Generalization in the Interpolation Regime." Journal of Artificial Intelligence Research 82 (February 4, 2025): 503–62. https://doi.org/10.1613/jair.1.17036.

Abstract:
This paper introduces a distribution-dependent PAC-Chernoff bound that exhibits perfect tightness for interpolators, even within over-parameterized model classes. This bound, which relies on basic principles of Large Deviation Theory, defines a natural measure of the smoothness of a model, characterized by simple real-valued functions. Building upon this bound and the new concept of smoothness, we present a unified theoretical framework revealing why certain interpolators show exceptional generalization while others falter. We theoretically show how a wide spectrum of modern learning meth
13

Baddai, Saad abood. "A Generalization of t-Practical Numbers." Baghdad Science Journal 17, no. 4 (2020): 1250. http://dx.doi.org/10.21123/bsj.2020.17.4.1250.

Abstract:
This paper generalizes and improves the results of Margenstern, by proving that the number of t-practical numbers, which is defined by , has a lower bound in terms of . This bound is sharper than the Margenstern bound when . Further general results are given for the existence of t-practical numbers, by proving that the interval contains a t-practical number for all
14

Hieu, Nong Minh, Antoine Ledent, Yunwen Lei, and Cheng Yeaw Ku. "Generalization Analysis for Deep Contrastive Representation Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 16 (2025): 17186–94. https://doi.org/10.1609/aaai.v39i16.33889.

Abstract:
In this paper, we present generalization bounds for the unsupervised risk in the Deep Contrastive Representation Learning framework, which employs deep neural networks as representation functions. We approach this problem from two angles. On the one hand, we derive a parameter-counting bound that scales with the overall size of the neural networks. On the other hand, we provide a norm-based bound that scales with the norms of neural networks' weight matrices. Ignoring logarithmic factors, the bounds are independent of the size of the tuples provided for contrastive learning. To the best of our
15

Harada, Masayasu, Francesco Sannino, Joseph Schechter, and Herbert Weigel. "Generalization of the bound state model." Physical Review D 56, no. 7 (1997): 4098–114. http://dx.doi.org/10.1103/physrevd.56.4098.

16

Khare, Niraj, Nishali Mehta, and Naushad Puliyambalath. "Generalization of Erdős–Gallai edge bound." European Journal of Combinatorics 43 (January 2015): 124–30. http://dx.doi.org/10.1016/j.ejc.2014.07.004.

17

Wang, Shusen. "A Sharper Generalization Bound for Divide-and-Conquer Ridge Regression." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 5305–12. http://dx.doi.org/10.1609/aaai.v33i01.33015305.

Abstract:
We study the distributed machine learning problem where the n feature-response pairs are partitioned among m machines uniformly at random. The goal is to approximately solve an empirical risk minimization (ERM) problem with the minimum amount of communication. The divide-and-conquer (DC) method, which was proposed several years ago, lets every worker machine independently solve the same ERM problem using its local feature-response pairs and the driver machine combine the solutions. This approach is one-shot and thereby extremely communication-efficient. Although the DC method has been studi
18

Liu, Xingtu. "Information-Theoretic Generalization Bounds for Batch Reinforcement Learning." Entropy 26, no. 11 (2024): 995. http://dx.doi.org/10.3390/e26110995.

Abstract:
We analyze the generalization properties of batch reinforcement learning (batch RL) with value function approximation from an information-theoretic perspective. We derive generalization bounds for batch RL using (conditional) mutual information. In addition, we demonstrate how to establish a connection between certain structural assumptions on the value function space and conditional mutual information. As a by-product, we derive a high-probability generalization bound via conditional mutual information, which was left open and may be of independent interest.
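As a point of reference for the flavor of these results, the standard single-task information-theoretic bound (in the style of Xu and Raginsky) states that if the loss $\ell(w, Z)$ is $\sigma$-sub-Gaussian for every $w$, then a learning algorithm that outputs $W$ from an i.i.d. sample $S$ of size $n$ satisfies $\big|\mathbb{E}[\mathrm{gen}(W, S)]\big| \le \sqrt{2\sigma^2 I(W; S)/n}$; the batch-RL bounds above place (conditional) mutual-information quantities in the analogous role.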
19

Martin, W. J., and D. R. Stinson. "A Generalized Rao Bound for Ordered Orthogonal Arrays and (t, m, s)-Nets." Canadian Mathematical Bulletin 42, no. 3 (1999): 359–70. http://dx.doi.org/10.4153/cmb-1999-042-x.

Abstract:
In this paper, we provide a generalization of the classical Rao bound for orthogonal arrays, which can be applied to ordered orthogonal arrays and (t, m, s)-nets. Application of our new bound leads to improvements, in many parameter situations, on the strongest known bounds (i.e., necessary conditions) for the existence of these objects.
20

Hagiwara, Katsuyuki. "On the Problem in Model Selection of Neural Network Regression in Overrealizable Scenario." Neural Computation 14, no. 8 (2002): 1979–2002. http://dx.doi.org/10.1162/089976602760128090.

Abstract:
In considering a statistical model selection of neural networks and radial basis functions under an overrealizable case, the problem of unidentifiability emerges. Because the model selection criterion is an unbiased estimator of the generalization error based on the training error, this article analyzes the expected training error and the expected generalization error of neural networks and radial basis functions in overrealizable cases and clarifies the difference from regular models, for which identifiability holds. As a special case of an overrealizable scenario, we assumed a Gaussian noise
21

Jose, Sharu Theresa, and Osvaldo Simeone. "Information-Theoretic Generalization Bounds for Meta-Learning and Applications." Entropy 23, no. 1 (2021): 126. http://dx.doi.org/10.3390/e23010126.

Abstract:
Meta-learning, or “learning to learn”, refers to techniques that infer an inductive bias from data corresponding to multiple related tasks with the goal of improving the sample efficiency for new, previously unobserved, tasks. A key performance measure for meta-learning is the meta-generalization gap, that is, the difference between the average loss measured on the meta-training data and on a new, randomly selected task. This paper presents novel information-theoretic upper bounds on the meta-generalization gap. Two broad classes of meta-learning algorithms are considered that use either separ
22

Chen, Jun, Hong Chen, Xue Jiang, et al. "On the Stability and Generalization of Triplet Learning." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7033–41. http://dx.doi.org/10.1609/aaai.v37i6.25859.

Abstract:
Triplet learning, i.e., learning from triplet data, has attracted much attention in computer vision tasks with an extremely large number of categories, e.g., face recognition and person re-identification. Despite rapid progress in designing and applying triplet learning algorithms, the theoretical understanding of their generalization performance remains lacking. To fill this gap, this paper investigates the generalization guarantees of triplet learning by leveraging stability analysis. Specifically, we establish the first general high-probability generalization bound for th
23

Kiernan, Barbara J., and David P. Snow. "Bound-Morpheme Generalization by Children With SLI." Journal of Speech, Language, and Hearing Research 42, no. 3 (1999): 649–62. http://dx.doi.org/10.1044/jslhr.4203.649.

Abstract:
We investigated whether limited bound-morpheme generalization (BMG) by preschool children with SLI is functionally related to limited learning of training targets (words, affixed forms). Thirty children with SLI and 30 age- and gender-matched controls participated in the study. Production probes revealed a dissociation between learning and generalization performance. In addition, the number of children who achieved criterion-level BMG increased abruptly during an additional instructional experience with new training targets. These findings suggest that positive evidence of a bound morpheme's genera
24

Guo, Zheng-Chu, and Yiming Ying. "Guaranteed Classification via Regularized Similarity Learning." Neural Computation 26, no. 3 (2014): 497–522. http://dx.doi.org/10.1162/neco_a_00556.

Abstract:
Learning an appropriate (dis)similarity function from the available data is a central problem in machine learning, since the success of many machine learning algorithms critically depends on the choice of a similarity function to compare examples. Despite many approaches to similarity metric learning that have been proposed, there has been little theoretical study on the links between similarity metric learning and the classification performance of the resulting classifier. In this letter, we propose a regularized similarity learning formulation associated with general matrix norms and establi
25

Cao, Yuan, and Quanquan Gu. "Generalization Error Bounds of Gradient Descent for Learning Over-Parameterized Deep ReLU Networks." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (2020): 3349–56. http://dx.doi.org/10.1609/aaai.v34i04.5736.

Abstract:
Empirical studies show that gradient-based methods can learn deep neural networks (DNNs) with very good generalization performance in the over-parameterization regime, where DNNs can easily fit a random labeling of the training data. Very recently, a line of work explains in theory that with over-parameterization and proper random initialization, gradient-based methods can find the global minima of the training loss for DNNs. However, existing generalization error bounds are unable to explain the good generalization performance of over-parameterized DNNs. The major limitation of most existing
26

NG, WING W. Y., DANIEL S. YEUNG, and ERIC C. C. TSANG. "THE LOCALIZED GENERALIZATION ERROR MODEL FOR SINGLE LAYER PERCEPTRON NEURAL NETWORK AND SIGMOID SUPPORT VECTOR MACHINE." International Journal of Pattern Recognition and Artificial Intelligence 22, no. 01 (2008): 121–35. http://dx.doi.org/10.1142/s0218001408006168.

Abstract:
We previously developed the localized generalization error model for supervised learning with minimization of the mean square error. In this work, we extend the error model to the Single Layer Perceptron Neural Network (SLPNN) and the Support Vector Machine (SVM) with sigmoid kernel function. For a trained SLPNN or SVM and a given training dataset, the proposed error model bounds from above the error for unseen samples which are similar to the training samples. As the major component of the localized generalization error model, the stochastic sensitivity measure formula for the perceptron neural network derived in this w
27

Yangjit, Wijit. "On the Montgomery–Vaughan weighted generalization of Hilbert’s inequality." Proceedings of the American Mathematical Society, Series B 10, no. 38 (2023): 439–54. http://dx.doi.org/10.1090/bproc/199.

Abstract:
This paper concerns the problem of determining the optimal constant in the Montgomery–Vaughan weighted generalization of Hilbert's inequality. We consider an approach pursued by previous authors via a parametric family of inequalities. We obtain upper and lower bounds for the constants in the inequalities in this family. A lower bound indicates that the method in its current form cannot achieve any value below 3.19497, and so cannot achieve the conjectured constant $\pi$. The problem of determining the optimal constant remains open.
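The inequality in question, in its commonly quoted form, asserts that for distinct real numbers $\lambda_r$ with local spacings $\delta_r = \min_{s \neq r} |\lambda_r - \lambda_s|$ and complex numbers $a_r$, $\big|\sum_{r \neq s} a_r \overline{a_s}/(\lambda_r - \lambda_s)\big| \le C \sum_r |a_r|^2/\delta_r$; Montgomery and Vaughan obtained $C = \tfrac{3}{2}\pi$, the conjectured optimal value is $C = \pi$, and the result above shows that the parametric approach cannot reach any constant below 3.19497.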
28

Poliquin, Guillaume. "Principal frequency of the p-Laplacian and the inradius of Euclidean domains." Journal of Topology and Analysis 07, no. 03 (2015): 505–11. http://dx.doi.org/10.1142/s1793525315500211.

Abstract:
We study the lower bounds for the principal frequency of the p-Laplacian on N-dimensional Euclidean domains. For p > N, we obtain a lower bound for the first eigenvalue of the p-Laplacian in terms of its inradius, without any assumptions on the topology of the domain. Moreover, we show that a similar lower bound can be obtained if p > N - 1 assuming the boundary is connected. This result can be viewed as a generalization of the classical bounds for the first eigenvalue of the Laplace operator on simply connected planar domains.
29

Chadan, K., R. Kobayashi, A. Martin, and J. Stubbe. "Generalization of the Calogero–Cohn bound on the number of bound states." Journal of Mathematical Physics 37, no. 3 (1996): 1106–14. http://dx.doi.org/10.1063/1.531450.

30

Wu, Liang, Ruixi Hu, and Yunwen Lei. "Stability-based Generalization Analysis of Randomized Coordinate Descent for Pairwise Learning." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 20 (2025): 21545–53. https://doi.org/10.1609/aaai.v39i20.35457.

Abstract:
Pairwise learning includes various machine learning tasks, with ranking and metric learning serving as the primary representatives. While randomized coordinate descent (RCD) is popular in various problems, there is much less theoretical analysis on the generalization behavior of models trained by RCD, especially under the pairwise learning framework. In this paper, we consider the generalization of RCD for pairwise learning. We measure the on-average argument stability for both convex and strongly convex objective functions, based on which we develop generalization bounds in expectation. The e
31

Ying, Yiming, and Colin Campbell. "Rademacher Chaos Complexities for Learning the Kernel Problem." Neural Computation 22, no. 11 (2010): 2858–86. http://dx.doi.org/10.1162/neco_a_00028.

Abstract:
We develop a novel generalization bound for learning the kernel problem. First, we show that the generalization analysis of the kernel learning problem reduces to investigation of the suprema of the Rademacher chaos process of order 2 over candidate kernels, which we refer to as Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity by well-established metric entropy integrals and pseudo-dimension of the set of candidate kernels. Our new methodology mainly depends on the principal theory of U-processes and entropy integrals. Finally, we establish s
32

YERNAUX, GONZAGUE, and WIM VANHOOF. "Anti-unification in Constraint Logic Programming." Theory and Practice of Logic Programming 19, no. 5-6 (2019): 773–89. http://dx.doi.org/10.1017/s1471068419000188.

Abstract:
Anti-unification refers to the process of generalizing two (or more) goals into a single, more general, goal that captures some of the structure that is common to all initial goals. In general one is typically interested in computing what is often called a most specific generalization, that is, a generalization that captures a maximal amount of shared structure. In this work we address the problem of anti-unification in CLP, where goals can be seen as unordered sets of atoms and/or constraints. We show that while the concept of a most specific generalization can easily be defined in thi
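As a small first-order illustration of the notion involved (not an example from the paper, which works in the richer CLP setting of goals with constraints): $p(X, f(Y))$ is a generalization of the two atoms $p(a, f(a))$ and $p(b, f(b))$, but their most specific generalization is $p(X, f(X))$, since it additionally captures the shared structure that both argument positions carry the same term.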
33

Bian, Wei, and Dacheng Tao. "Asymptotic Generalization Bound of Fisher’s Linear Discriminant Analysis." IEEE Transactions on Pattern Analysis and Machine Intelligence 36, no. 12 (2014): 2325–37. http://dx.doi.org/10.1109/tpami.2014.2327983.

34

ELIAHOU, SHALOM, and MICHEL KERVAIRE. "BOUNDS ON THE MINIMAL SUMSET SIZE FUNCTION IN GROUPS." International Journal of Number Theory 03, no. 04 (2007): 503–11. http://dx.doi.org/10.1142/s1793042107001085.

Abstract:
In this paper, we give lower and upper bounds for the minimal size $\mu_G(r,s)$ of the sumset (or product set) of two finite subsets of given cardinalities r, s in a group G. Our upper bound holds for solvable groups, our lower bound for arbitrary groups. The results are expressed in terms of variants of the numerical function $\kappa_G(r,s)$, a generalization of the Hopf–Stiefel function that, as shown in [6], exactly models $\mu_G(r,s)$ for G abelian.
35

BRACKEN, PAUL. "ON A GENERALIZATION OF A THEOREM OF LICHNEROWICZ TO MANIFOLDS WITH BOUNDARY." International Journal of Geometric Methods in Modern Physics 08, no. 03 (2011): 639–46. http://dx.doi.org/10.1142/s0219887811005300.

Abstract:
A theorem due to Lichnerowicz which establishes a lower bound on the lowest nonzero eigenvalue of the Laplacian acting on functions on a compact, closed manifold is reviewed. It is shown how this theorem can be extended to the case of a manifold with nonempty boundary. Lower bounds for different boundary conditions, analogous to the empty boundary case, are formulated and some novel proofs are presented.
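The classical statement being extended here reads: if $(M^n, g)$ is a compact, closed Riemannian manifold whose Ricci curvature satisfies $\mathrm{Ric} \ge (n-1)K g$ for some constant $K > 0$, then the first nonzero eigenvalue of the Laplacian obeys $\lambda_1 \ge nK$; the paper formulates analogous lower bounds when the boundary is nonempty, under suitable boundary conditions.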
36

Ma, Zhi-Hao, Zhi-Hua Chen, Shuai Han, Shao-Ming Fei, and Simone Severini. "Improved bounds on negativity of superpositions." Quantum Information and Computation 12, no. 11&12 (2012): 983–88. http://dx.doi.org/10.26421/qic12.11-12-6.

Abstract:
We consider an alternative formula for the negativity based on a simple generalization of the concurrence. We use the formula to bound the amount of entanglement in a superposition of two bipartite pure states of arbitrary dimension. Various examples indicate that our bounds are tighter than the previously known results.
37

Swisher, Linda, Maria Adelaida Restrepo, Elena Plante, and Soren Lowell. "Effect of Implicit and Explicit "Rule" Presentation on Bound-Morpheme Generalization in Specific Language Impairment." Journal of Speech, Language, and Hearing Research 38, no. 1 (1995): 168–73. http://dx.doi.org/10.1044/jshr.3801.168.

Abstract:
This study addressed whether generalization of a trained bound morpheme to untrained vocabulary stems differs between children with specific language impairment (SLI) and children with normal language (NL) under two controlled instructional conditions. Twenty-five children with NL and 25 children with SLI matched for age served as subjects. Contrasts between affixed and unaffixed words highlighted the affixation "rule" in the "implicit-rule" condition. The "rule" was verbalized by the trainer in the "explicit-rule" condition. Bimodal generalization results occurred in both subject groups, indi
38

Wu, Liang, Antoine Ledent, Yunwen Lei, and Marius Kloft. "Fine-grained Generalization Analysis of Vector-Valued Learning." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 12 (2021): 10338–46. http://dx.doi.org/10.1609/aaai.v35i12.17238.

Abstract:
Many fundamental machine learning tasks can be formulated as a problem of learning with vector-valued functions, where we learn multiple scalar-valued functions together. Although there is some generalization analysis on different specific algorithms under the empirical risk minimization principle, a unifying analysis of vector-valued learning under a regularization framework is still lacking. In this paper, we initiate the generalization analysis of regularized vector-valued learning algorithms by presenting bounds with a mild dependency on the output dimension and a fast rate on the sample s
39

Usta, Fuat, and Mehmet Sarikaya. "On generalization conformable fractional integral inequalities." Filomat 32, no. 16 (2018): 5519–26. http://dx.doi.org/10.2298/fil1816519u.

Abstract:
The main issue addressed in this paper is the generalization of Gronwall, Volterra, and Pachpatte type inequalities for conformable differential equations. Using the Katugampola definition of conformable calculus, we find upper and lower bounds for integral inequalities. The established results extend some existing Gronwall, Volterra, and Pachpatte type inequalities from previously published studies.
40

Deng, Xiaoge, Tao Sun, Shengwei Li, and Dongsheng Li. "Stability-Based Generalization Analysis of the Asynchronous Decentralized SGD." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 6 (2023): 7340–48. http://dx.doi.org/10.1609/aaai.v37i6.25894.

Abstract:
The generalization ability often determines the success of machine learning algorithms in practice. Therefore, it is of great theoretical and practical importance to understand and bound the generalization error of machine learning algorithms. In this paper, we provide the first generalization results of the popular stochastic gradient descent (SGD) algorithm in the distributed asynchronous decentralized setting. Our analysis is based on the uniform stability tool, where stable means that the learned model does not change much in small variations of the training set. Under some mild assumption
41

Blinovsky, V. M. "Plotkin bound generalization to the case of multiple packings." Problems of Information Transmission 45, no. 1 (2009): 1–4. http://dx.doi.org/10.1134/s0032946009010013.

42

Lv, Shao-Gao. "Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces." Neural Computation 27, no. 6 (2015): 1294–320. http://dx.doi.org/10.1162/neco_a_00739.

Abstract:
Gradient learning (GL), initially proposed by Mukherjee and Zhou (2006), has been proved to be a powerful tool for conducting variable selection and dimension reduction simultaneously. This approach presents a nonparametric version of a gradient estimator with positive definite kernels without estimating the true function itself, so that the proposed version has wide applicability and allows for complex effects between predictors. In terms of theory, however, existing generalization bounds for GL depend on capacity-independent techniques, and the capacity of kernel classes cannot be charact
43

Zhang, Tong. "Leave-One-Out Bounds for Kernel Methods." Neural Computation 15, no. 6 (2003): 1397–437. http://dx.doi.org/10.1162/089976603321780326.

Abstract:
In this article, we study leave-one-out style cross-validation bounds for kernel methods. The essential element in our analysis is a bound on the parameter estimation stability for regularized kernel formulations. Using this result, we derive bounds on expected leave-one-out cross-validation errors, which lead to expected generalization bounds for various kernel algorithms. In addition, we also obtain variance bounds for leave-one-out errors. We apply our analysis to some classification and regression problems and compare them with previous results.
44

Fanzi, Zeng, and Ma Xiaolong. "The Generalization Error Bound for the Multiclass Analytical Center Classifier." Scientific World Journal 2013 (2013): 1–5. http://dx.doi.org/10.1155/2013/574748.

Abstract:
This paper presents a multiclass classifier based on the analytical center of the feasible space (MACM). This multiclass classifier is formulated as a quadratically constrained linear optimization problem and does not need to repeatedly construct classifiers to separate a single class from all the others. An upper bound on its generalization error is proved theoretically. Experiments on benchmark datasets validate the generalization performance of MACM.
45

Yang, Yanxia, Pu Wang, and Xuejin Gao. "A Novel Radial Basis Function Neural Network with High Generalization Performance for Nonlinear Process Modelling." Processes 10, no. 1 (2022): 140. http://dx.doi.org/10.3390/pr10010140.

Abstract:
A radial basis function neural network (RBFNN), with a strong function approximation ability, was proven to be an effective tool for nonlinear process modeling. However, in many instances, the sample set is limited and the model evaluation error is fixed, which makes it very difficult to construct an optimal network structure to ensure the generalization ability of the established nonlinear process model. To solve this problem, a novel RBFNN with high generalization performance (RBFNN-GP) is proposed in this paper. The proposed RBFNN-GP consists of three contributions. First, a local generaliza
46

Cahn, Patricia. "A generalization of Turaev’s virtual string cobracket and self-intersections of virtual strings." Communications in Contemporary Mathematics 19, no. 04 (2016): 1650053. http://dx.doi.org/10.1142/s021919971650053x.

Abstract:
Previously we defined an operation [Formula: see text] that generalizes Turaev’s cobracket for loops on a surface. We showed that, in contrast to the cobracket, this operation gives a formula for the minimum number of self-intersections of a loop in a given free homotopy class. In this paper, we consider the corresponding question for virtual strings, and conjecture that [Formula: see text] gives a formula for the minimum number of self-intersection points of a virtual string in a given virtual homotopy class. To support the conjecture, we show that [Formula: see text] gives a bound on the min
47

Martinazzo, Rocco, and Eli Pollak. "Lower bounds to eigenvalues of the Schrödinger equation by solution of a 90-y challenge." Proceedings of the National Academy of Sciences 117, no. 28 (2020): 16181–86. http://dx.doi.org/10.1073/pnas.2007093117.

Abstract:
The Ritz upper bound to eigenvalues of Hermitian operators is essential for many applications in science. It is a staple of quantum chemistry and physics computations. The lower bound devised by Temple in 1928 [G. Temple, Proc. R. Soc. A Math. Phys. Eng. Sci. 119, 276–293 (1928)] is not, since it converges too slowly. The need for a good lower-bound theorem and algorithm cannot be overstated, since an upper bound alone is not sufficient for determining differences between eigenvalues such as tunneling splittings and spectral features. In this paper, after 90 y, we derive a generalization and imp
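Temple's 1928 lower bound referred to here has the standard form: for a normalized trial state $\psi$ with $\langle H \rangle = \langle \psi | H | \psi \rangle$ and variance $\sigma^2 = \langle \psi | H^2 | \psi \rangle - \langle H \rangle^2$, and provided $\langle H \rangle$ lies below the first excited eigenvalue $E_1$, one has $E_0 \ge \langle H \rangle - \sigma^2/(E_1 - \langle H \rangle)$, complementing the Ritz upper bound $E_0 \le \langle H \rangle$; its slow convergence is the shortcoming that the generalization derived above is designed to overcome.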
48

Meng, Juan, Guyu Hu, Dong Li, Yanyan Zhang, and Zhisong Pan. "Generalization Bounds Derived IPM-Based Regularization for Domain Adaptation." Computational Intelligence and Neuroscience 2016 (2016): 1–8. http://dx.doi.org/10.1155/2016/7046563.

Abstract:
Domain adaptation has received much attention as a major form of transfer learning. One issue that should be considered in domain adaptation is the gap between the source domain and the target domain. In order to improve the generalization ability of domain adaptation methods, we propose a framework for domain adaptation combining source and target data, with a new regularizer which takes generalization bounds into account. This regularization term considers the integral probability metric (IPM) as the distance between the source domain and the target domain and thus can bound up the testing error of an ex
49

Hosseinian, Seyedmohammadhossein, Dalila B. M. M. Fontes, and Sergiy Butenko. "A Lagrangian Bound on the Clique Number and an Exact Algorithm for the Maximum Edge Weight Clique Problem." INFORMS Journal on Computing 32, no. 3 (2020): 747–62. http://dx.doi.org/10.1287/ijoc.2019.0898.

Abstract:
This paper explores the connections between the classical maximum clique problem and its edge-weighted generalization, the maximum edge weight clique (MEWC) problem. As a result, a new analytic upper bound on the clique number of a graph is obtained and an exact algorithm for solving the MEWC problem is developed. The bound on the clique number is derived using a Lagrangian relaxation of an integer (linear) programming formulation of the MEWC problem. Furthermore, coloring-based bounds on the clique number are used in a novel upper-bounding scheme for the MEWC problem. This scheme is employed
50

Shen, Yukai. "$ k $th powers in a generalization of Piatetski-Shapiro sequences." AIMS Mathematics 8, no. 9 (2023): 22411–18. http://dx.doi.org/10.3934/math.20231143.

Abstract:
The article considers a generalization of Piatetski-Shapiro sequences in the sense of Beatty sequences. The sequence is defined by $\left(\left\lfloor\alpha n^c+\beta\right\rfloor\right)_{n = 1}^{\infty}$, where $\alpha \geq 1$, $c > 1$, and $\beta$ are real numbers. The focus of the study is on solving equations of the form $\left\lfloor \alpha n^c +\beta\right\rfloor = s m^k$, where $m$ and $n$ are positive integers, $1 \leq n \leq N$, and $s$ is an integer. Bounds for the solutions are obtained for different values of the