Academic literature on the topic 'Learning from Constraints'


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Learning from Constraints.'


Journal articles on the topic "Learning from Constraints"

1. Cropper, Andrew, and Rolf Morel. "Learning programs by learning from failures." Machine Learning 110, no. 4 (2021): 801–56. http://dx.doi.org/10.1007/s10994-020-05934-z.

Abstract:
We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times.
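The generate-test-constrain loop described in this abstract can be sketched in a few lines of Python. The helpers below (generate_hypothesis, entails, and the two constraint builders) are hypothetical placeholders; the actual Popper system implements these stages with answer set programming and Prolog.

```python
# Minimal sketch of the generate-test-constrain loop; all helper functions are
# hypothetical placeholders standing in for Popper's ASP/Prolog machinery.

def learn_from_failures(pos_examples, neg_examples, hypothesis_constraints,
                        generate_hypothesis, entails,
                        generalisation_constraint, specialisation_constraint):
    constraints = set(hypothesis_constraints)
    while True:
        # Generate: propose a hypothesis consistent with all current constraints.
        hypothesis = generate_hypothesis(constraints)
        if hypothesis is None:          # hypothesis space exhausted
            return None
        # Test: check the hypothesis against the training examples.
        too_specific = any(not entails(hypothesis, e) for e in pos_examples)
        too_general = any(entails(hypothesis, e) for e in neg_examples)
        if not too_specific and not too_general:
            return hypothesis           # entails all positives, no negatives
        # Constrain: prune the hypothesis space based on how the hypothesis failed.
        if too_general:
            constraints.add(generalisation_constraint(hypothesis))
        if too_specific:
            constraints.add(specialisation_constraint(hypothesis))
```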
2. Chou, Glen, Dmitry Berenson, and Necmiye Ozay. "Learning constraints from demonstrations with grid and parametric representations." International Journal of Robotics Research 40, no. 10-11 (2021): 1255–83. http://dx.doi.org/10.1177/02783649211035177.

Abstract:
We extend the learning from demonstration paradigm by providing a method for learning unknown constraints shared across tasks, using demonstrations of the tasks, their cost functions, and knowledge of the system dynamics and control constraints. Given safe demonstrations, our method uses hit-and-run sampling to obtain lower cost, and thus unsafe, trajectories. Both safe and unsafe trajectories are used to obtain a consistent representation of the unsafe set via solving an integer program. Our method generalizes across system dynamics and learns a guaranteed subset of the constraint. In addition, by leveraging a known parameterization of the constraint, we modify our method to learn parametric constraints in high dimensions. We also provide theoretical analysis on what subset of the constraint and safe set can be learnable from safe demonstrations. We demonstrate our method on linear and nonlinear system dynamics, show that it can be modified to work with suboptimal demonstrations, and that it can also be used to learn constraints in a feature space.
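As a rough illustration of the hit-and-run step mentioned in the abstract, the sketch below samples points from a set given only by a membership oracle (for example, trajectories whose cost is below that of a safe, optimal demonstration). The oracle in_set, the fixed step grid, and the crude line search are illustrative assumptions, not the paper's actual sampler over trajectory space.

```python
import numpy as np

def hit_and_run(x0, in_set, n_samples, step_grid=np.linspace(-1.0, 1.0, 201)):
    """Hit-and-run style sampler over a bounded set given by a membership oracle.

    x0     -- a point known to lie in the set (e.g. a safe, optimal demonstration)
    in_set -- hypothetical membership oracle, e.g. lambda x: cost(x) <= cost(x0)
    """
    rng = np.random.default_rng(0)
    x, samples = np.asarray(x0, dtype=float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                 # random direction
        # Crude line search: keep only the step sizes whose points stay in the set.
        feasible = [t for t in step_grid if in_set(x + t * d)]
        if feasible:
            x = x + rng.choice(feasible) * d   # jump to a random feasible point
        samples.append(x.copy())
    return samples
```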
3. Okabe, Masayuki, and Seiji Yamada. "Learning Similarity Matrix from Constraints of Relational Neighbors." Journal of Advanced Computational Intelligence and Intelligent Informatics 14, no. 4 (2010): 402–7. http://dx.doi.org/10.20965/jaciii.2010.p0402.

Abstract:
This paper describes a method for learning a similarity matrix from pairwise constraints, intended for situations such as interactive clustering, where little user feedback can be expected. Because only a small number of pairwise constraints is available, the method also exploits additional constraints induced by the affinity relationship between the constrained data and their neighbors. The similarity matrix is learned by solving an optimization problem formulated as a semidefinite program, in which the additional constraints play a complementary role. Experimental results on several clustering tasks confirm the effectiveness of the proposed method and indicate that it is a promising approach.
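A minimal sketch of this kind of semidefinite program, written with cvxpy, is given below. The particular objective, the box and diagonal constraints, and the way must-link and cannot-link pairs enter the problem are illustrative assumptions rather than the paper's exact formulation.

```python
import cvxpy as cp

def learn_similarity_matrix(n, must_link, cannot_link, base_similarity):
    """Illustrative SDP: find a PSD similarity matrix close to a base similarity
    while pushing must-link pairs toward 1 and cannot-link pairs toward 0."""
    S = cp.Variable((n, n), PSD=True)
    loss = cp.sum_squares(S - base_similarity)
    loss += sum(cp.square(S[i, j] - 1.0) for (i, j) in must_link)
    loss += sum(cp.square(S[i, j]) for (i, j) in cannot_link)
    constraints = [S <= 1, S >= 0, cp.diag(S) == 1]
    cp.Problem(cp.Minimize(loss), constraints).solve()
    return S.value
```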
4. Mueller, Carl L. "Abstract Constraints for Safe and Robust Robot Learning from Demonstration." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (2020): 13728–29. http://dx.doi.org/10.1609/aaai.v34i10.7136.

Abstract:
My thesis research incorporates high-level abstract behavioral requirements, called ‘conceptual constraints’, into the modeling processes of robot Learning from Demonstration (LfD) techniques. My most recent work introduces an LfD algorithm called Concept Constrained Learning from Demonstration. This algorithm encodes motion planning constraints as temporal Boolean operators that enforce high-level constraints over portions of the robot's motion plan during learned skill execution. This results in more easily trained, more robust, and safer learned skills. Future work will incorporate conceptual constraints into human-aware motion planning algorithms. Additionally, my research will investigate how these concept constrained algorithms and models are best incorporated into effective interfaces for end-users.
5. Kato, Tsuyoshi, Wataru Fujibuchi, and Kiyoshi Asai. "Learning Kernels from Distance Constraints." IPSJ Digital Courier 2 (2006): 441–51. http://dx.doi.org/10.2197/ipsjdc.2.441.

6. Farina, Francesco, Stefano Melacci, Andrea Garulli, and Antonio Giannitrapani. "Asynchronous Distributed Learning From Constraints." IEEE Transactions on Neural Networks and Learning Systems 31, no. 10 (2020): 4367–73. http://dx.doi.org/10.1109/tnnls.2019.2947740.

7. Hammer, Rubi, Tomer Hertz, Shaul Hochstein, and Daphna Weinshall. "Category learning from equivalence constraints." Cognitive Processing 10, no. 3 (2008): 211–32. http://dx.doi.org/10.1007/s10339-008-0243-x.

8. Armesto, Leopoldo, João Moura, Vladimir Ivan, Mustafa Suphi Erden, Antonio Sala, and Sethu Vijayakumar. "Constraint-aware learning of policies by demonstration." International Journal of Robotics Research 37, no. 13-14 (2018): 1673–89. http://dx.doi.org/10.1177/0278364918784354.

Abstract:
Many practical tasks in robotic systems, such as cleaning windows, writing, or grasping, are inherently constrained. Learning policies subject to constraints is a challenging problem. In this paper, we propose a method of constraint-aware learning that solves the policy learning problem using redundant robots that execute a policy that is acting in the null space of a constraint. In particular, we are interested in generalizing learned null-space policies across constraints that were not known during the training. We split the combined problem of learning constraints and policies into two: first estimating the constraint, and then estimating a null-space policy using the remaining degrees of freedom. For a linear parametrization, we provide a closed-form solution of the problem. We also define a metric for comparing the similarity of estimated constraints, which is useful to pre-process the trajectories recorded in the demonstrations. We have validated our method by learning a wiping task from human demonstration on flat surfaces and reproducing it on an unknown curved surface using a force- or torque-based controller to achieve tool alignment. We show that, despite the differences between the training and validation scenarios, we learn a policy that still provides the desired wiping motion.
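One simplified reading of the two-step procedure (estimate the constraint, then fit a null-space policy under a linear parametrization) is sketched below. The feature map, the way the null-space component of each demonstrated action is extracted, and the plain least-squares fit are assumptions made for illustration; they are not the closed-form solution given in the paper.

```python
import numpy as np

def fit_null_space_policy(A_hat, states, actions, features):
    """Sketch of the two-step idea: given an estimated constraint matrix A_hat (k x d),
    fit weights W so that the projected policy N @ W @ phi(x) matches the null-space
    component of the demonstrated actions (T x d) in a least-squares sense.
    `features` is a hypothetical feature map phi(state) -> vector of length m."""
    d = actions.shape[1]
    N = np.eye(d) - np.linalg.pinv(A_hat) @ A_hat   # projector onto the constraint's null space
    Phi = np.stack([features(x) for x in states])   # (T, m) feature matrix
    U_ns = actions @ N.T                            # null-space component of each action
    # Closed-form least squares: minimise || Phi @ Wt - U_ns ||_F over Wt.
    Wt, *_ = np.linalg.lstsq(Phi, U_ns, rcond=None)
    return Wt.T                                     # W with shape (d, m): policy u = N @ W @ phi(x)
```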
9. Hewing, Lukas, Kim P. Wabersich, Marcel Menner, and Melanie N. Zeilinger. "Learning-Based Model Predictive Control: Toward Safe Learning in Control." Annual Review of Control, Robotics, and Autonomous Systems 3, no. 1 (2020): 269–96. http://dx.doi.org/10.1146/annurev-control-090419-075625.

Abstract:
Recent successes in the field of machine learning, as well as the availability of increased sensing and computational capabilities in modern control systems, have led to a growing interest in learning and data-driven control techniques. Model predictive control (MPC), as the prime methodology for constrained control, offers a significant opportunity to exploit the abundance of data in a reliable manner, particularly while taking safety constraints into account. This review aims at summarizing and categorizing previous research on learning-based MPC, i.e., the integration or combination of MPC with learning methods, for which we consider three main categories. Most of the research addresses learning for automatic improvement of the prediction model from recorded data. There is, however, also an increasing interest in techniques to infer the parameterization of the MPC controller, i.e., the cost and constraints, that lead to the best closed-loop performance. Finally, we discuss concepts that leverage MPC to augment learning-based controllers with constraint satisfaction properties.
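The last idea, using MPC to wrap a learned controller with constraint satisfaction, can be sketched as a small quadratic program: apply the input closest to the learned controller's proposal such that the predicted trajectory satisfies state and input constraints. The linear model, the box bounds, and the omission of a terminal safe-set constraint are simplifying assumptions in this sketch; a practical predictive safety filter needs such a terminal condition to guarantee recursive feasibility.

```python
import cvxpy as cp

def safety_filter(x0, u_learned, A, B, horizon=10, u_max=1.0, x_max=5.0):
    """Minimal sketch of an MPC-based safety filter for a linear system x+ = A x + B u:
    stay as close as possible to the learned controller's proposed input while the
    predicted trajectory respects box constraints on states and inputs."""
    n, m = B.shape
    X = cp.Variable((n, horizon + 1))
    U = cp.Variable((m, horizon))
    constraints = [X[:, 0] == x0]
    for k in range(horizon):
        constraints += [X[:, k + 1] == A @ X[:, k] + B @ U[:, k],
                        cp.abs(U[:, k]) <= u_max,
                        cp.abs(X[:, k + 1]) <= x_max]
    # Minimise the deviation of the first applied input from the learned proposal.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(U[:, 0] - u_learned)), constraints)
    prob.solve()
    return U.value[:, 0]
```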
10. Wu, Xintao, and Daniel Barbará. "Learning missing values from summary constraints." ACM SIGKDD Explorations Newsletter 4, no. 1 (2002): 21–30. http://dx.doi.org/10.1145/568574.568579.
