Academic literature on the topic 'Angluin's Algorithm'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Angluin's Algorithm.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Angluin's Algorithm"

1

Okudono, Takamasa, Masaki Waga, Taro Sekiyama, and Ichiro Hasuo. "Weighted Automata Extraction from Recurrent Neural Networks via Regression on State Spaces." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 5306–14. http://dx.doi.org/10.1609/aaai.v34i04.5977.

Abstract:
We present a method to extract a weighted finite automaton (WFA) from a recurrent neural network (RNN). Our method is based on the WFA learning algorithm by Balle and Mohri, which is in turn an extension of Angluin's classic L* algorithm. Our technical novelty is in the use of regression methods for the so-called equivalence queries, thus exploiting the internal state space of an RNN to prioritize counterexample candidates. This way we achieve a quantitative/weighted extension of the recent work by Weiss, Goldberg and Yahav that extracts DFAs. We experimentally evaluate the accuracy, expressivity and efficiency of the extracted WFAs.
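
As a concrete illustration of the equivalence-query step this abstract refers to, here is a minimal Python sketch. Where the paper prioritizes counterexample candidates by regression on the RNN's internal state space, this simplified stand-in just samples random strings and returns the largest observed disagreement; all names and parameters are illustrative.

```python
import random

def approx_equivalence_query(wfa_value, rnn_value, alphabet,
                             max_len=8, samples=2000, tol=1e-3):
    """Search for a word w with |wfa_value(w) - rnn_value(w)| > tol.
    Both arguments are callables from a tuple of symbols to a float.
    Returns a counterexample, or None if none was found (which does
    not prove equivalence, only failure to refute it)."""
    worst, worst_gap = None, tol
    for _ in range(samples):
        length = random.randint(0, max_len)
        w = tuple(random.choice(alphabet) for _ in range(length))
        gap = abs(wfa_value(w) - rnn_value(w))
        if gap > worst_gap:
            worst, worst_gap = w, gap
    return worst
```
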
2

Zhao, Li Hua, Xue Jia Liang, Xiang Peng, Hua Feng Kong, and Mei Zhen Wang. "An Automatic Network Protocol State Machine Inference Method in Protocol Reverse Engineering." Applied Mechanics and Materials 513-517 (February 2014): 2496–501. http://dx.doi.org/10.4028/www.scientific.net/amm.513-517.2496.

Abstract:
Inferring a network protocol's state machine is very useful in network security contexts, both in research and in management. The method follows an extension of Angluin's classic L* algorithm and builds an extended form of Mealy automata to model a communication protocol. The algorithm has been validated by inferring the protocol state machines of the SMTP and FTP protocols, with offline algorithms tested for comparison. The experimental results show that this method identifies network protocol state machines more accurately and is of significant practical value.
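
To make the Mealy-machine setting concrete: in protocol inference, L*'s membership query is typically replaced by an output query that runs an input sequence from a fresh session and records the response. A minimal sketch, with a toy simulator standing in for a real SMTP/FTP endpoint (the simulator is purely illustrative, not the paper's setup):

```python
def output_query(step, reset, inputs):
    """Run `inputs` from a fresh session; return the final output symbol."""
    state = reset()
    out = None
    for sym in inputs:
        state, out = step(state, sym)
    return out

# Toy protocol: must HELO before MAIL; anything else is an error.
def toy_reset():
    return "init"

def toy_step(state, sym):
    if state == "init" and sym == "HELO":
        return "ready", "250 ok"
    if state == "ready" and sym == "MAIL":
        return "mail", "250 ok"
    return state, "503 bad sequence"

print(output_query(toy_step, toy_reset, ["HELO", "MAIL"]))  # 250 ok
print(output_query(toy_step, toy_reset, ["MAIL"]))          # 503 bad sequence
```
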
3

Sitharam, M., and Timothy Straney. "Derandomized Learning of Boolean Functions over Finite Abelian Groups." International Journal of Foundations of Computer Science 12, no. 04 (August 2001): 491–516. http://dx.doi.org/10.1142/s0129054101000618.

Abstract:
We employ the Always Approximately Correct, or AAC, model defined in [35] to prove learnability results for classes of Boolean functions over arbitrary finite Abelian groups. This model is an extension of Angluin's Query model of exact learning. The Boolean functions we consider belong to approximation classes, i.e., functions that are approximable (in various norms) by few Fourier basis functions, or irreducible characters of the domain Abelian group. We contrast our learnability results with previous results for similar classes in the PAC model of learning with and without membership queries. In addition, we discuss new, natural issues and questions that arise when the AAC model is used. One such question is whether a uniform training set is available for learning any function in a given approximation class. No analogous question seems to have been studied in the context of Angluin's Query model. Another question is whether the training set can be found quickly if the approximation class of the function is completely unknown to the learner, or if only partial information about the approximation class is given to the learner (in addition to the answers to membership queries). In order to prove the learnability results in this paper, we require new techniques for efficiently sampling Boolean functions using the character theory of finite Abelian groups, as well as the development of algebraic algorithms. The techniques result in other natural applications closely related to learning, for example, query complexity of deterministic algorithms for testing linearity, efficient pseudorandom generators, and estimating VC dimensions for classes of Boolean functions over finite Abelian groups.
4

Tappler, Martin, Bernhard K. Aichernig, Giovanni Bacci, Maria Eichlseder, and Kim G. Larsen. "L*-based learning of Markov decision processes (extended version)." Formal Aspects of Computing 33, no. 4-5 (March 31, 2021): 575–615. http://dx.doi.org/10.1007/s00165-021-00536-5.

Abstract:
Automata learning techniques automatically generate system models from test observations. Typically, these techniques fall into two categories: passive and active. On the one hand, passive learning assumes no interaction with the system under learning and uses a predetermined training set, e.g., system logs. On the other hand, active learning techniques collect training data by actively querying the system under learning, allowing one to steer the discovery of meaningful information about the system under learning, leading to effective learning strategies. A notable example of an active learning technique for regular languages is Angluin's L*-algorithm. The L*-algorithm describes the strategy of a student who learns the minimal deterministic finite automaton of an unknown regular language L by asking a succinct number of queries to a teacher who knows L. In this work, we study L*-based learning of deterministic Markov decision processes, a class of Markov decision processes where an observation following an action uniquely determines a successor state. For this purpose, we first assume an ideal setting with a teacher who provides perfect information to the student. Then, we relax this assumption and present a novel learning algorithm that collects information by sampling execution traces of the system via testing. Experiments performed on an implementation of our sampling-based algorithm suggest that our method achieves better accuracy than state-of-the-art passive learning techniques using the same amount of test observations. In contrast to existing learning algorithms, which assume a predefined number of states, our algorithm learns the complete model structure including the state space.
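
Since several entries on this page build on Angluin's L*, a compact reference sketch may help. The following Python implementation uses the observation-table formulation, with the suffix-based counterexample handling of the Maler-Pnueli variant rather than Angluin's original consistency check; the toy teacher at the bottom is illustrative.

```python
from itertools import product

def lstar(alphabet, member, equiv):
    """Learn a DFA from a membership oracle member(word) -> bool and an
    equivalence oracle equiv(run) -> counterexample word, or None."""
    S, E = [()], [()]       # access prefixes and distinguishing suffixes
    T = {}                  # observation table: word -> membership bit

    def row(u):
        return tuple(T[u + e] for e in E)

    def fill():
        for s in S:
            for u in [s] + [s + (a,) for a in alphabet]:
                for e in E:
                    if u + e not in T:
                        T[u + e] = member(u + e)

    while True:
        fill()
        # Closedness: each row of S.A must already occur as a row of S.
        s_rows = {row(s) for s in S}
        unclosed = next((s + (a,) for s in S for a in alphabet
                         if row(s + (a,)) not in s_rows), None)
        if unclosed is not None:
            S.append(unclosed)
            continue
        # Hypothesis: one state per (distinct) row of S.
        reps = {}
        for s in S:
            reps.setdefault(row(s), s)
        trans = {(row(s), a): row(s + (a,)) for s in S for a in alphabet}
        accepting = {r for r, s in reps.items() if T[s]}

        def run(word):
            q = row(())
            for a in word:
                q = trans[(q, a)]
            return q in accepting

        cex = equiv(run)
        if cex is None:
            return run      # the learned DFA, as a decision procedure
        # Add all suffixes of the counterexample to E (Maler-Pnueli
        # variant, which sidesteps the separate consistency check).
        for i in range(len(cex) + 1):
            suf = tuple(cex[i:])
            if suf not in E:
                E.append(suf)

# Toy teacher for L = "even number of a's"; equivalence is approximated
# by checking all words up to length 5.
target = lambda w: w.count("a") % 2 == 0

def equiv(run):
    for n in range(6):
        for w in product("ab", repeat=n):
            if run(w) != target(w):
                return w
    return None

dfa = lstar(("a", "b"), target, equiv)
print(dfa(("a", "b", "a")))   # True: two a's
```
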
5

Tani, Seiichiro. "A fast exact quantum algorithm for solitude verification." Quantum Information and Computation 17, no. 1&2 (January 2017): 15–40. http://dx.doi.org/10.26421/qic17.1-2-2.

Abstract:
Solitude verification is arguably one of the simplest fundamental problems in distributed computing, where the goal is to verify that there is a unique contender in a network. This paper devises a quantum algorithm that exactly solves the problem on an anonymous network, which is known as a network model with minimal assumptions [Angluin, STOC'80]. The algorithm runs in O(N) rounds if every party initially has the common knowledge of an upper bound N on the number of parties. This implies that all solvable problems can be solved in O(N) rounds on average without error (i.e., with zero-sided error) on the network. As a generalization, a quantum algorithm that works in O(N log_2(max{k, 2})) rounds is obtained for the problem of exactly computing any symmetric Boolean function, over n distributed input bits, that is constant over all the n bits whose sum is larger than k, for k ∈ {0, 1, …, N − 1}. All these algorithms work with bit complexities bounded by a polynomial in N.
6

Chockler, Hana, Pascal Kesseli, Daniel Kroening, and Ofer Strichman. "Learning the Language of Software Errors." Journal of Artificial Intelligence Research 67 (April 23, 2020): 881–903. http://dx.doi.org/10.1613/jair.1.11798.

Abstract:
We propose to use algorithms for learning deterministic finite automata (DFA), such as Angluin’s L* algorithm, for learning a DFA that describes the possible scenarios under which a given program error occurs. The alphabet of this automaton is given by the user (for instance, a subset of the function call sites or branches), and hence the automaton describes a user-defined abstraction of those scenarios. More generally, the same technique can be used for visualising the behavior of a program or parts thereof. It can also be used for visually comparing different versions of a program (by presenting an automaton for the behavior in the symmetric difference between them), and for assisting in merging several development branches. We present experiments that demonstrate the power of an abstract visual representation of errors and of program segments, accessible via the project’s web page. In addition, our experiments in this paper demonstrate that such automata can be learned efficiently over real-world programs. We also present lazy learning, which is a method for reducing the number of membership queries while using L*, and demonstrate its effectiveness on standard benchmarks.
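
The paper's lazy learning is specific to that work, but the general idea of cutting membership-query cost can be illustrated with a simple memoizing oracle wrapper that answers repeated queries from a cache and counts the queries that actually reach the system under test. A hedged sketch, with illustrative names:

```python
class CachedMembershipOracle:
    """Answer repeated membership queries from a cache; only cache misses
    reach the expensive ground-truth query (e.g. an instrumented run)."""
    def __init__(self, member):
        self._member = member
        self._cache = {}
        self.real_queries = 0    # queries that actually ran on the system

    def __call__(self, word):
        word = tuple(word)
        if word not in self._cache:
            self.real_queries += 1
            self._cache[word] = self._member(word)
        return self._cache[word]
```
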
7

Tîrnăucă, Cristina. "A Survey of State Merging Strategies for DFA Identification in the Limit." Triangle, no. 8 (June 29, 2018): 121. http://dx.doi.org/10.17345/triangle8.121-136.

Abstract:
Identification of deterministic finite automata (DFAs) has an extensive history, both in passive learning and in active learning. Intractability results by Gold [5] and Angluin [1] show that finding the smallest automaton consistent with a set of accepted and rejected strings is NP-complete. Nevertheless, a lot of work has been done on learning DFAs from examples within specific heuristics, starting with Trakhtenbrot and Barzdin's algorithm [15], rediscovered and applied to the discipline of grammatical inference by Gold [5]. Many other algorithms have been developed, the convergence of most of which is based on characteristic sets: RPNI (Regular Positive and Negative Inference) by J. Oncina and P. García [11, 12], Traxbar by K. Lang [8], EDSM (Evidence Driven State Merging), Windowed EDSM and Blue-Fringe EDSM by K. Lang, B. Pearlmutter and R. Price [9], SAGE (Self-Adaptive Greedy Estimate) by H. Juillé [7], etc. This paper provides a comprehensive study of the most important state merging strategies developed so far.
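
As an illustration of the state-merging scheme the survey covers, here is a compact RPNI-style sketch: build a prefix tree acceptor from the samples, then, in breadth-first order, try to merge each state with an earlier one, folding successors to keep the quotient deterministic and rejecting merges that mix accepting and rejecting states. This is a teaching sketch, not the optimized algorithms cited above.

```python
def build_pta(pos, neg):
    """Prefix tree acceptor: states are prefixes of the sample words."""
    delta, labels = {}, {(): None}
    for word, lab in [(w, True) for w in pos] + [(w, False) for w in neg]:
        u = ()
        for a in word:
            v = u + (a,)
            delta[(u, a)] = v
            labels.setdefault(v, None)
            u = v
        labels[u] = lab
    return delta, labels

def attempt(delta, labels, parent, p, q, alphabet):
    """Tentatively merge the classes of p and q, folding successors to
    keep the quotient deterministic; return updated copies, or None on
    an accept/reject conflict."""
    delta, labels, parent = dict(delta), dict(labels), dict(parent)

    def find(s):
        while parent[s] != s:
            s = parent[s]
        return s

    work = [(p, q)]
    while work:
        x, y = work.pop()
        rx, ry = find(x), find(y)
        if rx == ry:
            continue
        lx, ly = labels[rx], labels[ry]
        if lx is not None and ly is not None and lx != ly:
            return None                    # sample conflict: reject merge
        parent[ry] = rx
        if lx is None:
            labels[rx] = ly
        for a in alphabet:
            tx, ty = delta.get((rx, a)), delta.get((ry, a))
            if ty is None:
                continue
            if tx is None:
                delta[(rx, a)] = ty
            else:
                work.append((tx, ty))      # successors must merge too
    return delta, labels, parent

def rpni(pos, neg, alphabet):
    delta, labels = build_pta(pos, neg)
    order = sorted(labels, key=lambda s: (len(s), s))  # breadth-first
    parent = {s: s for s in order}

    def find(s):
        while parent[s] != s:
            s = parent[s]
        return s

    for q in order[1:]:
        if find(q) != q:
            continue                       # already merged into another
        for p in order:
            if p == q:
                break
            if find(p) != p:
                continue
            res = attempt(delta, labels, parent, p, q, alphabet)
            if res is not None:            # first consistent merge wins
                delta, labels, parent = res
                break
    return delta, labels, parent

# Tiny example; interpret the result by following find() on the classes.
dfa = rpni([(), ("a",), ("b", "a")], [("b", "b")], ("a", "b"))
```
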
8

Xu, Zhiwu, Cheng Wen, Shengchao Qin, and Mengda He. "Extracting automata from neural networks using active learning." PeerJ Computer Science 7 (April 19, 2021): e436. http://dx.doi.org/10.7717/peerj-cs.436.

Abstract:
Deep learning is one of the most advanced forms of machine learning. Most modern deep learning models are based on an artificial neural network, and benchmarking studies reveal that neural networks have produced results comparable to, and in some cases superior to, human experts. However, the generated neural networks are typically regarded as incomprehensible black-box models, which not only limits their applications, but also hinders testing and verification. In this paper, we present an active learning framework to extract automata from neural network classifiers, which can help users to understand the classifiers. In more detail, we use Angluin's L* algorithm as a learner and the neural network under learning as an oracle, employing abstract interpretation of the neural network for answering membership and equivalence queries. Our abstraction consists of value, symbol and word abstractions. The factors that may affect the abstraction are also discussed in the paper. We have implemented our approach in a prototype. To evaluate it, we have run the prototype on an MNIST classifier and have identified that the abstraction with interval number 2 and block size 1 × 28 offers the best performance in terms of F1 score. We have also compared our extracted DFA against the DFAs learned via the passive learning algorithms provided in LearnLib, and the experimental results show that our DFA gives a better performance on the MNIST dataset.
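
The paper defines its own value, symbol and word abstractions; as a loose illustration of the reported best setting (interval number 2, block size 1 × 28), the sketch below cuts an image into pixel blocks, quantizes each block's mean into a fixed number of intervals, and reads off one symbol per block. The function name and quantization rule are assumptions for illustration only.

```python
import numpy as np

def image_to_word(img, intervals=2, block=(1, 28)):
    """Map a 28x28 grayscale image (values in [0, 1]) to a symbol word."""
    h, w = block
    symbols = []
    for r in range(0, img.shape[0], h):
        for c in range(0, img.shape[1], w):
            mean = img[r:r + h, c:c + w].mean()
            # quantize the block mean into `intervals` equal-width bins
            symbols.append(min(int(mean * intervals), intervals - 1))
    return tuple(symbols)

# Example: a random "image" becomes a 28-symbol word over {0, 1}.
word = image_to_word(np.random.rand(28, 28))
```
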
9

Ozaki, Ana, Cosimo Persia, and Andrea Mazzullo. "Learning Query Inseparable ℰℒℋ Ontologies." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 03 (April 3, 2020): 2959–66. http://dx.doi.org/10.1609/aaai.v34i03.5688.

Abstract:
We investigate the complexity of learning query inseparable ℰℒℋ ontologies in a variant of Angluin's exact learning model. Given a fixed data instance A* and a query language 𝒬, we are interested in computing an ontology ℋ that entails the same queries as a target ontology 𝒯 on A*, that is, ℋ and 𝒯 are inseparable w.r.t. A* and 𝒬. The learner is allowed to pose two kinds of questions. The first is 'Does (𝒯,A) ⊨ q?', with A an arbitrary data instance and q a query in 𝒬, which an oracle answers with 'yes' or 'no'. In the second, the learner asks 'Are ℋ and 𝒯 inseparable w.r.t. A* and 𝒬?'. If so, the learning process finishes; otherwise, the learner receives (A*, q) with q ∈ 𝒬, (𝒯,A*) ⊨ q and (ℋ,A*) ⊭ q (or vice-versa). Then, we analyse conditions under which query inseparability is preserved if A* changes. Finally, we consider the PAC learning model and a setting where the algorithms learn from a batch of classified data, limiting interactions with the oracles.
10

Manolios, Peter, and Robert Fanelli. "First-Order Recurrent Neural Networks and Deterministic Finite State Automata." Neural Computation 6, no. 6 (November 1994): 1155–73. http://dx.doi.org/10.1162/neco.1994.6.6.1155.

Abstract:
We examine the correspondence between first-order recurrent neural networks and deterministic finite state automata. We begin with the problem of inducing deterministic finite state automata from finite training sets that include both positive and negative examples, an NP-hard problem (Angluin and Smith 1983). We use a neural network architecture with two recurrent layers, which we argue can approximate any discrete-time, time-invariant dynamic system, with computation of the full gradient during learning. The networks are trained to classify strings as belonging or not belonging to the grammar. The training sets used contain only short strings, and the sets are constructed in a way that does not require a priori knowledge of the grammar. After training, the networks are tested using various test sets with strings of length up to 1000, and are often able to correctly classify all the test strings. These results are comparable to those obtained with second-order networks (Giles et al. 1992; Watrous and Kuhn 1992a; Zeng et al. 1993). We observe that the networks emulate finite state automata, confirming the results of other authors, and we use a vector quantization algorithm to extract deterministic finite state automata after training and during testing of the networks, obtaining a table listing the start state, accept states, reject states, and all transitions from the states, as well as some useful statistics. We examine the correspondence between finite state automata and neural networks in detail, showing two major stages in the learning process. To this end, we use a graphics module, which graphically depicts the states of the network during the learning and testing phases. We examine the networks' performance when tested on strings much longer than those in the training set, noting a measure based on clustering that is correlated with the stability of the networks. Finally, we observe that with sufficiently long training times, neural networks can become true finite state automata, due to the attractor structure of their dynamics.
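
A minimal sketch of extraction by vector quantization in the spirit described: run the trained network over sample strings, cluster the hidden-state vectors (k-means here, as one common choice), and read transitions off consecutive clusters. The hidden_states and accepts callables are assumed stand-ins for the real network.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_dfa(strings, hidden_states, accepts, n_states=8, seed=0):
    """hidden_states(w) -> array of per-step state vectors, shape
    (len(w)+1, d), including the initial state; accepts(w) -> bool.
    Returns (start_state, transitions, accepting_states)."""
    runs = [hidden_states(w) for w in strings]
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed)
    km.fit(np.vstack(runs))
    trans, accepting = {}, set()
    for w, vecs in zip(strings, runs):
        qs = km.predict(vecs)
        for a, q, q2 in zip(w, qs, qs[1:]):
            trans[(int(q), a)] = int(q2)   # last write wins; real
                                           # extractors vote or merge
        if accepts(w):
            accepting.add(int(qs[-1]))
    start = int(km.predict(hidden_states(())[:1])[0])
    return start, trans, accepting
```
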

Dissertations / Theses on the topic "Angluin's Algorithm"

1

Czerny, Maximilian. "Automated software testing : Evaluation of Angluin's L* algorithm and applications in practice." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2014. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-146018.

Abstract:
Learning-based testing can ensure software quality without a formal documentation or maintained specification of the system under test. Therefore, an automaton learning algorithm is the key component to automatically generate efficient test cases for black-box systems. In the present report Angluin’s automaton learning algorithm L* and the extension called L* Mealy are examined and evaluated in the application area of learning-based software testing. The purpose of this work is to estimate the applicability of the L* algorithm for learning real world software and to describe constraints of this approach. To achieve this, a framework to test the L* implementation on various deterministic finite automata (DFA) was written and an adaptation called L* Mealy was integrated into the learning-based testing platform LBTest. To follow the learning process, the queries that the learner needs to perform on the system to learn are tracked and measured. Both algorithms show a polynomial growth on these queries in case studies from real world business software or on randomly generated DFAs. The test data indicate a better learning performance in practice than the theoretical predictions imply. In contrast to other existing learning algorithms, the L* adaptation L* Mealy performs slowly in LBTest due to a polynomially growing interval between the types of queries that the learner needs to derive a hypothesis.
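
The query tracking described in the thesis can be illustrated by wrapping the teacher so that every membership and equivalence query is counted; the totals can then be recorded against the size of the learned machine. A sketch with illustrative names:

```python
class InstrumentedTeacher:
    """Counts the queries a learner poses, so query growth can be
    plotted against the size of the system under learning."""
    def __init__(self, member, equiv):
        self._member, self._equiv = member, equiv
        self.membership_queries = 0
        self.equivalence_queries = 0

    def member(self, word):
        self.membership_queries += 1
        return self._member(word)

    def equiv(self, hypothesis):
        self.equivalence_queries += 1
        return self._equiv(hypothesis)
```
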
2

Shenkenfelder, Warren. "Learning bisimulation." Thesis, 2008. http://hdl.handle.net/1828/1262.

Abstract:
Computational learning theory is a branch of theoretical computer science that re-imagines the role of an algorithm from an agent of computation to an agent of learning. The operations of computers become those of the human mind; an important step towards illuminating the limitations of artificial intelligence. The central difference between a learning algorithm and a traditional algorithm is that the learner has access to an oracle who, in constant time, can answer queries about that which is to be learned. Normally an algorithm would have to discover such information on its own accord. This subtle change in how we model problem solving results in changes in the computational complexity of some classic problems, allowing us to re-examine them in a new light. Specifically, two known results are examined: one positive, one negative. It is known that one can efficiently learn Deterministic Finite Automata with queries, but not Non-Deterministic Finite Automata. We generalize these automata into Labeled Transition Systems and attempt to learn them using a stronger query.

Book chapters on the topic "Angluin's Algorithm"

1

van Heerdt, Gerco, Clemens Kupke, Jurriaan Rot, and Alexandra Silva. "Learning Weighted Automata over Principal Ideal Domains." In Lecture Notes in Computer Science, 602–21. Cham: Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-45231-5_31.

Abstract:
In this paper, we study active learning algorithms for weighted automata over a semiring. We show that a variant of Angluin's seminal L* algorithm works when the semiring is a principal ideal domain, but not for general semirings such as the natural numbers.
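
The crux of weighted-automata learning in the L* style is the closedness check: is a new observation-table row a linear combination of existing rows? Over a field this is exact linear algebra (the paper's contribution is making this work over a principal ideal domain). A sketch over the rationals using exact Gaussian elimination:

```python
from fractions import Fraction

def in_row_span(rows, new_row):
    """True iff new_row is a rational linear combination of rows."""
    mat = [[Fraction(x) for x in r] for r in rows]
    target = [Fraction(x) for x in new_row]
    ncols = len(target)
    pivot_cols = []
    # Reduce mat to reduced row echelon form, tracking pivot columns.
    r = 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(mat)) if mat[i][c] != 0), None)
        if piv is None:
            continue
        mat[r], mat[piv] = mat[piv], mat[r]
        inv = mat[r][c]
        mat[r] = [x / inv for x in mat[r]]
        for i in range(len(mat)):
            if i != r and mat[i][c] != 0:
                mat[i] = [a - mat[i][c] * b for a, b in zip(mat[i], mat[r])]
        pivot_cols.append(c)
        r += 1
    # Eliminate the target against the pivots; zero residue => in span.
    for row, c in zip(mat, pivot_cols):
        if target[c] != 0:
            target = [a - target[c] * b for a, b in zip(target, row)]
    return all(x == 0 for x in target)

print(in_row_span([[1, 0], [0, 1]], [2, 3]))   # True
print(in_row_span([[1, 1]], [1, 2]))           # False
```
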

Conference papers on the topic "Angluin's Algorithm"

1

Michaliszyn, Jakub, and Jan Otop. "Minimization of Limit-Average Automata." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/388.

Abstract:
LimAvg-automata are weighted automata over infinite words that aggregate weights along runs with the limit-average value function. In this paper, we study the minimization problem for (deterministic) LimAvg-automata. Our main contribution is an equivalence relation on words characterizing LimAvg-automata, i.e., the equivalence classes of this relation correspond to states of an equivalent LimAvg-automaton. In contrast to relations characterizing DFA, our relation depends not only on the function defined by the target automaton, but also on its structure. We show two applications of this relation. First, we present a minimization algorithm for LimAvg-automata, which returns a minimal LimAvg-automaton among those equivalent and structurally similar to the input one. Second, we present an extension of Angluin's L^*-algorithm with syntactic queries, which learns in polynomial time a LimAvg-automaton equivalent to the target one.
2

Funk, Maurice, Jean Christoph Jung, and Carsten Lutz. "Actively Learning Concepts and Conjunctive Queries under ELr-Ontologies." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/260.

Abstract:
We consider the problem of learning a concept or a query in the presence of an ontology formulated in the description logic ELr, in Angluin's framework of active learning, which allows the learning algorithm to interactively query an oracle (such as a domain expert). We show that the following can be learned in polynomial time: (1) EL-concepts, (2) symmetry-free ELI-concepts, and (3) conjunctive queries (CQs) that are chordal, symmetry-free, and of bounded arity. In all cases, the learner can pose to the oracle membership queries based on ABoxes and equivalence queries that ask whether a given concept/query from the considered class is equivalent to the target. The restriction to bounded arity in (3) can be removed when we admit unrestricted CQs in equivalence queries. We also show that EL-concepts are not polynomial query learnable in the presence of ELI-ontologies.