
Journal articles on the topic 'Tractable reasoning'


Consult the top 50 journal articles for your research on the topic 'Tractable reasoning.'


1

Schaerf, Marco, and Marco Cadoli. "Tractable reasoning via approximation." Artificial Intelligence 74, no. 2 (April 1995): 249–310. http://dx.doi.org/10.1016/0004-3702(94)00009-p.

2

Bodirsky, M., and M. Hils. "Tractable Set Constraints." Journal of Artificial Intelligence Research 45 (December 31, 2012): 731–59. http://dx.doi.org/10.1613/jair.3747.

Abstract:
Many fundamental problems in artificial intelligence, knowledge representation, and verification involve reasoning about sets and relations between sets and can be modeled as set constraint satisfaction problems (set CSPs). Such problems are frequently intractable, but there are several important set CSPs that are known to be polynomial-time tractable. We introduce a large class of set CSPs that can be solved in quadratic time. Our class, which we call EI, contains all previously known tractable set CSPs, but also some new ones that are of crucial importance for example in description logics. The class of EI set constraints has an elegant universal-algebraic characterization, which we use to show that every set constraint language that properly contains all EI set constraints already has a finite sublanguage with an NP-hard constraint satisfaction problem.
3

Fang, Liangda, Kewen Wang, Zhe Wang, and Ximing Wen. "Disjunctive Normal Form for Multi-Agent Modal Logics Based on Logical Separability." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2817–26. http://dx.doi.org/10.1609/aaai.v33i01.33012817.

Abstract:
Modal logics are primary formalisms for multi-agent systems, but major reasoning tasks in such logics are intractable, which impedes applications of multi-agent modal logics such as automatic planning. One technique for tackling the intractability is to identify a fragment, called a normal form, of multi-agent logics that is expressive but tractable for reasoning tasks such as entailment checking, bounded conjunction transformation and forgetting. For instance, DNF of propositional logic is tractable for these reasoning tasks. In this paper, we first introduce a notion of logical separability and then define a novel disjunctive normal form SDNF for the multi-agent logic Kn, which overcomes some shortcomings of existing approaches. In particular, we show that every modal formula in Kn can be equivalently cast as a formula in SDNF, that major reasoning tasks tractable in propositional DNF are also tractable in SDNF, and moreover that formulas in SDNF enjoy the property of logical separability. To demonstrate the usefulness of our approach, we apply SDNF to multi-agent epistemic planning. Finally, we extend these results to three more complex multi-agent logics Dn, K45n and KD45n.
4

Pan, Jeff, Edward Thomas, Yuan Ren, and Stuart Taylor. "Exploiting Tractable Fuzzy and Crisp Reasoning in Ontology Applications." IEEE Computational Intelligence Magazine 7, no. 2 (May 2012): 45–53. http://dx.doi.org/10.1109/mci.2012.2188588.

5

Mailis, Theofilos, Giorgos Stoilos, Nikolaos Simou, Giorgos Stamou, and Stefanos Kollias. "Tractable reasoning with vague knowledge using fuzzy $\mathcal{EL}^{++}$." Journal of Intelligent Information Systems 39, no. 2 (March 24, 2012): 399–440. http://dx.doi.org/10.1007/s10844-012-0195-6.

6

Cristani, M. "The Complexity of Reasoning about Spatial Congruence." Journal of Artificial Intelligence Research 11 (November 20, 1999): 361–90. http://dx.doi.org/10.1613/jair.641.

Abstract:
In the recent Artificial Intelligence literature, an intensive research effort has been devoted, for various algebras of qualitative relations used in the representation of temporal and spatial knowledge, to the problem of classifying the computational complexity of reasoning problems for subsets of those algebras. The main purpose of this research is to describe a restricted set of maximal tractable subalgebras, ideally in an exhaustive fashion with respect to the hosting algebras. In this paper we introduce a novel algebra for reasoning about spatial congruence, show that the satisfiability problem in the spatial algebra MC-4 is NP-complete, and present a complete classification of tractability in the algebra, based on the identification of three maximal tractable subclasses, one containing the basic relations. The three algebras are formed by 14, 10 and 9 relations out of the 16 which form the full algebra.
7

McIntyre, Stephanie, Alexander Borgida, David Toman, and Grant Weddell. "On Limited Conjunctions and Partial Features in Parameter-Tractable Feature Logics." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 2995–3002. http://dx.doi.org/10.1609/aaai.v33i01.33012995.

Abstract:
Standard reasoning problems are complete for EXPTIME in common feature-based description logics—ones in which all roles are restricted to being functions. We show how to control conjunctions on left-hand-sides of subsumptions and use this restriction to develop a parameter-tractable algorithm for reasoning about knowledge base consistency. We then show how the resulting logic can simulate partial features, and present algorithms for efficient query answering in that setting.
8

Eiter, Thomas, and Thomas Lukasiewicz. "Default reasoning from conditional knowledge bases: Complexity and tractable cases." Artificial Intelligence 124, no. 2 (December 2000): 169–241. http://dx.doi.org/10.1016/s0004-3702(00)00073-4.

9

Jones, C. B. "The early search for tractable ways of reasoning about programs." IEEE Annals of the History of Computing 25, no. 2 (April 2003): 26–49. http://dx.doi.org/10.1109/mahc.2003.1203057.

10

Borges Garcia, Berilhes. "New tractable classes for default reasoning from conditional knowledge bases." Annals of Mathematics and Artificial Intelligence 45, no. 3-4 (November 16, 2005): 275–91. http://dx.doi.org/10.1007/s10472-005-9000-3.

11

Simančík, František, Boris Motik, and Ian Horrocks. "Consequence-based and fixed-parameter tractable reasoning in description logics." Artificial Intelligence 209 (April 2014): 29–77. http://dx.doi.org/10.1016/j.artint.2014.01.002.

12

Drakengren, T., and P. Jonsson. "Eight Maximal Tractable Subclasses of Allen's Algebra with Metric Time." Journal of Artificial Intelligence Research 7 (July 1, 1997): 25–45. http://dx.doi.org/10.1613/jair.340.

Abstract:
This paper combines two important directions of research in temporal reasoning: that of finding maximal tractable subclasses of Allen's interval algebra, and that of reasoning with metric temporal information. Eight new maximal tractable subclasses of Allen's interval algebra are presented, some of them subsuming previously reported tractable algebras. The algebras allow for metric temporal constraints on interval starting or ending points, using the recent framework of Horn DLRs. Two of the algebras can express the notion of sequentiality between intervals, making them the first such algebras admitting both qualitative and metric time.
13

Drakengren, T. "Reasoning about set constraints applied to tractable inference in intuitionistic logic." Journal of Logic and Computation 8, no. 6 (December 1, 1998): 855–75. http://dx.doi.org/10.1093/logcom/8.6.855.

14

Hammond, Lewis, and Vaishak Belle. "Learning tractable probabilistic models for moral responsibility and blame." Data Mining and Knowledge Discovery 35, no. 2 (January 25, 2021): 621–59. http://dx.doi.org/10.1007/s10618-020-00726-4.

Abstract:
Moral responsibility is a major concern in autonomous systems, with applications ranging from self-driving cars to kidney exchanges. Although there have been recent attempts to formalise responsibility and blame, among similar notions, the problem of learning within these formalisms has been unaddressed. From the viewpoint of such systems, the urgent questions are: (a) How can models of moral scenarios and blameworthiness be extracted and learnt automatically from data? (b) How can judgements be computed effectively and efficiently, given the split-second decision points faced by some systems? By building on constrained tractable probabilistic learning, we propose and implement a hybrid (between data-driven and rule-based methods) learning framework for inducing models of such scenarios automatically from data and reasoning tractably from them. We report on experiments that compare our system with human judgement in three illustrative domains: lung cancer staging, teamwork management, and trolley problems.
15

Renz, J., and B. Nebel. "Efficient Methods for Qualitative Spatial Reasoning." Journal of Artificial Intelligence Research 15 (October 1, 2001): 289–318. http://dx.doi.org/10.1613/jair.872.

Abstract:
The theoretical properties of qualitative spatial reasoning in the RCC8 framework have been analyzed extensively. However, no empirical investigation has been made yet. Our experiments show that the adaptation of the algorithms used for qualitative temporal reasoning can solve large RCC8 instances, even if they are in the phase transition region -- provided that one uses the maximal tractable subsets of RCC8 that we have identified. In particular, we demonstrate that the orthogonal combination of heuristic methods is successful in solving almost all apparently hard instances in the phase transition region up to a certain size in reasonable time.
16

Koubarakis, Manolis. "Tractable disjunctions of linear constraints: basic results and applications to temporal reasoning." Theoretical Computer Science 266, no. 1-2 (September 2001): 311–39. http://dx.doi.org/10.1016/s0304-3975(00)00177-8.

17

Calvanese, Diego, Giuseppe De Giacomo, Domenico Lembo, Maurizio Lenzerini, and Riccardo Rosati. "Tractable Reasoning and Efficient Query Answering in Description Logics: The DL-Lite Family." Journal of Automated Reasoning 39, no. 3 (July 20, 2007): 385–429. http://dx.doi.org/10.1007/s10817-007-9078-x.

18

PICHLER, REINHARD, STEFAN RÜMMELE, STEFAN SZEIDER, and STEFAN WOLTRAN. "Tractable answer-set programming with weight constraints: bounded treewidth is not enough." Theory and Practice of Logic Programming 14, no. 2 (July 17, 2012): 141–64. http://dx.doi.org/10.1017/s1471068412000099.

Abstract:
Cardinality constraints or, more generally, weight constraints are well recognized as an important extension of answer-set programming. Clearly, all common algorithmic tasks related to programs with cardinality or weight constraints -- like checking the consistency of a program -- are intractable. Many intractable problems in the area of knowledge representation and reasoning have been shown to become linear-time tractable if the treewidth of the programs or formulas under consideration is bounded by some constant. The goal of this paper is to apply the notion of treewidth to programs with cardinality or weight constraints and to identify tractable fragments. It turns out that the straightforward application of treewidth to such a class of programs does not suffice to obtain tractability. However, by imposing further restrictions, tractability can be achieved.
19

Sæther, Sigve Hortemo, Jan Arne Telle, and Martin Vatshelle. "Solving #SAT and MAXSAT by Dynamic Programming." Journal of Artificial Intelligence Research 54 (September 9, 2015): 59–82. http://dx.doi.org/10.1613/jair.4831.

Abstract:
We look at dynamic programming algorithms for propositional model counting, also called #SAT, and MaxSAT. Tools from graph structure theory, in particular treewidth, have been used to successfully identify tractable cases in many subfields of AI, including SAT, Constraint Satisfaction Problems (CSP), Bayesian reasoning, and planning. In this paper we attack #SAT and MaxSAT using similar, but more modern, graph structure tools. The tractable cases include formulas whose class of incidence graphs has not only unbounded treewidth but also unbounded clique-width. We show that our algorithms extend all previous results for MaxSAT and #SAT achieved by dynamic programming along structural decompositions of the incidence graph of the input formula. We present some limited experimental results, comparing implementations of our algorithms to state-of-the-art #SAT and MaxSAT solvers, as a proof of concept that warrants further research.
20

HELAOUI, MAHER, WADY NAANAA, and BECHIR AYEB. "SUBMODULARITY-BASED DECOMPOSING FOR VALUED CSP." International Journal on Artificial Intelligence Tools 22, no. 02 (April 2013): 1350006. http://dx.doi.org/10.1142/s0218213013500061.

Abstract:
Many combinatorial problems can be formulated as Valued Constraint Satisfaction Problems (VCSPs). In this framework, the constraints are defined by means of valuation functions to reflect several degrees of coherence. Despite the NP-hardness of the VCSP, tractable versions can be obtained by forcing the allowable valuation functions to have specific features. This is the case for submodular VCSPs, i.e. VCSPs that involve submodular valuation functions only. In this paper, we propose a problem decomposition scheme for binary VCSPs that takes advantage of submodular functions even when the studied problem is not submodular. The proposed scheme consists in decomposing the problem to be solved into a set of submodular, and therefore tractable, subproblems. The decomposition scheme combines two techniques that were already used in the framework of constraint-based reasoning, but separately: domain partitioning and value permutation.
21

Drakengren, T., and P. Jonsson. "A Complete Classification of Tractability in RCC-5." Journal of Artificial Intelligence Research 6 (June 1, 1997): 211–21. http://dx.doi.org/10.1613/jair.379.

Abstract:
We investigate the computational properties of the spatial algebra RCC-5 which is a restricted version of the RCC framework for spatial reasoning. The satisfiability problem for RCC-5 is known to be NP-complete but not much is known about its approximately four billion subclasses. We provide a complete classification of satisfiability for all these subclasses into polynomial and NP-complete respectively. In the process, we identify all maximal tractable subalgebras which are four in total.
22

Sadiku, Matthew N. O., Justin Foreman, and Sarhan M. Musa. "Computational Intelligence." European Scientific Journal, ESJ 14, no. 21 (July 31, 2018): 56. http://dx.doi.org/10.19044/esj.2018.v14n21p56.

Abstract:
Computational intelligence (CI) refers to recreating human-like intelligence in a computing machine. It consists of a set of computing systems with the ability to learn and deal with new situations, such that the systems are perceived to have some attributes of intelligence. It is efficient in solving real-world problems which require reasoning and decision-making. It produces more robust, simpler, and tractable solutions than traditional techniques. This paper provides a brief introduction to computational intelligence.
23

Park, S., E. H. Durfee, and W. P. Birmingham. "Use of Markov Chains to Design an Agent Bidding Strategy for Continuous Double Auctions." Journal of Artificial Intelligence Research 22 (November 1, 2004): 175–214. http://dx.doi.org/10.1613/jair.1466.

Abstract:
As computational agents are developed for increasingly complicated e-commerce applications, the complexity of the decisions they face demands advances in artificial intelligence techniques. For example, an agent representing a seller in an auction should try to maximize the seller's profit by reasoning about a variety of possibly uncertain pieces of information, such as the maximum prices various buyers might be willing to pay, the possible prices being offered by competing sellers, the rules by which the auction operates, the dynamic arrival and matching of offers to buy and sell, and so on. A naive application of multiagent reasoning techniques would require the seller's agent to explicitly model all of the other agents through an extended time horizon, rendering the problem intractable for many realistically-sized problems. We have instead devised a new strategy that an agent can use to determine its bid price based on a more tractable Markov chain model of the auction process. We have experimentally identified the conditions under which our new strategy works well, as well as how well it works in comparison to the optimal performance the agent could have achieved had it known the future. Our results show that our new strategy in general performs well, outperforming other tractable heuristic strategies in a majority of experiments, and is particularly effective in a 'seller's market', where many buy offers are available.
24

Renz, Jochen, and Bernhard Nebel. "On the complexity of qualitative spatial reasoning: A maximal tractable fragment of the Region Connection Calculus." Artificial Intelligence 108, no. 1-2 (March 1999): 69–123. http://dx.doi.org/10.1016/s0004-3702(99)00002-8.

25

Bauters, Kim, Kevin McAreavey, Weiru Liu, Jun Hong, Lluís Godo, and Carles Sierra. "Managing Different Sources of Uncertainty in a BDI Framework in a Principled Way with Tractable Fragments." Journal of Artificial Intelligence Research 58 (April 4, 2017): 731–75. http://dx.doi.org/10.1613/jair.5287.

Abstract:
The Belief-Desire-Intention (BDI) architecture is a practical approach for modelling large-scale intelligent systems. In the BDI setting, a complex system is represented as a network of interacting agents - or components - each one modelled based on its beliefs, desires and intentions. However, current BDI implementations are not well-suited for modelling more realistic intelligent systems which operate in environments pervaded by different types of uncertainty. Furthermore, existing approaches for dealing with uncertainty typically do not offer syntactical or tractable ways of reasoning about uncertainty. This complicates their integration with BDI implementations, which heavily rely on fast and reactive decisions. In this paper, we advance the state-of-the-art w.r.t. handling different types of uncertainty in BDI agents. The contributions of this paper are, first, a new way of modelling the beliefs of an agent as a set of epistemic states. Each epistemic state can use a distinct underlying uncertainty theory and revision strategy, and commensurability between epistemic states is achieved through a stratification approach. Second, we present a novel syntactic approach to revising beliefs given unreliable input. We prove that this syntactic approach agrees with the semantic definition, and we identify expressive fragments that are particularly useful for resource-bounded agents. Third, we introduce full operational semantics that extend CAN, a popular semantics for BDI, to establish how reasoning about uncertainty can be tightly integrated into the BDI framework. Fourth, we provide comprehensive experimental results to highlight the usefulness and feasibility of our approach, and explain how the generic epistemic state can be instantiated into various representations.
26

Rintanen, J. "Complexity of Prioritized Default Logics." Journal of Artificial Intelligence Research 9 (December 1, 1998): 423–61. http://dx.doi.org/10.1613/jair.554.

Abstract:
In default reasoning, usually not all possible ways of resolving conflicts between default rules are acceptable. Criteria expressing acceptable ways of resolving the conflicts may be hardwired in the inference mechanism, for example specificity in inheritance reasoning can be handled this way, or they may be given abstractly as an ordering on the default rules. In this article we investigate formalizations of the latter approach in Reiter's default logic. Our goal is to analyze and compare the computational properties of three such formalizations in terms of their computational complexity: the prioritized default logics of Baader and Hollunder, and Brewka, and a prioritized default logic that is based on lexicographic comparison. The analysis locates the propositional variants of these logics on the second and third levels of the polynomial hierarchy, and identifies the boundary between tractable and intractable inference for restricted classes of prioritized default theories.
27

DE TOFFOLI, SILVIA. "‘CHASING’ THE DIAGRAM—THE USE OF VISUALIZATIONS IN ALGEBRAIC REASONING." Review of Symbolic Logic 10, no. 1 (October 28, 2016): 158–86. http://dx.doi.org/10.1017/s1755020316000277.

Abstract:
The aim of this article is to investigate the roles of commutative diagrams (CDs) in a specific mathematical domain, and to unveil the reasons underlying their effectiveness as a mathematical notation; this will be done through a case study. It will be shown that CDs do not depict spatial relations, but represent mathematical structures. CDs will be interpreted as a hybrid notation that goes beyond the traditional bipartition of mathematical representations into diagrammatic and linguistic. It will be argued that one of the reasons why CDs form a good notation is that they are highly mathematically tractable: experts can obtain valid results by ‘calculating’ with CDs. These calculations take the form of ‘diagram chases’. In order to draw inferences, experts move algebraic elements around the diagrams. It will be argued that these diagrams are dynamic. It is thanks to their dynamicity that CDs can externalize the relevant reasoning and allow experts to draw conclusions directly by manipulating them. Lastly, it will be shown that CDs play essential roles in the context of proof as well as in other phases of the mathematical enterprise, such as discovery and conjecture formation.
28

KHREISAT, LAILA. "REAL TIME INFERENCE IN BAYESIAN NETWORKS: AN ANYTIME APPROACH." International Journal on Artificial Intelligence Tools 14, no. 03 (June 2005): 477–89. http://dx.doi.org/10.1142/s0218213005002211.

Abstract:
One of the major challenges facing real time world applications that employ Bayesian networks, is the design and development of efficient inference algorithms. In this paper we present an approximate real time inference algorithm for Bayesian Networks. The algorithm is an anytime reasoning method based on probabilistic inequalities, capable of handling fully and partially quantified Bayesian networks. In our method the accuracy of the results improve gradually as computation time increases, providing a trade-off between resource consumption and output quality. The method is tractable in providing the initial answers, as well as complete in the limiting case.
29

Love, Alan C. "Idealization in evolutionary developmental investigation: a tension between phenotypic plasticity and normal stages." Philosophical Transactions of the Royal Society B: Biological Sciences 365, no. 1540 (February 27, 2010): 679–90. http://dx.doi.org/10.1098/rstb.2009.0262.

Abstract:
Idealization is a reasoning strategy that biologists use to describe, model and explain, and that purposefully departs from features known to be present in nature. Similar to other strategies of scientific reasoning, idealization combines distinctive strengths alongside latent weaknesses. The study of ontogeny in model organisms is usually executed by establishing a set of normal stages for embryonic development, which enables researchers in different laboratory contexts to make standardized comparisons of experimental results. Normal stages are a form of idealization because they intentionally ignore known variation in development, including variation associated with phenotypic plasticity (e.g. via strict control of environmental variables). This tension between the phenomenon of plasticity and the practice of staging has consequences for evolutionary developmental investigation, because variation is conceptually removed as a part of rendering model organisms experimentally tractable. Two compensatory tactics for mitigating these consequences are discussed: employing a diversity of model organisms and adopting alternative periodizations.
30

BIERMAN, G. M. "Program equivalence in a linear functional language." Journal of Functional Programming 10, no. 2 (March 2000): 167–90. http://dx.doi.org/10.1017/s0956796899003639.

Abstract:
Researchers have recently proposed that for certain applications it is advantageous to use functional languages whose type systems are based upon linear logic: so-called linear functional languages. In this paper we develop reasoning techniques for programs in a linear functional language, linPCF, based on their operational behaviour. The principal theorem of this paper shows that contextual equivalence of linPCF programs can be characterised coinductively. This characterisation provides a tractable method for reasoning about contextual equivalence, and is used in three ways: (i) a number of useful contextual equivalences between linPCF programs is given; (ii) a notion of type isomorphism with respect to contextual equivalence, called operational isomorphism, is given, and in particular the types !ϕ⊗!ψ and !(ϕ&ψ) are proved to be operationally isomorphic; (iii) a translation of non-strict PCF into linPCF is shown to be adequate, but not fully abstract, with respect to contextual equivalence.
31

Shih, Andy, Arthur Choi, and Adnan Darwiche. "Compiling Bayesian Network Classifiers into Decision Graphs." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 7966–74. http://dx.doi.org/10.1609/aaai.v33i01.33017966.

Abstract:
We propose an algorithm for compiling Bayesian network classifiers into decision graphs that mimic the input and output behavior of the classifiers. In particular, we compile Bayesian network classifiers into ordered decision graphs, which are tractable and can be exponentially smaller in size than decision trees. This tractability facilitates reasoning about the behavior of Bayesian network classifiers, including the explanation of decisions they make. Our compilation algorithm comes with guarantees on the time of compilation and the size of compiled decision graphs. We apply our compilation algorithm to classifiers from the literature and discuss some case studies in which we show how to automatically explain their decisions and verify properties of their behavior.
32

Hu, Jie, Jin Ma, Jin-Feng Feng, and Ying-Hong Peng. "Research on new creative conceptual design system using adapted case-based reasoning technique." Artificial Intelligence for Engineering Design, Analysis and Manufacturing 31, no. 1 (February 29, 2016): 16–29. http://dx.doi.org/10.1017/s0890060416000159.

Abstract:
Creative conceptual design requires significant previous design knowledge. Case-based reasoning enables learning from previous design experience and has great potential to support creative conceptual design, by seeking to retrieve, reuse, and revise the most appropriate cases to generate inspired solutions. However, traditional case-based-reasoning models of creative conceptual design focus on design strategies, pay little attention to defining a consistent knowledge representation model, and neglect research on making the retrieval of various types of knowledge tractable. Given these drawbacks, the expected design knowledge cannot be retrieved properly, especially in cases where multidisciplinary knowledge is concerned or exact query terms are absent. In order to solve these issues, this paper presents a combined approach to support the creative conceptual design process. First, the function–behavior–structure knowledge cell is introduced as a unified, consistent design knowledge representation model. Second, a hybrid similarity measure is proposed to increase the overall possibility of obtaining useful design knowledge by considering semantic understanding ability. Third, an intelligent creative conceptual design system has been developed, with a case study of a novel insulin pump design to demonstrate its usage, and two experiments are conducted to evaluate the performance of the proposed approach. The results show that the proposed approach outperforms other case-based-reasoning models of creative conceptual design.
33

Saisubramanian, Sandhya. "Adaptive Modeling for Risk-Aware Decision Making." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 9896–97. http://dx.doi.org/10.1609/aaai.v33i01.33019896.

Abstract:
This thesis aims to provide a foundation for risk-aware decision making. Decision making under uncertainty is a core capability of an autonomous agent, and a cornerstone of long-term autonomy and safety is risk-aware decision making. A risk-aware model fully accounts for a known set of risks in the environment with respect to the problem under consideration, and the process of decision making using such a model is risk-aware decision making. Formulating risk-aware models is critical for robust reasoning under uncertainty, since the impact of using less accurate models may be catastrophic in extreme cases due to an overly optimistic view of problems. I propose adaptive modeling, a framework that helps balance the trade-off between model simplicity and risk awareness, for different notions of risk, while remaining computationally tractable.
34

Hoffmann, J., P. Bertoli, M. Helmert, and M. Pistore. "Message-Based Web Service Composition, Integrity Constraints, and Planning under Uncertainty: A New Connection." Journal of Artificial Intelligence Research 35 (May 31, 2009): 49–117. http://dx.doi.org/10.1613/jair.2716.

Full text
Abstract:
Thanks to recent advances, AI Planning has become the underlying technique for several applications. Figuring prominently among these is automated Web Service Composition (WSC) at the "capability" level, where services are described in terms of preconditions and effects over ontological concepts. A key issue in addressing WSC as planning is that ontologies are not only formal vocabularies; they also axiomatize the possible relationships between concepts. Such axioms correspond to what has been termed "integrity constraints" in the actions and change literature, and applying a web service is essentially a belief update operation. The reasoning required for belief update is known to be harder than reasoning in the ontology itself. The support for belief update is severely limited in current planning tools. Our first contribution consists in identifying an interesting special case of WSC which is both significant and more tractable. The special case, which we term "forward effects", is characterized by the fact that every ramification of a web service application involves at least one new constant generated as output by the web service. We show that, in this setting, the reasoning required for belief update simplifies to standard reasoning in the ontology itself. This relates to, and extends, current notions of "message-based" WSC, where the need for belief update is removed by a strong (often implicit or informal) assumption of "locality" of the individual messages. We clarify the computational properties of the forward effects case, and point out a strong relation to standard notions of planning under uncertainty, suggesting that effective tools for the latter can be successfully adapted to address the former. Furthermore, we identify a significant sub-case, named "strictly forward effects", where an actual compilation into planning under uncertainty exists. 
This enables us to exploit off-the-shelf planning tools to solve message-based WSC in a general form that involves powerful ontologies, and requires reasoning about partial matches between concepts. We provide empirical evidence that this approach may be quite effective, using Conformant-FF as the underlying planner.
APA, Harvard, Vancouver, ISO, and other styles
35

Servi, Gisèle Fischer. "Nonmonotonic consequence based on intuitionistic logic." Journal of Symbolic Logic 57, no. 4 (December 1992): 1176–97. http://dx.doi.org/10.2307/2275363.

Full text
Abstract:
Research in AI has recently begun to address the problems of nondeductive reasoning, i.e., the problems that arise when, on the basis of approximate or incomplete evidence, we form well-reasoned but possibly false judgments. Attempts to simulate such reasoning fall into two main categories: the numerical approach is based on probabilities and the nonnumerical one tries to reconstruct nondeductive reasoning as a special type of deductive process. In this paper, we are concerned with the latter, usually known as nonmonotonic deduction, because the set of theorems does not increase monotonically with the set of axioms. It is generally acknowledged that nonmonotonic (n.m.) formalisms (e.g., [C], [MC1], [MC2], [MD], [MD-D], [Rl], [R2], [S]) are plagued by a number of difficulties. A key issue concerns the fact that most systems do not produce an axiomatizable set of validities. Thus, the chief objective of this paper is to develop an alternative approach in which the set of n.m. inferences, which somehow qualify as being deductively sound, is r.e. The basic idea here is to reproduce the situation in First Order Logic where the metalogical concept of deduction translates into the logical notion of material implication. Since n.m. deductions are no longer truth preserving, our way to deal with a change in the metaconcept is to extend the standard logic apparatus so that it can reflect the new metaconcept. In other words, the intent is to study a concept of nonmonotonic implication that goes hand in hand with a notion of n.m. deduction. And in our case, it is convenient that the former be characterized within the more tractable context of monotonic logic.
APA, Harvard, Vancouver, ISO, and other styles
36

Guizzardi, Giancarlo, Fernanda Baião, Mauro Lopes, and Ricardo Falbo. "The Role of Foundational Ontologies for Domain Ontology Engineering." International Journal of Information System Modeling and Design 1, no. 2 (April 2010): 1–22. http://dx.doi.org/10.4018/jismd.2010040101.

Full text
Abstract:
Ontologies are commonly used in computer science either as a reference model to support semantic interoperability, or as an artifact that should be efficiently represented to support tractable automated reasoning. This duality poses a tradeoff between expressivity and computational tractability that should be addressed in different phases of an ontology engineering process. The inadequate choice of a modeling language, disregarding the goal of each ontology engineering phase, can lead to serious problems in the deployment of the resulting model. This article discusses these issues by making use of an industrial case study in the domain of Oil and Gas. The authors make the differences between two different representations in this domain explicit, and highlight a number of concepts and ideas that were implicit in an original OWL-DL model and that became explicit by applying the methodological directives underlying an ontologically well-founded modeling language.
APA, Harvard, Vancouver, ISO, and other styles
37

Lisman, John, and Eliezer J. Sternberg. "Habit and Nonhabit Systems for Unconscious and Conscious Behavior: Implications for Multitasking." Journal of Cognitive Neuroscience 25, no. 2 (February 2013): 273–83. http://dx.doi.org/10.1162/jocn_a_00319.

Full text
Abstract:
The study of human consciousness has demonstrated that there are both conscious and unconscious systems. Other work, particularly in animals, has shown that there are habit and nonhabit systems and that these involve different brain regions and memory processes. Here we argue that habits can be equated with unconscious behavior and nonhabits with conscious behavior. This equation makes the extensive physiological literature on habit/nonhabit relevant to the less tractable issue of consciousness. On the basis of this line of reasoning, it appears that different parts of the BG and different memory structures mediate conscious and unconscious processes. It is further argued here that the unconscious system is highly capable; it can both process sensory information and produce behavior. The benefit of such a dual system is multitasking: The unconscious system can execute background tasks, leaving the conscious system to perform more difficult tasks.
APA, Harvard, Vancouver, ISO, and other styles
38

Darling, Michael C., George F. Luger, Thomas B. Jones, Matthew R. Denman, and Katrina M. Groth. "Intelligent Modeling for Nuclear Power Plant Accident Management." International Journal on Artificial Intelligence Tools 27, no. 02 (March 2018): 1850003. http://dx.doi.org/10.1142/s0218213018500033.

Full text
Abstract:
This paper explores the viability of using counterfactual reasoning for impact analyses when understanding and responding to “beyond-design-basis” nuclear power plant accidents. Currently, when a severe nuclear power plant accident occurs, plant operators rely on Severe Accident Management Guidelines. However, the current guidelines are limited in scope and depth: for certain types of accidents, plant operators would have to work to mitigate the damage with limited experience and guidance for the particular situation. We aim to fill the need for comprehensive accident support by using a dynamic Bayesian network to aid in the diagnosis of a nuclear reactor's state and to analyze the impact of possible response measures. The dynamic Bayesian network, DBN, offers an expressive representation of the components and relationships that make up a complex causal system. For this reason, and for its tractable reasoning, the DBN supports a functional model for the intricate operations of nuclear power plants. In this domain, it is also pertinent that a Bayesian network can be composed of both probabilistic and knowledge-based components. Though probabilities can be calculated from simulated models, the structure of the network, as well as the value of some parameters, must be assigned by human experts. Since dynamic Bayesian network-based systems are capable of running better-than-real-time situation analyses, they can support both current event and alternate scenario impact analyses.
APA, Harvard, Vancouver, ISO, and other styles
39

Zhou, Zhangquan, and Guilin Qi. "GEL: A Platform-Independent Reasoner for Parallel Classification with OWL EL Ontologies Using Graph Representation." International Journal on Artificial Intelligence Tools 26, no. 01 (February 2017): 1760001. http://dx.doi.org/10.1142/s0218213017600016.

Full text
Abstract:
In the community of the Semantic Web, the state-of-the-art reasoners are all implemented on specific platforms. As the Web Ontology Language (OWL) has been widely used in different real-world applications, it is required that the task of reasoning can be flexibly adapted to different platforms. In this paper, we take the first effort to give a platform-independent approach for parallel classification of the description logic OWL EL, which is a tractable fragment of OWL 2. Classification, which is the task of computing a subsumption hierarchy between concepts, plays an important role in many applications of OWL EL. The current classification methods rely on specific parallel computation platforms and can hardly achieve a tradeoff between efficiency and scalability. To develop a platform-independent approach for performing classification in OWL EL, we first give a novel and well-defined graph formalism GEL for representing EL ontologies and present a group of sound and complete rules for ontology classification on a GEL graph. Based on this formalism, we design a parallel classification algorithm, which can be implemented on different parallel computation platforms, and we also show the correctness of the algorithm. We implement a system, which can easily switch between multi-core and a distributed cluster. Finally, we conduct experiments on several real-world OWL EL ontologies. The experimental results show that our system outperforms two state-of-the-art EL reasoning systems, and has a linear scalability on the extensions of GO ontologies.
APA, Harvard, Vancouver, ISO, and other styles
40

BONTCHEVA, KALINA, and VANIA DIMITROVA. "EXAMINING THE USE OF CONCEPTUAL GRAPHS IN ADAPTIVE WEB-BASED SYSTEMS THAT AID TERMINOLOGY LEARNING." International Journal on Artificial Intelligence Tools 13, no. 02 (June 2004): 299–331. http://dx.doi.org/10.1142/s0218213004001569.

Full text
Abstract:
This paper discusses the use of Conceptual Graphs (CGs) for implementing tasks employed in web-based educational systems that aid terminology learning. Specifically, we focus on two critical issues in intelligent tutoring - student diagnosis and generation of adaptive explanations. Both tasks are demonstrated in terminological domains where learners have to familiarize themselves with concepts in a specific subject area (e.g. computing, finance, chemistry). Based on CG reasoning, robust and computationally tractable algorithms for student modelling and adaptive explanation generation are defined. Two intelligent systems are presented — STyLE-OLM and HYLITE+. STyLE-OLM is an interactive learner modelling system that extracts extended models of the learners' cognition. HYLITE+ is a natural language generation system that generates adaptive Web pages based on a learner model (LM). The two systems are complementary and have been implemented separately. However, considered together they cover most of the key tasks in adaptive web-based educational hypermedia that aid learning technical terminology. Based on evaluative studies of STyLE-OLM and HYLITE+, the use of CGs for interactive open student modelling and adaptive concept explanations is examined. The applicability of CGs in adaptive web-based systems that aid learning technical terminology is discussed.
APA, Harvard, Vancouver, ISO, and other styles
41

GOMES, CARLA P. "Artificial intelligence and operations research: challenges and opportunities in planning and scheduling." Knowledge Engineering Review 15, no. 1 (March 2000): 1–10. http://dx.doi.org/10.1017/s0269888900001090.

Full text
Abstract:
Both the Artificial Intelligence (AI) and the Operations Research (OR) communities are interested in developing techniques for solving hard combinatorial problems, in particular in the domain of planning and scheduling. AI approaches encompass a rich collection of knowledge representation formalisms for dealing with a wide variety of real-world problems. Some examples are constraint programming representations, logical formalisms, declarative and functional programming languages such as Prolog and Lisp, Bayesian models, rule-based formalism, etc. The downside of such rich representations is that in general they lead to intractable problems, and we therefore often cannot use such formalisms for handling realistic size problems. OR, on the other hand, has focused on more tractable representations, such as linear programming formulations. OR-based techniques have demonstrated the ability to identify optimal and locally optimal solutions for well-defined problem spaces. In general, however, OR solutions are restricted to rigid models with limited expressive power. AI techniques, on the other hand, provide richer and more flexible representations of real-world problems, supporting efficient constraint-based reasoning mechanisms as well as mixed initiative frameworks, which allow the human expertise to be in the loop. The challenge lies in providing representations that are expressive enough to describe real-world problems and at the same time guaranteeing good and fast solutions.
APA, Harvard, Vancouver, ISO, and other styles
42

GUGLIELMANN, RAFFAELLA, and LILIANA IRONI. "A DIVIDE-AND-CONQUER STRATEGY FOR QUALITATIVE SIMULATION AND FUZZY IDENTIFICATION OF COMPLEX DYNAMICAL SYSTEMS." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 19, no. 03 (June 2011): 423–52. http://dx.doi.org/10.1142/s0218488511007076.

Full text
Abstract:
Fuzzy systems properly integrated with Qualitative Reasoning approaches yield a hybrid identification method, called FS-QM, that outperforms traditional data-driven approaches in terms of robustness, interpretability and efficiency in both rich and poor data contexts. This results from the embedment of the entire system dynamics predicted by the simulation of its qualitative model, represented by fuzzy rules, into the fuzzy system. However, the intrinsic limitation of qualitative simulation to scale up to complex and large systems significantly reduces its efficient applicability to real-world problems. The novelty of this paper lies in a divide-and-conquer approach that aims at making qualitative simulation tractable and the derived behavioural description comprehensible and exhaustive, and consequently usable to perform system identification. The partition of the complete model into smaller ones prevents the generation of a complete temporal ordering of all unrelated events, which is one of the major causes of intractable branching in qualitative simulation. The set of generated behaviours is drastically but beneficially reduced, as it still captures the entire range of possible dynamical distinctions. Thus, the properties of the corresponding fuzzy-rule base, which guarantee robustness and interpretability of the identified model, are preserved. The strategy we propose is discussed through a case study from the biological domain.
APA, Harvard, Vancouver, ISO, and other styles
43

GEBSER, MARTIN, JOOHYUNG LEE, and YULIYA LIERLER. "On elementary loops of logic programs." Theory and Practice of Logic Programming 11, no. 6 (May 24, 2011): 953–88. http://dx.doi.org/10.1017/s1471068411000019.

Full text
Abstract:
Using the notion of an elementary loop, Gebser and Schaub (2005. Proceedings of the Eighth International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR'05), 53–65) refined the theorem on loop formulas attributable to Lin and Zhao (2004) by considering loop formulas of elementary loops only. In this paper, we reformulate the definition of an elementary loop, extend it to disjunctive programs, and study several properties of elementary loops, including how maximal elementary loops are related to minimal unfounded sets. The results provide useful insights into the stable model semantics in terms of elementary loops. For a nondisjunctive program, using a graph-theoretic characterization of an elementary loop, we show that the problem of recognizing an elementary loop is tractable. On the other hand, we also show that the corresponding problem is coNP-complete for a disjunctive program. Based on the notion of an elementary loop, we present the class of Head-Elementary-loop-Free (HEF) programs, which strictly generalizes the class of Head-Cycle-Free (HCF) programs attributable to Ben-Eliyahu and Dechter (1994. Annals of Mathematics and Artificial Intelligence 12, 53–87). Like an HCF program, an HEF program can be turned into an equivalent nondisjunctive program in polynomial time by shifting head atoms into the body.
APA, Harvard, Vancouver, ISO, and other styles
44

Calvanese, Diego, Silvio Ghilardi, Alessandro Gianola, Marco Montali, and Andrey Rivkin. "Model Completeness, Uniform Interpolants and Superposition Calculus." Journal of Automated Reasoning 65, no. 7 (June 21, 2021): 941–69. http://dx.doi.org/10.1007/s10817-021-09596-x.

Full text
Abstract:
Uniform interpolants have been largely studied in non-classical propositional logics since the nineties; a successive research line within the automated reasoning community investigated uniform quantifier-free interpolants (sometimes referred to as “covers”) in first-order theories. This further research line is motivated by the fact that uniform interpolants offer an effective solution to tackle quantifier elimination and symbol elimination problems, which are central in model checking infinite state systems. This was first pointed out in ESOP 2008 by Gulwani and Musuvathi, and then by the authors of the present contribution in the context of recent applications to the verification of data-aware processes. In this paper, we show how covers are strictly related to model completions, a well-known topic in model theory. We also investigate the computation of covers within the Superposition Calculus, by adopting a constrained version of the calculus and by defining appropriate settings and reduction strategies. In addition, we show that computing covers is computationally tractable for the fragment of the language used when tackling the verification of data-aware processes. This observation is confirmed by analyzing the preliminary results obtained using the mcmt tool to verify relevant examples of data-aware processes. These examples can be found in the latest version of the tool distribution.
APA, Harvard, Vancouver, ISO, and other styles
45

Friedman, Scott, and Ann Kate Lockwood. "Qualitative Reasoning: Everyday, Pervasive, and Moving Forward — A Report on QR-15." AI Magazine 37, no. 2 (July 4, 2016): 95–96. http://dx.doi.org/10.1609/aimag.v37i2.2635.

Full text
Abstract:
The 28th International Workshop on Qualitative Reasoning (QR-15) presented advances toward reasoning tractably with massive qualitative and quantitative models, automatically learning and reasoning about continuous processes, and representing knowledge about space, causation, and uncertainty.
APA, Harvard, Vancouver, ISO, and other styles
46

Sim, Kwang Mong. "Reasoning tractably about explicit belief: A model-theoretic approach." International Journal of Intelligent Systems 15, no. 9 (2000): 811–48. http://dx.doi.org/10.1002/1098-111x(200009)15:9<811::aid-int1>3.0.co;2-b.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Grün, Gabrielle Assunta. "An Efficient Algorithm for the Maximum Distance Problem." Discrete Mathematics & Theoretical Computer Science Vol. 4 no. 2 (January 1, 2001). http://dx.doi.org/10.46298/dmtcs.291.

Full text
Abstract:
Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely we consider the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Sichao, Johannes Bill, Jan Drugowitsch, and Samuel J. Gershman. "Human visual motion perception shows hallmarks of Bayesian structural inference." Scientific Reports 11, no. 1 (February 12, 2021). http://dx.doi.org/10.1038/s41598-021-82175-7.

Full text
Abstract:
Motion relations in visual scenes carry an abundance of behaviorally relevant information, but little is known about how humans identify the structure underlying a scene’s motion in the first place. We studied the computations governing human motion structure identification in two psychophysics experiments and found that perception of motion relations showed hallmarks of Bayesian structural inference. At the heart of our research lies a tractable task design that enabled us to reveal the signatures of probabilistic reasoning about latent structure. We found that a choice model based on the task’s Bayesian ideal observer accurately matched many facets of human structural inference, including task performance, perceptual error patterns, single-trial responses, participant-specific differences, and subjective decision confidence—especially when motion scenes were ambiguous and when object motion was hierarchically nested within other moving reference frames. Our work can guide future neuroscience experiments to reveal the neural mechanisms underlying higher-level visual motion perception.
APA, Harvard, Vancouver, ISO, and other styles
49

Papantonis, Ioannis, and Vaishak Belle. "Closed-Form Results for Prior Constraints in Sum-Product Networks." Frontiers in Artificial Intelligence 4 (April 8, 2021). http://dx.doi.org/10.3389/frai.2021.644062.

Full text
Abstract:
Incorporating constraints is a major concern in probabilistic machine learning. A wide variety of problems require predictions to be integrated with reasoning about constraints, from modeling routes on maps to approving loan predictions. In the former, we may require the prediction model to respect the presence of physical paths between the nodes on the map, and in the latter, we may require that the prediction model respect fairness constraints that ensure that outcomes are not subject to bias. Broadly speaking, constraints may be probabilistic, logical or causal, but the overarching challenge is to determine if and how a model can be learnt that handles a declared constraint. To the best of our knowledge, treating this in a general way is largely an open problem. In this paper, we investigate how the learning of sum-product networks, a newly introduced and increasingly popular class of tractable probabilistic models, is possible with declared constraints. We obtain correctness results about the training of these models, by establishing a relationship between probabilistic constraints and the model's parameters.
APA, Harvard, Vancouver, ISO, and other styles
50

Mahmood, Yasir, Arne Meier, and Johannes Schmidt. "Parameterized complexity of abduction in Schaefer’s framework." Journal of Logic and Computation, December 29, 2020. http://dx.doi.org/10.1093/logcom/exaa079.

Full text
Abstract:
Abstract Abductive reasoning is a non-monotonic formalism stemming from the work of Peirce. It describes the process of deriving the most plausible explanations of known facts. Considering the positive version, asking for sets of variables as explanations, we study, besides the problem of whether there exists a set of explanations, two explanation-size-limited variants of this reasoning problem (less than or equal to, and equal to, a given size bound). In this paper, we present a thorough two-dimensional classification of these problems: the first dimension is regarding the parameterized complexity under a wealth of different parameterizations, and the second dimension spans through all possible Boolean fragments of these problems in Schaefer’s constraint satisfaction framework with co-clones (T. J. Schaefer. The complexity of satisfiability problems. In Proceedings of the 10th Annual ACM Symposium on Theory of Computing, May 1–3, 1978, San Diego, California, USA, R.J. Lipton, W.A. Burkhard, W.J. Savitch, E.P. Friedman, A.V. Aho eds, pp. 216–226. ACM, 1978). Thereby, we almost complete the parameterized complexity classification program initiated by Fellows et al. (The parameterized complexity of abduction. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 22–26, 2012, Toronto, Ontario, Canada, J. Hoffmann, B. Selman eds. AAAI Press, 2012), partially building on the results by Nordh and Zanuttini (What makes propositional abduction tractable. Artificial Intelligence, 172, 1245–1284, 2008). In this process, we outline a fine-grained analysis of the inherent parameterized intractability of these problems and pinpoint their FPT parts. As the standard algebraic approach is not applicable to our problems, we develop an alternative method that makes the algebraic tools partially available again.
APA, Harvard, Vancouver, ISO, and other styles