Journal articles on the topic 'Neural symbolic learning'

Consult the top 50 journal articles for your research on the topic 'Neural symbolic learning.'

1

Shavlik, Jude W. "Combining symbolic and neural learning." Machine Learning 14, no. 3 (1994): 321–31. http://dx.doi.org/10.1007/bf00993982.

2

Li, Xin, Chengli Zhao, Xue Zhang, and Xiaojun Duan. "Symbolic Neural Ordinary Differential Equations." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 17 (2025): 18511–19. https://doi.org/10.1609/aaai.v39i17.34037.

Abstract:
Differential equations are widely used to describe complex dynamical systems with evolving parameters in nature and engineering. Effectively learning a family of maps from the parameter function to the system dynamics is of great significance. In this study, we propose a novel learning framework of symbolic continuous-depth neural networks, termed Symbolic Neural Ordinary Differential Equations (SNODEs), to effectively and accurately learn the underlying dynamics of complex systems. Specifically, our learning framework comprises three stages: initially, pre-training a predefined symbolic neura
3

Borges, Rafael V., Artur S. d'Avila Garcez, and Luis C. Lamb. "A neural-symbolic perspective on analogy." Behavioral and Brain Sciences 31, no. 4 (2008): 379–80. http://dx.doi.org/10.1017/s0140525x08004482.

Abstract:
The target article criticises neural-symbolic systems as inadequate for analogical reasoning and proposes a model of analogy as transformation (i.e., learning). We accept the importance of learning, but we argue that, instead of conflicting, integrated reasoning and learning would model analogy much more adequately. In this new perspective, modern neural-symbolic systems become the natural candidates for modelling analogy.
4

Fatima, Tuba, and Dr Rehan Muhammad. "The Impact of Neuro-Symbolic AI on Cognitive Linguistics." ACADEMIA International Journal for Social Sciences 4, no. 3 (2025): 455–66. https://doi.org/10.63056/acad.004.03.0386.

Abstract:
Neuro-Symbolic Artificial Intelligence (AI) is indeed a fascinating domain, merging the structured reasoning of symbolic methods with the learning capabilities of neural networks. Its long-standing history reflects its significance in advancing AI towards achieving more robust and interpretable solutions. Neuro-symbolic AI is such an exciting and transformative field, as it combines the structured reasoning of symbolic AI with the adaptability and learning capabilities of neural networks. Your summary elegantly captures the breadth and depth of this growing discipline. The focus on representat
5

Tian, Jidong, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. "Weakly Supervised Neural Symbolic Learning for Cognitive Tasks." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (2022): 5888–96. http://dx.doi.org/10.1609/aaai.v36i5.20533.

Abstract:
Despite the recent success of end-to-end deep neural networks, there are growing concerns about their lack of logical reasoning abilities, especially on cognitive tasks with perception and reasoning processes. A solution is the neural symbolic learning (NeSyL) method that can effectively utilize pre-defined logic rules to constrain the neural architecture making it perform better on cognitive tasks. However, it is challenging to apply NeSyL to these cognitive tasks because of the lack of supervision, the non-differentiable manner of the symbolic system, and the difficulty to probabilistically
6

Pacheco, Maria Leonor, and Dan Goldwasser. "Modeling Content and Context with Deep Relational Learning." Transactions of the Association for Computational Linguistics 9 (February 2021): 100–119. http://dx.doi.org/10.1162/tacl_a_00357.

Abstract:
Building models for realistic natural language tasks requires dealing with long texts and accounting for complicated structural dependencies. Neural-symbolic representations have emerged as a way to combine the reasoning capabilities of symbolic methods, with the expressiveness of neural networks. However, most of the existing frameworks for combining neural and symbolic representations have been designed for classic relational learning tasks that work over a universe of symbolic entities and relations. In this paper, we present DRaiL, an open-source declarative framework for specifying deep r
7

Winters, Thomas, Giuseppe Marra, Robin Manhaeve, and Luc De Raedt. "DeepStochLog: Neural Stochastic Logic Programming." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 9 (2022): 10090–100. http://dx.doi.org/10.1609/aaai.v36i9.21248.

Abstract:
Recent advances in neural-symbolic learning, such as DeepProbLog, extend probabilistic logic programs with neural predicates. Like graphical models, these probabilistic logic programs define a probability distribution over possible worlds, for which inference is computationally hard. We propose DeepStochLog, an alternative neural-symbolic framework based on stochastic definite clause grammars, a kind of stochastic logic program. More specifically, we introduce neural grammar rules into stochastic definite clause grammars to create a framework that can be trained end-to-end. We show that infere
8

Akanbi, Olawale Basheer, and Hameed Olamilekan Ajasa. "Predicting Food Prices in Nigeria Using Machine Learning: Symbolic Regression." International Journal of Research and Innovation in Applied Science X, no. VI (2025): 979–95. https://doi.org/10.51584/ijrias.2025.10060074.

Abstract:
The aim of this study is to predict the prices of local rice, beans, and Garri in the South West (SW) and North Central (NC), Nigeria using economic indicators such as exchange rate, inflation rate, crude oil price, past one month price (lag 1) and past five-month price (lag 5) of the food prices as the predictor variables. The data used were extracted from the website of the National Bureau of Statistics from January 2017 to July 2024. The data were split into training set and testing set. The study proposed four machine learning techniques; random forest, decision tree, neural network and sy
9

Modak, Sadanand, Noah Tobias Patton, Isil Dillig, and Joydeep Biswas. "SYNAPSE: SYmbolic Neural-Aided Preference Synthesis Engine." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 26 (2025): 27529–37. https://doi.org/10.1609/aaai.v39i26.34965.

Abstract:
This paper addresses the problem of preference learning, which aims to align robot behaviors through learning user-specific preferences (e.g. “good pull-over location”) from visual demonstrations. Despite its similarity to learning factual concepts (e.g. “red door”), preference learning is a fundamentally harder problem due to its subjective nature and the paucity of person-specific training data. We address this problem using a novel framework called SYNAPSE, which is a neuro-symbolic approach designed to efficiently learn preferential concepts from limited data. SYNAPSE represents preference
10

Shavlik, Jude W., Raymond J. Mooney, and Geoffrey G. Towell. "Symbolic and neural learning algorithms: An experimental comparison." Machine Learning 6, no. 2 (1991): 111–43. http://dx.doi.org/10.1007/bf00114160.

11

Shadrach C Matthew, Sanjay Siddharthan R, and Elavarasan R. "Adaptive Neuro-Symbolic Systems for Real Time Ethical Decision-Making in Autonomous Agents." International Research Journal on Advanced Engineering and Management (IRJAEM) 3, no. 04 (2025): 1571–76. https://doi.org/10.47392/irjaem.2025.0254.

Abstract:
With the rapid emergence of autonomous systems, appropriate robust frameworks that could make ethical decisions in real time are needed. The adaptive neuro-symbolic approach to decision making by autonomous agents is thus presented here, integrating the advantages of symbolic ability like conventional AI with the adaptability imparted through neural networks. This proposed system enables symbolic reasoning by the AI along with learning from data, thus ensuring transparency and adaptability in decisions. This system, with deep learning models integrated with symbolic representations, would have
12

Garcez, Artur S. d'Avila, and Luís C. Lamb. "A Connectionist Computational Model for Epistemic and Temporal Reasoning." Neural Computation 18, no. 7 (2006): 1711–38. http://dx.doi.org/10.1162/neco.2006.18.7.1711.

Abstract:
The importance of the efforts to bridge the gap between the connectionist and symbolic paradigms of artificial intelligence has been widely recognized. The merging of theory (background knowledge) and data learning (learning from examples) into neural-symbolic systems has indicated that such a learning system is more effective than purely symbolic or purely connectionist systems. Until recently, however, neural-symbolic systems were not able to fully represent, reason, and learn expressive languages other than classical propositional and fragments of first-order logic. In this article, we show
13

Dickens, Charles, Connor Pryor, and Lise Getoor. "Modeling Patterns for Neural-Symbolic Reasoning Using Energy-based Models." Proceedings of the AAAI Symposium Series 3, no. 1 (2024): 90–99. http://dx.doi.org/10.1609/aaaiss.v3i1.31187.

Abstract:
Neural-symbolic (NeSy) AI strives to empower machine learning and large language models with fast, reliable predictions that exhibit commonsense and trustworthy reasoning by seamlessly integrating neural and symbolic methods. With such a broad scope, several taxonomies have been proposed to categorize this integration, emphasizing knowledge representation, reasoning algorithms, and applications. We introduce a knowledge representation-agnostic taxonomy focusing on the neural-symbolic interface capturing methods that reason with probability, logic, and arithmetic constraints. Moreover, we deriv
14

Kim, Segwang, Hyoungwook Nam, Joonyoung Kim, and Kyomin Jung. "Neural Sequence-to-grid Module for Learning Symbolic Rules." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 9 (2021): 8163–71. http://dx.doi.org/10.1609/aaai.v35i9.16994.

Abstract:
Logical reasoning tasks over symbols, such as learning arithmetic operations and computer program evaluations, have become challenges to deep learning. In particular, even state-of-the-art neural networks fail to achieve \textit{out-of-distribution} (OOD) generalization of symbolic reasoning tasks, whereas humans can easily extend learned symbolic rules. To resolve this difficulty, we propose a neural sequence-to-grid (seq2grid) module, an input preprocessor that automatically segments and aligns an input sequence into a grid. As our module outputs a grid via a novel differentiable mapping, an
15

FLETCHER, JUSTIN, and ZORAN OBRADOVIĆ. "Combining Prior Symbolic Knowledge and Constructive Neural Network Learning." Connection Science 5, no. 3-4 (1993): 365–75. http://dx.doi.org/10.1080/09540099308915705.

16

D'Avila Garcez, Artur S., Dov M. Gabbay, and Luis C. Lamb. "Value-based Argumentation Frameworks as Neural-symbolic Learning Systems." Journal of Logic and Computation 15, no. 6 (2005): 1041–58. http://dx.doi.org/10.1093/logcom/exi057.

17

Segler, Marwin H. S., and Mark P. Waller. "Neural-Symbolic Machine Learning for Retrosynthesis and Reaction Prediction." Chemistry - A European Journal 23, no. 25 (2017): 5966–71. http://dx.doi.org/10.1002/chem.201605499.

18

Milicevic, Vladimir, Igor Franc, Maja Lutovac Banduka, Nemanja Zdravkovic, and Nikola Dimitrijevic. "SYMBOLIC ANALYSIS OF CLASSICAL NEURAL NETWORKS FOR DEEP LEARNING." International Journal for Quality Research 19, no. 1 (2025): 85–100. https://doi.org/10.24874/ijqr19.01-06.

19

Liu, Anji, Hongming Xu, Guy Van den Broeck, and Yitao Liang. "Out-of-Distribution Generalization by Neural-Symbolic Joint Training." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 10 (2023): 12252–59. http://dx.doi.org/10.1609/aaai.v37i10.26444.

Abstract:
This paper develops a novel methodology to simultaneously learn a neural network and extract generalized logic rules. Different from prior neural-symbolic methods that require background knowledge and candidate logical rules to be provided, we aim to induce task semantics with minimal priors. This is achieved by a two-step learning framework that iterates between optimizing neural predictions of task labels and searching for a more accurate representation of the hidden task semantics. Notably, supervision works in both directions: (partially) induced task semantics guide the learning of the ne
20

d'AVILA GARCEZ, ARTUR S., LUÍS C. LAMB, KRYSIA BRODA, and DOV M. GABBAY. "APPLYING CONNECTIONIST MODAL LOGICS TO DISTRIBUTED KNOWLEDGE REPRESENTATION PROBLEMS." International Journal on Artificial Intelligence Tools 13, no. 01 (2004): 115–39. http://dx.doi.org/10.1142/s0218213004001442.

Abstract:
Neural-Symbolic Systems concern the integration of the symbolic and connectionist paradigms of Artificial Intelligence. Distributed knowledge representation is traditionally seen under a symbolic perspective. In this paper, we show how neural networks can represent distributed symbolic knowledge, acting as multi-agent systems with learning capability (a key feature of neural networks). We apply the framework of Connectionist Modal Logics to well-known testbeds for distributed knowledge representation formalisms, namely the muddy children and the wise men puzzles. Finally, we sketch a full solu
21

UEBERLA, JOERG P., and ARUN JAGOTA. "Integrating Neural and Symbolic Approaches: A Symbolic Learning Scheme for a Connectionist Associative Memory." Connection Science 5, no. 3-4 (1993): 377–93. http://dx.doi.org/10.1080/09540099308915706.

22

Marton, Sascha, Stefan Lüdtke, and Christian Bartelt. "Explanations for Neural Networks by Neural Networks." Applied Sciences 12, no. 3 (2022): 980. http://dx.doi.org/10.3390/app12030980.

Abstract:
Understanding the function learned by a neural network is crucial in many domains, e.g., to detect a model’s adaption to concept drift in online learning. Existing global surrogate model approaches generate explanations by maximizing the fidelity between the neural network and a surrogate model on a sample-basis, which can be very time-consuming. Therefore, these approaches are not applicable in scenarios where timely or frequent explanations are required. In this paper, we introduce a real-time approach for generating a symbolic representation of the function learned by a neural network. Our
23

Chen, Hsinchun. "Machine learning for information retrieval: Neural networks, symbolic learning, and genetic algorithms." Journal of the American Society for Information Science 46, no. 3 (1995): 194–216. http://dx.doi.org/10.1002/(sici)1097-4571(199504)46:3<194::aid-asi4>3.0.co;2-s.

24

Pasupuleti, Murali Krishna. "Neural Rationality: Modeling Decision Logic in Deep Neural Architectures." International Journal of Academic and Industrial Research Innovations(IJAIRI) 05, no. 05 (2025): 355–67. https://doi.org/10.62311/nesx/rp05ai3.

Abstract:
This paper introduces the concept of Neural Rationality, a framework that aims to model logical, interpretable decision-making within deep neural architectures. Traditional deep learning excels at pattern recognition but often lacks transparent decision logic. By integrating attention mechanisms, symbolic logic modules, and cognitive constraints, neural models can emulate rational decision-making observed in humans. Researcher conducted regression and predictive analyses using benchmark datasets (DecisionQA, LogicalNLI) to quantify logical consistency and interpretability. Results sh
25

Xu, Zelin, Tingsong Xiao, Wenchong He, et al. "Spatial-Logic-Aware Weakly Supervised Learning for Flood Mapping on Earth Imagery." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (2024): 22457–65. http://dx.doi.org/10.1609/aaai.v38i20.30253.

Abstract:
Flood mapping on Earth imagery is crucial for disaster management, but its efficacy is hampered by the lack of high-quality training labels. Given high-resolution Earth imagery with coarse and noisy training labels, a base deep neural network model, and a spatial knowledge base with label constraints, our problem is to infer the true high-resolution labels while training neural network parameters. Traditional methods are largely based on specific physical properties and thus fall short of capturing the rich domain constraints expressed by symbolic logic. Neural-symbolic models can capture rich
26

Mohan Raja Pulicharla. "Neurosymbolic AI: Bridging neural networks and symbolic reasoning." World Journal of Advanced Research and Reviews 25, no. 1 (2025): 2351–73. https://doi.org/10.30574/wjarr.2025.25.1.0287.

Abstract:
Artificial Intelligence (AI) has made tremendous strides in recent decades, powered by advancements in neural networks and symbolic reasoning systems. Neural networks excel at learning patterns from data, enabling breakthroughs in tasks like image recognition, natural language processing, and autonomous driving. On the other hand, symbolic reasoning systems provide structured, rule-based frameworks for logical inference and knowledge representation, making them well-suited for domains requiring explainability, generalization, and interpretability. However, these paradigms often operate in isol
27

MILARÉ, CLAUDIA R., ANDRÉ C. P. DE L. F. DE CARVALHO, and MARIA C. MONARD. "AN APPROACH TO EXPLAIN NEURAL NETWORKS USING SYMBOLIC ALGORITHMS." International Journal of Computational Intelligence and Applications 02, no. 04 (2002): 365–76. http://dx.doi.org/10.1142/s1469026802000695.

Abstract:
Although Artificial Neural Networks have been satisfactorily employed in several problems, such as clustering, pattern recognition, dynamic systems control and prediction, they still suffer from significant limitations. One of them is that the induced concept representation is not usually comprehensible to humans. Several techniques have been suggested to extract meaningful knowledge from trained networks. This paper proposes the use of symbolic learning algorithms, commonly used by the Machine Learning community, such as C4.5, C4.5rules and CN2, to extract symbolic representations from traine
28

Rossi, Sara, and Samuel Johnson. "NEUROSYMBOLIC AI: MERGING DEEP LEARNING AND LOGICAL REASONING FOR ENHANCED EXPLAINABILITY." International Journal of Advanced Artificial Intelligence Research 2, no. 06 (2025): 1–7. https://doi.org/10.55640/ijaair-v02i06-01.

Abstract:
Neurosymbolic Artificial Intelligence (AI) represents a promising paradigm that bridges the gap between sub-symbolic learning and symbolic reasoning by integrating deep learning models with formal logic-based systems. This hybrid approach leverages the pattern recognition strengths of neural networks and the interpretability and generalization power of symbolic reasoning. The convergence of these two methodologies addresses key challenges in AI, such as explainability, data efficiency, and reasoning under uncertainty. This paper explores the conceptual foundations, architectures, and recent ad
29

Marra, Giuseppe. "From Statistical Relational to Neuro-Symbolic Artificial Intelligence." Proceedings of the AAAI Conference on Artificial Intelligence 38, no. 20 (2024): 22678. http://dx.doi.org/10.1609/aaai.v38i20.30294.

Abstract:
The integration of learning and reasoning is one of the key challenges in artificial intelligence and machine learning today. The area of Neuro-Symbolic AI (NeSy) tackles this challenge by integrating symbolic reasoning with neural networks. In our recent work, we provided an introduction to NeSy by drawing several parallels to another field that has a rich tradition in integrating learning and reasoning, namely Statistical Relational Artificial Intelligence (StarAI).
30

Craandijk, Dennis, and Floris Bex. "Enforcement Heuristics for Argumentation with Deep Reinforcement Learning." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 5 (2022): 5573–81. http://dx.doi.org/10.1609/aaai.v36i5.20497.

Abstract:
In this paper, we present a learning-based approach to the symbolic reasoning problem of dynamic argumentation, where the knowledge about attacks between arguments is incomplete or evolving. Specifically, we employ deep reinforcement learning to learn which attack relations between arguments should be added or deleted in order to enforce the acceptability of (a set of) arguments. We show that our Graph Neural Network (GNN) architecture EGNN can learn a near optimal enforcement heuristic for all common argument-fixed enforcement problems, including problems for which no other (symbolic) solvers
31

R., John Martin, and Sujatha. "Symbolic-Connectionist Representational Model for Optimizing Decision Making Behavior in Intelligent Systems." International Journal of Electrical and Computer Engineering (IJECE) 8, no. 1 (2020): 326–32. https://doi.org/10.11591/ijece.v8i1.pp326-332.

Abstract:
Modeling higher order cognitive processes like human decision making come in three representational approaches namely symbolic, connectionist and symbolic-connectionist. Many connectionist neural network models are evolved over the decades for optimizing decision making behaviors and their agents are also in place. There had been attempts to implement symbolic structures within connectionist architectures with distributed representations. Our work was aimed at proposing an enhanced connectionist approach of optimizing the decisions within the framework of a symbolic cognitive model. The action
32

Ashok Kumar Ramadoss. "Prophecies using Physics Involved Neural Networks (PINNs) for achieving the accuracy using AI Models in discrete Kinematics." International Journal of Science and Research Archive 16, no. 1 (2025): 444–53. https://doi.org/10.30574/ijsra.2025.16.1.2043.

Abstract:
Artificial Nonmonotonic Neural Networks (ANNNs) are a kind of hybrid learning system capable of nonmonotonic reasoning. Nonmonotonic reasoning plays an important role in the development of artificial intelligent systems that try to mimic common-sense reasoning as exhibited by humans: slow and steady, but with minimized error, unlike monotonic reasoning, where decisions are fast but carry more errors. On the other hand, a hybrid learning system provides an explanation capability to trained neural networks through acquiring symbolic knowledge of a domain, refining it using a set of
33

Anil Kumar. "Neuro Symbolic AI in personalized mental health therapy: Bridging cognitive science and computational psychiatry." World Journal of Advanced Research and Reviews 19, no. 2 (2023): 1663–79. https://doi.org/10.30574/wjarr.2023.19.2.1516.

Abstract:
Personalized mental health therapy has gained increasing attention as advancements in artificial intelligence (AI) enable tailored treatment strategies based on individual cognitive and emotional profiles. Neuro-symbolic AI, a hybrid approach combining symbolic reasoning and neural networks, offers a promising solution for bridging cognitive science and computational psychiatry. Unlike conventional AI models that rely solely on deep learning, neuro-symbolic AI integrates human-interpretable knowledge representations with data-driven learning, enhancing the adaptability and explainability of AI
34

Vahed, A., and C. W. Omlin. "A Machine Learning Method for Extracting Symbolic Knowledge from Recurrent Neural Networks." Neural Computation 16, no. 1 (2004): 59–71. http://dx.doi.org/10.1162/08997660460733994.

Abstract:
Neural networks do not readily provide an explanation of the knowledge stored in their weights as part of their information processing. Until recently, neural networks were considered to be black boxes, with the knowledge stored in their weights not readily accessible. Since then, research has resulted in a number of algorithms for extracting knowledge in symbolic form from trained neural networks. This article addresses the extraction of knowledge in symbolic form from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract
35

Yamauchi, Yukari, and Shun'ichi Tano. "Analysis of Symbol Generation and Integration in a Unified Model Based on a Neural Network." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 3 (2005): 297–303. http://dx.doi.org/10.20965/jaciii.2005.p0297.

Abstract:
The computational (numerical information) and symbolic (knowledge-based) processing used in intelligent processing has advantages and disadvantages. A simple model integrating symbols into a neural network was proposed as a first step toward fusing computational and symbolic processing. To verify the effectiveness of this model, we first analyze the trained neural network and generate symbols manually. Then we discuss generation methods that are able to discover effective symbols during training of the neural network. We evaluated these through simulations of reinforcement learning in simple f
36

Marra, Giuseppe. "Bridging symbolic and subsymbolic reasoning with minimax entropy models." Intelligenza Artificiale 15, no. 2 (2022): 71–90. http://dx.doi.org/10.3233/ia-210088.

Abstract:
In this paper, we investigate MiniMax Entropy models, a class of neural symbolic models where symbolic and subsymbolic features are seamlessly integrated. We show how these models recover classical algorithms from both the deep learning and statistical relational learning scenarios. Novel hybrid settings are defined and experimentally explored, showing state-of-the-art performance in collective classification, knowledge base completion and graph (molecular) data generation.
37

Dathathri, Sumanth, Sicun Gao, and Richard M. Murray. "Inverse Abstraction of Neural Networks Using Symbolic Interpolation." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 3437–44. http://dx.doi.org/10.1609/aaai.v33i01.33013437.

Abstract:
Neural networks in real-world applications have to satisfy critical properties such as safety and reliability. The analysis of such properties typically requires extracting information through computing pre-images of the network transformations, but it is well-known that explicit computation of pre-images is intractable. We introduce new methods for computing compact symbolic abstractions of pre-images by computing their overapproximations and underapproximations through all layers. The abstraction of pre-images enables formal analysis and knowledge extraction without affecting standard learni
38

Tsoi, Ho Fung, Adrian Alan Pol, Vladimir Loncar, et al. "Symbolic Regression on FPGAs for Fast Machine Learning Inference." EPJ Web of Conferences 295 (2024): 09036. http://dx.doi.org/10.1051/epjconf/202429509036.

Abstract:
The high-energy physics community is investigating the potential of deploying machine-learning-based solutions on Field-Programmable Gate Arrays (FPGAs) to enhance physics sensitivity while still meeting data processing time constraints. In this contribution, we introduce a novel end-to-end procedure that utilizes a machine learning technique called symbolic regression (SR). It searches the equation space to discover algebraic relations approximating a dataset. We use PySR (a software to uncover these expressions based on an evolutionary algorithm) and extend the functionality of hls4ml (a pac
39

Pasupuleti, Murali Krishna. "Synthetic Cognition: Building Artificial Minds for Adaptive Learning." International Journal of Academic and Industrial Research Innovations(IJAIRI) 05, no. 05 (2025): 343–54. https://doi.org/10.62311/nesx/rp05ai2.

Abstract:
Synthetic cognition refers to the construction of artificial systems that replicate or simulate cognitive processes such as learning, reasoning, and adaptation. This paper investigates the computational frameworks and architectures for building artificial minds capable of adaptive learning, drawing on concepts from neuroscience, machine learning, and cognitive psychology. By evaluating neuro-symbolic architectures, reinforcement learning with episodic memory, and meta-learning models, Researcher demonstrated how these systems can generalize across tasks and environments. Using regres
40

Liang, Yitao, and Guy Van den Broeck. "Learning Logistic Circuits." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 4277–86. http://dx.doi.org/10.1609/aaai.v33i01.33014277.

Abstract:
This paper proposes a new classification model called logistic circuits. On MNIST and Fashion datasets, our learning algorithm outperforms neural networks that have an order of magnitude more parameters. Yet, logistic circuits have a distinct origin in symbolic AI, forming a discriminative counterpart to probabilistic-logical circuits such as ACs, SPNs, and PSDDs. We show that parameter learning for logistic circuits is convex optimization, and that a simple local search algorithm can induce strong model structures from data.
APA, Harvard, Vancouver, ISO, and other styles
41

SIEGELMANN, HAVA T. "ON NIL: THE SOFTWARE CONSTRUCTOR OF NEURAL NETWORKS." Parallel Processing Letters 06, no. 04 (1996): 575–82. http://dx.doi.org/10.1142/s0129626496000510.

Full text
Abstract:
Analog recurrent neural networks have attracted much attention lately as powerful tools of automatic learning. However, they are not as popular in industry as their usefulness would justify. The lack of any programming tool for networks, and their vague internal representation, leaves the networks for the use of experts only. We propose a way to make neural networks friendly to users by formally defining a high-level language, called Neural Information Processing Programming Language, which is rich enough to express any computer algorithm or rule-based system. We show how to compile …
APA, Harvard, Vancouver, ISO, and other styles
42

Le-Phuoc, Danh, Thomas Eiter, and Anh Le-Tuan. "A Scalable Reasoning and Learning Approach for Neural-Symbolic Stream Fusion." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (2021): 4996–5005. http://dx.doi.org/10.1609/aaai.v35i6.16633.

Full text
Abstract:
Driven by deep neural networks (DNN), the recent development of computer vision makes vision sensors such as stereo cameras and Lidars ubiquitous in autonomous cars, robotics and traffic monitoring. However, a traditional DNN-based data fusion pipeline like object tracking has to hard-wire an engineered set of DNN models to a fixed processing logic, which makes it difficult to infuse new models into that pipeline. To overcome this, we propose a novel neural-symbolic stream reasoning approach realised by semantic stream reasoning programs which specify DNN-based data fusion pipelines via logic rules …
APA, Harvard, Vancouver, ISO, and other styles
43

Han, Zhongyi, Benzheng Wei, Xiaoming Xi, Bo Chen, Yilong Yin, and Shuo Li. "Unifying neural learning and symbolic reasoning for spinal medical report generation." Medical Image Analysis 67 (January 2021): 101872. http://dx.doi.org/10.1016/j.media.2020.101872.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Verguts, Tom, and Wim Fias. "Representation of Number in Animals and Humans: A Neural Model." Journal of Cognitive Neuroscience 16, no. 9 (2004): 1493–504. http://dx.doi.org/10.1162/0898929042568497.

Full text
Abstract:
This article addresses the representation of numerical information conveyed by nonsymbolic and symbolic stimuli. In a first simulation study, we show how number-selective neurons develop when an initially uncommitted neural network is given nonsymbolic stimuli as input (e.g., collections of dots) under unsupervised learning. The resultant network is able to account for the distance and size effects, two ubiquitous effects in numerical cognition. Furthermore, the properties of the network units conform in detail to the characteristics of recently discovered number-selective neurons. In a second simulation study, …
APA, Harvard, Vancouver, ISO, and other styles
45

Evans, Richard, and Edward Grefenstette. "Learning Explanatory Rules from Noisy Data." Journal of Artificial Intelligence Research 61 (January 26, 2018): 1–64. http://dx.doi.org/10.1613/jair.5714.

Full text
Abstract:
Artificial Neural Networks are powerful function approximators capable of modelling solutions to a wide variety of problems, both supervised and unsupervised. As their size and expressivity increase, so too does the variance of the model, yielding a nearly ubiquitous overfitting problem. Although mitigated by a variety of model regularisation methods, the common cure is to seek large amounts of training data (which is not necessarily easily obtained) that sufficiently approximate the data distribution of the domain we wish to test on. In contrast, logic programming methods such as Inductive Logic Programming …
APA, Harvard, Vancouver, ISO, and other styles
46

Kollia, Ilianna, Nikolaos Simou, Andreas Stafylopatis, and Stefanos Kollias. "SEMANTIC IMAGE ANALYSIS USING A SYMBOLIC NEURAL ARCHITECTURE." Image Analysis & Stereology 29, no. 3 (2010): 159. http://dx.doi.org/10.5566/ias.v29.p159-172.

Full text
Abstract:
Image segmentation and classification are basic operations in image analysis and multimedia search which have gained great attention over the last few years due to the large increase of digital multimedia content. A recent trend in image analysis aims at incorporating symbolic knowledge representation systems and machine learning techniques. In this paper, we examine interweaving of neural network classifiers and fuzzy description logics for the adaptation of a knowledge base for semantic image analysis. The proposed approach includes a formal knowledge component, which, assisted by a reasoning …
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Qiuyuan, Li Deng, Dapeng Wu, Chang Liu, and Xiaodong He. "Attentive Tensor Product Learning." Proceedings of the AAAI Conference on Artificial Intelligence 33 (July 17, 2019): 1344–51. http://dx.doi.org/10.1609/aaai.v33i01.33011344.

Full text
Abstract:
This paper proposes a novel neural architecture, Attentive Tensor Product Learning (ATPL), to represent grammatical structures of natural language in deep learning models. ATPL exploits Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, to integrate deep learning with explicit natural language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via the TPR-based deep neural network; 2) the use of attention modules to compute TPR; and 3) the integration of TPR with typical deep learning …
APA, Harvard, Vancouver, ISO, and other styles
48

Wermter, S., and V. Weber. "SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks." Journal of Artificial Intelligence Research 6 (January 1, 1997): 35–85. http://dx.doi.org/10.1613/jair.282.

Full text
Abstract:
Previous approaches to analyzing spontaneously spoken language have often been based on encoding syntactic and semantic knowledge manually and symbolically. While there has been some progress using statistical or connectionist language models, many current spoken-language systems still use a relatively brittle, hand-coded symbolic grammar or symbolic semantic component. In contrast, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing …
APA, Harvard, Vancouver, ISO, and other styles
49

Welleck, Sean, Peter West, Jize Cao, and Yejin Choi. "Symbolic Brittleness in Sequence Models: On Systematic Generalization in Symbolic Mathematics." Proceedings of the AAAI Conference on Artificial Intelligence 36, no. 8 (2022): 8629–37. http://dx.doi.org/10.1609/aaai.v36i8.20841.

Full text
Abstract:
Neural sequence models trained with maximum likelihood estimation have led to breakthroughs in many tasks, where success is defined by the gap between training and test performance. However, their ability to achieve stronger forms of generalization remains unclear. We consider the problem of symbolic mathematical integration, as it requires generalizing systematically beyond the training set. We develop a methodology for evaluating generalization that takes advantage of the problem domain's structure and access to a verifier. Despite promising in-distribution performance of sequence-to-sequence models …
APA, Harvard, Vancouver, ISO, and other styles
50

Crouse, Maxwell, Constantine Nakos, Ibrahim Abdelaziz, and Ken Forbus. "Neural Analogical Matching." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 1 (2021): 809–17. http://dx.doi.org/10.1609/aaai.v35i1.16163.

Full text
Abstract:
Analogy is core to human cognition. It allows us to solve problems based on prior experience, it governs the way we conceptualize new information, and it even influences our visual perception. The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence, resulting in data-efficient models that learn and reason in human-like ways. While cognitive perspectives of analogy and deep learning have generally been studied independently of one another, the integration of the two lines of research is a promising step towards more robust and e…
APA, Harvard, Vancouver, ISO, and other styles