Academic literature on the topic 'Neuromorphic computer systems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Neuromorphic computer systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Neuromorphic computer systems"

1

Diao, Yu, Yaoxuan Zhang, Yanran Li, and Jie Jiang. "Metal-Oxide Heterojunction: From Material Process to Neuromorphic Applications." Sensors 23, no. 24 (2023): 9779. http://dx.doi.org/10.3390/s23249779.

Full text
Abstract:
As technologies like the Internet, artificial intelligence, and big data evolve at a rapid pace, computer architecture is transitioning from compute-intensive to memory-intensive. However, traditional von Neumann architectures encounter bottlenecks in addressing modern computational challenges. The emulation of the behaviors of a synapse at the device level by ionic/electronic devices has shown promising potential in future neural-inspired and compact artificial intelligence systems. To address these issues, this review thoroughly investigates the recent progress in metal-oxide heterostructures for neuromorphic applications. These heterostructures not only offer low power consumption and high stability but also possess optimized electrical characteristics via interface engineering. The paper first outlines various synthesis methods for metal oxides and then summarizes the neuromorphic devices using these materials and their heterostructures. More importantly, we review the emerging multifunctional applications, including neuromorphic vision, touch, and pain systems. Finally, we summarize the future prospects of neuromorphic devices with metal-oxide heterostructures and list the current challenges while offering potential solutions. This review provides insights into the design and construction of metal-oxide devices and their applications for neuromorphic systems.
APA, Harvard, Vancouver, ISO, and other styles
2

Singh, Satnam, Ishita Sabharwal, Shweta Kushwaha, Shilpi Jain, and Madhur Jain. "Enhancing Human-Machine Interaction: Leveraging Neuromorphic Chips for Adaptive Learning and Control in Neural Prosthetics and Artificial Intelligence." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 10, no. 6 (2024): 933–40. http://dx.doi.org/10.32628/cseit241061135.

Full text
Abstract:
This paper examines the integration of neuromorphic chips, AI, and neural prostheses to enhance human-machine interaction. Neuromorphic chips, modelled after the brain's neural architecture, enable efficient learning, adaptive behaviour, and energy-efficient processing in AI systems and prostheses. These chips improve pattern recognition, adaptive control, and integration with the human nervous system. In neural prostheses, they promise seamless brain-computer interfaces (BCI) to restore mobility for paralyzed individuals and enable precise control of devices for people with severe disabilities. For AI systems, neuromorphic chips support rapid learning from large datasets, enabling adaptability in dynamic environments and real-time decision-making.
APA, Harvard, Vancouver, ISO, and other styles
3

Mikki, Said. "Generalized Neuromorphism and Artificial Intelligence: Dynamics in Memory Space." Symmetry 16, no. 4 (2024): 492. http://dx.doi.org/10.3390/sym16040492.

Full text
Abstract:
This paper introduces a multidisciplinary conceptual perspective encompassing artificial intelligence (AI), artificial general intelligence (AGI), and cybernetics, framed within what we call the formalism of generalized neuromorphism. Drawing from recent advancements in computing, such as neuromorphic computing and spiking neural networks, as well as principles from the theory of open dynamical systems and stochastic classical and quantum dynamics, this formalism is tailored to model generic networks comprising abstract processing events. A pivotal aspect of our approach is the incorporation of the memory space and the intrinsic non-Markovian nature of the abstract generalized neuromorphic system. We envision future computations taking place within an expanded space (memory space) and leveraging memory states. Positioned at a high abstract level, generalized neuromorphism facilitates multidisciplinary applications across various approaches within the AI community.
APA, Harvard, Vancouver, ISO, and other styles
4

Sharma, Parul, Balwinder Raj, and Sandeep Singh Gill. "Spintronics Based Non-Volatile MRAM for Intelligent Systems." International Journal on Semantic Web and Information Systems 18, no. 1 (2022): 1–16. http://dx.doi.org/10.4018/ijswis.310056.

Full text
Abstract:
This paper presents spintronic-based MRAM and shows how it can replace both SRAM and DRAM, providing high speed with a small chip footprint. Moreover, MRAM is a nonvolatile memory that offers a great advance in storage. The different types of MRAM are described along with the techniques used for writing, noting which is most widely used and why. The basic working principle and the functions performed by MRAM are discussed. Artificial intelligence (AI) is covered with its pros and cons for intelligent systems. Neuromorphic computing is also explained, along with its important role in intelligent systems, and several reasons why neuromorphic computing is so important are discussed. The paper also shows how spintronic-based devices, especially memory, can be used in intelligent systems and neuromorphic computing. Nanoscale spintronic-based MRAM plays a key role in intelligent systems and neuromorphic computing applications.
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Jun. "Recent Progress of Memristor-based Neuromorphic Computing." Transactions on Computer Science and Intelligent Systems Research 5 (August 12, 2024): 1655–61. http://dx.doi.org/10.62051/1kany131.

Full text
Abstract:
The evolution of memristors and their successful applications have positioned them as formidable candidates for the next generation of computer systems. With the rapid advancement of foundational artificial intelligence applications, there is an increasing demand for computational power, energy efficiency, and stability. Memristors and the Neuromorphic Computing (NMC) systems they underpin hold significant potential to break through the von Neumann bottleneck. However, technical challenges remain in the application of NMC to computer systems. In this review, we focus on the performance of various structured memristors within Neuromorphic Computing and across different machine learning algorithms. We provide an overview of the current challenges faced by NMC, including the structural limitations due to sneak paths and the inherent power consumption limitations, and offer a perspective on future developments and opportunities in the field.
APA, Harvard, Vancouver, ISO, and other styles
6

Vishnupriya, K. P., Jwala Jose, Prince Joy, Sritha S., and Gibi K. S. "Brain-Inspired Artificial Intelligence: Revolutionizing Computing and Cognitive Systems." International Journal of Scientific Research in Engineering and Management 08, no. 12 (2024): 1–8. https://doi.org/10.55041/ijsrem39825.

Full text
Abstract:
Brain-inspired artificial intelligence (AI) is a rapidly evolving field that seeks to model computational systems after the structure, processes, and functioning of the human brain. By drawing from neuroscience and cognitive science, brain-inspired AI aims to improve the efficiency, scalability, and adaptability of machine learning algorithms. This paper explores the key technologies and advancements in the realm of brain-inspired AI, including neural networks, neuromorphic hardware, brain-computer interfaces, and algorithms inspired by biological learning mechanisms. Additionally, we will analyze the challenges and future opportunities in achieving more brain-like cognitive systems. The integration of these technologies promises a paradigm shift in AI research, bringing us closer to artificial general intelligence (AGI) while creating more energy-efficient and resilient systems. Keywords: Brain-inspired AI, Neural Networks, Neuromorphic Computing, Spiking Neural Networks, Artificial General Intelligence, Brain-Computer Interfaces, Cognitive Architectures.
APA, Harvard, Vancouver, ISO, and other styles
7

Dunham, Christopher S., Sam Lilak, Joel Hochstetter, et al. "Nanoscale neuromorphic networks and criticality: a perspective." Journal of Physics: Complexity 2, no. 4 (2021): 042001. http://dx.doi.org/10.1088/2632-072x/ac3ad3.

Full text
Abstract:
Numerous studies suggest critical dynamics may play a role in information processing and task performance in biological systems. However, studying critical dynamics in these systems can be challenging due to many confounding biological variables that limit access to the physical processes underpinning critical dynamics. Here we offer a perspective on the use of abiotic, neuromorphic nanowire networks as a means to investigate critical dynamics in complex adaptive systems. Neuromorphic nanowire networks are composed of metallic nanowires and possess metal-insulator-metal junctions. These networks self-assemble into a highly interconnected, variable-density structure and exhibit nonlinear electrical switching properties and information processing capabilities. We highlight key dynamical characteristics observed in neuromorphic nanowire networks, including persistent fluctuations in conductivity with power law distributions, hysteresis, chaotic attractor dynamics, and avalanche criticality. We posit that neuromorphic nanowire networks can function effectively as tunable abiotic physical systems for studying critical dynamics and leveraging criticality for computation.
APA, Harvard, Vancouver, ISO, and other styles
8

Siddique, Ali, Jingqi Sun, Kung Jui Hou, Mang I. Vai, Sio Hang Pun, and Muhammad Azhar Iqbal. "SpikoPoniC: A Low-Cost Spiking Neuromorphic Computer for Smart Aquaponics." Agriculture 13, no. 11 (2023): 2057. http://dx.doi.org/10.3390/agriculture13112057.

Full text
Abstract:
Aquaponics is an emerging area of agricultural sciences that combines aquaculture and hydroponics in a symbiotic way to enhance crop production. A stable smart aquaponic system requires estimating the fish size in real time. Though deep learning has shown promise in the context of smart aquaponics, most smart systems are extremely slow and costly and cannot be deployed on a large scale. Therefore, we design and present a novel neuromorphic computer that uses spiking neural networks (SNNs) for estimating not only the length but also the weight of the fish. To train the SNN, we present a novel hybrid scheme in which some of the neural layers are trained using direct SNN backpropagation, while others are trained using standard backpropagation. By doing this, a blend of high hardware efficiency and accuracy can be achieved. The proposed computer SpikoPoniC can classify more than 84 million fish samples in a second, achieving a speedup of at least 3369× over traditional general-purpose computers. The SpikoPoniC consumes less than 1100 slice registers on Virtex 6 and is much cheaper than most SNN-based hardware systems. To the best of our knowledge, this is the first SNN-based neuromorphic system that performs smart real-time aquaponic monitoring.
APA, Harvard, Vancouver, ISO, and other styles
9

Jang, Taejin, Suhyeon Kim, Jeesoo Chang, et al. "3D AND-Type Stacked Array for Neuromorphic Systems." Micromachines 11, no. 9 (2020): 829. http://dx.doi.org/10.3390/mi11090829.

Full text
Abstract:
NOR/AND flash memory was studied in neuromorphic systems to perform vector-by-matrix multiplication (VMM) by summing the current. Because the size of NOR/AND cells exceeds those of other memristor synaptic devices, we proposed a 3D AND-type stacked array to reduce the cell size. Through a tilted implantation method, the conformal sources and drains of each cell could be formed, with confirmation by a technology computer aided design (TCAD) simulation. In addition, the cell-to-cell variation due to the etch slope could be eliminated by controlling the deposition thickness of the cells. The suggested array can be beneficial in simple program/inhibit schemes given its use of Fowler–Nordheim (FN) tunneling because the drain lines and source lines are parallel. Therefore, the conductance of each synaptic device can be updated at low power level.
APA, Harvard, Vancouver, ISO, and other styles
10

Ferreira de Lima, Thomas, Alexander N. Tait, Armin Mehrabian, et al. "Primer on silicon neuromorphic photonic processors: architecture and compiler." Nanophotonics 9, no. 13 (2020): 4055–73. http://dx.doi.org/10.1515/nanoph-2020-0172.

Full text
Abstract:
Microelectronic computers have encountered challenges in meeting all of today’s demands for information processing. Meeting these demands will require the development of unconventional computers employing alternative processing models and new device physics. Neural network models have come to dominate modern machine learning algorithms, and specialized electronic hardware has been developed to implement them more efficiently. A silicon photonic integration industry promises to bring manufacturing ecosystems normally reserved for microelectronics to photonics. Photonic devices have already found simple analog signal processing niches where electronics cannot provide sufficient bandwidth and reconfigurability. In order to solve more complex information processing problems, they will have to adopt a processing model that generalizes and scales. Neuromorphic photonics aims to map physical models of optoelectronic systems to abstract models of neural networks. It represents a new opportunity for machine information processing on sub-nanosecond timescales, with application to mathematical programming, intelligent radio frequency signal processing, and real-time control. The strategy of neuromorphic engineering is to externalize the risk of developing computational theory alongside hardware. The strategy of remaining compatible with silicon photonics externalizes the risk of platform development. In this perspective article, we provide a rationale for a neuromorphic photonics processor, envisioning its architecture and a compiler. We also discuss how it can be interfaced with a general purpose computer, i.e. a CPU, as a coprocessor to target specific applications. This paper is intended for a wide audience and provides a roadmap for expanding research in the direction of transforming neuromorphic photonics into a viable and useful candidate for accelerating neuromorphic computing.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Neuromorphic computer systems"

1

Bieszczad, Andrzej. "Neuromorphic distributed general problem solvers." Dissertation (Engineering, Systems and Computer), Carleton University, Ottawa, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Nease, Stephen Howard. "Contributions to neuromorphic and reconfigurable circuits and systems." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/44923.

Full text
Abstract:
This thesis presents a body of work in the field of reconfigurable and neuromorphic circuits and systems. Three main projects were undertaken. The first was using a Field-Programmable Analog Array (FPAA) to model the cable behavior of dendrites using analog circuits. The second was to design, lay out, and test part of a new FPAA, the RASP 2.9v. The final project was to use floating-gate programming to remove offsets in a neuromorphic FPAA, the RASP Neuron 1D.
APA, Harvard, Vancouver, ISO, and other styles
3

Azam, Md Ali. "Energy Efficient Spintronic Device for Neuromorphic Computation." VCU Scholars Compass, 2019. https://scholarscompass.vcu.edu/etd/6036.

Full text
Abstract:
Future computing will require significant development in new computing device paradigms. This is motivated by CMOS devices reaching their technological limits, the need for non-Von Neumann architectures as well as the energy constraints of wearable technologies and embedded processors. The first device proposal, an energy-efficient voltage-controlled domain wall device for implementing an artificial neuron and synapse is analyzed using micromagnetic modeling. By controlling the domain wall motion utilizing spin transfer or spin orbit torques in association with voltage generated strain control of perpendicular magnetic anisotropy in the presence of Dzyaloshinskii-Moriya interaction (DMI), different positions of the domain wall are realized in the free layer of a magnetic tunnel junction to program different synaptic weights. Additionally, an artificial neuron can be realized by combining this DW device with a CMOS buffer. The second neuromorphic device proposal is inspired by the brain. Membrane potential of many neurons oscillate in a subthreshold damped fashion and fire when excited by an input frequency that nearly equals their Eigen frequency. We investigate theoretical implementation of such “resonate-and-fire” neurons by utilizing the magnetization dynamics of a fixed magnetic skyrmion based free layer of a magnetic tunnel junction (MTJ). Voltage control of magnetic anisotropy or voltage generated strain results in expansion and shrinking of a skyrmion core that mimics the subthreshold oscillation. Finally, we show that such resonate and fire neurons have potential application in coupled nanomagnetic oscillator based associative memory arrays.
APA, Harvard, Vancouver, ISO, and other styles
4

Smith, Paul Devon. "An Analog Architecture for Auditory Feature Extraction and Recognition." Diss., Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4839.

Full text
Abstract:
Speech recognition systems have been implemented using a wide range of signal processing techniques, including neuromorphic/biologically inspired and digital signal processing (DSP) techniques. Neuromorphic/biologically inspired techniques, such as silicon cochlea models, are based on fairly simple yet highly parallel computation and/or computational units, while DSP is based on block transforms and statistical or error-minimization methods. Essential to each of these techniques is the first stage of extracting meaningful information from the speech signal, known as feature extraction. This can be done using biologically inspired techniques such as silicon cochlea models, or techniques that begin with a model of speech production and then try to separate the vocal tract response from an excitation signal. Even within each of these approaches, there are multiple techniques, including cepstrum filtering, which falls under the class of homomorphic signal processing, and FFT-based predictive approaches. The underlying reality is that multiple techniques have attacked the problem of speech recognition, but the problem is still far from solved. The techniques shown to have the best recognition rates involve cepstrum coefficients for the feature extraction and hidden Markov models for the pattern recognition. The presented research develops an analog system based on programmable analog array technology that can perform the initial stages of auditory feature extraction and recognition before passing information to a digital signal processor, the goal being a low-power system that can be fully contained on one or more integrated circuit chips. Results show that it is possible to realize advanced filtering techniques such as cepstrum filtering and vector quantization in analog circuitry.
Prior to this work, previous applications of analog signal processing have focused on vision, cochlea models, anti-aliasing filters and other single component uses. Furthermore, classic designs have looked heavily at utilizing op-amps as a basic core building block for these designs. This research also shows a novel design for a Hidden Markov Model (HMM) decoder utilizing circuits that take advantage of the inherent properties of subthreshold transistors and floating-gate technology to create low-power computational blocks.
APA, Harvard, Vancouver, ISO, and other styles
5

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods for constructing the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34755.

Full text
Abstract:
Dissertation for the degree of Doctor of Technical Sciences in the specialty 05.13.05 – Computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Ministry of Education and Science of Ukraine, Kharkiv, 2018. The thesis is devoted to solving the problem of increasing the efficiency of building and using neuromorphic computer systems (NCS) by developing models for constructing their components and overall architecture, as well as methods for training them based on a formalized detector principle. The analysis and classification of NCS architectures and components established that the connectionist paradigm for constructing artificial neural networks underlies all of their neural network implementations. The detector principle of constructing the NCS architecture and its components, an alternative to the connectionist paradigm, was substantiated and formalized. This principle is based on the established property of binding between the elements of the input signal vector and the corresponding weighting coefficients of an NCS neuroelement. On the basis of the detector principle, multi-segment threshold information models were developed for the components of the detector NCS (DNCS): block-detectors, block-analyzers, and a novelty block, in which the developed method of counter training forms concepts that determine the necessary and sufficient conditions for the formation of their reactions. The method of counter training of a DNCS reduces its training time in practical image recognition problems to one epoch and reduces the dimension of the training sample. In addition, this method solves the stability-plasticity problem of DNCS memory and the problem of its overfitting through self-organization of a map of block-detectors at the secondary level of information processing under the control of the novelty block. 
As a result of the research, a model of the DNCS network architecture was developed, consisting of two layers of neuromorphic components at the primary and secondary levels of information processing, which reduces the number of components required by the system. To substantiate the increased efficiency of constructing and using an NCS based on the detector principle, DNCS software models were developed for automated monitoring and analysis of the external electromagnetic environment, as well as for recognition of handwritten digits from the MNIST database. The results of the study of these systems confirmed the correctness of the theoretical provisions of the dissertation and the high efficiency of the developed models and methods.
APA, Harvard, Vancouver, ISO, and other styles
6

Ramakrishnan, Shubha. "A system design approach to neuromorphic classifiers." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/51718.

Full text
Abstract:
This work considers alternative strategies to mainstream digital approaches to signal processing - namely analog and neuromorphic solutions, for increased computing efficiency. In the context of a speech recognizer application, we use low-power analog approaches for the signal conditioning and basic auditory feature extraction, while using a neuromorphic IC for building a dendritic classifier that can be used as a low-power word spotter. In doing so, this work also aspires to posit the significance of dendrites in neural computation.
APA, Harvard, Vancouver, ISO, and other styles
7

Паржин, Юрій Володимирович. "Моделі і методи побудови архітектури і компонентів детекторних нейроморфних комп'ютерних систем" [Models and methods for constructing the architecture and components of detector neuromorphic computer systems]. Thesis, НТУ "ХПІ", 2018. http://repository.kpi.kharkov.ua/handle/KhPI-Press/34756.

Full text
Abstract:
Dissertation for the degree of Doctor of Technical Sciences in the specialty 05.13.05 – Computer systems and components. – National Technical University "Kharkiv Polytechnic Institute", Ministry of Education and Science of Ukraine, Kharkiv, 2018. The thesis is devoted to solving the problem of increasing the efficiency of building and using neuromorphic computer systems (NCS) by developing models for constructing their components and overall architecture, as well as methods for training them based on a formalized detector principle. The analysis and classification of NCS architectures and components established that the connectionist paradigm for constructing artificial neural networks underlies all of their neural network implementations. The detector principle of constructing the NCS architecture and its components, an alternative to the connectionist paradigm, was substantiated and formalized. This principle is based on the established property of binding between the elements of the input signal vector and the corresponding weighting coefficients of an NCS neuroelement. On the basis of the detector principle, multi-segment threshold information models were developed for the components of the detector NCS (DNCS): block-detectors, block-analyzers, and a novelty block, in which the developed method of counter training forms concepts that determine the necessary and sufficient conditions for the formation of their reactions. The method of counter training of a DNCS reduces its training time in practical image recognition problems to one epoch and reduces the dimension of the training sample. In addition, this method solves the stability-plasticity problem of DNCS memory and the problem of its overfitting through self-organization of a map of block-detectors at the secondary level of information processing under the control of the novelty block. 
As a result of the research, a model of the DNCS network architecture was developed, consisting of two layers of neuromorphic components at the primary and secondary levels of information processing, which reduces the number of components required by the system. To substantiate the increased efficiency of constructing and using an NCS based on the detector principle, DNCS software models were developed for automated monitoring and analysis of the external electromagnetic environment, as well as for recognition of handwritten digits from the MNIST database. The results of the study of these systems confirmed the correctness of the theoretical provisions of the dissertation and the high efficiency of the developed models and methods.
APA, Harvard, Vancouver, ISO, and other styles
8

Tully, Philip. "Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits." Doctoral thesis, KTH, Beräkningsvetenskap och beräkningsteknik (CST), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-205568.

Full text
Abstract:
Cortical and subcortical microcircuits are continuously modified throughout life. Despite ongoing changes these networks stubbornly maintain their functions, which persist although destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them towards runaway excitation or quiescence. What dynamical phenomena exist to act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory operations to occur despite such massive and constant neural reorganization? Progress towards answering many of these questions can be pursued through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning and intrinsic graded persistent firing levels. The thesis seeks to demonstrate how multiple interacting plasticity mechanisms can coordinate reinforcement, auto- and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
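The BCPNN rule summarized in this abstract maps running probability estimates onto a synaptic weight w = log(P_ij / (P_i P_j)) and an intrinsic bias b = log(P_j). A minimal rate-based illustration follows; the time constant, epsilon, and discrete-time traces are illustrative choices, not the thesis's spiking implementation:

```python
import math

def bcpnn_update(pi, pj, pij, xi, xj, tau=100.0, eps=1e-4):
    """One discrete step of a rate-based BCPNN trace update.

    pi, pj : running estimates of pre-/postsynaptic activation probabilities
    pij    : running estimate of their co-activation probability
    xi, xj : current pre-/postsynaptic activities in [0, 1]
    """
    pi += (xi - pi) / tau
    pj += (xj - pj) / tau
    pij += (xi * xj - pij) / tau
    # Weight and intrinsic bias follow from the probability estimates:
    # w = log(P_ij / (P_i * P_j)),  b = log(P_j).  eps avoids log(0).
    w = math.log((pij + eps ** 2) / ((pi + eps) * (pj + eps)))
    b = math.log(pj + eps)
    return pi, pj, pij, w, b

# Persistent co-activation keeps the weight near log(1) = 0 or above...
pi = pj = pij = 0.01
for _ in range(2000):
    pi, pj, pij, w_corr, _ = bcpnn_update(pi, pj, pij, 1.0, 1.0)

# ...while strictly alternating (anti-correlated) activity drives it
# strongly negative, since the two units are never active together.
pi = pj = pij = 0.01
for step in range(2000):
    xi, xj = float(step % 2 == 0), float(step % 2 == 1)
    pi, pj, pij, w_anti, _ = bcpnn_update(pi, pj, pij, xi, xj)

print(f"w_corr={w_corr:.4f}  w_anti={w_anti:.2f}")
```

The same probability traces also yield the intrinsic bias term, which is what lets BCPNN modify neuronal excitability alongside synaptic strength.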
APA, Harvard, Vancouver, ISO, and other styles
9

Brink, Stephen Isaac. "Learning in silicon: a floating-gate based, biophysically inspired, neuromorphic hardware system with synaptic plasticity." Diss., Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/50143.

Full text
Abstract:
The goal of neuromorphic engineering is to create electronic systems that model the behavior of biological neural systems. Neuromorphic systems can leverage a combination of analog and digital circuit design techniques to enable computational modeling, with orders of magnitude of reduction in size, weight, and power consumption compared to the traditional modeling approach based upon numerical integration. These benefits of neuromorphic modeling have the potential to facilitate neural modeling in resource-constrained research environments. Moreover, they will make it practical to use neural computation in the design of intelligent machines, including portable, battery-powered, and energy harvesting applications. Floating-gate transistor technology is a powerful tool for neuromorphic engineering because it allows dense implementation of synapses with nonvolatile storage of synaptic weights, cancellation of process mismatch, and reconfigurable system design. A novel neuromorphic hardware system, featuring compact and efficient channel-based model neurons and floating-gate transistor synapses, was developed. This system was used to model a variety of network topologies with up to 100 neurons. The networks were shown to possess computational capabilities such as spatio-temporal pattern generation and recognition, winner-take-all competition, bistable activity implementing a "volatile memory", and wavefront-based robotic path planning. Some canonical features of synaptic plasticity, such as potentiation of high frequency inputs and potentiation of correlated inputs in the presence of uncorrelated noise, were demonstrated. Preliminary results regarding formation of receptive fields were obtained. Several advances in enabling technologies, including methods for floating-gate transistor array programming, and the creation of a reconfigurable system for studying adaptation in floating-gate transistor circuits, were made.
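Winner-take-all competition of the kind demonstrated on this hardware can be illustrated with a toy discrete-time rate model; the inhibition strength and update rule below are illustrative assumptions, not the silicon implementation:

```python
def winner_take_all(drive, inhibition=0.9, steps=50):
    """Discrete-time WTA network: each unit is excited by its own input drive
    and inhibited by the summed activity of the other units; after a few
    iterations only the most strongly driven unit remains active."""
    a = list(drive)
    for _ in range(steps):
        total = sum(a)
        # Rectified dynamics: self-drive plus own activity, minus lateral
        # inhibition proportional to everyone else's activity.
        a = [max(drive[i] + a[i] - inhibition * (total - a[i]), 0.0)
             for i in range(len(a))]
    return a

# Unit 1 has the largest drive, so it should suppress the others.
act = winner_take_all([1.0, 1.2, 0.8])
print(act)
```

Note the surviving unit's activity keeps growing here; a silicon WTA would saturate it, but the selection behaviour is the point of the sketch.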
APA, Harvard, Vancouver, ISO, and other styles
10

Bernard, Yann. "Calcul neuromorphique pour l'exploration et la catégorisation robuste d'environnement visuel et multimodal dans les systèmes embarqués." Electronic Thesis or Diss., Université de Lorraine, 2021. http://www.theses.fr/2021LORR0295.

Full text
Abstract:
As the quest for ever more powerful computing systems faces ever-increasing material constraints, major advances in computing efficiency are expected to benefit from unconventional approaches and new computing models such as brain-inspired computing. The brain is a massively parallel computing architecture with dense interconnections between computing units. Neurobiological systems are therefore a natural source of inspiration for computer science and engineering.
Rapid technological improvements in computing media have recently reinforced this trend through two complementary but seemingly contradictory consequences: on the one hand, by providing enormous computing power, they have made it possible to simulate very large neural structures such as deep networks, and on the other hand, by reaching their technological and conceptual limits, they have motivated the emergence of alternative computing paradigms based on bio-inspired concepts. Among these, the principles of unsupervised learning are receiving increasing attention. We focus here on two main families of neural models, self-organizing maps and dynamic neural fields. Inspired by the modeling of the self-organization of cortical columns, self-organizing maps have shown their ability to represent a complex stimulus in a simplified and interpretable form, thanks to excellent performance in vector quantization and to the preservation of the topological proximity relationships present in the input space. More inspired by competition mechanisms in cortical macro-columns, dynamic neural fields allow the emergence of simple cognitive behaviours and find more and more applications in the field of autonomous robotics. In this context, the first objective of this thesis is to combine self-organizing maps (SOMs) and dynamic neural fields (DNFs) for the exploration and categorisation of real environments perceived through visual sensors of different natures. The second objective is to prepare the porting of this neuromorphic computation onto a digital hardware substrate. These two objectives aim to define a hardware computing device that can be coupled to different sensors in order to allow an autonomous system to construct its own representation of the perceptual environment in which it operates. Therefore, we proposed and evaluated a novelty detection model based on self-organising maps. Hardware considerations then led us to significant algorithmic optimisations of SOM operations. Finally, we complemented the model with dynamic neural fields to increase the level of abstraction with an attentional target-tracking mechanism.
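The SOM-based novelty detection idea can be sketched in a few lines: train a map on "familiar" inputs, then flag any input whose quantization error exceeds a threshold. All parameters below (map size, threshold, toy data) are illustrative assumptions, not the thesis's model:

```python
import math
import random

def train_som(data, n_units=16, epochs=30, lr0=0.5, sigma0=4.0):
    """Train a toy 1-D self-organizing map with a Gaussian neighborhood."""
    dim = len(data[0])
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)             # decaying learning rate
        sigma = max(sigma0 * (1.0 - epoch / epochs), 0.5)
        for x in data:
            # Best-matching unit (BMU): the prototype closest to the input.
            bmu = min(range(n_units),
                      key=lambda i: sum((u - v) ** 2 for u, v in zip(units[i], x)))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                units[i] = [u + lr * h * (v - u) for u, v in zip(units[i], x)]
    return units

def is_novel(units, x, threshold=0.25):
    """Flag an input as novel when its quantization error exceeds the threshold."""
    err = min(math.sqrt(sum((u - v) ** 2 for u, v in zip(unit, x)))
              for unit in units)
    return err > threshold

random.seed(0)
# "Familiar" inputs cluster in one corner of the unit square.
familiar = [[random.uniform(0.0, 0.3), random.uniform(0.0, 0.3)] for _ in range(200)]
som = train_som(familiar)
print(is_novel(som, [0.15, 0.15]), is_novel(som, [0.9, 0.9]))
```

An input from the familiar region lands near a trained prototype, while an input far from the training distribution produces a large quantization error and is flagged.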
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Neuromorphic computer systems"

1

Smith, Leslie S., Alister Hamilton, and European Workshop on Neuromorphic Systems (1st: 1997: University of Stirling), eds. Neuromorphic Systems: Engineering Silicon from Neurobiology. World Scientific, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lande, Tor Sverre, ed. Neuromorphic Systems Engineering: Neural Networks in Silicon. Kluwer Academic, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Landolt, Oliver. Place Coding in Analog VLSI: A Neuromorphic Approach to Computation. Springer US, 1998.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Shih-Chii, Giacomo Indiveri, Rodney Douglas, Tobi Delbruck, and Adrian Whatley. Event-Based Neuromorphic Systems. Wiley & Sons, Incorporated, John, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Liu, Shih-Chii, Giacomo Indiveri, Rodney Douglas, Tobi Delbruck, and Adrian Whatley. Event-Based Neuromorphic Systems. Wiley & Sons, Incorporated, John, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Shih-Chii, Giacomo Indiveri, Rodney Douglas, Tobi Delbruck, and Adrian Whatley. Event-Based Neuromorphic Systems. Wiley & Sons, Incorporated, John, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Liu, Shih-Chii, Giacomo Indiveri, Rodney Douglas, Tobi Delbruck, and Adrian Whatley. Event-Based Neuromorphic Systems. Wiley & Sons, Limited, John, 2014.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Liu, Shih-Chii, Giacomo Indiveri, Rodney Douglas, Tobi Delbruck, and Adrian Whatley. Event-Based Neuromorphic Systems. Wiley & Sons, Incorporated, John, 2015.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lande, Tor Sverre. Neuromorphic Systems Engineering: Neural Networks In Silicon. Springer, 2013.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Liu, S. C. Neuromorphic and Bio-Inspired Engineered Systems. John Wiley and Sons Ltd, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Neuromorphic computer systems"

1

Haider, Muhammad Hamis, Hao Zhang, S. Deivalaskhmi, G. Lakshmi Narayanan, and Seok-Bum Ko. "Is Neuromorphic Computing the Key to Power-Efficient Neural Networks: A Survey." In Design and Applications of Emerging Computer Systems. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-42478-6_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Carstens, Niko, Maik-Ivo Terasa, Pia Holtz, et al. "Memristive Switching: From Individual Nanoparticles Towards Complex Nanoparticle Networks." In Springer Series on Bio- and Neurosystems. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-36705-2_9.

Full text
Abstract:
Novel hardware concepts in the framework of neuromorphic engineering are intended to overcome fundamental limits of current computer technologies and to enable efficient mass data processing. To reach this goal, research into material systems that enable the implementation of memristive switching in electronic devices, as well as into analytical approaches that help to understand the fundamental mechanisms and dynamics of memristive switching, is indispensable. In this chapter, memristive switching based on Ag metal filament formation is discussed across different scales, providing insights into the stability of metal filaments and the onset of collective behaviour. An unconventional cAFM approach is presented, which integrates the memristive system directly on the apex of the cantilever instead of the usual contacting approach. This facilitates nanoscale probing of filamentary memristive switching dynamics on long time scales for the purpose of basic research, demonstrated on an archetypical electrochemical metallization (ECM) based system consisting of Ag/Si3N4/Au. Further, the application of AgAu and AgPt noble metal alloy nanoparticles (NPs) for memristive devices is discussed with special focus on device scalability. For the smallest scale, it is shown that a single AgPt-NP encapsulated in SiO2 operates via stable diffusive switching. Finally, two concepts for the self-assembled fabrication of NP-based memristive switch networks are evaluated with respect to their collective switching dynamics: a sub-percolated CNT network decorated with AgAu-NPs and a Ag-NP network poised at the percolation threshold.
The hybrid CNT/AgAu-NP networks exhibit a mixed form of diffusive and bipolar switching, which is very interesting for tailoring the retention time, while the dynamics of percolated Ag-NP networks are governed by ongoing transitions between a multitude of metastable states, making them interesting for reservoir computing and other neuromorphic computation schemes.
APA, Harvard, Vancouver, ISO, and other styles
3

Carboni, Roberto. "Characterization and Modeling of Spin-Transfer Torque (STT) Magnetic Memory for Computing Applications." In Special Topics in Information Technology. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-62476-7_5.

Full text
Abstract:
With the ubiquitous diffusion of mobile computing and the Internet of Things (IoT), the amount of data exchanged and processed over the internet is increasing every day, demanding secure data communication/storage and new computing primitives. Although computing systems based on microelectronics have steadily improved over the past 50 years thanks to aggressive technological scaling, their improvement is now hindered by excessive power consumption and the inherent performance limitations associated with the conventional computer architecture (the von Neumann bottleneck). In this scenario, emerging memory technologies are gaining interest thanks to their non-volatility and low-power/fast operation. In this chapter, experimental characterization and modeling of spin-transfer torque magnetic memory (STT-MRAM) are presented, with particular focus on cycling endurance and switching variability, both of which present a challenge for STT-based memory applications. The switching variability in STT-MRAM is then exploited for hardware security and computing primitives, such as a true random number generator (TRNG) and a stochastic spiking neuron for neuromorphic and stochastic computing.
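The TRNG idea can be illustrated with a toy simulation: tune the write pulse so the stochastic switching probability sits near 0.5, then debias the raw bitstream. The Neel-Brown-style switching model and every constant below are illustrative assumptions, not device-fitted values from the chapter:

```python
import math
import random

def switch_probability(pulse_v, v_c=0.5, tau0=1e-9, t_pulse=1e-9, delta=40.0):
    """Thermally activated STT switching probability (simplified Neel-Brown form):
    P = 1 - exp(-(t_pulse / tau0) * exp(-delta * (1 - V / V_c))).
    All constants are illustrative, not fitted to a real device."""
    rate = (t_pulse / tau0) * math.exp(-delta * (1.0 - pulse_v / v_c))
    return 1.0 - math.exp(-rate)

def von_neumann_debias(bits):
    """Pairwise debiasing removes residual bias left by imperfect P = 0.5 tuning:
    01 -> 0, 10 -> 1, 00/11 -> discarded."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# Bisection: find the pulse amplitude that tunes the switching probability
# to 0.5 (switch_probability is monotonically increasing in pulse voltage).
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if switch_probability(mid) < 0.5:
        lo = mid
    else:
        hi = mid
v_half = 0.5 * (lo + hi)

rng = random.Random(42)
# Each "write" either switches the cell (1) or not (0), with P ~ 0.5.
raw = [1 if rng.random() < switch_probability(v_half) else 0 for _ in range(10000)]
bits = von_neumann_debias(raw)
print(f"raw bias {sum(raw)/len(raw):.3f}, debiased bias {sum(bits)/len(bits):.3f}")
```

On real hardware the bisection step corresponds to trimming the pulse amplitude or duration per cell, since device-to-device variability shifts the P = 0.5 operating point.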
APA, Harvard, Vancouver, ISO, and other styles
4

Lewis, Rory, Michael Bihn, and Katrina Nesterenko. "Rough Sets for a Neuromorphic CMOS System." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-62700-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Fatima, Afroz, and Abhijit Pethe. "Embedded system with in-memory compute neuromorphic accelerator for multiple applications." In Smart Embedded Systems. CRC Press, 2023. http://dx.doi.org/10.1201/9781032628059-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Liu, Ziyong, Xiaoxin Wang, Guiyao Xiang, Zhiyong Wang, Yitian Shao, and Honghai Liu. "A Neuromorphic Tactile Perception System Based on Spiking Neural Network for Texture Recognition." In Lecture Notes in Computer Science. Springer Nature Singapore, 2025. https://doi.org/10.1007/978-981-96-0789-1_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Huang, Lei, Pan Lv, Xin Du, Ouwen Jin, and Shuiguang Deng. "A Hierarchical Neural Task Scheduling Algorithm in the Operating System of Neuromorphic Computers." In Knowledge Science, Engineering and Management. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-5501-1_11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Malavena, Gerardo. "Modeling of GIDL–Assisted Erase in 3–D NAND Flash Memory Arrays and Its Employment in NOR Flash–Based Spiking Neural Networks." In Special Topics in Information Technology. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-85918-3_4.

Full text
Abstract:
Since the very first introduction of three-dimensional (3-D) vertical-channel (VC) NAND Flash memory arrays, gate-induced drain leakage (GIDL) current has been suggested as a solution to increase the string channel potential to trigger the erase operation. Thanks to that erase scheme, the memory array can be built directly on top of an n⁺ plate, without requiring any p-doped region to contact the string channel, thereby simplifying the manufacturing process and increasing the array integration density. For those reasons, understanding the physical phenomena occurring in the string when GIDL is triggered is important for the proper design of the cell structure and of the voltage waveforms adopted during erase. Even though a detailed comprehension of the GIDL phenomenology can be achieved by means of technology computer-aided design (TCAD) simulations, they are usually time and resource consuming, especially when realistic string structures with many word-lines (WLs) are considered. In this chapter, an analysis of GIDL-assisted erase in 3-D VC NAND memory arrays is presented. First, the evolution of the string potential and GIDL current during erase is investigated by means of TCAD simulations; then, a compact model able to reproduce both the string dynamics and the threshold voltage transients with reduced computational effort is presented. The developed compact model is proven to be a valuable tool for the optimization of array performance during GIDL-assisted erase. Then, the idea of taking advantage of GIDL for the erase operation is exported to the context of spiking neural networks (SNNs) based on NOR Flash memory arrays, which require operational schemes that allow single-cell selectivity during both cell program and cell erase.
To overcome the block erase typical of NOR Flash memory arrays based on Fowler-Nordheim tunneling, a new erase scheme is presented that triggers GIDL in the NOR Flash cell and exploits hot-hole injection (HHI) at its drain side to accomplish the erase operation. Using that scheme, spike-timing-dependent plasticity (STDP) is implemented in a mainstream NOR Flash array, and array learning is successfully demonstrated in a prototype SNN. The achieved results represent an important step towards the development of large-scale neuromorphic systems based on mature and reliable memory technologies.
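The pair-based STDP rule that such a Flash array implements can be sketched abstractly; the exponential window parameters and weight clipping below are generic textbook choices, not the chapter's device characteristics:

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window: pre-before-post (delta_t > 0) potentiates,
    post-before-pre (delta_t < 0) depresses. delta_t = t_post - t_pre, in ms."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate the weight change over all pre/post spike pairs, then clip;
    the clipping stands in for the bounded conductance window of a memory cell."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    return min(max(w, w_min), w_max)

# Causal pairing (each pre spike leads its post spike by 5 ms) strengthens
# the synapse; anti-causal pairing weakens it.
w_pot = apply_stdp(0.5, [0.0, 50.0, 100.0], [5.0, 55.0, 105.0])
w_dep = apply_stdp(0.5, [5.0, 55.0, 105.0], [0.0, 50.0, 100.0])
print(f"potentiated: {w_pot:.3f}, depressed: {w_dep:.3f}")
```

In the chapter's scheme, the potentiation branch maps to cell programming and the depression branch to the selective GIDL/HHI erase, which is what makes single-cell STDP possible on a NOR array.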
APA, Harvard, Vancouver, ISO, and other styles
9

Dey, Ajoy, Chetan Kadway, and Sounak Dey. "Towards On-Device Learning and Personalization: A Case of In-Car Driver Drowsiness Detection System Using Neuromorphic Computing." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-87660-8_22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Antsiperov, Viacheslav. "Neuromorphic Image Coding Based on a Sample of Counts Partition by a System of Receptive Fields." In Pattern Recognition, Computer Vision, and Image Processing. ICPR 2022 International Workshops and Challenges. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-37742-6_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Neuromorphic computer systems"

1

Weng, Yijie, Yu Qi, Yueming Wang, and Gang Pan. "Neuromorphic model-based neural decoders for brain-computer interfaces: a comparative study." In 2024 IEEE Biomedical Circuits and Systems Conference (BioCAS). IEEE, 2024. https://doi.org/10.1109/biocas61083.2024.10798332.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Mena Morales, Raphael, Pablo Miralles, Diviya Devani, et al. "Brainsat: Hardware Development of a Neuromorphic On-Board Computer Applied to Methane Detection from Low Earth Orbit." In IAF Space Systems Symposium, Held at the 75th International Astronautical Congress (IAC 2024). International Astronautical Federation (IAF), 2024. https://doi.org/10.52202/078372-0121.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Matinizadeh, Shadi, Arghavan Mohammadhassani, M. L. Varshika, Sarah Johari, Nagarajan Kandasamy, and Anup Das. "QUANTISENC++: A Fully-Configurable Many-Core Neuromorphic Hardware." In 2024 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024. https://doi.org/10.1109/ieeeconf60004.2024.10942719.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Andrei, Vlad C., Alexandru P. Drǎgutoiu, Gabriel Béna, et al. "Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware." In 2024 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024. https://doi.org/10.1109/ieeeconf60004.2024.10942794.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zendrikov, Dmitrii, Alessio Franci, and Giacomo Indiveri. "Waves and Symbols in Neuromorphic Hardware: From Analog Signal Processing to Digital Computing on the Same Computational Substrate." In 2024 58th Asilomar Conference on Signals, Systems, and Computers. IEEE, 2024. https://doi.org/10.1109/ieeeconf60004.2024.10943060.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Pei, Mengjiao, Ying Zhu, Siyao Liu, et al. "Ferroelectric Memcapacitor-based Reservoir Computing for High-efficiency Human-Computer Interface." In Neuromorphic Materials, Devices, Circuits and Systems. FUNDACIO DE LA COMUNITAT VALENCIANA SCITO, 2023. http://dx.doi.org/10.29363/nanoge.neumatdecas.2023.025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Yousefzadeh, Amirreza, Gert-Jan van Schaik, Mohammad Tahghighi, et al. "SENeCA: Scalable Energy-efficient Neuromorphic Computer Architecture." In 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS). IEEE, 2022. http://dx.doi.org/10.1109/aicas54282.2022.9870025.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Song, Chang, Beiye Liu, Chenchen Liu, Hai Li, and Yiran Chen. "Design techniques of eNVM-enabled neuromorphic computing systems." In 2016 IEEE 34th International Conference on Computer Design (ICCD). IEEE, 2016. http://dx.doi.org/10.1109/iccd.2016.7753356.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Rajasekharan, Dinesh, Amit Ranjan Trivedi, and Yogesh Singh Chauhan. "Neuromorphic Circuits on FDSOI Technology for Computer Vision Applications." In 2019 32nd International Conference on VLSI Design and 2019 18th International Conference on Embedded Systems (VLSID). IEEE, 2019. http://dx.doi.org/10.1109/vlsid.2019.00108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Kutyniok, Gitta. "Reliable AI: From Legal Requirements to Neuromorphic Computing." In The 10th World Congress on Electrical Engineering and Computer Systems and Science. Avestia Publishing, 2024. http://dx.doi.org/10.11159/cist24.002.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Neuromorphic computer systems"

1

Gall, W. E. Brain-Based Devices for Neuromorphic Computer Systems. Defense Technical Information Center, 2013. http://dx.doi.org/10.21236/ada587348.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Full text
Abstract:
Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capabilities of deep learning architectures. Additionally, it explores biologically plausible learning mechanisms such as Hebbian learning, spike-timing-dependent plasticity (STDP), and neuromodulation, drawing insights from neuroscience and cognitive computing. The role of spiking neural networks (SNNs) and neuromorphic computing in low-power AI and real-time decision-making is also analyzed, with applications in robotics, brain-computer interfaces, edge AI, and cognitive computing. Case studies highlight the industrial adoption of biologically inspired AI, focusing on adaptive neural controllers, neuromorphic vision, and memory-based architectures. This research underscores the importance of integrating theoretical learning principles with biologically motivated AI models to develop more interpretable, generalizable, and scalable intelligent systems. Keywords: neural computation, learning theory, expressivity, deep learning, recurrent neural networks, spiking neural networks, biologically inspired AI, infinite-width networks, kernel methods, attractor networks, synaptic plasticity, STDP, neuromodulation, cognitive computing, dynamical systems, function approximation, generalization, AI stability, neuromorphic computing, robotics, brain-computer interfaces, edge AI, biologically plausible learning.
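As a small illustration of the biologically plausible Hebbian learning this report surveys, Oja's rule (a stabilized Hebbian variant; all parameters below are illustrative) extracts the dominant correlation direction of its input stream:

```python
import math
import random

def oja_step(w, x, lr=0.05):
    """Oja's rule: Hebbian growth (lr * y * x) plus a decay term (lr * y^2 * w)
    that keeps |w| bounded; w converges toward the first principal direction
    of the input distribution."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(1)
w = [0.5, 0.5]
for _ in range(3000):
    s = random.gauss(0.0, 1.0)
    # Inputs vary mostly along the direction (1, 0.2), plus isotropic noise;
    # that dominant direction is what Hebbian correlation should extract.
    x = [s + random.gauss(0.0, 0.1), 0.2 * s + random.gauss(0.0, 0.1)]
    w = oja_step(w, x)

norm = math.sqrt(sum(wi * wi for wi in w))
print(f"|w| = {norm:.3f}, direction ratio = {w[0] / w[1]:.2f}")
```

Unlike plain Hebbian updates, which grow without bound, the decay term normalizes the weight vector, which is one concrete sense in which "stability" enters the learning-theoretic discussion above.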
APA, Harvard, Vancouver, ISO, and other styles