A selection of scholarly literature on the topic "Artificials neurons"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles

Select a source type:

Consult the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Artificials neurons".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication in .pdf format and read its abstract online, provided the corresponding parameters are present in the work's metadata.

The tag cloud lets you discover even more related research topics, and the buttons after each section of the page let you browse extended lists of books, articles, and other sources on the chosen topic.

Journal articles on the topic "Artificials neurons":

1

Herrmann, Christoph S., and Andreas Klaus. "Autapse Turns Neuron into Oscillator." International Journal of Bifurcation and Chaos 14, no. 02 (February 2004): 623–33. http://dx.doi.org/10.1142/s0218127404009338.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Recently, neurobiologists have discovered axons on neurons which synapse on the same neuron's dendrites — so-called autapses. It is not yet clear what functional significance autapses offer for neural behavior. This is an ideal case for using a physical simulation to investigate how an autapse alters the firing of a neuron. We simulated a neural basket cell via the Hodgkin–Huxley equations and implemented an autapse which feeds back onto the soma of the neuron. The behavior of the cell was compared with and without autaptic feedback. Our artificial autapse neuron (AAN) displays oscillatory behavior which is not observed for the same model neuron without autapse. The neuron oscillates between two functional states: one where it fires at high frequency and another where firing is suppressed. This behavior is called "spike bursting" and represents a common pattern recorded from cerebral neurons.
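The autaptic feedback described above can be illustrated with a far simpler neuron model. The toy sketch below uses a leaky integrate-and-fire neuron with a delayed excitatory self-connection; it is only a loose stand-in for the paper's Hodgkin–Huxley basket cell, and every parameter value is invented for the demonstration.

```python
# Toy leaky integrate-and-fire (LIF) neuron with a delayed excitatory
# autapse. Illustrative only: the paper simulates a Hodgkin-Huxley
# basket cell; parameters here are arbitrary.

def simulate_lif(steps=1000, dt=1.0, tau=10.0, i_ext=1.2,
                 v_th=1.0, autapse_w=0.3, autapse_delay=5):
    """Return spike times; set autapse_w=0 to disable self-feedback."""
    v = 0.0
    spikes = []
    feedback = [0.0] * steps  # delayed autaptic current, indexed by step
    for t in range(steps):
        v += dt / tau * (i_ext + feedback[t] - v)  # leaky integration
        if v >= v_th:                              # threshold -> spike
            spikes.append(t)
            v = 0.0                                # reset
            arrival = t + autapse_delay
            if arrival < steps:                    # spike feeds back onto soma
                feedback[arrival] += autapse_w
    return spikes

without = simulate_lif(autapse_w=0.0)
with_autapse = simulate_lif(autapse_w=0.3)
print(len(without), len(with_autapse))
```

Comparing the two spike counts shows that even this crude self-feedback changes the firing pattern, the qualitative point the paper makes with a much richer model.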
2

Sharp, A. A., L. F. Abbott, and E. Marder. "Artificial electrical synapses in oscillatory networks." Journal of Neurophysiology 67, no. 6 (June 1992): 1691–94. http://dx.doi.org/10.1152/jn.1992.67.6.1691.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
1. We use an electronic circuit to artificially electrically couple neurons. 2. Strengthening the coupling between an oscillating neuron and a hyperpolarized, passive neuron can either increase or decrease the frequency of the oscillator depending on the properties of the oscillator. 3. The result of electrically coupling two neuronal oscillators depends on the membrane potentials, intrinsic properties of the neurons, and the coupling strength. 4. The interplay between chemical inhibitory synapses and electrical synapses can be studied by creating both chemical and electrical synapses between two cultured neurons and by artificially strengthening the electrical synapse between the ventricular dilator and one pyloric dilator neuron of the stomatogastric ganglion.
3

Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (November 2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial neural networks (ANN) can approximate signals and give interesting results in pattern recognition, and some works use neural networks for control applications. However, biological neurons do not generate signals similar to those obtained by ANN. Spiking neurons are an interesting topic since they simulate the real behavior exhibited by biological neurons. This paper employs a spiking neuron to compute a PID control, which is then applied to the Van de Vusse reaction. This reaction, like the inverted pendulum, is a benchmark for working with systems whose inverse response causes the output to undershoot. One problem is how to encode information so that the neuron can interpret it, and how to decode the peaks the neuron generates in order to interpret its behavior. In this work, a spiking neuron computes a PID control by coding in time the peaks it generates. The neuron has the PID gains as its synaptic weights, and the peak observed in the axon is the coded control signal. The neuron adaptation tries to obtain the weights needed to generate the peak at the instant necessary to control the chemical reaction. The simulation results show the possibility of using this kind of neuron for control tasks, and of using a spiking neural network to overcome the undershoot caused by the inverse response of the chemical reaction.
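The control law the spiking neuron encodes is an ordinary PID. The sketch below implements the discrete PID law on a toy first-order plant rather than the Van de Vusse reaction; the gains and plant constants are hypothetical and chosen only so the loop settles.

```python
# Discrete PID loop on a toy first-order plant. The paper's spiking
# neuron carries the PID gains as synaptic weights; here the gains
# appear directly. All numbers are illustrative, not from the paper.

def run_pid(kp=1.0, ki=0.2, kd=0.0, setpoint=1.0, steps=300):
    y = 0.0              # plant output
    integral = 0.0
    prev_e = setpoint - y
    for _ in range(steps):
        e = setpoint - y
        integral += e
        u = kp * e + ki * integral + kd * (e - prev_e)  # PID law
        prev_e = e
        y = 0.9 * y + 0.1 * u   # toy first-order plant, NOT Van de Vusse
    return y

final = run_pid()
print(final)  # settles near the setpoint
```

With these gains the closed loop is stable (the dominant eigenvalue has magnitude below 1), so the output converges to the setpoint; the integral term removes the steady-state error.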
4

Torres-Treviño, Luis M., Angel Rodríguez-Liñán, Luis González-Estrada, and Gustavo González-Sanmiguel. "Single Gaussian Chaotic Neuron: Numerical Study and Implementation in an Embedded System." Discrete Dynamics in Nature and Society 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/318758.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial Gaussian neurons are very common structures in artificial neural networks such as radial basis function networks. These artificial neurons use a Gaussian activation function with two parameters, the center of mass (cm) and the sensibility factor (λ). Changes to these parameters determine the behavior of the neuron. When the neuron's output is fed back to its input, complex chaotic behavior is displayed. This paper presents a study and implementation of this particular neuron. Stability of fixed points, bifurcation diagrams, and Lyapunov exponents help to determine the dynamical nature of the neuron, and its implementation on an embedded system illustrates preliminary results toward embedded chaos computation.
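With output feedback, the neuron reduces to a one-dimensional iterated map that is easy to experiment with. A minimal sketch follows, with arbitrary illustrative values of cm and λ; the paper locates chaotic regimes via bifurcation diagrams and Lyapunov exponents, and no claim is made that these particular values are chaotic.

```python
import math

# Single Gaussian neuron with its output fed back as the next input:
#   x_{n+1} = exp(-lam * (x_n - cm)**2)
# cm is the "center of mass" and lam the sensibility factor from the
# abstract; the values below are arbitrary illustrations.

def gaussian_neuron_orbit(x0=0.1, cm=0.5, lam=8.0, n=200):
    orbit = [x0]
    for _ in range(n):
        orbit.append(math.exp(-lam * (orbit[-1] - cm) ** 2))
    return orbit

orbit = gaussian_neuron_orbit()
print(orbit[-5:])  # tail of the trajectory
```

Because the Gaussian maps every real input into (0, 1], the orbit stays bounded regardless of parameters, which is what makes the map convenient for bifurcation-diagram sweeps over cm and λ.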
5

Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem.
6

Takahata, M., and M. Hisada. "Local nonspiking interneurons involved in gating of the descending motor pathway in crayfish." Journal of Neurophysiology 56, no. 3 (September 1986): 718–31. http://dx.doi.org/10.1152/jn.1986.56.3.718.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Uropod motor neurons in the terminal abdominal ganglion of crayfish are continuously excited during the abdominal posture movement so that subthreshold excitatory postsynaptic potentials from the descending statocyst pathway can elicit spike activity in the motor neurons only while the abdominal posture system is in operation. Local nonspiking interneurons in the terminal ganglion were also found to show sustained membrane potential change during the fictive abdominal posture movement. Artificial membrane potential change of these interneurons by intracellular current injection in the same direction as that actually observed during the abdominal movement caused similar excitation of uropod motor neurons. Artificial cancellation of the membrane potential change of these interneurons during the abdominal movement also caused cancellation of the excitation of uropod motor neurons. We concluded that the continuous excitation of uropod motor neurons during the fictive abdominal movement was mediated, at least partly, by the local nonspiking interneurons. Fourteen (36%) out of 39 examined nonspiking interneurons were judged to be involved in the excitation of uropod motor neurons during the fictive abdominal movement. Another 25 interneurons (64%) were found not to be involved in the excitation of motor neurons, although most of them had a strong effect on the uropod motor neuron activity when their membrane potential was changed artificially. The interneurons that were involved in the excitation of motor neurons during the abdominal movement included both of the two major structural types of nonspiking interneurons in the terminal ganglion, i.e., those in the anterolateral portion and those in the posterolateral portion. No strict correlation was found between the structure of nonspiking interneurons and their function in the control of uropod motor neuron activity.
7

Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (February 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial neural networks (ANNs) are basically the structures that perform input–output mapping. This mapping mimics the signal processing in biological neural networks. The basic element of biological neural network is a neuron. Neurons receive input signals from other neurons or the environment, process them, and generate their output which represents the input to another neuron of the network. Neurons can change their sensitivity to input signals. Each neuron has a simple rule to process an input signal. Biological neural networks have the property that signals are processed through many parallel connections (massively parallel processing). The activity of all neurons in these parallel connections is summed and represents the output of the whole network. The main feature of biological neural networks is that changes in the sensitivity of the neurons lead to changes in the operation of the entire network. This is called adaptation and is correlated with the learning process of living organisms. In this paper, a set of artificial neural networks are used for classifying the human skin biophysical impedance data.
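The neuron model this abstract describes, a weighted sum of inputs passed through a simple processing rule, fits in a few lines. A minimal sketch with a step activation and hand-picked weights realizing a logical AND (the weights are illustrative, not from the paper):

```python
# Minimal artificial neuron: weighted sum of inputs plus bias, passed
# through a step activation. With these hand-picked weights it
# implements a logical AND gate.

def neuron(inputs, weights, bias):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s >= 0 else 0   # step activation

AND_W, AND_B = [1.0, 1.0], -1.5
table = {(a, b): neuron([a, b], AND_W, AND_B)
         for a in (0, 1) for b in (0, 1)}
print(table)
```

Changing the weights changes the neuron's "sensitivity to input signals" in the abstract's terms; learning algorithms automate exactly that adjustment across many such units in parallel.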
8

Wang, Yu, Xintong Chen, Daqi Shen, Miaocheng Zhang, Xi Chen, Xingyu Chen, Weijing Shao, et al. "Artificial Neurons Based on Ag/V2C/W Threshold Switching Memristors." Nanomaterials 11, no. 11 (October 2021): 2860. http://dx.doi.org/10.3390/nano11112860.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial synapses and neurons are two critical, fundamental building blocks for constructing hardware neural networks. Owing to their high-density integration, outstanding nonlinearity, and modulated plasticity, memristors have attracted growing attention for emulating biological synapses and neurons. However, fabricating a low-power and robust memristor-based artificial neuron without extra electrical components is still a challenge for brain-inspired systems. In this work, we demonstrate a single two-dimensional (2D) MXene (V2C)-based threshold switching (TS) memristor that emulates a leaky integrate-and-fire (LIF) neuron without auxiliary circuits, based on an Ag-diffusion filamentary mechanism. Moreover, our V2C-based artificial neurons faithfully achieve multiple neural functions, including leaky integration, threshold-driven firing, self-relaxation, and linear strength-modulated spike-frequency characteristics. This work demonstrates that three-atom-type MXene (e.g., V2C) memristors may provide an efficient way to construct hardware neuromorphic computing systems.
9

Volchikhin, V. I., A. I. Ivanov, T. A. Zolotareva, and D. M. Skudnev. "Synthesis of four new neuro-statistical tests for testing the hypothesis of independence of small samples of biometric data." Journal of Physics: Conference Series 2094, no. 3 (November 2021): 032013. http://dx.doi.org/10.1088/1742-6596/2094/3/032013.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
The paper considers the analysis of small samples according to several statistical criteria to test the hypothesis of independence, since directly calculating correlation coefficients with the Pearson formula gives an unacceptably high error on small biometric samples. Each of the classical statistical criteria for testing the independence hypothesis can be replaced with an equivalent artificial neuron. Neuron training is performed under the condition of obtaining equal probabilities of errors of the first and second kind. To improve the quality of the decisions made, a variety of statistical criteria, both known and new, must be used, and networks of artificial neurons must be formed, scaling up to the number of artificial neurons needed for practical use. It is shown that the classical formula for calculating the correlation coefficients can be modified in four ways. This makes it possible to create a network of five artificial neurons, which is not yet able to reduce the probability of errors compared with the classical formula. A gain in confidence level can be obtained in the future only with a network of more than 23 artificial neurons, if the simplest error-detecting and error-correcting code is applied.
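For reference, the direct Pearson formula whose small-sample error the paper sets out to improve on is short to implement:

```python
import math

# Direct Pearson correlation coefficient:
#   r = sum((x - mx)(y - my)) / sqrt(sum((x - mx)^2) * sum((y - my)^2))

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))   # perfectly correlated
print(pearson([1, 2, 3, 4], [8, 6, 4, 2]))   # perfectly anticorrelated
```

On samples this small the estimate is exact only for degenerate cases like these; for noisy biometric data of comparable size, r scatters widely around the true value, which is the problem the neuro-statistical tests address.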
10

De-Miguel, Francisco F., Mariana Vargas-Caballero, and Elizabeth García-Pérez. "Spread of synaptic potentials through electrical synapses in Retzius neurones of the leech." Journal of Experimental Biology 204, no. 19 (October 2001): 3241–50. http://dx.doi.org/10.1242/jeb.204.19.3241.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
SUMMARY: We studied the spread of excitatory postsynaptic potentials (EPSPs) through electrical synapses in Retzius neurones of the leech Haementeria officinalis. The pair of Retzius neurones in each ganglion is coupled by a non-rectifying electrical synapse. Both neurones displayed synchronous EPSPs of varying amplitudes and rise times. The kinetics of synchronous EPSPs were similar in 79 % of the EPSP pairs. In the remaining 21 %, one EPSP was smaller and slower than the other, suggesting its passive spread from the other neurone. The proportion of these events increased to 75 % in the presence of Mg2+ in the bathing fluid. This spread of EPSPs from one neurone to another was tested by producing artificial EPSPs by current injection into the soma of one Retzius neurone. The artificial EPSPs were smaller and arrived more slowly at the soma of the coupled neurone. The coupling ratios for the EPSPs were proportional to the coupling ratio for long steady-state pulses in different neuronal pairs. Our results showed that EPSPs spread from one Retzius neurone to the other and support the idea that EPSP spread between electrically coupled neurones may contribute to the integration processes of neurones.
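The passive spread of a potential through a non-rectifying electrical synapse can be imitated with two coupled leaky compartments. The sketch below is a deliberately crude stand-in for the Retzius-cell pair, with invented parameters: a current pulse into cell 1 produces a smaller, slower deflection in cell 2, matching the qualitative observation in the abstract.

```python
# Two passive membrane compartments joined by a non-rectifying
# electrical synapse (conductance g). A current pulse into cell 1
# spreads passively into cell 2. Parameters are illustrative.

def coupled_cells(g=0.05, tau=10.0, dt=0.1, t_end=100.0, pulse_end=20.0):
    v1 = v2 = 0.0
    trace1, trace2 = [], []
    t = 0.0
    while t < t_end:
        i_ext = 1.0 if t < pulse_end else 0.0      # pulse into cell 1 only
        dv1 = (-v1 + i_ext) / tau + g * (v2 - v1)  # leak + coupling
        dv2 = -v2 / tau + g * (v1 - v2)
        v1 += dt * dv1
        v2 += dt * dv2
        trace1.append(v1)
        trace2.append(v2)
        t += dt
    return trace1, trace2

trace1, trace2 = coupled_cells()
peak1, peak2 = max(trace1), max(trace2)
print(peak1, peak2)
```

The attenuated amplitude and delayed peak in cell 2 correspond to the smaller, slower EPSPs the authors attribute to passive spread from the coupled neurone.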

Dissertations on the topic "Artificials neurons":

1

Henniquau, Dimitri. "Conception d’une interface fonctionnelle permettant la communication de neurones artificiels et biologiques pour des applications dans le domaine des neurosciences." Electronic Thesis or Diss., Université de Lille (2018-2021), 2021. http://www.theses.fr/2021LILUN032.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Neuromorphic engineering is an emerging field that combines skills in electronics, mathematics, computer science, and biomorphic engineering with the aim of developing artificial neural networks capable of reproducing the brain's data processing. Neuromorphic systems thus not only offer more effective and energy-efficient solutions than current data-processing technologies, but also lay the groundwork for novel therapeutic strategies for pathological brain dysfunctions. The research group Circuits Systèmes Applications des Micro-ondes (CSAM) of the Institute for Electronics, Microelectronics and Nanotechnologies (IEMN) in Lille, in which this thesis work was carried out, has contributed to the emergence of such neuromorphic systems by developing a complete toolbox of artificial neurons and synapses. To bring neuromorphic engineering into the therapeutic arsenal for neurological disorders, artificial and living neurons must be interfaced so as to ensure real communication between these components. In this context, and using the tools developed by the CSAM group, the main goal of this thesis was to design and produce a functional interface establishing a bidirectional communication loop between living and artificial neurons. The artificial neurons developed by the CSAM group are realized in CMOS technology and can emit biomimetic electrical signals; the living neurons were obtained from differentiated PC-12 cells. A first step of this work consisted in modeling and simulating the interface between artificial and living neurons; a second part was dedicated to the fabrication and characterization of neurobiohybrid interfaces, and to the growth and characterization of living neurons, before studying their capacity to communicate with artificial neurons.
First, a model of the neuronal membrane representing a living neuron interfaced with a planar metallic electrode was developed. It showed that living neurons can be stimulated with the biomimetic signals produced by the artificial-neuron model while keeping excitation voltages low. Low-voltage excitation would improve the energy efficiency of neurobiohybrid systems integrating artificial neurons and reduce the risk of damaging living tissue. Next, the neurobiohybrid interfacing living and artificial neurons was designed and fabricated; its experimental characterization validated the approach of exciting a living neuron through a planar metallic electrode. Finally, living neuronal cells derived from PC-12 cells were grown and differentiated directly on the neurobiohybrids, and experimental proof of the ability of the biomimetic electrical signals produced by the artificial neurons to excite living neurons was obtained by calcium imaging. In conclusion, the work presented in this manuscript clearly establishes a proof of concept for the excitation of living neurons by a biomimetic signal under our experimental conditions, and thus substantiates the first half of the bidirectional communication loop between artificial and living neurons.
2

Cottens, Pablo Eduardo Pereira de Araujo. "Development of an artificial neural network architecture using programmable logic." Universidade do Vale do Rio dos Sinos, 2016. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5411.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Modern Artificial Neural Networks (ANNs), depending on their complexity, usually require a workstation to process all their input data. This processing architecture requires either that field devices be located in the vicinity of the workstation when real-time processing is needed, or that the field device have the sole task of collecting data for later processing. This project creates a generic neuron architecture in programmable logic, in which ANNs can exploit the parallel nature of FPGAs to execute applications quickly, albeit at a lower output resolution. This work shows that using programmable logic to implement low-bit-resolution ANNs is not only viable, but that the neural network, thanks to its parallel nature, benefits greatly from the hardware implementation, giving fast and accurate results.
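The low-bit-resolution trade-off the thesis exploits can be illustrated by evaluating the same neuron in floating point and in 8-bit fixed point, the style of integer multiply-accumulate an FPGA implementation typically uses. The weights, inputs, and scale factor below are hypothetical, not taken from the thesis.

```python
# One neuron evaluated two ways: in floating point, and with inputs and
# weights quantized to integers with 8 fractional bits (scale 1/256),
# as an FPGA multiply-accumulate unit might. Values are hypothetical.

SCALE = 256  # 8 fractional bits

def neuron_float(xs, ws, bias):
    return max(0.0, sum(x * w for x, w in zip(xs, ws)) + bias)  # ReLU

def neuron_fixed(xs, ws, bias):
    q = lambda v: int(round(v * SCALE))          # quantize to integers
    acc = sum(q(x) * q(w) for x, w in zip(xs, ws)) + q(bias) * SCALE
    return max(0, acc) / (SCALE * SCALE)         # back to real units

xs, ws, bias = [0.5, -0.25, 0.75], [0.3, 0.8, -0.1], 0.2
print(neuron_float(xs, ws, bias), neuron_fixed(xs, ws, bias))
```

The fixed-point result differs from the float result only by a small quantization error, which is the accuracy cost the thesis accepts in exchange for cheap, massively parallel integer hardware.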
3

Ogden, James M. "Construction of fully equivalent neuronal cables : an analysis of neuron morphology." Electronic Thesis or Diss., University of Glasgow, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301502.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
4

Guahyba, Adriano da Silva. "Utilização de inteligência artificial (redes neurais artificiais) no gerenciamento de reprodutoras pesadas." Universidade Federal do Rio Grande do Sul, 2001. http://hdl.handle.net/10183/3322.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
An industry of the magnitude of poultry farming, which uses state-of-the-art equipment and up-to-date services, is in most cases led to make decisions involving every aspect of production on the basis of subjective criteria. This thesis studied the use of artificial neural networks to estimate the performance parameters of broiler breeders belonging to a poultry integration company in southern Brazil. Records from 11 flocks in the rearing phase, covering the period from 09/11/97 to 10/01/99, and from 21 flocks in production, covering the period from 26/04/98 to 19/12/99, were used for the neural-network analysis. The data comprised 273 weekly records for the rearing period and 689 weekly records for the production period. The neural-network models were compared, and the best were selected, on the basis of the multiple coefficient of determination (R2) and the mean squared error (QME), as well as by analyzing plots of the network's prediction against the prediction minus the actual value (residual). This thesis showed that the performance parameters of broiler breeders can be explained with artificial neural networks. The technique allows the technical staff to make decisions based on scientifically obtained, objective criteria. In addition, the method allows simulation of the consequences of such decisions and provides the percentage contribution of each variable to the phenomenon under study.
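The two model-selection metrics named in the abstract, the mean squared error (QME) and the multiple coefficient of determination (R2), take only a few lines to compute; the data below are made up for the demonstration.

```python
# Mean squared error (the thesis's "QME") and coefficient of
# determination R^2, the two metrics used to rank network models.
# The actual/predicted values are invented for illustration.

def mse(actual, predicted):
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

actual = [3.0, 5.0, 7.0, 9.0]
perfect = [3.0, 5.0, 7.0, 9.0]
rough = [2.5, 5.5, 6.5, 9.5]
print(mse(actual, perfect), r_squared(actual, perfect))
print(mse(actual, rough), r_squared(actual, rough))
```

A perfect model gives MSE 0 and R2 of 1; the rougher predictions score lower on both, which is exactly the ordering the thesis uses to pick its best network.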
5

Bénédic, Yohann. "Approche analytique pour l'optimisation de réseaux de neurones artificiels." PhD thesis, Université de Haute Alsace - Mulhouse, 2007. http://tel.archives-ouvertes.fr/tel-00605216.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
Artificial neural networks were born almost fifty years ago from the desire to model the memory and processing capabilities of the biological brain. The many models obtained since then still stand out for their ease of implementation, processing power, and versatility, but also for the complexity of the available programming methods. In reality, very few of these methods can analytically produce a correctly configured neural network. On the contrary, most merely adjust a draft network, little by little, until it works on a sufficient number of examples of the task to be performed. Through these so-called "learning" methods, neural networks have become black boxes that only a few experts are truly able to program: each task requires suitably choosing an initial configuration, the nature of the examples, their number, the order in which they are used, and so on. Yet the task finally learned is nonetheless the result of an algorithmic strategy implemented by the neural network. That strategy can therefore be identified through analysis and, above all, reused when designing a neural network that performs a similar task, thereby bypassing the many uncertainties tied to learning methods. The benefits of analysis are even clearer for neural networks with binary output: the discrete nature of the processed signals greatly simplifies the identification of the mechanisms at play and of their contribution to the overall processing. From this kind of systematic analysis emerges an original formalism that describes the strategy implemented by binary-output neural networks particularly efficiently.
Schematically, this formalism serves as an "intermediate state" between the black-box form of a neural network and its raw mathematical description. Being closer to neural-network models than the latter, it makes it possible to recover, by analytic synthesis, a neural network that performs the same operation as the original one but is optimized according to one or more criteria: number of neurons, number of connections, computation dynamics, and so on. This analysis-formalization-synthesis approach constitutes the contribution of this thesis.
6

Martinez, Regis. "Dynamique des systèmes cognitifs et des systèmes complexes : étude du rôle des délais de transmission de l’information." Electronic Thesis or Diss., Lyon 2, 2011. http://www.theses.fr/2011LYO20054/document.

Full text source
APA, Harvard, Vancouver, ISO, and other styles
Abstract:
How memory information is represented remains an open question in neurobiology, but also, from the computer-science point of view, in machine learning. In some artificial neural network models we face the dilemma of retrieving information knowing, from the model's performance, that this information is indeed stored, but in a form that is unknown or too complex to be easily accessible. This is the dilemma posed by large neural networks, and the one the "reservoir computing" paradigm tries to answer. Reservoir computing is a family of models that emerged at the same time as the model presented here. A network is decomposed into (1) an input layer through which training examples are injected, (2) a "reservoir" of neurons connected with or without a particular predefined organization, in which adaptation mechanisms may operate, and (3) an output layer, the "readout", on which supervised learning is performed. We add one particularity: the use of axonal delays, the propagation times of information from one neuron to another. Implementing them is both a computational contribution and a biological argument for information representation. We show that our model is capable of efficient and promising, though still improvable, machine learning. Based on this observation, and in order to improve performance, we seek to understand the internal dynamics of the model; more precisely, we study how the topology of the reservoir can influence its dynamics, drawing on the theory of polychronous groups.
For this purpose, we developed algorithms to detect these topologico-dynamic structures in a network, and in the activity of a network with a given topology. If we understand the links between topology and dynamics, we can take advantage of them to create reservoirs suited to the needs of learning. Finally, we carried out an exhaustive study of the expressiveness of a network in terms of polychronous groups, as a function of different topology types (random, regular, small-world) and numerous parameters (number of neurons, connectivity, etc.). From this we formulate a number of recommendations for creating a network whose topology can support a rich set of possible representations. We also attempt to make the link with the multiple-trace theory of memory, which can in principle be implemented and studied through the prism of polychronous groups.
Finally, we have conducted an exhaustive study of network expressivness in terms of polychronous groups, based on various types of topologies (random, regular, small-world) and different parameters (number of neurones, conectivity, etc.). We are able to formulate some recommandations to create a network whose topology can be rich in terms of possible representations. We propose to link with the cognitive theory of multiple trace memory that can, in principle, be implemented and studied in the light of polychronous groups
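The three-part decomposition described in this abstract (input layer, reservoir, trained readout) can be sketched as a minimal echo-state-style network. This is a generic illustration rather than the thesis's model: the axonal delays are omitted, and the reservoir size, spectral radius, task (predicting a sine wave five steps ahead) and ridge penalty are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# (1) input layer, (2) fixed random reservoir, (3) trained linear readout
n_in, n_res, washout, horizon = 1, 100, 50, 5
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

t = np.arange(500)
u = np.sin(0.1 * t)[:, None]          # input signal
y = np.roll(u, -horizon)              # target: the input 5 steps ahead

x = np.zeros(n_res)
states = []
for step in range(len(t)):            # drive the reservoir, collect states
    x = np.tanh(W_in @ u[step] + W @ x)
    states.append(x.copy())
S = np.array(states)[washout:-horizon]
T = y[washout:-horizon]

# supervised learning happens only on the readout (ridge regression)
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ T)
mse = float(np.mean((S @ W_out - T) ** 2))
```

Scaling the recurrent weights so their spectral radius stays below one is the usual heuristic for obtaining the echo-state property; only the readout is trained.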
7

Reali, Egidio Henrique. "Utilização de inteligência artificial - (Redes neurais artificiais) no gerenciamento da produção de frangos de corte." Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/6339.

Abstract:
This study aimed to demonstrate that the phenomena occurring in broiler chicken production can be explained through artificial neural networks. Descriptive statistics and the differences between the means of the variables in the initial data were calculated with the SigmaStat® Statistical Software for Windows 2.03. A historical series of broiler production data from 2001 and 2002 was used, supplied by a poultry integration company of Rio Grande do Sul, containing information on 1,516 growers with flocks housed in 2001 and 889 growers with flocks housed in 2002. For each flock, the files recorded its production variables, such as flock number, housing date, slaughter date, age at slaughter, number of chicks housed, kilograms of feed consumed, kilograms of chicken produced, number of birds slaughtered, cost of the chicken produced, mortality, average weight, daily weight gain, feed conversion ratio, efficiency index, net kilograms of chicken, and kilograms of starter, grower and finisher feed, among others. The artificial neural networks were built with the NeuroShell® Predictor software, developed by the Ward Systems Group. The variables chosen as "inputs" for computing the predictive model, and the "output" variable to be predicted, were identified to the program. For training the networks, 1,000 growers from the 2001 broiler housing database were used. The remaining 516 growers from 2001 and all 889 growers from 2002 served to validate the predictions; they did not take part in the learning stage and were entirely unknown to the program. Twenty models were generated in the training phase, with distinct production parameters or variables as outputs.
In all of these models, the generated artificial neural networks were well fitted, always presenting a high multiple coefficient of determination (R²) and the lowest mean squared error (MSE). Note that a perfect R² is 1, and a very good coefficient should be close to 1. All 20 models, when validated with the 516 flocks from 2001 and the 889 from 2002, also presented multiple coefficients of determination (R²) very close to 1, along with low mean squared errors and mean errors. No significant differences were found between the means of the predicted values and the means of the actual values in any of the validations performed on the flocks slaughtered in 2001 and 2002 with the 20 generated neural network models. In conclusion, artificial neural networks were able to explain the phenomena involved in the industrial production of broiler chickens. The technique offers objective, scientifically generated criteria to support the decisions of those responsible for industrial broiler production. It also makes it possible to run simulations and to measure the contribution of each variable to the phenomenon under study.
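The fit statistics this study reports, the multiple coefficient of determination (R²) and the mean squared error, reduce to short formulas. A sketch on made-up numbers (not the study's data):

```python
import numpy as np

# illustrative target values and model predictions (invented)
y_true = np.array([2.0, 3.0, 5.0, 4.0, 6.0])
y_pred = np.array([2.1, 2.9, 5.2, 3.8, 6.1])

mse = float(np.mean((y_true - y_pred) ** 2))              # mean squared error
ss_res = float(np.sum((y_true - y_pred) ** 2))            # residual sum of squares
ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))     # total sum of squares
r2 = 1.0 - ss_res / ss_tot                                # R², perfect fit = 1
```

A value of R² near 1 together with a small MSE is exactly the criterion the study uses to call a network well fitted.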
8

Monaldi, Jessica. "Neuroni artificiali e loro applicazioni." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. AMS Laurea.

Abstract:
A theoretical treatment of artificial neurons, analyzing how they were developed starting from the analysis of a biological neuron and from the Hodgkin-Huxley mathematical model, with a closer look at their hypothetical uses.
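The artificial neuron abstraction this thesis discusses reduces, in its simplest form, to a weighted sum of inputs passed through a nonlinearity. A minimal sketch, where the weights and bias are hand-picked (not taken from the thesis) so the unit acts as an OR gate:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of the inputs, squashed by a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# with strong positive weights the unit behaves like a logical OR
w, b = [10.0, 10.0], -5.0
out_low = artificial_neuron([0, 0], w, b)    # well below 0.5
out_high = artificial_neuron([1, 1], w, b)   # well above 0.5
```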
9

Wang, Shengrui, Robert François, Verjus Jean-Pierre, Cosnard Michel, and Mazaré Guy. Réseaux multicouches de neurones artificiels. S.l.: Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00335818.

10

Tápia, Milena. "Redes neurais artificiais." Florianópolis, SC, 2000. http://repositorio.ufsc.br/xmlui/handle/123456789/78807.

Abstract:
Master's dissertation, Universidade Federal de Santa Catarina, Centro Tecnológico.
Research addressing the use of artificial neural networks (ANNs), biologically inspired models, for the temporal processing problem, where the main objective is forecasting. Based on Mozer's (1994) taxonomy for temporal processing, the study focused on two questions: 1) defining the form of the short-term memory, the content that should be stored in it, and how its parameters would be updated; and 2) defining the network topology (size, structure and connections), as well as the parameters of the training algorithm (learning rate, momentum term and others). The resulting model was compared with the Box & Jenkins methodology for univariate models, and evaluated and criticized in terms of representational capacity, identification process and predictive capacity. The results show that a well-modeled ANN has the potential to represent any complex, nonlinear mapping that may govern changes in a time series. In the case study it was possible to forecast the price of eggs fourteen months ahead.

Books on the topic "Artificials neurons":

1

Lek, Sovan. Artificial Neuronal Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000.

2

Lek, Sovan, and Jean-François Guégan, eds. Artificial Neuronal Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-57030-8.

3

Blayo, François. Les réseaux de neurones artificiels. Paris: Presses universitaires de France, 1996.

4

Aleksander, Igor. Impossible minds: My neurons, my consciousness. New Jersey: Imperial College Press, 2015.

5

Mira, José, and Alberto Prieto, eds. Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8.

6

Aizenberg, Igor. Complex-Valued Neural Networks with Multi-Valued Neurons. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.

7

Aizenberg, Igor N. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Boston, MA: Springer US, 2000.

8

Aleksander, Igor. Neurons and symbols: The stuff that mind is made of. London: Chapman & Hall, 1993.

10

Fundación Cotec para la Innovación Tecnológica. Redes neuronales. Madrid: Cotec, 1998.


Book chapters on the topic "Artificials neurons":

1

Çelikok, Sami Utku, and Neslihan Serap Şengör. "Realizing Medium Spiny Neurons with a Simple Neuron Model." In Artificial Neural Networks and Machine Learning – ICANN 2016, 256–63. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44778-0_30.

2

Bergel, Alexandre. "The Artificial Neuron." In Agile Artificial Intelligence in Pharo, 37–51. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5384-7_2.

3

Huyck, Christian, and Dainius Kreivenas. "Implementing Rules with Artificial Neurons." In Lecture Notes in Computer Science, 21–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04191-5_2.

4

Kvasnička, Vladimír. "A Simulation of Spiking Neurons by Sigmoid Neurons." In Artificial Neural Nets and Genetic Algorithms, 31–34. Vienna: Springer Vienna, 2001. http://dx.doi.org/10.1007/978-3-7091-6230-9_6.

5

Agnes, Everton J., Rubem Erichsen, and Leonardo G. Brunnet. "Associative Memory in Neuronal Networks of Spiking Neurons: Architecture and Storage Analysis." In Artificial Neural Networks and Machine Learning – ICANN 2012, 145–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_19.

6

Murray, Gerard, and Tim Hendtlass. "Enhanced Artificial Neurons for Network Applications." In Engineering of Intelligent Systems, 281–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45517-5_32.

7

Laskowski, Łukasz, Magdalena Laskowska, Jerzy Jelonkiewicz, Henryk Piech, Tomasz Galkowski, and Arnaud Boullanger. "The Concept of Molecular Neurons." In Artificial Intelligence and Soft Computing, 494–501. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39384-1_43.

8

Horzyk, Adrian. "Neurons Can Sort Data Efficiently." In Artificial Intelligence and Soft Computing, 64–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59063-9_6.

9

Choudhary, Swadesh, Steven Sloan, Sam Fok, Alexander Neckar, Eric Trautmann, Peiran Gao, Terry Stewart, Chris Eliasmith, and Kwabena Boahen. "Silicon Neurons That Compute." In Artificial Neural Networks and Machine Learning – ICANN 2012, 121–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_16.

10

Lek, S., J. L. Giraudel, and J. F. Guégan. "Neuronal Networks: Algorithms and Architectures for Ecologists and Evolutionary Ecologists." In Artificial Neuronal Networks, 3–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-57030-8_1.


Conference papers on the topic "Artificials neurons":

1

Howard, R. V., W. K. Chai, and H. S. Tzou. "Modal Voltages of Linear and Nonlinear Structures Using Distributed Artificial Neurons." In ASME 1999 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/imece1999-0547.

Abstract:
Laminated or embedded distributed neurons on structural components serve as in-situ sensors monitoring the structure's dynamic state and health status. Thin-film piezoelectric patches are perfect candidates for this purpose. A generic piezoelectric neuron concept is introduced first, followed by definitions of the neural signals generated by an arbitrary neuron laminated on a generic nonlinear double-curvature elastic shell. This generic neuron theory can be applied to a large class of linear and nonlinear common geometries, e.g., spheres, cylindrical shells, plates, etc. To demonstrate the neuron concept, an Euler-Bernoulli beam laminated with segmented neurons is studied. Neural signals and modal voltages are presented. Theoretical results compare favorably with experimental data.
2

Xiao, Rong, Qiang Yu, Rui Yan, and Huajin Tang. "Fast and Accurate Classification with a Multi-Spike Learning Algorithm for Spiking Neurons." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/200.

Abstract:
The formulation of efficient supervised learning algorithms for spiking neurons is complicated and remains challenging. Most existing learning methods based on the precise firing times of spikes often suffer from relatively low efficiency and poor robustness to noise. To address these limitations, we propose a simple and effective multi-spike learning rule that trains neurons to match their output spike number with a desired one. The proposed method quickly finds a local maximum value (directly related to the embedded feature) as the relevant signal for synaptic updates, based on the membrane potential trace of a neuron, and constructs an error function defined as the difference between the local maximum membrane potential and the firing threshold. With the presented rule, a single neuron can be trained to learn multi-category tasks, successfully mitigate the impact of input noise, and discover embedded features. Experimental results show the proposed algorithm has higher precision, lower computation cost, and better noise robustness than current state-of-the-art learning methods across a wide range of learning tasks.
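A deliberately tiny, hypothetical rendering of the idea (not the authors' algorithm): the membrane potential is taken as a plain weighted sum of an input trace with no reset, a "spike" is an upward threshold crossing, and each update nudges the highest subthreshold peak up (or the highest peak down) until the spike count matches the desired one. All numbers are made up.

```python
import numpy as np

theta, lr, desired = 1.0, 0.1, 2
X = np.array([[1., 0.], [0., 0.], [0., 1.], [0., 0.], [1., 1.], [0., 0.]])
w = np.array([0.2, 0.2])              # two synapses, six time steps

def spike_count(v):
    """Count upward crossings of the threshold along the trace."""
    up = (v[1:] >= theta) & (v[:-1] < theta)
    return int(np.count_nonzero(up)) + int(v[0] >= theta)

for _ in range(100):
    v = X @ w                         # membrane potential trace
    n = spike_count(v)
    if n == desired:
        break
    # the error signal lives at a local extremum of the potential trace
    if n < desired:                   # raise the highest subthreshold peak
        t_star = int(np.argmax(np.where(v < theta, v, -np.inf)))
        w += lr * X[t_star]
    else:                             # lower the highest peak
        t_star = int(np.argmax(v))
        w -= lr * X[t_star]

final_count = spike_count(X @ w)
```

On this hand-built input, the rule first pushes the largest peak (time step 4) over the threshold, then the next-largest (time step 0), stopping at exactly two spikes.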
3

Fu, Chaoyou, Liangchen Song, Xiang Wu, Guoli Wang, and Ran He. "Neurons Merging Layer: Towards Progressive Redundancy Reduction for Deep Supervised Hashing." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/322.

Abstract:
Deep supervised hashing has become an active topic in information retrieval. It generates hashing bits from the output neurons of a deep hashing network. During binary discretization, there often exists much redundancy between hashing bits, which degrades retrieval performance in terms of both storage and accuracy. This paper proposes a simple yet effective Neurons Merging Layer (NMLayer) for deep supervised hashing. A graph is constructed to represent the redundancy relationship between hashing bits and is used to guide the learning of the hashing network. Specifically, it is dynamically learned by a novel mechanism defined in our active and frozen phases. According to the learned relationship, the NMLayer merges redundant neurons together to balance the importance of each output neuron. Moreover, multiple NMLayers are progressively trained so that a deep hashing network learns a more compact hashing code from a long redundant one. Extensive experiments on four datasets demonstrate that our proposed method outperforms state-of-the-art hashing methods.
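As a toy of the redundancy the NMLayer targets (our own illustration, not the paper's mechanism): plant one near-duplicate hashing bit, detect it from bit correlations, and collapse the pair by keeping only one of the two neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

n, bits = 1000, 6
H = rng.normal(size=(n, bits))                  # pre-binarization activations
H[:, 3] = H[:, 1] + 0.05 * rng.normal(size=n)   # planted redundancy: bit 3 ~ bit 1
B = np.sign(H)                                  # binary hashing codes

C = np.corrcoef(B, rowvar=False)                # bit-bit correlation "graph"
np.fill_diagonal(C, 0.0)
i, j = np.unravel_index(np.argmax(np.abs(C)), C.shape)
redundant_pair = tuple(sorted((int(i), int(j))))

# "merge": keep one bit of the pair, dropping the redundant neuron
kept = [k for k in range(bits) if k != redundant_pair[1]]
B_merged = B[:, kept]
```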
4

Jiang, Chunhui, Guiying Li, Chao Qian, and Ke Tang. "Efficient DNN Neuron Pruning by Minimizing Layer-wise Nonlinear Reconstruction Error." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/318.

Abstract:
Deep neural networks (DNNs) have achieved great success, but their application to mobile devices is limited by huge model size and low inference speed. Much effort has thus been devoted to pruning DNNs. Layer-wise neuron pruning methods have shown their effectiveness; they minimize the reconstruction error of the linear response with a limited number of neurons when pruning each single layer. In this paper, we propose a new layer-wise neuron pruning approach that minimizes the reconstruction error of nonlinear units, which may be more reasonable since the error before and after activation can change significantly. An iterative optimization procedure combining greedy selection with gradient descent is proposed for single-layer pruning. Experimental results on benchmark DNN models show the superiority of the proposed approach. In particular, for VGGNet the proposed approach compresses disk space by 13.6× and brings a speedup of 3.7×; for AlexNet, it achieves a compression rate of 4.1× and a speedup of 2.2×.
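A rough sketch of the layer-wise objective (a simplification, not the paper's exact procedure, which combines greedy selection with gradient descent): greedily choose the subset of neurons whose post-ReLU activations best reconstruct the full layer's nonlinear output in the least-squares sense. Sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # layer input
W = rng.normal(size=(10, 8))         # layer weights: 8 neurons
A = np.maximum(X @ W, 0.0)           # nonlinear (post-ReLU) responses

def recon_error(S):
    """Least-squares reconstruction of all 8 units from the subset S."""
    M, *_ = np.linalg.lstsq(A[:, S], A, rcond=None)
    return float(np.linalg.norm(A - A[:, S] @ M))

selected, errs = [], []
for _ in range(4):                   # keep 4 of 8 neurons
    rest = [c for c in range(A.shape[1]) if c not in selected]
    best = min(rest, key=lambda c: recon_error(selected + [c]))
    selected.append(best)
    errs.append(recon_error(selected))
```

Each added neuron can only shrink the residual, so the recorded errors decrease monotonically; keeping all neurons reconstructs the layer exactly.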
5

Jimeno Yepes, Antonio, Jianbin Tang, and Benjamin Scott Mashford. "Improving Classification Accuracy of Feedforward Neural Networks for Spiking Neuromorphic Chips." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/274.

Abstract:
Deep Neural Networks (DNNs) achieve human-level performance in many image analytics tasks, but DNNs are mostly deployed on GPU platforms that consume a considerable amount of power. New hardware platforms using lower-precision arithmetic achieve drastic reductions in power consumption. More recently, brain-inspired spiking neuromorphic chips have achieved even lower power consumption, on the order of milliwatts, while still offering real-time processing. However, to deploy DNNs on energy-efficient neuromorphic chips, the incompatibility between the continuous neurons and synaptic weights of traditional DNNs and the discrete spiking neurons and synapses of neuromorphic chips needs to be overcome. Previous work achieved this by training a network to learn continuous probabilities before deploying it to a neuromorphic architecture, such as the IBM TrueNorth Neurosynaptic System, by randomly sampling these probabilities. The main contribution of this paper is a new learning algorithm that learns a TrueNorth configuration ready for deployment. We achieve this by directly training a binary hardware crossbar that accommodates the TrueNorth axon configuration constraints, and we propose a different neuron model. Results of our approach trained on electroencephalogram (EEG) data show a significant improvement over previous work (76% vs. 86% accuracy) while maintaining state-of-the-art performance on the MNIST handwritten digit data set.
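The "previous work" deployment scheme the abstract contrasts against can be sketched in toy form: weights are trained as connection probabilities, and a binary crossbar is drawn by sampling them. Shapes and values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

p = rng.random((4, 3))                    # learned P(connection present)
samples = rng.random((1000, 4, 3)) < p    # 1000 sampled binary crossbars
empirical = samples.mean(axis=0)          # frequency of each connection
max_dev = float(np.abs(empirical - p).max())
```

On average the sampled binary networks reproduce the trained probabilities, which is why stochastic deployment works; the paper's contribution is to train the binary crossbar directly instead.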
6

Fang, Haowen, Amar Shrestha, Ziyi Zhao, and Qinru Qiu. "Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/388.

Abstract:
The recently discovered spatial-temporal information processing capability of bio-inspired spiking neural networks (SNNs) has enabled some interesting models and applications. However, designing large-scale, high-performance models remains a challenge due to the lack of robust training algorithms. A bio-plausible SNN model with spatial-temporal properties is a complex dynamic system. Synapses and neurons behave as filters capable of preserving temporal information. Because such neuron dynamics and filter effects are ignored in existing training algorithms, the SNN degrades into a memoryless system and loses the ability to process temporal signals. Furthermore, spike timing plays an important role in information representation, but conventional rate-based spike coding models only consider spike trains statistically and discard the information carried by their temporal structure. To address these issues and exploit the temporal dynamics of SNNs, we formulate the SNN as a network of infinite impulse response (IIR) filters with neuronal nonlinearity. We propose a training algorithm that learns spatial-temporal patterns by searching for the optimal synaptic filter kernels and weights. The proposed model and training algorithm are applied to construct associative memories and classifiers for synthetic and public datasets including MNIST, NMNIST, DVS128, etc. Their accuracy outperforms state-of-the-art approaches.
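The "synapses as filters" view can be made concrete with a first-order IIR filter: each incoming spike decays exponentially, so the trace preserves spike-timing information. The decay constant below is an arbitrary choice, not a value from the paper.

```python
import numpy as np

def synaptic_trace(spikes, alpha=0.9):
    """Recursive exponential filter: y[t] = alpha * y[t-1] + x[t]."""
    trace = np.zeros(len(spikes))
    s = 0.0
    for t, sp in enumerate(spikes):
        s = alpha * s + sp
        trace[t] = s
    return trace

spikes = np.zeros(20)
spikes[[2, 3, 10]] = 1.0       # three input spikes
tr = synaptic_trace(spikes)
```

Because the trace decays between spikes, downstream neurons can read not just how many spikes arrived but roughly when, which is the temporal memory a rate-only model discards.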
7

Robertson, Joshua, Ewan Wade, and Antonio Hurtado. "Ultrafast Emulation of Retinal Neuronal Circuits with Artificial VCSEL Optical Neurons." In 2019 IEEE Photonics Conference (IPC). IEEE, 2019. http://dx.doi.org/10.1109/ipcon.2019.8908359.

8

Das, Payel, Brian Quanz, Pin-Yu Chen, Jae-wook Ahn, and Dhruv Shah. "Toward a neuro-inspired creative decoder." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/381.

Abstract:
Creativity, a process that generates novel and meaningful ideas, involves increased association between task-positive (control) and task-negative (default) networks in the human brain. Inspired by this seminal finding, in this study we propose a creative decoder within a deep generative framework, which involves direct modulation of the neuronal activation pattern after sampling from the learned latent space. The proposed approach is fully unsupervised and can be used off-the-shelf. Several novelty metrics and human evaluation were used to evaluate the creative capacity of the deep decoder. Our experiments on different image datasets (MNIST, FMNIST, MNIST+FMNIST, WikiArt and CelebA) reveal that atypical co-activation of highly activated and weakly activated neurons in a deep decoder promotes the generation of novel and meaningful artifacts.
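The activation-modulation step can be pictured in toy form. Everything here (the single decoder layer, doubling the top units, co-activating the bottom ones) is our own invention to illustrate "atypical co-activation", not the paper's decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.normal(size=(8, 16))     # toy decoder layer: 8 latent -> 16 hidden

z = rng.normal(size=8)               # sample from the latent space
h = np.maximum(W_dec.T @ z, 0.0)     # hidden activations (ReLU)

k = 3
order = np.argsort(h)
boosted = h.copy()
boosted[order[-k:]] *= 2.0           # amplify the most active units
boosted[order[:k]] += h.max()        # co-activate the least active units
```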
9

Liu, Jia, Maoguo Gong, and Qiguang Miao. "Modeling Hebb Learning Rule for Unsupervised Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/322.

Abstract:
This paper models the Hebb learning rule and proposes a neuron learning machine (NLM). The Hebb learning rule describes the plasticity of the connection between presynaptic and postsynaptic neurons, and it is unsupervised in itself; it formulates the updating gradient of the connecting weights in artificial neural networks. In this paper, we construct an objective function by modeling the Hebb rule. We make a hypothesis to simplify the model and introduce a correlation-based constraint according to the hypothesis and the stability of solutions. By analysis from the perspectives of maintaining abstract information and increasing the energy-based probability of the observed data, we find that this biologically inspired model has the capability of learning useful features. The NLM can also be stacked to learn hierarchical features and reformulated into a convolutional version to extract features from two-dimensional data. Experiments on single-layer and deep networks demonstrate the effectiveness of the NLM in unsupervised feature learning.
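The unbounded growth of the plain Hebb update (dw proportional to x*y) is the classic motivation for constrained variants. Oja's rule, shown here, adds a decay term and converges to the input's leading principal component; it is the textbook rule illustrating the plasticity the paper models, not the paper's NLM objective.

```python
import numpy as np

rng = np.random.default_rng(0)

C = np.array([[3.0, 1.0], [1.0, 1.0]])        # input covariance (illustrative)
L_chol = np.linalg.cholesky(C)
w = rng.normal(size=2)
w /= np.linalg.norm(w)
lr = 0.01
for _ in range(5000):
    x = L_chol @ rng.normal(size=2)           # draw input with covariance C
    y = float(w @ x)                          # postsynaptic response
    w += lr * y * (x - y * w)                 # Oja: Hebb term minus decay

# compare with the true top eigenvector of C
eigvals, eigvecs = np.linalg.eigh(C)
v1 = eigvecs[:, np.argmax(eigvals)]
alignment = abs(float(w @ v1)) / float(np.linalg.norm(w))
```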
10

Wu, Zheng-Fan, Hui Xue, and Weimin Bai. "Learning Deeper Non-Monotonic Networks by Softly Transferring Solution Space." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/440.

Abstract:
Different from popular neural networks using quasiconvex activations, non-monotonic networks activated by periodic nonlinearities have emerged as a more competitive paradigm, offering revolutionary benefits: 1) compactly characterizing high-frequency patterns; 2) precisely representing high-order derivatives. Nevertheless, they are also well known for being hard to train, since they easily overfit dissonant noise and only allow tiny architectures (shallower than 5 layers). The fundamental bottleneck is that the periodicity leads to many poor and dense local minima in the solution space: the direction and norm of the gradient oscillate continually during error backpropagation, so non-monotonic networks get prematurely stuck in these local minima and miss effective error feedback. To alleviate this optimization dilemma, in this paper we propose a non-trivial soft transfer approach. It smooths the solution space so that it is close to that of monotonic networks at the beginning, and then improves the representational properties by transferring the solutions from the neural space of monotonic neurons to the Fourier space of non-monotonic neurons as training continues. The soft transfer consists of two core components: 1) a rectified concrete gate is constructed to characterize the state of each neuron; 2) a variational Bayesian learning framework is proposed to dynamically balance the empirical risk and the intensity of transfer. We provide comprehensive empirical evidence showing that the soft transfer not only reduces the risk of non-monotonic networks overfitting noise, but also helps them scale to much deeper architectures (more than 100 layers), achieving new state-of-the-art performance.
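The transfer between the two neuron spaces can be pictured with a per-neuron gate interpolating a monotonic activation and a periodic one. In the paper the gate is a learned rectified concrete gate with a Bayesian schedule; here g is just a hand-set scalar for illustration.

```python
import numpy as np

def gated_activation(z, g):
    """g=1: pure ReLU (monotonic); g=0: pure sine (periodic)."""
    return g * np.maximum(z, 0.0) + (1.0 - g) * np.sin(z)

z = np.linspace(-3.0, 3.0, 61)
start = gated_activation(z, g=1.0)   # early training: monotonic space
end = gated_activation(z, g=0.0)     # late training: Fourier space
```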

Reports of organizations on the topic "Artificials neurons":

1

Aristizábal-Restrepo, María Clara. Evaluación asimétrica de una red neuronal artificial: Aplicación al caso de la inflación en Colombia. Bogotá, Colombia: Banco de la República, February 2006. http://dx.doi.org/10.32468/be.377.

2

Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course “Foundations of Mathematic Informatics”. Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.

Abstract:
The role of neural network modeling in the learning content of the special course "Foundations of Mathematic Informatics" was discussed. The course was developed for students of technical universities (future IT specialists) and is directed at bridging the gap between theoretical computer science and its applied fields: software, system and computing engineering. CoCalc was justified as a learning tool for mathematical informatics in general and neural network modeling in particular. Elements of a technique for using CoCalc when studying the topic "Neural network and pattern recognition" of the special course are shown. The program code was presented in the CoffeeScript language and implements the basic components of an artificial neural network: neurons, synaptic connections, activation functions (tangential, sigmoid, stepped) and their derivatives, methods of calculating the network's weights, etc. The application of the Kolmogorov–Arnold representation theorem to determining the architecture of multilayer neural networks was discussed. The implementation of a disjunctive logical element and the approximation of an arbitrary function using a three-layer neural network are given as examples. Based on the simulation results, conclusions are drawn about the limits within which the constructed networks retain their adequacy. Framework topics for individual research on artificial neural networks are proposed.
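The components this abstract lists (activation functions with their derivatives, and a three-layer network implementing the disjunctive logical element) sketch roughly as follows, in Python rather than the course's CoffeeScript; the layer sizes, learning rate, and iteration count are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# activation function and its derivative, written in terms of the output y
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
d_sigmoid = lambda y: y * (1.0 - y)

# three-layer (2-2-1) network trained by backpropagation on disjunction (OR)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [1]], dtype=float)
W1, W2 = rng.normal(0, 1, (2, 2)), rng.normal(0, 1, (2, 1))
b1, b2 = np.zeros(2), np.zeros(1)
lr = 1.0
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)              # hidden layer
    Y = sigmoid(H @ W2 + b2)              # output layer
    dY = (Y - T) * d_sigmoid(Y)           # backpropagated output error
    dH = (dY @ W2.T) * d_sigmoid(H)       # backpropagated hidden error
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
```

Writing the derivative in terms of the activation's own output (y*(1-y) for the sigmoid) is the trick that makes backpropagation cheap, which is presumably why the course implements the derivatives alongside the activations.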
