Bibliography of scientific literature on the topic "Artificials neurons"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Artificials neurons".
Next to each work in the bibliography there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scholarly publication in .pdf format and read its online abstract, where these are available in the metadata.
Journal articles on the topic "Artificials neurons":
Herrmann, Christoph S., and Andreas Klaus. "Autapse Turns Neuron into Oscillator." International Journal of Bifurcation and Chaos 14, no. 02 (February 2004): 623–33. http://dx.doi.org/10.1142/s0218127404009338.
Sharp, A. A., L. F. Abbott, and E. Marder. "Artificial electrical synapses in oscillatory networks." Journal of Neurophysiology 67, no. 6 (June 1, 1992): 1691–94. http://dx.doi.org/10.1152/jn.1992.67.6.1691.
Márquez-Vera, Carlos Antonio, Zaineb Yakoub, Marco Antonio Márquez Vera, and Alfian Ma'arif. "Spiking PID Control Applied in the Van de Vusse Reaction." International Journal of Robotics and Control Systems 1, no. 4 (November 25, 2021): 488–500. http://dx.doi.org/10.31763/ijrcs.v1i4.490.
Torres-Treviño, Luis M., Angel Rodríguez-Liñán, Luis González-Estrada, and Gustavo González-Sanmiguel. "Single Gaussian Chaotic Neuron: Numerical Study and Implementation in an Embedded System." Discrete Dynamics in Nature and Society 2013 (2013): 1–11. http://dx.doi.org/10.1155/2013/318758.
Alvarellos-González, Alberto, Alejandro Pazos, and Ana B. Porto-Pazos. "Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks." Computational and Mathematical Methods in Medicine 2012 (2012): 1–10. http://dx.doi.org/10.1155/2012/476324.
Takahata, M., and M. Hisada. "Local nonspiking interneurons involved in gating of the descending motor pathway in crayfish." Journal of Neurophysiology 56, no. 3 (September 1, 1986): 718–31. http://dx.doi.org/10.1152/jn.1986.56.3.718.
Ribar, Srdjan, Vojislav V. Mitic, and Goran Lazovic. "Neural Networks Application on Human Skin Biophysical Impedance Characterizations." Biophysical Reviews and Letters 16, no. 01 (February 6, 2021): 9–19. http://dx.doi.org/10.1142/s1793048021500028.
Wang, Yu, Xintong Chen, Daqi Shen, Miaocheng Zhang, Xi Chen, Xingyu Chen, Weijing Shao, et al. "Artificial Neurons Based on Ag/V2C/W Threshold Switching Memristors." Nanomaterials 11, no. 11 (October 27, 2021): 2860. http://dx.doi.org/10.3390/nano11112860.
Volchikhin, V. I., A. I. Ivanov, T. A. Zolotareva, and D. M. Skudnev. "Synthesis of four new neuro-statistical tests for testing the hypothesis of independence of small samples of biometric data." Journal of Physics: Conference Series 2094, no. 3 (November 1, 2021): 032013. http://dx.doi.org/10.1088/1742-6596/2094/3/032013.
De-Miguel, Francisco F., Mariana Vargas-Caballero, and Elizabeth García-Pérez. "Spread of synaptic potentials through electrical synapses in Retzius neurones of the leech." Journal of Experimental Biology 204, no. 19 (October 1, 2001): 3241–50. http://dx.doi.org/10.1242/jeb.204.19.3241.
Dissertations on the topic "Artificials neurons":
Henniquau, Dimitri. "Conception d’une interface fonctionnelle permettant la communication de neurones artificiels et biologiques pour des applications dans le domaine des neurosciences." Thesis, Université de Lille (2018-2021), 2021. http://www.theses.fr/2021LILUN032.
Neuromorphic engineering is an exciting emerging field which combines skills in electronics, mathematics, computer science and biomorphic engineering with the aim of developing artificial neural networks capable of reproducing the brain's data processing. Neuromorphic systems thus not only offer more effective and energy-efficient solutions than current data-processing technologies, but also lay the foundations for novel therapeutic strategies in the context of pathological brain dysfunction. The research group Circuits Systèmes Applications des Micro-ondes (CSAM) of the Institute for Electronics, Microelectronics and Nanotechnologies (IEMN) in Lille, in which this thesis work was carried out, has contributed to such neuromorphic systems by developing a toolbox of artificial neurons and synapses. To bring neuromorphic engineering into the therapeutic arsenal for treating neurological disorders, living and artificial neurons must be interfaced so that real communication can take place between these components. In this context, and using the original tools developed by the CSAM group, the main goal of this thesis was to design and produce a functional interface allowing a bidirectional communication loop to be established between living and artificial neurons. The artificial neurons, developed by the CSAM group in CMOS technology, emit biomimetic electrical signals; the living neurons were obtained from differentiated PC-12 cells. A first step consisted in modeling and simulating the interface between artificial and living neurons; a second part of the thesis was dedicated to the fabrication and characterization of neurobiohybrid interfaces, and to the growth and characterization of living neurons before studying their capacity to communicate with artificial neurons.
First, a model of the neuronal membrane representing a living neuron interfaced with a planar metallic electrode was developed. We showed that it is possible to excite neurons with biomimetic signals produced by artificial neurons while maintaining a low excitation voltage; low-voltage excitation would improve the energy efficiency of neurobiohybrid systems integrating artificial neurons and reduce the impact of harmful electrical signals on living neurons. The neurobiohybrid interface between living and artificial neurons was then designed and produced. The results of its experimental characterization validate the approach of exciting living neurons through a planar metallic electrode. Finally, living neurons derived from PC-12 cells were grown and differentiated directly onto the neurobiohybrids, and experimental proof of the ability of biomimetic electrical signals to excite living neurons was obtained using calcium imaging. In conclusion, the work presented in this manuscript clearly establishes a proof of concept for the excitation of living neurons by a biomimetic signal under our experimental conditions, and thus substantiates the first part of the bidirectional communication loop between artificial and living neurons.
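The low-voltage excitation idea described above can be illustrated with a minimal leaky integrate-and-fire sketch. This is not the membrane/electrode model of the thesis; all parameter values are invented for illustration, showing only that a brief low-amplitude biomimetic current pulse can drive a model membrane past its firing threshold.

```python
def simulate_lif(stimulus, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, r_m=1e8):
    """Euler-integrate dV/dt = ((v_rest - V) + R_m * I(t)) / tau.

    stimulus is a list of injected currents (A), one per time step;
    returns the spike times in seconds.
    """
    v = v_rest
    spikes = []
    for step, i_inj in enumerate(stimulus):
        v += dt * ((v_rest - v) + r_m * i_inj) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # reset after each spike
    return spikes

# A brief 1 nA current pulse between 5 ms and 15 ms of a 50 ms run.
pulse = [1e-9 if 0.005 <= k * 1e-4 < 0.015 else 0.0 for k in range(500)]
print(simulate_lif(pulse))
```

With these toy parameters the membrane reaches threshold a few milliseconds into the pulse and fires one or two spikes, all within the stimulation window.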
Cottens, Pablo Eduardo Pereira de Araujo. "Development of an artificial neural network architecture using programmable logic." Universidade do Vale do Rio dos Sinos, 2016. http://www.repositorio.jesuita.org.br/handle/UNISINOS/5411.
Artificial Neural Networks (ANNs) typically require a workstation for processing all their input data, owing to the complexity of the system. This processing architecture requires either that field instruments be located in the vicinity of the workstation when real-time processing is needed, or that the field device's sole task be collecting data for later processing. This project creates a generic neuron architecture in programmable logic, in which ANNs can exploit the parallel nature of FPGAs to execute applications quickly, albeit at reduced output resolution. This work shows that implementing low bit-resolution ANNs in programmable logic is not only viable, but that neural networks, being inherently parallelizable, benefit greatly from hardware implementation, yielding fast and accurate results.
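As a rough illustration of the kind of arithmetic a low bit-resolution hardware neuron performs, here is a hypothetical fixed-point neuron sketch in Python. The bit widths, the quantization scheme, and the step activation are assumptions for illustration, not the thesis design.

```python
def quantize(x, frac_bits=4, total_bits=8):
    """Round x onto a signed fixed-point grid with `frac_bits` fractional bits,
    saturating at the representable range (as FPGA arithmetic would)."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, round(x * scale))) / scale

def fixed_point_neuron(inputs, weights, bias=0.0, frac_bits=4):
    """Weighted sum plus hard-limit activation, all operands quantized."""
    acc = quantize(bias, frac_bits)
    for x, w in zip(inputs, weights):
        acc += quantize(x, frac_bits) * quantize(w, frac_bits)
    return 1 if acc >= 0 else 0  # step activation: cheap in hardware

print(fixed_point_neuron([0.5, -0.25], [1.0, 0.5]))  # weighted sum 0.375 → 1
```

The point of the sketch is that every operand is snapped to a coarse grid before use, which is what trades output resolution for the small, fast logic the abstract describes.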
Ogden, James M. "Construction of fully equivalent neuronal cables : an analysis of neuron morphology." Thesis, University of Glasgow, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.301502.
Guahyba, Adriano da Silva. "Utilização de inteligência artificial (redes neurais artificiais) no gerenciamento de reprodutoras pesadas." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2001. http://hdl.handle.net/10183/3322.
Bénédic, Yohann. "Approche analytique pour l'optimisation de réseaux de neurones artificiels." Phd thesis, Université de Haute Alsace - Mulhouse, 2007. http://tel.archives-ouvertes.fr/tel-00605216.
Martinez, Regis. "Dynamique des systèmes cognitifs et des systèmes complexes : étude du rôle des délais de transmission de l’information." Thesis, Lyon 2, 2011. http://www.theses.fr/2011LYO20054/document.
How memory information is represented is still an open question in neurobiology and also, from the computer-science point of view, in machine learning. Some artificial neural network models face the problem of retrieving information that, judging by the model's performance, is actually stored, but in an unknown form or one too complex to be easily accessible. This is one of the problems met in large neural networks, and one which "reservoir computing" intends to answer. "Reservoir computing" is a category of models that emerged at the same period as, and has properties similar to, the model we present here. It is composed of three parts: (1) an input layer through which learning examples are injected; (2) a "reservoir" of neurons connected with or without a particular predefined topology, in which adaptation mechanisms may operate; (3) an output layer, called the "readout", on which supervised learning is performed. Our particular contribution consists in using axonal delays, the propagation time of information from one neuron to another through an axonal connection. Using delays is a computational improvement from the machine-learning standpoint, but also a biological argument for information representation. We show that our model is capable of artificial learning that, while improvable, is efficient and promising. Based on this observation, and with the aim of improving performance, we seek to understand the internal dynamics of the model; more precisely, we study how the topology of the reservoir can influence its dynamics. To do so, we make use of the theory of polychronous groups. We have developed complex algorithms for detecting these topologico-dynamic structures in a network, and in the activity of a network with a given topology. If we succeed in understanding the links between topology and dynamics, we may take advantage of them to create reservoirs with specific properties, suited for learning.
Finally, we have conducted an exhaustive study of network expressiveness in terms of polychronous groups, based on various types of topologies (random, regular, small-world) and different parameters (number of neurons, connectivity, etc.). We are able to formulate recommendations for creating a network whose topology is rich in terms of possible representations. We propose a link with the cognitive theory of multiple-trace memory, which can, in principle, be implemented and studied in the light of polychronous groups.
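The role of axonal delays sketched in this abstract can be illustrated with a toy event-driven network in which each connection delivers spikes after its own delay, so that the timing pattern of activity, not just which neurons fire, carries information. The topology, delays, and threshold below are invented for illustration and are not the thesis model.

```python
from collections import defaultdict

def run_delay_network(connections, initial_spikes, threshold=2, t_max=10):
    """connections: {src: [(dst, delay), ...]}; initial_spikes: [(time, neuron)].

    A neuron fires when at least `threshold` delayed spikes reach it
    in the same time step; returns all (time, neuron) firing events.
    """
    arrivals = defaultdict(lambda: defaultdict(int))  # time -> neuron -> count
    fired = list(initial_spikes)
    for t, n in initial_spikes:
        for dst, d in connections.get(n, []):
            arrivals[t + d][dst] += 1
    for t in range(t_max):
        for n, count in list(arrivals[t].items()):
            if count >= threshold:
                fired.append((t, n))
                for dst, d in connections.get(n, []):
                    arrivals[t + d][dst] += 1
    return sorted(fired)

# Neurons 0 and 1 fire at different times, yet their spikes converge on
# neuron 2 simultaneously because the delays compensate the time offset.
conns = {0: [(2, 3)], 1: [(2, 1)], 2: []}
print(run_delay_network(conns, [(0, 0), (2, 1)]))  # → [(0, 0), (2, 1), (3, 2)]
```

This coincidence-through-delays mechanism is the elementary building block behind polychronous groups: reproducible time-locked (but not synchronous) firing patterns determined jointly by topology and delays.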
Reali, Egidio Henrique. "Utilização de inteligência artificial - (Redes neurais artificiais) no gerenciamento da produção de frangos de corte." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2004. http://hdl.handle.net/10183/6339.
Monaldi, Jessica. "Neuroni artificiali e loro applicazioni." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21371/.
Wang, Shengrui Robert François Verjus Jean-Pierre Cosnard Michel Mazaré Guy. "Réseaux multicouches de neurones artificiels." S.l. : Université Grenoble 1, 2008. http://tel.archives-ouvertes.fr/tel-00335818.
Tápia, Milena. "Redes neurais artificiais." Florianópolis, SC, 2000. http://repositorio.ufsc.br/xmlui/handle/123456789/78807.
Pesquisa que aborda o uso de Redes Neurais Artificiais (RNAs) - modelos biologicamente inspirados - no problema de processamento temporal, onde o principal objetivo é a previsão. Com base na Taxinomia de MOZER (1994) para processamento temporal, o foco do estudo recaiu em duas questões: 1) Definir a forma da memória de curto tempo, o conteúdo que deveria ser armazenado nesta, e como seus parametros serião atualizados; 2) e definir a topologia da rede (tamanho, estrutura e conexões), assim como os parâmetros do algoritmo de treinamento (taxa de aprendizado, termo de momento e outros). O modelo resultante foi comparado com a Metodologia de Box & Jenkins para modelos univariados, avaliado e criticado em termos de: capacidade representativa, processo de identificação e capacidade preditiva. Os resultados mostram que uma RNA, quando bem modelada, têm potencial para representar qualquer mapeamento complexo, não-linear, que pode governar mudanças em uma série de tempo. No estudo de caso foi possível prever o preço do ovo para um período de quatorze meses à frente
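The "short-term memory" question in this abstract is often answered with a tapped delay line: the network sees a sliding window of past values and learns to predict the next one. A minimal sketch, assuming a single linear neuron trained by stochastic gradient descent on an invented periodic series (data and hyperparameters are illustrative, not from the thesis):

```python
def train_window_neuron(series, window=3, lr=0.01, epochs=2000):
    """Fit one linear neuron to map each `window` past values to the next value."""
    w = [0.0] * window
    b = 0.0
    samples = [(series[i:i + window], series[i + window])
               for i in range(len(series) - window)]
    for _ in range(epochs):
        for x, target in samples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - target
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]  # SGD step
            b -= lr * err
    return w, b

def predict(w, b, window_vals):
    return sum(wi * xi for wi, xi in zip(w, window_vals)) + b

# Learn a simple repeating pattern and forecast its continuation.
series = [0.0, 1.0, 0.0, -1.0] * 6
w, b = train_window_neuron(series)
print(predict(w, b, [1.0, 0.0, -1.0]))
```

After training, the neuron has absorbed the series' autoregressive structure, which is the same class of dependence a Box & Jenkins univariate model captures; nonlinear hidden layers extend this to the complex mappings the abstract mentions.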
Books on the topic "Artificials neurons":
Lek, Sovan. Artificial Neuronal Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000.
Lek, Sovan, and Jean-François Guégan, eds. Artificial Neuronal Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-57030-8.
Blayo, François. Les réseaux de neurones artificiels. Paris: Presses universitaires de France, 1996.
Aleksander, Igor. Impossible minds: My neurons, my consciousness. New Jersey: Imperial College Press, 2015.
Mira, José, and Alberto Prieto, eds. Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45720-8.
Aizenberg, Igor. Complex-Valued Neural Networks with Multi-Valued Neurons. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
Aizenberg, Igor N. Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Boston, MA: Springer US, 2000.
Aleksander, Igor. Neurons and symbols: The stuff that mind is made of. London: Chapman & Hall, 1993.
Fundación Cotec para la Innovación Tecnológica. Redes neuronales. Madrid: Cotec, 1998.
Book chapters on the topic "Artificials neurons":
Çelikok, Sami Utku, and Neslihan Serap Şengör. "Realizing Medium Spiny Neurons with a Simple Neuron Model." In Artificial Neural Networks and Machine Learning – ICANN 2016, 256–63. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-44778-0_30.
Bergel, Alexandre. "The Artificial Neuron." In Agile Artificial Intelligence in Pharo, 37–51. Berkeley, CA: Apress, 2020. http://dx.doi.org/10.1007/978-1-4842-5384-7_2.
Huyck, Christian, and Dainius Kreivenas. "Implementing Rules with Artificial Neurons." In Lecture Notes in Computer Science, 21–33. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-04191-5_2.
Kvasnička, Vladimír. "A Simulation of Spiking Neurons by Sigmoid Neurons." In Artificial Neural Nets and Genetic Algorithms, 31–34. Vienna: Springer Vienna, 2001. http://dx.doi.org/10.1007/978-3-7091-6230-9_6.
Agnes, Everton J., Rubem Erichsen, and Leonardo G. Brunnet. "Associative Memory in Neuronal Networks of Spiking Neurons: Architecture and Storage Analysis." In Artificial Neural Networks and Machine Learning – ICANN 2012, 145–52. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_19.
Murray, Gerard, and Tim Hendtlass. "Enhanced Artificial Neurons for Network Applications." In Engineering of Intelligent Systems, 281–89. Berlin, Heidelberg: Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-45517-5_32.
Laskowski, Łukasz, Magdalena Laskowska, Jerzy Jelonkiewicz, Henryk Piech, Tomasz Galkowski, and Arnaud Boullanger. "The Concept of Molecular Neurons." In Artificial Intelligence and Soft Computing, 494–501. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-39384-1_43.
Horzyk, Adrian. "Neurons Can Sort Data Efficiently." In Artificial Intelligence and Soft Computing, 64–74. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-59063-9_6.
Choudhary, Swadesh, Steven Sloan, Sam Fok, Alexander Neckar, Eric Trautmann, Peiran Gao, Terry Stewart, Chris Eliasmith, and Kwabena Boahen. "Silicon Neurons That Compute." In Artificial Neural Networks and Machine Learning – ICANN 2012, 121–28. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-33269-2_16.
Lek, S., J. L. Giraudel, and J. F. Guégan. "Neuronal Networks: Algorithms and Architectures for Ecologists and Evolutionary Ecologists." In Artificial Neuronal Networks, 3–27. Berlin, Heidelberg: Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/978-3-642-57030-8_1.
Conference papers on the topic "Artificials neurons":
Howard, R. V., W. K. Chai, and H. S. Tzou. "Modal Voltages of Linear and Nonlinear Structures Using Distributed Artificial Neurons." In ASME 1999 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 1999. http://dx.doi.org/10.1115/imece1999-0547.
Xiao, Rong, Qiang Yu, Rui Yan, and Huajin Tang. "Fast and Accurate Classification with a Multi-Spike Learning Algorithm for Spiking Neurons." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/200.
Fu, Chaoyou, Liangchen Song, Xiang Wu, Guoli Wang, and Ran He. "Neurons Merging Layer: Towards Progressive Redundancy Reduction for Deep Supervised Hashing." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/322.
Jiang, Chunhui, Guiying Li, Chao Qian, and Ke Tang. "Efficient DNN Neuron Pruning by Minimizing Layer-wise Nonlinear Reconstruction Error." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/318.
Jimeno Yepes, Antonio, Jianbin Tang, and Benjamin Scott Mashford. "Improving Classification Accuracy of Feedforward Neural Networks for Spiking Neuromorphic Chips." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/274.
Fang, Haowen, Amar Shrestha, Ziyi Zhao, and Qinru Qiu. "Exploiting Neuron and Synapse Filter Dynamics in Spatial Temporal Learning of Deep Spiking Neural Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/388.
Robertson, Joshua, Ewan Wade, and Antonio Hurtado. "Ultrafast Emulation of Retinal Neuronal Circuits with Artificial VCSEL Optical Neurons." In 2019 IEEE Photonics Conference (IPC). IEEE, 2019. http://dx.doi.org/10.1109/ipcon.2019.8908359.
Das, Payel, Brian Quanz, Pin-Yu Chen, Jae-wook Ahn, and Dhruv Shah. "Toward a neuro-inspired creative decoder." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/381.
Liu, Jia, Maoguo Gong, and Qiguang Miao. "Modeling Hebb Learning Rule for Unsupervised Learning." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/322.
Wu, Zheng-Fan, Hui Xue, and Weimin Bai. "Learning Deeper Non-Monotonic Networks by Softly Transferring Solution Space." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/440.
Organizational reports on the topic "Artificials neurons":
Aristizábal-Restrepo, María Clara. Evaluación asimétrica de una red neuronal artificial: Aplicación al caso de la inflación en Colombia. Bogotá, Colombia: Banco de la República, February 2006. http://dx.doi.org/10.32468/be.377.
Markova, Oksana, Serhiy Semerikov, and Maiia Popel. CoCalc as a Learning Tool for Neural Network Simulation in the Special Course "Foundations of Mathematic Informatics". Sun SITE Central Europe, May 2018. http://dx.doi.org/10.31812/0564/2250.