Journal articles on the topic 'Biologically inspired vision systems'

Consult the top 50 journal articles for your research on the topic 'Biologically inspired vision systems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Fernández-Caballero, Antonio, and José Manuel Ferrández. "Biologically inspired vision systems in robotics." International Journal of Advanced Robotic Systems 14, no. 6 (2017): 172988141774594. http://dx.doi.org/10.1177/1729881417745947.

2

Ang, Li-Minn, Kah Phooi Seng, and Christopher Wing Hong Ngau. "Biologically Inspired Components in Embedded Vision Systems." International Journal of Systems Biology and Biomedical Technologies 3, no. 1 (2015): 39–72. http://dx.doi.org/10.4018/ijsbbt.2015010103.

Abstract:
Biological vision components such as visual attention (VA) algorithms aim to mimic the mechanisms of the human visual system. VA algorithms are often complex and have high computational and memory requirements. In biologically-inspired vision and embedded systems, computational capacity and memory resources are of primary concern. This paper discusses the implementation of VA algorithms in embedded vision systems in a resource-constrained environment. The authors survey various types of VA algorithms and identify potential techniques which can be implemented in embedded vision systems. They then propose a low-complexity, low-memory VA model based on a well-established mainstream VA model. The proposed model addresses critical factors in terms of algorithm complexity, memory requirements, computational speed, and salience prediction performance to ensure the reliability of the VA in a resource-constrained environment. Finally, a custom softcore microprocessor-based hardware implementation on a Field-Programmable Gate Array (FPGA) is used to verify the implementation feasibility of the presented model.
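For illustration, here is a minimal Python sketch of the kind of low-complexity, centre-surround saliency computation that visual attention models of this family build on; the pyramid depth, the scale pairs, and the restriction to an intensity channel are simplifying assumptions for this listing, not the authors' exact model.

```python
# Minimal centre-surround saliency sketch (intensity channel only).
# Assumed parameters: 5-level Gaussian pyramid, two centre-surround pairs.
import cv2
import numpy as np

def saliency_map(gray: np.ndarray) -> np.ndarray:
    img = gray.astype(np.float32) / 255.0
    pyramid = [img]
    for _ in range(4):                      # small Gaussian pyramid
        pyramid.append(cv2.pyrDown(pyramid[-1]))
    h, w = img.shape
    saliency = np.zeros((h, w), np.float32)
    for c, s in [(1, 3), (2, 4)]:           # centre scale vs. surround scale
        center = cv2.resize(pyramid[c], (w, h))
        surround = cv2.resize(pyramid[s], (w, h))
        saliency += np.abs(center - surround)
    return saliency / (saliency.max() + 1e-8)

# Usage: sal = saliency_map(cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE))
```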
3

Fernández-Caballero, Antonio. "Biologically Inspired Vision Systems for Flying Robots – Editorial." International Journal of Advanced Robotic Systems 13, no. 1 (2016): 22. http://dx.doi.org/10.5772/62432.

4

Siagian, C., and L. Itti. "Biologically Inspired Mobile Robot Vision Localization." IEEE Transactions on Robotics 25, no. 4 (2009): 861–73. http://dx.doi.org/10.1109/tro.2009.2022424.

5

Khan, Salman, Alexander Wong, and Bryan Tripp. "Guarding Against Adversarial Attacks using Biologically Inspired Contour Integration." Journal of Computational Vision and Imaging Systems 4, no. 1 (2018): 3. http://dx.doi.org/10.15353/jcvis.v4i1.336.

Abstract:
Artificial vision systems are susceptible to adversarial attacks. Small intentional changes to images can cause these systems to misclassify with high confidence. The brain has many mechanisms for strengthening weak or confusing inputs. One such technique, contour integration, can separate objects from irrelevant background. We show that incorporating contour integration within artificial visual systems can increase their robustness to adversarial attacks.
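To make the mechanism concrete, the following is a toy Python sketch of collinear facilitation, the core idea behind contour integration: oriented edge responses are boosted when neighbouring positions respond along the same orientation. The orientation tuning, kernel length, and gain below are illustrative assumptions, not the network evaluated in the paper.

```python
# Toy contour-integration sketch: collinear facilitation of oriented edge maps.
import numpy as np
from scipy.ndimage import convolve, rotate

def oriented_edges(img: np.ndarray, n_orient: int = 4):
    gy, gx = np.gradient(img.astype(np.float32))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx) % np.pi
    # Soft orientation tuning (stand-in for V1 simple cells).
    return [mag * np.cos(2 * (theta - k * np.pi / n_orient)) ** 2
            for k in range(n_orient)]

def contour_integrate(maps, length: int = 9, gain: float = 0.5):
    out = []
    for k, m in enumerate(maps):
        kernel = np.zeros((length, length), np.float32)
        kernel[length // 2, :] = 1.0 / length      # support along a line
        kernel = rotate(kernel, np.degrees(k * np.pi / len(maps)),
                        reshape=False, order=1)
        support = convolve(m, kernel, mode="nearest")
        out.append(m * (1.0 + gain * support))     # facilitate collinear edges
    return np.maximum.reduce(out)
```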
6

Haltis, Kosta, Matthew Sorell, and Russell Brinkworth. "A Biologically Inspired Smart Camera for Use in Surveillance Applications." International Journal of Digital Crime and Forensics 2, no. 3 (2010): 1–14. http://dx.doi.org/10.4018/jdcf.2010070101.

Abstract:
Biological vision systems are capable of discerning detail as well as detecting objects and motion in a wide range of highly variable lighting conditions that prove challenging to traditional cameras. In this paper, the authors describe the real-time implementation of a biological vision model using a high dynamic range video camera and a General Purpose Graphics Processing Unit. The effectiveness of this implementation is demonstrated in two surveillance applications: dynamic equalization of contrast for improved recognition of scene detail and the use of biologically-inspired motion processing for the detection of small or distant moving objects in a complex scene. A system based on this prototype could improve surveillance capability in any number of difficult situations.
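As a rough illustration of the dynamic contrast equalization idea, here is a minimal Python sketch of photoreceptor-like divisive adaptation (a Naka-Rushton-style gain control), which compresses high dynamic range around the local adaptation level; the spatial constant and the single-frame formulation are assumptions, not the authors' GPU implementation.

```python
# Sketch of biologically inspired dynamic-range compression for one frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def adapt_frame(luminance: np.ndarray, sigma: float = 15.0) -> np.ndarray:
    L = luminance.astype(np.float32)
    adaptation = gaussian_filter(L, sigma) + 1e-6   # local adaptation level
    return L / (L + adaptation)                     # saturates near local mean
```

In a video setting, the adaptation level would normally also be low-pass filtered over time so the gain follows gradual lighting changes.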
7

Malowany, Dan, and Hugo Guterman. "Biologically Inspired Visual System Architecture for Object Recognition in Autonomous Systems." Algorithms 13, no. 7 (2020): 167. http://dx.doi.org/10.3390/a13070167.

Abstract:
Computer vision is currently one of the most exciting and rapidly evolving fields of science, which affects numerous industries. Research and development breakthroughs, mainly in the field of convolutional neural networks (CNNs), opened the way to unprecedented sensitivity and precision in object detection and recognition tasks. Nevertheless, the findings in recent years on the sensitivity of neural networks to additive noise, light conditions, and to the wholeness of the training dataset, indicate that this technology still lacks the robustness needed for the autonomous robotic industry. In an attempt to bring computer vision algorithms closer to the capabilities of a human operator, the mechanisms of the human visual system were analyzed in this work. Recent studies show that the mechanisms behind the recognition process in the human brain include continuous generation of predictions based on prior knowledge of the world. These predictions enable rapid generation of contextual hypotheses that bias the outcome of the recognition process. This mechanism is especially advantageous in situations of uncertainty, when visual input is ambiguous. In addition, the human visual system continuously updates its knowledge about the world based on the gaps between its prediction and the visual feedback. CNNs are feed-forward in nature and lack such top-down contextual attenuation mechanisms. As a result, although they process massive amounts of visual information during their operation, the information is not transformed into knowledge that can be used to generate contextual predictions and improve their performance. In this work, an architecture was designed that aims to integrate the concepts behind the top-down prediction and learning processes of the human visual system with the state-of-the-art bottom-up object recognition models, e.g., deep CNNs. The work focuses on two mechanisms of the human visual system: anticipation-driven perception and reinforcement-driven learning. Imitating these top-down mechanisms, together with the state-of-the-art bottom-up feed-forward algorithms, resulted in an accurate, robust, and continuously improving target recognition model.
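A minimal sketch of the general idea of biasing a bottom-up recognizer with top-down contextual predictions is shown below; the Bayes-style multiplicative fusion and the alpha weighting are illustrative assumptions, not the architecture proposed in the paper.

```python
# Fuse feed-forward CNN class scores with a top-down contextual prior.
import numpy as np

def contextual_recognition(cnn_logits: np.ndarray,
                           context_prior: np.ndarray,
                           alpha: float = 1.0) -> np.ndarray:
    likelihood = np.exp(cnn_logits - cnn_logits.max())   # bottom-up evidence
    likelihood /= likelihood.sum()
    posterior = likelihood * context_prior ** alpha      # top-down bias
    return posterior / posterior.sum()
```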
8

Low, Emily M. P., Ian R. Manchester, and Andrey V. Savkin. "A biologically inspired method for vision-based docking of wheeled mobile robots." Robotics and Autonomous Systems 55, no. 10 (2007): 769–84. http://dx.doi.org/10.1016/j.robot.2007.04.002.

9

Lewinger, William A., Cynthia M. Harley, Michael S. Watson, Michael S. Branicky, Roy E. Ritzmann, and Roger D. Quinn. "Animal-Inspired Sensing for Autonomously Climbing or Avoiding Obstacles." Applied Bionics and Biomechanics 6, no. 1 (2009): 43–61. http://dx.doi.org/10.1155/2009/280968.

Abstract:
The way that natural systems navigate their environments with agility, intelligence and efficiency is an inspiration to engineers. Biological attributes such as modes of locomotion, sensory modalities, behaviours and physical appearance have been used as design goals. While methods of locomotion allow robots to move through their environment, the addition of sensing, perception and decision making are necessary to perform this task with autonomy. This paper contrasts how the addition of two separate sensing modalities – tactile antennae and non-contact sensing – and a low-computation, capable microcontroller allow a biologically abstracted mobile robot to make insect-inspired decisions when encountering a shelflike obstacle, navigating a cluttered environment without collision and seeking vision-based goals while avoiding obstacles.
10

Csapo, Adam, Barna Resko, Domonkos Tikk, and Peter Baranyi. "Object Categorization Using Biologically Inspired Nodemaps and the HITEC Categorization System." Journal of Advanced Computational Intelligence and Intelligent Informatics 13, no. 5 (2009): 573–80. http://dx.doi.org/10.20965/jaciii.2009.p0573.

Abstract:
The computerized modeling of cognitive visual information has been a research field of great interest in the past several decades. The research field is interesting not only from a biological perspective, but also from an engineering point of view when systems are developed that aim to achieve similar goals as biological cognitive systems. This paper briefly describes a general framework for the extraction and systematic storage of low-level visual features, and demonstrates its applicability in image categorization using a linear categorization algorithm originally developed for the characterization of text documents. The performance of the algorithm together with the newly developed feature array was evaluated using the Caltech 101 database. Extremely high (95% and higher) success rates were achieved when distinguishing between pairs of categories using independent test images. Efforts were made to scale up the number of categories using a hierarchical, branch-and-bound decision tree, with limited success.
11

Thurrowgood, Saul, Richard J. D. Moore, Dean Soccol, Michael Knight, and Mandyam V. Srinivasan. "A Biologically Inspired, Vision-based Guidance System for Automatic Landing of a Fixed-wing Aircraft." Journal of Field Robotics 31, no. 4 (2014): 699–727. http://dx.doi.org/10.1002/rob.21527.

12

Kabeer, V., and N. K. Narayanan. "Wavelet-Based Artificial Light Receptor Model for Human Face Recognition." International Journal of Wavelets, Multiresolution and Information Processing 07, no. 05 (2009): 617–27. http://dx.doi.org/10.1142/s0219691309003124.

Abstract:
This paper presents a novel biologically-inspired and wavelet-based model for extracting features of faces from face images. The biological knowledge about the distribution of light receptors, cones and rods, over the surface of the retina, and the way they are associated with the nerve ends for pattern vision forms the basis for the design of this model. A combination of classical wavelet decomposition and wavelet packet decomposition is used for simulating the functional model of cones and rods in pattern vision. The paper also describes the experiments performed for face recognition using the features extracted on the AT & T face database (formerly, ORL face database) containing 400 face images of 40 different individuals. In the recognition stage, we used the Artificial Neural Network Classifier. A feature vector of size 40 is formed for face images of each person and recognition accuracy is computed using the ANN classifier. Overall recognition accuracy obtained for the AT & T face database is 95.5%.
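For readers wanting a concrete starting point, this is a minimal Python sketch of wavelet subband-energy feature extraction of the general kind the paper describes; the choice of wavelet, decomposition level, and energy statistic are assumptions, and the resulting vector length differs from the 40-element vector used by the authors.

```python
# Wavelet subband-energy features for a grayscale face image.
import numpy as np
import pywt

def wavelet_features(face: np.ndarray, wavelet: str = "db4", level: int = 3):
    coeffs = pywt.wavedec2(face.astype(np.float32), wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]        # coarse approximation energy
    for cH, cV, cD in coeffs[1:]:            # detail subbands per level
        feats += [np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)]
    return np.asarray(feats)                 # feed to an ANN classifier
```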
13

Khan, Babar, Fang Han, Zhijie Wang, and Rana J. Masood. "Bio-inspired approach to invariant recognition and classification of fabric weave patterns and yarn color." Assembly Automation 36, no. 2 (2016): 152–58. http://dx.doi.org/10.1108/aa-11-2015-100.

Abstract:
Purpose: This paper aims to propose a biologically inspired processing architecture to recognize and classify fabrics with respect to the weave pattern (fabric texture) and yarn color (fabric color).
Design/methodology/approach: Using the fabric weave pattern image identification system, this study analyzed fabric images based on the Hierarchical-MAX (HMAX) model of computer vision to extract feature values related to the texture of the fabric. A Red Green Blue (RGB) color descriptor based on opponent color channels, simulating the single-opponent and double-opponent neuronal functions of the brain, is incorporated into the texture descriptor to extract yarn color feature values. Finally, a support vector machine classifier is used to train and test the algorithm.
Findings: This two-stage processing architecture can be used to construct a system based on computer vision to recognize fabric texture and to increase the system's reliability and accuracy. Using this method, the stability and fault tolerance (invariance) were improved.
Originality/value: Traditionally, fabric texture recognition is performed manually by visual inspection. Recent studies have proposed automatic fabric texture identification based on computer vision. In the identification process, the fabric weave patterns are recognized by the warp and weft floats. However, due to optical environments and appearance differences between fabric and yarn, the stability and fault tolerance (invariance) of the computer vision method are yet to be improved. By using our method, the stability and fault tolerance (invariance) were improved.
14

Canosa, Roxanne. "Modeling Selective Perception of Complex, Natural Scenes." International Journal on Artificial Intelligence Tools 14, no. 01n02 (2005): 233–60. http://dx.doi.org/10.1142/s0218213005002089.

Abstract:
Computational modeling of the human visual system is of current interest to developers of artificial vision systems, primarily because a biologically-inspired model can offer solutions to otherwise intractable image understanding problems. The purpose of this study is to present a biologically-inspired model of selective perception that augments a stimulus-driven approach with a high-level algorithm that takes into account particularly informative regions in the scene. The representation is compact and given in the form of a topographic map of relative perceptual conspicuity values. Other recent attempts at compact scene representation consider only low-level information that codes salient features such as color, edge, and luminance values. The previous attempts do not correlate well with subjects' fixation locations during viewing of complex images or natural scenes. This study uses high-level information in the form of figure/ground segmentation, potential object detection, and task-specific location bias. The results correlate well with the fixation densities of human viewers of natural scenes, and can be used as a preprocessing module for image understanding or intelligent surveillance applications.
15

Wiederman, Steven D., and David C. O’Carroll. "Biologically Inspired Feature Detection Using Cascaded Correlations of Off and On Channels." Journal of Artificial Intelligence and Soft Computing Research 3, no. 1 (2013): 5–14. http://dx.doi.org/10.2478/jaiscr-2014-0001.

Abstract:
Flying insects are valuable animal models for elucidating computational processes underlying visual motion detection. For example, optical flow analysis by wide-field motion processing neurons in the insect visual system has been investigated from both behavioral and physiological perspectives [1]. This has resulted in useful computational models with diverse applications [2,3]. In addition, some insects must also extract the movement of their prey or conspecifics from their environment. Such insects have the ability to detect and interact with small moving targets, even amidst a swarm of others [4,5]. We use electrophysiological techniques to record from small target motion detector (STMD) neurons in the insect brain that are likely to subserve these behaviors. Inspired by such recordings, we previously proposed an ‘elementary’ small target motion detector (ESTMD) model that accounts for the spatial and temporal tuning of such neurons and even their ability to discriminate targets against cluttered surrounds [6-8]. However, other properties such as direction selectivity [9] and response facilitation for objects moving on extended trajectories [10] are not accounted for by this model. We therefore propose here two model variants that cascade an ESTMD model with a traditional motion detection model algorithm, the Hassenstein-Reichardt ‘elementary motion detector’ (EMD) [11]. We show that these elaborations maintain the principal attributes of ESTMDs (i.e. spatiotemporal tuning and background clutter rejection) while also capturing the direction selectivity observed in some STMD neurons. By encapsulating the properties of biological STMD neurons we aim to develop computational models that can simulate the remarkable capabilities of insects in target discrimination and pursuit for applications in robotics and artificial vision systems.
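The Hassenstein-Reichardt correlator that the authors cascade with the ESTMD front end is a classic model, so a compact sketch is possible; in this Python version a first-order low-pass filter serves as the delay stage, and the time constant and sampling step are illustrative choices.

```python
# Hassenstein-Reichardt elementary motion detector (EMD) for two inputs.
import numpy as np

def emd_response(left: np.ndarray, right: np.ndarray,
                 dt: float = 1e-3, tau: float = 35e-3) -> np.ndarray:
    """left, right: luminance over time at two adjacent photoreceptors."""
    alpha = dt / (tau + dt)            # first-order low-pass as the delay
    dl = np.zeros_like(left)
    dr = np.zeros_like(right)
    out = np.zeros_like(left)
    for t in range(1, len(left)):
        dl[t] = dl[t - 1] + alpha * (left[t] - dl[t - 1])
        dr[t] = dr[t - 1] + alpha * (right[t] - dr[t - 1])
        # Opponent correlation: delayed-left x right minus delayed-right x left.
        out[t] = dl[t] * right[t] - dr[t] * left[t]
    return out   # positive for left-to-right motion, negative for the reverse
```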
16

Cedron, Francisco, Sara Alvarez-Gonzalez, Alejandro Pazos, and Ana B. Porto-Pazos. "Use of Multiple Astrocytic Configurations within an Artificial Neuro-Astrocytic Network." Proceedings 21, no. 1 (2019): 46. http://dx.doi.org/10.3390/proceedings2019021046.

Abstract:
The artificial neural networks used in a multitude of fields are achieving good results. However, these systems are inspired by the view of classical neuroscience in which neurons are the only elements that process information in the brain. Advances in neuroscience have shown that there is a type of glial cell, the astrocyte, that collaborates with neurons to process information. In this work, a connectionist system formed by neurons and artificial astrocytes is presented. The astrocytes can have different configurations to achieve more biologically realistic behaviour. This work indicates that the use of different artificial astrocyte behaviours is beneficial.
17

Bar-Cohen, Yoseph. "EAP Actuators for Biomimetic Technologies with Humanlike Robots as one of the Ultimate Challenges." Advances in Science and Technology 61 (September 2008): 1–7. http://dx.doi.org/10.4028/www.scientific.net/ast.61.1.

Abstract:
Since the Stone Age, people have tried to reproduce the human appearance, functions, and intelligence using art and technology. Any aspect that represents our physical and intellectual being has been a subject of copying, mimicking and inspiration. Recent surges in technological advances have led to the emergence of increasingly realistic humanlike robots and simulations. Making such robots is part of the field of biologically inspired technologies - also known as biomimetics - and it involves developing engineered systems that exhibit the appearance and behavior of biological systems. Robots with selectable characteristics and personality that are customized to our needs and with self-learning capability may become our household appliances or even companions, and they may be used to perform hard-to-do and complex tasks. In enabling this technology, elements such as artificial intelligence, muscles, vision, and skin are being steadily improved. In this paper, the making of humanlike robots is described, with a focus on the use of artificial muscles as the enabling technology and the related challenges.
18

Fries, David P., Chase A. Starr, and Geran W. Barton. "Ocean Sensor “Imaging” Arrays Based on Bio-inspired Architectures and 2-D/3-D Construction." Marine Technology Society Journal 49, no. 3 (2015): 43–49. http://dx.doi.org/10.4031/mtsj.49.3.17.

Abstract:
Many common ocean sensor systems measure a localized space above a single sensor element. Single-point measurements give magnitude but not necessarily direction information. Expanding single sensor elements, such as those used in salinity sensors, into arrays permits spatial distribution measurements and allows flux visualizations. Furthermore, applying microsystem technology to these macro sensor systems can yield imaging arrays with high-resolution spatial/temporal sensing functions. Extending such high spatial resolution imaging over large areas is a desirable feature for new “vision” modes on autonomous robotic systems and for deployable ocean sensor systems. The work described here explores the use of printed circuit board (PCB) technology for new sensing concepts and designs. In order to create rigid-conformal, large area imaging “camera” systems, we have merged flexible PCB substrates with rigid constructions from 3-D printing. This approach merges the 2-D flexible electronics world of printed circuits with the 3-D printed packaging world. Furthermore, employing architectures used by biology as a basis for our imaging systems, we explored naturally and biologically inspired designs, their relationships to visual imaging, and alternate mechanical systems of perception. Through the use of bio-inspiration, a framework is laid out on which to base further research on design for packaging of ocean sensors and arrays. Using a 3-D printed exoskeleton's rigid form with flexible printed circuits, one can create systems that are both rigid and form-fitting with 3-D shape and enable new sensor systems for various ocean sensory applications.
19

Botzheim, János, Cristiano Cabrita, László T. Kóczy, and Antonio E. Ruano. "Genetic and Bacterial Programming for B-Spline Neural Networks Design." Journal of Advanced Computational Intelligence and Intelligent Informatics 11, no. 2 (2007): 220–31. http://dx.doi.org/10.20965/jaciii.2007.p0220.

Abstract:
The design phase of B-spline neural networks is a highly computationally complex task. Existing heuristics have been found to be highly dependent on the initial conditions employed. Interest in biologically inspired learning algorithms for control techniques such as Artificial Neural Networks and Fuzzy Systems is increasing. In this paper, the Bacterial Programming approach is presented, which is based on the replication of the microbial evolution phenomenon. This technique produces an efficient topology search, additionally obtaining more consistent solutions.
20

Beale, Russell, Robert J. Hendley, Andy Pryke, and Barry Wilkins. "Nature-Inspired Visualisation of Similarity and Relationships in Human Systems and Behaviours." Information Visualization 5, no. 4 (2006): 260–70. http://dx.doi.org/10.1057/palgrave.ivs.9500135.

Abstract:
Visualisations of complex interrelationships have the potential to be complex and require a lot of cognitive input. We have drawn analogues from natural systems to create new visualisation approaches that are more intuitive and easier to work with. We use nature-inspired concepts to provide cognitive amplification, moving the load from the user's cognitive to their perceptual systems and thus allowing them to focus their cognitive resources where they are most appropriate. Two systems are presented: one uses a physical-based model to construct the visualisation, while the other uses a biological inspiration. Their application to four visualisation tasks is discussed: the structure of information browsing on the internet; the structure of parts of the web itself; to aid the refinement of queries to a digital library; and to compare different documents for similar content.
21

Nguyen, Anh Tuan, Jian Xu, Diu Khue Luu, Qi Zhao, and Zhi Yang. "Advancing System Performance with Redundancy: From Biological to Artificial Designs." Neural Computation 31, no. 3 (2019): 555–73. http://dx.doi.org/10.1162/neco_a_01166.

Abstract:
Redundancy is a fundamental characteristic of many biological processes such as those in the genetic, visual, muscular, and nervous systems, yet its driving mechanism has not been fully comprehended. Until recently, the only understanding of redundancy was as a means to attain fault tolerance, which is reflected in the design of many man-made systems. On the contrary, our previous work on redundant sensing (RS) has demonstrated an example where redundancy can be engineered solely for enhancing accuracy and precision. The design was inspired by the binocular structure of human vision, which we believe may share a similar operation. In this letter, we present a unified theory describing how such utilization of redundancy is feasible through two complementary mechanisms: representational redundancy (RPR) and entangled redundancy (ETR). We also point out two additional examples where our new understanding of redundancy can be applied to justify a system's superior performance. One is the human musculoskeletal system (HMS), a biological instance, and the other is the deep residual neural network (ResNet), an artificial counterpart. We envision that our theory would provide a framework for the future development of bio-inspired redundant artificial systems, as well as assist studies of the fundamental mechanisms governing various biological processes.
22

Chen, Lichao, Sudhir Singh, Thomas Kailath, and Vwani Roychowdhury. "Brain-inspired automated visual object discovery and detection." Proceedings of the National Academy of Sciences 116, no. 1 (2018): 96–105. http://dx.doi.org/10.1073/pnas.1802103115.

Abstract:
Despite significant recent progress, machine vision systems lag considerably behind their biological counterparts in performance, scalability, and robustness. A distinctive hallmark of the brain is its ability to automatically discover and model objects, at multiscale resolutions, from repeated exposures to unlabeled contextual data and then to be able to robustly detect the learned objects under various nonideal circumstances, such as partial occlusion and different view angles. Replication of such capabilities in a machine would require three key ingredients: (i) access to large-scale perceptual data of the kind that humans experience, (ii) flexible representations of objects, and (iii) an efficient unsupervised learning algorithm. The Internet fortunately provides unprecedented access to vast amounts of visual data. This paper leverages the availability of such data to develop a scalable framework for unsupervised learning of object prototypes—brain-inspired flexible, scale, and shift invariant representations of deformable objects (e.g., humans, motorcycles, cars, airplanes) comprised of parts, their different configurations and views, and their spatial relationships. Computationally, the object prototypes are represented as geometric associative networks using probabilistic constructs such as Markov random fields. We apply our framework to various datasets and show that our approach is computationally scalable and can construct accurate and operational part-aware object models much more efficiently than in much of the recent computer vision literature. We also present efficient algorithms for detection and localization in new scenes of objects and their partial views.
23

Venneri, Samuel L., and Ahmed K. Noor. "Plenty of Room in the Air." Mechanical Engineering 124, no. 11 (2002): 42–48. http://dx.doi.org/10.1115/1.2002-nov-1.

Abstract:
This article highlights research on a spectrum of revolutionary concepts and technologies for civilian and military air vehicles and the airspace system that will enable a bold new era of aviation and mobility. The long-range vision includes major changes in personal transportation and significant increases in air travel capacity and safety. The vision is included in the NASA Aeronautics Blueprint, published earlier this year. It includes advanced concepts for the airspace system as a complex, highly integrated system of systems. It also outlines a new model for aviation safety and security, revolutionary aerospace vehicles with significantly greater performance, and assured development of a competent aerospace workforce. NASA and the Defense Advanced Research Projects Agency are investigating the feasibility of creating personal air vehicles that could replace or, at the very least, augment personal ground and air transportation schemes. The integration of intelligence and multifunctionality into the varied airframe and propulsion components of aerospace vehicles requires the development of revolutionary materials, structures, and subsystems. They can be achieved through the fusion of nanotechnology, biotechnology, and information technology into a new discipline—nanobiologics—that is the foundation for biologically inspired materials and structures.
24

Edelman, Shimon. "Spanning the Face Space." Journal of Biological Systems 06, no. 03 (1998): 265–79. http://dx.doi.org/10.1142/s0218339098000182.

Abstract:
The paper outlines a computational approach to face representation and recognition, inspired by two major features of biological perceptual systems: graded-profile overlapping receptive fields, and object-specific responses in the higher visual areas. This approach, according to which a face is ultimately represented by its similarities to a number of reference faces, led to the development of a comprehensive theory of object representation in biological vision, and to its subsequent psychophysical exploration and computational modeling.
25

Chessa, Manuela, and Fabio Solari. "A Computational Model for the Neural Representation and Estimation of the Binocular Vector Disparity from Convergent Stereo Image Pairs." International Journal of Neural Systems 29, no. 05 (2019): 1850029. http://dx.doi.org/10.1142/s0129065718500296.

Abstract:
The depth cue is a fundamental piece of information for artificial and living beings who interact with the surrounding environment in order to handle objects and to avoid obstacles: in such situations, the disparity patterns, which arise when agents fixate objects, are vector fields. We propose a biologically-inspired computational model to estimate dense horizontal and vertical disparity maps by exploiting the cortical paradigms of the primate visual system: in particular, we aim to model the disparity sensitivity of the V1–MT visual pathway. The proposed model is based on a first processing stage composed of a bank of spatial band-pass filters and a static nonlinearity, mimicking complex binocular cells. Then, subsequent pooling stages and decoding strategies allow the model to estimate the vector disparity, after having represented it as a population of MT-like units. We assess the proposed model by using standard benchmarking stereo images, the Middlebury dataset, and specific stereo images that have horizontal and vertical disparities, which characterize the stimuli produced by active vision systems. Moreover, we systemically analyze how the different processing stages affect the model performance, and we discuss their implications for the neural modeling.
26

Fu, Qinbing, Hongxin Wang, Cheng Hu, and Shigang Yue. "Towards Computational Models and Applications of Insect Visual Systems for Motion Perception: A Review." Artificial Life 25, no. 3 (2019): 263–311. http://dx.doi.org/10.1162/artl_a_00297.

Abstract:
Motion perception is a critical capability determining a variety of aspects of insects' life, including avoiding predators, foraging, and so forth. A good number of motion detectors have been identified in the insects' visual pathways. Computational modeling of these motion detectors has not only been providing effective solutions to artificial intelligence, but also benefiting the understanding of complicated biological visual systems. These biological mechanisms through millions of years of evolutionary development will have formed solid modules for constructing dynamic vision systems for future intelligent machines. This article reviews the computational motion perception models originating from biological research on insects' visual systems in the literature. These motion perception models or neural networks consist of the looming-sensitive neuronal models of lobula giant movement detectors (LGMDs) in locusts, the translation-sensitive neural systems of direction-selective neurons (DSNs) in fruit flies, bees, and locusts, and the small-target motion detectors (STMDs) in dragonflies and hoverflies. We also review the applications of these models to robots and vehicles. Through these modeling studies, we summarize the methodologies that generate different direction and size selectivity in motion perception. Finally, we discuss multiple systems integration and hardware realization of these bio-inspired motion perception models.
27

Chu, Joseph Lin, and Adam Krzyżak. "The Recognition Of Partially Occluded Objects with Support Vector Machines, Convolutional Neural Networks and Deep Belief Networks." Journal of Artificial Intelligence and Soft Computing Research 4, no. 1 (2014): 5–19. http://dx.doi.org/10.2478/jaiscr-2014-0021.

Abstract:
Biologically inspired artificial neural networks have been widely used for machine learning tasks such as object recognition. Deep architectures, such as the Convolutional Neural Network, and the Deep Belief Network have recently been implemented successfully for object recognition tasks. We conduct experiments to test the hypothesis that certain primarily generative models such as the Deep Belief Network should perform better on the occluded object recognition task than purely discriminative models such as Convolutional Neural Networks and Support Vector Machines. When the generative models are run in a partially discriminative manner, the data does not support the hypothesis. It is also found that the implementation of Gaussian visible units in a Deep Belief Network trained on occluded image data allows it to also learn to effectively classify non-occluded images.
28

Thuruthel, Thomas George, Benjamin Shih, Cecilia Laschi, and Michael Thomas Tolley. "Soft robot perception using embedded soft sensors and recurrent neural networks." Science Robotics 4, no. 26 (2019): eaav1488. http://dx.doi.org/10.1126/scirobotics.aav1488.

Abstract:
Recent work has begun to explore the design of biologically inspired soft robots composed of soft, stretchable materials for applications including the handling of delicate materials and safe interaction with humans. However, the solid-state sensors traditionally used in robotics are unable to capture the high-dimensional deformations of soft systems. Embedded soft resistive sensors have the potential to address this challenge. However, both the soft sensors—and the encasing dynamical system—often exhibit nonlinear time-variant behavior, which makes them difficult to model. In addition, the problems of sensor design, placement, and fabrication require a great deal of human input and previous knowledge. Drawing inspiration from the human perceptive system, we created a synthetic analog. Our synthetic system builds models using a redundant and unstructured sensor topology embedded in a soft actuator, a vision-based motion capture system for ground truth, and a general machine learning approach. This allows us to model an unknown soft actuated system. We demonstrate that the proposed approach is able to model the kinematics of a soft continuum actuator in real time while being robust to sensor nonlinearities and drift. In addition, we show how the same system can estimate the applied forces while interacting with external objects. The role of action in perception is also presented. This approach enables the development of force and deformation models for soft robotic systems, which can be useful for a variety of applications, including human-robot interaction, soft orthotics, and wearable robotics.
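As a concrete illustration of the general recipe (redundant soft-sensor input, motion-capture ground truth, recurrent model), here is a minimal PyTorch sketch; the sensor count, output dimension, and hyperparameters are placeholders, not the authors' configuration.

```python
# Recurrent model mapping soft-sensor sequences to kinematic estimates.
import torch
import torch.nn as nn

class SoftSensorModel(nn.Module):
    def __init__(self, n_sensors: int = 12, n_outputs: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_outputs)   # e.g., tip x, y, z

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)          # x: (batch, time, n_sensors)
        return self.head(h)          # per-time-step kinematic estimate

# Training against motion-capture ground truth (hypothetical tensors):
# loss = nn.functional.mse_loss(model(sensor_seq), mocap_seq)
```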
29

Wan, Jixiang, Ming Xia, Zunkai Huang, et al. "Event-Based Pedestrian Detection Using Dynamic Vision Sensors." Electronics 10, no. 8 (2021): 888. http://dx.doi.org/10.3390/electronics10080888.

Abstract:
Pedestrian detection has attracted great research attention in video surveillance, traffic statistics, and especially in autonomous driving. To date, almost all pedestrian detection solutions are derived from conventional frame-based image sensors with limited reaction speed and high data redundancy. The dynamic vision sensor (DVS), which is inspired by biological retinas, efficiently captures visual information with sparse, asynchronous events rather than dense, synchronous frames. It can eliminate redundant data transmission and avoid motion blur or data leakage in high-speed imaging applications. However, it is usually impractical to directly apply the event streams to conventional object detection algorithms. For this issue, we first propose a novel event-to-frame conversion method by integrating the inherent characteristics of events more efficiently. Moreover, we design an improved feature extraction network that can reuse intermediate features to further reduce the computational effort. We evaluate the performance of our proposed method on a custom dataset containing multiple real-world pedestrian scenes. The results indicate that our proposed method raised its pedestrian detection accuracy by about 5.6–10.8%, and its detection speed is nearly 20% faster than previously reported methods. Furthermore, it can achieve a processing speed of about 26 FPS and an AP of 87.43% when implemented on a single CPU, so that it fully meets the requirement of real-time detection.
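To illustrate the event-to-frame idea in its simplest form, here is a minimal Python sketch that accumulates polarity-signed DVS events over a fixed time window into a frame that a conventional detector can consume; the window length and normalization are assumptions, not the conversion method proposed in the paper.

```python
# Accumulate DVS events (t, x, y, polarity) into a single grayscale frame.
import numpy as np

def events_to_frame(events, height, width, t_start, window=10e-3):
    """events: iterable of (t, x, y, p) tuples with p in {-1, +1}."""
    frame = np.zeros((height, width), np.float32)
    for t, x, y, p in events:
        if t_start <= t < t_start + window:
            frame[y, x] += p                     # signed per-pixel count
    peak = np.abs(frame).max()
    if peak > 0:                                 # map to [0, 1], gray = no event
        frame = 0.5 + 0.5 * frame / peak
    else:
        frame += 0.5
    return frame
```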
30

Asadnia, Mohsen, Ajay Giri Prakash Kottapalli, Jianmin Miao, Majid Ebrahimi Warkiani, and Michael S. Triantafyllou. "Artificial fish skin of self-powered micro-electromechanical systems hair cells for sensing hydrodynamic flow phenomena." Journal of The Royal Society Interface 12, no. 111 (2015): 20150322. http://dx.doi.org/10.1098/rsif.2015.0322.

Abstract:
Using biological sensors, aquatic animals like fishes are capable of performing impressive behaviours such as super-manoeuvrability, hydrodynamic flow ‘vision’ and object localization with a success unmatched by human-engineered technologies. Inspired by the multiple functionalities of the ubiquitous lateral-line sensors of fishes, we developed flexible and surface-mountable arrays of micro-electromechanical systems (MEMS) artificial hair cell flow sensors. This paper reports the development of the MEMS artificial versions of superficial and canal neuromasts and experimental characterization of their unique flow-sensing roles. Our MEMS flow sensors feature a stereolithographically fabricated polymer hair cell mounted on a Pb(Zr0.52Ti0.48)O3 micro-diaphragm with a floating bottom electrode. Canal-inspired versions are developed by mounting a polymer canal with pores that guide external flows to the hair cells embedded in the canal. Experimental results conducted employing our MEMS artificial superficial neuromasts (SNs) demonstrated a high sensitivity and very low threshold detection limit of 22 mV/(mm s⁻¹) and 8.2 µm s⁻¹, respectively, for an oscillating dipole stimulus vibrating at 35 Hz. Flexible arrays of such superficial sensors were demonstrated to localize an underwater dipole stimulus. Comparative experimental studies revealed a high-pass filtering nature of the canal-encapsulated sensors with a cut-off frequency of 10 Hz and a flat frequency response of artificial SNs. Flexible arrays of self-powered, miniaturized, light-weight, low-cost and robust artificial lateral-line systems could enhance the capabilities of underwater vehicles.
31

Chang, Ching-Chun. "Neural Reversible Steganography with Long Short-Term Memory." Security and Communication Networks 2021 (April 4, 2021): 1–14. http://dx.doi.org/10.1155/2021/5580272.

Abstract:
Deep learning has brought about a phenomenal paradigm shift in digital steganography. However, there is as yet no consensus on the use of deep neural networks in reversible steganography, a class of steganographic methods that permits the distortion caused by message embedding to be removed. The underdevelopment of the field of reversible steganography with deep learning can be attributed to the perception that perfect reversal of steganographic distortion seems scarcely achievable, due to the lack of transparency and interpretability of neural networks. Rather than employing neural networks in the coding module of a reversible steganographic scheme, we instead apply them to an analytics module that exploits data redundancy to maximise steganographic capacity. State-of-the-art reversible steganographic schemes for digital images are based primarily on a histogram-shifting method in which the analytics module is often modelled as a pixel intensity predictor. In this paper, we propose to refine the prior estimation from a conventional linear predictor through a neural network model. The refinement can be to some extent viewed as a low-level vision task (e.g., noise reduction and super-resolution imaging). In this way, we explore a leading-edge neuroscience-inspired low-level vision model based on long short-term memory with a brief discussion of its biological plausibility. Experimental results demonstrated a significant boost contributed by the neural network model in terms of prediction accuracy and steganographic rate-distortion performance.
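For context, here is a minimal Python sketch of histogram-shifting embedding on prediction errors, the conventional reversible coding step that the paper pairs with a neural predictor; the peak-bin choice and the absence of overflow handling are deliberate simplifications.

```python
# Histogram shifting on integer prediction errors (embed and exact inverse).
import numpy as np

def embed(errors: np.ndarray, bits, peak: int = 0) -> np.ndarray:
    out, it = errors.copy(), iter(bits)
    for i, e in enumerate(errors):
        if e > peak:
            out[i] = e + 1                  # shift to make room
        elif e == peak:
            out[i] = e + next(it, 0)        # embed one bit at the peak bin
    return out

def extract(marked: np.ndarray, peak: int = 0):
    bits, restored = [], marked.copy()
    for i, e in enumerate(marked):
        if e == peak:
            bits.append(0)
        elif e == peak + 1:
            bits.append(1)
            restored[i] = peak              # undo the embedded bit
        elif e > peak + 1:
            restored[i] = e - 1             # undo the shift
    return bits, restored
```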
32

Thakoor, Sarita, Javaan Chahl, M. V. Srinivasan, et al. "Bioinspired Engineering of Exploration Systems for NASA and DoD." Artificial Life 8, no. 4 (2002): 357–69. http://dx.doi.org/10.1162/106454602321202426.

Abstract:
A new approach called bioinspired engineering of exploration systems (BEES) and its value for solving pressing NASA and DoD needs are described. Insects (for example honeybees and dragonflies) cope remarkably well with their world, despite possessing a brain containing less than 0.01% as many neurons as the human brain. Although most insects have immobile eyes with fixed focus optics and lack stereo vision, they use a number of ingenious, computationally simple strategies for perceiving their world in three dimensions and navigating successfully within it. We are distilling selected insect-inspired strategies to obtain novel solutions for navigation, hazard avoidance, altitude hold, stable flight, terrain following, and gentle deployment of payload. Such functionality provides potential solutions for future autonomous robotic space and planetary explorers. A BEES approach to developing lightweight low-power autonomous flight systems should be useful for flight control of such biomorphic flyers for both NASA and DoD needs. Recent biological studies of mammalian retinas confirm that representations of multiple features of the visual world are systematically parsed and processed in parallel. Features are mapped to a stack of cellular strata within the retina. Each of these representations can be efficiently modeled in semiconductor cellular nonlinear network (CNN) chips. We describe recent breakthroughs in exploring the feasibility of the unique blending of insect strategies of navigation with mammalian visual search, pattern recognition, and image understanding into hybrid biomorphic flyers for future planetary and terrestrial applications. We describe a few future mission scenarios for Mars exploration, uniquely enabled by these newly developed biomorphic flyers.
33

Pearson, Martin J., Ben Mitchinson, J. Charles Sullivan, Anthony G. Pipe, and Tony J. Prescott. "Biomimetic vibrissal sensing for robots." Philosophical Transactions of the Royal Society B: Biological Sciences 366, no. 1581 (2011): 3085–96. http://dx.doi.org/10.1098/rstb.2011.0164.

Abstract:
Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots.
34

Prahara, Adhi, Murinto Murinto, and Dewi Pramudi Ismi. "Bottom-up visual attention model for still image: a preliminary study." International Journal of Advances in Intelligent Informatics 6, no. 1 (2020): 82. http://dx.doi.org/10.26555/ijain.v6i1.469.

Abstract:
The philosophy of human visual attention is scientifically explained in the field of cognitive psychology and neuroscience then computationally modeled in the field of computer science and engineering. Visual attention models have been applied in computer vision systems such as object detection, object recognition, image segmentation, image and video compression, action recognition, visual tracking, and so on. This work studies bottom-up visual attention, namely human fixation prediction and salient object detection models. The preliminary study briefly covers from the biological perspective of visual attention, including visual pathway, the theory of visual attention, to the computational model of bottom-up visual attention that generates saliency map. The study compares some models at each stage and observes whether the stage is inspired by biological architecture, concept, or behavior of human visual attention. From the study, the use of low-level features, center-surround mechanism, sparse representation, and higher-level guidance with intrinsic cues dominate the bottom-up visual attention approaches. The study also highlights the correlation between bottom-up visual attention and curiosity.
35

Fries, David, Chase Starr, and Geran Barton. "2D PCB with 3D Print Fabrications for Rigid-Conformal Packaging of Microsensor Imaging Arrays Based on Bioinspired Architectures." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2014, DPC (2014): 001012–45. http://dx.doi.org/10.4071/2014dpc-tp33.

Abstract:
Macro sensor systems typically measure a localized space above a single sensor element. Expanding these single sensor elements into arrays permits spatial distribution measurements of a particular parameter and allows flux visualizations. Furthermore, applying microsystems technology to macro sensor systems yields imaging arrays and high-resolution spatial/temporal sensing functions. Extending the high spatial resolution imaging over large areas is a desirable feature for new “vision” modes on autonomous robotic systems and for deployable environmental sensors. Rigid-flexible PCBs are desirable for miniaturization and integration of systems for mobile technology. The hybrid substrates provide substantial flexibility in systems design and integration of multiple functions into limited spaces. Using this design and construction approach allows lightweight, complex, and space-efficient systems. Flex microsystems based on structured, fiber or non-fiber filled PCB laminates permit packaging to occur at two levels, at the local (micro) substrate scale and at the macro scale, with the ability to flex the system across millimeter to centimeter lengths on real everyday systems. We continue to explore the use of PCB and PCBMEMS technology for new sensing concepts. In order to create rigid-conformal, large area imaging “camera” systems we have merged flexible PCB substrates with rigid constructions from 3D printing. This approach merges the 2D flexible electronics world of printed circuits with the 3D printed packaging world. Furthermore, employing architectures used by biology as a basis for our imaging systems, we explored naturally and biologically inspired designs, and their relationships to non-visible imagery, and alternate mechanical systems of perception. Radiolaria are extremely tiny ocean organisms that utilize a similar additive construction process to 3D printing. Their cell bodies secrete a substance mainly composed of silica to form intricate exoskeletons used as a system of protection. A correlation can be made between the radiolaria's construction process and the plastic extrusion system of the 3D fused deposition model printer. The benefits of additive construction are efficient use of materials, reduced cost and energy, and the ability to customize forms. Through the use of bio-inspiration, a framework is laid out on which to base further research on design for packaging (DFP). Radiolarian exoskeletons take on a grid-like pattern while creating a cage around each microsensor interior and producing strong scaffolding. Using the 3D printed exoskeleton's form and function with flexible printed circuits, one can create systems that are both rigid and form-fitting with three-dimensional shape and enable new camera systems for various sensory applications.
36

Ichikawa, Michinori, Hitoshi Yamada, and Johane Takeuchi. "Flying Robot with Biologically Inspired Vision." Journal of Robotics and Mechatronics 13, no. 6 (2001): 621–24. http://dx.doi.org/10.20965/jrm.2001.p0621.

Abstract:
An autonomous helicopter controlled by biologically inspired vision detects altitude displacement with real-time video processing. It uses two CCD video cameras to see landscape objects, together with processing circuitry based on an FPGA. Each image is divided into 800 areas for edge detection, which is used to detect displacement. Each of these small areas works as an ommatidium, i.e., a unit of the compound eye of an insect. In a typical indoor setting (with objects such as desks, walls, etc.), visual feedback was not sufficient to realize stable hovering, but additional external feedback helps keep the unstable robot in more stable flight.
37

Bao, Yuequan, Zhiyi Tang, Hui Li, and Yufeng Zhang. "Computer vision and deep learning–based data anomaly detection method for structural health monitoring." Structural Health Monitoring 18, no. 2 (2018): 401–21. http://dx.doi.org/10.1177/1475921718757405.

Abstract:
The widespread application of sophisticated structural health monitoring systems in civil infrastructures produces a large volume of data. As a result, the analysis and mining of structural health monitoring data have become hot research topics in the field of civil engineering. However, the harsh environment of civil structures causes the data measured by structural health monitoring systems to be contaminated by multiple anomalies, which seriously affect the data analysis results. This is one of the main barriers to automatic real-time warning, because it is difficult to distinguish the anomalies caused by structural damage from those related to incorrect data. Existing methods for data cleansing mainly focus on noise filtering, whereas the detection of incorrect data requires expertise and is very time-consuming. Inspired by the real-world manual inspection process, this article proposes a computer vision and deep learning–based data anomaly detection method. In particular, the framework of the proposed method includes two steps: data conversion by data visualization, and the construction and training of deep neural networks for anomaly classification. This process imitates human biological vision and logical thinking. In the data visualization step, the time series signals are transformed into image vectors that are plotted piecewise in grayscale images. In the second step, a training dataset consisting of randomly selected and manually labeled image vectors is input into a deep neural network or a cluster of deep neural networks, which are trained via techniques termed stacked autoencoders and greedy layer-wise training. The trained deep neural networks can be used to detect potential anomalies in large amounts of unchecked structural health monitoring data. To illustrate the training procedure and validate the performance of the proposed method, acceleration data from the structural health monitoring system of a real long-span bridge in China are employed. The results show that the multi-pattern anomalies of the data can be automatically detected with high accuracy.
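As an illustration of the data-visualization step, here is a minimal Python sketch that slices a monitoring channel into fixed-length windows and rasterizes each as a small grayscale image for a downstream classifier; the window and image sizes are assumed parameters, not the paper's settings.

```python
# Render fixed-length signal windows as grayscale curve images.
import numpy as np

def series_to_images(signal: np.ndarray, window: int = 1024,
                     size: tuple = (64, 64)) -> np.ndarray:
    images = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        img = np.zeros(size, np.float32)
        xs = np.linspace(0, size[1] - 1, window).astype(int)
        span = float(seg.max() - seg.min()) or 1.0
        ys = ((seg - seg.min()) / span * (size[0] - 1)).astype(int)
        img[size[0] - 1 - ys, xs] = 1.0      # white curve on black background
        images.append(img)
    return np.stack(images)
```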
38

Gelenbe, Erol. "Biologically inspired autonomous systems." Robotics and Autonomous Systems 22, no. 1 (1997): 1–2. http://dx.doi.org/10.1016/s0921-8890(97)00012-2.

39

Colen, Jonathan, Ming Han, Rui Zhang, et al. "Machine learning active-nematic hydrodynamics." Proceedings of the National Academy of Sciences 118, no. 10 (2021): e2016708118. http://dx.doi.org/10.1073/pnas.2016708118.

Abstract:
Hydrodynamic theories effectively describe many-body systems out of equilibrium in terms of a few macroscopic parameters. However, such parameters are difficult to determine from microscopic information. Seldom is this challenge more apparent than in active matter, where the hydrodynamic parameters are in fact fields that encode the distribution of energy-injecting microscopic components. Here, we use active nematics to demonstrate that neural networks can map out the spatiotemporal variation of multiple hydrodynamic parameters and forecast the chaotic dynamics of these systems. We analyze biofilament/molecular-motor experiments with microtubule/kinesin and actin/myosin complexes as computer vision problems. Our algorithms can determine how activity and elastic moduli change as a function of space and time, as well as adenosine triphosphate (ATP) or motor concentration. The only input needed is the orientation of the biofilaments and not the coupled velocity field which is harder to access in experiments. We can also forecast the evolution of these chaotic many-body systems solely from image sequences of their past using a combination of autoencoders and recurrent neural networks with residual architecture. In realistic experimental setups for which the initial conditions are not perfectly known, our physics-inspired machine-learning algorithms can surpass deterministic simulations. Our study paves the way for artificial-intelligence characterization and control of coupled chaotic fields in diverse physical and biological systems, even in the absence of knowledge of the underlying dynamics.
40

Studart, André R. "Biologically Inspired Dynamic Material Systems." Angewandte Chemie International Edition 54, no. 11 (2015): 3400–3416. http://dx.doi.org/10.1002/anie.201410139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Akolkar, Himanshu, Cedric Meyer, Xavier Clady, et al. "What Can Neuromorphic Event-Driven Precise Timing Add to Spike-Based Pattern Recognition?" Neural Computation 27, no. 3 (2015): 561–93. http://dx.doi.org/10.1162/neco_a_00703.

Full text
Abstract:
This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost every spike-based model of biological visual systems. The use of images naturally leads to generating artificial, incorrect, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, yielding output that is optimally sparse in space and time: each pixel is individually and precisely timed, and emits only if new (previously unknown) information is available (event-based). This letter uses the high temporal resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30–60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
APA, Harvard, Vancouver, ISO, and other styles
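
The level-crossing acquisition scheme described above, and the information loss from degrading event timestamps to frame-camera rates, can be illustrated with a short numpy sketch. The threshold, sampling rate, and test signal are placeholder assumptions, not values from the letter.

```python
import numpy as np

def level_crossing_events(signal, t, delta=0.1):
    """Emit an event each time the signal moves by +/- delta from the
    level at the last event (per-pixel asynchronous sampling)."""
    events = []  # (timestamp, polarity) pairs
    ref = signal[0]
    for ti, s in zip(t, signal):
        while s - ref >= delta:
            ref += delta
            events.append((ti, +1))   # ON event
        while ref - s >= delta:
            ref -= delta
            events.append((ti, -1))   # OFF event
    return events

# One pixel's log-intensity sampled at 10 kHz (0.1 ms resolution).
t = np.arange(0, 0.5, 1e-4)
signal = np.sin(2 * np.pi * 8 * t) + 0.05 * np.random.randn(len(t))
events = level_crossing_events(signal, t)

# Degrading precision: snap timestamps to a 30 Hz frame clock, as a
# frame-based camera would. Many distinct event times collapse together.
frame_period = 1 / 30
coarse = [(round(ts / frame_period) * frame_period, p) for ts, p in events]
print(len(events), "events;", len({ts for ts, _ in coarse}), "distinct frame times")
```

The collapse of many precisely timed events onto a handful of frame ticks is exactly the kind of temporal information loss the letter quantifies.
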
42

Park, Sungho, Ahmed Al Maashri, Kevin M. Irick, et al. "System-On-Chip for Biologically Inspired Vision Applications." IPSJ Transactions on System LSI Design Methodology 5 (2012): 71–95. http://dx.doi.org/10.2197/ipsjtsldm.5.71.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Frintrop, Simone. "Bio-inspired Vision Systems." KI - Künstliche Intelligenz 29, no. 1 (2015): 1–4. http://dx.doi.org/10.1007/s13218-014-0336-x.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hallam, John. "Editorial: Biologically Inspired and Biomimetic Systems." Adaptive Behavior 9, no. 3-4 (2001): 129–30. http://dx.doi.org/10.1177/10597123010093001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Aleksander, Igor. "Phenomenal Consciousness and Biologically Inspired Systems." International Journal of Machine Consciousness 5, no. 1 (2013): 3–9. http://dx.doi.org/10.1142/s1793843013400015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Shi, Weiren, Zuojin Li, Xin Shi, and Zhi Zhong. "A Survey of Biologically Inspired Image Processing for Objects Recognition." International Journal of Image and Graphics 9, no. 4 (2009): 495–510. http://dx.doi.org/10.1142/s021946780900354x.

Full text
Abstract:
The human vision system is a very sophisticated image processing and object recognition mechanism. However, simulating the human or animal vision system to automate visual functions in machines is a challenge, because it is difficult to account for the view-invariant perception of universals, such as environmental objects or processes, and for the explicit perception of featural parts and wholes in visual scenes. In this paper, we first present an introduction to the importance of biologically inspired computer vision and review general and key vision functions from a neuroscience perspective. Most significantly, we summarize and discuss specific applications of biologically inspired modeling, including biologically inspired image pre-processing, image perception, and object recognition. Finally, we identify some important and challenging topics in computer vision for future work.
APA, Harvard, Vancouver, ISO, and other styles
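
As a concrete instance of the biologically inspired pre-processing this survey discusses, retina-like center-surround responses are commonly modeled as a difference of Gaussians. The sketch below is a generic illustration with assumed kernel scales, not a method taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
    """Difference-of-Gaussians filter: a crude model of retinal
    ganglion-cell receptive fields (excitatory center, inhibitory surround)."""
    center = gaussian_filter(image.astype(float), sigma_center)
    surround = gaussian_filter(image.astype(float), sigma_surround)
    return center - surround  # ON-center response; negate for OFF-center

# The response peaks at edges and isolated spots while suppressing
# uniform regions, mirroring early-vision contrast coding.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0
response = center_surround(image)
print(response.min(), response.max())
```
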
47

Guthrie, Sarah. "Anne Elizabeth Warner. 25 August 1940—16 May 2012." Biographical Memoirs of Fellows of the Royal Society 70 (March 10, 2021): 441–62. http://dx.doi.org/10.1098/rsbm.2020.0046.

Full text
Abstract:
Anne Warner applied physiological techniques to developmental biology, elucidating the mechanisms of cell interaction and communication that pattern the early embryo. Through her determination and passion for science, she contributed crucial discoveries in the fields of muscle physiology, cellular differentiation and gap junction communication. She spent the majority of her career at University College London, which became her intellectual home and where she acquired a Royal Society Foulerton Research Professorship, becoming a highly respected and influential figure. In her work on gap junctions, Anne was the first to show that embryonic development and patterning required gap junctions, and that the restriction of junctional communication between cells played a key role in tissue differentiation. Anne excelled in her breadth of vision across research and its interdisciplinary possibilities. In 1998 she established the CoMPLEX Centre for systems biology at UCL, bringing her own group together with scientists from across the STEM subjects to build testable mathematical models of biological systems across multiple scales. Indefatigable in her capacity for leadership and committee work, she assumed an eclectic set of roles across a large span of research organizations and professional societies, and had a lifelong association with the Physiological Society. In 1984 she founded the Microelectrodes course at the Plymouth Marine Biology Laboratory, which has trained generations in the art of electrophysiology and still continues today. With uncompromisingly high standards, she inspired her mentees to be ambitious and fearless, and established postdoctoral fellowships to help the young scientists who followed after her.
APA, Harvard, Vancouver, ISO, and other styles
48

Zheng, Yufeng, Erik Blasch, and Adel S. Elmaghraby. "Biologically Inspired Methods for Imaging, Cognition, Vision, and Intelligence." Computational Intelligence and Neuroscience 2016 (2016): 1. http://dx.doi.org/10.1155/2016/2402067.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Plebe, Alessio, Marco Mazzone, and Vivian M. De La Cruz. "A Biologically Inspired Neural Model of Vision-Language Integration." Neural Network World 21, no. 3 (2011): 227–50. http://dx.doi.org/10.14311/nnw.2011.21.014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Belbachir, A. N., T. Lunden, P. Hanák, F. Markus, M. Böttcher, and T. Mannersola. "Biologically-inspired stereo vision for elderly safety at home." e & i Elektrotechnik und Informationstechnik 127, no. 7-8 (2010): 216–22. http://dx.doi.org/10.1007/s502-010-0750-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles