Academic literature on the topic "Sensory input"

Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles

Choose a source type:

Consult thematic lists of articles, books, theses, conference proceedings, and other academic sources on the topic "Sensory input".

Next to each source in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Sensory input"

1

Santos, Bruno A., Rogerio M. Gomes, Xabier E. Barandiaran, and Phil Husbands. "Active Role of Self-Sustained Neural Activity on Sensory Input Processing: A Minimal Theoretical Model." Neural Computation 34, no. 3 (February 17, 2022): 686–715. http://dx.doi.org/10.1162/neco_a_01471.

Full text
Abstract A growing body of work has demonstrated the importance of ongoing oscillatory neural activity in sensory processing and the generation of sensorimotor behaviors. It has been shown, for several different brain areas, that sensory-evoked neural oscillations are generated from the modulation by sensory inputs of inherent self-sustained neural activity (SSA). This letter contributes to that strand of research by introducing a methodology to investigate how much of the sensory-evoked oscillatory activity is generated by SSA and how much is generated by sensory inputs within the context of sensorimotor behavior in a computational model. We develop an abstract model consisting of a network of three Kuramoto oscillators controlling the behavior of a simulated agent performing a categorical perception task. The effects of sensory inputs and SSAs on sensory-evoked oscillations are quantified by the cross product of velocity vectors in the phase space of the network under different conditions (disconnected without input, connected without input, and connected with input). We found that while the agent is carrying out the task, sensory-evoked activity is predominantly generated by SSA (93.10%) with much less influence from sensory inputs (6.90%). Furthermore, the influence of sensory inputs can be reduced by 10.4% (from 6.90% to 6.18%) with a decay in the agent's performance of only 2%. A dynamical analysis shows how sensory-evoked oscillations are generated from a dynamic coupling between the level of sensitivity of the network and the intensity of the input signals. This work may suggest interesting directions for neurophysiological experiments investigating how self-sustained neural activity influences sensory input processing, and ultimately affects behavior.
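The three-oscillator setup described above can be made concrete with a short sketch. This is an illustrative toy, not the authors' model: the natural frequencies, coupling strength, and input values below are invented, and the with/without-input comparison only loosely mirrors the paper's network conditions.

```python
import math

def kuramoto_step(phases, omegas, K, inputs, dt=0.01):
    """One Euler step of a Kuramoto network with additive external inputs:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i) + input_i
    """
    n = len(phases)
    out = []
    for i in range(n):
        coupling = (K / n) * sum(math.sin(phases[j] - phases[i]) for j in range(n))
        out.append(phases[i] + dt * (omegas[i] + coupling + inputs[i]))
    return out

# Loose analogue of the paper's conditions: the same connected network
# stepped without and with a sensory input to the first oscillator.
phases = [0.0, 1.0, 2.0]   # illustrative initial phases
omegas = [1.0, 1.2, 0.8]   # illustrative natural frequencies
no_input = kuramoto_step(phases, omegas, K=0.5, inputs=[0.0, 0.0, 0.0])
with_input = kuramoto_step(phases, omegas, K=0.5, inputs=[0.3, 0.0, 0.0])
```

Comparing the phase-velocity contributions of the coupling term and the input term across such conditions is the spirit of the paper's vector cross-product analysis.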
2

Bui, Tuan V., and Robert M. Brownstone. "Sensory-evoked perturbations of locomotor activity by sparse sensory input: a computational study." Journal of Neurophysiology 113, no. 7 (April 2015): 2824–39. http://dx.doi.org/10.1152/jn.00866.2014.

Full text
Abstract
Sensory inputs from muscle, cutaneous, and joint afferents project to the spinal cord, where they are able to affect ongoing locomotor activity. Activation of sensory input can initiate or prolong bouts of locomotor activity depending on the identity of the sensory afferent activated and the timing of the activation within the locomotor cycle. However, the mechanisms by which afferent activity modifies locomotor rhythm and the distribution of sensory afferents to the spinal locomotor networks have not been determined. Considering the many sources of sensory inputs to the spinal cord, determining this distribution would provide insights into how sensory inputs are integrated to adjust ongoing locomotor activity. We asked whether a sparsely distributed set of sensory inputs could modify ongoing locomotor activity. To address this question, several computational models of locomotor central pattern generators (CPGs) that were mechanistically diverse and generated locomotor-like rhythmic activity were developed. We show that sensory inputs restricted to a small subset of the network neurons can perturb locomotor activity in the same manner as seen experimentally. Furthermore, we show that an architecture with sparse sensory input improves the capacity to gate sensory information by selectively modulating sensory channels. These data demonstrate that sensory input to rhythm-generating networks need not be extensively distributed.
3

Mao, Yu-Ting, Tian-Miao Hua, and Sarah L. Pallas. "Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas." Journal of Neurophysiology 105, no. 4 (April 2011): 1558–73. http://dx.doi.org/10.1152/jn.00407.2010.

Full text
Abstract
Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. 
They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into account that sensory cortex may become substantially more multisensory after alteration of its input during development.
4

Ugawa, Yoshikazu. "Sensory input and basal ganglia." Rinsho Shinkeigaku 52, no. 11 (2012): 862–65. http://dx.doi.org/10.5692/clinicalneurol.52.862.

Full text
5

Franosch, Jan-Moritz P., Sebastian Urban, and J. Leo van Hemmen. "Supervised Spike-Timing-Dependent Plasticity: A Spatiotemporal Neuronal Learning Rule for Function Approximation and Decisions." Neural Computation 25, no. 12 (December 2013): 3113–30. http://dx.doi.org/10.1162/neco_a_00520.

Full text
Abstract
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as “supervisor.” Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
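A minimal pair-based STDP update helps make the rule concrete. This is a generic textbook STDP sketch, not the supervised variant whose convergence the paper proves; the parameters `a_plus`, `a_minus`, and `tau` are illustrative choices.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise; weight clipped to [0, 1]."""
    dt = t_post - t_pre
    if dt >= 0:
        w += a_plus * math.exp(-dt / tau)   # pre before post -> potentiation
    else:
        w -= a_minus * math.exp(dt / tau)   # post before pre -> depression
    return max(0.0, min(1.0, w))
```

In the supervised setting, the "post" spike times would be supplied by the supervising modality rather than generated by the trained neuron itself.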
6

Bell, C. C., V. Z. Han, Y. Sugawara, and K. Grant. "Synaptic plasticity in the mormyrid electrosensory lobe." Journal of Experimental Biology 202, no. 10 (May 15, 1999): 1339–47. http://dx.doi.org/10.1242/jeb.202.10.1339.

Full text
Abstract
The mormyrid electrosensory lateral line lobe (ELL) is one of several different sensory structures in fish that behave as adaptive sensory processors. These structures generate negative images of predictable features in the sensory inflow which are added to the actual inflow to minimize the effects of predictable sensory features. The negative images are generated through a process of association between centrally originating predictive signals and sensory inputs from the periphery. In vitro studies in the mormyrid ELL show that pairing of parallel fiber input with Na+ spikes in postsynaptic cells results in synaptic depression at the parallel fiber synapses. The synaptic plasticity observed at the cellular level and the associative process of generating negative images of predicted sensory input at the systems level share a number of properties. Both are rapidly established, anti-Hebbian, reversible, input-specific and tightly restricted in time. These common properties argue strongly that associative depression at the parallel fiber synapse contributes to the adaptive generation of negative images in the mormyrid ELL.
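The negative-image idea can be caricatured with a delta-rule update in which a centrally originating predictive signal learns to cancel the predictable part of the sensory inflow. This is a loose sketch, not the ELL's actual anti-Hebbian synaptic mechanism, and the learning rate is an arbitrary choice.

```python
def antihebbian_update(weights, predictive, sensory, lr=0.05):
    """Delta-rule caricature of negative-image learning: the prediction
    driven by the predictive (corollary-discharge) signal is nudged toward
    the sensory input, so the residual (sensory - prediction) that is
    passed on shrinks for predictable input."""
    prediction = [w * p for w, p in zip(weights, predictive)]
    residual = [s - pr for s, pr in zip(sensory, prediction)]
    return [w + lr * p * r for w, p, r in zip(weights, predictive, residual)]
```

Repeated pairing with a fixed predictive signal drives the prediction toward the predictable sensory pattern, leaving only novel input in the residual.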
7

Etesami, Jalal, and Philipp Geiger. "Causal Transfer for Imitation Learning and Decision Making under Sensor-Shift." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 06 (April 3, 2020): 10118–25. http://dx.doi.org/10.1609/aaai.v34i06.6571.

Full text
Abstract
Learning from demonstrations (LfD) is an efficient paradigm to train AI agents. But major issues arise when there are differences between (a) the demonstrator's own sensory input, (b) our sensors that observe the demonstrator, and (c) the sensory input of the agent we train. In this paper, we propose a causal model-based framework for transfer learning under such "sensor-shifts", for two common LfD tasks: (1) inferring the effect of the demonstrator's actions and (2) imitation learning. First we rigorously analyze, on the population level, to what extent the relevant underlying mechanisms (the action effects and the demonstrator policy) can be identified and transferred from the available observations together with prior knowledge of sensor characteristics, and we devise an algorithm to infer these mechanisms. Then we introduce several proxy methods which are easier to calculate, estimate from finite data, and interpret than the exact solutions, alongside theoretical bounds on their closeness to the exact ones. We validate our two main methods on simulated and semi-real-world data.
8

Havrylovych, Mariia, and Valeriy Danylov. "Research of autoencoder-based user biometric verification with motion patterns." System research and information technologies, no. 2 (August 30, 2022): 128–36. http://dx.doi.org/10.20535/srit.2308-8893.2022.2.10.

Full text
Abstract
In the current research, we continue our previous study of motion-based user biometric verification, which consumes sensory data. Sensor-based verification systems support the continuous-authentication narrative, since physiological biometric methods, which rely mainly on photo or video input, face many implementation difficulties. The research aims to analyze how the various components of accelerometer sensor data affect and contribute to defining a person's unique motion patterns, and to understand how these may express human behavioral patterns across different activity types. The study used a recurrent long short-term memory autoencoder as the baseline model, a choice based on our previous research. The results show that the data components contribute differently to the verification process depending on the type of activity. We conclude, however, that a single sensor data source may not be enough for a robust authentication system; further research should propose a multimodal authentication system that utilizes and aggregates input streams from multiple sensors.
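The core decision in such a verification scheme, accepting or rejecting a motion window based on reconstruction error, can be sketched as follows. The threshold value and the plain-list data representation are illustrative assumptions; the paper's LSTM autoencoder itself is not reproduced here.

```python
def reconstruction_error(window, reconstructed):
    """Mean squared error between a motion window and its reconstruction."""
    return sum((a - b) ** 2 for a, b in zip(window, reconstructed)) / len(window)

def verify(window, reconstructed, threshold=0.05):
    """Accept the claimed identity when the autoencoder (trained on the
    enrolled user's motion data) reconstructs the window well."""
    return reconstruction_error(window, reconstructed) <= threshold
```

An autoencoder trained only on the enrolled user's motion tends to reconstruct that user's windows with low error and impostors' windows with high error, which is what the threshold exploits.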
9

Henn, V. "Sensory Input Modifying Central Motor Actions." Stereotactic and Functional Neurosurgery 49, no. 5 (1986): 251–55. http://dx.doi.org/10.1159/000100183.

Full text
10

Stolz, Thomas, Max Diesner, Susanne Neupert, Martin E. Hess, Estefania Delgado-Betancourt, Hans-Joachim Pflüger, and Joachim Schmidt. "Descending octopaminergic neurons modulate sensory-evoked activity of thoracic motor neurons in stick insects." Journal of Neurophysiology 122, no. 6 (December 1, 2019): 2388–413. http://dx.doi.org/10.1152/jn.00196.2019.

Full text
Abstract
Neuromodulatory neurons located in the brain can influence activity in locomotor networks residing in the spinal cord or ventral nerve cords of invertebrates. How inputs to and outputs of neuromodulatory descending neurons affect walking activity is largely unknown. With the use of matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and immunohistochemistry, we show that a population of dorsal unpaired median (DUM) neurons descending from the gnathal ganglion to thoracic ganglia of the stick insect Carausius morosus contains the neuromodulatory amine octopamine. These neurons receive excitatory input coupled to the legs’ stance phases during treadmill walking. Inputs did not result from connections with thoracic central pattern-generating networks, but, instead, most are derived from leg load sensors. In excitatory and inhibitory retractor coxae motor neurons, spike activity in the descending DUM (desDUM) neurons increased depolarizing reflexlike responses to stimulation of leg load sensors. In these motor neurons, descending octopaminergic neurons apparently functioned as components of a positive feedback network mainly driven by load-detecting sense organs. Reflexlike responses in excitatory extensor tibiae motor neurons evoked by stimulations of a femur-tibia movement sensor either are increased or decreased or were not affected by the activity of the descending neurons, indicating different functions of desDUM neurons. The increase in motor neuron activity is often accompanied by a reflex reversal, which is characteristic for actively moving animals. Our findings indicate that some descending octopaminergic neurons can facilitate motor activity during walking and support a sensory-motor state necessary for active leg movements. NEW & NOTEWORTHY We investigated the role of descending octopaminergic neurons in the gnathal ganglion of stick insects. 
The neurons become active during walking, mainly triggered by input from load sensors in the legs rather than pattern-generating networks. This report provides novel evidence that octopamine released by descending neurons on stimulation of leg sense organs contributes to the modulation of leg sensory-evoked activity in a leg motor control system.
More sources

Theses on the topic "Sensory input"

1

McNair, Nicolas A. "Input-specificity of sensory-induced neural plasticity in humans." Thesis, University of Auckland, 2008. http://hdl.handle.net/2292/3285.

Full text
Abstract
The aim of this thesis was to investigate the input-specificity of sensory-induced plasticity in humans. This was achieved by varying the characteristics of sine gratings so that they selectively targeted distinct populations of neurons in the visual cortex. In Experiments 1-3, specificity was investigated with electroencephalography using horizontally- and vertically-oriented sine gratings (Experiment 1) or gratings of differing spatial frequency (Experiments 2 & 3). Increases in the N1b potential were observed only for sine gratings that were the same in orientation or spatial frequency as that used as the tetanus, suggesting that the potentiation is specific to the visual pathways stimulated during the induction of the tetanus. However, the increase in the amplitude of the N1b in Experiment 1 was not maintained when tested again at 50 minutes post-tetanus. This may have been due to depotentiation caused by the temporal frequency of stimulus presentation in the first post-tetanus block. To try to circumvent this potential confound, immediate and maintained (tested 30 minutes post-tetanus) spatial-frequency-specific potentiation were tested separately in Experiments 2 and 3, respectively. Experiment 3 demonstrated that the increased N1b was maintained for up to half an hour post-tetanus. In addition, the findings from Experiment 1, as well as the pattern of results from Experiments 2 and 3, indicate that the potentiation must be occurring in the visual cortex rather than further upstream at the lateral geniculate nucleus. In Experiment 4 functional magnetic resonance imaging was used to more accurately localise where these plastic changes were taking place using sine gratings of differing spatial frequency. A small, focal post-tetanic increase in the blood-oxygen-level-dependent (BOLD) response was observed for the tetanised grating in the right temporo-parieto-occipital junction. 
For the non-tetanised grating, decreases in BOLD were found in the primary visual cortex and bilaterally in the cuneus and pre-cuneus. These decreases may have been due to inhibitory interconnections between neurons tuned to different spatial frequencies. These data indicate that tetanic sensory stimulation selectively targets and potentiates specific populations of neurons in the visual cortex.
2

Nargis, Sultana Mahbuba. "Sensory Input and Mental Imagery in Second Language Acquisition." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1418370678.

Full text
3

Kim, Jung-Kyong. "Sensory substitution learning using auditory input: Behavioral and neural correlates." Thesis, McGill University, 2011. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=96695.

Full text
Abstract
Sensory substitution refers to the replacement of one sensory input with another. This concept, originally developed to aid the blind, presents a scientific opportunity to study crossmodal perceptual learning and neural plasticity. Using a technique that translates vision into sound, the present dissertation examined sensory substitution learning. Four studies tested the hypotheses that mental representations of spatial information such as shape are abstract, and that they are based on involvement of common brain regions independently of sensory modality. Study 1 aimed to develop a training paradigm in auditory vision substitution. We examined the minimum amount of learning necessary to identify visual images using sound, and the effects of more extensive training on a wide range of stimuli to test the hypothesis that sensory substitution would be based on generalized crossmodal rule learning. Study 2 was a functional magnetic resonance imaging (fMRI) adaptation of study 1. Subjects were scanned before and after training during a task in which shape-coded sound was to be matched to visually presented shape. It was predicted that training would lead to sound-induced visual recruitment. Study 3 examined auditory touch substitution learning. Blindfolded sighted subjects were trained to recognize tactile shapes using shape-coded sounds and tested on a matching task. We also tested post-training transfer to vision. It was predicted that shape could be conveyed across sensory modalities. Study 4 was an fMRI adaptation of Study 3. Subjects were scanned before and after training during a task in which shape-coded sound was matched to tactually presented shape. Visual recruitment driven by non-visual inputs was predicted. Results showed that sighted people learned to extract visual or tactile patterns from auditory input. This learning was generalizable across stimuli within and across modalities, suggesting an abstract mental representation of shape. 
Auditory shape learning was associated with change in the functional network between the auditory cortex and the lateral occipital complex (LOC), a region known for visual shape processing. The auditory access to the LOC supports the notion that sensory specificity of the brain is not determined by the nature of the stimuli but rather by the task demand of the information to be processed.
4

Lovell, Nathan. "Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots." Griffith University, School of Information and Communication Technology, 2006. http://www4.gu.edu.au:8080/adt-root/public/adt-QGU20070911.152447.

Full text
Abstract
Image analysis, and its application to sensory input (computer vision) is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain specific knowledge in object recognition and the development of debugging facilities. This thesis examines each of these areas making several innovative contributions in each area. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection and vectorisation. 
In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive instead of imperative so, instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time consuming and frustrating task. Many development-support applications are available for specific applications. We present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is that of Robotic soccer, in the international RoboCup Four-Legged league. We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself.
5

Xin, Yifei. "Exploring the Chinese Room: Parallel Sensory Input in Second Language Learning." University of Toledo / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1333762798.

Full text
6

Lovell, Nathan. "Machine Vision as the Primary Sensory Input for Mobile, Autonomous Robots." Thesis, Griffith University, 2006. http://hdl.handle.net/10072/367107.

Full text
Abstract
Image analysis, and its application to sensory input (computer vision) is a fairly mature field, so it is surprising that its techniques are not extensively used in robotic applications. The reason for this is that, traditionally, robots have been used in controlled environments where sophisticated computer vision was not necessary, for example in car manufacturing. As the field of robotics has moved toward providing general purpose robots that must function in the real world, it has become necessary that the robots be provided with robust sensors capable of understanding the complex world around them. However, when researchers apply techniques previously studied in image analysis literature to the field of robotics, several difficult problems emerge. In this thesis we examine four reasons why it is difficult to apply work in image analysis directly to real-time, general purpose computer vision applications. These are: improvement in the computational complexity of image analysis algorithms, robustness to dynamic and unpredictable visual conditions, independence from domain specific knowledge in object recognition and the development of debugging facilities. This thesis examines each of these areas making several innovative contributions in each area. We argue that, although each area is distinct, improvement must be made in all four areas before vision will be utilised as the primary sensory input for mobile, autonomous robotic applications. In the first area, the computational complexity of image analysis algorithms, we note the dependence of a large number of high-level processing routines on a small number of low-level algorithms. Therefore, improvement to a small set of highly utilised algorithms will yield benefits in a large number of applications. In this thesis we examine the common tasks of image segmentation, edge and straight line detection and vectorisation. 
In the second area, robustness to dynamic and unpredictable conditions, we examine how vision systems can be made more tolerant to changes of illumination in the visual scene. We examine the classical image segmentation task and present a method for illumination independence that builds on our work from the first area. The third area is the reliance on domain-specific knowledge in object recognition. Many current systems depend on a large amount of hard-coded domain-specific knowledge to understand the world around them. This makes the system hard to modify, even for slight changes in the environment, and very difficult to apply in a different context entirely. We present an XML-based language, the XML Object Definition (XOD) language, as a solution to this problem. The language is largely descriptive instead of imperative so, instead of describing how to locate objects within each image, the developer simply describes the properties of the objects. The final area is the development of support tools. Vision system programming is extremely difficult because large amounts of data are handled at a very fast rate. If the system is running on an embedded device (such as a robot) then locating defects in the code is a time consuming and frustrating task. Many development-support applications are available for specific applications. We present a general purpose development-support tool for embedded, real-time vision systems. The primary case study for this research is that of Robotic soccer, in the international RoboCup Four-Legged league. We utilise all of the research of this thesis to provide the first illumination-independent object recognition system for RoboCup. Furthermore we illustrate the flexibility of our system by applying it to several other tasks and to marked changes in the visual environment for RoboCup itself. Thesis (PhD), School of Information and Communication Technology, Griffith University.
7

Ortman, Robert L. "Sensory input encoding and readout methods for in vitro living neuronal networks." Thesis, Georgia Institute of Technology, 2012. http://hdl.handle.net/1853/44856.

Full text
Abstract
Establishing and maintaining successful communication stands as a critical prerequisite for achieving the goals of inducing and studying advanced computation in small-scale living neuronal networks. The following work establishes a novel and effective method for communicating arbitrary "sensory" input information to cultures of living neurons, living neuronal networks (LNNs), consisting of approximately 20,000 rat cortical neurons plated on microelectrode arrays (MEAs) containing 60 electrodes. The sensory coding algorithm determines a set of effective codes (symbols), comprised of different spatio-temporal patterns of electrical stimulation, such that the LNN consistently produces a unique response to each individual symbol. The algorithm evaluates random sequences of candidate electrical stimulation patterns for evoked-response separability and reliability via a support vector machine (SVM)-based method and, employing the separability results as a fitness metric, a genetic algorithm subsequently constructs subsets of highly separable symbols (input patterns). The system sustained input/output (I/O) bit rates of 16-20 bits per second at a 10% symbol error rate over periods ranging from approximately ten minutes to over ten hours. To further evaluate the resulting code sets' performance, I used the system to encode approximately ten hours of sinusoidal input into stimulation patterns that the algorithm selected and was able to recover the original signal with a normalized root-mean-square error of 20-30% using only the recorded LNN responses and trained SVM classifiers. Response variations over the course of several hours observed in the results of the sine wave I/O experiment suggest that the LNNs may retain some short-term memory of the previous input sample and undergo neuroplastic changes in the context of repeated stimulation with sensory coding patterns identified by the algorithm.
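The selection loop this abstract describes (keep only stimulation symbols whose evoked responses can be told apart reliably) can be caricatured in a few lines. The thesis scores separability with an SVM; as a lightweight stand-in, this sketch uses a nearest-centroid classifier on synthetic "evoked response" vectors, so both the data and the classifier choice are assumptions for illustration only.

```python
import random

# Score how separable the evoked responses of candidate symbols are.
# The thesis uses an SVM for this; here a nearest-centroid classifier
# stands in, applied to synthetic response vectors.

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def separability(responses_by_symbol):
    """Fraction of responses closest to their own symbol's centroid."""
    cents = {s: centroid(r) for s, r in responses_by_symbol.items()}
    correct = total = 0
    for s, resps in responses_by_symbol.items():
        for r in resps:
            d = {k: sum((a - b) ** 2 for a, b in zip(r, c))
                 for k, c in cents.items()}
            correct += min(d, key=d.get) == s   # nearest centroid wins
            total += 1
    return correct / total

random.seed(0)
# Two well-separated symbols: responses cluster around different means.
resp = {
    "A": [[random.gauss(0.0, 0.1) for _ in range(4)] for _ in range(20)],
    "B": [[random.gauss(1.0, 0.1) for _ in range(4)] for _ in range(20)],
}
score = separability(resp)
```

A genetic algorithm like the one in the thesis would use such a score as its fitness metric, keeping symbol subsets whose score stays near 1.0.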
8

Chakrabarty, Arnab. "Role of sensory input in structural plasticity of dendrites in adult neuronal networks." Diss., lmu, 2013. http://nbn-resolving.de/urn:nbn:de:bvb:19-155241.

Full text
9

Zhao, Yifan. "Language Learning through Dialogs: Mental Imagery and Parallel Sensory Input in Second Language Learning." University of Toledo / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1396634043.

Full text
10

MacBride, Claire Ann. "Mental Imagery as a Substitute for Parallel Sensory Input in the Field of SLA." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1525379740507044.

Full text
More sources

Books on the topic "Sensory input"

1

Proebster, Walter E. Peripherie von Informationssystemen: Technologie und Anwendung : Eingabe, Tastatur, Sensoren, Sprache etc. : Ausgabe, Drucker, Bildschirm, Anzeigen etc. : externe Speicher, Magnetik, Optik etc. Berlin: Springer-Verlag, 1987.

Search full text
2

AIPR Workshop (26th 1997 Washington, D.C.). Exploiting new image sources and sensors: 26th AIPR Workshop, 15-17 October 1997, Washington, D.C. Edited by Selander J. Michael 1952-, Society of Photo-optical Instrumentation Engineers., and AIPR Executive Committee. Bellingham, Wash: SPIE, 1998.

Search full text
3

Tyagi, Amit Kumar. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2021.

Search full text
7

Tyagi, Amit, and Shamila Mohammed. Multimedia and Sensory Input for Augmented, Mixed, and Virtual Reality. IGI Global, 2020.

Search full text
8

Stoneley, Sarah, and Simon Rinald. Sensory loss. Edited by Patrick Davey and David Sprigings. Oxford University Press, 2018. http://dx.doi.org/10.1093/med/9780199568741.003.0047.

Full text
Abstract
Sensory disturbance can either be a complete loss (anaesthesia) or a reduction (hypoaesthesia) in the ability to perceive the sensory input. Dysaesthesia is an abnormal increase in the perception of normal sensory stimuli. Hyperalgesia is an increased sensitivity to normally painful stimuli, and allodynia is the perception of usually innocuous stimuli as painful. A complete loss of sensation is likely to be due to a central nervous system problem, while a tingling/paraesthesia (large fibre) or burning/temperature (small fibre) sensation is likely due to an acquired peripheral nervous system problem. Shooting, electric-shock-like pains suggest radicular pathology; a tight-band sensation suggests spinal cord dysfunction. Positive sensory symptoms are usually absent in inherited neuropathies, even in the context of significant deficits on examination. This chapter describes the clinical approach to patients with sensory symptoms. Common patterns of sensory loss and their causes are described.
9

Strayer. Lose Weight by Decreasing Sensory Input: A Revolutionary Mind-Body Approach. Dorrance Publishing Co., Inc., 2004.

Search full text
10

Thoonsen, Monique, and Carmen Lamp. Sensory Solutions in the Classroom. Jessica Kingsley Publishers, 2022. https://doi.org/10.5040/9781805014836.

Full text
Abstract
Every teacher knows them: the students who are continuously balancing on their chair legs or who prefer to hide in their hoodies all day long. These students use all kinds of tricks to stay focused, as they are under- or over-responsive to sensory input and are trying to restore their balance. Children who struggle with processing sensory input can experience a wide range of symptoms, including hypersensitivity to sound, sight and touch, poor fine motor skills and easy distractibility. Using this accessible, science-based guide, school staff can support these students by understanding their symptoms and how they impact their learning. Teachers can learn to look at students in a different way: through so-called 'SPi glasses', introduced in the book. With these glasses on, you learn to recognize behaviours linked to sensory processing and respond quickly, easily and with more understanding, without using a diagnosis, medication or therapy. The techniques provided help children feel settled and soothed at school, enabling them to learn and communicate better. Creating the perfect learning environment for all students (a sensory-supportive classroom), this tried and tested guide is an essential tool for teachers (with or without prior knowledge of SPD) to better support and understand their students and their sensory needs.
More sources

Book chapters on the topic "Sensory input"

1

Wells-Jensen, Sheri. "Cognition, Sensory Input, and Linguistics." In Xenolinguistics, 138–51. London: Routledge, 2023. http://dx.doi.org/10.4324/9781003352174-13.

Full text
2

Stein, Wolfgang. "Sensory Input to Central Pattern Generators." In Encyclopedia of Computational Neuroscience, 2668–76. New York, NY: Springer New York, 2015. http://dx.doi.org/10.1007/978-1-4614-6675-8_465.

Full text
3

Johansson, Roland S. "Sensory Input and Control of Grip." In Novartis Foundation Symposia, 45–63. Chichester, UK: John Wiley & Sons, Ltd., 2007. http://dx.doi.org/10.1002/9780470515563.ch4.

Full text
4

Stein, Wolfgang. "Sensory Input to Central Pattern Generators." In Encyclopedia of Computational Neuroscience, 1–11. New York, NY: Springer New York, 2014. http://dx.doi.org/10.1007/978-1-4614-7320-6_465-3.

Full text
5

Stein, Wolfgang. "Sensory Input to Central Pattern Generators." In Encyclopedia of Computational Neuroscience, 1–10. New York, NY: Springer New York, 2020. http://dx.doi.org/10.1007/978-1-4614-7320-6_465-4.

Full text
6

Strösslin, Thomas, Christophe Krebser, Angelo Arleo, and Wulfram Gerstner. "Combining Multimodal Sensory Input for Spatial Learning." In Artificial Neural Networks — ICANN 2002, 87–92. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_15.

Full text
7

Bullock, Theodore H. "The Comparative Neurology of Expectation: Stimulus Acquisition and Neurobiology of Anticipated and Unanticipated Input." In Sensory Biology of Aquatic Animals, 269–84. New York, NY: Springer New York, 1988. http://dx.doi.org/10.1007/978-1-4612-3714-3_10.

Full text
8

Bereiter, D. A., E. J. DeMaria, W. C. Engeland, and D. S. Gann. "Endocrine Responses to Multiple Sensory Input Related to Injury." In Advances in Experimental Medicine and Biology, 251–63. Boston, MA: Springer US, 1988. http://dx.doi.org/10.1007/978-1-4899-2064-5_20.

Full text
9

Clark, Lauren. "Sensory Awareness – Understanding Your Unique Brain Response to Sensory Input from the World Around You." In Das menschliche Büro - The human(e) office, 179–85. Wiesbaden: Springer Fachmedien Wiesbaden, 2021. http://dx.doi.org/10.1007/978-3-658-33519-9_9.

Full text
10

Katori, Yuichi. "Brain-Inspired Reservoir Computing Models." In Photonic Neural Networks with Spatiotemporal Dynamics, 259–78. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-5072-0_13.

Full text
Abstract
This chapter presents an overview of brain-inspired reservoir computing models for sensory-motor information processing in the brain. These models are based on the idea that the brain processes information using a large population of interconnected neurons, where the dynamics of the system can amplify, transform, and integrate incoming signals. We discuss the reservoir predictive coding model, which uses predictive coding to explain how the brain generates expectations regarding sensory input and processes incoming signals. This model incorporates a reservoir of randomly connected neurons that can amplify and transform sensory inputs. Moreover, we describe the reservoir reinforcement learning model, which explains how the brain learns to make decisions based on rewards or punishments received after performing a certain action. This model uses a reservoir of randomly connected neurons to represent various possible actions and their associated rewards. The reservoir dynamics allow the brain to learn which actions lead to the highest reward. We then present an integrated model that combines these two reservoir computing models based on predictive coding and reinforcement learning. This model demonstrates how the brain integrates sensory information with reward signals to learn the most effective actions for a given situation. It also explains how the brain uses predictive coding to generate expectations about future sensory inputs and accordingly adjusts its actions. Overall, brain-inspired reservoir computing models provide a theoretical framework for understanding how the brain processes information and learns to make decisions. These models have the potential to revolutionize fields such as artificial intelligence and neuroscience by advancing our understanding of the brain and inspiring new technologies.
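The reservoir idea in this chapter can be caricatured in a few lines: a fixed, randomly connected recurrent network nonlinearly transforms an input stream, and only a linear readout would be trained. A minimal echo-state-style sketch follows; the network size, gains, leak rate, and sinusoidal "sensory" input are illustrative assumptions, not values from the chapter.

```python
import math
import random

# Fixed random reservoir driven by a sinusoidal input stream.
random.seed(1)
N = 20
W = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]  # recurrent weights
w_in = [random.uniform(-1.0, 1.0) for _ in range(N)]                    # input weights

def step(state, u, leak=0.5):
    """One leaky-integrator update of the reservoir state for input u."""
    new = []
    for i in range(N):
        pre = w_in[i] * u + sum(W[i][j] * state[j] for j in range(N))
        new.append((1 - leak) * state[i] + leak * math.tanh(pre))
    return new

state = [0.0] * N
trace = []                                  # activity of one reservoir unit
for t in range(50):
    u = math.sin(2 * math.pi * t / 25)      # slow sinusoidal "sensory" input
    state = step(state, u)
    trace.append(state[0])
```

In a full model, a linear readout trained on such state traces would implement the prediction or action-value estimate; the reservoir itself stays untrained.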

Conference proceedings on the topic "Sensory input"

1

Bounou, Oumayma, Jean Ponce, and Justin Carpentier. "Learning System Dynamics from Sensory Input under Optimal Control Principles." In 2024 IEEE 63rd Conference on Decision and Control (CDC), 1885–92. IEEE, 2024. https://doi.org/10.1109/cdc56724.2024.10886191.

Full text
2

Morcos, Michael, Edward Bachelder, Martine Godfroy-Cooper, Spencer Fishman, and Umberto Saetti. "Full-Body Haptic and Spatial Audio Cueing Algorithms for Augmented Pilot Perception." In Vertical Flight Society 80th Annual Forum & Technology Display, 1–18. The Vertical Flight Society, 2024. http://dx.doi.org/10.4050/f-0080-2024-1179.

Full text
Abstract
This paper illustrates the development, implementation, and testing of full-body haptic and spatial audio cueing algorithms for augmented pilot perception. Cueing algorithms are developed for roll-axis compensatory tracking tasks in which the pilot acts on the displayed error between a desired input and the comparable vehicle output motion to produce a control action. The error is displayed to the pilot using multiple cueing modalities: visual, haptic, audio, and combinations of these. For the visual and combined visual haptic/audio modalities, visual cues are also considered in degraded visual environments (DVE). Full-body haptic and spatial audio algorithms based on a proportional-derivative (PD) compensation strategy on the tracking error are found to provide satisfactory pilot-vehicle system (PVS) performance for the task in consideration in the absence of visual cueing, and to improve PVS performance in DVE when used in combination with visual feedback. These results are consistent with previous studies on the use of secondary perceptual cues for augmentation of human perception. Taken together, these results indicate that using secondary sensory cues such as full-body haptics and spatial audio to augment pilot perception can lead to improved or partially restored PVS performance when primary sensory cues like vision are impaired or denied.
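The PD compensation strategy on the tracking error amounts to one line: cue intensity proportional to the displayed error plus its rate of change. A sketch follows; the gains, timestep, and error samples are illustrative assumptions, not values from the paper.

```python
# Cue intensity from the roll tracking error via proportional-derivative
# (PD) compensation: kp scales the error itself, kd its rate of change.

def pd_cue(error, prev_error, dt, kp=1.0, kd=0.2):
    """Cue strength for one sample of the displayed tracking error."""
    return kp * error + kd * (error - prev_error) / dt

dt = 0.01                        # 100 Hz update rate (assumed)
errors = [0.0, 0.1, 0.3, 0.2]    # roll tracking error samples (rad, invented)
cues = [pd_cue(errors[i], errors[i - 1], dt) for i in range(1, len(errors))]
```

The derivative term dominates while the error grows and flips sign once the error starts closing, which is what gives PD-shaped cues their anticipatory feel.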
3

Evans, Richard, Matko Bošnjak, Lars Buesing, Kevin Ellis, David Pfau, Pushmeet Kohli, and Marek Sergot. "Making Sense of Raw Input (Extended Abstract)." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/799.

Full text
Abstract
How should a machine intelligence perform unsupervised structure discovery over streams of sensory input? One approach to this problem is to cast it as an apperception task. Here, the task is to construct an explicit interpretable theory that both explains the sensory sequence and also satisfies a set of unity conditions, designed to ensure that the constituents of the theory are connected in a relational structure. However, the original formulation of the apperception task had one fundamental limitation: it assumed the raw sensory input had already been parsed using a set of discrete categories, so that all the system had to do was receive this already-digested symbolic input, and make sense of it. But what if we don't have access to pre-parsed input? What if our sensory sequence is raw unprocessed information? The central contribution of this paper is a neuro-symbolic framework for distilling interpretable theories out of streams of raw, unprocessed sensory experience. First, we extend the definition of the apperception task to include ambiguous (but still symbolic) input: sequences of sets of disjunctions. Next, we use a neural network to map raw sensory input to disjunctive input. Our binary neural network is encoded as a logic program, so the weights of the network and the rules of the theory can be solved jointly as a single SAT problem. This way, we are able to jointly learn how to perceive (mapping raw sensory information to concepts) and apperceive (combining concepts into declarative rules).
4

Jeon, Soo. "State Estimation for Kinematic Model Over Lossy Network." In ASME 2010 Dynamic Systems and Control Conference. ASMEDC, 2010. http://dx.doi.org/10.1115/dscc2010-4297.

Full text
Abstract
The major benefit of the kinematic Kalman filter (KKF), i.e., state estimation based on a kinematic model, is that it is immune to parameter variations and unknown disturbances regardless of the operating conditions. In carrying out complex motion tasks such as coordinated manipulation among multiple machines, some of the motion variables measured by sensors may only be available through the communication layer, which requires formulating the optimal state estimator subject to a lossy network. In contrast to standard dynamic systems, the kinematic model used in the KKF relies on sensory data not only for the output but also for the process input. This paper studies how packet dropout occurring at the input sensor as well as the output sensor affects the performance of the KKF. When the output sensory data are delivered through the lossy network, it has been shown that the mean error covariance of the KKF is bounded for any non-zero packet arrival rate. On the other hand, if the input sensory data are subject to a lossy network, the Bernoulli dropout model results in an unbounded mean error covariance. A more practical strategy is to adopt the previous input estimate in case the current packet is dropped. For each packet dropout model, the stochastic characteristics of the mean error covariance are analyzed and compared. Simulation results are presented to illustrate the analytical results and to compare the performance of the time-varying (optimal) filter gain with that of the static (sub-optimal) filter gain.
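The input-side dropout situation the abstract analyzes can be sketched directly: a kinematic model integrates an input measurement delivered over a lossy network, and when a packet is dropped (Bernoulli model) the practical strategy described is to reuse the previous input estimate rather than assume zero input. All rates and signals below are synthetic illustrations, not the paper's simulation setup.

```python
import random

# Integrate a kinematic model (velocity from measured acceleration) while
# input packets arrive over a lossy network with Bernoulli dropouts.
# On a dropout, the last successfully received input sample is held.

random.seed(2)
dt, p_arrive = 0.01, 0.7     # sample time and packet arrival probability
true_v = est_v = 0.0         # true and estimated velocity
last_u = 0.0                 # last successfully received input sample
for k in range(200):
    u = 1.0                         # constant true acceleration (synthetic)
    true_v += u * dt                # true kinematics
    if random.random() < p_arrive:  # input packet delivered this step
        last_u = u
    est_v += last_u * dt            # hold-previous-input on dropout
```

With the hold strategy, the estimate only lags during the initial run of dropped packets; assuming zero input instead would inject an error at every dropout.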
5

Hill, Chris, Casey Lee Hunt, Sammie Crowder, Brett Fiedler, Emily B. Moore, and Ann Eisenberg. "Investigating Sensory Extensions as Input for Interactive Simulations." In TEI '23: Seventeenth International Conference on Tangible, Embedded, and Embodied Interaction. New York, NY, USA: ACM, 2023. http://dx.doi.org/10.1145/3569009.3573108.

Full text
6

Wurdemann, Helge A., Evangelos Georgiou, Lei Cui, and Jian S. Dai. "SLAM Using 3D Reconstruction via a Visual RGB and RGB-D Sensory Input." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47735.

Full text
Abstract
This paper investigates the simultaneous localization and mapping (SLAM) problem by exploiting the Microsoft Kinect™ sensor array and an autonomous mobile robot capable of self-localization. Together they cover the major features of SLAM: mapping, sensing, locating, and modeling. The Kinect™ sensor array provides a dual camera output of RGB, using a CMOS camera, and RGB-D, using a depth camera. The sensors are mounted on the KCLBOT, an autonomous, nonholonomic, two-wheel maneuverable mobile robot. The mobile robot platform has the ability to self-localize and perform navigation maneuvers to traverse to set target points using intelligent processes. The target point for this operation is a fixed coordinate position, which will be the goal for the mobile robot to reach, taking into consideration the obstacles in the environment, which are represented in a 3D spatial model. After a calibration routine, images extracted from the sensor are used to produce a 3D reconstruction of the traversable environment for the mobile robot to navigate. Using the constructed 3D model, the autonomous mobile robot follows a polynomial-based nonholonomic trajectory with obstacle avoidance. The experimental results demonstrate the cost-effectiveness of this off-the-shelf sensor array. The results show the effectiveness of producing a 3D reconstruction of an environment and the feasibility of using the Microsoft Kinect™ sensor for mapping, sensing, locating, and modeling, enabling the implementation of SLAM on this type of platform.
7

Kruijff, Ernst, Gerold Wesche, Kai Riege, Gernot Goebbels, Martijn Kunstman, and Dieter Schmalstieg. "Tactylus, a pen-input device exploring audiotactile sensory binding." In the ACM symposium. New York, New York, USA: ACM Press, 2006. http://dx.doi.org/10.1145/1180495.1180557.

Full text
8

Wakatabe, Ryo, Yasuo Kuniyoshi, and Gordon Cheng. "O(log n) algorithm for forward kinematics under asynchronous sensory input." In 2017 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2017. http://dx.doi.org/10.1109/icra.2017.7989291.

Full text
9

Richards, Deborah. "Intimately intelligent virtual agents: knowing the human beyond sensory input." In ICMI '17: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3139491.3139505.

Full text
10

Connor, Jack, Jordan Nowell, Benjamin Champion, and Matthew Joordens. "Analysis of Robotic Fish Using Swarming Rules with Limited Sensory Input." In 2019 14th Annual Conference System of Systems Engineering (SoSE). IEEE, 2019. http://dx.doi.org/10.1109/sysose.2019.8753879.

Full text

Reports on the topic "Sensory input"

1

Parker, Michael, Alex Stott, Brian Quinn, Bruce Elder, Tate Meehan, and Sally Shoop. Joint Chilean and US mobility testing in extreme environments. Engineer Research and Development Center (U.S.), November 2021. http://dx.doi.org/10.21079/11681/42362.

Full text
Abstract
Vehicle mobility in cold and challenging terrains is of interest to both the US and Chilean Armies. Mobility in winter conditions is highly vehicle dependent with autonomous vehicles experiencing additional challenges over manned vehicles. They lack the ability to make informed decisions based on what they are “seeing” and instead need to rely on input from sensors on the vehicle, or from Unmanned Aerial Systems (UAS) or satellite data collections. This work focuses on onboard vehicle Controller Area Network (CAN) Bus sensors, driver input sensors, and some externally mounted sensors to assist with terrain identification and overall vehicle mobility. Analysis of winter vehicle/sensor data collected in collaboration with the Chilean Army in Lonquimay, Chile during July and August 2019 will be discussed in this report.
2

Madsen, Jens, Nikhil Kuppa, and Lucas Parra. The Brain, Body, and Behaviour Dataset - Neural Engineering Lab, CCNY. Fcp-indi, 2025. https://doi.org/10.15387/fcp_indi.retro.bbbd.

Full text
Abstract
When humans engage with video, their brain and body interact in response to sensory input. To investigate these interactions, we recorded and are releasing a dataset from N=178 participants across five experiments featuring short online educational videos. This dataset comprises approximately 110 hours of multimodal data including electrocardiogram (ECG), heart rate, respiration, breathing rate, pupil size, electrooculogram (EOG), gaze position, saccades, blinks, fixations, head movement, and electroencephalogram (EEG). Participants viewed 3-6 videos (mean total duration: 28±5 min) to test attentional states (attentive vs. distracted), memory retention (multiple-choice questions), learning scenarios (incidental vs. intentional), and an intervention (monetary incentive). Demographic data, ADHD self-report (ASRS), and working memory assessments (digit span) were collected. Basic statistics and noteworthy effects: increased alpha power in a distracted condition, broadband EEG power increases from posterior to anterior scalp, increased blink-rate, and decreased saccade-rate in distracted and intervention conditions. All modalities are time-aligned with stimuli and standardized using BIDS, making the dataset valuable for researchers investigating attention, memory, and learning in naturalistic settings.
3

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, July 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text
Abstract
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue: An expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, a color sensor, an electronic sniffer for odor detection, a refractometer, and a scale (mass). Data were analyzed and provided input for five classification models. Chlorophyll from fluorescence was found to give the best estimation of ripeness stage, while the combination of machine vision and firmness from impact performed best for quality sorting. A new algorithm was developed to estimate and minimize training size for supervised classification. A new criterion was established to choose a training set such that a recurrent auto-associative memory neural network is stabilized. Moreover, this method provides for rapid and accurate updating of the classifier over growing seasons, production environments, and cultivars. Different classification approaches (parametric and non-parametric) for grading were examined. Statistical methods were found to be as accurate as neural networks in grading. Classification models by voting did not enhance the classification significantly. A hybrid model that incorporated heuristic rules and either a numerical classifier or a neural network was found to be superior in classification accuracy, with half the required processing of the numerical classifier or neural network alone. In Israel: A multi-sensing approach utilizing non-destructive sensors was developed. Shape, color, stem identification, surface defects, and bruises were measured using a color image processing system.
Flavor parameters (sugar, acidity, volatiles) and ripeness were measured using a near-infrared system and an electronic sniffer. Mechanical properties were measured using three sensors: drop impact, resonance frequency, and cyclic deformation. Classification algorithms for quality sorting of fruit based on multi-sensory data were developed and implemented. The algorithms included a dynamic artificial neural network, a back-propagation neural network, and multiple linear regression. Results indicated that classification based on multiple sensors may be applied in real-time sorting and can improve overall classification. Advanced image processing algorithms were developed for shape determination, bruise and stem identification, and general color and color homogeneity. An unsupervised method was developed to extract the necessary vision features. The primary advantage of the algorithms developed is their ability to learn to determine the visual quality of almost any fruit or vegetable with no need for specific modification and no a priori knowledge. Moreover, since there is no assumption as to the type of blemish to be characterized, the algorithm is capable of distinguishing between stems and bruises. This enables sorting of fruit without knowing the fruit's orientation. A new algorithm for on-line clustering of data was developed. The algorithm's adaptability is designed to overcome some of the difficulties encountered when incrementally clustering sparse data, and it preserves information even with memory constraints. Large quantities of data (many images) of high dimensionality (due to multiple sensors), with new information arriving incrementally (a function of the temporal dynamics of any natural process), can now be processed. Furthermore, since the learning is done on-line, it can be implemented in real-time. The methodology developed was tested to determine the external quality of tomatoes based on visual information.
An improved, stable model for color sorting that does not require recalibration for each season was developed. Excellent classification results were obtained for both color and firmness classification. Results indicated that maturity classification can be obtained using a drop-impact and a vision sensor in order to predict the storability and marketing of harvested fruits. In conclusion: We have been able to define quantitatively the critical parameters in the quality sorting and grading of both fresh-market cantaloupes and tomatoes. We have been able to accomplish this using nondestructive measurements and in a manner consistent with expert human grading and with market acceptance. This research constructed and used large databases of both commodities for comparative evaluation and optimization of expert system, statistical, and/or neural network models. The models developed in this research were successfully tested and should be applicable to a wide range of other fruits and vegetables. These findings are valuable for the development of on-line grading and sorting of agricultural produce through the incorporation of multiple measurement inputs that rapidly define quality in an automated manner, consistent with human graders and inspectors.
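The multi-sensor grading idea can be illustrated with a toy classifier: combine two synthetic sensor readings (say, a firmness score and a color score) and train a tiny logistic model by gradient descent to separate two quality grades. The project itself used larger models (back-propagation networks, regression, hybrid rule-based classifiers); the features, labels, and learning rate below are invented for illustration.

```python
import math
import random

# Synthetic two-sensor readings for two quality grades (all data invented):
# grade 1 fruit cluster around (0.8, 0.7), grade 0 around (0.3, 0.2).
random.seed(3)
data = ([((random.gauss(0.8, 0.05), random.gauss(0.7, 0.05)), 1) for _ in range(30)]
        + [((random.gauss(0.3, 0.05), random.gauss(0.2, 0.05)), 0) for _ in range(30)])

w, b = [0.0, 0.0], 0.0

def predict(x):
    """Probability that a sample belongs to the higher grade."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):               # plain stochastic gradient descent on log-loss
    for x, y in data:
        g = predict(x) - y         # gradient of log-loss w.r.t. the logit
        w[0] -= 0.5 * g * x[0]
        w[1] -= 0.5 * g * x[1]
        b -= 0.5 * g

accuracy = sum((predict(x) > 0.5) == bool(y) for x, y in data) / len(data)
```

The point of the sketch is only the fusion step: each extra sensor adds a feature dimension, and the classifier, whatever its form, learns the grade boundary in that joint space.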
4

Jones, Scott B., Shmuel P. Friedman, and Gregory Communar. Novel streaming potential and thermal sensor techniques for monitoring water and nutrient fluxes in the vadose zone. United States Department of Agriculture, January 2011. http://dx.doi.org/10.32747/2011.7597910.bard.

Full text
Abstract
The “Novel streaming potential (SP) and thermal sensor techniques for monitoring water and nutrient fluxes in the vadose zone” project ended Oct. 30, 2015, after an extension to complete travel and intellectual exchange of ideas and sensors. A significant component of this project was the development and testing of the Penta-needle Heat Pulse Probe (PHPP) in addition to testing of the streaming potential concept, both aimed at soil water flux determination. The PHPP was successfully completed and shown to provide soil water flux estimates down to 1 cm day⁻¹ with altered heat input and timing as well as use of larger heater needles. The PHPP was developed by Scott B. Jones at Utah State University with a plan to share sensors with Shmulik P. Friedman, the ARO collaborator. Delays in completion of the PHPP resulted in limited testing at USU and a late delivery of sensors (Sept. 2015) to Dr. Friedman. Two key aspects of the subsurface water flux sensor development that delayed the availability of the PHPP sensors were the addition of integrated electrical conductivity measurements (available in February 2015) and resolution of bugs in the microcontroller firmware (problems resolved in April 2015). Furthermore, testing of the streaming potential method with a wide variety of non-polarizable electrodes at both institutions was not successful as a practical measurement tool for water flux due to numerous sources of interference and the M.S. student in Israel terminated his program prematurely for personal reasons. In spite of these challenges, the project funded several undergraduate students building sensors and several master’s students and postdocs participating in theory and sensor development and testing. Four peer-reviewed journal articles have been published or submitted to date and six oral/poster presentations were also delivered by various authors associated with this project. 
We intend to continue testing the "new generation" PHPP probes at both USU and at the ARO resulting in several additional publications coming from this follow-on research. Furthermore, Jones is presently awaiting word on an internal grant application for commercialization of the PHPP at USU.
APA, Harvard, Vancouver, ISO, and other styles
5

Kuznetsov, Victor, Vladislav Litvinenko, Egor Bykov, and Vadim Lukin. A program for determining the area of the object entering the IR sensor grid, as well as determining the dynamic characteristics. Science and Innovation Center Publishing House, April 2021. http://dx.doi.org/10.12731/bykov.0415.15042021.

Full text
Abstract
Currently, the dynamic characteristics of objects are evaluated using a large number of devices in the form of chronographs, which consist of various optical, thermal and laser sensors. Among the problems of these devices are the lack of recording of the received data and the inability to account for the trajectory of the object flying through the sensor area, including its trajectory as it approaches the device frame. The signal received from the infrared sensors is recorded in a separate document in txt format, in the form of a table. When the document is opened, data is read from the current position of the input data stream into the specified list by an argument in accordance with the given condition. Reading the data produces an array of N columns, constructed in such a way that the first column holds time values and columns 2...N hold voltage values. The algorithm uses loops that delete array rows where the threshold value is exceeded in more than two columns, as well as rows where the threshold level was not exceeded at all. The modified array is split into two new arrays, each containing data from a different sensor frame. An array with the coordinates of the centers of the sensor operation zones was created in order to apply the Pythagorean theorem in three-dimensional space, which is needed to calculate the exact distance between the zones. The time is determined by the difference in the response of the first and second sensor frames. Knowing the path and the time, the exact speed of the object can be calculated. For visualization, the oscillograms of each sensor channel were displayed, and a chronograph model was created. The chronograph model highlights in purple the areas where the threshold was exceeded.
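The distance-and-time step the abstract describes (Pythagorean theorem in three dimensions between triggered zone centers, divided by the difference in frame response times) can be sketched as follows; the function name, coordinate units, and example values are illustrative assumptions, not taken from the cited work:

```python
import math

def object_speed(zone_a, zone_b, t_a, t_b):
    """Estimate the speed of an object crossing two sensor frames.

    zone_a, zone_b: (x, y, z) centers of the triggered sensor zones
                    in each frame (assumed here to be in meters).
    t_a, t_b:       response times of the first and second frame (seconds).
    """
    # Pythagorean theorem in 3-D: straight-line path between zone centers
    path = math.sqrt(sum((b - a) ** 2 for a, b in zip(zone_a, zone_b)))
    # Flight time is the difference in response of the two frames
    dt = t_b - t_a
    return path / dt

# Zones 0.5 m apart along x, frames triggered 2 ms apart -> 250 m/s
speed = object_speed((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 0.0, 0.002)
```

Using the zone-center coordinates rather than the frame positions is what lets the method account for oblique trajectories through the sensor grid.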
APA, Harvard, Vancouver, ISO, and other styles
6

Beshouri, Greg, and Bob Goffin. PR-309-15209-R01 Evaluation of NSCR Specific Models for Use in CEPM. Chantilly, Virginia: Pipeline Research Council International, Inc. (PRCI), January 2019. http://dx.doi.org/10.55274/r0011554.

Full text
Abstract
This 2015 NSCR project continues NSCR research started in 2009 and continued in 2011 under ERLE 2c, and combines it with OBD research started in 2008 and continued in 2011. The 2009 NSCR research concluded that downstream measurement of lambda, O2 and NOx is useful for understanding the performance of the entire package, evaluating compliance status and diagnosing system problems. However, that research also concluded that advanced signal conditioning and algorithms are required for unambiguous diagnostics, and that system diagnostics was complex and beyond the capabilities of typical technicians. The 2011 OBD project demonstrated that a model-based diagnostics approach could precisely detect and diagnose typical combustion faults on lean-burn engines. This 2015 project will specifically test and demonstrate the effectiveness of model-based NSCR diagnostics using upstream and downstream exhaust sensors and other typical sensor inputs.
APA, Harvard, Vancouver, ISO, and other styles
7

McMurtrey, Michael, Kunal Mondal, Joseph Bass, Kiyo Fujimoto, and Austin Biaggne. Report on plasma jet printer for sensor fabrication with process parameters optimized by simulation input. Office of Scientific and Technical Information (OSTI), September 2019. http://dx.doi.org/10.2172/1668670.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Alchanatis, Victor, Stephen W. Searcy, Moshe Meron, W. Lee, G. Y. Li, and A. Ben Porath. Prediction of Nitrogen Stress Using Reflectance Techniques. United States Department of Agriculture, November 2001. http://dx.doi.org/10.32747/2001.7580664.bard.

Full text
Abstract
Commercial agriculture has come under increasing pressure to reduce nitrogen fertilizer inputs in order to minimize potential nonpoint source pollution of ground and surface waters. This has resulted in increased interest in site-specific fertilizer management. One way to solve pollution problems would be to determine crop nutrient needs in real time, using remote detection, and regulate the fertilizer dispensed by an applicator. By detecting actual plant needs, only the additional nitrogen necessary to optimize production would be supplied. This research aimed to develop techniques for real-time assessment of the nitrogen status of corn using a mobile sensor, with the potential to regulate nitrogen application based on data from that sensor. Specifically, the research first attempted to determine the system parameters necessary to optimize reflectance spectra of corn plants as a function of growth stage, chlorophyll and nitrogen status. In addition, an adaptable multispectral sensor and the signal-processing algorithm to provide real-time, in-field assessment of corn nitrogen status were developed. Spectral characteristics of corn leaf reflectance were investigated in order to estimate the nitrogen status of the plants, using a commercial laboratory spectrometer. Statistical models relating leaf N and reflectance spectra were developed for both greenhouse and field plots. A basis was established for assessing nitrogen status using spectral reflectance from plant canopies. The combined effect of variety and N treatment was studied by measuring the reflectance of three varieties differing in characteristic leaf color under five different N treatments. The variety effect on the reflectance at 552 nm was not significant (α = 0.01), while canonical discriminant analysis showed promising results for distinguishing variety and N treatment using spectral reflectance.
Ambient illumination was found inappropriate for reliable, one-beam spectral reflectance measurement of the plant canopy due to the strong spectral lines of sunlight; artificial light was consequently used. For in-field N status measurement, a dark chamber was constructed to house the sensor along with artificial illumination. Two different approaches were tested: (i) use of spatially scattered artificial light, and (ii) use of a collimated artificial light beam. It was found that the collimated beam, along with a proper design of the sensor-beam geometry, yielded the best results in terms of reducing the noise due to variable background and maintaining the same distance from the sensor to the sample point of the canopy. A multispectral sensor assembly, based on a linear variable filter, was designed, constructed and tested. The assembly combined two sensors covering the range of 400 to 1100 nm, a mounting frame, and a field data acquisition system. Using the mobile dark chamber and the developed sensor, as well as an off-the-shelf sensor, the in-field nitrogen status of the plant canopy was measured. Statistical analysis of the acquired in-field data showed that the nitrogen status of the corn leaves can be predicted with a SEP (Standard Error of Prediction) of 0.27%. The stage of maturity of the crop affected the relationship between the reflectance spectrum and the nitrogen status of the leaves; the best prediction results were obtained when a separate model was used for each maturity stage. In-field assessment of the nitrogen status of corn leaves was successfully carried out by non-contact measurement of the reflectance spectrum. This technology is now mature enough to be incorporated in field implements for on-line control of fertilizer application.
APA, Harvard, Vancouver, ISO, and other styles
9

Baker, John L., James L. Olds, and Joel L. Davis. A Novel Approach to Large Scale Brain Network Models: An Algorithmic Model for Place Cell Emergence With Robotic Sensor Input. Fort Belvoir, VA: Defense Technical Information Center, June 2004. http://dx.doi.org/10.21236/ada425321.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Meiri, Noam, Michael D. Denbow, and Cynthia J. Denbow. Epigenetic Adaptation: The Regulatory Mechanisms of Hypothalamic Plasticity that Determine Stress-Response Set Point. United States Department of Agriculture, November 2013. http://dx.doi.org/10.32747/2013.7593396.bard.

Full text
Abstract
Our hypothesis was that postnatal stress exposure or sensory input alters brain activity, which induces acetylation and/or methylation on lysine residues of histone 3 and alters methylation levels in the promoter regions of stress-related genes, ultimately resulting in long-lasting changes in the stress-response set point. Therefore, the objectives of the proposal were: 1. To identify the levels of total histone 3 acetylation and different levels of methylation on lysine 9 and/or 14 during both heat and feed stress and challenge. 2. To evaluate the methylation and acetylation levels of histone 3 lysine 9 and/or 14 at the Bdnf promoter during both heat and feed stress and challenge. 3. To evaluate the levels of the relevant methyltransferases and transmethylases during infliction of stress. 4. To identify the specific localization of the cells which respond to both the specific histone modification and the enzyme involved, by applying each of the stressors in the hypothalamus. 5. To evaluate the physiological effects of antisense knockdown of Ezh2 on the stress responses. 6. To measure the level of CpG methylation in the promoter region of BDNF in thermal treatments and in free-fed, 12-hour fasted, and re-fed chicks during post-natal day 3, which is the critical period for feed-control establishment, and 10 days later to evaluate long-term effects. 7. To evaluate the phenotypic effect of antisense “knockdown” of the transmethylase DNMT3a. Background: The growing demand for improvements in poultry production requires an understanding of the mechanisms governing stress responses. Two of the major stressors affecting animal welfare, and hence the poultry industry in both the U.S. and Israel, are feed intake and thermal responses. Recently, it has been shown that the regulation of energy intake and expenditure, including feed intake and thermal regulation, resides in the hypothalamus and develops during a critical post-hatch period.
However, little is known about the regulatory steps involved. The hypothesis tested in this proposal was that epigenetic changes in the hypothalamus during post-hatch early development determine the stress-response set point for both feed and thermal stressors. The ambitious goals that were set for this proposal were met. It was established that both stressors, i.e. feed and thermal stress, can be manipulated during the critical period of development at day 3 to induce resilience to stress later in life. Specifically, it was established that unfavorable nutritional conditions or heat exposure during early developmental periods influence subsequent adaptability to those same stressful conditions. Furthermore, it was demonstrated that epigenetic marks on the promoters of genes involved in stress memory are altered both during stress and, as a result, later in life. Specifically, it was demonstrated that fasting and heat had an effect on methylation and acetylation of histone 3 at various lysine residues in the hypothalamus during exposure to stress on day 3 and during stress challenge on day 10. Furthermore, the enzymes that perform these modifications are altered both during stress conditioning and challenge. Finally, these modifications are both necessary and sufficient, since antisense "knockdown" of these enzymes affects histone modifications and, as a consequence, stress resilience. DNA methylation was also demonstrated at the promoters of genes involved in heat stress regulation and long-term resilience. It should be noted that the only goal we did not meet, for technical reasons, was No. 7. In conclusion: the outcome of this research may provide information for the improvement of stress responses in high-yield poultry breeds using epigenetic adaptation approaches during critical periods in the course of early development, in order to improve animal welfare even under suboptimal environmental conditions.
APA, Harvard, Vancouver, ISO, and other styles