Academic literature on the topic 'Artificial vision system'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Artificial vision system.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Artificial vision system"

1

Fink, Wolfgang, and Mark A. Tarbell. "Artificial vision support system (AVS2) for improved prosthetic vision." Journal of Medical Engineering & Technology 38, no. 8 (2014): 385–95. http://dx.doi.org/10.3109/03091902.2014.957869.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Loresco, Pocholo James M., and Elmer Dadios. "Vision-Based Lettuce Growth Stage Decision Support System Using Artificial Neural Networks." International Journal of Machine Learning and Computing 10, no. 4 (2020): 534–41. http://dx.doi.org/10.18178/ijmlc.2020.10.4.969.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Mauledoux, Mauricio M., Carlos Hernandez, Crhistian C. G. Segura, and Oscar F. Aviles. "Object Tracking System Based on Artificial Vision Algorithms." Applied Mechanics and Materials 713-715 (January 2015): 420–23. http://dx.doi.org/10.4028/www.scientific.net/amm.713-715.420.

Full text
Abstract:
This document describes the architecture of a tracking system with two degrees of freedom (pan and tilt), endowed with artificial vision to follow the path of a moving object. The mechanism, with a fixed base, was designed to cover lateral and vertical ranges of movement similar to the human visual field, its depth limited by the resolution of the camera. The object that defines the motion path presents color uniformity across its surface, which becomes the main feature on which the recognition and tracking algorithm is based. Tracking is performed by reducing the error between the object position and the reference axis of the camera. Several tests were carried out to evaluate the control and visual systems and to illustrate the behavior of the proposed methods.
APA, Harvard, Vancouver, ISO, and other styles
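The pan-and-tilt error-reduction loop this abstract describes can be illustrated with a minimal proportional-control sketch. This is not the authors' implementation: the binary object mask (standing in for the color-segmentation step) and the gain value are assumptions for illustration.

```python
import numpy as np

KP = 0.05  # proportional gain (illustrative value, not from the paper)

def centroid(mask):
    """Centroid (x, y) of True pixels in a boolean object mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def pan_tilt_step(mask):
    """One control step: the pixel error between the object centroid and the
    image centre is turned into pan/tilt increments by a proportional law."""
    h, w = mask.shape
    c = centroid(mask)
    if c is None:
        return 0.0, 0.0              # no object detected: hold position
    err_x = c[0] - (w - 1) / 2       # horizontal error in pixels
    err_y = c[1] - (h - 1) / 2       # vertical error in pixels
    return -KP * err_x, -KP * err_y  # commands that drive the error toward zero

# Example: a 100x100 frame with an 11x11 object centred at (70, 50)
mask = np.zeros((100, 100), dtype=bool)
mask[45:56, 65:76] = True
pan, tilt = pan_tilt_step(mask)
```

Repeating this step on each frame reduces the error between the object position and the camera's reference axis, which is the tracking principle the abstract describes.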
4

Seo, Chang-Jin. "Artificial Vision System using Human Visual Information Processing." Journal of Digital Convergence 12, no. 11 (2014): 349–55. http://dx.doi.org/10.14400/jdc.2014.12.11.349.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Sanmartín and Briceño. "Development of an Artificial Vision System for Underwater Vehicles." Proceedings 21, no. 1 (2019): 1. http://dx.doi.org/10.3390/proceedings2019021001.

Full text
Abstract:
Beyond a certain depth there is no light, which poses the main obstacle to the use of optical systems under water. The underwater vision system developed here is therefore composed of the cameras and a set of underwater lights that allow the system to work properly. These are integrated with the navigation system through the Robot Operating System (ROS) framework, which handles the acquisition and processing of the information used to support navigation and which is also essential for reconnaissance missions.
APA, Harvard, Vancouver, ISO, and other styles
6

Faber, Tracy L. "Next generation artificial vision systems: reverse engineering the human visual system." Academic Radiology 16, no. 5 (2009): 642. http://dx.doi.org/10.1016/j.acra.2009.01.013.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Bourbakis, Nikolaos G., Mike Papazoglou, and George Alexiou. "Multiprocessor vision system." Microprocessors and Microsystems 14, no. 9 (1990): 573–82. http://dx.doi.org/10.1016/0141-9331(90)90092-a.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

MERTOGUNO, J. S., and N. G. BOURBAKIS. "KYDON VISION SYSTEM: THE ADAPTIVE LEARNING MODEL." International Journal on Artificial Intelligence Tools 04, no. 04 (1995): 453–69. http://dx.doi.org/10.1142/s021821309500022x.

Full text
Abstract:
In this paper, an adaptive learning model for an autonomous vision system with a multi-layer architecture, called Kydon, is presented, modeled, and analyzed. In particular, two critical points (deletion and saturation) on the learning curve are evaluated; these points represent two extreme states of the learning process. The Kydon architecture consists of 'k' layers of array processors: the lowest layers perform lower-level processing, and the rest perform higher-level processing. The interconnectivity of the PEs in each array is based on a full hexagonal mesh structure. Kydon uses graph models to represent and process the knowledge extracted from the image. The knowledge base of Kydon is distributed among its PEs. A unique model for an evolving knowledge base has been developed especially for Kydon in order to provide it with some intelligence properties.
APA, Harvard, Vancouver, ISO, and other styles
9

STEFANOV, S. Z. "DAILY ARTIFICIAL DISPATCHER LONG-TERM VISION." New Mathematics and Natural Computation 09, no. 01 (2013): 65–75. http://dx.doi.org/10.1142/s1793005713500051.

Full text
Abstract:
The long-term vision of an electrical power system (EPS) "Daily Artificial Dispatcher" is expressed as intentions for secure and effective operation a day ahead, transferred into modes via memory and adaptation. This long-term vision is interpreted as a story for a nine- or ten-year-old child about his efforts not to scatter his friends.
APA, Harvard, Vancouver, ISO, and other styles
10

Komati, Karin S., and Alberto F. De Souza. "Using Weightless Neural Networks for Vergence Control in an Artificial Vision System." Applied Bionics and Biomechanics 1, no. 1 (2003): 21–31. http://dx.doi.org/10.1155/2003/626283.

Full text
Abstract:
This paper presents a methodology we have developed and used to implement an artificial binocular vision system capable of emulating the vergence of eye movements. This methodology involves using weightless neural networks (WNNs) as building blocks of artificial vision systems. Using the proposed methodology, we have designed several architectures of WNN-based artificial vision systems, in which images captured by virtual cameras are used for controlling the position of the ‘foveae’ of these cameras (high-resolution region of the images captured). Our best architecture is able to control the foveae vergence movements with average error of only 3.58 image pixels, which is equivalent to an angular error of approximately 0.629°.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Artificial vision system"

1

Luwes, Nicolaas Johannes. "Artificial intelligence machine vision grading system." Thesis, Bloemfontein : Central University of Technology, Free State, 2014. http://hdl.handle.net/11462/35.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wan, Chuen L. "Traffic representation by artificial neural system and computer vision." Thesis, Edinburgh Napier University, 1994. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.261024.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Cross, Nicola. "An attentional what-where vision system using artificial neural networks." Thesis, University of Warwick, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.342611.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hasan, N. K. "An adaptive artificial vision system for recognition and handling of industrial parts." Thesis, Brunel University, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.374430.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Santos, Joaquim Vasco Oliveira dos. "Development of a robotic vision system with a modular architecture." Master's thesis, Universidade de Aveiro, 2014. http://hdl.handle.net/10773/14028.

Full text
Abstract:
Master's degree in Computer and Telematics Engineering.
Vision systems are becoming a very active research area, changing rapidly as new applications keep arising. Applications using image processing are getting more common as time moves forward: converting documents to text, cameras detecting faces and smiles, object recognition, and others are found in everyday devices such as cameras and phones. With the development of vision systems, robotics is an area that can benefit greatly from abilities such as object detection and the extraction of information such as object position and orientation. The purpose of this thesis is the development of a modular vision system to be used by the robotic soccer players of team CAMBADA, a participant in the RoboCup Middle Size League (MSL). The modular vision system can also be easily exported to other robotic projects that possess vision, a way of seeing the world around them. The vision system comprises modules with specific tasks, such as image acquisition and object detection, visual debugging, and remote configuration of the system's parameters. It uses the UAVision library for image acquisition and information extraction. A remote application to interact with and configure the vision system was also developed using the Qt4 application programming interface. This remote application interacts with the server module of the modular vision system over the network using the Transmission Control Protocol. In order to transfer images and vision-system parameters, a library was developed to handle TCP, built on the POSIX sockets application programming interface. This library is used both in the modular vision system server and in the remote application.
The main objectives of this thesis have been accomplished, and part of this work is already being used by the CAMBADA team.
APA, Harvard, Vancouver, ISO, and other styles
6

Trifan, Alina Liliana. "Development of a vision system for humanoid robots." Master's thesis, Universidade de Aveiro, 2011. http://hdl.handle.net/10773/7173.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tabandeh, Amir S. "Artificial intelligence techniques and concepts for integrating a robot vision system with a solid modeller." Thesis, University of Cambridge, 1988. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.253728.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Fuentes, Hurtado Félix José. "A system for modeling social traits in realistic faces with artificial intelligence." Doctoral thesis, Universitat Politècnica de València, 2018. http://hdl.handle.net/10251/101943.

Full text
Abstract:
Humans have specially developed their perceptual capacity to process faces and to extract information from facial features. Using our behavioral capacity to perceive faces, we make attributions such as personality, intelligence or trustworthiness based on facial appearance, and these often have a strong impact on social behavior in different domains. Faces therefore play a central role in our relationships with other people and in our everyday decisions. With the popularization of the Internet, people participate in many kinds of virtual interactions, from social experiences, such as games, dating or communities, to professional activities, such as e-commerce, e-learning, e-therapy or e-health. These virtual interactions manifest the need for faces that represent the actual people interacting in the digital world: thus the concept of the avatar emerged. Avatars are used to represent users in different scenarios and scopes, from personal life to professional situations. In all these cases, the appearance of the avatar may have an effect not only on another person's opinion and perception but also on self-perception, influencing the subject's own attitude and behavior. In fact, avatars are often employed to elicit impressions or emotions through non-verbal expressions, can improve online interactions, and are even useful for educational or therapeutic purposes. Being able to generate realistic-looking avatars which elicit a certain set of desired social impressions is therefore a very interesting and novel tool, useful in a wide range of fields. This thesis proposes a novel method for generating realistic-looking faces with an associated social profile comprising 15 different impressions. For this purpose, several partial objectives were accomplished. 
First, facial features were extracted from a database of real faces and grouped by appearance in an automatic and objective manner, employing dimensionality reduction and clustering techniques. This yielded a taxonomy which allows faces to be systematically and objectively codified according to the previously obtained clusters. Furthermore, the use of the proposed method is not restricted to facial features, and it should be possible to extend it to automatically group any other kind of images by appearance. Second, the existing relationships among the different facial features and the social impressions were found. This helps to establish how much a certain facial feature influences the perception of a given social impression, allowing the designer to focus on the most important feature or features when designing faces with a sought social perception. Third, an image-editing method was implemented to generate a completely new, realistic face from just a face definition using the aforementioned facial-feature taxonomy. Finally, a system to generate realistic faces with an associated social-trait profile was developed, which fulfills the main objective of the present thesis. The main novelty of this work resides in the ability to work with several trait dimensions at a time on realistic faces. Thus, in contrast with previous works that use noisy images, or cartoon-like or synthetic faces, the system developed in this thesis generates realistic-looking faces at chosen levels of fifteen impressions, namely Afraid, Angry, Attractive, Babyface, Disgusted, Dominant, Feminine, Happy, Masculine, Prototypical, Sad, Surprised, Threatening, Trustworthy and Unusual. The promising results obtained in this thesis will allow further investigation of how to model social perception in faces using a completely new approach. 
Fuentes Hurtado, FJ. (2018). A system for modeling social traits in realistic faces with artificial intelligence [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/101943
APA, Harvard, Vancouver, ISO, and other styles
9

Okapuu-von, Veh Alexander. "Sound and vision : audiovisual aspects of a virtual-reality personnel-training system." Thesis, McGill University, 1996. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=23752.

Full text
Abstract:
This thesis describes a prototype virtual-reality (VR) training system, ESOPE-VR, designed and implemented for Hydro-Québec by graduate students at McGill University and École Polytechnique de Montréal. The project was motivated by the necessity of providing a realistic training environment for substation operators, while ensuring their safety and the network's integrity at all times.
With the simulator, trainees can carry out all the switching operations necessary for their work in absolute safety, while staying in a realistic environment. A speech-recognition system controls the training session, while audio immersion adds a dimension of realism to the virtual world. An expert system validates the trainee's operations at all times, and a steady-state power-flow simulator recalculates network parameters. The automatic conversion of single-line diagrams enables the construction of three-dimensional models of substation equipment.
The present thesis focuses on the speech-command, audio, video, and network aspects of the system. A survey of current VR applications and an overview of VR technology are followed by a summary of the ESOPE-VR project.
APA, Harvard, Vancouver, ISO, and other styles
10

Dias, René Octavio Queiroz. "A computer vision system for recognizing plant species in the wild using convolutional neural networks." Repositório Institucional da UnB, 2017. http://repositorio.unb.br/handle/10482/24650.

Full text
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Mecânica, 2017.
Plant species classification has been a recurrent problem in the Computer Vision community. Visually, plants exhibit very high variability, due mainly to seasonal effects, age, and backgrounds. Older classification systems struggled with these variations, and their databases used simpler images containing only dismembered plant parts (such as leaves and flowers) against a white background. With the advent of Deep Neural Networks, which have proved highly competitive as general-purpose classifiers, the objective is to test them on a more specific-purpose database, one that can further strain these classifiers by asking them to distinguish similar plant species in very different poses. A database was built that focuses on how the ordinary user photographs plants. This new database, named Plantas, was designed to have few constraints. Initially, it contains 50 different species commonly used in gardening and more than 33,000 images, taken on site and gathered from the Internet. The database was then trained with recent state-of-the-art techniques: Encoding Methods and Deep Neural Networks. 
Among the Encoding Methods, three encoders are used: Bag of Visual Words (BoVW), Fisher Vectors (FV), and Vectors of Locally Aggregated Descriptors (VLAD). Encoding Methods have two phases, unsupervised learning followed by supervised learning, and the process is similar in all of them. In the unsupervised phase, SIFT descriptors are obtained, a sample of these descriptors is drawn, a Principal Component Analysis (PCA) projection is learned, and k-means aggregates the features into k groups, which give the number of words. Here the training of BoVW and VLAD diverges from that of Fisher Vectors: for the former two, a k-d tree is created to speed up the subsequent search, while for Fisher Vectors the groups initialize Gaussian Mixture Models. In the supervised phase, an image goes through SIFT descriptor extraction, sampling, and PCA, and then, for each feature in the image, the group to which it belongs is looked up. For BoVW, this yields a histogram counting each word in the image that has an equivalent in the dictionary. For VLAD, the deviation from the mean of these words is obtained, and Fisher Vectors add, beyond the deviation from the mean, the deviation from the covariance. These are the final descriptors, which are subsequently trained with a Linear Support Vector Machine (Linear-SVM). 
As for the neural networks, several recent architectures are trained, such as AlexNet, CaffeNet, GoogLeNet, and ResNet. They contain techniques that exploit the spatial structure of images, such as convolutional layers, and regularization techniques that prevent overfitting (especially common in networks with many parameters), such as Dropout and Batch Normalization. They were also the first networks to use an activation function that does not suffer from saturation, the Rectified Linear Unit (ReLU), which replaced sigmoids and hyperbolic tangents. Using these architectures, experiments assess how they respond to the new database, which specifications yield the best accuracy, and why one choice is better than another. More recent activation functions, such as the Parametric Rectified Linear Unit (PReLU) and the Exponential Linear Unit (ELU), were also tested, along with fine-tuning techniques in which the parameters of a network trained on one database are reused on another, also known as transfer learning.
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Artificial vision system"

1

Cross, Nicola. An attentional what-where vision system using artificial neural networks. typescript, 1999.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hasan, Nameer Kamal. An adaptive artificial vision system for recognition and handling of industrial parts. Brunel University, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kamal, A. K. M. Mostafa. Development of a flexible vision system using artificial intelligence for robotic assembly tasks. University of Birmingham, 1993.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Sousa, Leonel A., ed. Bioelectronic vision: Retina models, evaluation metrics, and system design. World Scientific, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Timmis, Jonathan, ed. Artificial immune systems: A new computational intelligence approach. Springer, 2002.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rutkowski, Leszek. New Soft Computing Techniques for System Modeling, Pattern Classification and Image Processing. Springer Berlin Heidelberg, 2004.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Song, Dezhen. Sharing a Vision: Systems and Algorithms for Collaboratively-Teleoperated Robotic Cameras. Springer Berlin Heidelberg, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Orban, Guy A., and Hans-Hellmut Nagel, eds. Artificial and Biological Vision Systems. Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77840-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Boyer, Kim L. Perceptual Organization for Artificial Vision Systems. Springer US, 2000.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boyer, Kim L., and Sudeep Sarkar, eds. Perceptual Organization for Artificial Vision Systems. Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4413-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Artificial vision system"

1

Falabella, Paulo, Hossein Nazari, Paulo Schor, James D. Weiland, and Mark S. Humayun. "Argus® II Retinal Prosthesis System." In Artificial Vision. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Potapov, Alexey, Sergey Rodionov, Maxim Peterson, Oleg Scherbakov, Innokentii Zhdanov, and Nikolai Skorobogatko. "Vision System for AGI: Problems and Directions." In Artificial General Intelligence. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97676-1_18.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ballard, Dana, and Nathan Sprague. "Modeling the Brain’s Operating System." In Brain, Vision, and Artificial Intelligence. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Grittani, Gianmichele, Gilberto Gallinelli, and José Ramírez. "FutBot: A Vision System for Robotic Soccer." In Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44399-1_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nogueira, Sergio, Yassine Ruichek, Franck Gechter, Abderrafiaa Koukam, and Francois Charpillet. "An Artificial-Vision Based Environment Perception System." In Advances for In-Vehicle and Mobile Systems. Springer US, 2007. http://dx.doi.org/10.1007/978-0-387-45976-9_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Antunes, Cláudia, and J. P. Martins. "Knowledge Acquisition System to Support Low Vision Consultation." In Artificial Intelligence in Medicine. Springer Berlin Heidelberg, 2001. http://dx.doi.org/10.1007/3-540-48229-6_46.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Li, Zhong-rong, and Da-peng Zhang. "An Intelligent Vision System for Robot." In Applications of Artificial Intelligence in Engineering Problems. Springer Berlin Heidelberg, 1986. http://dx.doi.org/10.1007/978-3-662-21626-2_10.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kim, Jin Whan, Hyuk Gyu Cho, and Eui Young Cha. "A Study on Enhanced Dynamic Signature Verification for the Embedded System." In Brain, Vision, and Artificial Intelligence. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123_42.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sehairi, Kamal, Cherrad Benbouchama, El Houari Kobzili, and Fatima Chouireb. "Real-Time Implementation of Human Action Recognition System Based on Motion Analysis." In Artificial Intelligence and Computer Vision. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46245-5_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Park, Kang Ryoung. "Vision-Based Facial and Eye Gaze Tracking System." In KI 2004: Advances in Artificial Intelligence. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30221-6_34.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Artificial vision system"

1

Andes, David K., James C. Witham, and Michael D. Miles. "Missileborne artificial vision system (MAVIS)." In SPIE's International Symposium on Optical Engineering and Photonics in Aerospace Sensing, edited by Steven K. Rogers and Dennis W. Ruck. SPIE, 1994. http://dx.doi.org/10.1117/12.169965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Van Eenwyk, Jonathan, Arvin Agah, and Gerhard W. Cibis. "Automated human vision assessment using computer vision and artificial intelligence." In 2008 IEEE International Conference on System of Systems Engineering (SoSE). IEEE, 2008. http://dx.doi.org/10.1109/sysose.2008.4724184.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sereno, Juan Esteban, Freddy Bolanos, and Monica Vallejo. "Artificial vision system for differential multiples robots." In 2016 XII Congreso de Tecnologia, Aprendizaje y Ensenanza de la Electronica (XII Technologies Applied to Electronics Teaching Conference) (TAEE). IEEE, 2016. http://dx.doi.org/10.1109/taee.2016.7528379.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Prabagar, Ajanthwin, N. Sri Madhavaraja, S. Arunmozhi, and K. Suresh Manic. "Artificial Vision Based Smart Urban Parking System." In 2021 International Conference on System, Computation, Automation and Networking (ICSCAN). IEEE, 2021. http://dx.doi.org/10.1109/icscan53069.2021.9526383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ramadoss, Balaji, Jin-Choon Ng, Andreas Koschan, and Mongi A. Abidi. "Scene inspection using a robotic imaging system." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.515092.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Heger, Thomas, and Madhukar C. Pandit. "Optical wear assessment system for grinding tools." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.515158.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Weiss, Michael, Arnulf Schiller, Paul O'Leary, Ewald Fauster, and Peter Schalk. "Development of a distributed vision system for industrial conditions." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.514944.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Kohler, Sophie, and Ernest Hirsch. "Cognitive intelligent sensory system for vision-based quality control." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.515118.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Turner, Chris, Hamed Sari-Sarraf, Eric F. Hequet, and Sunho Lee. "Preliminary validation of a fabric smoothness assessment system." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.514953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Yamaguchi, Tsuyoshi, Masafumi Tominaga, and Hiroyasu Koshimizu. "Interactive facial caricaturing system based on eye camera." In Quality Control by Artificial Vision, edited by Kenneth W. Tobin, Jr. and Fabrice Meriaudeau. SPIE, 2003. http://dx.doi.org/10.1117/12.515136.

Full text
APA, Harvard, Vancouver, ISO, and other styles