Academic literature on the topic 'Visión artificial'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Visión artificial.'
Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Journal articles on the topic "Visión artificial"
Mota-Delfin, C., C. Juárez-González, and J. C. Olguín-Rojas. "Clasificación de manzanas utilizando visión artificial y redes neuronales artificiales." Ingeniería y Región 20 (December 28, 2018): 52–57. http://dx.doi.org/10.25054/22161325.1917.
Rojas Hernández, Rogelio, Ramón Silva Ortigoza, and María Aurora Molina Vilchis. "La Visión Artificial en la Robótica." Polibits 35 (January 31, 2007): 22–28. http://dx.doi.org/10.17562/pb-35-5.
Garrote, Estibaliz, José A. Gutiérrez, Juan J. Andueza, and Javier García-Tejedor. "Visión artificial: Nuevas vías para el reciclaje." Informador Técnico 66 (November 21, 2003): 6. http://dx.doi.org/10.23850/22565035.851.
Nadal, J. "Visión artificial: una alternativa terapéutica de futuro." Archivos de la Sociedad Española de Oftalmología 93, no. 6 (June 2018): 261–62. http://dx.doi.org/10.1016/j.oftal.2018.03.002.
Cruz, Henry. "La visión por computadora y las futuras aplicaciones tecnológicas en diversos escenarios." Revista de la Academia del Guerra del Ejército Ecuatoriano 12, no. 1 (July 14, 2021): 5. http://dx.doi.org/10.24133/age.n12.2019.13.
Santa María Pinedo, John Clark, Carlos Armando Ríos López, Carlos Rodríguez Grández, and Cristian Werner García Estrella. "Reconocimiento de patrones de imágenes a través de un sistema de visión artificial en MATLAB." Revista Científica de Sistemas e Informática 1, no. 2 (July 18, 2021): 15–26. http://dx.doi.org/10.51252/rcsi.v1i2.131.
Cerezo-Sánchez, Jorge, Griselda Saldaña-González, Mario M. Bustillo-Díaz, and Apolonio Ata-Pérez. "Sistema de planificación de trayectorias utilizando visión artificial." Research in Computing Science 128, no. 1 (December 31, 2016): 141–48. http://dx.doi.org/10.13053/rcs-128-1-13.
Rosado Rodrigo, Pilar, Eva Figueras Ferrer, and Ferran Reverter Comes. "Intersecciones Semánticas entre Visión Artificial y Mirada Artística." Barcelona Investigación Arte Creación 2, no. 1 (January 30, 2014): 1. http://dx.doi.org/10.17583/brac.2014.v2i1.a891.1-54.
Figueredo Avila, Gustavo Andrés. "Clasificación de la manzana Royal Gala usando visión artificial y redes neuronales artificiales." Research in Computing Science 114, no. 1 (December 31, 2016): 23–32. http://dx.doi.org/10.13053/rcs-114-1-2.
Martinez Romo, Julio Cesar, Francisco Luna Rosas, Ricardo Mendoza Gonzalez, Valentin Lopez Rivas, and Mario Alberto Rodriguez Diaz. "Vehículo automáticamente guiado (AGV) por odometría y visión artificial." DYNA Ingenieria e Industria 92, no. 1 (2017): 490. http://dx.doi.org/10.6036/8420.
Dissertations / Theses on the topic "Visión artificial"
Gómez Bruballa, Raúl Álamo. "Exploiting the Interplay between Visual and Textual Data for Scene Interpretation." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/670533.
Full textLa experimentación en aprendizaje automático en escenarios controlados y con bases de datos estándares es necesaria para comparar el desempeño entre algoritmos evaluándolos en las mismas condiciones. Sin embargo, también en necesaria experimentación en cómo se comportan estos algoritmos cuando son entrenados con datos menos controlados y aplicados a problemas reales para indagar en cómo los avances en investigación pueden contribuir a nuestra sociedad. En esta tesis experimentamos con los algoritmos más recientes de visión por ordenador y procesado del lenguaje natural aplicándolos a la interpretación de escenas multimodales. En particular, investigamos en cómo la interpretación automática de imagen y texto se puede explotar conjuntamente para resolver problemas reales, enfocándonos en aprender de datos de redes sociales. Encaramos diversas tareas que implican información visual y textual, discutimos sus características y retos y exponemos nuestras conclusiones experimentales. Primeramente trabajamos en la detección de texto en imágenes. A continuación, trabajamos con publicaciones de redes sociales, usando las leyendas textuales de imágenes como supervisión para aprender características visuales, que aplicamos a la búsqueda de imágenes semántica con consultas multimodales. Después, trabajamos con imágenes de redes sociales geolocalizadas con etiquetas textuales asociadas, experimentando en cómo usar las etiquetas como supervisión, en búsqueda de imágenes sensible a localización, y en explotar la localización para el etiquetado de imágenes. Finalmente, encaramos un problema de clasificación específico de publicaciones de redes sociales formadas por una imagen y un texto: Clasificación de discurso del odio multimodal.
Machine learning experimentation under controlled scenarios and standard datasets is necessary to compare the performance of algorithms by evaluating all of them in the same setup. However, experimentation on how those algorithms perform on unconstrained data when applied to real-world problems is also necessary to ascertain how that research can contribute to our society. In this dissertation we experiment with the latest computer vision and natural language processing algorithms, applying them to multimodal scene interpretation. In particular, we investigate how image and text understanding can be jointly exploited to address real-world problems, focusing on learning from Social Media data. We address several tasks that involve image and textual information, discuss their characteristics and offer our experimental conclusions. First, we work on detection of scene text in images. Then, we work with Social Media posts, exploiting the captions associated with images as supervision to learn visual features, which we apply to multimodal semantic image retrieval. Subsequently, we work with geolocated Social Media images with associated tags, experimenting on how to use the tags as supervision, on location-sensitive image retrieval and on exploiting location information for image tagging. Finally, we work on a specific classification problem for Social Media publications consisting of an image and a text: multimodal hate speech classification.
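At inference time, the multimodal semantic image retrieval task this abstract mentions reduces to nearest-neighbour search in a joint image-text embedding space. The following is a minimal sketch of that idea only, not the author's implementation; the random embeddings, the 256-dimensional space and the additive combination of text and image queries are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned embeddings: in the thesis, visual features are
# trained so that images land near the text of their Social Media captions.
# Here they are random vectors purely for illustration.
image_embeddings = rng.normal(size=(1000, 256))  # database of 1000 images
text_query = rng.normal(size=256)                # embedded query text
image_query = rng.normal(size=256)               # embedded query image

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# A multimodal query: combine both modalities in the shared space (a simple
# additive choice; learned combinations are also possible).
query = l2_normalize(text_query + image_query)

# Rank the image database by cosine similarity to the query.
db = l2_normalize(image_embeddings, axis=1)
scores = db @ query
top10 = np.argsort(-scores)[:10]
print("Top-10 retrieved image ids:", top10)
```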
Salvi, Joaquim. "An approach to coded structured light to obtain three dimensional information." Doctoral thesis, Universitat de Girona, 1998. http://hdl.handle.net/10803/7714.
The stereo vision principle is based on obtaining the three dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pair of homologue points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. The epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, epipolar geometry does not solve the problem entirely, as many other considerations have to be taken into account. For example, some points have no correspondence at all, due to surface occlusion or simply because they project outside the scope of one of the cameras.
The interest of the thesis is focused on structured light, which is one of the most frequently used techniques for reducing the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its image as captured by a camera sensor. The deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces the use of computationally expensive algorithms to search for the correct matches.
In recent years, another structured light technique has increased in importance. This technique is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match. As each token of light is imaged by the camera, we only have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey on coded structured light, are reviewed and discussed. The work carried out in the frame of this thesis has permitted the presentation of a new coded structured light pattern which solves the correspondence problem uniquely and robustly. Uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, and of the more complicated measurement of moving objects. The technique can be used in both cases, as the pattern is coded in a single projection shot, so it can be used in several applications of robot vision.
Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the correspondence points could be well segmented from the captured image. Computer vision constitutes a huge problem and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
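As a worked illustration of the triangulation this abstract describes (textbook notation, not notation taken from the thesis): for two calibrated cameras, the pinhole model gives the projection of a world point, and for a rectified stereo pair the depth of a pair of homologue points follows directly from their disparity.

```latex
% Pinhole projection of a world point X into camera i (left/right); the
% intrinsics K_i and pose [R_i | t_i] are what camera calibration recovers.
s_i \, \mathbf{x}_i = K_i \, [R_i \mid \mathbf{t}_i] \, \mathbf{X},
\qquad i \in \{l, r\}

% For a rectified pair with focal length f and baseline b, homologue points
% with horizontal image coordinates x_l and x_r have disparity d, and the
% depth Z of the point follows directly:
Z = \frac{f \, b}{d}, \qquad d = x_l - x_r
```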
Carbonell Nuñez, Manuel. "Neural Information Extraction from Semi-structured Documents." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/671583.
Sectors such as fintech, legaltech and insurance process an inflow of millions of forms, invoices, ID documents, claims and similar every day. The success in the automation of these transactions depends on the ability to correctly digitize the textual content as well as to incorporate semantic understanding. This procedure, known as information extraction (IE), comprises the steps of localizing and recognizing text, identifying the named entities contained in it and optionally finding relationships among its elements. In this work we explore multi-task neural models at image and graph level to solve all steps in a unified way. While doing so we find benefits and limitations of these end-to-end approaches in comparison with sequential, separate methods.
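For orientation, the sequential baseline that such unified models are compared against looks roughly like the sketch below. Every function here is a hypothetical stand-in with toy regular-expression logic, not code or naming from the thesis.

```python
import re
from dataclasses import dataclass

@dataclass
class Entity:
    label: str
    text: str

# Step 1 (stand-in): text localization + recognition would normally be an
# OCR model; here we pretend the document image was already read.
def recognize_text(document_image: bytes) -> str:
    return "Invoice 2041 Total: 137.50 EUR Date: 2020-03-01"

# Step 2 (stand-in): named entity recognition via toy regexes instead of a
# learned tagger.
def extract_entities(text: str) -> list[Entity]:
    entities = []
    if m := re.search(r"Total:\s*([\d.]+\s*\w+)", text):
        entities.append(Entity("total", m.group(1)))
    if m := re.search(r"Date:\s*([\d-]+)", text):
        entities.append(Entity("date", m.group(1)))
    return entities

# Step 3 (stand-in): assemble the extracted fields; a real system would also
# link entities to each other (e.g. line items to amounts).
def extract(document_image: bytes) -> dict:
    text = recognize_text(document_image)
    return {e.label: e.text for e in extract_entities(text)}

print(extract(b""))  # {'total': '137.50 EUR', 'date': '2020-03-01'}
```

Errors in step 1 propagate to steps 2 and 3, which is the main motivation for the unified multi-task models the abstract describes.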
Murrugarra Ortiz, Lhester. "Sistema mecatrónico para determinar automáticamente las dimensiones de anchovetas usando visión artificial." Bachelor's thesis, Pontificia Universidad Católica del Perú, 2021. http://hdl.handle.net/20.500.12404/19706.
Research work
Rivera Mujica, Elvira del Carmen. "Supervisión y control de un proceso industrial autónomo de pintado aplicando lógica difusa y visión artificial." Bachelor's thesis, Universidad Ricardo Palma, 2014. http://cybertesis.urp.edu.pe/handle/urp/1175.
Full textValdivia, Arias César Javier. "Diseño de un sistema de visión artificial para la clasificación de chirimoyas basado en medidas." Master's thesis, Pontificia Universidad Católica del Perú, 2016. http://tesis.pucp.edu.pe/repositorio/handle/123456789/7849.
Thesis
Buendia Rios, Anghello Arturo. "Navegación autónoma de un vehículo pequeño en interiores empleando visión artificial y diferentes sensores." Master's thesis, Universidad Autónoma del Estado de México, 2017. http://hdl.handle.net/20.500.11799/68536.
CONACyT postgraduate scholarship, account no. 1530015
Nieto Gonzalez, Jose Luis. "Detección de incendios mediante identificación de humo con visión artificial en condiciones de iluminación variable." Master's thesis, Universidad Autónoma del Estado de México, 2018. http://hdl.handle.net/20.500.11799/95189.
Full textSobrado, Malpartida Eddie Ángel. "Sistema de visión artificial para el reconocimiento y manipulación de objetos utilizando un brazo robot." Master's thesis, Pontificia Universidad Católica del Perú, 2003. http://tesis.pucp.edu.pe/repositorio/handle/123456789/68.
Thesis
Wang, Yaxing. "Transferring and learning representations for image generation and translation." Doctoral thesis, Universitat Autònoma de Barcelona, 2020. http://hdl.handle.net/10803/669579.
Image generation is arguably one of the most attractive, compelling, and challenging tasks in computer vision. Among the methods which perform image generation, generative adversarial networks (GANs) play a key role. The most common image generation models based on GANs can be divided into two main approaches. The first one, called simply image generation, takes random noise as an input and synthesizes an image which follows the same distribution as the images in the training set. The second class, called image-to-image translation, aims to map an image from a source domain to one that is indistinguishable from those in the target domain. Image-to-image translation methods can further be divided into paired and unpaired approaches, based on whether they require paired data or not. In this thesis, we aim to address some challenges of both image generation and image-to-image translation. GANs rely heavily upon having access to vast quantities of data, and fail to generate realistic images from random noise when applied to domains with few images. To address this problem, we aim to transfer knowledge from a model trained on a large dataset (source domain) to one learned on limited data (target domain). We find that both GANs and conditional GANs can benefit from models trained on large datasets. Our experiments show that transferring the discriminator is more important than transferring the generator; using both the generator and the discriminator results in the best performance. We found, however, that this method suffers from overfitting, since we update all parameters to adapt to the target data. We therefore propose a novel architecture, tailored to address knowledge transfer to very small target domains, which effectively explores which part of the latent space is most related to the target domain. Additionally, the proposed method is able to transfer knowledge from multiple pretrained GANs. Although image-to-image translation has achieved outstanding performance, it still faces several problems. First, for translation between complex domains (such as translations between different modalities) image-to-image translation methods require paired data. We show that when only some of the pairwise translations have been seen during training, we can infer the remaining unseen translations (where training pairs are not available). We propose a new approach in which we align multiple encoders and decoders in such a way that the desired translation can be obtained by simply cascading the source encoder and the target decoder, even when they have not interacted during the training stage. Second, we address the issue of bias in image-to-image translation. Biased datasets unavoidably contain undesired changes, which are due to the fact that the target dataset has a particular underlying visual distribution. We use carefully designed semantic constraints to reduce the effects of the bias; the semantic constraint enforces the preservation of desired image properties. Finally, current approaches fail to generate diverse outputs or to perform scalable image transfer in a single model. To alleviate this problem, we propose a scalable and diverse image-to-image translation method. We employ random noise to control the diversity, and scalability is achieved by conditioning on the domain label.
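The transfer strategy this abstract reports on (initialize from a GAN pretrained on a large source domain, then fine-tune on a small target set) can be sketched as follows. The tiny architectures, file names, batch size and hyperparameters are placeholders, not the author's configuration.

```python
import torch
import torch.nn as nn

# Toy DCGAN-style networks, stand-ins for the pretrained source-domain GAN.
G = nn.Sequential(
    nn.ConvTranspose2d(100, 64, 4, 1, 0), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, 1, 0),
)

# Knowledge transfer: initialize BOTH networks from source-domain weights
# (the abstract reports that transferring the discriminator matters most and
# that transferring both works best). The checkpoint paths are hypothetical.
# G.load_state_dict(torch.load("source_generator.pt"))
# D.load_state_dict(torch.load("source_discriminator.pt"))

criterion = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

target_images = torch.randn(16, 3, 8, 8)  # stand-in for the small target set

for step in range(100):  # fine-tune on the target domain
    z = torch.randn(16, 100, 1, 1)
    fake = G(z)

    # Discriminator update: real target images vs. generated ones.
    opt_d.zero_grad()
    loss_d = criterion(D(target_images), torch.ones(16, 1, 1, 1)) + \
             criterion(D(fake.detach()), torch.zeros(16, 1, 1, 1))
    loss_d.backward()
    opt_d.step()

    # Generator update: fool the fine-tuned discriminator.
    opt_g.zero_grad()
    loss_g = criterion(D(fake), torch.ones(16, 1, 1, 1))
    loss_g.backward()
    opt_g.step()
```

Updating all parameters, as above, is exactly the setting the abstract says overfits on very small target domains, which motivates the more selective architecture the thesis proposes.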
Books on the topic "Visión artificial"
Gabel, Veit Peter, ed. Artificial Vision. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-41876-6.
Orban, Guy A., and Hans-Hellmut Nagel, eds. Artificial and Biological Vision Systems. Berlin, Heidelberg: Springer Berlin Heidelberg, 1992. http://dx.doi.org/10.1007/978-3-642-77840-7.
Lu, Huimin, and Yujie Li, eds. Artificial Intelligence and Computer Vision. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-46245-5.
De Gregorio, Massimo, Vito Di Maio, Maria Frucci, and Carlo Musio, eds. Brain, Vision, and Artificial Intelligence. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123.
Boyer, Kim L. Perceptual Organization for Artificial Vision Systems. Boston, MA: Springer US, 2000.
Zhou, Yi-Tong, and Rama Chellappa. Artificial Neural Networks for Computer Vision. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-2834-9.
Boyer, Kim L., and Sudeep Sarkar, eds. Perceptual Organization for Artificial Vision Systems. Boston, MA: Springer US, 2000. http://dx.doi.org/10.1007/978-1-4615-4413-5.
Full textRama, Chellappa, ed. Artificial neural networks for computer vision. New York: Springer-Verlag, 1992.
Nilsson, Nils J. Artificial Intelligence: A New Synthesis. Burlington: Elsevier Science, 1998.
Ayache, Nicholas. Artificial Vision for Mobile Robots: Stereo Vision and Multisensory Perception. Cambridge, Mass.: MIT Press, 1991.
Book chapters on the topic "Visión artificial"
Aliano, Antonio, Giancarlo Cicero, Hossein Nili, Nicolas G. Green, Pablo García-Sánchez, Antonio Ramos, Andreas Lenshof, et al. "Artificial Vision." In Encyclopedia of Nanotechnology, 141. Dordrecht: Springer Netherlands, 2012. http://dx.doi.org/10.1007/978-90-481-9751-4_100034.
Rathore, Sneh, Sahil Sharma, and Lisha Singh. "Drishti—Artificial Vision." In Lecture Notes in Electrical Engineering, 581–90. Singapore: Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-13-6772-4_50.
Fujikado, Takashi. "Retinal Prosthesis by Suprachoroidal-Transretinal Stimulation (STS), Japanese Approach." In Artificial Vision, 139–50. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_11.
Ayton, Lauren N., and Joseph Rizzo. "Assessing Patient Suitability and Outcome Measures in Vision Restoration Trials." In Artificial Vision, 3–8. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_1.
Ayton, Lauren N., Gregg J. Suaning, Nigel H. Lovell, Matthew A. Petoe, David A. X. Nayagam, Tamara-Leigh E. Brawn, and Anthony N. Burkitt. "Suprachoroidal Retinal Prostheses." In Artificial Vision, 125–38. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_10.
Walter, Peter. "A Fully Intraocular Approach for a Bi-Directional Retinal Prosthesis." In Artificial Vision, 151–61. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_12.
Li, Menghui, Yan Yan, Kaijie Wu, Yiliang Lu, Jingjing Sun, Yao Chen, Xinyu Chai, et al. "Penetrative Optic Nerve-Based Visual Prosthesis Research." In Artificial Vision, 165–76. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_13.
Kyada, Margee J., Nathaniel J. Killian, and John S. Pezaris. "Thalamic Visual Prosthesis Project." In Artificial Vision, 177–89. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_14.
Fernández, Eduardo, and Richard A. Normann. "CORTIVIS Approach for an Intracortical Visual Prostheses." In Artificial Vision, 191–201. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_15.
Troyk, Philip R. "The Intracortical Visual Prosthesis Project." In Artificial Vision, 203–14. Cham: Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-41876-6_16.
Conference papers on the topic "Visión artificial"
Reyes Duke, Alicia María, David A. López, and Adrián M. Mora. "Conteo, Monitoreo y Clasificación de Accesos Utilizando Visión Artificial." In The 18th LACCEI International Multi-Conference for Engineering, Education, and Technology: “Engineering, Integration, and Alliances for a Sustainable Development” “Hemispheric Cooperation for Competitiveness and Prosperity on a Knowledge-Based Economy”. Latin American and Caribbean Consortium of Engineering Institutions, 2020. http://dx.doi.org/10.18687/laccei2020.1.1.468.
Parra, Pablo, Teddy Negrete, and Nino Vega. "Determinación del Grado de Fermentación del Cacao mediante diferentes técnicas de visión artificial." In The 16th LACCEI International Multi-Conference for Engineering, Education, and Technology: “Innovation in Education and Inclusion”. Latin American and Caribbean Consortium of Engineering Institutions, 2018. http://dx.doi.org/10.18687/laccei2018.1.1.163.
Borja Borja, Mario G. "Sistema de posicionamiento con visión artificial para un brazo robótico articulado de seis grados mediante redes neuronales artificiales." In The 16th LACCEI International Multi-Conference for Engineering, Education, and Technology: “Innovation in Education and Inclusion”. Latin American and Caribbean Consortium of Engineering Institutions, 2018. http://dx.doi.org/10.18687/laccei2018.1.1.498.
García-Haro, Juan Miguel, Santiago Martínez, and Carlos Balaguer. "Detección de la orientación mediante visión artificial para el control de equilibrio en robots humanoides." In XXXIX Jornadas de Automática. Universidade da Coruña. Servizo de Publicacións, 2020. http://dx.doi.org/10.17979/spudc.9788497497565.0951.
Arroyo, Sebastian I., Felix Safar, and Damian Oliva. "Probabilidad de infracción de velocidad de vehículos utilizando visión artificial en cámaras de campo amplio." In 2016 IEEE Biennial Congress of Argentina (ARGENCON). IEEE, 2016. http://dx.doi.org/10.1109/argencon.2016.7585314.
Munera, Sandra, Francisca Hernandez, Nuria Aleixos, Sergio Cubero, and Jose Blasco. "Estudio de la evolución de la calidad de granada ‘Mollar de Elche’ durante su maduración usando sistemas de visión artificial." In X Congreso Ibérico de Agroingeniería = X Congresso Ibérico de Agroengenharia. Zaragoza: Servicio de Publicaciones Universidad, 2019. http://dx.doi.org/10.26754/c_agroing.2019.com.3419.
de Jódar Lázaro, Manuel, Antonio Madueño Luna, Alberto Lucas Pascual, Antonio Ruíz Canales, Jose Miguel Molina Martínez, Meritxell Justicia Segovia, and Montserrat Baena Sánchez. "Análisis en tiempo real del funcionamiento de la cadena de alimentación de las máquinas deshuesadoras de aceitunas mediante diagnosis por visión artificial y redes neuronales." In X Congreso Ibérico de Agroingeniería = X Congresso Ibérico de Agroengenharia. Zaragoza: Servicio de Publicaciones Universidad, 2019. http://dx.doi.org/10.26754/c_agroing.2019.com.3423.
Goudou, J. F., S. Maggio, and M. Fagno. "Artificial human vision camera." In SPIE Security + Defence, edited by Mark T. Gruneisen, Miloslav Dusek, John G. Rarity, Keith L. Lewis, Richard C. Hollins, Thomas J. Merlet, and Alexander Toet. SPIE, 2014. http://dx.doi.org/10.1117/12.2074129.
Chin, Kevin, and Derek Abbott. "Artificial color insect vision." In Photonics East '99, edited by Douglas W. Gage and Howie M. Choset. SPIE, 1999. http://dx.doi.org/10.1117/12.369254.
Full textShang, Junyuan, Tengfei Ma, Cao Xiao, and Jimeng Sun. "Pre-training of Graph Augmented Transformers for Medication Recommendation." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/825.
Full textReports on the topic "Visión artificial"
Brown, Christopher M., and Randal C. Nelson. Northeast Artificial Intelligence Consortium Annual Report - 1988 Parallel Vision. Volume 9. Fort Belvoir, VA: Defense Technical Information Center, October 1989. http://dx.doi.org/10.21236/ada276098.
Waxman, A. M., and R. K. Cunningham. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision. Fort Belvoir, VA: Defense Technical Information Center, June 1991. http://dx.doi.org/10.21236/ada238782.
Cunningham, Robert K., and Allen M. Waxman. Parametric Study of Diffusion-Enhancement Networks for Spatiotemporal Grouping in Real-Time Artificial Vision. Fort Belvoir, VA: Defense Technical Information Center, April 1993. http://dx.doi.org/10.21236/ada265065.
Rodriguez, Simon, Autumn Toney, and Melissa Flagg. Patent Landscape for Computer Vision: United States and China. Center for Security and Emerging Technology, September 2020. http://dx.doi.org/10.51593/20200054.
Hofer, Martin, Tomas Sako, Arturo Martinez Jr., Mildred Addawe, Joseph Bulan, Ron Lester Durante, and Marymell Martillan. Applying Artificial Intelligence on Satellite Imagery to Compile Granular Poverty Statistics. Asian Development Bank, December 2020. http://dx.doi.org/10.22617/wps200432-2.
Keyvan, Shahla. Diagnostics and Control of Natural Gas-Fired Furnaces via Flame Image Analysis Using Machine Vision & Artificial Intelligence Techniques. Office of Scientific and Technical Information (OSTI), December 2005. http://dx.doi.org/10.2172/862201.
Murdick, Dewey, Daniel Chou, Ryan Fedasiuk, and Emily Weinstein. The Public AI Research Portfolio of China’s Security Forces. Center for Security and Emerging Technology, March 2021. http://dx.doi.org/10.51593/20200057.