Academic literature on the topic 'Computer vision recognition'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computer vision recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computer vision recognition"

1

Kotyk, Vladyslav, and Oksana Lashko. "Software Implementation of Gesture Recognition Algorithm Using Computer Vision." Advances in Cyber-Physical Systems 6, no. 1 (2021): 21–26. http://dx.doi.org/10.23939/acps2021.01.021.

Full text
Abstract:
This paper examines the main methods and principles of image formation and presents a sign language recognition algorithm based on computer vision, intended to improve communication for people with hearing and speech impairments. The algorithm recognizes gestures effectively and displays the result as text labels. A system comprising the main modules needed to implement the algorithm has been designed: image acquisition, transformation and processing, and a neural network built with artificial intelligence tools and trained to predict labels for the input gestures. The aim of this work is to create a complete program that implements a real-time gesture recognition algorithm using computer vision and machine learning.
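
The real-time pipeline described in this abstract, frame capture, preprocessing, and a neural network that predicts a gesture label, can be illustrated with a minimal sketch; the model architecture, label set, and input size below are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch of a real-time gesture labelling loop: capture a frame,
    # preprocess it, and predict a gesture label with a small CNN classifier.
    import cv2
    import torch
    import torch.nn as nn

    LABELS = ["hello", "thanks", "yes", "no"]  # placeholder gesture vocabulary

    class GestureNet(nn.Module):
        def __init__(self, num_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = GestureNet(len(LABELS))
    model.eval()  # untrained placeholder weights; in practice, load trained weights here

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        roi = cv2.resize(gray, (64, 64)).astype("float32") / 255.0
        with torch.no_grad():
            logits = model(torch.from_numpy(roi)[None, None])
            label = LABELS[int(logits.argmax(1))]
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cv2.imshow("gesture", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()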
APA, Harvard, Vancouver, ISO, and other styles
2

Zheng, Nanning, George Loizou, Xiaoyi Jiang, Xuguang Lan, and Xuelong Li. "Computer vision and pattern recognition." International Journal of Computer Mathematics 84, no. 9 (2007): 1265–66. http://dx.doi.org/10.1080/00207160701303912.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Xianghan, Jie Jiang, Yingmei Wei, Lai Kang, and Yingying Gao. "Research on Gesture Recognition Method Based on Computer Vision." MATEC Web of Conferences 232 (2018): 03042. http://dx.doi.org/10.1051/matecconf/201823203042.

Full text
Abstract:
Gesture recognition is an important mode of human-computer interaction. Over time, people have become less satisfied with gesture recognition based on wearable devices and increasingly expect to perform gesture recognition in a more natural way. Computer vision-based gesture recognition can convey human intentions and instructions to computers conveniently and efficiently, and can significantly improve the efficiency of human-computer interaction. Gesture recognition based on computer vision mainly relies on hidden Markov models, dynamic time warping, and neural network algorithms. The process is roughly divided into three steps: image collection, hand segmentation, and gesture recognition and classification. This paper reviews computer vision-based gesture recognition methods from the past 20 years, analyses the state of research in China and abroad, summarizes current developments and the advantages and disadvantages of different gesture recognition methods, and looks ahead to the next stage in the development of gesture recognition technology.
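
The hand segmentation step mentioned in the abstract above can be illustrated with a minimal OpenCV sketch based on skin-colour thresholding; the colour bounds, the morphology step, and the assumption that the largest skin-coloured blob is the hand are illustrative choices, not taken from the cited review.

    # Minimal illustration of hand segmentation via an HSV skin-colour threshold.
    import cv2
    import numpy as np

    def segment_hand(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        lower = np.array([0, 30, 60], dtype=np.uint8)    # assumed skin-tone bounds
        upper = np.array([25, 180, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return mask, None
        hand = max(contours, key=cv2.contourArea)        # assume largest blob is the hand
        return mask, cv2.boundingRect(hand)

    frame = cv2.imread("frame.jpg")                      # placeholder input image
    mask, bbox = segment_hand(frame)
    print("hand bounding box:", bbox)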
APA, Harvard, Vancouver, ISO, and other styles
4

Vodyanitskyi, V., and V. Yuskovych-Zhukovska. "ADAPTIVE VISION AI." Automation of technological and business processes 16, no. 4 (2024): 73–81. https://doi.org/10.15673/atbp.v16i4.3013.

Full text
Abstract:
As of today, computer vision systems are continuously developing and systematically improving. Machines see visual content as numbers, in which each pixel carries its own piece of information. Computer vision, as a component of artificial intelligence, allows machines to see, observe and understand. It enables computer systems to obtain useful information from digital images, video and other visual data and to perform programmed actions. Computer vision technologies rely on pattern recognition, machine learning, and neural networks to let computers break down images, interpret data, and identify features. Tracking and identifying moving objects is a difficult task, as it requires accurate pattern recognition. An untrained computer vision algorithm is unable to understand the relationship between the shapes in an image and the objects they represent; therefore, the algorithm must be trained. The paper considers models that are trained on a high-performance computing cluster with GPU support. The developed open-source software detects, tracks and recognizes blurry moving objects with the help of artificial intelligence that adapts to any video camera. A significant increase in accuracy is achieved thanks to machine learning.
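
As a rough illustration of detecting and following moving objects in a video stream, the sketch below uses plain OpenCV background subtraction; it is a generic baseline, not the adaptive, GPU-trained system described in the paper, and the video file name is a placeholder.

    # Generic moving-object detection with background subtraction.
    import cv2

    cap = cv2.VideoCapture("traffic.mp4")          # placeholder video source
    subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        mask = cv2.medianBlur(mask, 5)             # suppress speckle noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < 500:           # ignore tiny blobs
                continue
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("moving objects", frame)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()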
APA, Harvard, Vancouver, ISO, and other styles
5

Pandey, Mrs Arjoo. "Computer Vision." International Journal for Research in Applied Science and Engineering Technology 11, no. 7 (2023): 510–14. http://dx.doi.org/10.22214/ijraset.2023.54701.

Full text
Abstract:
Computer vision is a field of artificial intelligence that focuses on enabling computers to understand and interpret visual information from images or videos. It involves developing algorithms and techniques to extract meaningful insights, patterns, and knowledge from visual data, mimicking human visual perception. Its fundamental tasks include image classification (assigning images to predefined categories or classes, such as distinguishing between different objects, animals, or scenes), object detection and recognition (locating and identifying specific objects within an image or video, often via bounding boxes or pixel-level segmentation), and semantic segmentation (assigning a semantic label to each pixel in an image to distinguish between different objects or regions).
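
The image classification task described above can be illustrated with a few lines of Python using a pre-trained torchvision model (assuming torchvision 0.13 or newer); the input file name is a placeholder and the snippet is not drawn from the cited paper.

    # Classify a single image with a pre-trained ImageNet model.
    import torch
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    img = Image.open("example.jpg").convert("RGB")   # placeholder input image
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    top_prob, top_class = probs.max(dim=1)
    print(f"predicted ImageNet class index {top_class.item()} "
          f"with probability {top_prob.item():.2f}")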
APA, Harvard, Vancouver, ISO, and other styles
6

Matsuzaka, Yasunari, and Ryu Yashiro. "AI-Based Computer Vision Techniques and Expert Systems." AI 4, no. 1 (2023): 289–302. http://dx.doi.org/10.3390/ai4010013.

Full text
Abstract:
Computer vision is a branch of computer science that studies how computers can ‘see’. It is a field that provides significant value for advancements in academia and artificial intelligence by processing images captured with a camera. In other words, the purpose of computer vision is to impart computers with the functions of human eyes and realise ‘vision’ among computers. Deep learning is a method of realising computer vision using image recognition and object detection technologies. Since its emergence, computer vision has evolved rapidly with the development of deep learning and has significantly improved image recognition accuracy. Moreover, an expert system can imitate and reproduce the flow of reasoning and decision making executed in human experts’ brains to derive optimal solutions. Machine learning, including deep learning, has made it possible to ‘acquire the tacit knowledge of experts’, which was not previously achievable with conventional expert systems. Machine learning ‘systematises tacit knowledge’ based on big data and measures phenomena from multiple angles and in large quantities. In this review, we discuss some knowledge-based computer vision techniques that employ deep learning.
APA, Harvard, Vancouver, ISO, and other styles
7

Niu, Xiang Jie, and Bin Lan. "The Agricultural Products Deterioration Recognition Based on Computer Vision." Applied Mechanics and Materials 602-605 (August 2014): 2027–30. http://dx.doi.org/10.4028/www.scientific.net/amm.602-605.2027.

Full text
Abstract:
Computer vision technology is an important branch of computer science and artificial intelligence, regarded as a non-destructive testing technique with broad application prospects in agriculture. This paper introduces the application of computer vision technology to the recognition of deterioration in agricultural products, lays the foundation for accurate measurement of agricultural product quality with computer vision, and establishes the relationship between extracted feature information and product quality. In addition, the paper combines computer vision with infrared, microwave, and NMR techniques to extract and test visual information about the internal quality of agricultural products.
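
As a toy illustration of extracting visual features as quality indicators for produce, the sketch below computes simple colour statistics with OpenCV; the chosen features, file name, and threshold are assumptions for illustration and are not the measurement procedure of the cited paper.

    # Crude colour-based quality features for a produce image.
    import cv2
    import numpy as np

    def colour_features(image_bgr):
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        h, s, v = cv2.split(hsv)
        return {
            "mean_hue": float(np.mean(h)),
            "mean_saturation": float(np.mean(s)),
            "dark_fraction": float(np.mean(v < 60)),  # crude proxy for bruised/dark regions
        }

    img = cv2.imread("apple.jpg")                      # placeholder produce image
    feats = colour_features(img)
    print(feats)
    if feats["dark_fraction"] > 0.2:                   # assumed decision threshold
        print("flag sample for possible deterioration")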
APA, Harvard, Vancouver, ISO, and other styles
8

Bo, Xiaoning, Jin Wang, Qingfang Liu, Peng Yang, and Honglan Li. "Computer Vision Recognition Method for Surface Defects of Casting Workpieces." 電腦學刊 34, no. 3 (2023): 305–13. http://dx.doi.org/10.53106/199115992023063403022.

Full text
Abstract:
To improve the efficiency of recognizing surface defects in castings, this article first applies a median filtering algorithm to denoise the defect image and separate defects from the background. A gray-threshold method is then used to segment the image, and the processed image is fed into an improved RefineDet network, where the improvements mainly increase the network depth and incorporate dataset augmentation. Finally, an experimental platform was built to train, recognize, and compare results on the collected image dataset. The results show that the accuracy of detecting porosity, blowhole, and flaw defects is 95.6%, 97.3%, and 98.15%, respectively; the method proposed in this article is accurate and efficient.
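
The pre-processing steps named in this abstract, median filtering followed by grey-level thresholding, can be reproduced in a few lines of OpenCV; the kernel size, the use of Otsu's method, and the file names are assumptions, and the subsequent detection network (an improved RefineDet in the cited work) is not shown.

    # Median-filter denoising followed by grey-level threshold segmentation.
    import cv2

    img = cv2.imread("casting_surface.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
    denoised = cv2.medianBlur(img, 5)                  # remove salt-and-pepper noise
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("defect_mask.png", binary)
    # The binary mask (or the cleaned image) would then be passed to the
    # defect detection network.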
APA, Harvard, Vancouver, ISO, and other styles
9

Dharshini, M., P. Santhiya, A. Susmitha, and V. S. Balambiga. "Real Time Sign Language Recognition Using Computer Vision And Ai." International Journal of Research Publication and Reviews 6, no. 5 (2025): 9997–10003. https://doi.org/10.55248/gengpi.6.0525.1867.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zheng, Zepei. "Human Gesture Recognition in Computer Vision Research." SHS Web of Conferences 144 (2022): 03011. http://dx.doi.org/10.1051/shsconf/202214403011.

Full text
Abstract:
Human gesture recognition is a popular topic in computer vision research, since it provides the technological foundation needed to advance interaction between people and computers, virtual environments, smart surveillance, motion tracking, and other domains. Extraction of the human skeleton is a fairly typical gesture recognition approach in existing technologies based on two-dimensional human gesture detection. Likewise, it cannot be overlooked that objects in the surrounding environment convey some information about human gestures. To recognize the posture of the human body semantically, the logic system presented in this research integrates the components recognized in the visual environment with the position of the human skeleton. In principle, it can improve the precision of posture recognition and semantically represent people's actions. The paper thus suggests an approach for recognizing human gestures and for increasing the amount of information obtained through image analysis in order to enhance interaction between humans and computers.
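
The core idea, combining skeletal key points with objects recognized in the scene to assign a semantic label to a posture, can be illustrated with a toy rule-based function; the key-point format, object labels, and rules below are invented for illustration only.

    # Toy semantic posture labelling from skeleton key points plus scene objects.
    def describe_action(keypoints, objects):
        """keypoints: dict of (x, y) image coordinates; objects: list of labels."""
        head_y = keypoints["head"][1]
        wrist_y = keypoints["right_wrist"][1]
        raised = wrist_y < head_y            # image y grows downwards
        if raised and "cup" in objects:
            return "raising a cup (possibly drinking)"
        if raised:
            return "raising a hand (possibly waving)"
        return "neutral posture"

    pose = {"head": (320, 120), "right_wrist": (360, 100)}
    print(describe_action(pose, ["cup", "table"]))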
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Computer vision recognition"

1

PAOLANTI, MARINA. "Pattern Recognition for challenging Computer Vision Applications." Doctoral thesis, Università Politecnica delle Marche, 2018. http://hdl.handle.net/11566/252904.

Full text
Abstract:
Pattern Recognition is the study of how machines can observe the environment, learn to distinguish patterns of interest from their background, and make sound and reasonable decisions about pattern categories. Nowadays, the application of Pattern Recognition algorithms and techniques is ubiquitous and transversal. With the recent advances in computer vision, we now have the ability to mine massive visual data to obtain valuable insight about what is happening in the world. The availability of affordable, high-resolution sensors (e.g., RGB-D cameras, microphones and scanners) and data sharing have resulted in huge repositories of digitized documents (text, speech, image and video). Starting from this premise, this thesis addresses the development of next-generation Pattern Recognition systems for real applications such as Biology, Retail, Surveillance, Social Media Intelligence and Digital Cultural Heritage. The main goal is to develop computer vision applications in which Pattern Recognition is the key core of the design, starting from general methods that can be exploited in multiple fields and then moving to methods and techniques addressing specific problems. The privileged focus is on up-to-date applications of Pattern Recognition techniques to real-world problems, and on interdisciplinary research and experimental and/or theoretical studies yielding new insights that advance Pattern Recognition methods. The final ambition is to spur new research lines, especially within interdisciplinary research scenarios.
Faced with many types of data, such as images, biological data and trajectories, a key difficulty was to find relevant vectorial representations. While this problem had often been handled in an ad-hoc way by domain experts, it has proved useful to learn these representations directly from data, and Machine Learning algorithms, statistical methods and Deep Learning techniques have been particularly successful. The representations are then based on compositions of simple parameterized processing units, the depth coming from the large number of such compositions. It was desirable to develop new, efficient data representation or feature learning/indexing techniques that can achieve promising performance in the related tasks. The overarching goal of this work is to present a pipeline to select the model that best explains the given observations; nevertheless, it does not prioritize memory and time complexity when matching models to observations. For the Pattern Recognition system design, the following steps are performed: data collection, feature extraction, a tailored learning approach, and comparative analysis and assessment. The proposed applications open up a wealth of novel and important opportunities for the machine vision community. The newly collected datasets, as well as the complexity of the areas examined, make the research challenging. In fact, it is crucial to evaluate the performance of state-of-the-art methods to demonstrate their strengths and weaknesses and to help identify future research directions for designing more robust algorithms. For comprehensive performance evaluation, it is of great importance to develop a library and benchmark to gauge the state of the art, because method designs tuned to a specific problem do not work properly on other problems. Furthermore, datasets must be selected from different application domains in order to offer the user the opportunity to prove the broad validity of the methods. Intensive attention has been drawn to the exploration of tailored learning models and algorithms and their extension to more application areas. The tailored methods adopted for the development of the proposed applications have shown to be capable of extracting complex statistical features and efficiently learning their representations, allowing them to generalize well across a wide variety of computer vision tasks, including image classification, text recognition and so on.
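
The generic pipeline outlined in the thesis (data collection, feature extraction, a tailored learning approach, and comparative assessment) can be sketched with scikit-learn on a toy dataset; the dataset and candidate classifiers below are stand-ins, not the thesis's own methods.

    # Toy pattern recognition pipeline: data, features, models, comparative evaluation.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_digits(return_X_y=True)          # stand-in for a collected dataset

    candidates = {
        "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=5.0)),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }

    for name, model in candidates.items():       # comparative assessment
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")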
APA, Harvard, Vancouver, ISO, and other styles
2

Crossley, Simon. "Robust temporal stereo computer vision." Thesis, University of Sheffield, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.327614.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Fletcher, Gordon James. "Geometrical problems in computer vision." Thesis, University of Liverpool, 1996. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.337166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Matilainen, M. (Matti). "Embedded computer vision methods for human activity recognition." Doctoral thesis, Oulun yliopisto, 2017. http://urn.fi/urn:isbn:9789526216256.

Full text
Abstract:
The way people interact with machines will change in the future. The traditional interfaces – mouse and keyboard – have long been the primary ones between human and computer. Recently, voice- and gesture-controlled interfaces have been introduced in many devices, but they have not yet become very popular. One possible direction for human-computer interfaces is to hide the interface from the user entirely and allow him or her to interact with machines in a way that is more natural to humans. This thesis introduces a smart living space concept that is a small step in that direction. The interfacing is assumed to be done unnoticeably to the user via a wireless sensor network that monitors the user and analyses his or her behaviour, together with a hand-held mobile device that can be used to control the system. A system for human body part segmentation is presented and applied in applications related to identifying a person from their gait and detecting unusual activity. The system is designed to work robustly when the data streams provided by the sensor network are noisy, which increases its usefulness in home environments where the person using the interface is either occluded by static objects in the room or is interacting with movable objects. The second part of the proposed smart living space concept is the mobile device carried by the user. Two methods that can be used in a hand gesture-based UI are proposed, along with a database for training such methods.
APA, Harvard, Vancouver, ISO, and other styles
5

Ali, Abdulamer T. "Computer vision aided road traffic analysis." Thesis, University of Bristol, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.333953.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Millman, Michael Peter. "Computer vision for yarn quality inspection." Thesis, Loughborough University, 2000. https://dspace.lboro.ac.uk/2134/34196.

Full text
Abstract:
Structural parameters that determine yarn quality include evenness, hairiness and twist. This thesis applies machine vision techniques to yarn inspection, to determine these parameters in a non-contact manner. Due to the increased costs of such a solution over conventional sensors, the thesis takes a wide look at, and where necessary develops, the potential uses of machine vision for several key aspects of yarn inspection at both low and high speed configurations. Initially, the optimum optical / imaging conditions for yarn imaging are determined by investigating the various factors which degrade a yarn image. The depth of field requirement for imaging yarns is analysed, and various solutions are discussed critically including apodisation, wave front encoding and mechanical guidance. A solution using glass plate guides is proposed, and tested in prototype. The plates enable the correct hair lengths to be seen in the image for long hairs, and also prevent damaging effects on the hairiness definition due to yarn vibration and yarn rotation. The optical system parameters and resolution limits of the yarn image when using guide plates are derived and optimised. The thesis then looks at methods of enhancing the yarn image, using various illumination methods, and incoherent and coherent dark-field imaging.
APA, Harvard, Vancouver, ISO, and other styles
7

Steigerwald, Richard. "Computer Sketch Recognition." DigitalCommons@CalPoly, 2013. https://digitalcommons.calpoly.edu/theses/1009.

Full text
Abstract:
Tens of thousands of years ago, humans drew sketches that we can see and identify even today. Sketches are the oldest recorded form of human communication and are still widely used. The universality of sketches supersedes that of culture and language. Despite the universal accessibility of sketches by humans, computers are unable to interpret or even correctly identify the contents of sketches drawn by humans with a practical level of accuracy. In my thesis, I demonstrate that the accuracy of existing sketch recognition techniques can be improved by optimizing the classification criteria. Current techniques classify a 20,000 sketch crowd-sourced dataset with 56% accuracy. I classify the same dataset with 52% accuracy, but identify factors that have the greatest effect on the accuracy. The ability for computers to identify human sketches would be useful particularly in pictionary-like games and other kinds of human-computer interaction; the concepts from sketch recognition could be extended to other kinds of object recognition.
APA, Harvard, Vancouver, ISO, and other styles
8

Tordoff, Ben. "Active control of zoom for computer vision." Thesis, University of Oxford, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.270752.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Hunt, Neil. "Tools for image processing and computer vision." Thesis, University of Aberdeen, 1990. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU025003.

Full text
Abstract:
The thesis describes progress towards the construction of a seeing machine. Currently, we do not understand enough about the task to build more than the simplest computer vision systems; what is understood, however, is that tremendous processing power will surely be involved. I explore the pipelined architecture for vision computers, and I discuss how it can offer both powerful processing and flexibility. I describe a proposed family of VLSI chips based upon such an architecture, each chip performing a specific image processing task. The specialisation of each chip allows high performance to be achieved, and a common pixel interconnect interface on each chip allows them to be connected in arbitrary configurations in order to solve different kinds of computational problems. While such a family of processing components can be assembled in many different ways, a programmable computer offers certain advantages, in that it is possible to change the operation of such a machine very quickly, simply by substituting a different program. I describe a software design tool which attempts to secure the same kind of programmability advantage for exploring applications of the pipelined processors. This design tool simulates complete systems consisting of several of the proposed processing components, in a configuration described by a graphical schematic diagram. A novel time skew simulation technique developed for this application allows coarse grain simulation for efficiency, while preserving the fine grain timing details. Finally, I describe some experiments which have been performed using the tools discussed earlier, showing how the tools can be put to use to handle real problems.
APA, Harvard, Vancouver, ISO, and other styles
10

Moore, Darnell Janssen. "Vision-based recognition of actions using context." Diss., Georgia Institute of Technology, 2000. http://hdl.handle.net/1853/16346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Computer vision recognition"

1

Ahad, Md Atiqur Rahman. Computer Vision and Action Recognition. Atlantis Press, 2011. http://dx.doi.org/10.2991/978-94-91216-20-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Ma, Huimin, Liang Wang, Changshui Zhang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88007-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Ma, Huimin, Liang Wang, Changshui Zhang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88004-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Ma, Huimin, Liang Wang, Changshui Zhang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88013-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Ma, Huimin, Liang Wang, Changshui Zhang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-88010-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Chowdhary, Chiranji Lal, G. Thippa Reddy, and B. D. Parameshachari. Computer Vision and Recognition Systems. Apple Academic Press, 2022. http://dx.doi.org/10.1201/9781003180593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lin, Zhouchen, Liang Wang, Jian Yang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31654-9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Lin, Zhouchen, Liang Wang, Jian Yang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31723-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Lin, Zhouchen, Liang Wang, Jian Yang, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-31726-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Peng, Yuxin, Qingshan Liu, Huchuan Lu, et al., eds. Pattern Recognition and Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60633-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Computer vision recognition"

1

Qiao, Yu, and Xiaogang Wang. "Face Recognition." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_354-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Turk, Matthew, and Vassilis Athitsos. "Gesture Recognition." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_376-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Li, Wanqing, Zicheng Liu, and Zhengyou Zhang. "Activity Recognition." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_63-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Tsotsos, John K. "Active Recognition." In Computer Vision. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-03243-2_866-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Pietikäinen, Matti. "Texture Recognition." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Matovski, Darko S., Mark S. Nixon, and John N. Carter. "Gait Recognition." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_375.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Turk, Matthew. "Gesture Recognition." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_376.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Li, Wanqing, Zicheng Liu, and Zhengyou Zhang. "Activity Recognition." In Computer Vision. Springer US, 2014. http://dx.doi.org/10.1007/978-0-387-31439-6_63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Qiao, Yu, and Xiaogang Wang. "Face Recognition." In Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_354.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Li, Wanqing, Zicheng Liu, and Zhengyou Zhang. "Activity Recognition." In Computer Vision. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-63416-2_63.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Computer vision recognition"

1

Swedheetha, C., Palanichamy Naveen, T. Akilan, P. Manikandan, and B. Pushpavanam. "Computer Vision-Based Hand Gesture Recognition." In 2024 4th International Conference on Ubiquitous Computing and Intelligent Information Systems (ICUIS). IEEE, 2024. https://doi.org/10.1109/icuis64676.2024.10866139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Radojčić, Vesna, and Miloš Dobrojević. "Traffic Sign Recognition Using Computer Vision." In Sinteza 2025. Singidunum University, 2025. https://doi.org/10.15308/sinteza-2025-10-15.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gupta, Pradeep Kumar, Nayantara Varadharajan, Keerthana Ajith, Tripty Singh, and Payel Patra. "Facial Emotion Recognition Using Computer Vision Techniques." In 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT). IEEE, 2024. http://dx.doi.org/10.1109/icccnt61001.2024.10725699.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Syahida, Khairunnisa Atika, Indrabayu, and Ingrid Nurtanio. "Thin Books Recognition on Bookshelves Using Computer Vision." In 2024 8th International Conference on Information Technology, Information Systems and Electrical Engineering (ICITISEE). IEEE, 2024. http://dx.doi.org/10.1109/icitisee63424.2024.10730686.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

D P, Akarsha, Hansitha C, N. Padma Priya, Santhwana K. Suresh, and Vaaruni K S. "Bharatanatyam Hastas Recognition using CNN and Computer Vision." In 2025 5th International Conference on Pervasive Computing and Social Networking (ICPCSN). IEEE, 2025. https://doi.org/10.1109/icpcsn65854.2025.11035099.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Bhardwaj, Akanksha, Sanjeev Kumar Tomar, Ashish Anil, et al. "Flame recognition using computer vision." In INTERNATIONAL CONFERENCE OF NUMERICAL ANALYSIS AND APPLIED MATHEMATICS ICNAAM 2021. AIP Publishing, 2023. http://dx.doi.org/10.1063/5.0152489.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Craw, I., and P. Cameron. "Face Recognition by Computer." In British Machine Vision Conference 1992. Springer-Verlag London Limited, 1992. http://dx.doi.org/10.5244/c.6.52.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Chiu, Kevin, and Ramesh Raskar. "Computer vision on tap." In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2009. http://dx.doi.org/10.1109/cvprw.2009.5204229.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jonathan, Andreas Pangestu Lim, Paoline, Gede Putra Kusuma, and Amalia Zahra. "Facial Emotion Recognition Using Computer Vision." In 2018 Indonesian Association for Pattern Recognition International Conference (INAPR). IEEE, 2018. http://dx.doi.org/10.1109/inapr.2018.8626999.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zhang, Fang. "Ripe Tomato Recognition with Computer Vision." In 2015 International Industrial Informatics and Computer Engineering Conference. Atlantis Press, 2015. http://dx.doi.org/10.2991/iiicec-15.2015.107.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Computer vision recognition"

1

Wright, John, Yi Ma, Julien Mairal, Guillermo Sapiro, Thomas Huang, and Shuicheng Yan. Sparse Representation for Computer Vision and Pattern Recognition. Defense Technical Information Center, 2009. http://dx.doi.org/10.21236/ada513248.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Rodriguez, Simon, Autumn Toney, and Melissa Flagg. Patent Landscape for Computer Vision: United States and China. Center for Security and Emerging Technology, 2020. http://dx.doi.org/10.51593/20200054.

Full text
Abstract:
China’s surge in artificial intelligence development has been fueled, in large part, by advances in computer vision, the AI subdomain that makes powerful facial recognition technologies possible. This data brief compares U.S. and Chinese computer vision patent data to illustrate the different approaches each country takes to AI development.
APA, Harvard, Vancouver, ISO, and other styles
3

Schoening, Timm. OceanCV. GEOMAR, 2022. http://dx.doi.org/10.3289/sw_5_2022.

Full text
Abstract:
OceanCV provides computer vision algorithms and tools for underwater image analysis. This includes image processing, pattern recognition, machine learning and geometric algorithms but also functionality for navigation data processing, data provenance etc.
APA, Harvard, Vancouver, ISO, and other styles
4

Stiller, Peter. Algebraic Geometry and Computational Algebraic Geometry for Image Database Indexing, Image Recognition, And Computer Vision. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada384588.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Bragdon, Sophia, Vuong Truong, and Jay Clausen. Environmentally informed buried object recognition. Engineer Research and Development Center (U.S.), 2022. http://dx.doi.org/10.21079/11681/45902.

Full text
Abstract:
The ability to detect and classify buried objects using thermal infrared imaging is affected by the environmental conditions at the time of imaging, which leads to an inconsistent probability of detection. For example, periods of dense overcast or recent precipitation events result in the suppression of the soil temperature difference between the buried object and soil, thus preventing detection. This work introduces an environmentally informed framework to reduce the false alarm rate in the classification of regions of interest (ROIs) in thermal IR images containing buried objects. Using a dataset that consists of thermal images containing buried objects paired with the corresponding environmental and meteorological conditions, we employ a machine learning approach to determine which environmental conditions are the most impactful on the visibility of the buried objects. We find the key environmental conditions include incoming shortwave solar radiation, soil volumetric water content, and average air temperature. For each image, ROIs are computed using a computer vision approach and these ROIs are coupled with the most important environmental conditions to form the input for the classification algorithm. The environmentally informed classification algorithm produces a decision on whether the ROI contains a buried object by simultaneously learning on the ROIs with a classification neural network and on the environmental data using a tabular neural network. On a given set of ROIs, we have shown that the environmentally informed classification approach improves the detection of buried objects within the ROIs.
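
The two-branch idea described above, a classification network for the thermal ROI fused with a tabular network for the environmental variables, can be sketched in PyTorch as follows; the layer sizes, input dimensions, and variable choices are assumptions for illustration, not the report's architecture.

    # Two-branch classifier: thermal ROI branch + tabular environmental branch.
    import torch
    import torch.nn as nn

    class EnvironmentallyInformedClassifier(nn.Module):
        def __init__(self, num_env_features: int = 3):
            super().__init__()
            self.image_branch = nn.Sequential(          # ROI branch (1-channel thermal patch)
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
            )
            self.env_branch = nn.Sequential(            # tabular branch (e.g. solar radiation,
                nn.Linear(num_env_features, 16), nn.ReLU(),  # soil moisture, air temperature)
            )
            self.head = nn.Linear(32 + 16, 2)           # buried object vs. clutter

        def forward(self, roi, env):
            fused = torch.cat([self.image_branch(roi), self.env_branch(env)], dim=1)
            return self.head(fused)

    model = EnvironmentallyInformedClassifier()
    roi = torch.randn(4, 1, 64, 64)                     # batch of thermal ROIs
    env = torch.randn(4, 3)                             # matching environmental measurements
    print(model(roi, env).shape)                        # torch.Size([4, 2])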
APA, Harvard, Vancouver, ISO, and other styles
6

Тарасова, Олена Юріївна, and Ірина Сергіївна Мінтій. Web application for facial wrinkle recognition. Кривий Ріг, КДПУ, 2022. http://dx.doi.org/10.31812/123456789/7012.

Full text
Abstract:
Facial recognition technology is named one of the main trends of recent years. It has a wide range of applications, such as access control, biometrics, video surveillance and many other interactive human-machine systems. Facial landmarks can be described as key characteristics of the human face; commonly used landmarks are, for example, the eyes, the nose, or the corners of the mouth. Analyzing these key points is useful for a variety of computer vision use cases, including biometrics, face tracking, and emotion detection. Different methods produce different facial landmarks: some use only basic landmarks, while others bring out more detail. We use the 68-point facial landmark markup, which is a common format for many datasets. Cloud computing creates all the necessary conditions for the successful implementation of even the most complex tasks. We created a web application using the Django framework, the Python language, and the OpenCV and Dlib libraries to recognize faces in images. The purpose of our work is to create a software system that recognizes faces in photos and identifies wrinkles on the face. An algorithm that determines the presence, location and geometry of various types of wrinkles on the face has been implemented.
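
The 68-point landmark detection mentioned in the abstract can be sketched with dlib and OpenCV as below; the predictor model file must be downloaded separately, the input path is a placeholder, and the wrinkle-region comment is only an assumption about how such landmarks might be used.

    # Detect a face and its 68 landmarks, then draw them on the image.
    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    img = cv2.imread("portrait.jpg")                   # placeholder input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    for face in detector(gray):
        shape = predictor(gray, face)
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        for (x, y) in points:
            cv2.circle(img, (x, y), 1, (0, 255, 0), -1)
        # Wrinkle analysis could then operate on regions defined by these points,
        # e.g. the forehead above the brow points or the nasolabial area.

    cv2.imwrite("landmarks.jpg", img)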
APA, Harvard, Vancouver, ISO, and other styles
7

Bajcsy, Ruzena. A Query Driven Computer Vision System: A Paradigm for Hierarchical Control Strategies during the Recognition Process of Three-Dimensional Visually Perceived Objects. Defense Technical Information Center, 1986. http://dx.doi.org/10.21236/ada185507.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Pasupuleti, Murali Krishna. Next-Generation Extended Reality (XR): A Unified Framework for Integrating AR, VR, and AI-driven Immersive Technologies. National Education Services, 2025. https://doi.org/10.62311/nesx/rrv325.

Full text
Abstract:
Extended Reality (XR), encompassing Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR), is evolving into a transformative technology with applications in healthcare, education, industrial training, smart cities, and entertainment. This research presents a unified framework integrating AI-driven XR technologies with computer vision, deep learning, cloud computing, and 5G connectivity to enhance immersion, interactivity, and scalability. AI-powered neural rendering, real-time physics simulation, spatial computing, and gesture recognition enable more realistic and adaptive XR environments. Additionally, edge computing and federated learning enhance processing efficiency and privacy in decentralized XR applications, while blockchain and quantum-resistant cryptography secure transactions and digital assets in the metaverse. The study explores the role of AI-enhanced security, deepfake detection, and privacy-preserving AI techniques to mitigate risks associated with AI-driven XR. Case studies in healthcare, smart cities, industrial training, and gaming illustrate real-world applications and future research directions in neuromorphic computing, brain-computer interfaces (BCI), and ethical AI governance in immersive environments. This research lays the foundation for next-generation AI-integrated XR ecosystems, ensuring seamless, secure, and scalable digital experiences. Keywords: Extended Reality (XR), Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), Artificial Intelligence (AI), Neural Rendering, Spatial Computing, Deep Learning, 5G Networks, Cloud Computing, Edge Computing, Federated Learning, Blockchain, Cybersecurity, Brain-Computer Interfaces (BCI), Quantum Computing, Privacy-Preserving AI, Human-Computer Interaction, Metaverse.
APA, Harvard, Vancouver, ISO, and other styles
9

Ferdaus, Md Meftahul, Mahdi Abdelguerfi, Elias Ioup, et al. KANICE : Kolmogorov-Arnold networks with interactive convolutional elements. Engineer Research and Development Center (U.S.), 2025. https://doi.org/10.21079/11681/49791.

Full text
Abstract:
We introduce KANICE, a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs’ universal approximation capabilities and ICBs’ adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing it against standard CNNs, CNN-KAN hybrids, and ICB variants. KANICE consistently outperformed baseline models, achieving 99.35% accuracy on MNIST and 90.05% on the SVHN dataset. Furthermore, we introduce KANICE-mini, a compact variant designed for efficiency. A comprehensive ablation study demonstrates that KANICE-mini achieves comparable performance to KANICE with significantly fewer parameters. KANICE-mini reached 90.00% accuracy on SVHN with 2,337,828 parameters, compared to KAN-ICE’s 25,432,000. This study highlights the potential of KAN-based architectures in balancing performance and computational efficiency in image classification tasks. Our work contributes to research in adaptive neural networks, integrates mathematical theorems into deep learning architectures, and explores the trade-offs between model complexity and performance, advancing computer vision and pattern recognition. The source code for this paper is publicly accessible through our GitHub repository (https://github.com/mferdaus/kanice).
APA, Harvard, Vancouver, ISO, and other styles