To see the other types of publications on this topic, follow the link: Optical Character Identification.

Journal articles on the topic 'Optical Character Identification'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Optical Character Identification.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lestari, Ikha Novie Tri, and Dadang Iskandar Mulyana. "Implementation of OCR (Optical Character Recognition) Using Tesseract in Detecting Character in Quotes Text Images." Journal of Applied Engineering and Technological Science (JAETS) 4, no. 1 (2022): 58–63. http://dx.doi.org/10.37385/jaets.v4i1.905.

Full text
Abstract:
Technology in Indonesia is advancing rapidly, has become part of everyday life, and cannot be avoided, and the use of Artificial Intelligence to help humans deal with problems keeps growing. Computers and smartphones can be put to many uses in today's technological era, and one of those uses is Optical Character Recognition. This research is motivated by a running system that still relies on manual input and therefore needs technological development to detect the characters in quote text images. Optical Character Recognition has been widely used to extract the characters contained in digital images. The capability of OCR methods and techniques depends heavily on normalization as an initial step before later stages such as segmentation and identification; image normalization aims to produce a better input image so that segmentation and identification can achieve optimal accuracy. To obtain maximum results, several pre-processing stages are applied to the image. Optical Character Recognition is then performed using Tesseract-OCR. The resulting OCR program was successfully used to scan a quote text image when the original document was lost or damaged, and it saves time in creating, processing and typing documents.
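A rough sketch (not the authors' code) of the pipeline this abstract describes: normalize a quote image with OpenCV, then pass it to Tesseract through pytesseract. The file name and threshold choices are assumptions.

```python
import cv2
import pytesseract

# Load a quote image (hypothetical file name) and convert it to grayscale.
image = cv2.imread("quote.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Normalization / pre-processing: light denoising plus Otsu binarization,
# so that Tesseract's segmentation and identification work on a cleaner input.
gray = cv2.GaussianBlur(gray, (3, 3), 0)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Extract the text of the quote.
text = pytesseract.image_to_string(binary)
print(text)
```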
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Yifan, and Yuxi Zhang. "Optical character recognition with different languages." Applied and Computational Engineering 17, no. 1 (2023): 60–64. http://dx.doi.org/10.54254/2755-2721/17/20230914.

Full text
Abstract:
Optical character recognition combines optical technology and computer technology to locate text in an image and then recognize its content, providing individuals with a great deal of convenience in their daily lives. Document text recognition, natural scene text recognition, bill text recognition, and ID card recognition are already used in daily life, but many factors still lead to inaccurate identification and detection. Therefore, different texts, patterns or characters are suited to different types of Optical character recognition. This paper examines how Optical character recognition operates and identifies similarities and differences by studying the technical routes of four different types of Optical character recognition. In addition, by comparing Optical character recognition for several commonly used languages, the advantages and disadvantages of each method can be analysed.
APA, Harvard, Vancouver, ISO, and other styles
3

Narendra, Sahu, and Sonkusare Manoj. "A STUDY ON OPTICAL CHARACTER RECOGNITION TECHNIQUES." International Journal of Computational Science, Information Technology and Control Engineering (IJCSITCE) 4, no. 1 (2017): 1–14. https://doi.org/10.5121/ijcsitce.2017.4101.

Full text
Abstract:
Optical Character Recognition (OCR) is the process that enables a system to identify, without human intervention, the scripts or alphabets in which users' language is written. Optical character identification has become one of the most successful applications of technology in the fields of pattern recognition and artificial intelligence. In our survey we study the various OCR techniques and examine the theoretical and numerical models of optical character identification. Optical Character Recognition (OCR) and Magnetic Character Recognition (MCR) techniques are generally used for the recognition of patterns or alphabets. In OCR the characters are typically pixel images, which may be handwritten or printed, of any size, shape or orientation. In MCR, by contrast, the characters are printed with magnetic ink, and the reading machine classifies each character on the basis of the unique magnetic field it produces. Both MCR and OCR find use in banking and other business applications. Earlier research on optical character recognition has shown that handwritten text imposes no restriction on writing style; handwritten script is difficult to recognize because of diverse human handwriting styles and variations in the angle, size and shape of the letters. A variety of approaches to optical character identification are discussed here, along with their performance.
APA, Harvard, Vancouver, ISO, and other styles
4

Gurav, Savitri. "Review of methods for Handwritten Character Identification using Optical Character Recognition (OCR)." International Journal for Research in Applied Science and Engineering Technology 7, no. 6 (2019): 2508–11. http://dx.doi.org/10.22214/ijraset.2019.6422.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Oudah, Nabeel, Maher Faik Esmaile, and Estabraq Abdulredaa. "Optical Character Recognition Using Active Contour Segmentation." Journal of Engineering 24, no. 1 (2018): 146–58. http://dx.doi.org/10.31026/j.eng.2018.01.10.

Full text
Abstract:
Document analysis of images snapped by camera is a growing challenge. These photos are often poor-quality compound images, composed of various objects and text, which makes automatic analysis complicated. OCR is one of the image processing techniques used to perform automatic identification of text. Existing image processing techniques need to manage many parameters in order to clearly recognize the text in such pictures, and segmentation is regarded as one of these essential parameters. This paper discusses the accuracy of the segmentation process and its effect on the recognition process. In the proposed method, the images are first filtered using the Wiener filter, and the active contour algorithm is then applied in the segmentation process. The Tesseract OCR Engine was selected to evaluate the performance and identification accuracy of the proposed method. The results showed that a more accurate segmentation process leads to more accurate recognition results. The recognition accuracy rate was 0.95 for the proposed algorithm compared with 0.85 for the Tesseract OCR Engine alone.
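A minimal sketch of the two pre-processing steps named in this abstract, a Wiener filter followed by an active-contour segmentation, before handing the cropped region to Tesseract. The initial contour, the parameters, and the file name are assumptions rather than the authors' settings.

```python
import numpy as np
import cv2
import pytesseract
from scipy.signal import wiener
from skimage.filters import gaussian
from skimage.segmentation import active_contour

gray = cv2.imread("camera_document.png", cv2.IMREAD_GRAYSCALE).astype(float)

# Step 1: Wiener filtering to suppress noise in the camera-captured image.
denoised = wiener(gray, (5, 5))

# Step 2: an active contour (snake) initialized as a circle around an assumed
# text region; the snake converges onto the region boundary.
s = np.linspace(0, 2 * np.pi, 400)
rows, cols = denoised.shape
init = np.column_stack([rows / 2 + 0.45 * rows * np.sin(s),
                        cols / 2 + 0.45 * cols * np.cos(s)])
snake = active_contour(gaussian(denoised, 3), init, alpha=0.015, beta=10)

# Step 3: crop the bounding box of the converged contour and recognize it.
r0, r1 = int(snake[:, 0].min()), int(snake[:, 0].max())
c0, c1 = int(snake[:, 1].min()), int(snake[:, 1].max())
crop = np.clip(denoised[r0:r1, c0:c1], 0, 255).astype(np.uint8)
print(pytesseract.image_to_string(crop))
```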
APA, Harvard, Vancouver, ISO, and other styles
6

Shanmugavel, Subramanian, Jagadeesh Kannan, Arjun Vaithilingam Sudhakar, and . "Handwritten Optical Character Extraction and Recognition from Catalogue Sheets." International Journal of Engineering & Technology 7, no. 4.5 (2018): 36. http://dx.doi.org/10.14419/ijet.v7i4.5.20005.

Full text
Abstract:
The dataset consists of 20000 scanned catalogues of fossils and other artifacts compiled by the Geological Sciences Department. The images look like scanned forms filled in with blue ballpoint ink. Character extraction and identification is the first phase of the research; in the second phase we plan to use an HMM model to extract the entire text from the form and store it in a digitized format. We used various image processing and computer vision techniques to extract characters from the 20000 handwritten catalogues. Techniques used for character extraction include erode, morphologyEx, dilate, Canny edge detection, findContours, contourArea, etc. We used Histograms of Oriented Gradients (HOG) to extract features from the character images and applied k-means and agglomerative clustering to perform unsupervised learning. This would allow us to prepare a labelled training dataset for the second phase. We also tried converting images from RGB to CMYK to improve k-means clustering performance, and we used thresholding to extract blue-ink characters from the form after converting the image to HSV color format, but the background noise was significant and the results were not promising. We are researching a more robust method to extract characters that does not deform the characters and takes alignment into consideration.
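A compressed sketch of the character-extraction and clustering steps listed in this abstract (OpenCV morphology and contours, HOG features, k-means). The image path, kernel size, area threshold and cluster count are placeholders, not values from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

gray = cv2.imread("catalogue_sheet.png", cv2.IMREAD_GRAYSCALE)

# Morphological cleanup and Canny edges, then contours as character candidates.
kernel = np.ones((3, 3), np.uint8)
clean = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
edges = cv2.dilate(cv2.Canny(clean, 50, 150), kernel, iterations=1)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Keep reasonably sized contours and describe each cropped character with HOG.
hog = cv2.HOGDescriptor((32, 32), (16, 16), (8, 8), (8, 8), 9)
features = []
for c in contours:
    if cv2.contourArea(c) < 50:
        continue
    x, y, w, h = cv2.boundingRect(c)
    crop = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
    features.append(hog.compute(crop).ravel())

# Unsupervised grouping of the character crops for later manual labelling.
labels = KMeans(n_clusters=min(30, len(features)), n_init=10).fit_predict(np.array(features))
print(np.bincount(labels))
```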
APA, Harvard, Vancouver, ISO, and other styles
7

Hyun, Young-Joo, Eunseok Nam, and Youngjun Yoo. "Real-time Optical Character Recognition in Manufacturing Using YOLOv8 and Embedded Systems for Engraved Characters on a Metal Surface." International Journal of Precision Engineering and Manufacturing-Smart Technology 3, no. 2 (2025): 107–15. https://doi.org/10.57062/ijpem-st.2025.00052.

Full text
Abstract:
This study introduces a YOLOv8-based Optical Character Recognition (OCR) system specifically optimized for engraved character recognition, aiming to facilitate digital transformation and enhance smart manufacturing processes. To overcome limitations of manual part identification and quality inspection prevalent in conventional manufacturing environments, this study employed engraved character data from metal scroll compressor components. A lightweight deep learning model was designed and deployed on a Raspberry Pi platform to enable real-time character recognition. In a controlled laboratory environment, more than 150 images were acquired and processed through data augmentation and normalization techniques. The YOLOv8 object detection model was trained under various lighting and angular conditions, achieving high recognition performance (recall: 94.8%, mAP: 88.8%). Additionally, a post-processing algorithm was implemented to organize detected characters by their positions and classes, thereby reconstructing final product identification codes. Results confirmed the feasibility of real-time quality inspection and the potential for process automation in manufacturing environments. Future research needs to focus on enhancing recognition precision through improved post-processing techniques for reverse-oriented characters and diverse text layouts, while also exploring alternative embedded platforms to further optimize system efficiency.
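A hedged sketch of how a YOLOv8 detector becomes an OCR step once per-character detections are sorted left to right, mirroring the post-processing described above; the weights file and image name are assumptions, not the authors' model.

```python
from ultralytics import YOLO

# Hypothetical fine-tuned weights for engraved characters on metal parts.
model = YOLO("engraved_chars_yolov8n.pt")
result = model("scroll_compressor_face.jpg")[0]

# Each detection is one character: sort boxes by x-coordinate so the
# predicted classes read out as the product identification code.
detections = []
for box in result.boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    detections.append((x1, result.names[int(box.cls[0])]))

code = "".join(name for _, name in sorted(detections))
print("Identification code:", code)
```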
APA, Harvard, Vancouver, ISO, and other styles
8

Muthusundari, Muthusundari, A. Velpoorani, S. Venkata Kusuma, Trisha L., and Om K. Rohini. "Optical character recognition system using artificial intelligence." LatIA 2 (August 13, 2024): 98. http://dx.doi.org/10.62486/latia202498.

Full text
Abstract:
A technique termed optical character recognition, or OCR, is used to extract text from images. An OCR system's primary goal is to transform existing paper-based documents or picture data into usable documents. Character and word detection are the two main phases of an OCR, which is designed using many algorithms. An OCR can also maintain a document's structure by focusing on sentence identification, which is a more sophisticated approach. Research has demonstrated that, despite the efforts of numerous scholars, no error-free Bengali OCR has been produced. This issue is addressed by developing an OCR for the Bengali language using the latest 3.03 version of the Tesseract OCR engine for Windows.
APA, Harvard, Vancouver, ISO, and other styles
9

Veni, S., R. S. Sabeenian, T. Shanthi, and R. Anand. "Real time noisy dataset implementation of optical character identification using CNN." International Journal of Intelligent Enterprise 7, no. 1/3 (2020): 67. http://dx.doi.org/10.1504/ijie.2020.10026346.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Anand, R., T. Shanthi, R. S. Sabeenian, and S. Veni. "Real time noisy dataset implementation of optical character identification using CNN." International Journal of Intelligent Enterprise 7, no. 1/2/3 (2020): 67. http://dx.doi.org/10.1504/ijie.2020.104646.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Cahyono, Daniel Setiawan, and Shinta Estri Wahyuningrum. "ALPHABETS IMAGE IDENTIFICATION USING ADVANCED LOCAL BINARY PATTERN AND CHAIN CODE ALGORITHM." Proxies : Jurnal Informatika 2, no. 2 (2021): 68. http://dx.doi.org/10.24167/proxies.v2i2.3209.

Full text
Abstract:
Optical Character Recognition (OCR) is a method for a computer to process an image that contains some text, try to find the characters in that image, and then convert them to digital text. In this research, the Advanced Local Binary Pattern and Chain Code algorithms are tested for identifying alphabets in the image. Several image preprocessing methods are also needed, such as image transformation, image rescaling, grayscale conversion, edge detection and edge thinning.
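A small sketch of the local-binary-pattern half of this feature pipeline (the chain-code step is omitted), using scikit-image; the radius, number of sampling points and image path are assumptions.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

# Preprocess a single alphabet image: load, grayscale, rescale.
gray = cv2.imread("letter_A.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.resize(gray, (64, 64))

# Local Binary Pattern over the character, summarized as a histogram feature
# that a classifier can match against known letters.
P, R = 8, 1  # sampling points and radius (assumed values)
lbp = local_binary_pattern(gray, P, R, method="uniform")
hist, _ = np.histogram(lbp.ravel(), bins=np.arange(0, P + 3), density=True)
print(hist)
```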
APA, Harvard, Vancouver, ISO, and other styles
12

Raihan, Raihan Fadhil, Hendrick Hendrick, Novi Novi, and Gwo-Jiun Horn. "Parking Identification System with Integration of Optical Character Recognition (OCR) and Radio Frequency Identification (RFID)." JECCOM: International Journal of Electronics Engineering and Applied Science 2, no. 1 (2024): 10–17. https://doi.org/10.30630/jeccom.2.1.10-17.2024.

Full text
Abstract:
The Indonesian National Police (Polri) reported a significant increase in the number of crime cases during the period January - April 2023. Based on these conditions, an innovative security system is needed that can reduce the risk of motor vehicle theft. This tool is equipped with the help of an IP Camera and runs a program that has been made on programming software using OpenCV Python. The IP Camera is used to capture video that can be accessed on the server to process whether there is a license plate number detected and read the vehicle license plate number. This system integrates Radio Frequency Identification (RFID) and Optical Character Recognition (OCR) technology in the parking system. RFID technology will provide unique identification to each user who will enter the parking area, while OCR is used to read the license plate numbers of vehicles entering the parking area. OCR reading results are also sent to the microcontroller to be displayed on the LCD when the customer enters the parking area. The OCR reading results have a truth accuracy rate of 98.7%.
APA, Harvard, Vancouver, ISO, and other styles
13

Alwaqfi, Yazan, Mumtazimah Mohamad, and Ahmad Al-Taani. "Generative Adversarial Network for an Improved Arabic Handwritten Characters Recognition." International Journal of Advances in Soft Computing and its Applications 14, no. 1 (2022): 177–95. http://dx.doi.org/10.15849/ijasca.220328.12.

Full text
Abstract:
Currently, Arabic character recognition remains one of the most complicated challenges in image processing and character identification. Many algorithms exist in neural networks, and one of the most interesting is the generative adversarial network (GAN), in which two neural networks compete against one another. Generative adversarial networks have been successfully applied in unsupervised learning and have led to outstanding achievements. Furthermore, the discriminator is used as a classifier in most generative adversarial networks by employing the binary sigmoid cross-entropy loss function. This research proposes employing sigmoid cross-entropy to recognize Arabic handwritten characters using multi-class GAN training algorithms. The proposed approach is evaluated on a dataset of 16800 Arabic handwritten characters. When compared to other approaches, the experimental results indicate that the multi-class GAN approach performed well in terms of recognizing Arabic handwritten characters, reaching 99.7% accuracy. Keywords: Generative Adversarial Networks (GANs), Arabic Characters, Optical Character Recognition, Convolutional Neural Networks (CNNs).
APA, Harvard, Vancouver, ISO, and other styles
14

David, Jiří, Pavel Švec, Vít Pasker, and Romana Garzinová. "Usage of Real Time Machine Vision in Rolling Mill." Sustainability 13, no. 7 (2021): 3851. http://dx.doi.org/10.3390/su13073851.

Full text
Abstract:
This article deals with the issue of computer vision on a rolling mill. The main goal of this article is to describe the designed and implemented algorithm for the automatic identification of the character string of billets on the rolling mill. The algorithm allows the conversion of image information from the front of the billet, which enters the rolling process, into a string of characters, which is further used to control the technological process. The purpose of this identification is to prevent the input pieces from being confused because different parameters of the rolling process are set for different pieces. In solving this task, it was necessary to design the optimal technical equipment for image capture, choose the appropriate lighting, search for text and recognize individual symbols, and insert them into the control system. The research methodology is based on the empirical-quantitative principle, the basis of which is the analysis of experimentally obtained data (photographs of billet faces) in real operating conditions leading to their interpretation (transformation into the shape of a digital chain). The first part of the article briefly describes the billet identification system from the point of view of technology and hardware resources. The next parts are devoted to the main parts of the algorithm of automatic identification—optical recognition of strings and recognition of individual characters of the chain using artificial intelligence. The method of optical character recognition using artificial neural networks is the basic algorithm of the system of automatic identification of billets and eliminates ambiguities during their further processing. Successful implementation of the automatic inspection system will increase the share of operation automation and lead to ensuring automatic inspection of steel billets according to the production plan. This issue is related to the trend of digitization of individual technological processes in metallurgy and also to the social sustainability of processes, which means the elimination of human errors in the management of the billet rolling process.
APA, Harvard, Vancouver, ISO, and other styles
15

Pramila, P. V., and Atani Yamini. "Precision Analysis of Salient Object Detection in Moving Video Using Region Based Convolutional Neural Network Compared Over Optical Character Recognition." ECS Transactions 107, no. 1 (2022): 14001–15. http://dx.doi.org/10.1149/10701.14001ecst.

Full text
Abstract:
The aim is to detect the number plates of vehicles that violate traffic rules using optical character recognition compared with a region-based convolutional neural network. Materials and methods: In this work, number plates are identified by optical character recognition with a sample size of 22 and by a region-based convolutional neural network with a sample size of 22. The number plate of the vehicle is detected and converted into string format. Results: A prediction accuracy of 96.4% was achieved using the optical character recognition method, while the region-based convolutional neural network achieved 94.2%. The significance value obtained in the statistical analysis is 0.02 (p<0.05). Conclusion: The results show that identification of number plates is significantly better with optical character recognition than with region-based convolutional neural networks.
APA, Harvard, Vancouver, ISO, and other styles
16

Et.al, Siddharth Salar. "Automate Identification and Recognition of Handwritten Text from an Image." Turkish Journal of Computer and Mathematics Education (TURCOMAT) 12, no. 3 (2021): 3800–3808. http://dx.doi.org/10.17762/turcomat.v12i3.1666.

Full text
Abstract:
Handwritten text recognition is still an open research problem in the area of Optical Character Recognition (OCR). This paper proposes a productive approach towards the development of handwritten text recognition systems. The primary goal of this project is to create a machine learning algorithm that enables feature and information extraction from documents with handwritten annotations, with the aim of identifying handwritten words in an image.
The main aim of this project is to extract text, whether handwritten or machine printed, and convert it into a computer-editable format. To implement this project we used PyTesseract, an open-source OCR engine used to recognize handwritten text, and OpenCV, a Python library used to solve computer vision problems. The input image is processed in several steps: first pre-processing of the image, then text localization, followed by character segmentation and character recognition, and finally post-processing of the image. Further image processing algorithms can also be used to deal with multiple characters in a single image, or with tilted or rotated images. The prepared framework gives an average precision of more than 95% on unseen test images.
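The pipeline this abstract walks through (pre-processing, text localization, recognition, post-processing) can be approximated in a few lines with OpenCV and PyTesseract; the sketch below is an assumption about how the steps fit together, not the authors' implementation, and the file name and confidence cutoff are placeholders.

```python
import cv2
import pytesseract

image = cv2.imread("handwritten_note.jpg")

# Pre-processing: grayscale plus adaptive thresholding to handle uneven lighting.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 31, 15)

# Localization and recognition in one call; image_to_data also returns
# word-level boxes and confidences, which supports simple post-processing.
data = pytesseract.image_to_data(binary, output_type=pytesseract.Output.DICT)
words = [w for w, conf in zip(data["text"], data["conf"])
         if w.strip() and float(conf) > 50]
print(" ".join(words))
```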
APA, Harvard, Vancouver, ISO, and other styles
17

Jehangir, Sardar, Sohail Khan, Sulaiman Khan, Shah Nazir, and Anwar Hussain. "Zernike Moments Based Handwritten Pashto Character Recognition Using Linear Discriminant Analysis." January 2021 40, no. 1 (2021): 152–59. http://dx.doi.org/10.22581/muet1982.2101.14.

Full text
Abstract:
This paper presents an efficient Optical Character Recognition (OCR) system for offline isolated Pashto character recognition. Developing an OCR system for handwritten character recognition is a challenging task because handwritten characters vary both in shape and in style, and they often also vary among individuals. The identification of inscribed Pashto letters becomes even more challenging due to the unavailability of a standard handwritten Pashto character database. For experimental and simulation purposes, a handwritten Pashto character database was developed by collecting handwritten samples from university students on A4-sized pages. These collected samples were then scanned, stemmed and preprocessed to form a medium-sized database that encompasses 14784 handwritten Pashto character images (336 distinct handwritten samples for each of the 44 characters in the Pashto script). Furthermore, Zernike moments are used as the feature extractor of the proposed OCR system to extract features for each individual character. Linear Discriminant Analysis (LDA) is used as the recognition tool, based on the feature maps calculated with Zernike moments. The applicability of the proposed system is tested with 10-fold cross-validation, and an overall accuracy of 63.71% is obtained for handwritten isolated Pashto characters.
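A sketch of the feature/classifier pairing named in this abstract, Zernike moments fed into Linear Discriminant Analysis with 10-fold cross-validation. The mahotas and scikit-learn calls are real APIs, but the radius, degree and the randomly generated stand-in dataset are assumptions.

```python
import mahotas
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def zernike_features(binary_char, radius=32, degree=8):
    """Zernike-moment descriptor for one isolated character image."""
    return mahotas.features.zernike_moments(binary_char, radius, degree=degree)

# Stand-in data: random binary 64x64 "characters" with 4 classes. In practice
# these would be the scanned, preprocessed Pashto character images and labels.
rng = np.random.default_rng(0)
char_images = [rng.random((64, 64)) > 0.5 for _ in range(120)]
labels = rng.integers(0, 4, size=120)

X = np.array([zernike_features(img) for img in char_images])
lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, labels, cv=10).mean())
```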
APA, Harvard, Vancouver, ISO, and other styles
18

Gupta, Monica, Alka Choudhary, and Jyotsna Parmar. "Analysis of Text Identification Techniques Using Scene Text and Optical Character Recognition." International Journal of Computer Vision and Image Processing 11, no. 4 (2021): 39–62. http://dx.doi.org/10.4018/ijcvip.2021100104.

Full text
Abstract:
In today's era, data in digitalized form is needed for faster processing and performing of all tasks. The best way to digitalize the documents is by extracting the text from them. This work of text extraction can be performed by various text identification tasks such as scene text recognition, optical character recognition, handwriting recognition, and much more. This paper presents, reviews, and analyses recent research expansion in the area of optical character recognition and scene text recognition based on various existing models such as convolutional neural network, long short-term memory, cognitive reading for image processing, maximally stable extreme regions, stroke width transformation, and achieved remarkable results up to 90.34% of F-score with benchmark datasets such as ICDAR 2013, ICDAR 2019, IIIT5k. The researchers have done outstanding work in the text recognition field. Yet, improvement in text detection in low-quality image performance is required, as text identification should not be limited to the input quality of the image.
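One of the classical detectors reviewed above, maximally stable extremal regions (MSER), can be exercised directly from OpenCV. This is only an illustrative region-proposal sketch with an assumed image path, not any of the surveyed systems.

```python
import cv2

gray = cv2.imread("street_sign.jpg", cv2.IMREAD_GRAYSCALE)

# Maximally Stable Extremal Regions as candidate text components.
mser = cv2.MSER_create()
regions, _ = mser.detectRegions(gray)

# Draw a bounding box around each candidate region for visual inspection.
vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
for pts in regions:
    x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
    cv2.rectangle(vis, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("mser_candidates.jpg", vis)
```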
APA, Harvard, Vancouver, ISO, and other styles
19

Singh, Mr Santosh Kumar. "Automatic Number Plate Recognition System for Vehicle Identification using Optical Character Recognition." International Journal for Research in Applied Science and Engineering Technology 7, no. 4 (2019): 1658–62. http://dx.doi.org/10.22214/ijraset.2019.4300.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Svitlana, Maksymova, Yevsieiev Vladyslav, and Abu-Jassar Amer. "MICROCHIP MARKING RECOGNITION AND IDENTIFICATION USING A COMPUTER VISION SYSTEM MATHEMATICAL MODEL." Multidisciplinary Journal of Science and Technology 5, no. 4 (2025): 321–30. https://doi.org/10.5281/zenodo.15194651.

Full text
Abstract:
The article considers a mathematical model of a computer vision system for microcircuit marking recognition and identification that provides automation of electronic component inspection. An image processing algorithm is proposed, which includes preliminary filtering, binarization and optical character recognition to increase the accuracy of identification. An experimental analysis of the influence of the angle of inclination and the level of illumination on the speed and quality of recognition was carried out, which allowed recommendations to be formulated on the optimal shooting parameters. The results obtained can be used to create automated quality control systems in electronics production.
APA, Harvard, Vancouver, ISO, and other styles
21

Farrell, J. E., and M. Desmarais. "Equating character-identification performance across the visual field." Journal of the Optical Society of America A 7, no. 1 (1990): 152. http://dx.doi.org/10.1364/josaa.7.000152.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Ritonga, Mahyudin, Manoj L. Bangare, Pushpa Manoj Bangare, et al. "Optimized convolutional neural network deep learning for Arabian handwritten text recognition." Bulletin of Electrical Engineering and Informatics 14, no. 2 (2025): 1497–506. https://doi.org/10.11591/eei.v14i2.7696.

Full text
Abstract:
In general, the term handwritten character recognition (HCR) refers to the process of recognizing handwritten characters in any form, whereas handwritten text recognition (HTR) refers to the process of reading scanned document images that include text lines and converting those text lines into editable text. The identification of recurring structures and configurations in data is the primary focus of the field of machine learning known as pattern recognition. Optical character recognition, often known as OCR, is a challenging issue to solve when it comes to the field of pattern recognition. This article presents machine learning enabled framework for accurate identification of Arabian handwriting. This framework has provisions for image processing, image segmentation, feature extraction and classification of handwritten images. Images are enhanced using contrast limited adaptive histogram equalization (CLAHE) algorithm. Image segmentation is performed by k-means algorithm. Classification is performed using convolutional neural network (CNN) VGG 16 and support vector machine (SVM) algorithm. Classification accuracy of CNN VGG 16 is 99.33%.
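A condensed sketch of the chain stated in this abstract: CLAHE enhancement, k-means segmentation, VGG16 features, and an SVM classifier. The image path, resize, cluster count and SVM kernel are assumptions, and the classifier is left untrained for lack of data.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

gray = cv2.imread("arabic_word.png", cv2.IMREAD_GRAYSCALE)

# Contrast Limited Adaptive Histogram Equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# k-means on pixel intensities as a crude ink/background segmentation (k=2).
pixels = enhanced.reshape(-1, 1).astype(np.float32)
seg = KMeans(n_clusters=2, n_init=10).fit_predict(pixels).reshape(enhanced.shape)

# VGG16 as a frozen feature extractor; an SVM would then be fit on features
# computed this way for a labelled training set (not available here).
vgg = VGG16(weights="imagenet", include_top=False, pooling="avg")
rgb = cv2.cvtColor(cv2.resize(enhanced, (224, 224)), cv2.COLOR_GRAY2RGB)
feat = vgg.predict(preprocess_input(rgb[np.newaxis].astype(np.float32)))
svm = SVC(kernel="rbf")  # svm.fit(train_features, train_labels) on real data
print("segment labels:", np.unique(seg), "feature length:", feat.shape[1])
```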
APA, Harvard, Vancouver, ISO, and other styles
23

Sharma, Shubhankar, and Vatsala Arora. "Script Identification for Devanagari and Gurumukhi using OCR." International Journal of Computer Science and Mobile Computing 10, no. 9 (2021): 12–22. http://dx.doi.org/10.47760/ijcsmc.2021.v10i09.002.

Full text
Abstract:
The study of character recognition is an active area of research as it presents many challenges. Various pattern recognition techniques are being used every day. As there are so many writing styles, developing OCR (Optical Character Recognition) for handwritten text is difficult. Therefore, several measures have to be taken to improve the recognition process so that the computational burden can be decreased and pattern recognition accuracy can be increased. The main objective of this review was to recognize and analyze handwritten document images. In this paper, we present a scheme to identify different Indian scripts such as Devanagari and Gurumukhi.
APA, Harvard, Vancouver, ISO, and other styles
24

Adedayo, Kayode David, and Ayomide Oluwaseyi Agunloye. "Real-time Automated Detection and Recognition of Nigerian License Plates via Deep Learning Single Shot Detection and Optical Character Recognition." Computer and Information Science 14, no. 4 (2021): 11. http://dx.doi.org/10.5539/cis.v14n4p11.

Full text
Abstract:
License plate detection and recognition are critical components of the development of a connected intelligent transportation system, but they are underused in developing countries because of the associated costs. Existing license plate detection and recognition systems with high accuracy require the use of Graphical Processing Units (GPUs), which may be difficult to come by in developing nations. Single-stage detectors and commercial optical character recognition engines, on the other hand, are less computationally expensive and can achieve acceptable detection and recognition accuracy without the use of a GPU. In this work, a pretrained SSD model and a Tesseract tessdata-fast traineddata were fine-tuned on a dataset of more than 2,000 images of vehicles with license plates. These models were combined with a unique image preprocessing algorithm for character segmentation and tested using a general-purpose personal computer on a new collection of 200 photos of automobiles with license plates. On this testing set, the plate detection system achieved a detection accuracy of 99.5% at an IOU threshold of 0.45, while the OCR engine successfully recognized all characters on 150 license plates, one character incorrectly on 24 license plates, and two or more incorrect characters on 26 license plates. The detection procedure took an average of 80 milliseconds, while the character segmentation and identification stages took an average of 95 milliseconds, resulting in an average processing time of 175 milliseconds per image, or 6 photos per second. The obtained results are suitable for real-time traffic applications.
APA, Harvard, Vancouver, ISO, and other styles
25

Glasenapp, Luiz Alfonso, Aurélio Faustino Hoppe, Miguel Alexandre Wisintainer, Andreza Sartori, and Stefano Frizzo Stefenon. "OCR Applied for Identification of Vehicles with Irregular Documentation Using IoT." Electronics 12, no. 5 (2023): 1083. http://dx.doi.org/10.3390/electronics12051083.

Full text
Abstract:
Given the lack of investments in surveillance in remote places, this paper presents a prototype that identifies vehicles in irregular conditions, notifying a group of people, such as a network of neighbors, through a low-cost embedded system based on the Internet of things (IoT). The developed prototype allows the visualization of the location, date and time of the event, and vehicle information such as license plate, make, model, color, city, state, passenger capacity and restrictions. It also offers a responsive interface in two languages: Portuguese and English. The proposed device addresses technical concepts pertinent to image processing such as binarization, analysis of possible characters on the plate, plate border location, perspective transformation, character segmentation, optical character recognition (OCR) and post-processing. The embedded system is based on a Raspberry having support to GPS, solar panels, communication via 3G modem, wi-fi, camera and motion sensors. Tests were performed regarding the vehicle’s positioning and the percentage of assertiveness in image processing, where the vehicles are at different angles, speeds and distances. The prototype can be a viable alternative because the results were satisfactory concerning the recognition of the license plates, mobility and autonomy.
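A hedged sketch of the plate-reading steps enumerated in this abstract (perspective transformation, binarization, OCR); the corner points, output size and Tesseract options are placeholders rather than values from the paper.

```python
import cv2
import numpy as np
import pytesseract

frame = cv2.imread("roadside_frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Hypothetical plate corners produced earlier by the border-location step.
corners = np.float32([[220, 310], [470, 300], [480, 370], [230, 382]])
target = np.float32([[0, 0], [260, 0], [260, 70], [0, 70]])

# Perspective transformation flattens the plate; binarize, then recognize.
M = cv2.getPerspectiveTransform(corners, target)
plate = cv2.warpPerspective(gray, M, (260, 70))
_, plate_bin = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# --psm 7 tells Tesseract to treat the crop as a single text line.
print(pytesseract.image_to_string(plate_bin, config="--psm 7").strip())
```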
APA, Harvard, Vancouver, ISO, and other styles
26

Songa, Akhil, Rahul Bolineni, Harish Reddy, Sohini Korrapolu, and Vani Jayasri Geddada. "Vehicle Number Plate Recognition System Using TESSERACT-OCR." International Journal for Research in Applied Science and Engineering Technology 10, no. 4 (2022): 323–27. http://dx.doi.org/10.22214/ijraset.2022.41198.

Full text
Abstract:
With the increase in the number of vehicles, automated systems to store vehicle information are becoming increasingly necessary. Communication is critical for traffic management and crime reduction, and it cannot be overlooked. Automatic vehicle identification using number plate recognition is a reliable method of identifying vehicles. Present learning-based algorithms require a long time and a lot of practice to develop satisfactory results; even so, accuracy is not a significant concern. The suggested algorithm has been devised as an efficient approach for recognizing vehicle number plates, and the technique is intended to address the difficulties of scaling and of recognizing the position of characters while accuracy is maintained. Automatic Number Plate Detection is a distinctive application of machine learning, as it detects images and converts them to text form. The algorithm detects and captures the vehicle image and extracts the vehicle number plate using image segmentation. The extracted image is then sent to optical character recognition technology for character recognition. This system is implemented in areas like traffic surveillance, military zones, apartments, etc. Keywords: Optical Character Recognition, Tesseract OCR, matplotlib, Number Plate Recognition.
APA, Harvard, Vancouver, ISO, and other styles
27

Kamalakannan, Manokaran. "The identification of Takin Budorcas taxicolor (Mammalia: Bovidae) through dorsal guard hair." Journal of Threatened Taxa 10, no. 15 (2018): 13014–16. http://dx.doi.org/10.11609/jott.3357.10.15.13014-13016.

Full text
Abstract:
The dorsal guard hairs of the Takin Budorcas taxicolor were examined using optical light and scanning electron microscopes for species identification. It was found that the dorsal guard hair of B. taxicolor possesses a completely unique microscopic characteristic, especially the medullary character of a uniserial ladder structure, which differs from other species of mammals. The 'irregular wave' scale pattern and 'rippled' scale margins of the cuticle, and the 'circular' shape of a transverse section of the hair, also determine the species identity of B. taxicolor, because these characteristics are infrequent in other species of mammals. The micro-photographs and hair characters presented here can be used in forensic science as an appropriate reference for species identification of B. taxicolor.
APA, Harvard, Vancouver, ISO, and other styles
28

Awad, Mona, and Marwa M. Eid. "Optical Character Recognition System for Digit Recognition Using Deep Learning." Journal of Artificial Intelligence and Metaheuristics 6, no. 1 (2023): 18–26. http://dx.doi.org/10.54216/jaim.060102.

Full text
Abstract:
Because it is so difficult to distinguish handwritten digits, digit identification is one of the most critical applications in computer vision. Many deep learning models have been applied in the field of handwritten character recognition, and the striking parallels that can be drawn between deep learning and the brain are primarily responsible for its meteoric rise in popularity. In this study, the Artificial Neural Network and the Convolutional Neural Network, two of the most used deep learning algorithms, were investigated with an eye toward the feature extraction and classification phases of the recognition process. The models were trained on the MNIST dataset with the categorical cross-entropy loss and the ADAM optimizer. Backpropagation and gradient descent are the two methods utilized during the training of neural networks that contain ReLU activations and carry out automatic feature extraction. In computer vision, one of the most common and widely used classifiers is the Convolutional Neural Network, sometimes referred to as a ConvNet, which is used for the recognition and categorization of images.
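A minimal Keras version of the setup described above: a CNN with ReLU activations trained on MNIST with categorical cross-entropy and the ADAM optimizer. The layer sizes and epoch count are illustrative, not the paper's exact architecture.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0
y_train, y_test = to_categorical(y_train), to_categorical(y_test)

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```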
APA, Harvard, Vancouver, ISO, and other styles
29

Kiran, Perveen Rukhsana Perveen Awais Yasin. "Survey of Multilingual Script Identification Techniques on Wild Images." LC International Journal of STEM (ISSN: 2708-7123) 3, no. 1 (2021): 1–14. https://doi.org/10.5281/zenodo.6547188.

Full text
Abstract:
Multilingual script identification in natural images has recently received increased research attention and is a very challenging task. This paper presents a review of the latest techniques for multilingual scripts. A system can choose the appropriate Optical Character Recognition (OCR) engine to recognize a script based on the script identity of a retrieved line of text or word. A number of approaches for identifying different scripts, including Japanese, Chinese, Arabic, Korean and Indian scripts, have been developed; these scripts appear in natural scenes captured by cameras or processed by text recognition systems. We also present the difficulties that come with script identification, the methods used for feature extraction, and the classifiers used for identification. We provide a comprehensive description and evaluation of previous and state-of-the-art script identification approaches. It should be emphasized that research in the area of multilingual script recognition is still in its early stages, and additional analysis is needed.
APA, Harvard, Vancouver, ISO, and other styles
30

Caldeira, Thais, Patrick Marques Ciarelli, and Gentil Auer Neto. "Industrial Optical Character Recognition System in Printing Quality Control of Hot-Rolled Coils Identification." Journal of Control, Automation and Electrical Systems 31, no. 1 (2019): 108–18. http://dx.doi.org/10.1007/s40313-019-00551-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Adhwaith A. M., Irin Jossy, Mahima Rachel Bijoy, Nikitha Liz Koshy, and Sreelekshmi K. R. "Handwritten Text Recognition: A Survey of OCR Techniques." International Journal of Advances in Engineering and Management 6, no. 11 (2024): 205–20. https://doi.org/10.35629/5252-0611205220.

Full text
Abstract:
Optical Character Recognition of handwritten texts has witnessed remarkable advancements with the integration of deep learning and machine learning techniques. Recognizing handwritten characters poses unique challenges due to script variability, linguistic diversity, and the complexities of historical documents. This survey explores recent developments in OCR for various languages, emphasizing innovative approaches such as Convolutional Neural Networks, attention mechanisms, and transfer learning. We analyze methodologies that enhance character recognition accuracy across languages, including Amharic, Arabic, Uchen Tibetan, Devanagari, and Tamil, while addressing resource efficiency in scene text recognition. Furthermore, the paper discusses advanced techniques for multilingual numeral recognition and writer identification in Indic scripts, highlighting cutting-edge strategies that push the boundaries of OCR technology. By synthesizing findings from the latest literature, this review provides valuable insights into ongoing challenges and future research directions in handwritten text recognition.
APA, Harvard, Vancouver, ISO, and other styles
32

Holila, Holila, Adi Rizky Pratama, Santi Arum Puspita Lestari, and Jamaludin Indra. "INTRODUCTION NATIONAL IDENTIFICATION NUMBER AND NAME ON ID CARD USING OCR (OPTICAL CHARACTER RECOGNITION) METHOD." Jurnal Teknik Informatika (Jutif) 5, no. 4 (2024): 1191–96. https://doi.org/10.52436/1.jutif.2024.5.4.2242.

Full text
Abstract:
This study examines the use of Optical Character Recognition (OCR) methods for the automatic recognition and extraction of text from images of Identity Cards (KTP). The aim is to provide an effective solution to the problems of document forgery and duplication, particularly in the use of KTP as an identity verification tool. Utilizing the Tesseract library, this research involves preprocessing steps such as conversion to grayscale, perspective transformation, and noise reduction to enhance OCR accuracy. Testing was conducted with 50 different KTP images using Python programming, achieving an Optical Character Recognition accuracy rate of 91%. Additionally, tests conducted with a dataset of 50 KTP images containing NIK and name variables showed that all images were successfully detected with an accuracy rate of 90%. This study confirms that the OCR method is effective in reading text from KTP images in real-time, thus it can be implemented for automatic identity verification.
APA, Harvard, Vancouver, ISO, and other styles
33

Obaidullah, Sk Md, K. C. Santosh, Nibaran Das, Chayan Halder, and Kaushik Roy. "Handwritten Indic Script Identification in Multi-Script Document Images: A Survey." International Journal of Pattern Recognition and Artificial Intelligence 32, no. 10 (2018): 1856012. http://dx.doi.org/10.1142/s0218001418560128.

Full text
Abstract:
Script identification is crucial for automating optical character recognition (OCR) in multi-script documents since OCRs are script-dependent. In this paper, we present a comprehensive survey of the techniques developed for handwritten Indic script identification. Different pre-processing and feature extraction techniques, including classifiers used for script identification, are categorized and their merits and demerits are discussed. We also provide information about some handwritten Indic script datasets. Finally, we highlight the extensions and/or future scope of works together with challenges.
APA, Harvard, Vancouver, ISO, and other styles
34

Valentino, Jonathan, and Yeremia Alfa Susetyo. "Analisis Perbandingan Optical Character Recognition Google Vision dengan Microsoft Computer Vision pada Pembacaan KTP-el." Jurnal JTIK (Jurnal Teknologi Informasi dan Komunikasi) 7, no. 4 (2023): 552–61. http://dx.doi.org/10.35870/jtik.v7i4.1046.

Full text
Abstract:
In this era, the need for digital data is rapidly increasing. The Electronic Resident Identity Card, or KTP-el, is the official identity card for residents of Indonesia. One fast way to extract information from an image is to use OCR (Optical Character Recognition). Competition between the Google Vision API and Microsoft Computer Vision in providing OCR services encourages companies to choose the right provider. The method conducted in this research includes a literature review of both OCR service providers, identification and retrieval of KTP-el sample images, data grouping, code implementation and accuracy testing, result analysis and discussion, and conclusion. The results of this research show that Microsoft Computer Vision has better accuracy in reading characters on KTP-el, with an accuracy difference of 0.81% to 15.8% compared to Google Vision. Google Vision has competitive accuracy, but suffers from deficiencies when reading KTP-el images with blur and noise.
APA, Harvard, Vancouver, ISO, and other styles
35

Abdul Samad, Shakeeb M. A. N., Fahri Heltha, and M. Faliq. "The Study of Plate Number Recognition for Parking Security System." International Journal of Advanced Technology in Mechanical, Mechatronics and Materials 1, no. 3 (2020): 100–107. http://dx.doi.org/10.37869/ijatec.v1i3.34.

Full text
Abstract:
A car plate number recognition system is an important platform that can be used to identify a vehicle's identity. The recognition system is based on image processing techniques and computer vision. A webcam is used to capture an image of the car plate number from different distances, and identification is conducted through four stages: acquisition pre-processing, extraction, segmentation, and character recognition. The acquisition pre-processing stage extracts the region of interest of the image: the image is captured from the webcam's live video, then converted to grayscale and a binary image. The extraction stage extracts the plate number characters from the binary image using a connected components method. The segmentation stage is done by implementing a horizontal projection as well as a moving average filter. Lastly, character recognition is used to identify the segmented characters of the plate number using optical character recognition. The proposed method works well for Malaysian private car plate numbers and can be implemented in a car park system to increase the level of security by matching the bar code of the parking ticket and the plate number of the car at the incoming and outgoing gates.
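A short sketch of the two segmentation ideas named in this abstract, connected components for character extraction and a moving-average-smoothed horizontal projection for locating the gaps between characters; the area threshold and window size are assumptions.

```python
import cv2
import numpy as np

plate = cv2.imread("plate_crop.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Connected components: each sufficiently large component is a character blob.
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
chars = [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 40]

# Horizontal projection smoothed with a moving average; low-ink columns mark
# the boundaries between neighbouring characters.
projection = binary.sum(axis=0).astype(float)
smoothed = np.convolve(projection, np.ones(5) / 5, mode="same")
boundaries = np.where(smoothed < 0.05 * smoothed.max())[0]
print(len(chars), "character blobs;", len(boundaries), "low-ink columns")
```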
APA, Harvard, Vancouver, ISO, and other styles
36

U, Nadhiya, Mahalakshmi K, and Kaviyarasu P. "Vehicle Detection and Recognition for Allowing into Premises Using OCR." Journal of Ubiquitous Computing and Communication Technologies 5, no. 2 (2023): 125–32. http://dx.doi.org/10.36548/jucct.2023.2.002.

Full text
Abstract:
The research proposes a system to automatically detect the number plate and recognize it using an optical character recognition method. Before taking an image of the number plate, the developed system first recognizes the car. Image segmentation is used to recover the portion of the image that contains the vehicle identification number. Character recognition is accomplished using an optical character recognition technique, which involves matching techniques to check whether the car plate image matches the data in the database. A warning sign is shown when authenticity is verified, and the car is then permitted to enter the designated area. Real-time video is captured to evaluate the system's functionality, and Python is used to create and simulate the system. Due to its promising nature, the suggested method will be employed for automated vehicle authentication in universities in the future.
APA, Harvard, Vancouver, ISO, and other styles
37

Sharmin, Sabrina, Tasauf Mim, and Mohammad Rahman. "Bangla Optical Character Recognition for Mobile Platforms: A Comprehensive Cross-Platform Approach." American Journal of Electrical and Computer Engineering 8, no. 2 (2024): 31–42. http://dx.doi.org/10.11648/j.ajece.20240802.12.

Full text
Abstract:
The development of Optical Character Recognition (OCR) systems for Bangla script has been an area of active research since the 1980s. This study presents a comprehensive analysis and development of a cross-platform mobile application for Bangla OCR, leveraging the Tesseract OCR engine. The primary objective is to enhance the recognition accuracy of Bangla characters, achieving rates between 90% and 99%. The application is designed to facilitate the automatic extraction of text from images selected from the device's photo library, promoting the preservation and accessibility of Bangla language materials. This paper discusses the methodology, including the preparation of training datasets, preprocessing steps, and the integration of the Tesseract OCR engine within a Dart programming environment for cross-platform functionality. This integration provides that the application could be introduced on mobile platforms without substantial alterations. The results demonstrate significant improvements in recognition accuracy, making this application a valuable tool for various practical applications such as data entry for printed Bengali documents, automatic recognition of Bangla number plates, and the digital archiving of vintage Bangla books. These improvements are crucial to further enhance the usability and reliability of Bangla OCR on mobile devices. Our cross-platform method for Bangla OCR on mobile devices provides a strong solution with exceptional identification accuracy, which helps in preserving and making Bangla language information accessible in digital format. This study has significant implications for future research and advancement in the field of optical character recognition (OCR) for intricate writing systems, especially in mobile settings.
APA, Harvard, Vancouver, ISO, and other styles
38

Mahavarkar, Jait, Siddhi Nirmale, and Pratiksha Kolte. "Virtual Valet." International Journal for Research in Applied Science and Engineering Technology 10, no. 5 (2022): 1645–50. http://dx.doi.org/10.22214/ijraset.2022.42557.

Full text
Abstract:
Virtual Valet is an image processing technology used to retrieve a vehicle's registration number from its number plate in order to identify the vehicle. The main objective is to design an efficient vehicle identification system based on the number plate. This system will help in reducing crime, improving road safety and deterring terrorism. It can be implemented in automatic parking systems and border control, it is useful for detecting stolen cars, and the app is also useful in traffic management. It can be used by police forces and for electronic toll collection. Firstly, the system asks for the vehicle's image. After that, the region of interest, i.e. the number plate, is extracted from the image using image segmentation. For character recognition we have used the optical character recognition technique, which is the last step in vehicle number plate detection. The resulting data are then compared with the database to obtain specific information such as the owner's name, registration number, address, etc. Keywords: (OCR) Optical Character Recognition, Pytesseract, OpenCV, Django
APA, Harvard, Vancouver, ISO, and other styles
39

Chandrika, C. P., and Jagadish S. Kallimani. "Polarity Identification for Handwritten Text in Multilingual Documents Using Open Source Optical Character Recognition Tools." Journal of Computational and Theoretical Nanoscience 17, no. 9 (2020): 4045–49. http://dx.doi.org/10.1166/jctn.2020.9017.

Full text
Abstract:
Sentiment analysis is a prerequisite for many applications. We propose a model that scans handwritten text in the English and Kannada languages with a CamScanner and then translates it into editable text using various open-source Optical Character Recognition tools. The performances of the different OCRs are analyzed and tabulated. Sentiment analysis is performed on statements written in both English and Kannada using WordNet, the Algorithmia REST API and local dictionaries, and satisfactory results were obtained. The same sentiment analysis module is also applied to customer reviews of a mobile product, with reviews taken from Amazon Web Services. The customer's opinion about the product can be identified correctly.
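The polarity step that follows OCR in this model can be approximated with an off-the-shelf sentiment analyzer; the sketch below uses NLTK's VADER as a stand-in for the WordNet/Algorithmia tools named in the abstract, so it illustrates the flow rather than the authors' exact setup, and the file name is a placeholder.

```python
import nltk
import pytesseract
from PIL import Image
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

# OCR the scanned handwritten statement (English; Kannada would need lang="kan").
text = pytesseract.image_to_string(Image.open("scanned_statement.jpg"))

# Polarity identification on the recognized text.
scores = SentimentIntensityAnalyzer().polarity_scores(text)
label = "positive" if scores["compound"] > 0 else "negative"
print(text.strip(), "->", label, scores)
```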
APA, Harvard, Vancouver, ISO, and other styles
40

P, Amruth, Rosemol Jacob M, Suseela Mathew, and Anandan. "Developmental prospectives of new generation super absorbent wound dressing materials from sulfated polysaccharide of marine red algae." Journal of University of Shanghai for Science and Technology 23, no. 09 (2021): 400–408. http://dx.doi.org/10.51201/jusst/21/09559.

Full text
Abstract:
Wound healing remains a dynamic process, and the type of dressing material significantly affects the efficacy of healing. The identification of the ideal dressing to use for a particular wound type is an important requisite that facilitates the entire healing process. Chronic, high-exudate wounds are dynamic in presentation and remain a major health care burden. Researchers have sought to design and optimize biodegradable wound dressings that optimize moisture retentiveness as a superior character in the healing process. In addition, dressings have been designed to visualize the wound bed by improving their optical properties and to target and kill infection-causing bacteria through the incorporation of antimicrobial agents, nanomaterials and numerous other measures. For practitioners, choosing the optimal dressing decreases time to healing, provides cost-effective care and improves patient quality of life. The current mini review highlights the ideal characters of wound dressing materials and presents insights on the superior characters of carrageenan biocomposites for prospective advancements in research in the area of wound care and management.
APA, Harvard, Vancouver, ISO, and other styles
41

Kulkarni, Soham, Rhushabh Madurwar, Rushikesh Narlawar, Anuj Pandya, and Namrata Gawande. "Digitizing Notes using Optical Character Recognition and Automatic Topic Identification and Classification using Natural Language Processing." International Journal for Research in Applied Science and Engineering Technology 11, no. 5 (2023): 5433–40. http://dx.doi.org/10.22214/ijraset.2023.52950.

Full text
Abstract:
Abstract: In today’s world digital documents are a major part of everyone’s life as they have a wide scope of usage. However handwritten notes still contain loads of important and valuable information. In our research, we explore the different methods of Optical Character Recognition, or OCR which can be used for digitizing manual notes. Along with it we deep dive into the concept of Topic Detection and Identification and methods to implement it which are useful for extracting the crux of any document or piece of information. With the aim of integrating both processes into a single system, we study various algorithms involving neural networks like ANN, RNN, and CNN, and methods such as Tesseract, KNN, and LSTM that are used for implementing OCR while techniques such as K means clustering, TF-IDF, LDA and LINGO have been employed to perform topic detection and identification. Based on our study and results from various papers, we have decided to use CNN for OCR
APA, Harvard, Vancouver, ISO, and other styles
42

Yamini, Maidam. "Number Plate Detection in an Image." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 07, no. 09 (2023): 1–11. http://dx.doi.org/10.55041/ijsrem25883.

Full text
Abstract:
Automatic vehicle license plate detection and recognition is a key technique in most traffic-related applications and is an active research topic in the image processing domain. Different methods, techniques and algorithms have been developed for license plate detection and recognition. Due to the varying characteristics of license plates, such as the numbering system, colors, styles and sizes, and because detection and recognition are treated as two separate jobs involving a huge number of factors, identification remains problematic, so further research is still needed in this area. We propose a unified convolutional neural network (CNN), using the F1 score as the metric in a deep learning image-classification setting, which can localize license plates and recognize the letters. Our approach first segments the characters in the license plate and then recognizes each segmented character using Optical Character Recognition (OCR) techniques. Extensive experiments show the effectiveness and efficiency of our proposed approach.
APA, Harvard, Vancouver, ISO, and other styles
43

Mudda, Avinash, P. Sashi Kiran, Ashish Kumar, and Venkata Sreenivas. "Vehicle Allowance System." International Journal for Research in Applied Science and Engineering Technology 11, no. 4 (2023): 1085–89. http://dx.doi.org/10.22214/ijraset.2023.50169.

Full text
Abstract:
License plate detection is an image processing technology that uses the license (number) plate for vehicle identification. The objective is to design and implement an efficient vehicle identification system that identifies a vehicle using its license plate. The system can be implemented at the entrance of parking lots, toll booths, or any private premises such as colleges to keep records of incoming and outgoing vehicles, and it can be used to allow only permitted vehicles inside the premises. The developed system first captures an image of the vehicle's front, then detects the license plate, and then reads it. The vehicle license plate is extracted using image processing, and optical character recognition (OCR) is used for character recognition. The system is implemented using OpenCV and its performance is tested on various images; it is observed that the developed system successfully detects and recognizes the vehicle license plate. To recognize license number plates we use the Python programming language, utilizing OpenCV to identify the license number plates and the Python py-tesseract package to extract the characters and digits from the plate. We build a Python program that automatically recognizes the license number plate.
APA, Harvard, Vancouver, ISO, and other styles
44

Zaheen, Muhammad Yasir, Zia Mohi-u-din, Ali Akber Siddique, and Muhammad Tahir Qadri. "Exhaustive Security System Based on Face Recognition Incorporated with Number Plate Identification using Optical Character Recognition." January 2020 39, no. 1 (2020): 145–52. http://dx.doi.org/10.22581/muet1982.2001.14.

Full text
Abstract:
In recent times, due to the rise in terrorism, people need to live in safer places where unidentified persons are not allowed to enter the premises. Securing major areas is a vital issue that intelligence and security agencies need to address. Around the premises, CCTV (Closed-Circuit Television) cameras are usually installed to identify number plates against a database using an OCR (Optical Character Recognition) algorithm. Securing a site by identifying only the vehicle, without verifying the person inside it, causes serious security issues. Identification of a person is usually done through image processing using the Viola-Jones algorithm, which acquires information about the facial components to create a dataset for machine learning. It is therefore imperative to introduce a system capable of identifying the person along with the vehicle’s number plate from the stored database. In this research, a comprehensive security system based on face recognition integrated with vehicle number plate identification is proposed. The combined information from both dedicated cameras is transferred to the base station for identification. This system is capable of securing premises from crime in a more robust way.
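A minimal sketch of the two-factor idea, assuming OpenCV’s bundled Haar cascade for Viola-Jones face detection and pytesseract for the plate, is shown below; the frame file names and the database lookup are placeholders.

```python
# Minimal sketch: Viola-Jones (Haar cascade) face detection on one camera frame
# plus OCR of a plate crop from the other; the database comparison is only hinted.
import cv2
import pytesseract

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

driver_frame = cv2.imread("driver_camera.jpg")            # hypothetical frame
gray = cv2.cvtColor(driver_frame, cv2.COLOR_BGR2GRAY)
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

plate_crop = cv2.imread("plate_camera.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
plate_text = pytesseract.image_to_string(plate_crop, config="--psm 7").strip()

# A deployed system would match both readings against the stored database here.
print(f"Faces detected: {len(faces)}, plate read: {plate_text}")
```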
APA, Harvard, Vancouver, ISO, and other styles
45

Parathra Sreedharanpillai, Ambili, Biku Abraham, and Arun Kotapuzakal Varghese. "Optically processed Kannada script realization with Siamese neural network model." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 1112. http://dx.doi.org/10.11591/ijai.v13.i1.pp1112-1118.

Full text
Abstract:
Optical character recognition (OCR) is a technology that allows computers to recognize and extract text from images or scanned documents. It is commonly used to convert printed or handwritten text into a machine-readable format. This study presents an OCR system for Kannada characters based on a Siamese neural network (SNN). The SNN, a deep neural network comprising two identical convolutional neural networks (CNNs), compares the scripts and ranks them by dissimilarity; when a lower dissimilarity score is obtained, the pair is predicted to be a character match. The authors use 5 classes of Kannada characters, which are first preprocessed with grayscale conversion and converted to PGM format, then fed directly into the deep convolutional network, which learns from matching and non-matching image pairs using a contrastive loss function in the Siamese architecture. The proposed OCR system requires much less time and gives more accurate results than a regular CNN. The model can become a powerful tool for identification, particularly in situations where there is a high degree of variation in writing styles or limited training data is available.
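A minimal sketch of the Siamese arrangement described here, written with TensorFlow/Keras, is given below; the input size, embedding width, and contrastive-loss margin are assumptions, not values taken from the paper.

```python
# Minimal sketch: two identical CNN branches with shared weights, a Euclidean
# distance head, and a contrastive loss, as in a Siamese character matcher.
import tensorflow as tf
from tensorflow.keras import layers, Model

class EuclideanDistance(layers.Layer):
    """Distance between two batches of embeddings."""
    def call(self, inputs):
        a, b = inputs
        return tf.sqrt(tf.reduce_sum(tf.square(a - b), axis=1, keepdims=True) + 1e-9)

def embedding_net():
    inp = layers.Input(shape=(64, 64, 1))         # preprocessed grayscale character
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64)(x)                       # embedding vector
    return Model(inp, x)

base = embedding_net()                            # shared between both branches
left = layers.Input(shape=(64, 64, 1))
right = layers.Input(shape=(64, 64, 1))
distance = EuclideanDistance()([base(left), base(right)])
siamese = Model([left, right], distance)

def contrastive_loss(y_true, y_pred, margin=1.0):
    # y_true = 1 for matching character pairs, 0 for non-matching pairs.
    y_true = tf.cast(y_true, y_pred.dtype)
    return tf.reduce_mean(y_true * tf.square(y_pred) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - y_pred, 0.0)))

siamese.compile(optimizer="adam", loss=contrastive_loss)
siamese.summary()
```

A low predicted distance is interpreted as a character match, mirroring the dissimilarity ranking described in the abstract.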
APA, Harvard, Vancouver, ISO, and other styles
46

Parathra, Sreedharanpillai Ambili, Biku Abraham, and Varghese Arun Kotapuzakal. "Optically processed Kannada script realization with Siamese neural network model." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 1 (2024): 1112–18. https://doi.org/10.11591/ijai.v13.i1.pp1112-1118.

Full text
Abstract:
Optical character recognition (OCR) is a technology that allows computers to recognize and extract text from images or scanned documents. It is commonly used to convert printed or handwritten text into a machine-readable format. This study presents an OCR system for Kannada characters based on a Siamese neural network (SNN). The SNN, a deep neural network comprising two identical convolutional neural networks (CNNs), compares the scripts and ranks them by dissimilarity; when a lower dissimilarity score is obtained, the pair is predicted to be a character match. The authors use 5 classes of Kannada characters, which are first preprocessed with grayscale conversion and converted to PGM format, then fed directly into the deep convolutional network, which learns from matching and non-matching image pairs using a contrastive loss function in the Siamese architecture. The proposed OCR system requires much less time and gives more accurate results than a regular CNN. The model can become a powerful tool for identification, particularly in situations where there is a high degree of variation in writing styles or limited training data is available.
APA, Harvard, Vancouver, ISO, and other styles
47

Rizal Fitrian, Mohamad, and Dion Ogi. "Implementasi RFID dan Optical Character Recognition (OCR) pada Prototipe Sistem Gerbang Otomatis." Info Kripto 19, no. 1 (2025): 1–12. https://doi.org/10.56706/ik.v19i1.115.

Full text
Abstract:
Security concerns at various types of locations such as hotels, government facilities, and commercial buildings continue to grow, demanding stricter identification of incoming vehicles and tighter access control. Traditionally, this task is handled by human security officers who must stay on duty for hours, inspect every vehicle entering the area, and record its data manually. The Politeknik Siber dan Sandi Negara campus, as an example of a government institution with restricted access, only allows entry to authorized parties. One possible threat is an obfuscation attack, in which attackers disguise their identity while driving a vehicle and can deceive security officers. One way to address this is to implement stricter access control using Radio Frequency Identification (RFID) technology. In addition, RFID tag cloning attacks are also a threat, so an additional authentication factor such as vehicle license plate recognition can be added to improve security. This research applies RFID technology in an automatic gate system that uses vehicle license plate recognition based on Optical Character Recognition (OCR). The results show an authentication success rate of about 15%, with an average processing time of about 35.42 seconds. The success rate in detecting each character of license plates with a black background is about 60.17%, while license plates with a red background reach a success rate of about 79.82%. It is important to note that a technology such as easyOCR is not suitable for this device because it requires dedicated hardware support such as CUDA or MPS.
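The two-factor check the prototype performs can be sketched as below, assuming easyOCR for the plate reading; the RFID read is mocked and the whitelist is a hypothetical example, since the hardware side of the prototype is not reproduced here.

```python
# Minimal sketch: an RFID tag UID and an OCR reading of the plate must both
# match a registered record before the gate opens. easyOCR runs in CPU mode.
import easyocr

WHITELIST = {"04A2B6C1": "B1234XYZ"}              # hypothetical tag UID -> plate

def read_plate(image_path: str) -> str:
    reader = easyocr.Reader(["en"], gpu=False)    # CPU-only, as on the prototype
    texts = reader.readtext(image_path, detail=0)
    return "".join(texts).replace(" ", "").upper()

def gate_allows(tag_uid: str, image_path: str) -> bool:
    expected = WHITELIST.get(tag_uid)
    return expected is not None and read_plate(image_path) == expected

# Both factors must agree before the barrier is raised.
print(gate_allows("04A2B6C1", "captured_plate.jpg"))
```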
APA, Harvard, Vancouver, ISO, and other styles
48

Setiawan, Santoso, Daning Nur Sulistyowati, and Nurman Machmud. "IMPLEMENTATION OF IMAGE PROCESSING IN THE RECOGNITION OF OFFICIAL VEHICLE LICENSE PLATES." JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer) 9, no. 1 (2023): 23–29. http://dx.doi.org/10.33480/jitk.v9i1.4181.

Full text
Abstract:
Vehicle license plates are identifiers used to uniquely identify vehicles. However, identifying them presents several problems: license plates come in different formats, which makes recognition more complicated; they often contain visually similar combinations of letters and numbers (for example, the letter "O" and the number "0", or the letter "I" and the number "1"); and in poor lighting conditions the plates may not be clearly visible. To solve these problems, image recognition, image processing, and pattern recognition technologies can be used. These three technologies can recognize the characters on vehicle license plates, but cannot yet recognize the colors on them. The purpose of this research is to identify and record vehicle license plate numbers quickly and accurately, monitor the presence of vehicles in a supervised area, assist in managing parking, and reduce the need for human interaction in the vehicle identification process. The methods used to recognize motor vehicle plates are edge detection and character segmentation, which involve image processing to detect the edges of the vehicle plate followed by segmentation of the individual characters in the plate. Another method used is optical character recognition, which involves using an optical sensor to capture an image of a vehicle plate and then applying character recognition techniques to identify the numbers and letters on the plate. The result of this research is a motor vehicle number recognition system that can work in varied lighting and poor weather conditions and can monitor and control vehicles in the parking area. A further finding is that no method has yet been applied for color recognition on motor vehicle plates.
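The edge-detection and character-segmentation steps described above can be sketched as follows, assuming OpenCV and pytesseract; the thresholds, size filter, and file name are illustrative assumptions.

```python
# Minimal sketch: binarize a plate crop, find per-character contours,
# and read each crop with Tesseract in single-character mode (--psm 10).
import cv2
import pytesseract

plate = cv2.imread("plate_crop.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical crop
_, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
boxes = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])

chars = []
for x, y, w, h in boxes:
    if h > 0.4 * plate.shape[0]:                  # keep character-sized blobs only
        symbol = pytesseract.image_to_string(
            plate[y:y + h, x:x + w],
            config="--psm 10 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789",
        ).strip()
        chars.append(symbol)

print("Plate reading:", "".join(chars))
```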
APA, Harvard, Vancouver, ISO, and other styles
49

Ravikumar, Hodikehosahally Channegowda, Srinivasaiah Raghavendra, Kumar Jankatti Santosh, Meenakshi Meenakshi, Shravanabelagola Jinachandra Niranjana, and Kumar Tavarekere Hombegowda Raveendra. "Detection and identification of un-uniformed shape text from blurred video frames." IAES International Journal of Artificial Intelligence (IJ-AI) 13, no. 4 (2024): 4795–805. https://doi.org/10.11591/ijai.v13.i4.pp4795-4805.

Full text
Abstract:
The identification and recognition of text from video frames have received a lot of attention recently, which makes many computer vision-based applications conceivable. In this study, we modify the image mask and the original detection stage of the mask region convolutional neural network (Mask R-CNN) to permit detection at three levels: holistic, sequence, and pixel. Semantics at the pixel and holistic levels can be used to identify the texts and determine their shapes, and with masking and detection, character and word instances are separated and recognised. In addition, text detection is performed on the results of 2-D feature-space instance segmentation. Moreover, we explore text recognition using an attention-based optical character recognition (OCR) method combined with the Mask R-CNN to address smaller and blurrier texts at the sequence level: using feature maps of the word instances in a sequence-to-sequence manner, the OCR method predicts the character sequence. Finally, a fine-grained learning strategy is proposed that constructs word-level models from the annotated datasets, resulting in a more precise and reliable trained model. The well-known benchmark datasets ICDAR 2013 and ICDAR 2015 are used to test our suggested methodology.
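The detection backbone the study builds on can be exercised with an off-the-shelf Mask R-CNN, as in the sketch below; the COCO-pretrained weights are only a stand-in for the text-trained model, and the frame file name is a placeholder.

```python
# Minimal sketch: run a Mask R-CNN instance-segmentation model on a video frame.
# COCO weights are a stand-in; the paper fine-tunes on ICDAR text data and adds
# an attention-based recognizer on top of the detected regions.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("video_frame.jpg").convert("RGB")      # hypothetical frame
with torch.no_grad():
    output = model([to_tensor(frame)])[0]

keep = output["scores"] > 0.5                     # confident instances only
print("instances kept:", int(keep.sum()), "box tensor:", output["boxes"][keep].shape)
```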
APA, Harvard, Vancouver, ISO, and other styles
50

Deepali Godse, Chaitali Nigade, Nilofar Mulla, and Shital Jadhav. "Fake Product Identification Using Machine Learning." International Research Journal on Advanced Engineering and Management (IRJAEM) 6, no. 07 (2024): 2437–42. http://dx.doi.org/10.47392/irjaem.2024.0351.

Full text
Abstract:
Recognizing counterfeit goods can be difficult in some situations: if buyers do not thoroughly inspect a product’s details, it becomes simpler to create and sell counterfeit goods. This paper offers a better alternative, employing machine learning, for less tech-savvy customers, who can scan a product with a smartphone application to check the authenticity of the item received. The main focus is the detection of logos, which includes both their visual and textual representations, and the model also includes sentiment analysis of the product’s reviews; together these are used to predict the authenticity of a product. The paper describes a Fake Product Identification Model developed using a Convolutional Neural Network (CNN) and Optical Character Recognition (OCR). The model determines whether a product is real or fake, so the user can make an informed decision before buying it.
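A minimal sketch of how the three signals mentioned in the abstract (a logo CNN, OCR of the printed brand text, and review sentiment) might be fused is shown below; the Keras model file, brand name, thresholds, and the use of TextBlob for sentiment are all illustrative assumptions.

```python
# Minimal sketch: fuse a CNN logo verdict, an OCR brand-text check, and review
# sentiment into one real/fake decision. Model file and thresholds are hypothetical.
import numpy as np
import pytesseract
from PIL import Image
from textblob import TextBlob
from tensorflow import keras

def is_genuine(photo_path, reviews, brand="ACME",
               logo_model_path="logo_cnn.h5"):    # hypothetical trained CNN
    img = Image.open(photo_path).convert("L")

    # 1. OCR the packaging and check the brand name appears in the printed text.
    text_ok = brand.lower() in pytesseract.image_to_string(img).lower()

    # 2. CNN logo check (model assumed to output P(genuine logo) for a 64x64 crop).
    model = keras.models.load_model(logo_model_path)
    arr = np.array(img.resize((64, 64)), dtype="float32")[None, ..., None] / 255.0
    logo_ok = float(model.predict(arr)[0][0]) > 0.5

    # 3. Average review sentiment must not be strongly negative.
    sentiment_ok = np.mean([TextBlob(r).sentiment.polarity for r in reviews]) > -0.2

    return text_ok and logo_ok and sentiment_ok
```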
APA, Harvard, Vancouver, ISO, and other styles