To see the other types of publications on this topic, follow the link: Code recognition.

Dissertations / Theses on the topic 'Code recognition'

Consult the top 50 dissertations / theses for your research on the topic 'Code recognition.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Visaggi, Salvatore. "Multimodal Side-Tuning for Code Snippets Programming Language Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2021. http://amslaurea.unibo.it/22993/.

Full text
Abstract:
Automatically identifying the programming language of a piece of source code is a task that still presents several difficulties. The number of programming languages, the amount of code published as open source, and the number of developers producing and publishing new source code are all continuously growing. There are many reasons for needing tools that can recognise the language of source-code snippets: such tools are used, for example, in source-code search, in searching for possible vulnerabilities in code, in syntax highlighting, or simply to understand the content of software projects. This creates the need for datasets of code snippets properly aligned with their programming language. StackOverflow, a knowledge-sharing platform for developers, gives access to hundreds of thousands of source-code snippets written in the languages most used by developers, making it the ideal place from which to extract snippets for the proposed task. This work devotes considerable attention to that problem, iterating on the chosen approach in order to obtain a methodology that allowed an adequate dataset to be extracted. To solve the language-identification task for the snippets extracted from StackOverflow, the work uses a multimodal approach (considering both textual and image representations of the snippets), examining the recent side-tuning technique (based on the incremental adaptation of a pre-trained neural network). The results obtained are comparable with the state of the art and in some cases better, given the difficulty of the task for source-code snippets with only a few lines of code.
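As a rough illustration of the side-tuning idea mentioned above (a frozen pre-trained base network blended with a small trainable side network), the following sketch shows one common formulation in PyTorch; the class name, feature dimension and blending scheme are assumptions for illustration, not the architecture actually used in the thesis.

import torch
import torch.nn as nn

class SideTuning(nn.Module):
    """Blend a frozen pre-trained base network with a small trainable side network."""

    def __init__(self, base: nn.Module, side: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.base = base                                 # pre-trained network, kept frozen
        for p in self.base.parameters():
            p.requires_grad = False
        self.side = side                                 # lightweight network trained from scratch
        self.alpha = nn.Parameter(torch.tensor(0.0))     # learned blending weight
        self.head = nn.Linear(feat_dim, num_classes)     # e.g. one class per programming language

    def forward(self, x):
        a = torch.sigmoid(self.alpha)                    # keep the blend in [0, 1]
        feats = a * self.base(x) + (1 - a) * self.side(x)
        return self.head(feats)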
APA, Harvard, Vancouver, ISO, and other styles
2

Assal, Mohamed Helmy Anwar Mohamed. "A study of low level vision algorithms in bar code recognition." Thesis, University of Kent, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.328035.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Walker, Michael. "Designing the Haptic Interface for Morse Code." Scholar Commons, 2016. http://scholarcommons.usf.edu/etd/6600.

Full text
Abstract:
Two siblings have a muscular degenerative condition that has rendered them mostly blind, deaf and paraplegic. Currently, the siblings receive communication by close-range sign language several feet in front of their vision. Due to the degenerative nature of their condition, it is believed that the siblings will eventually become completely blind and unable to communicate in this fashion. There are no augmented communication devices on the market that allow communication reception for individuals who cannot see, hear or possess hand dexterity (such as braille reading). To help the siblings communicate, the proposed communication device will transmit Morse code information tactilely, with vibration motors applied to either the forearm or bicep in the form of a wearable armband. However, no research has been done to determine the best haptic interface for displaying Morse code in a tactile modality. This research investigates multiple haptic interfaces that aim to alleviate common mistakes made in Morse code reception. The results show that a bimanual setup, discriminating dots and dashes by left/right location, yields 56.6% of the Morse code errors made under a unimanual setup that uses temporal discrimination to distinguish dots and dashes. The bimanual condition resulted in less judgment interference, either because the brain has an easier time processing two separate tasks when judgments are shared between the hemispheres or because a judgment buffer effect is present for temporal discrimination.
APA, Harvard, Vancouver, ISO, and other styles
4

Ren, Manling. "Algorithms for off-line recognition of Chinese characters." Thesis, Nottingham Trent University, 1998. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.245175.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Steven. "A Novel Approach to Iris Localization and Code Matching for Iris Recognition." NSUWorks, 2009. http://nsuworks.nova.edu/gscis_etd/346.

Full text
Abstract:
In recent years, computing power and biometric sensors have not only become more powerful, but also more affordable to the general public. In turn, there has been great interest in developing and deploying biometric personal ID systems. Unlike conventional security systems, which often require people to provide artificial identification for verification, i.e. passwords or algorithmically generated keys, biometric security systems use an individual's biometric measurements, including fingerprint, face, hand geometry, and iris. It is believed that these measurements are unique to the individual, making them much more reliable and less likely to be stolen, lost, forgotten, or forged. Among these biometric measurements, the iris is regarded as one of the most reliable and accurate security approaches because it is an internal organ protected by the body's own biological mechanisms. It is easy to access, and almost impossible to modify without the risk of damaging the iris. Although there have been significant advancements in developing iris-based identification processes during recent years, there remains significant room for improvement. This dissertation presents a novel approach to iris localization and code matching. It uses a fixed-diameter method and a parabolic curve fitting approach for locating the iris and eyelids, as well as a k-d tree for iris matching. The iris recognition rate is improved by accurately locating the eyelids and eliminating the signal noise in an eye image. Furthermore, the overall system performance is increased significantly by using a partial iris image and taking advantage of the k-d binary tree. We present the research results of four processing stages of iris recognition: localization, normalization, feature extraction, and code matching. The localization process is based on histogram analysis, morphological processing, Canny edge detection, and parabolic curve fitting. The normalization process adopts Daugman's rubber-sheet approach and converts the iris image from Cartesian coordinates to polar coordinates. In the feature extraction process, the feature vectors are created and quantized using a 1-D Log-Gabor wavelet. Finally, the iris code matching process is conducted using a k-dimensional binary tree and the Hamming distance.
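To make the matching step concrete, here is a minimal sketch of comparing two binary iris codes by Hamming distance with an occlusion mask; the array layout and the decision threshold are illustrative assumptions, not the values used in the dissertation.

import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of disagreeing bits, counted only over the bits valid in both codes."""
    valid = mask.astype(bool)
    if valid.sum() == 0:
        return 1.0
    return float(np.count_nonzero(code_a[valid] != code_b[valid]) / valid.sum())

# Two codes are declared a match when the distance falls below a tuned threshold.
MATCH_THRESHOLD = 0.32   # illustrative value only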
APA, Harvard, Vancouver, ISO, and other styles
6

Miranda, Rafael. "Sequence Specific RNA Recognition by Pentatricopeptide Repeat Proteins: Beyond the PPR Code." Thesis, University of Oregon, 2018. http://hdl.handle.net/1794/23135.

Full text
Abstract:
Pentatricopeptide repeat (PPR) proteins are helical-repeat proteins that bind RNAs in a simple 1-repeat:1-nucleotide manner. Nucleotide specificity is determined by an amino acid code, the PPR code. This modular interaction mode, predictable code for nucleotide specificity, and simple repeating architecture make them a promising scaffold for engineering proteins to bind custom RNA sequences and for binding-site prediction of native PPR proteins. Despite these features, alignments of the binding sites of well-characterized PPR proteins to their predicted binding sites often have mismatches and discontinuities, suggesting a tolerance for mismatches. In order to maximize the ability to predict the binding sites of native PPR proteins and to effectively generate designer PPR proteins with predictable specificity, it will be important to address how affinity and specificity are distributed across a PPR tract. I developed a high-throughput bind-n-seq technique to rapidly and thoroughly address these questions. The affinity and specificity of the native PPR protein PPR10 were determined using bind-n-seq. The results demonstrate that not all of PPR10's repeats contribute equally to binding affinity, and there were sequence-specific interactions that could not be explained by the PPR code, suggesting alternate modes of nucleotide recognition. A similar analysis of four different designer PPR proteins showed that they recognize RNA according to the code and lack any alternate modes of nucleotide recognition, implying that the non-canonical sequence-specific interactions represent idiosyncratic features of PPR10. This analysis also showed that N-terminal and purine-specifying repeats have greater contributions to binding affinity, and that longer scaffolds have a greater tolerance for mismatches. Together, these findings highlight the challenges for binding-site prediction and present implications for the design of PPR proteins with minimum off-target binding. This dissertation contains previously published and unpublished co-authored material.
APA, Harvard, Vancouver, ISO, and other styles
7

Lategano, Antonio. "Image-based programming language recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/22208/.

Full text
Abstract:
This thesis addresses, for the first time, the problem of programming-language classification using image-based approaches. We used several Convolutional Neural Networks pre-trained on image-classification tasks, adapting them to classify images containing portions of source code written in 149 different programming languages. Our results showed that these models can learn, with good performance, the lexical features present in the text. By adding noise, through modification of the characters in the images, we were able to understand which characters best allowed the models to discriminate between one class and another. The result, confirmed with visualisation techniques such as Class Activation Mapping, is that the network learns low-level lexical features, focusing in particular on the symbols typical of each programming language (such as punctuation and brackets) rather than on alphanumeric characters.
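A minimal sketch of the general approach described above, adapting a CNN pre-trained on image classification to images of source code, might look as follows in PyTorch/torchvision; the backbone choice and training details are assumptions, not necessarily those of the thesis.

import torch.nn as nn
from torchvision import models

NUM_LANGUAGES = 149
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_LANGUAGES)               # new classification head
# The model is then fine-tuned on images rendered from source-code snippets.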
APA, Harvard, Vancouver, ISO, and other styles
8

Shafiee, Sarvestani Amin. "Automated Recognition of Algorithmic Patterns in DSP Programs." Thesis, Linköpings universitet, PELAB - Laboratoriet för programmeringsomgivningar, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-73934.

Full text
Abstract:
We introduce an extensible knowledge-based tool for idiom (pattern) recognition in DSP (digital signal processing) programs. Our tool utilizes functionality provided by the Cetus compiler infrastructure for detecting certain computation patterns that frequently occur in DSP code. We focus on recognizing patterns for for-loops and statements in their bodies, as these often are the performance-critical constructs in DSP applications for which replacement by highly optimized, target-specific parallel algorithms will be most profitable. For better structuring and efficiency of pattern recognition, we classify patterns by different levels of complexity such that patterns in higher levels are defined in terms of lower-level patterns. The tool works statically on the intermediate representation (IR). It traverses the abstract syntax tree IR in post-order and applies bottom-up pattern matching, at each IR node utilizing information about the patterns already matched for its children or siblings. For better extensibility and abstraction, most of the structural part of the recognition rules is specified in XML form to separate the tool implementation from the pattern specifications. Information about detected patterns will later be used for optimized code generation by local algorithm replacement, e.g. for the low-power high-throughput multicore DSP architecture ePUMA.
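The bottom-up, post-order matching strategy can be sketched in a few lines of Python; the node kinds and the two toy rules below are illustrative stand-ins, not the Cetus IR or the XML rule set used by the tool.

from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                                   # e.g. "for", "add", "mul"
    children: list = field(default_factory=list)
    patterns: set = field(default_factory=set)  # patterns matched at this node

RULES = {
    # pattern name -> predicate over (node, list of child pattern sets)
    "mac": lambda n, kids: n.kind == "add" and any("mul" in k for k in kids),
    "dot_product_loop": lambda n, kids: n.kind == "for" and any("mac" in k for k in kids),
}

def match(node: Node) -> set:
    kid_patterns = [match(c) for c in node.children]   # post-order traversal
    node.patterns = {node.kind}                        # level-0 pattern: the node itself
    for name, rule in RULES.items():                   # higher-level patterns build on lower ones
        if rule(node, kid_patterns):
            node.patterns.add(name)
    return node.patterns

# Example: a for-loop whose body accumulates a product is tagged "dot_product_loop".
tree = Node("for", [Node("add", [Node("mul"), Node("load")])])
print(match(tree))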
APA, Harvard, Vancouver, ISO, and other styles
9

Gibson, Maryika Ivanova. "Effective Strategies for Recognition and Treatment of In-Hospital Strokes." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6756.

Full text
Abstract:
In-hospital onset strokes represent 4% to 20% of all reported strokes in the United States. The variability of treatment protocols and workflows, as well as the complex etiology and multiple comorbidities of the in-hospital stroke subpopulation, often result in unfavorable outcomes and higher mortality rates compared to those who experience strokes outside of the hospital setting. The purpose of this project was to conduct a systematic review to identify and summarize effective strategies and practices for prompt recognition and treatment of in-hospital strokes. The results of the literature review were correlated with leading-edge guidelines for stroke care to formulate recommendations at an organizational level for improving care delivery and workflow. Peer-reviewed publications and literature not controlled by publishers were analyzed. An appraisal of 24 articles was conducted, using the guide for classification of level of evidence by Fineout-Overholt, Melnyk, Stillwell, and Williamson. The results of this systematic review revealed that the most effective strategies and practices for prompt recognition and treatment of in-hospital strokes included staff education, creating a dedicated responder team, analysis and improvement of internal processes to shorten the time from discovery to diagnosis, and offering appropriate evidence-based treatments according to acute stroke guidelines. Creating organizational protocols and quality metrics to promote timely and evidence-based care for in-hospital strokes may result in a positive social change by eliminating the existing care disparities between community and in-hospital strokes and improving the health outcomes of this subpopulation of strokes.
APA, Harvard, Vancouver, ISO, and other styles
10

Tadokoro, Yukihiro, Hiraku Okada, Takaya Yamazato, and Masaaki Katayama. "Application of Successive Interference Cancellation to a Packet-Recognition/Code-Acquisition Scheme in CDMA Unslotted ALOHA Systems." IEICE, 2005. http://hdl.handle.net/2237/7224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Putta, Advaith. "Implementation of Augmented Reality applications to recognize Automotive Vehicle using Microsoft HoloLens : Performance comparison of Vuforia 3-D recognition and QR-code recognition Microsoft HoloLens applications." Thesis, Blekinge Tekniska Högskola, Institutionen för datalogi och datorsystemteknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-17651.

Full text
Abstract:
Context. Volvo Construction Equipment is planning to use the Microsoft HoloLens as a tool for on-site managers to keep track of automotive machines and obtain their corresponding work information. For that, a miniature site has been built at PDRL BTH consisting of three different automotive vehicles. We are developing Augmented Reality applications for the Microsoft HoloLens to recognize these automotive vehicles, and there is a need to identify the most feasible recognition method that can be implemented on the device. Objectives. In this study, we investigate which of the Vuforia 3-D recognition method and the QR-code recognition method is best suited to the Microsoft HoloLens, and we also find out the maximum distance at which an automotive vehicle can be recognized by the Microsoft HoloLens. Methods. We conducted a literature review in which articles from IEEE Xplore, the ACM Digital Library, Google Scholar and Scopus were reviewed; seventeen articles were selected after reading the titles and abstracts of the articles obtained from the search. Two experiments were then performed to find out the best recognition method for the Microsoft HoloLens and the maximum distance at which an automotive vehicle can be recognized. Results. The QR-code recognition method is the best recognition method for the Microsoft HoloLens when recognizing automotive vehicles in the range of one to two feet, and the Vuforia 3-D recognition method is recommended for distances of more than two feet. Conclusions. We conclude that the QR-code recognition method is suitable for recognizing vehicles at close range (1-2 feet) and Vuforia 3-D object recognition is suitable for distances over two feet. The two methods differ: one uses a 3-D scan of the vehicle to recognize it, while the other uses image recognition of unique QR-codes. We covered the effect of distance on the recognition capability of the application, and further work is needed on how the QR-code size affects the maximum distance at which an automotive vehicle can be recognized. We conclude that further experimentation is needed in order to find out the impact of QR-code size on the maximum recognition distance.
APA, Harvard, Vancouver, ISO, and other styles
12

Chan-Reynolds, Michael G. "Reading aloud is not automatic: Processing capacity is required to generate a phonological code from print." Thesis, University of Waterloo, 2005. http://hdl.handle.net/10012/753.

Full text
Abstract:
The process of generating a phonological code from print is widely described as automatic. This claim is tested in Chapter 1 by assessing whether phonological recoding uses central attention in the context of the Psychological Refractory Period (PRP) paradigm. Task 1 was a tone discrimination task and Task 2 was reading aloud. Nonword letter length and grapheme-phoneme complexity yielded additive effects with SOA in Experiments 1 and 2, suggesting that assembled phonology uses central attention. Neighborhood density (N) yielded additive effects with SOA in Experiments 3 and 4, suggesting that one form of lexical contribution to phonological recoding also uses central attention. Taken together, the results of these experiments are inconsistent with the widespread claim that phonological codes are computed automatically. Chapter 2 begins by reconsidering the utility of 'automaticity' as an explanatory framework. It is argued that automaticity should be replaced by accounts that make more specific claims about how processing unfolds. Experiment 5 yielded underadditivity of long-lag word repetition priming with decreasing SOA, suggesting that an early component of the lexical contribution to phonology does not use central attention. There was no evidence of Task 1 slowing with decreasing SOA in Experiments 6 and 7, suggesting that phonological recoding processes are postponed until central attention becomes available. Theoretical development in this field (and others) will be facilitated by abandoning the idea that skilled performance inevitably means that all the underlying processes are automatic.
APA, Harvard, Vancouver, ISO, and other styles
13

Buchini, Sabrina. "2'-O-Aminoethyl-oligoribonucleotides in DNA triple-helix formation : extending the sequence recognition code to three base pairs /." [S.l.] : [s.n.], 2004. http://www.zb.unibe.ch/download/eldiss/04buchini_s.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Luvizotto, André Luiz. "The Encoding and decoding of complex visual stimuli : a neural model to optimize and read out a temporal population code." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/94143.

Full text
Abstract:
The mammalian visual system has a remarkable capacity to process a large amount of information within milliseconds, under widely varying conditions, into invariant representations. Recently, a model of the primary visual system exploited the dense local excitatory connectivity that is unique to the neocortex to match these criteria. The model rapidly generates invariant representations by integrating the activity of spatially distributed modelled neurons into a so-called Temporal Population Code (TPC). In this thesis, we first investigate an issue that has persisted since TPC was introduced: extending the concept to a biologically compatible readout stage. We propose a novel neural readout circuit, based on the wavelet transform, that decodes the TPC over different frequency bands. We show that, in comparison with the purely linear readouts used previously, the proposed system provides a robust, fast and highly compact representation of the visual input. We then generalized this optimized encoding-decoding paradigm to a number of real-world robotics tasks in order to investigate its robustness. Our results show that complex stimuli such as human faces, hand gestures and environmental cues can be reliably encoded by the TPC, which provides a powerful, biologically plausible framework for real-time object recognition. In addition, our results suggest that the representation of sensory input can be built into a spatio-temporal code interpreted and parsed into a series of wavelet-like components by higher visual areas.
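As a rough sketch of the readout principle described above, the band energies of a temporal population signal can be computed with a discrete wavelet transform using the PyWavelets package; the wavelet family and decomposition level are illustrative assumptions, not the circuit modelled in the thesis.

import numpy as np
import pywt

def band_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Energy of the TPC-like signal in each wavelet frequency band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([float(np.sum(c ** 2)) for c in coeffs])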
APA, Harvard, Vancouver, ISO, and other styles
15

Aljada, Muhsen. "Design and analysis of high-speed optical correlators for multiwavelength optical header recognition and optical code division multiple access." Thesis, Edith Cowan University, Research Online, Perth, Western Australia, 2007. https://ro.ecu.edu.au/theses/310.

Full text
Abstract:
Optical correlators are attractive elements for packet-switched optical networks because they enable the headers of high-speed optical packets to be processed and recognised "on-the-fly", thus switching the packets to different destinations without the need for optical-to-electrical and electrical-to-optical conversions. In the first part of the thesis, three novel all-optical header recognition structures based on time-domain optical correlation are proposed and experimentally demonstrated. The novel optical correlator structures for header recognition are based on the use of Opto-VLSI processors, fibre Bragg gratings, and arrayed waveguide gratings, respectively, and are demonstrated at 10 Gb/s by generating auto-correlation functions with high peaks whenever the optical header bit pattern matches a predetermined pattern in the lookup table, while for other bit patterns only low-intensity (below a threshold level) cross-correlation functions are produced. As a result, these structures eliminate the bottleneck that exists in conventional optical packet switching networks, thus greatly enhancing the performance of such networks.
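The recognition principle, correlation against a lookup table with a threshold on the peak, can be mimicked in software as below; the bit patterns and the exact-peak test are illustrative and say nothing about the optical implementation itself.

import numpy as np

LOOKUP = {"destination_A": [1, 0, 1, 1], "destination_B": [1, 1, 0, 1]}

def recognise_header(header_bits, lookup=LOOKUP):
    h = 2 * np.asarray(header_bits) - 1            # map {0,1} to {-1,+1}
    for name, bits in lookup.items():
        p = 2 * np.asarray(bits) - 1
        peak = int(np.correlate(h, p, mode="valid").max())
        if peak == len(p):                         # full auto-correlation peak => exact match
            return name
    return None                                    # only low cross-correlation values otherwise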
APA, Harvard, Vancouver, ISO, and other styles
16

Sauval, Karinne. "Apprentissage de la lecture et phonologie : implication du code phonologique dans la reconnaissance de mots écrits chez l'enfant." Thesis, Lille 3, 2014. http://www.theses.fr/2014LIL30045/document.

Full text
Abstract:
We conducted five studies to examine the role of the phonological code in visual word recognition in children more or less advanced in learning to read. For this, we used the priming paradigm in visual, auditory and cross-modal versions. This paradigm allows the phonological and orthographic processes engaged in visual word recognition to be studied on-line and precisely. Studies 1 and 2 indicate that, in Grade 3 and Grade 5, speech representations are involved in the silent reading of pseudowords, in a phonetic-feature format, and in visual familiar word recognition, in a phonemic format. Study 3 indicates that the phonological code contributes to visual word recognition in a stable manner through Grade 3 and Grade 5. Nevertheless, when lexical orthographic representations are not well specified, the phonological contribution is greater. Studies 4 and 5, using phonological (O-P+ vs O-P-) and ortho-phonological (O+P+ vs O+P-) visual masked priming, show that familiar visual word recognition involves phonological representations in an automatic manner from Grade 3 to Grade 5. In contrast, the automatic activation of orthographic representations seems to develop later (Grade 5). These results suggest that when the orthographic process is functional but not fully effective (Grade 3), visual word recognition benefits from phonological activation, whereas when the orthographic process is fully effective (Grade 5), visual word recognition benefits from orthographic activation. This suggests that the development of automatic phonological activation and the development of automatic orthographic activation are independent, the former being fully developed earlier than the latter.
APA, Harvard, Vancouver, ISO, and other styles
17

Dobrovolný, Martin. "Detekce a rozpoznání maticového kódu v reálném čase." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2012. http://www.nusl.cz/ntk/nusl-236564.

Full text
Abstract:
This work deals with detecting and recognizing matrix codes. It experiments with the PCLines algorithm, which uses the Hough transform and parallel coordinates for fast line detection. The suggested algorithm applies PCLines twice to detect sets of parallel lines, and the distortion of the image by parallel projection is resolved with a cross-ratio equation. We performed some optimizations for real-time operation and created an experimental implementation. The test results show that PCLines is a viable way to detect matrix codes.
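PCLines is a research method rather than a stock library routine; as a rough stand-in for the line-detection step, a standard Hough transform in OpenCV can find the bundles of near-parallel edges that bound a matrix code (the file name, thresholds and bin count below are illustrative assumptions).

import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)

if lines is not None:
    thetas = lines[:, 0, 1]
    # Near-parallel lines share a similar angle; two dominant, roughly orthogonal
    # bundles of parallels are a strong cue for the grid of a matrix code.
    hist, _ = np.histogram(thetas, bins=18, range=(0, np.pi))
    dominant_angle_bins = np.argsort(hist)[-2:]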
APA, Harvard, Vancouver, ISO, and other styles
18

France, Alexander Adam. "Toward an Understanding of Polarizing Leadership: An Operational Code Analysis of Israeli Prime Minister Benjamin Netanyahu." Ohio University Honors Tutorial College / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=ouhonors1461283894.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Candau, Marion. "Codes correcteurs d'erreurs convolutifs non commutatifs." Thesis, Brest, 2014. http://www.theses.fr/2014BRES0050/document.

Full text
Abstract:
An error-correcting code adds redundancy to a message in order to correct it when errors occur during transmission. Convolutional codes are powerful codes and are therefore often used. The principle of a convolutional code is to perform a convolution product between a message and a transfer function, both defined over the group of integers. These codes do not protect the message if it is intercepted by a third party. That is why we propose in this thesis convolutional codes with cryptographic properties, defined over non-commutative groups. We first studied codes over the infinite dihedral group, which, despite good performance, do not have the desired cryptographic properties. Consequently, we studied convolutional block codes over finite groups with time-varying encoding. Every time a message needs to be encoded, the process uses a different subset of the group. These subsets are chaotically generated from an initial state, which is considered the symmetric key of the code-induced cryptosystem. We studied many groups and many methods to define these chaotic subsets. We examined the minimum distance of the codes we conceived and showed that it is slightly smaller than the minimum distance of linear block codes. Nevertheless, our codes have, in addition, a symmetric cryptosystem associated with them. These non-commutative convolutional codes are therefore a compromise between error correction and security.
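For the convolution principle recalled at the start of the abstract, a textbook rate-1/2 binary convolutional encoder looks like the sketch below; the (7,5) generator pair is the standard classroom example, not one of the non-commutative constructions studied in the thesis.

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoding: the message convolved with two generators over GF(2)."""
    state = [0] * (len(g1) - 1)        # shift register holding the previous input bits
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(x & g for x, g in zip(window, g1)) % 2)
        out.append(sum(x & g for x, g in zip(window, g2)) % 2)
        state = window[:-1]
    return out

print(conv_encode([1, 0, 1, 1]))       # two output bits per input bit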
APA, Harvard, Vancouver, ISO, and other styles
20

Povolný, Filip. "Detekce změny jazyka při hovoru." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2015. http://www.nusl.cz/ntk/nusl-234890.

Full text
Abstract:
This master's thesis deals with code-switching detection in speech. The state-of-the-art methods of language diarization are described in the first part of the thesis. The proposed method is based on an acoustic approach to language identification using a combination of GMM, i-vector and LDA. A new Mandarin-English code-switching database was created for these experiments. Using this system, an accuracy of 89.3% is achieved on this database.
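A minimal sketch of just the GMM part of such an acoustic language-identification pipeline is given below, using scikit-learn; the feature extraction, the i-vector/LDA stages and all parameter values are omitted or assumed for illustration.

import numpy as np
from sklearn.mixture import GaussianMixture

def train_language_gmms(features_by_lang: dict, n_components: int = 8) -> dict:
    """One GMM per language, trained on that language's acoustic feature frames."""
    return {
        lang: GaussianMixture(n_components=n_components, covariance_type="diag").fit(feats)
        for lang, feats in features_by_lang.items()
    }

def label_frames(frames: np.ndarray, gmms: dict) -> list:
    """Assign each frame to the language whose GMM gives the higher log-likelihood."""
    langs = list(gmms)
    scores = np.vstack([gmms[l].score_samples(frames) for l in langs])
    return [langs[i] for i in scores.argmax(axis=0)]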
APA, Harvard, Vancouver, ISO, and other styles
21

Robbeloth, Michael Christopher. "Recognition of Incomplete Objects based on Synthesis of Views Using a Geometric Based Local-Global Graphs." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1557509373174391.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Carrier, Kevin. "Recherche de presque-collisions pour le décodage et la reconnaissance de codes correcteurs." Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS281.

Full text
Abstract:
Error-correcting codes are tools whose initial function is to correct errors caused by imperfect communication channels. In a non-cooperative context, there is the problem of identifying unknown codes based solely on knowledge of noisy codewords. This problem can be difficult for certain code families, in particular LDPC codes, which are very common in modern telecommunication systems. In this thesis, we propose new techniques to recognize these codes more easily. At the end of the 1970s, McEliece had the idea of redirecting the original function of codes to use them in ciphers, thus initiating a family of cryptographic solutions that is an alternative to those based on number-theory problems. One of the advantages of code-based cryptography is that it seems to withstand the quantum computing paradigm, notably thanks to the robustness of the generic decoding problem. The latter has been thoroughly studied for more than 60 years. The latest improvements all rely on algorithms for finding pairs of points that are close to each other in a list; this is the so-called near-collision search problem. In this thesis, we improve generic decoding by proposing, in particular, a new way to find close pairs. To do this, we use list decoding of Arikan's polar codes to build new fuzzy hashing functions. In this manuscript, we also deal with the search for pairs of far points. Our solution can be used to improve decoding at large distances, which has recently found applications in signature designs.
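To make the near-collision problem concrete, here is a textbook baseline: bucket binary vectors on a random subset of coordinates and test candidate pairs inside each bucket. This is only a naive illustration of the problem, not the polar-code fuzzy-hashing construction proposed in the thesis.

import itertools
import random
from collections import defaultdict

def near_collisions(vectors, max_dist, key_bits=16, seed=0):
    """Find pairs of 0/1 vectors within Hamming distance max_dist using one random key.
    Pairs that differ inside the chosen key bits are missed, so in practice the
    procedure is repeated with fresh random keys."""
    rng = random.Random(seed)
    n = len(vectors[0])
    idx = rng.sample(range(n), key_bits)
    buckets = defaultdict(list)
    for v in vectors:
        buckets[tuple(v[i] for i in idx)].append(v)
    pairs = []
    for bucket in buckets.values():
        for a, b in itertools.combinations(bucket, 2):
            if sum(x != y for x, y in zip(a, b)) <= max_dist:
                pairs.append((a, b))
    return pairs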
APA, Harvard, Vancouver, ISO, and other styles
23

Menacer, Mohamed Amine. "Reconnaissance et traduction automatique de la parole de vidéos arabes et dialectales." Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0157.

Full text
Abstract:
This research was developed in the framework of the AMIS project (Access to Multilingual Information and opinionS), a European project whose main objective is to help people understand the main idea of a video in a foreign language by generating an automatic summary of it. In this thesis, we focus on the automatic recognition and translation of the speech of Arabic and dialectal videos. The statistical approaches proposed in the literature for automatic speech recognition are language independent and applicable to Modern Standard Arabic. However, this language presents some characteristics that we need to take into consideration in order to boost the performance of the speech recognition system. Among these characteristics is the absence of short vowels in the text, which makes their training by the acoustic model difficult. We proposed several approaches to acoustic and/or language modelling in order to better recognize Arabic speech. In the Arab world, Modern Standard Arabic is not the mother tongue; daily conversations are carried out in dialect, an Arabic inspired by Modern Standard Arabic, but not only. We worked on adapting the speech recognition system developed for Modern Standard Arabic to the Algerian dialect, which is one of the variants of Arabic most difficult for automatic speech recognition systems to recognize. This is mainly due to words borrowed from other languages, code-switching and the lack of resources. Our approach to overcoming these problems is to take advantage of oral and textual data from other languages that have an impact on the dialect in order to train the models required for dialect speech recognition. The text resulting from the Arabic speech recognition system was then used for machine translation. As a starting point, we conducted a comparative study between the phrase-based approach and the neural approach used in machine translation. Then, we adapted these two approaches to translate code-switched text. Our study focused on the mix of Arabic and English in a parallel corpus extracted from official documents of the United Nations. In order to prevent error propagation in the pipeline system, we worked on adapting the vocabulary of the automatic speech recognition system and on proposing a new model that directly transforms a speech signal in language A into a sequence of words in another language B.
APA, Harvard, Vancouver, ISO, and other styles
24

Poinsot, Audrey. "Traitements pour la reconnaissance biométrique multimodale : algorithmes et architectures." Thesis, Dijon, 2011. http://www.theses.fr/2011DIJOS010.

Full text
Abstract:
Including multiple sources of information in personal identity recognition reduces the limitations of each characteristic used and gives the opportunity to greatly improve performance. This thesis presents the design work done in order to build an efficient general-public recognition system that can be implemented on a low-cost hardware platform. The chosen solution explores the possibilities offered by multimodality, and in particular by the fusion of face and palmprint. The algorithmic chain consists of processing based on Gabor filters and score fusion. A real contactless multimodal database of 130 subjects has been designed and built for the study. High performance has been obtained and confirmed on a virtual database consisting of two common public biometric databases (AR and PolyU). Thanks to a comprehensive study of the architecture of DSP components and several implementations carried out on a DSP belonging to the TMS320c64x family, it has been proved that it is possible to implement the system on a single DSP with short processing times. Moreover, joint algorithm and architecture development work for FPGA implementation has demonstrated that these times can be significantly reduced.
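Two ingredients named in the abstract, Gabor filtering and score fusion, can be sketched as follows with OpenCV; the kernel parameters, the crude mean-response feature and the fusion weight are illustrative assumptions rather than the settings used in the thesis.

import cv2
import numpy as np

def gabor_features(gray: np.ndarray, orientations: int = 8) -> np.ndarray:
    """Mean response of a small Gabor filter bank over several orientations."""
    feats = []
    for k in range(orientations):
        kernel = cv2.getGaborKernel((21, 21), 4.0, k * np.pi / orientations, 10.0, 0.5)
        feats.append(float(cv2.filter2D(gray, cv2.CV_32F, kernel).mean()))
    return np.array(feats)

def fused_score(face_score: float, palm_score: float, w: float = 0.6) -> float:
    """Simple weighted-sum fusion of the two modality matching scores."""
    return w * face_score + (1.0 - w) * palm_score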
APA, Harvard, Vancouver, ISO, and other styles
25

Norris-Jones, Lynne. "Demonstrate and document : the development of a best practice model for biometric access control management." Thesis, Cardiff Metropolitan University, 2011. http://hdl.handle.net/10369/6411.

Full text
Abstract:
This thesis investigates the social, legal and ethical perceptions of participants towards the implementation of biometric access control systems within a sample of United Kingdom work-based environments. It focuses on the application of fingerprint scanning and facial recognition systems, whilst alluding to the development of more advanced (bleeding edge) technologies in the future. The conceptual framework is based on a tripartite model in which Maslow's Hierarchy of Needs is applied to the workforce whilst the principles of Utilitarianism and the Psychological Contract are applied to both management strategies and workforce perceptions. A qualitative paradigm is used in which semi-structured interviews are conducted with management and workforce participants within a sample of United Kingdom-based organisations (represented by Case Studies A-D). Discourse from these interviews is analysed, leading to the development of a series of first-cut findings for suggested "Best Practice" in the social, legal and ethical management of biometric access control systems. This process is subsequently developed with a refined sample of respondents (Case Studies A and C), culminating in the presentation of a suggested "Best Practice Model" for application to all four case studies. The model is based upon elements of a pre-determined Code of Practice (ISO/IEC 27002, Information Technology - Security techniques - Code of practice for information security management) towards fostering acceptance of biometric technology within the workplace, in answering the question: How should organisations using biometric access control systems address social, legal and ethical concerns in the management of specific working environments in the United Kingdom?
APA, Harvard, Vancouver, ISO, and other styles
26

Mareš, Petr. "Automatická detekce knihovního kódu ze spustitelných souborů typu PE." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2008. http://www.nusl.cz/ntk/nusl-235976.

Full text
Abstract:
This master's thesis describes the detection of functions in PE executables that were imported from static libraries. The main goal is to automate the process and simplify analysis. Detection is performed by searching for prepared patterns with mismatch tolerance; mismatches are caused by addresses that change while the application is being built. The resulting application also supports compiler detection and contains patterns for MinGW32, Visual Studio 2005 and C++ Builder 6.
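The kind of mismatch-tolerant signature matching described above can be sketched as a byte-pattern search in which the positions that hold link-time addresses are wildcards; the example signature is made up for illustration and is not one of the tool's real patterns.

def find_pattern(data: bytes, pattern: list) -> list:
    """Offsets where the signature matches; None entries match any byte (e.g. relocated addresses)."""
    hits = []
    n, m = len(data), len(pattern)
    for i in range(n - m + 1):
        if all(p is None or data[i + j] == p for j, p in enumerate(pattern)):
            hits.append(i)
    return hits

# Illustrative signature: a call whose 4-byte relative address changes at build time.
SIGNATURE = [0xE8, None, None, None, None, 0x83, 0xC4]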
APA, Harvard, Vancouver, ISO, and other styles
27

Cardinal, Patrick. "Speech recognition on multi-core processors and GPUS." Mémoire, École de technologie supérieure, 2013. http://espace.etsmtl.ca/1192/1/CARDINAL_Patrick.pdf.

Full text
Abstract:
For several years, processor clock speeds have remained stable, and the trend now seems to be to reduce clock speed in order to cut power consumption, a trend already visible in mobile devices. To exploit the full computing power of current and future processors, applications must embrace parallelism, and speech recognition is no exception. Unfortunately, the decoding algorithm (Viterbi), which uses dynamic programming to search the recognition graph, cannot fully exploit this power. The main reason is that the recognition graph contains several million nodes and transitions; exploring it exhaustively is out of the question, so it must be pruned to explore only the most promising hypotheses. Because of this pruning, the memory architecture of Intel-type computers is not used efficiently. To work around the problem, another type of search algorithm is considered: A* search. This search uses a heuristic that approximates the remaining distance to the final node. With a good heuristic, the number of explored nodes becomes negligible, which shifts the computation time from the graph search to the computation of the heuristic, and the heuristic can be designed to take full advantage of current processor architectures. For speech recognition, a much smaller recognition graph is used as the heuristic; it can be explored exhaustively, eliminating the problems of poor use of the memory architecture. An important aspect of speech recognition is the acoustic computation. For this task, a speed-up of 3.6x was observed on a 4-core processor and of 24.8x on a GPU compared with the Viterbi algorithm. Concerning the search in the recognition graph, the results showed that the number of nodes explored by the A* algorithm is 28 times lower than with the original algorithm. Moreover, the heuristic computation is respectively 4.1 and 10.1 times faster on a 4-core processor and on a GPU than the sequential version. Finally, comparing the original version and the new parallel version in terms of recognition rate at real time, the parallel version has a recognition rate 4% (absolute) higher than the classical version.
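The A* idea described above reduces to a small best-first search skeleton: nodes are expanded in order of path cost plus a heuristic lower bound on the remaining cost, which in this thesis comes from a much smaller recognition graph. The graph interface and heuristic below are placeholders, not the actual decoder.

import heapq
import itertools

def a_star(start, goal, neighbours, heuristic):
    """neighbours(node) -> iterable of (next_node, edge_cost); heuristic(node) -> lower bound."""
    counter = itertools.count()                 # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(counter), 0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        _, _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbours(node):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), next(counter), ng, nxt, path + [nxt]))
    return None, float("inf")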
APA, Harvard, Vancouver, ISO, and other styles
28

Bayless, Mark D. "Improving optical character recognition accuracy for cargo container identification numbers." [Denver, Colo.] : Regis University, 2010. http://adr.coalliance.org/codr/fez/view/codr:139.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Rideout, Robert Martin. "Coded imaging systems for X-ray astronomy." Thesis, University of Birmingham, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.364854.

Full text
APA, Harvard, Vancouver, ISO, and other styles
30

Ng, Pak-hung David. "The predominant role of visual codes in Chinese character recognition." Click to view the E-thesis via HKUTO, 2006. http://sunzi.lib.hku.hk/hkuto/record/B36910314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Ng, Pak-hung David, and 伍柏鴻. "The predominant role of visual codes in Chinese character recognition." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2006. http://hub.hku.hk/bib/B36910314.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Tan, Kwee Teck. "Objective picture quality measurement for MPEG-2 coded video." Thesis, University of Essex, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.324249.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Lin, Fung-Yaw. "Core-line tracing of digital images of line drawings." Thesis, Cranfield University, 1991. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302773.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Olby, Linnea, and Isabel Thomander. "A Step Toward GDPR Compliance : Processing of Personal Data in Email." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-238754.

Full text
Abstract:
The General Data Protection Regulation, enforced on 25 May 2018, is a response to the growing importance of IT in today's society, accompanied by public demand for control over personal data. In contrast to the previous directive, the new regulation applies to personal data stored in an unstructured format, such as email, rather than solely to structured data. Companies are now forced to accommodate this change, among others, in order to be compliant. This study aims to provide a code of conduct for the processing of personal data in email as a measure for reaching compliance. Furthermore, this study investigates whether Named Entity Recognition (NER) can aid this process as a means of finding personal data in the form of names. A literature review of current research and recommendations was conducted for the code of conduct proposal. A NER system was constructed using a hybrid approach with binary logistic regression, hand-crafted rules and gazetteers. The model was applied to a selection of emails, including attachments, obtained from a small consultancy company in the automotive industry. The proposed code of conduct consists of six items, applied to the consultancy firm. The NER model demonstrated a low ability to identify names and was therefore deemed insufficient for this task.
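A minimal sketch of the hybrid NER idea, per-token features including a gazetteer lookup fed to a binary logistic-regression classifier, is shown below with scikit-learn; the gazetteer, the feature set and the training interface are illustrative assumptions, not the system built in the thesis.

import numpy as np
from sklearn.linear_model import LogisticRegression

GAZETTEER = {"anna", "erik", "maria"}            # illustrative first-name list

def token_features(token: str) -> list:
    return [token[:1].isupper(), token.lower() in GAZETTEER, token.isalpha(), len(token) > 2]

def train(tokens, labels) -> LogisticRegression:
    X = np.array([token_features(t) for t in tokens], dtype=float)
    return LogisticRegression().fit(X, labels)   # labels: 1 = part of a name, 0 = other

def predict_names(model, tokens) -> list:
    X = np.array([token_features(t) for t in tokens], dtype=float)
    return [t for t, y in zip(tokens, model.predict(X)) if y == 1]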
APA, Harvard, Vancouver, ISO, and other styles
35

Hsin, Han Chun, and 辛漢君. "Container Code Recognition System." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/98817137276306776719.

Full text
Abstract:
碩士<br>元智大學<br>電機工程學系<br>98<br>The code on the container needs to be registered at International Container Bureau to ensure the uniqueness. Therefore how to manage such many containers has become the key issue for the shipping companies. The image processing technology is a management for the container code recognition and this study proposed the rectangular box detection and Otsu Binary as the recognition methods. The first was to utilize Otsu binary to the gray-scaled image to find the high-resolution threshold and then grouped the background and foreground. Afterward, the binary image is done on edge detection and designing a rectangular box to search the container code location. The one coincided was what the container code located. The second was to search for the text apex of the found container code by vertical projection preceded the text segmentation. The every segmented apex must be in the process of Otsu binary and vote and finally seek the best result. Based on the experimental data and result, the applied research methods ways not only can precisely the container code location but also can achieve the better recognition rate.
APA, Harvard, Vancouver, ISO, and other styles
36

Chan, Zhi-Xun, and 陳志勳. "GT Code Generation by Feature Recognition." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/97520200633312471400.

Full text
Abstract:
碩士<br>國立中興大學<br>機械工程學系<br>86<br>AbstractIn the age of highly advanced technology, the entire automation in the manufacturing system is an unavoidable trend. Therefore, the purposes of automation become how to integrate effectively the computer-aided-tools, such as CAD/CAM, CAPP, CAE etc. Computer-aided process planning (CAPP) is the bridge between computer-aided design (CAD) and computer-aided manufacturing (CAM). But it will be the key factor of system automation that whether the interface of CAPP and CAD is works perfectly.In this research, the most important objective is on the communication interface of CAD and CAPP, which aims at developing the auto-feature recognition and encoded system, abbreviated to feature recognition (FR). According to the profile-feature group technology (GT), parts can firstly be processed by the way of directly comparing the entity data, and then encoded by rule base. Thus the encoded parts will be helpful for the processing of CAPP system.In this thesis, the rotational symmetric parts plotted by AutoCAD are mainly considered. The recognition method used in this paper is Entity Compare Method (ECM), which is improved from the method of Syntactic Pattern Recognition (SPR). This system is programmed in AutoLISP and DCL language.
APA, Harvard, Vancouver, ISO, and other styles
37

Chang, Ting-Yu, and 張廷瑜. "Recognition of numeric code of hand gesture." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/468u89.

Full text
Abstract:
碩士<br>崑山科技大學<br>電機工程研究所<br>97<br>The main purpose of this thesis is to develop a recognition system of numeric code of gesture. The recognition process of numeric code of gesture contains three main steps : hand detection、fingertip detection、and hand gesture recognition. In the stage of hand detection, first we extract skin region from image by color segmentation, then the methods of image projection and connected components labeling are utilized to detect the hand regions. In the stage of fingertip detection, the edge detection and fingertip mask are used to find out the coordinates of fingertip. Finally, in the stage of hand gesture recognition, the different gestures which have the same number of fingers are identified according to the relationship of angle and distance between the recognized fingers.
APA, Harvard, Vancouver, ISO, and other styles
38

Lin, Yu-Shan, and 林鈺山. "Expression Recognition using Cascade Local Deformation Code." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/836p6p.

Full text
Abstract:
碩士<br>國立臺灣科技大學<br>機械工程系<br>100<br>An appearance-based coding scheme, called Cascade Local Deformation Code (CLDC), is proposed for expression recognition. CLDC has two component codes, Human Observable Code (HOC) and Haar-like Feature Code (HFC). The HOC encodes the local deformation regions caused by facial muscle contractions observable to humans, and the HFC encodes the Haar-like features selected by an AdaBoost algorithm. Given a training set, one first selects the observable local deformation regions, and trains a HOC detector which encodes the local deformation regions into HOC codewords according to seven predefined expressions. The training set is also used for the extraction of Haar-like features and encoding of the features into HFC codewords for the seven expressions. The combination of HOC and HFC gives the CLDC, which is proven to outperform either component in the decoding phase for the expression recognition on disjoint testing sets. Experiments on the CK+, JAFFE and the latest FERA databases show that the performance of the CLDC is competitive to the state-of-the-art approaches.
APA, Harvard, Vancouver, ISO, and other styles
39

ZHANG, XUE-YI, and 張學誼. "Pattern recognition by circular layer code approach." Thesis, 1986. http://ndltd.ncl.edu.tw/handle/85158188244585145791.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Su, Che-Min, and 蘇哲民. "Skewed QR Code Recognition on Handheld Device." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/63cdzb.

Full text
Abstract:
碩士<br>國立臺北科技大學<br>資訊工程系所<br>95<br>Nowadays most mobile phones are combined with photography, and this function also enhances 2-dimension application on hand held device. However, when using camera phone to take pictures, we not only get irrelevant backgrounds to barcode but also get skewed barcode image due to shot angles. Therefore, this paper provides us an algorithm to gather and adjust QR Code. First of all, we use QR Code Finder Patter to find their position, and then we draw a frame to cover the rectangle of the barcode using edge extension of Finder Pattern. Finally, we use plane projection algorithm to transfer this region to standard QR Code and recollect sample for decoder to recognize it.
APA, Harvard, Vancouver, ISO, and other styles
41

HO, I.-CHING, and 何宜靜. "Container Code Image Recognition Method Using Superpixel." Thesis, 2019. http://ndltd.ncl.edu.tw/handle/86cn82.

Full text
Abstract:
碩士<br>國立高雄科技大學<br>電腦與通訊工程系<br>107<br>The advancement of science and technology has prompted various types of industries to move towards intelligent automation. Among them, marine transportation in the transportation industry is the largest transportation mode of modern trade, The current development of intelligent transportation in maritime transport is relatively slow compared to land transport and air transport. However, its post-development benefits are extremely high, which can significantly reduce manpower and time costs. Most of the shipping is loaded with trade goods in the form of containers. Since each container code is unique, it can identify the container code. It is possible to know the basic information about this container, so it is helpful to automatically identify the container code for container terminal automation. In this thesis, the recognition of container code will be studied. The superpixel is applied to the container code image segmentation. First, the image superpixel clustering method is introduced, and uses the superpixel to obtain the basic elements and effective grouping. After combination and segmentation, the obtained object is identified by template matching. Then, the recognition result is obtained according to the conditional order. Finally, the performance of the proposed method is tested using various container code images, and analyze the reasons for the situation that cannot be successfully recognition, these can be used as the basis and reference for future improvement.
APA, Harvard, Vancouver, ISO, and other styles
42

Tsai, Szu-Lang, and 蔡賜郎. "Grey System Theory Applied to IC Code Recognition." Thesis, 1999. http://ndltd.ncl.edu.tw/handle/82594745119727614655.

Full text
Abstract:
碩士<br>元智大學<br>工業工程研究所<br>87<br>In electronic aspects, IC chips assembling errors make a lot of troubles. The topic of this project is to identify the IC coeds by using Grey Relational Analysis and reconstruct the broken word with Grey prediction model, GM(1,1). The Grey Theorem may find the Grey Relational Grades of all factors we want by choosing the highest Grey Relational Grade even under an message uncompleted circumstances. In IC codes identification procedure, we would rotate an image first anb segment it. Secondly, we use thresholding and thinning method to reduce calculating process and get the message feature from the segment message. After that, we use Grey Relational Analysis method to identify the IC codes. This recognition rate is up to 97.5%. Rather than the traditional method, there are three advantages of Grey Relational Analysis:1. No large data. 2. No specific statistical distribution. 3. No influence from various factors. It is quite easy and practical method in the field of IC codes identification.
APA, Harvard, Vancouver, ISO, and other styles
43

"Four cornered code based Chinese character recognition system." Chinese University of Hong Kong, 1993. http://library.cuhk.edu.hk/record=b5887768.

Full text
Abstract:
by Tham Yiu-Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references.
APA, Harvard, Vancouver, ISO, and other styles
44

Chen, Shu-Lin, and 簡琡玲. "Code-Switched Word Recognition by Taiwanese-Mandarin Bilinguals." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/48950752461107655475.

Full text
Abstract:
碩士<br>國立新竹師範學院<br>臺灣語言與語文教育研究所<br>88<br>Code-Switched Word Recognition by Taiwanese - Mandarin Bilinguals Abstract Code-switching involves the use of words from two different languages within a single discourse or even a single utterance. It is particularly frequent in bilingual communities. In Taiwan, where both Mandarin and Taiwanese are commonly used, code-switching occurs on a daily basis. Although the cognitive process of code-switching has attracted a lot attention in psycholinguistic studies, most of them examined the issues with written words. Besides, most studies provide us the important information about Indo-European languages, in both its phonological and its grammatical structures, while few focus on tone language. The aim of the present study is to explore the cues of tone and semantic context in the recognition of code-switched words processed by Mandarin- Taiwanese bilingual listeners. In Experiment 1 and 2 , different types of Mandarin and Taiwanese tones were embedded in Mandarin sentences. The gating paradigm (Grosjean, 1980) was used to present these words to Mandarin-Taiwanese bilingual listeners so as to determine the role played by tone type in the lexical access of code-switched words, as well as to uncover the underlying operations involved in the recognition process. Results showed that different tone types and another design of gating paradigm play a role in the recognition process. The result show that difficult one to detect, and that the tone types whose pattern and range are different in Mandarin and Taiwanese are the easiest one to detect, while the recognition rate of the tone types which have similar pattern but different range are in between of the two condition mentioned above. In Experiment 3, we also discussed the interaction between phonetics and semantic information in the recognition process. Subjects in this experiment were asked to detect the code-switched word embedded in both semantic biased and unbiased context. Results showed that phonetic is much stronger than semantic cues in the recognition process.
APA, Harvard, Vancouver, ISO, and other styles
45

Kuanr, Debesh, and Lokanath Tripathy. "Accuracy improvement in odia zip code recognition technique." Thesis, 2012. http://ethesis.nitrkl.ac.in/3302/1/Debeshk-ei016.pdf.

Full text
Abstract:
Odia is a very popular language in India, used by more than 45 million people worldwide, especially in the eastern region of the country. Recognition schemes proposed for other scripts such as Roman, Japanese, Chinese, and Arabic cannot be applied directly to Odia because of the different structure of the Odia script. Hence, this report deals with the recognition of Odia numerals while taking care of varying handwriting styles, with the main purpose of applying the recognition scheme to zip code extraction and number plate recognition. Two methods, a gradient-and-curvature method and a box-method approach, are used to compute the features of the preprocessed scanned document image. Features from both methods are used to train an artificial neural network with a large number of samples of each numeral, and enough testing samples are used to compare the results of the two feature sets. Principal component analysis is applied to reduce the dimension of the feature vector so as to ease further processing, and the box-method features of an unknown numeral are correlated with those of the standard numerals. Using neural networks, the average recognition accuracies with gradient-and-curvature features and box-method features are found to be 93.2% and 88.1%, respectively.
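A minimal sketch of the box-method feature extraction, which divides a binary numeral image into a grid of boxes and uses the foreground-pixel density of each box as a feature (the grid size and bitmap are illustrative assumptions; in the thesis these features feed a neural network after PCA):

```python
import numpy as np

def box_method_features(binary_digit: np.ndarray, grid: int = 4) -> np.ndarray:
    """Split a binary numeral image into grid x grid boxes and return the
    foreground-pixel density of each box as the feature vector."""
    h, w = binary_digit.shape
    features = []
    for i in range(grid):
        for j in range(grid):
            box = binary_digit[i * h // grid:(i + 1) * h // grid,
                               j * w // grid:(j + 1) * w // grid]
            features.append(box.mean())   # density in [0, 1]
    return np.array(features)

# Hypothetical 8x8 numeral bitmap just to exercise the function.
digit = np.zeros((8, 8), dtype=float)
digit[2:6, 3:5] = 1.0
print(box_method_features(digit))   # 16-dimensional feature vector
```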
APA, Harvard, Vancouver, ISO, and other styles
46

"Automatic speech recognition of Cantonese-English code-mixing utterances." 2005. http://library.cuhk.edu.hk/record=b5892425.

Full text
Abstract:
Chan Yeuk Chi Joyce. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references. Abstracts in English and Chinese.
APA, Harvard, Vancouver, ISO, and other styles
47

Xu, En-Yuan, and 許恩源. "An Application of Computer Vision to Bar Code Recognition." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/59710909522917033908.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

EN-YUAN, HSU, and 許恩源. "An Application of Computer Vision to Bar Code Recognition." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/62213822913619383757.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Wu, Jia-Hau, and 吳家豪. "Container Code Recognition Using Neural Network and Knowledge Rule." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/94797394048816524336.

Full text
Abstract:
碩士<br>國立高雄第一科技大學<br>電腦與通訊工程所<br>94<br>Abstract In this thesis, we propose two methods to recognize container code in container image .One is to do the recognition of container code by combining the Neural Network with the digital image processing, the other is the method combines that the Rule Base with the digital image processing to do the recognition of container code. In this thesis, the neural network used to recognize container code is Back-Propagation Neural Network. Because front four characters of container code are English and post seven characters are Number. we divide the neural network into English part and Number part. The rule base approach is also divided into English rule base and Number rule base. Image processing techniques involving in the proposed methods are noise elimination, container code location, container code segmentation. The feature vectors used here are the White Run-Length Code and Pixel Density. In finial experiment, we use container code of 120 pictures to test recognition rate. As a result, the recognition rates of two method are both more than 85%. Recognition rate of Neural network is 86.6% and recognition rate of knowledge Rule Base is 87.5%.
APA, Harvard, Vancouver, ISO, and other styles
50

Duu-Tong, Fuh, and 傅篤棟. "Unstable Morse Code Auto-Recognition System using Neural Networks." Thesis, 2003. http://ndltd.ncl.edu.tw/handle/67850951171791059920.

Full text
Abstract:
博士<br>國立成功大學<br>電機工程學系碩博士班<br>91<br>Morse code continues to be one of the most important communication tools in use nowadays. Standard Morse code specifies tone ratios (dash/dot) and silence ratios (dash-space/dot-space) of 3:1. An effective Morse code auto-recognition system requires the operator to maintain these ratios precisely and to demonstrate a consistent typing speed. However, these requirements are seldom met by even the most experienced operators, while for operators with disabilities, they are virtually impossible to satisfy. Studies have shown that the auto-recognition algorithms published previously do not compensate adequately for the resultant unstable Morse code. Therefore, this thesis presents four single-chip neural networks designed to perform a more effective online auto-recognition of such code. The first method employs a Back Propagation Neural (BPN) network which recognizes the tone and silence signals of Morse code individually. The second proposal adopts a Modified Expert-Gating neural network, in which the Expert network recognizes the tone signals, and the Gating network identifies the silence signals. The third approach implements a modular neural network (MNN) which uses a Self-Organizing-Map (SOM) neural network to recognize the tone signals and a Modified Track Bayesian (MTB) decision boundary neural network to recognize the silence signals. The final method adopts two MTB decision boundary neural networks to recognize the tone and silence signals individually. The effectiveness of each network is verified by analyzing the auto-recognition results for Morse code transmissions generated by four test subjects of varying abilities, i.e. a skilled operator, a novice operator, an amputee and a cerebral palsy sufferer. The experimental results for the cerebral palsy sufferer demonstrate a maximum average recognition rate of 91% for the four proposed neural networks, while the average recognition rate for the amputee, who used a prosthesis to carry out typing, is shown to be 97%. The average recognition rate for the novice operator is found to be slightly lower, i.e. 96%. However, these results all compare favorably with the recognition rate of 99% obtained for the skilled operator. In order to overcome the difficulties involved in recognizing a severely unstable Morse code transmission, the algorithms presented in this thesis focus upon the classification of long to short intervals (dash to dot) rather than upon the classification of the tone and silence ratios. In general, the proposed neural networks must undergo a learning process before they are capable of performing an accurate classification of the input Morse code. This necessarily involves a significant amount of computational effort. However, since a typical operator’s typing speed is far slower than the signal processing time required by the computer, the neural networks are still capable of processing the input Morse code on a virtually real-time basis.
APA, Harvard, Vancouver, ISO, and other styles